The sharper focus, pooled knowledge, and other benefits of managed NOC services lead to better results

The rush of businesses to the public cloud has been dizzying over the last few years. Some view our outlook on the matter as contrarian, but the truth is that it’s simply a reflection of our overall philosophy: we want companies to do their homework. Here’s a great example: sometimes it makes sense to keep your IT department fully on-premises, and sometimes it makes sense to outsource parts of it. For the majority of businesses, using managed NOC services just makes sense.

In other words, if you’re Visa, PayPal, or another massive company that lives on the bleeding edge of network security, the advice in this article probably isn’t for you. But for everybody else, here’s why you should give managed NOC services a chance to prove their value. Before we establish why, let’s be clear about what we’re talking about.

What Are Managed NOC Services?

Managed NOC service providers are companies that handle the day-to-day administration, maintenance, and security of your network operations center (NOC) for you. Let’s break that down a little more. NOCs are the nerve centers of a business’s IT operations. They’re responsible for making sure your network works and that all the users and applications within it can communicate with each other when and where they’re supposed to. The security operations centers (SOCs) within them ensure that nobody else can crash the party.

There are effectively two ways to approach the duties of a NOC: build and staff it yourself in-house, or hand it off to a managed NOC service provider.
Here at Roundstone, we think that most businesses that do their homework will find that the latter option is a better fit.

5 Reasons Why Managed NOC Services Are the Smarter Choice

Our years of experience advising clients across a range of sectors tell us that using a managed NOC service is probably the smarter choice for your business. Here’s why. (In a hurry? Here’s your TL;DR: Each of these five reasons feeds into the fact that managed NOC services can give you a better security posture, and that’s not part of your business you want to skimp on.)

Sharper Focus

NOC services only have one job. Their business model is built on making the network experience of their clients better and safer, and they invest all their resources into it. When you do it yourself, you can only dedicate a piece of your overall operations to the cause. Your internal team must manage the numerous moving parts of keeping a NOC online as they attend to other tasks. And “keeping a NOC online” isn’t good enough when we’re talking about the nerve center of your business. It needs to be proactive in dealing with emerging threat vectors and vulnerabilities, because the bad guys only need to outfox you once to shut your business down.

Pooled Knowledge

Managed NOC services have a narrow focus applied broadly. Working with many customers across multiple industries gives them experience in dealing with many different types of attacks and in-depth knowledge of how systems interact. They then use that broad exposure to strengthen the security they offer to each individual customer. Even if you’ve been lucky enough (or paid generously enough) to assemble one of the best security teams in the field, their exposure will be more limited. They’ll struggle to keep up with the shared experience a managed NOC service naturally accumulates in the course of doing business.

Better Customization

You may already be saying “wait a minute” on this one. Doesn’t rolling your own internal solution almost always mean better customization, even if it ends up being more expensive or time-consuming than working with a third party? In this case, the answer is no, not really. Managed NOC services have a robust set of tools to use and experiences to draw from. This gives them a much broader range of options to choose from. They will also have the practical wisdom to tell you which options are good ideas and which ones may not work out how you want them to. After all, information security is one thing you never want to take big, risky swings on. One of the vendors we partner with is Arctic Wolf. Security operations are their only business, and they don’t care what platforms, hardware, software, or tools you may use. They will manage them. Once they learn what you want to accomplish, they will likely be able to say, “Hey, here’s a better way to do that.” Their narrow focus applied broadly means they can make more intelligent suggestions, and you can decide together which ones to use.

Time and Resource Savings

When you manage your own NOC, it’s you against the world. As we said before, you have to be on your A-game all the time, while attackers in every time zone only need to take a lucky stab once to potentially destroy your organization. And yes, you may still be able to do it competently all by yourself. But think about what you have to give up to make that happen: time, resources, training, compensation (more on that in a bit).
Even if they only did it as well as your team could, bringing in a third-party managed NOC service would still let you step back from the many nuts-and-bolts tasks of managing a NOC. That’s a lot of extra time and resources you could reinvest in your teams and products. And that’s a lot more likely to make an impact on your bottom line than the pride you feel in keeping it all in-house.

Cost and Talent

Speaking of your bottom line, business leaders often point to inflated costs as an argument against outsourcing to services such as managed NOCs. That concern is valid for many areas of IT outsourcing, and we very much encourage that kind of critical thinking (even if we wonder where it is in many other areas of buying business IT solutions). Here’s the problem: the market for high-level NOC talent is competitive as hell right now. Many of the most skilled and experienced workers in the space favor “hired gun” work at places like managed NOC services, where they can flex their specialization as they deal with a whole portfolio’s worth of networks, tools, and threats. The rest are getting scooped up by huge firms that can afford their skyrocketing salaries, like Visa and PayPal. If you want industry-standard network security, you need to go where the industry-standard network security engineers are. Since managed NOC services put them on your company’s case without your needing to cover a whole team’s ultra-competitive compensation, they’re typically the better choice value-wise.

Still Uncertain? We’ll Help You Make the Right Choice

All of this is wisdom from our experience working with a range of clients. We can say, generally speaking, that you’ll be better off working with managed NOC services for all the reasons outlined above.
But when you’re making a decision for your business in particular, you shouldn’t stick with generalities. It always pays to take a step back and assess your unique challenges and goals, and figure out your own path from one to the other. We’re happy to help you mark out that path for your business. If you’d like to learn more about potentially working with managed NOC services, or have any other questions about your organization’s IT infrastructure, contact us today. It will be worth your time.
Ransomware attacks are a question of when, not if, and your business needs to be prepared

If you store business data digitally, odds are good that you’re eventually going to get hit by a ransomware attack. The sooner you accept that, the sooner you can move on to the critical question: What do I do to prepare? Here’s how to position yourself for the best possible ransomware incident response.

Every Business Is a Target

More than 8 in 10 ransomware attacks hit small and midsize businesses. Why? Because they’re big enough to be worth the risk but not quite big enough to have invested in cutting-edge security. That’s especially true if the company isn’t in the tech sector, which tends to be more security-minded. Think of these ransomware guys as neighborhood crooks. When they’re roaming the streets deciding where to break in, the posh gated community with security staff and more cameras than trees is too much work to crack. On the other hand, the cramped apartments with boarded-up windows can’t pay enough to be worth the risk. But the single-family homes with standard locks? That’s the sweet spot.

Why Don’t We Hear About More Attacks?

There were more than 600 million ransomware attacks in 2021, so why do so few make it to the news? Simple: Companies don’t want you to know when they’ve been hit. If news of the attack were to get out, their customers, clients, and partners would all lose faith in them. That could have a catastrophic effect on their market value, as it did when Clorox went public with news of its attack in September. Plus, it flags the business as potentially vulnerable to future attackers. If a company can handle an attack without the public ever finding out, it almost always will. (Even though sharing that info could help the entire industry stay safe.)

How to Execute Ransomware Incident Response

Let’s get one thing out of the way: There’s no magic to ransomware incident response. The best-case scenario requires thinking ahead (more on that later). If you get hit before you’ve taken the right precautions, all you can do is contain the damage.

Step One: Quarantining

When you learn you’ve been hit, the first thing you should do is revoke system access from anybody outside your company. Then, you can quarantine your existing systems to prevent any further network communications. The bad guys are in now; don’t let them dig their claws in any deeper.

Step Two: Find a Clean Backup

Most breaches happen long in advance of when the attack is triggered or discovered. The bad guys will sneak something into your system, let it sit there, and then all of a sudden it will activate. That lag between the breach and the attack could mean your backups are compromised going back further than you expect. If you’re going to restore your business to working order, you need to bring a completely clean copy of your data into your systems after they’ve been re-secured. Starting with the most recent, work backward through your backups, scanning each one for signs of compromise (see the sketch just after Step Three). The more recent your clean backup, the better, because all the business you’ve done since will be jeopardized or lost. You’ll have to rebuild everything from that backup on, which is almost impossible to do. That’s a huge part of why ransomware kills so many businesses.

Step Three: Find New Infrastructure

Once you’ve found a clean backup, you’ll need to plug its data into new, clean infrastructure. Many public cloud vendors will provide that infrastructure. Other companies have secondary systems of their own for disaster recovery.
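Here’s what Step Two’s backup triage might look like in practice: a minimal Python sketch, where scan_for_compromise is a hypothetical stand-in for whatever malware- or IOC-scanning tooling your team actually uses. The key idea is the newest-first ordering, which surfaces the most recent clean restore point.

```python
from datetime import datetime

def find_clean_backup(backups, scan_for_compromise):
    """Return the most recent backup that passes a compromise scan.

    backups: list of (timestamp, backup_id) tuples
    scan_for_compromise: callable returning True if a backup shows
        signs of tampering (placeholder for your scanning tooling)
    """
    # Newest first: the more recent the clean backup, the less
    # business history you have to rebuild by hand.
    for timestamp, backup_id in sorted(backups, reverse=True):
        if not scan_for_compromise(backup_id):
            return timestamp, backup_id
    return None  # No clean backup found; escalate immediately.

# Hypothetical usage: the two most recent backups are flagged as
# compromised, so the May 15 backup becomes the restore point.
backups = [
    (datetime(2023, 5, 15), "backup-0515"),
    (datetime(2023, 6, 20), "backup-0620"),
    (datetime(2023, 7, 4), "backup-0704"),
]
compromised = {"backup-0620", "backup-0704"}
print(find_clean_backup(backups, lambda b: b in compromised))
```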
If your company is in that second group, with a disaster recovery site of its own, it is absolutely essential that you make sure your backup site didn’t also get hit.

Should You Pay the Ransom?

It’s the $1 million (or more) question: Should you pay what the bad guys demand? If you do, you may be able to get your systems up and running pretty quickly. The problem is, you won’t know if they’re clean. The bad guys could easily have left other backdoors in the system that they can set off again six months down the road, and then you’re back at square one. The best approach is to look at the numbers. How much value are you losing to this outage? If you’re losing $2 million per day and they’re asking for a $3 million ransom, it may be worth paying: at that rate, just two days of downtime cost more than the ransom, so the business disruption would outstrip the payment. Either way, you’ll need to reset all your systems to zero and go through reinstalling everything.

How to Prepare for Ransomware Incident Response

Until recently, companies thought that if they spent enough on security products, they would be safe. But this only works for so long. Cybersecurity experts are constantly trying to stay a step ahead of bad actors, and most of the time, they do. But the bad guys only need to be right once to get in. And one day, they will. That’s why cybersecurity is never a static situation. You can never think, “I’ve done this one thing; now I’m set forever.” You are not. But you can come close. How? With a Software-as-a-Service third-party data isolation and recovery solution. Here’s how it works: Every day, your vendor makes a backup of all your data. It encrypts that data and stores it in the public cloud. No one on your team can access it without going through the vendor. That results in an isolated and immutable backup of your critical business data. This is key to ransomware incident response because attackers who break into your system and try to ransom your data no longer have power over you. You can just restart your apps on clean infrastructure, pull the data from your backups, and continue business as usual. The bad guys would need to hit both your system and the vendor’s simultaneously, which is all but impossible. That means your data stays hidden and protected.

Stay Safe With Roundstone

These SaaS security solutions are relatively new. They’ve only gained traction over the last two or three years, and not everyone has caught up yet. But here at Roundstone Solutions, we’re on the cutting edge of cybersecurity. We can connect you with vendors such as Cohesity, whose FortKnox software can help keep your data secure even in a ransomware attack. To find the right security solution for your business, contact us today.
How this strategic partnership accelerates hybrid cloud deployment

Among the biggest challenges IT leaders face in the current technology landscape are figuring out their cloud strategy and modernizing their infrastructure. Two of the most prominent companies helping IT teams navigate these challenges, Cisco and Roundstone Solutions partner Nutanix, have long been competitors in the hybrid cloud infrastructure space. That all changed in July 2023 when Cisco announced it had forged a “global strategic partnership” with Nutanix. This partnership aims to “simplify hybrid multicloud and fuel business transformation.” While this collaboration is still in its early stages, its potential to “deliver the industry’s most complete hybrid cloud solution” is exciting for those in the technology space. But what does this really mean, particularly for existing Cisco users? Let’s take a closer look at this collaboration and how it helps IT organizations overcome operational hurdles.

Understanding the Nutanix/Cisco Partnership

Founded in 1984, Cisco has been a pioneer in networking and telecommunications for nearly four decades. It specializes in “smarter, more secure routing” via “future-ready routers for every network.” In 2016, Cisco entered the hyperconverged infrastructure space with the release of the Cisco HyperFlex Data Platform, its proprietary hybrid cloud software solution. Meanwhile, Nutanix has focused primarily on cloud computing since its 2009 inception and was the first in the hyperconverged infrastructure space. This often put the two companies at odds, as HyperFlex was a direct competitor to Nutanix’s offerings. Cisco customers could technically use Nutanix software, but Cisco didn’t officially support it.

This partnership changes all of that. Among its benefits is the ability to simplify infrastructure operations with a single hyperconverged solution that leverages both companies’ strengths. This gives IT managers more flexibility as they continue to adopt the latest technologies, SaaS innovations, and multi-cloud operations. Or, to put it in Cisco’s words, “You can deploy hybrid-cloud infrastructure faster and focus on business outcomes with a seamless end-to-end experience.” Here’s how it all works together: Under the banner of Cisco Compute Hyperconverged with Nutanix, Cisco’s servers, storage, networking, and SaaS operations will integrate with the Nutanix Cloud Platform. This gives businesses working in the cloud a solution that combines “Cisco’s award-winning SaaS-managed compute portfolio with Nutanix’s market-proven cloud platform software,” according to Cisco Senior VP and General Manager Jeremy Foster. Those who take advantage of this “technology alliance,” as Nutanix calls it, will also have access to Cisco’s top-of-the-line security features like Cisco Secure Firewall Threat Defense Virtual. It’s the best of both worlds: an industry-leading multicloud solution integrated with industry-leading security. Cisco began rolling out the integrated solution in late 2023.

One side effect of this partnership is that, as of September 2023, the Cisco HyperFlex Data Platform has entered its end-of-life stage. Cisco will be retiring the platform over the course of the next year or so, allowing Nutanix’s cloud platform to take center stage.

Next Steps for HyperFlex Users

While the phrase “end of life” might cause an IT manager’s hair to stand on end, there’s no need to panic.
The transition away from HyperFlex will be gradual; you’re not going to lose access to your work in the immediate future. Cisco HyperFlex Data Platform sales will end in September 2024, and software maintenance will continue until September 2025. That said, existing HyperFlex customers should definitely start thinking about what’s next and try to avoid investing any more time and resources into the platform than they have to. For many HyperFlex users, the most natural step will be to move cloud operations over to the duo’s “turnkey hyperconverged solution,” which is “optimized for a wide range of workloads and capacities.” Generally speaking, Nutanix users report a number of benefits, among them simpler operations, scaling flexibility, and improved system performance — all at significantly lower costs than the public cloud. Of course, switching to a new IT infrastructure is never as simple as pushing a button or pulling a lever, but the right partner can help make the migration as painless as possible.

Streamlining the Transition

When it comes time to migrate from HyperFlex to Nutanix HCI, Roundstone Solutions can help. Hyperconverged infrastructure is our specialty, Nutanix is our leading partner, and we’ve been helping businesses move to more modern IT solutions for over 10 years. We’ll work with you to find the best solution for your specific needs. Want to learn more? Get in touch.
The formula for how to calculate cloud costs has more variables than simple computing power, storage, and network concerns

When a developer builds a new apartment building, they connect it to local utilities without a second thought. Whether it’s Pacific Gas & Electric on the West Coast or Con Edison on the East, these providers are so much more efficient than any alternative that there’s no question of cost, let alone of trying to generate power and gas independently. Many businesses think of the public cloud in the same way, never bothering to learn how to calculate cloud costs. Although the public cloud aims to work like other utilities, the truth is that it’s nowhere near that efficient. Infrastructure costs are only the beginning; the hidden costs can turn a relatively small commitment into a behemoth investment. So before your business spends past the point of no return, let’s take a more holistic view of what moving to the public cloud costs.

How to Calculate Cloud Costs: Compute, Storage, and Networking

At first, understanding cloud costs seems as simple as any other IT infrastructure investment. After all, they share the same three basic components. Your costs will break down across compute, storage, and networking fees. How much processing power do you need from virtual machines? How much cloud storage space will you use? And what will it cost to keep your systems in contact with one another? Answering these questions starts with examining historical data. For example, what quantity of compute power has brought your business to where it is today? Next, you can consider your growth projections. After all, the bigger your company grows, the more it will require of all three components. Look at how your demand for each has developed and use that to extrapolate forward. When doing this work for on-premises cloud infrastructure, you can treat these costs as capital investments and amortize them over several years. That makes them a simple, predictable recurring cost for budgeting. With the public cloud, the thinking goes, you can just monitor your costs and scale up or down as needed. You’ll need to watch for ever-changing costs based on your usage and on potential rate changes set by your provider, which makes accounting a little trickier. But there are a lot of costs you’ll run into long before you even face that challenge.

How Migrating Adds to Public Cloud Costs

If you’re considering a shift to the public cloud, you’re probably thinking something along the lines of “Well, we’re not data center experts. Someone else can probably run one for us better than we can ourselves.” That may be true. But here’s the first wrinkle: You have to actually move your apps and data to the cloud. That’s not an overnight process — in fact, it has an enormous cost in terms of working hours. Throughout the migration, you’ll have to run two separate environments. Those two environments may use different operating systems or hypervisors. That means you’ll have to pay for and keep track of two differently priced environments for the length of the migration. Your employees will be forced to split their time between the two, and any software you use to manage the two environments will raise the overall cost of the migration.

Refactoring: How Much Time Invested Is Too Much?

When you move to the public cloud, you must decide whether to refactor your apps to make them “cloud native.” Doing so is a little like cleaning out a computer you’ve used for many years.
Over that time, it’s accumulated thousands of files, most of which you probably never use. Your computer might run more smoothly if you went in and deleted all those useless files. But doing so would take a lot of work, and the computer works well enough as is. Is that work worthwhile? That’s refactoring for the cloud in a nutshell. It lets you benefit from the additional efficiencies of the public cloud, but the upfront costs in terms of labor are immense. Most companies expect to get the efficiency bonuses of the cloud without doing this work, and they wind up blindsided by it in the middle of a migration. As business leaders, we have to ask ourselves whether these time investments are worth the efficiencies they unlock down the line. Furthermore, are they worth their immediate economic costs? All the time employees spend refactoring an app is time they aren’t using to innovate or produce new value. If you’re in business to make a profit, you need your employees working toward that goal rather than treading water.

The Cost of Being Wrong

If you commit to the public cloud only to find it doesn’t suit your business, you may find yourself in a serious scrape. If you haven’t refactored your apps, moving back to on-premises is going to be extremely expensive and time-consuming. If you have refactored, there’s no turning back. You’ve invested too much, and you’d have to spend even more to undo that work. This is why knowing how to calculate public cloud costs is so crucial; if you only start to worry about these issues mid-migration, it’s already too late.

Tracking Cloud Usage to Maintain Efficiency

When you use on-premises cloud infrastructure, your costs are mostly capital, though some are operational. They don’t change based on usage, so it doesn’t matter if you max out your capacity or forget to touch it at all. In the public cloud, costs are based on usage. The more you use, the more you pay. And if you’re using the same amount of infrastructure in the public cloud as you were in your on-premises solution, you’re likely to see your costs roughly double. Of course, the whole point of moving to the public cloud is to use only as much as you need. That way, at least in theory, you can pay a higher rate but still come out ahead. But achieving that level of efficiency isn’t as easy as it sounds. Cloud vendors don’t provide tools for monitoring usage versus capacity. The way they see it, if you’re paying for capacity that you don’t need, that’s your problem. Using third-party software such as NCM Cost Governance (formerly known as Beam) by Nutanix can help avoid overpaying. This solution tracks workloads in the public cloud and empowers you to spot inefficiencies. It can also compare the rate you pay with your vendor against what other vendors would charge you — and against what an on-prem solution would cost. Of course, changing cloud vendors means migrating your apps and data a second time, but at least you’ll know what you’re missing.

Let Roundstone Solutions Help

Some workloads make sense in the public cloud. Some make sense on-premises. Do your due diligence, evaluate your options, project out the costs of migrating and refactoring, and make only measured moves: there’s no better way to know for sure which cloud solution is the right one for your business than to do your homework. And there’s no one better to help than Roundstone Solutions. We’ve aided startups and enterprise businesses alike in getting the most out of their cloud use, and we’re ready to help you, too.
Get in touch today to find out more.
Is the public cloud really your best option? Before following the crowd, do your homework.

There are thousands of reasons why entrepreneurs start their own businesses. Ultimately, though, those myriad motivators boil down to one thing: the desire to create something of value. Your definition of “value” will largely depend on your company’s objective; it might be marketability, sustainability, innovation, or — in many cases — profitability. Once you’ve established your business goal, everything you do should be in service to that goal, which takes money, time, and people. This all sounds relatively obvious on paper, but in practice, many companies aren’t evaluating whether every business decision they make serves their ultimate goal. Take IT spending, for instance. For many, the IT department is seen as a cost center, a necessary expense to keep the business running. It’s the same for the public cloud — everyone else is using it, so it must be part of the cost of doing business, right? But what is actually the business value of cloud computing? If you haven’t asked yourself that question before, it’s time to take a step back and reassess.

Do You Really Know the Value of Cloud Computing?

Any business trying to compete in the modern marketplace needs a comprehensive IT infrastructure in place. When the public cloud is viewed as just another necessary cost of this infrastructure, executives and IT professionals often neglect to reassess its value, even as the company grows and evolves. In some cases, otherwise savvy businesses are wasting up to 50% of their IT infrastructure budget by treating these costs as a foregone conclusion. There are three factors that contribute to this mentality: the ease of maintaining the status quo, an aversion to risk, and a herd mentality that’s all too common in the technology industry.

Maintaining the Status Quo

It’s not hard to see why IT professionals become comfortable with the status quo. There’s business value in saving the time, money, and personnel it would take to re-evaluate all of your current solutions. At some point, however, continuing along the same path means you’re probably leaving money on the table. Don’t over-focus on short-term results.

Risk Aversion

But what if you spend the time and effort to come up with an alternate solution and it doesn’t work? What if it adds unplanned expenses or downtime? Those are valid concerns, but they can build up into a risk-averse attitude that simply doesn’t work in cutting-edge industries. Eventually, one of your competitors is going to take those risks, leaving you behind in the process.

Following the Herd

The technology industry has a long history of herd mentality. In the past, we saw this with outsourced data centers, cryptocurrency and NFTs, the media’s “pivot to video” — none of which proved to be sustainable over time. The latest shiny distractions are AI and cloud computing, both of which have very promising technological applications. The problem is, many companies aren’t fully evaluating those applications or the value they bring; instead, decision-makers assume that everyone who came before them has already done their due diligence, and that it’s safe to walk the same path. In the case of the public cloud, this can be an expensive assumption. Companies often turn to the public cloud to get up and running quickly; others see this immediate success and follow suit without properly doing their homework. But the best solution for another company isn’t necessarily the one that suits your particular needs.
In some cases, public cloud computing might legitimately be the best option, but how can you really know that if you haven’t evaluated its value?

Embracing Change as the Only Constant

Think about a startup with a handful of employees. It has to do a lot with very little, so many of the functions of an enterprise business — payroll, HR, and so on — are likely getting outsourced to cloud-based software solutions. In those early months and sometimes years, that makes perfect sense, but what happens when your company matures? Is it still worth it to pay those recurring infrastructure costs, or is there a better way? To avoid getting stuck in the “If it ain’t broke, don’t fix it” trap, you need to accept the fact that change is inevitable. Whenever your business grows, you need to take the time to evaluate the solutions you’re using and consider whether there are better alternatives. Is the value of cloud computing the same now as it was a year ago, three years ago, five years ago? If you’re not sure of the answer, it’s time for an audit. Do your research, weigh your choices, and only after you’ve established which option provides the most value, make a choice — but understand that choice isn’t permanent. In another three to five years, it might be time for another audit. That doesn’t mean you made the wrong choice; it just means that change has come once again. Don’t fear it. Embrace it.

Roundstone Gets Business Value

At Roundstone, evaluating the available solutions and making the best choice for a particular set of circumstances is our specialty. Whether you’re struggling to find the right cloud strategy, attempting to modernize existing architecture, or hoping to find ways to stretch your resources further, Roundstone Solutions can help. We work with best-in-class vendors and create efficient solutions designed with your needs in mind. To learn more, get in touch.
Companies and governments are investing heavily to build AI, but many don’t know what they’re buying — or how to use it

The sky is falling for anyone not getting into artificial intelligence (AI) immediately. If you build AI for your business, you’re going to revolutionize the industry by saving huge amounts of time, boosting efficiency, and raising productivity, and it’s only going to get better over time. Just look at how good ChatGPT is now: You give it a prompt, and it spits out 700 words like it’s nothing. Evangelists spread this gospel from every corner. Businesses and governments alike see dollar signs. There’s just one problem: Many of the leaders at these organizations don’t actually understand AI. The hype around the technology has them exploring how to build AI rather than how to use it in practical ways. By failing to cover their fundamentals, these leaders risk burning time and cash on AI investments that don’t pan out. So, in the interest of saving both, let’s take a deep breath and a deeper look at what AI is and what it can do for us.

What Is AI?

Underneath the shiny, marketing-buffed exterior, AI is an application like anything else. That means it takes servers, data, networking, and software to build AI successfully. Let’s take ChatGPT as an example. This bot works by crawling the internet to index its contents — just as Google Search does — and then storing huge chunks of it on developer OpenAI’s servers. The bot then examines all that language to learn the syntax, usage, and facts that help it mimic human writing. When you prompt ChatGPT, it accesses those servers, chews through a bunch of data, and returns it to you as a big block of text. The classic components of an application are all there. Servers host ChatGPT’s data, networks connect those servers, and software pulls it all together into text for users to read and use. That makes it fundamentally the same as your human resources or payroll apps. The only difference is in its workload.

How to Build AI Infrastructure

At this juncture, business leaders need to look past the hype around AI and treat it like any other application. The apps may do some of the work for you, but if you plan to integrate an AI solution, you will need to invest in infrastructure.

Outsourcing vs. Going In-House

Whether it’s storage, cybersecurity, or AI, implementing a new solution means examining whether or not to outsource its infrastructure. In each case, a business has to answer that question for itself by evaluating the expected return on investment of either option. When it comes to storage and cybersecurity, we have an ocean of data to pore through about how to provision infrastructure. We have a strong sense of what benefits and drawbacks we can expect from different deployment methods. That leads to smarter business decisions and more efficient operations. AI is a different story. Its use cases in business are still nascent. As a result, we simply don’t have the data we need to determine whether it’s more cost-effective to build AI infrastructure in-house or to rent it from tech giants like Google. Moreover, it will be a long time before we have that data. And there’s going to be a lot of money burned between now and then.

The Case for Letting Others Lead

Despite the enthusiasm for impressive AI deployments like ChatGPT, it’s still not clear exactly how this technology will unlock the efficiency gains executives are looking for. That won’t stop them from looking.
Lots of people have lots of ideas about how AI can save time and money, but they know very little for sure. Leaders in the space will likely spend billions of dollars and make tons of mistakes before they perfect the formula for AI deployment. For giant corporations, that kind of investment may be worthwhile. But for most businesses, it will prove much more cost-effective to adopt a wait-and-see approach. Imagine a banking giant like Wells Fargo, for example. It could spend $1 billion on AI just to see $100 million of value. That return may grow over time, but the initial costs would be ruinous for a smaller company. It’s far more efficient for that smaller company to wait until the use cases have become clear and begun to demonstrate value. That way, it can invest less upfront and still see similar returns. Think of AI like a bridge under construction via trial and error. Deep-pocketed interests can afford to send their goods across the bridge over and over despite the risk of a collapse. If a collapse comes, they can eat the cost of lost goods and then use what they learned to make the bridge a little better. Over a long enough timeline — and with sufficient lost goods — they’ll come up with a great bridge. But in the meantime, it makes more sense for the rest of us to head downstream to the ford. It may take a little longer, but it’s significantly safer. Then, once the infrastructure is in place, we can swoop in and benefit from their investments at a lower cost.

How to Plan for the Future

Even if you’re not investing in AI immediately, there are still ways you can prepare for its widespread adoption. The most important thing you can do right now is consider what, specifically, you want AI to do for your business. What are the processes that seem ripe for efficiency gains? Where are the simple, repetitive tasks that could be handled by AI? Create a plan for how you might integrate AI technology. In each case, be sure to clearly outline how it will deliver increased business value. By doing so, you can position yourself as the smart money coming to capitalize on AI.

Build Efficiency Through Modernization

For many businesses, AI’s promise lies in its ability to increase efficiency. Budgets are stretched thin, and organizations of all sizes are looking to do more with less. While you wait for AI to realize that promise, there are plenty of other ways to boost efficiency. From Hybrid Cloud Infrastructure to Unified Communications as a Service, Roundstone Solutions can put you in touch with best-in-class vendors ready to get you the biggest return on every dollar spent. Contact us today to learn more.
From the well-known to the highly rated, these are the partners you need to know about

As many are discovering, the time to migrate to hyperconverged infrastructure (HCI) is now. I’ve written before about the benefits of HCI. Put simply, no one is in the business of owning technology for the sake of owning technology. If you’re going to own tech, it should be beneficial to your business. HCI benefits businesses by offering an efficient entry to the latest infrastructure tech that will save you money. Don’t just take my word for it. Public cloud solutions are taking up a lot of bandwidth lately, with many businesses following the herd to the cloud without giving a lot of thought to why, or to what other solutions might be available. But here’s a secret: public cloud providers and other hyperscalers use HCI. In fact, one of our top HCI vendors was created by people who built the infrastructure for one of the largest current public cloud providers. HCI offers an evolution of traditional server architecture by automating and moving tasks to software. It provides the simplicity of a one-touch experience in a scalable solution for any enterprise user. Here are the top 5 hyperconverged infrastructure vendors I think you should know about.

Top 5 HCI Vendors

There are a variety of HCI vendors out there. Most of them are good. Some are great. Which HCI vendor is right for you can depend on your needs and budget. These are the HCI vendors that would be a good fit for almost any enterprise or small business. Let’s start at the top, with the HCI vendor I recommend more than any other.

Nutanix

Nutanix was founded in 2009 by engineers who had worked at Google creating the Google file system, including a man known as the “Father of Hyperconvergence,” Mohit Aron. Google was one of the earliest pioneers of HCI. In its goal to create a scalable architecture, the company moved many processes into automation through software. This helped streamline operations and allowed for faster data access. Aron was on the team that created the file system for managing all of that data, which Google still uses to this day. Believing more companies could benefit from HCI technology, Aron left Google and co-founded Nutanix. The company achieved “unicorn” status in 2013 as a startup with a valuation of over $1 billion and went public in 2016. Today, the company is worth over $8 billion. Nutanix was the first in the HCI space and is still one of the best. It created a simple-to-use service with the power of a file system created for a hyperscaler. To me, Nutanix is the industry’s best-kept secret. A lot of people haven’t heard of it, but those who know, know. Net Promoter Score (NPS) is a survey that asks respondents a single question: how likely are you to recommend this product or service? Respondents answer with a number from 0-10. Scores of 0-6 are considered “detractors,” 7 and 8 are considered “passives,” and 9 and 10 are considered “promoters.” The percentage of detractors is then subtracted from the percentage of promoters, and the result is a number between -100 and 100, which is the company’s NPS (the short sketch after this entry shows the arithmetic). Nutanix’s NPS is 90+ and has been for six years. That is unheard of. Compare that to Amazon Web Services (AWS) at 59, Google Cloud Platform at 45, and Microsoft at 40. People who use Nutanix love Nutanix. Pros:
Cons:
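As a quick illustration of the NPS arithmetic described above, here’s a minimal Python sketch; the survey scores are made-up sample data.

```python
def net_promoter_score(scores):
    """Compute NPS from a list of 0-10 survey responses."""
    promoters = sum(1 for s in scores if s >= 9)   # scores of 9-10
    detractors = sum(1 for s in scores if s <= 6)  # scores of 0-6
    # Passives (7-8) count toward the total but toward neither bucket.
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical survey: seven promoters, two passives, one detractor.
sample = [10, 9, 9, 10, 8, 7, 9, 10, 9, 3]
print(net_promoter_score(sample))  # 60.0
```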
HPE SimpliVity

SimpliVity was also founded in 2009 and was acquired by HPE in 2017. HPE has since bolted SimpliVity onto its offerings as its native HCI solution. Unfortunately, SimpliVity has always had hardware dependencies. It is not a full software solution like Nutanix. And, now that it is a part of HPE, it is wholly dependent on HPE equipment. So if you buy SimpliVity, you’re buying HPE equipment. The other side of that coin is that it makes HPE a one-stop shop. You can actually buy hardware from them to go with your HCI solution. Pros:
Cons:
Dell VxRail

Dell’s solution is novel in that Dell didn’t purchase a separate HCI company; it created its own out of pre-existing parts. But unlike Nutanix’s, the Dell VxRail solution is not built from the ground up for HCI; it is cobbled together out of existing Dell infrastructure solutions. The result, while it functions adequately, is not as elegant or efficient a solution as Nutanix. VxRail, like SimpliVity, also eliminates choice. You’ll be running VMware whether you want it or not. You will also be tethered to Dell hardware. The upside of these dependencies is that they take the guesswork out of choosing virtualization and hardware. Plus, as with SimpliVity, you can at least buy hardware from Dell. Pros:
Cons:
Cisco HyperFlex

HyperFlex was created by Cisco in partnership with Springpath in 2016. Cisco then acquired Springpath in 2017 for $320 million. The main benefit of HyperFlex was the Cisco brand name, but given that Cisco is mainly a networking company and has no stake in the storage market, its value as an HCI solution was questionable. HyperFlex was mainly used by Cisco to seed its server business with a proprietary HCI solution, but in August of 2023, Cisco announced a partnership with Nutanix to offer a wholly new HCI solution for Cisco server equipment. HyperFlex has since been discontinued. Pros:
Cons:
Scale Computing

Scale Computing is the smallest of the top 5 HCI vendors. Scale was founded in 2008 and launched its HCI solution in 2012.
Scale is a solid solution for SMBs, but it shouldn’t be considered enterprise class. If you have small workloads and a solution like Nutanix is too expensive, Scale is a good choice. Otherwise, I’d stick with one of the other vendors. Pros:
Cons:
Migrating to HCI doesn’t have to be complicated. Roundstone can walk you through the process and talk you through your options to find the solution that’s best for you, whether that’s one of the five HCI vendors listed here or something entirely different. To get started with your HCI migration and to learn more about how Roundstone can help you with your technology needs, contact us today.

By now, you’ve probably read about Cisco partnering up with Nutanix after Cisco “threw in the towel” on HyperFlex. The purpose of this post is to give you some comfort in your way forward.
First, some history. Cisco started selling servers in 2009. This was a strategic move by Cisco to hedge against competition in the networking space, where, at the time, Cisco held an 80+% market share. The idea was that since Cisco was in so many corporate data centers on the networking side, there was value in being able to also supply compute, in the form of their UCS servers. Made sense, although there was never a storage component to that idea, which I never understood.

In 2016, Cisco was seeing that companies like Nutanix and a few others were doing well selling hyper-converged infrastructure (HCI). So, to stem eventual competition for their UCS servers, Cisco purchased Springpath, a struggling company offering HCI. Springpath was not a main player in the HCI space at the time. Cisco combined their UCS servers and Springpath software to create their version of HCI, which they named HyperFlex. HyperFlex started selling in 2016, I think. Or, should I say, Cisco started giving away a lot of HyperFlex appliances with networking deals, in order to seed the market. You see, HyperFlex was never a big hit with users. Of the HyperFlex market share (very small), many of the users got the product free as a part of a networking deal, and didn’t really pay for HyperFlex itself. I know this because I saw many HyperFlex shipping boxes sitting at customer sites, still with the appliances inside. I would ask if they purchased HyperFlex, and most times, I learned they were given the products. That was my experience... it may not have been yours. Please note that I’m not saying HyperFlex didn’t work. It does, but in a different way than Nutanix. HyperFlex relies on hardware acceleration to make UCS perform with HCI software, whereas Nutanix is all about moving function into software.

Well, after trying to make a go of it with HyperFlex for 7 years, Cisco has finally given up on the platform and announced the end of life for HyperFlex. Cisco has partnered up with Nutanix, which is the industry leader in HCI and has been since the start. Which makes sense. Why sell a product that customers weren’t interested in buying when you could offer them a solution that customers love (Nutanix)?

So, now that Cisco has given you the word that there isn’t a future for HyperFlex, what’s your move? Well, let us help. Roundstone Solutions is one of Nutanix’s primary focused partners in the Northern CA and NY/NJ markets. We specialize in Nutanix; it’s literally 80% of what we do. We know the platform as well as the folks at Nutanix, and know how to help our Clients get the most from Nutanix. Let us help you. Call us at 925-324-1582 in Northern CA or 201-740-2190 in NY/NJ. Or, email us at [email protected]. We’ll get right back to you and will be more than happy to help you chart a course towards Nutanix. Your future is bright with Nutanix.
You’ll only know you’ve under-invested in cybersecurity solutions when something goes wrong
Of the thousands of decisions you make at your company, choosing the right cybersecurity solutions may be the most important. Just ask the folks at Clorox. Ransomware hackers hit the company in August, and paying the ransom was only the beginning of the trouble. The company spent $25 million on its response, from forensic investigators to legal and technical assistance. Then, in October, it announced that the disruption caused by the attack had led to a 23-28% loss in net sales. And that’s all to say nothing of the reputational damage. Nobody wants to do business with a company that has Swiss cheese for security.
As ransomware attacks become more and more common, choosing the right cybersecurity solutions for business only grows more important. By examining the forces shaping the cybersecurity market, you’ll be better equipped to find the right solution for your company.

Factors Affecting the Search for Cybersecurity Solutions for Business

Stakes Have Never Been Higher
As technology has grown more advanced, companies have started holding more and more of their resources within that technology. That’s especially true for cloud service providers such as Google and Amazon. Proprietary product designs, protected client information, banking information, and more now reside in virtual data centers. All that data in one place has encouraged bad actors to scale up their hacking efforts accordingly. There were more than 600 million ransomware attacks in 2021, and there were 140 million in the first half of 2023. Hackers and tech companies are now locked in an arms race, with trillions of dollars on the line.
Budgeting Questions Are Complex
It would make life a lot easier if cybersecurity solutions for business could be budgeted in the same way operational IT solutions are. In IT, a company can evaluate workload size and speed, and then estimate a budget based on that data.
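In fact, on the IT side that estimation can be nearly mechanical. Here’s a minimal Python sketch of the idea; the workload quantities and unit rates are entirely hypothetical placeholders.

```python
# Operational IT budgeting: known workloads times known unit rates.
workloads = {
    "compute_vcpu_hours": 200_000,
    "storage_tb_months": 150,
    "network_egress_gb": 30_000,
}
unit_rates = {  # dollars per unit (placeholder figures)
    "compute_vcpu_hours": 0.04,
    "storage_tb_months": 20.00,
    "network_egress_gb": 0.09,
}
annual_budget = sum(qty * unit_rates[item] for item, qty in workloads.items())
print(f"Estimated annual spend: ${annual_budget:,.2f}")
# Estimated annual spend: $13,700.00
```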
Estimating a budget in cybersecurity is much more opaque because there are few signs that your solutions are working. You may never know how many attacks your security repels. On the other hand, as soon as you’ve under-invested, you’ll know. And by then it will be too late. Security solutions have to be right every time, but bad actors only need to be right once. That’s why IT managers tend to over-invest in cybersecurity: Better safe than sorry.

Security Talent Is Scarce and Expensive
Budgeting also requires more than investing in the right tools. Operating a cybersecurity staff of sufficient size is another critical piece of the puzzle. But this brings up another complication. Cybersecurity demands a lot of talent. That talent is in limited supply, and hiring competition is stiff, to say the least. Whatever your company can pay, Microsoft and Facebook can probably pay more. That makes it very difficult to attract the best talent to your business.
After the giants have taken their picks of the cybersecurity talent, the remaining professionals will still expect high salaries. And even if they’re within your budget, supply is so constrained that you may not be able to hire enough of them to meet your needs. There simply aren’t enough cybersecurity experts to go around.

Security Operations Centers (SOCs) and You
Major corporations know that the bigger they are, the bigger the targets on their backs. The slightest misstep could let bad actors past their defenses, leading to eye-watering value losses. To combat that threat, they create departments whose sole focus is maintaining cybersecurity. These departments are known as security operations centers, or SOCs. They’re full of cybersecurity experts equipped with top-of-the-line tools who serve as the eyes and ears of the organization, taking in telemetry information, assimilating it, identifying trends and upcoming threats, and thus staying one step ahead of bad actors.
Mid-size and smaller corporations are in a difficult position with regard to cybersecurity. They may not face the same volume of attacks as an Apple or a Walmart, but they also have far fewer resources to fend off hackers. They can’t afford to dedicate an entire department to security. Instead, those responsibilities fall to the IT department. Those workers are likely capable in cybersecurity matters, but their plates are already full of other priorities, including maintaining continuity of services. If they fail on that front because they prioritized security, we run into the spending black box problem we discussed earlier. There’s no way to know if that priority on security was misplaced.

(Shared) Knowledge Is Power
Every device on your company’s network is a potential entry point for bad actors. That includes servers, network switches, storage devices, and even some connected printers. That’s why so many cybersecurity solutions for business focus on creating strong locks on those access points. But the truth is that, despite what movies might have you believe, 95% of cybersecurity issues are the result of simple human error. Phishing attempts, fraud, and other manipulative tricks are the most common ways bad actors get into company systems. As a result, a truly comprehensive security strategy should find ways to address and guard against those techniques.
This is where sharing knowledge gains critical importance. In an every-man-for-himself environment, each business has an incredibly limited amount of information available to it. If you’re lucky enough to have 20 staff working on cybersecurity, that’s all the intelligence you can rely on to stay ahead of hackers. But if those 20 workers can share data with another 20, let alone 200, their frame of reference for possible vulnerabilities grows enormously. And the more vulnerabilities they know about, the more they can plug up.

Making SOCs Accessible with Outsourcing
Companies hit by cyberattacks are often understandably cautious about discussing how their security systems failed. But if they’re able to overcome that reluctance and contribute to the knowledge of the cybersecurity community, it can build a critical mass of information about how hackers target and attempt to infiltrate business systems.
Cybersecurity firm Arctic Wolf aims to provide that wider frame of reference. Its cybersecurity experts essentially function as an outsourced SOC. They hook into your security tools and fine-tune them for maximum security. They then monitor those tools in addition to trends in hacking attempts, find new security solutions, and implement them for you. To find out how Arctic Wolf can help your business, contact us today.

With cloud costs higher than expected, many businesses are looking for ways to bring expenses — and data — back under their control

You’ve done your research and realized that the public cloud isn’t the right decision for your business. Maybe you’ve determined that the cost of keeping your workloads in the cloud is too high, but you remain hesitant about whether cloud repatriation is right for you. You’re not alone. The public cloud is a powerful tool for the right workloads — but it’s not the right choice for all of them, and the costs can be tremendous, especially when compared to running the same workloads on premises. However, a level of groupthink often influences decisions when it comes to staying in the cloud, with businesses deciding to keep paying these high costs just because every other company around them is. But here’s the thing: if you’re already researching cloud repatriation, your intuition is telling you it’s the right move. And by doing that research, you’ve already put in much of the work it would take to go through the process. Plus, I guarantee that more businesses than you think are having these same internal conversations about shifting from the public cloud back to on premises to save money. In fact, some surveys show that up to 80% of companies are repatriating at least some of their data back to on-premises infrastructure each year. Cloud repatriation isn’t a heavy lift, so why wait? Here, we’ll cover the basics of cloud repatriation, some reasons why CTOs hesitate to make the move, and why repatriation is a far simpler decision than most make it out to be. That way, you’ll have all the knowledge you need to make the right decision for your organization’s bottom line.

What Is Cloud Repatriation?

Cloud repatriation is the process of shifting applications from public cloud environments back to on-premises infrastructure. It’s essentially a cloud migration in reverse — you’re taking the workloads you previously moved to the cloud and bringing them back into your complete control.

Why Do Organizations Seek Out Cloud Repatriation?

There are several reasons why organizations pursue cloud repatriation. For many, it comes from a realization that migrating their workloads to the cloud has failed to achieve any of the benefits cloud providers promised. Typically, the reasons behind cloud repatriation can be organized into three categories: cost, control, and performance.
So Why Are Businesses Investigating Cloud Repatriation Now?

The public cloud has been rising in popularity for around ten years now, and businesses have been steadily migrating to it over that period. This steady shift has allowed public cloud platforms like AWS and Google Cloud to become massive and home in on profiting from this migration. They’ve also evolved their messaging to persuade organizations to make the leap, hyping up the benefits of leaving on-premises infrastructure behind. Then, the COVID pandemic hit at the beginning of 2020 and supercharged this migration process into overdrive. IT departments couldn’t come to work, visit their data centers, or manage the resources of their now-remote workforce without a drastic change in infrastructure. There was no time for due diligence, so IT departments bought compute power in the public cloud and got their operations up and running quickly. According to a survey conducted by the Information Systems Audit and Control Association, 90% of respondents said cloud usage was higher than initially planned due to the COVID-19 pandemic. Those who could quickly navigate this uncharted territory saw improved efficiency during those uncertain times. Now that the dust has settled and CTOs have time to dig into the numbers, many realize just how expensive those decisions have become. They may have been the right decisions at the time, but are they still the right decisions today? For a lot of businesses, the answer is “no.”

Why Are People Hesitant to Begin Cloud Repatriation?

Even after realizing that the public cloud isn’t the right choice for their business, many haven’t pulled the trigger on repatriation yet. There are a few reasons for this. For one, the “deep recession” analysts have predicted for a few years hasn’t fully materialized. While some belt-tightening has happened throughout the economy, IT departments haven’t been significantly pushed to cut costs. And if there are more pressing issues to take care of, exploring repatriation options decreases in priority. So many are content to stick with what they have until they’re backed into a corner. Much of the hesitancy comes down to the mindset surrounding repatriation, whether that’s a feeling that repatriation is a complex process, a sense that spending money on non-cloud infrastructure is a waste, or a rationalization of prior mistakes. These attitudes stem from a “sunk cost fallacy” mindset, where a lot of time, money, and effort has already been spent moving infrastructure to the cloud. Plus, repatriation isn’t always cheap: many cloud providers charge for egress — for both the amount of data being transferred out of the cloud as well as the speed — and many organizations don’t factor this cost into their initial migration. (A rough sketch of that egress math follows below.) A combination of these factors ultimately leads to a hesitancy to repatriate, even if doing so would be in the company’s best interest over the long term.

Here’s Why You Should Start Cloud Repatriation Anyway

Many IT professionals psych themselves out about the challenges of repatriating to on-premises infrastructure and make it seem more complicated than it really is. Here’s the thing: Cloud repatriation is not a difficult process, and you already have the resources to get it done. For one, there’s a good chance you’ve already got the on-premises hardware needed to repatriate, whether it’s leftover from cloud migration or you’re still using it in day-to-day operations.
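Speaking of the egress fees mentioned earlier, here’s a minimal Python sketch for putting rough numbers on them before you commit. The per-GB rate, data volume, and overhead padding are placeholder assumptions; actual egress pricing varies by provider, region, and transfer tier, so substitute your own figures.

```python
def estimate_egress_cost(data_tb, rate_per_gb, overhead_pct=10):
    """Rough one-time cost to move data out of a public cloud.

    data_tb: terabytes to repatriate
    rate_per_gb: provider's egress rate in dollars per GB (placeholder)
    overhead_pct: padding for retries and duplicated transfers
    """
    data_gb = data_tb * 1024
    return data_gb * rate_per_gb * (1 + overhead_pct / 100)

# Hypothetical example: 50 TB at $0.09/GB with 10% overhead.
print(f"${estimate_egress_cost(50, 0.09):,.2f}")  # $5,068.80
```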
When deciding whether it’s time to repatriate, it’s crucial to evaluate that existing inventory to determine whether it matches the workloads you’re currently operating in the cloud. Once you’ve completed this investigation, you’ll fully understand the workloads running in the cloud and how much space they take up. Using this information, you’ll know what equipment your data center requires. You’ll also likely have direct proof that running those workloads on premises is cheaper, since you’ll have complete control over costs. Then, it’s time to repatriate. It’s not a heavy lift; you’ve already moved data into the cloud, and you know how to get it out. Ensure that you’re setting up your on-premises hardware with the same configurations as your workloads in the cloud, and understand that it will cost some money to get your data out of the cloud due to egress fees. But once your data is out, you’re free to deactivate your cloud operations and reduce spending in the long term. Honestly, the most challenging part of repatriation is admitting that migration wasn’t the right choice in the first place and choosing to move operations back on premises. Think of it like ripping off a bandage — the thought of doing it hurts more than the reality of doing it. But once you’ve decided to repatriate, you can finally take full control of your costs and have more time to investigate your options should another solution come along. To reiterate: Cloud repatriation is not a heavy lift, and once the dust has settled, you’ll have more control over your data, lower operational costs, and better performance.

Let the Experts at Roundstone Guide Your Cloud Repatriation Efforts

The best cloud repatriation is the one you don’t have to do. But getting to that point means doing your homework upfront. However, you don’t have to go it alone. Roundstone Solutions has helped organizations of all sizes in the private and public sectors, from scrappy startups to global enterprises, optimize and modernize their IT infrastructure to fit their exacting needs. Want to learn more? Get in touch today.