It’s only growing – in popularity, in offerings, even into pop culture. In almost every new project I start, the client mentions cloud hosting from services like Amazon Web Services (AWS) and Microsoft Azure. In January, a survey of more than 1000 IT professionals showed increased adoption and changing perspectives on the benefits and risks of cloud computing. I would guess that anyone in an infrastructure decision-making role has researched the possibilities. But in reality, making the move from internal hosting to Infrastructure as a Service is often a difficult step.
Springthrough works with companies as small as 3 people and as large as 150,000 to make the decision. It’s never a simple case of yes – turn on the proverbial switch – and move to the cloud. Instead, our clients share their concerns. We listen. And often, we help the client make and execute their final decision. Below are some of the most common benefits and risks that impact our clients’ cloud decisions.
Time & Money Investment
Typically, the first concern that comes up while discussing a move to the cloud is the investment of time and money. This concern affects nearly every level of the organization.
Just the thought of these concerns leads some to decide automatically to host their solutions on-premise. It seems like they are saving money by using existing equipment and resources; what often escapes them, however, is the need to maintain that equipment and keep it constantly up to date on security patches. That maintenance requires staff hours, up-front planning to minimize downtime, and someone knowledgeable enough to execute the plan. With cloud services, maintenance and security updates for the underlying platform are handled automatically, typically with little to no impact on the uptime of your services.
Cloud users can also thank healthy competition in the marketplace for easing the strain on their budgets. The cloud giants Amazon and Microsoft compete with each other by dropping prices and rolling out new, innovative features to draw more users to their platforms. Both also offer free introductory plans that let users test out their services before committing to longer-term usage. The next few years will likely show us continued battles for users’ loyalty – with experts already placing their bets on the victor.
Scalability & Growth

Another consideration, especially for clients building a product or service with the potential to reach a large number of users, is the ability to scale and grow on demand. One of the hardest things to predict is how popular a new service could become. Look at services like Twitter, Instagram, and Snapchat – just a few examples of simple concepts that quickly exploded into incredibly popular platforms with user counts in the millions.
We've seen clients struggle to keep up with the volume of traffic they receive on their website or through the software services they provide. In most cases, the struggle comes down to physical hardware: it can handle a moderate amount of traffic, but a traffic spike quickly reveals bottlenecks and triggers a rush to procure more resources. One client, a theater chain, sees much higher traffic on the first day of ticket sales for a blockbuster film than on an average day, putting far greater demand on their systems. Other businesses may have expanded quickly – adding new people, locations, or services – without expanding their infrastructure, leaving them stuck playing catch-up.
Cloud services typically charge based on the resources used and how long they are used, which can give clients immediate sticker shock if they assume they must keep high-performing resources running around the clock to stay prepared for those traffic spikes. That’s actually a scenario cloud services have solved. By leveraging cloud services, you can set conditions for when the system should automatically scale resources up and back down again. Clients like the theater chain would see huge cost savings, since they would only need those extra resources on movie premiere nights. Once the traffic spike is over, the system is smart enough to scale back down and incur fewer expenses.
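The scale-up and scale-down conditions described above amount to a simple threshold rule. Here is a minimal, provider-agnostic Python sketch of that idea – the thresholds and instance bounds are hypothetical defaults, not any cloud vendor's API:

```python
def desired_instances(current, cpu_percent,
                      scale_up_at=70, scale_down_at=30,
                      min_instances=2, max_instances=20):
    """Threshold-based autoscaling rule (illustrative sketch only).

    Add an instance when average CPU is high, remove one when it is
    low, and always stay within the configured min/max bounds.
    """
    if cpu_percent >= scale_up_at:
        return min(current + 1, max_instances)
    if cpu_percent <= scale_down_at:
        return max(current - 1, min_instances)
    return current  # within the comfortable band: no change
```

Real platforms evaluate rules like this against monitoring metrics on a schedule, so capacity follows demand without anyone manually provisioning servers.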
Reliability & Disaster Recovery

No matter how powerful your hardware may be or how long it has run without any issues, the fact of the matter is that failures happen – and usually when you least expect them. That’s why it is always a good idea to prepare for the worst and have a contingency plan for when systems go down. Technologies like load balancers, replication, and redundant failovers provide a way for applications to stay up and running if any one particular appliance goes down. Cloud services, by default, provide easy ways to create point-in-time backups or snapshots of your resources so that you can quickly recover in the event of a total system failure.
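A point-in-time backup strategy usually pairs snapshot creation with a retention policy, so old snapshots don't pile up costs forever. The sketch below is a generic Python illustration of a daily retention pass, assuming snapshots are tracked as (name, timestamp) pairs; it is not tied to any specific cloud API:

```python
from datetime import datetime, timedelta

def prune_snapshots(snapshots, now, keep_days=7):
    """Return only snapshots newer than `keep_days` (illustrative).

    `snapshots` is a list of (name, created_at) tuples; the result is
    the set that would survive a retention run executed at `now`.
    """
    cutoff = now - timedelta(days=keep_days)
    return [(name, ts) for name, ts in snapshots if ts >= cutoff]
```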
In extreme cases, like natural disasters with the power to knock out an entire data center, businesses need to ensure their applications exist in other geographic areas in order to keep things running. Services like AWS and Azure both offer the flexibility for users to select not only the resources to spin up, but also where to allocate those resources. They also offer options for automatic failover to resources in other data centers should something catastrophic happen. Recently, Azure began offering geo-replication for all of its database tiers (including the basic offering), which means even small businesses can take advantage of a feature that only enterprise-level applications had access to before.
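At its core, automatic regional failover means routing traffic to the highest-priority region that is still healthy. A minimal sketch of that decision, assuming a health-check result per region (the region names are illustrative, not real endpoints):

```python
def pick_region(regions, healthy):
    """Choose the first healthy region from a priority-ordered list.

    `regions` is ordered by preference (primary first); `healthy` maps
    region name -> bool from health checks. Raises if nothing is up.
    """
    for region in regions:
        if healthy.get(region, False):
            return region
    raise RuntimeError("no healthy region available")
```

Managed DNS and traffic-manager services implement essentially this logic for you, re-running the health checks continuously so failover happens without manual intervention.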
Security & Privacy

Last, but definitely not least, is the concern over security. One of the most valuable assets organizations possess today is the data related to their clients, customers, and products, so it's understandable when our clients emphasize a need to protect their data and secure their web presence from malicious users.
Vulnerabilities are found every day, which makes it important to stay current on security updates and preventative measures. Cloud services provide security updates and options for additional protection at nearly every level: databases can be encrypted at the record level, web applications have access to free SSL certificates to encrypt data in transit, and firewall restrictions exist by default to limit the accessibility of critical systems. Cloud providers also actively keep up to date on compliance standards related to web traffic, such as PCI DSS and HIPAA. Accomplishing the same security features on-premise would involve additional hardware dedicated to firewall rules and restrictions, scheduled maintenance windows to update operating systems, and staff who stay current with compliance requirements and expectations.
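The "firewall restrictions exist by default" point is a default-deny policy: every connection is blocked unless an explicit allow rule matches it. A provider-agnostic sketch of that check, with made-up rule data for illustration:

```python
def is_allowed(port, source_ip, allow_rules):
    """Default-deny firewall check (illustrative sketch).

    `allow_rules` is a list of (port, ip_prefix) pairs; any traffic
    not explicitly matched by a rule is rejected.
    """
    return any(port == rule_port and source_ip.startswith(prefix)
               for rule_port, prefix in allow_rules)
```

Starting from deny-all and opening only what's needed is the opposite of many legacy on-premise setups, where everything inside the network perimeter is trusted by default.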
Even with all of the offerings that the cloud provides, some clients request a level of security and privacy that can only be attained when the hardware exists on-premise. One such client of ours felt that the data they maintained was so confidential that only individuals physically present at their office and connected to their network could access it. In the rare event of an outside breach, they also needed the ability to cut off all power to their servers to effectively eliminate any and all means of reaching that data. Granted, this is an extreme case of data protection, but it serves as an example of how protective some clients can be with regard to their data.
Clearly there isn’t a one-size-fits-all approach. With cloud computing, benefits and risks appear at each turn. Again, that’s why we advocate that our clients take time to really understand their concerns and their plans for the future. What we have seen is that as the market changes, more and more businesses move to the cloud – sometimes in a hybrid solution. Now, we’re seeing clients switch between cloud services as they better define how the cloud can work for them. It remains a topic worth continued research, with questions to consider like which service to use, what the migration process will look like, and how it will affect future workflows.