Rob Coward is one of our senior cloud consultants, with over 26 years of technology experience at companies such as IBM and Game. He specialises in cloud migration and helps companies protect their cloud solutions against cyber attacks, project delays and budget overruns.
The core classification of cloud computing has always been the commoditisation of self-provisioned, software-defined compute, storage and networking, known as Infrastructure-as-a-Service or IaaS. Further offerings existed 10 years ago, covering Platform-as-a-Service (PaaS), where you are no longer responsible for managing servers, only code, and Software-as-a-Service (SaaS), where you pay to use entire platforms hosted and maintained by a third party; but people's risk tolerance and readiness to trust third parties with their business-critical systems were a lot lower. As the use of cloud technologies has matured, we have seen much wider adoption of serverless (PaaS) infrastructure, as well as a proliferation of SaaS offerings, not only from the big cloud vendors but also from software vendors offering hosted solutions for their products rather than installing them 'on-premises' on customer-owned and managed infrastructure.
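To make that responsibility split concrete, here is a minimal sketch assuming AWS and the boto3 SDK: with IaaS you provision and thereafter manage a server yourself, while with PaaS/serverless you hand the vendor only your code. The AMI ID, IAM role ARN and zip package are placeholders.

```python
import boto3

# IaaS: you choose, provision and thereafter patch and manage the server.
ec2 = boto3.client("ec2", region_name="eu-west-2")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

# PaaS/serverless: you supply only code; the vendor owns the servers.
lam = boto3.client("lambda", region_name="eu-west-2")
lam.create_function(
    FunctionName="hello-world",
    Runtime="python3.12",
    Role="arn:aws:iam::123456789012:role/lambda-exec",  # placeholder role
    Handler="handler.main",
    Code={"ZipFile": open("handler.zip", "rb").read()},  # placeholder package
)
```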
I have been architecting, deploying and automating infrastructure across multiple cloud vendors for the last decade of my career, utilising both public cloud vendors and community or private clouds hosted by Crown Hosting and private data centres.
Architecting solutions has always been a balance between the resilience and availability of systems and the cost of reducing the chance of an outage, even when running your own data centres or renting rack space in someone else's. With cloud vendors offering IaaS solutions, geographical risk is more easily mitigated: most vendors have many regions around the globe, allowing a global solution to be deployed with ease. However, some enterprises also see reliance on a single supplier as a risk to be mitigated, and that is where the term multi-cloud originated: architecture spread across multiple cloud vendors, further mitigating the risk of one vendor suffering a serious outage and taking them offline. Beyond risk mitigation, having your infrastructure architected to be fluid and dynamically provisioned also means that, as vendor prices vary, you can easily move your infrastructure around to benefit from cost savings with one vendor over another.
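One way to keep infrastructure fluid enough to chase pricing across vendors is to code against a vendor-neutral interface rather than any one vendor's SDK. This is a hypothetical sketch of the idea, not a production abstraction; the class and method names are my own.

```python
from abc import ABC, abstractmethod

class CloudProvisioner(ABC):
    """Vendor-neutral provisioning interface (hypothetical)."""

    @abstractmethod
    def create_vm(self, name: str, size: str) -> str:
        """Provision a VM and return its identifier."""

    @abstractmethod
    def hourly_price(self, size: str) -> float:
        """Current hourly price for the given instance size."""

def cheapest(vendors: list[CloudProvisioner], size: str) -> CloudProvisioner:
    # Workloads written against the interface can follow the lowest price.
    return min(vendors, key=lambda v: v.hourly_price(size))
```

In practice, tools such as Terraform occupy this niche, although per-vendor differences still leak through into the configuration.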
While the concept of cloud computing is consistent and widely understood, the actual implementations by cloud vendors vary, as do their SLAs and support levels. In all types of cloud, whether IaaS, PaaS or SaaS, at some point in the infrastructure stack you become dependent on the vendor's infrastructure and support processes, so the SLOs and SLAs you set can, by implication, be no better than the service offered by the underlying vendors. When something goes wrong, especially if your architecture is deployed across multiple cloud vendors, coordinating problem analysis and remediation gets exponentially harder the more third parties have to be involved in the process. Coordinating low-level network tracing and diagnostics when, by definition, you are only responsible for a partial picture of the overall architecture, and then liaising with the support teams of several other parties to try to obtain the complete picture, can be time-consuming and problematic; not something you relish in the middle of an outage incident.
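The arithmetic behind that constraint is simple: for services you depend on in series, availabilities multiply, so your ceiling sits below the weakest vendor SLA, whereas redundancy across vendors in parallel raises it. The SLA figures below are illustrative only.

```python
# Serial dependencies: availabilities multiply, dragging the ceiling down.
compute, dns = 0.999, 0.9995          # illustrative vendor SLAs
serial = compute * dns
print(f"Serial ceiling:   {serial:.4%}")    # ~99.85%, worse than either SLA

# Parallel redundancy (e.g. active-active across two vendors):
# the system is down only if both vendors are down at once.
vendor_a = vendor_b = 0.999
parallel = 1 - (1 - vendor_a) * (1 - vendor_b)
print(f"Parallel ceiling: {parallel:.4%}")  # ~99.9999%, assuming independent failures
```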
Data can often be considered the lifeblood of a business. In that respect, regardless of whether you use cloud services, data-centre-deployed systems or even desktop computing, the 3-2-1 rule of backups should always apply: 3 copies, across 2 different media, with at least 1 held 'offsite'. With data held in cloud-based systems, the task of backing up your important data is usually taken care of for you: replication between highly available servers, both in-region and across multiple geographically remote regions, is a common feature across cloud vendors. However, coordinating data replication across architecture spread over multiple vendors will usually require a third-party solution or a bespoke integration, since the cloud vendors' services don't typically integrate with each other.
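As a flavour of what such a bespoke integration looks like, here is a minimal sketch copying objects from AWS S3 to Google Cloud Storage using each vendor's own SDK. The bucket names are placeholders, credentials are assumed to be configured in the environment, and a real job would need batching, retries and checksum verification.

```python
import boto3
from google.cloud import storage

s3 = boto3.client("s3")                               # AWS credentials from environment
dest = storage.Client().bucket("example-dr-bucket")   # placeholder GCS bucket

# Walk the source bucket and mirror each object across vendors.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="example-prod-bucket"):  # placeholder source
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket="example-prod-bucket", Key=obj["Key"])["Body"].read()
        dest.blob(obj["Key"]).upload_from_string(body)
```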
If you are using a SaaS offering such as Office 365, Google Workspace, the Atlassian suite of tools or Salesforce, you need to ask yourself 'what will happen to my business if I cannot access the data in these systems?', and you should probably have procedures in place to take regular extracts of data from the SaaS environment to some other archival storage.
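What such a procedure looks like varies per product, since each SaaS vendor exposes its own export mechanism. The endpoint and token below are purely hypothetical placeholders for the general pattern: pull an extract on a schedule and archive it outside the vendor's control.

```python
import datetime
import pathlib
import requests

EXPORT_URL = "https://api.example-saas.com/v1/export"  # hypothetical endpoint
TOKEN = "replace-with-a-secret-from-your-vault"        # never hard-code for real

# Pull today's extract and archive it outside the SaaS vendor's control.
resp = requests.get(EXPORT_URL, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=300)
resp.raise_for_status()

archive = pathlib.Path(f"saas-export-{datetime.date.today().isoformat()}.json")
archive.write_bytes(resp.content)
```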
How easy this is depends on the data and the volumes involved. Most cloud vendors make it very easy for you to bring your data to their environment, offering various transfer services to upload it. These often involve using your existing internet connectivity to transfer files using secure protocols; however, for really large volumes that may take a long time to transfer over limited-bandwidth connections, there are often physical transport options, ranging from a single hard drive shipped from your premises to an articulated lorry parked next to your data centre, carrying storage capable of transporting petabytes of data. The real question to ask, though, is: 'How can I get my data out of this cloud environment when I want to leave?'
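A back-of-the-envelope calculation shows why physical transport remains attractive at scale; the 70% link-efficiency figure is an assumption, and real transfers also contend with protocol overheads and shared bandwidth.

```python
def transfer_days(terabytes: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Rough number of days needed to push a dataset over a network link."""
    bits = terabytes * 1e12 * 8                      # decimal TB -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)  # usable throughput
    return seconds / 86400

print(f"{transfer_days(500, 1):.0f} days")   # 500 TB over 1 Gbps: ~66 days
print(f"{transfer_days(500, 10):.1f} days")  # same data over 10 Gbps: ~6.6 days
```

At petabyte volumes even a 10 Gbps link is measured in months, which is when the lorry wins.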
WeShape are ISO 27001 certified, which helps ensure customers' data is kept secure during cloud migrations. Our consultant network is drawn from the top 5% of associates, helping to avoid project delays and skills or knowledge gaps in cloud projects. Our associates have a wide breadth of experience across cloud vendors, and a range of advisory services they can offer when architecting cloud solutions for customers.