Scale or fail: How to build disaster-proof websites

From pandemics to PR disasters, there are many instances when a business's traffic might suddenly spike. Here are six techniques we use to create sites and apps that can handle whatever's thrown at them.
Years ago, Google introduced the idea of ‘micro moments’: tiny windows of opportunity where people are ready and willing to buy. So, what happens if someone visits your website and finds it’s unavailable? They’re frustrated. They leave. You’ve lost a lead and possibly gained an angry customer.
This is why we work hard to keep our clients’ sites from crashing.
While not every website has high volumes of traffic, nobody has a crystal ball. As we now know all too well, the unexpected can and does happen.
It could be a massive news story about an organisation, or the industry they work in. It could be some sort of big event or a seemingly innocent social media post that ends up going viral.
If you don’t have the right application, infrastructure and architecture in place, your site could go down and your business reputation will suffer along with it.
So how can you prepare for the worst? This article goes through the six basic ways we build sites that scale.

1. Cloud hosting

Although we do have our own managed data centre for some existing clients, here at Tangent we are a cloud-first organisation.
For all new systems, we take a cloud based, managed service approach.
The advantage of PaaS (Platform as a Service) infrastructure is that the cloud service provider manages the tin and plastic (physical servers) and the operating systems installed on them.
This means they are always kept up to date, secure and fit for purpose. It removes the headaches and support overheads of the IaaS (Infrastructure as a Service) route, so we can get on with developing applications for our clients.

2. Auto-scaling

One significant benefit of hosting applications in PaaS services is being able to scale them depending on the real-time requirements for the infrastructure.
You can configure a baseline number of virtual machines that are always running and then distribute the load between them.
In addition, we can configure thresholds for CPU and memory. When a threshold is reached, the platform automatically adds an instance to help manage the load.
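To make this concrete, here's a toy sketch of the decision an autoscaler makes. It isn't any particular cloud provider's API – the thresholds, baseline and limits are made-up illustrative numbers – but it shows the shape of threshold-based scaling described above.

```python
# Illustrative sketch only: a toy auto-scaling decision, not a real
# cloud provider's API. All thresholds and counts are hypothetical.

def desired_instances(current: int, cpu_percent: float,
                      threshold: float = 70.0,
                      baseline: int = 2, max_instances: int = 10) -> int:
    """Return how many instances should be running.

    - Never drop below the always-on baseline.
    - Add an instance when CPU crosses the threshold.
    - Scale back in when load falls well below the threshold.
    """
    if cpu_percent >= threshold:
        return min(current + 1, max_instances)
    if cpu_percent < threshold * 0.5:
        return max(current - 1, baseline)
    return current

print(desired_instances(2, 85.0))  # traffic spike: scale out to 3
print(desired_instances(3, 20.0))  # quiet period: scale back in to 2
```

Real platforms add cooldown periods and smoothing on top of this, so one noisy CPU sample doesn't cause instances to flap up and down.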

3. Globally distributed sites

First things first, what is a globally distributed system? Let’s start with a distributed system – this means a system running multiple instances of itself in multiple locations/regions.
Take this one step further and distribute these locations around the world and, bingo – you have a globally distributed system.
There are two major advantages to implementing a distributed system:
  1. Better performance – as the load is distributed between multiple systems
  2. Redundancy – if one system fails, your customer load is handled by one or more of the remaining systems.
With a globally distributed system, you also have the advantage of reducing network latency between customer and system. This is due to these distributed locations being ultimately closer to the end customer.

4. Content delivery networks - CDN

A content delivery network, or CDN, can massively reduce the load on your backend systems should your visitor numbers blow up.
This builds on the advantages of a globally distributed system, as a CDN distributes assets such as images, mark-up, JavaScript and files such as PDFs.
It means that these assets will load super quickly for anyone around the globe, as they are globally distributed.
This also means that requests for these assets no longer reach the backend systems at all – they are handled solely by the CDN, reducing load and request volume on your backend.
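A minimal sketch of the caching behaviour a CDN edge gives you: the first request for an asset goes to the origin (your backend); every repeat is served from the edge cache, so origin load stays flat however hard traffic spikes. The class and names here are purely illustrative, not a real CDN API.

```python
# Hypothetical sketch of CDN edge caching, not a real CDN's interface.

class EdgeCache:
    def __init__(self, fetch_from_origin):
        self.fetch_from_origin = fetch_from_origin
        self.cache = {}
        self.origin_hits = 0   # how many requests actually reach the backend

    def get(self, path: str) -> bytes:
        if path not in self.cache:            # cache miss: one origin request
            self.origin_hits += 1
            self.cache[path] = self.fetch_from_origin(path)
        return self.cache[path]               # cache hit: origin untouched

edge = EdgeCache(lambda path: f"contents of {path}".encode())

for _ in range(1000):                         # simulated traffic spike
    edge.get("/assets/logo.png")

print(edge.origin_hits)  # 1 – the backend served the asset only once
```

In practice, cached assets also expire and get revalidated (driven by Cache-Control headers), but the load-shedding principle is the same.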

5. Hot, warm and cold failover

We touched on how we implement failover and redundancy to keep sites from crashing in a previous article. To recap, redundancy means you have more than one instance of a system running (yep, you got it – a distributed system), so if one runs into problems you can fail over to another.
Cold failover normally means the standby system does not exist until it is spun up in the event of a disaster (hence DR, or disaster recovery).
Warm failover means you have another system that is good to go, i.e. up and running. However, this system would only take load if you failed over to it from the primary.
Lastly, hot failover (the Rolls-Royce of failover) means two or more systems run in parallel, distributing the load between them, with data replicated across them. If one were to fail, the others would take the load.
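The hot-failover idea can be sketched in a few lines: requests are spread across parallel systems, and when one goes down the survivors simply absorb its share. This is a purely illustrative round-robin, not a real load balancer.

```python
# Illustrative only: round-robin load distribution with hot failover.
from itertools import cycle

def serve(requests: int, instances: list[str], down: set[str]) -> dict[str, int]:
    """Spread requests evenly across the healthy instances only."""
    healthy = [name for name in instances if name not in down]
    counts = {name: 0 for name in healthy}
    round_robin = cycle(healthy)
    for _ in range(requests):
        counts[next(round_robin)] += 1
    return counts

# All three systems healthy: each takes a third of the load.
print(serve(90, ["a", "b", "c"], down=set()))   # {'a': 30, 'b': 30, 'c': 30}
# System "b" fails: the remaining two take its share automatically.
print(serve(90, ["a", "b", "c"], down={"b"}))   # {'a': 45, 'c': 45}
```

The same sketch explains why hot failover costs more: every system must be sized so the survivors can carry the full load after a failure.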

6. Ongoing monitoring

While we never know what lies in store, we can take some pretty decent guesses based on changes in visitor behaviours. Ongoing website monitoring can be used to pick up on potential spikes in traffic before they happen, giving businesses a fighting chance of adapting as needed. Our expert Technical Lead, Chris King, has covered this extensively in his post on website monitoring if you want to learn more.

Ready to scale?

Businesses must be able to scale to need, whatever that need may be, and whenever that need may appear.
Scaling technologies have been around for years. They’re tried and tested. They’re highly effective. But they’re not one-button solutions. They do need a bit of setting up to make sure they’re fit for purpose. Which is where our expertise comes in.
With the right support, you can be confident in building a website that grows at the same rate you do.
Have a look at what we do to ensure our clients' websites are always ready for anything, and get in touch to chat about how we can help you with yours.