There are many ways to implement a high-quality service that supports high-load websites, each with its own advantages and disadvantages. In building out our own HeartyHosting platform, we explored many of these solutions. Below, I have tried to combine the best of them in terms of advantages, simplicity, and affordability. You will notice that in all cases we found Amazon Web Services (AWS) to be a superior platform, which is why it is so central to HeartyHosting!
So, we want to support a high-load Web application with minimal human intervention. First, we need the correct application structure: the application should be scalable, easy to deploy, fail-over ready, and protected against denial-of-service (DoS) attacks. Let's examine each of these items.
By “scalable” we mean that the application must maintain fast response times even when load ramps up exponentially. When the load becomes heavy, we expand our back-end farm; when it lightens, we shrink the farm again.
Shared file systems and caching servers are important elements to consider when discussing scalability. You can use GlusterFS, NFS, and/or Amazon's ElastiCache (Memcached/Redis); choose whichever fits your needs.
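As a minimal sketch of the caching piece, here is how a small shared cache cluster might be created with the AWS CLI. The cluster name and node type are placeholders, not values from our setup:

```shell
# Hypothetical sketch: a small Memcached cluster shared by all back ends.
# The cluster ID and node type below are placeholders.
aws elasticache create-cache-cluster \
    --cache-cluster-id app-cache \
    --engine memcached \
    --cache-node-type cache.t2.micro \
    --num-cache-nodes 2
```

The same command accepts `--engine redis` if you prefer Redis semantics over Memcached.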
AWS has a very good solution for scaling, which it calls Auto Scaling. Auto Scaling lets end users access your application with optimal speed and efficiency regardless of the load.
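To make this concrete, the following is a hedged sketch of setting up Auto Scaling with the AWS CLI. All names, the AMI ID, the subnets, and the thresholds are placeholder assumptions for illustration only:

```shell
# Hypothetical sketch: an Auto Scaling group of back-end instances.
# AMI ID, names, and subnet IDs are placeholders.

# Define how new back-end instances are launched
aws autoscaling create-launch-configuration \
    --launch-configuration-name app-backend-lc \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro

# Keep between 2 and 10 instances, starting with 2
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name app-backend-asg \
    --launch-configuration-name app-backend-lc \
    --min-size 2 --max-size 10 --desired-capacity 2 \
    --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"

# Add or remove instances to hold average CPU near 70%
aws autoscaling put-scaling-policy \
    --auto-scaling-group-name app-backend-asg \
    --policy-name cpu-target-tracking \
    --policy-type TargetTrackingScaling \
    --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":70.0}'
```

With a target-tracking policy like this, the expand-and-shrink behavior described above happens automatically: AWS launches instances when CPU rises and terminates them as load falls off.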
Easy to Deploy:
To keep your deployments running smoothly and efficiently, there are many continuous integration tools available: Jenkins, TeamCity, etc. At HeartyHosting we like Jenkins: it is simple to understand, has lots of useful plugins, and has minimal system requirements. You can even run it on an AWS t2.micro instance!
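A rough sketch of getting Jenkins onto such an instance (assuming Amazon Linux 2; the repository and key URLs below come from the Jenkins project and may change over time, so check the official Jenkins documentation):

```shell
# Hypothetical sketch: installing Jenkins on Amazon Linux 2.
# Repo/key URLs may change; verify against the Jenkins docs.
sudo wget -O /etc/yum.repos.d/jenkins.repo \
    https://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
sudo yum install -y java-11-openjdk jenkins
sudo systemctl enable --now jenkins   # UI becomes available on port 8080
```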
I recommend having a minimum of two back-end instances to ensure fail-over readiness. This applies to EC2 instances, RDS instances, and ElastiCache nodes alike: you will sleep better with at least two EC2 back ends, RDS databases in Multi-AZ mode, and a cluster of two ElastiCache servers.
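For the database piece, Multi-AZ can be switched on for an existing RDS instance with a single CLI call. The instance identifier here is a placeholder:

```shell
# Hypothetical sketch: enable Multi-AZ so RDS keeps a synchronous standby
# in another Availability Zone and fails over to it automatically.
# "app-db" is a placeholder identifier.
aws rds modify-db-instance \
    --db-instance-identifier app-db \
    --multi-az \
    --apply-immediately
```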
Denial-of-service attacks are a perpetual issue for all Web-based applications. But there are some tricks to learn that will increase confidence in your application’s ability to withstand such an attack and keep performing optimally.
One of these “tricks” is Varnish caching. We recommend putting Varnish in front of your applications. Varnish brings two big advantages: it adds a caching level that lets you manage incoming requests more flexibly, and it has minimal system requirements, so you can run it even on t2 instances.
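A minimal way to try this out is to start `varnishd` directly on the command line, pointing it at a back end. The port numbers and cache size below are illustrative assumptions:

```shell
# Hypothetical sketch: Varnish listening on port 80 in front of an
# application server on localhost:8080.
# -a: client listen address; -b: back end; -s: 256 MB in-memory cache,
# modest enough for a t2 instance.
varnishd -a :80 -b localhost:8080 -s malloc,256m
```

In production you would normally replace `-b` with `-f /etc/varnish/default.vcl` and describe caching rules in a VCL file.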
AWS's CloudWatch alarms are another useful weapon for battling DoS attacks. You can configure alarms to notify you of various events as they occur. Alerts arrive via email or SMS, or can be set up as POST requests sent directly to your server, so you will be aware of every event within your network.
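For example, an alarm on an abnormal spike in request volume can give early warning of a DoS attempt. The load balancer name, threshold, and SNS topic ARN below are placeholder assumptions:

```shell
# Hypothetical sketch: alert when the load balancer sees more than
# 10,000 requests in a minute. Names and the SNS ARN are placeholders.
aws cloudwatch put-metric-alarm \
    --alarm-name high-request-count \
    --namespace AWS/ELB \
    --metric-name RequestCount \
    --dimensions Name=LoadBalancerName,Value=app-elb \
    --statistic Sum \
    --period 60 \
    --evaluation-periods 1 \
    --threshold 10000 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts
```

The SNS topic referenced in `--alarm-actions` is where you attach the email, SMS, or HTTP(S) endpoint subscriptions mentioned above.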
We also recommend installing the AWS Console mobile app (Android/iOS) and a mobile SSH client so you can make fixes and modifications to your network from anywhere.
One important thing the AWS team has not yet integrated is database auto scaling. For HeartyHosting we set this up very quickly on our own. We strongly recommend reading the Amazon RDS API documentation, combining it with CloudWatch alarms, and, using your preferred scripting language, dynamically creating and removing read replicas for your application.
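The scale-out half of such a script can be sketched as follows. This is not our actual HeartyHosting implementation, just an illustration; the instance identifiers are placeholders, and the scale-in path would symmetrically call `aws rds delete-db-instance` on the newest replica:

```shell
# Hypothetical sketch: triggered by a CloudWatch alarm on database CPU,
# add one read replica. "app-db" is a placeholder source instance.
REPLICA_ID="app-db-replica-$(date +%s)"

aws rds create-db-instance-read-replica \
    --db-instance-identifier "$REPLICA_ID" \
    --source-db-instance-identifier app-db

# Block until the replica is ready before adding it to the read pool
aws rds wait db-instance-available \
    --db-instance-identifier "$REPLICA_ID"
```

Your application (or its connection pooler) then needs to discover the new replica endpoint and start routing read queries to it.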
After many years of using cloud services, I can conclude that there is nothing prohibitively complex in supporting high-load Web applications – and AWS makes it easier than ever!