[Figure: Three-tier cloud architecture]

Highload Solutions

Some of the mechanisms that can be used to scale the front end are

  • load balancers,
  • the application server array, and
  • the caching tier (a brief read-through cache sketch follows this list).
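
Of these mechanisms, the caching tier is the simplest to illustrate in isolation. Below is a minimal in-process, read-through cache sketch in Python; a production caching tier would be a shared service such as memcached or Redis, and the fetch callback, key name, and TTL here are illustrative assumptions rather than anything prescribed by this article.

```python
import time

# Minimal read-through cache sketch (illustrative only). A real caching tier
# would be a shared service (e.g. memcached or Redis); the fetch callback and
# TTL below are assumptions made for the example.
class ReadThroughCache:
    def __init__(self, fetch, ttl_seconds=60):
        self._fetch = fetch       # called on a cache miss to load from the backend
        self._ttl = ttl_seconds
        self._store = {}          # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]       # fresh hit: no load placed on the backend tier
        value = self._fetch(key)  # miss or expired: fall through to the backend
        self._store[key] = (value, time.monotonic() + self._ttl)
        return value

# Example usage with a stand-in backend call:
cache = ReadThroughCache(fetch=lambda key: f"value-for-{key}", ttl_seconds=30)
print(cache.get("user:42"))       # miss: hits the backend
print(cache.get("user:42"))       # hit: served from memory
```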

However, the success or failure of many applications depends on a well-conceived, well-architected, and well-implemented database system.
The standard, tried-and-true method for architecting a highly available database tier in the cloud is to have a single master and one or more slaves replicating from it, with each of these servers in segregated zones so that they run on separate power, cooling, and network infrastructures. In an ideal world, the capabilities of that master database would suffice for the application’s entire lifecycle, from its infancy, through the growth and maturity/maintenance phases, and on to end of life. Of course, this is never the case: the demands on the database tier fluctuate greatly over time, so a “one size fits all” approach is not really feasible.
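
As a concrete, purely illustrative sketch, the resulting topology can be captured in a small Python structure; the host and zone names below are placeholders, not the naming of any particular cloud provider.

```python
# Hypothetical master/slave topology spread across segregated zones.
# Host and zone names are placeholders for illustration only.
DB_TOPOLOGY = {
    "master": {"host": "db-master.internal", "zone": "zone-a"},
    "slaves": [
        {"host": "db-slave-1.internal", "zone": "zone-b"},
        {"host": "db-slave-2.internal", "zone": "zone-c"},
    ],
}

# Sanity check that no slave shares a zone with the master, i.e. the servers
# really do sit on separate power, cooling, and network infrastructures.
master_zone = DB_TOPOLOGY["master"]["zone"]
assert all(slave["zone"] != master_zone for slave in DB_TOPOLOGY["slaves"])
```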

Database Scaling

For an application to continue to be successful as its lifecycle progresses, it has to be scalable at all levels of the architecture. As more and more users interact with the application, the resource demands of each tier will continue to increase.
While the ultimate goal of database design is automated horizontal scaling of the database tier, a practical implementation of such a solution remains elusive. However, there are design concepts you can follow to scale the database tier to varying degrees, both vertically and horizontally.

Vertical Scaling

In the early stages of an application, when database load is light, a small instance size can often be effectively used for both the master and slave databases. As load increases, the master database can be migrated to a larger instance size, allowing it to take advantage of additional processing power, I/O throughput, and available memory.
For database requests that involve complex queries or joins across multiple tables, the additional memory provided by the larger instance types can greatly accelerate query response. When possible, the working set of the database should be kept entirely in memory, as this sharply reduces the application’s disk I/O requirements and can significantly improve overall performance. Situations may arise in which the CPUs of an instance are largely idle while the majority of its memory is in use. Although this may appear to be a poor use of a powerful (and costly) resource, the performance gains realized by keeping the working set in memory can far outweigh the cost of these more expensive instance sizes.
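
To make that trade-off concrete, here is a back-of-the-envelope check in Python. The numbers are invented for illustration, and the 75% cache fraction is an assumption rather than a rule.

```python
# Back-of-the-envelope sizing check with invented numbers (illustrative only).
working_set_gb  = 48    # hot rows and indexes the application actually touches
instance_ram_gb = 64    # candidate (larger) instance size
cache_fraction  = 0.75  # assumed share of RAM dedicated to the database cache

cache_gb = instance_ram_gb * cache_fraction
if working_set_gb <= cache_gb:
    print(f"Working set ({working_set_gb} GB) fits in the {cache_gb:.0f} GB cache; "
          "most reads are served from memory rather than disk.")
else:
    shortfall = working_set_gb - cache_gb
    print(f"Working set exceeds the cache by {shortfall:.0f} GB; "
          "expect disk I/O, or consider a larger instance size.")
```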

Horizontal Scaling

It is highly recommended to implement one or more slave databases in addition to the master database, regardless of the phase of an application’s lifecycle. The presence of multiple slave databases increases the overall reliability and availability of the application, and it also enables horizontal scaling of the database by using a proxy mechanism for database reads.

In a proxy configuration, the application servers send their database write requests to the master database, while the read requests are directed to a load balancer (or preferably, a pair of load balancers), which distributes those read requests to a pool of slave databases.
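
A minimal sketch of this read/write split is shown below, assuming a DB-API-style driver whose connect() function is supplied by the caller; the host names are placeholders. Note that in the configuration described above the distribution of reads would be performed by the load balancer in front of the slave pool; here the round-robin step is done in application code purely to keep the sketch self-contained.

```python
import itertools

# Read/write-splitting sketch (illustrative). `connect` is any DB-API-style
# connection factory supplied by the caller; host names are placeholders.
class ReadWriteRouter:
    def __init__(self, connect, master_host, slave_hosts):
        self._master = connect(master_host)
        self._slaves = itertools.cycle([connect(h) for h in slave_hosts])

    def execute_write(self, sql, params=()):
        # All writes go to the single master.
        cur = self._master.cursor()
        cur.execute(sql, params)
        self._master.commit()
        return cur

    def execute_read(self, sql, params=()):
        # Reads are spread round-robin across the slave pool.
        conn = next(self._slaves)
        cur = conn.cursor()
        cur.execute(sql, params)
        return cur.fetchall()
```

The application then calls execute_write() for INSERT/UPDATE/DELETE statements and execute_read() for SELECT statements, keeping the routing decision in one place.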

It is important to note that replication lag to the slave databases may result in stale data being returned if a read is made shortly after the data is written to the master database. For applications that rapidly write and then read the same data object, a proxy solution may not be the most effective method of database scaling. With a read-proxy implementation, database write performance is unaffected, while read performance is enhanced since read requests are distributed among all available slaves.
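
One common mitigation, offered here only as a frequently used pattern and not as part of the configuration described above, is to pin a session’s reads to the master for a short window after that session writes. The sketch below extends the hypothetical ReadWriteRouter from the previous example; the two-second window is an assumed upper bound on replication lag and would need to be tuned for a real deployment.

```python
import time

# Extends the ReadWriteRouter sketch above. PIN_SECONDS is an assumed upper
# bound on replication lag, not a measured value.
PIN_SECONDS = 2.0

class LagAwareRouter(ReadWriteRouter):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._last_write = float("-inf")

    def execute_write(self, sql, params=()):
        self._last_write = time.monotonic()
        return super().execute_write(sql, params)

    def execute_read(self, sql, params=()):
        # A recent write in this session: read from the master so replication
        # lag cannot return stale data to the writer.
        if time.monotonic() - self._last_write < PIN_SECONDS:
            cur = self._master.cursor()
            cur.execute(sql, params)
            return cur.fetchall()
        return super().execute_read(sql, params)
```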

For applications that are read-intensive, a proxy solution such as the one described above can provide a significant decrease in database load, and therefore a significant increase in application performance. Each application is unique, so the mix of read versus write requests should be benchmarked throughout an application’s lifecycle to determine what benefit, if any, would be gained from a database proxy solution.
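
As a starting point for that benchmarking, a rough read/write ratio can be extracted from whatever query logging is already in place. The sketch below assumes a plain-text log with one statement per line; the file name and regular expressions are assumptions to be adapted to the actual log format.

```python
import re

# Rough read/write tally from a query log (file name and format are assumed).
READ_RE  = re.compile(r"\b(SELECT|SHOW)\b", re.IGNORECASE)
WRITE_RE = re.compile(r"\b(INSERT|UPDATE|DELETE|REPLACE)\b", re.IGNORECASE)

def read_write_counts(lines):
    reads = writes = 0
    for line in lines:
        # Check writes first so statements like INSERT ... SELECT count as writes.
        if WRITE_RE.search(line):
            writes += 1
        elif READ_RE.search(line):
            reads += 1
    return reads, writes

if __name__ == "__main__":
    with open("queries.log") as log:   # hypothetical log file
        reads, writes = read_write_counts(log)
    total = reads + writes or 1
    print(f"reads: {reads} ({100 * reads / total:.1f}%), writes: {writes}")
```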

Author: Brian Adler