Among many other responsibilities, a relational database system must make efficient
use of main memory (RAM) for buffering and caching purposes. RAM is far faster and
easier to access than SSD or magnetic storage; a properly sized and tuned cache
or buffer pool can do wonders for database performance.
Today we are improving
Amazon RDS for MySQL with support
for InnoDB cache warming. When an Amazon RDS DB instance that is running MySQL is
shut down, it can now be configured to save the state of its buffer pool, for
later reloading when the instance starts up. The instance will be ready to
handle common queries in an efficient fashion, without the need for a
lengthy warm-up period.
This feature is supported for RDS DB instances that are running version
5.6 (or later) of MySQL. To enable it, simply set the
innodb_buffer_pool_load_at_startup parameter to 1 in the
parameter group for your DB instance.
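As a sketch, the parameter can be set with the AWS CLI; the parameter group name below is a placeholder, and because this is a static parameter the change takes effect at the next reboot:

```shell
# Enable buffer pool loading at startup in a custom DB parameter group.
# "mydb-params" is a placeholder; substitute your own parameter group name.
aws rds modify-db-parameter-group \
    --db-parameter-group-name mydb-params \
    --parameters "ParameterName=innodb_buffer_pool_load_at_startup,ParameterValue=1,ApplyMethod=pending-reboot"
```

Remember that the parameter group must be attached to your DB instance for the setting to apply.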
Users of MySQL version 5.6.19 and later can manage the buffer pool using
stored procedures such as mysql.rds_innodb_buffer_pool_load_abort.
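For illustration, these procedures can be invoked from any MySQL client. A minimal sketch follows; the endpoint and user are placeholders, and mysql.rds_innodb_buffer_pool_dump_now is the companion procedure documented alongside load_abort in the Amazon RDS User Guide:

```shell
# Placeholders: substitute your own RDS endpoint and credentials.
# Save the current buffer pool state on demand:
mysql -h mydb.example.rds.amazonaws.com -u admin -p \
      -e "CALL mysql.rds_innodb_buffer_pool_dump_now();"

# Cancel an in-progress buffer pool load if it is slowing the instance:
mysql -h mydb.example.rds.amazonaws.com -u admin -p \
      -e "CALL mysql.rds_innodb_buffer_pool_load_abort();"
```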
Once enabled, the buffer pool will be saved as part of a normal
(orderly) shutdown of the DB instance. It will not be saved if the
instance does not shut down normally, such as during a failover. In
this case, MySQL will load whatever buffer pool is available when
the instance is restarted. This is harmless, but possibly less
efficient. Applications can call the mysql.rds_innodb_buffer_pool_dump_now stored
procedure on a periodic basis if this potential inefficiency is a
cause for concern.
DB Instances launched or last rebooted before August 14, 2014, will
need to be rebooted to gain access to this new feature. However, no
action is required for DB Instances launched or rebooted on or after
August 14, 2014. To learn more, take a look at
InnoDB Cache Warming in the Amazon RDS User Guide.
For companies running distributed applications at scale, colocation remains an essential piece of a high-performance infrastructure. While traditional colocation is often viewed simply as a physical location with power, cooling, and network connectivity, today’s colocation services offer increased flexibility and control for your environment.
Let’s take a look at some real-world examples of companies that are using colocation as a core element of their infrastructure to run a distributed app at scale.
Outbrain is the leading content discovery platform on the web, helping companies grow their audience and increase reader engagement through an online content recommendations engine. The company’s data centers are designed to be DR-ready and operate in active-active mode, so services remain available at all times.
Outbrain’s continuous deployment process involves pushing around 100 changes per day to their production environment, including code and configuration changes. This agile, controlled process demonstrates how a traditional solution like colocation can be flexible enough to support a truly distributed application at scale.
eXelate is the smart data and technology company that powers smarter digital marketing decisions worldwide. As a real-time data provider, they need to operate as a distributed application to handle large amounts of consumer-generated traffic and transactions on their networks around the world. Their infrastructure has to support dynamic content and data in order to provide meaningful insights for consumers and marketers.
eXelate’s colocation environment includes unique hardware that is outside the realm of normal commodities. The ability to incorporate Fusion-io and data warehousing services like Netezza, as well as make CPU changes and RAM upgrades, helps eXelate support the high number of optimizations required by their application. The company also uses bare-metal cloud to spin up additional instances through the API as needed. This combination of colocation and cloud creates a best-fit infrastructure for eXelate’s data-intensive application.
Whether your organization practices continuous deployment or needs to process data in real time, colocation provides the flexibility to create a best-fit infrastructure. State-of-the-art colocation facilities support a hybrid approach, allowing you to combine colocation and cloud in the manner that best meets the requirements of distributed apps at scale.
Get the white paper: Next-Generation Colocation Drives Operational Efficiencies