Using RAM Cache to Speed Up Linux Disk Performance

Problems with cloud IOPS? Or are your hard disks simply not fast enough for your needs? In this article I will show you how to create a Linux tuned profile specially prepared to increase disk access performance by using RAM as a cache.

Introduction

We know that the fastest disks today are SSDs, available on traditional SAS/SATA interfaces as well as on the newer NVMe models. Thanks to the PCIe bus, NVMe drives have a direct path to the data, making them much faster than traditional SSDs.

The problem is that more performance means a more expensive product: the cost per megabyte of an NVMe drive can be up to 5 times higher than that of a traditional SSD.

A Middle Ground

What if we could somehow increase the performance of a traditional SAS/SATA disk without having to shell out a huge amount of money?

Cache for Performance

The idea is to let the Linux page cache do the heavy lifting: by raising the kernel's writeback thresholds, file reads and writes are served from RAM and only flushed to disk much later.

(Benchmark screenshot: "Diskcache Tuned Profile", captured on a Xeon 2620v3 with 32GB DDR4 and SATA disks.)

This high performance is achieved as long as there is RAM available to hold entire files in cache. Once there is no more cache memory available, performance drops to around 200-300 Mbps, which is still not bad.
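
If you want a rough feel for the difference on your own hardware, compare a buffered write with one that bypasses the page cache (a minimal sketch; the target path and sizes are just examples):

# Buffered write: with the profile active, the data lands in the page cache first
$ dd if=/dev/zero of=/data/testfile bs=1M count=4096

# Direct write: oflag=direct bypasses the page cache and shows the raw disk speed
$ dd if=/dev/zero of=/data/testfile bs=1M count=4096 oflag=direct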

Caveats

Since dirty data lives only in RAM until it is flushed, a power failure means losing everything that has not yet reached the disk. That said, my recommendation is to use this technique only in controlled environments backed by a good quality UPS. Techniques such as anti-affinity policies (in case you are working with virtual machines) are also very welcome.

Where to Use

  • File Servers
  • Database Clusters
  • Elasticsearch Clusters
  • Log Storage Clusters
  • Anything that reads files intensively

NOTE: In the case of databases, make sure the environment follows good power and anti-affinity standards, as well as a correct strategy to avoid data loss.

Example of a fault-resilient DB architecture

A commonly used technique is to set up an active-active cluster, but treat it as if it were active-passive:

  • Use a single node for INSERTS and UPDATES, and the other nodes for queries.

In this way, a failed node will automatically have its content recovered, without human intervention.
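
As an illustration only, here is a minimal HAProxy sketch of that read/write split; the section names, node addresses and ports are hypothetical:

$ cat /etc/haproxy/haproxy.cfg
# Write traffic: a single primary node receives all INSERTs and UPDATEs
listen db-write
    bind *:3306
    mode tcp
    server db1 10.0.0.11:3306 check

# Read traffic: queries are balanced across every node of the active-active cluster
listen db-read
    bind *:3307
    mode tcp
    balance roundrobin
    server db1 10.0.0.11:3306 check
    server db2 10.0.0.12:3306 check
    server db3 10.0.0.13:3306 check

If the write node fails, db-write is simply repointed at a surviving node, either by the operator or by the cluster manager.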

Tuned Profile

  • Explanations are in the comments of the file itself.
$ cat /usr/lib/tuned/disk-cache/tuned.conf
#
# tuned configuration
#

[main]
summary=Maximize RAM usage for disk operations
include=throughput-performance

[sysctl]
# Allow processes to fill up to 99% of RAM with dirty (not yet written) pages
# before they are throttled and forced to write to the disk themselves.
vm.dirty_ratio = 99

# Start background writeback (via the writeback threads) once dirty pages
# reach this percentage of RAM (the kernel default is 10).
vm.dirty_background_ratio = 50

# Tell the kernel that dirty data may stay in memory for up to 1 hour
# (360000 centiseconds) before it must be written out, and wake the writeback
# threads only once an hour (again, I recommend caution):
vm.dirty_expire_centisecs = 360000
vm.dirty_writeback_centisecs = 360000

# A vfs_cache_pressure above 100 makes the kernel reclaim dentry and inode
# caches more aggressively, leaving more RAM free for file data in the page cache.
vm.vfs_cache_pressure = 2000
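
If the profile does not exist yet, it only needs a directory containing that tuned.conf (a minimal sketch; on most distributions custom profiles can also live under /etc/tuned):

$ sudo mkdir -p /usr/lib/tuned/disk-cache
$ sudo vi /usr/lib/tuned/disk-cache/tuned.conf   # paste the configuration above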

Check if you can list your new tuned profile, and if so, apply it:

$ tuned-adm list | grep cache
- disk-cache - Maximize RAM usage for disk operations
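
To actually apply it, use the standard tuned-adm commands (the second one just confirms which profile is active):

$ tuned-adm profile disk-cache
$ tuned-adm active
Current active profile: disk-cache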

Use With Moderation

Think about it this way: does the risk outweigh the benefit? If it does, this profile is probably not for you.

TIP: You can use the Linux "sync" command to commit cached files to disk. A full cache commit can take around 2 to 3 minutes.

(Screenshot: full cache disk sync.)
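
To watch how much data is still waiting to be flushed while sync runs, the standard Dirty and Writeback counters in /proc/meminfo are enough (the exact numbers will of course vary):

# Dirty and Writeback show how much data has not yet reached the disk
$ grep -e Dirty -e Writeback /proc/meminfo

# Force the flush; the command only returns once the data is on disk
$ sync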

Finishing

Sonic :-)

Andre Rocha

I'm just a SysAdmin with some experience in OpenSource, DevOps and Datacenter Services, who likes to share knowledge.