Optimising Oracle DB with VMware’s vFlash Read Cache feature

This post is a slightly different one to those I usually write, being more notes-based than editorial, but I hope the simple steps and data captured here will prove useful. It has taken me a while to get this data out, and although it is about a year old now the performance improvement should be even better with ESXi 6.x. The aim of the test was to evaluate whether VMware's Flash Read Cache (vFRC) feature, released in ESXi 5.5, would benefit read-heavy virtual workloads such as Oracle DB.

Test scenario:

Oracle 11g (11.2.0.1) DB with 4 vCPUs, 8,192MB RAM and a 200GB Oracle ASM disk for the database
HP DL380 G7 with 2 x Intel Xeon X5650 6-core 2.67GHz CPUs and 128GB RAM, plus a locally attached 4 x 7.2K SAS RAID array
VMware ESXi 5.5 with Enterprise Plus licensing (required for the vFlash Read Cache capability)

Creating a baseline (before applying vFRC)

Using esxtop to establish typical baseline values, disk latency across the measured virtual machines was typically 11.97ms. The baseline latency and commands-per-second values were then correlated with vCenter Operations Manager:
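The original screenshots aren't reproduced here, but for anyone repeating the capture, something along these lines records the same counters from the ESXi shell (the output path is arbitrary):

    # Record six 10-second esxtop samples in batch mode for offline analysis
    esxtop -b -d 10 -n 6 > /tmp/baseline-esxtop.csv

    # Interactively, pressing 'v' in esxtop switches to the per-VM disk view,
    # which shows the latency figures quoted above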

High and low water marks for disk latency sat between 4 and 16ms (using 7.2K RPM drives in a 4-disk RAID 5 array).

Disk usage was negligible following VM boot and Oracle DB startup:

In order to set the vFlash Read Cache block size correctly we need to find the typical write block size, so that small writes do not consume an oversized cache block if the cache block size is set higher than the mean. vscsiStats was used to measure the frequency of different-sized I/O commands:
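The exact invocations aren't visible in the screenshots, but a typical vscsiStats sequence looks like the following; the world group ID 12345 is a placeholder taken from the -l listing:

    # List running VMs and their world group IDs
    vscsiStats -l

    # Start collecting statistics for the Oracle VM
    vscsiStats -s -w 12345

    # After generating some workload, print the I/O length histogram, which
    # buckets requests by size (the 4,096-byte bucket peaked in this test)
    vscsiStats -p ioLength -w 12345

    # Stop collection when finished
    vscsiStats -x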


The highlighted frequency values in the vscsiStats histogram (above) show that 4,096-byte I/Os were the most common across both the write and read buckets, and the overall number of operations therefore peaked in the same window. To establish baseline Oracle performance, an I/O calibration script was run several times. Oracle DB I/O metrics calculation:
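The calibration script itself isn't reproduced in the post; as a representative reconstruction, Oracle's built-in calibration routine (DBMS_RESOURCE_MANAGER.CALIBRATE_IO, available from 11g when asynchronous I/O is enabled, which ASM provides) can be driven from the shell like this:

    sqlplus / as sysdba <<'EOF'
    SET SERVEROUTPUT ON
    DECLARE
      l_max_iops   PLS_INTEGER;
      l_max_mbps   PLS_INTEGER;
      l_actual_lat PLS_INTEGER;
    BEGIN
      -- 4 physical disks in the RAID array, 20ms target latency
      DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
        num_physical_disks => 4,
        max_latency        => 20,
        max_iops           => l_max_iops,
        max_mbps           => l_max_mbps,
        actual_latency     => l_actual_lat);
      DBMS_OUTPUT.PUT_LINE('max_iops = ' || l_max_iops);
      DBMS_OUTPUT.PUT_LINE('max_mbps = ' || l_max_mbps);
      DBMS_OUTPUT.PUT_LINE('latency  = ' || l_actual_lat);
    END;
    /
    EOF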

Maximum IOPS were found to lie between 576 and 608 per second using a 200GB VMDK located on the 4-disk RAID array. The high water mark for disk latency rose to 28ms during the test, versus 12ms when the instance was idle, indicating contention on the spindles during read/write activity.

During the I/O calibration test the high water mark for disk throughput rose to 76,000 KBps, versus 3,450 KBps when the instance was idle, showing that the array's maximum throughput is around 74MB/s (76,000 / 1,024).

Having established that the majority of writes during the test above actually used an 8KB block size (the screenshot shown earlier was taken from a different, 4KB, test), vFRC was enabled only on the 200GB ASM disk with an arbitrary 50GB reservation (25% of the total disk size). No reboot was required; VMware inserts the cache in front of the disk storage transparently to the VM.

With Flash Read Cache enabled on 200GB ASM disk

After adding a locally attached 200GB SATA SSD to the ESXi server and claiming the storage for Flash Read Cache, a 50GB vFRC cache was enabled on the Oracle ASM data disk in the virtual machine's disk configuration:
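The claiming and per-disk enablement were done through the vSphere Web Client, but the 5.5 esxcli vflash namespace is a useful sanity check from the ESXi shell (verify the sub-commands against your build):

    # Confirm the SSD has been claimed as a virtual flash resource
    esxcli storage vflash device list

    # List the per-VMDK cache files created on the VFFS volume once
    # vFRC has been enabled for a disk
    esxcli storage vflash cache list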

Once the vFRC function was enabled the Oracle I/O calibration script was run again, and surprisingly the first pass was considerably slower than previous runs (max IOPS 268). This is because initial reads miss the SSD cache, which has not yet been primed. Because vFRC is a write-through cache, each write is acknowledged only once committed to the backing disk, while the data is also copied into the SSD cache, so the cache is continually populated and performance should improve over time:

esxcli was used to view the resulting cache efficiency after running I/O calibration, showing a 29% read hit rate from the SSD cache versus reads served from the SAS disks:
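From memory the statistics were pulled with something like the command below; the cache name is a placeholder taken from the cache list output, and the exact flags are worth checking against your ESXi build:

    # Query per-cache statistics, including read hit rate, IO counts,
    # mean latency and the number of evicted blocks
    esxcli storage vflash cache stats get -m vfc -c vfc-1234567890-OracleVM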

In the example above, no blocks have been evicted from the cache yet, meaning that the 50GB cache assigned to this VMDK still offers room for growth. When all of the cache blocks are exhausted, the ESXi storage stack will begin to evict older blocks in favour of storing more relevant, up-to-date data. The resulting I/O calibration performance is shown below, both before and after enabling the vFRC feature.

In brief conclusion, the vFlash Read Cache feature is an excellent way to add inline SSD-based read caching for specific virtual machines and volumes. You must enable the option on specific VMs, then track their usage and cache effectiveness over time to make sure that you have allocated neither too much nor too little cache. Once the cache is primed with data, however, there is a marked improvement in read throughput, and a much reduced number of IOPS has to be handled by the physical storage array. For Oracle servers that are read-biased, this should significantly improve performance where non-SSD storage arrays are in use.
