I wanted to compare qcow2 versus raw image format performance, so I created two virtual machines on my hypervisor. Below are the results.
Hypervisor: CentOS 6.4 with KVM
Processor: Intel Xeon E3-1230v2 3.30 GHz, 8M Cache, Turbo, Quad Core/8T (69W)
Memory: 16GB (4x4GB), 1600MHz, Dual Ranked, Low Volt UDIMM (speed is CPU dependent)
RAID Controller: PERC H200 Adapter
Disk: 2x 1TB 7.2K RPM SATA 3Gbps 3.5in Cabled Hard Drive
Raw-VM: 10GB raw disk format
Qcow2-VM: 10GB qcow2 format
Raw-VM and Qcow2-VM filesystem type: ext4
Operating system: Raw-VM is Ubuntu 12.04 LTS and Qcow2-VM is CentOS 6.4
VM memory and VCPU: both VMs have 2GB RAM and 1 VCPU of the same speed
Both VMs sit on an XFS filesystem on the hypervisor.
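For reference, the two disk images can be created with qemu-img; this is a sketch of the provisioning step (the image paths and names are my assumptions, not taken from the original setup):

```shell
# Hypothetical image paths; adjust to your own storage location.
# Raw: a flat file with no format metadata.
qemu-img create -f raw /var/lib/libvirt/images/raw-vm.img 10G

# qcow2: copy-on-write format; allocates on demand and supports snapshots.
qemu-img create -f qcow2 /var/lib/libvirt/images/qcow2-vm.qcow2 10G
```

The copy-on-write metadata and on-demand allocation are the usual explanation for qcow2's write overhead relative to raw.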
I used hdparm inside each guest and ran the following three times against the VM's disk device:
sudo hdparm -Tt /dev/vda   # substitute the guest's actual disk device
My results are as follows:
Raw-VM:
Timing cached reads: 19414 MB in 2.00 seconds = 9715.65 MB/sec
Timing buffered disk reads: 350 MB in 3.02 seconds = 116.09 MB/sec
Timing cached reads: 19428 MB in 2.00 seconds = 9722.02 MB/sec
Timing buffered disk reads: 614 MB in 3.01 seconds = 204.17 MB/sec
Timing cached reads: 19900 MB in 2.00 seconds = 9958.25 MB/sec
Timing buffered disk reads: 896 MB in 3.01 seconds = 297.31 MB/sec
Qcow2-VM:
Timing cached reads: 20594 MB in 2.00 seconds = 10311.02 MB/sec
Timing buffered disk reads: 396 MB in 3.02 seconds = 131.07 MB/sec
Timing cached reads: 19916 MB in 2.00 seconds = 9972.19 MB/sec
Timing buffered disk reads: 408 MB in 3.02 seconds = 134.96 MB/sec
Timing cached reads: 19386 MB in 2.00 seconds = 9704.97 MB/sec
Timing buffered disk reads: 406 MB in 3.02 seconds = 134.40 MB/sec
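As an aside, the throughput figures can be pulled out of hdparm's output for quick side-by-side comparison with a line of awk (a small sketch over two of the lines pasted above):

```shell
# Print just the MB/sec figure from each hdparm "Timing ..." result line.
awk -F'= ' '/Timing/ { print $2 }' <<'EOF'
Timing cached reads:   19414 MB in  2.00 seconds = 9715.65 MB/sec
Timing buffered disk reads: 350 MB in  3.02 seconds = 116.09 MB/sec
EOF
```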
Based on the hdparm man page, the timing buffered disk reads (-t) are:

This displays the speed of reading through the buffer cache to the disk without any prior caching of data. This measurement is an indication of how fast the drive can sustain sequential data reads under Linux, without any filesystem overhead.

And timing cached reads (-T) are:

This displays the speed of reading directly from the Linux buffer cache without disk access. This measurement is essentially an indication of the throughput of the processor, cache, and memory of the system under test.
I was under the impression that the raw disk format is a lot faster than qcow2. In my results above, both VMs performed similarly on timing cached reads, which makes sense: cached reads measure the throughput of the processor, cache, and memory, and both guests run on the same host with the same VCPU and RAM. The timing buffered disk reads, which are the actual measure of sustained disk read speed, were faster (though noticeably more variable from run to run) on the Ubuntu raw VM than on the CentOS qcow2 VM. Keep in mind that the two guests run different operating systems, so some of the difference may come from the OS rather than the image format.
So I ran dd to check write performance (conv=fdatasync forces the data to be flushed to disk before dd reports, so the figure is not inflated by the page cache):
Qcow2-VM:
# dd bs=1M count=128 if=/dev/zero of=test conv=fdatasync
128+0 records in
128+0 records out
134217728 bytes (134 MB) copied, 2.43185 s, 55.2 MB/s
# dd bs=1M count=128 if=/dev/zero of=test conv=fdatasync
128+0 records in
128+0 records out
134217728 bytes (134 MB) copied, 2.17012 s, 61.8 MB/s
# dd bs=1M count=128 if=/dev/zero of=test conv=fdatasync
128+0 records in
128+0 records out
134217728 bytes (134 MB) copied, 1.93576 s, 69.3 MB/s
Raw-VM:
$ dd bs=1M count=128 if=/dev/zero of=test conv=fdatasync
128+0 records in
128+0 records out
134217728 bytes (134 MB) copied, 1.37035 s, 97.9 MB/s
$ dd bs=1M count=128 if=/dev/zero of=test conv=fdatasync
128+0 records in
128+0 records out
134217728 bytes (134 MB) copied, 1.24225 s, 108 MB/s
$ dd bs=1M count=128 if=/dev/zero of=test conv=fdatasync
128+0 records in
128+0 records out
134217728 bytes (134 MB) copied, 1.21815 s, 110 MB/s
Looks like the raw VM performed much better than the qcow2 VM in dd write operations.
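To put a single number on it, here is a quick awk snippet averaging the three dd runs for each VM (values copied from the output above):

```shell
# Average the three dd write throughput figures (MB/s) for each VM.
awk 'BEGIN {
  printf "qcow2 average: %.1f MB/s\n", (55.2 + 61.8 + 69.3) / 3
  printf "raw   average: %.1f MB/s\n", (97.9 + 108 + 110) / 3
}'
```

Roughly 62 MB/s versus 105 MB/s, so in this test the raw image sustained about 70% more write throughput.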
What has your experience been like? Share your comments below.