On 05/02/2014 03:00 PM, jjs - mainphrame wrote:
Just for kicks, here are the data from the tests. (These were run
on a rather modest old machine.)
Here are the raw dbench data:
#clients    vzhost           simfs CT         ploop CT
---------------------------------------------------------------------
       1    11.1297 MB/sec    9.96657 MB/sec   19.7214 MB/sec
       2    12.2936 MB/sec   14.3138 MB/sec    23.5628 MB/sec
       4    17.8909 MB/sec   16.0859 MB/sec    45.1936 MB/sec
       8    25.8332 MB/sec   22.9195 MB/sec    84.2607 MB/sec
      16    32.1436 MB/sec   28.921 MB/sec    155.207 MB/sec
      32    35.5809 MB/sec   32.1429 MB/sec   206.571 MB/sec
      64    34.3609 MB/sec   29.9307 MB/sec   221.119 MB/sec
Well, I can't explain this, but there's probably something wrong with
the test.
Here is the script used to invoke dbench:
#!/bin/bash
HOST=`uname -n`
WD=/tmp                              # working directory for dbench I/O
FILE=/usr/share/dbench/client.txt    # standard dbench workload description
# Run dbench with increasing client counts, logging each run separately.
for i in 1 2 4 8 16 32 64
do
    dbench -D $WD -c $FILE $i &>dbench-${HOST}-${i}
done
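A small sketch for collecting the summary figures from those logs,
assuming this dbench build prints the usual final
"Throughput ... MB/sec" line (which is presumably where the table
above comes from):

for i in 1 2 4 8 16 32 64
do
    grep Throughput dbench-${HOST}-${i} | tail -1
done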
Here are the dd commands and outputs:
OPENVZ HOST
----------------
[root@vzhost ~]# dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 11.813 s, 45.4 MB/s
[root@vzhost ~]# df -T
Filesystem  Type    1K-blocks     Used  Available  Use%  Mounted on
/dev/sda2   ext4     20642428  2390620   17203232   13%  /
tmpfs       tmpfs      952008        0     952008    0%  /dev/shm
/dev/sda1   ext2       482922    68436     389552   15%  /boot
/dev/sda4   ext4     51633780  3631524   45379332    8%  /local
[root@vzhost ~]#
PLOOP CT
----------------
root@vz101:~# dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 2.50071 s, 215 MB/s
This one I can explain :)
This is caused by a ploop optimization that was enabled in the kernel
recently: if a data block is all zeroes, it is not written to disk at
all (the same idea as sparse files, just for ploop).
So you need to test with some real data (anything but all zeroes).
I am not sure how fast /dev/urandom is, but it is one of the options.
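For example, a minimal sketch for redoing the dd test with non-zero
data; staging the source file in tmpfs keeps /dev/urandom speed and
source-disk reads out of the timed run. (The /run/shm mounts shown
below are only 256 MB, hence the smaller count; on these Ubuntu CTs
/dev/shm usually points at /run/shm.)

SRC=/dev/shm/randdata
dd if=/dev/urandom of=$SRC bs=1M count=200    # one-time setup, not timed
dd if=$SRC of=test bs=1M count=200 conv=fdatasync
rm $SRC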
root@vz101:~# df -T
Filesystem         Type      1K-blocks     Used  Available  Use%  Mounted on
/dev/ploop11054p1  ext4        4539600  1529316    2804928   36%  /
none               devtmpfs     262144        4     262140    1%  /dev
none               tmpfs         52432       52      52380    1%  /run
none               tmpfs          5120        0       5120    0%  /run/lock
none               tmpfs        262144        0     262144    0%  /run/shm
root@vz101:~#
SIMFS CT
----------------
root@vz102:~# dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 12.6913 s, 42.3 MB/s
root@vz102:~# df -T
Filesystem  Type      1K-blocks     Used  Available  Use%  Mounted on
/dev/simfs  simfs       4194304  1365500    2828804   33%  /
none        devtmpfs     262144        4     262140    1%  /dev
none        tmpfs         52432       52      52380    1%  /run
none        tmpfs          5120        0       5120    0%  /run/lock
none        tmpfs        262144        0     262144    0%  /run/shm
root@vz102:~#
Regards,
J J
On Fri, May 2, 2014 at 2:10 PM, jjs - mainphrame <j...@mainphrame.com> wrote:
You know the saying, "when something seems too good to be true"...
I just installed CentOS 6.5 and OpenVZ on an older machine, and
when I built an Ubuntu 12.04 CT I noticed that ploop is now the
default layout. Cool. So I built another Ubuntu 12.04 CT, identical
in every way except that I specified simfs, so I could do a quick
performance comparison.
First I did a quick timed dd run, then I ran dbench with varying
numbers of clients.
The simfs CT showed performance roughly similar to the host, which
was not too surprising.
What did surprise me was that the ploop CT showed significantly
better performance than the host, in both the dd test and the
dbench tests.
I know someone will tell me "dbench is a terrible benchmark," but
it's also a standard. Of course, if anyone knows a "better"
benchmark, I'd love to try it.
Regards,
J J
_______________________________________________
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users