Hello All,
I used a 1GB file created from the output of /dev/urandom. The /cache
directory is a 330GB RAM disk. benchmark.py outputs this:
[root@fast-dr contrib]# ../benchmark.py --authfile /etc/s3ql.authinfo --cachedir /cache swift://tin/fast_dr/ ./1GB.random
Preparing test data...
Measuring throughput to cache...
Cache throughput with 4 KiB blocks: 14188 KiB/sec
Cache throughput with 8 KiB blocks: 17977 KiB/sec
Cache throughput with 16 KiB blocks: 33760 KiB/sec
Cache throughput with 32 KiB blocks: 53328 KiB/sec
Cache throughput with 64 KiB blocks: 141503 KiB/sec
Cache throughput with 128 KiB blocks: 169001 KiB/sec
Measuring raw backend throughput..
Backend throughput: 24012 KiB/sec
Test file size: 1024.00 MiB
compressing with lzma-6...
lzma compression speed: 1712 KiB/sec per thread (in)
lzma compression speed: 1712 KiB/sec per thread (out)
compressing with bzip2-6...
bzip2 compression speed: 4285 KiB/sec per thread (in)
bzip2 compression speed: 4307 KiB/sec per thread (out)
compressing with zlib-6...
zlib compression speed: 15118 KiB/sec per thread (in)
zlib compression speed: 15123 KiB/sec per thread (out)
With 128 KiB blocks, maximum performance for different compression
algorithms and thread counts is:
Threads:                        1            2            4            8           24
Max FS throughput (lzma):   1712 KiB/s   3424 KiB/s   6849 KiB/s  13698 KiB/s  24011 KiB/s
..limited by:                  CPU          CPU          CPU          CPU        uplink
Max FS throughput (bzip2):  4285 KiB/s   8570 KiB/s  17140 KiB/s  23888 KiB/s  23888 KiB/s
..limited by:                  CPU          CPU          CPU        uplink       uplink
Max FS throughput (zlib):  15118 KiB/s  24005 KiB/s  24005 KiB/s  24005 KiB/s  24005 KiB/s
..limited by:                  CPU        uplink       uplink       uplink       uplink
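If I'm reading that table right, each entry is just min(threads x per-thread
compression speed, backend throughput). Here's a quick sanity check of that
arithmetic for lzma (numbers hard-coded from the output above; small rounding
differences against the printed table are expected):

```shell
#!/bin/sh
# Per-thread lzma compression speed and raw backend throughput in KiB/s,
# copied from the benchmark output above.
lzma=1712
backend=24012
for t in 1 2 4 8 24; do
    fs=$((lzma * t))                          # CPU-bound ceiling
    [ "$fs" -gt "$backend" ] && fs=$backend   # capped by the uplink
    echo "threads=$t max_fs_throughput=$fs KiB/s"
done
```

So with lzma I'd need roughly 14+ threads before the uplink, rather than the
CPU, becomes the limit.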
Questions:
* I see discussion of tuning for block size, but no mention of where to set
it when formatting or mounting the file system. Do you mean specifying a
block size in my call to rsync?
* I've mounted the volume with mount.s3ql calling for 24 uplink threads; is
this correct?
* I've tried several different combinations of arguments attempting to set
the cache size and maximum cache entries, but in every case cache entries
top out at about 4K and cache size seems to float around 30-35GB,
presumably shifting as the size of those 4K entries changes?
* I get better throughput (up to 120MB/s average incoming to the server,
with bursts to 200MB/s) until eventually all of the cache is dirty, at
which point throughput drops to half or less. That makes sense: once the
cache is entirely dirty I'd effectively switch to "write-through" traffic
and be limited by the back-end connection, and unless my cache is TBs in
size I'll eventually hit that wall in any case. But I do have a 300+GB
cache volume mounted and would like to make use of it. What's limiting my
cache entries and size?
* In every case (depending on the number of rsync clients and their network
connections) I top out at about 200MB/s and no more, despite the server
having bonded 10Gb connections and the back-end Swift cluster having
multiple 10Gb connections. Where's my bottleneck?
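For reference, my mount invocation looks roughly like this (the mount point,
sizes, and entry count below are illustrative placeholders, not my exact
command line, and I may well be misusing the options; that's part of what
I'm asking):

```shell
# Illustrative mount: 24 uplink threads, a ~300 GiB cache limit on the RAM
# disk (--cachesize is in KiB), and a raised cache-entry limit.
mount.s3ql --authfile /etc/s3ql.authinfo \
    --cachedir /cache \
    --threads 24 \
    --cachesize 314572800 \
    --max-cache-entries 500000 \
    --compress lzma-6 \
    swift://tin/fast_dr/ /mnt/fast_dr
```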
Thanks in advance,
Randy