On Dec 13 2016, Randy Rue <[email protected]> wrote:
> Hello All,
>
> I used a 1GB file created from the output of /dev/urandom
> The /cache directory is a 330GB RAM disk
>
> benchmark.py outputs this:
> [root@fast-dr contrib]# ../benchmark.py --authfile /etc/s3ql.authinfo --cachedir /cache swift://tin/fast_dr/ ./1GB.random
> Preparing test data...
> Measuring throughput to cache...
> Cache throughput with 4 KiB blocks: 14188 KiB/sec
> Cache throughput with 8 KiB blocks: 17977 KiB/sec
> Cache throughput with 16 KiB blocks: 33760 KiB/sec
> Cache throughput with 32 KiB blocks: 53328 KiB/sec
> Cache throughput with 64 KiB blocks: 141503 KiB/sec
> Cache throughput with 128 KiB blocks: 169001 KiB/sec
> Measuring raw backend throughput...
> Backend throughput: 24012 KiB/sec
> Test file size: 1024.00 MiB
> compressing with lzma-6...
> lzma compression speed: 1712 KiB/sec per thread (in)
> lzma compression speed: 1712 KiB/sec per thread (out)
> compressing with bzip2-6...
> bzip2 compression speed: 4285 KiB/sec per thread (in)
> bzip2 compression speed: 4307 KiB/sec per thread (out)
> compressing with zlib-6...
> zlib compression speed: 15118 KiB/sec per thread (in)
> zlib compression speed: 15123 KiB/sec per thread (out)
>
> With 128 KiB blocks, maximum performance for different compression
> algorithms and thread counts is:
>
> Threads:                        1            2            4            8           24
> Max FS throughput (lzma):   1712 KiB/s   3424 KiB/s   6849 KiB/s  13698 KiB/s  24011 KiB/s
> ..limited by:                 CPU          CPU          CPU          CPU        uplink
> Max FS throughput (bzip2):  4285 KiB/s   8570 KiB/s  17140 KiB/s  23888 KiB/s  23888 KiB/s
> ..limited by:                 CPU          CPU          CPU        uplink       uplink
> Max FS throughput (zlib):  15118 KiB/s  24005 KiB/s  24005 KiB/s  24005 KiB/s  24005 KiB/s
> ..limited by:                 CPU        uplink       uplink       uplink       uplink
>
>
> Questions:
> * I see discussion of tuning for block size but no mention of it when
> formatting or mounting the file system. Do you mean specifying a block size
> in my call to rsync?
It means the block size that applications use when they issue write(2)
and read(2) requests to the kernel. This is distinct from rsync's
--block-size argument (which sets the block size for rsync's
delta-transfer algorithm). Many applications use a hardcoded block size
that you cannot change (e.g. the "cp" program).
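As a concrete (hypothetical) illustration, you can pick the block size
yourself with dd and watch how throughput changes; /mnt/s3ql is a
placeholder for wherever the file system is mounted:

    # write 1 GiB to the S3QL mountpoint using 128 KiB write(2) calls
    dd if=/dev/zero of=/mnt/s3ql/testfile bs=128k count=8192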
> * I've mounted the volume with mount.s3ql calling for 24 upload threads; is
> this correct?
There is no "correct" value. It means S3QL will try to do up to 24
uploads in parallel.
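For reference, a sketch of the corresponding invocation (the mountpoint
is a placeholder; the storage URL and paths are taken from your
benchmark run):

    mount.s3ql --threads 24 --authfile /etc/s3ql.authinfo \
        --cachedir /cache swift://tin/fast_dr/ /mnt/s3ql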
> * I've tried several different combinations of arguments attempting to set
> the cache size and maximum cache entries, but in every case cache entries
> top out at about 4K and cache size seems to float around 30-35GB,
> presumably shifting as the size of those 4K entries changes?
You need to be more specific. Please post a concrete set of options that
you used, the results that you got, and the results that you'd rather have.
> * I get better throughput (up to 120MB/s average incoming to the server,
> with bursts to 200MB/s) until eventually all of the cache is dirty. Then
> throughput drops to half or less. This makes sense: once the whole cache is
> dirty, it seems like I'd switch to "write-through" traffic and be limited by
> the back-end connection, and unless my cache is TBs in size I'll eventually
> hit that point in any case. But I do have a 300+GB cache volume mounted and
> would like to make use of it. What's limiting my cache entries and size?
The number of available file descriptors and the --max-cache-entries
argument.
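If you want the cache to actually grow toward the ~300 GiB you have
available, a sketch (the numbers are illustrative; --cachesize is given
in KiB, and each cache entry may need its own file descriptor):

    # raise the per-process file descriptor limit first
    ulimit -n 100000
    # 314572800 KiB = 300 GiB
    mount.s3ql --cachesize 314572800 --max-cache-entries 50000 \
        --authfile /etc/s3ql.authinfo --cachedir /cache \
        swift://tin/fast_dr/ /mnt/s3ql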
> * In every case (depending on the number of rsync clients and their network
> connections) I get up to about 200MB/s and no more despite the server
> having bonded 10Gb connections and the back end swift cluster having
> multiple 10Gb connections. Where's my bottleneck?
I assume you are asking what limits the upload speed to the server to 24
MB/s? That is not something benchmark.py can determine. Do you get more
than 24 MB/s when you use a different swift client? If not, then the
bottleneck is on the server side.
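For example, with the plain python-swiftclient CLI you could time a raw
upload of the same test file (CONTAINER is a placeholder for your swift
container):

    # upload 1 GiB directly, bypassing S3QL entirely
    time swift upload CONTAINER 1GB.random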
Best,
-Nikolaus
--
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F
»Time flies like an arrow, fruit flies like a Banana.«