Hi Nikolaus,
 

> > * I've mounted the volume with mount.s3ql calling for 24 uplink
> > threads, is this correct?
>
> There is no "correct" value. It means S3QL will try to do up to 24
> uploads in parallel.


Forgive me for asking the question imprecisely. Perhaps a better question 
is "Is this consistent with best practice?" or "Do you have a 
recommendation?" 

> > * I've tried several different combinations of arguments attempting
> > to set the cache size and maximum cache entries, but in every case
> > cache entries top out at about ~4K and cache size seems to float
> > around 30-35GB, presumably shifting as the size of those 4K entries
> > changes?
>
> You need to be more specific. Please post a concrete set of options
> that you used, the results that you got, and the results that you'd
> rather have.


For my most recent attempt, I mounted with:
/usr/bin/mount.s3ql --nfs --compress zlib-6 --authfile /etc/s3ql.authinfo 
--log syslog --threads 24 --cachedir /cache --cachesize 346030080 
--allow-other swift://tin/fast_dr/ /fast_dr

I've also tried it with the argument "--max-cache-entries 40960" included. 
In both cases, while I have multiple rsyncs running from a mounted NFS 
client, "watch s3qlstat" shows the cache growing to ~4,000 entries and no 
more. Cache size tops out at 38-40GB, and dirty cache eventually grows to 
take up all of the cache, at which point whatever throughput I was seeing 
starts to drop.

At that point I've also tried changing the cache size on the fly with 
"s3qlctrl cachesize /fast_dr/ 781250000" (that's 100GB) and watched the 
cache size float upward maybe 5GB before returning to ~30GB. Cache entries 
didn't change.

I would like more of the 300GB cache volume to be used, to find out 
whether that improves overall performance, or at least postpones the point 
where the entire cache is dirty and write-throughs begin.

Related:

> > What's limiting my cache entries and size?
>
> The number of available file descriptors and the --max-cache-entries
> argument.


Am I missing something about using the max-cache-entries argument? It 
doesn't seem to make a difference.

Forgive me if this isn't an s3ql question, but what determines the number 
of available file descriptors?
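
In case it's useful data: ~4,000 is suspiciously close to 4096, which is a 
common default for the per-process open file limit on Linux, so perhaps 
that's the ceiling I'm hitting. Assuming a stock Linux setup with a single 
mount, I'll check what's actually in effect with something like:

    # soft and hard limits for the current shell
    ulimit -Sn; ulimit -Hn

    # limits in effect for the running mount.s3ql process
    grep 'open files' /proc/$(pgrep -f mount.s3ql)/limits

and, if 4096 shows up, raise it (e.g. via /etc/security/limits.conf or the 
service manager's equivalent) before remounting.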


> > * In every case (depending on the number of rsync clients and their
> > network connections) I get up to about 200MB/s and no more despite
> > the server having bonded 10Gb connections and the back end swift
> > cluster having multiple 10Gb connections. Where's my bottleneck?
>
> I assume you mean what is limiting the upload speed to the server to
> 24 MB/s? That's not something that benchmark.py can determine. Do you
> get more than 24 MB/s when you use a different swift client? If not,
> then the server is to blame.


I mean "How can I find out why my total write speed is only as high as 
200MB/s when the NFS client, the s3ql server and the swift cluster all have 
multiple 10Gb connections?"

I'll test some rsyncs from the client to a non-s3ql target on the s3ql 
system as a data point for incoming speed, and some swift calls from the 
s3ql server to the swift cluster.
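
Roughly, something like the following, with the host, container, and path 
names as placeholders, and assuming the standard python-swiftclient CLI 
with auth already set in the usual OS_* environment variables:

    # client -> s3ql server, bypassing S3QL entirely (plain local disk)
    rsync -a --progress /data/testset/ s3qlserver:/tmp/rsync-baseline/

    # s3ql server -> swift cluster, bypassing S3QL
    dd if=/dev/urandom of=/tmp/testobj bs=1M count=1024
    time swift upload test-container /tmp/testobj

If either leg also tops out around 200MB/s on its own, that should narrow 
down where the bottleneck lives.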

Life is Good,

Randy
