Hello Scott,

Thank you. I did set afmFlushThreadDelay = 1 and got a much faster startup; setting it to 0 didn't improve things further. I'm not sure how much we'll need this in production, where most of the time the queue is full, but for benchmarking during setup it helps a lot. (We run 4.2.3-4 on RHEL 7.)
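For reference, on Spectrum Scale these tunables are normally changed with mmchconfig. The parameter names come from this thread; the exact flags and whether a node-class restriction is appropriate should be checked against your release, so treat this as a sketch rather than verified syntax:

```shell
# Sketch: apply the AFM flush tuning discussed in this thread.
# afmNumFlushThreads and afmFlushThreadDelay are the parameters named
# in the thread; the -i flag (apply immediately and persist) is an
# assumption to verify against your Spectrum Scale release.
mmchconfig afmNumFlushThreads=128,afmFlushThreadDelay=1 -i

# Confirm the values took effect:
mmlsconfig afmNumFlushThreads afmFlushThreadDelay
```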
Kind regards,
Heiner

Scott Fadden wrote:

When an AFM gateway is flushing data to the target (home), it starts flushing with a few threads (I don't remember the number) and ramps up to afmNumFlushThreads. How quickly this ramp-up occurs is controlled by afmFlushThreadDelay. The default is 5 seconds, so flushing only adds threads once every 5 seconds. This was an experimental parameter, so your mileage may vary.

Scott Fadden
Spectrum Scale - Technical Marketing
Phone: (503) 880-5833
[email protected]
http://www.ibm.com/systems/storage/spectrum/scale

----- Original message -----
From: "Billich Heinrich Rainer (PSI)" <[email protected]>
Sent by: [email protected]
To: "[email protected]" <[email protected]>
Cc:
Subject: [gpfsug-discuss] AFM: Slow startup of flush from cache to home
Date: Fri, Oct 13, 2017 10:16 AM

Hello,

Running an AFM IW cache, we noticed that AFM starts flushing data from cache to home rather slowly, say at 20 MB/s, and only gradually increases to several 100 MB/s after a few minutes. As soon as the pending queue is no longer being filled, the data rate drops again.

I assume this is good behavior for WAN traffic, where you don't want to use the full bandwidth from the beginning but only when really needed. For our local setup with dedicated links, I would prefer much more aggressive behavior, to get data transferred to home as soon as possible. Am I right that AFM implements such a 'slow startup', and is there a way to change this behavior? We did increase afmNumFlushThreads to 128.

Currently we measure with many small files (1 MB). For large files the behavior is different: we get a stable data rate from the beginning. I have not yet tried a continuous write on the cache to see whether the rate increases after a while there, too.
Thank you,
Heiner Billich

--
Paul Scherrer Institut
Science IT
Heiner Billich
WHGA 106
CH 5232 Villigen PSI
056 310 36 02
https://www.psi.ch

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
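Given Scott's description above (flush threads are added once every afmFlushThreadDelay seconds until afmNumFlushThreads is reached), the worst-case ramp-up time is roughly the product of the two values. The initial thread count is unknown, so this simple estimate ignores it and serves only as an upper bound:

```shell
# Rough ramp-up estimate: threads are added once per delay interval,
# so reaching the full thread count takes about target * delay seconds.
# (Ignores the unknown initial thread count, so this is an upper bound.)
target=128                 # afmNumFlushThreads, as set in this thread
for delay in 5 1; do       # default afmFlushThreadDelay vs. the tuned value
    echo "afmFlushThreadDelay=${delay}: ~$(( target * delay ))s to full flush rate"
done
```

With the default delay, a 128-thread configuration needs on the order of ten minutes to reach full flushing parallelism, which is consistent with the "few minutes" ramp-up observed; with delay = 1 it is roughly two minutes.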
