Hi all,
A reminder to attend, and also to submit any panel questions for, the
Wednesday session. So far, there are three questions around these topics:
1) excessive prefetch when reading small fractions of many large files
2) improving the integration between TSM and GPFS
3) number of security
Scale 4.2.3 reached end of service as of September 30, 2020.

As for waiters: the mmdiag --waiters command only shows waiters on the node on which the command is executed. You should use the command mmlsnode -N waiters -L to see all the waiters in the cluster, which may be more revealing as to the
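A minimal sketch of the two commands contrasted above, as I use them (output format varies between Scale releases, so check the man pages on your cluster):

# Local view only: waiters on the node where you run it
mmdiag --waiters

# Cluster-wide view: polls every node for its waiters; -L gives the long, per-node listing
mmlsnode -N waiters -L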
Hi Uwe -
Regarding your previous message - waiters were coming and going, with just 1-2
waiters when I ran the mmdiag command, and with very low wait times (<0.01 s).
We are running version 4.2.3.
I did another capture today while the client was functioning normally, and this
was the header result:
Hi Kamil,
in my mail just a few minutes ago I'd overlooked that the buffer size in
your trace was indeed 128M (I suppose the trace facility adapts that size
if it is not set explicitly). That is very strange: even under high load, the
trace should then cover a longer period than 10 secs, and
Hi, Kamil,
it looks like your tracefile setting was too low:
all streams included Thu Nov 12 20:58:19.950515266 2020 (TOD
1605232699.950515, cycles 20701552715873212) < useful part of trace
extends from here
trace quiesced Thu Nov 12 20:58:20.133134000 2020 (TOD 1605232700.000133,
cycles
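To capture a longer window than the ~0.2 s shown in that header, the trace file size can be raised before the next capture. A sketch using mmtracectl; the option names below are from memory of the Scale documentation, and the 512M value is purely illustrative, so verify both against your release:

# Stop any trace that is currently running
mmtracectl --stop

# Raise the per-node trace file size (illustrative value)
mmtracectl --set --trace-file-size=512M

# Restart tracing, reproduce the problem, then cut the trace
mmtracectl --start
mmtracectl --stop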