Hello,
This is simply because your "test" is doing parallel appended
writes to the same file. With GPFS, only one node at a time can write
to roughly the same position (i.e., the same byte range) in a file. This is
coordinated by something called a write-lock token that has to be
obtained by a node before it writes to that range, and revoked from
whichever node held it previously.
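To make the two I/O patterns concrete, here is a minimal sketch in plain shell with GNU dd (file names and sizes are made up; it runs on any filesystem, but on GPFS the first pattern is the one that serializes on write-lock tokens while the second avoids the contention entirely):

```shell
# Pattern 1: all writers append to ONE shared file.
# On GPFS every append needs the byte-range write token,
# so the nodes end up taking turns.
for i in 1 2 3 4; do
  dd if=/dev/zero bs=1M count=8 oflag=append conv=notrunc \
     of=shared.dat 2>/dev/null &
done
wait

# Pattern 2: each writer gets its OWN file -- no shared
# byte range, so no token ping-pong between nodes.
for i in 1 2 3 4; do
  dd if=/dev/zero bs=1M count=8 of=part.$i 2>/dev/null &
done
wait

ls -l shared.dat part.*
```

If the benchmark can be restructured this way (one file per writer, concatenated or post-processed afterwards), the aggregate write bandwidth usually scales with the number of nodes instead of collapsing onto the token holder.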
Hi all,
I am in the process of replacing a BeeGFS cluster with a Spectrum Scale cluster, and some of our initial tests have shown poor performance when writing from multiple client nodes to the same file.
Rafael,
Changing the value of maxblocksize today does require the entire cluster to be stopped -- this is a current limitation in the product (which we are looking into eliminating).
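In practice that means the change has to be wrapped in a full stop/start of the daemon on every node. A sketch of the sequence, using the standard mm* admin commands (the target value 16M here is only an example, not a recommendation):

```shell
# Changing maxblocksize currently requires GPFS to be
# down on ALL nodes in the cluster.
mmshutdown -a                  # stop the daemon cluster-wide
mmchconfig maxblocksize=16M    # example target value
mmstartup -a                   # bring the cluster back up
mmlsconfig maxblocksize        # verify the new setting
```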
Felipe
Felipe Knop k...@us.ibm.com
GPFS Development and Security, IBM Systems
IBM Building 008, 2455 South
Hello,
I have a Spectrum Scale cluster with 10 nodes on version 4.2.
The cluster has maxblocksize=DEFAULT.
I will migrate all nodes to 5.0.3 online, with no downtime.
But after the migration the maxblocksize is fixed in mmlsconfig (the
man page has this information):
maxblocksize 1M
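For reference, this is what the check looks like after the online upgrade; the 1M value shown is the one reported in the thread, carried over from the 4.2 default rather than the new 5.0.x default:

```shell
# After upgrading, the previously implicit default is recorded
# explicitly in the cluster configuration:
mmlsconfig maxblocksize
# maxblocksize 1M   (value carried over from the 4.2 default)
```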