I thought I answered that already, but maybe I just thought about answering
it and then forgot about it :-D
So yes, more than 32 subblocks per block significantly increases the
performance of filesystems with small files; for the sake of argument,
let's say 32k in a large-block filesystem again
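(Not from the thread, just a back-of-the-envelope sketch: the 16 MiB block size, the 32 KiB file, and the subblock counts below are illustrative assumptions. The point is that a file is allocated in whole subblocks, so with only 32 subblocks per block a small file on a large-block filesystem occupies a subblock many times its own size, while a higher subblock count shrinks that waste.)

# Illustrative sketch only: block size, subblock counts, and file size are
# assumptions, not values confirmed in this thread.
BLOCK_SIZE = 16 * 1024 * 1024  # a 16 MiB "large block" filesystem

def allocated_bytes(file_size, subblocks_per_block, block_size=BLOCK_SIZE):
    """Space occupied when allocation is rounded up to whole subblocks."""
    subblock_size = block_size // subblocks_per_block
    subblocks_needed = -(-file_size // subblock_size)  # ceiling division
    return subblocks_needed * subblock_size

small_file = 32 * 1024  # a 32 KiB file
for spb in (32, 1024):
    alloc = allocated_bytes(small_file, spb)
    print(f"{spb:>4} subblocks/block: subblock = {BLOCK_SIZE // spb // 1024} KiB, "
          f"32 KiB file occupies {alloc // 1024} KiB ({alloc // small_file}x)")

With only 32 subblocks per block the 32 KiB file burns 512 KiB of allocation; at 1024 subblocks per block it fits exactly, which is the capacity side of the small-file gain being discussed.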
Thanks, Alex. I'm all too familiar with the trade-offs between large
blocks and small files, and we do use pretty robust SSD storage for our
metadata. We support a wide range of workloads, and we have some folks
with many small (<1 MB) files and other folks with many large (>256 MB) files.
My point in
Hey Aaron,
Can you define your sizes for "large blocks" and "small files"? If you
dial the block size up and the file size down, your performance will be worse. And in any
case it's a pathological corner case, so it shouldn't matter much for your
workflow, unless you've designed your system with the wrong
Thanks, Bill.
I still don't feel like I've got a clear answer from IBM, and frankly
the core issue of the lack of a migration tool was totally dodged.
Again in Sven's presentation from SSUG @ SC17
(http://files.gpfsug.org/presentations/2017/SC17/SC17-UG-CORAL_V3.pdf)
he mentions "It has a
Scale 5.0 was released today and is available for download. Time to construct a
test cluster!
Bob Oesterlin
Sr Principal Storage Engineer, Nuance
507-269-0413
It's not clear that this is a problem or malfunction.
The customer should contact IBM support and be ready to transmit copies of the
cited log files and other mmbackup command output (stdout and stderr
messages) for analysis, along with the mmsnap output.
From: "IBM Spectrum Scale"
Tru,
Can you please help with this query or forward it to the right person?
Thanks.
Regards, The Spectrum Scale (GPFS) team
--
If you feel that your question can benefit other users of