Good point about "tiny" files going into the inode and system pool. Which
reminds me:
Generally a bad idea to store metadata on wide-striping, disk-based RAID
(type 5 with spinning media).
Do use SSD or similar for metadata.
Consider a smaller block size for the metadata / system pool than
Hey Sven,
This is regarding mmap issues and GPFS.
We had previously discussed experimenting with GPFS 5.
I have now upgraded all of the compute nodes and NSD nodes to GPFS 5.0.0.2.
I have yet to experiment with mmap performance, but before that - I am seeing
weird hangs with GPFS 5 and I think it
Hi All,
At the UK meeting next week, we’ve had a speaker slot become available, so we’re
planning to put in a BoF-type session on tooling Spectrum Scale, giving us
space for a few 3-5 minute quick talks on what people are doing to automate. If
you are coming along and interested, please drop
Let’s keep in mind that line buffering is a concept
within the standard C library;
if every log line triggers one write(2) system call,
and it’s not direct I/O, then multiple writes still get
coalesced into a few larger disk writes (as with the dd example).
A logging application might choose to
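To illustrate the buffering point above, here is a minimal C sketch (a hypothetical logger, not code from the thread) contrasting stdio line buffering with issuing one write(2) per log line; without direct I/O, the page cache can still coalesce the individual writes into fewer, larger disk writes.

/* Minimal sketch, assuming a hypothetical log file "app.log"; the helper
 * names log_line_stdio/log_line_write are illustrative only. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

/* With _IOLBF, fprintf() keeps data in the C library buffer and flushes it
 * at each newline, so the kernel sees at most one write(2) per line. */
static void log_line_stdio(FILE *fp, const char *msg)
{
    fprintf(fp, "%s\n", msg);   /* flushed at '\n' due to line buffering */
}

/* Calling write(2) directly bypasses stdio buffering: every call is a
 * separate system call, but (absent O_DIRECT) the page cache may still
 * coalesce them into fewer, larger disk I/Os. */
static void log_line_write(int fd, const char *msg)
{
    write(fd, msg, strlen(msg));
    write(fd, "\n", 1);
}

int main(void)
{
    FILE *fp = fopen("app.log", "a");
    if (!fp)
        return 1;
    setvbuf(fp, NULL, _IOLBF, 8192);        /* explicit line buffering */
    log_line_stdio(fp, "buffered via stdio");
    fclose(fp);

    int fd = open("app.log", O_WRONLY | O_APPEND);
    if (fd < 0)
        return 1;
    log_line_write(fd, "one write(2) per call");
    close(fd);
    return 0;
}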
It would be interesting to see in which chunks data arrive at the NSDs -- if
those chunks are bigger than the individual I/Os (i.e. multiples of the
record sizes), there is some data coalescing going on and it just needs to
have its path well paved ...
If not, there might be indeed something odd in
Just the 1000 SMB shares limit was what I wanted but the other info was useful,
thanks Carl.
Richard