Re: [gpfsug-discuss] CES SMB export limit

2018-04-11 Thread Sobey, Richard A
Just the 1000 SMB shares limit was what I wanted, but the other info was useful, thanks Carl. Richard

Re: [gpfsug-discuss] Confusing I/O Behavior

2018-04-11 Thread Jonathan Buzzard
On Tue, 2018-04-10 at 23:43 +0200, Uwe Falke wrote: > Hi Aaron, > to how many different files do these tiny I/O requests go? > > Mind that write aggregation coalesces the I/O over a limited time (5 secs or so) and ***per file***. > For that matter, it makes a large difference whether the small chunks are all written to one file or to many different ones …
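
To make the per-file aggregation point concrete, here is a small C sketch (my own illustration, not from the thread; file names, chunk size and counts are invented). It produces the two access patterns being contrasted: the same tiny writes sent to one file, where the file system gets a chance to aggregate them before they reach the NSDs, versus spread across many files, where each file only ever sees a single tiny write.

/*
 * Illustrative sketch only (not from the original thread): generate the two
 * I/O patterns being discussed -- many tiny writes to ONE file versus the
 * same tiny writes spread across MANY files. Because the write aggregation
 * works per file over a short window, the first pattern gives the file
 * system far more opportunity to coalesce the data.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NWRITES 10000
#define CHUNK   200          /* "tiny" I/O request, well below block size */

static void tiny_writes_one_file(void)
{
    char buf[CHUNK];
    memset(buf, 'x', sizeof buf);
    int fd = open("onefile.dat", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); exit(1); }
    for (int i = 0; i < NWRITES; i++)
        if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) { perror("write"); exit(1); }
    close(fd);               /* all 10000 chunks belong to one file and can coalesce */
}

static void tiny_writes_many_files(void)
{
    char buf[CHUNK], name[64];
    memset(buf, 'x', sizeof buf);
    for (int i = 0; i < NWRITES; i++) {
        snprintf(name, sizeof name, "many.%05d.dat", i);
        int fd = open(name, O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); exit(1); }
        if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) { perror("write"); exit(1); }
        close(fd);           /* each file gets exactly one tiny write */
    }
}

int main(void)
{
    tiny_writes_one_file();
    tiny_writes_many_files();
    return 0;
}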

Re: [gpfsug-discuss] Confusing I/O Behavior

2018-04-11 Thread Uwe Falke
It would be interesting to see in what size chunks the data arrives at the NSDs -- if those chunks are bigger than the individual I/Os (i.e. multiples of the record sizes), then some data coalescing is going on and it just needs to have its path well paved ... If not, there might indeed be something odd in the …

Re: [gpfsug-discuss] Confusing I/O Behavior

2018-04-11 Thread Peter Serocka
Let’s keep in mind that line buffering is a concept within the standard C library; if every log line triggers one write(2) system call, and it’s not direct I/O, then multiple writes still get coalesced into a few larger disk writes (as with the dd example). A logging application might choose to close …
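
As a rough illustration of that distinction (my own sketch, not from the thread), the first loop below relies on C-library line buffering, so each log line ends up as roughly one write(2); the second loop issues the write(2) calls itself. In neither case is direct I/O requested, so the small writes can still be coalesced into larger disk I/Os further down the stack.

/*
 * Minimal sketch (illustrative, not from the thread): "line buffering" lives
 * in the C library, not in the kernel. With _IOLBF every newline-terminated
 * fprintf() ends in one write(2); without direct I/O those small write(2)s
 * are still cached and can be flushed to disk as larger I/Os.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* stdio path: line-buffered stream, roughly one write(2) per log line */
    FILE *log = fopen("app.log", "w");
    if (!log) { perror("fopen"); return 1; }
    setvbuf(log, NULL, _IOLBF, 8192);      /* line buffering: a C-library concept */
    for (int i = 0; i < 1000; i++)
        fprintf(log, "event %d: something happened\n", i);
    fclose(log);

    /* raw path: explicit write(2) per line, still coalesced unless direct I/O is used */
    int fd = open("app_raw.log", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    const char line[] = "event: something happened\n";
    for (int i = 0; i < 1000; i++)
        if (write(fd, line, strlen(line)) < 0) { perror("write"); return 1; }
    close(fd);
    return 0;
}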

[gpfsug-discuss] UK Meeting - tooling Spectrum Scale

2018-04-11 Thread Simon Thompson (Spectrum Scale User Group Chair)
Hi All, At the UK meeting next week a speaker slot has become available, so we’re planning to put in a BoF-type session on tooling Spectrum Scale: we have space for a few 3-5 minute quick talks on what people are doing to automate. If you are coming along and interested, please drop me …

Re: [gpfsug-discuss] GPFS, MMAP and Pagepool

2018-04-11 Thread Lohit Valleru
Hey Sven, This is regarding mmap issues and GPFS. We had previously discussed experimenting with GPFS 5. I have now upgraded all of the compute nodes and NSD nodes to GPFS 5.0.0.2. I have yet to experiment with mmap performance, but before that I am seeing weird hangs with GPFS 5, and I think it …
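
For context, the kind of mmap access pattern under discussion looks roughly like the sketch below (my own example; the file path is hypothetical). The application touches file pages through a mapping instead of calling read(2), so each page fault has to be satisfied through GPFS, which is where the pagepool comes into play per the thread subject.

/*
 * Rough sketch of an mmap read workload (illustrative only; path is made up).
 * Every page touched here is brought in via a page fault rather than read(2).
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/gpfs/fs1/dataset.bin"; /* hypothetical path */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
    if (st.st_size == 0) { fprintf(stderr, "empty file\n"); return 1; }

    unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* touch every page once; each touch may fault a page in from the file system */
    long pagesize = sysconf(_SC_PAGESIZE);
    unsigned long sum = 0;
    for (off_t off = 0; off < st.st_size; off += pagesize)
        sum += p[off];

    printf("touched %lld bytes, byte sum %lu\n", (long long)st.st_size, sum);
    munmap(p, st.st_size);
    close(fd);
    return 0;
}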

Re: [gpfsug-discuss] Confusing I/O Behavior

2018-04-11 Thread Bryan Banister
Just another thought here. If the debug output files fit in an inode, then these writes would be handled as metadata updates to the inode, which is typically much smaller than the file system blocksize. Looking at the storage that handles my GPFS metadata, the avg KiB/IO is a horrendous 5-12 KiB! HTH, Bryan
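
A quick way to see whether a small file is consuming any data blocks at all is to compare its logical size with its allocated blocks, for example with a stat()-based check like the sketch below (my own illustration; whether GPFS reports data-in-inode files with zero allocated blocks should be verified on your own system).

/*
 * Illustrative check (not from the thread): compare logical size and
 * allocated blocks for small files. On file systems that keep small file
 * data in the inode, the allocated-block count can stay at 0 even though
 * st_size is non-zero; verify the exact behavior on your own GPFS system.
 */
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s <file>...\n", argv[0]); return 1; }
    for (int i = 1; i < argc; i++) {
        struct stat st;
        if (stat(argv[i], &st) < 0) { perror(argv[i]); continue; }
        printf("%s: size=%lld bytes, allocated=%lld bytes (%lld 512-byte blocks)\n",
               argv[i],
               (long long)st.st_size,
               (long long)st.st_blocks * 512,
               (long long)st.st_blocks);
    }
    return 0;
}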

Re: [gpfsug-discuss] Confusing I/O Behavior

2018-04-11 Thread Marc A Kaplan
Good point about "tiny" files going into the inode and system pool. Which reminds one: it is generally a bad idea to store metadata on wide-striping, disk-based RAID (type 5, with spinning media). Do use SSD or similar for metadata. Consider a smaller block size for the metadata / system pool than for the regular data …