Re: [gpfsug-discuss] wondering about outage free protocols upgrades

2018-03-07 Thread Christof Schmitt
The problem with the SMB upgrade lies in the data shared between the protocol nodes; it is not tied to the protocol version used between SMB clients and the protocol nodes. Samba stores internal data (e.g. the SMB state of open files) in tdb database files. ctdb then makes these tdb databases
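For context, the clustered databases in question can be inspected from any protocol node with the ctdb command-line tools. A minimal sketch (exact output and database names vary by Samba release):

    # List the tdb databases ctdb is replicating across the protocol nodes
    ctdb getdbmap
    # Check that all nodes agree on cluster state, e.g. before/after an upgrade
    ctdb status
    # smbstatus reads its open-file and session state from those same tdbs
    smbstatus

If the on-disk tdb record formats change between Samba versions, nodes running different versions cannot safely share these databases, which is the crux of the upgrade-outage discussion in this thread.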

Re: [gpfsug-discuss] 100G RoCEE and Spectrum Scale Performance

2018-03-07 Thread Olaf Weiser
Hi Doug, I did some comparisons with gpfsperf between IB and 100GbE, but we used the 100GbE with RoCE, so my results might not be representative for you. (Don't be surprised by the edited hostnames; it's from a real customer environment.) So with a real data workload it is nearly the same... ~
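For readers who want to reproduce this kind of comparison: gpfsperf ships as source with Spectrum Scale and has to be built first. A sketch of a typical run (paths, sizes, record size and thread count are placeholders; flag names follow the sample's usage text):

    # Build the sample benchmark shipped with Spectrum Scale
    cd /usr/lpp/mmfs/samples/perf
    make
    # Write a test file, then read it back sequentially
    ./gpfsperf create seq /gpfs/test/perf.dat -n 32g -r 16m -th 8
    ./gpfsperf read seq /gpfs/test/perf.dat -n 32g -r 16m -th 8

Running the same invocation over the IB and the RoCE fabric (e.g. by switching verbsPorts / network config between runs) gives the kind of like-for-like numbers being discussed here.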

Re: [gpfsug-discuss] mmfind performance

2018-03-07 Thread Buterbaugh, Kevin L
Hi Marc, Thanks, I’m going to give this a try as the first mmfind finally finished overnight, but produced no output:

    /root
    root@gpfsmgrb# bash -x ~/bin/klb.sh
    + cd /usr/lpp/mmfs/samples/ilm
    + ./mmfind /gpfs23 -inum 113769917 -o -inum 132539418 -o -inum 135584191 -o -inum 136471839 -o -inum
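Background for readers: mmfind is a sample wrapper that compiles find-style predicates into an mmapplypolicy run. A hand-written policy file roughly equivalent to the inode search above might look like the following sketch (rule and list names are made up; INODE is a file attribute in the policy language):

    /* list-inodes.pol -- hypothetical equivalent of the mmfind -inum search */
    RULE EXTERNAL LIST 'matches' EXEC ''
    RULE 'byinode' LIST 'matches'
      WHERE INODE = 113769917 OR INODE = 132539418
         OR INODE = 135584191 OR INODE = 136471839

    # Run it directly, writing the match list to a file instead of executing anything
    mmapplypolicy /gpfs23 -P list-inodes.pol -I defer -f /tmp/matches

Running mmapplypolicy by hand like this can help separate "the policy scan found nothing" from "the mmfind wrapper mishandled the predicates".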

Re: [gpfsug-discuss] mmfind performance

2018-03-07 Thread Simon Thompson (IT Research Support)
I can’t comment on mmfind vs perl, but have you looked at trying “tsfindinode”? Simon
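A note of caution: tsfindinode is one of the undocumented ts* helpers under /usr/lpp/mmfs/bin, and its interface may change between releases. The invocation is roughly of the form below (the -i flag and argument order are an assumption based on common usage reports, not documented syntax):

    # Hypothetical: resolve an inode number to a path within the mounted filesystem
    /usr/lpp/mmfs/bin/tsfindinode -i 113769917 /gpfs23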

Re: [gpfsug-discuss] wondering about outage free protocols upgrades

2018-03-07 Thread Greg.Lehmann
In theory it only affects SMB, but in practice, if NFS depends on winbind for authorisation, then it is affected too. I can understand the need for changes to happen every so often, and that outages may be required then. But I would like to see some effort to avoid doing this
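Whether a given protocol node's NFS service actually depends on winbind can be sanity-checked from the shell. A minimal sketch (domain and user names are placeholders):

    # Is winbindd answering at all?
    wbinfo --ping-dc
    # Can the node resolve a domain user that NFS identity mapping would rely on?
    wbinfo -i 'EXAMPLE\someuser'
    id 'EXAMPLE\someuser'

If these lookups go through winbind, then an SMB-stack outage that takes winbindd down will break NFS authorisation on that node too, which is the coupling being described here.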

Re: [gpfsug-discuss] [non-nasa source] Re: pagepool shrink doesn't release all memory

2018-03-07 Thread Aaron Knister
Following up on this... On one of the nodes on which I'd bounced the pagepool around, I managed to cause what appeared to that node to be filesystem corruption (I/O errors and fsstruct errors) on every single fs. Thankfully, none of the other nodes in the cluster seemed to agree that the fs was
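The pagepool "bounce" referred to in this thread is a dynamic resize via mmchconfig. A sketch of the kind of sequence under discussion (sizes and the node name are placeholders):

    # Shrink the pagepool on one node, taking effect immediately (-i)
    mmchconfig pagepool=4G -i -N nsdnode01
    # ...and grow it back again
    mmchconfig pagepool=16G -i -N nsdnode01
    # Check how much memory mmfsd is actually holding afterwards
    mmdiag --memory

The original complaint in the thread was that after shrinking, mmdiag-reported memory does not drop all the way back; this follow-up suggests repeated resizes can also leave the node in a bad state.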