Re: [gpfsug-discuss] Adding client nodes using a shared NFS root image.

2021-01-29 Thread david_johnson
We use mmsdrrestore after the node boots. In our case these are diskless nodes provisioned by xCAT. The post-install script takes care of ensuring InfiniBand is lit up, then does the mmsdrrestore followed by mmstartup.

-- ddj
Dave Johnson

> On Jan 29, 2021, at 2:47 PM, Ruffner, Scott
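
A minimal sketch of such a post-boot sequence (mmsdrrestore and mmstartup are the actual GPFS commands; the config server name is hypothetical):

  #!/bin/bash
  # Recover the local GPFS configuration from a cluster configuration
  # server, then start the daemon. Run after the network (and IB) is up.
  PRIMARY=gpfs-config-server   # hypothetical node name
  /usr/lpp/mmfs/bin/mmsdrrestore -p "$PRIMARY" && /usr/lpp/mmfs/bin/mmstartup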

Re: [gpfsug-discuss] Question about Policies

2019-12-27 Thread david_johnson
You would want to look for examples of external scripts that work on the result of running the policy engine in listing mode. The one issue that might need some attention is the way that GPFS quotes unprintable characters in the pathname. So the policy engine generates the list, and your
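
A minimal sketch of the external-list setup being described (the policy file and script names are hypothetical; the ESCAPE clause is the GPFS mechanism that percent-encodes special characters in pathnames):

  # /tmp/list.pol (hypothetical) pairs an external script with a list rule:
  #   RULE EXTERNAL LIST 'allfiles' EXEC '/usr/local/bin/handle-list.sh' ESCAPE '%'
  #   RULE 'listEverything' LIST 'allfiles'
  # The policy engine builds the file lists and invokes the EXEC script on them.
  mmapplypolicy /gpfs/fs0 -P /tmp/list.pol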

Re: [gpfsug-discuss] gpfs client and turning off swap

2019-11-08 Thread david_johnson
We have most of our clients network-booted and diskless — no swap possible. GPFS still works until someone runs the node out of memory.

-- ddj
Dave Johnson

> On Nov 8, 2019, at 11:25 AM, Damir Krstic wrote:
>
> I was wondering if it's safe to turn off swap on gpfs client machines?
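
One common protective measure on swapless nodes (a general Linux technique, not from this thread) is to exempt the GPFS daemon from the OOM killer so a runaway user process gets killed first:

  # Lower mmfsd's OOM score so the kernel prefers to kill other processes.
  # -1000 disables OOM-killing for the process; requires root.
  for pid in $(pidof mmfsd); do
      echo -1000 > /proc/$pid/oom_score_adj
  done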

Re: [gpfsug-discuss] infiniband fabric instability effects

2019-09-13 Thread david_johnson
Restarting the subnet manager is in general fairly harmless. It will cause a heavy sweep of the fabric when it comes back up, but there should be no LID renumbering. Traffic may be held up during the scanning and rebuild of the routing tables. Losing the subnet manager for a period of time would
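
A minimal sketch of checking the fabric around a subnet manager restart (assumes OpenSM managed by systemd and the infiniband-diags tools installed):

  sminfo                      # show the current master SM and its state
  systemctl restart opensm    # restart the subnet manager
  sminfo                      # confirm a master SM is back
  ibstat | grep -i state      # local ports should remain Active, LIDs unchanged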

Re: [gpfsug-discuss] Compiling gplbin on RHEL 7.7

2019-09-06 Thread david_johnson
We are starting a rolling upgrade to 5.0.3-x, and gplbin compiles with non-fatal warnings at that version. It seems to run fine. The rest of the cluster is still at 4.2.3-10, but only on the RHEL 7.6 kernel. Do you have a reason not to go for the latest release on either the 4- or 5- line? [root@xxx
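
For reference, a minimal sketch of building the portability layer against the running kernel (mmbuildgpl ships with Spectrum Scale; the kernel headers must be installed):

  # Build gpfs.gplbin for the currently running kernel and package it
  # as an RPM so it can be installed on identical client nodes.
  /usr/lpp/mmfs/bin/mmbuildgpl --build-package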

Re: [gpfsug-discuss] SS on RHEL 7.7

2019-08-16 Thread david_johnson
I installed it on a client machine that was accidentally upgraded to RHEL 7.7. There were two type-mismatch warnings during the gplbin RPM build, but GPFS started up and mounted the filesystem successfully. The client is running SS 5.0.3.

-- ddj
Dave Johnson

> On Aug 16, 2019, at 11:49 AM, Lo

Re: [gpfsug-discuss] Spectrum Scale Standard 4.2.3-13 download broken

2019-03-22 Thread david_johnson
Thank you, I trust that it is fixed now and will check it when I have a chance. The protocols version allowed me to proceed; I just didn’t want others to run into the same issue.

-- ddj
Dave Johnson

> On Mar 22, 2019, at 9:18 AM, Nariman Nasef wrote:
>
> This issue has been addressed

Re: [gpfsug-discuss] Clarification about blocksize in standard gpfs and GNR

2019-03-21 Thread david_johnson
The underlying device in this context is the NSD (Network Shared Disk). This has no relation at all to 512-byte or 4K disk blocks. The filesystem block size is usually around a megabyte, and always a power of two.

-- ddj
Dave Johnson

> On Mar 21, 2019, at 9:22 AM, Dorigo Alvise (PSI) wrote:
>
> Hi,
> I'm a little bit puzzled
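
A quick way to check the configured block size of an existing filesystem (fs0 is a hypothetical device name):

  # -B reports the filesystem block size in bytes
  /usr/lpp/mmfs/bin/mmlsfs fs0 -B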

Re: [gpfsug-discuss] Those users.... millions of files per directory - not necessarily a mistake

2018-08-23 Thread david_johnson
But heaven help you if you export the GPFS filesystem over NFS or CIFS.

-- ddj
Dave Johnson

> On Aug 23, 2018, at 11:23 AM, Marc A Kaplan wrote:
>
> Millions of files per directory may well be a mistake...
>
> BUT there are some very smart use cases that might take advantage of GPFS
> having good

Re: [gpfsug-discuss] Rebalancing with mmrestripefs -P

2018-08-20 Thread david_johnson
Yes, the arrays are in different buildings. We want to spread the activity over more servers if possible, but recognize the extra load that rebalancing would entail. The system is busy all the time. I have considered using QOS when we run policy migrations, but haven’t yet because I don’t know

Re: [gpfsug-discuss] Rebalancing with mmrestripefs -P

2018-08-20 Thread david_johnson
Does anyone have a good rule of thumb for IOPS to allow for background QOS tasks?

-- ddj
Dave Johnson

> On Aug 20, 2018, at 2:02 PM, Frederick Stock wrote:
>
> That should do what you want. Be aware that mmrestripefs generates
> significant IO load so you should either use the QoS
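
A minimal sketch of capping maintenance-class I/O with QoS (the commands are real GPFS commands; the 300 IOPS figure is only a placeholder, not a recommendation):

  # Enable QoS and throttle maintenance tasks (mmrestripefs, mmdeldisk, ...)
  # on all pools; normal ("other") traffic stays unlimited.
  mmchqos fs0 --enable pool=*,maintenance=300IOPS,other=unlimited
  mmlsqos fs0     # observe actual consumption before tuning the cap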

Re: [gpfsug-discuss] Allocation map limits - any way around this?

2018-07-10 Thread david_johnson
I would, as I suggested, add the new NSD into a new pool in the same filesystem. Then I would migrate all the files off the old pool onto the new one. At this point you can deldisk the old ones, or decide what else you’d want to do with them.

-- ddj
Dave Johnson

> On Jul 10, 2018, at 12:29
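
A minimal sketch of that sequence (pool, NSD, and filesystem names are hypothetical):

  # /tmp/evacuate.pol (hypothetical policy file) contains one rule:
  #   RULE 'evacuate' MIGRATE FROM POOL 'oldpool' TO POOL 'newpool'
  mmapplypolicy fs0 -P /tmp/evacuate.pol      # move everything off the old pool
  mmdeldisk fs0 "old_nsd1;old_nsd2"           # then remove its now-empty disks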

Re: [gpfsug-discuss] GPFS 4.2.3-9 and RHEL 7.5

2018-06-13 Thread david_johnson
Could you expand on what is meant by “same environment”? Can I export the same fs in one cluster from CNFS nodes (i.e., the NSD cluster with no root squash) and also export the same fs in another cluster (a client cluster with root squash) using Ganesha?

-- ddj
Dave Johnson

> On Jun 13, 2018, at

Re: [gpfsug-discuss] AFM negative file caching

2018-05-30 Thread david_johnson
Another possible workaround would be to add wrappers for these apps and only add the AFM-based GPFS directory to LD_LIBRARY_PATH when about to launch the app.

-- ddj
Dave Johnson

> On May 30, 2018, at 8:26 AM, Peter Serocka wrote:
>
> As a quick means, why not adding /usr/lib64 at the
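
A minimal sketch of such a wrapper (the paths and the app name are hypothetical):

  #!/bin/bash
  # Wrapper for "theapp": expose the AFM-cached library directory only
  # for this process, so other programs never touch the AFM path.
  export LD_LIBRARY_PATH=/gpfs/afm/sw/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
  exec /gpfs/afm/sw/bin/theapp "$@"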

Re: [gpfsug-discuss] Converting a dependent fileset to independent

2018-04-25 Thread david_johnson
We use a dependent fileset for each research group / investigator. We do this mainly so we can apply fileset quotas. We tried independent filesets but they were quite inconvenient: 1) only a limited number of independent filesets can be created, compared to dependent ones; 2) the requirement to manage the number of
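
A minimal sketch of creating a dependent fileset with a quota (the fileset name, junction path, and limits are hypothetical):

  mmcrfileset fs0 grp_smith                          # dependent by default
  mmlinkfileset fs0 grp_smith -J /gpfs/fs0/grp_smith # attach it in the namespace
  mmsetquota fs0:grp_smith --block 10T:12T           # soft:hard block quota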

Re: [gpfsug-discuss] GPFS autoload - wait for IB ports to become active

2018-03-08 Thread david_johnson
Until IBM provides a solution, here is my workaround. Add it so it runs before the gpfs script; I call it from our custom xCAT diskless boot scripts. Based on RHEL 7, not fully systemd-integrated. YMMV!

Regards,
— ddj
——-
[ddj@storage041 ~]$ cat /etc/init.d/ibready
#! /bin/bash
#
# chkconfig:
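
The excerpt is cut off; a minimal sketch of the waiting logic such a script typically contains (the sysfs path is standard Linux; the timeout is a placeholder):

  # Poll the IB port state until some port reports ACTIVE,
  # giving up after roughly 60 seconds.
  for i in $(seq 1 60); do
      grep -q ACTIVE /sys/class/infiniband/*/ports/*/state && exit 0
      sleep 1
  done
  echo "InfiniBand port never became active" >&2
  exit 1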

Re: [gpfsug-discuss] Metadata only system pool

2018-01-23 Thread david_johnson
If the new files need indirect blocks, or extended attributes that don’t fit in the basic inode, additional metadata space would need to be allocated. There might be other reasons, but these come to mind immediately.

-- ddj
Dave Johnson

> On Jan 23, 2018, at 12:16 PM, Buterbaugh, Kevin L
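
A quick way to watch metadata consumption (fs0 is a hypothetical device name):

  # -m restricts the mmdf report to the disks that hold metadata
  mmdf fs0 -m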

Re: [gpfsug-discuss] Combine different rules

2017-11-01 Thread david_johnson
Filesets and storage pools are, for the most part, orthogonal concepts. You would sort your users and apply quotas with filesets. You would use storage pools, underneath the filesets and the filesystem, to migrate between faster and slower media. Migration between storage pools is done well by the
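
A minimal sketch of a threshold-driven migration between pools (pool names and percentages are hypothetical):

  # /tmp/tier.pol (hypothetical): when the fast pool passes 80% full,
  # migrate files, coldest first, to the slow pool until it drops to 60%:
  #   RULE 'tier_down' MIGRATE FROM POOL 'fast' THRESHOLD(80,60)
  #        WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME) TO POOL 'slow'
  mmapplypolicy fs0 -P /tmp/tier.pol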

Re: [gpfsug-discuss] mmsysmon.py revisited

2017-07-19 Thread david_johnson
We have an FDR14 Mellanox fabric, probably a similar interrupt load to OPA.

-- ddj
Dave Johnson

On Jul 19, 2017, at 1:52 PM, Jonathon A Anderson wrote:

>> It might be a problem specific to your system environment or a wrong
>> configuration therefore please get