We use mmsdrrestore after the node boots. In our case these are diskless nodes
provisioned by xCAT. The post-install script takes care of ensuring InfiniBand
is lit up, then does the mmsdrrestore followed by mmstartup.
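For anyone curious, a minimal sketch of that step (the node name and paths here
are invented; our real script does more checking):

#!/bin/bash
# Sketch only: after a diskless boot, pull the cluster configuration back
# from a node that holds a good copy of it, then start GPFS.
# "storage001" is a placeholder for such a node.
/usr/lpp/mmfs/bin/mmsdrrestore -p storage001 -R /usr/bin/scp
/usr/lpp/mmfs/bin/mmstartup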
-- ddj
Dave Johnson
> On Jan 29, 2021, at 2:47 PM, Ruffner, Scott
You would want to look for examples of external scripts that work on the result
of running the policy engine in listing mode. The one issue that might need
some attention is the way that gpfs quotes unprintable characters in the
pathname. So the policy engine generates the list and your
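As a bare-bones sketch of the listing-mode flow (the rule name, filesystem, and
prefix below are placeholders, and un-escaping odd pathnames is left to your
post-processing script):

# policy.rules -- a trivial listing rule
RULE 'listall' LIST 'allfiles'

# run the policy engine in list-only mode; with -I defer and -f it writes
# the file lists instead of acting on them (something like /tmp/fs1.list.allfiles)
/usr/lpp/mmfs/bin/mmapplypolicy /gpfs/fs1 -P policy.rules -I defer -f /tmp/fs1

# each line ends with the pathname after a " -- " separator; unprintable
# characters in the path come out escaped, so handle that before trusting it
sed 's/^.* -- //' /tmp/fs1.list.allfiles > /tmp/fs1.paths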
Most of our clients are network booted and diskless, so no swap is possible.
GPFS still works until someone runs the node out of memory.
-- ddj
Dave Johnson
> On Nov 8, 2019, at 11:25 AM, Damir Krstic wrote:
>
>
> I was wondering if it's safe to turn off swap on gpfs client machines?
Restarting subnet manager in general is fairly harmless. It will cause a heavy
sweep of the fabric when it comes back up, but there should be no LID
renumbering. Traffic may be held up during the scanning and rebuild of the
routing tables.
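In practice that just means bouncing the SM service on whichever node runs it,
e.g. with opensm under systemd (assuming that is your setup):

# restart the subnet manager; expect a heavy sweep of the fabric afterwards,
# but no LID renumbering
systemctl restart opensm
# watch the sweep progress in the logs
journalctl -u opensm -f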
Losing a subnet manager for a period of time would
We are starting a rolling upgrade to 5.0.3-x, and gplbin compiles with non-fatal
warnings at that version. It seems to run fine. The rest of the cluster is
still at 4.2.3-10, but only on the RHEL 7.6 kernel. Is there a reason not to go
for the latest release on either the 4.x or 5.x line?
[root@xxx
I installed it on a client machine that was accidentally upgraded to RHEL 7.7.
There were two type-mismatch warnings during the gplbin RPM build, but GPFS
started up and mounted the filesystem successfully. The client is running Scale
5.0.3.
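For reference, rebuilding the portability layer there was just the usual step,
something along these lines (one way to do it; adjust to your install):

# rebuild the GPFS portability layer against the running kernel;
# --build-package should also leave behind a gpfs.gplbin RPM that can be
# pushed to identically configured nodes
/usr/lpp/mmfs/bin/mmbuildgpl --build-package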
-- ddj
Dave Johnson
> On Aug 16, 2019, at 11:49 AM, Lo
Thank you, I trust that it is fixed now; I will check it when I have a chance.
The protocols version allowed me to proceed, I just didn't want others to run
into the same issue.
-- ddj
Dave Johnson
> On Mar 22, 2019, at 9:18 AM, Nariman Nasef wrote:
>
> This issue has been addressed
The underlying device in this context is the NSD (Network Shared Disk). This
has no relation at all to 512-byte or 4K disk blocks. The block size here is
usually around a meg, and always a power of two.
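You can see the actual value for your filesystem with mmlsfs, e.g. (filesystem
name invented):

# report the filesystem block size -- the "usually around a meg" number
/usr/lpp/mmfs/bin/mmlsfs fs1 -B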
-- ddj
Dave Johnson
> On Mar 21, 2019, at 9:22 AM, Dorigo Alvise (PSI) wrote:
>
> Hi,
> I'm a little bit puzzled
But heaven help you if you export the GPFS filesystem over NFS or CIFS.
-- ddj
Dave Johnson
> On Aug 23, 2018, at 11:23 AM, Marc A Kaplan wrote:
>
> Millions of files per directory, may well be a mistake...
>
> BUT there are some very smart use cases that might take advantage of GPFS
> having good
Yes, the arrays are in different buildings. We want to spread the activity over
more servers if possible, but recognize the extra load that rebalancing would
entail. The system is busy all the time.
I have considered using QoS when we run policy migrations but haven't yet
because I don't know
Does anyone have a good rule of thumb for IOPS to allow for background QoS
tasks?
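What I have in mind is roughly the following; the numbers are pure guesses,
hence the question:

# throttle maintenance tasks (mmrestripefs, mmapplypolicy, ...) while leaving
# normal I/O unthrottled -- 1000 IOPS is a guess, not a recommendation
/usr/lpp/mmfs/bin/mmchqos fs1 --enable pool=*,maintenance=1000IOPS,other=unlimited
# check what is currently in effect
/usr/lpp/mmfs/bin/mmlsqos fs1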
-- ddj
Dave Johnson
> On Aug 20, 2018, at 2:02 PM, Frederick Stock wrote:
>
> That should do what you want. Be aware that mmrestripefs generates
> significant IO load so you should either use the QoS
I would, as I suggested, add the new NSD into a new pool in the same filesystem.
Then I would migrate all the files off the old pool onto the new one. At that
point you can mmdeldisk the old ones or decide what else you'd want to do with
them.
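Roughly like this (pool, disk, and file names are invented for the example;
/usr/lpp/mmfs/bin assumed to be in PATH):

# 1. add the new NSDs into a new pool of the same filesystem
#    (the stanza file assigns them pool=newpool)
mmadddisk fs1 -F newdisks.stanza

# 2. drain the old pool with a simple migration policy
cat > migrate.rules <<'EOF'
RULE 'drain' MIGRATE FROM POOL 'oldpool' TO POOL 'newpool'
EOF
mmapplypolicy fs1 -P migrate.rules -I yes

# 3. once the old pool is empty, remove (or repurpose) the old disks
mmdeldisk fs1 "olddisk1;olddisk2"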
-- ddj
Dave Johnson
> On Jul 10, 2018, at 12:29
Could you expand on what is meant by "same environment"? Can I export the same
fs in one cluster from CNFS nodes (i.e. the NSD cluster, with no root squash)
and also export the same fs in another cluster (a client cluster, with root
squash) using Ganesha?
-- ddj
Dave Johnson
> On Jun 13, 2018, at
Another possible workaround would be to add wrappers for these apps and only
add the AFM-based GPFS directory to LD_LIBRARY_PATH when about to launch the
app.
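Something like this hypothetical wrapper, where the app name and AFM path are
made up:

#!/bin/bash
# Only the wrapped application sees the AFM-cached library directory;
# everything else on the node keeps resolving libraries locally.
export LD_LIBRARY_PATH="/gpfs/afmcache/sw/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
exec /gpfs/afmcache/sw/bin/the-app "$@"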
-- ddj
Dave Johnson
> On May 30, 2018, at 8:26 AM, Peter Serocka wrote:
>
> As a quick means, why not adding /usr/lib64 at the
We use a dependent fileset for each research group / investigator. We do this
mainly so we can apply fileset quotas (see the sketch after this list). We tried
independent filesets, but they were quite inconvenient:
1) only a limited number of independent filesets can be created, compared to dependent ones
2) the requirement to manage the number of
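The per-group setup is then just something like this (names and limits invented):

# create a dependent fileset for a group, link it into the namespace,
# and put a block quota on it (assumes /usr/lpp/mmfs/bin in PATH)
mmcrfileset fs1 grp_smith
mmlinkfileset fs1 grp_smith -J /gpfs/fs1/projects/grp_smith
mmsetquota fs1:grp_smith --block 10T:11T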
Until IBM provides a solution, here is my workaround. Add it so it runs before
the gpfs init script; I call it from our custom xCAT diskless boot scripts. It
is based on RHEL 7 and not fully systemd-integrated. YMMV!
Regards,
— ddj
——-
[ddj@storage041 ~]$ cat /etc/init.d/ibready
#! /bin/bash
#
# chkconfig:
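The guts of it just wait for the IB port to come up before gpfs is allowed to
start; a rough sketch of that idea (not the actual script) would be:

#!/bin/bash
# Sketch only: wait up to ~2 minutes for any InfiniBand port to report
# ACTIVE before gpfs startup continues.
for i in $(seq 1 60); do
    grep -qs ACTIVE /sys/class/infiniband/*/ports/*/state && exit 0
    sleep 2
done
echo "ibready: no ACTIVE IB port found" >&2
exit 1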
If the new files need indirect blocks or extended attributes that don’t fit in
the basic inode, additional metadata space would need to be allocated. There
might be other reasons but these come to mind immediately.
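You can check how big the basic inode is on your filesystem (and therefore how
much data or extended attribute content fits inline) with, for example:

# report the inode size in bytes for the filesystem
/usr/lpp/mmfs/bin/mmlsfs fs1 -i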
-- ddj
Dave Johnson
> On Jan 23, 2018, at 12:16 PM, Buterbaugh, Kevin L
>
Filesets and storage pools are for the most part orthogonal concepts. You
would sort your users and apply quotas with filesets. You would use storage
pools underneath filesets and the filesystem to migrate between faster and
slower media. Migration between storage pools is done well by the
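For example, initial placement can steer one fileset's new files onto faster
media with a small policy (all names invented):

# new files in the 'scratch' fileset go to the fast pool, everything else
# to the capacity pool
cat > placement.rules <<'EOF'
RULE 'scratch_fast' SET POOL 'fast' FOR FILESET ('scratch')
RULE 'default' SET POOL 'capacity'
EOF
mmchpolicy fs1 placement.rules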
We have an FDR14 Mellanox fabric, probably a similar interrupt load to OPA.
-- ddj
Dave Johnson
> On Jul 19, 2017, at 1:52 PM, Jonathon A Anderson wrote:
>> It might be a problem specific to your system environment or a wrong
>> configuration therefore please get