Hey Bob,

Ditto on what Aaron said; it sounds as if the last fs manager might need a nudge. Things can get weird, though, when a filesystem isn't mounted anywhere but a manager is still needed for an operation, so I would keep an eye on the RAS logs on the cluster manager during the kick, just to make sure the management duty isn't bouncing between nodes (which in turn can cause waiters).
-Jordan

On Sat, Jun 8, 2019 at 9:16 PM Aaron Knister <[email protected]> wrote:
> Bob, I wonder if something like an “mmdf” or an “mmchmgr” would trigger
> the internal mounts to release.
>
> Sent from my iPhone
>
> On Jun 8, 2019, at 13:22, Oesterlin, Robert <[email protected]> wrote:
>
> I have a few file systems that are showing “internal mount” on my NSD
> servers, even though they are not mounted. I’d like to force them, without
> having to restart GPFS on those nodes - any options?
>
> Not mounted on any other (local cluster) nodes.
>
> Bob Oesterlin
> Sr Principal Storage Engineer, Nuance
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
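For anyone following along, the sequence being suggested might look roughly like this. This is an untested sketch; `fs1` and `nsdnode02` are placeholder names, so substitute your own filesystem and node:

```shell
# Show which nodes still hold the filesystem mounted; internal mounts
# appear here even when the FS is not user-mounted anywhere.
mmlsmount fs1 -L

# Check which node currently holds the FS manager role.
mmlsmgr fs1

# Aaron's suggestion: a query such as mmdf may prompt the daemon to
# drop the internal mount once the operation completes.
mmdf fs1

# Jordan's suggestion: nudge the FS manager role to another node, then
# watch the cluster manager's log (/var/adm/ras/mmfs.log.latest on that
# node) to make sure the appointment isn't bouncing.
mmchmgr fs1 nsdnode02
```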
