If there's any I/O going to the filesystem at all, GPFS has to keep it
internally mounted on at least a few nodes, such as the token managers and
the filesystem manager.
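
You can see that for yourself; something like the following should show which
node currently holds the filesystem manager role and which nodes have the
filesystem mounted (the filesystem name "gpfs1" is just a placeholder):

    mmlsmgr gpfs1        # node currently acting as filesystem manager for gpfs1
    mmlsmount gpfs1 -L   # every node with gpfs1 mounted; remote-cluster nodes
                         # are listed along with their cluster name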

I *believe* that holds true even for remote clusters, in that they still
need to reach back to the owning ("local") cluster when operating on the
filesystem.  I could be wrong about that, though.
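
So before an mmchfs that requires the filesystem to be unmounted (changing
the default mount point with -T is one of those), I'd check for lingering
mounts first, remote clusters included. A rough sketch, again with
placeholder device and mount point names:

    mmlsmount gpfs1 -L            # any nodes still listed, local or remote,
                                  # will block the change
    # unmount everywhere, including on the remote clusters, then:
    mmchfs gpfs1 -T /gpfs/new     # change the default mount point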

On Sun, Jun 9, 2019, 09:06 Oesterlin, Robert <[email protected]>
wrote:

> Thanks for the suggestions - as it turns out, it was the **remote**
> mounts causing the issues - which surprises me. I wanted to do a “mmchfs”
> on the local cluster, to change the default mount point. Why would GPFS
> care if it’s remote mounted?
>
> Oh - well…
>
> Bob Oesterlin
> Sr Principal Storage Engineer, Nuance
>