Hi,

mmchfs Device -o syncnfs is the correct way of setting syncnfs so that it applies to the file system on both the home and the remote cluster.

On 4.2.3+, syncnfs is the default option on Linux, which means GPFS will implement the syncnfs behavior regardless of what the mount command says.

The documentation indicates that mmmount Device -o syncnfs=yes is the correct syntax. When I tried that, I do see 'syncnfs=yes' in the output of the 'mount' command.

To change the remote mount option so that you don't have to specify it on the command line every time you run mmmount, use mmremotefs update -o on the remote cluster instead of mmchfs.
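The steps above can be sketched as the following admin-command sequence. This is an illustrative fragment, not something to paste blindly: "fsname" stands in for the actual file system device name, and the mmremotefs/mmchfs invocations must run on the appropriate cluster (remote vs. home) as described.

```shell
# On the home (owning) cluster: make syncnfs a permanent file system option.
# "fsname" is a placeholder for the real device name.
mmchfs fsname -o syncnfs

# On the remote (protocol) cluster: persist the option for the remote mount,
# so it need not be given on every mmmount invocation.
mmremotefs update fsname -o syncnfs

# One-off mount with the option given explicitly (note the =yes syntax):
mmmount fsname -o syncnfs=yes

# Verify which options the mounted file system reports:
mount | grep fsname
```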
Regards,
The Spectrum Scale (GPFS) team

------------------------------------------------------------------------------------------------------------------
If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWorks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479.

If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract, please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team.

From: "Billich Heinrich Rainer (PSI)" <[email protected]>
To: gpfsug main discussion list <[email protected]>
Date: 07/06/2018 12:06 AM
Subject: [gpfsug-discuss] -o syncnfs has no effect?
Sent by: [email protected]

Hello,

I try to mount a file system with "-o syncnfs", as we'll export it with CES/Protocols. But I never see the mount option displayed when I do

# mount | grep fs-name

This is a remote cluster mount; we'll run the protocol nodes in a separate cluster. On the home cluster I do see the option 'nfssync' in the output of 'mount'.

My conclusion is that the mount option "syncnfs" has no effect on remote cluster mounts, which seems a bit strange. Can someone please clarify this? What is the impact on protocol nodes exporting remote cluster mounts? Is there any chance of data corruption? Or are some mount options implicitly inherited from the home cluster? I've read that 'syncnfs' is the default on Linux, but I would like to know for sure.

Funny enough, I can pass arbitrary options with

# mmmount <fs-name> -o some-garbage

which are silently ignored.

I did 'mmchfs -o syncnfs' on the home cluster, and the syncnfs option is present in /etc/fstab on the remote cluster.
I did not remount on all nodes.

Thank you, I'll appreciate any hints or replies.

Heiner

Versions:
Remote cluster 5.0.1 on RHEL7.4 (mounts the fs and runs protocol nodes)
Home cluster 4.2.3-8 on RHEL6 (exports the fs, owns the storage)
Filesystem: 17.00 (4.2.3.0)
All Linux x86_64 with Spectrum Scale Standard Edition

--
Paul Scherrer Institut
Science IT
Heiner Billich
WHGA 106
CH 5232 Villigen PSI
056 310 36 02
https://www.psi.ch

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
