It may depend on which state the NSDs are in with respect to the node in question. If you run 'mmfsadm dump nsd | egrep "moved|error|broken"' from that node and it reports anything, that is likely the culprit.
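A minimal sketch of that check, run as root on the node that has lost local access (the dump output format varies between Scale releases, so treat the grep pattern as a heuristic):

    # Look for NSDs whose local device state is flagged as suspect:
    mmfsadm dump nsd | egrep "moved|error|broken"

    # For comparison, confirm the node can actually see a local device
    # for each NSD it is supposed to serve:
    mmlsnsd -m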
One or two of those states can be fixed by mmnsddiscover; the other(s) require a kick of mmfsd to get the NSDs back. I never remember which is which.

-Jordan

On Tue, Jun 25, 2019, 13:13 Jan-Frode Myklebust <[email protected]> wrote:

> I've had a situation recently where mmnsddiscover didn't help, but
> mmshutdown/mmstartup on that node did fix it.
>
> This was with v5.0.2-3 on ppc64le.
>
> -jf
>
> On Tue, 25 Jun 2019 at 17:02, Son Truong <[email protected]> wrote:
>
>> Hello Renar,
>>
>> Thanks for that command, very useful, and I can now see the problematic
>> NSDs are all served remotely.
>>
>> I have double-checked the multipaths and devices, and I can see these
>> NSDs are available locally.
>>
>> How do I get GPFS to recognise this and serve them out via 'localhost'?
>>
>> mmnsddiscover -d <NSD> seemed to have brought two of the four
>> problematic NSDs back to being served locally, but the other two are
>> not behaving. I have double-checked the availability of these devices
>> and their multipaths, but everything on that side seems fine.
>>
>> Any more ideas?
>>
>> Regards,
>> Son
>>
>> ---------------------------
>>
>> Message: 2
>> Date: Tue, 25 Jun 2019 12:10:53 +0000
>> From: "Grunenberg, Renar" <[email protected]>
>> Subject: Re: [gpfsug-discuss] rescan-scsi-bus.sh and "Local access to
>> NSD failed with EIO, switching to access the disk remotely."
>>
>> Hello Son,
>>
>> you can check the access to the NSDs with 'mmlsdisk <fsname> -m'. This
>> gives you a column labelled "IO performed on node". On an NSD server
>> you should see localhost; on an NSD client you see the hosting NSD
>> server for each device.
>>
>> Regards,
>> Renar Grunenberg
>> HUK-COBURG, Abteilung Informatik - Betrieb
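A quick sketch of Renar's check ('gpfs0' is a placeholder for your file system name):

    # Show which node performs the IO for each disk in the file system:
    mmlsdisk gpfs0 -m

    # On an NSD server, a healthy locally attached disk reports
    # "localhost" in the "IO performed on node" column; a disk that has
    # fallen back to remote access reports the serving NSD server instead.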
>> From: [email protected] on behalf of Son Truong
>> Sent: Tuesday, 25 June 2019 13:38
>> To: [email protected]
>> Subject: [gpfsug-discuss] rescan-scsi-bus.sh and "Local access to NSD
>> failed with EIO, switching to access the disk remotely."
>>
>> Hello,
>>
>> I wonder if anyone has seen this? I am (not) having fun with the
>> rescan-scsi-bus.sh command, especially with the -r switch. Even though
>> no devices have been removed, the script seems to interrupt currently
>> working NSDs, and these messages appear in the mmfs logs:
>>
>> 2019-06-25_06:30:48.706+0100: [I] Connected to <IP> <node> <c0n0>
>> 2019-06-25_06:30:48.764+0100: [E] Local access to <NSD> failed with
>> EIO, switching to access the disk remotely.
>> 2019-06-25_06:30:51.187+0100: [E] Local access to <NSD> failed with
>> EIO, switching to access the disk remotely.
>> 2019-06-25_06:30:51.188+0100: [E] Local access to <NSD> failed with
>> EIO, switching to access the disk remotely.
>> 2019-06-25_06:30:51.188+0100: [N] Connecting to <IP> <node> <c0n5>
>> 2019-06-25_06:30:51.195+0100: [I] Connected to <IP> <node> <c0n5>
>> 2019-06-25_06:30:59.857+0100: [N] Connecting to <IP> <node> <c0n4>
>> 2019-06-25_06:30:59.863+0100: [I] Connected to <IP> <node> <c0n4>
>> 2019-06-25_06:33:30.134+0100: [E] Local access to <NSD> failed with
>> EIO, switching to access the disk remotely.
>> 2019-06-25_06:33:30.151+0100: [E] Local access to <NSD> failed with
>> EIO, switching to access the disk remotely.
>>
>> These messages appear at roughly the same time each day, and I've
>> checked the NSDs via the mmlsnsd and mmlsdisk commands: they are all
>> 'ready' and 'up'. The multipaths to these NSDs are all fine too.
>>
>> Is there a way of finding out what 'access' (local or remote) a
>> particular node has to an NSD? And is there a command to force it to
>> switch to local access? 'mmnsddiscover' returns nothing and runs really
>> fast (contrary to the statement 'This may take a while' when it runs).
>>
>> Any ideas appreciated!
>>
>> Regards,
>> Son
>>
>> Son V Truong - Senior Storage Administrator
>> Advanced Computing Research Centre, IT Services, University of Bristol
>> Email: [email protected]
>> Tel: +44 (0) 7732 257 232
>> Address: 31 Great George Street, Bristol, BS1 5QD
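Pulling the thread's answers together, the recovery sequence looks roughly like this (a sketch assembled from the commands quoted above; the file system name 'gpfs0' and NSD name 'nsd_a' are placeholders):

    # 1. Identify disks that are being served remotely instead of locally:
    mmlsdisk gpfs0 -m

    # 2. On the affected node, look for NSDs in a bad local-access state:
    mmfsadm dump nsd | egrep "moved|error|broken"

    # 3. Ask GPFS to rediscover the local path to an affected NSD:
    mmnsddiscover -d nsd_a

    # 4. If the disk is still served remotely, restart GPFS on the node
    #    (this is what finally fixed it on v5.0.2-3/ppc64le above):
    mmshutdown
    mmstartup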
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
