Re: [gpfsug-discuss] Portability interface

2020-09-22 Thread Truong Vu


You are correct: "identical architecture" means the same machine
hardware name as shown by the -m option of the uname command.
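
For example, a quick way to compare nodes (a minimal illustration; the
x86_64 output is only an example and will vary by platform):

   # Run on the node where the portability layer was built and on each
   # node that will reuse the resulting binaries; the values must match.
   uname -m
   x86_64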

Thanks,
Tru.



From:   gpfsug-discuss-requ...@spectrumscale.org
To: gpfsug-discuss@spectrumscale.org
Date:   09/22/2020 05:18 AM
Subject:[EXTERNAL] gpfsug-discuss Digest, Vol 104, Issue 23
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Send gpfsug-discuss mailing list submissions to
 gpfsug-discuss@spectrumscale.org

To subscribe or unsubscribe via the World Wide Web, visit

http://gpfsug.org/mailman/listinfo/gpfsug-discuss

or, via email, send a message with subject or body 'help' to
 gpfsug-discuss-requ...@spectrumscale.org

You can reach the person managing the list at
 gpfsug-discuss-ow...@spectrumscale.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of gpfsug-discuss digest..."


Today's Topics:

   1. Re: Checking if an AFM-managed file is still inflight
      (Dorigo Alvise (PSI))
   2. Portability interface (Jonathan Buzzard)


--

Message: 1
Date: Mon, 21 Sep 2020 11:17:35 +
From: "Dorigo Alvise (PSI)" 
To: gpfsug main discussion list 
Subject: Re: [gpfsug-discuss] Checking if an AFM-managed file is still
 inflight
Message-ID: <7bb1dd94-e99c-4a66-baf2-be287ee57...@psi.ch>
Content-Type: text/plain; charset="utf-8"

Thank you Venkat, the 'dirty' and 'append' flags seem quite useful.

   A



From:  on behalf of Venkateswara R Puvvada
Reply-To: gpfsug main discussion list
Date: Monday, 21 September 2020 12:57
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Checking if an AFM-managed file is still
inflight

The tspcacheutil command provides information about the file's
replication state. You can also run a policy to find these files.

Example:

tspcacheutil /gpfs/gpfs1/sw2/1.txt
inode: ino=524290 gen=235142808 uid=1000 gid=0 size=3 mode=0200100777
nlink=1
   ctime=1600366912.382081156 mtime=1600275424.692786000
   cached 1  hasState 1  local 0
   create 0  setattr  0  dirty 0  link 0  append 0
pcache: parent ino=524291 foldval=0x6AE011D4 nlink=1
remote: ino=56076 size=3 nlink=1 fhsize=24 version=0
ctime=1600376836.408694099 mtime=1600275424.692786000

Cached - The file is cached. For a directory, readdir+lookup is complete.
hasState - The file/dir has remote attributes for replication.
local - The file/dir is local; it won't be replicated to home or revalidated
with home.
Create - The file/dir is newly created, not yet replicated.
Setattr - Attributes (chown, chmod, mmchattr, setACL, setEA, etc.) were
changed on the dir/file, but not replicated yet.
Dirty - The file has been changed in the cache, but not replicated yet. For a
directory this means that files inside it have been removed or renamed.
Link - A hard link for the file has been created, but not replicated yet.
Append - The file has been appended, but not replicated yet. For a directory
this is the complete bit, which indicates that readdir was performed.
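
If you go the policy route mentioned above, a minimal sketch might look
like the following (the rule name, file names, and paths are illustrative;
the 'P' and 'w' MISC_ATTRIBUTES flags are the AFM flags discussed later in
this thread):

   # Create a small policy file that lists files AFM still considers
   # in flight ('P' = managed by AFM, 'w' = being transferred).
   cat > /tmp/afm_inflight.pol <<'EOF'
   RULE 'afmInFlight' LIST 'inFlight'
        WHERE MISC_ATTRIBUTES LIKE '%P%' AND MISC_ATTRIBUTES LIKE '%w%'
   EOF
   # -I defer produces the candidate lists without executing any actions.
   mmapplypolicy /gpfs/gpfs1/sw2 -P /tmp/afm_inflight.pol -I defer -f /tmp/afm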

~Venkat (vpuvv...@in.ibm.com)



From:"Dorigo Alvise (PSI)" 
To:gpfsug main discussion list 
Date:09/21/2020 04:02 PM
Subject:[EXTERNAL] Re: [gpfsug-discuss] Checking if an AFM-managed
file is still inflight
Sent by:gpfsug-discuss-boun...@spectrumscale.org




The information reported by that command (on both the cache and home side)
is size, blocks, block size, and times.
I don't think that is enough to decide that AFM has completed the transfer
of a file.
Did I possibly miss something else?
It would be nice to have a flag (like those reported by the policy: 'P',
managed by AFM, and 'w', being transferred) that tells us whether AFM
considers the file synced to home or not yet.

   Alvise

From:  on behalf of Olaf Weiser
Reply-To: gpfsug main discussion list
Date: Monday, 21 September 2020 11:55
To: "gpfsug-discuss@spectrumscale.org"
Cc: "gpfsug-discuss@spectrumscale.org"
Subject: Re: [gpfsug-discuss] Checking if an AFM-managed file is still
inflight

Are you looking for something like this:
mmafmlocal ls filename    or    stat filename





- Original message -
From: "Dorigo Alvise (PSI)" 
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: gpfsug main discussion list 
Cc:
Subject: [EXTERNAL] [gpfsug-discuss] Checking if an AFM-managed file is
still inflight
Date: Mon, Sep 21, 2020 10:45 AM


Dear GPFS users,
I know that through a policy one can tell whether a file is still being
transferred from the cache to home by AFM.

I wonder if there is another method @cache or @home side, faster and less
invasive (a policy, as far as I know, can put some pressure on the system
when there are many files).
I quickly checked mmlsattr, which seems not to be AFM-aware (though there is
a flags field that can show several things, like compression status,
archive, etc.).


Re: [gpfsug-discuss] Spectrum Scale Cygwin cmd delays

2019-01-24 Thread Truong Vu

Hi Renar,

Let's see if /bin/rm really is the problem here. Can you run the
command again without cleaning up the temp files, as follows:

DEBUG=1 keepTempFiles=1 mmgetstate -a
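
If /bin/rm is indeed the culprit, the per-command scratch files should then
remain for inspection, for example (a hedged sketch; the exact names depend
on the PID of the run, as the listing quoted below shows):

   ls /var/mmfs/tmp/*.mmgetstate.*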

Thanks,
Tru.




From:   gpfsug-discuss-requ...@spectrumscale.org
To: gpfsug-discuss@spectrumscale.org
Date:   01/23/2019 07:46 AM
Subject:gpfsug-discuss Digest, Vol 84, Issue 32
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Send gpfsug-discuss mailing list submissions to
 gpfsug-discuss@spectrumscale.org

To subscribe or unsubscribe via the World Wide Web, visit

http://gpfsug.org/mailman/listinfo/gpfsug-discuss

or, via email, send a message with subject or body 'help' to
 gpfsug-discuss-requ...@spectrumscale.org

You can reach the person managing the list at
 gpfsug-discuss-ow...@spectrumscale.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of gpfsug-discuss digest..."


Today's Topics:

   1. Re: Spectrum Scale Cygwin cmd delays (Grunenberg, Renar)


--

Message: 1
Date: Wed, 23 Jan 2019 12:45:39 +
From: "Grunenberg, Renar" 
To: 'gpfsug main discussion list' 
Subject: Re: [gpfsug-discuss] Spectrum Scale Cygwin cmd delays
Message-ID: <349cb338583a4c1d996677837fc65...@smxrf105.msg.hukrf.de>
Content-Type: text/plain; charset="utf-8"

Hallo All,

As a data point on the problem, it seems that all the delays are happening
here:

DEBUG=1 mmgetstate -a

...
/bin/rm
-f /var/mmfs/gen/mmsdrfs.1256 /var/mmfs/tmp/allClusterNodes.mmgetstate.1256 
/var/mmfs/tmp/allQuorumNodes.mmgetstate.1256 
/var/mmfs/tmp/allNonQuorumNodes.mmgetstate.1256 
/var/mmfs/ssl/stage/tmpKeyData.mmgetstate.1256.pub 
/var/mmfs/ssl/stage/tmpKeyData.mmgetstate.1256.priv 
/var/mmfs/ssl/stage/tmpKeyData.mmgetstate.1256.cert 
/var/mmfs/ssl/stage/tmpKeyData.mmgetstate.1256.keystore 
/var/mmfs/tmp/nodefile.mmgetstate.1256 /var/mmfs/tmp/diskfile.mmgetstate.1256 
/var/mmfs/tmp/diskNamesFile.mmgetstate.1256


Any word on whether this will be fixed in the near future is welcome.

Regards Renar


Renar Grunenberg
Abteilung Informatik - Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon: 09561 96-44110
Telefax: 09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet: www.huk.de


From: Grunenberg, Renar
Sent: Tuesday, 22 January 2019 18:10
To: 'gpfsug main discussion list'
Subject: RE: [gpfsug-discuss] Spectrum Scale Cygwin cmd delays

Hallo Roger,
first, thanks for the tip. But we decided to separate the Linux I/O cluster
from the Windows client-only cluster because of security requirements and
ssh management requirements. This lets us use local named admins on Windows,
and on Linux a daemon and a separate admin-interface network for
password-less root ssh. Your hint seems to be CCR-related, or is this a
Cygwin problem?
@Spectrum Scale Team:
Point 1: IPv6 can't be disabled because applications want to use it. But the
mmcmi command already gives us the right IPv4 addresses.
Point 2: There are no DNS issues.
Point 3: We must check these.
Any recommendations on Roger's statements?

Regards Renar

From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of Roger Moye
Sent: Tuesday, 22 January 2019 16:43
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Spectrum Scale Cygwin cmd delays

We experienced the same issue and were 

Re: [gpfsug-discuss] Sudo wrappers

2018-10-10 Thread Truong Vu

Yes, you can use mmchconfig for that.

eg: mmchconfig sudoUser=gpfsadmin
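
A quick way to verify the setting afterwards (a minimal check; the output
line is illustrative):

   mmlsconfig sudoUser
   sudoUser gpfsadmin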

Thanks,
Tru.


Message: 2
Date: Wed, 10 Oct 2018 15:58:51 +
From: Simon Thompson 
To: "gpfsug-discuss@spectrumscale.org"
 
Subject: [gpfsug-discuss] Sudo wrappers
Message-ID: <88e47b96-df0b-428a-92f6-1aeaea4aa...@bham.ac.uk>
Content-Type: text/plain; charset="utf-8"

OK, so I finally got a few minutes to play with the sudo wrappers.

I read the docs on the GPFS website, set up my gpfsadmin user, and made it
so that root can ssh as the gpfsadmin user to the host.

Except of course I've clearly misunderstood things, because when I do:

[myusername@bber-dssg02 bin]$ sudo /usr/lpp/mmfs/bin/mmgetstate -a
myusername@bber-afmgw01.bb2.cluster's password:
myusername@bber-dssg02.bb2.cluster's password:
myusername@bber-dssg01.bb2.cluster's password:
myusername@bber-afmgw02.bb2.cluster's password:

Now 'myusername' is... my username, not 'gpfsadmin'. What I really don't
want to do is permit root to ssh to all the hosts in the cluster
as 'myusername'. I kinda thought the username it sshes as would be
configurable, but apparently not?

Annoyingly, I can do:
[myusername@bber-dssg02 bin]$ sudo
SUDO_USER=gpfsadmin /usr/lpp/mmfs/bin/mmgetstate -a

And that works fine. So is it possible to set, in a config file, the user
that the sudo wrapper works as?

(I get there are cases where you want to ssh as the original calling user)

Simon


Re: [gpfsug-discuss] Lroc on NVME

2018-06-12 Thread Truong Vu

Yes, older versions of GPFS don't recognize /dev/nvme*, so you would
need the /var/mmfs/etc/nsddevices user exit. On newer GPFS versions, the
nvme devices are also generic, so it is good that you are using the same
NSD sub-type.
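
For reference, a minimal sketch of such a user exit (the device pattern and
the 'generic' sub-type are illustrative; the shipped
/usr/lpp/mmfs/samples/nsddevices.sample is the authoritative template):

   #!/bin/ksh
   # Emit "deviceName deviceType" pairs for devices GPFS should consider.
   for dev in /dev/nvme*n1 ; do
       [ -e "$dev" ] && echo "${dev#/dev/} generic"
   done
   # Exit 0 so GPFS still performs its built-in device discovery as well.
   exit 0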

Cheers,
Tru.



From:   gpfsug-discuss-requ...@spectrumscale.org
To: gpfsug-discuss@spectrumscale.org
Date:   06/12/2018 06:47 AM
Subject:gpfsug-discuss Digest, Vol 77, Issue 15
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Send gpfsug-discuss mailing list submissions to
 gpfsug-discuss@spectrumscale.org

To subscribe or unsubscribe via the World Wide Web, visit

http://gpfsug.org/mailman/listinfo/gpfsug-discuss

or, via email, send a message with subject or body 'help' to
 gpfsug-discuss-requ...@spectrumscale.org

You can reach the person managing the list at
 gpfsug-discuss-ow...@spectrumscale.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of gpfsug-discuss digest..."


Today's Topics:

   1. Re: RHEL updated to 7.5 instead of 7.4 (Felipe Knop)
   2. Re: RHEL updated to 7.5 instead of 7.4 (Jeffrey R. Lang)
   3. Lroc on NVME (Peter Childs)


--

Message: 1
Date: Mon, 11 Jun 2018 09:52:16 -0400
From: "Felipe Knop" 
To: gpfsug main discussion list 
Subject: Re: [gpfsug-discuss] RHEL updated to 7.5 instead of 7.4
Message-ID:




Content-Type: text/plain; charset="utf-8"

Fred,

Correct. The FAQ should be updated shortly.

  Felipe


Felipe Knop k...@us.ibm.com
GPFS Development and Security
IBM Systems
IBM Building 008
2455 South Rd, Poughkeepsie, NY 12601
(845) 433-9314  T/L 293-9314





From:"Frederick Stock" 
To:  gpfsug main discussion list 
Date:06/11/2018 07:52 AM
Subject: Re: [gpfsug-discuss] RHEL updated to 7.5 instead of 7.4
Sent by: gpfsug-discuss-boun...@spectrumscale.org



Spectrum Scale 4.2.3.9 does support RHEL 7.5.

Fred
__
Fred Stock | IBM Pittsburgh Lab | 720-430-8821
sto...@us.ibm.com



From:"Sobey, Richard A" 
To:gpfsug main discussion list 
Date:06/11/2018 06:59 AM
Subject:Re: [gpfsug-discuss] RHEL updated to 7.5 instead of 7.4
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Thanks Simon. Do you mean you pinned the minor release to 7.X but yum
upgraded you to 7.Y? This has just happened to me:

[root@ ~]# subscription-manager release
Release: 7.4
[root@ ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.5 (Maipo)

Granted I didn't issue a yum clean all after changing the release; however,
I've never seen this happen before.
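
For anyone hitting the same thing, the sequence that should avoid it is
roughly (a hedged sketch using standard subscription-manager/yum commands):

   subscription-manager release --set=7.4
   yum clean all                        # discard cached 7.5 repo metadata
   subscription-manager release --show  # confirm the pin took effect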

Anyway, I need to either downgrade back to 7.4 or upgrade GPFS, whichever
will be the best supported. I need to learn to pay attention to what kernel
version I'm being updated to in future!

Cheers
Richard

From: gpfsug-discuss-boun...@spectrumscale.org
 On Behalf Of Simon Thompson (IT
Research Support)
Sent: 11 June 2018 11:50
To: gpfsug main discussion list 
Subject: Re: [gpfsug-discuss] RHEL updated to 7.5 instead of 7.4

We have on our DSS-G...

Have you looked at:
https://access.redhat.com/solutions/238533
?

Simon

From:  on behalf of "Sobey, Richard A"
Reply-To: "gpfsug-discuss@spectrumscale.org"
Date: Monday, 11 June 2018 at 11:46
To: "gpfsug-discuss@spectrumscale.org"
Subject: [gpfsug-discuss] RHEL updated to 7.5 instead of 7.4

Has anyone ever used subscription-manager to set a release to 7.4 only for
the system to upgrade to 7.5 anyway?

Also is 7.5 now supported with the 4.2.3.9 PTF or should I concentrate on
downgrading back to 7.4?

Richard





--

Message: 2
Date: Mon, 11 Jun 2018 15:01:48 +
From: "Jeffrey R. Lang" 
To: gpfsug main discussion list 
Subject: Re: [gpfsug-discuss] RHEL updated to 7.5 instead of 7.4
Message-ID:




Content-Type: text/plain; charset="us-ascii"

Yes, I recently had this happen.

It was determined that the caches had been updated to the 7.5 packages
before I set the release to 7.4. Since 

Re: [gpfsug-discuss] Working with per-fileset quotas

2017-12-08 Thread Truong Vu

1) That is correct.  The grace period can't be set for per-fileset
quota.  As you pointed out, you can only change the grace period for
user, group or fileset.

If you want a particular fileset to have no grace period, you can
set the hard limit to be the same as the soft limit.
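
For example (a hedged sketch; the device, fileset, group, and limits are
illustrative):

   # Soft and hard limits equal, so no grace period ever applies.
   mmsetquota hpc-fs:fileset01 --group groupa --block 2G:2G --files 100:100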

When the grace column shows "none", this means the soft limit has
not been reached. Once the soft limit is reached, the grace period
starts counting.

2) To remove explicit quota settings, you need to set the limit to 0.
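
For example (same illustrative names as above):

   # Zeroed limits remove the explicit per-fileset quota entry.
   mmsetquota hpc-fs:fileset01 --group groupa --block 0:0 --files 0:0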




From:   gpfsug-discuss-requ...@spectrumscale.org
To: gpfsug-discuss@spectrumscale.org
Date:   12/08/2017 07:00 AM
Subject:gpfsug-discuss Digest, Vol 71, Issue 19
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Send gpfsug-discuss mailing list submissions to
 gpfsug-discuss@spectrumscale.org

To subscribe or unsubscribe via the World Wide Web, visit

http://gpfsug.org/mailman/listinfo/gpfsug-discuss

or, via email, send a message with subject or body 'help' to
 gpfsug-discuss-requ...@spectrumscale.org

You can reach the person managing the list at
 gpfsug-discuss-ow...@spectrumscale.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of gpfsug-discuss digest..."


Today's Topics:

   1. Working with per-fileset quotas (Keith Ball)


--

Message: 1
Date: Thu, 7 Dec 2017 17:48:49 -0500
From: Keith Ball 
To: gpfsug-discuss@spectrumscale.org
Subject: [gpfsug-discuss] Working with per-fileset quotas
Message-ID:
 
Content-Type: text/plain; charset="utf-8"

Hi All,

In working with per-fileset quotas (not user/group/fileset quotas at the
filesystem level), I have the following issues/questions.

1.) Setting grace periods. I notice that some of the group quotas in a
specific fileset have a grace period (or remaining grace period) of X days,
while others report "none":

# mmrepquota -g --block-size G hpc-fs:fileset01
                              Block Limits                      |                   File Limits
Name    fileset    type    GB  quota  limit  in_doubt   grace   |  files    quota    limit  in_doubt   grace
groupa  fileset01  GRP   2257      2      2         0   4 days  |    143      100      100         0  4 days
root    fileset01  GRP    710      0      0         0     none  |  15578        0        0         0    none
groupb  fileset01  GRP   2106    400    400         0   4 days  |      1  1048576  1048576         0    none
...

How can I set a grace period of "none" on group quotas? mmsetquota does not
appear (from the man pages) to provide any way to set grace periods for
per-fileset quotas:

mmsetquota Device --grace {user | group | fileset}
           {[--block GracePeriod] [--files GracePeriod]}

How can I set it to "none" or "0days"? (i.e., no grace period given if over
quota). Or, for that matter, set grace periods of any duration at all?


2.) How to remove any explicit quota settings (not just deactivating
default quota settings) at the per-fileset level. The mmdefquotaoff docs
seem to suggest that the "-d" option will not remove explicit per-fileset
quota settings if they are non-zero (so really, what use is the -d option
then?)

Many Thanks!
  Keith


Re: [gpfsug-discuss] Changing ip on spectrum scale cluster with every node down and not connected to the network

2017-10-11 Thread Truong Vu

What you can do is create a network alias to the old IP. Run mmchnode to
change the hostname/IP for the non-quorum nodes first. Make one (or more) of
the nodes you just changed a quorum node. Change all of the quorum nodes
that are still on old IPs to non-quorum, then change the IPs on them.
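
A rough sketch of that sequence (node names, device, and addresses are
illustrative):

   ip addr add 192.168.1.10/24 dev eth0 label eth0:old  # alias with the old IP
   mmchnode --daemon-interface=node1-new -N node1   # renumber a non-quorum node first
   mmchnode --quorum -N node1                       # promote it to quorum
   mmchnode --nonquorum -N oldq1                    # demote a quorum node still on an old IP
   mmchnode --daemon-interface=oldq1-new -N oldq1   # now renumber it too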

Thanks,
Tru.



From:   gpfsug-discuss-requ...@spectrumscale.org
To: gpfsug-discuss@spectrumscale.org
Date:   10/11/2017 04:53 AM
Subject:gpfsug-discuss Digest, Vol 69, Issue 26
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Send gpfsug-discuss mailing list submissions to
 gpfsug-discuss@spectrumscale.org

To subscribe or unsubscribe via the World Wide Web, visit

http://gpfsug.org/mailman/listinfo/gpfsug-discuss

or, via email, send a message with subject or body 'help' to
 gpfsug-discuss-requ...@spectrumscale.org

You can reach the person managing the list at
 gpfsug-discuss-ow...@spectrumscale.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of gpfsug-discuss digest..."


Today's Topics:

   1. Changing ip on spectrum scale cluster with every node down
  and not connected to network. (Andi Rhod Christiansen)
   2. Re: Changing ip on spectrum scale cluster with every node
  down and not connected to network. (Jonathan Buzzard)
   3. Re: Changing ip on spectrum scale cluster with every node
  down and not connected to network. (Andi Rhod Christiansen)
   4. Re: Changing ip on spectrum scale cluster with every node
  down and not connected to network.
  (Simon Thompson (IT Research Support))
   5. Checking a file-system for errors
  (Simon Thompson (IT Research Support))
   6. Re: Changing ip on spectrum scale cluster with every node
  down and not connected to network. (Jonathan Buzzard)


--

Message: 1
Date: Wed, 11 Oct 2017 07:46:03 +
From: Andi Rhod Christiansen 
To: "gpfsug-discuss@spectrumscale.org"
 
Subject: [gpfsug-discuss] Changing ip on spectrum scale cluster with
 every node down and not connected to network.
Message-ID:

<3e6e1727224143ac9b8488d16f40f...@b4rwex01.internal.b4restore.com>
Content-Type: text/plain; charset="us-ascii"

Hi,

Does anyone know how to change the IPs of all the nodes within a cluster
when GPFS and the interfaces are down?
Right now the cluster has been shut down and all ports disconnected (the
ports have been shut down on the new switch).

The problem is that when I try to execute any mmchnode command (as the IBM
documentation states) the command fails, and that makes sense, as the IP on
the interface has been changed without the daemon knowing. But is there a
way to do it manually within the configuration files, so that the GPFS
daemon updates the IPs of all nodes within the cluster? Or does anyone know
of a hack to work around it without having network access?

It is not possible to turn on the switch ports, as the cluster currently
has the same IPs as another cluster on the new switch.

I hope this makes sense; I am relatively new to GPFS/Spectrum Scale.

Venlig hilsen / Best Regards

Andi R. Christiansen

--

Message: 2
Date: Wed, 11 Oct 2017 09:01:47 +0100
From: Jonathan Buzzard 
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] Changing ip on spectrum scale cluster
 with every node down and not connected to network.
Message-ID: <8b9180bf-0bef-4e42-020b-28a961001...@strath.ac.uk>
Content-Type: text/plain; charset=windows-1252; format=flowed

On 11/10/17 08:46, Andi Rhod Christiansen wrote:

[SNIP]

> It is not possible to turn on the switch ports as the cluster has the
> same ips right now as another cluster on the new switch.
>

Er, yes it is. Spin up a new temporary VLAN, drop all the ports for the
cluster in the new temporary VLAN and then bring them up. Basically any
switch on which you can remotely down the ports is going to support
VLANs. Even the crappy 16-port GbE switch I have at home supports them.

JAB.

--
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG


--

Message: 3
Date: Wed, 11 

Re: [gpfsug-discuss] Date formats inconsistent mmfs.log

2017-09-02 Thread Truong Vu

The dates that have the zone abbreviation are from the scripts, which use
the OS date command. The daemon has its own format. This inconsistency
has been addressed in 4.2.2.





From:   gpfsug-discuss-requ...@spectrumscale.org
To: gpfsug-discuss@spectrumscale.org
Date:   09/02/2017 07:00 AM
Subject:gpfsug-discuss Digest, Vol 68, Issue 4
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Send gpfsug-discuss mailing list submissions to
 gpfsug-discuss@spectrumscale.org

To subscribe or unsubscribe via the World Wide Web, visit

http://gpfsug.org/mailman/listinfo/gpfsug-discuss

or, via email, send a message with subject or body 'help' to
 gpfsug-discuss-requ...@spectrumscale.org

You can reach the person managing the list at
 gpfsug-discuss-ow...@spectrumscale.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of gpfsug-discuss digest..."


Today's Topics:

   1. Date formats inconsistent mmfs.log (Sobey, Richard A)


--

Message: 1
Date: Sat, 2 Sep 2017 09:35:34 +
From: "Sobey, Richard A" 
To: "'gpfsug-discuss@spectrumscale.org'"
 
Subject: [gpfsug-discuss] Date formats inconsistent mmfs.log
Message-ID:




Content-Type: text/plain; charset="us-ascii"

Is there a good reason for the date formats in mmfs.log to be inconsistent?
Apart from my OCD getting the better of me, it makes log analysis a bit
difficult.

Sat Sep  2 10:33:42.145 2017: [I] Command: successful mount gpfs
Sat  2 Sep 10:33:42 BST 2017: finished mounting /dev/gpfs
Sat Sep  2 10:33:42.168 2017: [I] Calling user exit script
mmSysMonGpfsStartup: event startup, Async
command /usr/lpp/mmfs/bin/mmsysmoncontrol.
Sat Sep  2 10:33:42.190 2017: [I] Calling user exit script
mmSinceShutdownRoleChange: event startup, Async
command /usr/lpp/mmfs/bin/mmsysmonc.
Sat  2 Sep 10:33:42 BST 2017: [I] sendRasEventToMonitor: Successfully send
a filesystem event to monitor
Sat  2 Sep 10:33:42 BST 2017: [I] The Spectrum Scale monitoring service is
already running. Pid=5134

Cheers
Richard


Re: [gpfsug-discuss] Change to default for verbsRdmaMinBytes?

2017-09-01 Thread Truong Vu

The discrepancy between the mmlsconfig view and mmdiag has been fixed in
GPFS 4.2.3. Note that mmdiag reports the correct default value.
Tru.



From:   gpfsug-discuss-requ...@spectrumscale.org
To: gpfsug-discuss@spectrumscale.org
Date:   09/01/2017 06:43 PM
Subject:gpfsug-discuss Digest, Vol 68, Issue 2
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Send gpfsug-discuss mailing list submissions to
 gpfsug-discuss@spectrumscale.org

To subscribe or unsubscribe via the World Wide Web, visit

http://gpfsug.org/mailman/listinfo/gpfsug-discuss

or, via email, send a message with subject or body 'help' to
 gpfsug-discuss-requ...@spectrumscale.org

You can reach the person managing the list at
 gpfsug-discuss-ow...@spectrumscale.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of gpfsug-discuss digest..."


Today's Topics:

   1. Re: GPFS GUI Nodes > NSD no data (Sobey, Richard A)
   2. Change to default for verbsRdmaMinBytes? (Edward Wahl)
   3. Quorum managers (Joshua Akers)
   4. Re: Change to default for verbsRdmaMinBytes? (Sven Oehme)


--

Message: 1
Date: Fri, 1 Sep 2017 13:36:56 +
From: "Sobey, Richard A" 
To: gpfsug main discussion list 
Subject: Re: [gpfsug-discuss] GPFS GUI Nodes > NSD no data
Message-ID:




Content-Type: text/plain; charset="us-ascii"

Resolved this, guessed at changing GPFSNSDDisk.period to 5.
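
In case it helps others, the sensor interval can be changed with mmperfmon
(a hedged sketch; the sensor name is the one mentioned above):

   mmperfmon config update GPFSNSDDisk.period=5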

From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of Sobey,
Richard A
Sent: 01 September 2017 09:45
To: 'gpfsug-discuss@spectrumscale.org'
Subject: [gpfsug-discuss] GPFS GUI Nodes > NSD no data

For some time now if I go into the GUI, select Monitoring > Nodes > NSD
Server Nodes, the only columns with good data are Name, State and NSD
Count. Everything else, e.g. Avg Disk Wait Read, is listed as "N/A".

Is this another config option I need to enable? It's been bugging me for a
while, I don't think I've seen it work since 4.2.1 which was the first time
I saw the GUI.

Cheers
Richard

--

Message: 2
Date: Fri, 1 Sep 2017 16:56:25 -0400
From: Edward Wahl 
To: gpfsug main discussion list 
Subject: [gpfsug-discuss] Change to default for verbsRdmaMinBytes?
Message-ID: <20170901165625.6e4ed...@osc.edu>
Content-Type: text/plain; charset="US-ASCII"

Howdy. Just noticed this change to the minimum RDMA packet size and I don't
seem to see it in any patch notes. Maybe I just skipped the one where this
changed?

 mmlsconfig verbsRdmaMinBytes
verbsRdmaMinBytes 16384

(in case someone thinks we changed it)

[root@proj-nsd01 ~]# mmlsconfig |grep verbs
verbsRdma enable
verbsRdma disable
verbsRdmasPerConnection 14
verbsRdmasPerNode 1024
verbsPorts mlx5_3/1
verbsPorts mlx4_0
verbsPorts mlx5_0
verbsPorts mlx5_0 mlx5_1
verbsPorts mlx4_1/1
verbsPorts mlx4_1/2


Oddly I also see this in config, though I've seen these kinds of things
before.
mmdiag --config |grep verbsRdmaMinBytes
   verbsRdmaMinBytes 8192

We're on a recent efix.
Current GPFS build: "4.2.2.3 efix21 (1028007)".

--

Ed Wahl
Ohio Supercomputer Center
614-292-9302


--

Message: 3
Date: Fri, 01 Sep 2017 21:06:15 +
From: Joshua Akers 
To: "gpfsug-discuss@spectrumscale.org"
 
Subject: [gpfsug-discuss] Quorum managers
Message-ID:
 
Content-Type: text/plain; charset="utf-8"

Hi all,

I was wondering how most people set up quorum managers. We historically had
physical admin nodes be the quorum managers, but are switching to a
virtualized admin services infrastructure. We have been choosing a few
compute nodes to act as quorum managers in our client clusters, but have
considered using virtual machines instead. Has anyone else done this?

Regards,
Josh
--
*Joshua D. Akers*

*HPC Team Lead*
NI Systems Support (MC0214)
1700 Pratt Drive
Blacksburg, VA 24061
540-231-9506

Re: [gpfsug-discuss] Systemd will not allow the mount of a filesystem

2017-08-02 Thread Truong Vu

This sounds like a known problem that was fixed. If you don't have the
fix, have you checked out the workaround in FAQ 2.4?

Tru.



From:   gpfsug-discuss-requ...@spectrumscale.org
To: gpfsug-discuss@spectrumscale.org
Date:   08/02/2017 06:51 AM
Subject:gpfsug-discuss Digest, Vol 67, Issue 4
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Send gpfsug-discuss mailing list submissions to
 gpfsug-discuss@spectrumscale.org

To subscribe or unsubscribe via the World Wide Web, visit
 http://gpfsug.org/mailman/listinfo/gpfsug-discuss
or, via email, send a message with subject or body 'help' to
 gpfsug-discuss-requ...@spectrumscale.org

You can reach the person managing the list at
 gpfsug-discuss-ow...@spectrumscale.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of gpfsug-discuss digest..."


Today's Topics:

   1. Re: Systemd will not allow the mount of a filesystem (John Hearns)


--

Message: 1
Date: Wed, 2 Aug 2017 10:50:29 +
From: John Hearns 
To: gpfsug main discussion list 
Subject: Re: [gpfsug-discuss] Systemd will not allow the mount of a
 filesystem
Message-ID:




Content-Type: text/plain; charset="utf-8"

Thank you, Renar. The tests I am running are in fact tests of a version
upgrade before we do this on our production cluster.



From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of Grunenberg,
Renar
Sent: Wednesday, August 02, 2017 12:01 PM
To: 'gpfsug main discussion list'
Subject: Re: [gpfsug-discuss] Systemd will not allow the mount of a
filesystem

Hallo John,
you are on a back-level Spectrum Scale release and a back-level systemd
package.
Please see here:
https://www.ibm.com/developerworks/community/forums/html/topic?id=00104bb5-acf5-4036-93ba-29ea7b1d43b7&ps=25



Renar Grunenberg
Abteilung Informatik - Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon: 09561 96-44110
Telefax: 09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet: www.huk.de



From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of John Hearns
Sent: Wednesday, 2 August 2017 11:50
To: gpfsug main discussion list
Subject: [gpfsug-discuss] Systemd will not allow the mount of a filesystem

I am setting up a filesystem for some tests, so this is not mission
critical. This is on an OS with systemd.
When I create a new filesystem, named gpfstest, and then mmmount it, the
filesystem is logged as being mounted and then immediately dismounted.
Having fought with this for several hours, I now find this in the system
messages file:

Aug  2 10:36:56 tosmn001 systemd: Unit hpc-gpfstest.mount is bound to
inactive unit dev-gpfstest.device. Stopping, too.
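
One way to inspect what systemd thinks of the units involved (a hedged
aside; the unit names follow the mount point and device in the log line
above):

   systemctl status hpc-gpfstest.mount dev-gpfstest.device
   systemctl list-units --all | grep gpfstest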

I stopped and then started GPFS. I have run a systemctl daemon-reload.
I created a new filesystem, using the same physical disk, with a new
filesystem name, testtest,