Re: [gpfsug-discuss] immutable folder

2022-02-23 Thread Hannappel, Juergen
While the apostrophe is evil, it's not the problem: 

[root@it-gti-02 test1]# mkdir "it/stu'pid name" 
[root@it-gti-02 test1]# mmchattr -i yes it/stu\'pid\ name 
[root@it-gti-02 test1]# mmchattr -i no it/stu\'pid\ name 

> From: "Paul Ward" 
> To: "gpfsug main discussion list" 
> Sent: Wednesday, 23 February, 2022 12:03:37
> Subject: Re: [gpfsug-discuss] immutable folder

> It's not a fileset, it's just a folder, well a subfolder…

> [filesystem]/[fileset]/share/data/iac/[user] 2004-2014/Laboratory Impact
> experiments/LGG shots/Kent LGG/Kent aerogel LGG shots/Lizardite in aerogel/Nick
> Foster's sample

> It’s the “Nick Foster's sample” folder I want to delete, but it says it is
> immutable and I can’t disable that.

> I suspect it’s the apostrophe confusing things.

> Kindest regards,

> Paul

> Paul Ward

> TS Infrastructure Architect

> Natural History Museum

> T: 02079426450

> E: p.w...@nhm.ac.uk

> From: gpfsug-discuss-boun...@spectrumscale.org
>  On Behalf Of IBM Spectrum Scale
> Sent: 22 February 2022 14:17
> To: gpfsug main discussion list 
> Subject: Re: [gpfsug-discuss] immutable folder

> Scale disallows deleting a fileset junction using rmdir, so I suggested
> mmunlinkfileset.

> Regards, The Spectrum Scale (GPFS) team

> --
> If you feel that your question can benefit other users of Spectrum Scale (GPFS),
> then please post it to the public IBM developerWorks Forum at
> https://www.ibm.com/developerworks/community/forums/html/forum?id=----0479 .

> If your query concerns a potential software error in Spectrum Scale (GPFS) and
> you have an IBM software maintenance contract please contact 1-800-237-5511 in
> the United States or your local IBM Service Center in other countries.

> The forum is informally monitored as time permits and should not be used for
> priority messages to the Spectrum Scale (GPFS) team.

> From: "Paul Ward" < [ mailto:p.w...@nhm.ac.uk | p.w...@nhm.ac.uk ] >
> To: "gpfsug main discussion list" < [ mailto:gpfsug-discuss@spectrumscale.org 
> |
> gpfsug-discuss@spectrumscale.org ] >
> Date: 02/22/2022 05:31 AM
> Subject: [EXTERNAL] Re: [gpfsug-discuss] immutable folder
> Sent by: gpfsug-discuss-boun...@spectrumscale.org

> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Automating Snapshots : cron jobs or use the GUI ?

2022-02-02 Thread Hannappel, Juergen
Hi, 
I use a Python script via a cron job: it checks how many snapshots exist, removes 
those that exceed a configurable limit, then creates a new one. 
Deployed via Puppet it's much less hassle than clicking around in a GUI. 
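
Roughly like this (a stripped-down sketch, not our production script; the file 
system name, the retention count and the 'snapshotName' column name are 
illustrative and should be checked against the HEADER line of your release): 

#!/usr/bin/env python
# Keep the newest snapshots of a file system and create a new one (sketch).
import datetime
import subprocess

FILESYSTEM = 'gpfs0'   # illustrative file system name
KEEP = 7               # configurable retention limit

def mm(cmd):
    return subprocess.check_output(['/usr/lpp/mmfs/bin/' + cmd[0]] + cmd[1:],
                                   universal_newlines=True)

# mmlssnapshot -Y prints colon-separated records; the first line is the HEADER row.
lines = mm(['mmlssnapshot', FILESYSTEM, '-Y']).splitlines()
header = lines[0].split(':')
snaps = [dict(zip(header, l.split(':'))) for l in lines[1:] if l]
# Timestamped snapshot names (as created below) sort chronologically.
names = sorted(s['snapshotName'] for s in snaps)

# Delete the oldest ones so that, including the new snapshot, KEEP remain.
for old in names[:max(0, len(names) - (KEEP - 1))]:
    mm(['mmdelsnapshot', FILESYSTEM, old])

mm(['mmcrsnapshot', FILESYSTEM,
    datetime.datetime.now().strftime('daily-%Y%m%d-%H%M')])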

> From: "Kidger, Daniel" 
> To: "gpfsug main discussion list" 
> Sent: Wednesday, 2 February, 2022 11:07:25
> Subject: [gpfsug-discuss] Automating Snapshots : cron jobs or use the GUI ?

> Hi all,

> Since the subject of snapshots has come up, I also have a question ...

> Snapshots can be created from the command line with mmcrsnapshot, and hence can
> be automated via cron jobs etc.
> Snapshots can also be created from the Scale GUI. The GUI also provides its own
> automation for the creation, retention, and deletion of snapshots.

> My question is: do most customers use the former or the latter for automation?

> (I also note that /usr/lpp/mmfs/gui/cli/mksnaprule exists and appears to do
> exactly the same as what the GUI does in terms of creating automated snapshots.
> It is a relic of V7000 Unified but still works fine in Spectrum Scale 5.1.2.2.
> How many customers also use the commands found in /usr/lpp/mmfs/gui/cli/ ?)

> Daniel

> Daniel Kidger
> HPC Storage Solutions Architect, EMEA
> daniel.kid...@hpe.com

> +44 (0)7818 522266

> hpe.com

> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] struct dirent d_type for fifos on gpfs

2022-01-13 Thread Hannappel, Juergen
Hi,
I noticed that when I read directory entries with the usual readdir() function,
for FIFOs I get 0 in the d_type field, i.e. DT_UNKNOWN, while if I try the same
on a different file system, e.g. ext4, I get the expected DT_FIFO.

Is this a bug or an expected feature?
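
In the meantime a per-entry stat() works as a fallback; a small Python sketch of
that workaround (the directory path is just an example):

import os
import stat

def find_fifos(path):
    """List FIFOs in 'path' without relying on readdir()'s d_type."""
    fifos = []
    for entry in os.scandir(path):
        # Explicit lstat() per entry: the usual fallback when a file system
        # returns DT_UNKNOWN instead of DT_FIFO.
        mode = entry.stat(follow_symlinks=False).st_mode
        if stat.S_ISFIFO(mode):
            fifos.append(entry.name)
    return fifos

print(find_fifos('/gpfs/some/dir'))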

-- 
Dr. Jürgen Hannappel  DESY/IT  Tel.: +49 40 8998-4616
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] 5.1.2.2 changes

2022-01-13 Thread Hannappel, Juergen
Hi,
just got notified that 5.1.2.2 is out.
What are the changes compared to 5.1.2.1?
https://www.ibm.com/docs/en/spectrum-scale/5.1.2?topic=summary-changes
does not specify that.

-- 
Dr. Jürgen Hannappel  DESY/IT  Tel.: +49 40 8998-4616
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] ESS 6.1.2.1 changes

2021-12-17 Thread Hannappel, Juergen
Hi,
I just noticed that today a new ESS release (6.1.2.1) appeared on Fix Central.
What I can't find is a list of changes relative to 6.1.2.0, and anyway finding
the change list is always a PITA.

Does anyone know what changed?

-- 
Dr. Jürgen Hannappel  DESY/IT  Tel.: +49 40 8998-4616
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] Serial number of [EG]SS nodes

2021-09-02 Thread Hannappel, Juergen
Hi,
on an ESS node with a POWER CPU I can get the serial number from
/proc/device-tree/system-id, which is very useful sometimes.
On nodes with x86 architecture (Lenovo GSS or IBM ESS3XXX) there
is no such pseudo-file. Is there a simple way to get at the serial number?
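
(For reference, a sketch of a possible approach, assuming the usual DMI
pseudo-files exist on the x86 nodes; not verified on GSS/ESS hardware.
"dmidecode -s system-serial-number" should report the same value.)

def node_serial():
    """Best-effort hardware serial number for POWER and x86 nodes."""
    candidates = [
        '/proc/device-tree/system-id',      # POWER, as on the ESS I/O nodes
        '/sys/class/dmi/id/product_serial', # x86 SMBIOS serial, usually root-only
    ]
    for path in candidates:
        try:
            with open(path) as f:
                return f.read().strip().strip('\x00')
        except OSError:
            continue
    return None

print(node_serial())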
-- 
Dr. Jürgen Hannappel  DESY/IT  Tel.: +49 40 8998-4616


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] du --apparent-size and quota

2021-06-02 Thread Hannappel, Juergen
Hi, 
Without the --block-size parameter mmrepquota reports sizes in units of 1KiB, so 
(if no ill-advised copy-paste editing confuses us) we are not talking about 
400GiB but about 400KiB. 
With just 863 files (from the inode part of the repquota output), and therefore 
about 0.5KiB per file on average, that could be explained by the sub-block size 
(although many files should vanish into the inodes). 
If it were 400GiB in 863 files, i.e. roughly 500MiB per file, the sub-block 
overhead would not matter at all! 

> From: "IBM Spectrum Scale" 
> To: "gpfsug main discussion list" 
> Cc: gpfsug-discuss-boun...@spectrumscale.org, gpfsug-disc...@gpfsug.org
> Sent: Wednesday, 2 June, 2021 16:12:52
> Subject: Re: [gpfsug-discuss] du --apparent-size and quota

> Hi,
> The data and metadata replications are 2 on both source and destination
> filesystems, so from:

> $ mmrepquota -j srcfilesys | grep fileset
> srcfileset FILESET 800 800 800 0 none | 863 0 0 0 none

> $ mmrepquota -j dstfilesys | grep fileset
> fileset root FILESET 457 400 400 0 none | 853 0 0 0 none

> the quota data should be changed from 800G to 457G (or 400G to 228.5G), after
> "rsync -AHS".

> Regards, The Spectrum Scale (GPFS) team

> --
> If you feel that your question can benefit other users of Spectrum Scale (GPFS),
> then please post it to the public IBM developerWorks Forum at
> https://www.ibm.com/developerworks/community/forums/html/forum?id=----0479 .

> If your query concerns a potential software error in Spectrum Scale (GPFS) and
> you have an IBM software maintenance contract please contact 1-800-237-5511 in
> the United States or your local IBM Service Center in other countries.

> The forum is informally monitored as time permits and should not be used for
> priority messages to the Spectrum Scale (GPFS) team.


> From: Ulrich Sibiller 
> To: Kumaran Rajaram , gpfsug main discussion list
> , "gpfsug-disc...@gpfsug.org"
> 
> Date: 06/02/2021 06:16 PM
> Subject: [EXTERNAL] Re: [gpfsug-discuss] du --apparent-size and quota
> Sent by: gpfsug-discuss-boun...@spectrumscale.org

> On 6/1/21 6:08 PM, Kumaran Rajaram wrote:
> >>> If I'm not mistaken even with SS5 created filesystems, 1 MiB FS block size
> >>> implies 32 kiB sub-blocks (32 sub-blocks).

>> Just to add: The /srcfilesys seemed to have been created with GPFS version 4.x,
>> which supports only 32 sub-blocks per block.

> > -T /srcfilesys Default mount point
> > -V 16.00 (4.2.2.0) Current file system version
> > 14.10 (4.1.0.4) Original file system version
> > --create-time Tue Feb 3 11:46:10 2015 File system creation time
> > -B 1048576 Block size
> > -f 32768 Minimum fragment (subblock) size in bytes
> > --subblocks-per-full-block 32 Number of subblocks per full block


>> The /dstfilesys was created with GPFS version 5.x, which supports more than 32
>> sub-blocks per block. /dstfilesys does have 512 subblocks-per-full-block with an
>> 8KiB subblock size, since the file-system blocksize is 4MiB.


> > -T /dstfilesys Default mount point
> > -V 23.00 (5.0.5.0) File system version
> > --create-time Tue May 11 16:51:27 2021 File system creation time
> > -B 4194304 Block size
> > -f 8192 Minimum fragment (subblock) size in bytes
> > --subblocks-per-full-block 512 Number of subblocks per full block


> Well, from the higher flexibility in terms of the number of sub-blocks I'd expect
> a lower disk usage instead of a higher one. Is this a wrong assumption? From 400G
> to 457G it's a ~13% increase!

>> Besides the overhead, hard links in the source FS (which, if I'm not mistaken,
>> are not handled by "rsync" unless you specify "-H") and in some cases sparse
>> files can also explain the differences.

> My rsync is using -AHS, so this should not be relevant here.

> Uli

> --
> Science + Computing AG
> Vorstandsvorsitzender/Chairman of the board of management:
> Dr. Martin Matzke
> Vorstand/Board of Management:
> Matthias Schempp, Sabine Hohenstein
> Vorsitzender des Aufsichtsrats/
> Chairman of the Supervisory Board:
> Philippe Miltin
> Aufsichtsrat/Supervisory Board:
> Martin Wibbe, Ursula Morgenstern
> Sitz/Registered Office: Tuebingen
> Registergericht/Registration Court: Stuttgart
> Registernummer/Commercial Register No.: HRB 382196
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

[gpfsug-discuss] Detecting open files

2021-03-15 Thread Hannappel, Juergen
Hi,
when unlinking filesets, that sometimes fails because some open files still
exist on that fileset.

Is there a way to find which files are open, and from which node?
Without running an mmdsh -N all lsof on several (big) remote clusters, that is. 

-- 
Dr. Jürgen Hannappel  DESY/IT  Tel.: +49 40 8998-4616


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] GPFS_CLEAR_FILE_CACHE fails on Read-Only FS

2021-01-08 Thread Hannappel, Juergen
Hi,
in a program after reading a file I did a gpfs_fcntl() with 
GPFS_CLEAR_FILE_CACHE to get rid of the now unused pages in the file cache.
That works fine, but if the file system is read-only (in a remote cluster) this 
fails with a message that the file system is read only.
Is that expected behaviour or an unexpected feature (aka bug)?

-- 
Dr. Jürgen Hannappel  DESY/IT  Tel.: +49 40 8998-4616


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] Mounting an nfs share on a CES node

2020-10-21 Thread Hannappel, Juergen
Hi,
I have a CES node exporting some file systems via SMB and Ganesha (NFS) in a
standard CES setup.
Now I want to mount an NFS share from a different, non-CES server on this CES
node.
This did not work:
mount -o -fstype=nfs4,minorversion=1,rw,rsize=65536,wsize=65536 some.other.server:/some/path /mnt/
mount.nfs: mount to NFS server 'some.other.server:/some/path' failed: RPC Error: Program unavailable

Does the CES software stack interfere with the NFS client setup? It seems that
at least with rpc-statd there is some conflict:

 systemctl status rpc-statd
● rpc-statd.service - NFS status monitor for NFSv2/3 locking.
   Loaded: loaded (/usr/lib/systemd/system/rpc-statd.service; static; vendor preset: disabled)
   Active: failed (Result: exit-code) since Wed 2020-10-21 17:48:21 CEST; 22min ago
  Process: 19896 ExecStart=/usr/sbin/rpc.statd $STATDARGS (code=exited, status=1/FAILURE)

Oct 21 17:48:21 mynode systemd[1]: Starting NFS status monitor for NFSv2/3 locking
Oct 21 17:48:21 mynode rpc.statd[19896]: Statd service already running!
Oct 21 17:48:21 mynode systemd[1]: rpc-statd.service: control process exited, code=exited status=1
Oct 21 17:48:21 mynode systemd[1]: Failed to start NFS status monitor for NFSv2/3 locking..
Oct 21 17:48:21 mynode systemd[1]: Unit rpc-statd.service entered failed state.
Oct 21 17:48:21 mynode systemd[1]: rpc-statd.service failed.
-- 
Dr. Jürgen Hannappel  DESY/IT  Tel.: +49 40 8998-4616
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Question about Security Bulletin: Openstack Keystone vulnerabilities affects IBM Spectrum Scale (CVE-2020-12689)

2020-08-27 Thread Hannappel, Juergen
Hi, 
no, we don't use object services. Maybe the object-services condition should be 
mentioned in the bulletin. 

Thanks, 
Juergen 

> From: "John T Olson" 
> To: "gpfsug main discussion list" 
> Sent: Wednesday, 26 August, 2020 17:52:56
> Subject: Re: [gpfsug-discuss] Question about Security Bulletin: Openstack
> Keystone vulnerabilities affects IBM Spectrum Scale (CVE-2020-12689)

> Hi, openstack Keystone is only used if you have configured and are using the
> object services. If you are not using object services, then the local Keystone
> server will not be configured and this vulnerability should not affect you. Do
> you have object services enabled?

> Thanks,

> John

> John T. Olson, Ph.D.
> Spectrum Scale Security
> Master Inventor
> 957/9032-1 Tucson, AZ, 85744
> (520) 799-5185, tie 321-5185 (FAX: 520-799-4237)
> Email: jtol...@us.ibm.com
> LinkedIn: www.linkedin.com/in/john-t-olson
> Follow me on twitter: @John_T_Olson

> "Hannappel, Juergen" ---08/26/2020 07:25:12 AM---Hello, in the bulletin [
> https://www.ibm.com/support/pages/node/6323241 |
> https://www.ibm.com/support/pages/node/6323241 ] it's mentioned

> From: "Hannappel, Juergen" 
> To: gpfsug main discussion list 
> Date: 08/26/2020 07:25 AM
> Subject: [EXTERNAL] [gpfsug-discuss] Question about Security Bulletin: 
> Openstack
> Keystone vulnerabilities affects IBM Spectrum Scale (CVE-2020-12689)
> Sent by: gpfsug-discuss-boun...@spectrumscale.org

> Hello,
> in the bulletin https://www.ibm.com/support/pages/node/6323241 it's mentioned
> "IBM Spectrum Scale, shipped with Openstack keystone, is exposed to
> vulnerabilities as detailed below."
> I am not aware of any openstack components in our standard Scale deployments,
> so how am I to read this sentence? Is there some Openstack stuff bundled into 
> a
> standard gpfs installation?

> --
> Dr. Jürgen Hannappel DESY/IT Tel. : +49 40 8998-4616
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] Question about Security Bulletin: Openstack Keystone vulnerabilities affects IBM Spectrum Scale (CVE-2020-12689)

2020-08-26 Thread Hannappel, Juergen
Hello,
in the bulletin https://www.ibm.com/support/pages/node/6323241 it's mentioned
"IBM Spectrum Scale, shipped with Openstack keystone, is exposed to 
vulnerabilities as detailed below."
I am not aware of any openstack components in our standard Scale deployments,
so how am I to read this sentence? Is there some Openstack stuff bundled into a 
standard gpfs installation?

-- 
Dr. Jürgen Hannappel  DESY/IT  Tel.: +49 40 8998-4616
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Tiny cluster quorum problem

2020-08-18 Thread Hannappel, Juergen
... just for the record: 
man mmchnode | grep force | wc -l 
0 
In the man page the --force option is not mentioned at all. 

The same is true for mmdelnode: 
man mmdelnode | grep force | wc -l 
0 

But there the error output gives a hint that it's there: 
mmdelnode: If the affected nodes are permanently down, they can be deleted with 
the --force option. 

> From: "Juergen Hannappel" 
> To: "gpfsug main discussion list" 
> Sent: Tuesday, 18 August, 2020 15:54:48
> Subject: Re: [gpfsug-discuss] Tiny cluster quorum problem

> Thanks!
> That helped. With the --force I could change roles, expel the node and have the
> "cluster" now up on the remaining node.

> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Tiny cluster quorum problem

2020-08-18 Thread Hannappel, Juergen
Thanks! 
That helped. With the --force I could change roles, expel the node and have the 
"cluster" now up on the remaining node. 

> From: "Jan-Frode Myklebust" 
> To: "gpfsug main discussion list" 
> Sent: Tuesday, 18 August, 2020 15:45:33
> Subject: Re: [gpfsug-discuss] Tiny cluster quorum problem

> I would expect you should be able to get it back up using the routine at
> https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.5/com.ibm.spectrum.scale.v5r05.doc/bl1adv_failsynch.htm

> Maybe you just need to force remove quorum-role from the dead node ?

> -jf

> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] Tiny cluster quorum problem

2020-08-18 Thread Hannappel, Juergen
Hi,
on a tiny GPFS cluster with just two nodes, one node died (really dead, it cannot
be switched on any more), and now I cannot remove it from the cluster.
[root@exflonc42 ~]# mmdelnode -N exflonc41
mmdelnode: Unable to obtain the GPFS configuration file lock.
mmdelnode: GPFS was unable to obtain a lock from node exflonc41.desy.de.
mmdelnode: Command failed. Examine previous error messages to determine cause.

[root@exflonc42 ~]# mmlscluster
get file failed: Not enough CCR quorum nodes available (err 809)
gpfsClusterInit: Unexpected error from ccr fget mmsdrfs. Return code: 158
mmlscluster: Command failed. Examine previous error messages to determine cause.

Is there any chance to get this cluster up and running again, or should I wipe it
and create a new one from the remaining node?
There is no data on this cluster; it's a remote cluster to a storage cluster and
has only compute clients.
-- 
Dr. Jürgen Hannappel  DESY/IT  Tel.: +49 40 8998-4616
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Parse -Y command output

2020-05-27 Thread Hannappel, Juergen
Hi,
an example in Python 2.7 (for Python 3 you need to add e.g. errors='ignore' to
the parameters of the Popen call to get the proper kind of text stream as
output):

import subprocess
import csv
import urllib   # for urllib.unquote() below (Python 2.7)

# 'args.filesystem' comes from an argparse parser defined elsewhere in the script.
mmlsfileset = subprocess.Popen(['/usr/lpp/mmfs/bin/mmlsfileset',
                                args.filesystem,
                                '-Y'],
                               stdout=subprocess.PIPE)
reader = csv.DictReader(mmlsfileset.stdout, delimiter=':')
filesets = []
for row in reader:
    if row['status'] == 'Linked':
        filesets.append({'name': row['filesetName'],
                         'linkdir': urllib.unquote(row['path'])})
mmlsfileset.wait()
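
For the pandas variant that Heiner asked about, a rough Python 3 sketch
(untested; it assumes the command emits a single record type whose first -Y line
is the HEADER row, and that columns such as 'status', 'filesetName' and 'path'
exist; check the HEADER line of your release. The file system name 'gpfs0' is
hypothetical.):

import io
import subprocess
import urllib.parse

import pandas as pd

def mm_y_to_dataframe(cmd):
    """Run an mm-command with -Y and return its output as a pandas DataFrame."""
    out = subprocess.run(cmd + ['-Y'], stdout=subprocess.PIPE,
                         check=True, encoding='utf-8').stdout
    # The first line is the HEADER record; its fields become the column names.
    df = pd.read_csv(io.StringIO(out), sep=':', dtype=str)
    # mm-commands URL-encode special characters (e.g. '/' as %2F) in -Y output.
    return df.applymap(lambda v: urllib.parse.unquote(v) if isinstance(v, str) else v)

# Example: all linked filesets of the (hypothetical) file system 'gpfs0'.
df = mm_y_to_dataframe(['/usr/lpp/mmfs/bin/mmlsfileset', 'gpfs0'])
print(df[df['status'] == 'Linked'][['filesetName', 'path']])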


- Original Message -
> From: "Billich Heinrich Rainer (ID SD)" 
> To: "gpfsug main discussion list" 
> Sent: Wednesday, 27 May, 2020 10:41:11
> Subject: [gpfsug-discuss] Parse -Y command output

> Hello,
> 
> I wonder if any Python or bash functions exist to parse the output of the
> mm-commands' -Y format, i.e. colon-separated with HEADER rows. It would be nice
> to convert the output to a Python list of named tuples or, even better, a pandas
> dataframe. I would like to access the values by column name and row index.
> This would allow quick scripts or reports to be done easily. Imagine using a
> Jupyter notebook to dig into mmrepquota or mmlsdisk output, even doing charts,
> ..., easily.
> 
> I know the REST interface exists, which I could call to get the data in JSON
> format, but parsing the -Y format seems much less cumbersome and allows one to
> collect the data at one time and process it later.
> 
> If you have other suggestions please let me know. What I would like to get:
> A list of NSDs which shows the connection to the vdisk set: vdisk, recovery group
> and ESS system for each NSD. To get this I need to merge at least two tables.
> A list of user or fileset quota values, highlighting those with non-default
> values or close to the limit.
> The ability to get the data quickly, in a uniform way, into some tool, preferably
> Python, to do analysis or formatting.
> 
> Cheers,
> 
> Heiner
> 
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] gpfs_fcntl fails on read-only-fs

2020-05-19 Thread Hannappel, Juergen
Hi,
I tried to do gpfs_fcntl() with the GPFS_ACCESS_RANGE hint,
with the isWrite field set to 0, start = 0 and length = size of the file,
to indicate that I want to read the entire file now,
but on a read-only remote file system the gpfs_fcntl() call returns -1
and sets errno to "Read-only file system", which is true but should
not matter in my opinion.

Why is announcing a range of the file for reading not allowed on a read-only
file system?


-- 
Dr. Jürgen Hannappel  DESY/IT  Tel.: +49 40 8998-4616
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] wait for mount during gpfs startup

2020-04-28 Thread Hannappel, Juergen
Hi,
a gpfs.mount unit should be created automatically at boot by
systemd-fstab-generator from the fstab entry, so there is no need for hackery
like ismounted.txt...
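
For example, a minimal sketch of a drop-in for a unit that needs the file system,
assuming the GPFS file system is listed in /etc/fstab with mount point /gpfs (so
the generated unit is named gpfs.mount):

# e.g. /etc/systemd/system/myservice.service.d/wait-for-gpfs.conf (hypothetical)
[Unit]
Wants=gpfs.mount
After=gpfs.mount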


- Original Message -
> From: "Jonathan Buzzard" 
> To: gpfsug-discuss@spectrumscale.org
> Sent: Tuesday, 28 April, 2020 13:38:01
> Subject: Re: [gpfsug-discuss] wait for mount during gpfs startup

> Yuck, and double yuck. There are many things you can say about systemd
> (and I have a choice few) but one of them is that it makes this sort of
> hackery obsolete. At least that is one of it goals.
> 
> A systemd way to do it would be via one or more helper units. So lets
> assume your GPFS file system is mounted on /gpfs, then create a file
> called ismounted.txt on it and then create a unit called say
> gpfs_mounted.target that looks like
> 
> 
> # gpfs_mounted.target
> [Unit]
> TimeoutStartSec=infinity
> ConditionPathExists=/gpfs/ismounted.txt
> ExecStart=/usr/bin/sleep 10
> RemainAfterExit=yes
> 
> Then the main unit gets
> 
> Wants=gpfs_mounted.target
> After=gpfs_mounted.target
> 
> If you are using scripts in systemd you are almost certainly doing it
> wrong :-)
> 
> JAB.
> 
> --
> Jonathan A. Buzzard Tel: +44141-5483420
> HPC System Administrator, ARCHIE-WeSt.
> University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss