Re: [gpfsug-discuss] subblock sanity check in 5.0

2018-06-25 Thread Felipe Knop

Joey,

The subblocks-per-full-block value cannot be specified when the file system
is created; it is computed automatically by GPFS. In file systems with a
format older than 5.0, the value is fixed at 32. For file systems with format
5.0.0 or later, the value is computed from the block size; see the mmcrfs man
page, in the table where the -B BlockSize option is explained (Table 1. Block
sizes and subblock sizes). For example, for the default (in 5.0+) 4MB block
size, the subblock size is 8KB.

The minimum "practical" subblock size is 4KB, which keeps 4KB alignment to
accommodate 4KN devices.
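
A quick way to confirm what GPFS computed on an existing file system (a
minimal sketch; "fs1" is the device name used elsewhere in this thread, and
the arithmetic simply restates the 4MB example above):

# mmlsfs fs1 -B -f
  (reports the block size and the minimum fragment/subblock size)
# mmlsfs fs1 | grep subblocks-per-full-block

The three values are related by:
  subblock size = block size / subblocks-per-full-block
  e.g. 4 MiB / 512 = 8 KiB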

  Felipe


Felipe Knop k...@us.ibm.com
GPFS Development and Security
IBM Systems
IBM Building 008
2455 South Rd, Poughkeepsie, NY 12601
(845) 433-9314  T/L 293-9314





From:   Joseph Mendoza 
To: gpfsug main discussion list 
Date:   06/25/2018 08:59 PM
Subject:[gpfsug-discuss] subblock sanity check in 5.0
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Quick question: does anyone know why GPFS wouldn't respect the default for
the subblocks-per-full-block parameter when creating a new filesystem?
I'd expect it to be set to 512 for an 8MB block size, but my guess is
that also specifying a metadata-block-size is interfering with it (by
being too small).  This was a parameter recommended by the vendor for a
4.2 installation with metadata on dedicated SSDs in the system pool; are
there any best practices for 5.0?  I'm guessing I'd have to bump it up to
at least 4MB to get 512 subblocks for both pools.

fs1 created with:
# mmcrfs fs1 -F fs1_ALL -A no -B 8M -i 4096 -m 2 -M 2 -r 1 -R 2 -j
cluster -n 9000 --metadata-block-size 512K --perfileset-quota
--filesetdf -S relatime -Q yes --inode-limit 2000:1000 -T /gpfs/fs1

# mmlsfs fs1


flag                        value            description
--------------------------- ---------------- ------------------------------------------
 -f                         8192             Minimum fragment (subblock) size in bytes (system pool)
                            131072           Minimum fragment (subblock) size in bytes (other pools)
 -i                         4096             Inode size in bytes
 -I                         32768            Indirect block size in bytes
 -B                         524288           Block size (system pool)
                            8388608          Block size (other pools)
 -V                         19.01 (5.0.1.0)  File system version
 --subblocks-per-full-block 64               Number of subblocks per full block
 -P                         system;DATA      Disk storage pools in file system
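
For reference, the numbers in this output are internally consistent with a
single subblocks-per-full-block value applied to both pools (just arithmetic
on the values above):

  524288  /   8192 = 64   (system pool: 512 KiB blocks, 8 KiB subblocks)
  8388608 / 131072 = 64   (data pools:  8 MiB blocks, 128 KiB subblocks)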


Thanks!
--Joey Mendoza
NCAR
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss




___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] subblock sanity check in 5.0

2018-06-25 Thread Joseph Mendoza
Quick question: does anyone know why GPFS wouldn't respect the default for
the subblocks-per-full-block parameter when creating a new filesystem?
I'd expect it to be set to 512 for an 8MB block size, but my guess is
that also specifying a metadata-block-size is interfering with it (by
being too small).  This was a parameter recommended by the vendor for a
4.2 installation with metadata on dedicated SSDs in the system pool; are
there any best practices for 5.0?  I'm guessing I'd have to bump it up to
at least 4MB to get 512 subblocks for both pools.

fs1 created with:
# mmcrfs fs1 -F fs1_ALL -A no -B 8M -i 4096 -m 2 -M 2 -r 1 -R 2 -j
cluster -n 9000 --metadata-block-size 512K --perfileset-quota
--filesetdf -S relatime -Q yes --inode-limit 2000:1000 -T /gpfs/fs1

# mmlsfs fs1


flag                        value            description
--------------------------- ---------------- ------------------------------------------
 -f                         8192             Minimum fragment (subblock) size in bytes (system pool)
                            131072           Minimum fragment (subblock) size in bytes (other pools)
 -i                         4096             Inode size in bytes
 -I                         32768            Indirect block size in bytes
 -B                         524288           Block size (system pool)
                            8388608          Block size (other pools)
 -V                         19.01 (5.0.1.0)  File system version
 --subblocks-per-full-block 64               Number of subblocks per full block
 -P                         system;DATA      Disk storage pools in file system


Thanks!
--Joey Mendoza
NCAR
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] mmchconfig subnets

2018-06-25 Thread Eric Horst
Hi, I'm hoping somebody has insights into how the subnets option actually
works. I've read the docs a dozen times and I want to make sure I
understand before I take my production cluster down to make the changes.

On the current cluster the daemon addresses are on a GPFS private network
and the admin addresses are on a public network. I'm changing this so that
both daemon and admin addresses are public, with the subnets option used to
utilize the private network. This is to facilitate remote mounts to an
independent cluster.

The confusing factor in my case, not covered in the docs, is that the GPFS
private network is subnetted, and static routes are used to reach the
subnets. That is, there are three private networks, one for each datacenter,
and the cluster nodes' daemon interfaces are spread across the three:

172.16.141.32/27
172.16.141.24/29
172.16.141.128/27

A router connects these three networks, which are otherwise 100% private.

For my mmchconfig subnets command, should I use this?

mmchconfig subnets="172.16.141.24 172.16.141.32 172.16.141.128"

Where I get confused is in reasoning through how Spectrum Scale actually uses
the subnets setting, so I can decide whether this will have the desired result
on my cluster. If I change the node addresses to their public addresses, i.e.
the private addresses are no longer explicitly configured in Scale, then how
are the private addresses discovered? Does each node use the subnets option to
identify that it has a private address and then dynamically share that with
the cluster?
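
One hedged way to verify the outcome after making the change, without
guessing at the discovery mechanism (mmdiag is a standard Scale command; run
it on any node once the daemons are back up):

# mmdiag --network
  (lists the daemon's established connections and the peer addresses in use,
   which shows whether the private-subnet addresses or the public addresses
   are actually being chosen for each peer)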

Thanks in advance for your clarifying comments.

-Eric

--

Eric Horst
University of Washington
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] mmapplypolicy on nested filesets ...

2018-06-25 Thread Jaime Pinto
It took a while before I could get back to this issue, but I want to
confirm that Marc's suggestions worked like a charm, and did exactly
what I hoped for:


* remove any FOR FILESET(...) specifications
* mmapplypolicy  
/path/to/the/root/directory/of/the/independent-fileset-you-wish-to-scan ...  
--scope inodespace  -P your-policy-rules-file ...


I didn't have to do anything else except exclude a few filesets from the scan.

Thanks
Jaime


Quoting "Marc A Kaplan" :


I suggest you remove any FOR FILESET(...) specifications from your rules
and then run

mmapplypolicy
/path/to/the/root/directory/of/the/independent-fileset-you-wish-to-scan
... --scope inodespace  -P your-policy-rules-file ...

See also the fine manual (RTFineM) for the --scope option and the Directory
argument of the mmapplypolicy command.

That is the best, most efficient way to scan all the files that are in a
particular inode-space.  Also, you must have all filesets of interest
"linked" and the file system must be mounted.

Notice that "independent" means that the fileset name is used to denote
both a fileset and an inode-space, where said inode-space contains the
fileset of that name and possibly other "dependent" filesets...
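
A minimal end-to-end sketch of that suggestion, with hypothetical names (the
purge.pol file name, the /gpfs/fs1/scratch junction path, and the 90-day
threshold are illustrative, not from this thread); -I test makes it a dry run:

/* purge.pol -- note: no FOR FILESET(...) clause; the scope comes from the command line */
RULE 'purge-old' DELETE WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 90

# mmapplypolicy /gpfs/fs1/scratch --scope inodespace -P purge.pol -I test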

IF one wished to search the entire file system for files within several
different filesets, one could use rules with

FOR FILESET('fileset1','fileset2','and-so-on')

Or even more flexibly

WHERE   FILESET_NAME LIKE  'sql-like-pattern-with-%s-and-maybe-_s'

Or even more powerfully

WHERE  regex(FILESET_NAME, 'extended-regular-.*-expression')
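
For completeness, a sketch of how one of those WHERE clauses slots into a
complete LIST rule (the rule and list names are illustrative; with EXEC ''
the matches can simply be written out under the file-name prefix passed to
mmapplypolicy -f):

RULE EXTERNAL LIST 'candidates' EXEC ''
RULE 'pick' LIST 'candidates' WHERE REGEX(FILESET_NAME, 'scra.*')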





From:   "Jaime Pinto" 
To: "gpfsug main discussion list" 
Date:   04/18/2018 01:00 PM
Subject:[gpfsug-discuss] mmapplypolicy on nested filesets ...
Sent by:gpfsug-discuss-boun...@spectrumscale.org



A few months ago I asked on this forum about the limits and dynamics of
traversing dependent vs. independent filesets. I used the information
provided to make decisions and set up our new DSS-based GPFS storage
system. Now I have a problem I couldn't yet figure out how to make work:

'project' and 'scratch' are top *independent* filesets of the same
file system.

'proj1', 'proj2' are dependent filesets nested under 'project'
'scra1', 'scra2' are dependent filesets nested under 'scratch'

I would like to run a purging policy on all contents under 'scratch'
(which includes 'scra1', 'scra2'), and TSM backup policies on all
contents under 'project' (which includes 'proj1', 'proj2').

HOWEVER:
When I run the purging policy on the whole gpfs device (with both
'project' and 'scratch' filesets)

* if I use FOR FILESET('scratch') on the list rules, the 'scra1' and
'scra2' filesets under scratch are excluded (totally unexpected)

* if I use FOR FILESET('scra1') I get error that scra1 is dependent
fileset (Ok, that is expected)

* if I use /*FOR FILESET('scratch')*/, all contents under 'project',
'proj1', 'proj2' are traversed as well, and I don't want that (it
takes too much time)

* if I use /*FOR FILESET('scratch')*/, and instead of the whole device
I apply the policy to the /scratch mount point only, the policy still
traverses all the content of 'project', 'proj1', 'proj2', which I
don't want. (again, totally unexpected)

QUESTION:

How can I craft the syntax of the mmapplypolicy in combination with
the RULE filters, so that I can traverse all the contents under the
'scratch' independent fileset, including the nested dependent filesets
'scra1','scra2', and NOT traverse the other independent filesets at
all (since this takes too much time)?

Thanks
Jaime


PS: FOR FILESET('scra*') does not work.




  
   TELL US ABOUT YOUR SUCCESS STORIES

http://www.scinethpc.ca/testimonials

  
---
Jaime Pinto - Storage Analyst
SciNet HPC Consortium - Compute/Calcul Canada
www.scinet.utoronto.ca - www.computecanada.ca
University of Toronto
661 University Ave. (MaRS), Suite 1140
Toronto, ON, M5G1M1
P: 416-978-2755
C: 416-505-1477


This message was sent using IMP at SciNet Consortium, University of
Toronto.

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

Re: [gpfsug-discuss] mmbackup issue

2018-06-25 Thread Grunenberg, Renar
Hello all,
here is the request for enhancement (RFE) for mmbackup:
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=121687

Please vote.



Renar Grunenberg
IT Department – Operations (Abteilung Informatik – Betrieb)

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Phone:    09561 96-44110
Fax:      09561 96-44104
E-mail:   renar.grunenb...@huk-coburg.de
Internet: www.huk.de
===
HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands
a. G. in Coburg
Registration court: Coburg HRB 100; tax no. 9212/101/00021
Registered office: Bahnhofsplatz, 96444 Coburg
Chairman of the Supervisory Board: Prof. Dr. Heinrich R. Schradin.
Board of Management: Klaus-Jürgen Heitmann (Speaker), Stefan Gronbach, Dr. Hans Olav
Herøy, Dr. Jörg Rheinländer (deputy), Sarah Rössler, Daniel Thomas.
===
This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in
error) please notify the sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this
information is strictly forbidden.
===

-----Original Message-----
From: gpfsug-discuss-boun...@spectrumscale.org
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On behalf of Jonathan
Buzzard
Sent: Thursday, 21 June 2018 09:33
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] mmbackup issue

On 20/06/18 17:00, Grunenberg, Renar wrote:
> Hello Valdis, first of all thanks for the explanation; we understand
> that. The problem is that this generates only two versions on the TSM
> server for the same file in the same directory. That means mmbackup and
> the .shadow... file have no way of keeping more than two backup versions
> of the same file, in the same directory, in TSM. The native BA client
> manages this (there, different inode numbers already exist for the
> versions). On the TSM server side, the files selected by 'ba incr' are
> merged into the right filespace and bound to the management class, so
> more than two versions exist.
>

I think what you are saying is that mmbackup is only keeping two
versions of the file in the backup: the current version and a single
previous version. Normally in TSM you can control how many previous
versions of the file to keep, for both active and inactive (aka deleted)
files. You can also define how long these versions are kept for.

It sounds like you are saying that mmbackup is ignoring the policy that
you have set for this in TSM (q copy) and doing its own thing?
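
For reference, a hedged sketch of the TSM-side check Jonathan refers to (the
domain and management class names are placeholders; run something like this
inside dsmadmc on the TSM server):

q copygroup <your_domain> ACTIVE <your_mgmtclass> f=d
  (compare the "Versions Data Exists", "Versions Data Deleted",
   "Retain Extra Versions" and "Retain Only Version" fields against what
   mmbackup appears to be keeping)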

JAB.

--
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss