Re: [gpfsug-discuss] GPFS de-duplication

2021-05-20 Thread Stephen Ulmer
Do file clones meet the workflow requirement? That is, can you control from whence the second (and further) copies are made? -- Stephen > On May 20, 2021, at 9:01 AM, Andrew Beattie wrote: > > Dave, > > Spectrum Scale does not support de-duplication, it does support compression. > You

Re: [gpfsug-discuss] Fwd: FW: Backing up GPFS with Rsync

2021-03-11 Thread Stephen Ulmer
O. > > Kind regards. > > -- > Enrico Tagliavini > Systems / Software Engineer > > enrico.tagliav...@fmi.ch > > Friedrich Miescher Institute for Biomedical Research > Informatics > > Maulbeerstrasse 66 > 4058 Basel > Switzerland > > > >

Re: [gpfsug-discuss] Fwd: FW: Backing up GPFS with Rsync

2021-03-11 Thread Stephen Ulmer
I’m going to ask what may be a dumb question: Given that you have GPFS on both ends, what made you decide to NOT use AFM? -- Stephen > On Mar 11, 2021, at 3:56 AM, Tagliavini, Enrico > wrote: > > Hello William, > > I've got your email forwarded by another user and I decided to subscribe

Re: [gpfsug-discuss] Policy scan of symbolic links with contents?

2021-03-08 Thread Stephen Ulmer
Does that check the target of the symlink, or the path to the link itself? I think the OP was checking the target (or I misunderstood). -- Stephen Ulmer Sent from a mobile device; please excuse auto-correct silliness. > On Mar 8, 2021, at 3:34 PM, Jonathan Buzzard > wrote: >

Re: [gpfsug-discuss] Future of Spectrum Scale support for Centos

2020-12-09 Thread Stephen Ulmer
I have some hope about this… not a lot, but there is one path where it could go well: In particular, I’m hoping that after CentOS goes stream-only RHEL goes release-only, with regular (weekly?) minor releases that are actually versioned together (as opposed to “here are some fixes for RHEL 8.x,

Re: [gpfsug-discuss] Best of spectrum scale

2020-09-11 Thread Stephen Ulmer
> On Sep 9, 2020, at 10:04 AM, Skylar Thompson wrote: > > On Wed, Sep 09, 2020 at 12:02:53PM +0100, Jonathan Buzzard wrote: >> On 08/09/2020 18:37, IBM Spectrum Scale wrote: >>> I think it is incorrect to assume that a command that continues >>> after detecting the working directory has been

Re: [gpfsug-discuss] mmapplypolicy oddity

2020-06-21 Thread Stephen Ulmer
Just to be clear, the .bad one failed before the other one existed? If you add a third one, do you still only get one set of output? Maybe the uniqueness of the target is important, and there is another symlink you don’t know about? -- Stephen > On Jun 19, 2020, at 8:08 AM, Oesterlin,

Re: [gpfsug-discuss] Change uidNumber and gidNumber for billions of files

2020-06-09 Thread Stephen Ulmer
Jonathan brings up a good point that you’ll only get one shot at this — if you’re using the file system as your record of who owns what. You might want to use the policy engine to record the existing file names and ownership (and then provide updates using the same policy engine for the things
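A minimal sketch of that kind of inventory pass, assuming an illustrative file system path and rule names (with -I defer the list lands in the file named by -f, nothing else is executed):

  /* owners.pol -- record path, uid and gid for every file */
  RULE EXTERNAL LIST 'owners' EXEC ''
  RULE 'record' LIST 'owners' SHOW(VARCHAR(USER_ID) || ':' || VARCHAR(GROUP_ID))

  # produce the list without taking any other action:
  mmapplypolicy /gpfs/fs0 -P owners.pol -f /tmp/owners -I defer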

Re: [gpfsug-discuss] Client Latency and High NSD Server Load Average

2020-06-04 Thread Stephen Ulmer
Note that if nsd02-ib is offline, that nsd03-ib is now servicing all of the NSDs for *both* servers, and that if nsd03-ib gets busy enough to appear offline, then nsd04-ib would be next in line to get the load of all 3. The two servers with the problems are in line after the one that is off.

[gpfsug-discuss] Multi-cluster question (was Re: gpfsug-discuss Digest, Vol 100, Issue 32)

2020-05-29 Thread Stephen Ulmer
I have a question about multi-cluster, but it is related to this thread (it would be solving the same problem). Let’s say we have two clusters, A and B; both are normally shared-everything with no NSD servers defined. We want cluster B to be able to use a file system in cluster A. If

Re: [gpfsug-discuss] gpfs filesets question

2020-04-18 Thread Stephen Ulmer
Is this still true if the source and target fileset are both in the same storage pool? It seems like they could just move the metadata… Especially in the case of dependent filesets where the metadata is actually in the same allocation area for both the source and target. Maybe this just

Re: [gpfsug-discuss] Encryption - checking key server health (SKLM)

2020-02-20 Thread Stephen Ulmer
It seems like this belongs in mmhealth if it were to be bundled. If you need to use a third party tool, maybe fetch a particular key that is only used for fetching, so its compromise would represent no risk. -- Stephen Ulmer Sent from a mobile device; please excuse auto-correct silliness

Re: [gpfsug-discuss] Kernel BUG/panic in mm/slub.c:3772 on Spectrum Scale Data Access Edition installed via gpfs.gplbin RPM on KVM guests

2020-01-17 Thread Stephen Ulmer
Having a sanctioned way to compile targeting a version of the kernel that is installed — but not running — would be helpful in many circumstances. — Stephen > On Jan 17, 2020, at 11:58 AM, Ryan Novosielski wrote: > > Yeah, support got back to me with a similar response earlier today that I’d

Re: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites.

2019-12-18 Thread Stephen Ulmer
I want to say that AFM was in GPFS before there were editions, and that everything that was pre-edition went into Standard Edition. That timing may not be exact, but Advanced edition has definitely never been required for “regular” AFM. For the longest time the only “Advanced” feature was

Re: [gpfsug-discuss] Spectrum Scale Replication across failure groups

2019-04-16 Thread Stephen Ulmer
I believe that -1 is "special", in that all -1’s are different from each other. So you will wind up with data on several -1 NSDs, instead of a -1 and a 2. In fact you probably didn’t specify -1; it was likely assigned automatically. Read the first paragraph in the failureGroup entry in:
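For illustration, explicit failure groups in an NSD stanza file look roughly like this (NSD and node names are made up):

  %nsd: nsd=nsd_nodeA_1  servers=nodeA  usage=dataAndMetadata  failureGroup=1
  %nsd: nsd=nsd_nodeB_1  servers=nodeB  usage=dataAndMetadata  failureGroup=2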

Re: [gpfsug-discuss] Adding to an existing GPFS ACL

2019-03-27 Thread Stephen Ulmer
mmeditacl passes a temporary file containing the ACLs to $EDITOR. You can write your own $EDITOR if you want. :) -- Stephen > On Mar 27, 2019, at 12:19 PM, Buterbaugh, Kevin L > wrote: > > Hi Jonathan, > > Thanks for the response. I did look at
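A rough sketch of that trick -- a non-interactive "editor" that just appends one entry to the temporary file mmeditacl hands it (the entry itself is illustrative; copy the exact syntax from mmgetacl output for your ACL type):

  #!/bin/sh
  # mmeditacl invokes $EDITOR with the path of a temporary ACL file as $1;
  # whatever that file contains when we exit 0 becomes the new ACL.
  echo "group:backupadm:r-x-" >> "$1"
  exit 0

and then something like: EDITOR=/usr/local/bin/acl-append mmeditacl /gpfs/fs0/projects/data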

Re: [gpfsug-discuss] GPFS v5: Blocksizes and subblocks

2019-03-27 Thread Stephen Ulmer
> sas_6T 65537 > > I know md lives in the system pool and if you do encryption you can forget > about putting data into your inodes for small files > > > > On Wed, Mar 27, 2019 at 10:57 AM Stephen Ulmer wrote: > This presenta

Re: [gpfsug-discuss] GPFS v5: Blocksizes and subblocks

2019-03-27 Thread Stephen Ulmer
This presentation contains lots of good information about file system structure in general, and GPFS in specific, and I appreciate that and enjoyed reading it. However, it states outright (both graphically and in text) that storage pools are a feature of the cluster, not of a file system —

Re: [gpfsug-discuss] Systemd configuration to wait for mount of SS filesystem

2019-03-14 Thread Stephen Ulmer
+1 — This is the best solution. The only thing I would change would be to add: TimeoutStartSec=300 or something similar. This leaves the maintenance of starting applications where it belongs (in systemd, not in GPFS). You can use the same technique for other VFS types (like NFS if
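One way to sketch the idea as a drop-in for the application's own unit (unit name and mount point are illustrative, not from the thread):

  # /etc/systemd/system/myapp.service.d/wait-for-gpfs.conf
  [Service]
  TimeoutStartSec=300
  ExecStartPre=/bin/sh -c 'until /usr/bin/mountpoint -q /gpfs/fs0; do sleep 5; done'

followed by systemctl daemon-reload before restarting the service.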

Re: [gpfsug-discuss] Follow-up: migrating billions of files

2019-03-06 Thread Stephen Ulmer
In the case where tar -C doesn’t work, you can always use a subshell (I do this regularly): tar -cf - . | ssh someguy@otherhost "(cd targetdir; tar -xvf - )" Only use -v on one end. :) Also, for parallel work that’s not designed that way, don't underestimate the -P option to GNU and BSD
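As a rough sketch of that -P fan-out (paths are made up; assumes simple directory names) -- one transfer per top-level directory, four at a time:

  cd /gpfs/src && ls -d */ | xargs -P 4 -I{} rsync -a {} someguy@otherhost:/gpfs/target/{}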

Re: [gpfsug-discuss] ESS: mmvdisk filesystem with "--version" fails

2018-11-09 Thread Stephen Ulmer
> Felipe Knop k...@us.ibm.com > GPFS Development and Security > IBM Systems > IBM Building 008 > 2455 South Rd, Poughkeepsie, NY 12601 > (845) 433-9314 T/L 293-9314 > > > > Stephen Ulmer ---11/09/2018 09:13:32 AM---It had better

Re: [gpfsug-discuss] ESS: mmvdisk filesystem with "--version" fails

2018-11-09 Thread Stephen Ulmer
It had better work — I’m literally going to be doing exactly the same thing in two weeks… -- Stephen > On Nov 9, 2018, at 9:07 AM, Oesterlin, Robert > wrote: > > Why doesn’t this work? I want to create a file system with an older version > level that my

Re: [gpfsug-discuss] NSD network checksums (nsdCksumTraditional)

2018-10-29 Thread Stephen Ulmer
<https://www.ibm.com/support/knowledgecenter/en/SSYSP8_5.3.1/com.ibm.spectrum.scale.raid.v5r01.adm.doc/bl1adv_introe2echecksum.htm> > > Best, > -Kums > > > > > > From: Stephen Ulmer >

Re: [gpfsug-discuss] NSD network checksums (nsdCksumTraditional)

2018-10-29 Thread Stephen Ulmer
So the ESS checksums that are highly touted as "protecting all the way to the disk surface" completely ignore the transfer between the client and the NSD server? It sounds like you are saying that all of the checksumming done for GNR is internal to GNR and only protects against bit-flips on the

Re: [gpfsug-discuss] RAID type for system pool

2018-09-05 Thread Stephen Ulmer
> On Sep 5, 2018, at 11:34 AM, Buterbaugh, Kevin L > wrote: > > […] > Of course, if we increase the size of the inodes by a factor of 8 then we > also need 8 times as much space to store those inodes. Given that Enterprise > class SSDs are still

Re: [gpfsug-discuss] Those users....

2018-08-22 Thread Stephen Ulmer
Clearly, those are the ones they’re working on. You’re lucky they’re de-duped. -- Stephen > On Aug 22, 2018, at 1:12 PM, Oesterlin, Robert > wrote: > > Sometimes, I look at the data that's being stored in my file systems and just > shake my head: > >

Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-01 Thread Stephen Ulmer
> On Aug 1, 2018, at 8:11 PM, Andrew Beattie wrote: > […] > > which is probably why 32k sub block was the default for so many years I may not be remembering correctly, but I thought the default block size was 256k, and the sub-block size was always fixed at 1/32nd of the block size

Re: [gpfsug-discuss] RHEL updated to 7.5 instead of 7.4

2018-06-11 Thread Stephen Ulmer
So is it better to pin with the subscription manager, or in our case to pin the kernel version with yum (because you always have something to do when the kernel changes)? What is the consensus? -- Stephen Ulmer Sent from a mobile device; please excuse auto-correct silliness. > On Jun
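For what it's worth, the pinning options look roughly like this (release number is only an example):

  # pin the minor release via subscription-manager:
  subscription-manager release --set=7.4
  # or keep yum away from new kernels entirely:
  echo "exclude=kernel*" >> /etc/yum.conf
  # or lock the installed kernel packages (needs yum-plugin-versionlock):
  yum versionlock kernel kernel-devel kernel-headers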

Re: [gpfsug-discuss] Capacity pool filling

2018-06-07 Thread Stephen Ulmer
> On Jun 7, 2018, at 10:16 AM, Buterbaugh, Kevin L > wrote: > > Hi All, > > First off, I’m on day 8 of dealing with two different mini-catastrophes at > work and am therefore very sleep deprived and possibly missing something > obvious … with that disclaimer out of the way… > > We have a

Re: [gpfsug-discuss] gpfs 4.2.3.6 stops workingwithkernel3.10.0-862.2.3.el7

2018-05-16 Thread Stephen Ulmer
> On May 15, 2018, at 11:55 PM, Stijn De Weirdt wrote: > > hi stephen, > >> There isn’t a flaw in that argument, but where the security experts >> are concerned there is no argument. > we have gpfs client hosts where users can login, we can't update those. > that is a

Re: [gpfsug-discuss] gpfs 4.2.3.6 stops workingwithkernel3.10.0-862.2.3.el7

2018-05-15 Thread Stephen Ulmer
There isn’t a flaw in that argument, but where the security experts are concerned there is no argument. Apparently this time Red Hat just told all of their RHEL 7.4 customers to upgrade to RHEL 7.5, rather than back-porting the security patches. So this time the requirement to upgrade

Re: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks

2018-05-15 Thread Stephen Ulmer
Lohit, Just be aware that exporting the data from GPFS via SMB requires a SERVER license for the node in question. You’ve mentioned client a few times now. :) -- Stephen > On May 15, 2018, at 6:48 PM, Lohit Valleru wrote: > > Thanks Christof. > > The usecase is

Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7

2018-05-15 Thread Stephen Ulmer
> On May 15, 2018, at 5:28 AM, Jonathan Buzzard > wrote: > > On Mon, 2018-05-14 at 09:30 -0400, Felipe Knop wrote: >> All, >> >> Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is >> planned for upcoming PTFs on 4.2.3 and 5.0. Since code

Re: [gpfsug-discuss] Recharging where HSM is used

2018-05-03 Thread Stephen Ulmer
I work for a partner, but I occasionally have to help customers work on cost justification that includes charge-back (or I encourage them to do show-back to alleviate some political concerns). I’d also like to see what people are doing around this. If I may ask a question, what is the goal for

[gpfsug-discuss] Will GPFS recognize re-sized LUNs?

2018-04-25 Thread Stephen Ulmer
I’m 80% sure that the answer to this is "no", but I promised a client that I’d get a fresh answer anyway. If one extends a LUN that is under an NSD, and then does the OS-level magic to make that known to everyone that could write to it, can the NSD be extended to use the additional space? I

Re: [gpfsug-discuss] Preferred NSD

2018-03-14 Thread Stephen Ulmer
Depending on the size... I just quoted something both ways and DME (which is Advanced Edition equivalent) was about $400K cheaper than Standard Edition socket pricing for this particular customer and use case. It all depends. Also, for the case where the OP wants to distribute the file system

Re: [gpfsug-discuss] Removing LUN from host without unconfiguring GPFS filesystem

2018-01-21 Thread Stephen Ulmer
Harold, The way I read your question, no one has actually answered it fully: You want to put the old file system in cold storage for forensic purposes — exactly as it is. You want the NSDs to go away until and unless you need them in the future. QUICK AND DIRTY - YOU SHOULD NOT DO THIS: Set

Re: [gpfsug-discuss] Smallest block quota/limit and file quota/limit possible to set?

2017-12-04 Thread Stephen Ulmer
, it’s about generic *nix permissions. What am I missing? I don’t understand why you would want to use quota to enforce permissions. (There could be a legitimate reason here, but I don’t understand it.) Liberty, -- Stephen Ulmer Sent from a mobile device; please excuse autocorrect silliness

Re: [gpfsug-discuss] Online data migration tool

2017-11-29 Thread Stephen Ulmer
Thank you. -- Stephen > On Nov 29, 2017, at 2:08 PM, Nikhil Khandelwal > wrote: > > Hi, > > I would like to clarify migration path to 5.0.0 from 4.X.X clusters. For all > Spectrum Scale clusters that are currently at 4.X.X, it is possible

Re: [gpfsug-discuss] Spectrum Scale Enablement Material - 1H 2017

2017-10-02 Thread Stephen Ulmer
I’ve been told in the past that the Spectrum Scale Wiki is the place to watch for the most timely information, and there is a way to "follow" the wiki so you get notified of updates. That being said, I’ve not gotten "following" it to work yet so I don’t know what that actually *means*. I’d

Re: [gpfsug-discuss] Replication settings when running mmapplypolicy

2017-06-23 Thread Stephen Ulmer
> On Jun 23, 2017, at 12:58 PM, Marc A Kaplan > wrote: > > I believe that is correct. If not, let us know! > > To recap... when running mmapplypolicy with rules like: > > ... MIGRATE ... REPLICATE(x) ... > > will change the replication
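For illustration, a rule of that shape looks something like this (pool names and the WHERE clause are made up):

  RULE 'mig' MIGRATE FROM POOL 'system' TO POOL 'capacity' REPLICATE(2) WHERE FILE_SIZE > 1073741824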

Re: [gpfsug-discuss] What is an independent fileset? was: mmbackup with fileset : scope errors

2017-05-18 Thread Stephen Ulmer
Each independent fileset is an allocation area, and they are (I believe) handled separately. There are a set of allocation managers for each file system, and when you need to create a file you ask one of them to do it. Each one has a pre-negotiated range of inodes to hand out, so there isn’t a

[gpfsug-discuss] GPFS on PowerVM Shared Storage Pools

2017-03-23 Thread Stephen Ulmer
I have a client who is very enamored with PowerVM Shared Storage Pools (because they work great for them). Has anyone here implemented GPFS on such? I think that technically there’s no reason that it wouldn’t work, but I’ve never actually "owned" an installation with GPFS on SSPs. Any

Re: [gpfsug-discuss] default inode size

2017-03-15 Thread Stephen Ulmer
> On Mar 15, 2017, at 8:37 AM, Lukas Hejtmanek <xhejt...@ics.muni.cz> wrote: > > On Wed, Mar 15, 2017 at 08:22:22AM -0400, Stephen Ulmer wrote: >> You need 4K inodes to store encryption keys. You can also put other useful >> things in there, like extended attribute

Re: [gpfsug-discuss] default inode size

2017-03-15 Thread Stephen Ulmer
You need 4K inodes to store encryption keys. You can also put other useful things in there, like extended attributes and (possibly) the entire file. Are you worried about wasting space? -- Stephen > On Mar 15, 2017, at 5:50 AM, Lukas Hejtmanek

Re: [gpfsug-discuss] Strange timestamp behaviour on NFS via CES

2017-02-03 Thread Stephen Ulmer
That’s a cool one. :) What if you use the "random date" file as a time reference to touch another file (like, 'touch -r file02 file03’)? -- Stephen > On Feb 3, 2017, at 7:46 AM, Andreas Mattsson > wrote: > > I’m having

Re: [gpfsug-discuss] mmrepquota and group names in GPFS 4.2.2.x

2017-01-20 Thread Stephen Ulmer
My list of questions that might or might not be thought provoking: How about the relative position of the items in the /etc/group file? Are all of the failures later in the file than all of the successes? Do any groups have group passwords (parsing error due to “different” line format)? Is

Re: [gpfsug-discuss] LROC

2016-12-21 Thread Stephen Ulmer
Sven, I’ve read this several times, and it will help me to re-state it. Please tell me if this is not what you meant: You often see even common operations (like ls) blow out the StatCache, and things are inefficient when the StatCache is in use but constantly overrun. Because of this, you

Re: [gpfsug-discuss] searching for mmcp or mmcopy - optimized bulk copy for spectrum scale?

2016-12-05 Thread Stephen Ulmer
This is not the answer to not writing it yourself: However, be aware that GNU xargs has the -P x option, which will try to keep x batches running. It’s a good way to optimize the number of threads for anything you’re multiprocessing in the shell. So you can build a list and have xargs fork x
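For example, a sketch of that pattern for a bulk copy (paths are illustrative):

  cd /gpfs/src
  find . -type f -print0 > /tmp/filelist
  # eight cp invocations at a time, up to 100 files each, recreating the tree under /gpfs/dst:
  xargs -0 -a /tmp/filelist -P 8 -n 100 cp -a --parents -t /gpfs/dst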

Re: [gpfsug-discuss] Strategies - servers with local SAS disks

2016-12-01 Thread Stephen Ulmer
Just because I don’t think I’ve seen you state it: (How much) Do you care about the data? Is it scratch? Is it test data that exists elsewhere? Does it ever flow from this storage to any other storage? Will it be dubbed business critical two years after they swear to you that it’s not

Re: [gpfsug-discuss] Strategies - servers with local SAS disks

2016-11-30 Thread Stephen Ulmer
> <http://www-03.ibm.com/systems/storage/spectrum/> > <http://www-03.ibm.com/systems/storage/spectrum/scale/> > <https://www.ibm.com/marketplace/cloud/object-storage/us/en-us> > > 2300 Dulles Station Blvd > Herndon, VA 20171-6133 > U

Re: [gpfsug-discuss] Strategies - servers with local SAS disks

2016-11-30 Thread Stephen Ulmer
I don’t understand what FPO provides here that mirroring doesn’t: You can still use failure domains — one for each node. Both still have redundancy for the data; you can lose a disk or a node. The data has to be re-striped in the event of a disk failure — no matter what. Also, the FPO license

Re: [gpfsug-discuss] filesystem thresholds in gui alerting

2016-10-26 Thread Stephen Ulmer
It seems prudent to note here that a file system that reports 100% full can have quite a bit of free space. Take for example a 10TB file system that is no longer written to — no one wants to use 100GB just to make the alert color green. This gets silly very quickly at even “medium” sizes...

Re: [gpfsug-discuss] Hardware refresh

2016-10-11 Thread Stephen Ulmer
I think that the OP was asking why not expand the existing cluster with the new hardware, and just make a new FS? I’ve not tried to make a cluster talk AFM to itself yet. If that’s impossible, then there’s one good reason to make a new cluster (to use AFM for migration). Liberty, -- Stephen

Re: [gpfsug-discuss] Blocksize

2016-09-26 Thread Stephen Ulmer
ield > using 1MB or even smaller blocksize on RAID stripes of 2 MB or above and your > performance will be significantly impacted by that. > > Sven > > ------ > Sven Oehme > Scalable Storage Research > email: oeh...@us.ibm.com > Ph

Re: [gpfsug-discuss] Blocksize

2016-09-23 Thread Stephen Ulmer
Not to be too pedantic, but I believe the subblock size is 1/32 of the block size (which strengthens Luis’s arguments below). I thought the original question was NOT about inode size, but about metadata block size. You can specify that the system pool have a different block size from
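For example, something like this at file system creation time, assuming a metadata-only system pool (sizes are just for illustration):

  mmcrfs fs1 -F nsd.stanza -B 4M --metadata-block-size 256K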

Re: [gpfsug-discuss] Learn a new cluster

2016-09-23 Thread Stephen Ulmer
This was going to be my exact suggestion. My short to-learn list includes learn how to look inside a gpfs.snap for what I want to know. I’ve found the ability to do this with other snapshot bundles very useful in the past (for example I’ve used snap on AIX rather than my own scripts in some

Re: [gpfsug-discuss] Weirdness with 'mmces address add'

2016-09-07 Thread Stephen Ulmer
Hostnames can have many A records. IPs *generally* only have one PTR (that's not a hard restriction, but multiple PTRs are not recommended). Just knowing that, you can see why allowing names would create more questions than it answers. So if it did take names instead of IP addresses, it would usually