Hi All,
Normally we never use the install toolkit, but deploy GPFS from a config
management tool.
I see there are now RPMs such as gpfs.license.dm, are these actually
required to be installed? Everything seems to work well without them, so
just interested. Maybe the GUI uses them?
Thanks
Simon
Hi
When mmbackup has passed the preflight stage (pretty quickly) you'll
find the autogenerated ruleset as /var/mmfs/mmbackup/.mmbackupRules*
Best,
Jez
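A small sketch of capturing those generated rules before the next run overwrites them (the helper name and destination directory are my own; the /var/mmfs/mmbackup path is from the note above):

```shell
#!/bin/sh
# save_rules SRC DEST: copy any mmbackup-generated ruleset files
# (.mmbackupRules*) out of SRC into DEST, so a later mmbackup run
# does not overwrite the copy you want to inspect or edit.
save_rules() {
    src=$1; dest=$2
    mkdir -p "$dest"
    for f in "$src"/.mmbackupRules*; do
        [ -e "$f" ] && cp -p "$f" "$dest"/
    done
    return 0
}

# On a real cluster you would call (path from the note above):
#   save_rules /var/mmfs/mmbackup /root/mmbackup-rules
```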
On 18/05/17 20:02, Jaime Pinto wrote:

Ok Mark
I'll follow your option 2) suggestion, and capture what mmbackup is
using as a rule first, then modify it.
I imagine by 'capture' you are referring to the -L n level I use?
-L n
Controls the level of information displayed by the
mmbackup command. Larger values
1. As I surmised, and I now have verification from Mr. mmbackup, mmbackup
wants to support incremental backups (using what it calls its shadow
database) and keep both your sanity and its sanity -- so mmbackup limits
you to either full filesystem or full inode-space (independent fileset.)
If
Marc
The -P option may be a very good workaround, but I still have to test it.
I'm currently trying to craft the mm rule, as minimalist as possible,
however I'm not sure about what attributes mmbackup expects to see.
Below is my first attempt. It would be nice to get comments from
Good afternoon all, my name is John Hearns.
I am currently working with the HPC Team at ASML in the Netherlands, the market
sector is manufacturing.
Hi All,
A date for your diary, #SSUG18 in the UK will be taking place on:
April 18th, 19th 2018
Please mark it in your diaries now :-)
We'll confirm other details etc nearer the time, but date is confirmed.
Simon
So it could be that we didn’t really know what we were doing when our system
was installed (and still don’t by some of the messages I post *cough*) but
basically I think we’re quite similar to other shops where we resell GPFS to
departmental users internally and it just made some sense to break
Each independent fileset is an allocation area, and they are (I believe)
handled separately. There are a set of allocation managers for each file
system, and when you need to create a file you ask one of them to do it. Each
one has a pre-negotiated range of inodes to hand out, so there isn’t a
It's crappy, I had to put the command in 10+ times before it would work. Just
keep at it (that's my takeaway, sorry I'm not that technical when it comes to
Kerberos).
Could be a domain replication thing.
Is time syncing properly across all your CES nodes?
Richard
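One way to sanity-check that: collect an epoch timestamp from each CES node at roughly the same moment and flag drift beyond a small tolerance. The node names and the 5-second threshold below are assumptions (Kerberos itself usually tolerates about 5 minutes of skew, but tighter is safer); on a cluster you might gather the times with mmdsh or ssh:

```shell
#!/bin/sh
# check_drift MAX_SKEW T1 T2 ...: given epoch timestamps collected from
# each node, succeed only if the spread (max - min) is within MAX_SKEW
# seconds.
check_drift() {
    max=$1; shift
    lo=$1; hi=$1
    for t in "$@"; do
        [ "$t" -lt "$lo" ] && lo=$t
        [ "$t" -gt "$hi" ] && hi=$t
    done
    [ $((hi - lo)) -le "$max" ]
}

# Example collection (hostnames are placeholders):
#   times=$(for n in ces1 ces2 ces3; do ssh "$n" date +%s; done)
#   check_drift 5 $times && echo "clocks OK" || echo "clock drift!"
```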
Thanks, I was just about to post that, and I guess that is still the reason a
dependent fileset is still the default unless the new --inode-space option is
given at fileset creation.
I do wonder why there is a limit of 1000, whether it’s just IBM not envisaging
any customer needing more than that? We’ve only
Here is one big reason independent filesets are problematic:
A5.13:
Table 43. Maximum number of filesets

  Version of GPFS         Max. dependent filesets   Max. independent filesets
  IBM Spectrum Scale V4   10,000                    1,000
  GPFS V3.5               10,000                    1,000
Another is that each
Jaime,
While we're waiting for the mmbackup expert to weigh in, notice that the
mmbackup command does have a -P option that allows you to provide a
customized policy rules file.
So... a fairly safe hack is to do a trial mmbackup run, capture the
automatically generated policy file, and then
As I understand it,
mmbackup calls mmapplypolicy so this stands for mmapplypolicy too.
mmapplypolicy scans the metadata (the inode file) as requested, depending on
the query supplied.
You can ask mmapplypolicy to scan a fileset, inode space or filesystem.
If scanning a fileset it scans the
Thanks for the explanation Mark and Luis,
It begs the question: why are filesets created as dependent by
default, if the adverse repercussions can be so great afterward? Even
in my case, where I manage GPFS and TSM deployments (and I have been
around for a while), I didn't realize at all
When I see "independent fileset" (in Spectrum/GPFS/Scale) I always think
and try to read that as "inode space".
An "independent fileset" has all the attributes of an (older-fashioned)
dependent fileset PLUS all of its files are represented by inodes that are
in a separable range of inode
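In command terms, the difference shows up at creation time: only --inode-space new gives the fileset its own inode space. A small helper to make the distinction explicit (the helper itself and all device/fileset names are my own; the mmcrfileset flag is documented):

```shell
#!/bin/sh
# mk_fileset_cmd FS NAME TYPE: print the mmcrfileset invocation for a
# dependent or independent fileset (TYPE = dep | indep). Only an
# independent fileset gets its own inode space, which is what lets
# mmbackup/mmapplypolicy scan it in isolation.
mk_fileset_cmd() {
    fs=$1; name=$2; type=$3
    if [ "$type" = indep ]; then
        echo "mmcrfileset $fs $name --inode-space new"
    else
        echo "mmcrfileset $fs $name"
    fi
}

# Example (names are placeholders). After creation the fileset still
# needs linking:  mmlinkfileset fs0 projects -J /gpfs/fs0/projects
#   mk_fileset_cmd fs0 projects indep
```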
We recently migrated several hundred TB from an Isilon cluster to our GPFS
cluster with AFM over NFS gateways, mostly on 4.2.2.2. The main thing we
noticed was that it would not migrate empty directories; we worked around the
issue by getting a list of the missing directories and running
Further investigation and checking shows that 4.2.1 mmafmctl prefetch is
missing empty directories (not files, as I said previously), as noted by the
fix in 4.2.2.3. However, I've found it is also missing symlinks, both dangling
(pointing to files that don't exist) and not.
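A hedged sketch of that workaround: list the empty directories and symlinks on the source, then compare against the target to see what prefetch skipped. The helper name and all paths are placeholders; feeding the resulting list back is then a job for mmafmctl prefetch (or plain rsync/cp for these small objects):

```shell
#!/bin/sh
# find_missing SRC DST: print empty directories and symlinks (relative
# paths) that exist under SRC but not under DST, i.e. the objects this
# thread reports AFM prefetch can skip. The -L check catches dangling
# symlinks, for which -e is false.
find_missing() {
    src=$1; dst=$2
    ( cd "$src" && find . -type l -o -type d -empty ) | sort |
    while IFS= read -r p; do
        [ -e "$dst/$p" ] || [ -L "$dst/$p" ] || printf '%s\n' "$p"
    done
}

# Example (paths are placeholders):
#   find_missing /mnt/isilon/export /gpfs/fs0/migrated > /tmp/missing.list
```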
I can't see any actual data
Hi,
mmbackup with scope fileset works with independent filesets only. The reason is the independent inode space and the ability to run the policy engine separated from the rest of the file system. With dependent inode space this is not possible, so there is no benefit moving to the scope of the
Hi
There is no direct way to convert a dependent fileset to independent, or vice versa.
I would suggest taking a look at chapter 5 of the 2014 redbook; it has lots of definitions about GPFS ILM, including filesets: http://www.redbooks.ibm.com/abstracts/sg248254.html?Open It is not the only