OOPS... mmbackup uses mmapplypolicy.
Unfortunately the script "mmapplypolicy" is a little "too smart". When
you use the "-N mynode" parameter it sees that you are referring to just
the node upon which
you are executing and does not pass the -N argument to the underlying
tsapolicy command.
How would that happen? The ID may not be random, but it is a long string of digits:
[root@n2 gpfs-git]# mmlscluster
GPFS cluster information
GPFS cluster name: madagascar.frozen
GPFS cluster id: 7399668614468035547
ANYHOW, I think you can
pool=xtra
servers=bog-xxx
This starts out sparse, and the filesystem will dynamically "grow" as GPFS writes to it...
But I have no idea how well this will work for a critical "production"
system...
tx, marc kaplan.
From: "Allen, Benjamin S." <bsal...@a
net/software/irixintro/documents/xfs-whitepaper.html
Sounds somewhat similar to QoS, but focused on giving applications
guaranteed bandwidth, not IOPS.
-jf
On Wed, 15 Jun 2016 at 00:08, Marc A Kaplan <makap...@us.ibm.com> wrote:
Yes, in QOS for 4.2.0 there are some simple assumptions that m
Jez,
Regarding your recent post. Do the mmchpolicy and mmapplypolicy
commands have sufficient functionality for your purposes?
Are you suggesting some improvements? If so, what? Please provide
examples and/or specific suggestions.
WRT your numbered items:
(1) `mmchpolicy fsname -I
Thanks for asking!
Prior to Release 4.2, the system pool was the default pool for storing file data, and you had to write at least one SET POOL policy rule to make use of any NSDs that you assigned to some other pool.
Now if a file system is formatted or upgraded to level 4.2 or higher, and
there
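For anyone who hasn't written one, a minimal placement policy of the kind described above might look like this (the pool name 'data' is hypothetical):

/* route all new file data away from the system pool */
RULE 'default' SET POOL 'data'

Install it with: mmchpolicy fsname policyfile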
Hint: tsdbfs
More hints:
https://www.ibm.com/developerworks/community/forums/html/topic?id=----14215697
If you mess up, you didn't hear this from me ;-)
From: "Buterbaugh, Kevin L"
To: gpfsug main discussion list
be
the norm in such an implementation.
Regarding dry-run functionality: this can be achieved globally as follows:
setDryRun(True)
or as a more granular decorator:
@dryrun
def delete_filesets():
    ...
Jez
Yeah. Try first changing the configuration so it does not depend on onegig. Then you may want to delete the node class.
Any ideas on how to get out of this?
[root@gpfs01 ~]# mmlsnodeclass onegig
Node Class Name Members
-
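A hedged sketch of that recovery sequence, assuming 'onegig' was used to scope a pagepool override (substitute whichever attribute actually references the class):

mmchconfig pagepool=DEFAULT -N onegig   # drop the override scoped to the node class
mmdelnodeclass onegig                   # now the class can be deleted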
Since this official IBM website (pre)announces transparent cloud tiering
...
http://www.ibm.com/developerworks/servicemanagement/tc/gpfs/evaluate.html
And since Oesterlin mentioned Cluster Export Service (CES), please allow
me to
You may also want to use and/or adapt samples/ilm/tspcp which uses
mmapplypolicy to drive parallel cp commands.
The script was written by my colleague and manager, but I'm willing to
entertain
questions and suggestions...
Here are some of the comments:
# Run "cp" in parallel over a list of
Just to add...
Spectrum Scale is no different than most other file systems in this
respect. It assumes the disk system and network systems will detect I/O
errors, including data corruption.
And it usually will ... but there are, as you've discovered, scenarios where it cannot.
IBM ESS, GSS, GNR, and Perseus refer to the same "declustered" IBM
raid-in-software technology with advanced striping and error recovery.
I just googled some of those terms and hit this summary, not written by IBM:
http://www.raidinc.com/file-storage/gss-ess
Also, this is now a "mature"
You may need/want to set up an nsddevices script to help GPFS find all
your disks. Google it! Or ...
http://www.ibm.com/support/knowledgecenter/STXKQY_4.1.1/com.ibm.spectrum.scale.v4r11.adm.doc/bl1adm_nsddevices.htm
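A minimal sketch, assuming Linux device-mapper multipath devices; start from the shipped /usr/lpp/mmfs/samples/nsddevices.sample rather than from this:

# /var/mmfs/etc/nsddevices : consulted by GPFS device discovery
for dev in /dev/mapper/mpath*
do
  echo "mapper/${dev##*/} dmm"   # emit "deviceName deviceType"; dmm = device-mapper multipath
done
return 0   # per the shipped sample: 0 = use only the devices listed above, 1 = also run the built-in discovery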
For a write or create operation ENOSPC would make some sense.
But if the file already exists and I'm just opening for read access I
would be very confused by ENOSPC.
How should the system respond: "Sorry, I know about that file, I have it
safely stored away in HSM, but it is not available
Since you write " so if the file is not touched for number of days, it's
moved to a tape" -
that is what we call the HSM feature. This is additional function beyond
backup. IBM has two implementations.
(1) TSM/HSM now called IBM Spectrum Protect.
My understanding is (someone will correct me if I'm wrong) ...
GPFS does not have true deadlock detection. As you say, it has timeouts.
The argument is: as a practical matter, it makes little difference to a sysadmin or user -- if things are gummed up "too long", they start to smell like a
IBM HSM products have always supported unprivileged, user-triggered recall of any file. I am not familiar with any particular GUI, but from the CLI it's easy enough:
dd if=/pathtothefileyouwantrecalled of=/dev/null bs=1M count=2 &   # pulling the first few blocks will trigger a complete recall
The key point is that you must create the file system so that it "looks" like a 3.5 file system. See mmcrfs ... --version. Tip: create or find a test filesystem back on the 3.5 cluster and look at the version string: mmlsfs xxx -V. Then go to the 4.x system and try to create a file system
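A hedged sketch of the sequence (device names and the version value are hypothetical; use whatever your 3.5 system actually reports):

mmlsfs oldfs -V                                  # on the 3.5 cluster: note the version string
mmcrfs newfs -F nsd.stanzas --version 3.5.0.7    # on the 4.x cluster: create at that level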
Jon, I don't doubt your experience, but it's not quite fair or even
sensible to make a decision today based on what was available in the GPFS
2.3 era.
We are now at GPFS 4.2, with support for 3-way replication and FPO.
Also we have RAID controllers, IB, "Native Raid", and the ESS and GSS solutions
"I followed that guide using GPFS 3.4 and TSM 6.4 with HSM and it's been
working perfectly since then. I was even able to remove dsmscoutd online,
node-at-a-time back when I made the transition. The performance change was
revolutionary and so is the file selection.
We have large filesystems
Bob,
You can read ioql as "I/O queue length" (outside of GPFS) and qsdl as "QoS queue length" (at the QoS throttle within GPFS), computed from the average delay introduced by the QoS subsystem.
These "queue lengths" are virtual or fictional -- they are computed by observing average service times
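You can watch these statistics per pool with mmlsqos; a hedged example with a hypothetical filesystem name:

mmchqos fs0 --enable pool=*,maintenance=500IOPS,other=unlimited
mmlsqos fs0 --seconds 60    # reports iops and the ioql/qsdl queue lengths over the last minute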
"I assume that creating several smaller LUNs on each FlashSystem in the
same failure group is still preferable to one big LUN so we get more IO
queues to play with?"
Traditionally, more spindles, more disk arms working in parallel => better
overall performance.
HOWEVER Flash doesn't work that
I think there are two points here:
A) RAID striping is probably a "loser" for GPFS metadata.
B) RAID mirroring for your metadata may or may not be faster and/or more
reliable than GPFS replication. Depending on your requirements and
assumptions for fault-tolerance one or the other might be the
I think you found your answer: TSM tracks files by pathname.
So... if a file had path /w/x/y/z on Monday but was moved to /w/x/q/p on Tuesday, how would TSM "know" it was the same file...?
It wouldn't! To TSM it seems you've deleted the first file and created the second.
Technically there are
My little suggestion is to put it back where it was. And then add a
symlink from the desired (new) path to the TSM-needs-this actual path.
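Using the example paths from earlier in this thread, that amounts to:

mv /w/x/q/p /w/x/y/z       # put the file back at the path TSM already tracks
ln -s /w/x/y/z /w/x/q/p    # make the desired new path a symlink to it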
From a computer science point of view, this is a simple matter of
programming. Provide yet-another-option on filelist processing that
supports encoding or escaping of special characters.
Pick your poison!
We and many others have worked through this issue and provided solutions
in products
I understand you are having problems with your cluster, but you do NOT need to have GPFS "started" to display and/or change configuration parameters. You do need at least a majority of the nodes to be up and in communication (e.g. they can talk to each other over TCP/IP).
--ccr-enable
Enables the
Based on experiments on my test cluster, I can assure you that you can
list and change GPFS configuration parameters with CCR enabled while GPFS
is down.
I understand you are having a problem with your cluster, but you are
incorrectly disparaging the CCR.
In fact you can mmshutdown -a AND
mmapplypolicy uses the inodescan API which to gain overall speed, bypasses
various buffers, caches, locks, ... and just reads inodes "directly" from
disk.
So the "view" of inodescan is somewhat "behind" the overall state of the
live filesystem as viewed from the usual Posix APIs, such as
Allow me to restate and demonstrate:
Even if systemd or any explicit kill signals destroy any/all running mmcr*
and mmsdr* processes,
simply running mmlsconfig will fire up new mmcr* and mmsdr* processes. For
example:
## I used kill -9 to kill all mmccr, mmsdr, lxtrace, ... processes
[root@n2
Marc, what you are saying is anything outside a particular data center
shouldn’t be part of a cluster? I’m not sure marketing is in line with
this then.
At the very least, LOOK at the messages output by the mmapplypolicy
command at the beginning and end.
The "occupancy" stats for each pool are shown BEFORE and AFTER the
command does its work.
In even more detail, it shows you how many files and how many KB of data
were (or will be or would
mmchqos fs --enable ... maintenance=1234iops ...
will apply the new settings to all currently running and future maintenance commands.
There is just a brief delay (I think it is well under 30 seconds) for the
new settings to be propagated and become effective on each node.
You can use
enly doesn't
work like it says on the tin ;-)
I can test 1 and 2 relatively easily, but 3 is a bit more difficult for us
to test out as the FS we want to use SOBAR on is 4.2 already.
Simon
I worked on some aspects of SOBAR, but without studying and testing the
commands - I'm not in a position right now to give simple definitive
answers -
having said that
Generally your questions are reasonable and the answer is: "Yes it should
be possible to do that, but you might be going
I think you have the sign wrong on your weight.
A simple way of ordering the files oldest first is
WEIGHT(DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) adding 100,000
does nothing to change the order.
WEIGHT can be any numeric SQL expression. So come to think of it
WEIGHT( -
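Put together, an oldest-first migration rule might read like this (pool names and threshold are hypothetical):

RULE 'oldestFirst' MIGRATE FROM POOL 'fast' THRESHOLD(90,75)
  WEIGHT(DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME))
  TO POOL 'capacity'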
Diffing file lists can be fast - IF you keep the file lists sorted by a
unique key, e.g. the inode number.
I believe that's how mmbackup does it. Use the classic set difference
algorithm.
Standard diff is designed to do something else and is terribly slow on
large file lists.
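For illustration only (not mmbackup's actual code), the classic set-difference on inode-sorted lists with comm:

sort -k1,1n monday.list  > monday.sorted
sort -k1,1n tuesday.list > tuesday.sorted
comm -13 monday.sorted tuesday.sorted   # only in Tuesday's list: new or changed files
comm -23 monday.sorted tuesday.sorted   # only in Monday's list: deleted files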
You can leave out the WHERE ... AND POOL_NAME LIKE 'deep' - that is
redundant with the FROM POOL 'deep' clause.
In fact, at a slight additional overhead in mmapplypolicy processing (due to being checked a little later in the game), you can leave out
MISC_ATTRIBUTES NOT LIKE '%2%' since
the code
I've been told that it is a big leap to go from supporting GSS and ESS to
allowing and supporting native raid for customers who may throw together
"any" combination of hardware they might choose.
In particular the GNR "disk hospital" functions...
I believe the OP left out a step.
I am not saying this is a good idea, but ...
One must change the replication factors marked in each inode for each
file...
This could be done using an mmapplypolicy rule:
RULE 'one' MIGRATE FROM POOL 'yourdatapool' TO POOL 'yourdatapool'
REPLICATE(1,1)
When you write something like "mmbackup takes ages" - that lets us know how you feel, kinda.
But we need some facts and data to make a determination if there is a real
problem and whether and how it might be improved.
Just to do a "back of the envelope" estimate of how long backup operations
fileHeatLossPercent=10, fileHeatPeriodMinutes=1440
means any file that has not been accessed for 1440 minutes (24 hours = 1
day) will lose 10% of its Heat.
So if its heat was X at noon today, it will be 0.90X tomorrow, 0.81X the next day, and (0.90)**k * X on the k'th day.
After 63 fileHeatPeriods, we
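For reference, those knobs are ordinary cluster configuration variables; a hedged sketch of setting them, with the implied decay spelled out:

mmchconfig fileHeatPeriodMinutes=1440,fileHeatLossPercent=10
# after k idle periods: heat = X * 0.90**k ; 0.90**63 is about 0.0013, i.e. roughly 0.1% of X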
(I can answer your basic questions, Sven has more experience with tuning
very large file systems, so perhaps he will have more to say...)
1. Inodes are packed into the file of inodes. (There is one file of all
the inodes in a filesystem).
If you have metadata-blocksize 1MB and 4KB inodes, you will have 256 inodes per metadata block.
each
RAID 6 LUN and splitting it up into multiple logical volumes (all that
done on the storage array, of course) and then presenting those to GPFS as
NSDs???
Or have I gone from merely asking stupid questions to Trump-level
craziness ;-)
Kevin
FILE_HEAT value is a non-negative floating point value. The value can
grow quite large -- in theory it can grow to the IEEE 64 bit maximum
floating point value and maybe even to "infinity".
Each time a file is read completely from disk, its FILE_HEAT value increases by 1. If a fraction, f,
routers, having used them in the past.
I suppose I could always smush some extra HCAs in the NSD servers and do
it that way but that got really ugly when I started factoring in
omnipath. Something like an LNET router would also be useful for GNR
users who would like to present to both an IB and
Thanks. That example is simpler than I imagined. Question: If that was
indeed your situation and you could afford it, why not just go totally
with infiniband switching/routing?
Are not the routers just a hack to connect Intel OPA to IB?
Ref:
I think at least the most popular old forms still work, even if the
documentation and usage messages were scrubbed.
So generally, for example, neither your scripts nor your fingers will
break. ;-)
From: "Sobey, Richard A"
To:
You asked ... "We are wishing to migrate data according to the heat onto
different
storage categories (expensive --> cheap devices)"
We suggest a policy rule like this:
Rule 'm' Migrate From Pool 'Expensive' To Pool 'Thrifty' Threshold(90,75)
Weight(-FILE_HEAT) /* minus sign! */
Which
OKAY, I'll say it again. inodes are PACKED into a single inode file. So
a 4KB inode takes 4KB, REGARDLESS of metadata blocksize. There is no
wasted space.
(Of course if you have metadata replication = 2, then yes, double that.
And yes, there is overhead for indirect blocks (indices),
Read about GROUP POOL - you can call as often as you like to "repack" the
files into several pools from hot to cold. Of course, there is a cost to
running mmapplypolicy...
So maybe you'd just run it once every day or so...
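A hedged sketch of such a repack, with hypothetical pool names:

RULE 'tiers'  GROUP POOL 'gpool' IS 'ssd' LIMIT(90) THEN 'fast' LIMIT(85) THEN 'capacity'
RULE 'repack' MIGRATE FROM POOL 'gpool' TO POOL 'gpool' WEIGHT(FILE_HEAT)

Hot files percolate toward the pools named first, cold files toward the last.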
Consider using samples/ilm/mmfind (or mmapplypolicy with a LIST ... SHOW
rule) to gather the stats much faster. Should be minutes, not hours.
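A hedged sketch of the LIST ... SHOW approach (rule name and paths hypothetical):

RULE 'stats' LIST 'allfiles' SHOW(VARCHAR(FILE_SIZE) || ' ' || VARCHAR(USER_ID))

mmapplypolicy /gpfs/fs1 -P stats.pol -I defer -f /tmp/flist
# the records land in /tmp/flist.list.allfiles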
Pretty easy to determine experimentally, with a few trials.
And the answer is 3876 for a simple file with just a SELinux EA, which is apparently created by default on my test system.
So apparently there are actually only 220 bytes of other metadata in this case (4096-3876=220).
As they say YMMV ;-)
New FS? Yes there are some good reasons.
New cluster? I did not see a compelling argument either way.
From: "mark.b...@siriuscom.com"
To: gpfsug main discussion list
Date: 10/11/2016 03:34 PM
Subject:Re:
bill.
Best Regards,
Shankar Balasubramanian
STSM, AFM & Async DR Development
IBM Systems
Bangalore - Embassy Golf Links
India
"Marc A Kaplan" ---10/12/2016 11:25:20 PM---Yes, you can AFM within a
single cluster, in fact with just a single node. I just set this up on m
From:
new cluster (to use AFM
for migration).
Liberty,
--
Stephen
On Oct 11, 2016, at 8:40 PM, mark.b...@siriuscom.com wrote:
Only compelling reason for new cluster would be old hardware is EOL or no
longer want to pay maintenance on it.
Just looking/scanning the latest (4.2.2) Spectrum Scale Command (and
Programming) Reference I only found a few commands that officially are
documented as supporting a -Y option: mmnfs, mmsmb, mmuserauth.
But as many of you have discovered, -Y is accepted and yields
"interesting" output for
I think you "got it" - despite the name "remote" - mmremotefs is an
access method and a way of organizing your data and machines into several
clusters - which can work great if your network(s) are fast enough. For
example you can have one or more client clusters with no GPFS (Spectrum Scale)
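A hedged sketch of the client-cluster side (names hypothetical; the mmauth key exchange is omitted):

mmremotecluster add storecluster.example.com -n node1,node2    # contact nodes in the owning cluster
mmremotefs add rfs1 -f fs1 -C storecluster.example.com -T /gpfs/rfs1
mmmount rfs1 -a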
You can set inode size to any one of 512, 1024, 2048 or 4096 when you
mmcrfs. Try it on a test system.
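For example (a sketch; the stanza file name is hypothetical):

mmcrfs tfs -F nsd.stanzas -i 4096    # -i sets the inode size in bytes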
You want all the metadata of a file to fit into the inode. Also small
files, and small directories, can be stored in their inodes.
On a test file system you can examine how much of an inode
You can control the load of mmrestripefs (and all maintenance commands) on
your system using mmchqos ...
From: "Olaf Weiser"
To: gpfsug main discussion list
Date: 03/15/2017 04:04 PM
Subject:Re: [gpfsug-discuss]
As primary developer of mmapplypolicy, please allow me to comment:
1) Fast access to metadata in system pool is most important, as several
have commented on. These days SSD is the favorite, but you can still go
with "spinning" media.
If you do go with disks, it's extremely important to spread
>>>Here is the command I use to apply a policy:
mmapplypolicy gsfs0 -P policy.txt -N
scg-gs0,scg-gs1,scg-gs2,scg-gs3,scg-gs4,scg-gs5,scg-gs6,scg-gs7 -g
/srv/gsfs0/admin_stuff/ -I test -B 500 -A 61 -a 4
That takes approximately 10 minutes to do the whole scan. The "-B 500
-A 61 -a 4"
Well, I'm glad we followed Mr. S. Holmes' dictum, which I'll paraphrase: eliminate the impossible, and what remains, even if it seems improbable, must hold.
BTW - you may want to look at mmclone. Personally, I find the doc and
terminology confusing, but mmclone was designed to efficiently
(Bryan B asked...)
Open a PMR. The first response from me will be ... Run the mmapplypolicy
command again, except with additional option `-d 017` and collect output
with something equivalent to `2>&1 | tee
/tmp/save-all-command-output-here-to-be-passed-along-to-IBM-service ` If
you are
That is a summary message. It says one way or another, the command has
dealt with 1.6 million files. For the case under discussion there are no
EXTERNAL pools, nor any DELETions, just intra-GPFS MIGRATions.
[I] A total of 1620263 files have been migrated, deleted or processed by
an EXTERNAL
ANYONE else reading this saga? Who uses mmapplypolicy to migrate files
within multi-TB file systems? Problems? Or all working as expected?
--
Well, again mmapplypolicy "thinks" it has "chosen" 1.6 million files whose
total size is 61 Terabytes and migrating those will bring the occupancy
Let's look at how mmapplypolicy does the reckoning.
Before it starts, it sees your pools as:
[I] GPFS Current Data Pool Utilization in KB and %
Pool_Name        KB_Occupied      KB_Total         Percent_Occupied
gpfs23capacity   55365193728      124983549952     44.297984614%
Kums is our performance guru, so weigh that appropriately and relative to
my own remarks...
Nevertheless, I still think RAID-5 or RAID-6 is a poor choice for GPFS metadata. The write cache will NOT mitigate the read-modify-write problem of a workload that has a random or hop-scotch access pattern
n’t
gone into politics, has it?!? ;-)
Thanks - and again, if I should open a PMR for this please let me know...
Kevin
Oops... If you want to see the list of what would be migrated: '-I test -L 2'. If you want to migrate and see each file migrated: '-I yes -L 2'.
I don't recommend -L 4 or higher, unless you want to see the files that do not match your rules.
-L 3 will show you all the files that match the
The "ILM" chapter in the Admin Guide has some tips, among which:
18. You can convert a time interval value to a number of seconds with the
SQL cast syntax, as in the
following example:
define([toSeconds],[(($1) SECONDS(12,6))])
define([toUnixSeconds],[toSeconds($1 - '1970-1-1@0:00')])
RULE
(IMO) NFSv4 ACLs are complicated. Confusing. Difficult. Befuddling.
PIA. Before questioning the GPFS implementation, see how they work in
other file systems.
If GPFS does it differently, perhaps there is a rationale, or perhaps
you've found a bug.
I'm kinda curious... I've noticed a few messages on this subject -- so I went to the doc.
The doc seems to indicate there are some circumstances where removing the
tape with the appropriate command and options and then adding it back will
result in the files on the tape becoming available
If you haven't already, measure the time directly on the CES node command
line skipping Windows and Samba overheads:
time ls -l /path
or
time ls -lR /path
Depending which you're interested in.
From: "Sven Oehme"
To: gpfsug main discussion list
I believe that is correct. If not, let us know!
To recap... when running mmapplypolicy with rules like:
... MIGRATE ... REPLICATE(x) ...
will change the replication factor to x for each file selected by this rule and chosen for execution.
... MIGRATE ... /* no REPLICATE keyword */
will leave the replication factor unchanged.
When I see "independent fileset" (in Spectrum/GPFS/Scale) I always think
and try to read that as "inode space".
An "independent fileset" has all the attributes of an (older-fashioned)
dependent fileset PLUS all of its files are represented by inodes that are
in a separable range of inode
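Which is why an independent fileset is created with its own inode space, e.g. (names and limit hypothetical):

mmcrfileset fs1 projA --inode-space new --inode-limit 1000000
mmlinkfileset fs1 projA -J /gpfs/fs1/projA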
of candidate and EXCLUDEed or
LISTed files.
6
Displays the same information as 5, plus non-candidate files and their attributes.
Thanks
Jaime
Quoting "Marc A Kaplan" <makap...@us.ibm.com>:
1. As I surmised, and
filesystems within filesystems. Moving a
file from one fileset to another requires a copy operation. There is no
fast move nor hardlinking.
--marc
From: "Jaime Pinto" <pi...@scinet.utoronto.ca>
To: "gpfsug main discussion list" <gpfsug-discuss@spectr
-Platz 1, Building 449, Room 202
D-76344 Eggenstein-Leopoldshafen
Tel: +49 721 608 24916
Fax: +49 721 608 24972
Email: petz...@kit.edu
www.scc.kit.edu
KIT – The Research University in the Helmholtz Association
Since 2010, KIT has been certified as a family-friendly university.
[attac
Upgrading from GPFS 4.2.x to GPFS 4.2.y should not "break" TSM. If it
does, someone goofed, that would be a bug. (My opinion)
Think of it this way. TSM is an application that uses the OS and the
FileSystem(s). TSM can't verify it will work with all future versions of
OS and Filesystems,
To generate a list of files and their file heat values...
define([TS],esyscmd(date +%Y-%m-%d-%H-%M | tr -d '\n'))
RULE 'x1' EXTERNAL LIST 'heat.TS' EXEC ''
RULE 'x2' LIST 'heat.TS' SHOW(FILE_HEAT) WEIGHT(FILE_HEAT)  /* use a WHERE clause to select or exclude files */
mmapplypolicy
Read the doc again. Specify both -g and -N options on the command line to
get fully parallel directory and inode/policy scanning.
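For example (hypothetical names; -g must name a global work directory visible to all the helper nodes):

mmapplypolicy /gpfs/fs1 -P rules.pol -N n1,n2,n3,n4 -g /gpfs/fs1/.mmwork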
I'm curious as to what you're trying to do with THRESHOLD(0,100,0)...
Perhaps premigrate everything (that matches the other conditions)?
You are correct about
BTW - we realize that mmapplypolicy -g and -N is a "gotcha" for some
(many?) customer/admins -- so we're considering ways to make that easier
-- but without "breaking" scripts and callbacks and what-have-yous that
might depend on the current/old defaults... Always a balancing act --
Which features of 5.0 require a not-in-place upgrade of a file system?
Where has this information been published?
This redbook
http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/3af3af29ce1f19cf86256c7100727a9f/335d8a48048ea78d85258059006dad33/$FILE/SOBAR_Migration_SpectrumScale_v1.0.pdf
has these and other hints:
-B blocksize, should match the file system block size of the source
system, but can also be
On Thu, Nov 30, 2017 at 01:34:05PM -0500, Marc A Kaplan wrote:
> It would be interesting to know how well Spectrum Scale large directory
> and small file features work in these sort of DB-ish applications.
>
Indeed, for a very large directory you might get some speedup using
samples/ilm/mmfind directory -ls -maxdepth 1
There are some caveats, the same as those for the command upon which
mmfind rests, mmapplypolicy.
Assuming you have a recent version of Spectrum Scale...
You can use ACTION(SetXattr(...)) in mmapplypolicy {MIGRATE,LIST} rules
and/or in {SET POOL} rules that are evaluated at file creation time.
Later...
You can use WHERE Xattr(...) in any policy rules to test/compare
an
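A hedged sketch (attribute name, value, and pool names hypothetical):

/* tag big files at list time */
RULE 'tag' LIST 'tagged' ACTION(SetXattr('user.project','alpha')) WHERE FILE_SIZE > 1073741824
/* later, select on the tag */
RULE 'mig' MIGRATE FROM POOL 'fast' TO POOL 'capacity' WHERE XATTR('user.project') LIKE 'alpha'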
It's not clear that this is a problem or malfunction.
Customer should contact IBM support and be ready to transmit copies of the
cited log files and other mmbackup command output (stdout and stderr
messages) for analysis.
Also mmsnap output.
From: "IBM Spectrum Scale"
To:
mmapplypolicy ... -N nodeClass ...
will use the nodes in nodeClass as helper nodes to get its work done.
mmdsh -N nodeClass command ...
will run the SAME command on each of the nodes -- probably not what you
want to do with mmapplypolicy.
To see more about what mmapplypolicy is doing use
My guess is you have some expectation of how things "ought to be" that
does not match how things actually are.
If you haven't already done so, put some diagnostics into your script,
such as
env
hostname
echo "my args are: $*"
And run mmapplypolicy with an explicit node list:
I googled GPFS_FCNTL_GET_DATABLKDISKIDX
and found this discussion:
https://www.ibm.com/developerworks/community/forums/html/topic?id=db48b190-4f2f-4e24-a035-25d3e2b06b2d=50
In general, GPFS files ARE deliberately "fragmented", but we don't say that -- we say they are "striped" over many disks.
Placement policy rules "SET POOL 'xyz'... " may only name GPFS data pools.
NOT "EXTERNAL POOLs" -- EXTERNAL POOL is a concept only supported by
MIGRATE rules.
However you may be interested in "mmcloudgateway" & co, which is all about
combining GPFS with Cloud storage.
AKA IBM Transparent Cloud Tiering.
Thanks Jonathan B for your comments and tips on experience using
mmapplypolicy and policy rules. Good to see that some of the features we
put into the product are actually useful.
For those not quite as familiar, and have come somewhat later to the game,
like Matthias K - I have a few
Notwithstanding JAB's remark that this may not be necessary:
Some customers/admins will want to "stage" a fileset in anticipation of
using the data therein.
Conversely you can "destage" - just set the TO POOL accordingly.
This can be accomplished with a policy rule like:
RULE 'stage' MIGRATE FOR
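A hedged completion of that sketch (fileset and pool names hypothetical):

RULE 'stage' MIGRATE FOR FILESET('projA') TO POOL 'fast'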