As I said: "MAY be ill-advised".
If you have a good reason to use "mv" then certainly, use it!
But there are plenty of good naming conventions for the scenario you
give...
Like, start a new directory of results every day, week or month...
/fs/experiments/y2019/m12/d30/fileX.ZZZ ...
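For example, untested, with the path layout above used purely as an illustration:

dir=/fs/experiments/$(date +y%Y/m%m/d%d)   # e.g. /fs/experiments/y2019/m12/d30
mkdir -p "$dir" && cp fileX.ZZZ "$dir/"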
Of course,
Yes, that is entirely true; if it were not, then basic POSIX calls like open(2) would be
broken.
https://stackoverflow.com/questions/9847288/is-it-possible-to-use-in-a-filename
Also see if your distribution includes samples/ilm/mmxcp
which, if you are determined to cp or mv from one path to another, shows a
way to do that easily in perl,
using code similar to the aforementioned bin/mmxargs
Here is the path changing part...
...
$src =~ s/'/'\\''/g; # escape any ' within the pathname
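If you are scripting this yourself in plain shell rather than Perl, a hedged sketch (directories are made up) that sidesteps the quoting problem entirely by passing NUL-delimited names:

find /gpfs/data -type f -print0 | xargs -0 -I{} cp -p {} /gpfs/archive/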
Now apart from the mechanics of handling and manipulating pathnames ...
the idea to manage storage by "mv"ing instead of MIGRATEing (GPFS-wise) may
be ill-advised.
I suspect this is a hold-over or leftover from the old days -- when a
filesystem was comprised of just a few storage devices (disk
[gpfsug-discuss] Question about Policies - using mmapplypolicy/EXTERNAL LIST/mmxargs
Sent by:gpfsug-discuss-boun...@spectrumscale.org
On 28/12/2019 19:49, Marc A Kaplan wrote:
> The script in mmfs/bin/mmxargs handles mmapplypolicy EXTERNAL LIST file
> lists perfectly. No need to worry about whitespaces and so forth.
The script in mmfs/bin/mmxargs handles mmapplypolicy EXTERNAL LIST file
lists perfectly. No need to worry about whitespaces and so forth.
Give it a look-see and a try
-- marc of GPFS -
From: Jonathan Buzzard
To: "gpfsug-discuss@spectrumscale.org"
Date: 12/28/2019 10:17
The MIGRATE rule is for moving files from one pool to another, without
changing the pathname or any attributes, except the storage devices holding
the data blocks of the file. It can also be used with "external" pools to
migrate to an HSM system.
"Moving" from one folder to another is a different operation.
IF you have everything properly licensed and then you reconfigure... It
may work okay... But then you may come up short if you ask for IBM support
or service...
So depending how much support you need or desire...
Or take the easier and supported path... And probably accomplish most of
what
Please show us the 2 or 3 mmbackup commands that you would like to run
concurrently.
Peeking into the script, I find:
if [[ $scope == "inode-space" ]]
then
  deviceSuffix="${deviceName}.${filesetName}"
else
  deviceSuffix="${deviceName}"
fi
I believe mmbackup is designed to allow concurrent
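A hedged, untested sketch of what such concurrent runs might look like (junction paths are made up; each run likely needs its own work directories and TSM settings, so check the mmbackup doc):

mmbackup /gpfs/fs0/projA --scope inodespace -t incremental &
mmbackup /gpfs/fs0/projB --scope inodespace -t incremental &
wait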
Along with what Fred wrote, you can look at the mmbackup doc and also peek
into the script and find some options to look at the mmapplypolicy RULEs
used, and also capture the mmapplypolicy output which will better show you
which files and directories are being examined and so forth.
--marc
Certainly fair and understandable to look at alternatives to the IBM
backup/restore/HSM products and possibly mixing-matching.
But thanks, Jez for raising the question:
What do you see as strengths and weaknesses of the alternatives, IBM and
others?
AND as long as you are considering
What about filesystem atime updates? We recently changed the default to
«relatime». Could that maybe influence heat tracking?
-jf
tir. 13. aug. 2019 kl. 11:29 skrev Ulrich Sibiller <
u.sibil...@science-computing.de>:
On 12.08.19 15:38, Ma
My Admin guide says:
The loss percentage and period are set via the configuration variables
fileHeatLossPercent and fileHeatPeriodMinutes. By default, the file access
temperature is not tracked. To use access temperature in policy, the
tracking must first be enabled. To do this, set the two configuration
variables.
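A hedged example of enabling the tracking (values are illustrative, not recommendations):

mmchconfig fileHeatPeriodMinutes=1440,fileHeatLossPercent=10
# after that, WEIGHT(FILE_HEAT) can be used in MIGRATE rules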
Please note that the maxMBpS parameter of mmchconfig is not part of the QoS
features of the mmchqos command.
mmchqos can be used to precisely limit IOPs.
You can even set different limits for NSD traffic originating at different
nodes.
However, use the "force" of QOS carefully! No doubt you can
(Somewhat educated guess.) Somehow a previous incarnation of the mmfsd
daemon was killed, but left its shared segment lying about.
When GPFS is restarted, it discovers the old segment and deallocates it,
etc, etc...
Then the safest, easiest thing to do after going down that error recovery
path is
https://www.ibm.com/support/knowledgecenter/en/SSGSG7_7.1.0/com.ibm.itsm.hsmul.doc/c_mig_stub_size.html
Trust but verify. And try it before you buy it. (Personally, I would have
guessed sub-block, doc says otherwise, but I'd try it nevertheless.)
The simple answer is YES. I think the other replies are questioning
whether you really want something different or more robust against
failures.
From: Aaron Turner
To: "gpfsug-discuss@spectrumscale.org"
Date: 05/14/2019 04:48 AM
Subject:[EXTERNAL]
2TB of extra metadata space for 100M files with ACLs?! That would be
20KB per file! It does seem there's some mistake here.
Perhaps 2GB? Or 20GB? I don't see how we get to 2 terabytes!
ALSO, IIRC GPFS is supposed to use an ACL scheme where identical ACLs are
stored once and each file
If you're into pondering some more tweaks:
-i InodeSize is tunable
system pool : --metadata-block-size is tunable separately from -B
blocksize
On ESS you might want to use different block size and error correcting
codes for (v)disks that hold system pool.
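A hedged sketch of those knobs on the mmcrfs command line (values are illustrative only, not recommendations, and the stanza file name is made up):

mmcrfs gpfs1 -F nsd.stanza -B 4M --metadata-block-size 1M -i 4096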
Generally I think you'd want to
If one googles "GPFS AFM Migration" you'll find several IBM presentations,
white papers and docs on the subject.
Also, I thought one can run AFM between two file systems, both file
systems in the same cluster. Yes I'm saying local cluster == remote
cluster == same cluster.
I thought I did
I don't know the particulars of the case in question, nor much about ESS
rules...
But for a vanilla Spectrum Scale cluster:
1) There is nothing wrong or ill-advised about upgrading software and then
creating a new version 5.x file system... keeping any older file systems
in place.
2) I
... WHERE KB_ALLOCATED=0 AND FILE_SIZE>0
Oh, if you are also working with an HSM or HSM-like manager that can
migrate files -- then you might have to add some additional tests...
This will select files that have some data, but it must all be in the
inode, because no (part of) any data block has been assigned
... WHERE KB_ALLOCATED=0 AND FILE_SIZE>0
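Wrapped into a runnable (but untested) sketch, with the list name and paths made up; the empty EXEC '' simply writes the matches to a file list:

cat > /tmp/ininode.pol <<'EOF'
RULE EXTERNAL LIST 'in_inode' EXEC ''
RULE 'x' LIST 'in_inode' WHERE KB_ALLOCATED = 0 AND FILE_SIZE > 0
EOF
mmapplypolicy gpfs0 -P /tmp/ininode.pol -I defer -f /tmp/ininode -L 1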
From: Jan-Frode Myklebust
To: gpfsug main discussion list
Date: 03/28/2019 10:08 AM
Subject:Re:
K.I.S.S. Try to open and read a file that you stored in GPFS. If good,
proceed. Otherwise wait a second and retry.
Nope, nothing GPFS specific about that. Need not be privileged or root
either. K.I.S.S.
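A minimal sketch of that idea (the path is made up):

until head -c 1 /gpfs/fs0/canaryfile >/dev/null 2>&1; do sleep 1; done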
Today's Topics:
1. Re: Migrating billions of files? mmfind ... mmxcp (Marc A Kaplan)
--
Message: 1
Date: Wed, 6 Mar 2019 10:01:57 -0500
From: "Marc A Kaplan"
To: gpfsug main discussion list
Cc: gpfsug-discuss-boun...
mmxcp may be in samples/ilm; if not, perhaps we can put it on an approved
file sharing service ...
+ mmxcp script, for use with mmfind ... -xargs mmxcp ...
Which makes parallelized file copy relatively easy and super fast!
Usage: /gh/bin/mmxcp -t target -p strip_count source_pathname1
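A hedged example of putting the two together (paths and strip count are made up):

mmfind /gpfs/fs0/olddir -type f -xargs mmxcp -t /gpfs/fs1/newdir -p 3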
Various "leave" / join events may be interesting ... But you've got to
consider that an abrupt failure of several nodes is not necessarily
recorded anywhere! For example, because the would be recording devices
might all lose power at the same time.
We have (pre)shutdown and pre(startup) ...
Trap and record both... If you see a startup without a matching shutdown
you know the shutdown never happened, because GPFS crashed.
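A hedged sketch with mmaddcallback (the logging script path is made up):

mmaddcallback logShutdown --command /usr/local/sbin/log_event.sh --event preShutdown --parms "%eventName %myNode"
mmaddcallback logStartup  --command /usr/local/sbin/log_event.sh --event preStartup  --parms "%eventName %myNode"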
From: "Oesterlin, Robert"
To: gpfsug main discussion list
Date: 01/30/2019 05:52 PM
Subject:
1. First off, let's RTFM ...
-d Displays the amount of storage that is used by the snapshot.
This operation requires an amount of time that is proportional to the size
of the file system; therefore,
it can take several minutes or even hours on a large and heavily-loaded
file system.
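E.g. (file system name made up):

mmlssnapshot gpfs0 -d    # slow on a large, busy file system, per the doc text above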
This
Good to know the "Rest" does it for us. Since I started working on GPFS
internals and CLI utitlities around Release 3.x, I confess I never had
need of the GUI or the Rest API server. In fact I do most of my work
remotely via Putty/Xterm/Emacs and only once-in-a-while even have an
XWindows or
Personally, I agree that there ought to be a way in the product.
In the meanwhile, you no doubt already have some ways to tell your users
where to find their filesets as pathnames.
Otherwise, how are they accessing their files?
And to keep things somewhat sane, I'd bet filesets are all linked to
“Is there a way for a non-root user” to get the junction path for the
fileset(s)?
Presuming the user has some path to some file in the fileset...
Issue `mmlsattr -L path` then "walk" back towards the root by discarding
successive path suffixes and watch for changes in the fileset name field
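A hedged sketch of that walk-back (the starting path is made up):

p=/gpfs/fs0/some/deep/file
while [ "$p" != "/" ]; do
    echo "$p : $(mmlsattr -L "$p" | grep 'fileset name')"
    p=$(dirname "$p")
done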
Regarding mixing different sized NSDs in the same pool...
GPFS has gotten somewhat smarter about striping over the years and also
offers some options about how blocks are allocated over NSDs And
then there's mmrestripe and its several options/flavors...
You probably do want to segregate
Look in gpfs.h; I think the comment for acl_buffer_len is clear enough!
/* Mapping of buffer for gpfs_getacl, gpfs_putacl. */
typedef struct gpfs_opaque_acl
{
  int acl_buffer_len;   /* INPUT: Total size of buffer (including this field).
                           OUTPUT:
I confess, I know what checksums are generally and how and why they are
used, but I am not familiar with all the various checksums that have been
discussed here.
I'd like to see a list or a chart with the following information for each
checksum:
Computed on what data elements, of what
...that you need multiple passes and things change in
between; it also significantly increases migration times. You will always
miss something or you need to manually correct. The right thing is to have
one tool that takes care of both the bulk transfer and the additional
attributes.
Sven
From
Rather than hack rsync or cp ... I proposed a smallish utility that would
copy those extended attributes and ACLs that cp -a just skips over.
This can be done using the documented GPFS APIs that were designed for
backup and restore of files.
SMOP and then add it as an option to samples/ilm/mmxcp
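In the meantime, a hedged CLI-only workaround for the ACL part (it does not cover extended attributes; paths are made up):

mmgetacl -o /tmp/acl.$$ /gpfs/fs0/srcfile && mmputacl -i /tmp/acl.$$ /gpfs/fs0/dstfile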
I believe `mmchqos ... -N ... ` is supported at 4.2.2 and later.
How about using the -F option?
ned integer"
Is that the intended/documented behavior of "(value * value)" in a policy
statement?
[attachment "attwost2.dat" deleted by Marc A Kaplan/Watson/IBM]
?!? Since this filesystem uses
512 byte inodes, there is no data content from any files involved (for a
metadata only disk), correct? Thanks…
Kevin
On Sep 11, 2018, at 9:12 AM, Marc A Kaplan wrote:
Metadata is anything besides the data contents of your files.
Inodes, directories, indirect blocks, alloc
Metadata is anything besides the data contents of your files.
Inodes, directories, indirect blocks, allocation maps, log data ... are
the biggies.
Apparently, --iohist may sometimes distinguish some metadata as "inode",
"logData", ... that doesn't mean those aren't metadata also.
From:
There is no single "optimal" number of files per directory.
GPFS can handle millions of files in a directory, rather efficiently. It
uses fairly modern extensible hashing and caching techniques that makes
lookup, insertions and deletions go fast. But of course, reading or
"listing" all
No, but of course for a RAID with additional parity (error correction)
bits the controller needs to read and write more.
So if, for example, n+2 sub-blocks per stripe = n data and 2 error
correction...
Then the smallest update requires read-compute-write on 1 data and 2 ecc
segments = 3 reads and 3 writes.
A somewhat smarter RAID controller will "only" need to read the old values
of the single changed segment of data and the corresponding parity
segment, and know the new value of the data block. Then it can compute the
new parity segment value.
Not necessarily the entire stripe. Still 2 reads
Perhaps repeating myself, but consider no-RAID or RAID "0" and
-M MaxMetadataReplicas
Specifies the default maximum number of copies of inodes, directories, and
indirect blocks for a file.
Valid values are 1, 2, and 3. This value cannot be less than the value of
DefaultMetadataReplicas. The
OR don't do RAID replication, but use GPFS triple replication.
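A hedged sketch of the GPFS-replication route at file system creation time (needs NSDs in enough distinct failure groups; stanza file name is made up):

mmcrfs gpfs1 -F nsd.stanza -m 3 -M 3 -r 3 -R 3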
It's good to try to reason and think this out... But there's a good
likelihood that we don't understand ALL the details, some of which may
negatively impact performance - so no matter what scheme you come up with
- test, test, and re-test before deploying and depending on it in
production.
[gpfsug-discuss] Those users' millions of files per directory - not necessarily a mistake
Sent by: gpfsug-discuss-boun...@spectrumscale.org
But heaven help you if you export the gpfs on nfs or cifs.
-- ddj
Dave Johnson
On Aug 23, 2018, at 11:23 AM, Marc A Kaplan wrote:
Millions of files per directory, may well be a mistake...
BUT there are some very smart use cases that might take advantage of GPFS
having good performance with large directories --
because GPFS uses extensible hashing -- it is better to store millions of
files in a single GPFS directory than
Richard Powell,
Good that you have it down to a smallish test case. Let's see it!
Here's my test case. Notice I use -L 2 and -I test to see what's what:
[root@/main/gpfs-git]$mmapplypolicy c41 -P /gh/c41gp.policy -L 2 -I test
[I] GPFS Current Data Pool Utilization in KB and %
Pool_Name
To repack in random order, which might be an interesting and easy way to
test and demonstrate...
Use the RAND() function:
RULE ... MIGRATE ... WEIGHT(RAND()) ...
-L 3 on the mmapplypolicy command will make the random weights evident in
the output.
Migrate to a group pool "repacks" the selected files over the pools that
comprise the group IN THE ORDER SPECIFIED UP TO THE SPECIFIED LIMIT for
each pool. To see this work, in your case, set a limit that is near the
current occupancy of pool 'ssd'.
For example: RULE ‘gp’ GROUP POOL ‘gpool’
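A hedged, untested sketch along those lines (pool names and the limit are made up; the weight follows the RAND() suggestion above):

cat > /tmp/repack.pol <<'EOF'
RULE 'gp' GROUP POOL 'gpool' IS 'ssd' LIMIT(90) THEN 'sata'
RULE 'repack' MIGRATE FROM POOL 'gpool' TO POOL 'gpool' WEIGHT(RAND())
EOF
mmapplypolicy gpfs0 -P /tmp/repack.pol -I test -L 3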
s per
independent fileset?
Many thanks in advance.
Best Regards,
Stephan Peinkofer
From: gpfsug-discuss-boun...@spectrumscale.org
on behalf of Marc A Kaplan
Sent: Tuesday, August 14, 2018 5:31 PM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] GPFS Independent Fileset Limit
True, mmbackup is designed to work best backing up either a single
independent fileset or the entire file system. So if you know some
filesets do not need to be backed up, map them to one or more independent
filesets that will not be backed up.
mmapplypolicy is happy to scan a single
(FH), M. Sc. (TUM)
Leibniz Supercomputing Centre
Data and Storage Division
Boltzmannstraße 1, 85748 Garching b. München
Tel: +49(0)89 35831-8715 Fax: +49(0)89 35831-9700
URL: http://www.lrz.de
On 12. Aug 2018, at 15:05, Marc A Kaplan wrote:
That's interesting, I confess I never read that piece
Peinkofer
From: gpfsug-discuss-boun...@spectrumscale.org
on behalf of Marc A Kaplan
Sent: Friday, August 10, 2018 7:15 PM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] GPFS Independent Fileset Limit
I know quota stuff was cooked into GPFS before we even had "independent
filesets"...
So which particular quota features or commands or options now depend on
"independence"?! Really?
Yes, independent fileset performance for mmapplypolicy and mmbackup scales
with the inodespace sizes. But I'm
Questions: How/why was the decision made to use a large number (~1000) of
independent filesets ?
What functions/features/commands are being used that work with independent
filesets, that do not also work with "dependent" filesets?
https://en.wikipedia.org/wiki/Coopetition
https://www.linkedin.com/in/oehmes/
Apparently, Sven is now "Chief Research Officer at DDN"
Firstly, I do suggest that you run some tests and see how much, if any,
difference the settings that are available make in performance and/or
storage utilization.
Secondly, as I and others have hinted at, deeper in the system, there may
be additional parameters and settings. Sometimes they
KB isn’t the default even if I had chosen an 8 or 16 MB block
size!
Kevin
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and
Education
kevin.buterba...@vanderbilt.edu - (615)875-9633
On Aug 1, 2018, at 12:21 PM, Marc A Kaplan wrot
I haven't looked into all the details but here's a clue -- notice there is
only one "subblocks-per-full-block" parameter.
And it is the same for both metadata blocks and data blocks.
So maybe (MAYBE) that is a constraint somewhere...
Certainly, in the currently supported code, that's what
Why no path name in SET POOL rule?
Maybe more than one reason, but consider, that in Unix, the API has the
concept of "current directory" and "create a file in the current
directory"
AND another process or thread may at any time rename (mv!) any
directory...
So even if you "think" you know the
I would start by making sure that the application(s)... open the file
O_RDONLY and then you may want to fiddle with the GPFS atime settings:
https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.ins.doc/bl1ins_atime.htm
At first I thought "uge" was a typo, but
As long as we're giving hints...
Seems tsdbfs has several subcommands that might be helpful.
I like "inode"
But there's also "listda"
Subcommand "desc" will show you the structure of the file system under
"disks:" you will see which disk numbers are which NSDs.
Have fun, but DO NOT use any of the
(psss... ) tsdbfs
Not responsible for anything bad that happens...!
If your restore software uses the gpfs_fputattrs() or
gpfs_fputattrswithpathname methods, notice there are some options to
control the pool.
AND there is also the possibility of using the little known "RESTORE"
policy rule to algorithmically control the pool selection by different
criteria
Kevin, that seems to be a good point.
IF you have dedicated hardware acting only as a storage and/or file
server, THEN neither meltdown nor spectre should be a worry.
BECAUSE meltdown and spectre are just about an adversarial process spying
on another process or kernel memory. IF
define(is_premigrated,(MISC_ATTRIBUTES LIKE '%M%' AND MISC_ATTRIBUTES NOT
LIKE '%V%'))
define(is_migrated,(MISC_ATTRIBUTES LIKE '%V%'))
define(is_resident,(NOT MISC_ATTRIBUTES LIKE '%M%'))
THESE are good, valid and fairly efficient tests for any Spectrum
Scale system that has a DMAPI-based HSM.
"Not sure if the V and M misc_attributes are the same for other tape
backends..."
define(is_premigrated,(MISC_ATTRIBUTES LIKE '%M%' AND MISC_ATTRIBUTES NOT
LIKE '%V%'))
define(is_migrated,(MISC_ATTRIBUTES LIKE '%V%'))
define(is_resident,(NOT MISC_ATTRIBUTES LIKE '%M%'))
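A hedged sketch showing the defines in use (assumes a DMAPI-based HSM, per the caveat above; list name and paths are made up):

cat > /tmp/hsmstate.pol <<'EOF'
define(is_migrated,(MISC_ATTRIBUTES LIKE '%V%'))
RULE EXTERNAL LIST 'migrated' EXEC ''
RULE 'm' LIST 'migrated' WHERE is_migrated
EOF
mmapplypolicy gpfs0 -P /tmp/hsmstate.pol -I defer -f /tmp/hsmstate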
There are good, valid
No need to specify REPLICATE(1), but no harm either.
No need to specify a FROM POOL, unless you want to restrict the set of
files considered. (consider a system with more than two pools...)
If a file is already in the target (TO) POOL, then no harm, we just skip
over that file.
From: "Simon
Uwe also asked whether it is unwise to have the external and the internal
migrations run in an uncoordinated fashion, so that it might happen that
some files have been migrated to external before they undergo migration
from one internal pool (pool0) to the other (pool1).
That's up to the admin. IOW
I added support to mmapplypolicy & co for HPSS (and TSM/HSM) years ago.
AFAIK, it works pretty well, and pretty much works in a not-too-surprising
way.
I believe that besides the Spectrum Scale Admin and Command docs there are
some "red book"s to help,
and some IBM support people know how to
To help make sense of this, one has to understand that "independent" means
a different range of inode numbers.
If you have a set of files within one range of inode numbers, say
3000-5000 and now you want to move some of them to a new range of inode
numbers, say 7000-8000, you're going to have
To address this question or problem, please be more specific about:
1) commands and policy rules used to perform or control the migrations and
pre-migrations.
2) how the commands are scheduled, how often, and/or by events and
mmXXcallbacks.
3) how many files are "okay" and how many are
It's more than hope. It works just as I wrote and documented and tested.
Personally, I find the nomenclature for filesets and inodespaces as
"independent filesets" unfortunate and leading to misunderstandings and
confusion. But that train left the station a few years ago, so we just
live with it.
I suggest you remove any FOR FILESET(...) specifications from your rules
and then run
mmapplypolicy
/path/to/the/root/directory/of/the/independent-fileset-you-wish-to-scan
... --scope inodespace -P your-policy-rules-file ...
See also the (RTFineM) for the --scope option and the Directory
No, I think you'll have to find a working DeLorean, get in it and while
traveling at 88 mph (141.622 kph) submit your email over an amateur packet
radio network
return a success.
(Local Linux file systems do not do this with default mounts,
but networked filesystems usually do.)
Aaron, can you trace your application to see
what is going on in terms of system calls?
— Peter
> On 2018 Apr 10 Tue, at 18:28, Marc A Kaplan <makap...@us.ib
Debug messages are typically unbuffered or "line buffered". If that is
truly causing a performance problem AND you still want to collect the
messages -- you'll need to find a better way to channel and collect those
messages.
To my mind this is simpler: IF you can mmdelnode without too much
suffering, do that. Then reconfigure the host name and whatever else you'd
like to do. Then mmaddnode...
Look at my example, again, closely. I chose the blocksize as 16M and
subblock size as 4K and the inodesize as 1K
developerWorks is a good resource, but articles you read there may be
incomplete or contain mistakes.
The official IBM Spectrum Scale command and admin guide documents are
Does the mirrored-storage vendor guarantee the sequence of all writes to
all the LUNs at the remote-site exactly matches the sequence of writes to
the local site? If not.. the file system on the remote-site could be
left in an inconsistent state when the communications line is cut...
Thread seems to have gone off on a product editions and Licensing tangents
-- refer to IBM website for official statements:
https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1in_IntroducingIBMSpectrumScale.htm
All the more reason to use
mmfind ... -xargs ...
which, for a large number of files, gives you a much more performant
and parallelized execution of the classic
find ... | xargs ...
The difference is that -exec is run in line with the evaluation of the other
find conditionals (like
More recent versions of mmfind support an -xargs option... Run mmfind
--help and see:
-xargs [-L maxlines] [-I rplstr] COMMAND
Similar to find ... | xargs [-L x] [-I r] COMMAND
but COMMAND executions may run in parallel. This is preferred
to -exec. With -xargs mmfind will
Leaving aside the -exec option, and whether you choose classic find or
mmfind,
why not just use the -ls option - same output, less overhead...
mmfind pathname -type f -ls
From: John Hearns
To: gpfsug main discussion list
Cc:
Let's give Fujitsu an opportunity to answer with some facts and re-pose
their questions.
When I first read the complaint, I kinda assumed they were using mmbackup
and TSM -- but then I noticed words about some gpfs_XXX APIs. So it
looks like this Fujitsu fellow is "rolling his own"... NOT
Please clarify and elaborate. When you write "a full backup ... takes
60 days" - that seems very poor indeed.
BUT you haven't stated how much data is being copied to what kind of
backup media nor how much equipment or what types you are using... Nor
which backup software...
We have
Recall that many years ago we demonstrated a Billion files scanned with
mmapplypolicy in under 20 minutes...
And that was on ordinary (for the time) spinning disks (not SSD!)... Granted
we packed about 1000 files per directory and made some other choices that
might not be typical usage OTOH
One way to know for sure it to do some experiments and dump some inodes
with the command
tsdbfs filesystem_name inode inode_number
Of course improper use of tsdbfs command can destroy or corrupt your
filesystems.
So I take no responsibility for that.
To stay safe, only use it on test
If one were starting over, it might make sense to use a smaller inode
size. I believe we still support 512, 1K, 2K.
Tradeoff with the fact that inodes can store data and EAs.
From: "Uwe Falke"
To: gpfsug main discussion list
Hint. RTFineManual, particularly the Admin guide, and look for
MISC_ATTRIBUTES,
Regarding metadata replication, one first has to ask, which metadata is
replicated when and what if anything the mmchattr -m does or changes...
Yes, "special" characters in pathnames can lead to trouble... But just for
the record...
GPFS supports the same liberal file name policy as standard POSIX. Namely
any bytestring is valid, except:
/ delimits directory names
The \0 (zero or Null Character) byte value marks the end of the pathname.
Having multiple blocksizes in the same file system would unnecessarily
complicate things. Consider migrating a file from one pool to another
with different blocksizes... How to represent the indirect blocks (lists
of blocks allocated to the file)? Especially consider that today,
migration
It's not clear that this is a problem or malfunction.
Customer should contact IBM support and be ready to transmit copies of the
cited log files and other mmbackup command output (stdout and stderr
messages) for analysis.
Also mmsnap output.
From: "IBM Spectrum Scale"
To:
You might want to look at FILE_SIZE. KB_ALLOCATED will be 0 if the file
data fits into the inode.
You might also want to use SIZE(FILE_SIZE) in the policy LIST rule, this
will cause the KB_HIT and KB_CHOSEN numbers to be the sum of FILE_SIZEs
instead of the default SIZE(KB_ALLOCATED).
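A hedged sketch combining both suggestions (list name is made up):

cat > /tmp/bysize.pol <<'EOF'
RULE EXTERNAL LIST 'in_inode' EXEC ''
RULE 'sz' LIST 'in_inode' SIZE(FILE_SIZE) WHERE KB_ALLOCATED = 0 AND FILE_SIZE > 0
EOF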
--marc
Indeed, for a very large directory you might get some speedup using
samples/ilm/mmfind directory -ls -maxdepth 1
There are some caveats, the same as those for the command upon which
mmfind rests, mmapplypolicy.
From: Skylar Thompson
To: