In addition, to Achim:
where do you see those "smaller IO"...
have you checked IO sizes with mmfsadm dump iohist on each NSD client/server? ... If it's OK on that level.. it's not GPFS
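A minimal sketch of such a check (the iohist column layout varies a bit by release; the size column is in 512-byte sectors, as far as I remember):

  # run on an NSD client and on the NSD server, then compare the IO sizes
  mmfsadm dump iohist | head -40
  # a full-block IO on a 4 MiB blocksize FS should show up as 8192 sectors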
Mit freundlichen Grüßen / Kind regards
Olaf Weiser
- Original message - From:
keep in mind... creating many snapshots... means ;-) .. you'll have to delete many snapshots..
at a certain level - which depends on #files, #directories, workload, #nodes, #networks etc. - we've seen cases where generating just full snapshots (whole file system) is the better approach instead
Hello Lucas, here we are
this is a regular email, sent from Verse
@All, please ignore this email, it is to track internal email format issues
This email is just a test, because we've seen mail format issues in IBM-sent emails
you can ignore this email, it's just for internal problem determination
Hello Walter,
;-)
yes AND no ..
for sure, you can specify a subset of nodes to use RDMA while the other nodes communicate via TCP/IP only (see the sketch below)
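A hedged sketch of that split (the node class name and the verbs port are placeholders for your environment):

  # enable verbs RDMA only on the RDMA-capable nodes; all others stay on TCP/IP
  mmchconfig verbsPorts="mlx5_0/1" -N rdmaNodes
  mmchconfig verbsRdma=enable -N rdmaNodes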
But that's only half of the truth.
The other half is.. who, and how, is going to migrate/copy the data
in case you'll use mmrestripe you will have to
add this line to the ticket/record information when opening a service ticket:
fput failed: Version mismatch on conditional put (err 805)
- Original message - From: "Luis Bolinches", Sent by: gpfsug-discuss-boun...@spectrumscale.org, To: gpfsug-discuss@spectrumscale.org, Cc:
# temporary directories on each boot
R! /tmp/systemd-private-*
R! /var/tmp/systemd-private-*
[root@ess5-ems1 ~]#
hope this helps -
cheers
Mit freundlichen Grüßen / Kind regards
Olaf Weiser IBM Systems, SpectrumScale Client Adoption
[...] We have tried a number of things including Spectrum Scale clientversion 5.0.5-9[...] in the client code or the server code?
there are multiple improvements going into the code.. continuously... Since your versions 4.2.3 / 5.0.5, a lot of them are in the area of NSD server/GNR (which is
Hi Ed,
not ready-to-run for "everything".. but just to remind you, there is an ESCAPE statement
by this you can
# cat policy2
RULE EXTERNAL LIST 'allfiles' EXEC '/var/mmfs/etc/list.exe' ESCAPE '%/#'
and turn a file name into something a policy can use
I haven't used it for a while, but
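if it helps, a sketch of how such a list rule is typically driven (fs1 is a placeholder; -I test only previews what the policy would select):

  mmapplypolicy fs1 -P policy2 -I test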
smells like a network problem ..
IBV_WC_RETRY_EXC_ERR comes from OFED and clearly says that the data didn't get through successfully
for further help .. check
ibstat
iblinkinfo
ibdiagnet
and sminfo .. (the active subnet manager should be the same on all fabric members)
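A commented sketch of that sequence (run on both peers; all four tools come from the standard OFED / infiniband-diags packages):

  ibstat        # local HCA state, port state, link width/speed
  iblinkinfo    # state and speed of every link in the fabric
  ibdiagnet     # full fabric sweep; reports error counters and topology issues
  sminfo        # which subnet manager is active - must match on all members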
- Original message - From: "Iban
Hello Olu,
from the log you provided, nothing seems to be faulty... but that does not mean there is no issue ...
if you think it is a GPFS problem, start a GPFS trace on a sample node which hits this problem again and again... capture a trace as well, and provide that data to IBM
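A minimal sketch of the trace cycle (the node name is a placeholder; if I remember correctly, the trace files land under /tmp/mmfs by default):

  mmtracectl --start -N problemnode01
  # ... reproduce the problem ...
  mmtracectl --stop -N problemnode01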
I
Cc:, Subject: [EXTERNAL] Re: [gpfsug-discuss] Using setfacl vs. mmputacl, Date: Mon, Mar 1, 2021 5:51 PM
On 01/03/2021 15:18, Olaf Weiser wrote: > JAB, > yes - this is an argument ;-) ..
setfacl vs. mmputacl, Date: Mon, Mar 1, 2021 2:14 PM
On 01/03/2021 12:45, Olaf Weiser wrote: > Hello Stephen, > behavior ... or better to say ... predicted behavior for chmod and ACLs .. is
Hello Stephen,
behavior ... or better to say ... predictable behavior for chmod and ACLs .. is not an easy thing, or only if you stay in either the POSIX world or the NFSv4 world
to be POSIX compliant, a chmod overwrites ACLs
GPFS was enhanced to ignore overwrites to ACLs on chmod, via a parameter..
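If memory serves, that knob is the permission-change mode, settable per fileset; a hedged sketch (fs1/fset1 are placeholders):

  # setAclOnly: chmod no longer rewrites the ACL; only ACL commands change permissions
  mmchfileset fs1 fset1 --allow-permission-change setAclOnly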
@all, please note...
as was said, there is a major difference if we talk about GNR or GPFS native...
one "common" key is the number of queues in the OS to talk to a disk device,
so if you run a "classical" NSD architecture.. you may check how many IOPS you can fire against your block devices...
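A sketch of such a check, outside of GPFS (the device name is a placeholder; the fio run is read-only, but still measure on a quiet system):

  # OS-level queue depth of the block device
  cat /sys/block/sdb/queue/nr_requests
  # raw random-read IOPS against the device
  fio --name=iops --filename=/dev/sdb --rw=randread --bs=4k --iodepth=64 \
      --direct=1 --runtime=30 --time_based --ioengine=libaio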
Hi,
for those so-called "nested" mounts .. you need to make sure the "upper" FS is mounted first
this here may help:
--mount-priority Priority
Controls the order in which the individual file systems are mounted at daemon startup or when one of the all keywords is specified on
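So, a sketch for the nested case (the fs names are placeholders; as far as I recall, lower non-zero priorities mount first, and 0 means no special ordering):

  mmchfs upperfs --mount-priority 1
  mmchfs nestedfs --mount-priority 2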
Hello Iban,
this seems to be a hardware issue:
Input/output error
just try / make sure.. that you really can read from the disk .. all NSDs from all of their NSD servers
so to say.. it's most important that the NSD is accessible on the primary NSD server.. as long as this primary NSD server is
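A quick, read-only sketch for that check (the device path is a placeholder; run on each NSD server, for each of its NSDs):

  dd if=/dev/sdX of=/dev/null bs=1M count=100 iflag=direct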
let me add a few comments from some very successful large installations in Europe
# InterOP
Even though (as Luis pointed out) there is no support statement to run an intermixed DSS/ESS in general, it was, and is, and will be allowed for short-term purposes, such as e.g. migration.
The reason to
are you looking for something like this:
mmafmlocal ls filename   or   mmafmlocal stat filename
- Original message - From: "Dorigo Alvise (PSI)", Sent by: gpfsug-discuss-boun...@spectrumscale.org, To: gpfsug main discussion list, Cc:, Subject: [EXTERNAL] [gpfsug-discuss] Checking if a AFM-managed file is
Hello Philipp, it seems your nodes cannot communicate cleanly ?!? ..
can you check that gpfs.gskit is at the same level on all nodes (see the sketch below) .. if not, pls update to the same level
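A one-liner sketch for that check (mmdsh ships with GPFS; assumes passwordless ssh between the nodes):

  mmdsh -N all "rpm -q gpfs.gskit"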
I've seen similar behavior when reverse lookup of host names / wrong entries in /etc/hosts ... breaks your setup ..
if DNS
...port does not exist or is not active (check with mmlsconfig and ibdev2netdev)
Mit freundlichen Grüßen / Kind regards
Olaf Weiser IBM Systems, SpectrumScale Client Adoption---
Hello Stephan.. @all,
I think .. yes.. an RFE is the way to go ...
the current behavior is really works-as-designed, even though I see your point. Currently, a move of a file between filesets is something like writing a new file and deleting the old one
so I expect this will always remain the case, when
you may consider WatchFolder ... (cluster-wide inotify --> Kafka) .. and then you go from there
- Original message - From: Andi Christiansen, Sent by: gpfsug-discuss-boun...@spectrumscale.org, To: "gpfsug-discuss@spectrumscale.org", Cc:, Subject: [EXTERNAL] [gpfsug-discuss] AFM
Hello Heiner, Stefan, thx for this heads-up .. we know ... it's all GNR .. so the answer is a bit different depending on the scenario: on the regular building blocks, the RG layout (2 RGs per BB) is different from the scale-out ECE (4+ nodes, one RG). The absolute max #vdisks is (I think .. )
Hallo "merging" 2 different GPFS
cluster into one .. is not possible .. for sure you can do "nested"
mounts .. .but that's most likely not, what you want to do .. if you want to add a GL2 (or any other
ESS) ..to an existing (other) cluster... - you can't preserve ESS's
RG definitions... you need
Hi Alex, not 100% sure about my answer.. but as far as I see it.. it is working because of the so-called "ditto resolution" .. In the snapshot's inode .. the pointers to the DAs point to the next (more recent) inode information .. so accessing a file in a snapshot "redirects" the request to
sorry.. this time.. with the link. Hello Bill, welcome .. hard to predict what your read "slowness" is about ...
some baseline tuning seems to be the trick for you... https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)
From: "Peters, Bill", To: "gpfsug-discuss@spectrumscale.org", Date: 11/20/2019 07:18 PM, Subject: [EXTERNAL] [gpfsug-discuss]
Hi - sorry for the delayed response..
as Alex started.. let me add a little thought on that: you said ... you came from GL4 ... to now GL6
... MES update is only supported when converting everything to mmvdisk
... so I suspect.. you did it already. Next.. by going through this MES upgrade ...
Hello Heiner, usually Spectrum Scale comes with a tuned profile (named scale) ..
[root@nsd01 ~]# tuned-adm active
Current active profile: scale
in there:
[root@nsd01 ~]# cat /etc/tuned/scale/tuned.conf | tail -3
# Disable IPv6
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
Hi, you can take filehist ...
-rwxr--r-- 1 root root 1840 Jan 30 02:24 /usr/lpp/mmfs/samples/debugtools/filehist
it gives you a nice report: how many files in total, how much space .. etc..
From: "Dorigo Alvise (PSI)", To: "gpfsug-discuss@spectrumscale.org", Date: 03/28/2019 01:52
unfortunately .. commands like nfs4_setfacl are not implemented yet in GPFS
I once helped myself out with a local NFS mount to set ACLs in an automated way ... then you can use an NFSv4 client to do the ACL stuff (a sketch below) ..
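A hedged sketch of that trick (export path, mount point and principal are placeholders; assumes the GPFS path is exported via NFSv4; nfs4_setfacl comes from the nfs4-acl-tools package):

  # mount the GPFS export locally over NFSv4
  mount -t nfs4 localhost:/gpfs/fs1 /mnt/acl
  # set an NFSv4 ACE through the NFS client
  nfs4_setfacl -a "A::alice@example.com:rxtncy" /mnt/acl/somefile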
"Buterbaugh, Kevin
L" To:
gpfsug main discussion
list Date:
so from the nsdMaxWorkerThreads 1024, I used to specify the same value for nsdMinWorkerThreads
... and tell everybody in the cluster: ignorePrefetchLunCount=yes. To adjust the min/max workers to your infrastructure according to your need: how many IOPS - and/or bandwidth with your given BS - do you think can
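A sketch of those settings (the node class is a placeholder; the documented spelling is ignorePrefetchLUNCount, if I remember right):

  mmchconfig nsdMinWorkerThreads=1024,nsdMaxWorkerThreads=1024 -N nsdNodes
  mmchconfig ignorePrefetchLUNCount=yes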
Hi Jan, yes.. but we should highlight that this means.. an extra / additional copy on writes / changes to a block ... so it adds a bit of latency when running in this mode
From: Jan-Frode Myklebust, To: gpfsug main discussion list, Date: 01/29/2019 08:19 PM, Subject: Re:
and .. check mmlsnode -N waiters -L for a very long waiter .. if the FS is still not mounted
From: "Frederick Stock", To: gpfsug main discussion list, Date: 01/16/2019 07:38 PM, Subject: Re: [gpfsug-discuss] Filesystem automount issues, Sent by:
Hello Simon, it is a known issue - tsctl shownodes up .. reports wrong FQDNs, and so CES can't retrieve the right information that the node is up and healthy .. once in a while, I had the same .. and I'm told that there should be a fix (soon). The only official way to recover is.. bring down the whole
Hi Tomer, I sent my workaround wrapper to Renar..
I've seen too little data to be sure it's the same (tsctl shownodes ...) issue, but he'll try and let us know ..
From: "Grunenberg, Renar", To: gpfsug main discussion list, "Olaf Weiser", Date: 11/29/
Try a dedicated extra "-p" for each attribute
Sent from my iPhone
> On 26.11.2018 at 16:50, Dorigo Alvise (PSI) wrote:
>
> Good evening,
> I'm following this guide:
>
NTP running / time correct?
From: "Dorigo Alvise (PSI)", To: "gpfsug-discuss@spectrumscale.org", Date: 11/15/2018 04:30 PM, Subject: [gpfsug-discuss] Wrong behavior of mmperfmon, Sent by: gpfsug-discuss-boun...@spectrumscale.org
Hello, I'm using mmperfmon to get writing
Hi Simon .. well - I would love to .. but .. ;-)
hey - what do you think, how long can a citizen from the EU live (and work) in the UK ;-) don't take me too seriously... see you soon; consider yourself invited for a coffee for my rude comment .. ;-) olaf
From: Simon Thompson, To:
I think the step you are missing is this:
./configure LIBS=/usr/lpp/mmfs/lib/libgpfs.so
make
Mit freundlichen Grüßen / Kind regards Olaf Weiser EMEA Storage Competence Center Mainz, Germany / IBM Systems, Storage Platform
Hello friends and fans of GPFS and Scale ;-)
in case you have an interest in running SAP on Scale: as of this week, we got the final approval and re-certification for all new ESS models to run SAP HANA
[truncated mdtest-style summary; columns are max / min / mean / stddev:]
...                179898.441   28021.467
Tree creation :        10.215       3.165      6.603      2.881
Tree removal  :         5.484       0.880      2.418      2.168
-- finished at 09/07/2018 06:55:42 --
Mit freundlichen Grüßen / Kind regards Olaf Weiser EMEA Storage
Linux in general doesn't have a workload manager like in AIX.
Abraços / Regards / Saludos, Anderson Nobre, AIX & Power Consultant, Master Certified IT Specialist, IBM Systems Hardware Client Technical Team - IBM Systems Lab Services, Phone: 55-19-2132-4317, E-mail: ano...@br.ibm.com - Original mess
as Dominic said.. you are absolutely right .. for mmbackup you need dedicated inode spaces .. so "independent" filesets .. (in case you want to be able to run mmbackup on a fileset level, or multiple mmbackups in parallel - see the sketch below)
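A hedged sketch (fs1/fset1 and the junction path are placeholders):

  # an independent fileset gets its own inode space
  mmcrfileset fs1 fset1 --inode-space new
  mmlinkfileset fs1 fset1 -J /gpfs/fs1/fset1
  # back up just that inode space
  mmbackup /gpfs/fs1/fset1 --scope inodespace -t incremental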
"Peinkofer, Stephan"
To:
gpfsug main discussion
there's no mm* command to get it cluster-wide.. you can use file heat and the policy engine to identify the most active files .. and furthermore... combine it with migration rules ... to replace those files .. please note.. files that are accessed very heavily, but with all requests answered out of the pagepool
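A sketch of turning file heat on (values are placeholders; the attribute then becomes usable as FILE_HEAT in policy rules):

  mmchconfig fileHeatPeriodMinutes=1440,fileHeatLossPercent=10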
[...] for this? - if not - I know .. opening PMRs is additional effort ...but could you please .. then we can decide .. if raising the limit is an option for you ..
Mit freundlichen Grüßen / Kind regards Olaf Weiser EMEA Storage Competence Center Mainz, Germany / IBM Systems, Storage Platf
dear friends of GPFS, Sven is Sven.. and he is "[...]permanent[...]" .. it is hard to see him go away to DDN, 'cause as we all agree he contributed very^99 much to GPFS and he's a good friend too.. but hey.. GPFS has a 20-year history.. is very complex ... so it's not a one-man show .. we are still
Can you share your stanza file?
Sent from my iPhone
> On 02.08.2018 at 23:15, Buterbaugh, Kevin L wrote:
>
> OK, so hold on … NOW what’s going on??? I deleted the filesystem … went to
> lunch … came back an hour later … recreated the filesystem with a metadata
> block size of 4 MB
Hi Renar, if possible, let's check if you can identify specific parts of your namespace which are affected (fileset, subdir ...). If so .. you can EXCLUDE them from mmbackup and run a 2nd policy in parallel with an EXEC LIST, and call dsmc incr directly upon that list .. I know.. it's not a solution
Hi, yes.. CES comes along with lots of monitors for status and health checks, and a special NFS (ganesha) code.. which is optimized for / available only on a limited choice of OS/platforms. So CES is not available for e.g. AIX, and in your case... not available for systemZ ... but - of course you can
Renar, can you share what gpfs packages you tried to install? I just did a fresh 5.0.1 install and it works fine for me... even though I don't see this IBM python rpm
[root@tlinc04 ~]# rpm -qa | grep -i openssl
Hello Simon, replication attributes of a file won't be changed just by the fact that the pool attribute is changed.. or in other words .. if a file gets migrated from POOLA to POOLB, that does not change the replication automatically... even if the pool consists of NSDs with multiple FGs. So depending
you can try:
systemctl enable NetworkManager-wait-online
ln -s '/usr/lib/systemd/system/NetworkManager-wait-online.service' '/etc/systemd/system/multi-user.target.wants/NetworkManager-wait-online.service'
in many cases .. it helps ..
From: Jan-Frode Myklebust, To:
Hi Mark.. yes.. that's possible... at least, I'm sure.. there was a chapter in the former advanced admin guide of older releases with PPRC .. how to do that.. similar to PPRC, you might use other methods, but from a GPFS perspective this shouldn't make a difference.. and I once had a German
Hi Doug,
I did some comparisons with gpfsperf ... between IB and 100GbE .. but we used the 100GbE with RoCE .. so my results might not be representative for you .. (don't wonder about the edited hostnames .. it's from a real customer environment..) so with real data workload.. it is nearly the same... ~
this parameter is just for administrative commands.. "where" to send the output of a command... and for those admin ports .. so-called ephemeral ports... it depends how many admin commands (= sessions = sockets) you want to run in parallel. In my experience.. 10 ports is more than enough. We use
yes... to add some more details: even though it might be very theoretical that only some nodes from the foreign cluster will suffer from connection issues, the rule to react upon an expel request is: A) if the requested node is really unreachable (or in trouble) ... the node will lose the disk
Hi Mark, I think what you'll need is to set name = "GPFSDisk". This should report the utilization of the directly attached disk. cheers olaf
From: Mark Bush, To: gpfsug main discussion list, Date: 12/19/2017 04:50 PM, Subject:
even though I think this is something to open a PMR for .. you might help yourself out by finding pending messages to this node. So check the mmfsadm dump tscomm ... output on that node. If you find pending messages to a specific node.. go on that node and debug further.. if it is not an important
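A sketch of that check (the grep pattern is an assumption; the exact output format varies by release):

  mmfsadm dump tscomm | grep -i pending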
[truncated vdisk fault-tolerance listing:]
...               1 node + 2 pdisk   limited by rebuild space
sf_g_01_vdisk02   1 node + 1 pdisk   limited by rebuild space
sf_g_01_vdisk07   1 node + 1 pdisk   limited by rebuild space
sf_g_01_vdisk01   2 pdisk
Thanks, Ivano
On 16/11/17 13:03, Olaf Weiser wrote: >
we run gpfsperf write seq. Thanks, Ivano
On 16/11/17 04:42, Olaf Weiser wrote: > Sure... as long as we assume that really all physical disks are used .. the fact that was told 1/2 or 1/4 might turn out such that one / two complete enclosures are eliminated ... ? ..that's why I was
Hi Falk, can you open a PMR for it .. it should be investigated in detail
From: "Uwe Falke", To: gpfsug main discussion list, Date: 10/24/2017 06:49 PM, Subject: [gpfsug-discuss] nsdperf crash testing RDMA between
is ib_read_bw working? just test it between the two nodes ...
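For reference, a minimal sketch (ib_read_bw is part of the OFED perftest package; the hostname is a placeholder):

  # on node A (server side)
  ib_read_bw
  # on node B (client side)
  ib_read_bw nodeA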
From: Barry Evans, To: gpfsug main discussion list, Date: 09/20/2017 03:21 AM, Subject: [gpfsug-discuss] RoCE not playing ball, Sent by:
Hi Aaron, not sure if we are ready to talk about / share pNFS 4.1 experiences here.. I know one customer doing pNFS, and for myself, we did a lot of testing here. Please contact me directly .. let's see how I can help ..
From: Aaron Knister, To:
Hi Andreas, one could use the WEIGHT statement ... a simple policy, e.g.:
RULE 'repack' MIGRATE FROM POOL 'xx' TO POOL '' WEIGHT(FILE_HEAT)
and then use -I prepare to see what would be done by the policy.. or you use the LIST function .. or .. and so on ..
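A sketch of that dry run (the fs name and policy file name are placeholders):

  mmapplypolicy fs1 -P repack.pol -I prepare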
From: Andreas Landhäußer
...thought that QOS (which I'm using) was what would keep an mmrestripefs from overrunning the system … QOS has worked extremely well for us - it's one of my favorite additions to GPFS. Kevin
On May 4, 2017, at 10:34 AM, Olaf Weiser <olaf.wei...@de.ibm.com> wrote: no.. it is just in the code, because
no.. it is just in the code, because we have to avoid running out of mutexes / blocks. Reducing the number of nodes -N down to 4 (2 nodes is even safer) ... is the easiest way to solve it for now. I've been told the real root cause will be fixed in one of the next PTFs .. within this year .. this
this configuration (2 nodes and tiebreaker) is not designed to survive node and disk failures at the same time... it depends on where the cluster manager and the file system manager run .. when a node and half of the disks disappear at the same time... for a real active-active configuration you
pls check workerThreads (assuming you're > 4.2.2): start with 128 .. increase iteratively. pagepool: at least 8 GiB. ignorePrefetchLunCount=yes (1). Then you won't see a difference, and GPFS is as fast or even faster ..
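The same advice as a sketch (apply per node class as appropriate; -i makes it effective immediately where supported):

  mmchconfig workerThreads=128 -i
  mmchconfig pagepool=8G -i
  mmchconfig ignorePrefetchLUNCount=yes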
"Marcus Koenig1"
To:
gpfsug main discussion
the issue is fixed, an APAR will be released soon - IV93100
From: Olaf Weiser/Germany/IBM@IBMDE, To: "gpfsug main discussion list" <gpfsug-discuss@spectrumscale.org>, Cc: "gpfsug main discussion list" <gpfsug-discuss@spectrumscale.org>, Date:
yes.. and please be careful about the number of nodes doing the job, because of multiple PIT workers hammering against your data. If you limit the restripe to 2 nodes (-N ..) or adjust the PIT workers down to 8 or even 4 ... you can run multiple restripes.. without hurting the application
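A sketch of both knobs (fs and node names are placeholders; pitWorkerThreadsPerNode is the config name, if I remember correctly):

  mmrestripefs fs1 -b -N node1,node2
  mmchconfig pitWorkerThreadsPerNode=8 -N node1,node2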
as long as you did not change mmchconfig release=latest, and the file system version hasn't changed as well, this should work (I did it several times..)
From: "mark.b...@siriuscom.com", To: gpfsug main discussion list, Date:
CES ip addresses just disappeared from my two protocol nodes (4.2.2.0).
From: <gpfsug-discuss-boun...@spectrumscale.org> on behalf of Olaf Weiser <olaf.wei...@de.ibm.com>, Reply-To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>, Date: Thursday, February 2, 2017 at 12:02 P
Hi Ha-Jo, we do the same here .. so no news so far as I know... greetings from Laff
From: Hans-Joachim Ehlers, To: gpfsug main discussion list, Date: 02/03/2017 05:14 PM, Subject: [gpfsug-discuss] Mount of file set, Sent by:
pls contact me directly: olaf.wei...@de.ibm.com
Mit freundlichen Grüßen / Kind regards Olaf Weiser EMEA Storage Competence Center Mainz, Germany / IBM Systems, Storage Platform
seems that the node is up and running from the OS point of view .. so one can ping the node / log in to the node... but the /var/mmfs dir is obviously damaged/empty .. whatever.. that's why you see a message like this.. have you reinstalled that node / any backup/restore thing?
From: "J. Eric
Hi ... same thing here.. everything after 10 nodes will be truncated.. though I don't have an issue with it ... I'll open a PMR .. and I recommend you to do the same thing.. ;-) the reason seems simple.. it is the "| tail" at the end of the command.. which truncates the output to the last 10
FS version. Other ideas, anyone? Is anyone else in a similar situation and can test whether they see similar behavior? Thanks... Kevin
On Jan 19, 2017, at 2:45 AM, Olaf Weiser <olaf.wei...@de.ibm.com> wrote: have you checked where the fsmgr runs, as you have nodes with different code levels: mmls
have you checked where the fsmgr runs, as you have nodes with different code levels: mmlsmgr
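For reference, a small sketch:

  mmlsmgr        # file system manager per file system
  mmlsmgr -c     # cluster manager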
"Buterbaugh, Kevin
L" To:
gpfsug main discussion
list Date:
01/18/2017 04:57 PMSubject:
[gpfsug-discuss]
... but as I said, please discuss this within the PMR .. there are too many dependencies to discuss this here .. cheers
Mit freundlichen Grüßen / Kind regards Olaf Weiser EMEA Storage Competence Center Mainz, Germany / IBM Systems, Storage Platform
rsize, wsize is set to 1M. However... some current kernel levels (RHEL7) are cutting it down to 256K pieces .. it is solved with 7.3 (I think/hope)
Mit freundlichen Grüßen / Kind regards Olaf Weiser EMEA Storage Competence Center Mainz, Germany / IBM Systems, Storage Platform
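For reference, a sketch of pinning those sizes explicitly on the NFS client (server and export path are placeholders):

  mount -t nfs -o rsize=1048576,wsize=1048576 server:/export /mnt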
...ed data rate .. it will definitely shorten your sync times in the future .. cheers
Mit freundlichen Grüßen / Kind regards Olaf Weiser EMEA Storage Competence Ce
you need to "recycle" the sockets.. meaning .. mmshutdown/mmstartup should fix your issue
Mit freundlichen Grüßen / Kind regards Olaf Weiser EMEA Storage Compete
You can use both - HAWC, LROC - on the same node... but you need dedicated, independent block devices ...
In addition, for HAWC, you could consider replication and use 2 devices, even across 2 nodes. ...
Sent from IBM Verse
leslie elliott --- [gpfsug-discuss] HAWC and LROC ---
Sent by: gpfsug-discuss-boun...@spectrumscale.org
No - the :0 and :1 addresses are floating addresses *assigned by CES* - it created those interfaces. The issue seems to be that these are assigned and CES doesn't know it. Bob Oesterlin, Sr Storage Engineer, Nuance HPC Grid
From: <gpfsug-discuss-boun...@s
address that is not already [...] as an alias [...]
Mit freundlichen Grüßen / Kind regards Olaf Weiser EMEA Storage Competence Center Mainz, Germany / IBM Systems, Storage Platform
simple question - sorry for that - your nodes.. do they have an IP address in the same subnet as your IP address listed here? and if so, is this network up and running so that GPFS can find/detect it? what does mmlscluster --ces tell you? from each node - assuming a class C /24 network - do a: ip a | grep
indeed, it is.. consider the most recent GPFS releases .. lots of enhancements/improvements in terms of file creation rate are included there .. cheers
Mit freundlichen Grüßen / Kind regards Olaf Weiser EMEA Storage Competence Center Mainz, Germany / IBM Systems, Storage Platform
Aaron
On 10/15/16 11:23 AM, Olaf Weiser wrote: > from your file system configuration .. mmlsfs -L .. you'll find the > size of the LOG > since release 4.x you can change it, but you need to re-mount the FS > on every client to make the change effective ... > > when a client initiates writes
If your file system was created with i=512 you won't benefit from 4k disk technologies ... some backends emulate it via controller software, but most likely you'll get in trouble when trying to add 4k disks into your file system ...
Sent from IBM Verse
Aaron Knister --- Re: [gpfsug-discuss]
so go with RAID 1 or n-way replication... for MD. hope this helps..
Mit freundlichen Grüßen / Kind regards Olaf Weiser EMEA Storage Competence Center Mainz, Germany / IBM Systems, Storage Platf
there are multiple dependencies the performance of an MD scan is related to. As a rule of thumb... the total amount of IOPS you need to scan your MD is highly dependent on the metadata blocksize, the inode size (assuming default 4K) (and the total number of inodes.. ;-) ), and the time it takes to answer
try mmfsadm dump iohist .. it gives you a nice approach on how long it takes until an IO is processed .. the statistic reports the time the IO takes from GPFS <--> to your block devices (including the path to it)
Mit freundlichen Grüßen / Kind regards Olaf Weiser EMEA Storage Comp
it may need some minutes to set up .. but once it is configured, it is very powerful ... have fun.. ;-)
Mit freundlichen Grüßen / Kind regards Olaf Weiser EMEA Storage Competence Center Mainz, Germany / IBM Systems, Sto
in addition.. tell us how "big" your test files are .. that you moved by mmapplypolicy
Mit freundlichen Grüßen / Kind regards Olaf Weiser EMEA Storage Competence Center Mainz, Germany /
you'll hit some limitations like - blocksize (cannot be changed) - disk size.. depending on the existing storage pools/disk sizes. have fun, cheers
Mit freundlichen Grüßen / Kind regards Olaf Weiser EMEA Storage Competence Center Ma