Re: [gpfsug-discuss] IO sizes

2022-02-24 Thread Olaf Weiser
In addition to Achim: where do you see those "smaller IOs"? Have you checked the IO sizes with mmfsadm dump iohist on each NSD client/server? If they are OK on that level, it's not GPFS
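A minimal sketch of that check (run it on both an NSD client and an NSD server; the head count is arbitrary):

    mmfsadm dump iohist | head -40    # recent IO history; compare the IO size column on client and server
    mmdiag --iohist                   # supported alternative that reports the same IO history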

Re: [gpfsug-discuss] snapshots causing filesystem quiesce

2022-02-02 Thread Olaf Weiser
Keep in mind... creating many snapshots means ;-) you'll also have to delete many snapshots. At a certain level, which depends on #files, #directories, workload, #nodes, #networks etc., we've seen cases where generating just full snapshots (whole file system) is the better approach instead

[gpfsug-discuss] email format check again for IBM domain send email

2021-12-17 Thread Olaf Weiser
Hallo Lucas, here we are. This is a regular email, sent from Verse. @All, please ignore this email; it is to track internal email format issues.

[gpfsug-discuss] Test email format / mail format

2021-12-10 Thread Olaf Weiser
This email is just a test, because we've seen mail format issues with emails sent from IBM. You can ignore this email; it is just for internal problem determination.

Re: [gpfsug-discuss] alternate path between ESS Servers for Datamigration

2021-12-09 Thread Olaf Weiser
Hallo Walter, ;-) yes !AND! no. For sure, you can specify a subset of nodes to use RDMA while the other nodes communicate over TCP/IP. But that's only half of the truth. The other half is who, and how, you are going to migrate/copy the data; in case you'll use mmrestripe you will have to
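A minimal sketch of limiting RDMA to a subset of nodes (node names and the verbs port are placeholders; the data-migration part of the question is a separate topic):

    mmchconfig verbsPorts="mlx5_0/1" -N ess01,ess02
    mmchconfig verbsRdma=enable -N ess01,ess02
    # all other nodes keep communicating over TCP/IP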

Re: [gpfsug-discuss] gpfsgui in a core dump/restart loop

2021-11-30 Thread Olaf Weiser
Add this line to the ticket/record information when opening a service ticket: fput failed: Version mismatch on conditional put (err 805)

Re: [gpfsug-discuss] /tmp/mmfs vanishes randomly?

2021-11-08 Thread Olaf Weiser
[...] temporary directories on each boot:
R! /tmp/systemd-private-*
R! /var/tmp/systemd-private-*
[root@ess5-ems1 ~]#
Hope this helps - cheers
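A minimal sketch of protecting /tmp/mmfs from the systemd-tmpfiles cleanup (the drop-in file name and rules are assumptions, not from the original thread):

    # /etc/tmpfiles.d/mmfs.conf
    d /tmp/mmfs 0700 root root -    # (re)create the directory at boot
    x /tmp/mmfs                     # exclude it from aging/cleanup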

Re: [gpfsug-discuss] alphafold and mmap performance

2021-10-19 Thread Olaf Weiser
[...] We have tried a number of things including Spectrum Scale client version 5.0.5-9 [...] In the client code or the server code? There are multiple improvements going into the code continuously. Since your versions 4.2.3 / 5.0.5, a lot of them are in the area of NSD server/GNR (which is

Re: [gpfsug-discuss] Handling bad file names in policies?

2021-10-05 Thread Olaf Weiser
Hi Ed, not a ready-to-run answer for "everything", but just a reminder: there is an ESCAPE statement. With it, e.g.
cat policy2
RULE EXTERNAL LIST 'allfiles' EXEC '/var/mmfs/etc/list.exe' ESCAPE '%/#'
you can turn a file name into something a policy can use. I haven't used it for a while, but
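A minimal sketch of a policy using that ESCAPE clause (list name, script path and target path are illustrative):

    # policy2
    RULE EXTERNAL LIST 'allfiles' EXEC '/var/mmfs/etc/list.exe' ESCAPE '%/#'
    RULE 'listall' LIST 'allfiles'

    # run it; special characters in file names arrive %-encoded in the generated list
    mmapplypolicy /gpfs/fs1 -P policy2 -I defer -f /tmp/lists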

Re: [gpfsug-discuss] RDMA write error IBV_WC_RETRY_EXC_ERR

2021-07-09 Thread Olaf Weiser
Smells like a network problem. IBV_WC_RETRY_EXC_ERR comes from OFED and clearly says that the data didn't get through successfully. For further help, check ibstat, iblinkinfo, ibdiagnet and sminfo (which should report the same on all members).
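A minimal sketch of those fabric checks (run on the affected nodes; interpreting the output is the point, the commands need no special arguments here):

    ibstat        # HCA/port state, rate and LID
    iblinkinfo    # state and speed of every link in the fabric
    sminfo        # which subnet manager is active; should be identical on all members
    ibdiagnet     # full fabric diagnostics report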

Re: [gpfsug-discuss] Filesystem mount attempt hangs GPFS client node

2021-03-30 Thread Olaf Weiser
Hallo Olu, from the log you provided, nothing seems to be faulty... but that does not mean there is no issue. If you think it is a GPFS problem, start a GPFS trace on a sample node which hits this problem again and again, capture the trace, and provide that data to IBM. I
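A minimal sketch of capturing such a trace (node name is a placeholder; further mmtracectl options depend on what IBM support asks for):

    mmtracectl --start -N client01    # start tracing on the node that shows the hang
    # ... reproduce the mount attempt ...
    mmtracectl --stop -N client01     # stop tracing and cut the trace files for upload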

Re: [gpfsug-discuss] Using setfacl vs. mmputacl

2021-03-01 Thread Olaf Weiser
[...] Subject: [EXTERNAL] Re: [gpfsug-discuss] Using setfacl vs. mmputacl, Date: Mon, Mar 1, 2021 5:51 PM. On 01/03/2021 15:18, Olaf Weiser wrote: > JAB, yes, this is an argument ;-) ..

Re: [gpfsug-discuss] Using setfacl vs. mmputacl

2021-03-01 Thread Olaf Weiser
[...] Using setfacl vs. mmputacl, Date: Mon, Mar 1, 2021 2:14 PM. On 01/03/2021 12:45, Olaf Weiser wrote: > Hallo Stephen, behavior, or better to say predicted behavior, for chmod and ACLs is

Re: [gpfsug-discuss] Using setfacl vs. mmputacl

2021-03-01 Thread Olaf Weiser
Hallo Stephen, behavior, or better to say predicted behavior, for chmod and ACLs is not an easy thing, or only if you stay in either the POSIX world or the NFSv4 world. To be POSIX compliant, a chmod overwrites ACLs. GPFS was enhanced to ignore overwrites of ACLs on chmod by a parameter..

Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of NSD's

2021-03-01 Thread Olaf Weiser
@all, please note... as has been said, there is a major difference whether we talk about GNR or native GPFS. One "common" key is the #queues in the OS for talking to a disk device, so if you run a "classical" NSD architecture you may check how many IOPS you can fire against your block devices...

Re: [gpfsug-discuss] cannot unmount fs

2021-01-27 Thread Olaf Weiser
Hi, for those so-called "nested" mounts you need to make sure the "upper" FS is mounted first. This here may help: --mount-priority Priority controls the order in which the individual file systems are mounted at daemon startup or when one of the all keywords is specified on
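A minimal sketch of using that option (file system names are placeholders; the ordering semantics of the Priority value are as described in the mmchfs/mmcrfs documentation):

    mmchfs fsupper --mount-priority 1     # the "upper" file system
    mmchfs fsnested --mount-priority 2    # the file system nested below it
    mmlsfs fsupper                        # verify the setting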

Re: [gpfsug-discuss] Disk in unrecovered state

2021-01-12 Thread Olaf Weiser
Hallo Iban, this seems to be a hardware issue: Input/output error. Just try to make sure that you really can read from the disk, i.e. all NSDs from all of their NSD servers, so to say. It's most important that the NSD is accessible on the primary NSD server; as long as this primary NSD server is

Re: [gpfsug-discuss] Services on DSS/ESS nodes

2020-10-05 Thread Olaf Weiser
Let me add a few comments from some very successful large installations in Europe. # InterOp: even though (as Luis pointed out) there is no support statement for running an intermix of DSS/ESS in general, it was, is, and will be allowed for short-term purposes, such as e.g. migration. The reason to

Re: [gpfsug-discuss] Checking if a AFM-managed file is still inflight

2020-09-21 Thread Olaf Weiser
Are you looking for something like this: mmafmlocal ls filename, or stat filename?

Re: [gpfsug-discuss] tsgskkm stuck

2020-08-30 Thread Olaf Weiser
Hallo Philipp, it seems your nodes cannot communicate cleanly ?!? Can you check that gpfs.gskit is at the same level on all nodes; if not, please update to the same level. I've seen similar behavior when reverse lookup of host names / wrong entries in /etc/hosts breaks the setup. If DNS

Re: [gpfsug-discuss] Spectrum Scale pagepool size with RDMA

2020-07-23 Thread Olaf Weiser
[...] port is not existing or active (check with mmlsconfig and ibdev2netdev)

Re: [gpfsug-discuss] gpfs filesets question

2020-04-20 Thread Olaf Weiser
Hallo Stephan, @all, I think yes, an RFE is the way to go. The current behavior is really works-as-designed, even though I see your point. Currently, a move of a file between filesets is something like writing a new file and deleting the old one, so I expect this will always remain the case when

Re: [gpfsug-discuss] AFM Alternative?

2020-02-26 Thread Olaf Weiser
You may consider Watch Folder (cluster-wide inotify --> Kafka) and then go from there.

Re: [gpfsug-discuss] Max number of vdisks in a recovery group - is it 64?

2019-12-13 Thread Olaf Weiser
Hallo Heiner, Stefan, thanks for this heads-up. We know... it's all GNR, so the answer differs a bit depending on the scenario. On the regular building blocks, the RG layout (2 RGs per BB) is different from scale-out ECE (4+ nodes, one RG). The absolute max #vdisks is (I think)

Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster

2019-12-03 Thread Olaf Weiser
Hallo "merging" 2 different GPFS cluster into one .. is not possible .. for sure you can do "nested" mounts .. .but that's most likely not, what you want to do .. if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... -  you can't preserve ESS's RG definitions... you need

Re: [gpfsug-discuss] Compression question

2019-11-28 Thread Olaf Weiser
Hi Alex, not 100% sure about my answer, but as far as I see it, it is working because of the so-called "ditto resolution". In the snapshot's inode, the pointers to the DAs point to the next (more recent) inode information, so accessing a file in a snapshot "redirects" the request to

Re: [gpfsug-discuss] introduction

2019-11-20 Thread Olaf Weiser
Sorry, this time with the link. Hallo Bill, welcome. Hard to predict what your read "slowness" is about; some baseline tuning seems to be the trick for you: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)

Re: [gpfsug-discuss] introduction

2019-11-20 Thread Olaf Weiser
Hallo Bill, welcome. Hard to predict what your read "slowness" is about; some baseline tuning seems to be the trick for you...

Re: [gpfsug-discuss] ESS - Considerations when adding NSD space?

2019-10-25 Thread Olaf Weiser
Hi, sorry for the delayed response. As Alex started, let me add a little thought on that. You said you came from GL4 to now GL6. An MES update is only supported when converting everything to mmvdisk, so I suspect you did that already. Next, by going through this MES upgrade ...

Re: [gpfsug-discuss] Ganesha all IPv6 sockets - ist this to be expected?

2019-09-16 Thread Olaf Weiser
Hallo Heiner, usually Spectrum Scale comes with a tuned profile (named scale):
[root@nsd01 ~]# tuned-adm active
Current active profile: scale
In there:
[root@nsd01 ~]# cat /etc/tuned/scale/tuned.conf | tail -3
# Disable IPv6
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1

Re: [gpfsug-discuss] Getting which files are store fully in inodes

2019-03-28 Thread Olaf Weiser
Hi, you can take filehist: -rwxr--r-- 1 root root 1840 Jan 30 02:24 /usr/lpp/mmfs/samples/debugtools/filehist. It gives you a nice report of how many files there are in total, how much space they use, etc.

Re: [gpfsug-discuss] Adding to an existing GPFS ACL

2019-03-27 Thread Olaf Weiser
Unfortunately, commands like nfs4_setfacl are not implemented yet in GPFS. I once helped myself out with a local NFS mount to set ACLs in an automated way; then you can use the NFSv4 client tools to do the ACL work.
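A minimal sketch of that workaround, assuming the path is exported via NFSv4 (export path, mount point, user and permissions are placeholders):

    mount -t nfs4 localhost:/gpfs/fs1 /mnt/fs1-nfs
    nfs4_setfacl -a "A::someuser@example.com:RX" /mnt/fs1-nfs/projects/data
    nfs4_getfacl /mnt/fs1-nfs/projects/data    # verify the resulting ACL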

Re: [gpfsug-discuss] Clarification of mmdiag --iohist output

2019-02-21 Thread Olaf Weiser
So, starting from nsdMaxWorkerThreads 1024, I used to specify the minWorker value the same way, and tell everybody in the cluster ignorePrefetchLunCount=yes. Adjust the min/max workers to your infrastructure according to your needs: how many IOPS and/or how much bandwidth, with your given BS, do you think can

Re: [gpfsug-discuss] Querying size of snapshots

2019-01-29 Thread Olaf Weiser
Hi Jan, yes, but we should highlight that this means an extra/additional copy on writes/changes to a block, so it adds a bit of latency when running in this mode.

Re: [gpfsug-discuss] Filesystem automount issues

2019-01-16 Thread Olaf Weiser
And check mmlsnode -N waiters -L for a very long waiter if the FS is still not mounted.

Re: [gpfsug-discuss] A cautionary tale of upgrades

2019-01-13 Thread Olaf Weiser
Hallo Simon, it is a known issue: tsctl shownodes up reports the wrong FQDN, so CES can't retrieve the right information that the node is up and healthy. Once in a while I had the same, and I'm told there should be a fix (soon). The only official way to recover is to bring down the whole

Re: [gpfsug-discuss] Status for Alert: remotely mounted filesystem panic on accessing cluster after upgrading the owning cluster first

2018-11-29 Thread Olaf Weiser
Hi Tomer, I sent my workaround wrapper to Renar. I've seen too little data to be sure it's the same (tsctl shownodes ...) issue, but he'll try it and let us know.

Re: [gpfsug-discuss] Error with AFM fileset creation with mapping

2018-11-26 Thread Olaf Weiser
Try a dedicated extra "-p" for each attribute.
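A minimal sketch of what a dedicated -p per attribute could look like (file system, fileset, mode and target are placeholders):

    mmcrfileset fs1 afmset1 \
        -p afmMode=sw \
        -p afmTarget=home1:/gpfs/homefs1/export1 \
        --inode-space new
    mmlinkfileset fs1 afmset1 -J /gpfs/fs1/afmset1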

Re: [gpfsug-discuss] Wrong behavior of mmperfmon

2018-11-15 Thread Olaf Weiser
Is ntp running / the time correct?

Re: [gpfsug-discuss] Job vacancy @Birmingham

2018-10-18 Thread Olaf Weiser
Hi Simon, well, I would love to... but ;-) hey, what do you think, how long can a citizen from the EU live (and work) in the UK ;-) Don't take me too seriously... see you soon; consider yourself invited for a coffee for my rude comment ;-) olaf

Re: [gpfsug-discuss] Preliminary conclusion: single client, single thread, small files - native Scale vs NFS

2018-10-17 Thread Olaf Weiser

Re: [gpfsug-discuss] error compiling IOR on GPFS

2018-10-12 Thread Olaf Weiser
I think the step you are missing is this:
./configure LIBS=/usr/lpp/mmfs/lib/libgpfs.so
make

[gpfsug-discuss] IBM ESS - certified now for SAP

2018-09-27 Thread Olaf Weiser
Hallo friends and fans of GPFS and Scale ;-) In case you have an interest in running SAP on Scale: as of this week, we got the final approval and re-certification for all new ESS models to run SAP HANA

Re: [gpfsug-discuss] Metadata with GNR code

2018-09-21 Thread Olaf Weiser
[...]    179898.441    28021.467
Tree creation :  10.215   3.165   6.603   2.881
Tree removal  :   5.484   0.880   2.418   2.168
-- finished at 09/07/2018 06:55:42 --

Re: [gpfsug-discuss] Top files on GPFS filesystem

2018-09-05 Thread Olaf Weiser
[...] Linux in general doesn't have a workload manager like AIX does. (Regards, Anderson Nobre, AIX & Power Consultant, IBM Systems Lab Services)

Re: [gpfsug-discuss] GPFS Independent Fileset Limit vs Quotas?

2018-08-13 Thread Olaf Weiser
As Dominic said, you are absolutely right: for mmbackup you need dedicated inode spaces, i.e. "independent" filesets (in case you want to be able to run mmbackup on a fileset level, or multiple mmbackups in parallel).
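A minimal sketch of an independent fileset and a fileset-level backup (file system, fileset and junction path are placeholders):

    mmcrfileset fs1 projA --inode-space new    # own inode space = independent fileset
    mmlinkfileset fs1 projA -J /gpfs/fs1/projA
    mmbackup /gpfs/fs1/projA --scope inodespace -t incremental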

Re: [gpfsug-discuss] Top files on GPFS filesystem

2018-08-13 Thread Olaf Weiser
There's no mm* command to get it cluster-wide. You can use file heat and the policy engine to identify the most active files, and furthermore combine it with migration rules to (re)place those files. Please note: files that are accessed very heavily, but where all requests are answered out of the pagepool
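A minimal sketch of that approach, assuming file heat tracking is enabled (the period, paths and rule name are illustrative):

    mmchconfig fileHeatPeriodMinutes=1440 -i    # switch on file heat tracking

    # hotfiles.pol - list files ordered by heat
    RULE 'hot' LIST 'hotfiles' WEIGHT(FILE_HEAT)

    mmapplypolicy /gpfs/fs1 -P hotfiles.pol -I defer -f /tmp/heat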

Re: [gpfsug-discuss] GPFS Independent Fileset Limit

2018-08-10 Thread Olaf Weiser
[...] for this? If not, I know, opening PMRs is additional effort... but could you please? Then we can decide whether raising the limit is an option for you.

Re: [gpfsug-discuss] Sven Oehme now at DDN

2018-08-08 Thread Olaf Weiser
Dear friends of GPFS, Sven is Sven, and he is "[...]permanent[...]". It is hard to see him go away to DDN, 'cause as we all agree he contributed very^99 much to GPFS, and he's a good friend too. But hey, GPFS has a 20-year history and is very complex, so it's not a one-man show. We are still

Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-03 Thread Olaf Weiser
Can you share your stanza file? > On 02.08.2018 at 23:15, Buterbaugh, Kevin L wrote: > OK, so hold on … NOW what's going on??? I deleted the filesystem … went to lunch … came back an hour later … recreated the filesystem with a metadata block size of 4 MB

Re: [gpfsug-discuss] mmbackup issue

2018-06-20 Thread Olaf Weiser
Hi Renar, if possible, let's check whether you can identify specific parts of your namespace which are affected (fileset, subdir ...). If so, you can EXCLUDE them from mmbackup and run a 2nd policy in parallel with an EXEC LIST, and call dsmc incr directly upon that list. I know it's not a solution

Re: [gpfsug-discuss] NFS on system Z

2018-05-19 Thread Olaf Weiser
Hi, yes, CES comes along with lots of monitoring of status, health checks and a special NFS (Ganesha) code, which is optimized for / available on only a limited choice of OS/platforms. So CES is not available for e.g. AIX and, in your case, not available for system Z. But of course you can

Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies

2018-05-15 Thread Olaf Weiser
Renar, can you share what GPFS packages you tried to install? I just did a fresh 5.0.1 install and it works fine for me, even though I don't see this IBM python rpm. [root@tlinc04 ~]# rpm -qa | grep -i openssl

Re: [gpfsug-discuss] Pool migration and replicate

2018-04-26 Thread Olaf Weiser
Hallo Simon, the replication attributes of a file won't be changed just by the fact that the pool attribute is changed. In other words, if a file gets migrated from POOLA to POOLB, that does not change the replication automatically, even if the pool consists of NSDs with multiple FGs. So depending

Re: [gpfsug-discuss] GPFS autoload - wait for IB ports tobecomeactive

2018-03-15 Thread Olaf Weiser
You can try:
systemctl enable NetworkManager-wait-online
ln -s '/usr/lib/systemd/system/NetworkManager-wait-online.service' '/etc/systemd/system/multi-user.target.wants/NetworkManager-wait-online.service'
In many cases it helps.

Re: [gpfsug-discuss] Underlying LUN mirroring NSD impact

2018-03-14 Thread Olaf Weiser
Hi Mark, yes, that's possible. At least I'm sure there was a chapter in the former advanced admin guide of older releases on how to do that with PPRC. Similar to PPRC you might use other methods, but from the GPFS perspective this shouldn't make a difference. And I had a German

Re: [gpfsug-discuss] 100G RoCEE and Spectrum Scale Performance

2018-03-07 Thread Olaf Weiser
Hi Doug, I did some comparisons with gpfsperf between IB and 100GbE, but we used the 100GbE with RoCE, so my results might not be representative for you (don't wonder about the edited hostnames, it's from a real customer environment). With a real data workload it is nearly the same... ~

Re: [gpfsug-discuss] tscCmdPortRange question

2018-03-06 Thread Olaf Weiser
This parameter is just for administrative commands: "where" to send the output of a command. And for those admin ports, the so-called ephemeral ports, it depends on how many admin commands (= sessions = sockets) you want to run in parallel. In my experience 10 ports is more than enough; we use
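A minimal sketch of pinning that range (the port range itself is a placeholder; ten ports match the "more than enough" guideline above):

    mmchconfig tscCmdPortRange=50000-50009
    mmlsconfig tscCmdPortRange    # verify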

Re: [gpfsug-discuss] storage-based replication for Spectrum Scale

2018-01-25 Thread Olaf Weiser
Yes... to add some more details: even though it might be very theoretical that only some nodes from the foreign cluster will suffer from connection issues, the rule for reacting upon an expel request is: A) if the requested node is really unreachable (or in trouble), the node will lose the disk

Re: [gpfsug-discuss] pmcollector and NSD perf

2017-12-19 Thread Olaf Weiser
Hi Mark, I think what you'll need is to set name = "GPFSDisk"; this should report the utilization of the directly attached disks. Cheers, olaf

Re: [gpfsug-discuss] GPFS long waiter

2017-11-16 Thread Olaf Weiser
Even though I think this is something to open a PMR for, you might help yourself out by finding pending messages to this node. So check the mmfsadm dump tscomm output on that node; if you find pending messages to a specific node, go to that node and debug further. If it is not an important

Re: [gpfsug-discuss] Write performances and filesystem size

2017-11-16 Thread Olaf Weiser
[...]            1 node + 2 pdisks    limited by rebuild space
sf_g_01_vdisk02  1 node + 1 pdisk     limited by rebuild space
sf_g_01_vdisk07  1 node + 1 pdisk     limited by rebuild space
sf_g_01_vdisk01  2 pdisks
Thanks, Ivano
On 16/11/17 13:03, Olaf Weiser wrote:

Re: [gpfsug-discuss] Write performances and filesystem size

2017-11-16 Thread Olaf Weiser
We run gpfsperf write seq. Thanks, Ivano. On 16/11/17 04:42, Olaf Weiser wrote: > Sure... as long as we assume that really all physical disks are used. The fact that you were told 1/2 or 1/4 might turn out to mean that one or two complete enclosures are eliminated...? That's why I was

Re: [gpfsug-discuss] nsdperf crash testing RDMA between Power BE and Intel nodes

2017-10-24 Thread Olaf Weiser
Hi Falk, can you open a PMR for it? It should be investigated in detail.

Re: [gpfsug-discuss] RoCE not playing ball

2017-09-19 Thread Olaf Weiser
Is ib_read_bw working? Just test it between the two nodes.
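A minimal sketch of that test (the host name is a placeholder; ib_read_bw comes from the OFED perftest package):

    # on node A (server side):
    ib_read_bw
    # on node B (client side), pointing at node A:
    ib_read_bw nodeA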

Re: [gpfsug-discuss] multicluster security

2017-09-09 Thread Olaf Weiser
Hi Aaron, not sure if we are ready to talk about / share pNFS 4.1 experiences here. I know one customer doing pNFS, and we ourselves did a lot of testing here. Please contact me directly; let's see how I can help.

Re: [gpfsug-discuss] gpfs filesystem heat reporting, howto setup

2017-06-01 Thread Olaf Weiser
Hi Andreas, one could use the WEIGHT statement. A simple policy, e.g.: rule 'repack' MIGRATE FROM POOL 'xx' TO POOL '' WEIGHT(FILE_HEAT), and then use -I prepare to see what would be done by the policy. Or you use the LIST function, or ... and so on.

Re: [gpfsug-discuss] Well, this is the pits...

2017-05-04 Thread Olaf Weiser
[...] that QOS (which I'm using) was what would keep an mmrestripefs from overrunning the system … QOS has worked extremely well for us, it's one of my favorite additions to GPFS. Kevin. On May 4, 2017, at 10:34 AM, Olaf Weiser wrote: no.. it is just in the code, because

Re: [gpfsug-discuss] Well, this is the pits...

2017-05-04 Thread Olaf Weiser
No, it is just in the code, because we have to avoid running out of mutexes/blocks. Reducing the number of nodes (-N) down to 4 (2 nodes is even safer) is the easiest way to solve it for now. I've been told the real root cause will be fixed in one of the next PTFs, within this year. This

Re: [gpfsug-discuss] Tiebreaker disk question

2017-05-04 Thread Olaf Weiser
This configuration (2 nodes and a tiebreaker) is not designed to survive node and disk failures at the same time. It depends on where the cluster manager and the filesystem manager run when a node and half of the disks disappear at the same time. For a real active-active configuration you

Re: [gpfsug-discuss] bizarre performance behavior

2017-04-21 Thread Olaf Weiser
Please check:
workerThreads (assuming you're > 4.2.2): start with 128, increase iteratively
pagepool: at least 8 G
ignorePrefetchLunCount=yes (1)
Then you won't see a difference, and GPFS is as fast or even faster.
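A minimal sketch of those settings (node name and values are starting points, not recommendations; the pagepool change may need a daemon restart to take effect):

    mmchconfig workerThreads=128 -N client01
    mmchconfig pagepool=8G -N client01
    mmchconfig ignorePrefetchLunCount=yes
    mmshutdown -N client01 && mmstartup -N client01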

Re: [gpfsug-discuss] CES doesn't assign addresses to nodes

2017-03-23 Thread Olaf Weiser
The issue is fixed; an APAR will be released soon - IV93100.

Re: [gpfsug-discuss] Running multiple mmrestripefs in a single cluster?

2017-03-15 Thread Olaf Weiser
Yes, and please be careful about the number of nodes doing the job, because of multiple PIT workers hammering against your data. If you limit the restripe to 2 nodes (-N ..) or adjust the PIT workers down to 8 or even 4, you can run multiple restripes without hurting the application
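A minimal sketch of such a constrained restripe (node and file system names are placeholders; pitWorkerThreadsPerNode is the config knob behind the "PIT worker" remark):

    mmchconfig pitWorkerThreadsPerNode=8 -i
    mmrestripefs fs1 -b -N nsd01,nsd02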

Re: [gpfsug-discuss] Reverting to older versions

2017-02-10 Thread Olaf Weiser
As long as you did not change mmchconfig release=latest, and the file system version hasn't changed as well, this should work (I did it several times).

Re: [gpfsug-discuss] CES doesn't assign addresses to nodes

2017-02-09 Thread Olaf Weiser
CES IP addresses just disappeared from my two protocol nodes (4.2.2.0).

Re: [gpfsug-discuss] Mount of file set

2017-02-03 Thread Olaf Weiser
Hi Ha-Jo, we do the same here, so no news as far as I know. Greetings from laff

Re: [gpfsug-discuss] CES doesn't assign addresses to nodes

2017-02-02 Thread Olaf Weiser
Please contact me directly: olaf.wei...@de.ibm.com

Re: [gpfsug-discuss] proper gpfs shutdown when node disappears

2017-02-02 Thread Olaf Weiser
It seems that the node is up and running from the OS point of view, so one can ping the node / log in to the node, but the /var/mmfs directory is obviously damaged/empty, or whatever; that's why you see a message like this. Have you reinstalled that node, or done any backup/restore thing?

Re: [gpfsug-discuss] CES doesn't assign addresses to nodes

2017-01-31 Thread Olaf Weiser
Hi, same thing here: everything after 10 nodes will be truncated, though I don't have an issue with it. I'll open a PMR, and I recommend you do the same thing ;-) The reason seems simple: it is the "| tail" at the end of the command, which truncates the output to the last 10

Re: [gpfsug-discuss] mmrepquota and group names in GPFS 4.2.2.x

2017-01-19 Thread Olaf Weiser
[...] FS version. Other ideas, anyone? Is anyone else in a similar situation and can test whether they see similar behavior? Thanks... Kevin. On Jan 19, 2017, at 2:45 AM, Olaf Weiser wrote: have you checked where the fsmgr runs, as you have nodes with different code levels: mmlsmgr

Re: [gpfsug-discuss] mmrepquota and group names in GPFS 4.2.2.x

2017-01-19 Thread Olaf Weiser
Have you checked where the fsmgr runs, as you have nodes with different code levels? mmlsmgr

Re: [gpfsug-discuss] nodes being ejected out of the cluster

2017-01-11 Thread Olaf Weiser
... but as I said, please discuss this within the PMR; there are too many dependencies to discuss this here. Cheers

Re: [gpfsug-discuss] CES ifs-ganashe

2016-12-20 Thread Olaf Weiser
rsize/wsize is set to 1M; however, some current kernel levels (RHEL7) are cutting it down to 256K pieces. It is solved with 7.3 (I think/hope).
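A minimal sketch of checking what the client actually negotiated (server, export and mount point are placeholders):

    mount -t nfs -o vers=4.0,rsize=1048576,wsize=1048576 cesnode:/gpfs/fs1/export /mnt/export
    grep /mnt/export /proc/mounts    # shows the rsize/wsize actually in effect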

Re: [gpfsug-discuss] mmchdisk performance/behavior in a stretch cluster config?

2016-11-18 Thread Olaf Weiser
[...] data rate; it will definitely shorten your sync times in the future. Cheers

Re: [gpfsug-discuss] Tuning AFM for high throughput/high IO over _really_ long distances

2016-11-09 Thread Olaf Weiser
3you need to "recycle" the sockets.. means .. mmshutdown/stsartuposhould fix you issueMit freundlichen Grüßen / Kind regards Olaf Weiser EMEA Storage Compete

Re: [gpfsug-discuss] HAWC and LROC

2016-11-05 Thread Olaf Weiser
You can use both, HAWC and LROC, on the same node, but you need dedicated, independent block devices. In addition, for HAWC you could consider replication and use 2 devices, even across 2 nodes.
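A minimal sketch of declaring an LROC device (device path, NSD name and node are placeholders; HAWC would use separate devices in the system.log pool):

    # lroc.stanza
    %nsd: device=/dev/nvme0n1 nsd=client01_lroc servers=client01 usage=localCache

    mmcrnsd -F lroc.stanza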

Re: [gpfsug-discuss] [EXTERNAL] Re: CES: IP address won't assign: "handleNetworkProblem with lock held"

2016-10-17 Thread Olaf Weiser
[...] No, the :0 and :1 addresses are floating addresses *assigned by CES*, it created those interfaces. The issue seems to be that these are assigned and CES doesn't know it. (Bob Oesterlin, Sr Storage Engineer, Nuance HPC Grid)

Re: [gpfsug-discuss] CES: IP address won't assign: "handleNetworkProblem with lock held"

2016-10-17 Thread Olaf Weiser
[...] address that is not already [...] as an alias [...]

Re: [gpfsug-discuss] CES: IP address won't assign: "handleNetworkProblem with lock held"

2016-10-17 Thread Olaf Weiser
Simple question, sorry for that: your nodes, do they have an IP address in the same subnet as the IP address listed here? And if so, is this network up and running so that GPFS can find/detect it? What does mmlscluster --ces tell you? From each node, assuming a class C /24 network, do an ip a | grep

Re: [gpfsug-discuss] SGExceptionLogBufferFullThread waiter

2016-10-15 Thread Olaf Weiser
Indeed, it is. Consider the most recent GPFS releases; lots of enhancements/improvements in terms of file creation rate are included there. Cheers

Re: [gpfsug-discuss] SGExceptionLogBufferFullThread waiter

2016-10-15 Thread Olaf Weiser
[...] On 10/15/16 11:23 AM, Olaf Weiser wrote: > From your file system configuration (mmlsfs -L) you'll find the size of the LOG. Since release 4.x you can change it, but you need to re-mount the FS on every client to make the change effective. When a client initiates writes

Re: [gpfsug-discuss] 4K sector NSD support (was: Hardware refresh)

2016-10-11 Thread Olaf Weiser
If your file system was created with i=512, you won't benefit from 4K disk technologies. Some backends emulate it in controller software, but most likely you'll get into trouble when trying to add 4K disks into your file system.

Re: [gpfsug-discuss] Fwd: Blocksize

2016-09-29 Thread Olaf Weiser
So go with RAID 1 or n-way replication for MD. Hope this helps.

Re: [gpfsug-discuss] *New* IBM Spectrum Protect Whitepaper "Petascale Data Protection"

2016-08-30 Thread Olaf Weiser
There are multiple dependencies; the performance of an MD scan is related to, as a rule of thumb, the total amount of IOPS you need to scan your MD, which is highly dependent on the metadata block size, the inode size (assuming the default 4K), and the total number of inodes ;-). The time it takes to answer

Re: [gpfsug-discuss] iowait?

2016-08-29 Thread Olaf Weiser
Try mmfsadm dump iohist; it gives you a nice view of how long it takes until an IO is processed. The statistic reports the time the IO takes from GPFS <--> to your block devices (including the path to them).

Re: [gpfsug-discuss] Aggregating filesystem performance

2016-07-12 Thread Olaf Weiser
[...] may need some minutes to set it up, but once it is configured it is very powerful. Have fun ;-)

Re: [gpfsug-discuss] Migration policy confusion

2016-07-07 Thread Olaf Weiser
In addition, tell us how "big" the test files are that you moved by mmapplypolicy.

Re: [gpfsug-discuss] ESS GL6

2016-06-20 Thread Olaf Weiser
[...] hit some limitations like: block size (cannot be changed), disk size (depending on the existing storage pools/disk sizes). Have fun, cheers