Re: [gpfsug-discuss] DB2 (not DB2 PureScale) and Spectrum Scale

2021-06-10 Thread Jim Doherty
than directIO. If you need the database to grow all the time, I would avoid using direct IO and use a larger GPFS pagepool to allow it to cache data. Using directIO is the better solution. Jim Doherty On Monday, June 7, 2021, 11:03:26 AM EDT, Wally Dietrich wrote: Hi
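For context, a minimal sketch of what "direct IO" means at the syscall level on Linux: the file is opened with O_DIRECT, so reads and writes bypass the GPFS pagepool and the application does its own caching. The path and sizes below are illustrative only, not part of the original discussion.

```python
import mmap
import os

# Open with O_DIRECT (Linux-specific): I/O bypasses the GPFS pagepool entirely,
# so the application (e.g. a database bufferpool) is responsible for caching.
path = "/gpfs/db2data/directio_test"   # hypothetical file on a GPFS filesystem
fd = os.open(path, os.O_RDWR | os.O_CREAT | os.O_DIRECT, 0o600)

# O_DIRECT requires aligned buffers and I/O sizes; an anonymous mmap is
# page-aligned, which satisfies the alignment requirement for this 4 KiB write.
buf = mmap.mmap(-1, 4096)
buf.write(b"x" * 4096)
os.pwrite(fd, buf, 0)                  # goes straight to disk, not the pagepool
os.close(fd)
```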

Re: [gpfsug-discuss] Using VMs as quorum / admin nodes in a GPFS infiniband cluster

2021-06-07 Thread Jim Doherty
(mmfsck, mmrestripefs), make sure they have enough pagepool, as a small pagepool could impact the performance of these operations. Jim Doherty On Monday, June 7, 2021, 08:55:49 AM EDT, Leonardo Sala wrote: Hello, we do have multiple bare-metal GPFS clusters with an InfiniBand fabric
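A quick sanity check along those lines, assuming the quorum/admin VMs are ordinary cluster nodes (the node names below are placeholders): read the effective pagepool on each of them with mmdiag --config before running mmfsck or mmrestripefs there.

```python
import subprocess

# Print the live pagepool-related settings on each quorum/admin VM.
# Node names are hypothetical; mmdiag --config reports the in-memory values.
for node in ("quorum-vm1", "quorum-vm2"):
    out = subprocess.run(["ssh", node, "/usr/lpp/mmfs/bin/mmdiag", "--config"],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "pagepool" in line:
            print(f"{node}: {line.strip()}")
```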

Re: [gpfsug-discuss] Change uidNumber and gidNumber for billions of files

2020-06-08 Thread Jim Doherty
You will need to do this with chown from the C library functions (you could do this from Perl or Python). If you try to change this from a shell script you will hit the Linux chown command, which will have a lot more overhead. I had a customer attempt this using the shell and it ended up taking
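A minimal sketch of that approach in Python, calling the C-library chown via os.lchown instead of forking /bin/chown once per file; the UID/GID maps and the path are hypothetical.

```python
import os

# Hypothetical old-to-new ID mappings.
old_to_new_uid = {1001: 50001}
old_to_new_gid = {100: 50100}

def remap(root):
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            st = os.lstat(path)
            uid = old_to_new_uid.get(st.st_uid, -1)   # -1 leaves the id unchanged
            gid = old_to_new_gid.get(st.st_gid, -1)
            if uid != -1 or gid != -1:
                os.lchown(path, uid, gid)             # one syscall, no fork/exec per file

remap("/gpfs/fs0/some/fileset")   # hypothetical starting directory
```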

Re: [gpfsug-discuss] Importing a Spectrum Scale filesystem from a 4.2.3 cluster to a 5.0.4.3 cluster

2020-05-28 Thread Jim Doherty
What is the minimum release level of the Spectrum Scale 5.0.4 cluster? Is it 4.2.3.X? Jim Doherty On Thursday, May 28, 2020, 6:31:21 PM EDT, Prasad Surampudi wrote: We have two Scale clusters, Cluster-A running Scale version 4.2.3 on RHEL 6/7 and Cluster-B running Spectrum

Re: [gpfsug-discuss] advanced filecache math

2019-05-09 Thread Jim Doherty
application. There have been some memory leaks fixed in Ganesha that will be in 4.2.3 PTF15, which is available on Fix Central. Jim Doherty On Thursday, May 9, 2019, 1:25:03 PM EDT, Sven Oehme wrote: Unfortunately more complicated :) The consumption here is an estimate based on 512b inodes

Re: [gpfsug-discuss] Memory accounting for processes writing to GPFS

2019-03-06 Thread Jim Doherty
For any process with a large number of threads the VMM size has become an imaginary number ever since the glibc change to allocate a heap per thread. I look to /proc/$pid/status to find the memory used by a process: RSS + Swap + kernel page tables. Jim On Wednesday, March 6, 2019, 4:25:48
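A rough sketch of that accounting: sum the VmRSS, VmSwap and VmPTE fields from /proc/<pid>/status (standard Linux fields; the PID used below is just an example).

```python
def proc_memory_kib(pid):
    # Sum resident memory, swapped-out memory and kernel page tables, in kB.
    wanted = {"VmRSS", "VmSwap", "VmPTE"}
    total = 0
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            key, _, rest = line.partition(":")
            if key in wanted:
                total += int(rest.split()[0])   # values are reported in kB
    return total

print(proc_memory_kib(1))   # e.g. PID 1; substitute the mmfsd or application PID
```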

Re: [gpfsug-discuss] Clarification of mmdiag --iohist output

2019-02-21 Thread Jim Doherty
Are all of the slow I/Os from the same NSD volumes? You could run an mmtrace and take an internaldump and open a ticket to the Spectrum Scale queue. You may want to limit the run to just your NSD servers and not all nodes like I use in my example. Or one of the tools we use to review
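One quick way to answer the "same NSD volumes?" question is to bucket the mmdiag --iohist entries by disk. The column positions below (disk:sector in the fourth column, service time in milliseconds in the sixth) are an assumption about one release's output format; check the header line of your own output and adjust.

```python
import subprocess
from collections import Counter

SLOW_MS = 1000.0   # threshold for "slow"; tune to taste

out = subprocess.run(["/usr/lpp/mmfs/bin/mmdiag", "--iohist"],
                     capture_output=True, text=True).stdout
slow = Counter()
for line in out.splitlines():
    cols = line.split()
    try:
        disk = cols[3].split(":")[0]   # assumed disk:sectorNum column
        ms = float(cols[5])            # assumed service-time-in-ms column
    except (IndexError, ValueError):
        continue                       # header, separator, or malformed line
    if ms >= SLOW_MS:
        slow[disk] += 1

for disk, count in slow.most_common():
    print(f"{disk}: {count} I/Os slower than {SLOW_MS} ms")
```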

Re: [gpfsug-discuss] mmfsd recording High CPU usage

2018-11-21 Thread Jim Doherty
At a guess with no data: if the application is opening more files than can fit in the maxFilesToCache (MFTC) objects, GPFS will expand the MFTC to support the open files, but it will also scan to try and free any unused objects. If you can identify the user job that is causing this
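A minimal sketch for spotting which processes hold the most open files (likely candidates for pushing past maxFilesToCache): count the entries under /proc/<pid>/fd. Run it as root on the suspect node so every process is readable.

```python
import os
from collections import Counter

counts = Counter()
for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        counts[pid] = len(os.listdir(f"/proc/{pid}/fd"))   # one entry per open fd
    except (PermissionError, FileNotFoundError):
        continue                       # process exited or fd dir not readable

for pid, nfds in counts.most_common(10):
    try:
        with open(f"/proc/{pid}/comm") as f:
            name = f.read().strip()
    except FileNotFoundError:
        name = "?"
    print(f"pid {pid} ({name}): {nfds} open fds")
```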

Re: [gpfsug-discuss] Long I/O's on client but not on NSD server(s)

2018-10-04 Thread Jim Doherty
It could mean a shortage of NSD server threads or a congested network. Jim On Thursday, October 4, 2018, 3:55:10 PM EDT, Buterbaugh, Kevin L wrote: Hi All, What does it mean if I have a few dozen very long I/Os (50 - 75 seconds) on a gateway as reported by "mmdiag --iohist"

Re: [gpfsug-discuss] What is this error message telling me?

2018-09-27 Thread Jim Doherty
The data is also shown in an internaldump as part of the mmfsadm dump tscomm data; the RTO & RTT times are listed in microseconds. So the RTO here in my example is 18.5 seconds (see below). You can get the same information from the Linux networking command ss -i. The normal
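A small sketch of pulling the same numbers out of ss -i (where the values are reported in milliseconds, e.g. "rto:204 rtt:0.179/0.04"). Filtering on port 1191, the default GPFS daemon port, is an assumption; drop the filter to see every TCP connection.

```python
import re
import subprocess

out = subprocess.run(["ss", "-t", "-i"], capture_output=True, text=True).stdout

# 'ss -t -i' prints one line per connection followed by an indented detail
# line that carries the rto/rtt figures.
lines = out.splitlines()
for header, detail in zip(lines, lines[1:]):
    if ":1191" not in header:
        continue                                    # keep only GPFS connections
    rto = re.search(r"rto:([\d.]+)", detail)
    rtt = re.search(r"rtt:([\d.]+)/[\d.]+", detail)
    if rto and rtt:
        peer = header.split()[-1]                   # peer address:port column
        print(f"{peer}  rto={rto.group(1)} ms  rtt={rtt.group(1)} ms")
```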

Re: [gpfsug-discuss] High I/O wait times

2018-07-06 Thread Jim Doherty
You may want to get an mmtrace, but I suspect that the disk I/Os are slow. The iohist is showing the time from when the I/O was issued until it was finished. Of course if you have disk I/Os taking 10x too long then other I/Os are going to queue up behind them. If there are more I/Os

Re: [gpfsug-discuss] Question about NSD "Devtype" setting, nsddevices file

2018-01-17 Thread Jim Doherty
Run an mmlsnsd -X. I suspect you will see that GPFS is using one of the /dev/sd* "generic" paths to the LUN, not the /dev/mapper/ path. In our case the device is set up as dmm. [root@service5 ~]# mmlsnsd -X  Disk name    NSD volume ID  Device Devtype  Node name
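A rough check along the same lines: parse mmlsnsd -X and flag any NSD whose local device is a /dev/sd* generic path rather than a dm/multipath device. The column order follows the header shown above (Disk name, NSD volume ID, Device, Devtype, Node name); adjust if your release prints it differently.

```python
import subprocess

out = subprocess.run(["/usr/lpp/mmfs/bin/mmlsnsd", "-X"],
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    cols = line.split()
    if len(cols) < 5 or cols[0] == "Disk" or cols[0].startswith("-"):
        continue                              # skip header and separator lines
    disk, device, devtype, node = cols[0], cols[2], cols[3], cols[4]
    if device.startswith("/dev/sd"):
        print(f"{disk} on {node}: generic path {device} (devtype {devtype})")
```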

Re: [gpfsug-discuss] GPFS Memory Usage Keeps going up and we don't know why.

2017-07-24 Thread Jim Doherty
n Manager")   2099520 bytes in use    17500049370 hard limit on memory usage  16778240 bytes committed to regions  1 number of regions 4 allocations 0 frees      0 allocation failures On Mon, 2017-07-24 at 13:11 +, Jim Doherty wrote: There a

Re: [gpfsug-discuss] GPFS Memory Usage Keeps going up and we don't know why.

2017-07-24 Thread Jim Doherty
There are 3 places that the GPFS mmfsd uses memory: the pagepool plus 2 shared memory segments. To see the memory utilization of the shared memory segments, run the command mmfsadm dump malloc. The statistics for memory pool id 2 are where maxFilesToCache/maxStatCache objects are, and
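A minimal sketch of pulling the per-pool figures Jim refers to out of mmfsadm dump malloc, matching on the "bytes in use" and "hard limit on memory usage" strings quoted above. mmfsadm is an unsupported service command, so treat this as a read-only diagnostic and adjust the match strings to your release's output.

```python
import subprocess

out = subprocess.run(["/usr/lpp/mmfs/bin/mmfsadm", "dump", "malloc"],
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    text = line.strip()
    # Keep the pool headers plus the usage/limit counters.
    if ("bytes in use" in text
            or "hard limit on memory usage" in text
            or "pool" in text.lower()):
        print(text)
```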