Re: [gpfsug-discuss] AFM gateway node scaling

2020-03-26 Thread Matt Weil
scale.v5r04.doc/bl1ins_gatewaynodefailureafm.htm > ~Venkat (vpuvv...@in.ibm.com)

Re: [gpfsug-discuss] AFM gateway node scaling

2020-03-25 Thread Matt Weil
per gateway).  AFM gateway nodes are > licensed as server nodes. > ~Venkat (vpuvv...@in.ibm.com)

[gpfsug-discuss] AFM gateway node scaling

2020-03-23 Thread Matt Weil
Hello all, Is there any guide and/or recommendation as to how to scale this: filesets per gateway node? Is it necessary to separate NSD server and gateway roles? Are dedicated gateway nodes licensed as clients? Thanks for any guidance. Matt
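A minimal sketch of the commands involved, as a starting point rather than a sizing rule: gateway nodes are designated with mmchnode, and the fileset-to-gateway assignment can be inspected with mmafmctl. The node names and the file system name fs0 are examples only.

  # Designate dedicated AFM gateway nodes (each needs a server license)
  mmchnode --gateway -N gw1,gw2
  # Show which gateway serves each AFM fileset and the state of its queue
  mmafmctl fs0 getstate

Filesets are spread across the available gateways, so the practical question is how many filesets (and how much queue traffic) each gateway can carry, which is workload dependent.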

Re: [gpfsug-discuss] pmcollector node

2017-08-21 Thread Matt Weil
any input on this? Thanks. On 7/5/17 10:51 AM, Matt Weil wrote: > Hello all, > Question on the requirements on pmcollector node(s) for a 500+ node cluster. Is there a sizing guide? What specifics should we scale? CPU, disks, memory?
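For clusters of this size the usual approach is to run more than one collector and federate them. A rough sketch of the peers stanza in /opt/IBM/zimon/ZIMonCollector.cfg is below; the host names are examples, and the exact syntax should be checked against the performance monitoring documentation for your release.

  # /opt/IBM/zimon/ZIMonCollector.cfg (fragment, illustrative)
  peers = {
      host = "pmcollector1"
      port = "9085"
  },
  {
      host = "pmcollector2"
      port = "9085"
  }

Disk is usually the first thing to size, since the retention settings in the same file determine how much time-series data each collector keeps.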

[gpfsug-discuss] socketMaxListenConnections and net.core.somaxconn

2017-05-08 Thread Matt Weil
Hello all, what happens if we set socketMaxListenConnections to a larger number than we have clients? More memory used? Thanks Matt
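The listen backlog is capped by both the Scale parameter and the kernel's net.core.somaxconn, so they are normally raised together. A hedged sketch, with illustrative values:

  # Raise the daemon's listen backlog on the relevant nodes
  mmchconfig socketMaxListenConnections=1024
  # The kernel silently truncates the backlog to net.core.somaxconn, so raise it as well
  sysctl -w net.core.somaxconn=1024
  echo 'net.core.somaxconn = 1024' > /etc/sysctl.d/90-somaxconn.conf

The backlog value is a limit rather than a preallocation, so setting it larger than the client count should not by itself consume significant extra memory; memory is used per pending connection, not per backlog slot.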

Re: [gpfsug-discuss] AFM gateways

2017-04-12 Thread Matt Weil
Yes, it tells you that when you attempt to make the node a gateway and it does not have a server license designation. On 4/12/17 4:53 AM, Venkateswara R Puvvada wrote: Gateway node requires server license. ~Venkat (vpuvv...@in.ibm.com)
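A short sketch of the order of operations implied here; the node name gw1 is an example.

  # Assign the server license first, then designate the node as an AFM gateway
  mmchlicense server --accept -N gw1
  mmchnode --gateway -N gw1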

Re: [gpfsug-discuss] CES node slow to respond

2017-03-24 Thread Matt Weil
FYI all We also ran into this after bumping maxFilesToC

Re: [gpfsug-discuss] CES node slow to respond

2017-03-24 Thread Matt Weil
also running version 4.2.2.2. On 3/24/17 2:57 PM, Matt Weil wrote: On 3/24/17 1:13 PM, Bryan Banister wrote: Hi Vipul, Hmm… interesting. We have dedicated systems running CES and nothing else, so the only thing opening files on GPFS is ganesha. IBM Support recommended we massively

Re: [gpfsug-discuss] CES node slow to respond

2017-03-24 Thread Matt Weil

Re: [gpfsug-discuss] Dual homing CES nodes

2017-03-23 Thread Matt Weil
ver to the right networks. > On Thu, Mar 23, 2017 at 10:35:30AM -0500, Matt Weil wrote: >> Hello all, >> Are there any issues with connecting CES nodes to multiple networks?

[gpfsug-discuss] Dual homing CES nodes

2017-03-23 Thread Matt Weil
Hello all, Are there any issues with connecting CES nodes to multiple networks? Thanks Matt
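Multi-homing generally works when addresses are tied to CES groups, so that an address from a given subnet only fails over to nodes that can actually reach that network. A hedged sketch; the group names, node names and IPs are examples, and the mmchnode/mmces syntax should be checked for your release.

  # Tag each CES node with the group(s) matching the networks it is attached to
  mmchnode --ces-group net-a -N ces1,ces2
  mmchnode --ces-group net-b -N ces3,ces4
  # Bind addresses to a group so they only float among nodes in that group
  mmces address add --ces-ip 10.10.1.50 --ces-group net-a
  mmces address add --ces-ip 192.168.2.50 --ces-group net-b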

Re: [gpfsug-discuss] CES node slow to respond

2017-03-23 Thread Matt Weil
> All, > We had an incident yesterday where

[gpfsug-discuss] CES node slow to respond

2017-03-22 Thread Matt Weil
All, We had an incident yesterday where one of our CES nodes slowed to a crawl. GPFS waiters showed prefetch threads going after inodes. iohist also showed lots of inode fetching. Then we noticed that the CES host had 5.4 million files open. The change I made was to set maxStatCache=DEFAULT
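A hedged sketch of inspecting and raising the file-cache limits on the protocol nodes; the values are purely illustrative, the cesNodes node class is assumed (as in a standard CES deployment), and maxFilesToCache/maxStatCache changes generally require a daemon restart on the affected nodes.

  # Show the current settings
  mmlsconfig maxFilesToCache maxStatCache
  # Raise them on the protocol nodes only (illustrative values; size token/pagepool memory to match)
  mmchconfig maxFilesToCache=2000000,maxStatCache=100000 -N cesNodes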

[gpfsug-discuss] numaMemoryInterleave=yes

2017-03-07 Thread Matt Weil
Hello all Is this necessary any more? numastat -p mmfsd seems to spread it out without it. Thanks Matt
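A quick way to compare the configured setting with the observed behaviour; nothing here is release-specific beyond the parameter name itself.

  # Is interleaving currently configured?
  mmlsconfig numaMemoryInterleave
  # How is mmfsd's memory actually spread across NUMA nodes?
  numastat -p mmfsd
  # Enable it explicitly if desired (takes effect after the daemon is restarted)
  mmchconfig numaMemoryInterleave=yes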

[gpfsug-discuss] GUI access

2017-02-14 Thread Matt Weil
Hello all, Somehow we misplaced the password for our dev instance. Is there any way to reset it? Thanks Matt
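Assuming the GUI CLI tools are in their default location, something along these lines run as root on the GUI node is the usual reset path; verify the exact syntax against the docs for your release, and treat the password shown as a placeholder.

  # List GUI users, then reset the admin account's password (placeholder value)
  /usr/lpp/mmfs/gui/cli/lsuser
  /usr/lpp/mmfs/gui/cli/chuser admin -p NewPassw0rd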

Re: [gpfsug-discuss] Getting 'blk_cloned_rq_check_limits: over max size limit' errors after updating the systems to kernel 2.6.32-642.el6 or later

2017-02-13 Thread Matt Weil
lue of max sectors of the block device. -jf On Sat, Feb 11, 2017 at 7:32 PM, Matt Weil <mw...@wustl.edu> wrote: https://access.redhat.com/solutions/2437991 I ran into this issue the other day even with the echo "4096" > /sys/block/$ii/queu

[gpfsug-discuss] Getting 'blk_cloned_rq_check_limits: over max size limit' errors after updating the systems to kernel 2.6.32-642.el6 or later

2017-02-11 Thread Matt Weil
https://access.redhat.com/solutions/2437991 I ran into this issue the other day even with the echo "4096" > /sys/block/$ii/queue/max_sectors_kb; in place. I have always made that larger to get to the 2M IO size, so I had never really seen this issue until the other day. I may have triggered it
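One way to keep the larger value across reboots and device rescans is a udev rule rather than a one-shot echo; the match pattern and value below are illustrative and should be adapted to the actual devices.

  # Write a persistent rule (illustrative match and value)
  cat > /etc/udev/rules.d/99-max-sectors.rules <<'EOF'
  ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd*", ATTR{queue/max_sectors_kb}="4096"
  EOF
  udevadm control --reload-rules && udevadm trigger --subsystem-match=block

Note that max_sectors_kb can never exceed max_hw_sectors_kb, and the blk_cloned_rq_check_limits errors are typically about multipath stacking devices whose limits no longer match after a rescan.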

[gpfsug-discuss] stuck GracePeriodThread

2017-02-07 Thread Matt Weil
running cnfs # rpm -qa | grep gpfs gpfs.gpl-4.1.1-7.noarch gpfs.base-4.1.1-7.x86_64 gpfs.docs-4.1.1-7.noarch gpfs.gplbin-3.10.0-327.18.2.el7.x86_64-4.1.1-7.x86_64 pcp-pmda-gpfs-3.10.6-2.el7.x86_64 gpfs.ext-4.1.1-7.x86_64 gpfs.gskit-8.0.50-47.x86_64 gpfs.msg.en_US-4.1.1-7.noarch === mmdiag:

[gpfsug-discuss] LROC nvme small IO size 4 k

2017-01-26 Thread Matt Weil
I still see small 4k IO's going to the nvme device after changing the max_sectors_kb. Writes did increase from 64 to 512. Is that an NVMe limitation?
> [root@ces1 system]# cat /sys/block/nvme0n1/queue/read_ahead_kb
> 8192
> [root@ces1 system]# cat /sys/block/nvme0n1/queue/nr_requests
> 512
>

Re: [gpfsug-discuss] LROC 100% utilized in terms of IOs

2017-01-26 Thread Matt Weil
100% utilized are bursts above 200,000 IO's. Any way to tell ganesha.nfsd to cache more? On 1/25/17 3:51 PM, Matt Weil wrote: [ces1,ces2,ces3] maxStatCache 8 worker1Threads 2000 maxFilesToCache 50 pagepool 100G maxStatCache 8 lrocData no 378G system memory. On 1/25/17 3:29

Re: [gpfsug-discuss] LROC 100% utilized in terms of IOs

2017-01-25 Thread Matt Weil
off also did you increase maxstatcache so LROC actually has some compact objects to use? If you send the values for maxfilestocache, maxstatcache, workerthreads and the available memory of the node, I can provide a starting point. On Wed, Jan 25, 2017 at 10:20 PM Matt Weil <mw...@wustl.edu> wrote:

Re: [gpfsug-discuss] LROC 100% utilized in terms of IOs

2017-01-25 Thread Matt Weil
way more than they ever were before. I guess we will need another nvme. sven On Wed, Jan 25, 2017 at 9:50 PM Matt Weil <mw...@wustl.edu> wrote: Hello all, We are having an issue where the LROC on a CES node gets overrun 100% utilized. Processes then sta

[gpfsug-discuss] LROC 100% utilized in terms of IOs

2017-01-25 Thread Matt Weil
Hello all, We are having an issue where the LROC on a CES node gets overrun (100% utilized). Processes then start to back up waiting for the LROC to return data. Any way to have the GPFS client go direct if LROC gets too busy? Thanks Matt
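A hedged sketch of the first things worth looking at: LROC hit statistics, and whether LROC is caching bulk file data as well as inodes/directories (it can be restricted so data reads do not saturate the device). The node name is an example.

  # Inspect LROC usage and hit/miss statistics on the affected CES node
  mmdiag --lroc
  # Optionally stop caching file data in LROC, keeping inodes and directory blocks
  mmchconfig lrocData=no -N ces1
  mmlsconfig lrocData lrocInodes lrocDirectories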

[gpfsug-discuss] CES nodes Hyper threads or no

2017-01-10 Thread Matt Weil
All, I typically turn Hyper-Threading off on storage nodes, so I did on our CES nodes as well. Now they are running at a load of over 100 and have 25% CPU idle. With two 8-core sockets I am now wondering if Hyper-Threading would help, or did we just undersize them :-(. These are NFSv3 servers only

Re: [gpfsug-discuss] CES nodes mount nfsv3 not responding

2017-01-03 Thread Matt Weil

Re: [gpfsug-discuss] CES nodes mount nfsv3 not responding

2017-01-03 Thread Matt Weil
> again. > My client's environment is currently deployed on CentOS 7. > Andrew Beattie

[gpfsug-discuss] CES nodes mount nfsv3 not responding

2017-01-03 Thread Matt Weil
This follows the IP, whatever node the IP lands on. The ganesha.nfsd process seems to stop working. Any ideas? There is nothing helpful in the logs. time mount ces200:/vol/aggr14/temp403 /mnt/test mount.nfs: mount system call failed real 1m0.000s user 0m0.000s sys 0m0.010s
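A few first checks on whichever node currently holds the address, sketched under the assumption that this is a 4.2.x CES deployment (mmhealth appeared in 4.2.1; log locations vary by release):

  # Which node holds the address, and what do the protocol services report?
  mmces address list
  mmces service list -a
  mmhealth node show CES
  # Check the Ganesha daemon itself on that node
  systemctl status nfs-ganesha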

Re: [gpfsug-discuss] LROC

2016-12-29 Thread Matt Weil
MB, currently in use: 0 MB
> Statistics from: Thu Dec 29 10:35:32 2016
>
> Total objects stored 0 (0 MB) recalled 0 (0 MB)
> objects failed to store 0 failed to recall 0 failed to inval 0
> objects queried 0 (0 MB) not found 0 = 0.00 %
> objects invalidated 0 (

Re: [gpfsug-discuss] LROC

2016-12-29 Thread Matt Weil
rom: Thu Dec 29 10:08:58 2016 It is not caching, however. I will restart GPFS to see if that makes it start working. On 12/29/16 10:18 AM, Matt Weil wrote: > On 12/29/16 10:09 AM, Sven Oehme wrote: >> i agree that is a very long name, given this is a nvme device it

Re: [gpfsug-discuss] LROC

2016-12-29 Thread Matt Weil
> if [[ $osName = Linux ]]
> then
>   : # Add function to discover disks in the Linux environment.
>   for luns in `ls /dev/disk/by-id | grep nvme`
>   do
>     all_luns=disk/by-id/$luns
>     echo $all_luns dmm
>   done
> fi
I will try tha

Re: [gpfsug-discuss] LROC

2016-12-28 Thread Matt Weil
avage "well there's yer problem". Are you > perhaps running a version of GPFS 4.1 older than 4.1.1.9? Looks like > there was an LROC related assert fixed in 4.1.1.9 but I can't find > details on it. > > > > *From:*Matt Weil > *Sent:* 12/28/16, 5:21 PM > *To:

Re: [gpfsug-discuss] LROC

2016-12-28 Thread Matt Weil
yes
> Wed Dec 28 16:17:07.507 2016: [X] *** Assert exp(ssd->state != ssdActive) in line 427 of file /project/sprelbmd1/build/rbmd11027d/src/avs/fs/mmfs/ts/flea/fs_agent_gpfs.C
> Wed Dec 28 16:17:07.508 2016: [E] *** Traceback:
> Wed Dec 28 16:17:07.509 2016: [E] 2:0x7FF1604F39B5 >

Re: [gpfsug-discuss] LROC

2016-12-20 Thread Matt Weil
cache metadata or also data associated to the > files? > sven > On Tue, Dec 20, 2016 at 5:35 PM Matt Weil <mw...@wustl.edu> wrote: > https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Gen

Re: [gpfsug-discuss] Intel Whitepaper - Spectrum Scale & LROC with NVMe

2016-12-06 Thread Matt Weil
Hello all, Thanks for sharing that. I am setting this up on our CES nodes. In this example the NVMe device names are not persistent. RHEL's default udev rules create persistent names in /dev/disk/by-id/ by serial number, so I modified mmdevdiscover to look for them there. What are others doing?

Re: [gpfsug-discuss] rpldisk vs deldisk & adddisk

2016-12-01 Thread Matt Weil
I always suspend the disk, then use mmrestripefs -m to remove the data, then delete the disk with mmdeldisk. -m migrates all critical data off of any suspended disk in this file system. Critical data is all data that would be lost if
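A short sketch of that sequence; the file system and NSD names are examples.

  # Suspend the NSD so no new blocks are allocated to it
  mmchdisk fs0 suspend -d "nsd12"
  # Migrate critical data off all suspended disks
  mmrestripefs fs0 -m
  # Remove the now-empty disk from the file system
  mmdeldisk fs0 "nsd12"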

Re: [gpfsug-discuss] multipath.conf for EMC V-max

2016-11-15 Thread Matt Weil
http://www.emc.com/collateral/TechnicalDocument/docu5128.pdf page 219; this is the default in RHEL.
device {
        vendor "EMC"
        product "SYMMETRIX"
        path_grouping_policy multibus
        getuid_callout "/lib/udev/scsi_id --page=pre-spc3-83 --whitelisted --device=/dev/%n"

[gpfsug-discuss] dependency problem with python-dnspython and python-dns

2016-11-14 Thread Matt Weil
> # manual install protocol nodes
> yum install nfs-ganesha-2.3.2-0.ibm24_2.el7.x86_64 nfs-ganesha-gpfs-2.3.2-0.ibm24_2.el7.x86_64 nfs-ganesha-utils-2.3.2-0.ibm24_2.el7.x86_64 gpfs.smb-4.3.11_gpfs_21-8.el7.x86_64 spectrum-scale-object-4.2.1-1.noarch
>
> there is a dependency problem with

[gpfsug-discuss] Leave protocol detail info

2016-10-28 Thread Matt Weil
Anybody know what this means? Wed Oct 26 20:08:29.619 2016: [D] Leave protocol detail info: LA: 75 LFLG: 24409951 LFLG delta: 75

[gpfsug-discuss] increasing inode

2016-09-19 Thread Matt Weil
All, What exactly happens that makes the clients hang when a fileset's inodes are increased?
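For reference, a hedged sketch of how the per-fileset inode limit is usually inspected and raised; the file system, fileset name and numbers are examples.

  # Show inode allocation and limits for the filesets (-i computes usage and can be slow)
  mmlsfileset fs0 -L -i
  # Raise the maximum and preallocate part of it
  mmchfileset fs0 fileset01 --inode-limit 2000000:500000

As I understand it, the preallocation is where clients can briefly stall: new inode blocks have to be formatted and the operation briefly quiesces the file system while the allocation structures are extended.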

[gpfsug-discuss] Backup on object stores

2016-08-25 Thread Matt Weil
Hello all, Just brainstorming here mainly, but I want to know how you are all approaching this. Do you replicate using GPFS and forget about backups? > https://www.ibm.com/support/knowledgecenter/STXKQY_4.2.1/com.ibm.spectrum.scale.v4r21.doc/bl1adv_osbackup.htm This seems good for a full recovery

Re: [gpfsug-discuss] New to GPFS

2016-07-25 Thread Matt Weil
On 7/24/16 5:27 AM, Stef Coene wrote: > Hi, > > Like the subject says, I'm new to Spectrum Scale. > > We are considering GPFS as back end for CommVault back-up data. > Back-end storage will be iSCSI (300 TB) and V5000 SAS (100 TB). > I created a 2 node cluster (RHEL) with 2 protocol nodes and 1

[gpfsug-discuss] CES sizing guide

2016-07-11 Thread Matt Weil
Hello all, > https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Sizing%20Guidance%20for%20Protocol%20Node > > Is there any more guidance on this, as one socket can be a lot of cores and memory today? > > Thanks >

Re: [gpfsug-discuss] 4.2 installer

2016-03-20 Thread Matt Weil
a limited set > of nodes. > Bob Oesterlin > Sr Storage Engineer, Nuance HPC Grid

Re: [gpfsug-discuss] 4.2 installer

2016-03-19 Thread Matt Weil
Fri Mar 18 11:50:43 CDT 2016: mmcesop: /vol/system/ found but is not on a GPFS filesystem On 3/18/16 11:39 AM, Matt Weil wrote: > upgrading to 4.2.2 fixed the dependency issue. I now get Unable to > access CES shared root. > > # /usr/lpp/mmfs/bin/mmlsconfig | grep 'cesSharedRoot' >
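The CES shared root has to live on a GPFS file system mounted on all protocol nodes, which is what the mmcesop message is complaining about. A hedged sketch of pointing it at a GPFS path (the path is an example; changing cesSharedRoot generally requires GPFS/CES to be down on the protocol nodes):

  mmchconfig cesSharedRoot=/gpfs/fs0/ces
  mmlsconfig cesSharedRoot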

Re: [gpfsug-discuss] 4.2 installer

2016-03-19 Thread Matt Weil

Re: [gpfsug-discuss] 4.2 installer

2016-03-19 Thread Matt Weil
any help here? > ~]# yum -d0

Re: [gpfsug-discuss] cpu shielding

2016-03-04 Thread Matt Weil
thernet? > I have seen problems that look like yours in the past with single-network > Ethernet setups. > Regards, > Vic >> On 2 Mar 2016, at 20:54, Matt Weil <mw...@genome.wustl.edu> wrote: >> Can you share

Re: [gpfsug-discuss] Anyone else using Veritas NetBackup with GPFS?

2015-12-03 Thread Matt Weil
Paul, We currently run NetBackup to push about 1.3 PB of real data to tape, using one NetBackup master and a single media server that is also a GPFS client. The media server uses the spare file system space as a staging area before writing to tape. We have recently invested in a TSM server