Hello all,
Not sure if this is a good place to ask, but we are experiencing a strange
issue.
It appears that systemd is unmounting the file system immediately after
it is mounted. An strace of systemd shows that the device is not there;
systemd sees that the path has failed and unmounts the device.
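As a starting point for debugging (a rough sketch; the mount point /gpfs/fs1 below is hypothetical), you can ask systemd which mount unit it generated for the path and why it acted on it:

    systemctl list-units --type=mount | grep -i gpfs    # find the generated mount unit
    systemd-escape -p --suffix=mount /gpfs/fs1          # derive the unit name for a given path
    journalctl -b -u gpfs-fs1.mount                     # see why systemd started/stopped it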
From: Matt Weil
To: gpfsug main discussion list
Date: 11/30/2015 01:42 PM
Paul,
We currently run NetBackup to push about 1.3 PB of real data to tape,
using one NetBackup master and a single media server that is also a GPFS
client. The media server uses the spare file system space as a staging
area before writing to tape. We have recently invested in a TSM
server du
Hello all,
I recently replaced 4 NSD servers with new ones. All of the roles,
including primary and secondary servers, were moved to the new servers. Once
that was completed, the old servers were removed. Now clients of a remote
cluster show a broken connection to the cluster that now has the new
nodes.
e(s) using
mmremotecluster? What does “mmremotecluster show” indicate? If they
are the old nodes, run “mmremotecluster update”.
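For example (a sketch; the cluster and node names below are hypothetical), on the remote cluster that mounts the file system:

    mmremotecluster show all                                   # list the contact nodes currently recorded
    mmremotecluster update home.cluster -n newnsd1,newnsd2     # point at the replacement NSD servers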
Bob Oesterlin
Sr Storage Engineer, Nuance HPC Grid
507-269-0413
From: gpfsug-discuss-boun...@spectrumscale.org on behalf of Matt
Weil mw...@gen
All,
We are seeing issues on our GPFS clients where mmfsd is not able to
respond in time to renew its lease. Once that happens the file system is
unmounted. We are experimenting with cgroups to tie mmfsd and others
to specified CPUs. Any recommendations out there on how to shield GPFS
from oth
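A minimal sketch of the cgroup approach using the libcgroup tools (the CPU numbers are illustrative only, and the cgroup-v1 cpuset paths are assumed):

    cgcreate -g cpuset:/mmfsd                              # create a cpuset cgroup for GPFS
    echo 0-3 > /sys/fs/cgroup/cpuset/mmfsd/cpuset.cpus     # reserve CPUs 0-3 for mmfsd
    echo 0   > /sys/fs/cgroup/cpuset/mmfsd/cpuset.mems     # and memory from NUMA node 0
    cgclassify -g cpuset:/mmfsd $(pgrep mmfsd)             # move the running daemon into it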
oun...@spectrumscale.org] On Behalf Of Matt Weil
> Sent: Wednesday, March 02, 2016 1:47 PM
> To: gpfsug main discussion list
> Subject: [gpfsug-discuss] cpu shielding
>
> All,
>
> We are seeing issues on our GPFS clients where mmfsd is not able to respond
> in time to renew
et?
Two 10 Gig Intel NICs in an LACP bond. Links are not saturated.
>
> I have seen problems that look like yours in the past with single-network
> Ethernet setups.
>
> Regards,
>
> Vic
>
> Sent from my iPhone
>
>> On 2 Mar 2016, at 20:54, Matt Weil wrote:
u shielding
>
> Hi,
>
> How sure are you that it is cpu scheduling that is your problem?
>
> Are you using IB or Ethernet?
>
> I have seen problems that look like yours in the past with single-network
> Ethernet setups.
>
> Regards,
>
> Vic
>
> Sent from my
>> On 2 Mar 2016, at 20:54, Matt Weil wrote:
>>
>> Can you share anything more?
>> We are t
> From: gpfsug-discuss-boun...@spectrumscale.org
> [gpfsug-discuss-boun...@spectrumscale.org] on behalf of Matt Weil
> [mw...@genome.wustl.edu]
> Sent: 16 March 2016 19:37
> To: gpfsug main discussion list
> Subject: Re: [gpfsug-discuss] 4.2 installer
>
> any help here?
> ~]
ance HPC Grid
>
>
>
> From:
> gpfsug-discuss-boun...@spectrumscale.org
> on behalf of Matt Weil
> mw...@genome.wustl.edu
> Reply-To: gpfsug main discussion list
> gpfsug-discuss@spectrumscale.org
> Date: Wednesday, Mar
All,
Attempting to upgrade in our dev environment. The update to 4.2
was simple:
http://www.ibm.com/support/knowledgecenter/STXKQY/420/com.ibm.spectrum.scale.v4r2.ins.doc/bl1ins_migratingtoISS4.2fromISS4.1.1.htm
But I am confused about the installation toolkit. It seems that it is
going to
Fri Mar 18 11:50:43 CDT 2016: mmcesop: /vol/system/ found but is not on
a GPFS filesystem
On 3/18/16 11:39 AM, Matt Weil wrote:
> upgrading to 4.2.2 fixed the dependency issue. I now get Unable to
> access CES shared root.
>
> # /usr/lpp/mmfs/bin/mmlsconfig | grep 'cesSharedRoot
imited set
> of nodes.
>
> Bob Oesterlin
> Sr Storage Engineer, Nuance HPC Grid
> 507-269-0413
>
>
> From: gpfsug-discuss-boun...@spectrumscale.org on behalf of Matt
> Weil mw...@genome.wustl.edu
> Reply-To: gpfsug main discussion
Hello all,
Is there any good documentation out there to integrate IPA with CES?
Thanks
Matt
Apr 21 07:41:53 linuscs88 mmfs: Shutting down abnormally due to error in
/project/sprelfks1/build/rfks1s007a/src/avs/fs/mmfs/ts/tm/tree.C line
1025 retCode 12, reasonCode 56
any ideas?
# enable multipathing
mpathconf --enable --with_multipathd y
# show the running multipath configuration
multipathd show config
On 4/25/16 3:27 PM, Jan Finnerman Load wrote:
> Hi,
>
> I realize this might not be strictly GPFS related but I’m getting a
> little desperate here…
> I’m doing an implementation of GPFS/Spectrum Scale 4.2 at a custom
Hello all,
> https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Sizing%20Guidance%20for%20Protocol%20Node
>
> Is there any more guidance on this as one socket can be a lot of cores and
> memory today.
>
> Thanks
>
On 7/24/16 5:27 AM, Stef Coene wrote:
> Hi,
>
> Like the subject says, I'm new to Spectrum Scale.
>
> We are considering GPFS as back end for CommVault back-up data.
> Back-end storage will be iSCSI (300 TB) and V5000 SAS (100 TB).
> I created a 2 node cluster (RHEL) with 2 protocol nodes and 1 c
Hello all,
Just brainstorming here, mainly, but I want to know how you are all
approaching this. Do you replicate using GPFS and forget about backups?
> https://www.ibm.com/support/knowledgecenter/STXKQY_4.2.1/com.ibm.spectrum.scale.v4r21.doc/bl1adv_osbackup.htm
This seems good for a full recovery
All,
What exactly happens that makes the clients hang when a fileset's inodes
are increased?
Anybody know what this means?
Wed Oct 26 20:08:29.619 2016: [D] Leave protocol detail info: LA: 75
LFLG: 24409951 LFLG delta: 75
Also, is there any document that explains what happens to the cluster
when a client node stops responding and gets evicted?
On 10/28/16 3:49 PM, Matt Weil wrote:
> Anybody know what this means?
>
> Wed Oct 26 20:08:29.619 2016: [D] Leave protocol detail info: LA: 75
> LFLG: 24409951 LF
> #manual install protocol nodes
> yum install nfs-ganesha-2.3.2-0.ibm24_2.el7.x86_64
> nfs-ganesha-gpfs-2.3.2-0.ibm24_2.el7.x86_64
> nfs-ganesha-utils-2.3.2-0.ibm24_2.el7.x86_64
> gpfs.smb-4.3.11_gpfs_21-8.el7.x86_64 spectrum-scale-object-4.2.1-1.noarch
>
> there is a dependency problem with pytho
http://www.emc.com/collateral/TechnicalDocument/docu5128.pdf
page 219
This is the default in RHEL:
device {
        vendor "EMC"
        product "SYMMETRIX"
        path_grouping_policy multibus
        getuid_callout "/lib/udev/scsi_id --page=pre-spc3-83 --whitelisted --device=/dev/%n"
}
I always suspend the disk then use mmrestripefs -m to remove the data. Then
delete the disk with mmdeldisk.
-m
        Migrates all critical data off of any suspended
        disk in this file system. Critical data is all
        data that would be lost if
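As a sketch of that sequence (the file system and NSD names are hypothetical):

    mmchdisk gpfs0 suspend -d nsd12    # stop new allocations on the disk
    mmrestripefs gpfs0 -m              # migrate critical data off all suspended disks
    mmdeldisk gpfs0 nsd12              # remove the now-empty disk from the file system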
Hello all,
Thanks for sharing that. I am setting this up on our CES nodes. In this
example the nvme devices are not persistent. RHEL's default udev rules put
them in /dev/disk/by-id/ persistently by serial number so I modified
mmdevdiscover to look for them there. What are others doing? cust
alancing across devices
You don't want to add extra complexity and path length to anything, so stick
with raw devices.
K that is what I was thinking.
sven
On Tue, Dec 6, 2016 at 8:40 AM Matt Weil
<mw...@wustl.edu> wrote:
Hello all,
Thanks for sharing that. I am settin
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Flash%20Storage
Hello all,
Are there any tuning recommendations to get these to cache more metadata?
Thanks
Matt
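For reference, the usual knobs are the GPFS cache settings; the values below are purely illustrative and have to be sized against the node's memory:

    mmchconfig maxFilesToCache=1000000 -N cesNodes    # full file objects cached per node
    mmchconfig maxStatCache=100000 -N cesNodes        # compact stat-only objects
    mmchconfig pagepool=32G -N cesNodes               # data/metadata buffer pool
    mmlsconfig maxFilesToCache maxStatCache pagepool  # confirm the values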
cache metadata or also data associated to the
> files ?
>
> sven
>
>
>
> On Tue, Dec 20, 2016 at 5:35 PM Matt Weil <mw...@wustl.edu> wrote:
>
>
> https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%
Does Ganesha have a default read and write max size? If so, what is it?
Thanks
Matt
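I don't have the defaults to hand, but MaxRead and MaxWrite are standard Ganesha export parameters; on CES (4.2+) something like the following should show what the cluster is using (which fields appear in the output is an assumption on my part):

    mmnfs config list | grep -i max
    mmnfs export list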
This is enabled on this node but mmdiag does not seem to show it
caching. Did I miss something? I do have one file system in the
cluster that is running 3.5.0.7; wondering if that is causing this.
> [root@ces1 ~]# mmdiag --lroc
>
> === mmdiag: lroc ===
> LROC Device(s): 'NULL' status Idle
> Ca
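For comparison, an LROC device is just a local NSD created with usage=localCache; a hedged sketch of the stanza (device, NSD and node names are hypothetical):

    %nsd: nsd=lroc_ces1
      device=/dev/disk/by-id/nvme-...
      servers=ces1
      usage=localCache

    mmcrnsd -F /tmp/lroc.stanza
    mmdiag --lroc      # should then report the device rather than 'NULL'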
rom the cluster.
>
> -Aaron
>
> On 12/28/16 11:44 AM, Matt Weil wrote:
>> This is enabled on this node but mmdiag does not seem to show it
>> caching. Did I miss something? I do have one file system in the
>> cluster that is running 3.5.0.7 wondering if that is c
> --
> Sven Oehme
> Scalable Storage Research
> email: oeh...@us.ibm.com
> Phone: +1 (408) 824-8904
> IBM Almaden Research Lab
> --
>
no I will do that next.
On 12/28/16 1:55 PM, Sven Oehme wrote:
> Did you restart the daemon on that node after you fixed it ? Sent from
> IBM Verse
>
> Matt Weil --- Re: [gpfsug-discuss] LROC ---
>
> From: "Matt Weil"
> To: gpfsug-discuss@spectrumscale.
still in a 'status Shutdown' even after gpfs was stopped and started.
yes
> Wed Dec 28 16:17:07.507 2016: [X] *** Assert exp(ssd->state !=
> ssdActive) in line 427 of file
> /project/sprelbmd1/build/rbmd11027d/src/avs/fs/mmfs/ts/flea/fs_agent_gpfs.C
> Wed Dec 28 16:17:07.508 2016: [E] *** Traceback:
> Wed Dec 28 16:17:07.509 2016: [E] 2:0x7FF1604F39B5
> logA
avage "well there's yer problem". Are you
> perhaps running a version of GPFS 4.1 older than 4.1.1.9? Looks like
> there was an LROC related assert fixed in 4.1.1.9 but I can't find
> details on it.
>
>
>
> *From:*Matt Weil
> *Sent:* 12/28/16, 5:21 PM
> ro_cache_S29GNYAH200016 0A6403AA586531E1
> /dev/disk/by-id/nvme-Dell_Express_Flash_NVMe_SM1715_1.6TB_SFF___S29GNYAH200016
> dmm ces1.gsc.wustl.edu server node
On 12/28/16 5:19 PM, Aaron Knister wrote:
> mmlssnsd -X | grep 0A6403AA58641546
ere.
> if [[ $osName = Linux ]]
> then
>   : # Add function to discover disks in the Linux environment.
>   # emit each NVMe by-id path (relative to /dev) plus the device type "dmm"
>   for luns in `ls /dev/disk/by-id | grep nvme`
>   do
>     all_luns=disk/by-id/$luns
>     echo $all_luns dmm
>   done
> fi
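For anyone following along: this is the user exit at /var/mmfs/etc/nsddevices; it has to be executable and can be tested by hand (the output should be "device dmm" pairs):

    chmod +x /var/mmfs/etc/nsddevices
    /var/mmfs/etc/nsddevices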
sing issues.
>
> What does a "tspreparedisk -S" show on that node?
>
> Also, what does your nsddevices script look like? I'm wondering if you
> could have it give back "/dev/dm-XXX" paths instead of
>
atistics from: Thu Dec 29 10:08:58 2016
It is not caching however. I will restart gpfs to see if that makes it
start working.
On 12/29/16 10:18 AM, Matt Weil wrote:
>
>
>
> On 12/29/16 10:09 AM, Sven Oehme wrote:
>> I agree that is a very long name, given this is an NVMe device
pacity: 1526184 MB, currently in use: 0 MB
> Statistics from: Thu Dec 29 10:35:32 2016
>
> Total objects stored 0 (0 MB) recalled 0 (0 MB)
> objects failed to store 0 failed to recall 0 failed to inval 0
> objects queried 0 (0 MB) not found 0 = 0.00 %
> objects
ice is too big.
>
> On the caching: it only gets used when you run out of pagepool or
> when you run out of full file objects. So what benchmark or test did
> you run to push data into LROC?
>
> sven
>
>
> On Thu, Dec 29, 2016 at 5:41 PM Matt Weil <mailto:mw...@wus
This follows the IP; whatever node the IP lands on, the ganesha.nfsd
process seems to stop working. Any ideas? There is nothing helpful in
the logs.
time mount ces200:/vol/aggr14/temp403 /mnt/test
mount.nfs: mount system call failed
real    1m0.000s
user    0m0.000s
sys     0m0.010s
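When this happens it may be worth checking what the cluster thinks the NFS service state is on that node (commands assume the CES tooling in 4.2+):

    mmces service list -a        # which protocol services are enabled/running per node
    mmces state show -a          # CES state per node
    mmhealth node show CES       # 4.2.1+ health view of the CES component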
> again.
>
> My client's environment is currently deployed on CentOS 7.
> Andrew Beattie
> Software Defined Storage - IT Specialist
> Phone: 614-2133-7927
> E-mail: abeat...@au1.ibm.com
>
>
>
> - Ori
Software Defined Storage - IT Specialist
> Phone: 614-2133-7927
> E-mail: abeat...@au1.ibm.com
>
>
>
> - Original message -
> From: Matt Weil
> Sent by: gpfsug-discuss-boun...@spectr
All,
I typically turn hyperthreading off on storage nodes, so I did on our
CES nodes as well. Now they are running at a load of over 100 and have
25% CPU idle. With two 8-core CPUs I am now wondering if hyperthreading
would help, or did we just undersize them :-(.
These are NFSv3 servers only w
Hello all,
We are having an issue where the LROC on a CES node gets overrun, 100%
utilized. Processes then start to back up waiting for the LROC to
return data. Any way to have the GPFS client go direct if the LROC gets
too busy?
Thanks
Matt
's go up way more than they ever were before. I guess we
will need another nvme.
sven
On Wed, Jan 25, 2017 at 9:50 PM Matt Weil
<mw...@wustl.edu> wrote:
Hello all,
We are having an issue where the LROC on a CES node gets overrun 100%
utilized. Processes then start to backup
off
Also, did you increase maxStatCache so LROC actually has some compact objects to
use?
If you send values for maxFilesToCache, maxStatCache, workerThreads and the
available memory of the node I can provide a starting point.
On Wed, Jan 25, 2017 at 10:20 PM Matt Weil
<mw...@wustl.edu>
100% utilized are bursts above 200,000 IOs. Any way to tell ganesha.nfsd to
cache more?
On 1/25/17 3:51 PM, Matt Weil wrote:
[ces1,ces2,ces3]
maxStatCache 8
worker1Threads 2000
maxFilesToCache 50
pagepool 100G
maxStatCache 8
lrocData no
378G system memory.
On 1/25/17
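To gather the numbers Sven asked for, something like the following works (one attribute per line to keep it simple):

    mmlsconfig maxFilesToCache
    mmlsconfig maxStatCache
    mmlsconfig workerThreads
    mmlsconfig pagepool
    mmdiag --memory      # pagepool and shared segment usage on the node
    free -g              # total memory on the node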
I still see small 4k IOs going to the nvme device after changing the
max_sectors_kb. Writes did increase from 64 to 512. Is that an NVMe
limitation?
> [root@ces1 system]# cat /sys/block/nvme0n1/queue/read_ahead_kb
> 8192
> [root@ces1 system]# cat /sys/block/nvme0n1/queue/nr_requests
> 512
> [roo
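To make those queue settings stick across reboots, a udev rule is one option (the file name, device match and values below are illustrative only):

    # /etc/udev/rules.d/99-nvme-tune.rules
    ACTION=="add|change", KERNEL=="nvme*n*", ATTR{queue/max_sectors_kb}="4096", ATTR{queue/nr_requests}="512", ATTR{queue/read_ahead_kb}="8192"

followed by "udevadm control --reload-rules && udevadm trigger" to apply it without a reboot.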
Turning off data (lrocData) seems to have helped this issue.
Thanks all
On 1/26/17 9:37 AM, Matt Weil wrote:
100% utilized are bursts above 200,000 IO's. Any way to tell ganesha.nfsd to
cache more?
On 1/25/17 3:51 PM, Matt Weil wrote:
[ces1,ces2,ces3]
maxStatCache 8
worker1Threads
running cnfs
# rpm -qa | grep gpfs
gpfs.gpl-4.1.1-7.noarch
gpfs.base-4.1.1-7.x86_64
gpfs.docs-4.1.1-7.noarch
gpfs.gplbin-3.10.0-327.18.2.el7.x86_64-4.1.1-7.x86_64
pcp-pmda-gpfs-3.10.6-2.el7.x86_64
gpfs.ext-4.1.1-7.x86_64
gpfs.gskit-8.0.50-47.x86_64
gpfs.msg.en_US-4.1.1-7.noarch
=== mmdiag: waiter
https://access.redhat.com/solutions/2437991
I ran into this issue the other day even with the echo "4096" >
/sys/block/$ii/queue/max_sectors_kb in place. I have always made that
larger to get to the 2M IO size, so I had never really seen this issue
until the other day. I may have triggered it myse
lue of max sectors of the block device.
-jf
On Sat, Feb 11, 2017 at 7:32 PM, Matt Weil
<mw...@wustl.edu> wrote:
https://access.redhat.com/solutions/2437991
I ran into this issue the other day even with the echo "4096" >
/sys/block/$ii/queue/max_sectors_kb; in pl
Hello all,
Somehow we misplaced the password for our dev instance. Is there any
way to reset it?
Thanks
Matt
Hello all
Is this necessary any more? "numastat -p mmfsd"
seems to show the memory spread out without it.
Thanks
Matt
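A quick way to compare with and without interleaving (numaMemoryInterleave is the GPFS-side setting; checking it is harmless):

    numastat -p mmfsd                    # per-NUMA-node memory of the daemon
    mmlsconfig numaMemoryInterleave      # what GPFS itself is configured to do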
All,
We had an incident yesterday where one of our CES nodes slowed to a
crawl. GPFS waiters showed prefetch threads going after inodes.
iohist also showed lots of inode fetching. Then we noticed that the CES
host had 5.4 million files open.
The change I made was to set maxStatCache=DEFAULT be
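A rough way to see how many files the Ganesha daemon is holding open on a CES node (the process name is assumed to be ganesha.nfsd):

    ls /proc/$(pgrep -x ganesha.nfsd)/fd | wc -l     # file descriptors held by Ganesha
    mmlsconfig maxFilesToCache maxStatCache          # GPFS cache limits on the node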
:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Matt Weil
> Sent: Wednesday, March 22, 2017 11:43 AM
> To: gpfsug main discussion list
> Subject: [gpfsug-discuss] CES node slow to respond
>
> All,
>
> We had an incident yesterday where one of our CES nodes slowed to
Hello all,
Are there any issues with connecting CES nodes to multiple networks?
Thanks
Matt
> ether 84:2b:2b:47:70:35 txqueuelen 1000 (Ethernet)
On 3/23/17 10:42 AM, Skylar Thompson wrote:
> The only thing I can think of is you should be careful that you have
> distinct CES groups so that addresses will failover to the right networks.
>
> On Thu, Mar 23, 2017 a
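A hedged sketch of what Skylar describes, with hypothetical group and address values (the --ces-group options are assumptions on my part; check the mmces/mmchnode man pages for your level):

    mmchnode --ces-group lan-a -N ces1,ces2                    # put nodes in a named CES group
    mmces address add --ces-ip 10.10.1.50 --ces-group lan-a    # addresses then fail over only within that group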
of why ganesha is
holding open so many files? Is this expected/needed/etc?
Or do we have to open a PMR to get some kind of explanation?
-B
-Original Message-
From: gpfsug-discuss-boun...@spectrumscale.org
[mailto:gpfsug-discuss-bo
also running version 4.2.2.2.
On 3/24/17 2:57 PM, Matt Weil wrote:
On 3/24/17 1:13 PM, Bryan Banister wrote:
Hi Vipul,
Hmm… interesting. We have dedicated systems running CES and nothing else, so
the only thing opening files on GPFS is ganesha. IBM Support recommended we
massively
Is this expected/needed/etc?
Or do we have to open a PMR to get some kind of explanation?
-B
-Original Message-
From: gpfsug-discuss-boun...@spectrumscale.org
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Matt Weil
Hello,
Any reason not to enable all NSD servers as gateways when using native
GPFS AFM? Will they all pass traffic?
Thanks
Matt
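For reference, the gateway role is assigned per node with mmchnode, so it does not have to include every NSD server (the node names below are hypothetical):

    mmchnode --gateway -N nsd1,nsd2      # designate AFM gateway nodes
    mmchnode --nogateway -N nsd3         # and it can be removed again per node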
.com/support/knowledgecenter/STXKQY_4.2.2/com.ibm.spectrum.scale.v4r22.doc/bl1ins_NFSvsGPFSAFM.htm
~Venkat (vpuvv...@in.ibm.com)
From: Matt Weil <mw...@wustl.edu>
To: gpfsug main discussion list
<gpfsug-discuss@spectrumscale.org>
Yes, it tells you that when you attempt to make the node a gateway and it does
not have a server license designation.
On 4/12/17 4:53 AM, Venkateswara R Puvvada wrote:
Gateway node requires server license.
~Venkat (vpuvv...@in.ibm.com)
From: Mat
Hello all,
Is there any way to rate limit the AFM traffic?
Thanks
Matt
Hello all,
What happens if we set socketMaxListenConnections to a larger number
than we have clients? More memory used?
Thanks
Matt
Hello all,
Question on the requirements for the pmcollector node(s) for a 500+ node
cluster. Is there a sizing guide? What specifics should we scale:
CPU, disks, memory?
Thanks
Matt
Any input on this?
Thanks
On 7/5/17 10:51 AM, Matt Weil wrote:
> Hello all,
>
> Question on the requirements on pmcollector node/s for a 500+ node
> cluster. Is there a sizing guide? What specifics should we scale?
> CPU Disks memory?
>
Hello all,
Is there any guide and/or recommendation on how to scale this:
filesets per gateway node? Is it necessary to separate the NSD server and
gateway roles? Are dedicated gateway nodes licensed as clients?
Matt
per gateway). AFM gateway nodes are
> licensed as server nodes.
>
>
> ~Venkat (vpuvv...@in.ibm.com)
>
>
>
> From: Matt Weil
> To: gpfsug-discuss@spectrumscale.org
> Date: 03/23/2020 11:39 PM
> Subject: [EXTERNAL] [gpf
scale.v5r04.doc/bl1ins_gatewaynodefailureafm.htm
>
> ~Venkat (vpuvv...@in.ibm.com)
>
>
>
> From: Matt Weil
> To: gpfsug-discuss@spectrumscale.org
> Date: 03/25/2020 10:34 PM
> Subject: [EXTERNAL] Re: [gpfsug-discuss] AFM gateway node scaling
> Sent by: