[gpfsug-discuss] S3 API and POSIX rights

2021-01-06 Thread Lukas Hejtmanek
Hello, we are playing a bit with Spectrum Scale OBJ storage. We were able to get unified access for NFS and OBJ working, but only if we use Swift clients. If we use an S3 client for OBJ, all objects are owned by the swift user and large objects are multiparted, which is not suitable for unified access.
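A hedged, client-side sketch for the multipart part of the problem only (it does not touch ownership): raising the upload threshold keeps objects below that size as single PUTs. Endpoint and bucket names are hypothetical; the AWS CLI size-suffix config is standard, but whether the Swift S3 middleware honours large single PUTs is an assumption.

    # keep objects below 4 GB as a single PUT instead of a multipart upload
    # (plain S3 caps a single PUT at 5 GB)
    aws configure set default.s3.multipart_threshold 4GB
    # hypothetical CES endpoint and bucket; credentials via the usual AWS CLI config
    aws --endpoint-url http://ces-node:8080 s3 cp image.dat s3://backup/image.dat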

[gpfsug-discuss] put_cred bug

2020-09-30 Thread Lukas Hejtmanek
Hello, is this bug already resolved? https://access.redhat.com/solutions/3132971 I think I'm seeing it even with the latest GPFS 5.0.5.2 [1204205.886192] Kernel panic - not syncing: CRED: put_cred_rcu() sees 8821c16cdad0 with usage -530190256 maybe also related: [ 1384.404355] GPFS

Re: [gpfsug-discuss] Kernel crashes with Spectrum Scale and RHEL 7.7 3.10.0-1062.18.1.el7 kernel

2020-04-15 Thread Lukas Hejtmanek
@us.ibm.com >GPFS Development and Security >IBM Systems >IBM Building 008 >2455 South Rd, Poughkeepsie, NY 12601 >(845) 433-9314 T/L 293-9314 >  >  >   > > - Original message - > From: Lukas Hejtmanek > Sen

Re: [gpfsug-discuss] Kernel crashes with Spectrum Scale and RHEL 7.7 3.10.0-1062.18.1.el7 kernel

2020-04-15 Thread Lukas Hejtmanek
And are you sure it is present only in the -1062.18.1.el7 kernel? I think it is present in all -1062.* kernels. On Wed, Apr 15, 2020 at 04:25:41PM +, Felipe Knop wrote: >Laurence, >  >The problem affects all the Scale releases / PTFs. >  >  Felipe >  > >

Re: [gpfsug-discuss] Kernel crashes with Spectrum Scale and RHEL 7.7 3.10.0-1062.18.1.el7 kernel

2020-04-15 Thread Lukas Hejtmanek
Hello, I noticed this bug; it took about 10 minutes to crash. However, I'm seeing a similar NULL pointer dereference even with older kernels. The dereference does not always happen in GPFS code, sometimes outside in NFS or elsewhere, but it looks familiar. I have many crash dumps of this.

[gpfsug-discuss] Asymetric SAN with GPFS

2019-07-29 Thread Lukas Hejtmanek
Hello, are there any settings for GPFS 5.x that could mitigate the slowdown of an asymmetric SAN? Asymmetric SAN means that not every LUN has the same speed, or not every disk array has the same number of LUNs. It seems that overall speed is degraded to the slowest LUN. Is there any
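One hedged approach, not taken from the thread: keep the slower LUNs out of the pool that striping should stay fast on, so they cannot drag down every write. A sketch with hypothetical device, NSD and pool names:

    # nsd.stanza -- the slow array goes into its own storage pool
    %nsd: nsd=fast01 device=/dev/mapper/fast01 servers=nsd1,nsd2 usage=dataAndMetadata failureGroup=1 pool=system
    %nsd: nsd=slow01 device=/dev/mapper/slow01 servers=nsd1,nsd2 usage=dataOnly failureGroup=2 pool=slow

    # place new files on the fast pool by default
    cat > policy.txt <<'EOF'
    RULE 'default' SET POOL 'system'
    EOF
    mmchpolicy fs1 policy.txt

Data can later be staged to the slow pool with mmapplypolicy migration rules; whether that fits the workload is another question.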

Re: [gpfsug-discuss] gpfs and device number

2019-05-08 Thread Lukas Hejtmanek
rael Tel: +972 3 9188625 > Mobile: +972 52 2554625 > > > > > From: Lukas Hejtmanek > To: gpfsug-discuss@spectrumscale.org > Date: 26/04/2019 15:37 > Subject:[gpfsug-discuss] gpfs and device number > Sent by:gpfsug-discuss-boun...@s

[gpfsug-discuss] gpfs and device number

2019-04-26 Thread Lukas Hejtmanek
Hello, I noticed that from time to time, the device id of a GPFS volume is not the same across the whole GPFS cluster. [root@kat1 ~]# stat /gpfs/vol1/ File: ‘/gpfs/vol1/’ Size: 262144 Blocks: 512 IO Block: 262144 directory Device: 28h/40d Inode: 3 [root@kat2 ~]# stat /gpfs/vol1/
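If the varying device number matters because of kernel NFS exports (an assumption; the thread does not say why it matters), pinning an explicit fsid sidesteps the dependency on the device number. A minimal sketch:

    # the device id may differ per node and per mount
    stat -c 'dev=%D' /gpfs/vol1
    # pin the export handle so it does not depend on the device number (kernel NFS only)
    echo '/gpfs/vol1 *(rw,no_root_squash,fsid=101)' >> /etc/exports
    exportfs -ra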

Re: [gpfsug-discuss] Multihomed nodes and failover networks

2018-10-26 Thread Lukas Hejtmanek
On Fri, Oct 26, 2018 at 04:52:43PM +0200, Lukas Hejtmanek wrote: > On Fri, Oct 26, 2018 at 02:48:48PM +, Simon Thompson wrote: > > If IB is enabled and is setup with verbs, then this is the preferred > > network. GPFS will always fail-back to Ethernet afterwards, however what y

Re: [gpfsug-discuss] Multihomed nodes and failover networks

2018-10-26 Thread Lukas Hejtmanek
On Fri, Oct 26, 2018 at 03:52:43PM +0100, Jonathan Buzzard wrote: > On 26/10/2018 15:48, Simon Thompson wrote: > > If IB is enabled and is setup with verbs, then this is the preferred > > network. GPFS will always fail-back to Ethernet afterwards, however > > what you can't do is have multiple

Re: [gpfsug-discuss] Multihomed nodes and failover networks

2018-10-26 Thread Lukas Hejtmanek
On Fri, Oct 26, 2018 at 02:48:48PM +, Simon Thompson wrote: > If IB is enabled and is setup with verbs, then this is the preferred > network. GPFS will always fail-back to Ethernet afterwards, however what you > can't do is have multiple "subnets" defined and have GPFS fail between > different

[gpfsug-discuss] Multihomed nodes and failover networks

2018-10-26 Thread Lukas Hejtmanek
Hello, does anyone know whether there is a chance to use e.g. 10G Ethernet together with an InfiniBand network for multihoming of GPFS nodes? I mean to set up two different types of networks to mitigate network failures. I read that you can have several networks configured in GPFS but it does not
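For reference, a hedged sketch of the two settings usually involved; the HCA/port and subnet values are hypothetical, and, as the replies note, GPFS will not fail over between defined subnets:

    # IB data path via RDMA (hypothetical HCA/port); daemon traffic still needs a working IP path
    mmchconfig verbsRdma=enable
    mmchconfig verbsPorts="mlx4_0/1"
    # additional (e.g. 10G) IP network, preferred where reachable
    mmchconfig subnets="10.10.0.0"
    # most of these take effect only after the daemon is restarted on the affected nodes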

[gpfsug-discuss] mmfsd and oom settings

2018-09-17 Thread Lukas Hejtmanek
Hello, mmfsd accidentally got killed by the OOM killer because of the pagepool size, which is normally OK, but there was a memory leak in an smbd process, so the system ran out of memory (64 GB pagepool but also two 32 GB smbd processes). Shouldn't the GPFS startup script set oom_score_adj to some proper value so that
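A hedged sketch of doing it by hand until the startup script does; a value of -1000 effectively exempts the process, and the gpfs.service unit name is an assumption for systemd hosts:

    # one-off, for the running daemon
    echo -1000 > /proc/$(pidof mmfsd)/oom_score_adj

    # or persistently via a systemd drop-in
    mkdir -p /etc/systemd/system/gpfs.service.d
    printf '[Service]\nOOMScoreAdjust=-1000\n' > /etc/systemd/system/gpfs.service.d/oom.conf
    systemctl daemon-reload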

Re: [gpfsug-discuss] GPFS 4.2.3-9 and RHEL 7.5 (Wilson, Neil)

2018-07-27 Thread Lukas Hejtmanek
From: "Tomer Perry" > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] GPFS 4.2.3-9 and RHEL 7.5 > Message-ID: > > > > > Content-Type: text/plain; charset="iso-8859-1" > > Please open a service ticket > > > Rega

Re: [gpfsug-discuss] GPFS 4.2.3-9 and RHEL 7.5

2018-06-13 Thread Lukas Hejtmanek
On Wed, Jun 13, 2018 at 12:48:14PM +0300, Tomer Perry wrote: > knfs is supported - with or without the cNFS feature ( cNFS will add HA > to NFS on top of GPFS - > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.1/com.ibm.spectrum.scale.v5r01.doc/bl1adv_cnfs.htm > > ). > > knfs and

[gpfsug-discuss] GPFS 4.2.3-9 and RHEL 7.5

2018-06-13 Thread Lukas Hejtmanek
Hello, did anyone encounter an error with RHEL 7.5 kernel 3.10.0-862.3.2.el7.x86_64 and the latest GPFS 4.2.3-9 with kernel NFS? I'm getting random errors: Unknown error 521. It means EBADHANDLE. Not sure whether it is due to the kernel or GPFS. -- Lukáš Hejtmánek Linux Administrator only

Re: [gpfsug-discuss] Preferred NSD

2018-03-14 Thread Lukas Hejtmanek
at a time, > it's not very useful to make two additional copies of that data on other > nodes, and it will only slow you down. > > Regards, > Alex > > On Tue, Mar 13, 2018 at 7:16 AM, Lukas Hejtmanek <xhejt...@ics.muni.cz> > wrote: > > > On Tue, Mar 13, 20

Re: [gpfsug-discuss] Preferred NSD

2018-03-13 Thread Lukas Hejtmanek
ault tolerance *and* you'll get more efficient usage of your SSDs. > > I'm sure there are other ways to skin this cat too. > > -Aaron > > > > On March 12, 2018 at 10:59:35 EDT, Lukas Hejtmanek > <xhejt...@ics.muni.cz<mailto:xhejt...@ics.muni.cz>> wrote: >

Re: [gpfsug-discuss] Preferred NSD

2018-03-12 Thread Lukas Hejtmanek
On Mon, Mar 12, 2018 at 11:18:40AM -0400, valdis.kletni...@vt.edu wrote: > On Mon, 12 Mar 2018 15:51:05 +0100, Lukas Hejtmanek said: > > I don't think like 5 or more data/metadata replicas are practical here. On > > the > > other hand, multiple node failures is something real

[gpfsug-discuss] Preferred NSD

2018-03-12 Thread Lukas Hejtmanek
Hello, I'm thinking about the following setup: ~60 nodes, each with two enterprise NVMe SSDs, FDR IB interconnected. I would like to set up a shared scratch area using GPFS and those NVMe SSDs, each SSD as one NSD. I don't think 5 or more data/metadata replicas are practical here. On the
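For context, the usual shape of such a setup is one NSD per SSD, a failure group per node so replicas land on different nodes, and 2-way replication. A hedged sketch with hypothetical names, not a recommendation from the thread:

    # nsd.stanza -- one stanza per NVMe; failureGroup = node number
    %nsd: nsd=node01_nvme0 device=/dev/nvme0n1 servers=node01 usage=dataAndMetadata failureGroup=1 pool=system
    %nsd: nsd=node02_nvme0 device=/dev/nvme0n1 servers=node02 usage=dataAndMetadata failureGroup=2 pool=system

    mmcrnsd -F nsd.stanza
    # 2 copies of data and metadata by default, maxima of 3 in case that is raised later
    mmcrfs scratch -F nsd.stanza -m 2 -M 3 -r 2 -R 3 -A yes -T /gpfs/scratch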

Re: [gpfsug-discuss] el7.4 compatibility

2017-09-28 Thread Lukas Hejtmanek
You need GPFS version 4.2.3.4 and it will work. On Thu, Sep 28, 2017 at 02:18:52PM +, Jeffrey R. Lang wrote: > I just tried to build the GPFS GPL module against the latest version of RHEL > 7.4 kernel and the build fails. The link below shows that it should work. > > cc kdump.o kdump-kern.o
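For reference, after updating to 4.2.3.4 the portability layer has to be rebuilt against the running kernel; a minimal sketch:

    # rebuild and install the GPL module for the currently running kernel
    /usr/lpp/mmfs/bin/mmbuildgpl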

[gpfsug-discuss] Overwritting migrated files

2017-09-07 Thread Lukas Hejtmanek
Hello, we have files of about 100 GB each. Many of these files are migrated to tapes (GPFS+TSM; tape storage is an external pool, and dsmmigrate/dsmrecall are in place). These files are images from the Bacula backup system. When Bacula wants to reuse some of the images, it needs to truncate the file to
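One hedged workaround is to recall a volume explicitly before Bacula reuses it, so the truncate does not hit a migrated stub; the path below is hypothetical:

    # bring a migrated Bacula volume back to disk before it is truncated/overwritten
    dsmrecall /gpfs/vol1/bacula/volume-0001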

Re: [gpfsug-discuss] default inode size

2017-03-15 Thread Lukas Hejtmanek
On Wed, Mar 15, 2017 at 01:12:44PM +, GORECKI, DIETER wrote: > One other thing to consider is the storage of data inside the inode itself > for very small files. GPFS has the ability to use the remaining [kilo]bytes > of the inode to store the data of the file whenever the file is small
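For reference, the inode size is fixed at file system creation time; a hedged sketch of creating with 4K inodes and checking the result, with hypothetical device and stanza names:

    # 4 KiB inodes leave room for extended attributes, encryption keys and in-inode file data
    mmcrfs fs1 -F nsd.stanza -i 4096 -T /gpfs/fs1
    # verify the inode size
    mmlsfs fs1 -i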

Re: [gpfsug-discuss] default inode size

2017-03-15 Thread Lukas Hejtmanek
Hello, On Wed, Mar 15, 2017 at 01:09:07PM +, Luis Bolinches wrote: >My 2 cents. Before even thinking too much on that path I would check the >following >  >- What is the physical size on those SSDs; if they are already 4K you won't >"save" anything size of what? >- Do

Re: [gpfsug-discuss] default inode size

2017-03-15 Thread Lukas Hejtmanek
On Wed, Mar 15, 2017 at 08:22:22AM -0400, Stephen Ulmer wrote: > You need 4K inodes to store encryption keys. You can also put other useful > things in there, like extended attributes and (possibly) the entire file. > > Are you worried about wasting space? well, I have 1PB of capacity for data

[gpfsug-discuss] snapshots

2017-01-25 Thread Lukas Hejtmanek
Hello, is there a way to get the number of inodes consumed by a particular snapshot? I have a fileset with a separate inode space: Filesets in file system 'vol1': Name Status Path InodeSpace MaxInodes AllocInodes UsedInodes export Linked
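For what it's worth, a hedged sketch of the per-snapshot accounting I know of, which reports storage rather than an inode count:

    # -d adds the amount of storage used by each snapshot of the file system
    mmlssnapshot vol1 -d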

Re: [gpfsug-discuss] mmrepquota and group names in GPFS 4.2.2.x

2017-01-19 Thread Lukas Hejtmanek
Just letting you know, I see the same problem with version 4.2.2.1. mmrepquota resolves only some of the group names. On Thu, Jan 19, 2017 at 04:25:20PM +, Buterbaugh, Kevin L wrote: > Hi Olaf, > > We will continue upgrading clients in a rolling fashion, but with ~700 of > them, it’ll be a few
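A hedged interim workaround while names fail to resolve: report numeric IDs and map them separately.

    # -n skips name resolution and prints numeric IDs
    mmrepquota -g -n vol1
    # resolve a specific gid by hand
    getent group 1234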

Re: [gpfsug-discuss] SS 4.2.1 + CES NFS / SMB

2016-11-14 Thread Lukas Hejtmanek
On Fri, Nov 11, 2016 at 04:20:52PM +, Andy Parker1 wrote: > When I mount the AIX client as NFS4 I do not see the user/group names. I > know NFS4 passes names and not UID/GID numbers so I > guess this is linked. > Can anybody explain why I do not see userID / Group names when viewing >
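NFSv4 maps names only when the idmap domain matches on both ends; a hedged sketch of where to look (the domain value is hypothetical, and whether Ganesha on the CES side picks up /etc/idmapd.conf is an assumption):

    # AIX client: show, then set, the NFSv4 domain
    chnfsdom
    chnfsdom example.com

    # Linux server side: set Domain = example.com in /etc/idmapd.conf, then clear the idmap cache
    nfsidmap -c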

Re: [gpfsug-discuss] Samba via CES

2016-09-28 Thread Lukas Hejtmanek
On Wed, Sep 28, 2016 at 01:33:45PM -0700, Christof Schmitt wrote: > The client has to reconnect, open the file again and reissue requests that > have not been completed. Without persistent handles, the main risk is that > another client can step in and access the same file in the meantime. With

[gpfsug-discuss] Samba via CES

2016-09-27 Thread Lukas Hejtmanek
Hello, does CES offer high availability of SMB? I.e., does the Samba server used provide cluster-wide persistent handles? Or is failover from node to node currently not supported without client disruption? -- Lukáš Hejtmánek ___ gpfsug-discuss mailing

[gpfsug-discuss] CES NFS with Kerberos

2016-09-21 Thread Lukas Hejtmanek
Hello, does the NFS server (Ganesha) work for anyone with Kerberos authentication? I get random permission denied errors: :/mnt/nfs-test/tmp# for i in `seq 1 20`; do rm testf; dd if=/dev/zero of=testf bs=1M count=10; done 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied,

[gpfsug-discuss] CES and nfs pseudo root

2016-09-20 Thread Lukas Hejtmanek
Hello, Ganesha allows specifying a pseudo root for each export using Pseudo="path". mmnfs export sets the pseudo path the same as the export dir; e.g., if I want to export /mnt/nfs, Pseudo is set to '/mnt/nfs' as well. Can I somehow set Pseudo to '/'? -- Lukáš Hejtmánek

[gpfsug-discuss] gpfs snapshots

2016-09-12 Thread Lukas Hejtmanek
Hello, using GPFS 4.2.1, I take about 60 snapshots per day (one snapshot per 15 minutes during working hours). It seems that snapid is an ever-increasing number. Should I be fine with such a number of snapshots per day? I guess we could reach snapid 100,000. I remove all these snapshots during the night
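For reference, a minimal sketch of the create/remove cycle being described, with a hypothetical naming scheme (it does not answer the snapid-limit question):

    # create a timestamped snapshot, e.g. every 15 minutes from cron
    mmcrsnapshot vol1 snap_$(date +%Y%m%d_%H%M)

    # nightly cleanup: drop all snapshots matching the scheme
    for s in $(mmlssnapshot vol1 | awk '$1 ~ /^snap_/ {print $1}'); do
        mmdelsnapshot vol1 "$s"
    done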

[gpfsug-discuss] gpfs 4.2.1 and samba export

2016-09-12 Thread Lukas Hejtmanek
Hello, I have GPFS version 4.2.1 on CentOS 7.2 (kernel 3.10.0-327.22.2.el7.x86_64) and I have got some weird behavior of Samba. Windows clients get stuck for almost 1 minute when copying files. I traced down the problematic syscall: 27887 16:39:28.000401 utimensat(AT_FDCWD,

Re: [gpfsug-discuss] GPFS 3.5.0 on RHEL 6.8

2016-08-31 Thread Lukas Hejtmanek
Hello, thank you for the explanation. I confirm that things are working with the -573 kernel. On Tue, Aug 30, 2016 at 05:07:21PM -0400, mark.berg...@uphs.upenn.edu wrote: > In the message dated: Tue, 30 Aug 2016 22:39:18 +0200, > The pithy ruminations from Lukas Hejtmanek on > <[gpfsug-d

Re: [gpfsug-discuss] *New* IBM Spectrum Protect Whitepaper "Petascale Data Protection" (Dominic Mueller-Wicke)

2016-08-31 Thread Lukas Hejtmanek
On Wed, Aug 31, 2016 at 07:52:38AM +0200, Dominic Mueller-Wicke01 wrote: > Thanks for reading the paper. I agree that the restore of a large number of > files is a challenge today. The restore is the focus area for future > enhancements for the integration between IBM Spectrum Scale and IBM >

Re: [gpfsug-discuss] *New* IBM Spectrum Protect Whitepaper "Petascale Data Protection"

2016-08-30 Thread Lukas Hejtmanek
Hello, On Mon, Aug 29, 2016 at 09:20:46AM +0200, Frank Kraemer wrote: > Find the paper here: > > https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli%20Storage%20Manager/page/Petascale%20Data%20Protection thank you for the paper, I appreciate it. However, I wonder