Hello,
we are playing a bit with Spectrum Scale OBJ storage. We were able to get
unified access working for NFS and OBJ, but only if we use Swift clients. If we
use an S3 client for OBJ, all objects are owned by the swift user and large
objects are multiparted, which is not suitable for unified access.
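If the multipart behaviour is the main blocker, most S3 clients let you raise the threshold at which they switch to multipart uploads. A minimal sketch with the AWS CLI (assuming that is the client in use; the bucket and file names are made up, and S3 allows single-part objects up to 5 GB):

```shell
# Raise the multipart threshold so objects below 5 GB are uploaded
# as a single part (5 GB is the S3 single-PUT limit).
aws configure set default.s3.multipart_threshold 5GB

# Hypothetical upload; image.img below 5 GB now arrives as one object.
aws s3 cp ./image.img s3://bucket/image.img
```

This does not address the swift-user ownership issue, only the multiparting.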
Hello,
is this bug already resolved?
https://access.redhat.com/solutions/3132971
I think I'm seeing it even with the latest GPFS 5.0.5.2:
[1204205.886192] Kernel panic - not syncing: CRED: put_cred_rcu() sees
8821c16cdad0 with usage -530190256
maybe also related:
[ 1384.404355] GPFS
And are you sure it is present only in the -1062.18.1.el7 kernel? I think it is
present in all -1062.* kernels.
On Wed, Apr 15, 2020 at 04:25:41PM +, Felipe Knop wrote:
>Laurence,
>
>The problem affects all the Scale releases / PTFs.
>
> Felipe
>
>
>
Hello,
I noticed this bug; it took about 10 minutes to crash.
However, I'm seeing a similar NULL pointer dereference even with older kernels.
That dereference does not always happen in GPFS code, sometimes outside it, in NFS
or elsewhere, but it looks familiar. I have many crash dumps of this.
Hello,
are there any settings for GPFS 5.x that could mitigate the slowdown caused by an
asymmetric SAN? By asymmetric SAN I mean that not every LUN has the same
speed, or not every disk array has the same number of LUNs. It seems that
overall speed is degraded to that of the slowest LUN. Is there any
> From: Lukas Hejtmanek
> To: gpfsug-discuss@spectrumscale.org
> Date: 26/04/2019 15:37
> Subject: [gpfsug-discuss] gpfs and device number
Hello,
I noticed that, from time to time, the device id of a GPFS volume is not the same
across the whole GPFS cluster:
[root@kat1 ~]# stat /gpfs/vol1/
  File: ‘/gpfs/vol1/’
  Size: 262144     Blocks: 512     IO Block: 262144   directory
Device: 28h/40d    Inode: 3
[root@kat2 ~]# stat /gpfs/vol1/
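A quick way to compare is to collect just the device number on every node and diff the results; the kat1/kat2 hostnames are from the mail above, and the ssh fan-out is only a sketch:

```shell
# Print the device number stat reports for a path; run it on each node
# (or fan out with ssh) and compare -- the value should match
# cluster-wide for the same GPFS file system.
dev_of() { stat -c '%D' "$1"; }

dev_of /     # local demonstration; use /gpfs/vol1 on the cluster
# for h in kat1 kat2; do printf '%s: ' "$h"; ssh "$h" stat -c '%D' /gpfs/vol1; done
```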
On Fri, Oct 26, 2018 at 02:48:48PM +, Simon Thompson wrote:
> If IB is enabled and is setup with verbs, then this is the preferred
> network. GPFS will always fail-back to Ethernet afterwards, however what you
> can't do is have multiple "subnets" defined and have GPFS fail between
> different
Hello,
does anyone know whether there is a chance to use, e.g., 10G Ethernet together
with an InfiniBand network for multihoming of GPFS nodes?
I mean to set up two different types of networks to mitigate network failures.
I read that you can have several networks configured in GPFS, but it does not
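For reference, the mechanism GPFS documents for additional IP networks is the `subnets` configuration option; a minimal sketch (the subnet and cluster name below are placeholders):

```shell
# Sketch only: advertise an additional daemon network to the cluster.
# 192.168.10.0 and cluster1.example are made-up values.
mmchconfig subnets="192.168.10.0/cluster1.example"
mmlsconfig subnets    # verify what is configured
```

Note that `subnets` selects preferred networks; as discussed in the replies above, it is not an automatic failover mechanism between networks.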
Hello,
I accidentally got mmfsd killed by the OOM killer because of the pagepool size,
which is normally OK, but there was a memory leak in an smbd process so the system
ran out of memory (64 GB pagepool, but also two 32 GB smbd processes).
Shouldn't the GPFS startup script set oom_score_adj to some proper value so that
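As a workaround until the startup script does it, the score can be set by hand or from a unit file; a minimal sketch (the -1000 value and the `mmfsd` process name are assumptions, and lowering a score requires root):

```shell
# Make a process less attractive to the OOM killer by writing its
# oom_score_adj (-1000 exempts it entirely; lowering needs root).
# Intended usage sketch: set_oom_score_adj "$(pidof mmfsd)" -1000
set_oom_score_adj() {
  pid=$1 score=$2     # score must be in -1000..1000
  echo "$score" > "/proc/$pid/oom_score_adj"
}

# Unprivileged demo on the current shell (raising a score is always allowed):
set_oom_score_adj $$ 500
cat "/proc/$$/oom_score_adj"
```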
From: "Tomer Perry"
> To: gpfsug main discussion list
> Subject: Re: [gpfsug-discuss] GPFS 4.2.3-9 and RHEL 7.5
> Message-ID:
>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Please open a service ticket
>
> Rega
On Wed, Jun 13, 2018 at 12:48:14PM +0300, Tomer Perry wrote:
> knfs is supported - with or without the cNFS feature ( cNFS will add HA
> to NFS on top of GPFS -
> https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.1/com.ibm.spectrum.scale.v5r01.doc/bl1adv_cnfs.htm
>
> ).
>
> knfs and
Hello,
did anyone encounter an error with RHEL 7.5 kernel 3.10.0-862.3.2.el7.x86_64
and the latest GPFS 4.2.3-9 with kernel NFS?
I'm getting random errors: Unknown error 521, i.e., EBADHANDLE. Not sure
whether it is due to the kernel or GPFS.
--
Lukáš Hejtmánek
Linux Administrator only
at a time,
> it's not very useful to make two additional copies of that data on other
> nodes, and it will only slow you down.
>
> Regards,
> Alex
>
> On Tue, Mar 13, 2018 at 7:16 AM, Lukas Hejtmanek <xhejt...@ics.muni.cz>
> wrote:
>
> > On Tue, Mar 13, 20
fault tolerance *and* you'll get more efficient usage of your SSDs.
>
> I'm sure there are other ways to skin this cat too.
>
> -Aaron
>
>
>
> On March 12, 2018 at 10:59:35 EDT, Lukas Hejtmanek
> <xhejt...@ics.muni.cz<mailto:xhejt...@ics.muni.cz>> wrote:
>
On Mon, Mar 12, 2018 at 11:18:40AM -0400, valdis.kletni...@vt.edu wrote:
> On Mon, 12 Mar 2018 15:51:05 +0100, Lukas Hejtmanek said:
> > I don't think like 5 or more data/metadata replicas are practical here. On
> > the
> > other hand, multiple node failures is something real
Hello,
I'm thinking about the following setup:
~60 nodes, each with two enterprise NVMe SSDs, FDR IB interconnected.
I would like to set up a shared scratch area using GPFS and those NVMe SSDs, each
SSD as one NSD.
I don't think 5 or more data/metadata replicas are practical here. On the
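For reference, each local NVMe device would be described in an NSD stanza file for mmcrnsd; a minimal sketch with made-up node and device names:

```
# nsd-stanzas.txt (hypothetical names throughout)
%nsd: device=/dev/nvme0n1 nsd=node01_nvme0 servers=node01 usage=dataAndMetadata failureGroup=1
%nsd: device=/dev/nvme1n1 nsd=node01_nvme1 servers=node01 usage=dataAndMetadata failureGroup=1
```

With one failure group per node, `mmcrnsd -F nsd-stanzas.txt` followed by two-way replication at file-system creation time (`mmcrfs ... -m 2 -r 2`) keeps data available through a single-node failure without going anywhere near 5 replicas.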
You need GPFS version 4.2.3.4 and it will work.
On Thu, Sep 28, 2017 at 02:18:52PM +, Jeffrey R. Lang wrote:
> I just tried to build the GPFS GPL module against the latest version of the RHEL
> 7.4 kernel and the build fails. The link below shows that it should work.
>
> cc kdump.o kdump-kern.o
Hello,
we have files of about 100 GB per file. Many of these files are migrated to tape
(GPFS+TSM; tape storage is an external pool, and dsmmigrate and dsmrecall are in
place).
These files are images from the Bacula backup system. When Bacula wants to reuse
some of the images, it needs to truncate the file to
On Wed, Mar 15, 2017 at 01:12:44PM +, GORECKI, DIETER wrote:
> One other thing to consider is the storage of data inside the inode itself
> for very small files. GPFS has the ability to use the remaining [kilo]bytes
> of the inode to store the data of the file whenever the file is small
Hello,
On Wed, Mar 15, 2017 at 01:09:07PM +, Luis Bolinches wrote:
>My 2 cents. Before even thinking too much on that path I would check the
>following
>
>- What is the physical size of those SSDs? If they are already 4K you won't
>"save" anything
size of what?
>- Do
On Wed, Mar 15, 2017 at 08:22:22AM -0400, Stephen Ulmer wrote:
> You need 4K inodes to store encryption keys. You can also put other useful
> things in there, like extended attributes and (possibly) the entire file.
>
> Are you worried about wasting space?
well, I have 1PB of capacity for data
Hello,
is there a way to get the number of inodes consumed by a particular snapshot?
I have a fileset with a separate inode space:
Filesets in file system 'vol1':
Name       Status   Path   InodeSpace   MaxInodes   AllocInodes   UsedInodes
export     Linked
Just letting you know, I see the same problem with the 4.2.2.1 version. mmrepquota
resolves only some of the group names.
On Thu, Jan 19, 2017 at 04:25:20PM +, Buterbaugh, Kevin L wrote:
> Hi Olaf,
>
> We will continue upgrading clients in a rolling fashion, but with ~700 of
> them, it’ll be a few
On Fri, Nov 11, 2016 at 04:20:52PM +, Andy Parker1 wrote:
> When I mount the AIX client as NFS4 I do not see the user/group names. I
> know NFS4 passes names and not UID/GID numbers so I
> guess this is linked.
> Can anybody explain why I do not see userID / Group names when viewing
>
On Wed, Sep 28, 2016 at 01:33:45PM -0700, Christof Schmitt wrote:
> The client has to reconnect, open the file again and reissue request that
> have not been completed. Without persistent handles, the main risk is that
> another client can step in and access the same file in the meantime. With
Hello,
does CES offer high availability of SMB? I.e., does the Samba server used provide
cluster-wide persistent handles? Or is failover from node to node currently
not supported without client disruption?
Hello,
does the NFS server (Ganesha) work for anyone with Kerberos authentication?
I'm getting random permission denied errors:
:/mnt/nfs-test/tmp# for i in `seq 1 20`; do rm testf; dd if=/dev/zero of=testf
bs=1M count=10; done
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied,
Hello,
Ganesha allows specifying a pseudo root for each export using Pseudo="path".
mmnfs export sets the pseudo path to the same value as the export dir; e.g., if I
export /mnt/nfs, Pseudo is set to '/mnt/nfs' as well.
Can I somehow set Pseudo to '/'?
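Outside the CES mmnfs wrapper, plain Ganesha takes the pseudo path per EXPORT block; whether mmnfs exposes an override is exactly the open question here, but the native config looks like this (export id and path are illustrative):

```
EXPORT {
    Export_Id = 1;
    Path = /mnt/nfs;    # what is actually exported
    Pseudo = /;         # where NFSv4 clients see it in the pseudo-fs
}
```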
Hello,
using GPFS 4.2.1, I take about 60 snapshots per day (one snapshot per 15 minutes
during working hours). It seems that snapid is an ever-increasing number. Should
I be fine with such a number of snapshots per day? I guess we could reach
snapid 100,000. I remove all these snapshots during the night
Hello,
I have GPFS version 4.2.1 on CentOS 7.2 (kernel 3.10.0-327.22.2.el7.x86_64)
and I'm seeing some weird behavior from Samba. Windows clients get stuck for
almost 1 minute when copying files. I traced down the problematic syscall:
27887 16:39:28.000401 utimensat(AT_FDCWD,
Hello,
thank you for the explanation. I confirm that things are working with the 573 kernel.
On Tue, Aug 30, 2016 at 05:07:21PM -0400, mark.berg...@uphs.upenn.edu wrote:
> In the message dated: Tue, 30 Aug 2016 22:39:18 +0200,
> The pithy ruminations from Lukas Hejtmanek on
> <[gpfsug-d
On Wed, Aug 31, 2016 at 07:52:38AM +0200, Dominic Mueller-Wicke01 wrote:
> Thanks for reading the paper. I agree that the restore of a large number of
> files is a challenge today. The restore is the focus area for future
> enhancements for the integration between IBM Spectrum Scale and IBM
>
Hello,
On Mon, Aug 29, 2016 at 09:20:46AM +0200, Frank Kraemer wrote:
> Find the paper here:
>
> https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli%20Storage%20Manager/page/Petascale%20Data%20Protection
thank you for the paper, I appreciate it.
However, I wonder