Hi All,
We're having fun (ok not fun ...) with AFM.
We have a fileset where the queue length isn't shortening. Watching it
over 5-second periods, the queue length increases by ~600-1000 items, and
numExec goes up by about 15k.
The queues are steadily rising and we've seen them over 100
Dear all,
How does GPFS work when a disk fails?
Here is an example scenario for a disk failure:
1 server
2 disks of 100 GB each, directly attached to the local node
mmlscluster
GPFS cluster information
GPFS cluster name: test.gpfs
GPFS cluster id: 17439727301824
You don't have room to write 180 GB of file data, only ~100 GB. When you
write e.g. 90 GB of file data, each filesystem block gets one copy
written to each of your disks, occupying 180 GB of total disk space. So
you can always read it from the other disk if one should fail.
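The replication arithmetic above can be sketched as a quick calculation (a hypothetical illustration using the 100 GB disks from the scenario and an assumed data replication factor of 2):

```shell
# Two 100 GB disks with data replication factor 2 (one copy per disk).
disk_size_gb=100
num_disks=2
replicas=2

raw_capacity_gb=$((disk_size_gb * num_disks))   # 200 GB of raw space
usable_gb=$((raw_capacity_gb / replicas))       # ~100 GB of file data

# Writing 90 GB of file data then consumes 180 GB of raw disk space:
written_gb=90
consumed_gb=$((written_gb * replicas))

echo "usable=${usable_gb}GB consumed=${consumed_gb}GB"
```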
This is
Can anyone from the Scale team comment?
Anytime I see “may result in file system corruption or undetected file data
corruption” it gets my attention.
Bob Oesterlin
Sr Principal Storage Engineer, Nuance
Thanks Carl. Unfortunately I won't be at SC17 this year but thankfully a
number of my colleagues will be so I'll send them with a list of
questions on my behalf :)
On 10/6/17 4:39 PM, Carl Zetie wrote:
Hi Aaron,
I appreciate your care with this. The user group are the first users to be
Thanks John!
Funnily enough playing with node classes is what sent me down this path.
I had a bunch of nodes defined (just over 1000) with a lower pagepool
than the default. I then started using nodeclasses to clean up the
config and I noticed that if you define a parameter with a nodeclass
Thanks! Good to know.
On 10/6/17 11:06 PM, IBM Spectrum Scale wrote:
Hi Aaron,
The default value applies to all nodes in the cluster, so changing it
affects every node. You need to run mmchconfig again to re-apply the
per-node override.
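As a rough sketch of the workflow being described (the node class name smallMemNodes and the pagepool values are made up for illustration; these commands need a live Spectrum Scale cluster to run):

```shell
# Change the cluster-wide default; this also affects nodes that
# previously carried a per-nodeclass override.
mmchconfig pagepool=8G

# Re-apply the override for the node class afterwards.
mmchconfig pagepool=2G -N smallMemNodes

# Verify which values apply where.
mmlsconfig pagepool
```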
Regards, The Spectrum Scale (GPFS)
Hi Sven,
Just wondering if you've had any additional thoughts/conversations about
this.
-Aaron
On 9/8/17 5:21 PM, Sven Oehme wrote:
Hi,
the code assumption is that the underlying device has no volatile write
cache. I was absolutely sure we had that somewhere in the FAQ, but I
couldn't
Hi,
yeah, sorry, I intended to reply before my vacation and forgot about it;
the vacation flushed it all away :-D
So right now the assumption in Scale/GPFS is that the underlying storage
doesn't have any form of volatile write cache enabled. The problem seems
to be that even if we set
Thanks, Sven.
I think my goal was for the REQ_FUA flag to be used in alignment with
the consistency expectations of the filesystem. Meaning, if I was writing
to a file on a filesystem (e.g. dd if=/dev/zero of=/gpfs/fs0/file1), then
the write requests to the disk addresses containing data on the
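A user-space analogue of those stable-storage expectations is to force each write to disk before it completes; this is a generic Linux illustration with GNU dd (not GPFS-specific, and the temp-file path is arbitrary):

```shell
# Write 16 KiB in 4 KiB chunks with O_DSYNC, so each write must reach
# stable storage before dd issues the next one (loosely analogous to
# FUA-tagged writes at the block layer).
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=4096 count=4 oflag=dsync status=none
size=$(wc -c < "$tmpfile")
echo "wrote ${size} bytes"
rm -f "$tmpfile"
```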
Simon,
>Question 1.
>Can we force the gateway node for the other file-sets to our "02" node.
>I.e. So that we can get the queue services for the other filesets.
AFM automatically maps a fileset to a gateway node, and today there is no
option available for users to assign a fileset to a particular
On the related area of malware detection / audit logging, the suggested solution is DatAdvantage from Varonis.
This has been discussed on this list back in December 2016.
The white paper (Jan 2017) is here:
We have a GPFS setup which is completely Infiniband connected. Version 4.2.3.4
I see that verbsRdmaCm is set to Disabled. Reading up about this, I am
inclined to leave this disabled.
Can anyone comment on the likely effects of changing it, and if there are any
real benefits in performance?
Hello,
Currently we are urgently looking for an on-access antivirus solution for our
IBM Spectrum Scale SMB CES Cluster.
Unfortunately IBM has no such solution. Does anyone have a good supported
solution?
Kind regards,
Jaap Jan Ouwehand
ICT Specialist (Storage & Linux)
VUmc - ICT
Aaron,
The reply you just got here is absolutely the correct one.
However, it's worth contributing something here. I have recently been dealing
with the parameter verbsPorts, which is the list of interfaces that verbs
should use. I found on our cluster it was set to use dual ports for all
According to one of the presentations posted on this list a few days ago, there
is "bulk antivirus scanning with Symantec AV" "coming soon".
From: gpfsug-discuss-boun...@spectrumscale.org
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Ouwehand, JJ
Sent: 09 October 2017 10:13
To: