I am looking for anyone with experience of using Spectrum Scale with NVMe
devices.
I could use an offline brain dump...
The specific issue I have is with the NSD device discovery and the naming.
Before anyone replies, I am getting excellent support from IBM and have been
directed to the
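For anyone hitting the same thing, the usual hook is the /var/mmfs/etc/nsddevices
user exit. A minimal sketch, assuming Linux NVMe namespace names like nvme0n1 and
the "generic" device type; the shipped sample (/usr/lpp/mmfs/samples/nsddevices.sample)
documents the exact output format and what the return code means on your release,
so check it before relying on this:

#!/bin/ksh
# /var/mmfs/etc/nsddevices -- user exit consulted during NSD device discovery.
# Print one "deviceName deviceType" pair per line for each device GPFS
# should consider; here NVMe namespaces are listed as the "generic" type.
for dev in /dev/nvme*n1
do
    [[ -b $dev ]] && echo "${dev##/dev/} generic"
done
# The return code controls whether GPFS's built-in discovery also runs;
# mirror whatever nsddevices.sample shows for your release.
return 0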
Luke,
This issue has been fixed. As a workaround you could also try
resetting the same ACLs at home (instead of cache) or changing the directory
ctime at home, then verify that the ACLs are updated correctly on the fileset root.
You can contact customer support or open a PMR and request an efix.
~Venkat
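In case it helps anyone following the thread, a rough sketch of that workaround,
with invented home and cache paths (/gpfs/homefs/fset1, /gpfs/cachefs/fset1):
mmgetacl/mmputacl re-apply the existing ACL at home, and touching the directory
changes its ctime so the cache revalidates the fileset root on the next lookup.

# On the home cluster: re-apply the fileset root's current ACL
mmgetacl -o /tmp/fset1.acl /gpfs/homefs/fset1
mmputacl -i /tmp/fset1.acl /gpfs/homefs/fset1

# ...or simply bump the directory's ctime at home
touch /gpfs/homefs/fset1

# Then, on the cache cluster, look up the fileset root and check the ACL
ls -ld /gpfs/cachefs/fset1
mmgetacl /gpfs/cachefs/fset1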
Thanks to all for the information.
I'm happy to say that it is close to what I hoped would be the case.
Interesting to see the effect of the -n value. Reinforces the need to think
about it and not go with the defaults.
Thanks again,
Carl.
On 7 November 2017 at 03:18, Achim Rehor
Placement policy rules "SET POOL 'xyz'... " may only name GPFS data pools.
NOT "EXTERNAL POOLs" -- EXTERNAL POOL is a concept only supported by
MIGRATE rules.
However you may be interested in "mmcloudgateway" & co, which is all about
combining GPFS with Cloud storage.
AKA IBM Transparent Cloud Tiering.
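To make the distinction concrete, a hedged sketch with invented pool names and a
placeholder interface script (/var/mmfs/etc/cloudpool.sh): SET POOL may only
target an internal pool, while an external pool only ever appears as the target
of a MIGRATE rule driven by mmapplypolicy.

cat > /tmp/policy.rules <<'EOF'
/* Placement: may only name internal GPFS data pools */
RULE 'place-new' SET POOL 'data'

/* External pool, backed by a placeholder interface script */
RULE EXTERNAL POOL 'cloudpool' EXEC '/var/mmfs/etc/cloudpool.sh'

/* External pools are only valid as MIGRATE targets */
RULE 'push-cold' MIGRATE FROM POOL 'data' THRESHOLD(80,70) TO POOL 'cloudpool'
EOF

# Dry run against a (made-up) file system called gpfs0
mmapplypolicy gpfs0 -P /tmp/policy.rules -I test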
Right, Bryan. To expand on that a bit, I'll make two additional points.
(1) Only a node in the cluster that owns the file system can be appointed
a file system manager for the file system. Nodes that remote mount the
file system from other clusters cannot be appointed the file system
manager
Hi Simon,
It will only trigger the callback on the currently appointed File System
Manager, so you need to make sure your callback scripts are installed on all
nodes that can occupy this role.
HTH,
-Bryan
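A rough sketch of what that means operationally (the script path below is just
an example): check which node currently holds the role, see which nodes are
designated as managers, and make sure the script exists at the same path on
each of them.

# Which node is currently the file system manager for each file system?
mmlsmgr

# Which nodes carry the "manager" designation (and so could be appointed)?
mmlscluster

# The callback script then needs to be present on every one of those nodes,
# e.g. at /var/mmfs/etc/quota_alert.sh, pushed out by your usual config management.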
Thanks Eric,
One other question, when it says it must run on a manager node, I'm assuming
that means a manager node in a storage cluster (we multi-cluster client
clusters in).
Thanks
Simon
From: Eric Agar on behalf of
Simon,
Based on my reading of the code, when a softQuotaExceeded event callback
is invoked with %quotaType having the value "FILESET", the following
arguments correspond with each other for filesetLimitExceeded and
softQuotaExceeded:
- filesetLimitExceeded %inodeUsage and softQuotaExceeded
On Mon, 6 Nov 2017 09:20:11 +
"Chase, Peter" wrote:
> how can I automate sending files to a cloud object store as they arrive in
> GPFS and keep a copy of the file in GPFS?
Sounds like you already have an idea how to do this by using ILM policies.
Either quota
Hi Carl.
When we commissioned our system we ran an NFS stress tool, and filled the
system to the top.
No performance degradation was seen until it was 99.7% full.
I believe that after this point it takes longer to find free blocks to
write to.
YMMV.
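(For anyone wanting to watch this on their own system, mmdf reports free and
used blocks per pool; the file system and pool names below are made up.)

mmdf gpfs0           # free/used blocks per NSD and per storage pool
mmdf gpfs0 -P data   # restrict the report to one pool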
On 6 November 2017 at 03:35, Carl
We were looking at adding some callbacks to notify us when file-sets go
over their inode limit by implementing it as a soft inode quota.
In the docs:
https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.3/com.ibm.spectrum.scale.v4r23.doc/bl1adm_mmaddcallback.htm#mmaddcallback__Table1
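For reference, a minimal sketch of registering such a callback, with an invented
script path (/var/mmfs/etc/inode_alert.sh); the %-variables actually available
for each event are the ones in the Table 1 linked above, so treat the --parms
string as illustrative only.

mmaddcallback filesetInodeAlert \
    --command /var/mmfs/etc/inode_alert.sh \
    --event softQuotaExceeded \
    --parms "%eventName %fsName %filesetName %quotaType"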
Hi Venkat,
This is only for the fileset root. All other files and directories pull the
correct ACLs as expected when accessing the fileset as the root user, or after
setting the correct (missing) ACL on the fileset root.
Multiple SS versions from around 4.1 to present.
Thanks!
Luke.
On Mon, 6 Nov
Hi Carl
I don’t have any direct metrics, but we frequently run our file systems above
the 80% level, and we run split data and metadata. I haven’t experienced any GPFS
performance issues that I can attribute to high utilization. I know the
documentation talks about this, and the lower values of blocks
Does this problem happen only for the fileset root directory? Could you
try accessing the fileset as a privileged user after the fileset link and
verify whether the ACLs are set properly? AFM reads the ACLs from home and sets
them in the cache automatically during the file/dir lookup. What is the
Spectrum
Dear SpectrumScale Experts,
When creating an IW cache view of a directory in a remote GPFS filesystem,
I prepare the AFM "home" directory using the 'mmafmconfig enable' command.
I would like the cache fileset junction point to inherit the ACL of the home
directory when I link it to the filesystem.
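For context, roughly what that setup looks like end to end; the cluster, file
system, and path names are invented, and the exact AFM attributes accepted by
mmcrfileset should be checked against your release.

# At home: enable the export path so AFM can replicate extended attributes and ACLs
mmafmconfig enable /gpfs/homefs/projects

# At cache: create an independent-writer fileset pointing at that home path,
# then link it into the namespace
mmcrfileset cachefs project_iw --inode-space new \
    -p afmmode=iw,afmtarget=nfs://homenode/gpfs/homefs/projects
mmlinkfileset cachefs project_iw -J /gpfs/cachefs/projects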
Frank,
For clarity in understanding the underlying mechanism in GPFS, could you
describe what happens in the case, say, of a particular file that is appended to
every 24 hours?
I.e. as that file gets to 7MB, it then writes to a new sub-block (1/32 of the
next 1MB block). I guess that sub
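(For the arithmetic behind that, assuming the classic geometry the question
describes: a 1 MB block split into 32 sub-blocks gives 32 KB sub-blocks, so a
file just past 7 MB occupies 7 full blocks plus however many 32 KB sub-blocks
its tail needs, and each small append consumes further sub-blocks until the
8th block is fully allocated.)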
Peter,
Welcome to the mailing list!
Can I summarise by saying that you are looking for a way for GPFS to recognise
that a file has just arrived in the filesystem (via FTP) and so trigger an
action, in this case a push to Amazon S3?
I think that you also have a second question
Hello to all!
I'm pleased to have joined the GPFS UG mailing list. I'm experimenting with
GPFS on zLinux running in z/VM on a z13 mainframe. I work for the UK Met Office
in the GPCS team (general purpose compute service/mainframe team) and I'm based
in Exeter, Devon.
I've joined with a