https://public.dhe.ibm.com/storage/spectrumscale/spectrum_scale_apars_512x.html
is normally the best place to look for changes in PTF releases.
Peter Childs
ITS Research Storage
Queen Mary University Of London
From: gpfsug-discuss-boun
interesting with it.
Peter Childs
From: gpfsug-discuss-boun...@spectrumscale.org
on behalf of Simon Thompson
Sent: Monday, October 11, 2021 9:35 AM
To: gpfsug main discussion list
Subject: [EXTERNAL] Re: [gpfsug-discuss] Handling bad file names in policies
nodes are stateless, but is not possible
when your nodes are stateless.
My understanding is that xCAT should have a hook to do this, like the
pre-scripts, to run one at the end, but I've yet to find it.
Peter Childs
From: gpfsug-discuss-boun
this issue, but I feel
community feedback might get me further to start with.
Thanks
--
Peter Childs
ITS Research Storage
Queen Mary, University of London
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
t I'm wrong, as changing our ACL setting is
going to be a pain: while we don't make a lot of use of ACLs, we use them
enough that having to use NFSv4 ACLs all the time is going to be a pain.
--
Peter Childs
ITS Research Storage
Queen Mary, University of London
___
Yes, it does use the -Y flag all the time.
I'd like to share the code, once it's gone through some internal code
review.
With reference to the other post, I think I will raise a PMR for this,
as it does not look like mmlsquota is working as documented.
Thanks
Peter Childs
>
> Rob
>
ally a bug that ought to be fixed.
Long story is that I'm working on rewriting our quota report util that
used to be a long bash/awk script into an easier-to-understand Python
script, and I want to get the user quota info for just one fileset.
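Since the replacement script is in Python and the commands are driven with -Y, a minimal sketch of a -Y output parser may be useful here. The HEADER-line convention is how the colon-delimited output works in general; the field names and sample rows below are made up for illustration and are not the exact fields mmlsquota emits, so check the output of your own release.

```python
# Sketch of a parser for the colon-delimited output that Spectrum Scale
# commands emit with -Y.  The HEADER line names the fields; data lines
# align with it positionally.

def parse_y(text):
    """Return a list of dicts, one per data row, keyed by HEADER fields."""
    header = None
    rows = []
    for line in text.strip().splitlines():
        parts = line.rstrip(":").split(":")
        if "HEADER" in parts:
            header = parts
        elif header:
            rows.append(dict(zip(header, parts)))
    return rows

# Hypothetical sample output -- field names are illustrative only
sample = (
    "mmlsquota::HEADER:version:reserved:reserved:filesystemName:filesetName:blockUsage:blockQuota:\n"
    "mmlsquota::0:1:::gpfs0:fset1:1024:2048:\n"
    "mmlsquota::0:1:::gpfs0:fset2:4096:8192:\n"
)

rows = parse_y(sample)
# Keep only the fileset we care about
fset1 = [r for r in rows if r.get("filesetName") == "fset1"]
```

The same parser works unchanged for any mm-command that supports -Y, since the HEADER line carries the schema.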
Thanks in advance.
--
Peter Childs
ITS Research Storage
in.buterba...@vanderbilt.edu -
(615)875-9633
--
Peter Childs
ITS Research Storage
Queen Mary, University of London
ts so as Sven says the buffers are getting dirty too quickly.
I have thought before that making snapshot taking more reliable would be nice;
I'd not really thought it would be possible. I guess it's time to write another
RFE.
Peter Childs
Research Storage
ITS Research Infrastructure
Queen Mary, University of London
kind of
like balance.
Thanks
--
Peter Childs
ITS Research Storage
Queen Mary, University of London
ll currently the manager, it needs to be told to stop.
From experience it's also worth having different file system managers on
different nodes, if at all possible.
But that's just a guess without seeing the output of mmlsmgr.
Peter Childs
Research Storage
ITS Research and Teaching Support
Queen Mary, University of London
.
Thanks in advance
Peter Childs
On Mon, 23 Jul 2018 at 20:37, Peter Childs
<p.chi...@qmul.ac.uk> wrote:
On Mon, 2018-07-23 at 00:51 +1200, José Filipe Higino wrote:
Hi there,
Have you been able to create a test case (replicate the problem)? Can you tell
us a bit more ab
ve 5Gbit, I suspect due to
Ethernet back-off caused by the mixed speeds.
While we do have some IB we don't currently run our storage over it.
Thanks in advance
Peter Childs
Sorry if I am unannounced here for the first time, but I would like to help if
I can.
Jose Higino,
from NIWA
New Zealand
and I can't see that issue currently.
Thanks for the help.
Peter Childs
Research Storage
ITS Research and Teaching Support
Queen Mary, University of London
Yaron Daniel wrote
Hi
Do you run mmbackup on a snapshot, which is read-only?
Regards
Yaron
onnections (and 40GB to the back of
the storage).
We're currently running 4.2.3-8
Peter Childs
Research Storage
ITS Research and Teaching Support
Queen Mary, University of London
IBM Spectrum Scale wrote
What is in the dump that indicates the metanode is moving around? Could you
pleas
le 4.2.3-8 currently.
Thanks in advance.
--
Peter Childs
ITS Research Storage
Queen Mary, University of London
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
found to actually work. (now that's an interesting spin on a
disclaimer)
Peter Childs
Research Storage
ITS Research and Teaching Support
Queen Mary, University of London
Simon Thompson (IT Research Support) wrote
Thanks Felipe,
Is it safe to assume that there is intent* for RHEL 7.5
f its syntax matched that of mmsetquota
and mmlsquota, or if the reset-to-default-quota option had been added to
mmsetquota, with mmedquota left for editing quotas visually in an editor.
Regards
Peter Childs
On Tue, 2018-05-22 at 16:01 +0800, IBM Spectrum Scale wrote:
Hi Kuei-Yu,
Should we
as there was a PTF to fix a memory leak
that occurred in tscomm associated with network connection drops.
On Monday, July 24, 2017 5:29 AM, Peter Childs <p.chi...@qmul.ac.uk> wrote:
We have two GPFS clusters.
One is fairly old and running 4.2.1-2 and non-CCR; the nodes run
fine, using about 1.5G of memory consistently (GPFS pagepool is
set to 1G, so t
As I understand it,
mmbackup calls mmapplypolicy so this stands for mmapplypolicy too.
mmapplypolicy scans the metadata inodes (file) as requested depending on
the query supplied.
You can ask mmapplypolicy to scan a fileset, inode space or filesystem.
If scanning a fileset it scans the
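A rough sketch of how those three scan scopes translate into an invocation; the `--scope` flag and deferred execution are my reading of the mmapplypolicy interface, and the device, path, and policy-file names are placeholders, so verify against the man page for your release.

```python
# Illustrative helper that assembles (but does not run) an mmapplypolicy
# command line for the three scan scopes described above.

def scan_command(target, policy_file, scope="filesystem"):
    """Build an mmapplypolicy argv list for the given scan scope."""
    if scope not in ("filesystem", "inodespace", "fileset"):
        raise ValueError("unknown scope: %s" % scope)
    # target is a device (e.g. gpfs0) or, for a fileset scan, a path
    # inside the fileset's junction -- both names here are placeholders
    return ["mmapplypolicy", target, "-P", policy_file,
            "--scope", scope, "-I", "defer"]

cmd = scan_command("/gpfs/fs0/myfileset", "list.pol", scope="fileset")
```

Deferring execution (`-I defer`) keeps the scan to a metadata pass, which matches how mmbackup uses it to build candidate lists.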
should catch everything, but I'm wondering if anyone
else has noticed any other things mmafmctl prefetch misses.
Thanks in advance
Peter Childs
On 16/05/17 10:40, Peter Childs wrote:
I know it was said at the User group meeting last week that older
versions of afm prefetch miss empty files and
o be aware of?
We are using 4.2.1-3 and prefetch was done using "mmafmctl prefetch",
using a GPFS policy to collect the list; we are using GPFS multi-cluster
to connect the two filesystems, not NFS.
Thanks in advance
Peter Childs
__
Simon,
We've managed to resolve this issue by switching off quotas and switching them
back on again and rebuilding the quota file.
Can I check whether you run quotas on your cluster?
See you 2 weeks in Manchester
Thanks in advance.
Peter Childs
Research Storage Expert
ITS Research
.
Peter Childs
ITS Research Infrastructure
Queen Mary, University of London
From: gpfsug-discuss-boun...@spectrumscale.org
<gpfsug-discuss-boun...@spectrumscale.org> on behalf of Peter Childs
<p.chi...@qmul.ac.uk>
Sent: Tuesday, April 11, 201
the standard troubleshooting guides and got nowhere, hence
why I asked. But another set of slides always helps.
Thank you for the help; still head-scratching, which only makes the issue
more random.
Peter Childs
Research Storage
ITS Research and Teaching Support
Queen Mary, University of London
That's basically what we did. They are only environment variables, so if you're
not using bash to call mmbackup you will need to change the lines accordingly.
What they do is in the manual; the issue is that the default changed between
versions.
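For anyone following along, a sketch of the sort of wrapper being described, in Python rather than bash. The variable name MMBACKUP_PROGRESS_INTERVAL comes from the thread above; the value and the mmbackup arguments are illustrative only, so check the mmbackup documentation for your release.

```python
import os
import subprocess  # used only if you uncomment the run() call below

# Copy the environment and set the mmbackup progress interval before
# invoking mmbackup.  300 seconds is an example, not a recommendation.
env = dict(os.environ)
env["MMBACKUP_PROGRESS_INTERVAL"] = "300"

# Placeholder filesystem and options -- adjust for your cluster
cmd = ["mmbackup", "/gpfs/fs0", "-t", "incremental"]
# subprocess.run(cmd, env=env, check=True)  # cluster-only; left commented here
```

Doing it this way keeps the environment change scoped to the one mmbackup call rather than exporting it shell-wide.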
Peter Childs
ITS Research Infrastructure
Queen Mary
by changing
MMBACKUP_PROGRESS_INTERVAL. Flexible, but the default is different ;)
Peter Childs
ITS Research Infrastructure
Queen Mary, University of London
From: gpfsug-discuss-boun...@spectrumscale.org
<gpfsug-discuss-boun...@spectrumscale.org> on
4.2.1.1 or CentOS 7. So that might account for it.
Thanks
Peter Childs
From: gpfsug-discuss-boun...@spectrumscale.org
<gpfsug-discuss-boun...@spectrumscale.org> on behalf of Venkateswara R Puvvada
<vpuvv...@in.ibm.com>
Sent: Thursday, Febr
Does anyone know of any setting that may help this or what is wrong?
Thanks
Peter Childs
trying to work out if empty directories, or those containing only empty
directories, get migrated correctly, as you can't list them in the mmafmctl
prefetch statement. (If you try, using DIRECTORIES_PLUS, they throw errors.)
I am very interested in the solution to this issue.
Peter Childs
Queen Mary
So you're saying maxStatCache should be raised on LROC-enabled nodes only, as
that's the only place under Linux it's used, and should be set low on
non-LROC-enabled nodes.
Fine, just good to know; nice and easy now with node classes.
Peter Childs
From
My understanding was that maxStatCache was only used on AIX and should be set
low on Linux, as raising it didn't help and wasted resources. Are we saying that
LROC now uses it, and that setting it low when you raise maxFilesToCache under
Linux is no longer the advice?
Peter Childs
ile sets. The
> lack of stability under these large scans was the real failing for us.
Interesting.
>
>
> Bill Pappas
>
> 901-619-0585
>
> bpap...@dstonline.com
>
>
Peter Childs
Research Storage
ITS Research and Teaching
Yes but not a great deal,
Peter Childs
Research Storage Expert
ITS Research Infrastructure
Queen Mary, University of London
From: gpfsug-discuss-boun...@spectrumscale.org
<gpfsug-discuss-boun...@spectrumscale.org> on behalf of Yaron Dani
cluster.
But I'm interested to hear otherwise, as I'm about to embark on this myself.
I note you can switch an old cluster, but you need to shut down to do so.
Peter Childs
Research Storage
ITS Research and Teaching Support
Queen Mary, University of London
Marc A Kaplan wrote
New FS? Yes
ing actually changes (new ones are fine). Increasing
the version number makes this possible, but it does not actually do it, as doing
so would mean walking every directory and updating stuff.
Peter Childs
Research Storage
ITS Research and Teaching Support
Queen Mary, University of London
run the script on all nodes.
Peter Childs
ITS Research Infrastructure
Queen Mary, University of London