Hi,
the problem is the cleanup of the token and/or OpenFile objects. I
suggest you open a defect for this.
Sven
On Thu, Jul 12, 2018 at 8:22 AM Billich Heinrich Rainer (PSI) <
heiner.bill...@psi.ch> wrote:
Hello Sven,
The machine has
maxFilesToCache 204800 (2M)
it will become a CES node, hence the higher-than-default value. It's just a 3-node
cluster with a remote cluster mount and no activity (yet). But all three
nodes are listed as token servers by 'mmdiag --tokenmgr'.
Top showed 100% idle on
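To see how token memory and the file cache are actually being used, something like the following might help (a sketch; these are standard Spectrum Scale admin commands, run as root, and the output is cluster-specific):

```shell
mmdiag --tokenmgr      # which nodes are currently acting as token servers
mmdiag --memory        # memory pools, including token memory usage
mmlsconfig maxFilesToCache maxStatCache   # the configured cache limits
```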
Why no path name in SET POOL rule?
Maybe more than one reason, but consider that in Unix the API has the
concept of a "current directory" and of "create a file in the current
directory",
AND another process or thread may at any time rename (mv!) any
directory...
So even if you "think" you know the
Hello Sven,
Thank you. I did enable numaMemoryInterleave but the issue stays.
In the meantime I switched to version 5.0.0-2 just to see if it's version
dependent; it's not. All GPFS filesystems are unmounted when this happens.
At shutdown I often need to do a hard reset to force a reboot - o.k
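For reference, a minimal sketch of how numaMemoryInterleave is typically set and verified (the node name 'nsdnode1' is a placeholder; the setting only takes effect after the GPFS daemon is restarted on that node):

```shell
mmlsconfig numaMemoryInterleave                   # show the current setting
mmchconfig numaMemoryInterleave=yes -N nsdnode1   # 'nsdnode1' is assumed
# takes effect only after the daemon restarts on that node:
mmshutdown -N nsdnode1 && mmstartup -N nsdnode1
```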
If that has not changed, then:
PATH_NAME is not usable for placement policies.
Only the FILESET_NAME attribute is accepted.
One might think that PATH_NAME is just as well known when creating a new file as
FILESET_NAME is, but for some reason the documentation says:
"When file attributes are referenced in
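So a placement policy has to select on FILESET_NAME. A minimal sketch of installing such a policy (the filesystem 'gpfs01', pool 'fastdata', and fileset 'ABCD' are taken from this thread; the trailing default rule is needed because a placement policy must place every new file):

```shell
cat > policy.txt <<'EOF'
/* placement rules may select on FILESET_NAME, not PATH_NAME */
RULE 'ABCD-rule-01' SET POOL 'fastdata' FOR FILESET ('ABCD')
RULE 'default' SET POOL 'system'
EOF
mmchpolicy gpfs01 policy.txt -I test   # validate the rules only
mmchpolicy gpfs01 policy.txt           # install the policy
mmlspolicy gpfs01 -L                   # display what is now active
```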
Just to follow up on the question about where to learn why an NSD is marked
down: you should see a message in the GPFS log, /var/adm/ras/mmfs.log.*
Regards, The Spectrum Scale (GPFS) team
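A sketch of how one might follow this up on an affected node (the filesystem name 'gpfs01' is a placeholder, and the exact log wording varies by release):

```shell
grep -i "down" /var/adm/ras/mmfs.log.latest   # look for the disk-down message
mmlsdisk gpfs01 -e         # list only the disks that are not up/ready
mmchdisk gpfs01 start -a   # try to bring all down disks back up
```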
--
Hallo Achim, hallo Simon,
first of all, thanks for your answers. I think Achim's answer fits best here. The
NSD servers (only 2) for these disks were mistakenly restarted in the same time
window.
Renar Grunenberg
Abteilung Informatik – Betrieb
HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon: 09561
Hi Renar, whenever an access to an NSD happens,
there is a potential that the node cannot access the disk, so if the (only)
NSD server is down, there will be no chance to access the disk, and the
disk will be set down. If you have twin-tailed disks, the 'second' (or possibly some more) NSD
server will
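When that happens, the mapping of NSDs to servers and local devices can be checked, and the disks restarted, roughly like this (a sketch; 'gpfs01' is a placeholder filesystem name):

```shell
mmlsnsd -m                 # map each NSD to its server list and local device
mmlsnsd -X                 # extended view, including device type and remarks
mmnsddiscover -a           # have the NSD servers rediscover their disks
mmchdisk gpfs01 start -a   # then try to bring the disks back up
```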
That's perfect, thank you both.
Best regards
Michal
On 12 Jul 2018 at 10:39, Smita J Raut wrote:
If ABCD is not a fileset, then the rule below can be used:
RULE 'ABCD-rule-01' SET POOL 'fastdata' WHERE PATH_NAME LIKE
'/gpfs/gpfs01/ABCD/%'
Thanks,
Smita
From: Simon Thompson
To: gpfsug main discussion list
How are the disks attached? We have some IB/SRP storage that is sometimes a
little slow to appear in multipath, and we have seen this in the past (we have since
set autoload=off and always check multipath before restarting GPFS on the node).
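The workaround Simon describes might look roughly like this (a sketch; 'thisnode' is a placeholder node name):

```shell
mmchconfig autoload=no   # do not start GPFS automatically at boot
# after a reboot, check the paths first, then start GPFS by hand:
multipath -ll            # confirm all storage paths are present
mmstartup -N thisnode    # 'thisnode' is assumed
```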
Simon
From: on behalf of
"renar.grunenb...@huk-coburg.de"
R
From: Simon Thompson
To: gpfsug main discussion list
Date: 07/12/2018 01:34 PM
Subject: Re: [gpfsug-discuss] File placement r
Hello All,
after a reboot of two NSD servers we see that some disks in different filesystems
are down, and we don't see why.
The logs (messages, dmesg, kern, ...) say nothing. We are on RHEL 7.4 and
SS 5.0.1.1.
The question now: are there any logs or structures in the GPFS daemon that log
these situations
Is ABCD a fileset? If so, it's easy with something like:
RULE 'ABCD-rule-01' SET POOL 'fastdata' FOR FILESET ('ABCD-fileset-name')
Simon
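For completeness, a sketch of creating and linking such a fileset so the rule has something to match (the filesystem 'gpfs01' and the junction path are taken from the rule quoted earlier in this thread):

```shell
mmcrfileset gpfs01 ABCD                          # create the fileset
mmlinkfileset gpfs01 ABCD -J /gpfs/gpfs01/ABCD   # link it into the namespace
# files created under /gpfs/gpfs01/ABCD now match FOR FILESET ('ABCD')
```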
On 12/07/2018, 07:56, "gpfsug-discuss-boun...@spectrumscale.org on behalf of
zac...@img.cas.cz" wrote:
Hello,
it is possible to create file pla