Hi Marc,

Thanks again for the suggestions. An interesting report appeared in the log while 
another node took over managing the filesystem:


Mon Jul  4 10:24:08.616 2016: [W] Inode space 10 in file system gpfs is 
approaching the limit for the maximum number of inodes.


Inode space 10 belonged to the independent fileset that the snapshot create/delete 
workaround managed to remove. I'm still getting negative inode numbers reported 
after migrating the manager functions and suspending/resuming the filesystem:

Inode Information
-----------------
Total number of used inodes in all Inode spaces:         -103900000
Total number of free inodes in all Inode spaces:          -24797856
Total number of allocated inodes in all Inode spaces:    -128697856
Total of Maximum number of inodes in all Inode spaces:   -103900000
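
Next I'll compare that against the per-fileset view with something like this 
(sketch; assuming the device name is gpfs, as in the log message above):

mmlsfileset gpfs -L

which should list the inode space, maximum and allocated inode counts for each 
fileset, and might show which inode space the bad numbers come from.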

I'll have to wait until later today to try unmounting, a daemon recycle, or mmfsck.

Cheers,
Luke.

From: [email protected] 
[mailto:[email protected]] On Behalf Of Marc A Kaplan
Sent: 03 July 2016 19:43
To: gpfsug main discussion list <[email protected]>
Subject: Re: [gpfsug-discuss] Trapped Inodes - releases, now count them 
properly!

mmdf statistics are not real-time accurate; there is a trade-off between accuracy 
and the cost of polling each node that might have the file system mounted.

That said, here are some possibilities, in increasing order of impact on users 
and your possible desperation ;-)

A1. Wait a while (at most a few minutes) and see if the mmdf stats are updated.

A2.
mmchmgr fs another-node

may force new stats to be sent to the new fs manager. (Not sure but I expect it 
will.)
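
(You can confirm where the manager landed with:

mmlsmgr fs

which reports the current file system manager node.)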

B. Briefly quiesce the file system with:

mmfsctl fs suspend; mmfsctl fs resume;


C. If you have no users active ... I'm pretty sure

mmumount fs -a ; mmmount fs -a;

will clear the problem ... but there's always

D.
mmshutdown -a ; mmstartup -a

E.
If none of those resolve the situation, something is hosed --

F.
hope that

mmfsck

can fix it.
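
Note that a full (offline) mmfsck needs the file system unmounted on all nodes, 
so F really implies C. Roughly:

mmumount fs -a    # file system must be unmounted everywhere
mmfsck fs -n      # dry run first: report problems, change nothing
mmfsck fs -y      # then let it attempt repairs
mmmount fs -a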


--marc



From:        Luke Raimbach <[email protected]>
To:        gpfsug main discussion list <[email protected]>
Date:        07/03/2016 10:55 AM
Subject:        Re: [gpfsug-discuss] Trapped Inodes
Sent by:        [email protected]
________________________________



Hi Marc,

Thanks for the suggestion. This seems to have removed the NULL fileset from 
the list; however, mmdf now shows even stranger statistics:

Inode Information
-----------------
Total number of used inodes in all Inode spaces:         -103900000
Total number of free inodes in all Inode spaces:          -24797856
Total number of allocated inodes in all Inode spaces:    -128697856
Total of Maximum number of inodes in all Inode spaces:   -103900000


Any ideas why these negative numbers are being reported?

Cheers,
Luke.

From: [email protected] [mailto:[email protected]] On Behalf Of Marc A Kaplan
Sent: 02 July 2016 20:17
To: gpfsug main discussion list <[email protected]>
Subject: Re: [gpfsug-discuss] Trapped Inodes

I have been informed that a glitch (for example, an abrupt shutdown) can leave 
you in a situation where it looks like all snapshots are deleted, but a hidden 
snapshot remains that must be cleaned up...

The workaround is to create a snapshot `mmcrsnapshot fs dummy`  and then delete 
it `mmdelsnapshot fs dummy` and see if that clears up the situation...
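
Spelled out, with a check afterwards (after the delete, mmlssnapshot should 
report no snapshots left):

mmcrsnapshot fs dummy
mmdelsnapshot fs dummy
mmlssnapshot fs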

--marc



From:        Luke Raimbach <[email protected]>
To:        gpfsug main discussion list <[email protected]>
Date:        07/02/2016 06:05 AM
Subject:        Re: [gpfsug-discuss] Trapped Inodes
Sent by:        [email protected]
________________________________




Hi Marc,

Thanks for the suggestion.

Snapshots were my first suspect but there are none anywhere on the filesystem.

Cheers,
Luke.

On 1 Jul 2016 5:30 pm, Marc A Kaplan <[email protected]> wrote:
Question and Suggestion:  Do you have any snapshots that might include files 
that were in the fileset you are attempting to delete?  Deleting those 
snapshots will allow the fileset deletion to complete.  The snapshots are kinda 
intertwined with what was the "live" copy of the inodes. In the GPFS "ditto" 
implementation of snapshotting,  for a file that has not changed since the 
snapshot operation, the snapshot copy is not really a copy but just a pointer 
to the "live" file.   So even after you have logically deleted the "live" 
files, the snapshot still points to those inodes you thought you deleted.  
Rather than invalidate the snapshot, (you wouldn't want that, would you?!) GPFS 
holds onto the inodes, until they are no longer referenced by any snapshot.
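
If you ever want to see how much data snapshots are still holding onto, 
something like:

mmlssnapshot fs -d

shows the storage used by each snapshot.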

--marc



From:        Luke Raimbach <[email protected]>
To:        gpfsug main discussion list <[email protected]>
Date:        07/01/2016 06:32 AM
Subject:        [gpfsug-discuss] Trapped Inodes
Sent by:        [email protected]

________________________________




Hi All,

I've run out of inodes on a relatively small filesystem. The total metadata 
capacity allows for a maximum of 188,743,680 inodes.

A fileset containing 158,000,000 inodes was force-deleted and has gone into a 
bad state, where it is reported as (NULL) with status "Deleted":

Attributes for fileset (NULL):
===============================
Status                                  Deleted
Path                                    --
Id                                      15
Root inode                              latest:
Parent Id                               <none>
Created                                 Wed Jun 15 14:07:51 2016
Comment
Inode space                             8
Maximum number of inodes                158000000
Allocated inodes                        158000000
Permission change flag                  chmodAndSetacl
afm-associated                          No

Offline mmfsck fixed a few problems, but didn't free these poor, trapped 
inodes. Now I've run out, and mmdf is telling me crazy things like this:

Inode Information
-----------------
Total number of used inodes in all Inode spaces:                  0
Total number of free inodes in all Inode spaces:           27895680
Total number of allocated inodes in all Inode spaces:      27895680
Total of Maximum number of inodes in all Inode spaces:     34100000


Current GPFS build: "4.2.0.3".

Who will help me rescue these inodes?

Cheers,
Luke.

Luke Raimbach
Senior HPC Data and Storage Systems Engineer,
The Francis Crick Institute,
Gibbs Building,
215 Euston Road,
London NW1 2BE.

E: [email protected]
W: www.crick.ac.uk

The Francis Crick Institute Limited is a registered charity in England and 
Wales no. 1140062 and a company registered in England and Wales no. 06885462, 
with its registered office at 215 Euston Road, London NW1 2BE.
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
