Please note the difference between inode counts reported by the ZFS backend and 
inode counts reported by the Lustre MDT. In the previous incarnation the file 
system ran out of inodes as reported by Lustre, even though the MDT was only 
half full and the ZFS backend still reported free inodes.

From: Shaun Tancheff <stanch...@cray.com>
Sent: Thursday, October 03, 2019 05:41
To: Degremont, Aurelien <degre...@amazon.com>; Hebenstreit, Michael 
<michael.hebenstr...@intel.com>; Andreas Dilger <adil...@whamcloud.com>
Cc: lustre-discuss@lists.lustre.org
Subject: Re: [lustre-discuss] changing inode size on MDT

Hi,

A little pedantic, but 'inodes' don't exist in a ZFS pool per se. The code which 
attempts to report the number of inodes used/available guesses based on the 
average per-object utilization rate. If you have many large files your reported 
number of inodes goes down faster than if you have many small files.

I think the primary takeaway is that for a ZFS backend you will run out of 
_inodes_ at the same time as you run out of _space_; there simply is no 
distinction.
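
A rough back-of-the-envelope illustration (the exact heuristic inside ZFS/osd-zfs 
may differ, and the ~512 bytes-per-object figure is an assumption for a 
near-empty pool):

reported total inodes ≈ objects used + free bytes / average bytes per object
8.3 TiB ≈ 9.1e12 bytes; 9.1e12 / ~512 B per object ≈ 17.8 billion

which is in the same ballpark as the ~17.7G total reported for the raw pool 
later in this thread. As the observed average object size grows, the same free 
space divides into fewer estimated objects and the reported total shrinks.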

From: lustre-discuss <lustre-discuss-boun...@lists.lustre.org> on behalf of 
"Degremont, Aurelien" <degre...@amazon.com>
Date: Thursday, October 3, 2019 at 6:11 PM
To: "Hebenstreit, Michael" <michael.hebenstr...@intel.com>, Andreas Dilger 
<adil...@whamcloud.com>
Cc: "lustre-discuss@lists.lustre.org" <lustre-discuss@lists.lustre.org>
Subject: Re: [lustre-discuss] changing inode size on MDT

As Andreas said "it is not relevant for ZFS since ZFS dynamically allocates 
inodes and blocks as needed"

"as needed" is the important part. In your example, your MDT is almost empty, 
so 17G inodes for an empty MDT seems pretty sufficient.
As you create new files and use these inodes, you will see the total number of 
inodes change.
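
One way to watch this on a live system (a sketch; lfs df is run from a Lustre 
client, and the mount point below is a placeholder):

lfs df -i /mnt/lfsarc01

As files are created, IUsed grows while the Inodes total is re-estimated from 
the remaining space, so the total drifts rather than staying fixed.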

But I don't know what maximum number of inodes you can expect for an 8.3 TiB 
MDT…

From: lustre-discuss <lustre-discuss-boun...@lists.lustre.org> on behalf of 
"Hebenstreit, Michael" <michael.hebenstr...@intel.com>
Date: Thursday, October 3, 2019 at 13:04
To: Andreas Dilger <adil...@whamcloud.com>
Cc: "lustre-discuss@lists.lustre.org" <lustre-discuss@lists.lustre.org>
Subject: Re: [lustre-discuss] changing inode size on MDT

So you are saying that on a ZFS-based Lustre there is no way to increase the 
number of available inodes? I have an 8 TB MDT with roughly 17G inodes:

[root@elfsa1m1 ~]# df -h
Filesystem       Size  Used Avail Use% Mounted on
mdt0000          8.3T  256K  8.3T   1% /mdt0000

[root@elfsa1m1 ~]# df -i
Filesystem           Inodes  IUsed       IFree IUse% Mounted on
mdt0000         17678817874      6 17678817868    1% /mdt0000

Formatting under Lustre 2.10.8:

mkfs.lustre --mdt --backfstype=zfs --fsname=lfsarc01 --index=0 
--mgsnid="36.101.92.22@tcp" --reformat mdt0000/mdt0000

This translates to only 948M inodes on the Lustre FS:

[root@elfsa1m1 ~]# df -i
Filesystem           Inodes  IUsed       IFree IUse% Mounted on
mdt0000         17678817874      6 17678817868    1% /mdt0000
mdt0000/mdt0000   948016092    263   948015829    1% /lfs/lfsarc01/mdt

[root@elfsa1m1 ~]# df -h
Filesystem       Size  Used Avail Use% Mounted on
mdt0000          8.3T  256K  8.3T   1% /mdt0000
mdt0000/mdt0000  8.2T   24M  8.2T   1% /lfs/lfsarc01/mdt

And there is no reasonable option to provide more file entries except adding 
another MDT?

Thanks
Michael

From: Andreas Dilger <adil...@whamcloud.com>
Sent: Wednesday, October 02, 2019 18:49
To: Hebenstreit, Michael <michael.hebenstr...@intel.com>
Cc: Mohr Jr, Richard Frank <rm...@utk.edu>; lustre-discuss@lists.lustre.org
Subject: Re: [lustre-discuss] changing inode size on MDT

There are several confusing/misleading comments on this thread that need to be 
clarified...

On Oct 2, 2019, at 13:45, Hebenstreit, Michael 
<michael.hebenstr...@intel.com> wrote:

http://wiki.lustre.org/Lustre_Tuning#Number_of_Inodes_for_MDS

Note that I've updated this page to reflect current defaults.  The Lustre 
Operations Manual has a much better description of these parameters.


and I'd like to use --mkfsoptions='-i 1024' to have more inodes on the MDT. We 
already ran out of inodes on that FS (probably due to a ZFS bug in an early IEEL 
version), so I'd like to increase the number of inodes if possible.

The "-i 1024" option (bytes-per-inode ratio) is only needed for ldiskfs since 
it statically allocates the inodes at mkfs time, it is not relevant for ZFS 
since ZFS dynamically allocates inodes and blocks as needed.
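
If you want to confirm what an existing ldiskfs MDT was actually formatted with 
(illustrative only; the device path is a placeholder), the superblock can be 
inspected with e2fsprogs:

dumpe2fs -h /dev/mdt_device | grep -iE 'inode (count|size)'

For a ZFS MDT there is no equivalent fixed figure to inspect, for the reasons 
discussed earlier in this thread.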

On Oct 2, 2019, at 14:00, Colin Faber <cfa...@gmail.com> wrote:
With 1K inodes you won't have space to accommodate new features; IIRC the 
current minimum on modern Lustre is 2K now. If you're running out of MDT 
space you might consider DNE and multiple MDTs to accommodate that larger 
namespace.

To clarify, since Lustre 2.10 any new ldiskfs MDT will allocate 1024 bytes for 
the inode itself (-I 1024).  That allows enough space *within* the inode to 
efficiently store xattrs for more complex layouts (PFL, FLR, DoM).  If xattrs 
do not fit inside the inode itself then they will be stored in an external 4KB 
inode block.

The MDT is formatted with a bytes-per-inode *ratio* of 2.5KB, which means 
(approximately) one inode will be created for every 2.5KB of the total MDT 
size.  That 2.5KB of space includes the 1KB for the inode itself, plus space 
for a directory entry (or multiple if hard-linked), extra xattrs, the journal 
(up to 4GB for large MDTs), Lustre recovery logs, ChangeLogs, etc.  Each 
directory inode will have at least one 4KB block allocated.
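
As a quick worked example (assuming an ldiskfs MDT of the size discussed in this 
thread, formatted with that default 2.5KB ratio): 8.3 TiB is roughly 9.1e12 
bytes, and 9.1e12 / 2560 bytes per inode gives approximately 3.6 billion inodes.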

So, it is _possible_ to reduce the inode *ratio* below 2.5KB if you know what 
you are doing (e.g. 2KB/inode or 1.5KB/inode, this can be an arbitrary number 
of bytes, it doesn't have to be an even multiple of anything) but it definitely 
isn't possible to have 1KB inode size and 1KB per inode ratio, as there 
wouldn't be *any* space left for directories, log files, journal, etc.
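
For illustration only (an ldiskfs MDT rather than the ZFS setup in this thread; 
the device path and MGS NID are placeholders), a smaller ratio would be passed 
at format time via --mkfsoptions:

mkfs.lustre --mdt --backfstype=ldiskfs --fsname=lfsarc01 --index=0 --mgsnid=<mgs_nid> --mkfsoptions='-i 2048' /dev/mdt_device

Here '-i 2048' sets the bytes-per-inode ratio to 2KB; the inode size itself 
stays at the 1KB default mentioned above.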

Cheers, Andreas
--
Andreas Dilger
Principal Lustre Architect
Whamcloud




