Hi,

I had a look at the mke2fs code in e2fsprogs-1.39 (since Lustre ultimately
uses ext3 to create the filesystem), and this is how Lustre arrives at the
default number of inodes.

For small filesystems (as in your case), it creates an inode for every
4096 bytes of space on the filesystem. This ratio can also be set
explicitly with the -i option to mke2fs. So with a 32 MB partition you
get 32 MB / 4096 = 8192 inodes by default. Passing --mkfsoptions="-i 2048"
to mkfs.lustre would give you 16384 inodes, which is enough to create more
than 10000 files.
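
If it helps, here is a minimal sketch of that arithmetic in plain Python
(not mke2fs itself; the helper name is mine, and the real mke2fs applies
some further rounding, but the basic ratio is size / bytes-per-inode):

    # Illustrative only: inode count from partition size and bytes-per-inode.
    def approx_inode_count(fs_size_bytes, bytes_per_inode):
        return fs_size_bytes // bytes_per_inode

    mdt_size = 32 * 1024 * 1024                   # your 32 MB MDT partition
    print(approx_inode_count(mdt_size, 4096))     # default ratio  -> 8192
    print(approx_inode_count(mdt_size, 2048))     # with "-i 2048" -> 16384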

For large filesystems, an inode is created for every 1 MB of filesystem
space, and for even larger filesystems an inode is created for every 4 MB
of filesystem space.
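
As a rough picture of how the default ratio scales with size: the
4096-byte, 1 MB and 4 MB ratios are the ones described above, but the size
cutoffs in this sketch are made-up placeholders, not the actual selection
logic in mke2fs 1.39:

    # Illustrative only: ratios as described above; the size cutoffs here
    # are placeholders, not mke2fs's real heuristics.
    def assumed_default_bytes_per_inode(fs_size_bytes):
        GB = 1024 ** 3
        if fs_size_bytes < 1 * GB:       # "small" filesystem (placeholder cutoff)
            return 4096                  # one inode per 4 KB
        elif fs_size_bytes < 512 * GB:   # "large" filesystem (placeholder cutoff)
            return 1024 * 1024           # one inode per 1 MB
        else:                            # even larger filesystems
            return 4 * 1024 * 1024       # one inode per 4 MB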

And yes, tune2fs cannot change the number of inodes in your filesystem;
the inode count can only be set when the filesystem is formatted.

Regards,
Kalpak.


On Thu, 2007-02-08 at 17:14 -0800, Lin Shen (lshen) wrote:
> tune2fs on the MDT partition says that there are still free inodes. In
> general, how is the default number of inodes calculated for a Lustre
> file system? I guess it can be set via "mkfsoptions", but not through
> tunefs.lustre.
> 
>   
> [EMAIL PROTECTED] ~]# tune2fs -l /dev/hda10 | more
> tune2fs 1.35 (28-Feb-2004)
> Filesystem volume name:   lustrefs-MDT0000
> Last mounted on:          <not available>
> Filesystem UUID:          77726e31-c4ac-4244-b71d-396a98e1c2ed
> Filesystem magic number:  0xEF53
> Filesystem revision #:    1 (dynamic)
> Filesystem features:      has_journal resize_inode dir_index filetype
> needs_recovery sparse_super large_file
> Default mount options:    (none)
> Filesystem state:         clean
> Errors behavior:          Continue
> Filesystem OS type:       Linux
> Inode count:              10032
> Block count:              10032
> Reserved block count:     501
> Free blocks:              7736
> Free inodes:              10019
> First block:              0
> Block size:               4096
> Fragment size:            4096
> Reserved GDT blocks:      2
> Blocks per group:         32768
> Fragments per group:      32768
> Inodes per group:         10032
> Inode blocks per group:   1254
> Filesystem created:       Wed Feb  7 15:04:21 2007
> Last mount time:          Wed Feb  7 15:05:54 2007
> Last write time:          Wed Feb  7 15:05:54 2007
> Mount count:              3
> Maximum mount count:      37
> Last checked:             Wed Feb  7 15:04:21 2007
> Check interval:           15552000 (6 months)
> Next check after:         Mon Aug  6 16:04:21 2007
> Reserved blocks uid:      0 (user root)
> Reserved blocks gid:      0 (group root)
> First inode:              11
> Inode size:               512
> Journal inode:            8
> Default directory hash:   tea
> Directory Hash Seed:      9b6b9ef5-7a3e-48e3-9871-63b91a60cbdf
> Journal backup:           inode blocks
> 
> 
> > -----Original Message-----
> > From: Gary Every [mailto:[EMAIL PROTECTED] 
> > Sent: Thursday, February 08, 2007 2:21 PM
> > To: Lin Shen (lshen); [email protected]
> > Subject: RE: [Lustre-discuss] No space left while running createmany
> > 
> > Sounds like you're running outta inodes
> > 
> > Do: tune2fs -l <raw_device> to see how many inodes the thing supports
> > 
> > 
> > 
> > -----Original Message-----
> > From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] On Behalf Of Lin Shen
> > (lshen)
> > Sent: Thursday, February 08, 2007 3:01 PM
> > To: [email protected]
> > Subject: [Lustre-discuss] No space left while running createmany
> > 
> > I created a Lustre file system with the MDT on a 32 MB partition
> > and one OST on a 480 MB partition, and mounted the file system
> > on two nodes. While running the createmany test program on
> > the client node, it always stops at 10000 files with a "No
> > space left" error. But the strange thing is that df shows both
> > partitions have a lot of free space.
> > 
> > Lin    
> > 

_______________________________________________
Lustre-discuss mailing list
[email protected]
https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
