Lin Shen (lshen) wrote:
Just to show that --mkfsoptions="-i 2048" is not working as expected, or maybe I'm not doing it right.
First, I did a mkfs on the MDT partition with the defaults. From the command output you can tell that it's using 4096 as you described, and "lfs df -i" says that 7743 inodes were created. So far so good.
Then I did another mkfs on the same partition, this time setting bytes-per-inode to 2048. Supposedly, the number of inodes should double, but "lfs df -i" says only 6489 inodes were created. It actually created fewer inodes!
[EMAIL PROTECTED] ~]# mkfs.lustre --fsname=lustrefs --mdt --mgs --reformat /dev/hda9
Permanent disk data:
Target:     lustrefs-MDTffff
Index:      unassigned
Lustre FS:  lustrefs
Mount type: ldiskfs
Flags:      0x75
            (MDT MGS needs_index first_time update )
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
Parameters:

device size = 39MB
formatting backing filesystem ldiskfs on /dev/hda9
        target name  lustrefs-MDTffff
        4k blocks    0
        options      -i 4096 -I 512 -q -O dir_index -F
mkfs_cmd = mkfs.ext2 -j -b 4096 -L lustrefs-MDTffff -i 4096 -I 512 -q -O dir_index -F /dev/hda9
Writing CONFIGS/mountdata
[EMAIL PROTECTED] ~]# lfs df -i
UUID                     Inodes  IUsed   IFree  IUse%  Mounted on
lustrefs-MDT0000_UUID      7743     25    7718      0  /mnt/lustre/bonnie[MDT:0]
lustrefs-OST0000_UUID    106864     57  106807      0  /mnt/lustre/bonnie[OST:0]
filesystem summary:        7743     25    7718      0  /mnt/lustre/bonnie
I have had no issues with --mkfsoptions, but I single-quote it, like this:
mkfs.lustre --fsname=lustre01 --mdt --mgs --mkfsoptions='-i 1024' /dev/sdb
[EMAIL PROTECTED] ~]# mkfs.lustre --fsname=lustrefs --mkfsoptions="-i 2048" --mdt --mgs --reformat /dev/hda9
Permanent disk data:
Target:     lustrefs-MDTffff
Index:      unassigned
Lustre FS:  lustrefs
Mount type: ldiskfs
Flags:      0x75
            (MDT MGS needs_index first_time update )
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
Parameters:

device size = 39MB
formatting backing filesystem ldiskfs on /dev/hda9
        target name  lustrefs-MDTffff
        4k blocks    0
        options      -i 2048 -I 512 -q -O dir_index -F
mkfs_cmd = mkfs.ext2 -j -b 4096 -L lustrefs-MDTffff -i 2048 -I 512 -q -O dir_index -F /dev/hda9
Writing CONFIGS/mountdata
[EMAIL PROTECTED] ~]# lfs df -i
UUID                     Inodes  IUsed   IFree  IUse%  Mounted on
lustrefs-MDT0000_UUID      6489     25    6464      0  /mnt/lustre/bonnie[MDT:0]
lustrefs-OST0000_UUID    106864     57  106807      0  /mnt/lustre/bonnie[OST:0]
filesystem summary:        6489     25    6464      0  /mnt/lustre/bonnie
-----Original Message-----
From: Kalpak Shah [mailto:[EMAIL PROTECTED]
Sent: Thursday, February 08, 2007 11:10 PM
To: Lin Shen (lshen)
Cc: Gary Every; [email protected]
Subject: RE: [Lustre-discuss] No space left while running createmany
Hi,
I had a look at the mke2fs code in e2fsprogs-1.39 (since Lustre eventually uses ext3 to create the filesystem), and this is how Lustre arrives at the default number of inodes.
For small filesystems (as in your case), it creates an inode for every 4096 bytes of space on the filesystem; this ratio can also be specified with the -i option to mke2fs. So in your case, with a 32 MB partition, you would have 32 MB / 4096 = 8192 inodes by default, and passing --mkfsoptions="-i 2048" to mkfs.lustre would give you 16384 inodes, enough to create more than 10000 files.
For large filesystems, an inode is created for every 1 MB of filesystem space, and for even larger filesystems, an inode is created for every 4 MB.
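To put numbers on that for your partition (a rough sketch; the 32 MB size and the one-inode-per-4096-bytes small-filesystem ratio are the ones above, and these figures ignore journal and reserved-inode overhead, so the counts "lfs df -i" reports will come out somewhat lower):

BYTES=$((32 * 1024 * 1024))                         # 32 MB partition
echo "default -i 4096: $((BYTES / 4096)) inodes"    # 8192
echo "with    -i 2048: $((BYTES / 2048)) inodes"    # 16384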
Yes, tune2fs cannot change the number of inodes in your filesystem.
This option can only be set while formatting the filesystem.
Regards,
Kalpak.
On Thu, 2007-02-08 at 17:14 -0800, Lin Shen (lshen) wrote:
tune2fs on the MDT partition says that there are still free inodes. In general, how is the default number of inodes calculated for a Lustre file system? I guess it can be set through --mkfsoptions, but not through tunefs.lustre.
[EMAIL PROTECTED] ~]# tune2fs -l /dev/hda10 | more
tune2fs 1.35 (28-Feb-2004)
Filesystem volume name:   lustrefs-MDT0000
Last mounted on:          <not available>
Filesystem UUID:          77726e31-c4ac-4244-b71d-396a98e1c2ed
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal resize_inode dir_index filetype needs_recovery sparse_super large_file
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              10032
Block count:              10032
Reserved block count:     501
Free blocks:              7736
Free inodes:              10019
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      2
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         10032
Inode blocks per group:   1254
Filesystem created:       Wed Feb 7 15:04:21 2007
Last mount time:          Wed Feb 7 15:05:54 2007
Last write time:          Wed Feb 7 15:05:54 2007
Mount count:              3
Maximum mount count:      37
Last checked:             Wed Feb 7 15:04:21 2007
Check interval:           15552000 (6 months)
Next check after:         Mon Aug 6 16:04:21 2007
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               512
Journal inode:            8
Default directory hash:   tea
Directory Hash Seed:      9b6b9ef5-7a3e-48e3-9871-63b91a60cbdf
Journal backup:           inode blocks
-----Original Message-----
From: Gary Every [mailto:[EMAIL PROTECTED]
Sent: Thursday, February 08, 2007 2:21 PM
To: Lin Shen (lshen); [email protected]
Subject: RE: [Lustre-discuss] No space left while running createmany
Sounds like you're running out of inodes.
Do "tune2fs -l <raw_device>" to see how many inodes the thing supports.
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Lin Shen (lshen)
Sent: Thursday, February 08, 2007 3:01 PM
To: [email protected]
Subject: [Lustre-discuss] No space left while running createmany
I created a Lustre file system with the MDT on a 32MB partition and one OST on a 480MB partition, and mounted the file system on two nodes. While running the createmany test program on the client node, it always stops at 10000 files with a "No space left" error. But the strange thing is that df shows both partitions have lots of free space.
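Since plain df only reports free blocks, a quick check for inode exhaustion (a minimal sketch; df -i and lfs df -i are standard commands, and /mnt/lustre/bonnie is the client mount point used in the transcripts above) would be:

df -i /mnt/lustre/bonnie    # client statfs view; watch the IFree column
lfs df -i                   # per-MDT/OST inode counts as Lustre sees them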
Lin
_______________________________________________
Lustre-discuss mailing list
[email protected]
https://mail.clusterfs.com/mailman/listinfo/lustre-discuss