On Tue, Oct 20, 2009 at 8:54 AM, Vladimir Dronnikov <[email protected]> wrote:
>> -4 block groups
>> +3 block groups
>> 8192 blocks per group, 8192 fragments per group
>> -1560 inodes per group
>> +2080 inodes per group
>> Superblock backups stored on blocks:
>> - 8193, 24577
>> + 8193
>>
>> Is this just an over-active trimming of the last block group by standard
>> mke2fs?
>
> standard mke2fs reserves group descriptor blocks for online fs growth.
> It is not reflected in its statistics output, however.
Well, from the source it is not obvious at all. Is it this cryptic
"rgdtsz" stuff? Can it have a better name?
>> Why twice as many inodes as in standard one?
>
> Right. bytes_per_inode was not respected, due to cryptic mke2fs
> set_field() design. Fixed.
Applying...
>> When I cranked up the test size to 1G:
>> kilobytes=$(( (RANDOM*RANDOM) % 1000000 + 2000))
>> larger images consistently end up with four times fewer inodes:
>>
>> It starts exactly at 512*1024 kbytes.
>
> Again due to group descriptors reservation.
> Can you temporarily get rid of reserved blocks in vanilla mke2fs by
> returning 0 from calc_reserved_gdt_blocks() in lib/ext2fs/initialize.c
> and retest, patch being applied?
Why should I hack vanilla mke2fs?
This last patch does not seem to improve things. Actually, this change:
-	uint32_t ninodes = nblocks_full / (blocksize >= 4096 ? 1 : 4096 / blocksize);
+	//uint32_t ninodes = nblocks_full / (blocksize >= 4096 ? 1 : 4096 / blocksize);
+	uint32_t ninodes = ((uint64_t) nblocks_full * blocksize) / bytes_per_inode;
breaks 68 kbyte images. Other differences did not go away either:
Testing 24908
+warning: 331 blocks unused
-4 block groups
+3 block groups
-1560 inodes per group
+2080 inodes per group
Superblock backups stored on blocks:
- 8193, 24577
+ 8193
Testing 1218
-304 inodes, N blocks
+152 inodes, N blocks
-304 inodes per group
+152 inodes per group
Testing 57696
+warning: 351 blocks unused
-14464 inodes, N blocks
+14448 inodes, N blocks
-8 block groups
+7 block groups
-1808 inodes per group
+2064 inodes per group
Superblock backups stored on blocks:
- 8193, 24577, 40961, 57345
+ 8193, 24577, 40961
Testing 49395
-warning: 239 blocks unused
+warning: 242 blocks unused
How about this?
vanilla mke2fs refuses to create images on files smaller than 60k.
Let's start with a 60k image and test every kilobyte:
kilobytes=60
while true; do
	test_mke2fs #|| exit 1
	: $((kilobytes++))
	test "$kilobytes" -eq 100 && exit
done
Even before your patch, this was failing at 60k:
e2fsck 1.41.4 (27-Jan-2009)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Inode bitmap differences: +(9--11)
Free inodes count wrong for group #0 (5, counted=8).
Directories count wrong for group #0 (2, counted=1).
Free inodes count wrong (5, counted=8).
image_bb: 11/16 files (0.0% non-contiguous), 9/60 blocks
The first size which worked was 68k.
With your patch, sans the above change, it still fails at 60k and works at 68k.
I am applying your changes sans that part.
I'm also modifying mkfs_ext2_test.sh, adding this loop.
Ideally, "mke2fs image_bb 60" needs to be fixed so that the
resulting image passes e2fsck, and "mke2fs image_bb 68..99"
needs to be fixed to have 16 inodes, not 24:
Testing 68
--- image_bb.out 2009-10-20 13:16:52.302556331 +0200
+++ image_std.out 2009-10-20 13:16:52.284551744 +0200
@@ -2,9 +2,9 @@
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
-24 inodes, N blocks
+16 inodes, N blocks
3 blocks reserved for the super user
First data block=1
1 block groups
8192 blocks per group, 8192 fragments per group
-24 inodes per group
+16 inodes per group
...
Testing 99
--- image_bb.out 2009-10-20 13:16:53.958296491 +0200
+++ image_std.out 2009-10-20 13:16:53.946305702 +0200
@@ -2,9 +2,9 @@
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
-24 inodes, N blocks
+16 inodes, N blocks
4 blocks reserved for the super user
First data block=1
1 block groups
8192 blocks per group, 8192 fragments per group
-24 inodes per group
+16 inodes per group
Can you do it?
--
vda
_______________________________________________
busybox mailing list
[email protected]
http://lists.busybox.net/mailman/listinfo/busybox