>> I also covered all visible spots of endianness -- hence the order of
>> calculations: once stored to sb->, many values can't be read out
>> without back-conversion -- hence the local variables that look
>> redundant at first glance.
>>
>> Please put this version under your testsuite. I also suggest not
>> using "dd seek=" to create the files, but explicitly filling them
>> either with garbage from /dev/urandom or with /dev/zero. It took me
>> two hours of young life to spot why two successive runs of the same
>> script produce different results :)
>
> Yes. For now, even the current mkfs_ext2_test.sh throws more
> discrepancies. Can you explain those which are OK and fix the bad ones?
>
> # ./mkfs_ext2_test.sh
> Testing 24908
> --- image_bb.out Mon Oct 19 23:56:21 2009
> +++ image_std.out Mon Oct 19 23:56:21 2009
> @@ -1,3 +1,4 @@
> +warning: 331 blocks unused
> Filesystem label=
> OS type: Linux
> Block size=1024 (log=0)
> @@ -5,8 +6,8 @@
> 6240 inodes, N blocks
> 1245 blocks reserved for the super user
> First data block=1
> -4 block groups
> +3 block groups
> 8192 blocks per group, 8192 fragments per group
> -1560 inodes per group
> +2080 inodes per group
> Superblock backups stored on blocks:
> - 8193, 24577
> + 8193
>
> Is this just over-active trimming of the last block group by standard
> mke2fs?
Standard mke2fs reserves group descriptor blocks for online fs growth.
This is not reflected in its statistics output, however.

> Why twice as many inodes as in the standard one?

Right. bytes_per_inode was not respected, due to the cryptic mke2fs
set_field() design. Fixed.

> When I cranked up the test size to 1G:
> kilobytes=$(( (RANDOM*RANDOM) % 1000000 + 2000))
> larger images consistently have four times fewer inodes:
>
> It starts exactly at 512*1024 kbytes.

Again due to the group descriptor reservation. Can you temporarily get
rid of the reserved blocks in vanilla mke2fs by returning 0 from
calc_reserved_gdt_blocks() in lib/ext2fs/initialize.c and retest, with
the patch applied?

--
Vladimir
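The explicit-fill suggestion above can be sketched like this (a sketch
only: the file name is made up and this is not the actual
mkfs_ext2_test.sh; only the size formula is taken from the thread):

```shell
#!/bin/bash
# Image size in KiB, as in the thread's test script.
kilobytes=$(( (RANDOM*RANDOM) % 1000000 + 2000 ))

# Problematic: "dd seek=" writes only the final block, so whatever an
# earlier run left in image.ext2 (e.g. an old filesystem) survives,
# and two successive runs of the script can produce different results:
#dd if=/dev/zero of=image.ext2 bs=1K seek=$(( kilobytes - 1 )) count=1

# Deterministic: overwrite the whole image on every run, either with
# zeroes or with garbage from /dev/urandom.
dd if=/dev/zero of=image.ext2 bs=1K count="$kilobytes" 2>/dev/null

ls -l image.ext2
```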
test.patch
Description: Binary data
_______________________________________________
busybox mailing list
[email protected]
http://lists.busybox.net/mailman/listinfo/busybox
