[CC'ed [email protected]]
On Monday 19 October 2009 22:58, Vladimir Dronnikov wrote:
> It was a hard sucker to beat. Please review.
> When optimising, it is important not to break the arithmetic.
CC util-linux/mkfs_ext2.o
cc1: warnings being treated as errors
util-linux/mkfs_ext2.c: In function 'mkfs_ext2_main':
util-linux/mkfs_ext2.c:260: error: ISO C90 forbids mixed declarations and code
make[1]: *** [util-linux/mkfs_ext2.o] Error 1
Fixed. Please do git pull.
> I also covered all visible spots of endianness -- hence the order of
> calculations: once stored to sb->, many values can't be read back
> without byte-order conversion -- hence the local variables that at a
> glance appear redundant.
>
> Please put this version through your testsuite. I also suggest not
> extending test files with "dd seek=", but explicitly filling them with
> either garbage from /dev/urandom or zeros from /dev/zero. It took me
> two hours of my young life to spot why two successive runs of the same
> script produced different results :)
Yes. Even the current mkfs_ext2_test.sh still turns up discrepancies.
Can you explain which of these are OK, and fix the bad ones?
# ./mkfs_ext2_test.sh
Testing 24908
--- image_bb.out Mon Oct 19 23:56:21 2009
+++ image_std.out Mon Oct 19 23:56:21 2009
@@ -1,3 +1,4 @@
+warning: 331 blocks unused
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
@@ -5,8 +6,8 @@
6240 inodes, N blocks
1245 blocks reserved for the super user
First data block=1
-4 block groups
+3 block groups
8192 blocks per group, 8192 fragments per group
-1560 inodes per group
+2080 inodes per group
Superblock backups stored on blocks:
- 8193, 24577
+ 8193
Is this just over-active trimming of the last block group by the standard mke2fs?
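For what it's worth, here is a sketch of the arithmetic I suspect is involved; the 340-block metadata-overhead threshold is an assumption for illustration, not taken from the mke2fs or busybox sources:

```shell
# Sketch of last-group trimming for the 24908k case.
# The overhead threshold (340) is an assumed, illustrative value.
kilobytes=24908
block_size=1024
blocks_per_group=$((8 * block_size))  # one block bitmap covers this many blocks
first_data_block=1                    # 1 for 1 KiB blocks
blocks=$kilobytes                     # 1 block == 1 KiB here
data=$((blocks - first_data_block))
groups=$(( (data + blocks_per_group - 1) / blocks_per_group ))
rem=$((data % blocks_per_group))
# mke2fs seems to drop a trailing group too small to hold its own metadata
if [ "$rem" -gt 0 ] && [ "$rem" -lt 340 ]; then
    groups=$((groups - 1))
    echo "warning: $rem blocks unused"
fi
echo "$groups block groups"
```

This prints "warning: 331 blocks unused" and "3 block groups", matching the mke2fs side of the diff above -- so it looks like deliberate trimming rather than a bug.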
Testing 1218
--- image_bb.out Mon Oct 19 23:56:21 2009
+++ image_std.out Mon Oct 19 23:56:21 2009
@@ -2,9 +2,9 @@
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
-304 inodes, N blocks
+152 inodes, N blocks
60 blocks reserved for the super user
First data block=1
1 block groups
8192 blocks per group, 8192 fragments per group
-304 inodes per group
+152 inodes per group
Why twice as many inodes as in the standard one?
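My guess (an assumption -- I have not checked the e2fsprogs sources): classic mke2fs switches to a larger bytes-per-inode ratio for tiny "floppy"-class filesystems, while our code uses 4096 everywhere:

```shell
# Bytes-per-inode ratios below are assumed values, for illustration only.
kilobytes=1218
echo "ratio 8192: $(( kilobytes * 1024 / 8192 )) inodes"  # 152 -- what mke2fs prints
echo "ratio 4096: $(( kilobytes * 1024 / 4096 )) inodes"  # 304 -- what we print
```

The assumed 8192 ratio reproduces mke2fs's 152 exactly, and 4096 reproduces our 304.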
Testing 57696
--- image_bb.out Mon Oct 19 23:56:21 2009
+++ image_std.out Mon Oct 19 23:56:21 2009
@@ -1,12 +1,13 @@
+warning: 351 blocks unused
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
-14464 inodes, N blocks
+14448 inodes, N blocks
2884 blocks reserved for the super user
First data block=1
-8 block groups
+7 block groups
8192 blocks per group, 8192 fragments per group
-1808 inodes per group
+2064 inodes per group
Superblock backups stored on blocks:
- 8193, 24577, 40961, 57345
+ 8193, 24577, 40961
Same as 1st case (24908k) - last block group was too small?
Testing 49395
--- image_bb.out Mon Oct 19 23:56:22 2009
+++ image_std.out Mon Oct 19 23:56:22 2009
@@ -1,4 +1,4 @@
-warning: 239 blocks unused
+warning: 242 blocks unused
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Testing 298075
Testing 18144
Testing 327400
Testing 564184
?
When I cranked up the test size to 1G:
kilobytes=$(( (RANDOM*RANDOM) % 1000000 + 2000))
larger images consistently end up with four times fewer inodes:
Testing 564184
--- image_bb.out Mon Oct 19 23:56:24 2009
+++ image_std.out Mon Oct 19 23:56:24 2009
@@ -2,11 +2,11 @@
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
-141120 inodes, N blocks
+35280 inodes, N blocks
7052 blocks reserved for the super user
First data block=0
5 block groups
32768 blocks per group, 32768 fragments per group
-28224 inodes per group
+7056 inodes per group
Superblock backups stored on blocks:
32768, 98304
It starts exactly at 512*1024 kbytes.
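That threshold smells like a size-class switch in mke2fs's defaults. A hedged sketch (the class boundary and both ratios are my assumption, not verified against e2fsprogs):

```shell
# Assumed mke2fs-style size classes: "small" (<= 512 MB) uses a
# 4096 bytes-per-inode ratio, "default" (> 512 MB) uses 16384.
inode_ratio() {
    if [ "$1" -le $(( 512 * 1024 )) ]; then echo 4096; else echo 16384; fi
}
for kb in 298075 564184; do
    r=$(inode_ratio "$kb")
    echo "$kb KB: ratio $r -> $(( kb * 1024 / r )) inodes (before rounding)"
done
```

16384/4096 gives exactly the factor of four; the small difference from mke2fs's 35280 would then come from rounding inodes-per-group up to fill whole inode-table blocks.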
--
vda
_______________________________________________
busybox mailing list
[email protected]
http://lists.busybox.net/mailman/listinfo/busybox