On Wed, 10 Dec 2025, Michael van Elst wrote:
> [email protected] (Stephen Borrill) writes:
>> 262208 60563392 2 GPT part - NetBSD FFSv1/FFSv2
>> ...
>> 262208 80563392 2 GPT part - NetBSD FFSv1/FFSv2
>> ...
>> Now to resize the filesystem:
>> # resize_ffs -vy /dev/rdk1
>> Growing fs from 15140848 blocks to 20140848 blocks
>> Why 20140848, not 80563392? Even if I specify 80563392 with -s, it still only
>> grows to 20140848.
> I would wonder about the 'from' value.
> before 60563392 / 4 = 15140848
> after  80563392 / 4 = 20140848
> This is the number of filesystem 'fragments' and fits
> the resize operation.
>> dumpfs concurs:
>> ncg 217 size 20140848 blocks 19524526
>> bsize 16384 shift 14 mask 0xffffc000
>> fsize 2048 shift 11 mask 0xfffff800
> 'size' and 'blocks' are measured in terms of fragments.
OK, I'm happy with the explanation that the 'size' from dumpfs is in multiples
of fsize rather than sectors, but not with the rest. The problem is that the
term 'blocks' is used inconsistently: in dmesg and df it means sectors, while
in resize_ffs and dumpfs it means fragments.
resize_ffs(8) says that -s is in sectors ("The size is given as the count of
disk sectors, usually 512 bytes"; it avoids the word 'blocks' altogether). It's
therefore particularly confusing that it appears to switch between meanings
when run with -v:
# resize_ffs -vy -s 80563392 /dev/rdk1
Growing fs from 15140848 blocks to 20140848 blocks
I will look at improving the output of resize_ffs -v.
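For the record, the conversion works out like this (plain shell arithmetic,
assuming 512-byte sectors and the fsize of 2048 shown by dumpfs above):

# echo $((60563392 * 512 / 2048))
15140848
# echo $((80563392 * 512 / 2048))
20140848

So the 'blocks' that resize_ffs -v and dumpfs report are 2048-byte fragments,
i.e. a quarter of the sector counts that gpt and dkctl show.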
>> I don't gain any free space:
>> Filesystem  1K-blocks     Used    Avail %Cap Mounted on
>> root_device  29355772 29326868 -1438884 105% /
> You cannot resize a mounted filesystem. While resize_ffs writes
> bits to the disk, the filesystem still works with cached values.
I expected this, but assumed that doing an explicit read/write mount
afterwards would update it.
This is all from me helping out a colleague who was following instructions
I'd written which were explicitly for non-root filesystems (i.e. the
filesystem wasn't mounted at all, not even read-only, so nothing was
cached). I've since improved the documentation so he doesn't make the same
mistake next time :-)
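For a non-root filesystem the safe sequence is roughly the following (a sketch
only; the mount point is an example):

# umount /data
# resize_ffs -y /dev/rdk1
# fsck_ffs -fy /dev/rdk1
# mount /data

With the filesystem unmounted there are no cached superblock values to be
written back later, so df shows the new size as soon as it is remounted.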
>> And after a reboot, if I run resize_ffs again it claims to be growing from
>> 15140848 to 20140848 again, so it appears to have done nothing.
> Looks like the filesystem was mounted read-write and at umount
> time wrote back the cached superblock. If you are lucky, it
> just undid the resize.
> The /etc/rc.d/resize_root script only avoids disaster because it
> resizes the still read-only mounted filesystem and then reboots
> immediately.
Yes, you're right. His doing mount -a and then unmounting messed up the
superblock. This was spotted by subsequent fscks.
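For the root filesystem specifically, the path Michael describes is the one to
use: let /etc/rc.d/resize_root grow it while it is still mounted read-only
during boot. As far as I know that just needs the following in /etc/rc.conf
(untested here, so treat it as a sketch):

resize_root=YES

followed by a reboot, rather than running resize_ffs by hand against a live
root filesystem.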
> If you are sure that the filesystem was still read-only, I could
> imagine that there is an issue with cache flushing of the virtual
> disk. Maybe doing a 'dkctl xbd0 synccache' makes a difference.
dkctl: /dev/rxbd0: synccache: Operation not supported
--
Stephen