On 28/10/2021 22:59, Sworddragon wrote:
Although I'm not using Linux as my main system anymore and wanted to
avoid getting into too much work, I found some time to do some tests,
as this issue bugs me just too much.

You could try running the following immediately after,
to see if it also returns quickly:

   blockdev --flushbufs /dev/sdb

Yes, this command also blocks for a bit over 1 minute when this issue
occurs.

Right, that suggests the conv=fsync passed to dd was ineffective.

Here is the output (I had to freely translate the strings since
this Knoppix instance is German-only, so they might be slightly
inaccurate; I also had to retype all the text since it was executed on
a different system, but I carefully checked it to avoid introducing
any typos):

root@Microknoppix:~# dd if=/dev/random of=/dev/sdb bs=1M conv=fsync status=progress
1039138816 bytes (1.0 GB, 991 MiB) copied, 56 s, 18.5 MB/s
dd: error writing '/dev/sdb': No space left on device
999+0 records in
998+0 records out

Ah right. What's happening is that dd is not doing the fsync(),
as it's exiting early due to write(2) getting ENOSPC.
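
For reference, this can be reproduced without a real disk using a loop
device (a sketch, assuming root and util-linux losetup; the backing
file path and size are arbitrary):

   # Create a small backing file and attach it to a free loop device.
   truncate -s 64M /tmp/disk.img
   loopdev=$(losetup -f --show /tmp/disk.img)

   # Writing more data than the device holds makes write(2) fail with
   # ENOSPC, and dd exits before doing the fsync() that conv=fsync
   # requested.
   dd if=/dev/zero of="$loopdev" bs=1M conv=fsync

   # Detach and clean up.
   losetup -d "$loopdev"
   rm /tmp/disk.img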

As you've seen, you can avoid the need for fsync()
to flush buffers by using oflag=direct.
That might also be faster, since depending on your free memory
it avoids churning the kernel caches.
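
For example (a sketch using the same device as above; note that
O_DIRECT requires the block size to be a multiple of the device's
logical sector size, which bs=1M satisfies):

   # Bypass the page cache entirely, so no separate flush is needed.
   dd if=/dev/random of=/dev/sdb bs=1M oflag=direct status=progress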

Another way to at least ensure the conv=fsync is effective
would be to not write too much.  I.e. you could determine
the exact size of the disk (with `blockdev --getsize64 /dev/sdb` for example)
and then use an appropriate bs= and count=. That's awkward though,
and difficult to do with good performance, since larger block sizes
don't generally align with the device size.
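
A sketch of that approach (assuming the device size happens to be a
multiple of 1 MiB; otherwise a smaller bs= would be needed, at the
cost of throughput):

   # Query the exact device size in bytes.
   size=$(blockdev --getsize64 /dev/sdb)

   # Write exactly that many bytes, so the final write(2) succeeds
   # and dd goes on to perform the fsync() from conv=fsync.
   dd if=/dev/random of=/dev/sdb bs=1M count=$((size / 1048576)) \
      conv=fsync status=progress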

So this is a gotcha that should at least be documented.
Though I'm leaning towards improving this by always
doing an fsync() on exit if we get a read or write error
and have successfully written any data, so that
previously written data is synced as requested.
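
In the meantime, a workaround sketch: explicitly flush the device
after dd exits, so the already-written data reaches the disk even when
dd bailed out on ENOSPC (reusing the blockdev invocation from earlier
in the thread):

   dd if=/dev/random of=/dev/sdb bs=1M status=progress
   # Flush the device's dirty buffers regardless of dd's exit status.
   blockdev --flushbufs /dev/sdb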

cheers,
Pádraig


