Hello Samuel,

Fair enough... here are some "real world" tests (even though they are
highly skewed towards a developer-type workload).

Test Setup
    Storage: SSD-backed QEMU VM (cache:none).
    Comparison: Vanilla (Unsafe) vs. Full Sync Patch (Safe).

1. Unpacking Linux Kernel (tar -xf linux-5.15.tar.xz)
    Vanilla: 2m 39s
    Full Sync: 2m 45s
    Delta: +6s (~3.8%)
    Conclusion: Minimal regression. This confirms that standard file
creation/write paths are not being penalized by the barrier logic.

2. Git Clone (git clone .../hurd.git)
    Vanilla: 2.12s
    Full Sync: 2.06s
    Delta: -0.06s (within noise)

    Conclusion: Zero regression for standard development tasks.

3. APT Reinstall (coreutils grep sed bash)
    Vanilla: 37.7s (Install time)
    Full Sync: 42.3s (Install time, adjusted for download)
    Delta: +4.6s (~12%)

    Conclusion: This 12% slowdown is the "Safety Tax." dpkg relies on
fsync() to keep its database consistent (the pattern is sketched below).
The Full Sync patch honors those requests, adding physical durability at a
reasonable performance cost.
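
For context, the write -> fsync -> rename pattern that dpkg-style tools
rely on looks roughly like the following (a minimal sketch, not dpkg's
actual code; the file names and contents are made up):

/* Sketch of the crash-safe update pattern used by package managers:
 * write the new contents to a temp file, force them to disk, then
 * atomically replace the old file.  Illustrative only. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void write_file_durably(const char *tmp, const char *final,
                               const char *data, size_t len)
{
    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); exit(1); }

    if (write(fd, data, len) != (ssize_t) len) { perror("write"); exit(1); }

    /* With the Full Sync patch this becomes a real barrier: the data
     * must reach the disk before we continue. */
    if (fsync(fd) < 0) { perror("fsync"); exit(1); }
    close(fd);

    /* Only rename once the new contents are durable. */
    if (rename(tmp, final) < 0) { perror("rename"); exit(1); }
}

int main(void)
{
    write_file_durably("status.tmp", "status", "Package: example\n", 17);
    return 0;
}

Under Vanilla that fsync() can return before the data is physically on
disk; under Full Sync it waits for it, which is presumably where the
extra ~4.6s goes.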

These tests confirm that the patch follows the "pay for what you use"
principle. Users will not see a degradation in git, tar, or, by extension,
compilation times, since those paths rarely call fsync(). They will see a
moderate (~12%) slowdown in apt, which buys them protection against
on-disk corruption during power loss.
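
To make the "pay for what you use" point concrete, the earlier synthetic
test boiled down to a loop like the one below (a minimal sketch, not the
exact benchmark source; file name and sizes are illustrative). Only the
variant that actually calls fsync() ever waits on the barrier:

/* Sketch of the synthetic test: the same open/write/close loop, with and
 * without fsync().  Only the syncing variant pays the barrier cost. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void run(int iterations, int do_sync)
{
    char buf[512] = { 0 };

    for (int i = 0; i < iterations; i++) {
        int fd = open("bench.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); exit(1); }

        if (write(fd, buf, sizeof buf) != (ssize_t) sizeof buf) {
            perror("write"); exit(1);
        }

        /* tar/git-style workloads skip this; dpkg-style workloads don't. */
        if (do_sync && fsync(fd) < 0) { perror("fsync"); exit(1); }

        close(fd);
    }
}

int main(void)
{
    run(1000, 0);   /* buffered only: no barrier cost */
    run(1000, 1);   /* durable: every iteration waits on the barrier */
    return 0;
}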

I believe this acts as a solid baseline for safety. The ~12% overhead on
synchronous workloads seems acceptable, especially since it could be
further mitigated by the JBD2 work, if we go in that direction.


Regards,

Milos


On Wed, Jan 28, 2026 at 12:20 AM Samuel Thibault <[email protected]>
wrote:

> Hello,
>
> Milos Nikic, on Tue, Jan 27, 2026 at 21:57:27 -0800, wrote:
> > I have benchmarked the patch with "Full Sync" (sync in
> > diskfs_write_disknode and in block_getblk) on a clean build (latest
> > GNUMach/Rumpdisk/Ext2fs) vs. the "Vanilla" (no sync in
> > diskfs_write_disknode and block_getblk) with the exact same setup to
> > measure the impact of the barrier logic.
> >
> > Test Setup
> >   Workload: A run is 1,000 iterations of open() -> write() -> fsync()
> >             -> close() (small C program compiled and ran inside Hurd).
> >   Storage:  QEMU VM (cache:none) backed by SSD.
>
> I was rather thinking about the usual workload that people use:
> installing debian packages, cloning a large git repo locally, etc.,
> typically.
>
> Synthetic benchmarks will always show the overhead with large
> amplification, so they are not representative of what people will
> actually see: we don't usually have fsync() called that often.
>
> Samuel
>
