Re: [PATCH AUTOSEL 5.9 33/33] xfs: don't allow NOWAIT DIO across extent boundaries
On Wed, Nov 25, 2020 at 06:46:54PM -0500, Sasha Levin wrote:
> On Thu, Nov 26, 2020 at 08:52:47AM +1100, Dave Chinner wrote:
> > We've already had one XFS upstream kernel regression in this -rc
> > cycle propagated to the stable kernels in 5.9.9 because the stable
> > process picked up a bunch of random XFS fixes within hours of them
> > being merged by Linus. One of those commits was a result of a
> > thinko, and despite the fact we found it and reverted it within a
> > few days, users of stable kernels have been exposed to it for a
> > couple of weeks. That *should never have happened*.
>
> No, what shouldn't have happened is a commit that never went out for
> review on the public mailing lists nor spent any time in linux-next
> before ending up in Linus's tree.

I think you've got your wires crossed somewhere, Sasha, because none
of that happened here.

From the public record, the patch was first posted here by Darrick:

https://lore.kernel.org/linux-xfs/160494584816.772693.2490433010759557816.stgit@magnolia/

on Nov 9, and was reviewed by Christoph a day later. It was merged
into the XFS tree on Nov 10, with the rvb tag:

https://git.kernel.org/pub/scm/fs/xfs/xfs-linux.git/commit/?h=for-next&id=6ff646b2ceb0eec916101877f38da0b73e3a5b7f

Which means it should have been in linux-next on Nov 11, 12 and 13,
when Darrick sent the pull request:

https://lore.kernel.org/linux-xfs/20201113231738.GX9695@magnolia/

It was merged into Linus's tree an hour later.

So, in contrast to your claims, the evidence is that the patch was,
in fact, publicly posted, reviewed, and spent time in linux-next
before ending up in Linus's tree.

FWIW, on November 17, GregKH sent the patch to lkml for stable
review, after it had been selected by the stable process for a stable
backport. This was not cc'd to the XFS list, and it was committed
without comment into the 5.9.x tree on November 18:
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/fs/xfs?h=linux-5.9.y&id=0ca9a072112b18efc9ba9d3a9b77e9dae60f93ac

IOWs, the XFS developers didn't ask for it to be backported to stable
kernels - the commit did not contain a "cc: sta...@kernel.org", nor
was the original patch posting cc'd to the stable list. The fact is
that the entire decision to backport this commit was made by the
stable maintainers and/or their tools, and the stable maintainers
themselves chose not to tell the XFS list they had selected it for
backport. Hence:

> It's ridiculous that you see a failure in the maintainership
> workflow of XFS and turn around to blame it somehow on the stable
> process.

I think you really need to have another look at the evidence before
you dig yourself a deeper hole and waste more of my time.

> > This has happened before, and *again* we were lucky this wasn't
> > worse than it was. We were saved by the flaw being caught by our
> > own internal pre-write corruption verifiers (which exist because
> > we don't trust our code to be bug-free, let alone the collections
> > of random, poorly tested backports) so that it only resulted in
> > corruption shutdowns rather than permanent on-disk damage and
> > data loss.
> >
> > Put simply: the stable process is flawed because it shortcuts the
> > necessary stabilisation testing for new code. It doesn't matter if
>
> The stable process assumes that commits that ended up upstream were
> reviewed and tested; the stable process doesn't offer much in the
> way of in-depth review of specific patches but mostly focuses on
> testing the product of backporting hundreds of patches into each
> stable branch.

And I've lost count of the number of times I've told the stable
maintainers that this is an invalid assumption. Yet here we are
again. How many times do we have to make the same mistake before we
learn from it?
> Release candidate cycles are here to squash the bugs that went in
> during the merge window, not to introduce new "thinkos" in the way
> of pulling patches out of your hip in the middle of the release
> cycle.

"pulling patches out of your hip"

Nice insult. Avoids profanity filters and everything. But I don't
know why you're trying to insult me over something I played no part
in.

Seriously, merging critical fixes discovered in the -rc cycle happens
*all the time* and has been done for as long as we've had -rc cycles.
Even Linus himself does this. The fact is that the -rc process is
intended to accommodate merging fixes quickly whilst still allowing
sufficient testing time to be confident that no regressions were
introduced or have been found and addressed before release.

And that's the whole point of having an iterative integration testing
phase in the release cycle - it can be adapted in duration to the
current state of the code base and the fixes that are being made late
in the cycle. You *should* know all this, Sasha, so I'm not sure why
you are claiming that long standing, well founded software
engineering practices are suddenly a problem...

> > the merged commits have a "fixes" tag in them, that tag doesn't
> > mean the change is ready to be exposed to production systems.
Re: [PATCH AUTOSEL 5.9 33/33] xfs: don't allow NOWAIT DIO across extent boundaries
On Thu, Nov 26, 2020 at 08:52:47AM +1100, Dave Chinner wrote:
> We've already had one XFS upstream kernel regression in this -rc
> cycle propagated to the stable kernels in 5.9.9 because the stable
> process picked up a bunch of random XFS fixes within hours of them
> being merged by Linus. One of those commits was a result of a
> thinko, and despite the fact we found it and reverted it within a
> few days, users of stable kernels have been exposed to it for a
> couple of weeks. That *should never have happened*.

No, what shouldn't have happened is a commit that never went out for
review on the public mailing lists nor spent any time in linux-next
before ending up in Linus's tree. It's ridiculous that you see a
failure in the maintainership workflow of XFS and turn around to
blame it somehow on the stable process.

> This has happened before, and *again* we were lucky this wasn't
> worse than it was. We were saved by the flaw being caught by our own
> internal pre-write corruption verifiers (which exist because we
> don't trust our code to be bug-free, let alone the collections of
> random, poorly tested backports) so that it only resulted in
> corruption shutdowns rather than permanent on-disk damage and data
> loss.
>
> Put simply: the stable process is flawed because it shortcuts the
> necessary stabilisation testing for new code. It doesn't matter if

The stable process assumes that commits that ended up upstream were
reviewed and tested; the stable process doesn't offer much in the way
of in-depth review of specific patches but mostly focuses on testing
the product of backporting hundreds of patches into each stable
branch.

Release candidate cycles are here to squash the bugs that went in
during the merge window, not to introduce new "thinkos" in the way of
pulling patches out of your hip in the middle of the release cycle.

> the merged commits have a "fixes" tag in them, that tag doesn't mean
> the change is ready to be exposed to production systems.
> We need the *-rc stabilisation process* to weed out thinkos, brown
> paper bag bugs, etc, because we all make mistakes, and bugs in
> filesystem code can *lose user data permanently*.

What needed to happen here is that XFS's internal testing story would
run *before* this patch was merged anywhere and catch this bug. Why
didn't it happen?

> Hence I ask that the stable maintainers only do automated pulls of
> iomap and XFS changes from upstream kernels when Linus officially
> releases them rather than at random points in time in the -rc
> cycle. If there is a critical fix we need to go back to stable
> kernels immediately, we will let sta...@kernel.org know directly
> that we want this done.

I'll happily switch back to a model where we look only for stable
tags from XFS, but sadly this happened only *once* in the past year.
How is this helping to prevent the dangerous bugs that may cause
users to lose their data permanently?

-- 
Thanks,
Sasha
Re: [PATCH AUTOSEL 5.9 33/33] xfs: don't allow NOWAIT DIO across extent boundaries
On Wed, Nov 25, 2020 at 10:35:50AM -0500, Sasha Levin wrote:
> From: Dave Chinner
>
> [ Upstream commit 883a790a84401f6f55992887fd7263d808d4d05d ]
>
> Jens has reported a situation where partial direct IOs can be issued
> and completed yet still return -EAGAIN. We don't want this to report
> a short IO as we want XFS to complete user DIO entirely or not at
> all.
>
> This partial IO situation can occur on a write IO that is split
> across an allocated extent and a hole, and the second mapping is
> returning EAGAIN because allocation would be required.
>
> The trivial reproducer:
>
> $ sudo xfs_io -fdt -c "pwrite 0 4k" -c "pwrite -V 1 -b 8k -N 0 8k" /mnt/scr/foo
> wrote 4096/4096 bytes at offset 0
> 4 KiB, 1 ops; 0.0001 sec (27.509 MiB/sec and 7042.2535 ops/sec)
> pwrite: Resource temporarily unavailable
> $
>
> The pwritev2(0, 8kB, RWF_NOWAIT) call returns EAGAIN having done
> the first 4kB write:
>
> xfs_file_direct_write: dev 259:1 ino 0x83 size 0x1000 offset 0x0 count 0x2000
> iomap_apply: dev 259:1 ino 0x83 pos 0 length 8192 flags WRITE|DIRECT|NOWAIT (0x31) ops xfs_direct_write_iomap_ops caller iomap_dio_rw actor iomap_dio_actor
> xfs_ilock_nowait: dev 259:1 ino 0x83 flags ILOCK_SHARED caller xfs_ilock_for_iomap
> xfs_iunlock: dev 259:1 ino 0x83 flags ILOCK_SHARED caller xfs_direct_write_iomap_begin
> xfs_iomap_found: dev 259:1 ino 0x83 size 0x1000 offset 0x0 count 8192 fork data startoff 0x0 startblock 24 blockcount 0x1
> iomap_apply_dstmap: dev 259:1 ino 0x83 bdev 259:1 addr 102400 offset 0 length 4096 type MAPPED flags DIRTY
>
> Here the first iomap loop has mapped the first 4kB of the file and
> issued the IO, and we enter the second iomap_apply loop:
>
> iomap_apply: dev 259:1 ino 0x83 pos 4096 length 4096 flags WRITE|DIRECT|NOWAIT (0x31) ops xfs_direct_write_iomap_ops caller iomap_dio_rw actor iomap_dio_actor
> xfs_ilock_nowait: dev 259:1 ino 0x83 flags ILOCK_SHARED caller xfs_ilock_for_iomap
> xfs_iunlock: dev 259:1 ino 0x83 flags ILOCK_SHARED caller xfs_direct_write_iomap_begin
>
> And we exit with -EAGAIN because we hit the allocate case trying
> to make the second 4kB block.
>
> Then IO completes on the first 4kB and the original IO context
> completes and unlocks the inode, returning -EAGAIN to userspace:
>
> xfs_end_io_direct_write: dev 259:1 ino 0x83 isize 0x1000 disize 0x1000 offset 0x0 count 4096
> xfs_iunlock: dev 259:1 ino 0x83 flags IOLOCK_SHARED caller xfs_file_dio_aio_write
>
> There are other vectors to the same problem when we re-enter the
> mapping code if we have to make multiple mappings under NOWAIT
> conditions. e.g. failing trylocks, COW extents being found,
> allocation being required, and so on.
>
> Avoid all these potential problems by only allowing IOMAP_NOWAIT IO
> to go ahead if the mapping we retrieve for the IO spans an entire
> allocated extent. This avoids the possibility of subsequent mappings
> to complete the IO from triggering NOWAIT semantics by any means as
> NOWAIT IO will now only enter the mapping code once per NOWAIT IO.
>
> Reported-and-tested-by: Jens Axboe
> Signed-off-by: Dave Chinner
> Reviewed-by: Darrick J. Wong
> Signed-off-by: Darrick J. Wong
> Signed-off-by: Sasha Levin
> ---
>  fs/xfs/xfs_iomap.c | 29 +++++++++++++++++++++++++++++
>  1 file changed, 29 insertions(+)

No, please don't pick this up for stable kernels until at least 5.10
is released. This still needs some integration testing time to ensure
that we haven't introduced any other performance regressions with
this change.

We've already had one XFS upstream kernel regression in this -rc
cycle propagated to the stable kernels in 5.9.9 because the stable
process picked up a bunch of random XFS fixes within hours of them
being merged by Linus. One of those commits was a result of a
thinko, and despite the fact we found it and reverted it within a
few days, users of stable kernels have been exposed to it for a
couple of weeks. That *should never have happened*.
This has happened before, and *again* we were lucky this wasn't
worse than it was. We were saved by the flaw being caught by our own
internal pre-write corruption verifiers (which exist because we
don't trust our code to be bug-free, let alone the collections of
random, poorly tested backports) so that it only resulted in
corruption shutdowns rather than permanent on-disk damage and data
loss.

Put simply: the stable process is flawed because it shortcuts the
necessary stabilisation testing for new code. It doesn't matter if
the merged commits have a "fixes" tag in them, that tag doesn't mean
the change is ready to be exposed to production systems. We need the
*-rc stabilisation process* to weed out thinkos, brown paper bag
bugs, etc, because we all make mistakes, and bugs in filesystem code
can *lose user data permanently*.

Hence I ask that the stable maintainers only do automated pulls of
iomap and XFS changes from upstream kernels when Linus officially
releases them rather than at random points in time in the -rc cycle.
[PATCH AUTOSEL 5.9 33/33] xfs: don't allow NOWAIT DIO across extent boundaries
From: Dave Chinner

[ Upstream commit 883a790a84401f6f55992887fd7263d808d4d05d ]

Jens has reported a situation where partial direct IOs can be issued
and completed yet still return -EAGAIN. We don't want this to report
a short IO as we want XFS to complete user DIO entirely or not at
all.

This partial IO situation can occur on a write IO that is split
across an allocated extent and a hole, and the second mapping is
returning EAGAIN because allocation would be required.

The trivial reproducer:

$ sudo xfs_io -fdt -c "pwrite 0 4k" -c "pwrite -V 1 -b 8k -N 0 8k" /mnt/scr/foo
wrote 4096/4096 bytes at offset 0
4 KiB, 1 ops; 0.0001 sec (27.509 MiB/sec and 7042.2535 ops/sec)
pwrite: Resource temporarily unavailable
$

The pwritev2(0, 8kB, RWF_NOWAIT) call returns EAGAIN having done
the first 4kB write:

xfs_file_direct_write: dev 259:1 ino 0x83 size 0x1000 offset 0x0 count 0x2000
iomap_apply: dev 259:1 ino 0x83 pos 0 length 8192 flags WRITE|DIRECT|NOWAIT (0x31) ops xfs_direct_write_iomap_ops caller iomap_dio_rw actor iomap_dio_actor
xfs_ilock_nowait: dev 259:1 ino 0x83 flags ILOCK_SHARED caller xfs_ilock_for_iomap
xfs_iunlock: dev 259:1 ino 0x83 flags ILOCK_SHARED caller xfs_direct_write_iomap_begin
xfs_iomap_found: dev 259:1 ino 0x83 size 0x1000 offset 0x0 count 8192 fork data startoff 0x0 startblock 24 blockcount 0x1
iomap_apply_dstmap: dev 259:1 ino 0x83 bdev 259:1 addr 102400 offset 0 length 4096 type MAPPED flags DIRTY

Here the first iomap loop has mapped the first 4kB of the file and
issued the IO, and we enter the second iomap_apply loop:

iomap_apply: dev 259:1 ino 0x83 pos 4096 length 4096 flags WRITE|DIRECT|NOWAIT (0x31) ops xfs_direct_write_iomap_ops caller iomap_dio_rw actor iomap_dio_actor
xfs_ilock_nowait: dev 259:1 ino 0x83 flags ILOCK_SHARED caller xfs_ilock_for_iomap
xfs_iunlock: dev 259:1 ino 0x83 flags ILOCK_SHARED caller xfs_direct_write_iomap_begin

And we exit with -EAGAIN because we hit the allocate case trying to
make the second 4kB block.
Then IO completes on the first 4kB and the original IO context
completes and unlocks the inode, returning -EAGAIN to userspace:

xfs_end_io_direct_write: dev 259:1 ino 0x83 isize 0x1000 disize 0x1000 offset 0x0 count 4096
xfs_iunlock: dev 259:1 ino 0x83 flags IOLOCK_SHARED caller xfs_file_dio_aio_write

There are other vectors to the same problem when we re-enter the
mapping code if we have to make multiple mappings under NOWAIT
conditions. e.g. failing trylocks, COW extents being found,
allocation being required, and so on.

Avoid all these potential problems by only allowing IOMAP_NOWAIT IO
to go ahead if the mapping we retrieve for the IO spans an entire
allocated extent. This avoids the possibility of subsequent mappings
to complete the IO from triggering NOWAIT semantics by any means as
NOWAIT IO will now only enter the mapping code once per NOWAIT IO.

Reported-and-tested-by: Jens Axboe
Signed-off-by: Dave Chinner
Reviewed-by: Darrick J. Wong
Signed-off-by: Darrick J. Wong
Signed-off-by: Sasha Levin
---
 fs/xfs/xfs_iomap.c | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index 3abb8b9d6f4c6..7b9ff824e82d4 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -706,6 +706,23 @@ xfs_ilock_for_iomap(
 	return 0;
 }
 
+/*
+ * Check that the imap we are going to return to the caller spans the entire
+ * range that the caller requested for the IO.
+ */
+static bool
+imap_spans_range(
+	struct xfs_bmbt_irec	*imap,
+	xfs_fileoff_t		offset_fsb,
+	xfs_fileoff_t		end_fsb)
+{
+	if (imap->br_startoff > offset_fsb)
+		return false;
+	if (imap->br_startoff + imap->br_blockcount < end_fsb)
+		return false;
+	return true;
+}
+
 static int
 xfs_direct_write_iomap_begin(
 	struct inode		*inode,
@@ -766,6 +783,18 @@ xfs_direct_write_iomap_begin(
 	if (imap_needs_alloc(inode, flags, &imap, nimaps))
 		goto allocate_blocks;
 
+	/*
+	 * NOWAIT IO needs to span the entire requested IO with a single map so
+	 * that we avoid partial IO failures due to the rest of the IO range not
+	 * covered by this map triggering an EAGAIN condition when it is
+	 * subsequently mapped and aborting the IO.
+	 */
+	if ((flags & IOMAP_NOWAIT) &&
+	    !imap_spans_range(&imap, offset_fsb, end_fsb)) {
+		error = -EAGAIN;
+		goto out_unlock;
+	}
+
 	xfs_iunlock(ip, lockmode);
 	trace_xfs_iomap_found(ip, offset, length, XFS_DATA_FORK, &imap);
 	return xfs_bmbt_to_iomap(ip, iomap, &imap, iomap_flags);
-- 
2.27.0