On 06/11/2025 02:04, Benjamin Marzinski wrote:
> }
>
> /*
>          * Ensure that bio is a multiple of internal sector encryption size
>
> @@ -3762,6 +3766,11 @@ static void crypt_io_hints(struct dm_tar
>         if (ti->emulate_zone_append)
>                 limits->max_hw_sectors = min(limits->max_hw_sectors,
>                                              BIO_MAX_VECS << PAGE_SECTORS_SHIFT);
> +
> +       limits->atomic_write_hw_unit_max = min(limits->atomic_write_hw_unit_max,
> +                                              BIO_MAX_VECS << PAGE_SHIFT);
> +       limits->atomic_write_hw_max = min(limits->atomic_write_hw_max,
> +                                         BIO_MAX_VECS << PAGE_SHIFT);
>   }
>
> Do we need to cap these limits, instead of just accepting the underlying
> device limits?

> I was going to mention that I don't think that this is required.
>
> Neither of them are really used for IO.
> atomic_write_unit_max, which is used for IO, will already get a capped
> value from atomic_write_hw_unit_max in
> blk_atomic_writes_update_limits().

Yes, we cap the software atomic write limits to what the block stack can actually handle, so that a bio with the REQ_ATOMIC flag set never needs to be split. The HW limits are just what the HW can support.

> And capping atomic_write_hw_max seems
> wrong, since atomic_write_hw_max == UINT_MAX is used in
> blk_validate_atomic_write_limits() to indicate that atomic writes were
> never set up, because no underlying device supported them.

That UINT_MAX is used as a flag to indicate that we have not started stacking limits for bottom devices yet, and so we just report 0 for the atomic limits (when set). I actually don't think that this check is strictly required in blk_validate_atomic_write_limits(), as we don't try to validate those limits before stacking the bottom devices. I did hit it previously - I need to check on that.

> I don't think
> these caps will actually break things, but in my mind they make some
> already confusing limits even more confusing.
>
> Or am I missing some reason why this is needed?

Thanks,
John
