On Wed, Apr 23, 2025 at 02:02:11PM -0400, Kent Overstreet wrote:
> Allocating your own bio doesn't allow you to safely exceed the
> BIO_MAX_VECS limit - there's places in the io path that need to bounce,
> and they all use biosets.
Yes.  Another reason not to do it, which I don't want to anyway.  But
we do have a few places that do it, like squashfs, which we need to
weed out.  And/or finally kill the bounce buffering for real, which is
long overdue.

> That may be an issue even for non-vmalloc bios, unless everything that
> bounces has been converted to bounce to a folio of the same order.

Anything that actually hits the bounce buffering is going to cause
problems, because it hasn't kept up with the evolution of the block
layer and is basically not used for anything relevant.

> > The problem with transparent vmalloc handling is that it's not
> > possible.  The magic handling for virtually indexed caches can be
> > hidden on the submission side, but the completion side also needs to
> > call invalidate_kernel_vmap_range for reads.  Requiring the caller
> > to know they deal with vmalloc is a way to at least keep that on the
> > radar.
>
> yeesh, that's a landmine.
>
> having a separate bio_add_vmalloc as a hint is still a really bad
> "solution", unfortunately.  And since this is something we don't have
> sanitizers or debug code for, and it only shows up on some archs -
> that's nasty.

Well, we can't do it in the block stack because that doesn't have the
vmalloc address available.  So the caller has to do it, and having a
very visible sign is the best we can do.  Yes, signs aren't the best
cure for landmines, but they are better than nothing.  (Sketches of
both the submission and completion side are at the end of this mail.)

> > Note that for a purely synchronous helper we could handle both, but
> > so far I've not seen anything but the xfs log recovery code that
> > needs it, and we'd probably get into needing to pass a bio_set to
> > avoid deadlock when used deeper in the stack, etc.  I can look into
> > that if we have more than a single user, but for now it doesn't seem
> > worth it.
>
> bcache and bcachefs btree buffers can also be vmalloc backed.
> Possibly also the prio_set path in bcache, for reading/writing bucket
> gens, but I'd have to check.

But do you do synchronous I/O, i.e. using submit_bio_wait, on them?

> > Having a common helper for vmalloc and the kernel direct mapping is
> > actually how I started, but then I ran into all the issues with it,
> > and went with the extremely simple helpers for the direct mapping,
> > which are used a lot, and the more complicated version for vmalloc,
> > which just has a few users instead.
>
> *nod*
>
> What else did you run into?  invalidate_kernel_vmap_range() seems like
> the only problematic one, given that is_vmalloc_addr() is cheap.

invalidate_kernel_vmap_range is the major issue that can't be worked
around.  Everything else was mentioned before and can be summarized as
minor inconveniences.
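
To make the "very visible sign" point concrete, here is roughly what a
bio_add_vmalloc-style helper has to do.  This is just a sketch to
illustrate the shape of it; the name bio_add_vmalloc_sketch and the
details are mine, not the actual patch:

#include <linux/bio.h>
#include <linux/highmem.h>
#include <linux/mm.h>

/*
 * A vmalloc range is not physically contiguous, so each page has to be
 * looked up and added to the bio individually.  The submission-side
 * cache flush for virtually indexed caches can be hidden in here; the
 * completion-side invalidate for reads cannot, because the block layer
 * never sees the vmalloc address again.
 */
static bool bio_add_vmalloc_sketch(struct bio *bio, void *vaddr,
				   unsigned int len)
{
	while (len) {
		unsigned int off = offset_in_page(vaddr);
		unsigned int n = min_t(unsigned int, len, PAGE_SIZE - off);

		if (op_is_write(bio_op(bio)))
			flush_kernel_vmap_range(vaddr, n);
		if (bio_add_page(bio, vmalloc_to_page(vaddr), n, off) != n)
			return false;
		vaddr += n;
		len -= n;
	}
	return true;
}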
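
And the completion-side obligation for the synchronous read case, which
is the part that is so easy to miss.  Again just a sketch, assuming the
bio was filled from a vmalloc buffer as above; the function name is
made up:

/*
 * The caller, not the block stack, has to invalidate the vmalloc alias
 * after a read completes.  invalidate_kernel_vmap_range is a no-op on
 * most architectures, which is exactly why missing it only blows up on
 * some of them.
 */
static int submit_vmalloc_read_sketch(struct bio *bio, void *vaddr,
				      unsigned int len)
{
	int ret = submit_bio_wait(bio);

	if (!ret)
		invalidate_kernel_vmap_range(vaddr, len);
	return ret;
}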