On Fri, Dec 12, 2025 at 05:21:41PM -0800, Eric Biggers wrote:
> > +	memset(pages, 0, sizeof(struct page *) * nr_segs);
> > +	nr_allocated = alloc_pages_bulk(GFP_KERNEL, nr_segs, pages);
> > +	if (nr_allocated < nr_segs)
> > +		mempool_alloc_bulk(blk_crypto_bounce_page_pool, (void **)pages,
> > +				   nr_segs, nr_allocated);
>
> alloc_pages_bulk() is documented to fill in pages sequentially.  So the
> "random pages in the array unallocated" part seems misleading.  This
> also means that only the remaining portion needs to be passed to
> mempool_alloc_bulk(), similar to blk_crypto_fallback_encrypt_endio().
I think the better idea is to offset the search in mempool_alloc_bulk()
based on the passed-in allocated argument.  I'll prepare a patch for that.
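
Roughly what I have in mind (a minimal sketch only; the helper name and
the simple per-element mempool_alloc() loop are illustrative, not the
actual implementation): since alloc_pages_bulk() fills entries
sequentially, mempool_alloc_bulk() can start filling at the allocated
offset instead of scanning every slot for NULL:

	static void alloc_bulk_tail(mempool_t *pool, void **elements,
				    unsigned int count, unsigned int allocated)
	{
		unsigned int i;

		/* entries [0, allocated) were already filled by alloc_pages_bulk() */
		for (i = allocated; i < count; i++)
			elements[i] = mempool_alloc(pool, GFP_KERNEL);
	}

With that, the caller just passes the whole array plus the count returned
by alloc_pages_bulk(), as in the quoted hunk above, and no NULL checks of
individual slots are needed.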
