On Tue, Mar 10, 2026 at 11:57 AM Caleb Sander Mateos
<[email protected]> wrote:
>
> On Mon, Mar 9, 2026 at 4:55 PM Sungwoo Kim <[email protected]> wrote:
> >
> > On Mon, Mar 9, 2026 at 11:31 AM Caleb Sander Mateos
> > <[email protected]> wrote:
> > >
> > > On Sun, Mar 8, 2026 at 11:30 PM Sungwoo Kim <[email protected]> wrote:
> > > >
> > > > The numa_node can be < 0 since NUMA_NO_NODE = -1. However,
> > > > struct blk_mq_hw_ctx{} defines numa_node as unsigned int. As a result,
> > > > numa_node is set to UINT_MAX for NUMA_NO_NODE in blk_mq_alloc_hctx().
> > >
> > > The node argument to blk_mq_alloc_hctx() comes from
> > > blk_mq_alloc_and_init_hctx(), which is called by
> > > blk_mq_realloc_hw_ctxs() with int node = blk_mq_get_hctx_node(set,
> > > i). node = NUMA_NO_NODE would suggest that blk_mq_hw_queue_to_node()
> > > doesn't find any CPU affinitized to the queue. Is that even possible?
> >
> > Thanks for your review, Celeb.
>
> While I'm flattered you consider me a celebrity, my name is Caleb :)

My apologies, Caleb. I'll be more careful.

>
> >
> > blk_mq_hw_queue_to_node() can return NUMA_NO_NODE if the number of
> > device queues exceeds the number of CPUs. Afterward, the caller
> > adjusts it with numa_node = set->numa_node.
>
> I thought the NVMe driver capped the number of queues so every queue
> is affinitized to some CPU (see nvme_max_io_queues()). What am I
> missing?
>
> Best,
> Caleb

You are right, every hw queue should be affinitized to some CPU. It
seems I was missing that. I'll look into the reasons and get back with a v2.

Thanks again for your review.

Sungwoo.
