On Tue, 8 Oct 2013, Jens Axboe wrote:
> On Tue, Oct 08 2013, Matthew Wilcox wrote:
> > On Tue, Oct 08, 2013 at 11:34:20AM +0200, Matias Bjørling wrote:
> > > The nvme driver implements itself as a bio-based driver. This is
> > > primarily because of high lock contention on high-performance NVM
> > > devices. To remove the contention, a multi-queue block layer is
> > > being implemented.
> >
> > Um, no.  You'll crater performance by adding another memory allocation
> > (of the struct request).  multi-queue is not the solution.
>
> That's a rather "jump to conclusions" statement to make. As Matias
> mentioned, there are no extra fast-path allocations. Once the tagging is
> converted as well, I'd be surprised if it performs worse than before.
> And that comes on top of a net reduction in code.
>
> blk-mq might not be perfect as it stands, but it's a helluva lot better
> than a bunch of flash-based drivers with lots of duplicated code and
> mechanisms. We need to move away from that.
>
> --
> Jens Axboe

But this wastes copious amounts of memory on an NVMe device with more
than one namespace. The hardware's queues are shared among all the
namespaces, so you can't possibly have all the struct requests in use.
What would be better is if I could create one blk-mq context per
device/host and attach multiple gendisks to it.
