Hello, Adam.

On Mon, Mar 20, 2017 at 17:09:15 +1100, Adam Carter wrote:
> > That, indeed, seems to be the case.  When I do cat /proc/interrupts |
> > egrep '(CPU|nvm)', I get just the header line with one data line:

> >            CPU0       CPU1       CPU2       CPU3
> >  17:          0          0         15      14605   IO-APIC 17-fasteoi  ehci_hcd:usb1, nvme0q0, nvme0q1

> > I'm kind of feeling a bit out of my depth here.  What are the nvme0q0,
> > etc.?  "Queues" of some kind?  You appear to have nine of these things,
> > I've just got two.  I'm sure there's a fine manual I ought to be
> > reading.  Do you know where I might find this manual?


> Can't remember where I read up on this. Might have been troubleshooting
> poor small-packet performance on a firewall (some network drivers can have
> multiqueue too). Maybe start with this:
> https://www.thomas-krenn.com/en/wiki/Linux_Multi-Queue_Block_IO_Queueing_Mechanism_(blk-mq)

Interesting article.
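
Incidentally, blk-mq seems to expose its hardware queues under sysfs, so
something along these lines ought to show how many queues the driver has
set up and which CPUs each one is mapped to (assuming the nvme0n1 name
from above):

$ ls /sys/block/nvme0n1/mq/
$ cat /sys/block/nvme0n1/mq/*/cpu_list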

> It looks like the nvme driver was made "multiqueue" in kernel 3.19.

> FWIW my system is 8-core (AMD 8350). It's odd having two queues on the same
> interrupt, but I have the same for q0 and q1, ....

I think q0 is the "administrative" queue, and the other 8 are ordinary
queues.  (Sorry, I read that somewhere, but can't remember where).

> .... but on your system I'd say there should be some queues on other
> interrupts so they can be serviced by other cores, so that doesn't look
> right.

It wasn't right.

> Do you have MSI enabled? Bus options -> PCI Support -> Message Signaled
> Interrupts (MSI and MSI-X)

I didn't have MSI enabled, but do now.  I now get 5 queues (nvme0q[0-4])
on four interrupts, and the interrupts are spread over the four cores
(I've got a 4-core Athlon).
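
If anybody wants to double-check that the controller really is using
MSI-X, something like the following should show "Enable+" on the MSI-X
capability line (01:00.0 is just a placeholder; substitute whatever bus
address lspci reports for the NVMe controller):

# lspci -vv -s 01:00.0 | egrep -i msi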

> If your system is not too old you may get more interrupts or a better
> spread with that enabled.

Yes, that is the case.  From a "warm start", I was able to copy my 1.4GB
file in 2.76s.  This is a bit more like it!
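
For a cold-cache comparison, dropping the page cache first should do the
trick (run as root; "bigfile" and the destination are placeholders):

# sync
# echo 3 > /proc/sys/vm/drop_caches
# time cp bigfile /mnt/somewhere/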

> > When I look at the entire /proc/interrupts, there are just 30 lines
> > listed, and I suspect there are no more than 32 interrupt numbers
> > available.  Is there any way I can configure Linux to give my SSD more
> > than one interrupt line to work with?

> > > FWIW
> > > # hdparm -tT /dev/nvme0n1

> > > /dev/nvme0n1:
> > >  Timing cached reads:   9884 MB in  2.00 seconds = 4945.35 MB/sec
> > >  Timing buffered disk reads: 4506 MB in  3.00 seconds = 1501.84 MB/sec

> > I get:

> > /dev/nvme0n1:
> >  Timing cached reads:   4248 MB in  2.00 seconds = 2124.01 MB/sec
> >  Timing buffered disk reads: 1214 MB in  3.00 seconds = 404.51 MB/sec

> > So my "cached reads" speed is (a little under) half of yours.  This is
> > to be expected, since my PCIe lanes are only version 2 (and yours are
> > probably version 3).


> FWIW the motherboard manual says it has PCIe 2.0 x16 slots. Agree that
> cache speed is likely a hardware issue.
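
For what it's worth, lspci should also show what the slot actually
negotiated: LnkCap is what the link is capable of, LnkSta what it is
currently running at (5GT/s being PCIe 2.0, 8GT/s PCIe 3.0).  01:00.0 is
again just a placeholder bus address:

# lspci -vv -s 01:00.0 | egrep -i 'lnk(cap|sta)'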


> > But the "buffered disk read" are much slower.  Is
> > this just the age of my PC, or might I have something suboptimally
> > configured?


> You look like you're getting SATA speeds, but since you have the nvme
> device, I guess that implies you haven't fallen back to SATA.

I don't think the SSD has a SATA interface.

> Could well be older hardware or fewer PCIe slots/lanes.

My $ hdparm -tT /dev/nvme0n1 speeds haven't improved since enabling MSI
in the kernel.

But I'm intending to build a new machine "soon" (depending on when I can
get a suitable Ryzen motherboard), and will put this NVMe SSD into the
new box.  I have to decide whether to get another one of these NVMe SSDs
to run as RAID-1, or whether to put swap, /usr, etc. on it, and build a
RAID-1 from two cheaper SATA SSDs.
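
If it comes down to the two-SATA-SSD route, the mirror itself should just
be the usual mdadm one-liner (the device names are placeholders, not a
recommendation for any particular partition layout):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2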

-- 
Alan Mackenzie (Nuremberg, Germany).
