On 10/10/2019 12:21, John Garry wrote:


As discussed before, the tags on hisilicon V3 are HBA-wide. If you switch
to real hw queues, each hw queue has to own its independent tags.
However, that isn't supported by V3 hardware.

I am generating the tag internally in the driver now, so the hostwide
tags limitation should not be an issue.
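
To illustrate the idea (a conceptual userspace sketch only, not the actual
hisi_sas code; the pool size, names and locking are made up): every queue
draws its tag from one shared, HBA-wide pool rather than a per-hctx blk-mq
tag set:

/*
 * Conceptual sketch: driver-generated, HBA-wide tags.
 * Pool size and names are hypothetical.
 */
#include <pthread.h>
#include <stdbool.h>

#define HBA_MAX_TAGS 4096                      /* hypothetical HBA-wide tag space */

static pthread_mutex_t tag_lock = PTHREAD_MUTEX_INITIALIZER;
static bool tag_used[HBA_MAX_TAGS];            /* true = tag currently in flight */

/* Allocate any free tag from the shared pool; return -1 if the HBA is saturated. */
static int hba_tag_alloc(void)
{
    int tag = -1;

    pthread_mutex_lock(&tag_lock);
    for (int i = 0; i < HBA_MAX_TAGS; i++) {
        if (!tag_used[i]) {
            tag_used[i] = true;
            tag = i;
            break;
        }
    }
    pthread_mutex_unlock(&tag_lock);
    return tag;
}

/* Return a tag to the pool once the command completes. */
static void hba_tag_free(int tag)
{
    pthread_mutex_lock(&tag_lock);
    tag_used[tag] = false;
    pthread_mutex_unlock(&tag_lock);
}

int main(void)
{
    int tag = hba_tag_alloc();   /* every queue allocates from the same pool */
    if (tag >= 0)
        hba_tag_free(tag);
    return 0;
}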

And, to be clear, I am not paying too much attention to performance, but
rather just hotplugging while running IO.

An update on testing:
I did some scripted overnight testing. The script essentially loops like
this (a rough sketch of the loop follows the list):
- online all CPUS
- run fio for 1 minute, bound to a limited set of CPUs covering the mask of
one hctx
- offline those CPUs
- wait 1 minute (> SCSI or NVMe timeout)
- and repeat
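
For reference, a rough standalone sketch of that loop (the CPU count, the CPU
range backing the hctx, the device path and the fio options are assumptions,
not the exact script I run):

/*
 * Rough sketch of the hotplug test loop described above.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define NR_CPUS 64          /* assumed CPU count */

static void set_cpu_online(int cpu, int online)
{
    char path[64];
    FILE *f;

    snprintf(path, sizeof(path), "/sys/devices/system/cpu/cpu%d/online", cpu);
    f = fopen(path, "w");
    if (f) {
        fprintf(f, "%d\n", online);
        fclose(f);
    }
}

int main(void)
{
    for (;;) {
        /* 1. online all CPUs (cpu0 usually cannot be offlined) */
        for (int cpu = 1; cpu < NR_CPUS; cpu++)
            set_cpu_online(cpu, 1);

        /* 2. run fio for 1 minute in the background, bound to the CPUs
         *    which make up the mask of one hctx (CPUs 8-15 here, assumed) */
        system("taskset -c 8-15 fio --name=hotplug --filename=/dev/nvme0n1 "
               "--rw=randread --ioengine=libaio --iodepth=32 --direct=1 "
               "--time_based --runtime=60 &");

        /* 3. offline those CPUs while the IO is still in flight */
        sleep(30);
        for (int cpu = 8; cpu <= 15; cpu++)
            set_cpu_online(cpu, 0);

        /* 4. wait longer than the SCSI/NVMe command timeout, then repeat */
        sleep(90);
    }
    return 0;
}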

SCSI is actually quite stable, but NVMe isn't. For NVMe I am finding that
some fio processes never exit, with IOPS stuck at 0. I don't see any NVMe
timeout reported. Did you do any NVMe testing of this sort?


Yeah, so for NVMe, I see some sort of regression, like this:
Jobs: 1 (f=1): [_R] [0.0% done] [0KB/0KB/0KB /s] [0/0/0 iops] [eta 1158037877d:17h:18m:22s]

I have tested against vanilla 5.4-rc1 without problems.

If you can advise some debug to add, I'd appreciate it. If not, I'll try to add some debug to the new paths introduced in this series, to see whether anything I hit coincides with the error state, so I'll at least have a hint...

Thanks,
John




See previous discussion:

https://marc.info/?t=155928863000001&r=1&w=2


Thanks,
Ming

