Hi,
I am testing wakeup latency on a 40-core server with CentOS 7 installed. I
used cpuset to isolate cores 10-19; they are almost idle except for some
kernel threads. Since the latest kernel I currently have is 3.10, I am not
sure whether this issue still exists.
Problem
===
Below is the result
From: Wenbo Wang wenbo.w...@memblaze.com
Signed-off-by: Wenbo Wang wenbo.w...@memblaze.com
Reviewed-by: Chong Yuan chong.y...@memblaze.com
Currently __run_timers() advances the timer base's jiffies by one in every
loop iteration. This is very slow when the jiffies gap is large.
The following test was done on idle cpu 15 (isolated by the isolcpus= kernel
option). There is a clocksource_watchdog
If someone else has already found an active waitqueue and updated
bt->wake_index, do not modify it again. This saves an
atomic read.
Signed-off-by: Wenbo Wang <mail_weber_w...@163.com>
CC: linux-bl...@vger.kernel.org
CC: linux-kernel@vger.kernel.org
---
block/blk-mq-tag.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
Sent: Tuesday, February 2, 2016 12:00 AM
To: Wenbo Wang; ax...@fb.com
Cc: linux-kernel@vger.kernel.org; Wenbo Wang; linux-n...@lists.infradead.org
Subject: RE: [PATCH] NVMe: do not touch sq door bell if nvmeq has been suspended
Does this ever happen? The queue should be stopped before the bar is unmapped.
If __nvme_submit_cmd races with nvme_dev_disable, nvmeq
could have been suspended and dev->bar could have been
unmapped. Do not touch the SQ doorbell in this case.
Signed-off-by: Wenbo Wang <mail_weber_w...@163.com>
---
drivers/nvme/host/pci.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
If __nvme_submit_cmd races with nvme_dev_disable, nvmeq
could have been suspended and dev->bar could have been
unmapped. Do not touch the SQ doorbell in this case.
Signed-off-by: Wenbo Wang <wenbo.w...@memblaze.com>
Reviewed-by: Wenwei Tao <wenwei@memblaze.com>
CC: linux-n...@lists.infradead.org
---
drivers/nvme/host/pci.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
From: Wenbo Wang <wenbo.w...@memblaze.com>
[v3] Do request irq in nvme_init_queue() to handle request irq failures
There is one problem with the original patch. Since queue init happens
before request_irq, online_queues might be left incremented if request_irq
fails. This version merges request_irq into nvme_init_queue
Keith,
Is this patch accepted? Thanks.
-Original Message-
From: Wenbo Wang
Sent: Tuesday, August 9, 2016 11:18 AM
To: 'keith.bu...@intel.com'; 'ax...@fb.com'
Cc: linux-n...@lists.infradead.org; linux-kernel@vger.kernel.org
Subject: [PATCH] nvme/quirk: Add a delay before checking device
Add a delay before checking device ready for Memblaze devices
Signed-off-by: Wenbo Wang <wenbo.w...@memblaze.com>
---
drivers/nvme/host/pci.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index c82282f..ab90e5f 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
From: Wenbo Wang <wenbo.w...@memblaze.com>
Signed-off-by: Wenbo Wang <wenbo.w...@memblaze.com>
---
drivers/nvme/host/pci.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 663c40c..2fec23c 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2141,6 +2141,8 @@ static