On Tue, Aug 08, 2017 at 12:33:40PM +0530, Sreekanth Reddy wrote:
> On Tue, Aug 8, 2017 at 9:34 AM, Keith Busch wrote:
> >
> > It looks like they can make existing nvme tooling work with little
> > effort if they have the driver implement NVME_IOCTL_ADMIN_COMMAND,
On Mon, Aug 07, 2017 at 08:45:25AM -0700, James Bottomley wrote:
> On Mon, 2017-08-07 at 20:01 +0530, Kashyap Desai wrote:
> >
> > We have to attempt this use case and see how it behaves. I have not
> > tried this, so not sure if things are really bad or just some tuning
> > may be helpful. I
fault status from new events if the driver hasn't seen the
power fault clear from the previous handling attempt.
Fixes: fad214b0aa72 ("PCI: pciehp: Process all hotplug events before looking for new ones")
Cc: # 4.9+
Cc: Mayurkumar Patel
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
Resending due to send-email setup error; this patch may appear twice
for some.
drivers/pci/hotplug/pciehp_hpc.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/driv
make sure that we get no
> underflow for pathological input.
>
> Signed-off-by: Martin Wilck <mwi...@suse.com>
> Reviewed-by: Hannes Reinecke <h...@suse.de>
> Acked-by: Christoph Hellwig <h...@lst.de>
Looks good.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
On Tue, Jul 18, 2017 at 02:52:26PM -0400, Sinan Kaya wrote:
> On 7/18/2017 10:36 AM, Keith Busch wrote:
>
> I do see that the NVMe driver is creating a completion interrupt on
> each CPU core for the completions. No problems with that.
>
> However, I don't think
On Mon, Jul 17, 2017 at 07:07:00PM -0400, ok...@codeaurora.org wrote:
> Maybe, I need to understand the design better. I was curious why completion
> and submission queues were protected by a single lock causing lock
> contention.
Ideally the queues are tied to CPUs, so you couldn't have one
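Keith's point above, that queues are tied to CPUs so a queue pair's single lock does not actually contend across CPUs, can be sketched as a toy model. Names and structure here are illustrative stand-ins, not the nvme driver's code:

```c
#include <assert.h>

/* Toy model, not driver code: each CPU maps to its own NVMe submission/
 * completion queue pair, so the single lock guarding a pair is only ever
 * taken from one CPU and sees no cross-CPU contention. */
struct queue_pair {
    int locked;     /* stands in for the pair's spinlock */
    int contended;  /* counts lock attempts that found the lock held */
    int sq_tail;    /* submission queue tail index */
};

static int queue_for_cpu(int cpu, int nr_queues)
{
    return cpu % nr_queues;     /* 1:1 mapping when nr_queues >= nr_cpus */
}

static void submit_on_cpu(struct queue_pair *queues, int nr_queues, int cpu)
{
    struct queue_pair *qp = &queues[queue_for_cpu(cpu, nr_queues)];
    if (qp->locked)
        qp->contended++;        /* a real spinlock would spin here */
    qp->locked = 1;
    qp->sq_tail++;              /* place a command in the SQ */
    qp->locked = 0;
}
```

With one queue per CPU, every CPU lands on a different pair, so the per-pair lock never spins even though it covers both the SQ and the CQ.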
On Mon, Jul 17, 2017 at 06:46:11PM -0400, Sinan Kaya wrote:
> Hi Keith,
>
> On 7/17/2017 6:45 PM, Keith Busch wrote:
> > On Mon, Jul 17, 2017 at 06:36:23PM -0400, Sinan Kaya wrote:
> >> Code is moving the completion queue doorbell after processing all completed
> >
On Mon, Jul 17, 2017 at 06:36:23PM -0400, Sinan Kaya wrote:
> Code is moving the completion queue doorbell after processing all completed
> events and sending callbacks to the block layer on each iteration.
>
> This is causing a performance drop when a lot of jobs are queued towards
> the HW.
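The change Sinan describes, moving the completion-queue doorbell write out of the per-entry loop, can be sketched as a simulation. The structure and names here are made up for illustration; this is not the nvme driver's actual completion path:

```c
#include <assert.h>

/* Hypothetical model of an NVMe completion queue. The point being
 * illustrated: ring the CQ doorbell once after draining a whole batch of
 * completed entries, instead of one MMIO write per entry. */
struct cq_model {
    int head;
    int depth;
    int doorbell_writes;   /* counts simulated MMIO doorbell writes */
};

static void ring_doorbell(struct cq_model *cq)
{
    cq->doorbell_writes++; /* stands in for writel(head, cq->doorbell) */
}

/* Drain 'valid' completed entries, ringing the doorbell a single time. */
static int drain_completions(struct cq_model *cq, int valid)
{
    int handled = 0;
    while (handled < valid) {
        cq->head = (cq->head + 1) % cq->depth; /* consume one entry */
        handled++;
    }
    if (handled)
        ring_doorbell(cq); /* one write covers the whole batch */
    return handled;
}
```

Batching matters because each doorbell write is an uncached MMIO store; under heavy queueing, one write per batch instead of per entry removes a measurable amount of that cost.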
Signed-off-by: Christophe JAILLET <christophe.jail...@wanadoo.fr>
Indeed, thanks for the fix. Alternatively this can be fixed by relocating
nvme_dev_map prior to the 'get_device' a few lines up. This patch is
okay, too.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index d10d2f279d19..005b
On Thu, Jul 13, 2017 at 12:42:44PM -0400, Sinan Kaya wrote:
> On 7/13/2017 12:29 PM, Keith Busch wrote:
> > That wording is just confusing. It looks to me the 1 second polling is
> > to be used following a reset if CRS is not implemented.
> >
> >
> > https
On Thu, Jul 13, 2017 at 11:44:12AM -0400, Sinan Kaya wrote:
> On 7/13/2017 8:17 AM, Bjorn Helgaas wrote:
> >> The spec is calling to wait up to 1 second if the device is sending CRS.
> >> The NVMe device seems to be requiring more. Relax this up to 60 seconds.
> > Can you add a pointer to the "1
On Thu, Jul 13, 2017 at 07:17:58AM -0500, Bjorn Helgaas wrote:
> On Thu, Jul 06, 2017 at 05:07:14PM -0400, Sinan Kaya wrote:
> > An endpoint is allowed to issue Configuration Request Retry Status (CRS)
> > following a Function Level Reset (FLR) request to indicate that it is not
> > ready to
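The CRS behavior under discussion (a function returning Configuration Request Retry Status after FLR until it is ready) amounts to a bounded poll loop. The sketch below simulates it; `crs_read()`, the constants, and the poll-count timeout are stand-ins, not the kernel's interfaces:

```c
#include <assert.h>

/* A config read of the vendor ID that completes with CRS visible shows a
 * special "retry" value; the device is ready once a real ID appears.
 * 0xffff0001 mirrors the common CRS-visible read result, but treat it as
 * an assumption here. */
#define CRS_RETRY_VAL 0xffff0001u

struct dev_sim {
    int ready_after;  /* number of polls before the device is ready */
    int polls;
};

static unsigned int crs_read(struct dev_sim *d)
{
    d->polls++;
    return (d->polls > d->ready_after) ? 0x8086u : CRS_RETRY_VAL;
}

/* Returns 1 if the device became ready within 'timeout' polls; the
 * thread proposes stretching the total wait from 1s to 60s for NVMe. */
static int wait_for_device(struct dev_sim *d, int timeout)
{
    for (int elapsed = 0; elapsed < timeout; elapsed++) {
        if (crs_read(d) != CRS_RETRY_VAL)
            return 1;   /* vendor ID visible: device is ready */
        /* real code would sleep or back off between polls */
    }
    return 0;
}
```

The dispute in the thread is only about the bound: the loop shape stays the same whether the cap is 1 second or 60.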
On Tue, Jul 11, 2017 at 01:55:02AM -0700, Suganath Prabu S wrote:
> +/**
> + * _base_check_pcie_native_sgl - This function is called for PCIe end
> devices to
> + * determine if the driver needs to build a native SGL. If so, that native
> + * SGL is built in the special contiguous buffers
On Tue, Jul 11, 2017 at 01:55:02AM -0700, Suganath Prabu S wrote:
> diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.h
> b/drivers/scsi/mpt3sas/mpt3sas_base.h
> index 60fa7b6..cebdd8e 100644
> --- a/drivers/scsi/mpt3sas/mpt3sas_base.h
> +++ b/drivers/scsi/mpt3sas/mpt3sas_base.h
> @@ -54,6 +54,7 @@
of this folly. So
I think we need to let this last one go through with the quirk.
Acked-by: Keith Busch <keith.bu...@intel.com>
On Thu, Jun 29, 2017 at 08:59:07AM +0200, Valentin Rothberg wrote:
> Remove dead build rule for drivers/nvme/host/scsi.c which has been
> removed by commit ("nvme: Remove SCSI translations").
>
> Signed-off-by: Valentin Rothberg <vrothb...@suse.com>
Oops, thanks for the fix.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
On Wed, Jun 28, 2017 at 11:32:51AM -0500, wenxi...@linux.vnet.ibm.com wrote:
> diff --git a/fs/block_dev.c b/fs/block_dev.c
> index 519599d..e871444 100644
> --- a/fs/block_dev.c
> +++ b/fs/block_dev.c
> @@ -264,6 +264,10 @@ static void blkdev_bio_end_io_simple(struct bio *bio)
>
> if
On Mon, Jun 26, 2017 at 12:01:29AM -0700, Kai-Heng Feng wrote:
> A user reports APST is enabled, even when the NVMe is quirked or with
> option "default_ps_max_latency_us=0".
>
> The current logic will not set APST if the device is quirked. But the
> NVMe in question will enable APST
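The decision Kai-Heng describes reduces to a small predicate: APST must stay off when the device is quirked against it or when the user sets `default_ps_max_latency_us=0`, and the reported bug was a path that enabled it anyway. The field and function names below are illustrative, not the driver's:

```c
#include <assert.h>

/* Sketch of the intended APST gating, with made-up names. */
struct apst_ctx {
    int supports_apst;                  /* APSTA bit from Identify Controller */
    int quirk_no_apst;                  /* device blacklisted for APST */
    unsigned long long max_latency_us;  /* 0 disables APST entirely */
};

static int apst_should_enable(const struct apst_ctx *c)
{
    if (c->quirk_no_apst)
        return 0;           /* quirk wins unconditionally */
    if (c->max_latency_us == 0)
        return 0;           /* user asked for no power-state transitions */
    return c->supports_apst;
}
```

The bug report is equivalent to saying the driver's real path returned 1 for inputs where this predicate returns 0.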
On Tue, Jun 20, 2017 at 01:37:32AM +0200, Thomas Gleixner wrote:
> @@ -441,18 +440,27 @@ void fixup_irqs(void)
>
> for_each_irq_desc(irq, desc) {
> const struct cpumask *affinity;
> - int break_affinity = 0;
> - int set_affinity = 1;
> +
On Tue, Jun 20, 2017 at 01:37:15AM +0200, Thomas Gleixner wrote:
> static int vmd_enable_domain(struct vmd_dev *vmd)
> {
> struct pci_sysdata *sd = &vmd->sysdata;
> + struct fwnode_handle *fn;
> struct resource *res;
> u32 upper_bits;
> unsigned long flags;
> @@ -617,8
On Thu, Jun 01, 2017 at 01:17:48PM +0200, Johannes Thumshirn wrote:
> Now that we have a way for getting the UUID from a target, provide it
> to userspace as well.
>
> Unfortunately there is already a sysfs attribute called UUID which is
> a misnomer as it holds the NGUID value. So instead of
On Tue, May 30, 2017 at 02:58:06PM +0300, Sagi Grimberg wrote:
> > So, the reason the state is changed when the work is running rather than
> > queueing is for the window when the state may be set to NVME_CTRL_DELETING,
> > and we don't want the reset work to proceed in that case.
> >
> > What do
On Wed, May 24, 2017 at 03:06:31PM -0700, Andy Lutomirski wrote:
> They have known firmware bugs. A fix is apparently in the works --
> once fixed firmware is available, someone from Intel (Hi, Keith!)
> can adjust the quirk accordingly.
Here's the latest firmware with all the known fixes:
On Wed, May 24, 2017 at 05:26:25PM +0300, Rakesh Pandit wrote:
> Commit c5f6ce97c1210 tries to address multiple resets but fails as
> work_busy doesn't involve any synchronization and can fail. This is
> reproducible easily as can be seen by WARNING below which is triggered
> with line:
>
>
On Fri, May 19, 2017 at 11:24:39AM -0700, Andy Lutomirski wrote:
> On Fri, May 19, 2017 at 7:18 AM, Keith Busch <keith.bu...@intel.com> wrote:
> > On Thu, May 18, 2017 at 11:35:05PM -0700, Christoph Hellwig wrote:
> >> On Thu, May 18, 2017 at 06:13:55PM -0700, Andy Lutomirski wrote:
> >>
On Thu, May 18, 2017 at 11:35:05PM -0700, Christoph Hellwig wrote:
> On Thu, May 18, 2017 at 06:13:55PM -0700, Andy Lutomirski wrote:
> > a) Leave the Dell quirk in place until someone from Dell or Samsung
> > figures out what's actually going on. Add a blanket quirk turning off
> > the deepest
On Mon, May 15, 2017 at 12:15:28PM +0300, Sagi Grimberg wrote:
>
> > > Hi,
>
> Hi Oza,
>
> > > we are configuring interrupt coalesce for NVMe, but right now, it uses
> > > module param.
> > > so the same interrupt coalesce settings get applied for all the NVMEs
> > > connected to different RCs.
On Wed, May 03, 2017 at 07:02:07PM -0700, Alexander Kappner wrote:
> Some buggy NVMe controllers support APST (autonomous power
> state transitions), but do not report APSTA=1. On these controllers, the NVMe
> driver does not enable APST support. I have verified this problem occurring
> on
>
4.11 appropriate. I'll expedite this
> > through the block tree, if Keith/Sagi/Christoph agrees on this
> > being the right approach for 4.11.
>
> I'm perfectly fine with this going to 4.11
All good with me as well.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
> drivers/pci/host/vmd.c | 3 ++-
> 3 files changed, 6 insertions(+), 3 deletions(-)
Okay for vmd driver.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
Commit-ID: b72f8051f34b8164a62391e3676edc34523c5952
Gitweb: http://git.kernel.org/tip/b72f8051f34b8164a62391e3676edc34523c5952
Author: Keith Busch <keith.bu...@intel.com>
AuthorDate: Wed, 19 Apr 2017 19:51:10 -0400
Committer: Thomas Gleixner <t...@linutronix.de>
CommitDate: Thu, 20 Apr 2017 16:03:09 +0200
genirq/affinity: Fix
ation")
Reported-by: Andrei Vagin <ava...@virtuozzo.com>
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
kernel/irq/affinity.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index d052947..e2d356d 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -
On Wed, Apr 19, 2017 at 03:32:06PM -0700, Andrei Vagin wrote:
> This patch works for me.
Awesome, thank you much for confirming, and again, sorry for the
breakage. I see virtio-scsi is one reliable way to have reproduced this,
so I'll incorporate that into tests before posting future kernel core
On Wed, Apr 19, 2017 at 12:53:44PM -0700, Andrei Vagin wrote:
> On Wed, Apr 19, 2017 at 01:03:59PM -0400, Keith Busch wrote:
> > If it's a divide by 0 as your last link indicates, that must mean there
> > are possible nodes, but have no CPUs, and those should be skipped. If
>
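Keith's diagnosis above, that possible NUMA nodes with no CPUs must be skipped or the per-node division breaks, can be sketched as follows. The function names and shape are illustrative, not the kernel/irq/affinity.c code:

```c
#include <assert.h>

/* Count only nodes that actually have CPUs; dividing by the number of
 * *possible* nodes is the reported failure mode (a zero-CPU node can
 * lead to a division by zero or a wasted share). */
static int nodes_with_cpus(const int *cpus_per_node, int nnodes)
{
    int n = 0;
    for (int i = 0; i < nnodes; i++)
        if (cpus_per_node[i] > 0)
            n++;
    return n;
}

static void assign_vectors_safe(const int *cpus_per_node, int nnodes,
                                int nvecs, int *vecs_out)
{
    int nodes_left = nodes_with_cpus(cpus_per_node, nnodes);
    for (int i = 0; i < nnodes; i++) {
        if (cpus_per_node[i] == 0) {   /* skip empty possible nodes */
            vecs_out[i] = 0;
            continue;
        }
        int share = nvecs / nodes_left; /* nodes_left is never 0 here */
        if (share > cpus_per_node[i])
            share = cpus_per_node[i];
        vecs_out[i] = share;
        nvecs -= share;
        nodes_left--;
    }
}
```

The CRIU boot failure fits this pattern: a machine reporting possible-but-empty nodes feeds a zero into the divisor unless empty nodes are filtered first.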
On Wed, Apr 19, 2017 at 09:20:27AM -0700, Andrei Vagin wrote:
> Hi,
>
> Something is wrong with this patch. We run CRIU tests for upstream kernels.
> And we found that a kernel with this patch can't be booted.
>
> https://travis-ci.org/avagin/linux/builds/223557750
>
> We don't have access to
On Fri, Apr 14, 2017 at 03:10:30PM -0300, Helen Koike wrote:
> + Add missing maintainers from scripts/get_maintainer.pl in the email thread
>
> Hi,
>
> I would like to know if it would be possible to get this patch for kernel
> 4.12.
> Should I send a pull request? Or do you usually get the
Commit-ID: 3412386b531244f24a27c79ee003506a52a00848
Gitweb: http://git.kernel.org/tip/3412386b531244f24a27c79ee003506a52a00848
Author: Keith Busch <keith.bu...@intel.com>
AuthorDate: Thu, 13 Apr 2017 13:28:12 -0400
Committer: Thomas Gleixner <t...@linutronix.de>
CommitDate: Thu, 13 Apr 2017 23:41:00 +0200
irq/affinity: Fix extra
On Thu, Apr 13, 2017 at 12:06:43AM -0700, Christoph Hellwig wrote:
> Signed-off-by: Christoph Hellwig <h...@lst.de>
This is great. As an added bonus, more of struct nvme_queue's hot values
are in the same cache line!
Reviewed-by: Keith Busch <keith.bu...@intel.com>
anced nodes")
Reported-by: Xiaolong Ye <xiaolong...@intel.com>
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
kernel/irq/affinity.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index dc52911..d052947 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
> url:
> https://github.com/0day-ci/linux/commits/Keith-Busch/irq-affinity-Assign-all-CPUs-a-vector/20170401-035036
>
>
> in testcase: fsmark
> on test machine: 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with
> 128G memory
> with following parameters:
>
Commit-ID: 7bf8222b9bd0ba867e18b7f4537b61ef2e92eee8
Gitweb: http://git.kernel.org/tip/7bf8222b9bd0ba867e18b7f4537b61ef2e92eee8
Author: Keith Busch <keith.bu...@intel.com>
AuthorDate: Mon, 3 Apr 2017 15:25:53 -0400
Committer: Thomas Gleixner <t...@linutronix.de>
CommitDate: Tue, 4 Apr 2017 11:57:28 +0200
irq/affinity: Fix CPU
in that node. This will guarantee that
every CPU is assigned at least one vector.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Reviewed-by: Christoph Hellwig <h...@lst.de>
---
v1 -> v2:
Updated the change log with a more coherent description of the problem
and solution, and removed the unnecessary blk
:00:00 2001
From: Keith Busch <keith.bu...@intel.com>
Date: Tue, 28 Mar 2017 16:26:23 -0600
Subject: [PATCH] irq/affinity: Assign all CPUs a vector
The number of vectors to assign needs to be adjusted for each node such
that it doesn't exceed the number of CPUs in that node. This patch
recalculates the vector assignment
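The recalculation the changelog describes can be sketched like this; it is an illustrative model of the idea, not the actual irq_create_affinity_masks() code:

```c
#include <assert.h>

/* Distribute 'nvecs' interrupt vectors across NUMA nodes: cap each
 * node's share at its CPU count, and recompute the per-node share as
 * nodes are consumed so vectors a small node couldn't use flow to the
 * remaining nodes instead of going unassigned. */
static void assign_vectors(const int *cpus_per_node, int nnodes,
                           int nvecs, int *vecs_out)
{
    int nodes_left = nnodes;
    for (int n = 0; n < nnodes; n++) {
        int share = nvecs / nodes_left;  /* recalculated per node */
        if (share > cpus_per_node[n])
            share = cpus_per_node[n];    /* never exceed node's CPUs */
        vecs_out[n] = share;
        nvecs -= share;
        nodes_left--;
    }
}
```

With a naive fixed split of 8 vectors over 2 nodes, a 2-CPU node would strand 2 of its 4 vectors; the recalculation hands them to the larger node so every CPU can still get one.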
On Wed, Mar 29, 2017 at 08:15:50PM +0300, Sagi Grimberg wrote:
>
> > The number of vectors to assign needs to be adjusted for each node such
> > that it doesn't exceed the number of CPUs in that node. This patch
> > recalculates the vector assignment per-node so that we don't try to
> > assign
:
blk_mq_map_swqueue dereferences NULL while mapping s/w queues when CPUs
are unassigned, so making sure all CPUs are assigned fixes that.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
kernel/irq/affinity.c | 20 +++-
1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/kernel/irq
corruption or double allocation[1][2],
> > when doing I/O and removing NVMe device at the sametime.
>
> I agree, completing it looks bogus. If the request is in a scheduler or
> on a software queue, this won't end well at all. Looks like it was
> introduced by this patch:
>
>
On Thu, Mar 09, 2017 at 03:01:27PM -0700, Thomas Fjellstrom wrote:
> On Monday, March 6, 2017 5:46:33 PM MST Keith Busch wrote:
> >
> > echo 128 > /sys/block/nvme0n1/queue/max_hw_sectors_kb
> > echo 128 > /sys/block/nvme0n1/queue/max_nr_requests
> >
>
On Sun, Mar 05, 2017 at 11:11:45PM -0700, Thomas Fjellstrom wrote:
> Tonight I decided to try kernel 4.11-rc1. Still getting page allocation
> failures and aborted nvme commands once iozone gets to the fwrite/fread
> testing.
>
> The taint seems to be coming from previous warnings from the radeon
On Wed, Mar 01, 2017 at 03:37:03PM -0700, Logan Gunthorpe wrote:
> On 01/03/17 03:26 PM, Keith Busch wrote:
> > I think this is from using the managed device resource API to request the
> > irq actions. The scope of the resource used to be tied to the pci_dev's
> > dev,
On Wed, Mar 01, 2017 at 03:41:20PM -0600, Bjorn Helgaas wrote:
> On Sat, Feb 25, 2017 at 11:53:13PM -0700, Logan Gunthorpe wrote:
> > Changes since v4:
> >
> > * Turns out pushing the pci release code into the device release
> > function didn't work as I would have liked. If you try to unbind
On Mon, Feb 27, 2017 at 08:35:06PM +0200, Sagi Grimberg wrote:
> > On Sat, Feb 25, 2017 at 08:16:04PM +0100, Matias Bjørling wrote:
> > > On 02/25/2017 07:21 PM, Christoph Hellwig wrote:
> > > > No way in hell. vs is vendor specific and we absolutely can't overload
> > > > it with any sort of