Re: [PATCH V4 0/5] nvme-pci: fixes on nvme_timeout and nvme_dev_disable

2018-04-17 Thread Ming Lei
On Thu, Mar 08, 2018 at 02:19:26PM +0800, Jianchao Wang wrote: > Firstly, really appreciate Keith and Sagi's precious advice on previous > versions. > And this is the version 4. > > Some patches of the previous patchset have been submitted and the left is > this patchset > which has been

Re: [PATCH] blk-mq: start request gstate with gen 1

2018-04-16 Thread Ming Lei
fore the > scsi_cmnd is not initialized, scsi_cmnd->device is still NULL at > the moment, then we will get crash. > > Cc: Bart Van Assche <bart.vanass...@wdc.com> > Cc: Tejun Heo <t...@kernel.org> > Cc: Ming Lei <ming@redhat.com> > Cc: Martin Steigerwald

Re: [PATCH] blk-mq: mark hctx RESTART when get budget fails

2018-04-16 Thread Ming Lei
On Mon, Apr 16, 2018 at 03:55:36PM +0800, Jianchao Wang wrote: > When get budget fails, blk_mq_sched_dispatch_requests does not do > anything to ensure the hctx to be restarted. We can survive from > this, because only the scsi implements .get_budget and it always > runs the hctx queues when

Re: 4.15.14 crash with iscsi target and dvd

2018-04-12 Thread Ming Lei
On Thu, Apr 12, 2018 at 09:43:02PM -0400, Wakko Warner wrote: > Ming Lei wrote: > > On Tue, Apr 10, 2018 at 08:45:25PM -0400, Wakko Warner wrote: > > > Sorry for the delay. I reverted my change, added this one. I didn't > > > reboot, I just unloaded and loaded thi

Re: 4.15.14 crash with iscsi target and dvd

2018-04-12 Thread Ming Lei
On Tue, Apr 10, 2018 at 08:45:25PM -0400, Wakko Warner wrote: > Ming Lei wrote: > > Sure, thanks for your sharing. > > > > Wakko, could you test the following patch and see if there is any > > difference? > > > > -- > > diff --git a/drivers/tar

Re: 4.15.14 crash with iscsi target and dvd

2018-04-09 Thread Ming Lei
On Mon, Apr 09, 2018 at 07:43:01PM -0400, Wakko Warner wrote: > Ming Lei wrote: > > On Mon, Apr 09, 2018 at 09:30:11PM +, Bart Van Assche wrote: > > > Hello Ming, > > > > > > Can you have a look at this? The start of this e-mail thread is available > &g

Re: 4.15.14 crash with iscsi target and dvd

2018-04-09 Thread Ming Lei
ac60eb58b145839b5893e > > Author: Ming Lei <tom.leim...@gmail.com> > > Date: Fri Nov 11 20:05:32 2016 +0800 > > > > target: avoid accessing .bi_vcnt directly > > > > When the bio is full, bio_add_pc_page() will return zero, > > so

Re: [s390x] New regression was found on kernel-4.16

2018-04-09 Thread Ming Lei
On Mon, Apr 09, 2018 at 06:18:04PM +0800, Li Wang wrote: > Hi, > > I got this BUG_ON() on s390x platform with kernel-v4.16.0. > > [1.200196] [ cut here ] > [1.200201] kernel BUG at block/bio.c:1798! > [1.200228] illegal operation: 0001 ilc:1 [#1] SMP > [

Re: [s390x] New regression was found on kernel-4.16

2018-04-09 Thread Ming Lei
Hi Li Wang, On Mon, Apr 09, 2018 at 06:18:04PM +0800, Li Wang wrote: > Hi, > > I got this BUG_ON() on s390x platform with kernel-v4.16.0. Today I saw this bug too, from my first look, seems it is because that get_max_io_size() returns zero in blk_bio_segment_split(). And I trigger that in one

Re: [PATCH V3 4/4] genirq/affinity: irq vector spread among online CPUs as far as possible

2018-04-07 Thread Ming Lei
On Fri, Apr 06, 2018 at 11:49:47PM +0200, Thomas Gleixner wrote: > On Fri, 6 Apr 2018, Thomas Gleixner wrote: > > > On Fri, 6 Apr 2018, Ming Lei wrote: > > > > > > I will post V4 soon by using cpu_present_mask in the 1st stage irq spread. > > > And

[tip:irq/core] genirq/affinity: Spread irq vectors among present CPUs as far as possible

2018-04-06 Thread tip-bot for Ming Lei
Commit-ID: d3056812e7dfe6bf4f8ad9e397a9116dd5d32d15 Gitweb: https://git.kernel.org/tip/d3056812e7dfe6bf4f8ad9e397a9116dd5d32d15 Author: Ming Lei <ming@redhat.com> AuthorDate: Thu, 8 Mar 2018 18:53:58 +0800 Committer: Thomas Gleixner <t...@linutronix.de> CommitDate: Fri, 6

[tip:irq/core] genirq/affinity: Allow irq spreading from a given starting point

2018-04-06 Thread tip-bot for Ming Lei
Commit-ID: 1a2d0914e23aab386f5d5acb689777e24151c2c8 Gitweb: https://git.kernel.org/tip/1a2d0914e23aab386f5d5acb689777e24151c2c8 Author: Ming Lei <ming@redhat.com> AuthorDate: Thu, 8 Mar 2018 18:53:57 +0800 Committer: Thomas Gleixner <t...@linutronix.de> CommitDate: Fri, 6

[tip:irq/core] genirq/affinity: Rename *node_to_possible_cpumask as *node_to_cpumask

2018-04-06 Thread tip-bot for Ming Lei
Commit-ID: 47778f33dcba7feb92031643b37e477892f82b62 Gitweb: https://git.kernel.org/tip/47778f33dcba7feb92031643b37e477892f82b62 Author: Ming Lei <ming@redhat.com> AuthorDate: Thu, 8 Mar 2018 18:53:55 +0800 Committer: Thomas Gleixner <t...@linutronix.de> CommitDate: Fri, 6

[tip:irq/core] genirq/affinity: Move actual irq vector spreading into a helper function

2018-04-06 Thread tip-bot for Ming Lei
Commit-ID: b3e6aaa8d94d618e685c4df08bef991a4fb43923 Gitweb: https://git.kernel.org/tip/b3e6aaa8d94d618e685c4df08bef991a4fb43923 Author: Ming Lei <ming@redhat.com> AuthorDate: Thu, 8 Mar 2018 18:53:56 +0800 Committer: Thomas Gleixner <t...@linutronix.de> CommitDate: Fri, 6

Re: [PATCH V3 4/4] genirq/affinity: irq vector spread among online CPUs as far as possible

2018-04-06 Thread Ming Lei
Hi Thomas, On Wed, Apr 04, 2018 at 09:38:26PM +0200, Thomas Gleixner wrote: > On Wed, 4 Apr 2018, Ming Lei wrote: > > On Wed, Apr 04, 2018 at 10:25:16AM +0200, Thomas Gleixner wrote: > > > In the example above: > > > > > > > > > irq 39, cpu

Re: [PATCH V3 4/4] genirq/affinity: irq vector spread among online CPUs as far as possible

2018-04-04 Thread Ming Lei
On Wed, Apr 04, 2018 at 02:45:18PM +0200, Thomas Gleixner wrote: > On Wed, 4 Apr 2018, Thomas Gleixner wrote: > > I'm aware how that hw-queue stuff works. But that only works if the > > spreading algorithm makes the interrupts affine to offline/not-present CPUs > > when the block device is

Re: [PATCH V3 4/4] genirq/affinity: irq vector spread among online CPUs as far as possible

2018-04-04 Thread Ming Lei
On Wed, Apr 04, 2018 at 10:25:16AM +0200, Thomas Gleixner wrote: > On Wed, 4 Apr 2018, Ming Lei wrote: > > On Tue, Apr 03, 2018 at 03:32:21PM +0200, Thomas Gleixner wrote: > > > On Thu, 8 Mar 2018, Ming Lei wrote: > > > > 1) before 84676c1f21 ("genirq/affin

Re: [PATCH V3 4/4] genirq/affinity: irq vector spread among online CPUs as far as possible

2018-04-03 Thread Ming Lei
On Tue, Apr 03, 2018 at 03:32:21PM +0200, Thomas Gleixner wrote: > On Thu, 8 Mar 2018, Ming Lei wrote: > > 1) before 84676c1f21 ("genirq/affinity: assign vectors to all possible > > CPUs") > > irq 39, cpu list 0 > > irq 40, cpu list 1 > > ir

Re: [PATCH V3 0/4] genirq/affinity: irq vector spread among online CPUs as far as possible

2018-03-29 Thread Ming Lei
Hi Thomas, On Fri, Mar 09, 2018 at 04:08:19PM +0100, Thomas Gleixner wrote: > On Fri, 9 Mar 2018, Ming Lei wrote: > > On Fri, Mar 09, 2018 at 11:08:54AM +0100, Thomas Gleixner wrote: > > > > > So my understanding is that these irq patches are enhancements and > >

Re: 4.16-RC7 WARNING: CPU: 2 PID: 0 at block/blk-mq.c:1400 __blk_mq_delay_run_hw_queue

2018-03-27 Thread Ming Lei
ks held by swapper/0/0: > [    2.170658]  #0:  (&(>lock)->rlock){..-.}, at: [<b45eaf9e>] > dasd_block_tasklet+0x1cc/0x480 > [    2.170676]  #1:  (rcu_read_lock){}, at: [<bc7fa045>] > hctx_lock+0x34/0x110 > [    2.170750] Last Breaking-Event-Address: > [

Re: [RFC PATCH] nvme: avoid race-conditions when enabling devices

2018-03-21 Thread Ming Lei
On Wed, Mar 21, 2018 at 01:10:31PM +0100, Marta Rybczynska wrote: > > On Wed, Mar 21, 2018 at 12:00:49PM +0100, Marta Rybczynska wrote: > >> NVMe driver uses threads for the work at device reset, including enabling > >> the PCIe device. When multiple NVMe devices are initialized, their reset > >>

Re: [RFC PATCH] nvme: avoid race-conditions when enabling devices

2018-03-21 Thread Ming Lei
On Wed, Mar 21, 2018 at 12:00:49PM +0100, Marta Rybczynska wrote: > NVMe driver uses threads for the work at device reset, including enabling > the PCIe device. When multiple NVMe devices are initialized, their reset > works may be scheduled in parallel. Then pci_enable_device_mem can be > called

Re: [Possible REGRESSION, 4.16-rc4] Error updating SMART data during runtime and could not connect to lvmetad at some boot attempts

2018-03-13 Thread Ming Lei
On Tue, Mar 13, 2018 at 02:08:23PM +0100, Martin Steigerwald wrote: > Hans de Goede - 11.03.18, 15:37: > > Hi Martin, > > > > On 11-03-18 09:20, Martin Steigerwald wrote: > > > Hello. > > > > > > Since 4.16-rc4 (upgraded from 4.15.2 which worked) I have an issue > > > with SMART checks

Re: [PATCH V3] nvme-pci: assign separate irq vectors for adminq and ioq1

2018-03-13 Thread Ming Lei
queues + 1), > + PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, ); > + if (ret <= 0) > return -EIO; > - dev->max_qid = nr_io_queues; > + dev->num_vecs = ret; > + dev->max_qid = max(ret - 1, 1); > > /* >* Should investigate if there's a performance win from allocating > -- > 2.7.4 > Reviewed-by: Ming Lei <ming@redhat.com> Thanks, Ming

Re: [PATCH V3 0/4] genirq/affinity: irq vector spread among online CPUs as far as possible

2018-03-13 Thread Ming Lei
On Tue, Mar 13, 2018 at 09:38:41AM +0200, Artem Bityutskiy wrote: > On Tue, 2018-03-13 at 11:11 +0800, Dou Liyang wrote: > > I also > > met the situation that BIOS told to ACPI that it could support > > physical > > CPUs hotplug, But actually, there was no hardware slots in the > > machine. > >

Re: [PATCH V2] nvme-pci: assign separate irq vectors for adminq and ioq0

2018-03-12 Thread Ming Lei
On Fri, Mar 09, 2018 at 10:24:45AM -0700, Keith Busch wrote: > On Thu, Mar 08, 2018 at 08:42:20AM +0100, Christoph Hellwig wrote: > > > > So I suspect we'll need to go with a patch like this, just with a way > > better changelog. > > I have to agree this is required for that use case. I'll run

Re: [PATCH V3 0/4] genirq/affinity: irq vector spread among online CPUs as far as possible

2018-03-09 Thread Ming Lei
On Fri, Mar 09, 2018 at 11:08:54AM +0100, Thomas Gleixner wrote: > On Fri, 9 Mar 2018, Ming Lei wrote: > > On Fri, Mar 09, 2018 at 12:20:09AM +0100, Thomas Gleixner wrote: > > > On Thu, 8 Mar 2018, Ming Lei wrote: > > > > Actually, it isn't a real fix, the re

Re: [PATCH V3 0/4] genirq/affinity: irq vector spread among online CPUs as far as possible

2018-03-08 Thread Ming Lei
On Fri, Mar 09, 2018 at 09:00:08AM +0200, Artem Bityutskiy wrote: > On Fri, 2018-03-09 at 09:24 +0800, Ming Lei wrote: > > Hi Thomas, > > > > On Fri, Mar 09, 2018 at 12:20:09AM +0100, Thomas Gleixner wrote: > > > On Thu, 8 Mar 2018, Ming Lei wrote: > > > &g

Re: [PATCH V3 0/4] genirq/affinity: irq vector spread among online CPUs as far as possible

2018-03-08 Thread Ming Lei
Hi Thomas, On Fri, Mar 09, 2018 at 12:20:09AM +0100, Thomas Gleixner wrote: > On Thu, 8 Mar 2018, Ming Lei wrote: > > Actually, it isn't a real fix, the real one is in the following two: > > > > 0c20244d458e scsi: megaraid_sas: fix selection of reply queue > >

Re: [PATCH V3 0/4] genirq/affinity: irq vector spread among online CPUs as far as possible

2018-03-08 Thread Ming Lei
On Thu, Mar 08, 2018 at 03:18:33PM +0200, Artem Bityutskiy wrote: > On Thu, 2018-03-08 at 18:53 +0800, Ming Lei wrote: > > Hi, > > > > This patchset tries to spread among online CPUs as far as possible, so > > that we can avoid to allocate too less irq vectors

Re: [PATCH V4 2/5] nvme: add helper interface to flush in-flight requests

2018-03-08 Thread Ming Lei
r NVMe controller is resetted successfully, these request will be dispatched again. So please keep the name of 'cancel' or use sort of name. Thanks, Ming Lei

[PATCH V3 3/4] genirq/affinity: support to do irq vectors spread starting from any vector

2018-03-08 Thread Ming Lei
CPUs. Cc: Thomas Gleixner <t...@linutronix.de> Reviewed-by: Christoph Hellwig <h...@lst.de> Signed-off-by: Ming Lei <ming@redhat.com> --- kernel/irq/affinity.c | 23 +++ 1 file changed, 15 insertions(+), 8 deletions(-) diff --git a/kernel/irq/affin

[PATCH V3 4/4] genirq/affinity: irq vector spread among online CPUs as far as possible

2018-03-08 Thread Ming Lei
-by: Laurence Oberman <lober...@redhat.com> Signed-off-by: Ming Lei <ming@redhat.com> --- kernel/irq/affinity.c | 35 +-- 1 file changed, 29 insertions(+), 6 deletions(-) diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c index 616f040c5d02.

[PATCH V3 2/4] genirq/affinity: move actual irq vector spread into one helper

2018-03-08 Thread Ming Lei
No functional change, just prepare for converting to 2-stage irq vector spread. Cc: Thomas Gleixner <t...@linutronix.de> Reviewed-by: Christoph Hellwig <h...@lst.de> Signed-off-by: Ming Lei <ming@redhat.com> --- kernel/i

[PATCH V3 1/4] genirq/affinity: rename *node_to_possible_cpumask as *node_to_cpumask

2018-03-08 Thread Ming Lei
The following patches will introduce two stage irq spread for improving irq spread on all possible CPUs. No funtional change. Cc: Thomas Gleixner <t...@linutronix.de> Reviewed-by: Christoph Hellwig <h...@lst.de> Signed-off-by: Ming Lei <ming@redhat.com> --- kernel/i

[PATCH V3 0/4] genirq/affinity: irq vector spread among online CPUs as far as possible

2018-03-08 Thread Ming Lei
prep patch - add Reviewed-by tag Thanks Ming Ming Lei (4): genirq/affinity: rename *node_to_possible_cpumask as *node_to_cpumask genirq/affinity: move actual irq vector spread into one helper genirq/affinity: support to do irq vectors spread starting from any vector

Re: [PATCH V2 3/5] genirq/affinity: move actual irq vector spread into one helper

2018-03-08 Thread Ming Lei
rong git tree, please drop us a note to > help improve the system] > > url: > https://github.com/0day-ci/linux/commits/Ming-Lei/genirq-affinity-irq-vector-spread-among-online-CPUs-as-far-as-possible/20180305-184912 > config: i386-randconfig-a1-201809 (attached as .config) > c

[PATCH V2 5/5] genirq/affinity: irq vector spread among online CPUs as far as possible

2018-03-04 Thread Ming Lei
-by: Laurence Oberman <lober...@redhat.com> Signed-off-by: Ming Lei <ming@redhat.com> --- kernel/irq/affinity.c | 35 +-- 1 file changed, 29 insertions(+), 6 deletions(-) diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c index a8c5d07890a6.

[PATCH V2 4/5] genirq/affinity: support to do irq vectors spread starting from any vector

2018-03-04 Thread Ming Lei
CPUs. Cc: Thomas Gleixner <t...@linutronix.de> Reviewed-by: Christoph Hellwig <h...@lst.de> Signed-off-by: Ming Lei <ming@redhat.com> --- kernel/irq/affinity.c | 23 +++ 1 file changed, 15 insertions(+), 8 deletions(-) diff --git a/kernel/irq/affin

[PATCH V2 2/5] genirq/affinity: mark 'node_to_cpumask' as const for get_nodes_in_cpumask()

2018-03-04 Thread Ming Lei
Inside irq_create_affinity_masks(), once 'node_to_cpumask' is created, it is accessed read-only, so mark it as const for get_nodes_in_cpumask(). Cc: Thomas Gleixner <t...@linutronix.de> Cc: Christoph Hellwig <h...@lst.de> Signed-off-by: Ming Lei <ming@redhat.com> --- ker

[PATCH V2 0/5] genirq/affinity: irq vector spread among online CPUs as far as possible

2018-03-04 Thread Ming Lei
to be allocated, otherwise performance can be hurt in current assignment. V2: - address coments from Christoph - mark irq_build_affinity_masks as static - move constification of get_nodes_in_cpumask's parameter into one prep patch - add Reviewed-by tag Thanks Ming Ming

[PATCH V2 1/5] genirq/affinity: rename *node_to_possible_cpumask as *node_to_cpumask

2018-03-04 Thread Ming Lei
The following patches will introduce two stage irq spread for improving irq spread on all possible CPUs. No funtional change. Cc: Thomas Gleixner <t...@linutronix.de> Reviewed-by: Christoph Hellwig <h...@lst.de> Signed-off-by: Ming Lei <ming@redhat.com> --- kernel/i

[PATCH V2 3/5] genirq/affinity: move actual irq vector spread into one helper

2018-03-04 Thread Ming Lei
No functional change, just prepare for converting to 2-stage irq vector spread. Cc: Thomas Gleixner <t...@linutronix.de> Reviewed-by: Christoph Hellwig <h...@lst.de> Signed-off-by: Ming Lei <ming@redhat.com> --- kernel/i

Re: [PATCH V2] nvme-pci: assign separate irq vectors for adminq and ioq0

2018-03-01 Thread Ming Lei
On Wed, Feb 28, 2018 at 05:47:26PM +0100, Christoph Hellwig wrote: > Note that we originally allocated irqs this way, and Keith changed > it a while ago for good reasons. So I'd really like to see good > reasons for moving away from this, and some heuristics to figure > out which way to use.

Re: [PATCH BUGFIX V3] block, bfq: add requeue-request hook

2018-02-24 Thread Ming Lei
On Sat, Feb 24, 2018 at 08:54:31AM +0100, Paolo Valente wrote: > > > > Il giorno 23 feb 2018, alle ore 17:17, Ming Lei <ming@redhat.com> ha > > scritto: > > > > Hi Paolo, > > > > On Fri, Feb 23, 2018 at 04:41:36PM +0100, Paolo Valente wrot

Re: [PATCH BUGFIX V3] block, bfq: add requeue-request hook

2018-02-23 Thread Ming Lei
Hi Paolo, On Fri, Feb 23, 2018 at 04:41:36PM +0100, Paolo Valente wrote: > > > > Il giorno 23 feb 2018, alle ore 16:07, Ming Lei <ming@redhat.com> ha > > scritto: > > > > Hi Paolo, > > > > On Wed, Feb 07, 2018 at 10:19:20PM +0100, Paolo V

Re: [PATCH BUGFIX V3] block, bfq: add requeue-request hook

2018-02-23 Thread Ming Lei
Hi Paolo, On Wed, Feb 07, 2018 at 10:19:20PM +0100, Paolo Valente wrote: > Commit 'a6a252e64914 ("blk-mq-sched: decide how to handle flush rq via > RQF_FLUSH_SEQ")' makes all non-flush re-prepared requests for a device > be re-inserted into the active I/O scheduler for that device. As a No, this

Re: [PATCH] loop: Fix lost writes caused by missing flag

2018-02-21 Thread Ming Lei
.c > +++ b/drivers/block/loop.c > @@ -266,7 +266,7 @@ static int lo_write_bvec(struct file *file, struct > bio_vec *bvec, loff_t *ppos) > struct iov_iter i; > ssize_t bw; > > - iov_iter_bvec(&i, ITER_BVEC, bvec, 1, bvec->bv_len); > + iov_iter_bvec(&i, ITER_BVEC | WRITE, bvec, 1, bvec->bv_len); > > file_start_write(file); > bw = vfs_iter_write(file, &i, ppos, 0); > -- > 2.14.3 > Reviewed-by: Ming Lei <ming@redhat.com> -- Ming Lei
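The one-line fix quoted in this entry adds the WRITE direction bit when initializing the bvec iterator. A toy, userspace-only model of why the missing bit loses writes (the TOY_* names and struct are made up for illustration; this is not the kernel's iov_iter):

```c
#include <assert.h>
#include <string.h>

#define TOY_ITER_BVEC 0x04   /* iterator kind, analogous to ITER_BVEC */
#define TOY_WRITE     0x01   /* direction bit, analogous to WRITE */

struct toy_iter {
    int type;            /* kind + direction bits */
    const char *buf;
    size_t len;
};

/* Toy stand-in for a vfs-style iter write: data only moves when the
 * iterator carries the WRITE direction bit, so initializing it with
 * the iterator kind alone makes the write a silent no-op — the
 * "lost write" the patch title refers to. */
static long toy_iter_write(char *dst, const struct toy_iter *i)
{
    if (!(i->type & TOY_WRITE))
        return 0;        /* direction mismatch: nothing is written */
    memcpy(dst, i->buf, i->len);
    return (long)i->len;
}
```

In this model an iterator built with only TOY_ITER_BVEC returns 0 bytes written, while TOY_ITER_BVEC | TOY_WRITE actually copies the data, mirroring the before/after of the quoted diff.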

Re: [PATCH]nvme-pci: Fixes EEH failure on ppc

2018-02-06 Thread Ming Lei
On Tue, Feb 06, 2018 at 02:01:05PM -0600, wenxiong wrote: > On 2018-02-06 10:33, Keith Busch wrote: > > On Mon, Feb 05, 2018 at 03:49:40PM -0600, wenxi...@vmlinux.vnet.ibm.com > > wrote: > > > @@ -1189,6 +1183,12 @@ static enum blk_eh_timer_return > > > nvme_timeout(struct request *req, bool

[PATCH 2/5] genirq/affinity: move actual irq vector spread into one helper

2018-02-06 Thread Ming Lei
No functional change, just prepare for converting to 2-stage irq vector spread. Cc: Thomas Gleixner <t...@linutronix.de> Cc: Christoph Hellwig <h...@lst.de> Signed-off-by: Ming Lei <ming@redhat.com> --- kernel/irq/affinity.c | 99 +

[PATCH 3/5] genirq/affinity: support to do irq vectors spread starting from any vector

2018-02-06 Thread Ming Lei
CPUs. Cc: Thomas Gleixner <t...@linutronix.de> Cc: Christoph Hellwig <h...@lst.de> Signed-off-by: Ming Lei <ming@redhat.com> --- kernel/irq/affinity.c | 23 +++ 1 file changed, 15 insertions(+), 8 deletions(-) diff --git a/kernel/irq/affinity.c b/kernel/ir
