Re: [PATCH 2/9] virtio_pci: use shared interrupts for virtqueues

2017-02-06 Thread Jason Wang
On 02/06/2017 01:15, Christoph Hellwig wrote: This lets the IRQ layer handle dispatching IRQs to separate handlers for the case where we don't have per-VQ MSI-X vectors, and allows us to greatly simplify the code based on the assumption that we always have interrupt vector 0 (legacy INTx or config
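
The pattern under discussion, as a minimal sketch assuming the standard request_irq() and vring_interrupt() kernel APIs (the wrapper name and the registration fragment are illustrative, not the actual patch):

    /* One handler per virtqueue, all registered IRQF_SHARED on the same
     * vector; the IRQ core walks the shared handlers, and each returns
     * IRQ_NONE when its own queue has no work pending. */
    static irqreturn_t vp_one_vq_interrupt(int irq, void *opaque)
    {
            struct virtqueue *vq = opaque;

            return vring_interrupt(irq, vq);
    }

    err = request_irq(pci_irq_vector(pci_dev, 0), vp_one_vq_interrupt,
                      IRQF_SHARED, "virtio-vq", vq);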

Re: [PATCH 1/9] virtio_pci: remove struct virtio_pci_vq_info

2017-02-06 Thread Jason Wang
On 02/06/2017 01:15, Christoph Hellwig wrote: We don't really need struct virtio_pci_vq_info, as most fields in there are redundant:
- the vq backpointer is not strictly needed to start with
- the entry in the vqs list is not needed
- the generic virtqueue already has a list, we only need
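
For reference, the redundancy being described, sketched as the struct's essential shape (comments paraphrase the thread; this is illustrative, not copied from the tree):

    struct virtio_pci_vq_info {
            struct virtqueue *vq;      /* backpointer: callers already hold the vq */
            struct list_head node;     /* entry in the vqs list: the generic
                                        * struct virtqueue already carries its
                                        * own list membership */
            unsigned int msix_vector;  /* the one piece of state still needed */
    };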

RE: [PATCH] genhd: Do not hold event lock when scheduling workqueue elements

2017-02-06 Thread Dexuan Cui
booting the guest.
>
> With next-20170203 (mentioned in my mail last Friday), I got the same
> calltrace as Hannes.
>
> With today's linux-next (next-20170206), the calltrace actually changed
> to the below.
> [ 122.023036] ? remove_wait_queue+0x70/0x70
> [ 122.

Re: [lkp-robot] [scsi, block] 0dba1314d4: WARNING:at_fs/sysfs/dir.c:#sysfs_warn_dup

2017-02-06 Thread Dan Williams
On Mon, Feb 6, 2017 at 8:09 PM, Jens Axboe wrote: > On 02/06/2017 05:14 PM, James Bottomley wrote: >> On Sun, 2017-02-05 at 21:13 -0800, Dan Williams wrote: >>> On Sun, Feb 5, 2017 at 1:13 AM, Christoph Hellwig wrote: Dan, can you please quote your emails? I can't find any content

Re: [lkp-robot] [scsi, block] 0dba1314d4: WARNING:at_fs/sysfs/dir.c:#sysfs_warn_dup

2017-02-06 Thread Jens Axboe
On 02/06/2017 05:14 PM, James Bottomley wrote: > On Sun, 2017-02-05 at 21:13 -0800, Dan Williams wrote: >> On Sun, Feb 5, 2017 at 1:13 AM, Christoph Hellwig wrote: >>> Dan, >>> >>> can you please quote your emails? I can't find any content >>> in between all these quotes. >> >> Sorry, I'm using g

Re: [PATCH] genhd: Do not hold event lock when scheduling workqueue elements

2017-02-06 Thread Bart Van Assche
On Tue, 2017-02-07 at 02:23 +, Dexuan Cui wrote: > Any news on this thread? > > The issue is still blocking Linux from booting up normally in my test. :-( > > Have we identified the faulty patch? > If so, at least I can try to revert it to boot up. It's interesting that you have a reproducib

RE: [PATCH] genhd: Do not hold event lock when scheduling workqueue elements

2017-02-06 Thread Dexuan Cui
> From: linux-block-ow...@vger.kernel.org [mailto:linux-block-
> ow...@vger.kernel.org] On Behalf Of Dexuan Cui
> Sent: Friday, February 3, 2017 20:23
> To: Hannes Reinecke ; Bart Van Assche ; h...@suse.de; ax...@kernel.dk
> Cc: h...@lst.de; linux-ker...@vger.kernel.org; linux-block@vger.kernel.o

Re: [lkp-robot] [scsi, block] 0dba1314d4: WARNING:at_fs/sysfs/dir.c:#sysfs_warn_dup

2017-02-06 Thread James Bottomley
On Sun, 2017-02-05 at 21:13 -0800, Dan Williams wrote: > On Sun, Feb 5, 2017 at 1:13 AM, Christoph Hellwig wrote: > > Dan, > > > > can you please quote your emails? I can't find any content > > in between all these quotes. > > Sorry, I'm using gmail, but I'll switch to attaching the logs. > >

Re: [PATCH] blk-mq-sched: (un)register elevator when (un)registering queue

2017-02-06 Thread Jens Axboe
On 02/06/2017 01:52 PM, Omar Sandoval wrote: > From: Omar Sandoval > > I noticed that when booting with a default blk-mq I/O scheduler, the > /sys/block/*/queue/iosched directory was missing. However, switching > after boot did create the directory. This is because we skip the initial > elevator

[PATCH] blk-mq-sched: (un)register elevator when (un)registering queue

2017-02-06 Thread Omar Sandoval
From: Omar Sandoval I noticed that when booting with a default blk-mq I/O scheduler, the /sys/block/*/queue/iosched directory was missing. However, switching after boot did create the directory. This is because we skip the initial elevator register/unregister when we don't have a ->request_fn(),
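
The shape of the fix being described, as a simplified sketch (assuming the existing elv_register_queue() helper; the surrounding kobject registration is elided, and error unwinding is omitted):

    int blk_register_queue(struct gendisk *disk)
    {
            struct request_queue *q = disk->queue;
            int ret;

            /* ... register the queue kobject itself ... */

            /* Register the elevator's sysfs directory (queue/iosched)
             * whenever a scheduler is attached, instead of only for
             * legacy ->request_fn() queues. */
            if (q->elevator) {
                    ret = elv_register_queue(q);
                    if (ret)
                            return ret;
            }
            return 0;
    }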

Re: [PATCH v2] blk-mq-sched: separate mark hctx and queue restart operations

2017-02-06 Thread Omar Sandoval
On Mon, Feb 06, 2017 at 01:07:41PM -0700, Jens Axboe wrote: > On 02/06/2017 12:53 PM, Omar Sandoval wrote: > > On Mon, Feb 06, 2017 at 12:39:57PM -0700, Jens Axboe wrote: > >> On 02/06/2017 12:24 PM, Omar Sandoval wrote: > >>> From: Omar Sandoval > >>> > >>> In blk_mq_sched_dispatch_requests(), we

Re: [PATCH v2] blk-mq-sched: separate mark hctx and queue restart operations

2017-02-06 Thread Jens Axboe
On 02/06/2017 12:53 PM, Omar Sandoval wrote: > On Mon, Feb 06, 2017 at 12:39:57PM -0700, Jens Axboe wrote: >> On 02/06/2017 12:24 PM, Omar Sandoval wrote: >>> From: Omar Sandoval >>> >>> In blk_mq_sched_dispatch_requests(), we call blk_mq_sched_mark_restart() >>> after we dispatch requests left ov

Re: [PATCH v2] blk-mq-sched: separate mark hctx and queue restart operations

2017-02-06 Thread Omar Sandoval
On Mon, Feb 06, 2017 at 12:39:57PM -0700, Jens Axboe wrote: > On 02/06/2017 12:24 PM, Omar Sandoval wrote: > > From: Omar Sandoval > > > > In blk_mq_sched_dispatch_requests(), we call blk_mq_sched_mark_restart() > > after we dispatch requests left over on our hardware queue dispatch > > list. Thi

Re: [PATCH v2] blk-mq-sched: separate mark hctx and queue restart operations

2017-02-06 Thread Jens Axboe
On 02/06/2017 12:24 PM, Omar Sandoval wrote: > From: Omar Sandoval > > In blk_mq_sched_dispatch_requests(), we call blk_mq_sched_mark_restart() > after we dispatch requests left over on our hardware queue dispatch > list. This is so we'll go back and dispatch requests from the scheduler. > In thi

[PATCH v2] blk-mq-sched: separate mark hctx and queue restart operations

2017-02-06 Thread Omar Sandoval
From: Omar Sandoval In blk_mq_sched_dispatch_requests(), we call blk_mq_sched_mark_restart() after we dispatch requests left over on our hardware queue dispatch list. This is so we'll go back and dispatch requests from the scheduler. In this case, it's only necessary to restart the hardware queue
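
A sketch of the split the patch describes: one helper marks only the hardware queue for restart, the other marks the whole queue, so callers can pick the narrower operation when only the hctx needs revisiting (flag names follow the thread's terminology and may not match the tree exactly):

    static inline void blk_mq_sched_mark_restart_hctx(struct blk_mq_hw_ctx *hctx)
    {
            if (!test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
                    set_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
    }

    static inline void blk_mq_sched_mark_restart_queue(struct request_queue *q)
    {
            if (!test_bit(QUEUE_FLAG_RESTART, &q->queue_flags))
                    set_bit(QUEUE_FLAG_RESTART, &q->queue_flags);
    }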

Re: [PATCH 1/6] genirq: allow assigning affinity to present but not online CPUs

2017-02-06 Thread Christoph Hellwig
On Mon, Feb 06, 2017 at 12:03:05PM -0500, Keith Busch wrote: > Can we use the online CPUs and create a new hot-cpu notifier in the nvme > driver to free/reallocate as needed? We were doing that before blk-mq. Now > blk-mq can change the number of hardware contexts on a live queue, so we > can reintrod
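
The notifier idea floated here, sketched with the cpuhp state machine (the callback name and body are illustrative; this is not what the driver ended up doing):

    static int nvme_cpu_online(unsigned int cpu)
    {
            /* re-spread hardware contexts over the now-larger online mask */
            return 0;
    }

    ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "nvme:online",
                            nvme_cpu_online, NULL);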

Re: [PATCH 1/6] genirq: allow assigning affinity to present but not online CPUs

2017-02-06 Thread Keith Busch
On Sun, Feb 05, 2017 at 05:40:23PM +0100, Christoph Hellwig wrote: > Hi Joe, > > On Fri, Feb 03, 2017 at 08:58:09PM -0500, Joe Korty wrote: > > IIRC, some years ago I ran across a customer system where > > the #cpus_present was twice as big as #cpus_possible. > > > > Hyperthreading was turned off

Re: [PATCH] block: don't try Write Same from __blkdev_issue_zeroout

2017-02-06 Thread Jens Axboe
On 02/05/2017 10:10 AM, Christoph Hellwig wrote: > Write Same can return an error asynchronously if it turns out the > underlying SCSI device does not support Write Same, which makes a > proper fallback to other methods in __blkdev_issue_zeroout impossible. > Thus only issue a Write Same from blkde
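
The resulting ordering, as a simplified sketch (the real functions take more arguments and do plugging; the fallback helper name here is hypothetical):

    int blkdev_issue_zeroout_sketch(struct block_device *bdev, sector_t sector,
                                    sector_t nr_sects, gfp_t gfp_mask)
    {
            /* Try Write Same once, at the top level, where a synchronous
             * failure can still be observed ... */
            if (bdev_write_same(bdev) &&
                blkdev_issue_write_same(bdev, sector, nr_sects, gfp_mask,
                                        ZERO_PAGE(0)) == 0)
                    return 0;

            /* ... and fall back to explicitly writing zero pages, which
             * __blkdev_issue_zeroout is now solely responsible for. */
            return zero_fill_bios(bdev, sector, nr_sects, gfp_mask); /* hypothetical */
    }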

[PATCH] nbd: freeze the queue before making changes

2017-02-06 Thread Josef Bacik
The way we make changes to the NBD device is inherently racy, as we could be in the middle of a request and suddenly change the number of connections. In practice this isn't a big deal, but with timeouts we have to take the config_lock in order to protect ourselves, since it is important those val
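
The pattern being added, as a minimal sketch (field names follow the nbd driver loosely and are illustrative; blk_mq_freeze_queue()/blk_mq_unfreeze_queue() are the standard APIs):

    blk_mq_freeze_queue(nbd->disk->queue);   /* waits out in-flight requests */
    mutex_lock(&nbd->config_lock);
    nbd->tag_set.timeout = timeout * HZ;     /* safe: nothing is in flight */
    mutex_unlock(&nbd->config_lock);
    blk_mq_unfreeze_queue(nbd->disk->queue);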

Re: [PATCH 0/4 v2] BDI lifetime fix

2017-02-06 Thread Thiago Jung Bauermann
On Monday, February 6, 2017 at 12:48:42 BRST, Thiago Jung Bauermann wrote:
> 216 static inline void wb_get(struct bdi_writeback *wb)
> 217 {
> 218         if (wb != &wb->bdi->wb)
> 219                 percpu_ref_get(&wb->refcnt);
> 220 }
>
> So it looks like wb->bdi is NULL.

Sorry, looking a little

Re: [PATCH 0/4 v2] BDI lifetime fix

2017-02-06 Thread Thiago Jung Bauermann
Hello, On Tuesday, January 31, 2017 at 13:54:25 BRST, Jan Kara wrote: > this is a second version of the patch series that attempts to solve the > problems with the lifetime of a backing_dev_info structure. Currently it > lives inside the request_queue structure and thus it gets destroyed as soon as >
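
The direction of the series, reduced to a sketch (the refcount field and helper name are illustrative of the approach, not the actual patches): give backing_dev_info a refcounted lifetime of its own rather than embedding it in request_queue.

    struct backing_dev_info *bdi_alloc_sketch(gfp_t gfp)
    {
            struct backing_dev_info *bdi = kzalloc(sizeof(*bdi), gfp);

            if (bdi)
                    kref_init(&bdi->refcnt);  /* illustrative field */
            return bdi;
    }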

Re: [PATCH 4/4] md: fast clone bio in bio_clone_mddev()

2017-02-06 Thread Christoph Hellwig
On Mon, Feb 06, 2017 at 06:43:32PM +0800, Ming Lei wrote: > In theory, ->bio_set still might be NULL in case of failed memory allocation, > please see md_run(). And that's something that should be fixed. Silently not having a mempool is very bad behavior.

Re: [PATCH 4/4] md: fast clone bio in bio_clone_mddev()

2017-02-06 Thread Ming Lei
On Mon, Feb 6, 2017 at 4:54 PM, Christoph Hellwig wrote: > On Sun, Feb 05, 2017 at 02:22:13PM +0800, Ming Lei wrote: >> Firstly, bio_clone_mddev() is used in raid normal I/O and isn't >> in the resync I/O path. >> >> Secondly, all the direct access to the bvec table in raid happens on >> resync I/O except f

Re: [PATCH 4/4] md: fast clone bio in bio_clone_mddev()

2017-02-06 Thread Christoph Hellwig
On Sun, Feb 05, 2017 at 02:22:13PM +0800, Ming Lei wrote: > Firstly, bio_clone_mddev() is used in raid normal I/O and isn't > in the resync I/O path. > > Secondly, all the direct access to the bvec table in raid happens on > resync I/O except for write behind of raid1, in which we still > use bio_clone() fo
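
The fast-clone idea under discussion, as a sketch (the wrapper name is hypothetical; bio_clone_fast() and bio_clone() are standard block-layer APIs):

    static struct bio *md_fast_clone_sketch(struct bio *bio, gfp_t gfp_mask,
                                            struct mddev *mddev)
    {
            /* Fast clone shares the bvec table -- fine for normal I/O,
             * since raid only touches bvecs directly on resync paths. */
            if (mddev && mddev->bio_set)
                    return bio_clone_fast(bio, gfp_mask, mddev->bio_set);

            /* The mempool allocation failed at md_run() time; falling
             * back silently here is exactly what Christoph objects to
             * above. */
            return bio_clone(bio, gfp_mask);
    }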

Re: [PATCH 2/4] md: introduce bio_clone_slow_mddev_partial()

2017-02-06 Thread Christoph Hellwig
> +struct bio *bio_clone_slow_mddev_partial(struct bio *bio, gfp_t gfp_mask,
> +                                         struct mddev *mddev, int offset,
> +                                         int size)
> +{
> +        struct bio_set *bs;
> +
> +        if (!mddev || !mddev->bio_set)
> +                bs =