On 2017-02-06 01:15, Christoph Hellwig wrote:
This lets the IRQ layer handle dispatching IRQs to separate handlers for the
case where we don't have per-VQ MSI-X vectors, and allows us to greatly
simplify the code based on the assumption that we always have interrupt
vector 0 (legacy INTx or config
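A minimal sketch of the shared-vector dispatch idea described in this snippet, assuming one interrupt line shared by the config-change handler and every virtqueue; the handler name, the vp_dev layout, and the registration call site are illustrative rather than the actual patch:

static irqreturn_t vp_shared_interrupt(int irq, void *opaque)
{
	struct virtio_pci_device *vp_dev = opaque;	/* illustrative type/layout */
	struct virtqueue *vq;
	irqreturn_t ret = IRQ_NONE;

	/* let each ring decide whether it actually raised this interrupt */
	list_for_each_entry(vq, &vp_dev->vdev.vqs, list)
		if (vring_interrupt(irq, vq) == IRQ_HANDLED)
			ret = IRQ_HANDLED;
	return ret;
}

/* during setup: vector 0 (MSI-X slot 0 or legacy INTx) is always available */
err = request_irq(pci_irq_vector(pci_dev, 0), vp_shared_interrupt,
		  IRQF_SHARED, "virtio-pci", vp_dev);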
On 2017-02-06 01:15, Christoph Hellwig wrote:
We don't really need struct virtio_pci_vq_info, as most fields in there
are redundant:
- the vq backpointer is not strictly needed to start with
- the entry in the vqs list is not needed - the generic virtqueue already
has a list, we only need
booting the guest.
>
> With next-20170203 (mentioned in my mail last Friday), I got the same
> calltrace as Hannes.
>
> With today's linux-next (next-20170206), actually the calltrace changed to
> the below.
> [ 122.023036] ? remove_wait_queue+0x70/0x70
> [ 122.
On Mon, Feb 6, 2017 at 8:09 PM, Jens Axboe wrote:
> On 02/06/2017 05:14 PM, James Bottomley wrote:
>> On Sun, 2017-02-05 at 21:13 -0800, Dan Williams wrote:
>>> On Sun, Feb 5, 2017 at 1:13 AM, Christoph Hellwig wrote:
Dan,
can you please quote your emails? I can't find any content
On 02/06/2017 05:14 PM, James Bottomley wrote:
> On Sun, 2017-02-05 at 21:13 -0800, Dan Williams wrote:
>> On Sun, Feb 5, 2017 at 1:13 AM, Christoph Hellwig wrote:
>>> Dan,
>>>
>>> can you please quote your emails? I can't find any content
>>> inbetween all these quotes.
>>
>> Sorry, I'm using gmail, but I'll switch to attaching the logs.
On Tue, 2017-02-07 at 02:23, Dexuan Cui wrote:
> Any news on this thread?
>
> The issue is still blocking Linux from booting up normally in my test. :-(
>
> Have we identified the faulty patch?
> If so, at least I can try to revert it to boot up.
It's interesting that you have a reproducib
> From: linux-block-ow...@vger.kernel.org [mailto:linux-block-
> ow...@vger.kernel.org] On Behalf Of Dexuan Cui
> Sent: Friday, February 3, 2017 20:23
> To: Hannes Reinecke ; Bart Van Assche
> ; h...@suse.de; ax...@kernel.dk
> Cc: h...@lst.de; linux-ker...@vger.kernel.org; linux-block@vger.kernel.o
On Sun, 2017-02-05 at 21:13 -0800, Dan Williams wrote:
> On Sun, Feb 5, 2017 at 1:13 AM, Christoph Hellwig wrote:
> > Dan,
> >
> > can you please quote your emails? I can't find any content
> > inbetween all these quotes.
>
> Sorry, I'm using gmail, but I'll switch to attaching the logs.
>
>
On 02/06/2017 01:52 PM, Omar Sandoval wrote:
> From: Omar Sandoval
>
> I noticed that when booting with a default blk-mq I/O scheduler, the
> /sys/block/*/queue/iosched directory was missing. However, switching
> after boot did create the directory. This is because we skip the initial
> elevator register/unregister when we don't have a ->request_fn(),
From: Omar Sandoval
I noticed that when booting with a default blk-mq I/O scheduler, the
/sys/block/*/queue/iosched directory was missing. However, switching
after boot did create the directory. This is because we skip the initial
elevator register/unregister when we don't have a ->request_fn(),
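A hedged sketch of the fix this snippet describes, assuming the elevator sysfs registration simply stops being gated on a legacy ->request_fn; blk_register_queue_sketch() is a simplified stand-in, not the merged patch:

int blk_register_queue_sketch(struct gendisk *disk)
{
	struct request_queue *q = disk->queue;
	int ret;

	if (q->mq_ops)
		blk_mq_register_dev(disk_to_dev(disk), q);

	/*
	 * Previously this step was skipped when q->request_fn was NULL,
	 * which is exactly the blk-mq case, so a default scheduler chosen
	 * at boot never got its /sys/block/<dev>/queue/iosched directory.
	 */
	if (q->elevator) {
		ret = elv_register_queue(q);
		if (ret)
			return ret;
	}
	return 0;
}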
On Mon, Feb 06, 2017 at 01:07:41PM -0700, Jens Axboe wrote:
> On 02/06/2017 12:53 PM, Omar Sandoval wrote:
> > On Mon, Feb 06, 2017 at 12:39:57PM -0700, Jens Axboe wrote:
> >> On 02/06/2017 12:24 PM, Omar Sandoval wrote:
> >>> From: Omar Sandoval
> >>>
> >>> In blk_mq_sched_dispatch_requests(), we call blk_mq_sched_mark_restart()
On 02/06/2017 12:53 PM, Omar Sandoval wrote:
> On Mon, Feb 06, 2017 at 12:39:57PM -0700, Jens Axboe wrote:
>> On 02/06/2017 12:24 PM, Omar Sandoval wrote:
>>> From: Omar Sandoval
>>>
>>> In blk_mq_sched_dispatch_requests(), we call blk_mq_sched_mark_restart()
>>> after we dispatch requests left over on our hardware queue dispatch
On Mon, Feb 06, 2017 at 12:39:57PM -0700, Jens Axboe wrote:
> On 02/06/2017 12:24 PM, Omar Sandoval wrote:
> > From: Omar Sandoval
> >
> > In blk_mq_sched_dispatch_requests(), we call blk_mq_sched_mark_restart()
> > after we dispatch requests left over on our hardware queue dispatch
> > list. This is so we'll go back and dispatch requests from the scheduler.
On 02/06/2017 12:24 PM, Omar Sandoval wrote:
> From: Omar Sandoval
>
> In blk_mq_sched_dispatch_requests(), we call blk_mq_sched_mark_restart()
> after we dispatch requests left over on our hardware queue dispatch
> list. This is so we'll go back and dispatch requests from the scheduler.
> In this case, it's only necessary to restart the hardware queue
From: Omar Sandoval
In blk_mq_sched_dispatch_requests(), we call blk_mq_sched_mark_restart()
after we dispatch requests left over on our hardware queue dispatch
list. This is so we'll go back and dispatch requests from the scheduler.
In this case, it's only necessary to restart the hardware queue
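For context, a simplified sketch of the mark-and-restart mechanism this description refers to; every name below (sched_mark_restart(), RESTART, run_hw_queue(), struct hctx) is an illustrative stand-in, not the kernel's actual identifier:

static void sched_dispatch_requests(struct hctx *hctx)
{
	/* flush requests previously stranded on the hctx dispatch list */
	dispatch_leftovers(hctx);

	/* note that this hctx must be re-run once a request completes */
	sched_mark_restart(hctx);	/* e.g. set_bit(RESTART, &hctx->state) */

	/* then pull fresh requests from the I/O scheduler */
	dispatch_from_scheduler(hctx);
}

static void sched_completed_request(struct hctx *hctx)
{
	/* only the marked hctx needs re-running, not every hardware queue */
	if (test_and_clear_bit(RESTART, &hctx->state))
		run_hw_queue(hctx);
}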
On Mon, Feb 06, 2017 at 12:03:05PM -0500, Keith Busch wrote:
> Can we use the online CPUs and create a new hot-cpu notifier to the nvme
> driver to free/reallocate as needed? We were doing that before blk-mq. Now
> blk-mq can change the number of hardware contexts on a live queue, so we
> can reintrod
On Sun, Feb 05, 2017 at 05:40:23PM +0100, Christoph Hellwig wrote:
> Hi Joe,
>
> On Fri, Feb 03, 2017 at 08:58:09PM -0500, Joe Korty wrote:
> > IIRC, some years ago I ran across a customer system where
> > the #cpus_present was twice as big as #cpus_possible.
> >
> > Hyperthreading was turned off
On 02/05/2017 10:10 AM, Christoph Hellwig wrote:
> Write Same can return an error asynchronously if it turns out the
> underlying SCSI device does not support Write Same, which makes a
> proper fallback to other methods in __blkdev_issue_zeroout impossible.
> Thus only issue a Write Same from blkde
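A hedged sketch of the fallback ordering being argued for here: because a Write Same failure may only be reported at completion time, the fallback has to live in a caller that waits for the result; write_zero_pages() is a hypothetical stand-in for the plain zero-writing path:

int zeroout_with_fallback(struct block_device *bdev, sector_t sector,
			  sector_t nr_sects, gfp_t gfp_mask)
{
	int ret = -EOPNOTSUPP;

	if (bdev_write_same(bdev))
		ret = blkdev_issue_write_same(bdev, sector, nr_sects,
					      gfp_mask, ZERO_PAGE(0));

	/*
	 * Since we waited for the Write Same to finish, an asynchronously
	 * reported error is visible here and we can still fall back.
	 */
	if (ret)
		ret = write_zero_pages(bdev, sector, nr_sects, gfp_mask);
	return ret;
}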
The way we make changes to the NBD device is inherently racey, as we
could be in the middle of a request and suddenly change the number of
connections. In practice this isn't a big deal, but with timeouts we
have to take the config_lock in order to protect ourselves since it is
important those val
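A minimal sketch of the locking pattern described above, assuming a mutex named config_lock guards the connection configuration; the function and field names are illustrative rather than the nbd driver's exact code:

static enum blk_eh_timer_return nbd_xmit_timeout_sketch(struct request *req)
{
	struct nbd_device *nbd = req->q->queuedata;

	mutex_lock(&nbd->config_lock);
	/* the number of connections cannot change while we hold the lock */
	shutdown_all_sockets(nbd);
	mutex_unlock(&nbd->config_lock);

	return BLK_EH_HANDLED;
}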
On Monday, 6 February 2017, 12:48:42 BRST, Thiago Jung Bauermann wrote:
> 216 static inline void wb_get(struct bdi_writeback *wb)
> 217 {
> 218         if (wb != &wb->bdi->wb)
> 219                 percpu_ref_get(&wb->refcnt);
> 220 }
>
> So it looks like wb->bdi is NULL.
Sorry, looking a little
Hello,
On Tuesday, 31 January 2017, 13:54:25 BRST, Jan Kara wrote:
> this is a second version of the patch series that attempts to solve the
> problems with the lifetime of a backing_dev_info structure. Currently it
> lives inside request_queue structure and thus it gets destroyed as soon as
>
On Mon, Feb 06, 2017 at 06:43:32PM +0800, Ming Lei wrote:
> In theory, ->bio_set still might be NULL in case of failed memory allocation,
> please see md_run().
And that's something that should be fixed. Silently not having a mempool
is very bad behavior.
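A hedged sketch of the behavior being asked for, assuming md_run() should simply fail instead of continuing without a bio_set; the helper name is illustrative:

static int md_setup_bio_set(struct mddev *mddev)
{
	if (mddev->bio_set)
		return 0;

	mddev->bio_set = bioset_create(BIO_POOL_SIZE, 0);
	if (!mddev->bio_set)
		return -ENOMEM;	/* propagate the failure instead of silently degrading */

	return 0;
}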
On Mon, Feb 6, 2017 at 4:54 PM, Christoph Hellwig wrote:
> On Sun, Feb 05, 2017 at 02:22:13PM +0800, Ming Lei wrote:
>> Firstly bio_clone_mddev() is used in raid normal I/O and isn't
>> in the resync I/O path.
>>
>> Secondly all the direct access to the bvec table in raid happens on
>> resync I/O except f
On Sun, Feb 05, 2017 at 02:22:13PM +0800, Ming Lei wrote:
> Firstly bio_clone_mddev() is used in raid normal I/O and isn't
> in the resync I/O path.
>
> Secondly all the direct access to the bvec table in raid happens on
> resync I/O except for write behind of raid1, in which we still
> use bio_clone() fo
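For reference, a hedged illustration of the distinction being discussed, under the assumption that only paths which modify the bvec table need a full clone; the wrapper names are illustrative, not md code:

static struct bio *clone_for_normal_io(struct bio *bio, struct bio_set *bs)
{
	/* shares bi_io_vec with @bio - fine when the table is never touched */
	return bio_clone_fast(bio, GFP_NOIO, bs);
}

static struct bio *clone_for_write_behind(struct bio *bio, struct bio_set *bs)
{
	/* full copy of the bvec table, needed e.g. for raid1 write-behind */
	return bio_clone_bioset(bio, GFP_NOIO, bs);
}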
> +struct bio *bio_clone_slow_mddev_partial(struct bio *bio, gfp_t gfp_mask,
> +                                         struct mddev *mddev, int offset,
> +                                         int size)
> +{
> +        struct bio_set *bs;
> +
> +        if (!mddev || !mddev->bio_set)
> +                bs =