On Fri, May 11, 2018 at 08:22:08AM +0200, Christoph Hellwig wrote:
> On Thu, May 10, 2018 at 08:13:03AM -0700, Darrick J. Wong wrote:
> > I ran xfstests on this for fun last night but hung in g/095:
> >
> > FSTYP -- xfs (debug)
> > PLATFORM -- Linux/x86_64 submarine-djwong-mtr01 4.17.
On Thu, May 10, 2018 at 03:49:53PM -0600, Andreas Dilger wrote:
> Would it make sense to change the bio_add_page() and bio_add_pc_page()
> to use the more common convention instead of continuing the spread of
> this non-standard calling convention? This is doubly problematic since
> "off" and "len
On Thu, May 10, 2018 at 08:08:38AM -0700, Darrick J. Wong wrote:
> > > > + sector_t *bno = data;
> > > > +
> > > > + if (iomap->type == IOMAP_MAPPED)
> > > > + *bno = (iomap->addr + pos - iomap->offset) >>
> > > > inode->i_blkbits;
> > >
> > > Does this need to be carefu
On Thu, May 10, 2018 at 04:52:00PM +0800, Ming Lei wrote:
> On Wed, May 9, 2018 at 3:47 PM, Christoph Hellwig wrote:
> > For the upcoming removal of buffer heads in XFS we need to keep track of
> > the number of outstanding writeback requests per page. For this we need
> > to know if bio_add_page
On Thu, May 10, 2018 at 08:13:03AM -0700, Darrick J. Wong wrote:
> I ran xfstests on this for fun last night but hung in g/095:
>
> FSTYP -- xfs (debug)
> PLATFORM -- Linux/x86_64 submarine-djwong-mtr01 4.17.0-rc4-djw
> MKFS_OPTIONS -- -f -m reflink=1,rmapbt=1, -i sparse=1, -b size=1
On Fri, May 11, 2018 at 5:05 AM, Keith Busch wrote:
> On Fri, May 11, 2018 at 04:52:11AM +0800, Ming Lei wrote:
>> Hi Keith,
>>
>> On Tue, May 8, 2018 at 11:30 PM, Keith Busch wrote:
>> > On Sat, Apr 28, 2018 at 11:50:17AM +0800, Ming Lei wrote:
>> >> This sync may be raced with one timed-out req
On Thu, May 10, 2018 at 04:43:57PM -0600, Keith Busch wrote:
> On Fri, May 11, 2018 at 06:03:59AM +0800, Ming Lei wrote:
> > Sorry, forgot to mention: it isn't enough to simply sync timeout inside
> > reset().
> >
> > Another tricky thing is about freeze & unfreeze, now freeze is done in
> > nvme
On Fri, May 11, 2018 at 06:03:59AM +0800, Ming Lei wrote:
> Sorry, forgot to mention: it isn't enough to simply sync timeout inside
> reset().
>
> Another tricky thing is about freeze & unfreeze, now freeze is done in
> nvme_dev_disable(), and unfreeze is done in nvme_reset_work. That means
> we
Hi Laurence,
Thanks a lot for your quick test!
On Fri, May 11, 2018 at 5:59 AM, Laurence Oberman wrote:
> On Thu, 2018-05-10 at 18:28 +0800, Ming Lei wrote:
>> On Sat, May 05, 2018 at 07:11:33PM -0400, Laurence Oberman wrote:
>> > On Sat, 2018-05-05 at 21:58 +0800, Ming Lei wrote:
>> > > Hi,
On Fri, May 11, 2018 at 5:18 AM, Keith Busch wrote:
> On Fri, May 11, 2018 at 05:10:40AM +0800, Ming Lei wrote:
>> On Fri, May 11, 2018 at 5:05 AM, Keith Busch wrote:
>> > On Fri, May 11, 2018 at 04:52:11AM +0800, Ming Lei wrote:
>> >> Hi Keith,
>> >>
>> >> On Tue, May 8, 2018 at 11:30 PM, Kei
On Thu, 2018-05-10 at 18:28 +0800, Ming Lei wrote:
> On Sat, May 05, 2018 at 07:11:33PM -0400, Laurence Oberman wrote:
> > On Sat, 2018-05-05 at 21:58 +0800, Ming Lei wrote:
> > > Hi,
> > >
> > > The 1st patch introduces blk_quiesce_timeout() and
> > > blk_unquiesce_timeout()
> > > for NVMe, meant
On Thu, May 10, 2018 at 03:44:41PM -0600, Keith Busch wrote:
> On Fri, May 11, 2018 at 05:24:46AM +0800, Ming Lei wrote:
> > Could you share the link with me?
>
> The diff was in this reply here:
>
> http://lists.infradead.org/pipermail/linux-nvme/2018-April/017019.html
>
> > Firstly, the previous nv
On May 10, 2018, at 12:40 AM, Christoph Hellwig wrote:
>
> On Wed, May 09, 2018 at 08:12:43AM -0700, Matthew Wilcox wrote:
>> (page, len, off) is a bit weird to me. Usually we do (page, off, len).
>
> That's what I'd usually do, too. But this odd convention is what
> bio_add_page uses, so I de
On Fri, May 11, 2018 at 05:24:46AM +0800, Ming Lei wrote:
> Could you share the link with me?
The diff was in this reply here:
http://lists.infradead.org/pipermail/linux-nvme/2018-April/017019.html
> Firstly, the previous nvme_sync_queues() won't work reliably, so this
> patch introduces blk_unquiesc
On Thu, May 10, 2018 at 03:18:29PM -0600, Keith Busch wrote:
> On Fri, May 11, 2018 at 05:10:40AM +0800, Ming Lei wrote:
> > On Fri, May 11, 2018 at 5:05 AM, Keith Busch wrote:
> > > On Fri, May 11, 2018 at 04:52:11AM +0800, Ming Lei wrote:
> > >> Hi Keith,
> > >>
> > >> On Tue, May 8, 2018 at
On Fri, May 11, 2018 at 05:10:40AM +0800, Ming Lei wrote:
> On Fri, May 11, 2018 at 5:05 AM, Keith Busch wrote:
> > On Fri, May 11, 2018 at 04:52:11AM +0800, Ming Lei wrote:
> >> Hi Keith,
> >>
> >> On Tue, May 8, 2018 at 11:30 PM, Keith Busch wrote:
> >> > On Sat, Apr 28, 2018 at 11:50:17AM +0
On Fri, May 11, 2018 at 04:52:11AM +0800, Ming Lei wrote:
> Hi Keith,
>
> On Tue, May 8, 2018 at 11:30 PM, Keith Busch wrote:
> > On Sat, Apr 28, 2018 at 11:50:17AM +0800, Ming Lei wrote:
> >> This sync may be raced with one timed-out request, which may be handled
> >> as BLK_EH_HANDLED or BLK_EH
On Thu, May 10, 2018 at 03:01:04PM +, Bart Van Assche wrote:
> On Sat, 2018-05-05 at 21:58 +0800, Ming Lei wrote:
> > Turns out the current way can't drain timeout completely because mod_timer()
> > can be triggered in the work func, which can be just run inside the synced
> > timeout work:
> >
On Mon, May 07, 2018 at 08:04:18AM -0700, James Smart wrote:
>
>
> On 5/5/2018 6:59 AM, Ming Lei wrote:
> > --- a/drivers/nvme/host/pci.c
> > +++ b/drivers/nvme/host/pci.c
> > @@ -2365,14 +2365,14 @@ static void nvme_remove_dead_ctrl(struct nvme_dev
> > *dev, int status)
> > nvme_put
Hi Keith,
On Tue, May 8, 2018 at 11:30 PM, Keith Busch wrote:
> On Sat, Apr 28, 2018 at 11:50:17AM +0800, Ming Lei wrote:
>> This sync may be raced with one timed-out request, which may be handled
>> as BLK_EH_HANDLED or BLK_EH_RESET_TIMER, so the above sync queues can't
>> work reliably.
>
> Min
On Thu, May 10, 2018 at 01:10:15PM -0600, Alex Williamson wrote:
> On Thu, 10 May 2018 18:41:09 +
> "Stephen Bates" wrote:
> > >The reason is that GPUs are giving up on PCIe (see all the specialized links like
> > >NVLink that are popping up in the GPU space). So for fast GPU interconnect
> >
On Thu, 10 May 2018 18:41:09 +
"Stephen Bates" wrote:
> >The reason is that GPUs are giving up on PCIe (see all the specialized links like
> >NVLink that are popping up in the GPU space). So for fast GPU interconnect
> >we have these new links.
>
> I look forward to Nvidia open-licensin
On 10/05/18 12:41 PM, Stephen Bates wrote:
> Hi Jerome
>
>>Note that on GPUs we would not rely on ATS for peer to peer. Some parts
>>of the GPU (DMA engines) do not necessarily support ATS. Yet those
>>are the parts likely to be used in peer to peer.
>
> OK this is good to know. I agree
Hi Jerome
>Hope this helps in understanding the big picture. I oversimplify things and
>the devil is in the details.
This was a great primer thanks for putting it together. An LWN.net article
perhaps ;-)??
Stephen
Hi Jerome
>Note that on GPUs we would not rely on ATS for peer to peer. Some parts
>of the GPU (DMA engines) do not necessarily support ATS. Yet those
>are the parts likely to be used in peer to peer.
OK this is good to know. I agree the DMA engine is probably one of the GPU
components mos
Recently the blk-mq timeout handling code was reworked. See also Tejun
Heo, "[PATCHSET v4] blk-mq: reimplement timeout handling", 08 Jan 2018
(https://www.mail-archive.com/linux-block@vger.kernel.org/msg16985.html).
This patch reworks the blk-mq timeout handling code again. The timeout
handling co
On 10/05/18 11:11 AM, Stephen Bates wrote:
>> Not to me. In the p2pdma code we specifically program DMA engines with
>> the PCI bus address.
>
> Ah yes of course. Brain fart on my part. We are not programming the P2PDMA
> initiator with an IOVA but with the PCI bus address...
>
>> So regardl
> Not to me. In the p2pdma code we specifically program DMA engines with
> the PCI bus address.
Ah yes of course. Brain fart on my part. We are not programming the P2PDMA
initiator with an IOVA but with the PCI bus address...
> So regardless of whether we are using the IOMMU or
> not, the packe
On 5/10/18 11:02 AM, Omar Sandoval wrote:
> On Thu, May 10, 2018 at 10:24:24AM -0600, Jens Axboe wrote:
>> From: Omar Sandoval
>>
>> Make sure the user passed the right value to
>> sbitmap_queue_min_shallow_depth().
>
> An unlucky bisect that lands between this change and the BFQ/Kyber
> changes
On 5/10/18 11:01 AM, Omar Sandoval wrote:
> On Thu, May 10, 2018 at 10:24:23AM -0600, Jens Axboe wrote:
>> From: Omar Sandoval
>>
>> The sbitmap queue wake batch is calculated such that once allocations
>> start blocking, all of the bits which are already allocated must be
>> enough to fulfill the
On Thu, May 10, 2018 at 10:24:26AM -0600, Jens Axboe wrote:
> We don't expect the async depth to be smaller than the wake batch
> count for sbitmap, but just in case, inform sbitmap of what shallow
> depth kyber may use.
>
> Acked-by: Paolo Valente
Reviewed-by: Omar Sandoval
> Signed-off-by: J
On Thu, May 10, 2018 at 10:24:25AM -0600, Jens Axboe wrote:
> If our shallow depth is smaller than the wake batching of sbitmap,
> we can introduce hangs. Ensure that sbitmap knows how low we'll go.
>
> Acked-by: Paolo Valente
Reviewed-by: Omar Sandoval
> Signed-off-by: Jens Axboe
> ---
> bl
On Thu, May 10, 2018 at 10:24:24AM -0600, Jens Axboe wrote:
> From: Omar Sandoval
>
> Make sure the user passed the right value to
> sbitmap_queue_min_shallow_depth().
An unlucky bisect that lands between this change and the BFQ/Kyber
changes is going to trigger this warning. We should have it a
On Thu, May 10, 2018 at 10:24:23AM -0600, Jens Axboe wrote:
> From: Omar Sandoval
>
> The sbitmap queue wake batch is calculated such that once allocations
> start blocking, all of the bits which are already allocated must be
> enough to fulfill the batch counters of all of the waitqueues. Howeve
On Thu, May 10, 2018 at 10:24:22AM -0600, Jens Axboe wrote:
> bfqd->sb_shift was an attempt to cache the sbitmap queue
> shift, but we don't need it, as it never changes. Kill it with fire.
>
> Acked-by: Paolo Valente
Reviewed-by: Omar Sandoval
> Signed-off-by: Jens Axboe
> ---
> b
On Thu, May 10, 2018 at 10:24:21AM -0600, Jens Axboe wrote:
> It doesn't change, so don't put it in the per-IO hot path.
>
> Acked-by: Paolo Valente
Reviewed-by: Omar Sandoval
> Signed-off-by: Jens Axboe
> ---
> block/bfq-iosched.c | 97
> +++-
On Thu, May 10, 2018 at 10:24:20AM -0600, Jens Axboe wrote:
> Reserved tags are used for error handling, we don't need to
> care about them for regular IO. The core won't call us for these
> anyway.
>
> Acked-by: Paolo Valente
Reviewed-by: Omar Sandoval
> Signed-off-by: Jens Axboe
> ---
> bl
On Thu, May 10, 2018 at 10:24:19AM -0600, Jens Axboe wrote:
> It's not useful, they are internal and/or error handling recovery
> commands.
>
> Acked-by: Paolo Valente
Reviewed-by: Omar Sandoval
> Signed-off-by: Jens Axboe
> ---
> block/blk-mq.c | 6 --
> 1 file changed, 4 insertions(+),
On 10/05/18 08:16 AM, Stephen Bates wrote:
> Hi Christian
>
>> Why would a switch not identify that as a peer address? We use the PASID
>>together with ATS to identify the address space which a transaction
>>should use.
>
> I think you are conflating two types of TLPs here. If the de
We don't expect the async depth to be smaller than the wake batch
count for sbitmap, but just in case, inform sbitmap of what shallow
depth kyber may use.
Acked-by: Paolo Valente
Signed-off-by: Jens Axboe
---
block/kyber-iosched.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/block/kyb
If we have multiple callers of sbq_wake_up(), we can end up in a
situation where the wait_cnt will continually go more and more
negative. Consider the case where our wake batch is 1, hence
wait_cnt will start out as 1.
wait_cnt == 1
CPU0                            CPU1
atomic_dec_return(), cnt ==
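The race above can be modeled in a few lines of Python (a hypothetical simplification, not the kernel code): with a wake batch of 1, a second caller's decrement slips in between the first caller's decrement and its compare-and-swap reset, so the exact-zero condition is missed and wait_cnt drifts negative.

```python
def sbq_wake_up_race():
    """Simplified model of two concurrent sbq_wake_up() callers with a
    wake batch of 1.  Each caller decrements wait_cnt; the caller that
    saw the count hit zero then tries to reset it back to the batch
    with a compare-and-swap against the value it observed."""
    wake_batch = 1
    wait_cnt = 1                          # starts out equal to the batch

    seen_cpu0 = wait_cnt = wait_cnt - 1   # CPU0: atomic_dec_return() -> 0
    seen_cpu1 = wait_cnt = wait_cnt - 1   # CPU1: atomic_dec_return() -> -1

    # CPU0 saw 0, so it attempts atomic_cmpxchg(&wait_cnt, 0, wake_batch),
    # but CPU1's decrement already moved the count to -1, so it fails:
    if wait_cnt == seen_cpu0:
        wait_cnt = wake_batch
    return seen_cpu0, seen_cpu1, wait_cnt
```

Running this yields (0, -1, -1): the reset never lands, and every further round of the race pushes the count more negative.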
bfqd->sb_shift was an attempt to cache the sbitmap queue
shift, but we don't need it, as it never changes. Kill it with fire.
Acked-by: Paolo Valente
Signed-off-by: Jens Axboe
---
block/bfq-iosched.c | 16 +++-
block/bfq-iosched.h | 6 --
2 files changed, 7 insertions
It doesn't change, so don't put it in the per-IO hot path.
Acked-by: Paolo Valente
Signed-off-by: Jens Axboe
---
block/bfq-iosched.c | 97 +++--
1 file changed, 50 insertions(+), 47 deletions(-)
diff --git a/block/bfq-iosched.c b/block/bfq-iosche
From: Omar Sandoval
Make sure the user passed the right value to
sbitmap_queue_min_shallow_depth().
Acked-by: Paolo Valente
Signed-off-by: Omar Sandoval
Signed-off-by: Jens Axboe
---
lib/sbitmap.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index d2147
From: Omar Sandoval
The sbitmap queue wake batch is calculated such that once allocations
start blocking, all of the bits which are already allocated must be
enough to fulfill the batch counters of all of the waitqueues. However,
the shallow allocation depth can break this invariant, since we blo
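A rough sketch of why the invariant breaks (hypothetical helper names and constants, not the actual sbitmap code): the wake batch is derived from the full queue depth, but shallow allocation caps how many bits are ever outstanding, so the freed bits may never add up to one batch.

```python
def calc_wake_batch(depth, num_wait_queues=8, max_batch=8):
    # Simplified wake-batch sizing: large enough to batch wakeups
    # efficiently, but bounded so that the bits in flight can satisfy
    # the batch counters of all the wait queues.
    return max(1, min(max_batch, depth // num_wait_queues))

def waiters_can_be_woken(depth, shallow_depth):
    # If allocations are capped at shallow_depth, at most shallow_depth
    # bits are ever outstanding; freeing them credits the wait queue at
    # most shallow_depth wakes, which must cover one full wake batch.
    return shallow_depth >= calc_wake_batch(depth)
```

With a depth of 64 the batch comes out to 8, so a shallow depth of 3 can never trigger a wakeup: waiters hang until a timeout, which is what recomputing the batch against the minimum shallow depth prevents.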
If our shallow depth is smaller than the wake batching of sbitmap,
we can introduce hangs. Ensure that sbitmap knows how low we'll go.
Acked-by: Paolo Valente
Signed-off-by: Jens Axboe
---
block/bfq-iosched.c | 17 ++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a
Found another issue in sbitmap around our wake batch accounting.
See the last patch for details.
This series passes my testing. The sbitmap change was improved
by Omar, I swapped in his patch instead.
You can also find this series here:
http://git.kernel.dk/cgit/linux-block/log/?h=for-4.18/iosch
Reserved tags are used for error handling, we don't need to
care about them for regular IO. The core won't call us for these
anyway.
Acked-by: Paolo Valente
Signed-off-by: Jens Axboe
---
block/bfq-iosched.c | 9 +
1 file changed, 1 insertion(+), 8 deletions(-)
diff --git a/block/bfq-io
It's not useful, they are internal and/or error handling recovery
commands.
Acked-by: Paolo Valente
Signed-off-by: Jens Axboe
---
block/blk-mq.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4e9d83594cca..64630caaf27e 100644
---
On 5/9/18 10:32 PM, Paolo Valente wrote:
>
>
>> On 9 May 2018, at 22:49, Jens Axboe wrote:
>>
>> Omar had some valid complaints about the previous patchset, mostly
>> around the fact that we should not be updating depths on a per-IO
>> basis. He's right. In fact, BFQ oddl
On Fri, 2018-05-04 at 19:17 +0200, Paolo Valente wrote:
> When invoked for an I/O request rq, [ ... ]
Tested-by: Bart Van Assche
> On 10 May 2018, at 18:12, Bart Van Assche wrote:
>
> On Fri, 2018-05-04 at 22:11 +0200, Paolo Valente wrote:
>>> On 30 March 2018, at 18:57, Bart Van Assche wrote:
>>>
>>> On Fri, 2018-03-30 at 10:23 +0200, Paolo Valente wrote:
Still 4.16-rc1,
On Fri, 2018-05-04 at 22:11 +0200, Paolo Valente wrote:
> > On 30 March 2018, at 18:57, Bart Van Assche wrote:
> >
> > On Fri, 2018-03-30 at 10:23 +0200, Paolo Valente wrote:
> > > Still 4.16-rc1, being that the version for which you reported this
> > > issue in the first pla
On Thu, 2018-05-10 at 15:16 +, Bart Van Assche wrote:
> On Fri, 2018-05-04 at 16:42 -0400, Laurence Oberman wrote:
> > I was never able to reproduce Barts original issue using his tree
> > and
> > actual mlx5/cx4 hardware and ibsrp
> > I enabled BFQ with no other special tuning for the mpath an
pr_<level> logging uses allow a prefix to be specified with a
specific #define pr_fmt.
The default pr_fmt in printk.h is #define pr_fmt(fmt) fmt,
so no prefixing of logging output is generically done.
There are several output logging uses like dump_stack() that are
unprefixed and should remain unprefixed
Converting pr_fmt from a simple define to use KBUILD_MODNAME added
some duplicate logging prefixes to existing uses.
Remove them.
Signed-off-by: Joe Perches
---
drivers/block/aoe/aoeblk.c | 29 ++---
drivers/block/aoe/aoechr.c | 11 +--
drivers/block/aoe/aoecmd
Converting pr_fmt from a simple define to use KBUILD_MODNAME added
some duplicate logging prefixes to existing uses.
Remove them.
Signed-off-by: Joe Perches
---
block/blk-mq.c | 9 -
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 9ce9
> On 10 May 2018, at 17:16, Bart Van Assche wrote:
>
> On Fri, 2018-05-04 at 16:42 -0400, Laurence Oberman wrote:
>> I was never able to reproduce Barts original issue using his tree and
>> actual mlx5/cx4 hardware and ibsrp
>> I enabled BFQ with no other special tuning for
On Fri, 2018-05-04 at 16:42 -0400, Laurence Oberman wrote:
> I was never able to reproduce Barts original issue using his tree and
> actual mlx5/cx4 hardware and ibsrp
> I enabled BFQ with no other special tuning for the mpath and subpaths.
> I was waiting for him to come back from vacation to chec
On Wed, May 09, 2018 at 09:47:57AM +0200, Christoph Hellwig wrote:
> Hi all,
>
> this series adds support for reading blocks from disk using the iomap
> interface, and then gradually switched the buffered I/O path to not
> require buffer heads. It has survived xfstests for 1k and 4k block
> size.
On Thu, May 10, 2018 at 08:42:50AM +0200, Christoph Hellwig wrote:
> On Wed, May 09, 2018 at 09:46:28AM -0700, Darrick J. Wong wrote:
> > On Wed, May 09, 2018 at 09:48:07AM +0200, Christoph Hellwig wrote:
> > > This adds a simple iomap-based implementation of the legacy ->bmap
> > > interface. Not
On Sat, 2018-05-05 at 21:58 +0800, Ming Lei wrote:
> Turns out the current way can't drain timeout completely because mod_timer()
> can be triggered in the work func, which can be just run inside the synced
> timeout work:
>
> del_timer_sync(&q->timeout);
> cancel_work_sync(&q->time
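The re-arming problem being described can be modeled briefly (a hypothetical simplification, not the block-layer code): one pass of del_timer_sync() followed by cancel_work_sync() is not enough when the timeout work function itself calls mod_timer().

```python
def drain_timeout_once(timer_armed=True, work_queued=True):
    """Model one pass of del_timer_sync() + cancel_work_sync() against
    a timeout work function that re-arms the timer from inside itself
    (mod_timer() called in the work func)."""
    timer_armed = False              # del_timer_sync(&q->timeout)
    if work_queued:                  # cancel_work_sync() must first let a
        timer_armed = True           # running work func finish -- and it
        work_queued = False          # calls mod_timer(), re-arming us
    return timer_armed
```

After one "sync" pass the timer can be pending again, so the timeout machinery is not fully drained: the motivation for a dedicated quiesce/unquiesce mechanism.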
On Thu, May 10, 2018 at 04:29:44PM +0200, Christian König wrote:
> On 10.05.2018 at 16:20, Stephen Bates wrote:
> > Hi Jerome
> >
> > > As it is tied to PASID this is done using the IOMMU, so look for callers
> > > of amd_iommu_bind_pasid() or intel_svm_bind_mm(); in GPUs the existing
> > > user is the
On Wed, May 09, 2018 at 01:07:36PM +0200, Peter Zijlstra wrote:
> On Mon, May 07, 2018 at 05:01:35PM -0400, Johannes Weiner wrote:
> > --- a/kernel/sched/psi.c
> > +++ b/kernel/sched/psi.c
> > @@ -260,6 +260,18 @@ void psi_task_change(struct task_struct *task, u64
> > now, int clear, int set)
> >
On Thu, May 10, 2018 at 02:16:25PM +, Stephen Bates wrote:
> Hi Christian
>
> > Why would a switch not identify that as a peer address? We use the PASID
> >together with ATS to identify the address space which a transaction
> >should use.
>
> I think you are conflating two types of
On 10.05.2018 at 16:20, Stephen Bates wrote:
Hi Jerome
As it is tied to PASID this is done using the IOMMU, so look for callers
of amd_iommu_bind_pasid() or intel_svm_bind_mm(); in GPUs the existing
user is the AMD GPU driver, see:
Ah thanks. This cleared things up for me. A quick search shows
On Wed, May 09, 2018 at 12:21:00PM +0200, Peter Zijlstra wrote:
> On Mon, May 07, 2018 at 05:01:34PM -0400, Johannes Weiner wrote:
> > + local_irq_disable();
> > + rq = this_rq();
> > + raw_spin_lock(&rq->lock);
> > + rq_pin_lock(rq, &rf);
>
> Given that churn in sched.h, you seen rq_lock(
On 5/10/18 1:27 AM, Christophe JAILLET wrote:
> Branch to the right label in the error handling path in order to keep it
> logical.
Looks good, applied.
--
Jens Axboe
Hi Jerome
> As it is tied to PASID this is done using the IOMMU, so look for callers
> of amd_iommu_bind_pasid() or intel_svm_bind_mm(); in GPUs the existing
> user is the AMD GPU driver, see:
Ah thanks. This cleared things up for me. A quick search shows there are still
no users of intel_svm_bind_m
On Wed, May 09, 2018 at 12:14:54PM +0200, Peter Zijlstra wrote:
> On Mon, May 07, 2018 at 05:01:34PM -0400, Johannes Weiner wrote:
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index 15750c222ca2..1658477466d5 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
[
Hi Christian
> Why would a switch not identify that as a peer address? We use the PASID
>together with ATS to identify the address space which a transaction
>should use.
I think you are conflating two types of TLPs here. If the device supports ATS
then it will issue a TR TLP to obtain
On Wed, May 09, 2018 at 12:05:51PM +0200, Peter Zijlstra wrote:
> On Mon, May 07, 2018 at 05:01:34PM -0400, Johannes Weiner wrote:
> > + u64 some[NR_PSI_RESOURCES] = { 0, };
> > + u64 full[NR_PSI_RESOURCES] = { 0, };
>
> > + some[r] /= max(nonidle_total, 1UL);
> > + full[r]
On Wed, May 09, 2018 at 12:04:55PM +0200, Peter Zijlstra wrote:
> On Mon, May 07, 2018 at 05:01:34PM -0400, Johannes Weiner wrote:
> > +static void psi_clock(struct work_struct *work)
> > +{
> > + u64 some[NR_PSI_RESOURCES] = { 0, };
> > + u64 full[NR_PSI_RESOURCES] = { 0, };
> > + unsigned l
On Wed, May 09, 2018 at 11:59:38AM +0200, Peter Zijlstra wrote:
> On Mon, May 07, 2018 at 05:01:34PM -0400, Johannes Weiner wrote:
> > diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h
> > new file mode 100644
> > index ..b22b0ffc729d
> > --- /dev/null
> > +++ b/include
On Wed, May 09, 2018 at 11:49:06AM +0200, Peter Zijlstra wrote:
> On Mon, May 07, 2018 at 05:01:33PM -0400, Johannes Weiner wrote:
> > +static inline unsigned long
> > +fixed_power_int(unsigned long x, unsigned int frac_bits, unsigned int n)
> > +{
> > + unsigned long result = 1UL << frac_bits;
>
On Wed, May 09, 2018 at 01:38:49PM +0200, Peter Zijlstra wrote:
> On Wed, May 09, 2018 at 12:46:18PM +0200, Peter Zijlstra wrote:
> > On Mon, May 07, 2018 at 05:01:34PM -0400, Johannes Weiner wrote:
> >
> > > @@ -2038,6 +2038,7 @@ try_to_wake_up(struct task_struct *p, unsigned int
> > > state, in
On 09.05.2018 at 18:45, Logan Gunthorpe wrote:
On 09/05/18 07:40 AM, Christian König wrote:
The key takeaway is that when any device has ATS enabled you can't
disable ACS without breaking it (even if you unplug and replug it).
I don't follow how you came to this conclusion...
The ACS bits w
On Sat, May 05, 2018 at 07:11:33PM -0400, Laurence Oberman wrote:
> On Sat, 2018-05-05 at 21:58 +0800, Ming Lei wrote:
> > Hi,
> >
> > The 1st patch introduces blk_quiesce_timeout() and
> > blk_unquiesce_timeout()
> > for NVMe, meantime fixes blk_sync_queue().
> >
> > The 2nd patch covers timeout
On Wed, May 9, 2018 at 3:47 PM, Christoph Hellwig wrote:
> For the upcoming removal of buffer heads in XFS we need to keep track of
> the number of outstanding writeback requests per page. For this we need
> to know if bio_add_page merged a region with the previous bvec or not.
> Instead of addin
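The merge condition in question can be sketched as follows (a hypothetical model, not the kernel's bvec code): a new segment is folded into the previous bvec only when it directly continues it on the same page, and the caller needs to learn which of the two happened in order to count writeback requests per page.

```python
def bvec_try_merge(bvecs, page, off, length):
    """Append (page, off, length) to a bio's segment list, merging with
    the last bvec when the new range directly continues it on the same
    page.  Returns True if merged, False if a new segment was added."""
    if bvecs:
        last_page, last_off, last_len = bvecs[-1]
        if last_page == page and off == last_off + last_len:
            bvecs[-1] = (last_page, last_off, last_len + length)
            return True
    bvecs.append((page, off, length))
    return False
```

In this model, two adjacent 512-byte ranges on one page collapse into a single (page, 0, 1024) segment, while a range on a different page starts a new one; the boolean is exactly the "did bio_add_page merge?" signal the writeback accounting needs.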
Branch to the right label in the error handling path in order to keep it
logical.
Signed-off-by: Christophe JAILLET
---
drivers/block/mtip32xx/mtip32xx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/block/mtip32xx/mtip32xx.c
b/drivers/block/mtip32xx/mtip32xx.c
ind