On 1/29/19 2:12 AM, Jan Kara wrote:
> On Mon 28-01-19 22:41:41, John Hubbard wrote:
[...]
>> Here is the case I'm wondering about:
>>
>> thread A thread B
>>
>>
the trylock version, ksmd
may take 8s - 11s to run two full scans, and the numbers of
pages_sharing and pages_to_scan stay the same. Basically, this change does
no harm.
>
Cc: Hugh Dickins
Cc: Andrea Arcangeli
Suggested-by: John Hubbard
Reviewed-by: Kirill Tkhai
Signed-off-by: Yang Shi
> really don't have preference.
Yes, either one is fine. I like to see less code on the screen, all else being
equal,
but it's an extremely minor point, and sometimes being explicit instead is
better anyway.
thanks,
--
John Hubbard
NVIDIA
From: John Hubbard
This combines the common elements of these routines:
page_cache_get_speculative()
page_cache_add_speculative()
This was anticipated by the original author, as shown by the comment
in commit ce0ad7f095258 ("powerpc/mm: Lockless get_user_pages_fast()
for 64-bit")
From: John Hubbard
Hi,
I ran across this while working on the get_user_pages() + [R]DMA problem,
but we might as well remove the small bit of code duplication, independent
of gup/dma (which is going to take "a little bit" longer to get submitted,
ha).
John Hubbard
From: John Hubbard
For infiniband code that retains pages via get_user_pages*(),
release those pages via the new put_user_page(), or
put_user_pages*(), instead of put_page()
This is a tiny part of the second step of fixing the problem described
in [1]. The steps are:
1) Provide put_user_page
From: John Hubbard
Introduces put_user_page(), which simply calls put_page().
This provides a way to update all get_user_pages*() callers,
so that they call put_user_page(), instead of put_page().
Also introduces put_user_pages(), and a few dirty/locked variations,
as a replacement
From: John Hubbard
Hi,
It seems about time to post these initial patches: I think we have pretty
good consensus on the concept and details of the put_user_pages() approach.
Therefore, here are the first two patches, to get started on converting the
get_user_pages() call sites to use
From: John Hubbard
Introduces put_user_page(), which simply calls put_page().
This provides a safe way to update all get_user_pages*() callers,
so that they call put_user_page(), instead of put_page().
Also adds release_user_pages(), a drop-in replacement for
release_pages(). This is intended
From: John Hubbard
Hi,
With respect to tracking get_user_pages*() pages with page->dma_pinned*
fields [1], I spent a few days retrofitting most of the get_user_pages*()
call sites, by adding calls to a new put_user_page() function, in place
of put_page(), where appropriate. This will w
From: John Hubbard
For code that retains pages via get_user_pages*(),
release those pages via the new put_user_page(),
instead of put_page().
Also: rename release_user_pages(), to avoid a naming
conflict with the new external function of the same name.
CC: Al Viro
Signed-off-by: John Hubbard
release_user_pages'
> static void release_user_pages(struct page **pages, int pages_count,
> ^~
Yes. Patches #1 and #2 need to be combined here. I'll do that in the next
version, which will probably include several of the easier put_user_page()
conversions, as well.
thanks,
--
John Hubbard
NVIDIA
On 06/20/2018 05:08 AM, Jan Kara wrote:
> On Tue 19-06-18 11:11:48, John Hubbard wrote:
>> On 06/19/2018 03:41 AM, Jan Kara wrote:
>>> On Tue 19-06-18 02:02:55, Matthew Wilcox wrote:
>>>> On Tue, Jun 19, 2018 at 10:29:49AM +0200, Jan Kara wrote:
[...]
>>
On 06/25/2018 08:21 AM, Jan Kara wrote:
> On Thu 21-06-18 18:30:36, Jan Kara wrote:
>> On Wed 20-06-18 15:55:41, John Hubbard wrote:
>>> On 06/20/2018 05:08 AM, Jan Kara wrote:
>>>> On Tue 19-06-18 11:11:48, John Hubbard wrote:
>>>>> On 06/19/2018 03:
try_to_unmap_one
At the moment, they are both just doing an evil little early-out:
	if (PageDmaPinned(page))
		return false;
...but we talked about maybe waiting for the condition to clear, instead?
Thoughts?
And if so, does it sound reasonable to refactor wait_on_page_bit_common(),
so that it learns how to wait for a bit that, while inside struct page, is
not within page->flags?
thanks,
--
John Hubbard
NVIDIA
put_devmap_managed_page
devmap_managed_key
__put_devmap_managed_page
So if the goal is to restore put_page() to be effectively EXPORT_SYMBOL
again, then I think there would also need to be either a non-inlined
wrapper for devmap_managed_key (awkward for a static key), or else make
it EXPORT_SYMBOL, or maybe something else that's less obvious to me at the
moment.
thanks,
--
John Hubbard
NVIDIA
On 06/15/2018 10:22 PM, Dan Williams wrote:
> On Fri, Jun 15, 2018 at 9:43 PM, John Hubbard wrote:
>> On 06/13/2018 12:51 PM, Dan Williams wrote:
>>> [ adding Andrew, Christoph, and linux-mm ]
>>>
>>> On Wed, Jun 13, 2018 at 12:33 PM, Joe Gorse wrote:
[sn
From: John Hubbard
This fixes a few problems that come up when using devices (NICs, GPUs,
for example) that want to have direct access to a chunk of system (CPU)
memory, so that they can DMA to/from that memory. Problems [1] come up
if that memory is backed by persistent storage; for example
From: John Hubbard
In preparation for a subsequent patch, consolidate the error handling
for __get_user_pages(). This provides a single location (the "out:" label)
for operating on the collected set of pages that are about to be returned.
As long as we are already touching every use o
From: John Hubbard
Hi,
I'm including people who have been talking about this. This is in one sense
a medium-term work around, because there is a plan to talk about more
extensive fixes at the upcoming Linux Plumbers Conference. I am seeing
several customer bugs, though, and I really want to fix
On 06/17/2018 01:10 PM, Dan Williams wrote:
> On Sun, Jun 17, 2018 at 1:04 PM, Jason Gunthorpe wrote:
>> On Sun, Jun 17, 2018 at 12:53:04PM -0700, Dan Williams wrote:
diff --git a/mm/rmap.c b/mm/rmap.c
index 6db729dc4c50..37576f0a4645 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1360,6 +1360,8 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
On 06/17/2018 12:53 PM, Dan Williams wrote:
> [..]
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 6db729dc4c50..37576f0a4645 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1360,6 +1360,8 @@ static bool try_to_unmap_one(struct page *page, struct
>> vm_area_struct *vma,
>>
ior that the hardware cannot otherwise do: access to non-pinned memory.
I know this was brought up before. Definitely would like to hear more
opinions and brainstorming here.
thanks,
--
John Hubbard
NVIDIA
Hi Christoph,
Thanks for looking at this...
On 06/18/2018 12:56 AM, Christoph Hellwig wrote:
> On Sat, Jun 16, 2018 at 06:25:10PM -0700, john.hubb...@gmail.com wrote:
>> From: John Hubbard
>>
>> This fixes a few problems that come up when using devices (NICs, GPUs,
>
On 06/18/2018 01:12 AM, Christoph Hellwig wrote:
> On Sun, Jun 17, 2018 at 01:28:18PM -0700, John Hubbard wrote:
>> Yes. However, my thinking was: get_user_pages() can become a way to indicate
>> that
>> these pages are going to be treated specially. In particular, the call
On 06/18/2018 10:56 AM, Dan Williams wrote:
> On Mon, Jun 18, 2018 at 10:50 AM, John Hubbard wrote:
>> On 06/18/2018 01:12 AM, Christoph Hellwig wrote:
>>> On Sun, Jun 17, 2018 at 01:28:18PM -0700, John Hubbard wrote:
>>>> Yes. However, my thinking was: get
On 06/18/2018 12:21 PM, Dan Williams wrote:
> On Mon, Jun 18, 2018 at 11:14 AM, John Hubbard wrote:
>> On 06/18/2018 10:56 AM, Dan Williams wrote:
>>> On Mon, Jun 18, 2018 at 10:50 AM, John Hubbard wrote:
>>>> On 06/18/2018 01:12 AM, Christoph Hellwig wrote:
>>
: introduce MEMORY_DEVICE_FS_DAX and
> CONFIG_DEV_PAGEMAP_OPS")
> Reported-by: Joe Gorse
> Reported-by: John Hubbard
> Signed-off-by: Dan Williams
> ---
> kernel/memremap.c |4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/memremap
ight for some get_user_pages_fast() users (e.g. direct IO) - Al Viro
> had an idea to use page lock for that path but e.g. fs/direct-io.c would have
> problems due to lock ordering constraints (filesystem ->get_block would
> suddenly get called with the page lock held). But we can probably leave
> performance optimizations for phase two.
So I assume that phase one would be to apply this approach only to
get_user_pages_longterm. (Please let me know if that's wrong.)
thanks,
--
John Hubbard
NVIDIA
On 06/19/2018 06:24 PM, Dan Williams wrote:
> On Tue, Jun 19, 2018 at 11:11 AM, John Hubbard wrote:
>> On 06/19/2018 03:41 AM, Jan Kara wrote:
>>> On Tue 19-06-18 02:02:55, Matthew Wilcox wrote:
>>>> On Tue, Jun 19, 2018 at 10:29:49AM +0200, Jan Kara wrote:
> [..
On 06/19/2018 06:57 PM, Dan Williams wrote:
> On Tue, Jun 19, 2018 at 6:34 PM, John Hubbard wrote:
>> On 06/19/2018 06:24 PM, Dan Williams wrote:
>>> On Tue, Jun 19, 2018 at 11:11 AM, John Hubbard wrote:
>>>> On 06/19/2018 03:41 AM, Jan Kara wrote:
>>>>
From: John Hubbard
This fixes a few problems that came up when using devices (NICs, GPUs,
for example) that want to have direct access to a chunk of system (CPU)
memory, so that they can DMA to/from that memory. Problems [1] come up
if that memory is backed by persistent storage; for example
From: John Hubbard
The page->dma_pinned_flags and _count fields require
lock protection. A lock at approximately the granularity
of the zone_lru_lock is called for, but adding to the
locking contention of zone_lru_lock is undesirable,
because that is a pre-existing hot spot. Fortunat
From: John Hubbard
This patch sets and restores the new page->dma_pinned_flags and
page->dma_pinned_count fields, but does not actually use them for
anything yet.
In order to use these fields at all, the page must be removed from
any LRU list that it's on. The patch also adds some preca
From: John Hubbard
An upcoming patch requires a way to operate on each page that
any of the get_user_pages_*() variants returns.
In preparation for that, consolidate the error handling for
__get_user_pages(). This provides a single location (the "out:" label)
for operating on the col
From: John Hubbard
Update page_mkclean(), page_mkclean's callers, and try_to_unmap(), so that
there is a choice: in some cases, skip dma-pinned pages. In other cases
(sync_mode == WB_SYNC_ALL), wait for those pages to become unpinned.
This fixes some problems that came up when using devices
From: John Hubbard
Add a sync_mode parameter to clear_page_dirty_for_io(), to specify the
writeback sync mode, and also pass in the appropriate value
(WB_SYNC_NONE or WB_SYNC_ALL), from each filesystem location that calls
it. This will be used in subsequent patches, to allow page_mkclean
From: John Hubbard
Add two struct page fields that, combined, are unioned with
struct page->lru. There is no change in the size of
struct page. These new fields are for type safety and clarity.
Also add page flag accessors to test, set and clear the new
page->dma_pinned_flags field.
Th
ed to the wrong git tree, please drop us a note to
> help improve the system]
>
> url:
> https://github.com/0day-ci/linux/commits/john-hubbard-gmail-com/mm-fs-gup-don-t-unmap-or-drop-filesystem-buffers/20180702-090125
> config: x86_64-randconfig-x010-201826 (attached as .config)
>
ng git tree, please drop us a note to
> help improve the system]
>
> url:
> https://github.com/0day-ci/linux/commits/john-hubbard-gmail-com/mm-fs-gup-don-t-unmap-or-drop-filesystem-buffers/20180702-090125
> config: i386-randconfig-x075-201826 (attached as .config)
> compiler:
On 07/01/2018 05:56 PM, john.hubb...@gmail.com wrote:
> From: John Hubbard
>
There were some typos in patches #4 and #5, which I've fixed locally.
Let me know if anyone would like me to repost with those right away, otherwise
I'll wait for other review besides the kbuild test robot.
Mea
On 07/01/2018 10:52 PM, Leon Romanovsky wrote:
> On Thu, Jun 28, 2018 at 11:17:43AM +0200, Jan Kara wrote:
>> On Wed 27-06-18 19:42:01, John Hubbard wrote:
>>> On 06/27/2018 10:02 AM, Jan Kara wrote:
>>>> On Wed 27-06-18 08:57:18, Jason Gunthorpe wrote:
>>>
On 07/01/2018 11:34 PM, Leon Romanovsky wrote:
> On Sun, Jul 01, 2018 at 11:10:04PM -0700, John Hubbard wrote:
>> On 07/01/2018 10:52 PM, Leon Romanovsky wrote:
>>> On Thu, Jun 28, 2018 at 11:17:43AM +0200, Jan Kara wrote:
>>>> On Wed 27-06-18 19:42:01, John Hubbard
On 07/02/2018 02:53 AM, Jan Kara wrote:
> On Sun 01-07-18 17:56:53, john.hubb...@gmail.com wrote:
>> From: John Hubbard
>>
> ...
>
>> @@ -904,12 +907,24 @@ static inline void get_page(struct page *page)
>> */
>> VM_BUG_ON_PAGE(page_ref_count(
ed in? Also the
> locking is IMHO going to hurt a lot and we need to avoid it.
>
> What I think needs to happen is that in page_mkclean(), after you've
> cleared all the page tables, you check PageDmaPinned() and wait if needed.
> Page cannot be faulted in again as we hold page lock
On 07/02/2018 03:17 AM, Jan Kara wrote:
> On Sun 01-07-18 17:56:49, john.hubb...@gmail.com wrote:
>> From: John Hubbard
>>
>> An upcoming patch requires a way to operate on each page that
>> any of the get_user_pages_*() variants returns.
>>
>> In prep
On 07/02/2018 05:08 PM, Christopher Lameter wrote:
> On Mon, 2 Jul 2018, John Hubbard wrote:
>
>>>
>>> These two are just wrong. You cannot make any page reference for
>>> PageDmaPinned() account against a pin count. First, it is just conceptually
>>>
's my main concern:
>
Hi Tom,
Thanks again for looking at this!
> On 11/10/2018 3:50 AM, john.hubb...@gmail.com wrote:
>> From: John Hubbard
>> ...
>> --
>> WITHOUT the patch:
>> --
From: John Hubbard
Hi,
Keith Busch and Dan Williams noticed that this patch
(which was part of my RFC[1] for the get_user_pages + DMA
fix) also fixes a bug. Accordingly, I'm adjusting
the changelog and posting this as its own patch.
[1] https://lkml.kernel.org/r/20181110085041.10071-1-jhubb
From: John Hubbard
Commit df06b37ffe5a4 ("mm/gup: cache dev_pagemap while pinning pages")
attempted to operate on each page that get_user_pages had retrieved. In
order to do that, it created a common exit point from the routine.
However, one case was missed, which this patch fixes
On 11/21/18 8:49 AM, Tom Talpey wrote:
> On 11/21/2018 1:09 AM, John Hubbard wrote:
>> On 11/19/18 10:57 AM, Tom Talpey wrote:
>>> ~14000 4KB read IOPS is really, really low for an NVMe disk.
>>
>> Yes, but Jan Kara's original config file for fio is *intended* to h
his call",
rather than the generic case of crossing a vma boundary. (I think there's a fine
point that I must be overlooking.) But it's still a valid case, either way.
--
thanks,
John Hubbard
NVIDIA
rs really are
complex) really seems worth the extra work, so that's a big benefit.
Next steps: I want to go try this dynamic_page approach out right away.
If there are pieces such as page_to_pfn and related, that are already in
progress, I'd definitely like to work on top of that. Also, any up front
advice or pitfalls to avoid is always welcome, of course. :)
thanks,
--
John Hubbard
NVIDIA
On 11/27/18 5:21 PM, Tom Talpey wrote:
> On 11/21/2018 5:06 PM, John Hubbard wrote:
>> On 11/21/18 8:49 AM, Tom Talpey wrote:
>>> On 11/21/2018 1:09 AM, John Hubbard wrote:
>>>> On 11/19/18 10:57 AM, Tom Talpey wrote:
[...]
>>>
>>> What I'd real
On 11/28/18 5:59 AM, Tom Talpey wrote:
> On 11/27/2018 9:52 PM, John Hubbard wrote:
>> On 11/27/18 5:21 PM, Tom Talpey wrote:
>>> On 11/21/2018 5:06 PM, John Hubbard wrote:
>>>> On 11/21/18 8:49 AM, Tom Talpey wrote:
>>>>> On 11/21/2018 1:09 AM, John Hu
On 11/29/18 6:18 PM, Tom Talpey wrote:
> On 11/29/2018 8:39 PM, John Hubbard wrote:
>> On 11/28/18 5:59 AM, Tom Talpey wrote:
>>> On 11/27/2018 9:52 PM, John Hubbard wrote:
>>>> On 11/27/18 5:21 PM, Tom Talpey wrote:
>>>>> On 11/21/2018 5:06 PM, John H
On 11/29/18 6:30 PM, Tom Talpey wrote:
> On 11/29/2018 9:21 PM, John Hubbard wrote:
>> On 11/29/18 6:18 PM, Tom Talpey wrote:
>>> On 11/29/2018 8:39 PM, John Hubbard wrote:
>>>> On 11/28/18 5:59 AM, Tom Talpey wrote:
>>>>> On 11/27/2018 9:52 PM, John H
nned dax pages, see
>>> dax_layout_busy_page(). As much as possible I want to eliminate the
>>> concept of "dax pages" as a special case that gets sprinkled
>>> throughout the mm.
>>>
>>>> For [O1] and [O2] i believe a solution with mapcount
On 12/7/18 9:18 PM, Matthew Wilcox wrote:
> On Fri, Dec 07, 2018 at 04:52:42PM -0800, John Hubbard wrote:
>> I see. OK, HMM has done an efficient job of mopping up unused fields, and
>> now we are
>> completely out of space. At this point, after thinking about it carefull
about it then trying to make it pass under the radar.
>
> This will put the burden on broken user and allow you to properly
> recycle your DAX page.
>
> Think of it as revoke through mmu notifier.
>
> So patchset would be:
> enum mmu_notifier_event {
> + MMU_NOTI
On 12/12/18 2:04 PM, Jerome Glisse wrote:
> On Wed, Dec 12, 2018 at 01:56:00PM -0800, John Hubbard wrote:
>> On 12/12/18 1:30 PM, Jerome Glisse wrote:
>>> On Wed, Dec 12, 2018 at 08:27:35AM -0800, Dan Williams wrote:
>>>> On Wed, Dec 12, 2018 at 7:03 AM Jerome Gli
On 12/12/18 2:14 PM, Jerome Glisse wrote:
> On Wed, Dec 12, 2018 at 02:11:58PM -0800, John Hubbard wrote:
>> On 12/12/18 2:04 PM, Jerome Glisse wrote:
>>> On Wed, Dec 12, 2018 at 01:56:00PM -0800, John Hubbard wrote:
>>>> On 12/12/18 1:30 PM, Jerome Glisse wrote:
>
t can replay page faults, in many cases.
I think as long as we specify that the acceptable consequence of doing, say,
umount on a filesystem that has active DMA happening is that the associated
processes get killed, then we're going to be OK.
What would worry me is if there was an expectation that processes could
continue working properly after such a scenario.
thanks,
--
John Hubbard
NVIDIA
an't call ->page_mkwrite() from
>>> put_user_page(), so I don't think this is workable at all.
>>
>> Hu why ? i can not think of any reason while you could not. User of
>
> It's not a fault path, you can't safely lock pages, you can't take
> fault-path only locks in the IO path (mmap_sem inversion problems),
> etc.
>
Yes, I looked closer at ->page_mkwrite (ext4_page_mkwrite, for example),
and it's clearly doing lock_page(), so it does seem like this particular
detail (calling page_mkwrite from put_user_page) is dead.
> /me has a nagging feeling this was all explained in a previous
> discussions of this patchset...
>
Yes, lots of related discussion definitely happened already, for example
this October thread covered page_mkwrite and interactions with gup:
https://lore.kernel.org/r/20181001061127.GQ31060@dastard
...but so far, this is the first time I recall seeing a proposal to call
page_mkwrite from put_user_page.
thanks,
--
John Hubbard
NVIDIA
On 12/13/18 9:21 PM, Dan Williams wrote:
> On Thu, Dec 13, 2018 at 7:53 PM John Hubbard wrote:
>>
>> On 12/12/18 4:51 PM, Dave Chinner wrote:
>>> On Wed, Dec 12, 2018 at 04:59:31PM -0500, Jerome Glisse wrote:
>>>> On Thu, Dec 13, 2018 at 08:46:41AM +1100, Dave
n that it is the lightest weight
solution for that.
So as I understand it, this would use page->_mapcount to store both the real
mapcount, and the dma pinned count (simply added together), but only do so for
file-backed (non-anonymous) pages:
__get_user_pages()
{
	...
	get_page(page);
	if (!PageAnon(page))
		atomic_inc(&page->_mapcount);
	...
}

put_user_page(struct page *page)
{
	...
	if (!PageAnon(page))
		atomic_dec(&page->_mapcount);
	put_page(page);
	...
}
...and then in the various consumers of the DMA pinned count, we use
page_mapped(page) to see if any mapcount remains, and if so, we treat it as
DMA pinned. Is that what you had in mind?
--
thanks,
John Hubbard
NVIDIA
On 12/19/18 3:08 AM, Jan Kara wrote:
> On Tue 18-12-18 21:07:24, Jerome Glisse wrote:
>> On Tue, Dec 18, 2018 at 03:29:34PM -0800, John Hubbard wrote:
>>> OK, so let's take another look at Jerome's _mapcount idea all by itself
>>> (using
>>> *only* the
On 12/4/18 12:28 PM, Dan Williams wrote:
> On Mon, Dec 3, 2018 at 4:17 PM wrote:
>>
>> From: John Hubbard
>>
>> Introduces put_user_page(), which simply calls put_page().
>> This provides a way to update all get_user_pages*() callers,
>> so that they ca
On 12/4/18 3:03 PM, Dan Williams wrote:
> On Tue, Dec 4, 2018 at 1:56 PM John Hubbard wrote:
>>
>> On 12/4/18 12:28 PM, Dan Williams wrote:
>>> On Mon, Dec 3, 2018 at 4:17 PM wrote:
>>>>
>>>> From: John Hubbard
>>>>
On 12/4/18 4:40 PM, Dan Williams wrote:
> On Tue, Dec 4, 2018 at 4:37 PM Jerome Glisse wrote:
>>
>> On Tue, Dec 04, 2018 at 03:03:02PM -0800, Dan Williams wrote:
>>> On Tue, Dec 4, 2018 at 1:56 PM John Hubbard wrote:
>>>>
>>>> On 12/4/18 12:28 P
which I
should have added in this cover letter. Here's a start:
https://lore.kernel.org/r/20181110085041.10071-1-jhubb...@nvidia.com
...and it looks like this small patch series is not going to work out--I'm
going to have to fall back to another RFC spin. So I'll be sure to include
you and everyone on that. Hope that helps.
thanks,
--
John Hubbard
NVIDIA
On 12/3/18 11:53 PM, Mike Rapoport wrote:
> Hi John,
>
> Thanks for having documentation as a part of the patch. Some kernel-doc
> nits below.
>
> On Mon, Dec 03, 2018 at 04:17:19PM -0800, john.hubb...@gmail.com wrote:
>> From: John Hubbard
>>
>> Introduces
On 12/4/18 5:44 PM, Jerome Glisse wrote:
> On Tue, Dec 04, 2018 at 05:15:19PM -0800, Matthew Wilcox wrote:
>> On Tue, Dec 04, 2018 at 04:58:01PM -0800, John Hubbard wrote:
>>> On 12/4/18 3:03 PM, Dan Williams wrote:
>>>> Except the LRU fields are already in us
From: John Hubbard
KASAN reports a use-after-free during startup, in mei_cl_write:
BUG: KASAN: use-after-free in mei_cl_write+0x601/0x870 [mei]
(drivers/misc/mei/client.c:1770)
This is caused by commit 98e70866aacb ("mei: add support for variable
length mei headers."), whi
On 10/16/18 1:51 AM, Jan Kara wrote:
> On Sun 14-10-18 10:01:24, Dave Chinner wrote:
>> On Sat, Oct 13, 2018 at 12:34:12AM -0700, John Hubbard wrote:
>>> On 10/12/18 8:55 PM, Dave Chinner wrote:
>>>> On Thu, Oct 11, 2018 at 11:00:12PM -0700, john.hubb...@gmail.com w
On 10/17/18 4:09 AM, Michal Hocko wrote:
> On Tue 16-10-18 18:48:23, John Hubbard wrote:
> [...]
>> It's hard to say exactly what the active/inactive/unevictable list should
>> be when DMA is done and put_user_page*() is called, because we don't know
>> if some device rea
On 10/1/18 7:35 AM, Dennis Dalessandro wrote:
> On 9/28/2018 11:12 PM, John Hubbard wrote:
>> On 9/28/18 8:39 AM, Jason Gunthorpe wrote:
>>> On Thu, Sep 27, 2018 at 10:39:47PM -0700, john.hubb...@gmail.com wrote:
>>>> From: John Hubbard
>> [...]
>>>
On 10/3/18 9:27 AM, Jan Kara wrote:
> On Fri 28-09-18 20:12:33, John Hubbard wrote:
>> static inline void release_user_pages(struct page **pages,
>> - unsigned long npages)
>> +
On 10/3/18 9:22 AM, Jan Kara wrote:
> On Thu 27-09-18 22:39:48, john.hubb...@gmail.com wrote:
>> From: John Hubbard
>>
>> Introduces put_user_page(), which simply calls put_page().
>> This provides a way to update all get_user_pages*() callers,
>> so that t
From: John Hubbard
An upcoming patch requires a way to operate on each page that
any of the get_user_pages_*() variants returns.
In preparation for that, consolidate the error handling for
__get_user_pages(). This provides a single location (the "out:" label)
for operating on the col
From: John Hubbard
Changes since v1:
-- Renamed release_user_pages*() to put_user_pages*(), from Jan's feedback.
-- Removed the goldfish.c changes, and instead, only included a single
user (infiniband) of the new functions. That is because goldfish.c no
longer has a name collision
From: John Hubbard
For code that retains pages via get_user_pages*(),
release those pages via the new put_user_page(),
instead of put_page().
This prepares for eventually fixing the problem described
in [1], and is following a plan listed in [2], [3], [4].
[1] https://lwn.net/Articles/753027
From: John Hubbard
Introduces put_user_page(), which simply calls put_page().
This provides a way to update all get_user_pages*() callers,
so that they call put_user_page(), instead of put_page().
Also introduces put_user_pages(), and a few dirty/locked variations,
as a replacement
On 10/5/18 8:17 AM, Jason Gunthorpe wrote:
> On Thu, Oct 04, 2018 at 09:02:24PM -0700, john.hubb...@gmail.com wrote:
>> From: John Hubbard
>>
>> Introduces put_user_page(), which simply calls put_page().
>> This provides a way to update all get_user_pages*() ca
On 10/12/18 12:35 AM, Balbir Singh wrote:
On Thu, Oct 11, 2018 at 11:00:10PM -0700, john.hubb...@gmail.com wrote:
From: John Hubbard
[...]>> +/*
+ * put_user_pages_dirty() - for each page in the @pages array, make
+ * that page (or its head page, if a compound page)
On 10/11/18 11:30 PM, Balbir Singh wrote:
On Thu, Oct 11, 2018 at 11:00:09PM -0700, john.hubb...@gmail.com wrote:
From: John Hubbard
An upcoming patch requires a way to operate on each page that
any of the get_user_pages_*() variants returns.
In preparation for that, consolidate the error
On 10/12/18 3:56 AM, Balbir Singh wrote:
> On Thu, Oct 11, 2018 at 11:00:12PM -0700, john.hubb...@gmail.com wrote:
>> From: John Hubbard
[...]
>> + * Because page->dma_pinned_flags is unioned with page->lru, any page that
>> + * uses these flags must NOT be on an
On 10/12/18 4:07 AM, Balbir Singh wrote:
> On Thu, Oct 11, 2018 at 11:00:14PM -0700, john.hubb...@gmail.com wrote:
>> From: John Hubbard
[...]
>> +static int pin_page_for_dma(struct page *page)
>> +{
>> +int ret = 0;
>> +struct zone *zone;
>>
On 10/12/18 8:55 PM, Dave Chinner wrote:
> On Thu, Oct 11, 2018 at 11:00:12PM -0700, john.hubb...@gmail.com wrote:
>> From: John Hubbard
[...]
>> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
>> index 5ed8f6292a53..017ab82e36ca 100644
>> --- a/includ
On 10/13/18 9:47 AM, Christoph Hellwig wrote:
> On Sat, Oct 13, 2018 at 12:34:12AM -0700, John Hubbard wrote:
>> In patch 6/6, pin_page_for_dma(), which is called at the end of
>> get_user_pages(),
>> unceremoniously rips the pages out of the LRU, as a prerequisite to using
From: John Hubbard
Hi,
This short series prepares for eventually fixing the problem described
in [1], and is following a plan listed in [2].
I'd like to get the first two patches into the -mm tree.
Patch 1, although not technically critical to do now, is still nice to have,
because it's
From: John Hubbard
An upcoming patch requires a way to operate on each page that
any of the get_user_pages_*() variants returns.
In preparation for that, consolidate the error handling for
__get_user_pages(). This provides a single location (the "out:" label)
for operating on the col
From: John Hubbard
For code that retains pages via get_user_pages*(),
release those pages via the new release_user_pages(),
instead of calling put_page().
This prepares for eventually fixing the problem described
in [1], and is following a plan listed in [2].
[1] https://lwn.net/Articles
From: John Hubbard
Introduces put_user_page(), which simply calls put_page().
This provides a way to update all get_user_pages*() callers,
so that they call put_user_page(), instead of put_page().
Also adds release_user_pages(), a drop-in replacement for
release_pages(). This is intended
From: John Hubbard
For code that retains pages via get_user_pages*(),
release those pages via the new put_user_page(),
instead of put_page().
This prepares for eventually fixing the problem described
in [1], and is following a plan listed in [2].
[1] https://lwn.net/Articles/753027
On 9/28/18 8:29 AM, Jerome Glisse wrote:
> On Thu, Sep 27, 2018 at 10:39:45PM -0700, john.hubb...@gmail.com wrote:
>> From: John Hubbard
>>
>> Hi,
>>
>> This short series prepares for eventually fixing the problem described
>> in [1], and is following a pl
On 9/28/18 2:49 PM, Jerome Glisse wrote:
> On Fri, Sep 28, 2018 at 12:06:12PM -0700, John Hubbard wrote:
>> On 9/28/18 8:29 AM, Jerome Glisse wrote:
>>> On Thu, Sep 27, 2018 at 10:39:45PM -0700, john.hubb...@gmail.com wrote:
>>>> From: John Hubbard
[...]
>>&g
On 9/28/18 8:39 AM, Jason Gunthorpe wrote:
> On Thu, Sep 27, 2018 at 10:39:47PM -0700, john.hubb...@gmail.com wrote:
>> From: John Hubbard
[...]
>>
>> diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
>> index a41792dbae1f..9430d697c