that such a complex implementation would
be worthwhile.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/remap_range.c | 109 ++-
1 file changed, 52 insertions(+), 57 deletions(-)
diff --git a/fs/remap_range.c b/fs/remap_range.c
index 77dba3a
This is the folio equivalent of page_mapping(). Adjust
page_file_mapping() and page_mapping_file() to use folios internally.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm.h | 16 ++--
mm/swapfile.c | 6 +++---
mm/util.c | 20 ++--
3
Implement readahead_batch_length() to determine the number of bytes in
the current batch of readahead pages and use it in btrfs.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/btrfs/extent_io.c | 6 ++
include/linux/pagemap.h | 9 +
2 files changed, 11 insertions(+), 4 deletions
If we know we have a folio, we can call put_folio() instead of put_page()
and save the overhead of calling compound_head(). Also skips the
devmap checks.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm.h | 15 ++-
1 file changed, 10 insertions(+), 5 deletions(-)
diff
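The saving described above can be sketched in plain userspace C. This is an illustrative model, not kernel code: `struct page`, `struct folio`, `compound_head()`, `put_page()` and `put_folio()` are all simplified stand-ins for the real definitions.

```c
#include <stddef.h>

/* A tail page points at its head; a folio is always a head page. */
struct page {
    struct page *head;   /* NULL for head pages in this model */
    int refcount;
};

struct folio { struct page page; };

/* put_page() cannot trust its argument: it may be a tail page,
 * so it must chase the head pointer first. */
static struct page *compound_head(struct page *p)
{
    return p->head ? p->head : p;
}

static void put_page(struct page *p)
{
    compound_head(p)->refcount--;
}

/* put_folio() already knows it has the head page, so the lookup
 * (and, in the real kernel, the devmap checks) can be skipped. */
static void put_folio(struct folio *f)
{
    f->page.refcount--;
}
```

Both paths drop the same reference; the folio variant just skips the head lookup that the page variant must always pay for.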
. Until they do, they can
assume that the folio being passed in contains a single page.
Also convert filler_t to take a folio as these two are tightly
intertwined.
Signed-off-by: Matthew Wilcox (Oracle)
---
Documentation/filesystems/locking.rst | 2 +-
Documentation/filesystems/vfs.rst | 18
Turn wait_on_page_locked() and wait_on_page_locked_killable() into
wrappers.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 16
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index
Turn pagecache_get_page() into a wrapper around filemap_get_folio().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 21 +-
mm/filemap.c | 141 +---
2 files changed, 94 insertions(+), 68 deletions(-)
diff --git a/include
Pages being added to the page cache should already be folios, so
turn add_to_page_cache_lru() into a wrapper. Saves hundreds of
bytes of text.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 13 +++--
mm/filemap.c | 62
The pagecache only contains folios, so this is the right thing to do.
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/filemap.c | 24
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 654bba53442a..b9f25a2d8312 100644
--- a
Most of the users turn it back into a struct page pointer, but
some can make use of it as a folio immediately.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/afs/dir.c | 2 +-
fs/btrfs/compression.c | 4 ++--
fs/cachefiles/rdwr.c | 6 --
fs/ceph/addr.c | 2 +-
fs
t folio' that always refers to an entire
(possibly compound) page, and points to the head page (or base page).
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm.h | 10 ++
include/linux/mm_types.h | 17 +
include/linux/pagemap.h | 14 ++
3 fil
This already operated on the entire compound page, but now we can avoid
calling compound_head() quite so many times.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/swap.h | 8 ++--
mm/swap.c | 28 +---
2 files changed, 19 insertions(+), 17
With my config, this function shrinks from 480 bytes to 240 bytes
due to elimination of repeated calls to compound_head().
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/filemap.c | 22 --
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/mm/filemap.c b/mm
These wrappers are mostly for typesafety, but they also ensure that
the page allocator allocates a compound page.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/gfp.h | 11 +++
1 file changed, 11 insertions(+)
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index
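The wrapper pattern described above can be sketched in userspace C. This is a model under stated assumptions, not the kernel's code: `gfp_t`, `GFP_KERNEL`, `__GFP_COMP`, the pool-backed allocator and `alloc_folio()` are simplified stand-ins.

```c
#include <stddef.h>

typedef unsigned int gfp_t;
#define GFP_KERNEL 0x10u
#define __GFP_COMP 0x01u     /* "allocate a compound page" */

struct folio { int order; gfp_t gfp; };

static struct folio folio_pool[4];
static unsigned int folio_used;

/* Stand-in for the raw page allocator: records what it was asked for. */
static struct folio *raw_alloc(gfp_t gfp, unsigned int order)
{
    struct folio *f = &folio_pool[folio_used++];
    f->gfp = gfp;
    f->order = (int)order;
    return f;
}

/* The wrapper buys two things: a stronger return type for callers,
 * and __GFP_COMP ORed in unconditionally, so every folio really is
 * a compound page regardless of what flags the caller passed. */
static struct folio *alloc_folio(gfp_t gfp, unsigned int order)
{
    return raw_alloc(gfp | __GFP_COMP, order);
}
```

Callers cannot forget the compound flag, because the only way to get a `struct folio` back is through the wrapper.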
This is like lock_page() but for use by callers who know they have a folio.
Convert __lock_page() to be __lock_folio(). This saves one call to
compound_head() per contended call to lock_page().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 21 +++--
mm
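The "one compound_head() per contended call" saving can be modelled in userspace C: the page-taking wrapper converts to a folio exactly once at the boundary, and the folio variant never repeats the lookup. All names here are simplified stand-ins for the kernel API.

```c
#include <stddef.h>

struct page { struct page *head; int locked; };
struct folio { struct page page; };

static unsigned long head_lookups;   /* counts compound_head() work */

static struct folio *page_folio(struct page *p)
{
    head_lookups++;
    return (struct folio *)(p->head ? p->head : p);
}

/* Works directly on the head page; real code would sleep here
 * when the lock is contended. */
static void lock_folio(struct folio *f)
{
    f->page.locked = 1;
}

/* Legacy entry point: one head lookup, done once at the boundary. */
static void lock_page(struct page *p)
{
    lock_folio(page_folio(p));
}
```

Callers that already hold a folio call `lock_folio()` directly and pay for no head lookup at all.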
Convert unlock_page() to call unlock_folio(). By using a folio we avoid
a repeated call to compound_head(). This shortens the function from 120
bytes to 76 bytes.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 16 +++-
mm/filemap.c | 27
These new functions are the folio analogues of the PageFlags functions.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/page-flags.h | 80 ++
1 file changed, 63 insertions(+), 17 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux
If we know we have a folio, we can call get_folio() instead of get_page()
and save the overhead of calling compound_head().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm.h | 16 +---
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/include/linux/mm.h b
This is like lock_page_killable() but for use by callers who
know they have a folio. Convert __lock_page_killable() to be
__lock_folio_killable(). This saves one call to compound_head() per
contended call to lock_page_killable().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux
an listen to a discussion of page folios
from last week here: https://www.youtube.com/watch?v=iP49_ER1FUM
Git tree version here (against next-20201216):
https://git.infradead.org/users/willy/pagecache.git/shortlog/refs/heads/folio
Matthew Wilcox (Oracle) (25):
mm: Introduce struct folio
mm:
On Tue, Dec 15, 2020 at 05:54:51PM -0700, Yu Zhao wrote:
> On Mon, Dec 07, 2020 at 10:24:29PM +0000, Matthew Wilcox wrote:
> > On Mon, Dec 07, 2020 at 03:09:45PM -0700, Yu Zhao wrote:
> > > Move scattered VM_BUG_ONs to two essential places that cover all
> > > lru l
On Tue, Dec 15, 2020 at 07:41:23PM +0300, Sergey Temerkhanov wrote:
> Unlock RCU before running another loop iteration
Why?
On Tue, Dec 15, 2020 at 09:13:00AM -0500, Liam R. Howlett wrote:
> @@ -3025,25 +3025,6 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long,
> start, unsigned long, size,
>
> flags &= MAP_NONBLOCK;
> flags |= MAP_SHARED | MAP_FIXED | MAP_POPULATE;
> - if (vma->vm_flags & VM_LOCKED)
On Mon, Dec 14, 2020 at 09:54:06AM -0800, Linus Torvalds wrote:
> > I expected to hate it more, but it looks reasonable. Opencoded
> > xas_for_each() smells bad, but...
>
> I think the open-coded xas_for_each() per se isn't a problem, but I
> agree that the startup condition is a bit ugly. And I'm
On Mon, Dec 14, 2020 at 04:11:28PM +0100, Uladzislau Rezki wrote:
> On Sun, Dec 13, 2020 at 09:51:34PM +0000, Matthew Wilcox wrote:
> > If we need to iterate the list efficiently, i'd suggest getting rid of
> > the list and using an xarray instead. maybe a maple tree, once tha
On Sun, Dec 13, 2020 at 07:39:36PM +0100, Uladzislau Rezki wrote:
> On Sun, Dec 13, 2020 at 01:08:43PM -0500, Waiman Long wrote:
> > When multiple locks are acquired, they should be released in reverse
> > order. For s_start() and s_stop() in mm/vmalloc.c, that is not the
> > case.
> >
> > s_sta
On Sun, Dec 13, 2020 at 08:30:40AM -0600, Eric W. Biederman wrote:
> Stephen Brennan writes:
>
> > The pid_revalidate() function requires dropping from RCU into REF lookup
> > mode. When many threads are resolving paths within /proc in parallel,
> > this can result in heavy spinlock contention as
On Sun, Dec 13, 2020 at 08:22:32AM -0600, Eric W. Biederman wrote:
> Matthew Wilcox writes:
>
> > On Thu, Dec 03, 2020 at 04:02:12PM -0800, Stephen Brennan wrote:
> >> -void pid_update_inode(struct task_struct *task, struct inode *inode)
> >> +static int do_pid_u
On Thu, Dec 03, 2020 at 04:02:12PM -0800, Stephen Brennan wrote:
> -void pid_update_inode(struct task_struct *task, struct inode *inode)
> +static int do_pid_update_inode(struct task_struct *task, struct inode *inode,
> +unsigned int flags)
I'm really nitpicking here, b
On Fri, Dec 04, 2020 at 10:48:53AM -0500, Josef Bacik wrote:
> We on the program committee hope everybody has been able to stay safe and
> healthy during this challenging time, and look forward to being able to see
> all of you in person again when it is safe.
>
> The current plans for LSFMMBPF 20
On Fri, Dec 11, 2020 at 12:19:50PM +0800, Muchun Song wrote:
> +++ b/mm/filemap.c
> @@ -207,7 +207,7 @@ static void unaccount_page_cache_page(struct
> address_space *mapping,
> if (PageTransHuge(page))
> __dec_lruvec_page_state(page, NR_SHMEM_THPS);
> } el
On Sat, Nov 21, 2020 at 02:13:21PM +, David Howells wrote:
> I had a go switching the iov_iter stuff away from using a type bitmask to
> using an ops table to get rid of the if-if-if-if chains that are all over
> the place. After I pushed it, someone pointed me at Pavel's two patches.
>
> I h
On Thu, Dec 10, 2020 at 12:05:04PM -0800, Joe Perches wrote:
> Also, given the ever increasing average identifier length, strict
> adherence to 80 columns is sometimes just not possible without silly
> visual gymnastics. The kernel now has quite a lot of 30+ character
> length function names, cons
On Thu, Dec 10, 2020 at 11:25:13PM +0530, Md Muazzam Husain wrote:
> Hi All,
> I have kernel version 3.10.59. So during the allocation for skb i
That kernel is over 6 years old: https://lwn.net/Articles/618650/ It's
been out of support since November 2017. The development kernel it was
b
On Wed, Dec 09, 2020 at 03:01:36PM -0800, Linus Torvalds wrote:
> On Wed, Dec 9, 2020 at 2:58 PM Al Viro wrote:
> >
> > On Wed, Dec 09, 2020 at 07:49:38PM +, Matthew Wilcox wrote:
> > >
> > > Assuming this is safe, you can use RCU_INIT_POINTER() here because
On Wed, Dec 09, 2020 at 11:04:13AM -0800, Linus Torvalds wrote:
> In particular, it made it a nightmare to read what do_fault_around()
> does: it does that odd
>
> if (pmd_none(*vmf->pmd)) {
> vmf->prealloc_pte = pte_alloc_one(vmf->vma->vm_mm);
>
> and then it calls ->map_
On Wed, Dec 09, 2020 at 11:47:56AM -0800, Dan Williams wrote:
> On Tue, Dec 8, 2020 at 8:03 PM Matthew Wilcox wrote:
> > On Tue, Dec 08, 2020 at 06:22:50PM -0800, Ira Weiny wrote:
> > > Therefore, I tend to agree with Dan that if anything is to be done it
> > > should
On Wed, Dec 09, 2020 at 12:04:38PM -0600, Eric W. Biederman wrote:
> @@ -397,8 +397,9 @@ static struct fdtable *close_files(struct files_struct *
> files)
> set = fdt->open_fds[j++];
> while (set) {
> if (set & 1) {
> -
On Wed, Dec 09, 2020 at 05:55:53PM +, Christoph Hellwig wrote:
> On Wed, Dec 09, 2020 at 01:37:05PM +, Pavel Begunkov wrote:
> > Yeah, I had troubles to put comments around, and it's still open.
> >
> > For current cases it can be bound to kiocb, e.g. "if an bvec iter passed
> > "together"
On Wed, Dec 09, 2020 at 03:46:28PM +0100, Stanislaw Gruszka wrote:
> At this point of release cycle we should probably go with revert,
> but I think the main problem is that BPF and ERROR_INJECTION use
> function that is not intended to be used externally. For external users
> add_to_page_cache_lru
On Tue, Dec 08, 2020 at 06:22:50PM -0800, Ira Weiny wrote:
> Right now we have a mixed bag. zero_user() [and it's variants, circa 2008]
> does a BUG_ON.[0] While the other ones do nothing; clear_highpage(),
> clear_user_highpage(), copy_user_highpage(), and copy_highpage().
Erm, those functions
On Tue, Dec 08, 2020 at 02:45:55PM -0800, Darrick J. Wong wrote:
> On Tue, Dec 08, 2020 at 10:32:34PM +0000, Matthew Wilcox wrote:
> > On Tue, Dec 08, 2020 at 02:23:10PM -0800, Dan Williams wrote:
> > > On Tue, Dec 8, 2020 at 1:51 PM Matthew Wilcox wrote:
> > > >
&
On Tue, Dec 08, 2020 at 02:23:10PM -0800, Dan Williams wrote:
> On Tue, Dec 8, 2020 at 1:51 PM Matthew Wilcox wrote:
> >
> > On Tue, Dec 08, 2020 at 01:32:55PM -0800, Ira Weiny wrote:
> > > On Mon, Dec 07, 2020 at 03:49:55PM -0800, Dan Williams wrote:
> > >
On Tue, Dec 08, 2020 at 01:32:55PM -0800, Ira Weiny wrote:
> On Mon, Dec 07, 2020 at 03:49:55PM -0800, Dan Williams wrote:
> > On Mon, Dec 7, 2020 at 3:40 PM Matthew Wilcox wrote:
> > >
> > > On Mon, Dec 07, 2020 at 03:34:44PM -0800, Dan Williams wrote:
> > &
ns to use
folios. Eventually, we'll be able to convert some of the PageFoo flags
to be only available as FolioFoo flags.
I have a Zoom call this Friday at 18:00 UTC (13:00 Eastern,
10:00 Pacific, 03:00 Tokyo, 05:00 Sydney, 19:00 Berlin).
Meeting ID: 960 8868 8749, passcode 2097152
Feel
This is like lock_page_killable() but for use by callers who
know they have a folio. Convert __lock_page_killable() to be
__lock_folio_killable(). This saves one call to compound_head() per
contended call to lock_page_killable().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux
eparate destroy path for gigantic pages.
Thanks for catching and fixing this.
Reviewed-by: Matthew Wilcox (Oracle)
Move the declaration into mm/internal.h and rename the function to
rotate_reclaimable_folio(). This eliminates all five of the calls to
compound_head() in this function.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/swap.h | 1 -
mm/filemap.c | 2 +-
mm/internal.h
This is like lock_page() but for use by callers who know they have a folio.
Convert __lock_page() to be __lock_folio(). This saves one call to
compound_head() per contended call to lock_page().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 21 +++--
mm
t folio' that always refers to an entire
(possibly compound) page, and points to the head page (or base page).
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm.h | 5 +
include/linux/mm_types.h | 17 +
2 files changed, 22 insertions(+)
diff --git a/inc
Convert mapping_get_entry() to return a folio and convert
pagecache_get_page() to use the folio where possible. The seemingly
dangerous cast of a page pointer to a folio pointer is safe because
__page_cache_alloc() allocates an order-0 page, which is a folio by
definition.
Signed-off-by: Matthew
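Why the "seemingly dangerous" cast is legitimate can be shown in a few lines of C: when `struct folio`'s first member is a `struct page`, the two pointers share an address, so for a page known to be order-0 (a single-page folio) the cast is purely a type change. These are simplified stand-in definitions, not the kernel's.

```c
#include <stddef.h>

struct page { unsigned long flags; };
struct folio { struct page page; };

/* The cast is only sound because the page is the first member. */
_Static_assert(offsetof(struct folio, page) == 0,
               "page <-> folio casts rely on the page being first");

/* Only valid for pages known to be head (or order-0) pages. */
static struct folio *cast_page_to_folio(struct page *p)
{
    return (struct folio *)p;
}
```

A tail page cast this way would be wrong, which is why the commit message leans on `__page_cache_alloc()` returning an order-0 page.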
These new functions are the folio analogues of the PageFlags functions.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/page-flags.h | 36 +---
1 file changed, 33 insertions(+), 3 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page
If we know we have a folio, we can call get_folio() instead of get_page()
and save the overhead of calling compound_head().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm.h | 16 +---
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/include/linux/mm.h b
Convert unlock_page() to call unlock_folio(). By using a folio we avoid
a repeated call to compound_head(). This shortens the function from 120
bytes to 76 bytes.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 16 +++-
mm/filemap.c | 27
Pages being added to the page cache should already be folios, so
turn add_to_page_cache_lru() into a wrapper. Saves hundreds of
bytes of text.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 13 +++--
mm/filemap.c | 62
With my config, this function shrinks from 480 bytes to 240 bytes
due to elimination of repeated calls to compound_head().
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/filemap.c | 22 --
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/mm/filemap.c b/mm
If we know we have a folio, we can call put_folio() instead of put_page()
and save the overhead of calling compound_head(). Also skips the
devmap checks.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm.h | 15 ++-
1 file changed, 10 insertions(+), 5 deletions(-)
diff
On Tue, Dec 08, 2020 at 08:38:14AM -0800, Ira Weiny wrote:
> On Tue, Dec 08, 2020 at 12:23:16PM +0000, Matthew Wilcox wrote:
> > On Mon, Dec 07, 2020 at 02:57:03PM -0800, ira.we...@intel.com wrote:
> > > Placing these functions in 'highmem.h' is suboptimal especially
On Mon, Dec 07, 2020 at 02:57:03PM -0800, ira.we...@intel.com wrote:
> Placing these functions in 'highmem.h' is suboptimal especially with the
> changes being proposed in the functionality of kmap. From a caller
> perspective including/using 'highmem.h' implies that the functions
> defined in tha
On Mon, Dec 07, 2020 at 03:34:44PM -0800, Dan Williams wrote:
> On Mon, Dec 7, 2020 at 3:27 PM Matthew Wilcox wrote:
> >
> > On Mon, Dec 07, 2020 at 02:57:03PM -0800, ira.we...@intel.com wrote:
> > > +static inline void memcpy_page(struct page *
On Mon, Dec 07, 2020 at 02:57:03PM -0800, ira.we...@intel.com wrote:
> +static inline void memcpy_page(struct page *dst_page, size_t dst_off,
> +struct page *src_page, size_t src_off,
> +size_t len)
> +{
> + char *dst = kmap_local_page(dst
On Mon, Dec 07, 2020 at 03:09:45PM -0700, Yu Zhao wrote:
> Move scattered VM_BUG_ONs to two essential places that cover all
> lru list additions and deletions.
I'd like to see these converted into VM_BUG_ON_PGFLAGS so you have
to take that extra CONFIG step to enable checking them.
On Mon, Dec 07, 2020 at 11:17:29AM +0530, Naresh Kamboju wrote:
> While running "mkfs -t ext4" on arm64 juno-r2 device connected with SSD drive
> the following kernel warning reported on stable rc 5.9.13-rc1 kernel.
>
> Steps to reproduce:
> --
> # boot arm64 Juno-r2 device with st
On Mon, Dec 07, 2020 at 11:06:10AM +0530, Naresh Kamboju wrote:
> While booting arm64 hikey board with stable-rc 5.9.13-rc1 the following
> warning
> noticed. This is hard to reproduce.
Ugh. You've got two warnings interleaved here. This is impossible
to read. Do you have any examples where on
On Thu, Dec 03, 2020 at 09:42:54AM -0800, Andy Lutomirski wrote:
> I suspect that something much more clever could be done in which the heap is
> divided up into a few independently randomized sections and heap pages are
> randomized within the sections might do much better. There should certainl
On Wed, Dec 02, 2020 at 09:25:51PM -0800, Andy Lutomirski wrote:
> This code compiles, but I haven't even tried to boot it. The earlier
> part of the series isn't terribly interesting -- it's a handful of
> cleanups that remove all reads of ->active_mm from arch/x86. I've
> been meaning to do tha
On Tue, Dec 01, 2020 at 11:45:47PM +0200, Topi Miettinen wrote:
> + /* Randomize allocation */
> + if (randomize_vmalloc) {
> + voffset = get_random_long() & (roundup_pow_of_two(vend -
> vstart) - 1);
> + voffset = PAGE_ALIGN(voffset);
> + if (voffset +
On Thu, Dec 03, 2020 at 12:04:22AM +0530, Jeffrin Jose T wrote:
> hello,
>
>
> 2 new suspected memory leaks. See below...
You've reported this to the wrong place. It looks like the HID
driver would be the place which is leaking memory, and is probably
a better place to report it.
> ---
On Tue, Dec 01, 2020 at 06:28:45PM -0800, Dan Williams wrote:
> On Tue, Dec 1, 2020 at 12:49 PM Matthew Wilcox wrote:
> >
> > On Tue, Dec 01, 2020 at 12:42:39PM -0800, Dan Williams wrote:
> > > On Mon, Nov 30, 2020 at 6:24 PM Matthew Wilcox
> > > wrote:
> &
On Tue, Dec 01, 2020 at 12:42:39PM -0800, Dan Williams wrote:
> On Mon, Nov 30, 2020 at 6:24 PM Matthew Wilcox wrote:
> >
> > On Mon, Nov 30, 2020 at 05:20:25PM -0800, Dan Williams wrote:
> > > Kirill, Willy, compound page experts,
> > >
> > > I am se
On Mon, Nov 30, 2020 at 06:06:03PM -0500, Peter Xu wrote:
> Faulting around for reads are in most cases helpful for the performance so
> that
> continuous memory accesses may avoid another trip of page fault. However it
> may not always work as expected.
>
> For example, userfaultfd registered r
On Mon, Nov 30, 2020 at 05:20:25PM -0800, Dan Williams wrote:
> Kirill, Willy, compound page experts,
>
> I am seeking some debug ideas about the following splat:
>
> BUG: Bad page state in process lt-pmem-ns pfn:121a12
> page:51ef73f7 refcount:0 mapcount:-1024
> mapping:
On Sun, Nov 29, 2020 at 07:34:29PM +0800, Hillf Danton wrote:
> > radix_tree_next_slot include/linux/radix-tree.h:422 [inline]
> > idr_for_each+0x206/0x220 lib/idr.c:202
> > io_destroy_buffers fs/io_uring.c:8275 [inline]
>
> Matthew, can you shed any light on the link between the use of idr
> r
On Fri, Nov 27, 2020 at 11:07:07AM -0800, t...@redhat.com wrote:
> +++ b/fs/fcntl.c
> @@ -526,7 +526,7 @@ SYSCALL_DEFINE3(fcntl64, unsigned int, fd, unsigned int,
> cmd,
> (dst)->l_whence = (src)->l_whence; \
> (dst)->l_start = (src)->l_start;\
> (dst)->l_len = (src)
map occurred.
> >
> > Eric Biggers, Matthew Wilcox, Christoph Hellwig, Dan Williams, and Al
> > Viro all suggested putting this code into helper functions. Al Viro
> > further pointed out that these functions already existed in the iov_iter
> > code.[1]
> >
>
On Thu, Nov 26, 2020 at 05:23:59PM -0500, Peter Xu wrote:
> For missing mode uffds, fault around does not help because if the page cache
> existed, then the page should be there already. If the page cache is not
> there, nothing else we can do, either. If the fault-around code is destined
> to
>
On Thu, Nov 26, 2020 at 11:24:59AM -0800, Hugh Dickins wrote:
> On Thu, 26 Nov 2020, Matthew Wilcox wrote:
> > On Wed, Nov 25, 2020 at 04:11:57PM -0800, Hugh Dickins wrote:
> > > > + index =
> > > > tr
On Thu, Nov 26, 2020 at 04:19:55PM +, Marc Zyngier wrote:
> On 2020-11-26 15:57, Matthew Wilcox wrote:
> > On Thu, Nov 26, 2020 at 03:53:58PM +, David Brazdil wrote:
> > > The hypervisor starts trapping host SMCs and intercepting host's PSCI
> > > CPU_O
On Thu, Nov 26, 2020 at 03:53:58PM +, David Brazdil wrote:
> The hypervisor starts trapping host SMCs and intercepting host's PSCI
> CPU_ON/SUSPEND calls. It replaces the host's entry point with its own,
> initializes the EL2 state of the new CPU and installs the nVHE hyp vector
> before ERETin
On Thu, Nov 26, 2020 at 04:44:04PM +0100, Vlastimil Babka wrote:
> However, Matthew wanted to increase pagevec size [1] and once 15^2 becomes
> 63^2, it starts to be somewhat more worrying.
>
> [1]
> https://lore.kernel.org/linux-mm/20201105172651.2455-1-wi...@infradead.org/
Well, Tim wanted it
On Thu, Nov 26, 2020 at 02:06:19PM +0100, Peter Zijlstra wrote:
> On Thu, Nov 26, 2020 at 12:56:06PM +0000, Matthew Wilcox wrote:
> > On Thu, Nov 26, 2020 at 01:42:07PM +0100, Peter Zijlstra wrote:
> > > + pgdp = pgd_offset(mm, addr);
> > > + pgd = READ_ONCE(*pgdp);
&
On Thu, Nov 26, 2020 at 01:42:07PM +0100, Peter Zijlstra wrote:
> + pgdp = pgd_offset(mm, addr);
> + pgd = READ_ONCE(*pgdp);
I forget how x86-32-PAE maps to Linux's PGD/P4D/PUD/PMD scheme, but
according to volume 3, section 4.4.2, PAE paging uses a 64-bit PDE, so
whether a PDE is a PGD or
;
> Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Matthew Wilcox (Oracle)
On Thu, Nov 26, 2020 at 01:01:15PM +0100, Peter Zijlstra wrote:
> +#ifdef CONFIG_GUP_GET_PTE_LOW_HIGH
> +/*
> + * WARNING: only to be used in the get_user_pages_fast() implementation.
> + * With get_user_pages_fast(), we walk down the pagetables without taking any
> + * locks. For this we would li
On Thu, Nov 26, 2020 at 01:01:17PM +0100, Peter Zijlstra wrote:
> The (new) page-table walker in arch_perf_get_page_size() is broken in
> various ways. Specifically while it is used in a lockless manner, it
> doesn't depend on CONFIG_HAVE_FAST_GUP nor uses the proper _lockless
> offset methods, nor
On Wed, Nov 25, 2020 at 04:11:57PM -0800, Hugh Dickins wrote:
> The little fix definitely needed was shown by generic/083: each
> fsstress waiting for page lock, happens even without forcing huge
> pages. See below...
Huh ... I need to look into why my xfstests run is skipping generic/083:
0006 g
On Wed, Nov 25, 2020 at 01:34:04PM +0100, Vlastimil Babka wrote:
> On 11/25/20 4:46 AM, Matthew Wilcox (Oracle) wrote:
> > Code outside mm/ should not be calling free_unref_page(). Also
> > move free_unref_page_list().
>
> Good idea.
>
> > Signed-off-by: Matthew
On Wed, Nov 25, 2020 at 09:43:15AM +0100, David Hildenbrand wrote:
> On 25.11.20 04:46, Matthew Wilcox (Oracle) wrote:
> > The page has just been allocated, so its refcount is 1. free_unref_page()
> > is for use on pages which have a zero refcount. Use __free_page()
>
The page has just been allocated, so its refcount is 1. free_unref_page()
is for use on pages which have a zero refcount. Use __free_page()
like the other implementations of pte_alloc_one().
Fixes: 1ae9ae5f7df7 ("sparc: handle pgtable_page_ctor() fail")
Signed-off-by: Matthew Wilc
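The bug class this fixes can be modelled in userspace C: a freshly allocated page carries one reference, so handing it to a free routine that assumes refcount 0 (like free_unref_page()) is wrong, while a __free_page()-style helper drops the final reference first. All names below are simplified stand-ins, not the kernel's.

```c
#include <stddef.h>

struct page { int refcount; };

static int bad_frees;   /* counts misuse of the refcount-0 path */

/* Allocation hands back exactly one reference. */
static struct page *alloc_page_model(void)
{
    static struct page p;
    p.refcount = 1;
    return &p;
}

/* Expects every reference to already be gone; the real
 * free_unref_page() would corrupt the per-cpu free list here. */
static void free_unref_page_model(struct page *p)
{
    if (p->refcount != 0)
        bad_frees++;
}

/* Drops the final reference first: correct for a new page,
 * matching what __free_page() does. */
static void free_page_model(struct page *p)
{
    p->refcount--;
    /* page would be returned to the allocator here */
}
```

The fix is simply to use the helper whose refcount expectations match the state of the page being freed.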
Code outside mm/ should not be calling free_unref_page(). Also
move free_unref_page_list().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/gfp.h | 2 --
mm/internal.h | 3 +++
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/include/linux/gfp.h b/include/linux
On Tue, Nov 17, 2020 at 11:43:02PM +, Matthew Wilcox wrote:
> On Tue, Nov 17, 2020 at 07:15:13PM +0000, Matthew Wilcox wrote:
> > I find both of these functions exceptionally confusing. Does this
> > make it easier to understand?
>
> Never mind, this is buggy. I
On Tue, Nov 24, 2020 at 06:16:28PM +0100, Sebastian Andrzej Siewior wrote:
> On 2020-11-24 18:52:44 [+0530], Naresh Kamboju wrote:
> > While running LTP test case access01 the following kernel BUG
> > noticed on linux next 20201124 tag kernel on i386.
> >
> > git short log:
> >
>
On Tue, Nov 24, 2020 at 11:21:13AM -0800, Ira Weiny wrote:
> On Tue, Nov 24, 2020 at 02:19:41PM +0000, Matthew Wilcox wrote:
> > On Mon, Nov 23, 2020 at 10:07:39PM -0800, ira.we...@intel.com wrote:
> > > +static inline void memzero_page(struct page *page, size_t offset,
On Tue, Nov 24, 2020 at 11:00:42AM -0800, Linus Torvalds wrote:
> On Tue, Nov 24, 2020 at 10:33 AM Matthew Wilcox wrote:
> >
> > We could fix this by turning that 'if' into a 'while' in
> > write_cache_pages().
>
> That might be the simplest patch
On Tue, Nov 24, 2020 at 08:28:16AM -0800, Hugh Dickins wrote:
> On Tue, 24 Nov 2020, Matthew Wilcox wrote:
> > On Mon, Nov 23, 2020 at 08:07:24PM -0800, Hugh Dickins wrote:
> > >
> > > Then on crashing a second time, realized there's a stronger reason against
>
On Mon, Nov 23, 2020 at 10:49:38PM -0800, Chris Goldsworthy wrote:
> +static void __evict_bh_lru(void *arg)
> +{
> + struct bh_lru *b = &get_cpu_var(bh_lrus);
> + struct buffer_head *bh = arg;
> + int i;
> +
> + for (i = 0; i < BH_LRU_SIZE; i++) {
> + if (b->bhs[i] == bh
On Mon, Nov 23, 2020 at 10:07:39PM -0800, ira.we...@intel.com wrote:
> +static inline void memzero_page(struct page *page, size_t offset, size_t len)
> +{
> + memset_page(page, 0, offset, len);
> +}
This is a less-capable zero_user_segments().
ache, freed, and reused for something else by the time that
> wake_up_page() is reached.
>
> https://lore.kernel.org/linux-mm/20200827122019.gc14...@casper.infradead.org/
> Matthew Wilcox suggested avoiding or weakening the PageWaiters() tail
> check; but I'm paranoid about even loo
On Tue, Nov 24, 2020 at 11:07:41AM +0100, Thorsten Leemhuis wrote:
> There is nothing special with this text, it's just that GPL is known to not
> be really ideal for documentation. That makes it hard for people to reuse
> parts of the docs outside of the kernel context, say in books or on
> websit
On Mon, Nov 23, 2020 at 07:42:30PM -0800, Andrew Morton wrote:
> Matthew's series "Overhaul multi-page lookups for THP" changes the
> shmem code quite a bit, and in the area of truncate. Matthew, could
> you please fire up that reproducer?
Almost certainly my fault. I was trying to get the shmem