From: Kent Overstreet
This converts from seq_buf to printbuf. Here we're using printbuf with
an external buffer, meaning it's a direct conversion.
Signed-off-by: Kent Overstreet
Cc: Dan Williams
Cc: Dave Hansen
Cc: nvd...@lists.linux.dev
---
tools/testing/nvdimm/test/ndtest.c | 22 ++
Displaying two registers per line takes 15 lines. That improves to just
10 lines if we display three registers per line, which reduces the amount
of information lost when oopses are cut off. It stays within 80 columns
and matches x86-64.
Signed-off-by: Matthew Wilcox (Oracle)
---
arch/arm64
Now that compound_head() accepts a const struct page pointer, these two
functions can be marked as not modifying the page pointer they are passed.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/page_ref.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/include
The struct page is not modified by these routines, so it can be marked
const.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pageblock-flags.h | 2 +-
mm/page_alloc.c | 13 +++--
2 files changed, 8 insertions(+), 7 deletions(-)
diff --git a/include/linux
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/page-flags.h | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 04a34c08e0a6..d8e26243db25 100644
--- a/include/linux/page-flags.h
+++ b/include/li
dump_page_owner() only uses struct page to find the page_ext, and
lookup_page_ext() already takes a const argument.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/page_owner.h | 6 +++---
mm/page_owner.c | 2 +-
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a
Move the PagePoisoned test into dump_page(). Skip the hex print
for poisoned pages -- we know they're full of ffffffff. Move the
reason printing from __dump_page() to dump_page().
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/debug.c | 25 +++--
1 file chang
The only caller of __dump_page() now opencodes dump_page(), so
remove it as an externally visible symbol.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mmdebug.h | 3 +--
mm/debug.c | 2 +-
mm/page_alloc.c | 3 +--
3 files changed, 3 insertions(+), 5 deletions
ed to work on, but I offer these patches as a few
steps towards being able to make dump_page() take a const page pointer.
Matthew Wilcox (Oracle) (6):
mm: Make __dump_page static
mm/debug: Factor PagePoisoned out of __dump_page
mm/page_owner: Constify dump_page_owner
mm: Make compound_head c
loc().
Since page_pool doesn't want to set its magic value on pages which are
pfmemalloc, we can use bit 1 of compound_head to indicate that the page
came from the memory reserves.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm.h | 12 +++-
include/linux/mm_type
d a racing get_user_pages_fast()
could dereference a bogus compound_head().
Fixes: c25fff7171be ("mm: add dma_addr_t to struct page")
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm_types.h | 4 ++--
include/net/page_pool.h | 12 +++-
net/core/page_pool.c | 12 +
s new functionality. It is much less urgent.
I'd really like to see Mel & Michal's thoughts on it.
I have only compile-tested these patches.
Matthew Wilcox (Oracle) (2):
mm: Fix struct page layout on 32-bit systems
mm: Indicate pfmemalloc pages in compound_head
include/li
c8 add(%rax,%rcx,8),%rbx
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
include/linux/mm.h | 4
1 file changed, 4 insertions(+)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 25b9041f9925..2327f99b121f 100644
--- a/include/linux/mm.h
+++ b
be cleared on free. To avoid
this, insert three words of padding and use the same bits as ->index
and ->private, neither of which have to be cleared on free.
Fixes: c25fff7171be ("mm: add dma_addr_t to struct page")
Signed-off-by: Matthew Wilco
I'd really appreciate people testing this, particularly on
arm32/mips32/ppc32 systems with a 64-bit dma_addr_t.
Matthew Wilcox (Oracle) (1):
mm: Fix struct page layout on 32-bit systems
include/linux/mm_types.h | 38 ++
1 file changed, 26 insertions(+
Reinforce that if we're waiting for a bit in a struct page, that's
actually in the head page by changing the type from page to folio.
Increases the size of cachefiles by two bytes, but the kernel core
is unchanged in size.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christo
All callers have a folio, so use it directly.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
---
mm/filemap.c | 23 ---
1 file changed, 12 insertions(+), 11 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index
We must always wait on the folio, otherwise we won't be woken up.
This commit shrinks the kernel by 691 bytes, mostly due to moving
the page waitqueue lookup into wait_on_folio_bit_common().
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Jeff L
Move wait_for_stable_page() into the folio compatibility file.
wait_for_stable_folio() avoids a call to compound_head() and is 14 bytes
smaller than wait_for_stable_page() was. The net text size grows by 24
bytes as a result of this patch.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by
compound_head() which saves 8 bytes and 15 bytes in the two functions.
That is more than offset by adding the wait_on_page_writeback
compatibility wrapper for a net increase in text of 15 bytes.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
---
fs/afs/write.c
net
saving of 70 bytes.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
---
include/linux/pagemap.h | 3 ++-
mm/filemap.c | 38 +++---
mm/folio-compat.c | 6 ++
3 files changed, 27 insertions
Also add wait_on_folio_locked_killable(). Turn wait_on_page_locked()
and wait_on_page_locked_killable() into wrappers. This eliminates a
call to compound_head() from each call-site, reducing text size by 200
bytes for me.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
---
include/linux/pagemap.h | 9 ++---
mm/filemap.c | 10 --
mm/memory.c | 8
3 files changed, 14 insertions(+), 13 deletions(-)
diff --git a/include/linux/pagemap.h b/include
es to 403 bytes, saving 111 bytes. The text
shrinks by 132 bytes in total.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
---
fs/io_uring.c | 2 +-
include/linux/pagemap.h | 17 -
mm/filemap.c
__lock_page_killable()
was. lock_page_maybe_drop_mmap() shrinks by 68 bytes and
__lock_page_or_retry() shrinks by 66 bytes. That's a total of 154 bytes
of text saved.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
---
include/linux/pagemap.h
. __lock_folio is 59 bytes while __lock_page was 79.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
---
include/linux/pagemap.h | 24 +++-
mm/filemap.c | 29 +++--
2 files changed, 34 insertions(+), 19
t any path that uses unlock_folio() will execute
4 fewer instructions.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
---
include/linux/pagemap.h | 3 ++-
mm/filemap.c | 27 ++-
mm/folio-compat.c | 6
Add new wrapper functions folio_memcg(), lock_folio_memcg(),
unlock_folio_memcg(), mem_cgroup_folio_lruvec() and
count_memcg_folio_event()
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
---
include/linux/memcontrol.h | 30
This is the folio equivalent of page_mapcount().
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
---
include/linux/mm.h | 16
1 file changed, 16 insertions(+)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 143b354c3f4a
ntire sequence will disappear.
Also add folio_mapping() documentation.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
---
Documentation/core-api/mm-api.rst | 2 ++
include/linux/mm.h | 14 -
include/linux/page
These are just wrappers around their page counterpart.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
---
include/linux/pagemap.h | 10 ++
1 file changed, 10 insertions(+)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
This helper returns the page index of the next folio in the file (ie
the end of this folio, plus one).
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
---
include/linux/pagemap.h | 11 +++
1 file changed, 11 insertions(+)
diff --git a
folio_index() is the equivalent of page_index() for folios.
folio_file_page() is the equivalent of find_subpage().
folio_contains() is the equivalent of thp_contains().
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
---
include/linux/pagemap.h | 53
d() in get_page()
& put_page().
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
---
include/linux/mm_types.h | 16 ++
include/linux/pagemap.h | 48
2 files changed, 45 insertions(+), 19 deletions(-)
d
saves 1727 bytes of text with the distro-derived config that
I'm testing due to removing a double call to compound_head() in
PageSwapCache().
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
---
include/linux/page-flags.h
If we know we have a folio, we can call get_folio() instead
of get_page() and save the overhead of calling compound_head().
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Zi Yan
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
---
include/linux/mm.h | 26
Some functions
grow a little while others shrink. I presume the compiler is making
different inlining decisions.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Zi Yan
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
---
include/linux/mm.h | 33 -
1 file ch
These functions mirror their page reference counterparts.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
---
Documentation/core-api/mm-api.rst | 1 +
include/linux/page_ref.h | 88 ++-
2 files changed, 88
These are the folio equivalents of VM_BUG_ON_PAGE and VM_WARN_ON_ONCE_PAGE.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Zi Yan
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
---
include/linux/mmdebug.h | 20
1 file changed, 20 insertions(+)
diff --git a
Allow page counters to be more readily modified by callers which have
a folio. Name these wrappers with 'stat' instead of 'state' as requested
by Linus here:
https://lore.kernel.org/linux-mm/CAHk-=wj847sudr-kt+46ft3+xffgiwpgthvm7djwgdi4cvr...@mail.gmail.com/
Signed-off-by: Ma
These are just convenience wrappers for callers with folios; pgdat and
zone can be reached from tail pages as well as head pages.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Zi Yan
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
---
include/linux/mm.h | 10 ++
1 file
a tail page.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Jeff Layton
---
include/linux/mm.h | 74 +
include/linux/mm_types.h | 80
2 files changed, 154 insertions(+)
diff --git a/include/linux/mm.h b
c8 add(%rax,%rcx,8),%rbx
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm.h | 4
1 file changed, 4 insertions(+)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index b58c73e50da0..036f63a44a5c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2
age stats in units of pages instead of units
of folios (Zi Yan)
v3:
- Rebase on next-20210127. Two major sources of conflict, the
generic_file_buffered_read refactoring (in akpm tree) and the
fscache work (in dhowells tree).
v2:
- Pare patch series back to just infrastructure and the page
Now that all users have been converted, require the split_lock parameter
be passed to bit_spin_lock(), bit_spin_unlock() and variants. Use it
to track the lockdep state of each lock.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/bit_spinlock.h | 26 ++
1 file
NeilBrown noticed the same problem with bit spinlocks that I did,
but chose to solve it locally in the rhashtable implementation rather
than lift it all the way to the bit spin lock implementation. Convert
rhashtables to use split_locks.
Signed-off-by: Matthew Wilcox (Oracle)
Cc: NeilBrown
Allow lockdep to track zsmalloc's pin bit spin lock.
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/zsmalloc.c | 11 ---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 9a7c91c14b84..9d89a1857901 100644
--- a/mm/zsmalloc.c
+++
Allow lockdep to track slub's page bit spin lock.
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/slub.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 9c0e26ddf300..2ed2abe080ac 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -346,19 +3
Allow lockdep to track the journal bit spin locks.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/jbd2/journal.c | 18 ++
include/linux/jbd2.h | 10 ++
2 files changed, 16 insertions(+), 12 deletions(-)
diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
index
Allow lockdep to track the zram bit spin locks.
Signed-off-by: Matthew Wilcox (Oracle)
---
drivers/block/zram/zram_drv.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index cf8deecc39ef..8b678cc6ed21
Allow lockdep to track the airq bit spin locks.
Signed-off-by: Matthew Wilcox (Oracle)
---
arch/s390/include/asm/airq.h | 5 +++--
drivers/s390/cio/airq.c | 3 +++
2 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/s390/include/asm/airq.h b/arch/s390/include/asm/airq.h
Now that all users have been converted, require the split_lock parameter
be passed to hlist_bl_lock() and hlist_bl_unlock().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/list_bl.h | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/include/linux/list_bl.h b
Allow lockdep to track the mbcache hash bit spin locks.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/mbcache.c | 25 +
1 file changed, 13 insertions(+), 12 deletions(-)
diff --git a/fs/mbcache.c b/fs/mbcache.c
index 97c54d3a2227..4ce03ea348dd 100644
--- a/fs/mbcache.c
Allow lockdep to track the hash bit spin locks.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/gfs2/quota.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
index 9b1aca7e1264..a933eb441ee9 100644
--- a/fs/gfs2/quota.c
+++ b/fs/gfs2
Allow lockdep to track the fscache cookie hash bit spin locks.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/fscache/cookie.c | 13 +++--
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/fs/fscache/cookie.c b/fs/fscache/cookie.c
index 751bc5b1cddf..65d514d12592 100644
--- a
Allow lockdep to track the d_hash bit spin locks.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/dcache.c | 25 +
1 file changed, 13 insertions(+), 12 deletions(-)
diff --git a/fs/dcache.c b/fs/dcache.c
index 7d24ff7eb206..a3861d330001 100644
--- a/fs/dcache.c
+++ b/fs
Allow lockdep to track the dm-snap bit spin locks.
Signed-off-by: Matthew Wilcox (Oracle)
---
drivers/md/dm-snap.c | 10 ++
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c
index 8f3ad87e6117..4c2a01e433de 100644
--- a/drivers/md
Make hlist_bl_lock() and hlist_bl_unlock() variadic to help with the
transition. Also add hlist_bl_lock_nested().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/list_bl.h | 11 +--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/include/linux/list_bl.h b/include
Make bit_spin_lock() and variants variadic to help with the transition.
The split_lock parameter will become mandatory at the end of the series.
Also add bit_spin_lock_nested() and bit_spin_unlock_assign() which will
both be used by the rhashtable code later.
Signed-off-by: Matthew Wilcox (Oracle
Bitlocks do not currently participate in lockdep. Conceptually, a
bit_spinlock is a split lock, eg across each bucket in a hash table.
The struct split_lock gives us somewhere to record the lockdep_map.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/split_lock.h | 37
I want to use split_lock_init() for a global symbol, so rename this
local one.
Signed-off-by: Matthew Wilcox (Oracle)
---
arch/x86/kernel/cpu/intel.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index
pendencies.
This split_lock would also give us somewhere to queue waiters, should we
choose to do that. Or a centralised place to handle PREEMPT_RT mutexes.
But I'll leave that for someone who knows what they're doing; for now
this keeps the same implementation.
Matthew Wilcox (Oracle)
he current cpu_relax()
implementation intact for now.
The API change breaks all users except for the two which have been
converted. This is an RFC, and I'm willing to fix all the rest.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/dcache.c | 25 ++--
Use kvcalloc or kvmalloc_array instead (depending whether zeroing is
useful).
Signed-off-by: Matthew Wilcox (Oracle)
---
drivers/md/dm-snap-persistent.c | 6 +++---
drivers/md/dm-snap.c | 5 +++--
drivers/md/dm-table.c | 30 ++
include/linux
Reinforce that if we're waiting for a bit in a struct page, that's
actually in the head page by changing the type from page to folio.
Increases the size of cachefiles by two bytes, but the kernel core
is unchanged in size.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/cachefiles/rdwr
All callers have a folio, so use it directly.
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/filemap.c | 20 ++--
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 76e1c4be1205..51b2091d402c 100644
--- a/mm/filemap.c
+++ b/mm
net
saving of 70 bytes.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 3 ++-
mm/filemap.c | 38 +++---
mm/folio-compat.c | 6 ++
3 files changed, 27 insertions(+), 20 deletions(-)
diff --git a/include/linux/pagemap.h b
We must always wait on the folio, otherwise we won't be woken up.
This commit shrinks the kernel by 691 bytes, mostly due to moving
the page waitqueue lookup into wait_on_folio_bit_common().
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/afs/write.c | 2 +-
include/linux/ne
Move wait_for_stable_page() into the folio compatibility file.
wait_for_stable_folio() avoids a call to compound_head() and is 14 bytes
smaller than wait_for_stable_page() was. The net text size grows by 24
bytes as a result of this patch.
Signed-off-by: Matthew Wilcox (Oracle)
---
include
. __lock_folio is 59 bytes while __lock_page was 79.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 24 +++-
mm/filemap.c | 29 +++--
2 files changed, 34 insertions(+), 19 deletions(-)
diff --git a/include/linux/pagemap.h b/include
compound_head() which saves 8 bytes and 15 bytes in the two functions.
That is more than offset by adding the wait_on_page_writeback
compatibility wrapper for a net increase in text of 15 bytes.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/afs/write.c | 5 +++--
include/linux/pagemap.h | 3
Also add wait_on_folio_locked_killable(). Turn wait_on_page_locked()
and wait_on_page_locked_killable() into wrappers. This eliminates a
call to compound_head() from each call-site, reducing text size by 200
bytes for me.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 26
es to 403 bytes, saving 111 bytes. The text
shrinks by 132 bytes in total.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/io_uring.c | 2 +-
include/linux/pagemap.h | 17 -
mm/filemap.c | 31 ---
3 files changed, 17 insertions(+
__lock_page_killable()
was. lock_page_maybe_drop_mmap() shrinks by 68 bytes and
__lock_page_or_retry() shrinks by 66 bytes. That's a total of 154 bytes
of text saved.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 15 ++-
mm/filemap.c | 17 +
2
t any path that uses unlock_folio() will execute
4 fewer instructions.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 3 ++-
mm/filemap.c | 27 ++-
mm/folio-compat.c | 6 ++
3 files changed, 18 insertions(+), 18 deletions(-)
Add new wrapper functions folio_memcg(), lock_folio_memcg(),
unlock_folio_memcg(), mem_cgroup_folio_lruvec() and
count_memcg_folio_event()
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/memcontrol.h | 30 ++
1 file changed, 30 insertions(+)
diff --git a
: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 9 ++---
mm/filemap.c | 10 --
mm/memory.c | 8
3 files changed, 14 insertions(+), 13 deletions(-)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 054e9dd7628e..43664bef7392 100644
This is the folio equivalent of page_mapcount().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm.h | 16
1 file changed, 16 insertions(+)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index a4f2818aeb1d..fc15a256e686 100644
--- a/include/linux/mm.h
+++ b
ntire sequence will disappear.
Also add folio_mapping() documentation.
Signed-off-by: Matthew Wilcox (Oracle)
---
Documentation/core-api/mm-api.rst | 2 ++
include/linux/mm.h | 14 -
include/linux/pagemap.h | 35 +--
include/linux/s
These are just wrappers around their page counterpart.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 10 ++
1 file changed, 10 insertions(+)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 3aefe6558f7d..b4570422a691 100644
--- a/include/linux
folio_index() is the equivalent of page_index() for folios.
folio_file_page() is the equivalent of find_subpage().
folio_contains() is the equivalent of thp_contains().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 53 +
1 file
This helper returns the page index of the next folio in the file (ie
the end of this folio, plus one).
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/pagemap.h | 11 +++
1 file changed, 11 insertions(+)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index
d() in get_page()
& put_page().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm_types.h | 16 ++
include/linux/pagemap.h | 48
2 files changed, 45 insertions(+), 19 deletions(-)
diff --git a/include/linux/mm_types.h b/include/lin
saves 1727 bytes of text with the distro-derived config that
I'm testing due to removing a double call to compound_head() in
PageSwapCache().
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/page-flags.h | 130 ++---
1 file changed, 107 insertions(+
If we know we have a folio, we can call get_folio() instead
of get_page() and save the overhead of calling compound_head().
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Zi Yan
---
include/linux/mm.h | 26 +-
1 file changed, 17 insertions(+), 9 deletions(-)
diff
Some functions
grow a little while others shrink. I presume the compiler is making
different inlining decisions.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Zi Yan
---
include/linux/mm.h | 33 -
1 file changed, 28 insertions(+), 5 deletions(-)
diff --
These functions mirror their page reference counterparts.
Signed-off-by: Matthew Wilcox (Oracle)
---
Documentation/core-api/mm-api.rst | 1 +
include/linux/page_ref.h | 88 ++-
2 files changed, 88 insertions(+), 1 deletion(-)
diff --git a/Documentation
These are the folio equivalents of VM_BUG_ON_PAGE and VM_WARN_ON_ONCE_PAGE.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Zi Yan
---
include/linux/mmdebug.h | 20
1 file changed, 20 insertions(+)
diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index
Allow page counters to be more readily modified by callers which have
a folio. Name these wrappers with 'stat' instead of 'state' as requested
by Linus here:
https://lore.kernel.org/linux-mm/CAHk-=wj847sudr-kt+46ft3+xffgiwpgthvm7djwgdi4cvr...@mail.gmail.com/
Signed-off-by: Ma
These are just convenience wrappers for callers with folios; pgdat and
zone can be reached from tail pages as well as head pages.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Zi Yan
---
include/linux/mm.h | 10 ++
1 file changed, 10 insertions(+)
diff --git a/include/linux/mm.h
its of pages instead of units
of folios (Zi Yan)
v3:
- Rebase on next-20210127. Two major sources of conflict, the
generic_file_buffered_read refactoring (in akpm tree) and the
fscache work (in dhowells tree).
v2:
- Pare patch series back to just infrastructure and the page waiting
parts.
Matthe
a tail page.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm.h | 78
include/linux/mm_types.h | 65 +
mm/util.c | 19 ++
3 files changed, 162 insertions(+)
diff --git a/include
.
Signed-off-by: Matthew Wilcox (Oracle)
---
MAINTAINERS | 7 +++
arch/arm64/kernel/module.c| 3 +--
arch/arm64/net/bpf_jit_comp.c | 3 +--
arch/parisc/kernel/module.c | 5 ++---
arch/x86/hyperv/hv_init.c | 3 +--
5 files changed, 12 insertions(+), 9 deletions(-)
diff
ation speed of vmalloc(4MB) by approximately
5% in our benchmark. It's still dominated by the 1024 calls to
alloc_pages_node(), which will be the subject of a later patch.
Signed-off-by: Matthew Wilcox (Oracle)
---
mm/vmalloc.c | 7 +--
1 file changed, 1 insertion(+), 6 deletions(-)
diff
Allow the caller of kvmalloc to specify who counts as the allocator
of the memory instead of assuming it's the immediate caller.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm.h | 4 +++-
include/linux/slab.h | 2 ++
mm/util.c
4 and powerpc32 builds
as well as x86, but I wouldn't be surprised if the buildbots tell me I
missed something.
Matthew Wilcox (Oracle) (4):
mm/vmalloc: Change the 'caller' type to unsigned long
mm/util: Add kvmalloc_node_caller
mm/vmalloc: Use kvmalloc to allocate the table of
explicit function name.
Signed-off-by: Matthew Wilcox (Oracle)
---
arch/arm/include/asm/io.h | 6 +--
arch/arm/include/asm/mach/map.h | 3 --
arch/arm/kernel/module.c | 4 +-
arch/arm/mach-imx/mm-imx3.c | 2 +-
arch/arm/mach-ixp4xx/common.c
If we're trying to allocate 4MB of memory, the table will be 8KiB in size
(1024 pointers * 8 bytes per pointer), which can usually be satisfied
by a kmalloc (which is significantly faster). Instead of changing this
open-coded implementation, just use kvmalloc().
Signed-off-by: Matthew W
Allow the caller of kvmalloc to specify who counts as the allocator
of the memory instead of assuming it's the immediate caller.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/mm.h | 4 +++-
include/linux/slab.h | 2 ++
mm/util.c
Implement readahead_batch_length() to determine the number of bytes in
the current batch of readahead pages and use it in btrfs.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/btrfs/extent_io.c| 6 ++
include/linux/pagemap.h | 9 +
2 files changed, 11 insertions(+), 4 deletions