> > I understand that having an extra set of page tables could potentially
> > waste memory, especially if VAs are sparse, but in this case we use
> > page tables exclusively for contiguous VA space (copy [src, src +
> > size]). Therefore, the extra memory usage is tiny. The ratio for
> > kernels w
> >
> > Stephen, do you want to send a new patch based on the current
> > linux-next, or do you want me to send an updated version?
>
> I'll send another one and include it in linux-next today.
I appreciate it.
Pasha
Stephen, do you want to send a new patch based on the current
linux-next, or do you want me to send an updated version?
Thank you,
Pasha
On Wed, Feb 3, 2021 at 5:36 PM Pavel Tatashin wrote:
>
> > > After the most recent build errors, I tried to apply Pavel's patch
>
> > After the most recent build errors, I tried to apply Pavel's patch
> >
> > https://lore.kernel.org/linux-mm/CA+CK2bBjC8=crsl5vhwkcevpsqsxwhsanvjsfnmerlt8vwt...@mail.gmail.com/
> > but patch said that it was already applied (by Andrew I think),
> > so I bailed out (gave up).
>
> As far as I c
On Wed, Feb 3, 2021 at 8:23 AM Joao Martins wrote:
>
> On 1/25/21 7:47 PM, Pavel Tatashin wrote:
> > When pages are isolated in check_and_migrate_movable_pages() we skip
> > a compound number of pages at a time. However, as Jason noted, it is
> > not necessarily correct that
On Tue, Feb 2, 2021 at 3:42 AM Christoph Hellwig wrote:
>
> On Mon, Feb 01, 2021 at 10:03:06AM -0500, Pavel Tatashin wrote:
> > Two new warnings are reported by sparse:
> >
> > "sparse warnings: (new ones prefixed by >>)"
> > >> arch/arm6
The same problem as fixed here:
https://lore.kernel.org/linux-mm/CA+CK2bBjC8=crsl5vhwkcevpsqsxwhsanvjsfnmerlt8vwt...@mail.gmail.com/
Thank you,
Pasha
On Tue, Feb 2, 2021 at 9:32 AM Naresh Kamboju wrote:
>
> Linux next tag 20210202 arm, riscv and sh builds with allnoconfig and
> tinyconfig failed
Hi Geert,
The fix is here:
https://lore.kernel.org/linux-mm/CA+CK2bBjC8=crsl5vhwkcevpsqsxwhsanvjsfnmerlt8vwt...@mail.gmail.com/
Thank you,
Pasha
On Tue, Feb 2, 2021 at 5:35 AM Geert Uytterhoeven wrote:
>
> On Tue, Feb 2, 2021 at 10:13 AM Stephen Rothwell
> wrote:
> > After merging the akpm-cu
Hi Andrew,
Should I send new patches or is the update for this patch sufficient?
Here is updated patch:
>From 9fb856f3a5cfda18a4b84e81dfb0266bee4a4ea6 Mon Sep 17 00:00:00 2001
From: Pavel Tatashin
Date: Mon, 18 Jan 2021 17:35:18 -0500
Subject: [PATCH v9 08/14] mm/gup: do not migrate zero p
Hi James,
> The problem I see with this is rewriting the relocation code. It needs to
> work whether the
> machine has enough memory to enable the MMU during kexec, or not.
>
> In off-list mail to Pavel I proposed an alternative implementation here:
> https://gitlab.arm.com/linux-arm/linux-jm/-/t
transient, we retry indefinitely.
Fixes: 9a4e9f3b2d73 ("mm: update get_user_pages_longterm to migrate pages
allocated from CMA region")
Signed-off-by: Pavel Tatashin
Reviewed-by: Jason Gunthorpe
---
mm/gup.c | 60
1 file c
When pages are longterm pinned, we must migrate them out of the movable
zone. The function that migrates them has a hidden loop with a goto. The
loop retries on isolation failures, and runs again after a successful
migration. Make this code better by moving the loop to the caller.
Signed-off-by: Pavel
Document the special handling of page pinning when ZONE_MOVABLE present.
Signed-off-by: Pavel Tatashin
Suggested-by: David Hildenbrand
Acked-by: Michal Hocko
---
Documentation/admin-guide/mm/memory-hotplug.rst | 9 +
1 file changed, 9 insertions(+)
diff --git a/Documentation/admin
We should not pin pages in ZONE_MOVABLE. Currently, we avoid pinning only
movable CMA pages. Generalize the function that migrates CMA pages to
migrate all movable pages. Use is_pinnable_page() to check which
pages need to be migrated.
Signed-off-by: Pavel Tatashin
Reviewed-by: John Hubbard
difference between pin vs no-pin case.
Also change type of nr from int to long, as it counts number of pages.
Signed-off-by: Pavel Tatashin
Reviewed-by: John Hubbard
---
mm/gup_test.c | 23 ++-
mm/gup_test.h | 3 ++-
too
In __get_user_pages_locked(), i counts the number of pages and should be
long, as long is used in all other places to hold a number of pages, and
a 32-bit type is increasingly too small for values proportional to page
counts.
Signed-off-by: Pavel Tatashin
Acked-by: Michal Hocko
---
mm/gup.c | 2
ks between doing gup/pup on pages that have been
pre-faulted in from user space, vs. doing gup/pup on pages that are not
faulted in until gup/pup time (via FOLL_TOUCH). This decision is
controlled with the new -z command line option.
Signed-off-by: Pavel Tatashin
Reviewed-by: John Hubbard
---
mm/
ation
context which can only get pages suitable for long-term pins.
Also re-name:
memalloc_nocma_save()/memalloc_nocma_restore()
to
memalloc_pin_save()/memalloc_pin_restore()
and make the new functions common.
Signed-off-by: Pavel Tatashin
Reviewed-by: John Hubbard
Acked-by: Michal Hocko
---
in
When we add the zone constraint logic to current_gfp_context(), we will
be able to remove the current->flags load from current_alloc_flags, and
therefore return the fast path to the current performance level.
Suggested-by: Michal Hocko
Signed-off-by: Pavel Tatashin
Acked-by: Michal Hocko
---
mm/page_all
is declared in memory.c which is compiled
with CONFIG_MMU.
Signed-off-by: Pavel Tatashin
---
include/linux/mm.h | 3 ++-
include/linux/mmzone.h | 4
include/linux/pgtable.h | 3 +--
3 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
ZONE_MOVABLE and not of MIGRATE_CMA type.
Signed-off-by: Pavel Tatashin
Acked-by: Michal Hocko
---
include/linux/mm.h | 11 +++
include/linux/sched/mm.h | 6 +-
mm/hugetlb.c | 2 +-
mm/page_alloc.c | 20 +---
4 files changed, 26 insertions
the addresses.
The resulting pages[i] might end up containing pages which are not
aligned to the compound page size.
Fixes: aa712399c1e8 ("mm/gup: speed up check_and_migrate_cma_pages() on huge
page")
Reported-by: Jason Gunthorpe
Signed-off-by: Pavel Tatashin
Reviewed-by: Jason Gunthorpe
---
mm/
-by: Pavel Tatashin
Reviewed-by: Jason Gunthorpe
---
mm/gup.c | 17 +++--
1 file changed, 7 insertions(+), 10 deletions(-)
diff --git a/mm/gup.c b/mm/gup.c
index 16f10d5a9eb6..88ce41f41543 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1557,7 +1557,6 @@ static long
1-pasha.tatas...@soleen.com
v8
https://lore.kernel.org/lkml/20210125194751.1275316-1-pasha.tatas...@soleen.com
Pavel Tatashin (14):
mm/gup: don't pin migrated cma pages in movable zone
mm/gup: check every subpage of a compound page during isolation
mm/gup: return an error on migration failure
mm
In order not to fragment CMA, the pinned pages are migrated. However,
they are migrated to ZONE_MOVABLE, which also should not contain pinned pages.
Remove __GFP_MOVABLE, so pages can be migrated to zones where pinning
is allowed.
Signed-off-by: Pavel Tatashin
Reviewed-by: David Hildenbrand
__bitwise type attribute and requires __force added to casting
in order to avoid these warnings.
Fixes: 50f53fb72181 ("arm64: trans_pgd: make trans_pgd_map_page generic")
Reported-by: kernel test robot
Signed-off-by: Pavel Tatashin
---
arch/arm64/kernel/hibernate.c | 4 ++--
1 file
This is against for-next/kexec, fix for sparse warning that was reported by
kernel test robot [1].
[1] https://lore.kernel.org/linux-arm-kernel/202101292143.c6tckvvx-...@intel.com
Pavel Tatashin (1):
arm64: hibernate: add __force attribute to gfp_t casting
arch/arm64/kernel/hibernate.c | 4
On Sun, Jan 31, 2021 at 8:09 AM Lecopzer Chen wrote:
>
>
> Hi,
>
> [...]
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index c93e801a45e9..3f17c73ad582 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -3807,16 +3807,13 @@ alloc_flags_nofragment(struct zone *zone, gfp_t
>
On Fri, Jan 29, 2021 at 2:12 PM Pavel Tatashin
wrote:
>
> On Fri, Jan 29, 2021 at 2:06 PM Pavel Tatashin
> wrote:
> >
> > > > Definitely, but we should try figuring out what's going on here. I
> > > > assume on x86-64 it behaves differently?
>
On Fri, Jan 29, 2021 at 2:06 PM Pavel Tatashin
wrote:
>
> > > Definitely, but we should try figuring out what's going on here. I
> > > assume on x86-64 it behaves differently?
> >
> > Yes, we should root cause. I highly suspect that there is somewhere
>
> > Definitely, but we should try figuring out what's going on here. I
> > assume on x86-64 it behaves differently?
>
> Yes, we should root cause. I highly suspect that there is somewhere
> alignment miscalculations happen that cause this memory waste with the
> offset 16M. I am also not sure why t
On Fri, Jan 29, 2021 at 9:51 AM Joao Martins wrote:
>
> Hey Pavel,
>
> On 1/29/21 1:50 PM, Pavel Tatashin wrote:
> >> Since we last talked about this the enabling for EFI "Special Purpose"
> >> / Soft Reserved Memory has gone upstream and instantiates devi
On Fri, Jan 29, 2021 at 8:19 AM David Hildenbrand wrote:
>
> On 29.01.21 03:06, Pavel Tatashin wrote:
> >>> Might be related to the broken custom pfn_valid() implementation for
> >>> ZONE_DEVICE.
> >>>
> >>> https://lkml.kernel.org/r/1608
> > Might be related to the broken custom pfn_valid() implementation for
> > ZONE_DEVICE.
> >
> > https://lkml.kernel.org/r/1608621144-4001-1-git-send-email-anshuman.khand...@arm.com
> >
> > And essentially ignoring sub-section data in there for now as well (but
> > might not be that relevant yet).
On 1/21/21 1:26 PM, Will Deacon wrote:
> On Wed, 20 Jan 2021 21:29:12 -0800, Sudarshan Rajagopalan wrote:
>> This patch is the follow-up from the discussions in the thread [1].
>> Reducing the section size has the merit of reducing wastage of reserved
>> memory
>> for vmemmap mappings for sect
On Thu, Jan 28, 2021 at 3:01 PM Eric W. Biederman wrote:
>
> Pavel Tatashin writes:
>
> > kmsg_dump(KMSG_DUMP_SHUTDOWN) is called before
> > machine_restart(), machine_halt(), machine_power_off(), the only one that
> > is missing is machine_kexec().
> >
> >
On Wed, Jan 27, 2021 at 5:18 PM David Hildenbrand wrote:
>
> >> Ordinary reboots or kexec-style reboots? I assume the latter, because
> >> otherwise there is no guarantee about persistence, right?
> >
> > Both, our firmware supports cold and warm reboot. When we do warm
> > reboot, memory content
On Wed, Jan 27, 2021 at 4:09 PM David Hildenbrand wrote:
>
> On 27.01.21 21:43, Pavel Tatashin wrote:
> > This is something that Dan Williams and I discussed off the mailing
> > list sometime ago, but I want to have a broader discussion about this
> > problem so I could s
This is something that Dan Williams and I discussed off the mailing
list sometime ago, but I want to have a broader discussion about this
problem so I could send out a fix that would be acceptable.
We have a 2G pmem device that is carved out of regular memory that we
use to pass data across reboot
kernel exited to have a precise
measurement of time spent in purgatory), we won't easily be able to do that
if arm64_relocate_new_kernel can't accept more arguments.
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/kexec.h | 18 ++
arch/arm64/kernel/asm-offsets.
If we have an EL2 mode without VHE, the EL2 vectors are needed in order
to switch to EL2 and jump to the new world with hypervisor privileges.
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/kexec.h | 5 +
arch/arm64/kernel/asm-offsets.c | 1 +
arch/arm64/kernel
.
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/kexec.h | 4
arch/arm64/include/asm/sections.h | 1 +
arch/arm64/kernel/machine_kexec.c | 17 -
arch/arm64/kernel/relocate_kernel.S | 15 ++-
arch/arm64/kernel/vmlinux.lds.S | 19
Configure a page table located in kexec-safe memory that has
the following mappings:
1. identity mapping for text of relocation function with executable
permission.
2. va mappings for all source ranges
3. va mappings for all destination ranges.
Signed-off-by: Pavel Tatashin
---
arch/arm64
improvement.
The time is proportional to the size of the relocation; therefore, if the
initramfs is larger, e.g. 100M, it could take over a second.
Signed-off-by: Pavel Tatashin
---
arch/arm64/kernel/relocate_kernel.S | 131 ++--
1 file changed, 87 insertions(+), 44 deletions(-)
diff --git a
Now that relocation is done using virtual addresses, reloc_arg->head
is not needed anymore.
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/kexec.h| 2 --
arch/arm64/kernel/asm-offsets.c | 1 -
arch/arm64/kernel/machine_kexec.c | 1 -
3 files changed, 4 deletions(-)
diff --
atas...@soleen.com
The first attempt to enable the MMU; it had some bugs that prevented the
performance improvement. The page tables unnecessarily configured an
idmap for the whole physical space.
https://lore.kernel.org/lkml/20190731153857.4045-1-pasha.tatas...@soleen.com
No linear copy, bug with EL2 reboots.
Pavel Tatas
On Wed, Jan 27, 2021 at 11:42 AM Tyler Hicks
wrote:
>
> On 2021-01-25 19:21:22, Pavel Tatashin wrote:
> > I forgot to make changes to arch/arm64/Kconfig. The correct patch is
> > below.
> >
> > ---
> >
> > From a2bc374320d7c7efd3c40644ad3d6d59a024b30
On Tue, Jan 26, 2021 at 3:09 PM Jens Axboe wrote:
>
> On 1/26/21 7:46 AM, Pavel Tatashin wrote:
> > Currently, loop device has only one global lock: loop_ctl_mutex.
> >
> > This becomes hot in scenarios where many loop devices are used.
> >
> > Scale it by i
On Wed, Jan 27, 2021 at 10:59 AM Will Deacon wrote:
>
> On Mon, 25 Jan 2021 14:19:05 -0500, Pavel Tatashin wrote:
> > Changelog:
> > v10:
> > - Addressed a lot of comments from James Morse and from Marc Zyngier
> > - Added review-by's
>
On Tue, Jan 26, 2021 at 5:58 PM Will Deacon wrote:
>
> Hi Pavel,
>
> On Mon, Jan 25, 2021 at 02:19:05PM -0500, Pavel Tatashin wrote:
> > Changelog:
> > v10:
> > - Addressed a lot of comments from James Morse and from Marc Zyngier
> > - Added revie
<6>[ 70.918725] psci: CPU5 killed (polled 0 ms)
<5>[ 70.919704] CPU6: shutdown
<6>[ 70.920726] psci: CPU6 killed (polled 4 ms)
<5>[ 70.921642] CPU7: shutdown
<6>[ 70.922650] psci: CPU7 killed (polled 0 ms)
Signed-off-by: Pavel Tatashin
Reviewed-by: Kees Cook
Changelog
v2
- Added review-by's
- Sync with mainline
Allow studying shutdown performance via kexec reboot calls by having the
kmsg log saved via pstore.
Previous submissions
v1 https://lore.kernel.org/lkml/20200605194642.62278-1-pasha.tatas...@soleen.com
Pavel Tatashin (1):
/lkml/20200717205322.127694-1-pasha.tatas...@soleen.com
Pavel Tatashin (1):
loop: scale loop device by introducing per device lock
drivers/block/loop.c | 93 +---
drivers/block/loop.h | 1 +
2 files changed, 54 insertions(+), 40 deletions(-)
--
2.25.1
: loop_index_idr, loop_lookup,
loop_add.
The new lock ordering requirement is that loop_ctl_mutex must be taken
before lo_mutex.
Signed-off-by: Pavel Tatashin
Reviewed-by: Tyler Hicks
Reviewed-by: Petr Vorel
---
drivers/block/loop.c | 93 +---
drivers/block
On Tue, Jan 26, 2021 at 4:53 AM Chaitanya Kulkarni
wrote:
>
> On 1/25/21 12:15 PM, Pavel Tatashin wrote:
> > Currently, loop device has only one global lock:
> > loop_ctl_mutex.
> Above line can be :-
> Currently, loop device has only one global lock: loop_ctl_mutex.
OK
On Tue, Jan 26, 2021 at 4:24 AM Petr Vorel wrote:
>
> Hi,
>
> > Currently, loop device has only one global lock:
> > loop_ctl_mutex.
>
> > This becomes hot in scenarios where many loop devices are used.
>
> > Scale it by introducing per-device lock: lo_mutex that protects the
> > fields in struct
I forgot to make changes to arch/arm64/Kconfig. The correct patch is
below.
---
>From a2bc374320d7c7efd3c40644ad3d6d59a024b301 Mon Sep 17 00:00:00 2001
From: Pavel Tatashin
Date: Mon, 29 Jul 2019 21:24:25 -0400
Subject: [PATCH v10 16/18] arm64: kexec: configure trans_pgd page table for
ke
patch from James to trans_pgd interface, so it can be
commonly used by both Kexec and Hibernate. Some minor clean-ups.]
Signed-off-by: Pavel Tatashin
Link:
https://lore.kernel.org/linux-arm-kernel/20200115143322.214247-4-james.mo...@arm.com/
---
arch/arm64/include/asm/trans_pgd.h | 3 ++
arch/ar
describes the instruction itself.
4. Some comment corrections
Signed-off-by: Pavel Tatashin
---
arch/arm64/kernel/relocate_kernel.S | 36 +++--
1 file changed, 8 insertions(+), 28 deletions(-)
diff --git a/arch/arm64/kernel/relocate_kernel.S
b/arch/arm64/kernel
Currently, kexec_image_info() is called at load time and again right
before the kernel is kexec'ed. There is no need to do both. So, call it
only once, when the segments are loaded and the physical location of the
page with the copy of arm64_relocate_new_kernel is known.
Signed-off-by: Pavel Tatashin
-off-by: Pavel Tatashin
Reviewed-by: James Morse
---
arch/arm64/include/asm/kexec.h| 1 +
arch/arm64/kernel/machine_kexec.c | 46 +--
2 files changed, 20 insertions(+), 27 deletions(-)
diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
p4dp should be used when the p4d page is allocated.
This is not a functional issue, but it should be fixed for logical
correctness.
Fixes: e9f6376858b9 ("arm64: add support for folded p4d page tables")
Signed-off-by: Pavel Tatashin
---
arch/arm64/kernel/hibernate.c | 6 +++-
trans_pgd_* should be independent of the mm context because the tables
created by this code are used when there is no mm context around, as it
is between kernels. Simply replace the init_mm arguments with NULL.
Signed-off-by: Pavel Tatashin
Acked-by: James Morse
---
arch/arm64/mm/trans_pgd.c
Now that we have abstracted the required functions, move them to a new
home. Later, we will generalize these functions so they are useful
outside of hibernation.
Signed-off-by: Pavel Tatashin
Reviewed-by: James Morse
---
arch/arm64/Kconfig | 4 +
arch/arm64/include/asm
kexec is going to use a different allocator, so make trans_pgd_map_page
accept the allocator as an argument; kexec is also going to use a
different map protection, so pass that via an argument as well.
Signed-off-by: Pavel Tatashin
Reviewed-by: Matthias Brugger
---
arch/arm64/include/asm
Currently, dtb_mem is enabled only when CONFIG_KEXEC_FILE is
enabled. This adds ugly ifdefs to C files.
Always enable dtb_mem; when it is not used, it is NULL.
Change dtb_mem to phys_addr_t, as it is a physical address.
Signed-off-by: Pavel Tatashin
Reviewed-by: James Morse
---
arch
cal space.
https://lore.kernel.org/lkml/20190731153857.4045-1-pasha.tatas...@soleen.com
No linear copy, bug with EL2 reboots.
James Morse (2):
arm64: mm: Always update TCR_EL1 from __cpu_set_tcr_t0sz()
arm64: trans_pgd: hibernate: idmap the single page that holds the copy
page routines
Pavel
On Sun, Jan 24, 2021 at 6:03 PM John Hubbard wrote:
>
> On 1/21/21 7:37 PM, Pavel Tatashin wrote:
> > In gup_test both gup_flags and test_flags use the same flags field.
> > This is broken.
> >
> > Farther, in the actual gup_test.c all the passed gup_flags are
On Sun, Jan 24, 2021 at 6:18 PM John Hubbard wrote:
>
> On 1/21/21 7:37 PM, Pavel Tatashin wrote:
> > When pages are pinned they can be faulted in userland and migrated, and
> > they can be faulted right in kernel without migration.
> >
> > In either case, the
> > +.macro el1_sync_64
> > + br x4 /* Jump to new world from el2 */
> > + .fill 31, 4, 0 /* Set other 31 instr to zeroes */
> > +.endm
>
> The common idiom to write this is to align the beginning of the
> macro, and not to bother about what follow
tas...@soleen.com
v7
https://lore.kernel.org/lkml/20210122033748.924330-1-pasha.tatas...@soleen.com
Pavel Tatashin (14):
mm/gup: don't pin migrated cma pages in movable zone
mm/gup: check every subpage of a compound page during isolation
mm/gup: return an error on migration failure
mm/
, loop_lookup, loop_add.
Lock ordering: loop_ctl_mutex > lo_mutex.
Signed-off-by: Pavel Tatashin
Reviewed-by: Tyler Hicks
---
drivers/block/loop.c | 92 +---
drivers/block/loop.h | 1 +
2 files changed, 54 insertions(+), 39 deletions(-)
diff --git a/driv
2s on our machine.
v2 https://lore.kernel.org/lkml/20200723211748.13139-1-pasha.tatas...@soleen.com
v1
https://lore.kernel.org/lkml/20200717205322.127694-1-pasha.tatas...@soleen.com
Pavel Tatashin (1):
loop: scale loop device by introducing per device loc
Make trans_pgd_create_copy and its subroutines use the allocator that is
passed as an argument.
Signed-off-by: Pavel Tatashin
Reviewed-by: James Morse
---
arch/arm64/include/asm/trans_pgd.h | 4 +--
arch/arm64/kernel/hibernate.c | 7 -
arch/arm64/mm/trans_pgd.c | 49
x0 will contain the only argument to arm64_relocate_new_kernel; don't
use it as a temp. Reassign registers to free up x0 so we won't need
to copy the argument, and can use it at the beginning and at the end of
the function.
Signed-off-by: Pavel Tatashin
Reviewed-by: James Morse
---
-by: Pavel Tatashin
---
arch/arm64/include/asm/mmu_context.h | 7 +++
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/mmu_context.h
b/arch/arm64/include/asm/mmu_context.h
index 0b3079fd28eb..70ce8c1d2b07 100644
--- a/arch/arm64/include/asm/mmu_context.h
On Mon, Jan 25, 2021 at 9:28 AM Jason Gunthorpe wrote:
>
> On Wed, Jan 20, 2021 at 09:26:41AM -0500, Pavel Tatashin wrote:
>
> > I thought about this, and it would code a little cleaner. But, the
> > reason I did not is because zero_page is perfectly pinnable, it is not
>
On Thu, May 7, 2020 at 12:22 PM James Morse wrote:
>
> Hi Pavel,
>
> On 26/03/2020 03:24, Pavel Tatashin wrote:
> > Currently, kexec relocation function (arm64_relocate_new_kernel) accepts
> > the following arguments:
> >
> > head: start of array
On Wed, Apr 29, 2020 at 1:01 PM James Morse wrote:
>
> Hi Pavel,
>
> On 26/03/2020 03:24, Pavel Tatashin wrote:
> > Change argument types from unsigned long to a more descriptive
> > phys_addr_t.
>
> For 'entry', which is a physical addresses, sure...
On Wed, Apr 29, 2020 at 1:01 PM James Morse wrote:
>
> Hi Pavel,
>
> On 26/03/2020 03:24, Pavel Tatashin wrote:
> > Currently, kernel relocation function is configured in machine_kexec()
> > at the time of kexec reboot by using control_code_page.
> >
> > This