> > (Note I've discarded some of the email logs, which are of no interest
> > to the problem discovered. Please also note that I haven't got any
> > Broadcom hardware to test out the solution suggested below.)
> >
> > On Sun, Feb 28, 2021 at 10:19:51AM -0800, Flori
p_ShortPremble BIT(5)
> -
>
> /*-
> * Below is the definition for 802.11i / 802.1x
>
> *--
> --
> 2.26.2
Reviewed-by: Mike Ximing Chen
> -Original Message-
> From: Mauro Carvalho Chehab On Behalf Of Mauro
> Carvalho Chehab
>
> static int xr_probe(struct usb_serial *serial, const struct usb_device_id
> *id)
> {
> + struct xr_port_private *port_priv;
> +
> /* Don't bind to control interface */
> if
On Fri, Feb 26, 2021 at 11:16:06AM -0500, George Kennedy wrote:
> On 2/26/2021 6:17 AM, Mike Rapoport wrote:
> > Hi George,
> >
> > On Thu, Feb 25, 2021 at 08:19:18PM -0500, George Kennedy wrote:
> > >
> > > Not sure if it's the right thing to do, but
)
{
/* call board setup routine */
plat_mem_setup();
+ early_init_fdt_reserve_self();
+ early_init_fdt_scan_reserved_mem();
memblock_set_bottom_up(true);
bootcmdline_init();
@@ -636,9 +638,6 @@ static void __init arch_mem_init(char **cmdline_p)
check_kernel_sections_mem();
- early_init_fdt_reserve_self();
- early_init_fdt_scan_reserved_mem();
-
#ifndef CONFIG_NUMA
memblock_set_node(0, PHYS_ADDR_MAX, &memblock.memory, 0);
#endif
--
Sincerely yours,
Mike.
> -Original Message-
> From: Rasmus Villemoes
> Sent: Friday, February 26, 2021 9:14 AM
> To: Greg Kroah-Hartman ; Rob Herring
>
> Cc: devicet...@vger.kernel.org; linux-kernel@vger.kernel.org; Arnd Bergmann
> ; linux-...@vger.kernel.org; Rasmus Villemoes
>
> Subject: [PATCH 2/2]
there was the suggestion to change the
!in_task to in_atomic.
I need to do some research on the subtle differences between in_task,
in_atomic, etc. TBH, I 'thought' !in_task would prevent the issue
reported here. But, that obviously is not the case.
--
Mike Kravetz
Hi Mathieu,
On Thu, 25 Feb 2021 at 21:51, Mathieu Poirier
wrote:
>
> On Thu, Jan 28, 2021 at 05:09:32PM +, Mike Leach wrote:
> > Add calls to activate the selected configuration as perf starts
> > and stops the tracing session.
> >
> > Signed-off-by: Mi
Hi Mathieu,
On Thu, 25 Feb 2021 at 21:20, Mathieu Poirier
wrote:
>
> On Thu, Jan 28, 2021 at 05:09:31PM +, Mike Leach wrote:
> > Configurations are first activated, then when any coresight device is
> > enabled, the active configurations are checked and any matching
Hi Mathieu,
On Wed, 24 Feb 2021 at 18:33, Mathieu Poirier
wrote:
>
> On Thu, Jan 28, 2021 at 05:09:30PM +, Mike Leach wrote:
> > Loaded coresight configurations are registered in the cs_etm\cs_config sub
> > directory. This extends the etm-perf code to handle t
Hi Mathieu,
On Mon, 22 Feb 2021 at 17:38, Mathieu Poirier
wrote:
>
> Hi Mike,
>
> On Thu, Jan 28, 2021 at 05:09:28PM +, Mike Leach wrote:
> > API for individual devices to register with the syscfg management
> > system is added.
> >
> > Devices registe
in
coresight_syscfg.h are management items. I am happy to change the name
but would prefer is stay in coresight_syscfg.h
Thanks
Mike
> I may have to come back to this patch but for now it holds together.
>
> More comments to come on Monday.
>
> Thanks,
> Mathieu
>
> &g
Hi Mathieu,
On Thu, 18 Feb 2021 at 23:52, Mathieu Poirier
wrote:
>
> On Thu, Jan 28, 2021 at 05:09:27PM +, Mike Leach wrote:
> > Creates a system management API to allow complex configurations and
> > features to be programmed into a CoreSight infrastructure.
> >
Hi Mathieu,
On Mon, 22 Feb 2021 at 18:50, Mathieu Poirier
wrote:
>
> On Thu, Jan 28, 2021 at 05:09:27PM +, Mike Leach wrote:
> > Creates a system management API to allow complex configurations and
> > features to be programmed into a CoreSight infrastructure.
> >
Hi George,
On Thu, Feb 25, 2021 at 08:19:18PM -0500, George Kennedy wrote:
>
> Mike,
>
> To get rid of the 0xBE453000 hardcoding, I added the following patch
> to your above patch to get the iBFT table "address" to use with
> memblock_reserve():
>
>
On Thu, Feb 25, 2021 at 07:38:44PM +0100, Vlastimil Babka wrote:
> On 2/25/21 7:05 PM, Mike Rapoport wrote:
> >>
> >> What if two zones are adjacent? I.e. if the hole was at a boundary between
> >> two
> >> zones.
> >
> > What do you mean by
On Thu, Feb 25, 2021 at 04:08:51PM -0800, Andrew Morton wrote:
> On Fri, 26 Feb 2021 00:43:51 +0200 Mike Rapoport wrote:
>
> > From: Mike Rapoport
>
> > void __meminit __weak memmap_init_zone(struct zone *zone)
> > {
> > unsigned long zone
++
> mm/vmscan.c | 5 +++--
> 4 files changed, 34 insertions(+), 9 deletions(-)
Thanks,
Changes look good. I like the simple one-time retry for pages which may
go from free to in use.
Reviewed-by: Mike Kravetz
BTW,
This series will need to be rebased on lat
From: Mike Rapoport
There could be struct pages that are not backed by actual physical memory.
This can happen when the actual memory bank is not a multiple of
SECTION_SIZE or when an architecture does not register memory holes
reserved by the firmware as memblock.memory.
Such pages
From: Mike Rapoport
Hi,
@Andrew, this is based on v5.11-mmotm-2021-02-18-18-29 with the previous
version reverted
Commit 73a6e474cb37 ("mm: memmap_init: iterate over memblock regions rather
that check each PFN") exposed several issues with the memory map
initialization and these p
mit 2c7452a075d4. So, when
start_isolate_page_range goes to allocate another gigantic page it will
never notice/operate on the existing gigantic page.
Again, this is confusing and I might be missing something.
In any case, I agree that gigantic pages are tricky and we should leave
them out of the discussion for now. We can rethink this later if
necessary.
--
Mike Kravetz
lock_set_bottom_up()")
> Signed-off-by: Arnd Bergmann
I thought it would go via the memblock tree, but since Andrew has already taken it:
Reviewed-by: Mike Rapoport
> ---
> include/linux/memblock.h | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/include/l
alloc_contig_range.
>
> Signed-off-by: Oscar Salvador
Thanks Oscar,
I spent a bunch of time looking for possible race issues. Thankfully,
the recent code from Muchun dealing with free lists helps. In addition,
all the hugetlb accounting looks good.
Reviewed-by: Mike Kravetz
>
On 2/25/21 9:49 AM, Axel Rasmussen wrote:
> On Wed, Feb 24, 2021 at 4:26 PM Mike Kravetz wrote:
>>
>> On 2/18/21 4:48 PM, Axel Rasmussen wrote:
>>
>>> @@ -401,8 +398,10 @@ vm_fault_t handle_userfault(struct vm_fault *vmf,
>>> unsigned long rea
On Thu, Feb 25, 2021 at 09:54:34AM -0800, Linus Torvalds wrote:
> On Thu, Feb 25, 2021 at 9:07 AM Mike Rapoport wrote:
> >
> > >
> > > We might still double-initialize PFNs when two zones overlap within a
> > > section, correct?
> >
> > You mean th
On Thu, Feb 25, 2021 at 06:51:53PM +0100, Vlastimil Babka wrote:
> On 2/24/21 4:39 PM, Mike Rapoport wrote:
> > From: Mike Rapoport
>
> Hi, thanks for your efforts. I'll just nit pick on the description/comments
> as I
> don't feel confident about judging the implementati
On Thu, Feb 25, 2021 at 11:31:04AM -0500, George Kennedy wrote:
>
>
> On 2/25/2021 11:07 AM, Mike Rapoport wrote:
> > On Thu, Feb 25, 2021 at 10:22:44AM -0500, George Kennedy wrote:
> > > > > > > On 2/24/2021 5:37 AM, Mike Rapoport wrote:
> > > App
So we do need to memblock_reserve() iBFT region, but I still couldn't find
the right place to properly get its address without duplicating ACPI tables
parsing :(
[0.00] BIOS-e820: [mem 0xbe49b000-0xbe49bfff] ACPI data
--
Sincerely yours,
Mike.
On Thu, Feb 25, 2021 at 04:59:06PM +0100, David Hildenbrand wrote:
> On 24.02.21 16:39, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > There could be struct pages that are not backed by actual physical memory.
> > This can happen when the actual me
On Thu, Feb 25, 2021 at 10:22:44AM -0500, George Kennedy wrote:
>
> > > > > On 2/24/2021 5:37 AM, Mike Rapoport wrote:
>
> Applied just your latest patch, but same failure.
>
> I thought there was an earlier comment (which I can't find now) that stated
> that me
4c 89 f6                mov    %r14,%rsi
> aa2:   e8 00 00 00 00          call   aa7
>        aa3: R_X86_64_PLT32
>        __ubsan_handle_load_invalid_value-0x4
> aa7:   eb cf                   jmp    a78
> aa9:   66 2e 0f 1f 84 00 00    cs nopw 0x0(%rax,%rax,1)
> ab0:   00 00 00
> ab3:   66 2e 0f 1f 84 00 00    cs nopw 0x0(%rax,%rax,1)
> aba:   00 00 00
> abd:   0f 1f 00                nopl   (%rax)
>
> This means that the sanitizers added a lot of extra checking around what
> would have been a trivial global variable access otherwise. In this case,
> not inlining would be a reasonable decision.
>
> Arnd
--
Sincerely yours,
Mike.
On Thu, Feb 25, 2021 at 07:38:19AM -0500, George Kennedy wrote:
> On 2/25/2021 3:53 AM, Mike Rapoport wrote:
> > Hi George,
> >
> > > On 2/24/2021 5:37 AM, Mike Rapoport wrote:
> > > > On Tue, Feb 23, 2021 at 04:46:28PM -0500, George Kennedy wrote:
> &
Hi George,
> On 2/24/2021 5:37 AM, Mike Rapoport wrote:
> > On Tue, Feb 23, 2021 at 04:46:28PM -0500, George Kennedy wrote:
> > > Mike,
> > >
> > > Still no luck.
> > >
> > > [ 30.193723] iscsi: registered transport (iser)
> > >
it can check and potentially update the page's
> contents.
>
> Huge PMD sharing would prevent these faults from occurring for
> suitably aligned areas, so disable it upon UFFD registration.
>
> Reviewed-by: Peter Xu
> Signed-off-by: Axel Rasmussen
Thanks,
Reviewed-by: Mike Kravetz
--
Mike Kravetz
}
> }
>
> /*
>
I'm good with the hugetlb.c changes. Since this is nearly identical to
the other handle_userfault() in this routine, it might be good to create
a common wrapper. But, that is not required.
--
Mike Kravetz
On Tue, Feb 23, 2021 at 04:46:28PM -0500, George Kennedy wrote:
>
> Mike,
>
> Still no luck.
>
> [ 30.193723] iscsi: registered transport (iser)
> [ 30.195970] iBFT detected.
> [ 30.196571] BUG: unable to handle page fault for address: ff240004
Hmm,
Expand comments, no functional change.
Signed-off-by: Mike Kravetz
---
include/linux/hugetlb.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index cccd1aab69dd..c0467a7a1fe0 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux
On 2/23/21 3:21 PM, Mike Kravetz wrote:
> On 2/23/21 2:58 PM, Oscar Salvador wrote:
>> On 2021-02-23 23:55, Mike Kravetz wrote:
>>> Yes, that is the more common case where the once active hugetlb page
>>> will be simply added to the free list via enqueue_huge_page().
On 2/23/21 3:58 PM, Andrew Morton wrote:
> On Tue, 23 Feb 2021 10:06:12 -0800 Mike Kravetz
> wrote:
>
>> On 2/23/21 6:57 AM, Gerald Schaefer wrote:
>>> Hi,
>>>
>>> LTP triggered a panic on s390 in hugepage_subpool_put_pages() with
>>> linux-
On 2/23/21 2:58 PM, Oscar Salvador wrote:
> On 2021-02-23 23:55, Mike Kravetz wrote:
>> Yes, that is the more common case where the once active hugetlb page
>> will be simply added to the free list via enqueue_huge_page(). This
>> path does not go through prep_new_huge_pa
On 2/23/21 2:45 PM, Oscar Salvador wrote:
> On Tue, Feb 23, 2021 at 01:55:44PM -0800, Mike Kravetz wrote:
>> Gerald Schaefer reported a panic on s390 in hugepage_subpool_put_pages()
>> with linux-next 5.12.0-20210222.
>> Call trace:
>> hugepage_subpool_
ointer in prep_new_huge_page().
Fixes: f1280272ae4d ("hugetlb: use page.private for hugetlb specific page
flags")
Reported-by: Gerald Schaefer
Signed-off-by: Mike Kravetz
---
mm/hugetlb.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c232cb67dda2..7ae
On Tue, Feb 23, 2021 at 04:16:44PM -0500, George Kennedy wrote:
>
>
> On 2/23/2021 3:09 PM, Mike Rapoport wrote:
> > On Tue, Feb 23, 2021 at 01:05:05PM -0500, George Kennedy wrote:
> > > On 2/23/2021 10:47 AM, Mike Rapoport wrote:
> > >
> > > It no
On Tue, Feb 23, 2021 at 01:05:05PM -0500, George Kennedy wrote:
> On 2/23/2021 10:47 AM, Mike Rapoport wrote:
>
> It now crashes here:
>
> [ 0.051019] ACPI: Early table checksum verification disabled
> [ 0.056721] ACPI: RSDP 0xBFBFA014 24 (v02 BOCHS )
>
max_huge_pages to __free_huge_page is
actually how the code puts newly allocated pages on its internal free
list.
I will do a bit more verification and put together a patch (it should
be simple).
--
Mike Kravetz
Hi George,
On Tue, Feb 23, 2021 at 09:35:32AM -0500, George Kennedy wrote:
>
> On 2/23/2021 5:33 AM, Mike Rapoport wrote:
> > (re-added CC)
> >
> > On Mon, Feb 22, 2021 at 08:24:59PM -0500, George Kennedy wrote:
> > > On 2/22/2021 4:55 PM, Mike Rapoport wrote:
&
(re-added CC)
On Mon, Feb 22, 2021 at 08:24:59PM -0500, George Kennedy wrote:
>
> On 2/22/2021 4:55 PM, Mike Rapoport wrote:
> > On Mon, Feb 22, 2021 at 01:42:56PM -0500, George Kennedy wrote:
> > > On 2/22/2021 11:13 AM, David Hildenbrand wrote:
> > > > On 22.
On Tue, Feb 23, 2021 at 10:49:44AM +0100, David Hildenbrand wrote:
> On 23.02.21 10:48, Mike Rapoport wrote:
> > On Tue, Feb 23, 2021 at 09:04:19AM +0100, David Hildenbrand wrote:
> > > On 22.02.21 11:57, Mike Rapoport wrote:
> > > > From: Mike Rapoport
> >
On Tue, Feb 23, 2021 at 09:04:19AM +0100, David Hildenbrand wrote:
> On 22.02.21 11:57, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > There could be struct pages that are not backed by actual physical memory.
> > This can happen when the actual me
> h->free_huge_pages_node[nid]--;
> h->max_huge_pages--;
> - update_and_free_page(h, head);
> - rc = 0;
> + rc = update_and_free_page(h, head);
> + if (rc)
> + h->max_
count is 0. That could indicate that the page is on the buddy
> > PCP list. Could be that it is getting reused a couple of times.
> >
> > The PFN 0xbe453 looks a little strange, though. Do we expect ACPI tables
> > close to 3 GiB ? No idea. Could it be that you are trying to map a wrong
> > table? Just a guess.
> >
> > >
> > > What would be the correct way to reserve the page so that the above
> > > would not be hit?
> >
> > I would have assumed that if this is a binary blob, that someone (which
> > I think would be acpi code) reserved via memblock_reserve() early during
> > boot.
> >
> > E.g., see drivers/acpi/tables.c:acpi_table_upgrade()->memblock_reserve().
>
> acpi_table_upgrade() gets called, but bails out before memblock_reserve() is
> called. Thus, it appears no pages are getting reserved.
acpi_table_upgrade() does not actually reserve memory but rather open
codes memblock allocation with memblock_find_in_range() +
memblock_reserve(), so it does not seem related anyway.
Do you have by chance a full boot log handy?
> 503 void __init acpi_table_upgrade(void)
> 504 {
...
> 568         if (table_nr == 0)
> 569                 return;    <-- bails out here
> "drivers/acpi/tables.c"
>
> George
>
--
Sincerely yours,
Mike.
fffc000()
> > > > [ 1.121116] raw: 000fc000 ea0002f914c8 ea0002f914c8
> > > >
> > > > [ 1.122638] raw:
> > > >
> > > > [ 1.124146] page dumped because: acpi_map pre SetPageReserved
> > > >
> > > > I also added dump_page() before unmapping, but it is not hit. The
> > > > following for the same pfn now shows up I believe as a result of setting
> > > > PageReserved:
> > > >
> > > > [ 28.098208] BUG: Bad page state in process modprobe pfn:be453
> > > > [ 28.098394] page:ea0002f914c0 refcount:0 mapcount:0
> > > > mapping: index:0x1 pfn:0xbe453
> > > > [ 28.098394] flags: 0xfc0001000(reserved)
> > > > [ 28.098394] raw: 000fc0001000 dead0100 dead0122
> > > >
> > > > [ 28.098394] raw: 0001
> > > >
> > > > [ 28.098394] page dumped because: PAGE_FLAGS_CHECK_AT_PREP flag(s) set
> > > > [ 28.098394] page_owner info is not present (never set?)
> > > > [ 28.098394] Modules linked in:
> > > > [ 28.098394] CPU: 2 PID: 204 Comm: modprobe Not tainted
> > > > 5.11.0-3dbd5e3 #66
> > > > [ 28.098394] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996),
> > > > BIOS 0.0.0 02/06/2015
> > > > [ 28.098394] Call Trace:
> > > > [ 28.098394] dump_stack+0xdb/0x120
> > > > [ 28.098394] bad_page.cold.108+0xc6/0xcb
> > > > [ 28.098394] check_new_page_bad+0x47/0xa0
> > > > [ 28.098394] get_page_from_freelist+0x30cd/0x5730
> > > > [ 28.098394] ? __isolate_free_page+0x4f0/0x4f0
> > > > [ 28.098394] ? init_object+0x7e/0x90
> > > > [ 28.098394] __alloc_pages_nodemask+0x2d8/0x650
> > > > [ 28.098394] ? write_comp_data+0x2f/0x90
> > > > [ 28.098394] ? __alloc_pages_slowpath.constprop.103+0x2110/0x2110
> > > > [ 28.098394] ? __sanitizer_cov_trace_pc+0x21/0x50
> > > > [ 28.098394] alloc_pages_vma+0xe2/0x560
> > > > [ 28.098394] do_fault+0x194/0x12c0
> > > > [ 28.098394] ? write_comp_data+0x2f/0x90
> > > > [ 28.098394] __handle_mm_fault+0x1650/0x26c0
> > > > [ 28.098394] ? copy_page_range+0x1350/0x1350
> > > > [ 28.098394] ? write_comp_data+0x2f/0x90
> > > > [ 28.098394] ? write_comp_data+0x2f/0x90
> > > > [ 28.098394] handle_mm_fault+0x1f9/0x810
> > > > [ 28.098394] ? write_comp_data+0x2f/0x90
> > > > [ 28.098394] do_user_addr_fault+0x6f7/0xca0
> > > > [ 28.098394] exc_page_fault+0xaf/0x1a0
> > > > [ 28.098394] asm_exc_page_fault+0x1e/0x30
> > > > [ 28.098394] RIP: 0010:__clear_user+0x30/0x60
> > >
> > > I think the PAGE_FLAGS_CHECK_AT_PREP check in this instance means that
> > > someone is trying to allocate that page with the PG_reserved bit set.
> > > This means that the page actually was exposed to the buddy.
> > >
> > > However, when you SetPageReserved(), I don't think that PG_buddy is set
> > > and the refcount is 0. That could indicate that the page is on the buddy
> > > PCP list. Could be that it is getting reused a couple of times.
> > >
> > > The PFN 0xbe453 looks a little strange, though. Do we expect ACPI tables
> > > close to 3 GiB ? No idea. Could it be that you are trying to map a wrong
> > > table? Just a guess.
>
> Nah, ACPI MADT enumerates the table and that is the proper location of it.
> >
> > ... but I assume ibft_check_device() would bail out on an invalid checksum.
> > So the question is, why is this page not properly marked as reserved
> > already.
>
> The ibft_check_device ends up being called as module way way after the
> kernel has cleaned the memory.
>
> The funny thing about iBFT is that (it is also mentioned in the spec)
> that the table can resize in memory .. or in the ACPI regions (which
^ reside I presume?
> have no E820_RAM and are considered "MMIO" regions).
>
> Either place is fine, so it can be in either RAM or MMIO :-(
I'd say that the tables in this case are in E820_RAM, because with MMIO we
wouldn't get to kmap() in the first place.
It can be easily confirmed by comparing the problematic address with
/proc/iomem.
Can't say I have a clue about what's going on there, but the theory that
somehow the iBFT table does not get PG_reserved during boot makes sense.
Do you see "iBFT found at 0x" early in the kernel log?
I don't know if ACPI relocates the tables, but I could not find anywhere
that it reserves the original ones. The memblock_reserve() in
acpi_table_upgrade() is merely a part of open coded memblock allocation.
--
Sincerely yours,
Mike.
Somehow I've managed to break the threading, the cover letter is here:
https://lore.kernel.org/lkml/20210222105400.28583-1-r...@kernel.org
On Mon, Feb 22, 2021 at 12:57:28PM +0200, Mike Rapoport wrote:
> From: Mike Rapoport
>
> There could be struct pages that are not backed by actual
Hi Suzuki,
On Thu, 18 Feb 2021 at 15:14, Suzuki K Poulose wrote:
>
> On 2/18/21 2:30 PM, Mike Leach wrote:
> > HI Suzuki,
> >
> > On Thu, 18 Feb 2021 at 07:50, Suzuki K Poulose
> > wrote:
> >>
> >> Hi Mike
> >>
>
On Mon, Feb 22, 2021 at 07:34:52AM +, Matthew Garrett wrote:
> On Mon, Feb 08, 2021 at 10:49:18AM +0200, Mike Rapoport wrote:
>
> > It is unsafe to allow saving of secretmem areas to the hibernation
> > snapshot as they would be visible after the resume and this essential
Yes, something like this should work. I'll let Oscar work out the details.
One thing to note is that you also need to check for old_page not on the
free list here. It could have been allocated and in use. In addition,
make sure to check the new flag HPageFreed to ensure page is on free list
befor
map(). mbind() runs
> afterwards. Preallocation saves you from that.
>
> I suspect something similar will happen with anonymous memory with mbind()
> even if we reserved swap space. Did not test yet, though.
>
Sorry, for jumping in late ... hugetlb keyword just hit my mail filters :)
Yes, it is true that hugetlb reservations are not numa aware. So, even if
pages are reserved at mmap time one could still SIGBUS if a fault is
restricted to a node with insufficient pages.
I looked into this some years ago, and there really is not a good way to
make hugetlb reservations numa aware. preallocation, or on demand
populating as proposed here is a way around the issue.
--
Mike Kravetz
sense to log a warning if ignoring a user specified parameter.
The user should not be attempting boot time allocation and CMA reservation
for 1G pages.
I do not think we should drop the warning as it tells the user they
have specified two incompatible allocation options.
--
Mike Krave
en pages are
> freed
> + * instead of enqueued again.
> + */
> + spin_lock(&hugetlb_lock);
> + h->surplus_huge_pages++;
> + h->surplus_huge_pages_node[nid]++;
> +
On 2/17/21 12:25 PM, Peter Xu wrote:
> On Wed, Feb 10, 2021 at 04:03:22PM -0800, Mike Kravetz wrote:
>> There is no hugetlb specific routine for clearing soft dirty and
>> other references. The 'default' routines would only clear the
>> VM_SOFTDIRTY flag in the vma.
On 2/17/21 11:35 AM, Peter Xu wrote:
> On Wed, Feb 10, 2021 at 04:03:20PM -0800, Mike Kravetz wrote:
>> Pagemap was only using the vma flag PM_SOFT_DIRTY for hugetlb vmas.
>> This is insufficient. Check the individual pte entries.
>>
>> Signed-off-by: Mike Kravetz
&g
On 2/17/21 11:32 AM, Peter Xu wrote:
> On Wed, Feb 10, 2021 at 04:03:19PM -0800, Mike Kravetz wrote:
>> hugetlb fault processing code would COW all write faults where the
>> pte was not writable. Soft dirty will write protect ptes as part
>> of its tracking mechanism. The
On 2/17/21 8:24 AM, Peter Xu wrote:
> On Wed, Feb 10, 2021 at 04:03:18PM -0800, Mike Kravetz wrote:
>> Add interfaces to set and clear soft dirty in hugetlb ptes. Make
>> hugetlb interfaces needed for /proc clear_refs available outside
>> hugetlb.c.
>>
>> arch/
On 2/18/21 2:27 PM, Peter Xu wrote:
> On Thu, Feb 18, 2021 at 02:13:52PM -0800, Mike Kravetz wrote:
>> On 2/18/21 1:54 PM, Peter Xu wrote:
>>> It is a preparation work to be able to behave differently in the per
>>> architecture huge_pte_alloc() according to different
I'm not so
> sure
> about that after rereading the code, yet again.
I have not followed this thread, but HugeTLB hit my mail filter and I can
help with this question.
No, PageTransCompoundMap() will not detect HugeTLB. hugetlb pages do not
use the compound_mapcount_ptr field. So, that final check/return in
PageTransCompoundMap() will always be false.
--
Mike Kravetz
inux/hugetlb.h | 3 +++
> mm/hugetlb.c| 51 +
> 3 files changed, 58 insertions(+)
Thanks,
Reviewed-by: Mike Kravetz
--
Mike Kravetz
> -pte_t *huge_pte_alloc(struct mm_struct *mm,
> +pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
> unsigned long addr, unsigned long sz)
> {
> pgd_t *pgd;
Didn't kernel test robot report this build error on the first patch series?
--
Mike Kravetz
On 2/18/21 9:34 AM, Mike Kravetz wrote:
> On 2/18/21 9:25 AM, Jason Gunthorpe wrote:
>> On Thu, Feb 18, 2021 at 02:45:54PM +, Matthew Wilcox wrote:
>>> On Wed, Feb 17, 2021 at 11:02:52AM -0800, Andrew Morton wrote:
>>>> On Wed, 17 Feb 2021 10:49:25 -0800 Mike Kr
Matthew Wilcox wrote:
>>>>> On Wed, Feb 17, 2021 at 11:02:52AM -0800, Andrew Morton wrote:
>>>>>> On Wed, 17 Feb 2021 10:49:25 -0800 Mike Kravetz
>>>>>> wrote:
>>>>>>> page structs are not guaranteed to be contiguous for
On 2/18/21 9:25 AM, Jason Gunthorpe wrote:
> On Thu, Feb 18, 2021 at 02:45:54PM +, Matthew Wilcox wrote:
>> On Wed, Feb 17, 2021 at 11:02:52AM -0800, Andrew Morton wrote:
>>> On Wed, 17 Feb 2021 10:49:25 -0800 Mike Kravetz
>>> wrote:
>>>> page stru
> -Original Message-
> From: gre...@linuxfoundation.org
> Sent: Thursday, February 18, 2021 2:53 AM
> To: Chen, Mike Ximing
> Cc: net...@vger.kernel.org; Linux Kernel Mailing List ker...@vger.kernel.org>; da...@davemloft.net; k...@kernel.org; a...@arndb.de;
Hi Suzuki,
On Thu, 18 Feb 2021 at 07:50, Suzuki K Poulose wrote:
>
> Hi Mike
>
> On 2/16/21 9:00 AM, Mike Leach wrote:
> > Hi Anshuman,
> >
> > There have been plenty of detailed comments so I will restrict mine to
> > a few general issues:-
> >
>
> -Original Message-
> From: Mike Ximing Chen
> Sent: Wednesday, February 10, 2021 12:54 PM
> To: net...@vger.kernel.org
> Cc: da...@davemloft.net; k...@kernel.org; a...@arndb.de;
> gre...@linuxfoundation.org; Williams, Dan J ;
> pierre-
> louis.boss...@li
address += PUD_SIZE) {
> + unsigned long tmp = address;
> +
> + ptep = huge_pte_offset(mm, address, sz);
> + if (!ptep)
> + continue;
> + ptl = huge_pte_lock(h, mm, ptep);
> + /* We don't want 'addre
mm/hugetlb.c | 20 ++--
> 4 files changed, 26 insertions(+), 8 deletions(-)
Thanks,
Reviewed-by: Mike Kravetz
--
Mike Kravetz
On 2/17/21 12:13 AM, Michal Hocko wrote:
> On Tue 16-02-21 11:44:34, Mike Kravetz wrote:
> [...]
>> If we are not going to do the allocations under the lock, then we will need
>> to either preallocate or take the workqueue approach.
>
> We can still drop the lock temporar
On 2/17/21 11:02 AM, Andrew Morton wrote:
> On Wed, 17 Feb 2021 10:49:25 -0800 Mike Kravetz
> wrote:
>
>> page structs are not guaranteed to be contiguous for gigantic pages. The
>> routine update_and_free_page can encounter a gigantic page, yet it assumes
>> page
an
Signed-off-by: Mike Kravetz
Cc:
---
mm/hugetlb.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4bdb58ab14cb..94e9fa803294 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1312,14 +1312,16 @@ static inline void
destroy_compound_gig
with CONFIG_SPARSEMEM and
!CONFIG_SPARSEMEM_VMEMMAP. Then, hotplug add memory for the area where the
gigantic page will be allocated.
Fixes: 8fb5debc5fcd ("userfaultfd: hugetlbfs: add hugetlb_mcopy_atomic_pte for
userfaultfd support")
Signed-off-by: Mike Kravetz
Cc:
---
mm/memory.c | 10 +++
readahead_expand
"tomorrow", but it fits into my plan to
get Orangefs the extra pages it needs
without me having open-coded page cache
code in orangefs_readpage.
-Mike
On Wed, Feb 17, 2021 at 10:42 AM David Howells wrote:
>
> Mike Marshall wrote:
>
> > I plan to try
I plan to try and use readahead_expand in Orangefs...
-Mike
On Tue, Feb 16, 2021 at 8:28 AM Matthew Wilcox wrote:
>
> On Tue, Feb 16, 2021 at 11:32:15AM +0100, Christoph Hellwig wrote:
> > On Mon, Feb 15, 2021 at 03:44:52PM +, David Howells wrote:
> > > Provide a funct
enario
as Michal suggested.
However, this is an 'opt in' feature. So, I would not expect anyone who
carefully plans the size of their hugetlb pool to enable such a feature.
If there is a use case where hugetlb pages are used in a non-essential
application, this might be of use.
--
Mike Kravetz
in
the case of freeing a single page, but would become more complex when doing
bulk freeing. After a little thought, the workqueue approach may even end
up simpler. However, I would suggest a very simple workqueue implementation
with non-blocking allocations. If we can not quickly get vmemmap pages,
put the page back on the hugetlb free list and treat as a surplus page.
--
Mike Kravetz
*/
> if (start_pfn != pfn) {
> @@ -1486,7 +1490,7 @@ fast_isolate_freepages(struct compact_control *cc)
> }
>
> cc->total_free_scanned += nr_scanned;
> - if (!page)
> + if (!page || page_zone(page) != cc->zone)
> return cc->free_pfn;
>
> low_pfn = page_to_pfn(page);
> --
> 2.30.0
>
--
Sincerely yours,
Mike.
Hi Anshuman,
On Tue, 16 Feb 2021 at 09:44, Anshuman Khandual
wrote:
>
> Hello Mike,
>
> On 2/16/21 2:30 PM, Mike Leach wrote:
> > Hi Anshuman,
> >
> > There have been plenty of detailed comments so I will restrict mine to
> > a few general issues:
On Mon, Feb 15, 2021 at 09:45:30AM +0100, David Hildenbrand wrote:
> On 14.02.21 18:29, Mike Rapoport wrote:
> > On Fri, Feb 12, 2021 at 10:56:19AM +0100, David Hildenbrand wrote:
> > > On 12.02.21 10:55, David Hildenbrand wrote:
> > > > On 08.02.21 12:08, Mike Rap
On Tue, Feb 16, 2021 at 09:33:20AM +0100, Michal Hocko wrote:
> On Mon 15-02-21 23:24:40, Mike Rapoport wrote:
> > On Mon, Feb 15, 2021 at 10:00:31AM +0100, Michal Hocko wrote:
> > > On Sun 14-02-21 20:00:16, Mike Rapoport wrote:
> > > > On Fri, Feb 12, 2021 at 02:18:
> - CORESIGHT format (indicates the Frame format)
> - RAW format (indicates the format of the source)
>
> The default value is CORESIGHT format for all the records
> (i,e == 0). Add the RAW format for the TRBE sink driver.
>
> Cc: Peter Zijlstra
> Cc: Mike Leach
> C
as
> truncated to fit */
> +#define PERF_AUX_FLAG_OVERWRITE		0x02	/* snapshot from overwrite mode */
> +#define PERF_AUX_FLAG_PARTIAL		0x04	/* record contains gaps */
> +#define PERF_AUX_FLAG_COLLISION		0x08	/* sample collided with another */
> +#define PERF_AUX_FLAG_PMU_FORMAT_TYPE_MASK	0xff00	/* PMU specific trace format type */
>
> #define PERF_FLAG_FD_NO_GROUP		(1UL << 0)
> #define PERF_FLAG_FD_OUTPUT		(1UL << 1)
> --
> 2.7.4
>
Reviewed-by: Mike Leach
--
Mike Leach
Principal Engineer, ARM Ltd.
Manchester Design Centre. UK
that the decoder does not synchronize
with the data stream until a genuine sync point is found.
4) TRBE needs to be a loadable module like the rest of coresight.
Regards
Mike
On Mon, 15 Feb 2021 at 09:46, Anshuman Khandual
wrote:
>
>
> On 2/13/21 1:56 AM, Mathieu Poirier wrote:
> >
On Mon, Feb 15, 2021 at 10:00:31AM +0100, Michal Hocko wrote:
> On Sun 14-02-21 20:00:16, Mike Rapoport wrote:
> > On Fri, Feb 12, 2021 at 02:18:20PM +0100, Michal Hocko wrote:
>
> > We can correctly set the zone links for the reserved pages for holes in the
> >
On Thu, 28 Jan 2021 at 17:18, Catalin Marinas wrote:
>
> On Wed, Jan 27, 2021 at 02:25:33PM +0530, Anshuman Khandual wrote:
> > This adds TRBE related registers and corresponding feature macros.
> >
> > Cc: Mathieu Poirier
> > Cc: Mike Leach
> > Cc: Suzuki K
Hi Mathieu,
On Mon, 15 Feb 2021 at 16:56, Mathieu Poirier
wrote:
>
> On Mon, Feb 15, 2021 at 04:27:26PM +, Mike Leach wrote:
> > HI Anshuman
> >
> > On Wed, 27 Jan 2021 at 08:55, Anshuman Khandual
> > wrote:
> > >
> > > Add support for dedicat
e and a sink device. But such connections are not present for certain
> percpu source and sink devices which are exclusively linked and dependent.
> Build the path directly and skip connection scanning for such devices.
>
> Cc: Mathieu Poirier
> Cc: Mike Leach
> Cc: Suzuki K Poulos
as the event is active and tracing,
> also provides us with access to the critical information
> needed to wind up a session even in the absence of an active
> output_handle.
>
> This is not an issue for the legacy sinks as none of them supports
> an IRQ and is centrally handled by the etm-