Date: Thu, 14 Mar 2019 18:12:18 -0000 (UTC)
From: chris...@astron.com (Christos Zoulas)
Message-ID:
| Great debugging :-)
The hard part was all Jason's work - he picked out just where
things would be going wrong - after that it wasn't difficult.
| LGTM, at this
> On Mar 14, 2019, at 10:27 AM, Robert Elz wrote:
>
> Date: Thu, 14 Mar 2019 08:06:58 -0700
> From: Jason Thorpe
> Message-ID: <134778ad-a675-414a-bbb3-7eeeaf2c2...@me.com>
>
> | Great sleuthing, you pretty much nailed it.
>
> OK, great.  This is the patch I plan
In article <26375.1552584...@jinx.noi.kre.to>,
Robert Elz wrote:
>
Great debugging :-)
LGTM, at this point I'd merge them:
static int
round_and_check(const struct vm_map *map, vaddr_t *addr, vsize_t *size)
{
const vsize_t pageoff = (vsize_t)(*addr & PAGE_MASK);
*addr -=
Date: Thu, 14 Mar 2019 08:06:58 -0700
From: Jason Thorpe
Message-ID: <134778ad-a675-414a-bbb3-7eeeaf2c2...@me.com>
| Great sleuthing, you pretty much nailed it.
OK, great.  This is the patch I plan to commit soon (rather than
waiting on lots of review - so that
On Mar 14, 2019, at 5:44 AM, Robert Elz wrote:
>
> am guessing at the UVMHIST_LOG() stuff (copy/paste/edit...)
>
> Does all of this sound plausible, and reasonable to do ?
> Please do remember, that before this, I have never been
> anywhere near uvm or anything pmap related, so have mercy!
Date: Wed, 13 Mar 2019 11:44:51 -0700
From: Jason Thorpe
Message-ID: <9a2a4a34-35b0-490e-9a92-aab44174f...@me.com>
| I would suggest instrumenting-with-printf the "new_pageable"
| case of uvm_map_pageable()
That didn't show much we didn't already know.
Turns out
Date: Wed, 13 Mar 2019 11:44:51 -0700
From: Jason Thorpe
Message-ID: <9a2a4a34-35b0-490e-9a92-aab44174f...@me.com>
| Ok, well, I see some problematic code in sys_mlock() and sys_munlock(),
| but I don't think it's affecting this case (and it may in fact have
|
> On Mar 13, 2019, at 10:27 AM, Robert Elz wrote:
>
> Some progress.
>
> #1 touching the buffer that malloc() returns (which is page aligned)
> made no difference - it is likely that the malloc library would have
> done that in any case (the malloc is for just one page, so either it
> is
Some progress.
#1 touching the buffer that malloc() returns (which is page aligned)
made no difference - it is likely that the malloc library would have
done that in any case (the malloc is for just one page, so either it
is resident (or paged) or it is a ZFoD page, and most likely not the
OK, with DIAGNOSTIC enabled, and with this patch made:
--- uvm_page.c	19 May 2018 15:03:26 -0000	1.198
+++ uvm_page.c	13 Mar 2019 08:51:11 -0000
@@ -1605,9 +1605,11 @@
uvm_pageunwire(struct vm_page *pg)
{
KASSERT(mutex_owned(&uvm_pageqlock));
+ KASSERT(pg->wire_count != 0);
On Wed, Mar 13, 2019 at 03:22:27PM +0700, Robert Elz wrote:
> [...]
>
> netbsd# df /tmp
> Filesystem  1K-blocks  Used  Avail %Cap Mounted on
> tmpfs               4     4      0 100% /tmp
>
> That's what it showed (it was still in my xterm scrollback buffer from
> the
Date: Tue, 12 Mar 2019 23:21:59 -0700
From: Jason Thorpe
Message-ID:
| THAT is particularly special, because the code in question is:
|
|
| void
| uvm_pagewire(struct vm_page *pg)
| {
| KASSERT(mutex_owned(&uvm_pageqlock));
| #if
Date: Tue, 12 Mar 2019 23:21:59 -0700
From: Jason Thorpe
Message-ID:
Thanks for the reply. I have dropped tech-kern and tech-userlevel
from this reply though.
| The test employs a bogus understanding of how malloc() is specified.
Yes, that is kind of obvious,
> On Mar 12, 2019, at 9:09 PM, Robert Elz wrote:
>
> The first issue I noticed is that t_mlock() apparently believes
> the malloc(3) man page, which states:
>
> The malloc() function allocates size bytes of uninitialized memory. The
> allocated space is suitably aligned (after
Date: Wed, 13 Mar 2019 11:09:09 +0700
From: Robert Elz
Message-ID: <27829.1552450...@jinx.noi.kre.to>
A few corrections/additions to my message:
| "page" is the page size.(4KB, 8KB or 16KB or ...)
Looks to be 4K. Is that correct?
| From the number of kernel
Apologies for the multi-list posting, but I think this needs a wide
audience - please respect the Reply-To and send replies only to
current-users@
I have been looking into this, a little.
First, while the t_mlock() test is most likely broken, it
should never cause a kernel panic (or even a