Eric W. Biederman wrote:
> Any chance you had CONFIG_DEBUG_PAGEALLOC selected?
>
Yes, that's quite likely.
J
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at
Jeremy Fitzhardinge <[EMAIL PROTECTED]> writes:
> Eric W. Biederman wrote:
>> And it just occurred to me PSE disabled, otherwise you would not have
>> needed more than 4 pages. I supposed you were testing the Xen case.
>>
>
> No, actually, I wasn't. It was booting native (the Xen boot path
> doesn't go that way), and it should have booted like a normal
Chris Wright wrote:
> I was using real hardware with your .config when I reproduced it.
>
Yes, I first found it on real hardware. I haven't tested my fix on real
hardware yet, but it seems OK on kvm.
J
Eric W. Biederman wrote:
> Then why you had to allocate enough pages to cause a failure has me stumped.
> Perhaps there is some other bug?
>
Perhaps, but nothing comes to mind. I'll see what happens when I boot
this kernel on real hardware (rather than kvm).
J
Eric W. Biederman wrote:
> Since you have PSE disabled for Xen my hunch is that somehow that
> got left on for your test boot.
No. Under Xen cpuid masks out PSE (and complains if you try to set it
in a pte), but when booting native it will just use a plain unadorned cpuid.
J
Eric W. Biederman wrote:
> Jeremy did your kernel have PAE enabled?
>
> It just occurred to me that we have all of the memory below 1M (say about
> 512K) mapped and available to setup new mappings.
>
> The only way I can see a page fault happening is if you were using a PAE
> enabled kernel (so
Jeremy Fitzhardinge <[EMAIL PROTECTED]> writes:
> Chuck Ebbert wrote:
>> H. Peter Anvin wrote:
>>> Andi Kleen wrote:
>>>> Then we would have seen reports surely?
>
> Yes, I would have thought so. It surprised me that such an obvious bug
> could be there, apparently for a long time. But it's real, and
> potentially affects everyone.
Jeremy Fitzhardinge <[EMAIL PROTECTED]> writes:
> H. Peter Anvin wrote:
>> It would be *trivial* to make a certain number of page table slots
>> available at the end of the head.S-generated map.
>
> Or you could use a fixmap.
That certain number of page table slots should be the fixmap slots.
H. Peter Anvin wrote:
> It would be *trivial* to make a certain number of page table slots
> available at the end of the head.S-generated map.
Or you could use a fixmap.
J
Jeremy Fitzhardinge wrote:
Why not? Er, except in the case where the page is needed to map itself
- but that can be dealt with with a transient fixmap mapping.
It would be *trivial* to make a certain number of page table slots
available at the end of the head.S-generated map.
Eric W. Biederman wrote:
> If (cpu_has_pse) it may only be an additional two pages.
> INIT_MAP_BEYOND is currently mapping a lot more than that.
>
Ah, yes. It allocates an extra two pages for pagetables, and then maps
an extra 8MB or so.
>> Would that be necessary? Is there any need to remap it?
Eric W. Biederman wrote:
> Consider a memory hole of size 8M immediately after our bootmem bitmap.
> head.S which knows nothing of holes will map the pages of the hole
> into the initial page tables assuming that is where the page tables
> will live.
>
Sure, but considering we're only talking about
"H. Peter Anvin" <[EMAIL PROTECTED]> writes:
> Agreed. However, saying that your patch shouldn't go into the mainline kernel
> until that has been fixed is spurious and wrong. It fixes a real problem with
> minimal risk.
For a stable and frozen kernel it is probably the best we can do.
On Monday 23 April 2007 19:45:41 H. Peter Anvin wrote:
> Eric W. Biederman wrote:
> >
> > - I know of one system that had BIOS tables at 16MB I believe (and
> > thus had a fairly low hole).
> >
>
> Please name names, otherwise this is just rumouring. Seriously. We
> have enough cargo-cult
"H. Peter Anvin" <[EMAIL PROTECTED]> writes:
> Eric W. Biederman wrote:
>>
>> - I know of one system that had BIOS tables at 16MB I believe (and
>> thus had a fairly low hole).
>>
>
> Please name names, otherwise this is just rumouring. Seriously. We have
> enough
> cargo-cult programming as
Eric W. Biederman wrote:
- I know of one system that had BIOS tables at 16MB I believe (and
thus had a fairly low hole).
Please name names, otherwise this is just rumouring. Seriously. We
have enough cargo-cult programming as it is.
A lot of old ISA systems had an option to put a
"H. Peter Anvin" <[EMAIL PROTECTED]> writes:
> Since we allocate the maximum possible memory statically, I fail to see how
> holes could make the situation any worse, or better.
Consider a memory hole of size 8M immediately after our bootmem bitmap.
head.S which knows nothing of holes will map
H. Peter Anvin wrote:
> Since we allocate the maximum possible memory statically, I fail to
> see how holes could make the situation any worse, or better.
No, we map enough space to map 4G (~4 pages), but we don't actually map
4G. If a hole happened to start within that 4 page mapping, then the
Eric W. Biederman wrote:
> The only way to ensure this will not happen is to do what we do
> on x86_64 and map the new page table page into our address space
> before we write to it. Assuming the page we allocate is already
> mapped is simply not robust.
>
So you mean make alloc_bootmem make sure
Eric W. Biederman wrote:
I happened to be looking at this stretch of code and I have realized
that this is quite simply the wrong fix.
The problem is that it depends intimately on the details of
alloc_bootmem_pages_low. Essentially the problem is that when
we are setting up the identity
Chuck Ebbert wrote:
> H. Peter Anvin wrote:
>
>> Andi Kleen wrote:
>>
>>> Then we would have seen reports surely?
>>>
Yes, I would have thought so. It surprised me that such an obvious bug
could be there, apparently for a long time. But it's real, and
potentially affects
Andi Kleen wrote:
>> Is some version of this going in for 2.6.21, or is it not a real problem?
>
> When it's only seen with Xen it's not a real problem right now.
It's not just seen only with Xen, though. It will affect all kernels in
a particular range of sizes, and we have ordinary kernels
>
> Is some version of this going in for 2.6.21, or is it not a real problem?
When it's only seen with Xen it's not a real problem right now.
-Andi
Jeremy Fitzhardinge wrote:
> head.S creates the very initial pagetable for the kernel. This just
> maps enough space for the kernel itself, and an allocation bitmap.
> The amount of mapped memory is rounded up to 4Mbytes, and so this
> typically ends up mapping 8Mbytes of memory.
>
> When booting, pagetable_init() needs to create
Jan Engelhardt <[EMAIL PROTECTED]> writes:
> LOW_PAGES = (0x1ULL - __PAGE_OFFSET) >> PAGE_SHIFT_asm
The assembler does not know anything about ULL.
Andreas.
--
Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
PGP key
On Apr 14 2007 15:04, H. Peter Anvin wrote:
>
> Jeremy Fitzhardinge wrote:
>> -
>> +LOW_PAGES = 1<<(32-PAGE_SHIFT_asm)
>> +
>
> Again, for debugging... it would be interesting to replace this with:
>
> LOW_PAGES = (0x1-__PAGE_OFFSET) >> PAGE_SHIFT_asm
LOW_PAGES = (0x1ULL - __PAGE_OFFSET) >> PAGE_SHIFT_asm
Jeremy Fitzhardinge wrote:
-
+LOW_PAGES = 1<<(32-PAGE_SHIFT_asm)
+
Again, for debugging... it would be interesting to replace this with:
LOW_PAGES = (0x1-__PAGE_OFFSET) >> PAGE_SHIFT_asm
... to smoke out further problems; this will take the strict definition
of "lowmem" (modulo the pci
head.S creates the very initial pagetable for the kernel. This just
maps enough space for the kernel itself, and an allocation bitmap.
The amount of mapped memory is rounded up to 4Mbytes, and so this
typically ends up mapping 8Mbytes of memory.
When booting, pagetable_init() needs to create