yeah, I remembered how it all worked after I wrote that..
You'd think they'd eventually get the idea of letting the kernel have its
own 'cr3' and some TLBs, eh?
listening, intel?
On 8 Jul 1999, Ville-Pertti Keinonen wrote:
jul...@whistle.com (Julian Elischer) writes:
we already use the gs register for SMP now..
what about the fs register?
I vaguely remember that the different segments could be used to achieve
this (%fs points to user space or something)
You can't extend the address space that way,
dil...@apollo.backplane.com (Matthew Dillon) writes:
pare down the fields in both structures. For example, the vnode structure
contains a lot of temporary clustering fields that could be removed
entirely if clustering operations are done at the time of the actual I/O
rather
patr...@mycenae.ilion.eu.org (Patryk Zadarnowski) writes:
You can't extend the address space that way, segments are all parts of
the single 4GB address space described by the page mapping.
True, but you can reserve a part of the 4GB address space (say 128MB of it)
for partitioning into
:yeah, I remembered how it all worked after I wrote that..
:You'd think they'd eventually get the idea of letting the kernel have its
:own 'cr3' and some TLBs, eh?
:
:listening, intel?
This is intel we are talking about. Their mmu/cache technology is
always a few years behind the times.
On Wed, 7 Jul 1999 17:03:16 -0700 (PDT)
Matthew Dillon [EMAIL PROTECTED] wrote:
If this could result in a smaller overall structure, it may be worth it.
To really make the combined structure smaller we would also have to
pare down the fields in both structures. For example,
Ok, but for a kernel hacker this *should* be funny.
My system locked up because it had too much memory. Specifically, there
is a contrived limit to the size of the kernel malloc pool of 40MB,
and 80MB for the entire pool based on VM_KMEM_SIZE_MAX.
Unfortunately, if you have
Since we have increased the hard page table allocation for the kernel to
1G (?) we should be able to safely increase VM_KMEM_SIZE_MAX. I was
thinking of increasing it to 512MB. This increase only affects
large-memory systems. It keeps them from locking up :-)
Anyone have
:
: Yes, I do - at least with the 512MB figure. That would be half of the 1GB
:KVA space and large systems really need that space for things like network
:buffers and other map regions.
:
:-DG
:
:David Greenman
:Co-founder/Principal Architect, The FreeBSD Project - http://www.freebsd.org
:limit ought to work for a 4G machine
:
:Since most of those news files were small, I think Kirk's news test code
:is pretty much the worst-case scenario as far as vnode allocation goes.
:
: Well, I could possibly live with 256MB, but the vnode/fsnode consumption
:seems to be getting
David Greenman wrote:
Yes, I do - at least with the 512MB figure. That would be half of the 1GB
KVA space and large systems really need that space for things like network
buffers and other map regions.
Matthew Dillon dil...@apollo.backplane.com wrote:
What would be an acceptable upper
:It appears we're rapidly approaching the point where 32-bits isn't
:enough. We could increase KVA - but that cuts into process VM space
:(and a large machine is likely to have large processes).
True, though what we are talking about here is a scaling issue with
main memory. We should be
On Wed, 7 Jul 1999, Matthew Dillon wrote:
:limit ought to work for a 4G machine
:
:Since most of those news files were small, I think Kirk's news test code
:is pretty much the worst-case scenario as far as vnode allocation goes.
:
: Well, I could possibly live with 256MB,
: We've been here before, a couple of times. This started to become an issue
:when the limits were removed and has gotten worse as the vnode and fsnode
:structs have grown over time. We're running into some limits on how much
:space we can give to the kernel since there are a number of folks
:or do what Kirk wants to do and merge the VM and Vnode structures
:I believe the UVM does a bit in this direction due to Kirk's influence.
:
:julian
If this could result in a smaller overall structure, it may be worth it.
To really make the combined structure smaller we would also have to
On Wed, 7 Jul 1999 16:55:28 -0700 (PDT)
Julian Elischer jul...@whistle.com wrote:
or do what Kirk wants to do and merge the VM and Vnode structures
I believe the UVM does a bit in this direction due to Kirk's influence.
A uvm_object is not a standalone thing in UVM. Everything that's
On Wed, 7 Jul 1999, Jason Thorpe wrote:
On Wed, 7 Jul 1999 16:55:28 -0700 (PDT)
Julian Elischer jul...@whistle.com wrote:
or do what Kirk wants to do and merge the VM and Vnode structures
I believe the UVM does a bit in this direction due to Kirk's influence.
A uvm_object is not a
On Wed, 7 Jul 1999 17:03:16 -0700 (PDT)
Matthew Dillon dil...@apollo.backplane.com wrote:
If this could result in a smaller overall structure, it may be worth it.
To really make the combined structure smaller we would also have to
pare down the fields in both structures. For
Jason Thorpe wrote:
On Wed, 7 Jul 1999 17:03:16 -0700 (PDT)
Matthew Dillon dil...@apollo.backplane.com wrote:
If this could result in a smaller overall structure, it may be worth it.
To really make the combined structure smaller we would also have to
pare down the
On Thu, 08 Jul 1999 08:36:19 +0800
Peter Wemm pe...@netplex.com.au wrote:
Out of curiosity, how does it handle the problem of small 512-byte
directories? Does it consume a whole page or does it do something smarter?
Or does the ubc work apply to read/write only and the filesystem itself
:The way this is done in the still-in-development branch of NetBSD's
:unified buffer cache is to basically eliminate the old buffer cache
:interface for vnode read/write completely. When you want to do that
:sort of I/O to a vnode, you simply map a window of the object into
:KVA space (via
:On Thu, 08 Jul 1999 08:36:19 +0800
: Peter Wemm pe...@netplex.com.au wrote:
:
: Out of curiosity, how does it handle the problem of small 512-byte
: directories? Does it consume a whole page or does it do something smarter?
: Or does the ubc work apply to read/write only and the filesystem
On Thursday, 8 July 1999 at 9:26:09 +1000, Peter Jeremy wrote:
David Greenman wrote:
Yes, I do - at least with the 512MB figure. That would be half of the 1GB
KVA space and large systems really need that space for things like network
buffers and other map regions.
Matthew Dillon
:Why not put the kernel in a different address space? IIRC there's no
:absolute requirement for the kernel and userland to be in the same
:address space, and that way we would have 4 GB for each.
:
:Greg
No, the syscall overhead is way too high if we have to mess with MMU
context. This
we already use the gs register for SMP now..
what about the fs register?
I vaguely remember that the different segments could be used to achieve
this (%fs points to user space or something)
julian
On Wed, 7 Jul 1999, Matthew Dillon wrote:
:Why not put the kernel in a different address
Why not put the kernel in a different address space? IIRC there's no
absolute requirement for the kernel and userland to be in the same
address space, and that way we would have 4 GB for each.
Wouldn't that make system calls that need to share data between kernel
and user spaces hopelessly
we already use the gs register for SMP now..
what about the fs register?
I vaguely remember that the different segments could be used to achieve
this (%fs points to user space or something)
... as I suggested a few days ago, and was told to shut up with a (rather
irrelevant) reference
On Wed, 7 Jul 1999 18:21:03 -0700 (PDT)
Matthew Dillon dil...@apollo.backplane.com wrote:
Now, I also believe that when UVM maps those pages, it makes them
copy-on-write so I/O can be initiated on the data without having to
stall anyone attempting to make further
On Thu, 8 Jul 1999, Patryk Zadarnowski wrote:
Why not put the kernel in a different address space? IIRC there's no
absolute requirement for the kernel and userland to be in the same
address space, and that way we would have 4 GB for each.
Wouldn't that make system calls that need to