On Tue, Jul 16, 2013 at 02:12:42PM -0700, Alan Cox wrote:
> On Tue, Jul 16, 2013 at 7:08 AM, Kurt Lidl <l...@pix.net> wrote:
> 
> > On Wed, Jun 19, 2013 at 1:32 AM, Chris Torek <chris.torek at gmail.com> wrote:
> >>
> >>  In src/sys/amd64/include/vmparam.h is this handy map:
> >>>
> >>>  * 0x0000000000000000 - 0x00007fffffffffff   user map
> >>>  * 0x0000800000000000 - 0xffff7fffffffffff   does not exist (hole)
> >>>  * 0xffff800000000000 - 0xffff804020100fff   recursive page table (512GB slot)
> >>>  * 0xffff804020101000 - 0xfffffdffffffffff   unused
> >>>  * 0xfffffe0000000000 - 0xfffffeffffffffff   1TB direct map
> >>>  * 0xffffff0000000000 - 0xffffff7fffffffff   unused
> >>>  * 0xffffff8000000000 - 0xffffffffffffffff   512GB kernel map
> >>
> >> The actual data that I've seen shows that DIMMs are doubling in size at
> >> about half that pace, about every three years.  For example, see
> >> http://users.ece.cmu.edu/~omutlu/pub/mutlu_memory-scaling_imw13_invited-talk.pdf,
> >> slide #8.  So, I think that a factor of 16 is a lot more than we'll need in
> >> the next five years.  I would suggest configuring the kernel virtual
> >> address space for 4 TB.  Once you go beyond 512 GB, 4 TB is the next
> >> "plateau" in terms of address translation cost.  At 4 TB all of the PML4
> >> entries for the kernel virtual address space will reside in the same L2
> >> cache line, so a page table walk on a TLB miss for an instruction fetch
> >> will effectively prefetch the PML4 entry for the kernel heap and vice
> >> versa.
> >>
> >
> > The largest commodity motherboards that are shipping today support
> > 24 DIMMs, at a max size of 32GB per DIMM.  That's 768GB, right now.
> > (So FreeBSD is already "out of bits" in terms of supporting current
> > shipping hardware.)
> 
> 
> 
> Actually, this scenario with 768 GB of RAM on amd64 as it is today is
> analogous to the typical 32-bit i386 machine, where the amount of RAM has
> long exceeded the default 1 GB size of the kernel virtual address space.
>  In theory, we could currently handle up to 1 TB of RAM, but the kernel
> virtual address space would only be 512 GB.
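
As an aside, the "4 TB plateau" above is just amd64 paging arithmetic: a
PML4 entry maps 2^39 bytes (512 GB) and is 8 bytes wide, so a 64-byte L2
cache line holds 8 entries covering 4 TB.  A minimal sketch that only
spells out those numbers (nothing here is FreeBSD-specific, and the 64-byte
line size is the usual assumption, not a guarantee):

/*
 * Back-of-the-envelope check of the "4 TB per cache line" claim for
 * amd64: 8 PML4 entries fit in one 64-byte line, each mapping 512 GB.
 */
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    const uint64_t pml4e_span = 1ULL << 39; /* bytes mapped per PML4 entry */
    const uint64_t pml4e_size = 8;          /* bytes per PML4 entry */
    const uint64_t cacheline  = 64;         /* typical L2 line size in bytes */
    uint64_t entries_per_line = cacheline / pml4e_size;
    uint64_t span_per_line = entries_per_line * pml4e_span;

    printf("PML4 entries per cache line: %ju\n",
        (uintmax_t)entries_per_line);
    printf("VA covered per cache line:   %ju GB\n",
        (uintmax_t)(span_per_line >> 30));  /* prints 4096 GB == 4 TB */
    return (0);
}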

Speaking of virtual address space:
I plan to permanently mmap several multi-GB files (6-8 TB altogether)
into a single process's address space.
I see the user map is 128 TB, so I shouldn't get into trouble doing this,
and I'd still have plenty of additional space left to avoid problems
caused by fragmentation.
The system has 192 GB of physical memory, so the page tables have enough room.
Is there anything else to worry about?
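
In case it helps, here is a minimal sketch of that kind of setup (the
paths under /data and the file count are made up for illustration): a
handful of large files mapped whole, read-only with MAP_SHARED, into one
64-bit process, with the mappings kept for the life of the process.

/*
 * Map each file whole and keep the mapping; closing the descriptor
 * afterwards does not tear the mapping down.
 */
#include <sys/mman.h>
#include <sys/stat.h>

#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static void *
map_file(const char *path, size_t *lenp)
{
    struct stat sb;
    void *p;
    int fd;

    if ((fd = open(path, O_RDONLY)) == -1)
        err(1, "open %s", path);
    if (fstat(fd, &sb) == -1)
        err(1, "fstat %s", path);
    p = mmap(NULL, (size_t)sb.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        err(1, "mmap %s", path);
    close(fd);
    *lenp = (size_t)sb.st_size;
    return (p);
}

int
main(void)
{
    /* Hypothetical data files, each tens to hundreds of GB. */
    const char *files[] = { "/data/chunk0.bin", "/data/chunk1.bin" };
    size_t len;

    for (size_t i = 0; i < sizeof(files) / sizeof(files[0]); i++) {
        void *p = map_file(files[i], &len);
        printf("%s mapped at %p (%zu bytes)\n", files[i], p, len);
    }
    return (0);
}

Back of the envelope: even if all 6-8 TB ends up resident, the leaf page
tables alone come to roughly 8 TB / 4 KB pages * 8 bytes per PTE = 16 GB,
plus a much smaller amount for the upper levels, and they are only
populated for ranges that are actually touched, so 192 GB of RAM leaves
plenty of headroom.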

-- 
B.Walter <be...@bwct.de> http://www.bwct.de
Modbus/TCP Ethernet I/O modules, ARM-based FreeBSD computers, and much more.
