Tracy,

Nelson, Tracy M. wrote:
> I imagine that when the JVM calls [cm]alloc, one of the first things alloc()
> does is call sbrk() to expand your process' memory space.  That'll fail
> right away if you don't have enough VM available.

That's not what I observe when allocating memory on my machine: I was
able to calloc up to 3.9GB of memory on a machine with 1GB of physical
RAM and 1GB of swap.

Linux doesn't actually allocate the memory until you try to use it,
which is why I was puzzled by the fast-fail behavior of the JVM.
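
For what it's worth, my test was something along these lines (a rough
sketch, not the exact program I ran; the 50MB chunk size is the same one
I mention further down):

/* Rough sketch: keep calloc()ing 50MB chunks without touching them,
 * and count how much the allocator hands out before it refuses.
 * Nothing is committed until the pages are written, so this succeeds
 * far beyond physical RAM + swap. */
#include <stdio.h>
#include <stdlib.h>

#define CHUNK (50UL * 1024 * 1024)   /* 50MB per allocation */

int main(void)
{
    unsigned long total_mb = 0;

    for (;;) {
        void *p = calloc(1, CHUNK);
        if (p == NULL)               /* calloc reports failure with NULL */
            break;
        total_mb += CHUNK / (1024 * 1024);
    }

    printf("calloc accepted %lu MB before failing\n", total_mb);
    return 0;
}

On a 32-bit process that loop stops somewhere near the address-space
limit; on a 64-bit machine it can wander off into a huge amount of
virtual address space before anything goes wrong, precisely because none
of it is really allocated yet.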

> Once you hit
> the limit (malloc() should return (void *)-1), halve your allocation unit
> size and try again.

This is certainly complicated by the way glibc malloc and calloc work:
they essentially lie to the requesting program about whether the memory
is really there.

> I may try this myself when I get home (don't have a
> Unix box handy here), but I'm running 64-bit Solaris on an Opteron, so I
> don't know if my results will help.

Yeah, not so much. Perhaps in a 32-bit (non-Java) virtual machine.

> | Then again, I was allocating my memory in 50MB chunks and then writing
> | to them. Perhaps Sun's JVM on Linux allocates the whole chunk of memory
> | at once (which doesn't do anything on Linux glibc) and then writes to
> | the last page to see that it can be used. That doesn't seem right, so
> | there must be some other voodoo going on here.
> 
> What doesn't seem right?  Remember, the heap isn't a bunch of empty pages;
> you have to set up your block list, then initialize PermGen and other system
> spaces.  It's entirely possible that the JVM needs to access the last page
> as part of initializing itself.

Well, I think that glibc malloc and the Linux memory manager collude to
do nothing with requested-but-unused memory. So, if the JVM allocates
3GB of Java heap space and then writes to only 5% of those pages (I'm
assuming that heap initialization doesn't zero out all the memory or
anything stupid like that), only that small amount of memory actually
gets allocated, and the JVM should be able to start up without complaint
(even though it may eventually fail later thanks to the OS's trickery).
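
To put some code to that (a rough sketch, not anything the JVM actually
does; the /proc/self/statm check and the 1GB size are just there to make
the point observable):

/* Rough sketch of "allocate a lot, touch a little": reserve a big block,
 * write to ~5% of its pages, then read the resident set size from
 * /proc/self/statm (Linux-specific; the second field is RSS in pages).
 * Only the touched pages should show up as resident. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    size_t size = 1UL << 30;                 /* 1GB here; same idea at 3GB */
    long   page = sysconf(_SC_PAGESIZE);
    char  *heap = malloc(size);

    if (heap == NULL) {
        perror("malloc");
        return 1;
    }

    /* Touch one byte in every 20th page, i.e. roughly 5% of the block. */
    for (size_t i = 0; i < size; i += (size_t)page * 20)
        heap[i] = 1;

    long vsize, rss;
    FILE *f = fopen("/proc/self/statm", "r");
    if (f != NULL && fscanf(f, "%ld %ld", &vsize, &rss) == 2)
        printf("virtual: %ld MB, resident: %ld MB\n",
               vsize * page / (1024 * 1024),
               rss * page / (1024 * 1024));
    if (f != NULL)
        fclose(f);

    free(heap);
    return 0;
}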

> Maybe the PPC kernel does some kind of
> "optimistic allocation", where it marks the process size appropriately and
> just uses an allocate-on-write mechanism to permit the process to use pages
> up to the stated memory limit.

That's exactly what glibc and Linux are documented (and observed ;) to
do, which is something I've been trying to point out from the beginning.

> This would permit the kernel to
> over-allocate VM on the pretext that processes that request large amounts of
> memory often don't use much of it.  Kind of a "VM-kiting" scheme, where the
> kernel allocates memory it doesn't have in the hopes that when the
> requesting program goes to use it, some other program will have exited and
> freed up some space.  This is all pure conjecture on my part, feel free to
> laugh heartily.

It's no joke: it happens, and when you try to use memory that doesn't
really exist, the Linux kernel's OOM killer picks a more-or-less
arbitrary process to kill in order to reclaim memory so the write can
succeed. Usually it's your own application (the JVM, in this case) that
fails horribly.
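
A sketch of what that looks like, if anyone is curious (please don't run
it on a box you care about); the underlying policy is Linux's documented
overcommit behavior, tunable through /proc/sys/vm/overcommit_memory:

/* Rough sketch of using memory that was never really there: grab chunk
 * after chunk (each malloc succeeds thanks to overcommit) and write to
 * every page so the kernel has to back it with real memory. Eventually
 * there is nothing left to hand out and the OOM killer picks a victim --
 * often, but not always, this process. */
#include <stdlib.h>
#include <string.h>

#define CHUNK (50UL * 1024 * 1024)   /* 50MB chunks again */

int main(void)
{
    for (;;) {
        char *p = malloc(CHUNK);
        if (p == NULL)
            break;                   /* the allocator finally said no */
        memset(p, 1, CHUNK);         /* force every page to be committed */
    }
    return 0;
}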

In my tests on the larger machine, the JVM kindly tells me that it can't
give me that much memory, rather than crashing and burning as I would
expect after being tricked by the OS.

-chris

