| From: Christopher Schultz [mailto:[EMAIL PROTECTED]
| Sent: Wednesday, 14 March, 2007 12:05
| 
| Perhaps, but the JVM actually refuses to start right away. In my "eat
| all my memory" tests, I was able to eat around 1.6GB before I brought my
| machine to a crawl. It took more than a minute for my terminal to
| respond to a CTRL-C (or perhaps the kernel killed my process for virtual
| memory exhaustion).

I imagine that when the JVM calls [cm]alloc, one of the first things alloc()
does is call sbrk() to expand your process's memory space.  That'll fail
right away if you don't have enough VM available.  If you're expanding the
arena in stages, then you won't die until you hit the limit.  If you want to
hit the limit faster, double your allocation size each time.  Once you hit
the limit (malloc() returns NULL; it's sbrk() that returns (void *)-1),
halve your allocation size and try again.  Keep halving and allocating until
you get a success, then continue until your allocation size is less than a
page.  You might also want to see what results you get by just calling
realloc() with increasing sizes.  I may try this myself when I get home
(don't have a Unix box handy here), but I'm running 64-bit Solaris on an
Opteron, so I don't know if my results will help.
 
| Then again, I was allocating my memory in 50MB chunks and then writing
| to them. Perhaps Sun's JVM on Linux allocates the whole chunk of memory
| at once (which doesn't do anything on Linux glibc) and then writes to
| the last page to see that it can be used. That doesn't seem right, so
| there must be some other voodoo going on here.

What doesn't seem right?  Remember, the heap isn't a bunch of empty pages;
you have to set up your block list, then initialize PermGen and other system
spaces.  It's entirely possible that the JVM needs to access the last page
as part of initializing itself.  I know nothing about JVM internals and not
much about Linux, but I used to be a fair VMS hacker, and I did learn a bit
about virtual memory implementation (all probably obsolete if not downright
wrong nowadays...)
 
| I have a buddy with a 128MB ppc-Linux 2.6 box with 512MB swap
| and java installed. I had him check, and it bombs (with a JVM panic!)

| when asking for even 512MB. So, it looks like physical memory and swap
| also have an effect. I would have expected the JVM to simply attempt to
| allocate the memory and use it.

Which is probably exactly what happened.  The panic arose when the JVM
attempted to initialize the heap, and found out that the kernel had lied to
it about there being 512M available.  Maybe the PPC kernel does some kind of
"optimistic allocation", where it marks the process size appropriately and
just uses an allocate-on-write mechanism to permit the process to use pages
up to the stated memory limit.  This would permit the kernel to
over-allocate VM on the pretext that processes that request large amounts of
memory often don't use much of it.  Kind of a "VM-kiting" scheme, where the
kernel allocates memory it doesn't have in the hopes that when the
requesting program goes to use it, some other program will have exited and
freed up some space.  This is all pure conjecture on my part, feel free to
laugh heartily.
