> I am running into a problem where my VM grows too big, too fast (faster than the
> consumers of the collected data can consume it), resulting in an
> OutOfMemoryError in all running threads. I changed the VM to run with -mx50m
> and it ran for much longer (about 5 days), but then it too did the same thing.
>
> Currently I am trying to add some sort of alert before it runs out of memory,
> mostly for debugging purposes, but it seems like the methods for querying
> memory do not take into account the maximum limit passed in via -mx. I can
> see that this is a good thing, because I could tell it that it has some insane
> amount of memory when it really has a few megs... but is there a happy medium?
>
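
On the alert question: if your JVM is recent enough to have Runtime.maxMemory()
(it was added in J2SE 1.4), a rough sketch of a heap-headroom check against the
-mx limit could look something like this (the class name and the 90% threshold
are just arbitrary examples):

    public class HeapAlert {
        // Arbitrary example threshold: warn once more than 90% of the limit is used.
        private static final double WARN_FRACTION = 0.90;

        public static void checkHeap() {
            Runtime rt = Runtime.getRuntime();
            long max  = rt.maxMemory();                     // upper limit set via -mx (J2SE 1.4+)
            long used = rt.totalMemory() - rt.freeMemory(); // bytes currently in use
            if (max > 0 && (double) used / max > WARN_FRACTION) {
                System.err.println("WARNING: heap usage " + used + " of " + max + " bytes");
            }
        }
    }

On older VMs that lack maxMemory() you would have to compare against a limit you
pass in yourself, since totalMemory() only reports the heap as currently sized,
which is probably why the standard methods seemed not to account for -mx.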
It might seem naive, but have you tried forcing garbage collection regularly
(System.gc())?
I have had problems of this kind with memory-consuming operations that caused
unnecessary out-of-memory errors (with the VM running on 128M). Adding a regular
garbage collector call somewhat improved the situation.
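
A rough sketch of what I mean (the one-minute interval is arbitrary, and the
thread is made a daemon so it does not keep the VM alive on its own):

    public class GcWatchdog implements Runnable {
        private final long intervalMillis;

        public GcWatchdog(long intervalMillis) {
            this.intervalMillis = intervalMillis;
        }

        public void run() {
            while (true) {
                try {
                    Thread.sleep(intervalMillis);
                } catch (InterruptedException e) {
                    return; // stop quietly if the thread is interrupted
                }
                System.gc(); // only a hint; the VM may ignore it
            }
        }

        public static void main(String[] args) {
            Thread t = new Thread(new GcWatchdog(60 * 1000L)); // one minute, arbitrary
            t.setDaemon(true); // do not keep the VM alive just for this thread
            t.start();
            // ... the rest of the application runs here ...
        }
    }

Keep in mind that System.gc() is only a request; the VM is not guaranteed to run
a collection when you call it.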
TIA
Dimitris