This is common in multithreaded programs.
There are two solutions to this sort of problem:
1) Limit the number of threads to 5 at any one time.
2) Write the data to temporary files and read it back later.

Which one you choose is a matter of speed.
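For option 1, a minimal sketch of capping the worker count, assuming Java 5+
(java.util.concurrent); the job body below is only a placeholder:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class BoundedWorkers {
        public static void main(String[] args) {
            // Never more than 5 collector threads running at once.
            ExecutorService pool = Executors.newFixedThreadPool(5);
            for (int i = 0; i < 100; i++) {
                final int jobId = i;
                pool.submit(new Runnable() {
                    public void run() {
                        // placeholder for the real data-collection work
                        System.out.println("collecting batch " + jobId);
                    }
                });
            }
            pool.shutdown();
        }
    }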

If you are reading over a network, pipe the data directly into
temporary files. Then read the data back from the files and delete them.
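Something along these lines, assuming the data arrives as an InputStream
(the URL here is only a placeholder):

    import java.io.*;
    import java.net.URL;

    public class SpoolToDisk {
        public static void main(String[] args) throws IOException {
            // Spool the network stream straight to disk instead of
            // buffering it all in the heap.
            File tmp = File.createTempFile("feed", ".dat");
            InputStream in = new URL("http://example.com/data").openStream();
            OutputStream out = new BufferedOutputStream(new FileOutputStream(tmp));
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            out.close();
            in.close();

            // Later, the consumer reads the spooled data at its own pace
            // and removes the file when done.
            InputStream reader = new BufferedInputStream(new FileInputStream(tmp));
            // ... process the stream here ...
            reader.close();
            tmp.delete();
        }
    }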


> I am running into a problem where my vm grows too big, too fast (faster
> than the consumers of the collected data can consume it), resulting in an
> OutOfMemoryError in all running threads.  I changed the vm to run with
> -mx50m and it ran for much longer (about 5 days), then it too did the
> same thing.
>
> Currently I am trying to add some sort of alert before it runs out of
> memory, mostly for debugging purposes, but it seems like the set of
> methods to access memory do not take into account the maximum limits
> passed in via -mx.  I can see that this is a good thing, because I could
> tell it that it has some insane amount of memory when it really has a few
> megs... but is there a happy medium?
>
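For what it's worth, on 1.4+ JVMs Runtime.maxMemory() does report the -mx
ceiling, so a low-memory alert can compare used heap against it. A rough
sketch (the 90% threshold is arbitrary):

    public class HeapWatch {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long used = rt.totalMemory() - rt.freeMemory();
            long max = rt.maxMemory();   // reflects -mx/-Xmx on 1.4+ JVMs
            if (used > max * 0.9) {      // warn at roughly 90% of the cap
                System.err.println("WARNING: heap roughly "
                        + (100 * used / max) + "% full");
            }
        }
    }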

