Depending on what you need the memory for, you might find Guava useful:
https://github.com/google/guava/wiki/CachesExplained

It's a time- and access-based cache that evicts elements once they expire.
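
Something along these lines (a minimal sketch; the key/value types, size cap,
and timeout are placeholders for whatever your bolt actually holds):

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;
    import java.util.concurrent.TimeUnit;

    public class BoltCacheExample {
        // Bounded cache: entries are evicted 10 minutes after their last
        // access, and the entry count is capped so the heap stays bounded.
        private final Cache<String, byte[]> cache = CacheBuilder.newBuilder()
                .maximumSize(10000)
                .expireAfterAccess(10, TimeUnit.MINUTES)
                .build();

        public void put(String key, byte[] value) {
            cache.put(key, value);
        }

        public byte[] get(String key) {
            return cache.getIfPresent(key); // null if absent or already evicted
        }
    }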

It all depends on your business logic, though.

On 8 December 2016 at 10:48:17, Eranga Heshan ([email protected]) wrote:

I think you are correct: I ran the "free -m" command on the terminal and saw
that free memory was dropping rapidly. When it was nearly exhausted, the
worker was killed, and a new worker was started with a new PID.

And yes, there is a lot of data cached in memory, waiting to be processed or
sent over the network. That is probably the issue.

I had not heard of Guava's caching before. Instead of caching, I am thinking
of adding memory to the server so I can increase the heap assigned to the
worker.
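
If I go that route, I believe the worker heap is set via worker.childopts in
storm.yaml (a sketch; the 4096m value is a placeholder for whatever the
server can spare):

    worker.childopts: "-Xmx4096m"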

Thanks,
Regards,



Eranga Heshan
*Undergraduate*
Computer Science & Engineering
University of Moratuwa
Mobile: +94 71 138 2686
Email: [email protected]

On Thu, Dec 8, 2016 at 2:42 PM, Matt Lowe <[email protected]> wrote:

> My guess is the following:
>
> 1. Memory limit is reached within worker JVM
> 2. Worker process starts to throw OutOfMemoryErrors (though this could be
> silent and not in the logs)
> 3. JVM will terminate due to lack of memory
> 4. Supervisor will start a new JVM with a new process ID and start a worker
>
> This will happen over and over: since Storm is designed to be highly
> available, a new worker is started whenever one dies.
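>
> To confirm it really is an OOM, you could have the worker JVM write a heap
> dump when it runs out of memory, e.g. via worker.childopts in storm.yaml
> (a sketch; the heap size and dump path are placeholders):
>
>     worker.childopts: "-Xmx1900m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/storm-worker.hprof"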
>
> I assume you are caching a lot of data in your bolt to reach this limit?
> Have you tried using the Guava cache?
>
> //Matt
>
>
> On 8 December 2016 at 09:19:43, Matt Lowe ([email protected])
> wrote:
>
> Hello Eranga,
>
> Do you have the worker/supervisor logs when the worker dies?
>
> //Matt
>
>
> On 8 December 2016 at 03:07:39, Eranga Heshan ([email protected])
> wrote:
>
> Hi all,
>
> In Storm, I configured the worker's maximum heap size to 1900MB. Then I
> carefully monitored what happens when it reaches the limit.
>
> The worker is killed and a new worker is spawned with a different process
> ID. Is my observation correct? If so, why is that happening?
>
> Thanks,
> Regards,
>
>
> Eranga Heshan
> *Undergraduate*
> Computer Science & Engineering
> University of Moratuwa
> Mobile: +94 71 138 2686
> Email: [email protected]
>
>
