My guess is the following:

1. The memory limit is reached within the worker JVM.
2. The worker process starts to throw OutOfMemoryErrors (this can be
silent and may not show up in the logs; see the sketch after this list
for one way to make it visible).
3. The JVM terminates due to lack of memory.
4. The supervisor starts a new JVM with a new process ID and launches a
new worker in it.

This will happen over and over: because Storm is a highly available
service, a new worker is started whenever one dies.
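If you want to confirm step 2, one option is to pass extra JVM flags to the
worker so the OutOfMemoryError leaves a trace. A minimal sketch, assuming the
topology is submitted from Java and that a heap dump is what you want to
inspect (the -Xmx and -XX options are standard HotSpot flags, not something
Storm adds for you):

    import org.apache.storm.Config;

    public class WorkerOptsExample {
        public static void main(String[] args) {
            Config conf = new Config();
            // -Xmx caps the worker heap; the -XX flags write a heap dump when an
            // OutOfMemoryError is thrown instead of letting the worker die quietly.
            conf.put(Config.TOPOLOGY_WORKER_CHILDOPTS,
                     "-Xmx1900m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp");
            // ... build and submit the topology with this conf as usual.
        }
    }

The resulting heap dump can then be opened in a tool like Eclipse MAT to see
what is actually filling the heap.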

I assume you are caching a lot of data in your bolt to reach this limit?
Have you tried using a Guava cache? A rough sketch follows.
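Something along these lines keeps the cache bounded so it cannot grow until
the worker heap is exhausted. A minimal sketch, assuming a String-to-String
cache; the size cap and idle timeout are placeholder values:

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;
    import java.util.concurrent.TimeUnit;

    public class BoundedCacheExample {
        public static void main(String[] args) {
            // Entries are evicted once the size cap or the idle timeout is hit,
            // so memory use stays roughly constant instead of growing forever.
            Cache<String, String> cache = CacheBuilder.newBuilder()
                    .maximumSize(100_000)                    // cap on entry count
                    .expireAfterAccess(10, TimeUnit.MINUTES) // drop idle entries
                    .build();

            cache.put("key", "value");
            System.out.println(cache.getIfPresent("key"));
        }
    }

In a bolt you would hold the cache as a field created in prepare() rather
than in a main method; this is only to show the builder calls.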

//Matt


On 8 December 2016 at 09:19:43, Matt Lowe ([email protected]) wrote:

Hello Eranga,

Do you have the worker/supervisor logs from when the worker dies?

//Matt


On 8 December 2016 at 03:07:39, Eranga Heshan ([email protected]) wrote:

Hi all,

In Storm, I configured the worker's maximum heap size to 1900 MB and then
carefully monitored what happens when it reaches that limit.
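(A limit like that could be set roughly as in the sketch below; whether it was
done this way or via -Xmx in worker.childopts is an assumption, not stated
here.)

    import org.apache.storm.Config;

    public class HeapLimitExample {
        public static void main(String[] args) {
            Config conf = new Config();
            // Per-worker heap cap in MB, used by Storm's resource-aware scheduling.
            conf.put(Config.TOPOLOGY_WORKER_MAX_HEAP_SIZE_MB, 1900.0);
            // ... the topology is then submitted with this conf.
        }
    }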

The worker is killed and a new worker is spawned with a different process
ID. Is my observation correct? If so, why is that happening?

Thanks and regards,


Eranga Heshan
Undergraduate
Computer Science & Engineering
University of Moratuwa
Mobile: +94 71 138 2686
Email: [email protected]
