Re: Spark ResourceLeak??

2016-07-19 Thread Ted Yu
ResourceLeakDetector doesn't seem to be from Spark itself; it is Netty's
leak detector.

Please check your dependencies for the potential leak.

Cheers

On Tue, Jul 19, 2016 at 6:11 AM, Guruji <saurabh.g...@gmail.com> wrote:

> I am running a Spark cluster on Mesos. The module reads data from Kafka as
> a DirectStream and pushes it into Elasticsearch after looking up Redis to
> resolve names against IDs.
>
> I have been getting this message in my worker logs.
>
> 16/07/19 11:17:44 ERROR ResourceLeakDetector: LEAK: You are creating too
> many HashedWheelTimer instances.  HashedWheelTimer is a shared resource
> that must be reused across the JVM,so that only a few instances are created.
>
> I can't figure out the reason for the resource leak. When this happens, the
> batches start slowing down and the pending queue grows, and there is hardly
> any going back from there other than killing the job and restarting it.
>
> Any idea what is causing the resource leak? When I googled the message, it
> seemed to be related to Akka. I am using Spark 1.6.2.
>
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-ResourceLeak-tp27355.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>
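Following up on the advice above to check dependencies: a quick way to see which jar on the running classpath supplies the timer class the warning names is to probe it at runtime. This is a hedged sketch; the two class names are Netty's known 3.x and 4.x package locations for HashedWheelTimer, but the `FindNetty` helper itself is hypothetical, not part of Spark or Netty.

```java
// Hedged sketch: probe the running JVM's classpath to find which jar (if
// any) provides Netty's HashedWheelTimer, the class the leak warning is
// about. FindNetty is an illustrative helper, not an existing tool.
public final class FindNetty {

    // Returns the code-source location of the class, "bootstrap classpath"
    // for JDK-internal classes, or "not on classpath" if it can't be loaded.
    static String locate(String className) {
        try {
            Class<?> c = Class.forName(className);
            java.security.CodeSource src =
                c.getProtectionDomain().getCodeSource();
            return src != null ? src.getLocation().toString()
                               : "bootstrap classpath";
        } catch (ClassNotFoundException e) {
            return "not on classpath";
        }
    }

    public static void main(String[] args) {
        // Netty 3.x (pulled in via Akka on Spark 1.x) and Netty 4.x place
        // HashedWheelTimer in different packages; check both.
        for (String name : new String[] {
                "org.jboss.netty.util.HashedWheelTimer",   // Netty 3.x
                "io.netty.util.HashedWheelTimer"}) {       // Netty 4.x
            System.out.println(name + " -> " + locate(name));
        }
    }
}
```

Running this inside the Spark application (or an executor) would point at the jar of whichever dependency, e.g. an Elasticsearch or Redis client, bundles Netty.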


Spark ResourceLeak?

2016-07-19 Thread saurabh guru
I am running a Spark cluster on Mesos. The module reads data from Kafka as
a DirectStream and pushes it into Elasticsearch after looking up Redis to
resolve names against IDs.

I have been getting this message in my worker logs.


16/07/19 11:17:44 ERROR ResourceLeakDetector: LEAK: You are creating too
many HashedWheelTimer instances.  HashedWheelTimer is a shared resource
that must be reused across the JVM,so that only a few instances are created.

I can't figure out the reason for the resource leak. When this happens, the
batches start slowing down and the pending queue grows, and there is hardly
any going back from there other than killing the job and restarting it.

Any idea what is causing the resource leak? When I googled the message, it
seemed to be related to Akka. I am using Spark 1.6.2.

-- 
Thanks,
Saurabh
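The warning itself states the fix it wants: keep one timer per JVM and reuse it everywhere, instead of letting each batch or client connection construct its own. A minimal sketch of that pattern, using a stdlib `ScheduledExecutorService` as a stand-in for Netty's `HashedWheelTimer` (with Netty you would hold a single `io.netty.util.HashedWheelTimer` in the same kind of JVM-wide holder); the `SharedTimer` class and its names are hypothetical:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Hedged sketch: one JVM-wide timer shared by all callers, the reuse the
// HashedWheelTimer warning asks for. SharedTimer is illustrative only.
public final class SharedTimer {

    // Initialization-on-demand holder: the timer is created lazily, once,
    // and every caller in the JVM gets the same instance.
    private static final class Holder {
        static final ScheduledExecutorService TIMER =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "shared-wheel-timer");
                t.setDaemon(true);  // don't keep the JVM alive on shutdown
                return t;
            });
    }

    private SharedTimer() {}

    public static ScheduledExecutorService get() {
        return Holder.TIMER;  // always the same shared instance
    }

    // Schedule a one-shot timeout on the shared timer, mirroring the shape
    // of Netty's timer.newTimeout(task, delay, unit).
    public static ScheduledFuture<?> newTimeout(Runnable task,
                                                long delay, TimeUnit unit) {
        return get().schedule(task, delay, unit);
    }
}
```

If the timers are being created inside a third-party client (Elasticsearch or Redis driver) rather than application code, the equivalent fix is usually to reuse one client instance per executor JVM instead of opening a new client per batch or per partition.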

