Hi,
I am facing the error message below now. Please help me.
2016-01-21 16:06:14,123 WARN org.apache.hadoop.hdfs.DFSClient: Failed to
connect to /xxx.xx.xx.xx:50010 for block, add to deadNodes and continue.
java.nio.channels.ClosedByInterruptException
java.nio.channels.ClosedByInterruptException
at
Can you look in the executor logs and see why the SparkContext is being
shut down? A similar discussion happened here previously:
http://apache-spark-user-list.1001560.n3.nabble.com/RECEIVED-SIGNAL-15-SIGTERM-td23668.html
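If this is running on YARN, you can pull the executor logs with something
like the following (the application id is a placeholder):

    yarn logs -applicationId <application_id>

Look for ERROR entries or a "RECEIVED SIGNAL 15: SIGTERM" line just before
the shutdown.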
Thanks
Best Regards
On Thu, Jan 21, 2016 at 5:11 PM, Soni spark wrote:
Please also check AppMaster log.
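In the yarn logs -applicationId <application_id> output (application id is a
placeholder), the AppMaster log is normally the first container
(container_..._000001); it usually records why the application was killed.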
Thanks
> On Jan 21, 2016, at 3:51 AM, Akhil Das wrote:
>
> Can you look in the executor logs and see why the SparkContext is being
> shut down? A similar discussion happened here previously.
>
Hi Friends,
My Spark job runs successfully in local mode but fails in cluster mode.
Below is the error message I am getting. Can anyone help me?
16/01/21 16:38:07 INFO twitter4j.TwitterStreamImpl: Establishing connection.
16/01/21 16:38:07 INFO twitter.TwitterReceiver: Twitter receiver
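For context, the receiver is created roughly like the sketch below; the
OAuth keys, app name and batch interval are placeholders, not the actual
job:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.twitter.TwitterUtils

    object TwitterJob {
      def main(args: Array[String]): Unit = {
        // twitter4j reads OAuth credentials from system properties;
        // the values here are placeholders
        System.setProperty("twitter4j.oauth.consumerKey", "<consumerKey>")
        System.setProperty("twitter4j.oauth.consumerSecret", "<consumerSecret>")
        System.setProperty("twitter4j.oauth.accessToken", "<accessToken>")
        System.setProperty("twitter4j.oauth.accessTokenSecret", "<accessTokenSecret>")

        // no master URL hard-coded, so spark-submit decides local vs. cluster mode
        val conf = new SparkConf().setAppName("TwitterJob")
        val ssc = new StreamingContext(conf, Seconds(10))

        // None => use the twitter4j.oauth.* system properties set above
        val tweets = TwitterUtils.createStream(ssc, None)
        tweets.map(_.getText).print()

        ssc.start()
        ssc.awaitTermination()
      }
    }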
The exception below is at WARN level.
Can you check HDFS health?
Which Hadoop version are you using?
There should be some other, fatal error if your job failed.
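A couple of quick checks (the path is just an example):

    hdfs fsck /
    hdfs dfsadmin -report

fsck reports missing or corrupt blocks, and dfsadmin -report lists dead
DataNodes, which would line up with the "add to deadNodes" warning.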
Cheers
On Thu, Jan 21, 2016 at 4:50 AM, Soni spark
wrote:
> Hi,
>
> I am facing the error message below now. Please