Re: Stand Alone Cluster - Strange issue

2015-12-22 Thread Madabhattula Rajesh Kumar
Hi Ted,

Thank you. Yes, this issue is related to
https://issues.apache.org/jira/browse/SPARK-4170
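
For anyone who finds this thread later: SPARK-4170 is about closures
misbehaving when the driver object extends scala.App, because fields
defined in the App body are set up by delayed initialization and can
still be null when a closure shipped to an executor reads them. A
minimal sketch of the usual workaround (hypothetical object name,
Spark 1.x accumulator API as in the snippet quoted below) is to put
the code in an explicit main method instead:

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical name; the key point is a plain object with main(),
// not "object Main extends App" with code in the delayed-init body.
object AccumulatorExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("AccumulatorExample")
    val sc = new SparkContext(conf)

    // Same logic as in the original mail, now inside main(), so the
    // foreach closure no longer captures delayed-init fields.
    val accum = sc.accumulator(0)
    sc.parallelize(Array(1, 2, 3, 4)).foreach(x => accum += x)
    println("accum = " + accum.value)  // expected: 10

    sc.stop()
  }
}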

Regards,
Rajesh

On Wed, Dec 23, 2015 at 12:09 AM, Ted Yu wrote:

> This should be related:
> https://issues.apache.org/jira/browse/SPARK-4170
>
> On Tue, Dec 22, 2015 at 9:34 AM, Madabhattula Rajesh Kumar <
> mrajaf...@gmail.com> wrote:
>
>> Hi,
>>
>> I have a standalone cluster with one master and one slave. I'm getting
>> the NullPointerException below.
>>
>> Could you please help me with this issue?
>>
>>
>> *Code block:*
>>   val accum = sc.accumulator(0)
>>   sc.parallelize(Array(1, 2, 3, 4)).foreach(x => accum += x)  *==> this line throws the exception*
>>
>> *Exception:*
>>
>> 15/12/22 09:18:26 WARN scheduler.TaskSetManager: Lost task 1.0 in stage
>> 0.0 (TID 1, 172.25.111.123): *java.lang.NullPointerException*
>> at com.cc.ss.etl.Main$$anonfun$1.apply$mcVI$sp(Main.scala:25)
>> at com.cc.ss.etl.Main$$anonfun$1.apply(Main.scala:25)
>> at com.cc.ss.etl.Main$$anonfun$1.apply(Main.scala:25)
>> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>> at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>> at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$28.apply(RDD.scala:890)
>> at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$28.apply(RDD.scala:890)
>> at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1848)
>> at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1848)
>> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
>> at org.apache.spark.scheduler.Task.run(Task.scala:88)
>> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> at java.lang.Thread.run(Thread.java:745)
>>
>> 15/12/22 09:18:26 INFO scheduler.TaskSetManager: Lost task 0.0 in stage
>> 0.0 (TID 0) on executor 172.25.111.123: java.lang.NullPointerException
>> (null) [duplicate 1]
>> 15/12/22 09:18:26 INFO scheduler.TaskSetManager: Starting task 0.1 in
>> stage 0.0 (TID 2, 172.25.111.123, PROCESS_LOCAL, 2155 bytes)
>> 15/12/22 09:18:26 INFO scheduler.TaskSetManager: Starting task 1.1 in
>> stage 0.0 (TID 3, 172.25.111.123, PROCESS_LOCAL, 2155 bytes)
>> 15/12/22 09:18:26 WARN scheduler.TaskSetManager: Lost task 1.1 in stage
>> 0.0 (TID 3, 172.25.111.123):
>>
>> Regards,
>> Rajesh
>>
>
>


Re: Stand Alone Cluster - Strange issue

2015-12-22 Thread Ted Yu
This should be related:
https://issues.apache.org/jira/browse/SPARK-4170

On Tue, Dec 22, 2015 at 9:34 AM, Madabhattula Rajesh Kumar <
mrajaf...@gmail.com> wrote:

> Hi,
>
> I have a standalone cluster with one master and one slave. I'm getting
> the NullPointerException below. [...]
>
> Regards,
> Rajesh


Re: Stand Alone Cluster - Strange issue

2015-12-22 Thread Ted Yu
Which Spark release are you using?

Cheers

On Tue, Dec 22, 2015 at 9:34 AM, Madabhattula Rajesh Kumar <
mrajaf...@gmail.com> wrote:

> Hi,
>
> I have a standalone cluster with one master and one slave. I'm getting
> the NullPointerException below. [...]
>
> Regards,
> Rajesh