Can you show the related code in DriverAccumulator.java?

Which Spark release are you using?

Cheers

On Mon, Aug 3, 2015 at 3:13 PM, Anubhav Agarwal <anubha...@gmail.com> wrote:

> Hi,
> I am trying to modify my code to use HDFS and multiple nodes. The code
> works fine when I run it locally on a single machine with a single worker.
> While modifying it I have run into the following error. Any hint
> would be helpful.
>
> java.lang.NullPointerException
>       at thomsonreuters.trailblazer.main.DriverAccumulator.addAccumulator(DriverAccumulator.java:17)
>       at thomsonreuters.trailblazer.main.DriverAccumulator.addAccumulator(DriverAccumulator.java:11)
>       at org.apache.spark.Accumulable.add(Accumulators.scala:73)
>       at thomsonreuters.trailblazer.main.AllocationBolt.queueDriverRow(AllocationBolt.java:112)
>       at thomsonreuters.trailblazer.main.AllocationBolt.executeRow(AllocationBolt.java:303)
>       at thomsonreuters.trailblazer.main.FileMapFunction.call(FileMapFunction.java:49)
>       at thomsonreuters.trailblazer.main.FileMapFunction.call(FileMapFunction.java:8)
>       at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction2$1.apply(JavaPairRDD.scala:996)
>       at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:90)
>       at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:90)
>       at org.apache.spark.rdd.RDD$$anonfun$15.apply(RDD.scala:647)
>       at org.apache.spark.rdd.RDD$$anonfun$15.apply(RDD.scala:647)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
>       at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:70)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:242)
>       at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
>       at org.apache.spark.scheduler.Task.run(Task.scala:64)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
>
> failed in write bolt execute null
> failed in write bolt execute null
> java.lang.NullPointerException
>
>
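Without seeing DriverAccumulator.java this is only a guess, but a frequent cause of an NPE inside a custom accumulator's addAccumulator when moving from local mode to a cluster is a field that is initialized only on the driver and comes back null after the object is serialized to executors (Java serialization restores transient fields as null). A minimal plain-Java sketch of that pattern — DriverAccumulator, its lookup map, and addAccumulator's signature are hypothetical stand-ins here, and Spark itself is not on the classpath:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for a custom accumulator param. The 'lookup'
// field is transient, so Java serialization (how Spark ships the param
// to executors) restores it as null.
class DriverAccumulator implements Serializable {
    private transient Map<String, Long> lookup = new HashMap<>();

    // Mimics an addAccumulator(acc, row) method: throws NPE on
    // executors if 'lookup' was never re-created after deserialization.
    List<String> addAccumulator(List<String> acc, String row) {
        lookup.put(row, 0L);  // NullPointerException when lookup == null
        acc.add(row);
        return acc;
    }
}

public class NpeDemo {
    // Round-trip through Java serialization, like shipping to an executor.
    static <T extends Serializable> T roundTrip(T obj) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(obj);
        oos.flush();
        ObjectInputStream in =
            new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
        @SuppressWarnings("unchecked")
        T copy = (T) in.readObject();
        return copy;
    }

    public static void main(String[] args) throws Exception {
        DriverAccumulator local = new DriverAccumulator();
        local.addAccumulator(new ArrayList<>(), "row1");  // driver side: fine
        System.out.println("local mode: ok");

        DriverAccumulator onExecutor = roundTrip(local);  // lookup is now null
        try {
            onExecutor.addAccumulator(new ArrayList<>(), "row2");
        } catch (NullPointerException e) {
            System.out.println("cluster mode: NullPointerException");
        }
    }
}
```

If this matches your code, initializing the field lazily inside addAccumulator (or wherever Spark first calls into the param on the executor) instead of only in the driver-side constructor usually fixes it.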
