Hey Peter,

I think this is actually due to an error-handling issue: if you look at the
stack trace you posted, the NPE is being thrown from the error-handling
branch of a `finally` block:

  @Override
  public void write(scala.collection.Iterator<Product2<K, V>> records) throws IOException {
    boolean success = false;
    try {
      while (records.hasNext()) {
        insertRecordIntoSorter(records.next());
      }
      closeAndWriteOutput();
      success = true;
    } finally {
      if (!success) {
        sorter.cleanupAfterError();  // <---- this is the line throwing the error
      }
    }
  }

I suspect what's happening is that an exception is being thrown from user /
upstream code during the initial call to records.next(), but the
error-handling block then fails with this NPE because sorter is still null:
we haven't initialized it yet at that point. As a result, the NPE masks the
original exception from user code.
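
For reference, here's a rough sketch of the shape a fix might take (just an
illustration against the snippet above, not an actual patch; the logger call
assumes the class's existing slf4j logger):

  @Override
  public void write(scala.collection.Iterator<Product2<K, V>> records) throws IOException {
    boolean success = false;
    try {
      while (records.hasNext()) {
        insertRecordIntoSorter(records.next());
      }
      closeAndWriteOutput();
      success = true;
    } finally {
      // Only attempt cleanup if the sorter was actually created, and don't
      // let a failure during cleanup mask the original exception.
      if (!success && sorter != null) {
        try {
          sorter.cleanupAfterError();
        } catch (Exception e) {
          logger.error("Error while cleaning up after shuffle write failure", e);
        }
      }
    }
  }

With a guard like that, the exception from records.next() would propagate to
the caller instead of being replaced by the NPE.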

I'm going to file a JIRA for this and will try to add a set of regression
tests to the ShuffleSuite to make sure exceptions from user code aren't
swallowed like this.

On Fri, Jun 19, 2015 at 11:36 AM, Peter Rudenko <petro.rude...@gmail.com>
wrote:

>  Hi, I want to try the new tungsten-sort shuffle manager, but on one stage
> executors start to die with an NPE:
>
> 15/06/19 17:53:35 WARN TaskSetManager: Lost task 38.0 in stage 41.0 (TID 3176, ip-10-50-225-214.ec2.internal): java.lang.NullPointerException
>         at org.apache.spark.shuffle.unsafe.UnsafeShuffleWriter.write(UnsafeShuffleWriter.java:151)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>         at org.apache.spark.scheduler.Task.run(Task.scala:70)
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
>
>
> Any suggestions?
>
> Thanks,
> Peter Rudenko
>
