[ https://issues.apache.org/jira/browse/CRUNCH-568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Micah Whitacre updated CRUNCH-568:
----------------------------------
    Attachment: CRUNCH-568a.patch

Took a stab at writing an integration test for this problem to prove the patch fixes it. Unfortunately, I couldn't get it to fail prior to adding the fix. Here's the test I was trying to add. I'm going to be offline for a bit, so feel free to experiment with getting it to fail in order to verify Josh's fix.

> Aggregators fail on SparkPipeline
> ---------------------------------
>
>                 Key: CRUNCH-568
>                 URL: https://issues.apache.org/jira/browse/CRUNCH-568
>             Project: Crunch
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 0.12.0
>            Reporter: Nithin Asokan
>         Attachments: CRUNCH-568.patch, CRUNCH-568a.patch
>
>
> Logging this based on the mailing list discussion:
> http://mail-archives.apache.org/mod_mbox/crunch-user/201510.mbox/%3CCANb5z2KBqxZng92ToFo0MdTk2fd8jtGTjZ85h1yUo_akaetcXg%40mail.gmail.com%3E
> Running a Crunch SparkPipeline with the FirstN aggregator results in a NullPointerException.
> Example to recreate this: https://gist.github.com/nasokan/853ff80ce20ad7a78886
> Stack trace from the driver logs:
> {code}
> 15/10/05 16:02:33 WARN TaskSetManager: Lost task 3.0 in stage 0.0 (TID 0, 123.domain.xyz): java.lang.NullPointerException
> 	at org.apache.crunch.impl.mr.run.UniformHashPartitioner.getPartition(UniformHashPartitioner.java:32)
> 	at org.apache.crunch.impl.spark.fn.PartitionedMapOutputFunction.call(PartitionedMapOutputFunction.java:62)
> 	at org.apache.crunch.impl.spark.fn.PartitionedMapOutputFunction.call(PartitionedMapOutputFunction.java:35)
> 	at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1002)
> 	at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1002)
> 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
> 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
> 	at org.apache.spark.util.collection.ExternalSorter.spillToPartitionFiles(ExternalSorter.scala:366)
> 	at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:211)
> 	at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:64)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
> {code}
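For reference, here is a minimal sketch of the kind of pipeline that should hit the NPE described above. It is not the attached CRUNCH-568a.patch or the linked gist verbatim; the local Spark master, the command-line input path, and the first-character keying function are illustrative assumptions, but the FirstN-over-groupByKey shape matches the reported failure.

{code}
import org.apache.crunch.MapFn;
import org.apache.crunch.PCollection;
import org.apache.crunch.PTable;
import org.apache.crunch.Pair;
import org.apache.crunch.Pipeline;
import org.apache.crunch.fn.Aggregators;
import org.apache.crunch.impl.spark.SparkPipeline;
import org.apache.crunch.types.writable.Writables;

public class FirstNSparkRepro {
  public static void main(String[] args) throws Exception {
    // Local Spark master; any text file works as input.
    Pipeline pipeline = new SparkPipeline("local", "crunch-568-repro");
    PCollection<String> lines = pipeline.readTextFile(args[0]);

    // Key each line by its first character, group, and keep at most 5 values
    // per key with the FirstN aggregator. The shuffle behind combineValues is
    // where UniformHashPartitioner.getPartition throws the NPE in the report.
    PTable<String, String> firstN = lines
        .by(new MapFn<String, String>() {
          @Override
          public String map(String line) {
            return line.isEmpty() ? "" : line.substring(0, 1);
          }
        }, Writables.strings())
        .groupByKey()
        .combineValues(Aggregators.<String>FIRST_N(5));

    for (Pair<String, String> entry : firstN.materialize()) {
      System.out.println(entry);
    }
    pipeline.done();
  }
}
{code}

Since the issue is filed against the Spark component, the same code presumably passes when run on an MRPipeline, which is what points the problem at the Spark map-output/partitioning path.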