Re: Anyone else having trouble with replicated off heap RDD persistence?

2016-08-24 Thread Michael Allman
FYI, I've updated the issue's description to include a very simple program 
which reproduces the issue for me.
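
For anyone who doesn't want to click through to the JIRA, here's a minimal
sketch along the same lines. It's an approximation, not the exact program from
the issue; it assumes off-heap memory is enabled via the spark.memory.offHeap.*
settings and a cluster with at least two executors, so that replication
actually happens:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object OffHeapReplicationRepro {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("off-heap-replication-repro")
      // Off-heap block storage must be enabled and sized explicitly.
      .set("spark.memory.offHeap.enabled", "true")
      .set("spark.memory.offHeap.size", "1g")
    val sc = new SparkContext(conf)

    // Serialized off-heap storage with 2x replication: the combination
    // that fails in the reports quoted below.
    val level = StorageLevel(useDisk = true, useMemory = true,
      useOffHeap = true, deserialized = false, replication = 2)

    val rdd = sc.parallelize(1 to 1000000).persist(level)
    println(rdd.count()) // materializes and replicates the blocks
    println(rdd.count()) // reads the stored blocks back
    sc.stop()
  }
}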

Cheers,

Michael

> On Aug 23, 2016, at 4:54 PM, Michael Allman <mich...@videoamp.com> wrote:
> 
> I've replied on the issue's page, but in a word, "yes". See
> https://issues.apache.org/jira/browse/SPARK-17204.
> 
> Michael
> 
> 
>> On Aug 23, 2016, at 11:55 AM, Reynold Xin <r...@databricks.com> wrote:
>> 
>> Does this problem still exist on today's master/branch-2.0? 
>> 
>> SPARK-16550 was merged. It might be fixed already.
>> 
>> On Tue, Aug 23, 2016 at 9:37 AM, Michael Allman <mich...@videoamp.com> wrote:
>> FYI, I posted this to user@ and have followed up with a bug report:
>> https://issues.apache.org/jira/browse/SPARK-17204
>> 
>> Michael
>> 
>>> Begin forwarded message:
>>> 
>>> From: Michael Allman <mich...@videoamp.com>
>>> Subject: Anyone else having trouble with replicated off heap RDD 
>>> persistence?
>>> Date: August 16, 2016 at 3:45:14 PM PDT
>>> To: user <u...@spark.apache.org>
>>> 
>>> Hello,
>>> 
>>> A coworker was having a problem with a big Spark job failing after several
>>> hours when one of the executors would segfault. That problem aside, I
>>> speculated that her job would be more robust against these kinds of
>>> executor crashes if she used replicated RDD storage. She's using off-heap
>>> storage (for good reason), so I asked her to try running her job with the
>>> following storage level: `StorageLevel(useDisk = true, useMemory = true,
>>> useOffHeap = true, deserialized = false, replication = 2)`. The job would
>>> immediately fail with a rather suspicious-looking exception. For example:
>>> 
>>> com.esotericsoftware.kryo.KryoException: Encountered unregistered class ID: 9086
>>> at com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:137)
>>> at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:670)
>>> at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:781)
>>> at org.apache.spark.serializer.KryoDeserializationStream.readObject(KryoSerializer.scala:229)
>>> at org.apache.spark.serializer.DeserializationStream$$anon$1.getNext(Serializer.scala:169)
>>> at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
>>> at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32)
>>> at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
>>> at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:461)
>>> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
>>> at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificColumnarIterator.hasNext(Unknown Source)
>>> at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithoutKey$(Unknown Source)
>>> at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
>>> at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>>> at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
>>> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
>>> at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
>>> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
>>> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
>>> at org.apache.spark.scheduler.Task.run(Task.scala:85)
>>> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>> at java.lang.Thread.run(Thread.java:745)

Re: Anyone else having trouble with replicated off heap RDD persistence?

2016-08-23 Thread Michael Allman
I've replied on the issue's page, but in a word, "yes". See
https://issues.apache.org/jira/browse/SPARK-17204.

Michael


> On Aug 23, 2016, at 11:55 AM, Reynold Xin <r...@databricks.com> wrote:
> 
> Does this problem still exist on today's master/branch-2.0? 
> 
> SPARK-16550 was merged. It might be fixed already.
> 
> On Tue, Aug 23, 2016 at 9:37 AM, Michael Allman <mich...@videoamp.com> wrote:
> FYI, I posted this to user@ and have followed up with a bug report:
> https://issues.apache.org/jira/browse/SPARK-17204
> 
> Michael
> 
>> Begin forwarded message:
>> 
>> From: Michael Allman <mich...@videoamp.com>
>> Subject: Anyone else having trouble with replicated off heap RDD persistence?
>> Date: August 16, 2016 at 3:45:14 PM PDT
>> To: user <u...@spark.apache.org>
>> 
>> Hello,
>> 
>> A coworker was having a problem with a big Spark job failing after several
>> hours when one of the executors would segfault. That problem aside, I
>> speculated that her job would be more robust against these kinds of executor
>> crashes if she used replicated RDD storage. She's using off-heap storage
>> (for good reason), so I asked her to try running her job with the following
>> storage level: `StorageLevel(useDisk = true, useMemory = true, useOffHeap =
>> true, deserialized = false, replication = 2)`. The job would immediately
>> fail with a rather suspicious-looking exception. For example:
>> 
>> com.esotericsoftware.kryo.KryoException: Encountered unregistered class ID: 9086
>>  at com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:137)
>>  at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:670)
>>  at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:781)
>>  at org.apache.spark.serializer.KryoDeserializationStream.readObject(KryoSerializer.scala:229)
>>  at org.apache.spark.serializer.DeserializationStream$$anon$1.getNext(Serializer.scala:169)
>>  at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
>>  at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32)
>>  at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
>>  at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:461)
>>  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
>>  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificColumnarIterator.hasNext(Unknown Source)
>>  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithoutKey$(Unknown Source)
>>  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
>>  at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>>  at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
>>  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
>>  at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
>>  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
>>  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
>>  at org.apache.spark.scheduler.Task.run(Task.scala:85)
>>  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
>>  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>  at java.lang.Thread.run(Thread.java:745)
>> 
>> or
>> 
>> java.lang.IndexOutOfBoundsException: Index: 6, Size: 0
>>  at java.util.ArrayList.rangeCheck(ArrayList.java:653)
>>  at java.util.ArrayList.get(ArrayList.java:429)
>>  at com.esotericsoftware.kryo.util.MapReferenceResolver.getReadObject(MapReferenceResolver.java:60)
>>  at com.esotericsoftware.kryo.Kryo.readReferenceOrNull(Kryo.java:834)
>>  at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:788)
>>  at org.apache.spark.serializer.KryoDeserializationStream.readObject(KryoSerializer.scala:229)

Re: Anyone else having trouble with replicated off heap RDD persistence?

2016-08-23 Thread Reynold Xin
Does this problem still exist on today's master/branch-2.0?

SPARK-16550 was merged. It might be fixed already.
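
For anyone with a local clone who wants to confirm, the patch can be located
by its JIRA key in the commit history (this assumes apache/spark is the origin
remote; the command just searches commit messages):

# list commits on branch-2.0 that reference SPARK-16550
git log --oneline --grep='SPARK-16550' origin/branch-2.0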

On Tue, Aug 23, 2016 at 9:37 AM, Michael Allman <mich...@videoamp.com> wrote:

> FYI, I posted this to user@ and have followed up with a bug report:
> https://issues.apache.org/jira/browse/SPARK-17204
>
> Michael
>
> Begin forwarded message:
>
> From: Michael Allman <mich...@videoamp.com>
> Subject: Anyone else having trouble with replicated off heap RDD persistence?
> Date: August 16, 2016 at 3:45:14 PM PDT
> To: user <u...@spark.apache.org>
>
> Hello,
>
> A coworker was having a problem with a big Spark job failing after several
> hours when one of the executors would segfault. That problem aside, I
> speculated that her job would be more robust against these kinds of
> executor crashes if she used replicated RDD storage. She's using off-heap
> storage (for good reason), so I asked her to try running her job with the
> following storage level: `StorageLevel(useDisk = true, useMemory = true,
> useOffHeap = true, deserialized = false, replication = 2)`. The job would
> immediately fail with a rather suspicious-looking exception. For example:
>
> com.esotericsoftware.kryo.KryoException: Encountered unregistered class ID: 9086
> at com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:137)
> at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:670)
> at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:781)
> at org.apache.spark.serializer.KryoDeserializationStream.readObject(KryoSerializer.scala:229)
> at org.apache.spark.serializer.DeserializationStream$$anon$1.getNext(Serializer.scala:169)
> at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
> at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32)
> at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
> at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:461)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificColumnarIterator.hasNext(Unknown Source)
> at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithoutKey$(Unknown Source)
> at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
> at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
> at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
> at org.apache.spark.scheduler.Task.run(Task.scala:85)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
> or
>
> java.lang.IndexOutOfBoundsException: Index: 6, Size: 0
> at java.util.ArrayList.rangeCheck(ArrayList.java:653)
> at java.util.ArrayList.get(ArrayList.java:429)
> at com.esotericsoftware.kryo.util.MapReferenceResolver.getReadObject(MapReferenceResolver.java:60)
> at com.esotericsoftware.kryo.Kryo.readReferenceOrNull(Kryo.java:834)
> at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:788)
> at org.apache.spark.serializer.KryoDeserializationStream.readObject(KryoSerializer.scala:229)
> at org.apache.spark.serializer.DeserializationStream$$anon$1.getNext(Serializer.scala:169)
> at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
> at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32)
> at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
> at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:461)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificColumnarIterator.hasNext(Unknown Source)
> at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithoutKey$(Unknown Source)
> at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)