[ https://issues.apache.org/jira/browse/SPARK-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088680#comment-14088680 ]

Graham Dennis commented on SPARK-2878:
--------------------------------------

I should mention that my current hacky workaround is to set 
`spark.task.maxFailures` to a very large value (1000; the default is 4), 
because the failures are non-deterministic and the job still makes progress 
despite the failed tasks. A minimal sketch of the setting is below. If you 
have a better workaround, I'm interested!
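
Roughly like this (the class name, app name, and master URL are placeholders 
for your own setup, not names from the actual project):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object MaxFailuresWorkaround {
  def main(args: Array[String]) {
    // Raise the per-task retry limit so that the non-deterministic
    // registrator failures don't fail the whole job. The app name and
    // master URL are placeholders.
    val conf = new SparkConf()
      .setAppName("MaxFailuresWorkaround")
      .setMaster("spark://master:7077")
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .set("spark.task.maxFailures", "1000") // the default is 4

    val sc = new SparkContext(conf)
    // ... run the job as usual ...
    sc.stop()
  }
}
```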

> Inconsistent Kryo serialisation with custom Kryo Registrator
> ------------------------------------------------------------
>
>                 Key: SPARK-2878
>                 URL: https://issues.apache.org/jira/browse/SPARK-2878
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.0.0, 1.0.2
>         Environment: Linux RedHat EL 6, 4-node Spark cluster.
>            Reporter: Graham Dennis
>
> The custom Kryo Registrator (a class that mixes in the 
> org.apache.spark.serializer.KryoRegistrator trait) is not used with every 
> Kryo instance created, and this causes inconsistent serialisation and 
> deserialisation.
> The Kryo Registrator is sometimes not used because of a 
> ClassNotFoundException that only occurs when the thread creating the 
> KryoRegistrator *isn't* the Worker thread of an Executor.
> A complete description of the problem, together with a project that 
> reproduces it, can be found at 
> https://github.com/GrahamDennis/spark-kryo-serialisation
> I have currently only tested this with Spark 1.0.0, but will try to test 
> against 1.0.2.
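
For anyone wanting to try this without cloning the linked project, the 
registrator wiring described above has the usual shape (a minimal sketch; 
`MyClass` and `MyKryoRegistrator` are placeholder names, not the classes 
from the linked repo):

```scala
import com.esotericsoftware.kryo.Kryo
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.serializer.KryoRegistrator

// Placeholder class to be serialised with Kryo.
case class MyClass(id: Int, name: String)

// Custom registrator: Spark instantiates this by class name, which is
// where the ClassNotFoundException described above can occur when the
// lookup doesn't happen on an Executor's Worker thread.
class MyKryoRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo) {
    kryo.register(classOf[MyClass])
  }
}

object KryoRegistratorExample {
  def main(args: Array[String]) {
    // Use the fully-qualified class name if the registrator lives in a package.
    val conf = new SparkConf()
      .setAppName("KryoRegistratorExample")
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .set("spark.kryo.registrator", "MyKryoRegistrator")

    val sc = new SparkContext(conf)
    // collect() returns task results to the driver, so MyClass instances
    // go through the configured Kryo serialiser.
    val objects = sc.parallelize(1 to 100).map(i => MyClass(i, "name-" + i)).collect()
    println(objects.length)
    sc.stop()
  }
}
```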



--
This message was sent by Atlassian JIRA
(v6.2#6252)
