[
https://issues.apache.org/jira/browse/SPARK-4080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14192977#comment-14192977
]
Kuldeep commented on SPARK-4080:
--------------------------------
Yes [~joshrosen], two SparkContexts are being created from the same JVM process to
connect to a local Mesos cluster with a single master and slave.
> "IOException: unexpected exception type" while deserializing tasks
> ------------------------------------------------------------------
>
> Key: SPARK-4080
> URL: https://issues.apache.org/jira/browse/SPARK-4080
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 1.1.0, 1.2.0
> Reporter: Josh Rosen
> Assignee: Josh Rosen
> Priority: Critical
> Fix For: 1.1.1, 1.2.0
>
>
> When deserializing tasks on executors, we sometimes see {{IOException:
> unexpected exception type}}:
> {code}
> java.io.IOException: unexpected exception type
> java.io.ObjectStreamClass.throwMiscException(ObjectStreamClass.java:1538)
> java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1025)
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
> java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
> java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
> java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
> org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
> org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:163)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> java.lang.Thread.run(Thread.java:745)
> {code}
> Here are some occurrences of this bug reported on the mailing list and GitHub:
> - https://www.mail-archive.com/[email protected]/msg12129.html
> - http://mail-archives.apache.org/mod_mbox/incubator-spark-user/201409.mbox/%3ccaeawm8uop9tgarm5sceppzey5qxo+h8hu8ujzah5s-ajyzz...@mail.gmail.com%3E
> - https://github.com/yieldbot/flambo/issues/13
> - https://www.mail-archive.com/[email protected]/msg13283.html
> This is probably caused by throwing exceptions other than IOException from
> our custom {{readExternal}} methods (see
> http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/7u40-b43/java/io/ObjectStreamClass.java#1022).
> [~davies] spotted an instance of this in TorrentBroadcast, where a failed
> {{require}} throws a different exception, but this issue has been reported in
> Spark 1.1.0 as well. To fix this, I'm going to add try-catch blocks around
> all of our {{readExternal}} and {{writeExternal}} methods that re-throw caught
> exceptions as IOException.
> This fix should allow us to determine the actual exceptions that are causing
> deserialization failures.
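The wrapping described above can be sketched as a small helper that runs a serialization hook and re-throws any non-IOException as an IOException, preserving the original exception as the cause. This is a minimal illustration of the proposed try-catch fix, not Spark's actual code; the helper name {{tryOrIOException}} and the {{SerBlock}} interface are hypothetical.

```java
import java.io.IOException;

public class Main {
    // Hypothetical functional interface standing in for the body of a
    // readExternal/writeExternal method, which may throw anything.
    interface SerBlock {
        void run() throws Exception;
    }

    // Run the block; let IOException pass through unchanged, and wrap any
    // other exception in an IOException so ObjectInputStream surfaces the
    // real cause instead of "IOException: unexpected exception type".
    static void tryOrIOException(SerBlock block) throws IOException {
        try {
            block.run();
        } catch (IOException e) {
            throw e; // already the declared type for (de)serialization hooks
        } catch (Exception e) {
            throw new IOException(e); // keep the original as the cause
        }
    }

    // Demonstrates recovering the underlying failure: a failed
    // precondition (like Scala's require) becomes the IOException's cause.
    static String demo() {
        try {
            tryOrIOException(() -> {
                throw new IllegalArgumentException("requirement failed");
            });
            return "no exception";
        } catch (IOException e) {
            return e.getCause().getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Because {{readExternal}} and {{writeExternal}} declare only {{IOException}} (and {{ClassNotFoundException}}), wrapping their bodies this way keeps the methods' contracts intact while making the true failure visible in the cause chain.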
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)