Github user ilganeli commented on the pull request:
https://github.com/apache/spark/pull/3518#issuecomment-67070557
Hi @JoshRosen, I think I've finally understood what you've been saying
from the beginning (apologies for being slow). I hadn't been thinking
correctly about what goes on during serialization. To make sure I
understand the proposed amendment correctly:
1) Use the existing code to attempt to serialize an RDD and its
dependencies and based on this, identify specifically which RDD is not
serializable.
2) Use a generalized object graph walker (similar to your object graph
visualizer) that identifies both the explicit and implicit references within a
class (in this case our RDD), explores those internal references, and attempts
to serialize each of them. By keeping track of this traversal we can identify
the exact chain of references that leads to a failure (a rough sketch of the
kind of traversal I have in mind follows this list).
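
To make that concrete, here is a rough, hypothetical sketch of the traversal
I'm picturing, using plain Java reflection and serialization. The names
`ObjectGraphWalker` and `findUnserializablePath` are placeholders of my own,
not existing Spark code, and a real implementation would want an
identity-based visited set plus handling for arrays and collections:

```scala
import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}
import java.lang.reflect.{Field, Modifier}

import scala.collection.mutable

// Hypothetical sketch: ObjectGraphWalker and findUnserializablePath are
// made-up names, not existing Spark APIs.
object ObjectGraphWalker {

  // True if the object can be written with plain Java serialization.
  private def serializable(obj: AnyRef): Boolean = {
    try {
      new ObjectOutputStream(new ByteArrayOutputStream()).writeObject(obj)
      true
    } catch {
      case _: NotSerializableException => false
    }
  }

  // All instance fields declared on a class and its superclasses.
  private def fields(cls: Class[_]): Seq[Field] = {
    if (cls == null) Seq.empty
    else cls.getDeclaredFields.filterNot(f => Modifier.isStatic(f.getModifiers)).toSeq ++
      fields(cls.getSuperclass)
  }

  // Returns the field path (e.g. "rdd.f.$outer") to the deepest
  // non-serializable reference reachable from `root`, or None if
  // everything serializes.
  def findUnserializablePath(root: AnyRef, rootName: String): Option[String] = {
    val visited = mutable.Set[AnyRef]()

    def visit(obj: AnyRef, path: String): Option[String] = {
      if (obj == null || visited.contains(obj)) {
        None
      } else {
        visited += obj
        if (serializable(obj)) {
          None
        } else {
          // This object fails to serialize: descend into its fields to see
          // whether a more specific culprit exists inside it.
          val deeper = fields(obj.getClass).flatMap { f =>
            f.setAccessible(true)
            visit(f.get(obj), s"$path.${f.getName}")
          }.headOption
          Some(deeper.getOrElse(path))
        }
      }
    }

    visit(root, rootName)
  }
}
```

The returned path (something like `rdd.f.$outer`) would then be the "exact
chain of references" described in point 2.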
When dealing with references, how can I explicitly identify them in a
useful way? Will something like toString() on one of the references provide a
useful identifier that can facilitate debugging?
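
For instance, as far as I can tell the default toString() on an arbitrary
object is just `ClassName@1b6d3586`, and Scala closures print `<function1>`,
neither of which says where the reference came from. A purely illustrative
alternative would be to synthesize an identifier from the declaring class,
field name, and field type instead:

```scala
import java.lang.reflect.Field

// Illustrative only: describe a reference by where it lives rather than by
// the referenced object's toString().
def describe(parent: AnyRef, field: Field): String =
  s"${parent.getClass.getName}.${field.getName}: ${field.getType.getName}"
```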
Earlier on you mentioned excluding references to certain Spark internals
while traversing the object graph. Could you elaborate on which internals could
be ignored in this manner? Would this simply be an optimization?
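
For what it's worth, if the exclusion turns out to be just a package-prefix
check, I'd imagine something along these lines, though which prefixes (if any)
are actually safe to skip is exactly what I'm asking:

```scala
// Hypothetical: skip classes whose package suggests they are Spark internals.
// The prefix here is a guess, not a confirmed list of ignorable internals.
def isSparkInternal(obj: AnyRef): Boolean =
  obj != null && obj.getClass.getName.startsWith("org.apache.spark.")
```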