[ https://issues.apache.org/jira/browse/SPARK-729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Owen resolved SPARK-729.
-----------------------------
    Resolution: Won't Fix

I'm tentatively closing this for lack of activity; it is problematic to implement and would change behavior. Although this is a real problem, it does surface at a reasonable time, when the closure is executed, and the resulting error is clear.

> Closures not always serialized at capture time
> ----------------------------------------------
>
>                 Key: SPARK-729
>                 URL: https://issues.apache.org/jira/browse/SPARK-729
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 0.7.0, 0.7.1
>            Reporter: Matei Zaharia
>            Assignee: William Benton
>
> As seen in
> https://groups.google.com/forum/?fromgroups=#!topic/spark-users/8pTchwuP2Kk
> and its corresponding fix in
> https://github.com/mesos/spark/commit/adba773fab6294b5764d101d248815a7d3cb3558,
> it is possible for a closure referencing a var to see the latest version of
> that var, instead of the version that was there when the closure was passed
> to Spark. This is not good when failures or recomputations happen. We need to
> serialize the closures on capture if possible, perhaps as part of
> ClosureCleaner.clean.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
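The capture-time problem the issue describes can be sketched outside of Spark. The snippet below is a minimal illustration, not Spark code: the `Closure` and `Scheduler` names are hypothetical, and `copy.deepcopy` stands in for serializing the closure's environment on capture. A closure handed to the scheduler by reference sees the latest value of the mutated variable when it finally runs; freezing the environment at submission time pins the capture-time value instead.

```python
import copy

class Closure:
    """A callable that captures its environment by reference."""
    def __init__(self, env):
        self.env = env
    def __call__(self):
        return self.env["n"]

class Scheduler:
    """Illustrative mini-scheduler (not a Spark API)."""
    def __init__(self, snapshot_on_capture):
        self.snapshot_on_capture = snapshot_on_capture
        self.pending = []
    def submit(self, closure):
        if self.snapshot_on_capture:
            # "Serialize on capture": freeze the captured environment now,
            # so later mutations (or recomputations) cannot leak in.
            # deepcopy stands in for real serialization here.
            closure = copy.deepcopy(closure)
        self.pending.append(closure)
    def run_pending(self):
        return [c() for c in self.pending]

env = {"n": 0}
late = Scheduler(snapshot_on_capture=False)
early = Scheduler(snapshot_on_capture=True)
late.submit(Closure(env))
early.submit(Closure(env))
env["n"] = 99                    # mutate after the closure was handed over

print(late.run_pending())        # [99] -- sees the latest value (the bug)
print(early.run_pending())       # [0]  -- sees the capture-time value
```

This is why the issue suggests hooking serialization into `ClosureCleaner.clean`: it runs at the point where the closure is handed to Spark, which is the last moment the capture-time state is still observable.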