Github user ankurdave commented on the pull request:
https://github.com/apache/spark/pull/4234#issuecomment-71752716
I don't think this will always have the desired effect. In cases where an
upstream RDD is unpersisted before its downstream results are materialized,
it's equivalent to never caching anything at all. Rather than
```
val start = System.currentTimeMillis
val a = sc.parallelize(Array(1)).map(x => { Thread.sleep(5000); x }).cache()
a.count() // Use a once; this materializes and caches it (5 seconds)
val b = a.map(identity).cache() // Define b, which would use a again
// Problem: b has not been computed yet, so a's cached value has not been reused
a.unpersist()
b.count() // Computing b now recomputes a from scratch (another 5 seconds)
System.currentTimeMillis - start // => 10 seconds
```
the correct pattern is to materialize a result (here `b`) before
unpersisting its upstream RDDs:
```
val start = System.currentTimeMillis
val a = sc.parallelize(Array(1)).map(x => { Thread.sleep(5000); x }).cache()
a.count() // Use a once
val b = a.map(identity).cache()
b.count() // Materialize b (cache and compute it), forcing a to be used again
a.unpersist()
System.currentTimeMillis - start // => 5 seconds
```