GitHub user mridulm commented on the issue:
https://github.com/apache/spark/pull/15618
@HyukjinKwon So the idea is that by wrapping the resources you acquire in
Utils.tryWithResource, you don't need to track and release them manually
(similar to memory management in the JVM).
As an example:
The main/scala/org/apache/spark/rdd/ReliableCheckpointRDD.scala change would
simply acquire the fileInputStream in the try and release it in the finally
automatically, without needing to manage it via catch/rethrow, etc. (for
example: what if close() throws an exception?).
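For illustration, here is a minimal sketch of the pattern being suggested. It is not the actual ReliableCheckpointRDD code; the package, object, and method names are mine, and it assumes the caller lives under org.apache.spark, since Utils is private[spark]:

```scala
package org.apache.spark.example  // Utils is private[spark], so callers must live under org.apache.spark

import java.io.{DataInputStream, FileInputStream}

import org.apache.spark.util.Utils

object TryWithResourceSketch {
  // The stream is acquired when tryWithResource runs its body and is closed
  // in tryWithResource's finally block even if readInt() throws, so there is
  // no hand-written catch/close/rethrow around it.
  def readFirstInt(path: String): Int =
    Utils.tryWithResource(new DataInputStream(new FileInputStream(path))) { in =>
      in.readInt()
    }
}
```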
Even the changes to core/src/test/scala/org/apache/spark/FileSuite.scala,
core/src/test/scala/org/apache/spark/deploy/history/FsHistoryProviderSuite.scala,
etc. can be modelled the same way.
You get the idea :-)
This is essentially analogous to try-with-resources in Java.
Which is not to say it applies everywhere, of course: the drawback is that,
unlike in Java, you need to explicitly specify the finally action, which can be
a pain (IMO) compared to Java's idiom.
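To make the analogy concrete, here is roughly the shape such a loan-pattern helper takes; this is a sketch only, not necessarily the exact Spark implementation of Utils.tryWithResource:

```scala
import java.io.Closeable

object LoanPatternSketch {
  // Loan pattern: create the resource, lend it to the body, and close it in
  // the finally block no matter how the body exits.
  def tryWithResource[R <: Closeable, T](createResource: => R)(f: R => T): T = {
    val resource = createResource
    try {
      f(resource)
    } finally {
      resource.close()
    }
  }
}
```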
Since you are going through the pain of making all these changes to fix up
the code anyway, it might be a good idea to change it so that future tests
follow the same pattern.
Thoughts?