Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2134#issuecomment-55337314
Hi @liyezhang556520, I took a cursory glance at your changes and I have a
high-level question before we dig deeper. While we drop the blocks in parallel,
the chunk of memory held by the old blocks being dropped remains occupied until
those drops actually complete. However, the whole point of unrolling new blocks
safely is to ensure that we don't use more memory than is available in the JVM.
Doesn't this introduce a potential race condition where we unroll the new block
more quickly than we drop the old block, and can still run out of memory?