Github user squito commented on the pull request:

    https://github.com/apache/spark/pull/5784#issuecomment-97988730
  
    I've been trying to come up with a scenario where this would be undesirable, 
and I've only found one: on the second put, you have less than the initial room 
available, AND the second block is bigger than your total memory.  After this 
change, you'll end up with nothing in the cache, whereas before you would still 
have the first block.
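    To make that scenario concrete, here is a toy model of the eviction behavior (the names and the store itself are illustrative, not Spark's actual `MemoryStore` API): putting a block evicts existing blocks to make room even when the incoming block can never fit, so a too-large second put can leave the cache empty.

```python
class ToyMemoryStore:
    """Hypothetical sketch of a cache that evicts to make room on put."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = {}  # block_id -> size, in insertion order

    def used(self):
        return sum(self.blocks.values())

    def put(self, block_id, size):
        # Evict existing blocks (oldest first) until the new block fits,
        # even if it can never fit within total capacity.
        for bid in list(self.blocks):
            if self.used() + size <= self.capacity:
                break
            del self.blocks[bid]
        if self.used() + size <= self.capacity:
            self.blocks[block_id] = size
            return True
        return False  # still too big: dropped, after evicting everything

store = ToyMemoryStore(capacity=100)
store.put("a", 60)   # fits: cache holds block "a"
store.put("b", 150)  # bigger than total memory: "a" is evicted,
                     # "b" still doesn't fit, so the cache ends up empty
print(store.blocks)  # {}
```

    Before the change, the put of "b" would fail without evicting "a", so the first block would survive.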
    
    However, that is quite a stretch: you could end up in this situation even 
before this change, if you had more than the initial room available to start 
with but the second block was bigger than your total memory.
    
    Does that seem right?  Not that this is a show-stopper; I'm just trying to 
make sure I understand.


