GitHub user andrewor14 commented on the pull request:

    https://github.com/apache/spark/pull/7160#issuecomment-118168599
  
    I think the concern is that the memory cache is not fully LRU. If a big 
new block shows up when the memory cache is almost full, there's a chance that 
the new block will end up being dropped straight to disk instead of being 
cached in memory.
    
    Here's how this issue is related: if the asynchronous unpersist happens too 
slowly, a workload that previously would have benefited from the new block 
being cached in memory may now see that block dropped directly to disk 
instead. Whether this happens is largely workload-dependent.
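
    To make the failure mode concrete, here is a minimal sketch of the race. 
The object name, data sizes, and `local[2]` master are illustrative 
assumptions, not part of this PR; the persist/unpersist calls are the standard 
RDD API.

    ```scala
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    object UnpersistRaceSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("unpersist-race").setMaster("local[2]"))

        // Fill most of the memory store with a cached RDD (~100 MB here;
        // the size only matters relative to the configured storage memory).
        val stale = sc.parallelize(1 to 100000)
          .map(i => (i, new Array[Byte](1024)))
        stale.persist(StorageLevel.MEMORY_ONLY)
        stale.count()  // materialize so the blocks actually occupy memory

        // A non-blocking unpersist returns immediately; the stale blocks
        // may still be sitting in the memory store for a while afterwards.
        stale.unpersist(blocking = false)

        // A big block cached right away may find the memory store still
        // nearly full. Because eviction is not fully LRU, with
        // MEMORY_AND_DISK it can be dropped straight to disk instead of
        // displacing the stale blocks.
        val fresh = sc.parallelize(1 to 100000)
          .map(i => (i, new Array[Byte](1024)))
        fresh.persist(StorageLevel.MEMORY_AND_DISK)
        fresh.count()

        sc.stop()
      }
    }
    ```

    Whether the unpersist wins this race depends on timing, which is exactly 
why it is hard to reason about without measurements.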
    
    @ilganeli Have you had a chance to benchmark the performance of a few 
algorithms before and after this change? Otherwise it will be hard to assert 
with confidence that this change will actually improve performance for most 
workloads.

