GitHub user JoshRosen opened a pull request:

    https://github.com/apache/spark/pull/11533

    [SPARK-13695] Don't cache MEMORY_AND_DISK blocks as bytes in memory after spills

    When a cached block is spilled to disk and read back in serialized form (i.e. as bytes), the current BlockManager implementation will attempt to re-insert the serialized block into the MemoryStore even if the block's storage level requests deserialized caching.
    
    This behavior adds some complexity to the MemoryStore, but I don't think it offers much performance benefit, and I'd like to remove it in order to simplify a larger refactoring patch. Therefore, this patch changes the behavior so that disk store reads will only cache bytes in the memory store for blocks with serialized storage levels.
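    
    To make the new rule concrete, here is a minimal, self-contained sketch. The StorageLevel fields and store names loosely mirror Spark's internals, but the case class, map-backed stores, and method shape are toy stand-ins of mine, not the actual BlockManager API:
    
        import java.nio.ByteBuffer
        import scala.collection.mutable
    
        // Toy stand-in for Spark's StorageLevel, keeping only the fields
        // that matter here.
        case class StorageLevel(useMemory: Boolean, useDisk: Boolean,
                                deserialized: Boolean)
    
        object CacheOnReadSketch {
          // Toy stand-ins for the MemoryStore and DiskStore.
          val memoryStore = mutable.Map.empty[String, ByteBuffer]
          val diskStore = mutable.Map.empty[String, ByteBuffer]
    
          // After this patch, bytes read back from the disk store are
          // re-cached in memory only when the block's level requests
          // serialized in-memory caching; previously they were re-inserted
          // even for deserialized levels such as MEMORY_AND_DISK.
          def getLocalBytes(blockId: String,
                            level: StorageLevel): Option[ByteBuffer] =
            diskStore.get(blockId).map { bytes =>
              if (level.useMemory && !level.deserialized) {
                memoryStore.put(blockId, bytes) // serialized levels only
              }
              bytes
            }
        }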
    
    There are two places where we request serialized bytes from the BlockStore:
    
    1. getLocalBytes(), which is only called when reading local copies of TorrentBroadcast pieces. Broadcast pieces are always cached using a serialized storage level, so caching spilled bytes read from disk as bytes in the memory store cannot introduce a mismatch in serialization forms.
    2. the non-shuffle-block branch in getBlockData(), which is only called by the NettyBlockRpcServer when responding to requests to read remote blocks. Caching the serialized bytes in memory only pays off if those bytes are read again before they are evicted, and that seems unlikely because remote reads of non-broadcast cached blocks appear to be rare. Caching bytes that are unlikely to be re-read is actively harmful if it risks evicting blocks that are cached in their expected serialized/deserialized forms, since those blocks are more likely to be read by local computation (the usage sketch after this list exercises both paths).
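    
    Continuing the toy sketch above, a short usage example of the two cases: broadcast pieces use a serialized level (MEMORY_AND_DISK_SER in Spark), so their bytes are still re-cached after a disk read, while a deserialized MEMORY_AND_DISK block no longer is. The block IDs below are made-up examples:
    
        import CacheOnReadSketch._
    
        val memoryAndDiskSer =
          StorageLevel(useMemory = true, useDisk = true, deserialized = false)
        val memoryAndDisk =
          StorageLevel(useMemory = true, useDisk = true, deserialized = true)
    
        // Pretend both blocks were spilled to disk earlier.
        diskStore.put("broadcast_0_piece0", ByteBuffer.allocate(8))
        diskStore.put("rdd_1_0", ByteBuffer.allocate(8))
    
        getLocalBytes("broadcast_0_piece0", memoryAndDiskSer)
        getLocalBytes("rdd_1_0", memoryAndDisk)
    
        assert(memoryStore.contains("broadcast_0_piece0")) // re-cached as bytes
        assert(!memoryStore.contains("rdd_1_0")) // now stays disk-only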

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/JoshRosen/spark remove-memorystore-level-mismatch

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/11533.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #11533
    
----
commit 3a2ccad5283d6b246efc9532232638a470acce0c
Author: Josh Rosen <[email protected]>
Date:   2016-03-05T00:07:34Z

    Update tests for new caching semantics.

commit 653b2fafa40b81011fe1dab689c54d23cfe769dc
Author: Josh Rosen <[email protected]>
Date:   2016-03-05T01:19:40Z

    Change semantic.

commit 6c5850086ade79a8402b173c5e3063b1a02aba3d
Author: Josh Rosen <[email protected]>
Date:   2016-03-05T01:21:39Z

    Add require() to guard against memory store overwriting

----

