Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/7762#discussion_r35998074
--- Diff: unsafe/src/main/java/org/apache/spark/unsafe/map/BytesToBytesMap.java ---
@@ -448,18 +448,30 @@ public void putNewKey(
     if (size == MAX_CAPACITY) {
       throw new IllegalStateException("BytesToBytesMap has reached maximum capacity");
     }
+
     // Here, we'll copy the data into our data pages. Because we only store a relative offset from
     // the key address instead of storing the absolute address of the value, the key and value
     // must be stored in the same memory page.
     // (8 byte key length) (key) (8 byte value length) (value)
     final long requiredSize = 8 + keyLengthBytes + 8 + valueLengthBytes;
-    assert (requiredSize <= pageSizeBytes - 8); // Reserve 8 bytes for the end-of-page marker.
-    size++;
-    bitset.set(pos);
-    // If there's not enough space in the current page, allocate a new page (8 bytes are reserved
-    // for the end-of-page marker).
-    if (currentDataPage == null || pageSizeBytes - 8 - pageCursor < requiredSize) {
+    // --- Figure out where to insert the new record ---------------------------------------------
+
+    final MemoryBlock dataPage;
+    final Object dataPageBaseObject;
+    final long dataPageInsertOffset;
+    boolean useOverflowPage = requiredSize > pageSizeBytes - 8;
+    if (useOverflowPage) {
+      // The record is larger than the page size, so allocate a special overflow page just to hold
+      // that record.
+      MemoryBlock overflowPage = memoryManager.allocatePage(requiredSize + 8);
--- End diff --
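The record layout the diff's comment describes, (8 byte key length) (key) (8 byte value length) (value), can be sketched with a plain ByteBuffer; this is illustrative only, since the actual map writes records through unsafe memory pages rather than heap buffers:

```java
import java.nio.ByteBuffer;

// Illustrative sketch of the record layout described in the diff; not the real page-writing code.
final class RecordLayoutSketch {
  static int write(ByteBuffer page, byte[] key, byte[] value) {
    int start = page.position();
    page.putLong(key.length);        // 8-byte key length
    page.put(key);                   // key bytes
    page.putLong(value.length);      // 8-byte value length
    page.put(value);                 // value bytes
    return page.position() - start;  // equals 8 + key.length + 8 + value.length (requiredSize)
  }

  public static void main(String[] args) {
    ByteBuffer page = ByteBuffer.allocate(1 << 10);
    int written = write(page, new byte[]{1}, new byte[]{2, 3});
    System.out.println("requiredSize = " + written);  // prints 19 (8 + 1 + 8 + 2)
  }
}
```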
BytesToBytesMap does not currently integrate with the ShuffleMemoryManager
since it doesn't support spilling. It probably should, though, if only to
ensure that its memory consumption is counted towards the task's overall memory
allowance. Currently, any failures to allocate here will manifest as OOMs.
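As a rough sketch of the accounting side (names here are illustrative stand-ins, not the actual ShuffleMemoryManager API): each page allocation would first reserve bytes against a per-task budget, and a failed reservation would be reported to the caller rather than surfacing as an OOM.

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of per-task memory accounting; all names are illustrative,
// not the real ShuffleMemoryManager API.
final class TaskMemoryBudget {
  private final long maxBytes;                        // the task's overall memory allowance
  private final AtomicLong reservedBytes = new AtomicLong(0);

  TaskMemoryBudget(long maxBytes) { this.maxBytes = maxBytes; }

  /** Try to reserve `bytes`; returns false, reserving nothing, if the budget is exhausted. */
  boolean tryReserve(long bytes) {
    while (true) {
      long current = reservedBytes.get();
      if (current + bytes > maxBytes) {
        return false;                                 // caller decides: spill, sort, fall back, ...
      }
      if (reservedBytes.compareAndSet(current, current + bytes)) {
        return true;
      }
    }
  }

  void release(long bytes) { reservedBytes.addAndGet(-bytes); }
}
```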
I think that part of the problem is that BytesToBytesMap itself doesn't really know how to handle a failure to allocate memory, since it can't spill on its own. Intuitively, I guess that the right approach is to make sure that allocation failures do not leave the map in an inconsistent state, and then to propagate the failure to the caller so that they can decide what to do (fall back, spill the map, sort, etc.).
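Reusing the TaskMemoryBudget sketch above, the put path could look roughly like this (again, illustrative names only): nothing in the map's bookkeeping is mutated until the reservation succeeds, so a false return leaves the map consistent and the caller free to spill and retry.

```java
// Rough shape of a put path that propagates allocation failure; illustrative only.
final class FailurePropagationSketch {
  private int size = 0;  // stands in for the map's size/bitset/pointer-array bookkeeping

  /**
   * Returns false if space for the record could not be reserved. In that case no
   * bookkeeping has been touched, so the caller can spill the map (or fall back
   * to another code path) and then retry the insert.
   */
  boolean putNewKey(byte[] key, byte[] value, TaskMemoryBudget budget) {
    final long requiredSize = 8 + key.length + 8 + value.length;
    if (!budget.tryReserve(requiredSize)) {
      return false;                      // propagate the failure instead of throwing an OOM
    }
    // ... copy the record into a data page, then update size/bitset/pointer array ...
    size++;
    return true;
  }
}
```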