GitHub user lemire commented on a diff in the pull request:
https://github.com/apache/spark/pull/9661#discussion_r44731518
--- Diff: core/src/main/scala/org/apache/spark/scheduler/MapStatus.scala ---
@@ -176,15 +179,17 @@ private[spark] object HighlyCompressedMapStatus {
// From a compression standpoint, it shouldn't matter whether we track empty or non-empty
// blocks. From a performance standpoint, we benefit from tracking empty blocks because
// we expect that there will be far fewer of them, so we will perform fewer bitmap insertions.
+ val emptyBlocks = new RoaringBitmap()
+ val nonEmptyBlocks = new RoaringBitmap()
val totalNumBlocks = uncompressedSizes.length
- val emptyBlocks = new BitSet(totalNumBlocks)
while (i < totalNumBlocks) {
var size = uncompressedSizes(i)
if (size > 0) {
numNonEmptyBlocks += 1
+ nonEmptyBlocks.add(i)
totalSize += size
} else {
- emptyBlocks.set(i)
+ emptyBlocks.add(i)
}
--- End diff --
Let me add that even with ``BitSet``, you'd probably want to call
``trimToSize()`` after the ``BitSet``s are constructed since, as with
``RoaringBitmap``, the underlying "dynamic" arrays can have a capacity
that exceeds the actual data size. (In practice, this often makes little
difference.)
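
For reference, ``RoaringBitmap`` exposes the analogous operation as ``trim()``.
Here is a minimal sketch (with a hypothetical sizes array, not the actual Spark
code) of reclaiming over-allocated capacity once a bitmap is fully populated:

```scala
import org.roaringbitmap.RoaringBitmap

// Hypothetical block sizes; in MapStatus these come from uncompressedSizes.
val uncompressedSizes: Array[Long] = Array(0L, 512L, 0L, 1024L)

val emptyBlocks = new RoaringBitmap()
var i = 0
while (i < uncompressedSizes.length) {
  if (uncompressedSizes(i) <= 0) {
    emptyBlocks.add(i)
  }
  i += 1
}

// Reclaim allocated-but-unused capacity in the underlying containers,
// so the in-memory footprint matches the actual data size.
emptyBlocks.trim()
```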