Github user lemire commented on a diff in the pull request:

    https://github.com/apache/spark/pull/9661#discussion_r44665632
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/MapStatus.scala ---
    @@ -176,15 +179,17 @@ private[spark] object HighlyCompressedMapStatus {
         // From a compression standpoint, it shouldn't matter whether we track empty or non-empty
         // blocks. From a performance standpoint, we benefit from tracking empty blocks because
         // we expect that there will be far fewer of them, so we will perform fewer bitmap insertions.
    +    val emptyBlocks = new RoaringBitmap()
    +    val nonEmptyBlocks = new RoaringBitmap()
         val totalNumBlocks = uncompressedSizes.length
    -    val emptyBlocks = new BitSet(totalNumBlocks)
         while (i < totalNumBlocks) {
           var size = uncompressedSizes(i)
           if (size > 0) {
             numNonEmptyBlocks += 1
    +        nonEmptyBlocks.add(i)
             totalSize += size
           } else {
    -        emptyBlocks.set(i)
    +        emptyBlocks.add(i)
           }
    --- End diff --
    
    If you use ``RoaringBitmap`` and the RoaringBitmap objects are not expected to change often after this loop, a call such as ``emptyBlocks.runOptimize(); nonEmptyBlocks.runOptimize();`` might be warranted: ``runOptimize()`` converts dense containers to run-length-encoded form, which can reduce both memory use and serialized size. Whether it pays off here should be checked with a benchmark.
    
    Though it should not affect the compressed size, you can also investigate whether adding ``emptyBlocks.trim(); nonEmptyBlocks.trim();`` is helpful, since ``trim()`` releases any over-allocated internal buffers.
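    For context, here is a minimal standalone sketch of where those calls would sit, mirroring the loop in the diff. The ``uncompressedSizes`` array here is a hypothetical stand-in for the real block sizes:
    
    ```scala
    import org.roaringbitmap.RoaringBitmap
    
    // Hypothetical stand-in for the per-block uncompressed sizes.
    val uncompressedSizes = Array(0L, 12L, 0L, 0L, 7L, 0L)
    
    val emptyBlocks = new RoaringBitmap()
    val nonEmptyBlocks = new RoaringBitmap()
    var i = 0
    while (i < uncompressedSizes.length) {
      if (uncompressedSizes(i) > 0) nonEmptyBlocks.add(i)
      else emptyBlocks.add(i)
      i += 1
    }
    
    // Once the bitmaps are fully built and will not change again,
    // run-length-encode dense runs in place; runOptimize() returns
    // true if any container was actually converted.
    emptyBlocks.runOptimize()
    nonEmptyBlocks.runOptimize()
    
    // trim() frees over-allocated internal arrays; it changes memory
    // footprint, not the bitmap's contents.
    emptyBlocks.trim()
    nonEmptyBlocks.trim()
    ```
    
    Both calls are cheap relative to the loop, so the main question is whether the savings are measurable for typical block counts.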


