Github user dibbhatt commented on the pull request:

    https://github.com/apache/spark/pull/6614#issuecomment-109739301
  
    What I observed is: when a block cannot be unrolled safely to memory because there is not enough space, BlockManager will not attempt to put the block at all, and ReceivedBlockHandler already throws a SparkException because it cannot find the block ID in the PutResult. So the block count will not go wrong when a block fails to unroll, which means I was wrong earlier ...
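
    To make that failure path concrete, here is a hedged sketch modeled on the Spark 1.4-era BlockManagerBasedBlockHandler.storeBlock check; the wrapper name storeAndVerify is mine, and exact signatures may differ between versions:

    ```scala
    import org.apache.spark.SparkException
    import org.apache.spark.storage.{BlockId, BlockManager, BlockStatus, StorageLevel}

    // Sketch: put an iterator block, then fail loudly if it was never stored.
    def storeAndVerify(
        blockManager: BlockManager,
        blockId: BlockId,
        iterator: Iterator[Any],
        storageLevel: StorageLevel): Unit = {
      val putResult: Seq[(BlockId, BlockStatus)] =
        blockManager.putIterator(blockId, iterator, storageLevel, tellMaster = true)
      // If unrollSafely could not reserve enough memory and there is no disk
      // fallback, putIterator never stores the block, so blockId is absent
      // from the returned statuses and we throw instead of miscounting.
      if (!putResult.map(_._1).contains(blockId)) {
        throw new SparkException(
          s"Could not store $blockId to block manager with storage level $storageLevel")
      }
    }
    ```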
    
    For the MEMORY_AND_DISK storage level, if a block cannot be unrolled to memory it still gets written (serialized) to disk, and the same holds for the WAL-based store. So for those cases (storage level = memory + disk) the count also comes out right when a block fails to unroll; the toy model below illustrates the fallback.
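
    This is a small self-contained toy model of that fallback; all names are illustrative, not Spark's actual API. It tries to unroll records up to a memory budget and, if the block does not fit, spills the buffered prefix plus the remainder to disk so no records are lost:

    ```scala
    import scala.collection.mutable.ArrayBuffer

    // Toy model: keep the block in memory if it fits, else spill it all to disk.
    def putWithDiskFallback[T](
        values: Iterator[T],
        maxUnrollRecords: Int,
        spillToDisk: Iterator[T] => Unit): Option[Seq[T]] = {
      val buffer = new ArrayBuffer[T]
      while (values.hasNext && buffer.size < maxUnrollRecords) {
        buffer += values.next() // "unroll" one record at a time
      }
      if (!values.hasNext) {
        Some(buffer.toSeq) // fully unrolled in memory
      } else {
        // Not enough room to unroll: the buffered prefix and the rest of the
        // iterator still reach disk, so the record count is preserved.
        spillToDisk(buffer.iterator ++ values)
        None
      }
    }
    ```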
    
    That is why I added the isFullyConsumed flag to the CountingIterator but have not used it: the case where a block is not fully consumed while ReceivedBlockHandler still gets back its block ID can never arise. A sketch of the iterator follows.
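
    For reference, a self-contained sketch of such a CountingIterator (close to what this PR adds, but reconstructed here, so details may differ):

    ```scala
    // Wraps an iterator, counts the records pulled through it, and reports
    // the count only once the underlying iterator has been fully consumed.
    class CountingIterator[T](iterator: Iterator[T]) extends Iterator[T] {
      private var _count = 0L

      // True once the wrapped iterator has been drained completely.
      def isFullyConsumed: Boolean = !iterator.hasNext

      def hasNext: Boolean = iterator.hasNext

      def next(): T = {
        _count += 1
        iterator.next()
      }

      // Only trustworthy if every record was consumed; a partially consumed
      // block would otherwise be under-counted.
      def count(): Option[Long] =
        if (isFullyConsumed) Some(_count) else None
    }
    ```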
    
    I have also added a few test cases to cover those block-unrolling scenarios.


