Github user tdas commented on the pull request:

    https://github.com/apache/spark/pull/6614#issuecomment-108802877
  
    I think we need a different design. There is a way to count the elements in the iterator without putting them into an intermediate buffer: the iterator is going to be consumed anyway (assuming the StorageLevel is serialized) by the block manager, so the counting can be done while that is happening. To do this, you construct a special CountingIterator that wraps the original iterator.
    ```
    class CountingIterator[T](iterator: Iterator[T]) extends Iterator[T] {
      // Number of records pulled through this wrapper so far.
      var count = 0
      override def hasNext: Boolean = iterator.hasNext
      override def next(): T = {
        count += 1
        iterator.next()
      }
    }
    ```
    
    After the block manager has drained the original iterator through this counting iterator, you can read off the count of the records it iterated through. This number can then be returned through the BlockStoreResult object.
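    
    For illustration, here is a minimal sketch of how the wrapper behaves once a consumer drains it; the sample data and the `foreach` call below are just stand-ins for the block manager consuming the iterator:
    
    ```
    val counting = new CountingIterator(Iterator("a", "b", "c"))
    // A downstream consumer (the block manager in practice) drains the iterator...
    counting.foreach(_ => ())
    // ...and afterwards the count is available without any intermediate buffering.
    assert(counting.count == 3)
    ```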
    
    How does that sound?

