Github user mridulm commented on the pull request:

    https://github.com/apache/spark/pull/1499#issuecomment-49949511
  
    @mateiz The total memory overhead actually goes much higher than num_streams, right?
    It should be on the order of num_streams + num_values for this key.
    
    For fairly large values, the latter might fit into memory, but the former might not (particularly as the number of mappers increases).
    
    Or did I get this wrong from the PR?
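
    To illustrate the num_streams + num_values point: a streaming merge of sorted spilled files only needs one buffered head element per stream, plus all the values for the key currently being grouped. The sketch below is a minimal, hypothetical Python model of that pattern, not Spark's actual implementation; the function name and data shapes are assumptions for illustration.

    ```python
    import heapq

    def merge_spilled_streams(streams):
        """Merge sorted (key, value) streams, grouping values per key.

        Hypothetical sketch (not Spark's code): the merge holds one
        buffered head per stream -- O(num_streams) -- plus all values
        for the key currently being grouped -- O(num_values for that
        key). Those two terms are the peak working-set size.
        """
        # heapq.merge buffers exactly one element per input stream.
        merged = heapq.merge(*streams, key=lambda kv: kv[0])
        out = []
        cur_key, cur_vals = None, []
        for k, v in merged:
            if k != cur_key and cur_vals:
                # A new key starts: emit the finished group.
                out.append((cur_key, cur_vals))
                cur_vals = []
            cur_key = k
            cur_vals.append(v)  # all values for one key stay in memory
        if cur_vals:
            out.append((cur_key, cur_vals))
        return out
    ```

    With more mappers there are more spilled streams to merge, so the num_streams term grows even when each key's value list stays small.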

