Github user squito commented on the pull request:

    https://github.com/apache/spark/pull/5400#issuecomment-114977357
  
    ping @tgravescs 
    
    btw, another design issue that was brought up earlier was whether we should simply 
cap each block at 2GB and have the BlockManager take care of splitting the data into 
multiple blocks whenever it would exceed that limit.  However, after prototyping that 
idea, I think it just pushes a lot more complexity into the BlockManager -- I wrote up 
some more notes on the jira here 
https://issues.apache.org/jira/browse/SPARK-6190?focusedCommentId=14387275&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14387275
    
    but I'm open to more opinions on it.
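    
    To make the trade-off concrete, here is a minimal, hypothetical sketch (the class 
and method names are made up and this is not Spark's actual BlockManager API) of what 
the "split oversized blocks" alternative would entail: each stored chunk stays below 
2GB, a large logical block is written as numbered sub-blocks, and the store has to 
track the sub-block count per logical id and reassemble them on read.  Even in this 
toy form, that extra bookkeeping is the complexity the comment refers to.
    
    ```scala
    import java.nio.ByteBuffer
    import scala.collection.mutable
    
    // Hypothetical sketch only -- illustrates the bookkeeping needed if every
    // physical chunk must stay below the 2GB limit imposed by Int-indexed
    // arrays/ByteBuffers on the JVM.
    class ChunkingBlockStore(maxChunkBytes: Int = Int.MaxValue - 512) {
      // (logical blockId, chunkIndex) -> chunk data
      private val chunks = mutable.Map[(String, Int), ByteBuffer]()
      // logical blockId -> number of chunks it was split into
      private val chunkCounts = mutable.Map[String, Int]()
    
      /** Store a logical block, transparently splitting it into sub-2GB chunks. */
      def put(blockId: String, data: Array[Byte]): Unit = {
        val pieces = data.grouped(maxChunkBytes).toSeq
        pieces.zipWithIndex.foreach { case (piece, idx) =>
          chunks((blockId, idx)) = ByteBuffer.wrap(piece)
        }
        chunkCounts(blockId) = pieces.size
      }
    
      /** Reassemble a logical block from its chunks, if all of them are present. */
      def get(blockId: String): Option[Array[Byte]] = {
        chunkCounts.get(blockId).map { n =>
          val out = Array.newBuilder[Byte]
          (0 until n).foreach(idx => out ++= chunks((blockId, idx)).array())
          out.result()
        }
      }
    }
    ```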

