Github user suyanNone commented on the pull request:

    https://github.com/apache/spark/pull/6586#issuecomment-108266326
  
    @srowen 
    If we want to make physical memory usage more reasonable, I think there are 2 
ways:
    1. Use FileOutputStream or FileInputStream to read/write the byte[] directly, 
instead of reading and writing through a channel.
    2. According to each user's max RDD block size, add an extra "3 * max rdd block 
size" to memoryOverhead.
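
    The first option could look roughly like the sketch below. This is not the 
actual Spark code, just an illustration of the idea: writing the byte[] through 
plain FileOutputStream/FileInputStream avoids the memory-mapped or direct-buffer 
allocations a FileChannel transfer can make, which count against the container's 
physical memory. The class and method names here are hypothetical.

    ```java
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class StreamCopy {
        // Write the byte[] directly via FileOutputStream; no channel,
        // so no mmap/direct-buffer allocation outside the JVM heap.
        public static void writeBytes(File dest, byte[] data) throws IOException {
            try (FileOutputStream out = new FileOutputStream(dest)) {
                out.write(data);
            }
        }

        // Read the file back into a heap byte[] via FileInputStream,
        // looping because read() may return fewer bytes than requested.
        public static byte[] readBytes(File src) throws IOException {
            byte[] buf = new byte[(int) src.length()];
            try (FileInputStream in = new FileInputStream(src)) {
                int off = 0;
                while (off < buf.length) {
                    int n = in.read(buf, off, buf.length - off);
                    if (n < 0) {
                        break;
                    }
                    off += n;
                }
            }
            return buf;
        }
    }
    ```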

