Github user rxin commented on the pull request:

    https://github.com/apache/spark/pull/5725#issuecomment-97152921
  
    I think good memory allocator implementations usually have a thread-local
cache to reduce contention. We could do the same thing here, couldn't we? The
problem with allocating only at the TaskContext level is that you then lack
global coordination, and as a result each task is bounded by total_mem/num_tasks
memory rather than getting a more dynamic allocation.
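
    To illustrate the thread-local cache idea, here is a minimal Scala sketch.
The names (PageAllocator, CachedPageAllocator) and the byte-buffer page type are
hypothetical, not Spark's actual API: each thread keeps a small free list of
pages and only falls back to the synchronized global pool on a cache miss, so
most allocations avoid the shared lock.

```scala
import java.nio.ByteBuffer
import scala.collection.mutable

// Shared pool of recycled pages; every call synchronizes, so it can become
// a contention point when many tasks allocate concurrently.
class PageAllocator(val pageSize: Int) {
  private val freePages = mutable.Queue.empty[ByteBuffer]

  def allocate(): ByteBuffer = synchronized {
    if (freePages.nonEmpty) freePages.dequeue()
    else ByteBuffer.allocate(pageSize)
  }

  def release(page: ByteBuffer): Unit = synchronized {
    page.clear()
    freePages.enqueue(page)
  }
}

// Per-thread cache in front of the shared pool; hits never touch the global lock.
class CachedPageAllocator(global: PageAllocator, maxCached: Int = 4) {
  private val localCache = new ThreadLocal[mutable.Queue[ByteBuffer]] {
    override def initialValue(): mutable.Queue[ByteBuffer] =
      mutable.Queue.empty[ByteBuffer]
  }

  def allocate(): ByteBuffer = {
    val cache = localCache.get()
    if (cache.nonEmpty) cache.dequeue() else global.allocate()
  }

  def release(page: ByteBuffer): Unit = {
    val cache = localCache.get()
    if (cache.size < maxCached) { page.clear(); cache.enqueue(page) }
    else global.release(page)
  }
}
```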
    
    Maybe a good way to do this is to keep the global memory allocator in
SparkEnv and have TaskContext track the individual pages used by each task, so
it can free them.
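
    A rough sketch of that split, again with illustrative names only
(GlobalMemoryAllocator, TaskMemoryTracker) rather than Spark's real classes:
the allocator enforces one global budget, while each task records the pages it
owns so they can all be returned when the task finishes or fails.

```scala
import scala.collection.mutable

// Environment-wide allocator: grants a page only if it fits in the global
// budget, so coordination happens across all running tasks.
class GlobalMemoryAllocator(totalBytes: Long) {
  private var usedBytes = 0L

  def allocatePage(size: Long): Option[Array[Byte]] = synchronized {
    if (usedBytes + size <= totalBytes) {
      usedBytes += size
      Some(new Array[Byte](size.toInt))
    } else None
  }

  def freePage(page: Array[Byte]): Unit = synchronized {
    usedBytes -= page.length
  }
}

// Per-task bookkeeping: remembers the pages this task allocated so they can
// be released even if the task fails partway through.
class TaskMemoryTracker(allocator: GlobalMemoryAllocator) {
  private val allocatedPages = mutable.ArrayBuffer.empty[Array[Byte]]

  def allocatePage(size: Long): Option[Array[Byte]] = {
    val page = allocator.allocatePage(size)
    page.foreach(allocatedPages += _)
    page
  }

  // Intended to run on task completion, e.g. from a task-completion listener.
  def releaseAll(): Unit = {
    allocatedPages.foreach(allocator.freePage)
    allocatedPages.clear()
  }
}
```

    Wiring releaseAll() into a task-completion callback would keep the memory
accounting global while leaving cleanup local to each task, which is the split
suggested above.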


