GitHub user MartinWeindel commented on the pull request:
https://github.com/apache/spark/pull/1860#issuecomment-53260324
Yes, this becomes tricky, and I don't see a satisfying solution: I would
have to predict how many tasks will run in parallel to ensure that there is
enough memory for each task.
This patch solves one problem but will introduce new ones, because it only
deals with the symptom, not the cause. I think it is better not to
integrate it.
I've already created a pull request to get the cause fixed in Mesos:
https://github.com/apache/mesos/pull/24
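To make the allocation problem concrete, here is a minimal, self-contained
Scala sketch of why a fixed per-task overhead forces the scheduler to know
how many tasks share an offer. All names and numbers below are hypothetical
illustrations, not taken from the actual MesosSchedulerBackend code:

    // Hypothetical sketch (assumed values, not the real scheduler code):
    // how a fixed per-task overhead changes offer packing.
    object OfferPackingSketch {
      def main(args: Array[String]): Unit = {
        val offerMemMb = 2048 // memory in one Mesos resource offer (assumed)
        val taskMemMb  = 512  // memory each task requests (assumed)
        val overheadMb = 32   // the extra per-task memory this patch takes

        // Packing that ignores the overhead in its accounting:
        val naiveTasks = offerMemMb / taskMemMb                // 4 tasks
        // Packing that charges each task for its overhead too:
        val safeTasks  = offerMemMb / (taskMemMb + overheadMb) // 3 tasks

        println(s"naive: $naiveTasks tasks, safe: $safeTasks tasks")
        // Launching naiveTasks tasks would oversubscribe the offer by
        // naiveTasks * overheadMb = 128 MB; avoiding that is the change
        // to the allocation logic that the quoted reply refers to.
      }
    }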
On Mon, Aug 25, 2014 at 7:58 AM, Matei Zaharia <[email protected]>
wrote:
> That's true: now that we take 32 MB extra, you need to change the logic
> for how many tasks we can allocate. That will make it trickier.
>
> Reply to this email directly or view it on GitHub
> <https://github.com/apache/spark/pull/1860#issuecomment-53229242>.
>