Github user jongyoul commented on the pull request: https://github.com/apache/spark/pull/4170#issuecomment-71765405 @mateiz We agree that one executor running multiple tasks is the intended behaviour. In that situation, the Mesos scheduler consumes CPUS_PER_TASK cores for each task it launches, in addition to the cores reserved for the executor itself. If we launch two tasks, we end up consuming 3 * CPUS_PER_TASK cores (1 for the executor and 2 for the tasks) just to run two tasks. @pwendell thinks that is too many resources, and that one core is enough for the executor. My PR makes the executor's core count configurable. For memory, we offer memory to the executor only: if we again launch two tasks, we consume just 1 * calculateTotalMemory(sc), shared across all tasks. In my view we should account for one executor memory plus two task memories; I agree that the executor uses memory itself, but we fix that amount to a single value.
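As a rough sketch of the accounting described above (the names `cpusPerTask`, `executorCores`, and `totalCores` are illustrative stand-ins, not the actual fields of Spark's Mesos scheduler backend):

```scala
// Hypothetical sketch of the core/memory accounting discussed in the comment.
object MesosResourceSketch {
  // Assumed values for illustration; not the actual Spark defaults.
  val cpusPerTask: Int = 1   // stands in for CPUS_PER_TASK (spark.task.cpus)
  val executorCores: Int = 1 // cores reserved for the executor process itself

  // Cores consumed when `numTasks` tasks run on one executor:
  // one block of executor cores, plus CPUS_PER_TASK per task.
  def totalCores(numTasks: Int): Int =
    executorCores + numTasks * cpusPerTask

  // Memory, by contrast, is charged once per executor, independent of
  // how many tasks it runs (the 1 * calculateTotalMemory(sc) point above).
  def totalMemory(executorMemoryMb: Int, numTasks: Int): Int =
    executorMemoryMb
}
```

With two tasks and `cpusPerTask = 1`, `totalCores(2)` gives 3, matching the "3 * CPUS_PER_TASK" figure in the comment, while `totalMemory` stays flat as the task count grows.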