[ https://issues.apache.org/jira/browse/MRQL-73?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14557851#comment-14557851 ]

ASF GitHub Bot commented on MRQL-73:
------------------------------------

Github user asfgit closed the pull request at:

    https://github.com/apache/incubator-mrql/pull/5


> Set the max number of tasks in Spark mode
> -----------------------------------------
>
>                 Key: MRQL-73
>                 URL: https://issues.apache.org/jira/browse/MRQL-73
>             Project: MRQL
>          Issue Type: Bug
>          Components: Run-Time/Spark
>    Affects Versions: 0.9.6
>            Reporter: Leonidas Fegaras
>            Assignee: Leonidas Fegaras
>            Priority: Critical
>
> The number of worker nodes in Spark distributed mode, which is specified by 
> the MRQL -nodes parameter, must be used to set the parameters 
> SPARK_WORKER_INSTANCES (renamed SPARK_EXECUTOR_INSTANCES in Spark 1.3.*) 
> and SPARK_WORKER_CORES; otherwise, Spark will always use all the available 
> cores in the cluster.
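For illustration, a minimal sketch of one way the -nodes value could be forwarded to Spark, assuming a Java driver that sets the spark.executor.instances and spark.executor.cores configuration properties (the property-based counterparts of the environment variables named above). The class and the numNodes parameter below are hypothetical, not MRQL's actual fix:

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class SparkTaskLimit {
        // Build a Spark context whose parallelism is capped at the
        // requested node count; numNodes would come from the MRQL
        // -nodes parameter (hypothetical wiring).
        public static JavaSparkContext create(int numNodes) {
            SparkConf conf = new SparkConf()
                .setAppName("MRQL query")
                // counterpart of SPARK_WORKER_INSTANCES /
                // SPARK_EXECUTOR_INSTANCES: request exactly numNodes executors
                .set("spark.executor.instances", String.valueOf(numNodes))
                // counterpart of SPARK_WORKER_CORES: one core per executor,
                // so Spark does not grab every available core in the cluster
                .set("spark.executor.cores", "1");
            return new JavaSparkContext(conf);
        }
    }

With these two properties bounded, at most numNodes tasks run concurrently, which matches the behavior the issue asks for.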



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
