Hi, your question is really CM-related and not Spark-related, so I'm
bcc'ing the list and will reply separately.

On Tue, Nov 3, 2015 at 11:08 AM, billou2k <[email protected]> wrote:
> Hi,
> Sorry if this is probably a silly question, but I have a standard CDH 5.4.2
> config with Spark 1.3, and I'm trying to set up Spark dynamic allocation,
> which was introduced in CDH 5.4.x and Spark 1.2.
>
> According to the doc
> <https://spark.apache.org/docs/1.2.0/job-scheduling.html#dynamic-resource-allocation>
> I should set "spark.dynamicAllocation.enabled" to true
> but I cannot find this parameter in CM in the spark config section.
> After searching in CM's top search field, I found it in the Hive
> section: "HiveServer2 Default Group" was listed next to it and ticked.
>
> Is this OK, and can I assume it's enabled? Or should it mention "Gateway/Spark
> default group" instead of HiveServer2?
> The same goes for the other Spark dynamic allocation parameters, such as
> "spark.dynamicAllocation.minExecutors" (set to 1) and
> "spark.dynamicAllocation.initialExecutors" (set to 1).
>
> However, I cannot find
> "spark.dynamicAllocation.maxExecutors". Should I add this one via a safety
> valve?
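>
> If it helps, this is the spark-defaults.conf fragment I was expecting to
> end up with (the maxExecutors value of 20 is just a guess on my part, and
> my understanding from the doc is that the shuffle service line is also
> required):
>
> ```
> spark.dynamicAllocation.enabled          true
> spark.dynamicAllocation.minExecutors     1
> spark.dynamicAllocation.initialExecutors 1
> spark.dynamicAllocation.maxExecutors     20
> spark.shuffle.service.enabled            true
> ```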
>
> As things stand, I can see it is not active: a long Spark job I tested in
> spark-shell constantly used the default 2 executors.
>
> I assume this is partly because a number of other settings mentioned in
> the doc still need to be configured, such as
> "yarn.nodemanager.aux-services" and the related parameters that should be
> added to "yarn-site.xml" (via a safety valve?).
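>
> Based on my reading of the doc, I believe the yarn-site.xml snippet would
> look like this (the aux-service name and class are what the Spark docs
> list for the external shuffle service):
>
> ```xml
> <!-- Register Spark's external shuffle service with the NodeManager -->
> <property>
>   <name>yarn.nodemanager.aux-services</name>
>   <value>spark_shuffle</value>
> </property>
> <property>
>   <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
>   <value>org.apache.spark.network.yarn.YarnShuffleService</value>
> </property>
> ```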
>
> --
> View this message in context: 
> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-dynamic-allocation-config-tp25266.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [email protected]
> For additional commands, e-mail: [email protected]
>



-- 
Marcelo

