Hi Nathanael,

My thought was that the Spark-specific parameters defined in /etc/spot.conf
(SPK_*, for cores and memory per executor, etc.) were global defaults that
could apply to any Spark application (Spark streaming ingest, the Spark ML
LDA, ...), though a particular application could choose to override them.
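
For example, something along these lines (variable names and values here
are illustrative, not the exact spot.conf contents):

    # /etc/spot.conf -- global Spark defaults, shared by all applications
    SPK_EXEC=10          # executors per job
    SPK_EXEC_MEM=4g      # memory per executor

    # An application's wrapper script could source the globals and then
    # override only what it needs before calling spark-submit.
    # (my_app.py is a placeholder for the application entry point.)
    source /etc/spot.conf
    SPK_EXEC_MEM=8g      # this app needs more memory per executor
    spark-submit --num-executors "$SPK_EXEC" \
                 --executor-memory "$SPK_EXEC_MEM" \
                 my_app.py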

Should we move the application-specific parameters (like those for the LDA
application: ALPHA, BETA, TOPIC_COUNT, ...) out of the global
/etc/spot.conf and into their own per-application configuration files?
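
Roughly what I'm picturing (the file name /etc/spot-lda.conf and the guard
loop are just a sketch, and the values are made up; this would also address
the empty-variable issue you mention):

    # /etc/spot-lda.conf -- hypothetical per-application config for LDA
    TOPIC_COUNT=20
    ALPHA=1.02
    BETA=1.001

    # ml_ops.sh could then source the app config and fail fast on any
    # blank value instead of passing empty variables to spark-submit:
    source /etc/spot-lda.conf
    for v in TOPIC_COUNT ALPHA BETA; do
        [ -z "${!v}" ] && { echo "error: $v is not set" >&2; exit 1; }
    done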

Curtis

On Mon, Apr 30, 2018 at 3:18 PM, Nate Smith <natedogs...@gmail.com> wrote:

> I’m adding some checks into ml_ops.sh to avoid passing spark-submit a
> bunch of empty variables.
>
> My question is whether the LDA_* options in spot.conf should really be
> SPK_LDA_*? They are variables for the Spark job, and yet it’s not
> instantly clear that they need to be included and cannot be left blank
> when setting Spot up.
>
> - Nathanael
