[ https://issues.apache.org/jira/browse/SPARK-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14518508#comment-14518508 ]

Saisai Shao commented on SPARK-7169:
------------------------------------

Hi [~jlewandowski], regarding your second problem, I don't think you have to 
copy the metrics configuration file to every machine by hand; you can use 
spark-submit --files path/to/your/metrics.properties to ship the 
configuration to each executor/container.
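
For example, the submit command might look roughly like this (a sketch only; 
paths, class and jar names are placeholders):

{code}
# --files ships metrics.properties into each container's working directory,
# and spark.metrics.conf points the metrics system at the shipped copy.
spark-submit \
  --files /local/path/to/metrics.properties \
  --conf spark.metrics.conf=metrics.properties \
  --class com.example.MyApp \
  my-app.jar
{code}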

And for the first problem, is it really a big problem that all the 
configuration files need to be in the same directory? Many Spark and Hadoop 
configuration files have the same requirement. You can still configure the 
driver and the executors with different parameters in the conf file, since 
MetricsSystem supports that.
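
For example, a single metrics.properties can give the driver and the executors 
different sinks (the sink classes below ship with Spark; the periods, units and 
directory are just illustrative values):

{code}
# Driver writes metrics to a CSV sink every 10 seconds.
driver.sink.csv.class=org.apache.spark.metrics.sink.CsvSink
driver.sink.csv.period=10
driver.sink.csv.unit=seconds
driver.sink.csv.directory=/tmp/spark-metrics

# Executors report to the console sink every 20 seconds.
executor.sink.console.class=org.apache.spark.metrics.sink.ConsoleSink
executor.sink.console.period=20
executor.sink.console.unit=seconds
{code}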

Yes, I agree the current metrics configuration may not be so easy to use; any 
improvement is greatly appreciated :).

> Allow to specify metrics configuration more flexibly
> ----------------------------------------------------
>
>                 Key: SPARK-7169
>                 URL: https://issues.apache.org/jira/browse/SPARK-7169
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.2.2, 1.3.1
>            Reporter: Jacek Lewandowski
>            Priority: Minor
>
> Metrics are configured in the {{metrics.properties}} file. The path to this 
> file is specified in {{SparkConf}} under the key {{spark.metrics.conf}}. The 
> property is read when {{MetricsSystem}} is created, that is, during 
> {{SparkEnv}} initialisation.
> h5. Problem
> When users run their applications, they have no way to provide the metrics 
> configuration for executors. Although one can specify the path to the metrics 
> configuration file, (1) the path is common to all the nodes and the client 
> machine, so there is an implicit assumption that all the machines have the 
> same file in the same location, and (2) the user actually needs to copy the 
> file to the worker nodes manually, because the file is read before the user 
> files are populated to the executor local directories. All of this makes it 
> very difficult to work with the metrics configuration.
> h5. Proposed solution
> I think that the easiest and most consistent solution would be to move the 
> configuration from the separate file directly into {{SparkConf}}. We could 
> prefix all the settings from the metrics configuration with, say, 
> {{spark.metrics.props}}. For backward compatibility, these properties would 
> still be loaded from the specified file, as it works now. Such a solution 
> doesn't change the API, so it could perhaps even be included in a patch 
> release of Spark 1.2 and Spark 1.3.
> Appreciate any feedback.
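
For illustration, the proposal quoted above might look roughly like this from 
an application's point of view (a hypothetical sketch; the spark.metrics.props 
prefix does not exist in current releases):

{code}
// Hypothetical sketch of the proposed approach: metrics settings carried in
// SparkConf under a spark.metrics.props prefix instead of a separate file.
// The spark.metrics.props.* keys are illustrative only, not an existing API.
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("metrics-in-sparkconf")
  .set("spark.metrics.props.driver.sink.csv.class",
    "org.apache.spark.metrics.sink.CsvSink")
  .set("spark.metrics.props.executor.sink.console.class",
    "org.apache.spark.metrics.sink.ConsoleSink")
{code}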



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
