[ https://issues.apache.org/jira/browse/MESOS-4087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336558#comment-15336558 ]

Mallik Singaraju edited comment on MESOS-4087 at 6/18/16 3:43 PM:
------------------------------------------------------------------

I am looking at the stdout/stderr in the agent sandbox that runs the Spark
executor tasks on Mesos.

Here is how I am submitting my job from a Jenkins slave that has spark-submit
on it:

SPARK_JAVA_OPTS="\
-Dspark.executor.uri=https://s3.amazonaws.com/<sparkdir>/spark-1.6.1-bin-hadoop-2.6_scala-2.11.tgz \
-Dlog4j.configuration=log4j.properties \
" \
$SPARK_HOME/bin/spark-submit \
--class MyClassApp \
--deploy-mode cluster \
--verbose \
--conf spark.master=mesos://xx.xx.xx.xx:7070 \
--conf spark.ssl.enabled=true \
--conf spark.mesos.coarse=false \
--conf spark.cores.max=1 \
--conf spark.executor.memory=1G \
--conf spark.driver.memory=1G \
https://s3.amazonaws.com/<sparkjob>/<mysparkjob>.jar

I want to override the log4j config, which defaults to SPARK_HOME/conf, with
the one on the classpath in <mysparkjob>.jar when the Spark executor task is
run. The goal is to add a Graylog appender to log4j so that I can push the
driver's as well as the executors' application-specific logs to a central
Graylog server.
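
For reference, this is roughly the kind of log4j.properties I have in mind. It
is only a sketch: the GELF appender class and its property names come from the
third-party gelfj library (which would have to be on the driver/executor
classpath, e.g. bundled into <mysparkjob>.jar), and the host/port/facility
values are placeholders.

# Keep Spark's console output and add a Graylog (GELF) appender.
log4j.rootCategory=INFO, console, graylog

log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# GELF appender from gelfj; replace <graylog-host> with the real server.
log4j.appender.graylog=org.graylog2.log.GelfAppender
log4j.appender.graylog.graylogHost=<graylog-host>
log4j.appender.graylog.graylogPort=12201
log4j.appender.graylog.facility=spark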

It looks like when an executor task runs on Mesos, Spark always loads
log4j.properties from SPARK_HOME/conf instead of from <mysparkjob>.jar.
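
In case it helps, here is a variant of the submit command that I would expect
to work around this by shipping the file explicitly instead of relying on the
jar's classpath. This is only a sketch I have not verified on Mesos; the
--files / extraJavaOptions approach and the /path/to/log4j.properties path are
assumptions on my part (the remaining --conf flags stay as above).

$SPARK_HOME/bin/spark-submit \
  --class MyClassApp \
  --deploy-mode cluster \
  --files /path/to/log4j.properties \
  --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=log4j.properties" \
  --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j.properties" \
  --conf spark.master=mesos://xx.xx.xx.xx:7070 \
  --conf spark.mesos.coarse=false \
  https://s3.amazonaws.com/<sparkjob>/<mysparkjob>.jar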



> Introduce a module for logging executor/task output
> ---------------------------------------------------
>
>                 Key: MESOS-4087
>                 URL: https://issues.apache.org/jira/browse/MESOS-4087
>             Project: Mesos
>          Issue Type: Task
>          Components: containerization, modules
>            Reporter: Joseph Wu
>            Assignee: Joseph Wu
>              Labels: logging, mesosphere
>             Fix For: 0.27.0
>
>
> Existing executor/task logs are logged to files in their sandbox directory, 
> with some nuances based on which containerizer is used (see background 
> section in linked document).
> A logger for executor/task logs has the following requirements:
> * The logger is given a command to run and must handle the stdout/stderr of 
> the command.
> * The handling of stdout/stderr must be resilient across agent failover.  
> Logging should not stop if the agent fails.
> * Logs should be readable, presumably via the web UI, or via some other 
> module-specific UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
