Hi, Experts

I run my Spark cluster on YARN. I used to be able to get the executors' logs from the Spark History Server. But after I started the Hadoop JobHistory Server and configured log aggregation so that the logs of Hadoop jobs are collected into an HDFS directory, I found I could no longer get the Spark executors' logs. Is there a way to get the logs of my Spark jobs from the Spark History Server and the logs of my MapReduce jobs from the Hadoop History Server? Many thanks!


Below is the configuration I added to Hadoop's yarn-site.xml (the same settings are shown in XML form after the list):
yarn.log-aggregation-enable=true
yarn.nodemanager.remote-app-log-dir=/mr-history/agg-logs
yarn.log-aggregation.retain-seconds=259200
yarn.log-aggregation.retain-check-interval-seconds=-1
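
For reference, here is a minimal sketch of how these same properties would look inside yarn-site.xml; the property names and values are copied from the list above, and the surrounding structure is just standard Hadoop configuration markup:

<!-- yarn-site.xml: YARN log aggregation settings (values copied from the list above) -->
<configuration>
  <!-- Aggregate container logs into HDFS after an application finishes -->
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <!-- HDFS directory where the aggregated logs are stored -->
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/mr-history/agg-logs</value>
  </property>
  <!-- Keep aggregated logs for 259200 seconds (3 days) -->
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>259200</value>
  </property>
  <!-- -1 lets YARN use its default deletion-check interval (one tenth of the retention time) -->
  <property>
    <name>yarn.log-aggregation.retain-check-interval-seconds</name>
    <value>-1</value>
  </property>
</configuration>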
