I refreshed my Spark version to the master branch as of this morning, and
am noticing some strange behavior with executors and with the UI's links to
executor logs while running a job in what used to be standalone mode (is it
still called standalone mode, or is it now coarse-grained scheduler mode?).
For starters, the job seems to spin up only one executor (#0), according to
the job's console log.

However, when I drill down into the job's executor logs in
SPARK_HOME/work/<job-id>/, I see two directories, "0" and "1". I was
expecting to see only the "0" directory, since the job registered only a
single executor. I notice that my job's log output is sent to the stdout
file in the "0" directory.

Secondly, the stdout/stderr links in the web UI seem to point to
executorId=1, and since my job's output goes to 0/stdout, the logs shown
in the UI don't include my job's output.
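For reference, this is roughly how I'm checking which executor directory actually received the output. The script below just simulates the layout I see under SPARK_HOME/work/<job-id>/ with a temp directory (the real paths and app id on my machine differ), so treat it as a sketch of the check, not my actual setup:

```shell
#!/bin/sh
# Simulate the layout observed under SPARK_HOME/work/<job-id>/:
# executor "0" has the job's log output in stdout, executor "1" is empty.
APP_DIR="$(mktemp -d)"
mkdir -p "$APP_DIR/0" "$APP_DIR/1"
echo "my job log line" > "$APP_DIR/0/stdout"
: > "$APP_DIR/1/stdout"

# Report the stdout size per executor directory to see which one
# actually received output.
for d in "$APP_DIR"/*/; do
  id=$(basename "$d")
  size=$(wc -c < "$d/stdout")
  echo "executor $id: $size bytes in stdout"
done
```

Running something like this against the real work directory is what showed me that only executor 0's stdout is non-empty, even though the UI links point at executor 1.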

Is this a known issue, or am I missing something?

Thanks,
Ameet
