2017-05-31 10:48 GMT+02:00 Paolo Patierno:
> No it's running in standalone mode as Docker image on Kubernetes.
>
>
> The only way I found was to access the "stderr" file created under the "work"
> directory in SPARK_HOME but ... is it the right way?
>
I think that is the right way.
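[Editor's sketch of where those files live on a standalone worker: logs are kept per executor under the worker's "work" directory. The install path and application id below are hypothetical examples, not values from this cluster.]

```shell
# Standalone workers keep per-executor logs under
#   $SPARK_HOME/work/<app-id>/<executor-id>/{stdout,stderr}
# Both values below are assumptions for illustration only.
SPARK_HOME=/opt/spark                      # hypothetical install path
APP_ID=app-20170531104800-0000             # hypothetical application id
echo "$SPARK_HOME/work/$APP_ID/0/stderr"   # executor 0's stderr file
```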
> *From:* Roman <alons...@gmail.com>
> *Sent:* Wednesday, May 31, 2017 8:39 AM
> *To:* Paolo Patierno
> *Cc:* user@spark.apache.org
> *Subject:* Re: Worker node log not showed
>
> Are you running the code with yarn?
>
> If so, figure out the applicationID through the web UI, then run the next
> command:
>
> yarn logs your_application_id
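[Editor's note: the yarn CLI takes the id via its -applicationId flag; the id below is a hypothetical example, and the real one comes from the ResourceManager web UI. The command is echoed rather than executed here, since running it needs a live YARN cluster.]

```shell
# Fetch aggregated container logs for an application (sketch).
# APP_ID is a hypothetical example; copy the real one from the web UI.
APP_ID=application_1496212345678_0001
echo yarn logs -applicationId "$APP_ID"
```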
Alonso Isidoro Roman
https://about.me/alonso.isidoro.roman
Hi all,
I have a simple cluster with one master and one worker. On another machine I
launch the driver, which at some point runs the following lines of code:
max.foreachRDD(rdd -> {
    LOG.info("*** max.foreachRDD");
    rdd.foreach(value -> {
        LOG.info("*** rdd.foreach");
    });
});
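[Editor's sketch of why those lines don't show up on the driver: the lambda passed to rdd.foreach runs on the executors, so its LOG.info output lands in each worker's work/<app-id>/<executor-id>/stderr rather than in the driver's console. The snippet below simulates that layout in a temp directory (the app id and log line are made up) and greps it the same way you would grep a real work tree.]

```shell
# Simulate a standalone worker's work/ tree and grep executor stderr.
# Directory names and the log line are hypothetical examples.
WORK=$(mktemp -d)/work
mkdir -p "$WORK/app-20170531104800-0000/0"
printf 'INFO *** rdd.foreach\n' > "$WORK/app-20170531104800-0000/0/stderr"
grep -rh 'rdd.foreach' "$WORK"/*/*/stderr
```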