Hi Mich,

That's correct -- they're indeed duplicates in the table but not at the
OS level. The reason for this *might* be that separate stdout and
stderr logs need to be kept for the failed execution(s). I'm using
--num-executors 2, and there are two executor backends.

$ jps -l
28865 sun.tools.jps.Jps
802 com.typesafe.zinc.Nailgun
28276 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager
28804 org.apache.spark.executor.CoarseGrainedExecutorBackend
15450
28378 org.apache.hadoop.yarn.server.nodemanager.NodeManager
28778 org.apache.spark.executor.CoarseGrainedExecutorBackend
28748 org.apache.spark.deploy.yarn.ExecutorLauncher
28463 org.apache.spark.deploy.SparkSubmit
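
In case it helps, here's a quick sketch (my own, not anything from Spark) of grouping `jps -l` output by JVM main class -- it confirms there is exactly one OS process per live executor (two CoarseGrainedExecutorBackend JVMs here), however many entries the web UI table shows:

```python
# Minimal sketch: count JVMs per main class in `jps -l` output.
# Assumes each line is "<pid> <main-class>"; bare-PID lines are skipped.
from collections import Counter

def count_main_classes(jps_output: str) -> Counter:
    """Group `jps -l` lines by JVM main class, ignoring PID-only lines."""
    counts = Counter()
    for line in jps_output.strip().splitlines():
        parts = line.split(None, 1)
        if len(parts) == 2:  # a PID with no class name gets skipped
            counts[parts[1]] += 1
    return counts

sample = """\
28865 sun.tools.jps.Jps
28804 org.apache.spark.executor.CoarseGrainedExecutorBackend
15450
28778 org.apache.spark.executor.CoarseGrainedExecutorBackend
28748 org.apache.spark.deploy.yarn.ExecutorLauncher
"""

counts = count_main_classes(sample)
print(counts["org.apache.spark.executor.CoarseGrainedExecutorBackend"])  # 2
```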

Pozdrawiam,
Jacek Laskowski
----
https://medium.com/@jaceklaskowski/
Mastering Apache Spark http://bit.ly/mastering-apache-spark
Follow me at https://twitter.com/jaceklaskowski


On Sat, Jun 18, 2016 at 6:16 PM, Mich Talebzadeh
<mich.talebza...@gmail.com> wrote:
> Can you please run jps on the 1-node host and send the output? Some of
> those executor IDs are just duplicates!
>
> HTH
>
> Dr Mich Talebzadeh
>
> LinkedIn
> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
> http://talebzadehmich.wordpress.com
>
> On 18 June 2016 at 17:08, Jacek Laskowski <ja...@japila.pl> wrote:
>>
>> Hi,
>>
>> Thanks Mich and Akhil for such prompt responses! Here's the screenshot
>> [1] which is a part of
>> https://issues.apache.org/jira/browse/SPARK-16047 I reported today (to
>> have the executors sorted by status and id).
>>
>> [1]
>> https://issues.apache.org/jira/secure/attachment/12811665/spark-webui-executors.png
>>
>> Pozdrawiam,
>> Jacek Laskowski
>>
>>
>> On Sat, Jun 18, 2016 at 6:05 PM, Akhil Das <ak...@hacked.work> wrote:
>> > A screenshot of the executor tab will explain it better. Usually
>> > executors are allocated when the job is started; if you have a
>> > multi-node cluster, you'll see executors launched on different nodes.
>> >
>> > On Sat, Jun 18, 2016 at 9:04 PM, Jacek Laskowski <ja...@japila.pl>
>> > wrote:
>> >>
>> >> Hi,
>> >>
>> >> This is for Spark on YARN -- a 1-node cluster with Spark 2.0.0-SNAPSHOT
>> >> (today's build).
>> >>
>> >> I can understand that when a stage fails, a new executor entry shows up
>> >> in the web UI under the Executors tab (corresponding to a stage attempt).
>> >> I understand that this is to keep the stdout and stderr logs for future
>> >> reference.
>> >>
>> >> Why are there multiple executor entries under the same executor IDs?
>> >> What exactly are the executor entries? When are new ones created
>> >> (after a Spark application is launched and has been assigned its
>> >> --num-executors executors)?
>> >>
>> >> Pozdrawiam,
>> >> Jacek Laskowski
>> >>
>> >> ---------------------------------------------------------------------
>> >> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>> >> For additional commands, e-mail: user-h...@spark.apache.org
>> >>
>> >
>> >
>> >
>> > --
>> > Cheers!
>> >
>>
>>
>

