Hello,

My Spark application is written in Scala and submitted to a Spark cluster
running in standalone mode. The jobs for my application are listed in the
Spark UI like this:

Job Id     Description ...
6          saveAsTextFile at Foo.scala:202
5          saveAsTextFile at Foo.scala:201
4          count at Foo.scala:188
3          collect at Foo.scala:182
2          count at Foo.scala:162
1          count at Foo.scala:152
0          collect at Foo.scala:142


Is it possible to assign custom descriptions to these jobs from within my
Scala code, so the UI shows something more meaningful than the call site?
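
For illustration, here is a sketch of what I would like to write. The
setJobDescription call is only my guess at what such an API might look
like, and the paths are made up:

import org.apache.spark.{SparkConf, SparkContext}

object Foo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("Foo"))
    val input = sc.textFile("hdfs:///path/to/input")  // hypothetical path

    // Hoped-for API: label the job triggered by the next action,
    // so the UI shows this text instead of "count at Foo.scala:..."
    sc.setJobDescription("Count raw input records")
    val n = input.count()

    sc.setJobDescription("Save non-empty lines")
    input.filter(_.nonEmpty).saveAsTextFile("hdfs:///path/to/output")  // hypothetical path

    sc.stop()
  }
}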

Thanks!
Rares
