[ https://issues.apache.org/jira/browse/BEAM-11213?focusedWorklogId=544924&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-544924 ]

ASF GitHub Bot logged work on BEAM-11213:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 31/Jan/21 17:18
            Start Date: 31/Jan/21 17:18
    Worklog Time Spent: 10m 
      Work Description: ibzib commented on a change in pull request #13743:
URL: https://github.com/apache/beam/pull/13743#discussion_r567454089



##########
File path: runners/spark/src/main/java/org/apache/beam/runners/spark/SparkCommonPipelineOptions.java
##########
@@ -32,6 +32,8 @@
 public interface SparkCommonPipelineOptions
     extends PipelineOptions, StreamingOptions, ApplicationNameOptions {
   String DEFAULT_MASTER_URL = "local[4]";
+  String DEFAULT_SPARK_HISTORY_DIR = "/tmp/spark-events/";

Review comment:
       Remove `DEFAULT_SPARK_HISTORY_DIR` and `DEFAULT_EVENT_LOG_ENABLED` since 
they are unused now.
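
   For context, a minimal sketch of what the trimmed interface might look like (an illustration, not the PR's actual code; the remaining getters and setters are elided):

   ```java
   public interface SparkCommonPipelineOptions
       extends PipelineOptions, StreamingOptions, ApplicationNameOptions {
     // Kept: the default master URL.
     String DEFAULT_MASTER_URL = "local[4]";
     // DEFAULT_SPARK_HISTORY_DIR and DEFAULT_EVENT_LOG_ENABLED dropped as unused.

     // ... existing option getters/setters unchanged ...
   }
   ```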

##########
File path: runners/spark/src/main/java/org/apache/beam/runners/spark/SparkPipelineRunner.java
##########
@@ -212,6 +246,22 @@ public PortablePipelineResult run(RunnerApi.Pipeline pipeline, JobInfo jobInfo)
             pipelineOptions.as(MetricsOptions.class),
             result);
     metricsPusher.start();
+    if (pipelineOptions.getEventLogEnabled()) {
+      eventLoggingListener.onApplicationStart(
+          new SparkListenerApplicationStart(
+              jobInfo.jobId(),
+              scala.Option.apply(jobInfo.jobName()),
+              Instant.now().getMillis(),

Review comment:
       This block runs after the job has completed, so it would report a start 
time almost identical to the end time.
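
   A minimal sketch of the implied fix, assuming the runner records the timestamp before translating and executing the pipeline and threads it through to this call (the `startTime` variable is illustrative, not from the PR):

   ```java
   // Sketch: capture the wall-clock start before launching the job.
   final long startTime = Instant.now().getMillis(); // org.joda.time.Instant, as in the diff

   // ... pipeline translation and execution ...

   // Afterwards, pass the recorded startTime to SparkListenerApplicationStart
   // instead of calling Instant.now() again, so the logged start and end differ.
   ```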

##########
File path: runners/spark/src/main/java/org/apache/beam/runners/spark/SparkPipelineRunner.java
##########
@@ -212,6 +246,22 @@ public PortablePipelineResult run(RunnerApi.Pipeline pipeline, JobInfo jobInfo)
             pipelineOptions.as(MetricsOptions.class),
             result);
     metricsPusher.start();
+    if (pipelineOptions.getEventLogEnabled()) {
+      eventLoggingListener.onApplicationStart(
+          new SparkListenerApplicationStart(
+              jobInfo.jobId(),

Review comment:
       There are a number of (job or app) * (name or id) fields. It will be 
misleading to users if one of these fields is used in place of another. I think 
the following mapping is more accurate:
   
   ```java
   appName = pipelineOptions.as(ApplicationNameOptions.class).getAppName();
   appId = jsc.getConf().getAppId();
   appAttemptId = "1";
   ```
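
   For illustration, the suggested fields might be wired into the event roughly like this (a sketch only: `jsc` is assumed to be the `JavaSparkContext` in scope, `startTime` is assumed to have been captured before the job ran as noted in the previous comment, the `sparkUser` value is a guess, and the six-argument constructor matches the Spark 2.x signature; newer Spark versions take an extra `driverAttributes` argument):

   ```java
   String appName = pipelineOptions.as(ApplicationNameOptions.class).getAppName();
   String appId = jsc.getConf().getAppId(); // Spark's own application id
   String appAttemptId = "1";

   eventLoggingListener.onApplicationStart(
       new SparkListenerApplicationStart(
           appName,                           // display name, not the job id
           scala.Option.apply(appId),
           startTime,                         // recorded before execution
           System.getProperty("user.name"),   // sparkUser; an assumption here
           scala.Option.apply(appAttemptId),
           scala.Option.empty()));            // driverLogs (Spark 2.x form)
   ```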




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 544924)
    Time Spent: 7h  (was: 6h 50m)

> Beam metrics should be displayed in Spark UI
> --------------------------------------------
>
>                 Key: BEAM-11213
>                 URL: https://issues.apache.org/jira/browse/BEAM-11213
>             Project: Beam
>          Issue Type: Wish
>          Components: runner-spark
>            Reporter: Kyle Weaver
>            Assignee: Tomasz Szerszen
>            Priority: P2
>              Labels: portability-spark
>          Time Spent: 7h
>  Remaining Estimate: 0h
>
> All Beam metrics are visible in the Spark UI in a single accumulator value 
> (in the "Accumulators" tab), which is a large, hard-to-read blob. Originally, 
> this blob was rendered in a bespoke format 
> (https://github.com/apache/beam/blob/ead80b469ffeeddcd8e9e5c8dc462eec0b0ffc6b/sdks/java/core/src/main/java/org/apache/beam/sdk/metrics/MetricQueryResults.java#L63-L72).
>  I changed the format to JSON so it could be easily deserialized (BEAM-9600). 
> But then an issue was filed (BEAM-10294) reporting that the new JSON format 
> was harder to read than the original bespoke format. The temporary fix was to 
> revert to the bespoke format in Spark, while allowing Flink to continue to 
> use JSON. However, if Beam metrics are only visible as an accumulator, then 
> they are also unreadable because the payloads are in binary form (BEAM-10719).
> Having metrics visible in Spark's "Metrics" tab would A) make metrics easier 
> to read (even compared to the bespoke accumulator string format), and closer 
> to what users of Beamless Spark expect, and B) free us to use the accumulator 
> however we wish for Beam internal purposes, without worrying about 
> readability.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
