ibzib commented on a change in pull request #13743:
URL: https://github.com/apache/beam/pull/13743#discussion_r568026000



##########
File path: runners/spark/src/main/java/org/apache/beam/runners/spark/SparkPipelineRunner.java
##########
@@ -212,6 +246,22 @@ public PortablePipelineResult run(RunnerApi.Pipeline pipeline, JobInfo jobInfo)
             pipelineOptions.as(MetricsOptions.class),
             result);
     metricsPusher.start();
+    if (pipelineOptions.getEventLogEnabled()) {
+      eventLoggingListener.onApplicationStart(
+          new SparkListenerApplicationStart(
+              jobInfo.jobId(),

Review comment:
       I am referring to certain fields in the Spark API that are filled in incorrectly here. The values I think you should use instead are noted in the comments below:
   
   ```java
   EventLoggingListener(
     String appId /* jsc.getConf().getAppId() */, 
     java.net.URI logBaseDir, 
     SparkConf sparkConf, 
     org.apache.hadoop.conf.Configuration hadoopConf)
   
   SparkListenerApplicationStart(
     String appName /* pipelineOptions.as(ApplicationNameOptions.class).getAppName() */, 
     scala.Option<String> appId /* jsc.getConf().getAppId() */, 
     long time, 
     String sparkUser, 
     scala.Option<String> appAttemptId /* "1" */, 
     scala.Option<scala.collection.Map<String,String>> driverLogs)
   ```
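   
   For reference, here is a sketch of how the `onApplicationStart` call might look with those values. This is untested and makes some assumptions: `jsc` (the `JavaSparkContext`) is in scope at this point in `run()`, the timestamp is taken from `System.currentTimeMillis()`, and `"1"` is used as the attempt id as suggested above:
   
   ```java
   // Sketch only: assumes jsc, pipelineOptions, and eventLoggingListener
   // are in scope, as in the surrounding run() method.
   eventLoggingListener.onApplicationStart(
       new SparkListenerApplicationStart(
           pipelineOptions.as(ApplicationNameOptions.class).getAppName(), // appName
           scala.Option.apply(jsc.getConf().getAppId()),                  // appId
           System.currentTimeMillis(),                                    // time
           jsc.sc().sparkUser(),                                          // sparkUser
           scala.Option.apply("1"),                                       // appAttemptId
           scala.Option.empty()));                                        // driverLogs
   ```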
   




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

