vanzin commented on a change in pull request #23790: [SPARK-26792][CORE] Apply custom log URL to Spark UI
URL: https://github.com/apache/spark/pull/23790#discussion_r260907090
 
 

 ##########
 File path: core/src/test/scala/org/apache/spark/scheduler/CoarseGrainedSchedulerBackendSuite.scala
 ##########
 @@ -120,6 +124,54 @@ class CoarseGrainedSchedulerBackendSuite extends SparkFunSuite with LocalSparkCo
     }
   }
 
 +  // Here we just have test for one happy case instead of all cases: other cases are covered in
 +  // FsHistoryProviderSuite.
 +  test("custom log url for Spark UI is applied") {
 +    val conf = new SparkConf()
 +      .set(CPUS_PER_TASK, 2)
 +      .set(UI.CUSTOM_EXECUTOR_LOG_URL, getCustomExecutorLogUrl(includeFileName = true))
 +      .setMaster("local-cluster[4, 3, 1024]")
 +      .setAppName("test")
 +
 +    sc = new SparkContext(conf)
 +    val backend = sc.schedulerBackend.asInstanceOf[CoarseGrainedSchedulerBackend]
 +    val mockEndpointRef = mock[RpcEndpointRef]
 +    val mockAddress = mock[RpcAddress]
 +
 +    val logUrls = getTestExecutorLogUrls
 +    val attributes = getTestExecutorAttributes
 +
 +    var executorAddedCount: Int = 0
 +    val listener = new SparkListener() {
 +      override def onExecutorAdded(executorAdded: SparkListenerExecutorAdded): Unit = {
 +        executorAddedCount += 1
 +        assert(executorAdded.executorInfo.logUrlMap === Seq("stdout", "stderr").map { file =>
 +          file -> getExpectedCustomExecutorLogUrl(attributes, Some(file))
 +        }.toMap)
 +      }
 +    }
 +
 +    try {
 +      sc.addSparkListener(listener)
 +
 +      backend.driverEndpoint.askSync[Boolean](
 +        RegisterExecutor("1", mockEndpointRef, mockAddress.host, 1, logUrls, attributes))
 
 Review comment:
   Aren't these IDs the same ones that would be created by the executors you requested in the master string?
   
   If those start first (I don't remember whether `local-cluster` waits for executors to be up before returning control to the app), then you'll hit this code:
   
   ```
         case RegisterExecutor(executorId, executorRef, hostname, cores, logUrls, attributes) =>
           if (executorDataMap.contains(executorId)) {
             executorRef.send(RegisterExecutorFailed("Duplicate executor ID: " + executorId))
             context.reply(true)
   ```
   
   Or maybe there's something I'm missing that prevents the configured executors from starting at all? Because you have a check later (`assert(sc.getExecutorIds().length == 3)`) that should fail, given that you requested 4 executors when starting the context.
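   The duplicate-ID rejection quoted above can be sketched in isolation. This is a minimal stand-in, not Spark's actual backend: `ExecutorRegistry` and its members are hypothetical, modeling only the `executorDataMap.contains(executorId)` guard from the quoted code — registering an ID twice (as would happen if `local-cluster`'s own executor "1" came up before the test's mock registration) is refused.

   ```scala
   import scala.collection.mutable

   // Hypothetical stand-in for the scheduler backend's registration bookkeeping:
   // a second RegisterExecutor with the same ID is rejected, mirroring the
   // RegisterExecutorFailed reply in the quoted snippet.
   class ExecutorRegistry {
     private val executorDataMap = mutable.HashMap.empty[String, Map[String, String]]

     // Right(()) on success; Left with an error message on a duplicate ID.
     def register(executorId: String, logUrls: Map[String, String]): Either[String, Unit] = {
       if (executorDataMap.contains(executorId)) {
         Left(s"Duplicate executor ID: $executorId")
       } else {
         executorDataMap(executorId) = logUrls
         Right(())
       }
     }

     def registeredIds: Set[String] = executorDataMap.keySet.toSet
   }

   object ExecutorRegistryDemo {
     def main(args: Array[String]): Unit = {
       val registry = new ExecutorRegistry()
       // First registration succeeds.
       println(registry.register("1", Map("stdout" -> "http://host/stdout")))
       // Same ID again is rejected, as the quoted RegisterExecutor branch would do.
       println(registry.register("1", Map("stdout" -> "http://host/stdout")))
     }
   }
   ```

   If the cluster's own executors really do register first, the test's mock `RegisterExecutor("1", ...)` would land in the `Left` branch of this sketch rather than adding a new executor.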
