yangwwei commented on a change in pull request #277:
URL: https://github.com/apache/incubator-yunikorn-k8shim/pull/277#discussion_r693172356



##########
File path: pkg/cache/application.go
##########
@@ -58,6 +58,7 @@ type Application struct {
        placeholderAsk             *si.Resource // total placeholder request for the app (all task groups)
        placeholderTimeoutInSec    int64
        schedulingStyle            string
+       requestOriginatingPod      *v1.Pod // Original Pod which creates the requests

Review comment:
       If we submit a Spark job with gang scheduling (GS) enabled, 1 driver + 5 executors, like this:
   
   ```
   spark-1234-driver
   tg-spark-1234-executor-0
   tg-spark-1234-executor-1
   tg-spark-1234-executor-2
   tg-spark-1234-executor-3
   tg-spark-1234-executor-4
   ```
   if the placeholders time out, we push all the events to the driver pod:
   ```
   spark-1234-driver
   Events
     Application spark-1234 placeholder has been timed out, task: taskIDString0
     Application spark-1234 placeholder has been timed out, task: taskIDString1
     Application spark-1234 placeholder has been timed out, task: taskIDString2
     ...
   ```
   If 100 executors' placeholders time out, we will have 100 events like this printed on the driver pod, which won't help the users.
   When spark-operator is running, it makes more sense to aggregate such messages and send them to the Spark CRD object: https://github.com/apache/incubator-yunikorn-k8shim/blob/master/pkg/appmgmt/sparkoperator/spark.go. Sending such messages to the driver pod does not seem very helpful to me.
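   To illustrate the aggregation idea, here is a minimal sketch in Go. The function name and message format are hypothetical, not the shim's actual API; the point is only that N per-task timeout events can be collapsed into one message before being posted to the Spark CRD object:

   ```go
   package main

   import (
   	"fmt"
   	"strings"
   )

   // aggregatePlaceholderTimeouts collapses per-task placeholder timeout
   // events into a single message, so the CRD object gets one event instead
   // of one per task. Hypothetical helper, for illustration only.
   func aggregatePlaceholderTimeouts(appID string, taskIDs []string) string {
   	if len(taskIDs) == 0 {
   		return ""
   	}
   	return fmt.Sprintf("Application %s: %d placeholder(s) timed out, tasks: %s",
   		appID, len(taskIDs), strings.Join(taskIDs, ", "))
   }

   func main() {
   	msg := aggregatePlaceholderTimeouts("spark-1234",
   		[]string{"taskIDString0", "taskIDString1", "taskIDString2"})
   	fmt.Println(msg)
   }
   ```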
   
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
