yittg edited a comment on pull request #15501:
URL: https://github.com/apache/flink/pull/15501#issuecomment-816352833


   @wangyang0918 Thank you for the timely feedback, so we won't drift too far if we are heading in the wrong direction.
   
   > * Except for saving the web socket connections, I am not sure what else we 
could gain by using shared informer over separate watches. I am just uncertain 
to use such a powerful but complex feature. 
   >   
   
   Yeah, it's powerful but complex. Let me explain why I use it and implement it like this. In short, I want the shared watching to behave like a normal watch.
   Let me explain it in detail:
   Say a shared watching is already running, and a new watching for CM-a is requested.
   The new watcher should receive an ADDED event (if the ConfigMap did not exist before), followed by MODIFIED events whenever it is updated (and finally DELETED).
   But the shared watching cannot deliver an ADDED event if the ConfigMap already exists. Can we synthesize the ADDED event by fetching the object from the K8s API separately? That does not seem correct either: we cannot guarantee the ordering between the fetch and the watch events, so we may end up with a duplicated ADDED, or an ADDED after a MODIFIED.
   So we should get the object from a place that is sequential with the event stream. The informer looks like the proper way, and it brings some added value (like reconnecting) at the same time.
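   To make the ordering argument concrete, here is a minimal single-threaded sketch (all names are hypothetical, not the actual Flink/fabric8 API): a late subscriber gets a synthetic ADDED replayed from a cache that is updated in the same sequence as the events, so it can never observe a duplicated ADDED or an ADDED after a MODIFIED.

   ```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch of an informer-style store. The cache and the event
// dispatch are mutated in the same sequence, so replaying the cached state
// as a synthetic ADDED is always correctly ordered w.r.t. later events.
class SharedConfigMapStore {
    private final Map<String, String> cache = new HashMap<>(); // name -> data
    private final Map<String, List<Consumer<String>>> handlers = new HashMap<>();

    // In a real informer this runs on a single event thread; here we
    // assume single-threaded calls for brevity.
    void onEvent(String type, String name, String data) {
        cache.put(name, data);
        for (Consumer<String> h : handlers.getOrDefault(name, List.of())) {
            h.accept(type + "=" + data);
        }
    }

    void subscribe(String name, Consumer<String> handler) {
        // Replay the current cached state as ADDED, sequential with any
        // future events, instead of fetching from the API server.
        if (cache.containsKey(name)) {
            handler.accept("ADDED=" + cache.get(name));
        }
        handlers.computeIfAbsent(name, k -> new ArrayList<>()).add(handler);
    }
}
   ```

   A subscriber registered after `CM-a` was already modified first sees the cached state as `ADDED`, then only the events that happen afterwards.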
   
   ---
   
   **BUT**
   
   > * Because `FlinkKubeClient#watchConfigMaps` is only used for Kubernetes HA 
services. 
   
   If we use it only for the Kubernetes HA services, watching the periodically updated Leader ConfigMaps, and we do not care whether an ADDED event occurs, then we can just use a simple normal watching like before and dispatch events exactly as they arrive.
   
   > Compared with keeping the current `FlinkKubeClient#watchConfigMaps` 
interface, I prefer to remove it and add a new interface for creating shared 
informer. 
   
   And if we do not care about keeping the original interface, creating another one for shared watching is fine.
   
   > Actually, we completely change the behavior about how to start the watch, 
do the callback, even recreate the watch when failed.
   
   The implementation here is expected to keep the previous behavior. But if we do not need to keep it because of the special use case (only the Leader ConfigMaps), that is fine with me.
   
   > 
   > ```
   > KubernetesSharedInformer<?, ?> 
createKubernetesConfigMapSharedInformer(Map<String, String> labels);
   > ```
   > 
   > * Just like `KubernetesLeaderElector`, maybe 
`Fabric8FlinkKubeConfigMapWatcher` could be renamed to 
`KubernetesSharedInformer` and put in 
`org.apache.flink.kubernetes.kubeclient.resources`.
   > * We could create/close HA related ConfigMap sharedInformer in 
`KubernetesHaServices` and pass it to `KubernetesLeaderElectionDriver` and 
`KubernetesLeaderRetrievalDriver`. Then the driver will register the event 
handler.
   
   Then we can add a new interface, watching by labels just as simply as watching a single ConfigMap:
   ```
   KubernetesWatch FlinkKubeClient#watchConfigMaps(Map<String, String> labels, 
Handler handler);
   ```
   And manage it in `KubernetesHaServices`.
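   For illustration, a compilable sketch of what that interface could look like; the `WatchCallbackHandler` shape and the `AutoCloseable` watch handle are my assumptions, only the label-based `watchConfigMaps` idea comes from the snippet above:

   ```java
import java.util.Map;

// Hypothetical handler shape; the real Flink handler also carries the
// ConfigMap objects, simplified to names here.
interface WatchCallbackHandler {
    void onAdded(String configMapName);
    void onModified(String configMapName);
    void onDeleted(String configMapName);
}

// Closeable handle so KubernetesHaServices can stop the watch on shutdown.
interface KubernetesWatch extends AutoCloseable {
    @Override
    void close(); // no checked exception, keeps try-with-resources simple
}

interface FlinkKubeClientSketch {
    // Watch all ConfigMaps matching the given labels (e.g. the HA labels)
    // and dispatch events to the handler.
    KubernetesWatch watchConfigMaps(Map<String, String> labels, WatchCallbackHandler handler);
}
   ```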
   
   
   > * I am wondering whether we could share a `Processor` for same ConfigMap 
with different event handlers in `Fabric8FlinkKubeConfigMapWatcher` if the 
performance allows. Each ConfigMap just needs a dedicated `Processor` to 
process the events sequentially.
   
   Then we can share a `Processor` for the same ConfigMap.
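   As a rough sketch of that sharing (hypothetical names, not the actual implementation): one single-thread executor per ConfigMap keeps that ConfigMap's events sequential, while any number of handlers can register for it.

   ```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Hypothetical sketch: one shared Processor (single-thread executor) per
// ConfigMap, so each ConfigMap's events are processed sequentially even
// with multiple registered handlers.
class SharedProcessors {
    private final Map<String, ExecutorService> processors = new ConcurrentHashMap<>();
    private final Map<String, List<Consumer<String>>> handlers = new ConcurrentHashMap<>();

    void register(String configMap, Consumer<String> handler) {
        handlers.computeIfAbsent(configMap, k -> new CopyOnWriteArrayList<>()).add(handler);
        processors.computeIfAbsent(configMap, k -> Executors.newSingleThreadExecutor());
    }

    void dispatch(String configMap, String event) {
        ExecutorService processor = processors.get(configMap);
        if (processor != null) {
            // All handlers for this ConfigMap run on its single processor
            // thread, so every handler sees the events in order.
            processor.execute(() -> handlers.get(configMap).forEach(h -> h.accept(event)));
        }
    }

    void close() throws InterruptedException {
        for (ExecutorService p : processors.values()) {
            p.shutdown();
            p.awaitTermination(1, TimeUnit.SECONDS);
        }
    }
}
   ```

   Closing shuts the per-ConfigMap executors down, which `KubernetesHaServices` could do when the HA services stop.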
   
   
   And @tillrohrmann, what's your opinion?
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

