wangyang0918 commented on pull request #15501:
URL: https://github.com/apache/flink/pull/15501#issuecomment-815791896


   @yittg Thanks for preparing this PR. I have carefully gone through the current implementation and want to share some feedback before starting a thorough review.
   
   * Apart from saving web socket connections, I am not sure what else we would gain by using a shared informer instead of separate watches. I am just hesitant to adopt such a powerful but complex feature. Please note that I am not against such a change. The potential gains I see are:
     * A more stable reconnect policy, which means we do not need to handle `TooOldResourceVersion` ourselves.
     * A local cache, which could reduce the load on the Kubernetes API server. However, it seems that we do not need to get/list resources.
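   To make this trade-off concrete, here is a minimal sketch of what wiring a fabric8 shared informer could look like (assuming a fabric8 client version that exposes `KubernetesClient#informers()`; the resync period and handler bodies are only illustrative, not code from this PR):
   ```
   import io.fabric8.kubernetes.api.model.ConfigMap;
   import io.fabric8.kubernetes.api.model.ConfigMapList;
   import io.fabric8.kubernetes.client.KubernetesClient;
   import io.fabric8.kubernetes.client.informers.ResourceEventHandler;
   import io.fabric8.kubernetes.client.informers.SharedIndexInformer;
   import io.fabric8.kubernetes.client.informers.SharedInformerFactory;

   // Sketch only: what wiring a fabric8 shared informer for ConfigMaps could look like.
   public class ConfigMapInformerSketch {

       public static void startInformer(KubernetesClient kubernetesClient) {
           SharedInformerFactory factory = kubernetesClient.informers();
           // Label filtering is omitted here for brevity.
           SharedIndexInformer<ConfigMap> informer =
                   factory.sharedIndexInformerFor(ConfigMap.class, ConfigMapList.class, 30_000L);
           informer.addEventHandler(
                   new ResourceEventHandler<ConfigMap>() {
                       @Override
                       public void onAdd(ConfigMap cm) { /* handle added ConfigMap */ }

                       @Override
                       public void onUpdate(ConfigMap oldCm, ConfigMap newCm) { /* handle modified ConfigMap */ }

                       @Override
                       public void onDelete(ConfigMap cm, boolean deletedFinalStateUnknown) { /* handle deleted ConfigMap */ }
                   });
           // The informer re-lists and re-watches internally, so TooOldResourceVersion does not
           // surface to the caller, and the informer's store serves as a local cache.
           factory.startAllRegisteredInformers();
       }
   }
   ```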
   
   
   * Since `FlinkKubeClient#watchConfigMaps` is only used by the Kubernetes HA services, I would prefer to remove it and add a new interface for creating a shared informer, rather than keeping the current `FlinkKubeClient#watchConfigMaps` interface. After all, this PR completely changes how the watch is started, how callbacks are invoked, and even how the watch is recreated after a failure.
   ```
   KubernetesSharedInformer<?, ?> createKubernetesConfigMapSharedInformer(Map<String, String> labels);
   ```
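   For illustration only, the new abstraction might then expose something along these lines (the names and signatures below are a rough sketch, not a concrete proposal):
   ```
   // Hypothetical shape, just to illustrate the idea; not the actual PR code.
   public interface KubernetesSharedInformer<K, V> extends AutoCloseable {

       /** Starts the underlying list/watch loop, including reconnect handling. */
       void run();

       /** Registers a handler that is notified of add/update/delete events for the given resource key. */
       void registerEventHandler(K key, FlinkKubeClient.WatchCallbackHandler<V> handler);

       /** Stops the informer and releases the shared web socket connection. */
       @Override
       void close();
   }
   ```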
   * Just like `KubernetesLeaderElector`, maybe 
`Fabric8FlinkKubeConfigMapWatcher` could be renamed to 
`KubernetesSharedInformer` and put in 
`org.apache.flink.kubernetes.kubeclient.resources`.
   * We could create/close the HA-related ConfigMap shared informer in `KubernetesHaServices` and pass it to `KubernetesLeaderElectionDriver` and `KubernetesLeaderRetrievalDriver`. Each driver would then register its own event handler.
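   A rough sketch of that lifecycle (class and constructor shapes are simplified and hypothetical, only to show who owns the informer):
   ```
   // Sketch only: simplified, hypothetical shapes; not the actual Flink classes.
   class KubernetesHaServicesSketch implements AutoCloseable {

       private final KubernetesSharedInformer<String, KubernetesConfigMap> configMapInformer;

       KubernetesHaServicesSketch(FlinkKubeClient kubeClient, Map<String, String> haLabels) {
           // One shared informer (one web socket) for all HA ConfigMaps of this cluster.
           this.configMapInformer = kubeClient.createKubernetesConfigMapSharedInformer(haLabels);
           this.configMapInformer.run();
       }

       KubernetesLeaderElectionDriver createLeaderElectionDriver(String leaderConfigMapName) {
           // The driver only registers its event handler on the shared informer;
           // it no longer starts, stops or recreates a watch by itself.
           return new KubernetesLeaderElectionDriver(leaderConfigMapName, configMapInformer);
       }

       @Override
       public void close() {
           // Closing the HA services is the single place where the informer is shut down.
           configMapInformer.close();
       }
   }
   ```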
   * I am wondering whether we could share a single `Processor` per ConfigMap across different event handlers in `Fabric8FlinkKubeConfigMapWatcher`, if performance allows. Each ConfigMap only needs one dedicated `Processor` to process its events sequentially.
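   To illustrate what that sharing could look like, here is a minimal plain-Java sketch (the real `Processor` in the PR may look different): a single-threaded executor per ConfigMap dispatches each event to all handlers registered for that ConfigMap, so processing stays sequential per ConfigMap while the handlers share one processor.
   ```
   import java.util.List;
   import java.util.concurrent.CopyOnWriteArrayList;
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;
   import java.util.function.Consumer;

   // Sketch only: one dedicated, single-threaded dispatcher per ConfigMap,
   // shared by all event handlers interested in that ConfigMap.
   class PerConfigMapDispatcher<T> implements AutoCloseable {

       private final ExecutorService executor = Executors.newSingleThreadExecutor();
       private final List<Consumer<T>> handlers = new CopyOnWriteArrayList<>();

       void addHandler(Consumer<T> handler) {
           handlers.add(handler);
       }

       /** Called from the informer's event handler; events for this ConfigMap keep their order. */
       void dispatch(T event) {
           executor.execute(() -> handlers.forEach(handler -> handler.accept(event)));
       }

       @Override
       public void close() {
           executor.shutdown();
       }
   }
   ```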


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

