Al-assad opened a new issue, #2879:
URL: https://github.com/apache/incubator-streampark/issues/2879

   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
   
   
   ### Search before asking
   
   - [X] I have searched the [issues](https://github.com/apache/incubator-streampark/streampark/issues?q=is%3Aissue+label%3A%22bug%22) and found no similar issues.
   
   
   ### Describe the proposal
   
   The current StreamPark Flink-on-Kubernetes module has many issues with submitting and tracking jobs. We therefore hope to refactor the entire module on top of the Flink Kubernetes Operator, to address the following problems (a submission sketch follows this list):
   
   - Inaccurate and confusing tracking of Flink-on-Kubernetes job status
   - Unstable and lengthy submission process for Flink K8s application-mode jobs
   - Scattered Event Watcher thread-pool code
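   
   With the operator in place, submitting an application-mode job reduces to applying a `FlinkDeployment` custom resource and letting the operator reconcile it. Below is a minimal sketch of that flow, assuming the `flink-kubernetes-operator-api` model classes and a fabric8 `KubernetesClient`; the namespace, image, and jar path are hypothetical placeholders:
   
   ```java
   import io.fabric8.kubernetes.api.model.ObjectMeta;
   import io.fabric8.kubernetes.client.KubernetesClient;
   import io.fabric8.kubernetes.client.KubernetesClientBuilder;
   import org.apache.flink.kubernetes.operator.api.FlinkDeployment;
   import org.apache.flink.kubernetes.operator.api.spec.FlinkDeploymentSpec;
   import org.apache.flink.kubernetes.operator.api.spec.FlinkVersion;
   import org.apache.flink.kubernetes.operator.api.spec.JobSpec;
   import org.apache.flink.kubernetes.operator.api.spec.UpgradeMode;
   
   public class OperatorSubmitSketch {
   
       public static void main(String[] args) {
           // Submission becomes "apply a CR": the operator creates the Flink
           // cluster and starts the job asynchronously, so StreamPark no longer
           // drives the lengthy submission steps itself.
           FlinkDeployment deployment = new FlinkDeployment();
   
           ObjectMeta meta = new ObjectMeta();
           meta.setName("my-app-job");   // hypothetical job name
           meta.setNamespace("flink");   // hypothetical namespace
           deployment.setMetadata(meta);
   
           FlinkDeploymentSpec spec = new FlinkDeploymentSpec();
           spec.setImage("flink:1.17");  // hypothetical image
           spec.setFlinkVersion(FlinkVersion.v1_17);
           spec.setServiceAccount("flink");
   
           JobSpec job = new JobSpec();
           job.setJarURI("local:///opt/flink/usrlib/my-job.jar"); // hypothetical jar
           job.setParallelism(2);
           job.setUpgradeMode(UpgradeMode.STATELESS);
           spec.setJob(job);
           deployment.setSpec(spec);
   
           try (KubernetesClient client = new KubernetesClientBuilder().build()) {
               // Idempotent apply: re-submitting updates the existing deployment.
               client.resources(FlinkDeployment.class)
                     .inNamespace("flink")
                     .resource(deployment)
                     .createOrReplace();
           }
       }
   }
   ```
   
   Lifecycle control follows the same declarative pattern: suspending a job is done by flipping the job state in the spec and re-applying the CR, and tearing the cluster down by deleting the CR, with the operator handling the actual transitions.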
   
   ### Resources
   - Design documentation: https://www.craft.do/s/Pu5GI6vr4KIrj5
   - Demo code: https://github.com/Al-assad/streampark-flink-kubernetes-v2
   
   ### Sub Tasks
   
   - [ ] Create a shaded module for the flink-kubernetes-operator-api dependency
   - [ ] Port the new module code to streampark-flink-kubernetes-v2
   - [ ] Refactor the lifecycle control of Flink application-mode jobs on Kubernetes
   - [ ] Refactor the lifecycle control of Flink session-mode jobs on Kubernetes
   - [ ] Refactor the lifecycle control code of Flink clusters on Kubernetes
   - [ ] Refactor the state tracking of Flink on Kubernetes (see the watch sketch after this list)
   - [ ] Modify the Docker and Helm setup: add the Service (SVC) configuration needed by the embedded file server
   - [ ] Remove the old streampark-flink-kubernetes module code
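   
   For the state-tracking refactor, the operator already reconciles each job and writes its view of the job back into the CR's `.status` field, so StreamPark can subscribe to CR changes instead of polling Flink REST endpoints from per-job threads. A minimal sketch, assuming the fabric8 `Watcher` API and the operator's `FlinkDeployment` status model (the namespace is a hypothetical placeholder):
   
   ```java
   import io.fabric8.kubernetes.client.KubernetesClient;
   import io.fabric8.kubernetes.client.KubernetesClientBuilder;
   import io.fabric8.kubernetes.client.Watcher;
   import io.fabric8.kubernetes.client.WatcherException;
   import org.apache.flink.kubernetes.operator.api.FlinkDeployment;
   
   public class OperatorStatusWatchSketch {
   
       public static void main(String[] args) throws InterruptedException {
           KubernetesClient client = new KubernetesClientBuilder().build();
   
           // One watch replaces the scattered event-watcher thread pools:
           // every status change the operator records arrives as an event here.
           client.resources(FlinkDeployment.class)
                 .inNamespace("flink")  // hypothetical namespace
                 .watch(new Watcher<FlinkDeployment>() {
                     @Override
                     public void eventReceived(Action action, FlinkDeployment cr) {
                         if (cr.getStatus() != null && cr.getStatus().getJobStatus() != null) {
                             // The operator-maintained job state, e.g. RUNNING/FINISHED.
                             var state = cr.getStatus().getJobStatus().getState();
                             System.out.printf("%s %s -> %s%n",
                                     action, cr.getMetadata().getName(), state);
                         }
                     }
   
                     @Override
                     public void onClose(WatcherException cause) {
                         // Real code would reconnect with backoff here.
                     }
                 });
   
           Thread.currentThread().join(); // keep the demo process alive
       }
   }
   ```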
   
   ### Are you willing to submit PR?
   
   - [X] Yes, I am willing to submit a PR!

