Gallardot commented on issue #16478: URL: https://github.com/apache/dolphinscheduler/issues/16478#issuecomment-2295258902
Before discussing this DSIP, I hope everyone can reach a basic consensus: supporting customization can indeed cover more demand scenarios, but excessive customization brings more problems. I see in the design that it supports users directly creating Pods and ConfigMaps, and even creating multiple Pods.

Regarding ConfigMap support, I have some questions:

1. Why support ConfigMap at all? For the same workflow, is a ConfigMap created for each task instance? Is its content different each time? If it is the same, why create it each time? As a configuration resource in Kubernetes, shouldn't a ConfigMap be static? And if ConfigMap is supported as a way to obtain configuration, should Secret be supported as well?
2. Will the ConfigMap be mounted into the Pod as a file? If so, should PV and PVC also be supported?
3. If the goal is merely to reference the configuration in the ConfigMap, can it be referenced directly through env?

Regarding Pod support, I have some questions:

1. How is the Pod's name defined?
2. How is the Pod's lifecycle managed? Will DS delete it after the task ends? How do we guarantee that DS can actually delete it?
3. If the workflow's execution strategy is parallel, how should the Pods be handled?
4. If multiple Pods are created at the same time, are these Pods related, or are they simply run concurrently? If concurrent, will Deployments be supported? StatefulSets? Should DS manage them as a controller of Kubernetes resources? I am afraid this is not what DS should do.
5. Or, more broadly, do you want to support tasks that create Helm charts?

If these issues are not adequately addressed, I am afraid I will vote -1 on this DSIP.
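To make ConfigMap question 3 concrete, here is a minimal sketch of what I mean by referencing configuration through env instead of creating a new ConfigMap per task instance; the ConfigMap name, key, and image are illustrative only, not part of the DSIP:

```yaml
# Illustrative only: the task Pod reads from a pre-existing, static
# ConfigMap via env, so DS never has to create ConfigMaps itself.
apiVersion: v1
kind: Pod
metadata:
  name: ds-task-example          # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: task
      image: busybox:1.36
      command: ["sh", "-c", "echo $APP_MODE"]
      env:
        - name: APP_MODE
          valueFrom:
            configMapKeyRef:
              name: ds-task-config   # static ConfigMap managed outside DS
              key: app.mode
      # Alternatively, import all keys of the ConfigMap at once:
      envFrom:
        - configMapRef:
            name: ds-task-config
```

If this covers the intended use case, DS would only need to let the user reference an existing ConfigMap, not manage its lifecycle.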
