libanglang opened a new issue, #2186: URL: https://github.com/apache/incubator-streampark/issues/2186
### Search before asking

- [X] I had searched in the [feature](https://github.com/apache/incubator-streampark/issues?q=is%3Aissue+label%3A%22Feature%22) and found no similar feature requirement.

### Description

Conditions:

1. The Kubernetes cluster uses Kata Containers as its runtime, which behaves like a micro VM.
2. StreamPark is deployed on Kubernetes and runs a Kubernetes application task.

Problem:

StreamPark cannot connect to the Docker daemon when running a Kubernetes application task, because the following mount in the Kubernetes YAML does not take effect:

```yaml
volumeMounts:
  - name: volume-docker
    mountPath: /var/run/docker.sock
    readOnly: true
```

I guess the Kata container and the host each use their own kernel, which is why this socket file cannot be mounted into the Kata container.

Why I bring this up:

1. Docker may not be installed on the Kubernetes cluster at all, because the cluster uses containerd + runc.
2. If we install a Docker daemon on one node that every cluster node can reach, we no longer need to worry about which node Kubernetes schedules StreamPark onto.

Proposal:

1. Configure the Docker daemon to support remote communication:

```ini
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://192.168.10.100:2375 -H unix:///var/run/docker.sock
```

2. Point the StreamPark configuration at the remote daemon:

```yaml
docker:
  # instantiating DockerHttpClient
  http-client:
    max-connections: 10000
    connection-timeout-sec: 10000
    response-timeout-sec: 12000
    docker-host: "tcp://192.168.10.100:2375"
```

### Usage Scenario

Conditions:

1. The Kubernetes cluster uses Kata Containers as its runtime, which behaves like a micro VM.
2. StreamPark is deployed on Kubernetes and runs a Kubernetes application task.

### Related issues

_No response_

### Are you willing to submit a PR?

- [X] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at: [email protected]
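As a footnote to the `docker-host` value in the proposal above: it follows the usual `tcp://host:port` Docker endpoint convention. A minimal sketch in Python (assuming nothing about StreamPark internals; the address is the example value from this issue, not real infrastructure) of how such an endpoint string splits into the host and port a remote HTTP client would dial, and why a `unix://` socket is not usable here:

```python
from urllib.parse import urlsplit

def parse_docker_host(docker_host: str) -> tuple[str, int]:
    """Split a tcp:// Docker endpoint into (host, port).

    Only the plain-TCP form used in this proposal is handled;
    unix:// sockets are rejected because they cannot be reached
    from inside a Kata micro-VM, which is the point of this issue.
    """
    parts = urlsplit(docker_host)
    if parts.scheme != "tcp":
        raise ValueError(f"expected a tcp:// endpoint, got {docker_host!r}")
    # 2375 is the conventional unencrypted Docker daemon port.
    return parts.hostname, parts.port or 2375

print(parse_docker_host("tcp://192.168.10.100:2375"))  # ('192.168.10.100', 2375)
```

Note that exposing the daemon on plain TCP as in the `ExecStart` line above means anyone who can reach that port controls Docker on that node, so the port should be firewalled to the cluster or protected with TLS.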
