Nj-kol commented on issue #325: URL: https://github.com/apache/doris-spark-connector/issues/325#issuecomment-3568843730
> 1. Is `doris-toolkit-dev-cg1-1.doris-toolkit-dev-cg1.doris.svc.cluster.local:9060` accessible on the Spark side?
> 2. If not, it needs to be read via arrow-flight. Refer to this link for Spark connector configuration: https://doris.apache.org/docs/dev/ecosystem/spark-doris-connector#reading-via-arrow-flight-sql. Also, `fe.conf` needs to be configured with `public_host={nginx ip}` and `arrow_flight_sql_proxy_port={nginx port}`. See https://doris.apache.org/docs/dev/db-connect/arrow-flight-sql-connect#multiple-bes-share-the-same-ip-accessible-from-outside-the-cluster.

1. Is `doris-toolkit-dev-cg1-1.doris-toolkit-dev-cg1.doris.svc.cluster.local:9060` accessible on the Spark side? No, because Spark runs outside Kubernetes.
2. I don't think that is the issue, @JNSimba. I believe the problem is that the BE nodes in compute groups register themselves with the FE on launch using their internal Kubernetes FQDNs, and when this Spark connector queries the FE for the BE addresses, those FQDNs are what gets returned. Note that I have already set `public_host=<my_azure_lb>`, and it does not work with this connector; it works perfectly when I use the `flight-sql-jdbc-core` library.

As I said earlier, this issue could be solved by giving the connector a config option to specify the BE addresses directly, e.g. `.option("doris.benodes", <provide_k8s_loadbalancer_url/ip>)`, so that instead of fetching BE IPs from the FE, the connector can simply use the provided address (which could be the `public_host={nginx ip}`).
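To make the proposal concrete, here is a hedged sketch of what such a read could look like. Note that `doris.benodes` is the *proposed* option and does not exist in the connector today; `doris.fenodes`, `doris.table.identifier`, `user`, and `password` are existing connector options, and all hostnames are placeholders.

```scala
// Configuration sketch only; requires a running Spark session and Doris cluster.
// "doris.benodes" is the feature being requested here, not an implemented option.
val df = spark.read
  .format("doris")
  .option("doris.fenodes", "doris-fe.example.com:8030")      // existing option: FE address
  .option("doris.table.identifier", "example_db.example_tbl")
  .option("user", "root")
  .option("password", "")
  // Proposed: override the internal K8s FQDNs the FE returns for BEs with an
  // externally reachable address (e.g. a k8s LoadBalancer or the nginx public_host):
  .option("doris.benodes", "my-k8s-lb.example.com:9060")     // hypothetical, not implemented
  .load()
```

With such an option, the connector would skip resolving BE addresses through the FE and open its data-read connections against the supplied endpoint directly.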
