ilamhs opened a new issue, #17702: URL: https://github.com/apache/pinot/issues/17702
## Problem

`SegmentPushUtils` uses a hardcoded static `FileUploadDownloadClient` with no SSL context. All push methods (`sendSegmentUriAndMetadata`, `sendSegmentUris`, `sendSegmentsUriAndMetadata`) use this singleton, so segment metadata push always happens over plain HTTP even when the `SegmentGenerationJobSpec` carries a valid `TlsSpec`. The `TlsSpec` is already part of the SPI and is consumed by the Spark ingestion plugins (`SparkSegmentGenerationJobRunner`), but the core push utilities and the Hadoop push runner (`HadoopSegmentMetadataPushJobRunner`) ignore it entirely.

## Proposed Fix

When `spec.getTlsSpec() != null`, construct a `FileUploadDownloadClient(sslContext)` from the TLS keystore/truststore configuration instead of always using the default static client. The change would be localized to `SegmentPushUtils`: the 3-arg `sendSegmentUriAndMetadata` entry point can check the spec's `TlsSpec`, build the appropriate client, and pass it through to the inner implementation. This keeps the public API surface unchanged for existing callers. `HadoopSegmentMetadataPushJobRunner` would also benefit from the same treatment the Spark runners already have.

## Impact

- Backwards-compatible: existing callers with no `TlsSpec` get the same default behavior
- Enables mTLS segment push without requiring forks or workarounds
- Brings the Hadoop push runner to parity with the Spark plugins regarding TLS support
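For illustration, here is a minimal sketch of the keystore/truststore-to-`SSLContext` construction the fix would perform, using only the standard JSSE APIs. The `buildSslContext` helper and its parameter names are hypothetical (the real patch would read these values from `TlsSpec` and hand the resulting context to the `FileUploadDownloadClient(sslContext)` constructor); `null` store paths fall back to the JVM defaults, mirroring the backwards-compatible no-`TlsSpec` path:

```java
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;
import java.io.FileInputStream;
import java.security.KeyStore;

public class TlsClientSketch {

  // Hypothetical helper: build an SSLContext from keystore/truststore paths,
  // the values a TlsSpec would carry. Null paths leave the corresponding
  // managers at the JVM defaults.
  static SSLContext buildSslContext(String keyStorePath, char[] keyStorePassword,
      String trustStorePath, char[] trustStorePassword) throws Exception {
    KeyManagerFactory kmf = null;
    if (keyStorePath != null) {
      KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
      try (FileInputStream in = new FileInputStream(keyStorePath)) {
        keyStore.load(in, keyStorePassword);
      }
      kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
      kmf.init(keyStore, keyStorePassword);
    }
    TrustManagerFactory tmf = null;
    if (trustStorePath != null) {
      KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
      try (FileInputStream in = new FileInputStream(trustStorePath)) {
        trustStore.load(in, trustStorePassword);
      }
      tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
      tmf.init(trustStore);
    }
    SSLContext sslContext = SSLContext.getInstance("TLS");
    sslContext.init(kmf == null ? null : kmf.getKeyManagers(),
        tmf == null ? null : tmf.getTrustManagers(), null);
    // In the proposed fix, this context would then be passed to
    // new FileUploadDownloadClient(sslContext) when spec.getTlsSpec() != null.
    return sslContext;
  }

  public static void main(String[] args) throws Exception {
    // No stores configured: JVM defaults, analogous to today's static client.
    SSLContext ctx = buildSslContext(null, null, null, null);
    System.out.println(ctx.getProtocol());
  }
}
```

This is only a sketch under the assumptions stated above; the actual patch would likely reuse whatever TLS helper utilities Pinot already ships rather than hand-rolling the store loading.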
