[
https://issues.apache.org/jira/browse/FLINK-20238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17235530#comment-17235530
]
Jark Wu commented on FLINK-20238:
---------------------------------
This is not supported in 1.11, but it is supported in 1.12.
You can try out this feature using the 1.12 RC.
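For reference, a minimal sketch of what the sink DDL could look like on 1.12, assuming the 'username' and 'password' options added to the elasticsearch-7 connector there; the column list, host, index, and credentials below are placeholders, not taken from the reporter's setup:

CREATE TABLE cdc_enriched_orders (
  order_id INT,          -- placeholder columns
  product_name STRING,
  PRIMARY KEY (order_id) NOT ENFORCED
) WITH (
  'connector' = 'elasticsearch-7',
  'hosts' = 'http://localhost:9200',
  'index' = 'enriched_orders',
  'username' = 'elastic',    -- assumed option, available from 1.12
  'password' = 'changeme'    -- assumed option, available from 1.12
);

With the same DDL on 1.11, the 'username' and 'password' options are rejected as unsupported, which is the error shown in the issue description below.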
> flink-sql-connector-elasticsearch does not support Elasticsearch auth
> ----------------------------------------------------------------------
>
> Key: FLINK-20238
> URL: https://issues.apache.org/jira/browse/FLINK-20238
> Project: Flink
> Issue Type: Wish
> Components: Connectors / ElasticSearch
> Affects Versions: 1.11.1, 1.11.2
> Reporter: liu
> Priority: Major
>
> Flink SQL> INSERT INTO cdc_enriched_orders
> > SELECT o.*, p.name, p.description, s.shipment_id, s.origin, s.destination,
> > s.is_arrived
> > FROM cdc_orders AS o
> > LEFT JOIN cdc_products AS p ON o.product_id = p.id
> > LEFT JOIN cdc_shipments AS s ON o.order_id = s.order_id;
> [INFO] Submitting SQL update statement to the cluster...
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.table.api.ValidationException: Unsupported options found for
> connector 'elasticsearch-7'.
> Unsupported options:
> password
> username
> Supported options:
> connection.max-retry-timeout
> connection.path-prefix
> connector
> document-id.key-delimiter
> failure-handler
> format
> hosts
> index
> json.fail-on-missing-field
> json.ignore-parse-errors
> json.timestamp-format.standard
> property-version
> sink.bulk-flush.backoff.delay
> sink.bulk-flush.backoff.max-retries
> sink.bulk-flush.backoff.strategy
> sink.bulk-flush.interval
> sink.bulk-flush.max-actions
> sink.bulk-flush.max-size
> sink.flush-on-checkpoint
--
This message was sent by Atlassian Jira
(v8.3.4#803005)