zhuxiaoshang commented on a change in pull request #13918:
URL: https://github.com/apache/flink/pull/13918#discussion_r517112072



##########
File path: flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/table/ElasticsearchOptions.java
##########
@@ -138,7 +138,11 @@
                        .withDescription("Elasticsearch connector requires to 
specify a format.\n" +
                                "The format must produce a valid json document. 
\n" +
                                "By default uses built-in 'json' format. Please 
refer to Table Formats section for more details.");
-
+       public static final ConfigOption<Integer> PARALLELISM =

Review comment:
       The parallelism option is already defined in `FactoryUtil`; you should reuse it instead of defining a new one here.
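
For illustration, here is a minimal sketch of what the suggested reuse could look like in the sink factory. The field name `FactoryUtil.SINK_PARALLELISM` and the `optionalOptions()` method shape are assumptions based on the usual `DynamicTableSinkFactory` pattern, not code taken from this PR:

    import java.util.HashSet;
    import java.util.Set;

    import org.apache.flink.configuration.ConfigOption;
    import org.apache.flink.table.factories.FactoryUtil;

    // Sketch: expose the shared parallelism option from the factory instead of
    // re-declaring a PARALLELISM option in ElasticsearchOptions, so the key,
    // type and description stay consistent with other connectors.
    @Override
    public Set<ConfigOption<?>> optionalOptions() {
        Set<ConfigOption<?>> options = new HashSet<>();
        options.add(FactoryUtil.SINK_PARALLELISM);
        return options;
    }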

##########
File path: flink-connectors/flink-connector-elasticsearch6/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/table/Elasticsearch6DynamicSinkFactory.java
##########
@@ -149,6 +151,15 @@ private void validate(Elasticsearch6Configuration config, Configuration original
                                        config.getPassword().orElse("")
                                ));
                }
+               if (config.getParellelism().isPresent()) {

Review comment:
       The parallelism is already validated in `CommonPhysicalSink`, so I think re-validating it here is unnecessary.
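
For context, a rough sketch of the kind of factory-side check that would then be redundant (the `getParallelism()` accessor and the error message below are illustrative, not this PR's code):

    // Illustrative only: a bound check like this in the factory's validate()
    // method duplicates what the planner-side CommonPhysicalSink already
    // enforces, so the factory can drop it and forward the value as-is.
    config.getParallelism().ifPresent(parallelism -> {
        if (parallelism <= 0) {
            throw new ValidationException(
                "Option 'sink.parallelism' must be a positive integer, but was: " + parallelism);
        }
    });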

##########
File path: flink-connectors/flink-connector-elasticsearch6/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/table/Elasticsearch6DynamicSink.java
##########
@@ -115,50 +117,58 @@ public ChangelogMode getChangelogMode(ChangelogMode requestedMode) {
 
        @Override
        public SinkFunctionProvider getSinkRuntimeProvider(Context context) {
-               return () -> {
-                       SerializationSchema<RowData> format = this.format.createRuntimeEncoder(context, schema.toRowDataType());
-
-                       final RowElasticsearchSinkFunction upsertFunction =
-                               new RowElasticsearchSinkFunction(
-                                       IndexGeneratorFactory.createIndexGenerator(config.getIndex(), schema),
-                                       config.getDocumentType(),
-                                       format,
-                                       XContentType.JSON,
-                                       REQUEST_FACTORY,
-                                       KeyExtractor.createKeyExtractor(schema, config.getKeyDelimiter())
-                               );
-
-                       final ElasticsearchSink.Builder<RowData> builder = builderProvider.createBuilder(
-                               config.getHosts(),
-                               upsertFunction);
-
-                       builder.setFailureHandler(config.getFailureHandler());
-                       builder.setBulkFlushMaxActions(config.getBulkFlushMaxActions());
-                       builder.setBulkFlushMaxSizeMb((int) (config.getBulkFlushMaxByteSize() >> 20));
-                       builder.setBulkFlushInterval(config.getBulkFlushInterval());
-                       builder.setBulkFlushBackoff(config.isBulkFlushBackoffEnabled());
-                       config.getBulkFlushBackoffType().ifPresent(builder::setBulkFlushBackoffType);
-                       config.getBulkFlushBackoffRetries().ifPresent(builder::setBulkFlushBackoffRetries);
-                       config.getBulkFlushBackoffDelay().ifPresent(builder::setBulkFlushBackoffDelay);
-
-                       // we must overwrite the default factory which is defined with a lambda because of a bug
-                       // in shading lambda serialization shading see FLINK-18006
-                       if (config.getUsername().isPresent()
-                               && config.getPassword().isPresent()
-                               && !StringUtils.isNullOrWhitespaceOnly(config.getUsername().get())
-                               && !StringUtils.isNullOrWhitespaceOnly(config.getPassword().get())) {
-                               builder.setRestClientFactory(new AuthRestClientFactory(config.getPathPrefix().orElse(null), config.getUsername().get(), config.getPassword().get()));
-                       } else {
-                               builder.setRestClientFactory(new DefaultRestClientFactory(config.getPathPrefix().orElse(null)));
+               return new SinkFunctionProvider() {

Review comment:
       I have added a utility function to `SinkFunctionProvider` in https://github.com/apache/flink/pull/13902; you can use it once that PR is merged.
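
Once that PR is merged, the anonymous `SinkFunctionProvider` here could presumably be replaced by the new helper. A minimal sketch, assuming the utility is a static `of(...)` factory that accepts the sink function and an optional sink parallelism (exact signature to be confirmed against that PR):

    // Sketch of the intended simplification; buildElasticsearchSink(context)
    // stands in for the existing ElasticsearchSink builder logic shown in the
    // diff above and is not a real method in this PR.
    @Override
    public SinkFunctionProvider getSinkRuntimeProvider(Context context) {
        SinkFunction<RowData> sinkFunction = buildElasticsearchSink(context);
        return SinkFunctionProvider.of(sinkFunction, config.getParallelism().orElse(null));
    }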




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

