YuriGusev commented on code in PR #1:
URL: https://github.com/apache/flink-connector-dynamodb/pull/1#discussion_r997909454


##########
flink-connector-aws-dynamodb/src/main/java/org/apache/flink/streaming/connectors/dynamodb/sink/DynamoDbSinkBuilder.java:
##########
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.dynamodb.sink;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.connector.base.sink.AsyncSinkBaseBuilder;
+import org.apache.flink.connector.base.sink.writer.ElementConverter;
+import org.apache.flink.streaming.connectors.dynamodb.config.DynamoDbTablesConfig;
+
+import java.util.Optional;
+import java.util.Properties;
+
+/**
+ * Builder to construct {@link DynamoDbSink}.
+ *
+ * <p>The following example shows the minimum setup to create a {@link DynamoDbSink} that writes
+ * records into DynamoDb.
+ *
+ * <pre>{@code
+ * private static class DummyDynamoDbRequestConverter implements DynamoDbRequestConverter<String> {
+ *
+ *     @Override
+ *     public DynamoDbRequest apply(String s) {
+ *         final Map<String, DynamoDbAttributeValue> item = new HashMap<>();
+ *         item.put("your-key", DynamoDbAttributeValue.builder().s(s).build());
+ *         return DynamoDbRequest.builder()
+ *                 .tableName("your-table-name")
+ *                 .putRequest(DynamoDbPutRequest.builder().item(item).build())
+ *                 .build();
+ *     }
+ * }
+ *
+ * DynamoDbSink<String> dynamoDbSink =
+ *         DynamoDbSink.<String>builder()
+ *                 .setDynamoDbRequestConverter(new DummyDynamoDbRequestConverter())
+ *                 .build();
+ * }</pre>
+ *
+ * <p>If the following parameters are not set in this builder, the following defaults will be used:
+ *
+ * <ul>
+ *   <li>{@code maxBatchSize} will be 25
+ *   <li>{@code maxInFlightRequests} will be 50
+ *   <li>{@code maxBufferedRequests} will be 10000
+ *   <li>{@code maxBatchSizeInBytes} will be 16 MB i.e. {@code 16 * 1000 * 1000}
+ *   <li>{@code maxTimeInBufferMS} will be 5000ms
+ *   <li>{@code maxRecordSizeInBytes} will be 400 KB i.e. {@code 400 * 1000}
+ *   <li>{@code failOnError} will be false
+ *   <li>{@code dynamoDbTablesConfig} will be empty, meaning no record deduplication will be
+ *       performed by the sink
+ * </ul>
+ *
+ * @param <InputT> type of elements that should be persisted in the destination
+ */
+@PublicEvolving
+public class DynamoDbSinkBuilder<InputT>
+        extends AsyncSinkBaseBuilder<InputT, DynamoDbWriteRequest, DynamoDbSinkBuilder<InputT>> {
+
+    private static final int DEFAULT_MAX_BATCH_SIZE = 25;
+    private static final int DEFAULT_MAX_IN_FLIGHT_REQUESTS = 50;

Review Comment:
   Hi,
   
   Duplicated keys only matter when they end up in the same batch request; in that case the DynamoDB service fails the whole request. I took a simple approach of having an accumulator deduplicate entries by PK/SK within the same request. E.g. if we had 25 entries and 1 entry had a duplicated PK/SK, we would write 24 entries in the request.
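   
   For illustration, a rough sketch of that accumulator idea (this is not the code in this PR; the `WriteRequest` type and the concatenated key layout are hypothetical):
   
   ```java
   import java.util.Collection;
   import java.util.LinkedHashMap;
   import java.util.List;
   import java.util.Map;
   
   /** Sketch of in-batch deduplication; not the connector's actual code. */
   final class BatchDeduplicator {
   
       /** Hypothetical write request carrying the item's PK/SK. */
       static final class WriteRequest {
           final String partitionKey;
           final String sortKey;
   
           WriteRequest(String partitionKey, String sortKey) {
               this.partitionKey = partitionKey;
               this.sortKey = sortKey;
           }
       }
   
       /**
        * Keeps the last entry per PK/SK pair, so a batch of 25 entries with
        * one duplicated key yields 24 requests, as in the example above.
        */
       static Collection<WriteRequest> deduplicate(List<WriteRequest> batch) {
           Map<String, WriteRequest> accumulator = new LinkedHashMap<>();
           for (WriteRequest request : batch) {
               accumulator.put(request.partitionKey + "/" + request.sortKey, request);
           }
           return accumulator.values();
       }
   }
   ```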
   
   But if two entries with the same PK/SK end up in two different in-flight batch requests, this is not a problem, as deduplication then happens on the DynamoDB side.
   
   If the order of duplicate/update entries matters, I think the batch sink cannot be used anyway, as ordering is mixed up by partial/full retries and by parallelism in the sink. In that case it is up to the user to deduplicate before the sink.
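   
   As one possible way a user could deduplicate upstream of the sink (illustrative only: the `Order` type and the `orders` stream are made up, and a one-second processing-time window is just one option), something like:
   
   ```java
   import org.apache.flink.streaming.api.datastream.DataStream;
   import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
   import org.apache.flink.streaming.api.windowing.time.Time;
   
   /** Illustrative record type; not part of the connector. */
   public class Order {
       public String primaryKey;
       public String payload;
   
       public String getPrimaryKey() {
           return primaryKey;
       }
   }
   
   // Keep only the latest record per primary key within a short
   // processing-time window before handing records to the sink.
   DataStream<Order> deduplicated =
           orders.keyBy(Order::getPrimaryKey)
                   .window(TumblingProcessingTimeWindows.of(Time.seconds(1)))
                   .reduce((first, second) -> second); // last write wins
   ```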
   
   I may have misunderstood the question. :)


