robg-eb commented on code in PR #152:
URL: https://github.com/apache/flink-connector-aws/pull/152#discussion_r1765702206


##########
flink-connector-aws/flink-connector-dynamodb/src/main/java/org/apache/flink/connector/dynamodb/table/DynamoDbDynamicSinkFactory.java:
##########
@@ -58,6 +58,17 @@ public DynamicTableSink createDynamicTableSink(Context context) {
                         .setDynamoDbClientProperties(
                                 dynamoDbConfiguration.getSinkClientProperties());
 
+        if (catalogTable.getResolvedSchema().getPrimaryKey().isPresent()) {
+            builder =
+                    builder.setPrimaryKeys(
+                            new HashSet<>(

Review Comment:
   @nicusX - While a dedicated `PrimaryKey` object with two fields, _partitionKey_ and _sortKey_, might work for the DataStream API if I were to add it to the Sink model, it is not clear to me how that would translate to the Table API. My intent was to rely on the fact that the Table API / SQL API already supports declaring a `PRIMARY KEY` for exactly this purpose.
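   
   For illustration, here is a minimal sketch of the SQL-side usage I have in mind; the table name, columns, and connector option values are hypothetical, not taken from this PR. The `PRIMARY KEY ... NOT ENFORCED` constraint declared in the DDL is what the factory change above reads back through `catalogTable.getResolvedSchema().getPrimaryKey()`:
   
   ```java
   import org.apache.flink.table.api.EnvironmentSettings;
   import org.apache.flink.table.api.TableEnvironment;
   
   public class PrimaryKeyDdlSketch {
       public static void main(String[] args) {
           TableEnvironment tEnv =
                   TableEnvironment.create(EnvironmentSettings.inStreamingMode());
   
           // Hypothetical table: the composite PRIMARY KEY declared here is
           // resolved by the planner and surfaces in DynamoDbDynamicSinkFactory
           // via catalogTable.getResolvedSchema().getPrimaryKey().
           tEnv.executeSql(
                   "CREATE TABLE Orders ("
                           + "  user_id STRING,"
                           + "  order_id STRING,"
                           + "  amount DOUBLE,"
                           + "  PRIMARY KEY (user_id, order_id) NOT ENFORCED"
                           + ") WITH ("
                           + "  'connector' = 'dynamodb',"
                           + "  'table-name' = 'orders',"
                           + "  'aws.region' = 'us-east-1'"
                           + ")");
       }
   }
   ```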
   
   I don't see a reason why we would need to separate the partition key and sort key for the purpose of identifying the primary key here. In fact, separating them would also introduce a naming collision, since the current Table API / SQL connector already supports passing in a `PARTITIONED BY` clause, adding to potential confusion.
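   
   To make that collision concrete, here is a hedged sketch (continuing from the snippet above, with the same hypothetical names) of the existing `PARTITIONED BY` usage; those keys drive the connector's sink partitioning and are a separate concept from DynamoDB's partition (hash) key:
   
   ```java
           // PARTITIONED BY below is the existing Table API / SQL clause, not
           // the DynamoDB partition key -- reusing "partition key" naming for
           // the primary key would blur that distinction.
           tEnv.executeSql(
                   "CREATE TABLE OrdersPartitioned ("
                           + "  user_id STRING,"
                           + "  order_id STRING,"
                           + "  amount DOUBLE"
                           + ") PARTITIONED BY (user_id) WITH ("
                           + "  'connector' = 'dynamodb',"
                           + "  'table-name' = 'orders',"
                           + "  'aws.region' = 'us-east-1'"
                           + ")");
   ```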


