[ https://issues.apache.org/jira/browse/FLINK-4582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16698252#comment-16698252 ]
ASF GitHub Bot commented on FLINK-4582:
---------------------------------------
tweise commented on a change in pull request #6968: [FLINK-4582] [kinesis]
Consuming data from DynamoDB streams to flink
URL: https://github.com/apache/flink/pull/6968#discussion_r236085962
##########
File path:
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/internals/DynamodbStreamsDataFetcher.java
##########
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.kinesis.internals;
+
+import org.apache.flink.api.common.functions.RuntimeContext;
+import org.apache.flink.streaming.api.functions.source.SourceFunction;
+import org.apache.flink.streaming.connectors.kinesis.KinesisShardAssigner;
+import org.apache.flink.streaming.connectors.kinesis.metrics.ShardMetricsReporter;
+import org.apache.flink.streaming.connectors.kinesis.model.DynamodbStreamsShardHandle;
+import org.apache.flink.streaming.connectors.kinesis.model.SequenceNumber;
+import org.apache.flink.streaming.connectors.kinesis.model.StreamShardHandle;
+import org.apache.flink.streaming.connectors.kinesis.proxy.DynamodbStreamsProxy;
+import org.apache.flink.streaming.connectors.kinesis.serialization.KinesisDeserializationSchema;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+import java.util.concurrent.atomic.AtomicReference;
+
+import static org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants.DEFAULT_DYNAMODB_STREAMS_SHARDID_FORMAT_CHECK;
+import static org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants.DYNAMODB_STREAMS_SHARDID_FORMAT_CHECK;
+
+/**
+ * Dynamodb streams data fetcher.
+ * @param <T> type of fetched data.
+ */
+public class DynamodbStreamsDataFetcher<T> extends KinesisDataFetcher<T> {
+ private boolean shardIdFormatCheck = false;
+
+ /**
+ * Constructor.
+ *
+ * @param streams list of streams to fetch data
+ * @param sourceContext source context
+ * @param runtimeContext runtime context
+ * @param configProps config properties
+ * @param deserializationSchema deserialization schema
+ * @param shardAssigner shard assigner
+ */
+ public DynamodbStreamsDataFetcher(List<String> streams,
+ SourceFunction.SourceContext<T> sourceContext,
+ RuntimeContext runtimeContext,
+ Properties configProps,
+ KinesisDeserializationSchema<T> deserializationSchema,
+ KinesisShardAssigner shardAssigner) {
+
+ super(streams,
+ sourceContext,
+ sourceContext.getCheckpointLock(),
+ runtimeContext,
+ configProps,
+ deserializationSchema,
+ shardAssigner,
+ null,
+ new AtomicReference<>(),
+ new ArrayList<>(),
+ createInitialSubscribedStreamsToLastDiscoveredShardsState(streams),
+ // use DynamodbStreamsProxy
+ DynamodbStreamsProxy::create);
+
+ shardIdFormatCheck = Boolean.valueOf(configProps.getProperty(
+ DYNAMODB_STREAMS_SHARDID_FORMAT_CHECK,
+ DEFAULT_DYNAMODB_STREAMS_SHARDID_FORMAT_CHECK));
+ }
+
+ /**
+ * Updates the last discovered shard of a subscribed stream; only updates if the update is valid.
+ */
+ @Override
+ public void advanceLastDiscoveredShardOfStream(String stream, String shardId) {
Review comment:
Rather than duplicating complete logic from the base class, can we just
extract what is unique to DynamoDB? That might also eliminate the need to
expose `subscribedStreamsToLastDiscoveredShardIds`?
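The refactor the reviewer suggests could look roughly like the following sketch. To be clear, the class and method names below are illustrative, not the actual `KinesisDataFetcher` API: the idea is that the base class keeps the shard-advancement logic and exposes a single validity hook, so the DynamoDB subclass overrides only what is unique to it and no internal state needs to be exposed.

```java
// Hypothetical sketch of the reviewer's suggestion; names are illustrative
// and not taken from the real Flink Kinesis connector classes.
public class ShardUpdateSketch {

	/** Stands in for the base fetcher: it owns the update loop. */
	static class BaseFetcher {
		private String lastDiscoveredShardId;

		/** Template method: the loop stays here, only validation is pluggable. */
		public final void advanceLastDiscoveredShard(String shardId) {
			if (isValidShardUpdate(lastDiscoveredShardId, shardId)) {
				lastDiscoveredShardId = shardId;
			}
		}

		/** The single hook a subclass overrides; default accepts newer ids. */
		protected boolean isValidShardUpdate(String last, String candidate) {
			return last == null || candidate.compareTo(last) > 0;
		}

		public String lastShardId() {
			return lastDiscoveredShardId;
		}
	}

	/** Stands in for the DynamoDB fetcher: adds only the format check. */
	static class DynamoFetcher extends BaseFetcher {
		@Override
		protected boolean isValidShardUpdate(String last, String candidate) {
			// Illustrative format check; the real one is configurable via
			// DYNAMODB_STREAMS_SHARDID_FORMAT_CHECK.
			return candidate.startsWith("shardId-")
					&& super.isValidShardUpdate(last, candidate);
		}
	}

	public static void main(String[] args) {
		DynamoFetcher fetcher = new DynamoFetcher();
		fetcher.advanceLastDiscoveredShard("bogus-id");          // rejected: wrong format
		fetcher.advanceLastDiscoveredShard("shardId-000000002"); // accepted
		System.out.println(fetcher.lastShardId());
	}
}
```

With this shape, `advanceLastDiscoveredShardOfStream` would not need to be duplicated in the subclass at all.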
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
> Allow FlinkKinesisConsumer to adapt for AWS DynamoDB Streams
> ------------------------------------------------------------
>
> Key: FLINK-4582
> URL: https://issues.apache.org/jira/browse/FLINK-4582
> Project: Flink
> Issue Type: New Feature
> Components: Kinesis Connector, Streaming Connectors
> Reporter: Tzu-Li (Gordon) Tai
> Assignee: Ying Xu
> Priority: Major
> Labels: pull-request-available
>
> AWS DynamoDB is a NoSQL database service that has a CDC-like (change data
> capture) feature called DynamoDB Streams
> (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html),
> which is a stream feed of item-level table activities.
> The DynamoDB Streams shard abstraction follows that of Kinesis Streams with
> only a slight difference in resharding behaviours, so it is possible to build
> on the internals of our Flink Kinesis Consumer for an exactly-once DynamoDB
> Streams source.
> I propose an API something like this:
> {code}
> DataStream dynamoItemsCdc =
> FlinkKinesisConsumer.asDynamoDBStream(tableNames, schema, config)
> {code}
> The feature adds more connectivity to popular AWS services for Flink, and
> combining what Flink has for exactly-once semantics, out-of-core state
> backends, and queryable state with CDC can have very strong use cases. For
> this feature there should only be an extra dependency to the AWS Java SDK for
> DynamoDB, which has Apache License 2.0.
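The "slight difference in resharding behaviours" mentioned above shows up in the shard ids themselves: DynamoDB Streams shard ids appear to embed a 20-digit creation timestamp (e.g. `shardId-00000001536019862545-ab12cd34`), which is presumably what a format check such as `DYNAMODB_STREAMS_SHARDID_FORMAT_CHECK` relies on. A minimal sketch under that assumption (the class and regex below are illustrative, not part of the connector):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: extract the embedded creation timestamp from a
// DynamoDB Streams shard id, assuming the "shardId-<20 digits>-<suffix>"
// format. Not the connector's actual code.
public class DynamoShardIdSketch {
	private static final Pattern SHARD_ID =
			Pattern.compile("shardId-(\\d{20})-[a-zA-Z0-9]+");

	/** Returns the embedded creation time in epoch millis, or -1 if the format is unexpected. */
	public static long creationMillis(String shardId) {
		Matcher m = SHARD_ID.matcher(shardId);
		return m.matches() ? Long.parseLong(m.group(1)) : -1L;
	}

	public static void main(String[] args) {
		// Leading zeros are harmless to Long.parseLong.
		System.out.println(creationMillis("shardId-00000001536019862545-ab12cd34"));
		System.out.println(creationMillis("not-a-shard-id"));
	}
}
```

Because the timestamp is part of the id, newer shards sort after older ones, which is one reason the Kinesis fetcher internals can be reused with only a format-aware comparison added.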
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)