This is an automated email from the ASF dual-hosted git repository.

nizhikov pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/ignite.git


The following commit(s) were added to refs/heads/master by this push:
     new 3864f18a72e IGNITE-18515 CDC: add documentation about metadata 
replication (#10481)
3864f18a72e is described below

commit 3864f18a72ea0e29c967509362654bc11328864c
Author: Ilya Shishkov <[email protected]>
AuthorDate: Mon Jan 16 15:59:05 2023 +0300

    IGNITE-18515 CDC: add documentation about metadata replication (#10481)
---
 .../change-data-capture-extensions.adoc             | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git 
a/docs/_docs/extensions-and-integrations/change-data-capture-extensions.adoc 
b/docs/_docs/extensions-and-integrations/change-data-capture-extensions.adoc
index b9e248cf053..736f754eb3f 100644
--- a/docs/_docs/extensions-and-integrations/change-data-capture-extensions.adoc
+++ b/docs/_docs/extensions-and-integrations/change-data-capture-extensions.adoc
@@ -25,6 +25,8 @@ 
link:https://github.com/apache/ignite-extensions/tree/master/modules/cdc-ext[Cha
 
 NOTE: For each cache replicated between clusters 
link:https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/version/CacheVersionConflictResolver.java[CacheVersionConflictResolver]
 should be defined.
 
+NOTE: All implementations of CDC replication support replication of link:https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/binary/BinaryType.html[BinaryTypes] and link:https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cdc/TypeMapping.html[TypeMappings].
+
 == Ignite to Java Thin Client CDC streamer
 This streamer starts link:thin-clients/java-thin-client[Java Thin Client] 
which connects to destination cluster.
 After connection is established, all changes captured by CDC will be 
replicated to destination cluster.
@@ -92,6 +94,8 @@ This way to replicate changes between clusters requires 
setting up two applicati
 
 NOTE: Instances of `ignite-cdc.sh` with configured streamer should be started 
on each server node of source cluster to capture all changes.
 
+IMPORTANT: CDC through Kafka requires a _metadata topic with a single partition_ to guarantee sequential ordering.
+
 image:../../assets/images/integrations/CDC-ignite2kafka.svg[]
 
 === IgniteToKafkaCdcStreamer Configuration
@@ -101,8 +105,9 @@ 
image:../../assets/images/integrations/CDC-ignite2kafka.svg[]
 |Name |Description | Default value
 | `caches` | Set of cache names to replicate. | null
 | `kafkaProperties` | Kafka producer properties. | null
-| `topic` | Name of the Kafka topic. | null
-| `kafkaParts` | Number of Kafka topic partitions. | null
+| `topic` | Name of the Kafka topic for CDC events. | null
+| `kafkaParts` | Number of Kafka partitions in CDC events topic. | null
+| `metadataTopic` | Name of the topic for replication of BinaryTypes and TypeMappings. | null
 | `onlyPrimary` | Flag to handle changes only on primary node. | `false`
 | `maxBatchSize` | Maximum size of concurrently produced Kafka records. When 
streamer reaches this number, it waits for Kafka acknowledgements, and then 
commits CDC offset. | `1024`
 | `kafkaRequestTimeout` | Kafka request timeout in milliseconds.  | `3000`
@@ -160,9 +165,11 @@ Kafka to Ignite configuration file should contain the 
following beans that will
 |===
 |Name |Description | Default value
 | `caches` | Set of cache names to replicate. | null
-| `topic` | Name of the Kafka topic. | null
-| `kafkaPartsFrom` | Lower Kafka partitions number (inclusive). | -1
-| `kafkaPartsTo` | Lower Kafka partitions number (exclusive). | -1
+| `topic` | Name of the Kafka topic for CDC events. | null
+| `kafkaPartsFrom` | Lower Kafka partitions number (inclusive) for CDC events 
topic. | -1
+| `kafkaPartsTo` | Upper Kafka partitions number (exclusive) for CDC events topic. | -1
+| `metadataTopic` | Name of the topic for replication of BinaryTypes and TypeMappings. | null
+| `metadataConsumerGroup` | Group for `KafkaConsumer`, which polls from the metadata topic. | ignite-metadata-update-<kafkaPartsFrom>-<kafkaPartsTo>
 | `kafkaRequestTimeout` | Kafka request timeout in milliseconds.  | `3000`
 | `maxBatchSize` | Maximum number of events to be sent to destination cluster 
in a single batch. | 1024
 | `threadCount` | Count of threads to proceed consumers. Each thread poll 
records from dedicated partitions in round-robin manner. | 16
@@ -170,7 +177,7 @@ Kafka to Ignite configuration file should contain the 
following beans that will
 
 ==== Logging
 
-`kakfa-to-ignite.sh` uses the same logging configuration as the Ignite node 
does. The only difference is that the log is written in the 
"kafka-ignite-streamer.log" file.
+`kafka-to-ignite.sh` uses the same logging configuration as the Ignite node 
does. The only difference is that the log is written in the 
"kafka-ignite-streamer.log" file.
 
 == CacheVersionConflictResolver implementation
 
@@ -223,4 +230,4 @@ Configuration is done via Ignite node plugin:
         </property>
     </bean>
 </property>
-```
\ No newline at end of file
+```
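For context, a streamer bean using the `IgniteToKafkaCdcStreamer` properties documented in the updated table might look like the sketch below. This is an illustrative fragment, not part of the patch: the class and package name are assumed from the cdc-ext module, and the cache name, topic names, and partition count are placeholders.

```xml
<!-- Sketch of an IgniteToKafkaCdcStreamer bean; property names follow the table above. -->
<!-- Class/package name assumed from the ignite-extensions cdc-ext module; verify against your version. -->
<bean id="cdc.streamer" class="org.apache.ignite.cdc.kafka.IgniteToKafkaCdcStreamer">
    <!-- Caches to replicate (placeholder name). -->
    <property name="caches">
        <list>
            <value>my-replicated-cache</value>
        </list>
    </property>

    <!-- Reference to a java.util.Properties bean with Kafka producer settings. -->
    <property name="kafkaProperties" ref="kafkaProperties"/>

    <!-- Topic and partition count for CDC events (placeholders). -->
    <property name="topic" value="ignite-cdc-events"/>
    <property name="kafkaParts" value="16"/>

    <!-- New in this change: topic for BinaryType/TypeMapping replication.
         Per the IMPORTANT note above, it must have exactly one partition. -->
    <property name="metadataTopic" value="ignite-cdc-metadata"/>

    <property name="onlyPrimary" value="false"/>
</bean>
```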
