This is an automated email from the ASF dual-hosted git repository.
sk0x50 pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/ignite-3.git
The following commit(s) were added to refs/heads/main by this push:
new 192ff91d8b IGNITE-18455 Added README.md and described threading model for the distribution zones. Fixes #1635
192ff91d8b is described below
commit 192ff91d8b7491ffcee105d1db180cd3585b6429
Author: Mirza Aliev <[email protected]>
AuthorDate: Wed Feb 8 11:10:37 2023 +0200
IGNITE-18455 Added README.md and described threading model for the distribution zones. Fixes #1635
Signed-off-by: Slava Koptilin <[email protected]>
---
modules/distribution-zones/README.md | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/modules/distribution-zones/README.md b/modules/distribution-zones/README.md
new file mode 100644
index 0000000000..a567805b53
--- /dev/null
+++ b/modules/distribution-zones/README.md
@@ -0,0 +1,22 @@
+This module provides the implementation of Distribution Zones.
+
+## Brief overview
+Data partitioning in Apache Ignite is controlled by the affinity function that determines both the mapping between keys and partitions
+and the mapping between partitions and nodes. Specifying an affinity function along with a replica factor and partition count is sometimes not enough,
+meaning that explicit fine-grained tuning is required in order to control what data goes where.
+Distribution zones provide the aforementioned configuration possibilities, which eventually make it possible to achieve the following goals:
+
++ The ability to trigger data rebalance upon adding and removing cluster nodes.
++ The ability to delay rebalance until the topology stabilizes.
++ The ability to conveniently specify data distribution adjustment rules for tables.
+
+## Threading
+Every Ignite node has one scheduled thread pool executor whose threads follow the naming format `%{consistentId}%dst-zones-scheduler`.
+
+This thread pool is responsible for scheduling tasks that add or remove nodes from a zone's data nodes field in the meta storage.
+This process is called data nodes' scale up or scale down.
+Every scale up or scale down intent is scheduled in the thread pool with a timeout that is specified in the zone's configuration.
+After the timeout expires, the new data nodes of the zone are propagated to the meta storage.
+If a new intent appears while the old intent has not started yet,
+the old intent is canceled and replaced with the new one.
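
The cancel-and-reschedule behavior described above can be sketched with a plain `ScheduledExecutorService`. This is an illustrative sketch only, not the actual Ignite implementation: the class and method names (`ZoneScheduler`, `scheduleScaleUp`) and the single-threaded executor are assumptions made for the example.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of the "new intent replaces a not-yet-started old intent" pattern.
// ZoneScheduler and scaleUpTimeoutMs are hypothetical names, not Ignite API.
class ZoneScheduler {
    private final ScheduledExecutorService executor =
            Executors.newSingleThreadScheduledExecutor(r -> new Thread(r, "dst-zones-scheduler"));

    private ScheduledFuture<?> pendingIntent;

    /** Schedules a scale up intent with the zone's timeout; cancels a still-pending older intent. */
    synchronized void scheduleScaleUp(Runnable intent, long scaleUpTimeoutMs) {
        if (pendingIntent != null && !pendingIntent.isDone()) {
            // The old intent has not started yet, so it is canceled and replaced.
            pendingIntent.cancel(false);
        }
        pendingIntent = executor.schedule(intent, scaleUpTimeoutMs, TimeUnit.MILLISECONDS);
    }

    void shutdown() {
        executor.shutdown();
    }
}
```

With this shape, scheduling a second intent before the first one's timeout fires results in only the second intent propagating data nodes to the meta storage.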
+