[
https://issues.apache.org/jira/browse/IGNITE-23466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Alexander Lapin updated IGNITE-23466:
-------------------------------------
Description:
h3. Motivation
In order to simplify DistributionZone dataNodes processing and meet meta
storage compaction requirements, it's required not to use MG versioning.
Generally, the following logic should be implemented.
* On topology event
** Check whether we already have a corresponding pending scaleUp/Down timer.
*** If not, a serialised timer is written to MG synchronously. The timer's data
bag is expected to contain:
**** Awake/DataNodes atomic switch timestamp.
**** Data nodes to apply.
*** Else, rewrite the existing timer, updating the Awake/DataNodes timestamp
along with the data nodes to apply. That should be done atomically, comparing
the calculated topology event timestamp against the already existing awake one.
** Timer is scheduled.
* On timer firing
** Atomically remove the timer and add a new timestamp -> dataNodes entry to
the dataNodes map in MG and in the DZM volatile cache.
* On node restart
** Restore the corresponding volatile state from the MG.
* On filter change
** Immediately trigger the timer.
* On timeout adjustment
** Rewrite existing timers, if any.
There's a chance that topology and node attributes should also be internally
versioned.
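The topology-event handling above can be sketched as follows. This is a minimal, hypothetical illustration, not actual DZM/MG code: the ScaleTimer/TimerStore names are invented, a plain in-memory map stands in for the serialised timer written to MG, and plain longs stand in for hybrid timestamps. The key point it shows is the atomic create-or-rewrite of the pending timer, keeping the entry with the newest awake timestamp.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical pending scaleUp/scaleDown timer: the "data bag" carries the
// awake/dataNodes atomic switch timestamp and the data nodes to apply.
class ScaleTimer {
    final long awakeTimestamp;
    final Set<String> dataNodes;

    ScaleTimer(long awakeTimestamp, Set<String> dataNodes) {
        this.awakeTimestamp = awakeTimestamp;
        this.dataNodes = dataNodes;
    }
}

class TimerStore {
    // zoneId -> pending timer; stands in for the serialised timer in MG.
    private final ConcurrentMap<Integer, ScaleTimer> timers = new ConcurrentHashMap<>();

    /**
     * On a topology event: write the timer if absent; otherwise rewrite it
     * atomically, but only when the calculated event timestamp is not older
     * than the already existing awake one.
     */
    void onTopologyEvent(int zoneId, long eventTimestamp, Set<String> nodes) {
        timers.merge(zoneId, new ScaleTimer(eventTimestamp, nodes),
                (existing, candidate) ->
                        candidate.awakeTimestamp >= existing.awakeTimestamp ? candidate : existing);
    }

    ScaleTimer timer(int zoneId) {
        return timers.get(zoneId);
    }
}
```

In the real mechanism the compare-and-rewrite would be expressed as a conditional MG invoke rather than an in-memory merge; the sketch only illustrates the intended ordering semantics.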
h3. Definition of Done
* DZM manages dataNodes and the related history internally in the last MG key,
as the Catalog does.
* A new, robust mechanism of atomic timer-to-dataNodes application is used.
Details are described in the Motivation section.
was:
h3. Motivation
In order to simplify DistributionZone dataNodes processing and meet meta
storage compaction requirements, it's required not to use MG versioning.
Generally, the following logic should be implemented.
* On topology event
** Check whether we already have a corresponding pending scaleUp/Down timer.
*** If not, a serialised timer is written to MG synchronously. The timer's data
bag is expected to contain:
**** Awake/DataNodes atomic switch timestamp.
**** Data nodes to apply.
*** Else, rewrite the existing timer, updating the Awake/DataNodes timestamp
along with the data nodes to apply. That should be done atomically, comparing
the calculated topology event timestamp against the already existing awake one.
** Timer is scheduled.
* On timer firing
** Atomically remove the timer and add a new timestamp -> dataNodes entry to
the dataNodes map in MG and in the DZM volatile cache.
* On dataNodes(timestamp, zone). Note that dataNodes() should be refactored to
use a timestamp instead of a revision and catalog version:
** Check whether there are non-awakened timers with awakeTimestamp less than or
equal to the requested one. If so, "fire" the timer.
** Iterate over the existing timestamp -> dataNodes entries and find the
closest one to the left.
* On node restart
** Restore the corresponding volatile state from the MG.
* On filter change
** Immediately trigger the timer.
* On timeout adjustment
** Rewrite existing timers, if any.
There's a chance that topology and node attributes should also be internally
versioned.
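The dataNodes(timestamp, zone) lookup described above can be sketched as follows. Again a hypothetical illustration with invented names (DataNodesHistory, scheduleTimer) and plain longs for hybrid timestamps: it fires any pending timer whose awake timestamp is at or before the requested one, then resolves the closest timestamp -> dataNodes entry to the left via a floor lookup.

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.Set;
import java.util.TreeMap;

class DataNodesHistory {
    // timestamp -> data nodes, ordered by timestamp (the per-zone history).
    private final TreeMap<Long, Set<String>> history = new TreeMap<>();

    // awakeTimestamp -> data nodes to apply (the non-awakened timers).
    private final TreeMap<Long, Set<String>> pendingTimers = new TreeMap<>();

    void scheduleTimer(long awakeTimestamp, Set<String> nodes) {
        pendingTimers.put(awakeTimestamp, nodes);
    }

    /** Resolve the data nodes effective at {@code timestamp}. */
    Set<String> dataNodes(long timestamp) {
        // "Fire" every non-awakened timer with awakeTimestamp <= the requested one:
        // move its entry from the pending timers into the history.
        NavigableMap<Long, Set<String>> due = pendingTimers.headMap(timestamp, true);
        history.putAll(due);
        due.clear();

        // Find the closest timestamp -> dataNodes entry to the left.
        Map.Entry<Long, Set<String>> entry = history.floorEntry(timestamp);
        return entry == null ? Set.of() : entry.getValue();
    }
}
```

A TreeMap's floorEntry gives the "closest to the left" semantics directly; the real implementation would back this volatile state with the MG history key.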
h3. Definition of Done
* DZM dataNodes() uses a HybridTimestamp instead of the MG revision and catalog
version.
* DZM manages dataNodes and the related history internally in the last MG key,
as the Catalog does.
* A new, robust mechanism of atomic timer-to-dataNodes application is used.
Details are described in the Motivation section.
> Refactor DZM internals
> ----------------------
>
> Key: IGNITE-23466
> URL: https://issues.apache.org/jira/browse/IGNITE-23466
> Project: Ignite
> Issue Type: Improvement
> Reporter: Alexander Lapin
> Assignee: Alexander Lapin
> Priority: Major
> Labels: ignite-3
>
> h3. Motivation
> In order to simplify DistributionZone dataNodes processing and meet meta
> storage compaction requirements, it's required not to use MG versioning.
> Generally, the following logic should be implemented.
> * On topology event
> ** Check whether we already have a corresponding pending scaleUp/Down timer.
> *** If not, a serialised timer is written to MG synchronously. The timer's
> data bag is expected to contain:
> **** Awake/DataNodes atomic switch timestamp.
> **** Data nodes to apply.
> *** Else, rewrite the existing timer, updating the Awake/DataNodes timestamp
> along with the data nodes to apply. That should be done atomically, comparing
> the calculated topology event timestamp against the already existing awake
> one.
> ** Timer is scheduled.
> * On timer firing
> ** Atomically remove the timer and add a new timestamp -> dataNodes entry to
> the dataNodes map in MG and in the DZM volatile cache.
> * On node restart
> ** Restore the corresponding volatile state from the MG.
> * On filter change
> ** Immediately trigger the timer.
> * On timeout adjustment
> ** Rewrite existing timers, if any.
> There's a chance that topology and node attributes should also be internally
> versioned.
> h3. Definition of Done
> * DZM manages dataNodes and the related history internally in the last MG
> key, as the Catalog does.
> * A new, robust mechanism of atomic timer-to-dataNodes application is used.
> Details are described in the Motivation section.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)