[
https://issues.apache.org/jira/browse/IGNITE-21594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Kirill Gusakov updated IGNITE-21594:
------------------------------------
Description:
*Motivation*

To implement consistent behaviour for zone-based collocation (https://issues.apache.org/jira/browse/IGNITE-19170) we need to introduce a new storage configuration approach. It must guarantee that all tables belonging to the same zone can be deployed on any node of that zone.

To support this guarantee we introduce a new abstraction: storage profiles.

In general, the main idea is the following:
- Each table has exactly one storage profile. If a storage profile is not specified, the first storage profile from the zone's list is used.
- Each zone has a list of supported storage profiles.
- Each node provides a list of supported storage profiles.
- In addition to the usual zone filter, the zone filters nodes by their storage profiles and ensures that a node supports the full list of the zone's storage profiles (see the sketch after this list).
- Each node supports the 'default' storage profile, and the default zone uses the 'default' storage profile as well. So, out of the box, a user can create tables in the default zone without knowing anything about storage profiles.
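To make the filtering rule above concrete, here is a minimal sketch in plain Java of the check that a node must support every storage profile declared by the zone. The ZoneDescriptor and NodeDescriptor holders are hypothetical, illustration-only types, not part of the actual Ignite API.
{code:java}
import java.util.List;
import java.util.Set;

/** Hypothetical holders for illustration only; these are not real Ignite classes. */
record ZoneDescriptor(String name, List<String> storageProfiles) {}
record NodeDescriptor(String name, Set<String> storageProfiles) {}

class StorageProfileFilter {
    /** A node matches only if it supports the full list of the zone's storage profiles. */
    static boolean matches(ZoneDescriptor zone, NodeDescriptor node) {
        return node.storageProfiles().containsAll(zone.storageProfiles());
    }

    /** Applied on top of the usual zone filter: keeps only nodes where all zone profiles are available. */
    static List<NodeDescriptor> eligibleNodes(ZoneDescriptor zone, List<NodeDescriptor> candidates) {
        return candidates.stream().filter(node -> matches(zone, node)).toList();
    }

    public static void main(String[] args) {
        ZoneDescriptor zone = new ZoneDescriptor("zone1", List.of("lru_rocks", "segmented_aipersist"));
        NodeDescriptor node = new NodeDescriptor("node1", Set.of("default", "lru_rocks", "segmented_aipersist"));
        System.out.println(eligibleNodes(zone, List.of(node))); // the node covers every profile the zone declares
    }
}
{code}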
From the configuration point of view, the node storage configuration will change in the following way:
{code:java}
rocksDb:
  flushDelayMillis: 1000
  regions:
    lruRegion:
      cache: lru
      size: 256
    clockRegion:
      cache: clock
      size: 512
aipersist:
  checkpoint:
    checkpointDelayMillis: 100
  regions:
    segmentedRegion:
      replacementMode: SEGMENTED_LRU
    clockRegion:
      replacementMode: CLOCK
{code}
to
{code:java}
storages:
  engines:
    aipersist:
      checkpoint:
        checkpointDelayMillis: 100
    rocksDb:
      flushDelayMillis: 1000
  profiles:
    lru_rocks:
      engine: rocksDb
      cache: lru
      size: 256
    clock_rocks:
      engine: rocksDb
      cache: clock
      size: 512
    segmented_aipersist:
      engine: aipersist
      replacementMode: SEGMENTED_LRU
    clock_aipersist:
      engine: aipersist
      replacementMode: CLOCK
{code}
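The key point of the new layout is that engine-wide settings live under storages.engines, while storages.profiles defines named, engine-tagged profiles that zones and tables refer to by name. As a minimal sketch (plain Java with hypothetical types, not the real configuration framework), a node could validate at startup that every declared profile references a configured engine:
{code:java}
import java.util.Map;
import java.util.Set;

/** Illustration only: a simplified in-memory view of the 'storages' section above. */
class StorageProfilesCheck {
    /** Fails if a profile under 'storages.profiles' points to an engine missing from 'storages.engines'. */
    static void validate(Set<String> configuredEngines, Map<String, String> profileToEngine) {
        profileToEngine.forEach((profile, engine) -> {
            if (!configuredEngines.contains(engine)) {
                throw new IllegalStateException(
                        "Storage profile '" + profile + "' references unknown engine '" + engine + "'");
            }
        });
    }

    public static void main(String[] args) {
        // Mirrors the example configuration above: two engines, four profiles.
        validate(
                Set.of("aipersist", "rocksDb"),
                Map.of(
                        "lru_rocks", "rocksDb",
                        "clock_rocks", "rocksDb",
                        "segmented_aipersist", "aipersist",
                        "clock_aipersist", "aipersist"));
    }
}
{code}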
*Definition of Done*

- The storage configuration structure is reworked according to the feature design (please look at the attached diagram for an initial understanding of the resulting configuration structure).
- All legacy abstractions like the ENGINE and DATAREGION parameters are removed from the DDLs (an illustrative sketch follows this list).
- The zone-table-node_configs triad works as expected according to the storage profile design document.
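To illustrate the last two points, the snippet below sketches the intended DDL shape: a zone declares the storage profiles it requires and a table picks one of them, with no ENGINE or DATAREGION parameters. The exact SQL grammar, the STORAGE_PROFILES/STORAGE_PROFILE/PRIMARY_ZONE parameter names, and the connection URL are assumptions for illustration, not the final syntax; plain JDBC is used only to keep the sketch self-contained.
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

class StorageProfileDdlSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative connection string; requires the Ignite JDBC driver on the classpath.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800");
             Statement stmt = conn.createStatement()) {
            // Hypothetical syntax: the zone lists the storage profiles its nodes must support.
            stmt.execute("CREATE ZONE IF NOT EXISTS fast_zone "
                    + "WITH STORAGE_PROFILES='segmented_aipersist,lru_rocks'");
            // Hypothetical syntax: the table picks one profile from its zone,
            // replacing the legacy ENGINE/DATAREGION parameters.
            stmt.execute("CREATE TABLE accounts (id INT PRIMARY KEY, balance DOUBLE) "
                    + "WITH PRIMARY_ZONE='FAST_ZONE', STORAGE_PROFILE='segmented_aipersist'");
        }
    }
}
{code}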
> Storage profiles
> -----------------
>
> Key: IGNITE-21594
> URL: https://issues.apache.org/jira/browse/IGNITE-21594
> Project: Ignite
> Issue Type: Improvement
> Reporter: Kirill Gusakov
> Priority: Major
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)