[ 
https://issues.apache.org/jira/browse/IGNITE-21738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov updated IGNITE-21738:
--------------------------------------
    Description: 
IGNITE-21585 fixed two potential races:
1. ClientPrimaryReplicaTracker might update `primaryReplicas` after the table
is destroyed concurrently.
The fix idea was to forbid caching data for destroyed tables. However, the
cache now contains placeholders for all tables/partitions even if they are
never requested (see the first sketch below).
2. TableManager might register an index (see `registerIndexesToTable`) in a
table while the LWM is rising concurrently.
The `registerIndexesToTable` execution was delegated to LowWatermark to make
sure it runs under the LWM lock (see the second sketch below).
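
A minimal sketch of the "forbid caching for destroyed tables" idea from
item 1, assuming illustrative class and field names rather than the actual
Ignite 3 code:

{code:java}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for ClientPrimaryReplicaTracker's cache.
class PrimaryReplicaCache {
    private final Map<Integer, String> primaryReplicas = new ConcurrentHashMap<>();
    private final Set<Integer> destroyedTables = ConcurrentHashMap.newKeySet();

    // Event handler; may run concurrently with onTableDestroyed().
    void onPrimaryReplicaElected(int tableId, String nodeName) {
        // compute() makes the destroyed-check and the write atomic per key,
        // so a late update cannot resurrect an entry for a destroyed table.
        primaryReplicas.compute(tableId, (id, old) ->
                destroyedTables.contains(id) ? null : nodeName);
    }

    void onTableDestroyed(int tableId) {
        destroyedTables.add(tableId);
        primaryReplicas.remove(tableId);
    }
}
{code}

The drawback noted above is that such tracking structures end up holding
entries for every table/partition, whether or not a client ever asks for
them.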

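A sketch of the delegation from item 2: running the caller's task under the
same lock that guards LWM updates linearizes it against a concurrent LWM
raise. The API below is hypothetical, not the real LowWatermark interface:

{code:java}
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.LongConsumer;

class LowWatermark {
    private final ReadWriteLock lwmLock = new ReentrantReadWriteLock();
    private volatile long lwm;

    // Raising the LWM takes the write lock, so no delegated task can
    // observe a half-applied watermark.
    void updateLowWatermark(long newLwm) {
        lwmLock.writeLock().lock();
        try {
            lwm = Math.max(lwm, newLwm);
            // ... drop table/index versions that fell below the new LWM ...
        } finally {
            lwmLock.writeLock().unlock();
        }
    }

    // Callers (e.g. TableManager.registerIndexesToTable) pass their work
    // here instead of reading the LWM themselves; the read lock orders the
    // task against concurrent LWM raises.
    void runUnderLock(LongConsumer task) {
        lwmLock.readLock().lock();
        try {
            task.accept(lwm);
        } finally {
            lwmLock.readLock().unlock();
        }
    }
}
{code}

A proper fix, per this ticket's summary, would apply the same kind of
linearization between LWM updates and Metastorage events.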

  was:
After IGNITE-21585 there are potential races:
1. ClientPrimaryReplicaTracker may update `primaryReplicas` after the table
is destroyed concurrently.
2. TableManager may register an index (see `registerIndexesToTable`) in a
table while the LWM is rising concurrently.


> Proper fix to linearize LWM and Metastorage event.
> --------------------------------------------------
>
>                 Key: IGNITE-21738
>                 URL: https://issues.apache.org/jira/browse/IGNITE-21738
>             Project: Ignite
>          Issue Type: Improvement
>            Reporter: Andrey Mashenkov
>            Priority: Major
>              Labels: ignite-3
>             Fix For: 3.0.0-beta2
>
>
> IGNITE-21585 fixed two potential races:
> 1. ClientPrimaryReplicaTracker might update `primaryReplicas` after the
> table is destroyed concurrently.
> The fix idea was to forbid caching data for destroyed tables. However, the
> cache now contains placeholders for all tables/partitions even if they are
> never requested.
> 2. TableManager might register an index (see `registerIndexesToTable`) in a
> table while the LWM is rising concurrently.
> The `registerIndexesToTable` execution was delegated to LowWatermark to
> make sure it runs under the LWM lock.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
