[jira] [Commented] (IGNITE-20523) .NET: Thin 3.0: ArgumentNullException.ThrowIfNull allocates on value types

2023-10-04 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17772082#comment-17772082
 ] 

Pavel Tupitsyn commented on IGNITE-20523:
-

Merged to main: b616478f97b1973d8f75c88ae55cbb811cadf653

> .NET: Thin 3.0: ArgumentNullException.ThrowIfNull allocates on value types
> --
>
> Key: IGNITE-20523
> URL: https://issues.apache.org/jira/browse/IGNITE-20523
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Minor
>  Labels: .NET, ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> IGNITE-20479 replaced custom null checks with standard 
> *ArgumentNullException.ThrowIfNull*. However, *ThrowIfNull* takes *object*, 
> which involves boxing for value types. Therefore we do heap allocations just 
> to validate arguments in some cases, such as generic record/key/value 
> validation in *KeyValueView* and *RecordView*. Bring back the custom generic 
> validation method to fix this.
> Also, *ToKv* method validates the wrong thing twice:
> {code}
> private static KvPair<TK, TV> ToKv(KeyValuePair<TK, TV> x)
> {
>     ArgumentNullException.ThrowIfNull(x);
>     ArgumentNullException.ThrowIfNull(x);
>     return new(x.Key, x.Value);
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20240) C++ 3.0: Reject Tuples with unmapped fields

2023-10-04 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17772081#comment-17772081
 ] 

Pavel Tupitsyn commented on IGNITE-20240:
-

[~isapego] looks good to me.

> C++ 3.0: Reject Tuples with unmapped fields
> ---
>
> Key: IGNITE-20240
> URL: https://issues.apache.org/jira/browse/IGNITE-20240
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Igor Sapego
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Tuples with unmapped fields should not be allowed in table APIs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20567) Move the 'enabled' flag from the authentication configuration to security

2023-10-04 Thread Ivan Gagarkin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Gagarkin reassigned IGNITE-20567:
--

Assignee: Ivan Gagarkin

> Move the 'enabled' flag from the authentication configuration to security
> -
>
> Key: IGNITE-20567
> URL: https://issues.apache.org/jira/browse/IGNITE-20567
> Project: Ignite
>  Issue Type: Improvement
>  Components: security
>Reporter: Ivan Gagarkin
>Assignee: Ivan Gagarkin
>Priority: Major
>  Labels: ignite-3
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20567) Move the 'enabled' flag from the authentication configuration to security

2023-10-04 Thread Ivan Gagarkin (Jira)
Ivan Gagarkin created IGNITE-20567:
--

 Summary: Move the 'enabled' flag from the authentication 
configuration to security
 Key: IGNITE-20567
 URL: https://issues.apache.org/jira/browse/IGNITE-20567
 Project: Ignite
  Issue Type: Improvement
  Components: security
Reporter: Ivan Gagarkin






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20507) Persistent cache meta is not removed if node filter skips node.

2023-10-04 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17771991#comment-17771991
 ] 

Ignite TC Bot commented on IGNITE-20507:


{panel:title=Branch: [pull/10970/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10970/head] Base: [master] : No new tests 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7362380buildTypeId=IgniteTests24Java8_RunAll]

> Persistent cache meta is not removed if node filter skips node.
> ---
>
> Key: IGNITE-20507
> URL: https://issues.apache.org/jira/browse/IGNITE-20507
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladimir Steshin
>Priority: Major
>  Labels: ise
> Attachments: TestNodeRestartsAfterDeletionOfNodeFilteredCache.java
>
>
> We keep persistent cache meta on a node which is filtered out by the cache node 
> filter. If such a cache is removed, some nodes can retain 'cache_data.dat'. 
> Such nodes can't re-join the cluster because they find this 'cache_data.dat' and 
> offer the cache when joining the cluster. But the cache has been removed: 
> {code:java}
> org.apache.ignite.spi.IgniteSpiException: Joining node has caches with data 
> which are not presented on cluster, it could mean that they were already 
> destroyed, to add the node to cluster - remove directories with the 
> caches[TestDynamicCache]
> {code}
> This happens because we remove persistent cache data in 
> `GridCacheProcessor#prepareCacheStop` looking at `Map<String, 
> GridCacheAdapter> GridCacheProcessor#caches`. But there is no 
> GridCacheAdapter for the cache if the node filter excludes this cache for the 
> current node, even though 'cache_data.dat' exists. 
> The work-around is to acquire a cache proxy on the node for which this cache 
> is excluded:
> {code:java}
> // This fixes the issue!
> // cache = grid(2).cache(cfg.getName());
> {code}
> This creates the proxy and registers the missing GridCacheAdapter.
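The failure mode above can be modeled with a small self-contained sketch (a toy model with invented names, not actual Ignite code): persistent metadata is written on every node, but the cleanup path only runs on nodes where a cache adapter was started.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of the bug (invented names, not actual Ignite code).
public class CacheMetaModel {
    // Started cache adapters on this node (plays the role of GridCacheProcessor#caches).
    final Map<String, Object> adapters = new HashMap<>();

    // Persistent metadata files present on this node's disk ('cache_data.dat').
    final Set<String> persistentMeta = new HashSet<>();

    void createCache(String name, boolean nodeFilterAcceptsThisNode) {
        persistentMeta.add(name);             // meta is written regardless of the node filter
        if (nodeFilterAcceptsThisNode)
            adapters.put(name, new Object()); // adapter is started only on accepted nodes
    }

    // Mirrors the prepareCacheStop behavior: meta is removed only if an adapter is found.
    void destroyCache(String name) {
        if (adapters.remove(name) != null)
            persistentMeta.remove(name);      // never reached on filtered-out nodes
    }
}
```

On a filtered-out node the adapter was never created, so `destroyCache` leaves the stale metadata behind, which is exactly why such a node later offers the destroyed cache on re-join.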



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20506) CacheAtomicityMode#TRANSACTIONAL_SNAPSHOT removal

2023-10-04 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17771932#comment-17771932
 ] 

Ignite TC Bot commented on IGNITE-20506:


{panel:title=Branch: [pull/10964/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10964/head] Base: [master] : No new tests 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7361130buildTypeId=IgniteTests24Java8_RunAll]

> CacheAtomicityMode#TRANSACTIONAL_SNAPSHOT removal
> -
>
> Key: IGNITE-20506
> URL: https://issues.apache.org/jira/browse/IGNITE-20506
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
> Fix For: 2.16
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20397) java.lang.AssertionError: Group of the event is unsupported

2023-10-04 Thread Sergey Uttsel (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Uttsel reassigned IGNITE-20397:
--

Assignee: Sergey Uttsel

> java.lang.AssertionError: Group of the event is unsupported
> ---
>
> Key: IGNITE-20397
> URL: https://issues.apache.org/jira/browse/IGNITE-20397
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Assignee: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> {code:java}
>   java.lang.AssertionError: Group of the event is unsupported 
> [nodeId=<11_part_18/isaat_n_2>, 
> event=org.apache.ignite.raft.jraft.core.NodeImpl$LogEntryAndClosure@653d84a]
> at 
> org.apache.ignite.raft.jraft.disruptor.StripedDisruptor$StripeEntryHandler.onEvent(StripedDisruptor.java:224)
>  ~[ignite-raft-3.0.0-SNAPSHOT.jar:?]
> at 
> org.apache.ignite.raft.jraft.disruptor.StripedDisruptor$StripeEntryHandler.onEvent(StripedDisruptor.java:191)
>  ~[ignite-raft-3.0.0-SNAPSHOT.jar:?]
> at 
> com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:137) 
> ~[disruptor-3.3.7.jar:?]
> at java.lang.Thread.run(Thread.java:834) ~[?:?] {code}
> [https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_RunAllTests/7498320?expandCode+Inspection=true=true=false=true=false=true]
> The root cause:
>  # The StripedDisruptor.StripeEntryHandler#onEvent method gets the handler from 
> StripedDisruptor.StripeEntryHandler#subscribers by event.nodeId().
>  # In some cases the `subscribers` map is cleared by an invocation of 
> StripedDisruptor.StripeEntryHandler#unsubscribe (for example, when a table is 
> dropped), and then StripeEntryHandler receives an event with 
> SafeTimeSyncCommandImpl.
>  # It produces an assertion error: `assert handler != null`
> The issue is not caused by the catalog feature changes.
> The issue reproduces when running ItSqlAsynchronousApiTest#batchIncomplete 
> with the RepeatedTest annotation; in this case the cluster is not restarted 
> after each test. It can be reproduced frequently by adding a Thread.sleep in 
> StripeEntryHandler#onEvent.
> h3. Implementation notes
> We decided that we can use LOG.warn() instead of an assert because it is 
> safe to skip this event if the table was dropped.
> {code:java}
> if (handler != null) {
>     handler.onEvent(event, sequence, endOfBatch || subscribers.size() > 1 && !supportsBatches);
> } else {
>     LOG.warn(format("Group of the event is unsupported [nodeId={}, event={}]", event.nodeId(), event));
> }
> {code}
> This is a temporary solution; we need to add a TODO with a link to 
> https://issues.apache.org/jira/browse/IGNITE-20536
> *Definition of done*
> There are no asserts if the handler is null.
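The guarded dispatch described above can be sketched as a self-contained snippet (simplified, invented types; the real code lives in StripedDisruptor.StripeEntryHandler):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Simplified sketch of the proposed fix: a missing handler (e.g. the group
// was unsubscribed because its table was dropped) produces a warning
// instead of an AssertionError.
public class StripeHandlerSketch {
    private final Map<String, Consumer<String>> subscribers = new ConcurrentHashMap<>();

    void subscribe(String nodeId, Consumer<String> handler) {
        subscribers.put(nodeId, handler);
    }

    void unsubscribe(String nodeId) {
        subscribers.remove(nodeId);
    }

    void onEvent(String nodeId, String event) {
        Consumer<String> handler = subscribers.get(nodeId);
        if (handler != null)
            handler.accept(event);
        else
            // Previously: assert handler != null; now the late event is skipped safely.
            System.out.println("WARN: Group of the event is unsupported [nodeId="
                    + nodeId + ", event=" + event + "]");
    }
}
```

A late event arriving after `unsubscribe` is simply logged and dropped, which matches the "safe to skip" reasoning above.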



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20397) java.lang.AssertionError: Group of the event is unsupported

2023-10-04 Thread Sergey Uttsel (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Uttsel updated IGNITE-20397:
---
Description: 
h3. Motivation
{code:java}
  java.lang.AssertionError: Group of the event is unsupported 
[nodeId=<11_part_18/isaat_n_2>, 
event=org.apache.ignite.raft.jraft.core.NodeImpl$LogEntryAndClosure@653d84a]
at 
org.apache.ignite.raft.jraft.disruptor.StripedDisruptor$StripeEntryHandler.onEvent(StripedDisruptor.java:224)
 ~[ignite-raft-3.0.0-SNAPSHOT.jar:?]
at 
org.apache.ignite.raft.jraft.disruptor.StripedDisruptor$StripeEntryHandler.onEvent(StripedDisruptor.java:191)
 ~[ignite-raft-3.0.0-SNAPSHOT.jar:?]
at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:137) 
~[disruptor-3.3.7.jar:?]
at java.lang.Thread.run(Thread.java:834) ~[?:?] {code}
[https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_RunAllTests/7498320?expandCode+Inspection=true=true=false=true=false=true]

The root cause:
 # The StripedDisruptor.StripeEntryHandler#onEvent method gets the handler from 
StripedDisruptor.StripeEntryHandler#subscribers by event.nodeId().
 # In some cases the `subscribers` map is cleared by an invocation of 
StripedDisruptor.StripeEntryHandler#unsubscribe (for example, when a table is 
dropped), and then StripeEntryHandler receives an event with 
SafeTimeSyncCommandImpl.
 # It produces an assertion error: `assert handler != null`

The issue is not caused by the catalog feature changes.

The issue reproduces when running ItSqlAsynchronousApiTest#batchIncomplete 
with the RepeatedTest annotation; in this case the cluster is not restarted 
after each test. It can be reproduced frequently by adding a Thread.sleep in 
StripeEntryHandler#onEvent.
h3. Implementation notes

We decided that we can use LOG.warn() instead of an assert because it is safe 
to skip this event if the table was dropped.
{code:java}
if (handler != null) {
    handler.onEvent(event, sequence, endOfBatch || subscribers.size() > 1 && !supportsBatches);
} else {
    LOG.warn(format("Group of the event is unsupported [nodeId={}, event={}]", event.nodeId(), event));
}
{code}
This is a temporary solution; we need to add a TODO with a link to 
https://issues.apache.org/jira/browse/IGNITE-20536

*Definition of done*

There are no asserts if the handler is null.

  was:
h3. Motivation
{code:java}
  java.lang.AssertionError: Group of the event is unsupported 
[nodeId=<11_part_18/isaat_n_2>, 
event=org.apache.ignite.raft.jraft.core.NodeImpl$LogEntryAndClosure@653d84a]
at 
org.apache.ignite.raft.jraft.disruptor.StripedDisruptor$StripeEntryHandler.onEvent(StripedDisruptor.java:224)
 ~[ignite-raft-3.0.0-SNAPSHOT.jar:?]
at 
org.apache.ignite.raft.jraft.disruptor.StripedDisruptor$StripeEntryHandler.onEvent(StripedDisruptor.java:191)
 ~[ignite-raft-3.0.0-SNAPSHOT.jar:?]
at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:137) 
~[disruptor-3.3.7.jar:?]
at java.lang.Thread.run(Thread.java:834) ~[?:?] {code}
[https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_RunAllTests/7498320?expandCode+Inspection=true=true=false=true=false=true]

The root cause:
 # StripedDisruptor.StripeEntryHandler#onEvent method gets handler from 
StripedDisruptor.StripeEntryHandler#subscribers by event.nodeId().
 # In some cases the `subscribers` map is cleared by invocation of 
StripedDisruptor.StripeEntryHandler#unsubscribe (for example on table 
dropping), and then StripeEntryHandler receives event with 
SafeTimeSyncCommandImpl.
 # It produces an assertion error: `assert handler != null`

The issue is not caused by the catalog feature changes.

The issue is reproduced when I run the ItSqlAsynchronousApiTest#batchIncomplete 
with RepeatedTest annotation. In this case the cluster is not restarted after 
each tests. It possible to reproduced it frequently if add Thread.sleep in 
StripeEntryHandler#onEvent.
h3. Implementation notes

We decided that we can use LOG.warn() instead of an assert because it is safely 
to skip this event if the table was dropped.
{code:java}
if (handler != null) {
handler.onEvent(event, sequence, endOfBatch || subscribers.size() > 1 && 
!supportsBatches);
} else {
LOG.warn(format("Group of the event is unsupported [nodeId={}, event={}]", 
event.nodeId(), event));
} {code}
*Definition of done*

There is no asserts if handler is null.


> java.lang.AssertionError: Group of the event is unsupported
> ---
>
> Key: IGNITE-20397
> URL: https://issues.apache.org/jira/browse/IGNITE-20397
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> {code:java}
>   java.lang.AssertionError: Group of the event is unsupported 
> 

[jira] [Created] (IGNITE-20566) CDC doesn't replicate complex objects when keepBinary is set to false

2023-10-04 Thread Anton Vinogradov (Jira)
Anton Vinogradov created IGNITE-20566:
-

 Summary: CDC doesn't replicate complex objects when keepBinary is 
set to false
 Key: IGNITE-20566
 URL: https://issues.apache.org/jira/browse/IGNITE-20566
 Project: Ignite
  Issue Type: Bug
Reporter: Anton Vinogradov
Assignee: Nikolay Izhikov


To reproduce, just change 
{{org.apache.ignite.cdc.CdcConfiguration#DFLT_KEEP_BINARY}} to {{false}}.

{{org.apache.ignite.cdc.AbstractReplicationTest#testActivePassiveReplication}} 
will still be successful since it uses a primitive key/val.
{{org.apache.ignite.cdc.AbstractReplicationTest#testActivePassiveReplicationComplexKeyWithKeyValue}}
 will get stuck; the transaction on the destination cluster will never finish.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20523) .NET: Thin 3.0: ArgumentNullException.ThrowIfNull allocates on value types

2023-10-04 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17771888#comment-17771888
 ] 

Igor Sapego commented on IGNITE-20523:
--

Looks good to me.

> .NET: Thin 3.0: ArgumentNullException.ThrowIfNull allocates on value types
> --
>
> Key: IGNITE-20523
> URL: https://issues.apache.org/jira/browse/IGNITE-20523
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Minor
>  Labels: .NET, ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> IGNITE-20479 replaced custom null checks with standard 
> *ArgumentNullException.ThrowIfNull*. However, *ThrowIfNull* takes *object*, 
> which involves boxing for value types. Therefore we do heap allocations just 
> to validate arguments in some cases, such as generic record/key/value 
> validation in *KeyValueView* and *RecordView*. Bring back the custom generic 
> validation method to fix this.
> Also, *ToKv* method validates the wrong thing twice:
> {code}
> private static KvPair<TK, TV> ToKv(KeyValuePair<TK, TV> x)
> {
>     ArgumentNullException.ThrowIfNull(x);
>     ArgumentNullException.ThrowIfNull(x);
>     return new(x.Key, x.Value);
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20565) Test ItSchemaChangeTableViewTest.testMergeChangesAddDropAdd is flaky on TC

2023-10-04 Thread Sergey Chugunov (Jira)
Sergey Chugunov created IGNITE-20565:


 Summary: Test 
ItSchemaChangeTableViewTest.testMergeChangesAddDropAdd is flaky on TC
 Key: IGNITE-20565
 URL: https://issues.apache.org/jira/browse/IGNITE-20565
 Project: Ignite
  Issue Type: Bug
Reporter: Sergey Chugunov


The test is flaky with a low failure rate; the latest failure shows the 
following stack trace reported by TC, with an NPE on top:

 
{code:java}
java.lang.NullPointerException
at 
org.apache.ignite.internal.runner.app.ItSchemaChangeTableViewTest.testMergeChangesAddDropAdd(ItSchemaChangeTableViewTest.java:250)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727)
at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
at 
org.junit.jupiter.engine.extension.SameThreadTimeoutInvocation.proceed(SameThreadTimeoutInvocation.java:45)
at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86)
at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:217)
at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:213)
at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:138)
at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:68)
at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
at 
org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
at 
org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at 

[jira] [Commented] (IGNITE-20342) Rollback transaction for SQL execution issues

2023-10-04 Thread Yury Gerzhedovich (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17771874#comment-17771874
 ] 

Yury Gerzhedovich commented on IGNITE-20342:


[~mzhuravkov] LGTM

> Rollback transaction for SQL execution issues
> -
>
> Key: IGNITE-20342
> URL: https://issues.apache.org/jira/browse/IGNITE-20342
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Yury Gerzhedovich
>Assignee: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> During execution of any data modification we could get a runtime error, for 
> example division by zero, which leads to cancellation of the execution. Right 
> now we don't roll back the transaction in this case, and part of the 
> modification could be applied despite the error.
> Ideally we would roll back just the DML statement, but right now we don't 
> support savepoints in the transaction protocol. So let's roll back any type 
> of transaction, explicit or implicit, for any DML statement in case an error 
> occurs.
> A test that shows one case of the problem is 
> org.apache.ignite.internal.sql.api.ItSqlSynchronousApiTest#runtimeErrorInTransaction
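The intended behavior can be illustrated with a toy transaction model (invented names, not Ignite's API): a runtime error in the middle of a statement must not leave partial modifications behind.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model (invented names, not Ignite's API): a DML-like bulk update
// that rolls the whole "transaction" back when a runtime error occurs
// mid-statement, so no partial modifications survive.
public class TxRollbackSketch {
    final Map<Integer, Integer> table = new HashMap<>();

    void updateAll(List<Integer> keys) {
        Map<Integer, Integer> undo = new HashMap<>(table); // snapshot for rollback
        try {
            for (int k : keys)
                table.put(k, 100 / k); // k == 0 throws after earlier keys were written
        } catch (RuntimeException e) {
            table.clear();
            table.putAll(undo); // roll back: discard the partial modifications
            throw e;
        }
    }
}
```

Without the catch-and-restore branch, keys 1 and 2 would stay modified after the division by zero, which is the partially-applied state the ticket wants to avoid.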



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20458) Prepare test plan for the DistributionZones feature

2023-10-04 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20458:
-
Description: 
In this task we must prepare the test plan for the DistributionZones feature.

The test plan for the DistributionZones feature should cover requirements for 
every level of testing, from unit and integration testing through functional 
testing to stability and performance testing.

Mostly, this plan will contain scenarios for functional testing of 
DistributionZones, including data nodes changing after various node-restart 
scenarios, filter changes, and scale-up/scale-down value changes.

  was:
In this task we must prepare the test plan for the DistributionZones feature.

Test plan for a DistributionZones feature should cover requirements for each 
and every level of the testing, starting from the unit and integration testing, 
through functional to stability and performance testing.

Mostly, this plan will contain scenarios for the functional testing of 
DistributionZones, including data nodes changing after node restart, filter 
changes


> Prepare test plan for the DistributionZones feature
> ---
>
> Key: IGNITE-20458
> URL: https://issues.apache.org/jira/browse/IGNITE-20458
> Project: Ignite
>  Issue Type: Task
>Reporter: Alexander Lapin
>Assignee: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>
> In this task we must prepare the test plan for the DistributionZones feature.
> The test plan for the DistributionZones feature should cover requirements for 
> every level of testing, from unit and integration testing through functional 
> testing to stability and performance testing.
> Mostly, this plan will contain scenarios for functional testing of 
> DistributionZones, including data nodes changing after various node-restart 
> scenarios, filter changes, and scale-up/scale-down value changes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20458) Prepare test plan for the DistributionZones feature

2023-10-04 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20458:
-
Description: 
In this task we must prepare the test plan for the DistributionZones feature.

The test plan for the DistributionZones feature should cover requirements for 
every level of testing, from unit and integration testing through functional 
testing to stability and performance testing.

Mostly, this plan will contain scenarios for functional testing of 
DistributionZones, including data nodes changing after node restart and filter 
changes

  was:In this task we must prepare the test plan for the 


> Prepare test plan for the DistributionZones feature
> ---
>
> Key: IGNITE-20458
> URL: https://issues.apache.org/jira/browse/IGNITE-20458
> Project: Ignite
>  Issue Type: Task
>Reporter: Alexander Lapin
>Assignee: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>
> In this task we must prepare the test plan for the DistributionZones feature.
> The test plan for the DistributionZones feature should cover requirements for 
> every level of testing, from unit and integration testing through functional 
> testing to stability and performance testing.
> Mostly, this plan will contain scenarios for functional testing of 
> DistributionZones, including data nodes changing after node restart and filter 
> changes



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20458) Prepare test plan for the DistributionZones feature

2023-10-04 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20458:
-
Description: In this task we must prepare the test plan for the 

> Prepare test plan for the DistributionZones feature
> ---
>
> Key: IGNITE-20458
> URL: https://issues.apache.org/jira/browse/IGNITE-20458
> Project: Ignite
>  Issue Type: Task
>Reporter: Alexander Lapin
>Assignee: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>
> In this task we must prepare the test plan for the 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20004) Implement durable unlock within same primary

2023-10-04 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin reassigned IGNITE-20004:


Assignee:  Kirill Sizov  (was: Denis Chudov)

> Implement durable unlock within same primary
> 
>
> Key: IGNITE-20004
> URL: https://issues.apache.org/jira/browse/IGNITE-20004
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Assignee:  Kirill Sizov
>Priority: Major
>  Labels: ignite-3, transaction, transaction3_recovery
>
> h3. Motivation
> It's required to release all acquired locks on transaction finish in a 
> durable way. Such durability consists of two parts:
>  * Durable unlock within same primary.
>  * Durable unlock on primary change.
> This ticket is about first part only. There's a counterpart ticket for the 
> second part https://issues.apache.org/jira/browse/IGNITE-20002
> h3. Definition of Done
>  * All unreleased locks for the transactions that were finished are either 
> released or the corresponding lock-holder primary has left the topology. Locks 
> are volatile and are stored only on the primary replica, thus in case of 
> lock-holder primary failure, all locks will be automatically released.
> h3. Implementation Notes
> A durable, recursive
> {code:java}
> replicaService.invoke(recipientNode, FACTORY.txCleanupReplicaRequest(){code}
> until success or loss of recipientNode (the enlisted primary) is expected.
>  
>  
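The retry loop described above can be sketched as a minimal model (invented helper names, not the real replicaService API):

```java
import java.util.function.BooleanSupplier;

// Minimal sketch of durable cleanup within the same primary (invented
// names): keep resending the cleanup request until it succeeds or the
// enlisted primary leaves the topology. If the primary is gone, its
// volatile locks died with it, so there is nothing left to release.
public class DurableCleanupSketch {
    /** @return true if cleanup succeeded; false if the recipient left the topology. */
    static boolean cleanupDurably(BooleanSupplier sendCleanupRequest, BooleanSupplier recipientAlive) {
        while (recipientAlive.getAsBoolean()) {
            if (sendCleanupRequest.getAsBoolean())
                return true; // locks released on the primary
            // transient failure: retry while the primary is still in the topology
        }
        return false;
    }
}
```

The two exit conditions correspond exactly to the Definition of Done above: either the locks are released, or the lock-holder primary has left the topology.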



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20564) Implement storage profile configurations approach

2023-10-04 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin reassigned IGNITE-20564:


Assignee: Kirill Gusakov

> Implement storage profile configurations approach
> -
>
> Key: IGNITE-20564
> URL: https://issues.apache.org/jira/browse/IGNITE-20564
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Gusakov
>Assignee: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> According to the results of IGNITE-20357, we need to refactor our node 
> storage configurations using the new abstraction _storage_profile_.
> A storage profile is, in general, a pair of two entities:
> - the storage engine type
> - the storage data space (for _aipersist_, for example, this will be regions)
> From the configuration point of view:
> - Engine configurations must be still a part of separate configuration group
> - Storage profiles must be a part of separate configuration root
> *Implementation notes*
> Example:
> {code}
> rocksDb:
>   flushDelayMillis: 1000
>   regions:
> lruRegion:
>   cache: lru
>   size: 256
> clockRegion:
>   cache: clock
>   size: 512
>   
> aipersist:
>   checkpoint:
> checkpointDelayMillis: 100
>   regions:
> segmentedRegion:
>   replacementMode: SEGMENTED_LRU
> clockRegion:
>   replacementMode: CLOCK
> {code}
> will transform to
> {code}
> storages:
>   engines:
> aipersist:
>   checkpoint:
> checkpointDelayMillis: 100
> rocksDb:
>   flushDelayMillis: 1000 
>   profiles:
> lru_rocks:
>   engine: rocksDb
>   cache: lru
>   size: 256
>   
> clock_rocks:
>   engine: rocksDb
>   cache: clock
>   size: 512
>   
> segmented_aipersist:
>   engine: aipersist
>   replacementMode: SEGMENTED_LRU
>   
> clock_aipersist:
>   engine: aipersist
>   replacementMode: CLOCK
> {code}
> *Definition of done*
> - All storage configurations migrated to storage profiles
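The proposed layout can be modeled with a small lookup sketch (illustrative only; record and field names are invented): a profile references its engine by name, and the engine's own settings are resolved separately.

```java
import java.util.Map;

// Illustrative model of the proposed split (invented names): engine-wide
// settings live under `storages.engines`, while a profile pairs an engine
// name with its data-space settings under `storages.profiles`.
public class StorageProfileSketch {
    record EngineConfig(Map<String, Object> settings) {}
    record StorageProfile(String engine, Map<String, Object> dataSpace) {}

    // Resolves the engine configuration a profile refers to.
    static EngineConfig engineFor(StorageProfile profile, Map<String, EngineConfig> engines) {
        EngineConfig cfg = engines.get(profile.engine());
        if (cfg == null)
            throw new IllegalArgumentException("Unknown engine: " + profile.engine());
        return cfg;
    }
}
```

This mirrors the example configuration: the `lru_rocks` profile carries only its data-space settings (`cache`, `size`) and points at `rocksDb`, whose `flushDelayMillis` stays in the shared engine group.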



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20484) NPE when some operation occurs when the primary replica is changing

2023-10-04 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin reassigned IGNITE-20484:


Assignee: Vladislav Pyatkov

> NPE when some operation occurs when the primary replica is changing
> ---
>
> Key: IGNITE-20484
> URL: https://issues.apache.org/jira/browse/IGNITE-20484
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> It can happen that the primary replica is on this node when the request is 
> created, but by the time the request is executed on the replica, it has 
> already lost its role.
> {noformat}
> [2023-09-25T11:03:24,408][WARN 
> ][%iprct_tpclh_2%metastorage-watch-executor-2][ReplicaManager] Failed to 
> process replica request [request=ReadWriteSingleRowReplicaRequestImpl 
> [binaryRowMessage=BinaryRowMessageImpl 
> [binaryTuple=java.nio.HeapByteBuffer[pos=0 lim=9 cap=9], schemaVersion=1], 
> commitPartitionId=TablePartitionIdMessageImpl [partitionId=0, tableId=4], 
> full=true, groupId=4_part_0, requestType=RW_UPSERT, term=24742070009862, 
> timestampLong=24742430588928, 
> transactionId=018acb5d-4e54-0006--705db0b1]]
>  java.util.concurrent.CompletionException: java.lang.NullPointerException
> at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314)
>  ~[?:?]
> at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:319)
>  ~[?:?]
> at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1081)
>  ~[?:?]
> at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>  ~[?:?]
> at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073) 
> ~[?:?]
> at 
> org.apache.ignite.internal.util.PendingComparableValuesTracker.lambda$completeWaitersOnUpdate$0(PendingComparableValuesTracker.java:169)
>  ~[main/:?]
> at java.util.concurrent.ConcurrentMap.forEach(ConcurrentMap.java:122) 
> ~[?:?]
> at 
> org.apache.ignite.internal.util.PendingComparableValuesTracker.completeWaitersOnUpdate(PendingComparableValuesTracker.java:169)
>  ~[main/:?]
> at 
> org.apache.ignite.internal.util.PendingComparableValuesTracker.update(PendingComparableValuesTracker.java:103)
>  ~[main/:?]
> at 
> org.apache.ignite.internal.metastorage.server.time.ClusterTimeImpl.updateSafeTime(ClusterTimeImpl.java:146)
>  ~[main/:?]
> at 
> org.apache.ignite.internal.metastorage.impl.MetaStorageManagerImpl.onSafeTimeAdvanced(MetaStorageManagerImpl.java:849)
>  ~[main/:?]
> at 
> org.apache.ignite.internal.metastorage.impl.MetaStorageManagerImpl$1.onSafeTimeAdvanced(MetaStorageManagerImpl.java:456)
>  ~[main/:?]
> at 
> org.apache.ignite.internal.metastorage.server.WatchProcessor.lambda$advanceSafeTime$7(WatchProcessor.java:269)
>  ~[main/:?]
> at 
> java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:783)
>  [?:?]
> at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
>  [?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  [?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  [?:?]
> at java.lang.Thread.run(Thread.java:834) [?:?]
> Caused by: java.lang.NullPointerException
> at 
> org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener.lambda$ensureReplicaIsPrimary$161(PartitionReplicaListener.java:2415)
>  ~[main/:?]
> at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
>  ~[?:?]
> ... 15 more
> {noformat}
> *Definition of done*
> In this case, we should throw the correct exception because the request 
> cannot be handled on this replica anymore, and the matching transaction will 
> be rolled back.
> *Implementation notes*
> Do not forget to check all places where the issue is mentioned (especially in 
> the TODO section).
> As discussed with [~sanpwc]:
> This exception is likely to be thrown when 
> - we successfully get a primary replica on one node
> - send a message and the message is slightly slow to be delivered
> - we handle the received message on the recipient node and run 
> {{placementDriver.getPrimaryReplica}}. 
> If the previous lease has expired by the time we handle the message, the call 
> to {{placementDriver}} will result in a {{null}} value instead of a 
> {{ReplicaMeta}} instance. Hence the NPE.
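The fix described above can be sketched as a null check that translates the expired lease into a dedicated exception instead of dereferencing a null meta. The class names here (ReplicaMeta stand-in, PrimaryReplicaMissException) are illustrative assumptions, not the exact Ignite 3 types:

```java
import java.util.concurrent.CompletableFuture;

// Illustrative sketch only, not the actual PartitionReplicaListener code.
class EnsurePrimarySketch {
    record ReplicaMeta(String leaseholder) {}

    static class PrimaryReplicaMissException extends RuntimeException {
        PrimaryReplicaMissException(String msg) { super(msg); }
    }

    // Instead of dereferencing a possibly-null lease (the NPE from the stack
    // trace above), fail the future with a meaningful exception so the
    // matching transaction can be rolled back.
    static CompletableFuture<ReplicaMeta> ensurePrimary(ReplicaMeta meta) {
        if (meta == null) {
            return CompletableFuture.failedFuture(
                    new PrimaryReplicaMissException("The lease has expired, primary replica is unknown"));
        }
        return CompletableFuture.completedFuture(meta);
    }

    public static void main(String[] args) {
        System.out.println(ensurePrimary(null).isCompletedExceptionally()); // true
    }
}
```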



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20558) Test plan for zone storage profile filters

2023-10-04 Thread Kirill Gusakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Gusakov updated IGNITE-20558:

Description: The feature from IGNITE-20357 needs a test plan  (was: Need to 
prepare a test plan for storage profiles.)

> Test plan for zone storage profile filters
> --
>
> Key: IGNITE-20558
> URL: https://issues.apache.org/jira/browse/IGNITE-20558
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Gusakov
>Assignee: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
>
> The feature from IGNITE-20357 needs a test plan



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20564) Implement storage profile configurations approach

2023-10-04 Thread Kirill Gusakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Gusakov updated IGNITE-20564:

Description: 
*Motivation*
According to the results of IGNITE-20357, we need to refactor our node storage 
configurations using the new _storage_profile_ abstraction.

In general, a storage profile pairs two entities:
- a storage engine type
- a storage data space (for example, regions for _aipersist_)

From a configuration point of view:
- engine configurations must remain part of a separate configuration group
- storage profiles must be part of a separate configuration root

Example:
{code}
rocksDb:
  flushDelayMillis: 1000
  regions:
lruRegion:
  cache: lru
  size: 256
clockRegion:
  cache: clock
  size: 512
  
aipersist:
  checkpoint:
checkpointDelayMillis: 100
  regions:
segmentedRegion:
  replacementMode: SEGMENTED_LRU
clockRegion:
  replacementMode: CLOCK
{code}
will transform to

{code}
storages:
  engines:
aipersist:
  checkpoint:
checkpointDelayMillis: 100
rocksDb:
  flushDelayMillis: 1000 
  profiles:
lru_rocks:
  engine: rocksDb
  cache: lru
  size: 256
  
clock_rocks:
  engine: rocksDb
  cache: clock
  size: 512
  
segmented_aipersist:
  engine: aipersist
  replacementMode: SEGMENTED_LRU
  
clock_aipersist:
  engine: aipersist
  replacementMode: CLOCK
{code}

*Definition of done*
- All storage configurations migrated to storage profiles

  was:
*Motivation*
According to the results of IGNITE-20357, we need to refactor our node storage 
configurations using the new _storage_profile_ abstraction.

In general, a storage profile pairs two entities:
- a storage engine type
- a storage data space (for example, regions for _aipersist_)

From a configuration point of view:
- engine configurations must remain part of a separate configuration group
- storage profiles must be part of a separate configuration root

Example:
{code}
rocksDb:
  flushDelayMillis: 1000
  regions:
lruRegion:
  cache: lru
  size: 256
clockRegion:
  cache: clock
  size: 512
  
aipersist:
  checkpoint:
checkpointDelayMillis: 100
  regions:
segmentedRegion:
  replacementMode: SEGMENTED_LRU
clockRegion:
  replacementMode: CLOCK
{code}
will transform to

{code}
storages:
  engines:
aipersist:
  checkpoint:
checkpointDelayMillis: 100
rocksDb:
  flushDelayMillis: 1000 
  profiles:
lru_rocks:
  engine: rocksDb
  cache: lru
  size: 256
  
clock_rocks:
  engine: rocksDb
  cache: clock
  size: 512
  
segmented_aipersist:
  engine: aipersist
  replacementMode: SEGMENTED_LRU
  
clock_aipersist:
  engine: aipersist
  replacementMode: CLOCK
{code}




> Implement storage profile configurations approach
> -
>
> Key: IGNITE-20564
> URL: https://issues.apache.org/jira/browse/IGNITE-20564
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> According to the results of IGNITE-20357, we need to refactor our node 
> storage configurations using the new _storage_profile_ abstraction.
> In general, a storage profile pairs two entities:
> - a storage engine type
> - a storage data space (for example, regions for _aipersist_)
> From a configuration point of view:
> - engine configurations must remain part of a separate configuration group
> - storage profiles must be part of a separate configuration root
> Example:
> {code}
> rocksDb:
>   flushDelayMillis: 1000
>   regions:
> lruRegion:
>   cache: lru
>   size: 256
> clockRegion:
>   cache: clock
>   size: 512
>   
> aipersist:
>   checkpoint:
> checkpointDelayMillis: 100
>   regions:
> segmentedRegion:
>   replacementMode: SEGMENTED_LRU
> clockRegion:
>   replacementMode: CLOCK
> {code}
> will transform to
> {code}
> storages:
>   engines:
> aipersist:
>   checkpoint:
> checkpointDelayMillis: 100
> rocksDb:
>   flushDelayMillis: 1000 
>   profiles:
> lru_rocks:
>   engine: rocksDb
>   cache: lru
>   size: 256
>   
> clock_rocks:
>   engine: rocksDb
>   cache: clock
>   size: 512
>   
> segmented_aipersist:
>   engine: aipersist
>   replacementMode: SEGMENTED_LRU
>   
> clock_aipersist:
>   engine: aipersist
>   replacementMode: CLOCK
> {code}
> *Definition of done*
> - All storage configurations migrated to storage profiles



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20564) Implement storage profile configurations approach

2023-10-04 Thread Kirill Gusakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Gusakov updated IGNITE-20564:

Description: 
*Motivation*
According to the results of IGNITE-20357, we need to refactor our node storage 
configurations using the new _storage_profile_ abstraction.

In general, a storage profile pairs two entities:
- a storage engine type
- a storage data space (for example, regions for _aipersist_)

From a configuration point of view:
- engine configurations must remain part of a separate configuration group
- storage profiles must be part of a separate configuration root

*Implementation notes*
Example:
{code}
rocksDb:
  flushDelayMillis: 1000
  regions:
lruRegion:
  cache: lru
  size: 256
clockRegion:
  cache: clock
  size: 512
  
aipersist:
  checkpoint:
checkpointDelayMillis: 100
  regions:
segmentedRegion:
  replacementMode: SEGMENTED_LRU
clockRegion:
  replacementMode: CLOCK
{code}
will transform to

{code}
storages:
  engines:
aipersist:
  checkpoint:
checkpointDelayMillis: 100
rocksDb:
  flushDelayMillis: 1000 
  profiles:
lru_rocks:
  engine: rocksDb
  cache: lru
  size: 256
  
clock_rocks:
  engine: rocksDb
  cache: clock
  size: 512
  
segmented_aipersist:
  engine: aipersist
  replacementMode: SEGMENTED_LRU
  
clock_aipersist:
  engine: aipersist
  replacementMode: CLOCK
{code}

*Definition of done*
- All storage configurations migrated to storage profiles

  was:
*Motivation*
According to the results of IGNITE-20357, we need to refactor our node storage 
configurations using the new _storage_profile_ abstraction.

In general, a storage profile pairs two entities:
- a storage engine type
- a storage data space (for example, regions for _aipersist_)

From a configuration point of view:
- engine configurations must remain part of a separate configuration group
- storage profiles must be part of a separate configuration root

Example:
{code}
rocksDb:
  flushDelayMillis: 1000
  regions:
lruRegion:
  cache: lru
  size: 256
clockRegion:
  cache: clock
  size: 512
  
aipersist:
  checkpoint:
checkpointDelayMillis: 100
  regions:
segmentedRegion:
  replacementMode: SEGMENTED_LRU
clockRegion:
  replacementMode: CLOCK
{code}
will transform to

{code}
storages:
  engines:
aipersist:
  checkpoint:
checkpointDelayMillis: 100
rocksDb:
  flushDelayMillis: 1000 
  profiles:
lru_rocks:
  engine: rocksDb
  cache: lru
  size: 256
  
clock_rocks:
  engine: rocksDb
  cache: clock
  size: 512
  
segmented_aipersist:
  engine: aipersist
  replacementMode: SEGMENTED_LRU
  
clock_aipersist:
  engine: aipersist
  replacementMode: CLOCK
{code}

*Definition of done*
- All storage configurations migrated to storage profiles


> Implement storage profile configurations approach
> -
>
> Key: IGNITE-20564
> URL: https://issues.apache.org/jira/browse/IGNITE-20564
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> According to the results of IGNITE-20357, we need to refactor our node 
> storage configurations using the new _storage_profile_ abstraction.
> In general, a storage profile pairs two entities:
> - a storage engine type
> - a storage data space (for example, regions for _aipersist_)
> From a configuration point of view:
> - engine configurations must remain part of a separate configuration group
> - storage profiles must be part of a separate configuration root
> *Implementation notes*
> Example:
> {code}
> rocksDb:
>   flushDelayMillis: 1000
>   regions:
> lruRegion:
>   cache: lru
>   size: 256
> clockRegion:
>   cache: clock
>   size: 512
>   
> aipersist:
>   checkpoint:
> checkpointDelayMillis: 100
>   regions:
> segmentedRegion:
>   replacementMode: SEGMENTED_LRU
> clockRegion:
>   replacementMode: CLOCK
> {code}
> will transform to
> {code}
> storages:
>   engines:
> aipersist:
>   checkpoint:
> checkpointDelayMillis: 100
> rocksDb:
>   flushDelayMillis: 1000 
>   profiles:
> lru_rocks:
>   engine: rocksDb
>   cache: lru
>   size: 256
>   
> clock_rocks:
>   engine: rocksDb
>   cache: clock
>   size: 512
>   
> segmented_aipersist:
>   engine: aipersist
>   replacementMode: SEGMENTED_LRU
>   
> clock_aipersist:
>   engine: aipersist
>   replacementMode: CLOCK
> {code}
> *Definition of done*
> - All storage configurations 

[jira] [Updated] (IGNITE-20564) Implement storage profile configurations approach

2023-10-04 Thread Kirill Gusakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Gusakov updated IGNITE-20564:

Description: 
*Motivation*
According to the results of IGNITE-20357, we need to refactor our node storage 
configurations using the new _storage_profile_ abstraction.

In general, a storage profile pairs two entities:
- a storage engine type
- a storage data space (for example, regions for _aipersist_)

From a configuration point of view:
- engine configurations must remain part of a separate configuration group
- storage profiles must be part of a separate configuration root

Example:
{code}
rocksDb:
  flushDelayMillis: 1000
  regions:
lruRegion:
  cache: lru
  size: 256
clockRegion:
  cache: clock
  size: 512
  
aipersist:
  checkpoint:
checkpointDelayMillis: 100
  regions:
segmentedRegion:
  replacementMode: SEGMENTED_LRU
clockRegion:
  replacementMode: CLOCK
{code}
will transform to

{code}
storages:
  engines:
aipersist:
  checkpoint:
checkpointDelayMillis: 100
rocksDb:
  flushDelayMillis: 1000 
  profiles:
lru_rocks:
  engine: rocksDb
  cache: lru
  size: 256
  
clock_rocks:
  engine: rocksDb
  cache: clock
  size: 512
  
segmented_aipersist:
  engine: aipersist
  replacementMode: SEGMENTED_LRU
  
clock_aipersist:
  engine: aipersist
  replacementMode: CLOCK
{code}



> Implement storage profile configurations approach
> -
>
> Key: IGNITE-20564
> URL: https://issues.apache.org/jira/browse/IGNITE-20564
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> According to the results of IGNITE-20357, we need to refactor our node 
> storage configurations using the new _storage_profile_ abstraction.
> In general, a storage profile pairs two entities:
> - a storage engine type
> - a storage data space (for example, regions for _aipersist_)
> From a configuration point of view:
> - engine configurations must remain part of a separate configuration group
> - storage profiles must be part of a separate configuration root
> Example:
> {code}
> rocksDb:
>   flushDelayMillis: 1000
>   regions:
> lruRegion:
>   cache: lru
>   size: 256
> clockRegion:
>   cache: clock
>   size: 512
>   
> aipersist:
>   checkpoint:
> checkpointDelayMillis: 100
>   regions:
> segmentedRegion:
>   replacementMode: SEGMENTED_LRU
> clockRegion:
>   replacementMode: CLOCK
> {code}
> will transform to
> {code}
> storages:
>   engines:
> aipersist:
>   checkpoint:
> checkpointDelayMillis: 100
> rocksDb:
>   flushDelayMillis: 1000 
>   profiles:
> lru_rocks:
>   engine: rocksDb
>   cache: lru
>   size: 256
>   
> clock_rocks:
>   engine: rocksDb
>   cache: clock
>   size: 512
>   
> segmented_aipersist:
>   engine: aipersist
>   replacementMode: SEGMENTED_LRU
>   
> clock_aipersist:
>   engine: aipersist
>   replacementMode: CLOCK
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20541) Watch Processor performs unnecessary work in case of empty events

2023-10-04 Thread Kirill Tkalenko (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17771852#comment-17771852
 ] 

Kirill Tkalenko commented on IGNITE-20541:
--

Looks good.

> Watch Processor performs unnecessary work in case of empty events
> -
>
> Key: IGNITE-20541
> URL: https://issues.apache.org/jira/browse/IGNITE-20541
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Polovtcev
>Assignee: Aleksandr Polovtcev
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> If a Meta Storage event does not match any of the Watch Listeners, the Watch 
> Processor creates a bunch of empty futures for no reason; we can simply skip 
> such events.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20564) Implement storage profile configurations approach

2023-10-04 Thread Kirill Gusakov (Jira)
Kirill Gusakov created IGNITE-20564:
---

 Summary: Implement storage profile configurations approach
 Key: IGNITE-20564
 URL: https://issues.apache.org/jira/browse/IGNITE-20564
 Project: Ignite
  Issue Type: Improvement
Reporter: Kirill Gusakov






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20442) Sql. Extend grammar with transaction related statements.

2023-10-04 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov reassigned IGNITE-20442:
-

Assignee: Maksim Zhuravkov

> Sql. Extend grammar with transaction related statements.
> 
>
> Key: IGNITE-20442
> URL: https://issues.apache.org/jira/browse/IGNITE-20442
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Pavel Pereslegin
>Assignee: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> In order to process multistatement queries we need to support the following 
> sql grammar to start/finish transactions.
> {code}
> <start transaction statement> ::=
> START TRANSACTION [ <transaction mode> ]
> <transaction mode> ::= READ ONLY | READ WRITE
> {code}
> {code}
> <commit statement> ::=
> COMMIT
> {code}
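A minimal sketch of the statements this grammar admits; the helper class below is hypothetical (not part of the Ignite SQL engine) and just renders the allowed forms:

```java
// Hypothetical helper, for illustration only: produces the statement strings
// the transaction grammar above allows.
class TxStatements {
    enum TransactionMode { READ_ONLY, READ_WRITE }

    // START TRANSACTION with an optional transaction mode.
    static String startTransaction(TransactionMode mode) {
        return mode == null
                ? "START TRANSACTION"
                : "START TRANSACTION " + mode.name().replace('_', ' ');
    }

    // COMMIT has no options.
    static String commit() {
        return "COMMIT";
    }

    public static void main(String[] args) {
        System.out.println(startTransaction(TransactionMode.READ_ONLY)); // START TRANSACTION READ ONLY
        System.out.println(startTransaction(null)); // START TRANSACTION
        System.out.println(commit()); // COMMIT
    }
}
```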



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-19824) Implicit RO should be used in implicit single gets

2023-10-04 Thread Alexander Lapin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17771838#comment-17771838
 ] 

Alexander Lapin commented on IGNITE-19824:
--

[~v.pyatkov] LGTM, thanks!

> Implicit RO should be used in implicit single gets
> --
>
> Key: IGNITE-19824
> URL: https://issues.apache.org/jira/browse/IGNITE-19824
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Assignee: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> h3. Motivation
> Currently, all implicit read operations start RW transactions, so it's 
> possible to catch a "Failed to acquire a lock due to a conflict" exception. 
> Generally speaking, this issue should be resolved by substituting RW with RO 
> for all implicit read transactions; however, such an approach would decrease 
> linearizability, so it needs to be verified with product management. It is 
> still possible, however, to have a special-case RO for the implicit single-key 
> get operation that sets the readTimestamp on the primary replica instead of 
> the transaction coordinator and thus provides cluster-wide linearizability even 
> for RO transactions (only for single-key implicit get operations). Within 
> this ticket, such special RO transactions should be introduced, along with 
> switching single-get implicit reads to use them.
> h3. Definition of Done
>  * Implicit single-get operations use special RO transactions that provide 
> cluster-wide linearizability and thus do not throw "Failed to acquire a lock 
> due to a conflict" exception.
>  * ItAbstractDataStreamerTest#testAutoFlushByTimer adjusted: catch block 
> removed.
> h3. Implementation Notes
> 1. Basically, what we need to do here is start an RO transaction instead of 
> an RW one in the case of a single-key implicit get, thus we should add
> {code:java}
> if (tx == null) {
> tx = txManager.begin(true);
> }{code}
> right in front of
> {code:java}
> return enlistInTx({code}
> Please note that we want to start a special-case RO transaction that 
> should go to the primary and only the primary, so it's not valid to put 
> the aforementioned tx = txManager.begin(true); at the very beginning of the 
> method, because in that case the balancer may return a non-primary through 
> evaluateReadOnlyRecipientNode. A corresponding comment should be added.
> 2. Such a special-case RO transaction doesn't require readTimestamp 
> calculation on tx.start from the evaluation point of view; however, it is 
> still required for lowWatermark management:
> {code:java}
> readOnlyTxFutureById.compute(new TxIdAndTimestamp(readTimestamp, txId), 
> (txIdAndTimestamp, readOnlyTxFuture) -> {
> assert readOnlyTxFuture == null : "previous transaction has not completed 
> yet: " + txIdAndTimestamp;
> if (lowWatermark != null && readTimestamp.compareTo(lowWatermark) <= 0) {
> throw new IgniteInternalException(
> TX_READ_ONLY_TOO_OLD_ERR,
> "Timestamp read-only transaction must be greater than the low 
> watermark: [txTimestamp={}, lowWatermark={}]",
> readTimestamp, lowWatermark
> );
> }
> return new CompletableFuture<>();
> }); {code}
> So, it seems worth leaving the readTimestamp generation in its current 
> place.
> 3. Again, in order to have cluster-wide linearizability, it's required to 
> use the primary replica's now() as the readTimestamp instead of the one 
> proposed in readOnlyReplicaRequest. Basically, that means substituting
> {code:java}
> HybridTimestamp readTimestamp = request.readTimestamp(); {code}
> with
> {code:java}
> HybridTimestamp readTimestamp;
> if (request.requestType() == RequestType.RO_GET && request.implicit()) {
> readTimestamp = hybridClock.now();
> } else {
> readTimestamp = request.readTimestamp();
> } {code}
> along with
> {code:java}
> CompletableFuture safeReadFuture = isPrimaryInTimestamp(isPrimary, 
> readTimestamp) ? completedFuture(null)
> : safeTime.waitFor(readTimestamp); {code}
> in PartitionReplicaListener. That, in turn, requires adding implicit() to 
> ReadOnlySingleRowReplicaRequest, which should be properly set on the client 
> side.
> 4. That specific operation type should also include a timestamp in the 
> response (using TimestampAware). It is necessary to use the timestamp to 
> adjust the clock on the transaction coordinator (despite the fact that we are 
> talking about a single-get operation, it is a transaction, and the node that 
> invoked the operation is called a transaction coordinator). Then we can use 
> clock.now() to update the observation timestamp.
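The coordinator-side adjustment described in point 4 can be sketched as a monotonic "observed timestamp" update. This is a simplified assumption-laden sketch, treating timestamps as plain longs, whereas the real HybridClock uses physical and logical parts:

```java
// Simplified sketch of the observation-timestamp adjustment on the
// transaction coordinator; not the actual Ignite HybridClock API.
class ObservableTimestampSketch {
    private long observedTs;

    // On receiving a TimestampAware response, move the coordinator's observed
    // timestamp forward, never backward.
    synchronized void onResponse(long responseTs) {
        if (responseTs > observedTs) {
            observedTs = responseTs;
        }
    }

    synchronized long observed() {
        return observedTs;
    }

    public static void main(String[] args) {
        ObservableTimestampSketch c = new ObservableTimestampSketch();
        c.onResponse(100);
        c.onResponse(50); // a stale response must not move the clock back
        System.out.println(c.observed()); // 100
    }
}
```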



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20563) .NET: Thin 3.0: Enable heap allocation analyzers

2023-10-04 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-20563:

Description: 
Enable analyzers from this group (HAA*, HHA*): 
https://github.com/dotnet/roslyn-analyzers/blob/main/src/PerformanceSensitiveAnalyzers/Microsoft.CodeAnalysis.PerformanceSensitiveAnalyzers.md.
They can come either from an old package 
https://www.nuget.org/packages/ClrHeapAllocationAnalyzer/, or from new packages 
listed here: https://github.com/dotnet/roslyn-analyzers/tree/main#main-analyzers

Check other perf-related analyzers that may be useful to us.

  was:
Enable analyzers from this group (HAA*, HHA*): 
https://github.com/dotnet/roslyn-analyzers/blob/main/src/PerformanceSensitiveAnalyzers/Microsoft.CodeAnalysis.PerformanceSensitiveAnalyzers.md.
They can come either from an old package 
https://www.nuget.org/packages/ClrHeapAllocationAnalyzer/, or from new packages 
listed here: https://github.com/dotnet/roslyn-analyzers/tree/main#main-analyzers


> .NET: Thin 3.0: Enable heap allocation analyzers
> 
>
> Key: IGNITE-20563
> URL: https://issues.apache.org/jira/browse/IGNITE-20563
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Minor
>  Labels: .NET, ignite-3
> Fix For: 3.0.0-beta2
>
>
> Enable analyzers from this group (HAA*, HHA*): 
> https://github.com/dotnet/roslyn-analyzers/blob/main/src/PerformanceSensitiveAnalyzers/Microsoft.CodeAnalysis.PerformanceSensitiveAnalyzers.md.
> They can come either from an old package 
> https://www.nuget.org/packages/ClrHeapAllocationAnalyzer/, or from new 
> packages listed here: 
> https://github.com/dotnet/roslyn-analyzers/tree/main#main-analyzers
> Check other perf-related analyzers that may be useful to us.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20563) .NET: Thin 3.0: Enable heap allocation analyzers

2023-10-04 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-20563:

Description: 
Enable analyzers from this group (HAA*, HHA*): 
https://github.com/dotnet/roslyn-analyzers/blob/main/src/PerformanceSensitiveAnalyzers/Microsoft.CodeAnalysis.PerformanceSensitiveAnalyzers.md.
They can come either from an old package 
https://www.nuget.org/packages/ClrHeapAllocationAnalyzer/, or from new packages 
listed here: https://github.com/dotnet/roslyn-analyzers/tree/main#main-analyzers

> .NET: Thin 3.0: Enable heap allocation analyzers
> 
>
> Key: IGNITE-20563
> URL: https://issues.apache.org/jira/browse/IGNITE-20563
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Minor
>  Labels: .NET, ignite-3
> Fix For: 3.0.0-beta2
>
>
> Enable analyzers from this group (HAA*, HHA*): 
> https://github.com/dotnet/roslyn-analyzers/blob/main/src/PerformanceSensitiveAnalyzers/Microsoft.CodeAnalysis.PerformanceSensitiveAnalyzers.md.
> They can come either from an old package 
> https://www.nuget.org/packages/ClrHeapAllocationAnalyzer/, or from new 
> packages listed here: 
> https://github.com/dotnet/roslyn-analyzers/tree/main#main-analyzers



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20563) .NET: Thin 3.0: Enable heap allocation analyzers

2023-10-04 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-20563:
---

 Summary: .NET: Thin 3.0: Enable heap allocation analyzers
 Key: IGNITE-20563
 URL: https://issues.apache.org/jira/browse/IGNITE-20563
 Project: Ignite
  Issue Type: Improvement
  Components: platforms, thin client
Affects Versions: 3.0.0-beta1
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 3.0.0-beta2






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20558) Test plan for zone storage profile filters

2023-10-04 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-20558:
-
Description: Need to prepare a test plan for storage profiles.

> Test plan for zone storage profile filters
> --
>
> Key: IGNITE-20558
> URL: https://issues.apache.org/jira/browse/IGNITE-20558
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Gusakov
>Assignee: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
>
> Need to prepare a test plan for storage profiles.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20558) Test plan for zone storage profile filters

2023-10-04 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-20558:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Test plan for zone storage profile filters
> --
>
> Key: IGNITE-20558
> URL: https://issues.apache.org/jira/browse/IGNITE-20558
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Gusakov
>Assignee: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20559) Return metastorage invokes in DistributionZoneManager#createMetastorageTopologyListener

2023-10-04 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin reassigned IGNITE-20559:


Assignee: Mirza Aliev

> Return metastorage invokes in 
> DistributionZoneManager#createMetastorageTopologyListener
> ---
>
> Key: IGNITE-20559
> URL: https://issues.apache.org/jira/browse/IGNITE-20559
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mirza Aliev
>Assignee: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>
> h3. *Motivation*
> There are meta storage invokes in DistributionZoneManager in the zone's 
> lifecycle. The futures of these invokes are ignored, so after a lifecycle 
> method completes, not all of its actions have actually completed. Therefore, 
> several invokes (for example, on createZone and alterZone) can be reordered. 
> Currently, the meta storage invokes happen in:
> # LogicalTopologyEventListener to update the logical topology.
> Also, we need to save {{nodeAttributes}} and {{topologyAugmentationMap}} in the MS.
> h3. *Definition of Done*
> Need to ensure event handling linearization. All immediate data nodes 
> recalculations must be returned to the event handler. Also, {{nodeAttributes}} 
> and {{topologyAugmentationMap}} must be saved in the MS, so we can use these 
> fields when recovering the DZM.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20317) Meta storage invokes are not completed when events are handled in DZM

2023-10-04 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin reassigned IGNITE-20317:


Assignee: Mirza Aliev

> Meta storage invokes are not completed when events are handled in DZM 
> --
>
> Key: IGNITE-20317
> URL: https://issues.apache.org/jira/browse/IGNITE-20317
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Uttsel
>Assignee: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>
> h3. *Motivation*
> There are meta storage invokes in DistributionZoneManager in the zone's 
> lifecycle. The futures of these invokes are ignored, so after a lifecycle 
> method completes, not all of its actions have actually completed. Therefore, 
> several invokes (for example, on createZone and alterZone) can be reordered. 
> Currently, the meta storage invokes happen in:
> # ZonesConfigurationListener#onCreate to init a zone.
> # ZonesConfigurationListener#onDelete to clean up the zone data.
> # DistributionZoneManager#onUpdateFilter to save data nodes in the meta 
> storage.
> # DistributionZoneManager#onUpdateScaleUp
> # DistributionZoneManager#onUpdateScaleDown
> -DistributionZoneRebalanceEngine#onUpdateReplicas to apdate assignment on 
> replicas update.-
> -LogicalTopologyEventListener to update logical topology.-
> -DistributionZoneRebalanceEngine#createDistributionZonesDataNodesListener 
> watch listener to update pending assignments.-
> h3. *Definition of Done*
> Need to ensure event handling linearization. All immediate data nodes 
> recalculation must be returned  to the event handler.
> h3. *Implementation Notes*
> * ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete, 
> DistributionZoneManager#onUpdateFilter and 
> DistributionZoneRebalanceEngine#onUpdateReplicas are invoked in configuration 
> listeners. So we can  just return the ms invoke future  from these methods 
> and it ensure, that this invoke will be completed within the current event 
> handling.
> * We cannnot return future from LogicalTopologyEventListener's methods. We 
> can ignore these futures. It has drawback: we can skip the topology update
> # topology=[A,B], dataNodes=[A,B], scaleUp=0, scaleDown=100
> # Node C was joined to the topology and left quickly and ms invokes to update 
> topology entry was reordered.
> # data nodes was not updated immediately to [A,B,C].
> We think that we can ignore this bug because eventually it doesn't break the 
> consistency of the date node. For this purpose we need to change the invoke 
> condition:
> `value(zonesLogicalTopologyVersionKey()).lt(longToBytes(newTopology.version()))`
>  instead of
> `value(zonesLogicalTopologyVersionKey()).eq(longToBytes(newTopology.version() 
> - 1))`
> * Need to return ms invoke futures from WatchListener#onUpdate method of the 
> data nodes listener.
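The effect of relaxing the invoke condition can be illustrated with a toy version guard in plain Java. This is not the real Ignite meta storage API; the class and method names are made up, and timestamps of the real conditions DSL are modeled as a plain long. The point is that a strict eq(version - 1) guard permanently drops an update that arrives out of order, while an lt(version) guard keeps the stored entry monotonic and simply discards the stale write.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical model of the topology-version guard, not Ignite code.
public class TopologyVersionGuard {
    private final AtomicLong storedVersion = new AtomicLong(0);

    /** Strict condition: applies only if stored == newVersion - 1. */
    public boolean invokeEq(long newVersion) {
        return storedVersion.compareAndSet(newVersion - 1, newVersion);
    }

    /** Relaxed condition: applies if stored < newVersion. */
    public synchronized boolean invokeLt(long newVersion) {
        if (storedVersion.get() < newVersion) {
            storedVersion.set(newVersion);
            return true;
        }
        return false;
    }

    public long stored() {
        return storedVersion.get();
    }

    public static void main(String[] args) {
        // Reordered delivery: the update for version 3 arrives before version 2.
        TopologyVersionGuard eqGuard = new TopologyVersionGuard();
        eqGuard.invokeEq(1);
        boolean v3Applied = eqGuard.invokeEq(3); // rejected: stored is 1, not 2
        eqGuard.invokeEq(2);
        // Under eq() the version-3 update is lost for good: stored stays at 2.
        System.out.println("eq: v3 applied=" + v3Applied + ", stored=" + eqGuard.stored());

        TopologyVersionGuard ltGuard = new TopologyVersionGuard();
        ltGuard.invokeLt(1);
        ltGuard.invokeLt(3);                     // accepted: 1 < 3
        boolean stale = ltGuard.invokeLt(2);     // stale update rejected: 3 >= 2
        System.out.println("lt: stale v2 applied=" + stale + ", stored=" + ltGuard.stored());
    }
}
```

Eventual consistency holds in the lt() variant because the entry can only move forward, which matches the reasoning above about tolerating the skipped intermediate topology.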



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20457) Verify commitTimestamp against enlisted partitions expiration timestamps

2023-10-04 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-20457:
-
Fix Version/s: 3.0.0-beta2

> Verify commitTimestamp against enlisted partitions expiration timestamps
> 
>
> Key: IGNITE-20457
> URL: https://issues.apache.org/jira/browse/IGNITE-20457
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Assignee: Alexander Lapin
>Priority: Blocker
>  Labels: ignite-3, tech-debt
> Fix For: 3.0.0-beta2
>
>
> On tx commit, it is required to check that the commit timestamp is less than 
> the expiration timestamps of all enlisted partitions.
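The required check amounts to a single comparison per enlisted partition. A minimal sketch, with timestamps modeled as plain longs rather than Ignite's hybrid timestamps, and with a made-up class name:

```java
import java.util.List;

// Hypothetical sketch of the commit-time validation, not Ignite code.
public class CommitTimestampCheck {
    /**
     * A commit is allowed only if the commit timestamp is strictly less than
     * the expiration timestamp of every enlisted partition.
     */
    public static boolean canCommit(long commitTs, List<Long> enlistedExpirations) {
        return enlistedExpirations.stream().allMatch(exp -> commitTs < exp);
    }

    public static void main(String[] args) {
        System.out.println(canCommit(100, List.of(150L, 200L))); // true
        System.out.println(canCommit(100, List.of(150L, 90L)));  // false: one expired
    }
}
```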



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20519) Add causality token of the last update of catalog descriptors to CatalogObjectDescriptor

2023-10-04 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-20519:
-
Fix Version/s: 3.0.0-beta2

> Add causality token of the last update of catalog descriptors to 
> CatalogObjectDescriptor
> 
>
> Key: IGNITE-20519
> URL: https://issues.apache.org/jira/browse/IGNITE-20519
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mirza Aliev
>Assignee: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> *Motivation*
> It could be useful to add the causality token of the last update to 
> {{CatalogObjectDescriptor}}. For example, this will let us call
> {{DistributionZoneManager#dataNodes(long causalityToken, int zoneId)}} for a 
> specified {{CatalogZoneDescriptor}}, so we can receive data nodes in 
> accordance with the correct version of the filter from {{CatalogZoneDescriptor}}.
> *Implementation notes*
> This could be done by enriching {{UpdateEntry#applyUpdate(Catalog 
> catalog)}} with a {{causalityToken}}, so we can propagate the {{causalityToken}} 
> to every {{UpdateEntry}} where we recreate a {{CatalogObjectDescriptor}} and 
> create a new version of the {{Catalog}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20358) Make distributed node storage config local

2023-10-04 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-20358:
-
Description: 
*Motivation*

At the moment, all {{*StorageEngineConfigurationSchema}} has the 
{{ConfigurationType.DISTRIBUTED}} type. But it is not the case anymore, each 
node can have different storage configurations by new design.

*Definition of done*
 - All {{*StorageEngineConfigurationSchema}} configurations moved to the 
{{ConfigurationType.LOCAL}} scope.

  was:
*Motivation*

At the moment, all {{*StorageEngineConfigurationSchema}} has the 
{{ConfigurationType.DISTRIBUTED}} type. But it is not the case anymore, each 
node can have the different storage configurations by new design.

*Definition of done*
- All {{*StorageEngineConfigurationSchema}} configurations moved to the 
{{ConfigurationType.LOCAL}} scope.


> Make distributed node storage config local
> --
>
> Key: IGNITE-20358
> URL: https://issues.apache.org/jira/browse/IGNITE-20358
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Gusakov
>Assignee: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> *Motivation*
> At the moment, all {{*StorageEngineConfigurationSchema}} has the 
> {{ConfigurationType.DISTRIBUTED}} type. But it is not the case anymore, each 
> node can have different storage configurations by new design.
> *Definition of done*
>  - All {{*StorageEngineConfigurationSchema}} configurations moved to the 
> {{ConfigurationType.LOCAL}} scope.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20358) Make distributed node storage config local

2023-10-04 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-20358:
-
Description: 
*Motivation*

At the moment, all {{*StorageEngineConfigurationSchema}} configurations have 
the {{ConfigurationType.DISTRIBUTED}} type. But this is no longer appropriate: 
under the new design, each node can have its own storage configuration.

*Definition of done*
 - All {{*StorageEngineConfigurationSchema}} configurations moved to the 
{{ConfigurationType.LOCAL}} scope.

  was:
*Motivation*

At the moment, all {{*StorageEngineConfigurationSchema}} has the 
{{ConfigurationType.DISTRIBUTED}} type. But it is not the case anymore, each 
node can have different storage configurations by new design.

*Definition of done*
 - All {{*StorageEngineConfigurationSchema}} configurations moved to the 
{{ConfigurationType.LOCAL}} scope.


> Make distributed node storage config local
> --
>
> Key: IGNITE-20358
> URL: https://issues.apache.org/jira/browse/IGNITE-20358
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Gusakov
>Assignee: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> *Motivation*
> At the moment, all {{*StorageEngineConfigurationSchema}} configurations have 
> the {{ConfigurationType.DISTRIBUTED}} type. But this is no longer 
> appropriate: under the new design, each node can have its own storage 
> configuration.
> *Definition of done*
>  - All {{*StorageEngineConfigurationSchema}} configurations moved to the 
> {{ConfigurationType.LOCAL}} scope.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20357) Design node config, zone and table storage relations

2023-10-04 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-20357:
-
Fix Version/s: 3.0.0-beta2

> Design node config, zone and table storage relations
> 
>
> Key: IGNITE-20357
> URL: https://issues.apache.org/jira/browse/IGNITE-20357
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Gusakov
>Assignee: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
> Attachments: Copy of Unify storage configurations for the 
> table_zone_node levels.pdf
>
>
> *Motivation*
> We need to clarify the UX around table storage, zone and node configs in 
> light of zone-based colocation.
> *Definition of done*
> The user has a simple and predictable flow to:
> - Configure node storage in terms of which tables, with which storage 
> requirements, can use this node.
> - Specify, on zone creation, the nodes whose table storages can be part of 
> this zone.
> - Specify, on table creation, which storage the table needs, and receive an 
> error as soon as possible if the chosen zone can't guarantee that its nodes 
> have this storage.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20545) Test IgniteRpcTest.testDisconnect is flaky on TC

2023-10-04 Thread Sergey Chugunov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Chugunov updated IGNITE-20545:
-
Description: 
Test failed recently on main branch ([failed 
run|https://ci.ignite.apache.org/viewLog.html?buildId=7536503=ApacheIgnite3xGradle_Test_RunAllTests]),
 there is an assertion in test logs:
{code:java}
org.opentest4j.AssertionFailedError: expected: <true> but was: <false>
at 
app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
at 
app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:31)
at app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:180)
at 
app//org.apache.ignite.raft.jraft.rpc.AbstractRpcTest.testDisconnect(AbstractRpcTest.java:128){code}

Test history shows that it fails occasionally in different branches with the 
same error in logs.

Looks like there is some kind of race between events in test logic.

  was:
Test failed recently on main branch ([failed 
run|https://ci.ignite.apache.org/viewLog.html?buildId=7536503=ApacheIgnite3xGradle_Test_RunAllTests]),
 there is an assertion in test logs:


org.opentest4j.AssertionFailedError: expected:  but was: 
at 
app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
at 
app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:31)
at app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:180)
at 
app//org.apache.ignite.raft.jraft.rpc.AbstractRpcTest.testDisconnect(AbstractRpcTest.java:128)
Test history shows that it fails occasionally in different branches with the 
same error in logs.

Looks like there is some kind of race between events in test logic.


> Test IgniteRpcTest.testDisconnect is flaky on TC
> 
>
> Key: IGNITE-20545
> URL: https://issues.apache.org/jira/browse/IGNITE-20545
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Chugunov
>Priority: Major
>  Labels: ignite-3
>
> Test failed recently on main branch ([failed 
> run|https://ci.ignite.apache.org/viewLog.html?buildId=7536503=ApacheIgnite3xGradle_Test_RunAllTests]),
>  there is an assertion in test logs:
> {code:java}
> org.opentest4j.AssertionFailedError: expected: <true> but was: <false>
> at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
> at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
> at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
> at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
> at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:31)
> at app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:180)
> at 
> app//org.apache.ignite.raft.jraft.rpc.AbstractRpcTest.testDisconnect(AbstractRpcTest.java:128){code}
> Test history shows that it fails occasionally in different branches with the 
> same error in logs.
> Looks like there is some kind of race between events in test logic.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20342) Rollback transaction for SQL execution issues

2023-10-04 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17771805#comment-17771805
 ] 

Igor Sapego commented on IGNITE-20342:
--

Changes in ODBC tests look good to me.

> Rollback transaction for SQL execution issues
> -
>
> Key: IGNITE-20342
> URL: https://issues.apache.org/jira/browse/IGNITE-20342
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Yury Gerzhedovich
>Assignee: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> During execution of any data modification we can get a runtime error, for 
> example division by zero, which leads to cancellation of the execution. 
> Right now we don't roll back the transaction in that case, so part of the 
> modification may be applied despite the error.
> Ideally only the DML statement would be rolled back, but right now we don't 
> support savepoints in the transaction protocol. So let's roll back any type 
> of transaction, explicit or implicit, for any DML statement when an error 
> occurs.
> A test that demonstrates one case of the problem is 
> org.apache.ignite.internal.sql.api.ItSqlSynchronousApiTest#runtimeErrorInTransaction
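The proposed behavior can be sketched as follows. This is a hedged illustration only: {{Transaction}} here is a stand-in interface, not Ignite's actual transaction API, and {{executeDml}} is a made-up helper.

```java
// Illustrative sketch: if a DML statement fails at runtime, roll back the
// whole transaction instead of leaving partial modifications applied.
// Without savepoint support there is no way to undo only the failed statement.
public class DmlErrorHandling {
    interface Transaction {
        void commit();
        void rollback();
    }

    /** Runs a DML action; commits on success, rolls back on any runtime error. */
    public static void executeDml(Transaction tx, Runnable dml) {
        try {
            dml.run();
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback(); // applies to both explicit and implicit transactions
            throw e;       // the original error is still surfaced to the caller
        }
    }

    public static void main(String[] args) {
        final boolean[] rolledBack = {false};
        Transaction tx = new Transaction() {
            @Override public void commit() { }
            @Override public void rollback() { rolledBack[0] = true; }
        };
        try {
            executeDml(tx, () -> { throw new ArithmeticException("/ by zero"); });
        } catch (ArithmeticException expected) {
            // the runtime error is rethrown after the rollback
        }
        System.out.println("rolled back: " + rolledBack[0]); // rolled back: true
    }
}
```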



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-15556) Drop SchemaBuilder API.

2023-10-04 Thread Yury Gerzhedovich (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-15556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17771795#comment-17771795
 ] 

Yury Gerzhedovich commented on IGNITE-15556:


[~amashenkov] LGTM!

> Drop SchemaBuilder API.
> ---
>
> Key: IGNITE-15556
> URL: https://issues.apache.org/jira/browse/IGNITE-15556
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Andrey Mashenkov
>Assignee: Andrey Mashenkov
>Priority: Major
>  Labels: UX, ignite-3, tech-debt
> Fix For: 3.0.0-beta2
>
>
> Drop SchemaBuilders API as schema manipulation will be available only via SQL



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20560) It's possible to execute commands on a finished transaction under certain circumstances

2023-10-04 Thread Jira


 [ 
https://issues.apache.org/jira/browse/IGNITE-20560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

 Kirill Sizov updated IGNITE-20560:
---
Description: 
If a cleanup operation crashes, it does not affect the transaction it was 
called for, since the transaction has already been finished.
However, under certain circumstances *the validation that prevents commands 
from being executed on a finished transaction can be broken.*

The issue is that we have 
{{PartitionReplicaListener.TxCleanupReadyFutureList.state}} that duplicates 
local txState, and is updated in the cleanup command handler.

*Details*
{{PartitionReplicaListener.TxCleanupReadyFutureList.state}} is
 * +updated+ in {{PartitionReplicaListener.processTxCleanupAction}} and
 * +read+ in {{PartitionReplicaListener.appendTxCommand}}.

If the update has not been called because of a crash, the code in 
{{appendTxCommand}}:
{code:java}
   txCleanupReadyFutures.compute(txId, (id, txOps) -> {
if (txOps == null) {
txOps = new TxCleanupReadyFutureList();
}

if (isFinalState(txOps.state)) {
fut.completeExceptionally(
new 
TransactionException(TX_FAILED_READ_WRITE_OPERATION_ERR, "Transaction is 
already finished."));
} else {
txOps.futures.computeIfAbsent(cmdType, type -> new 
ArrayList<>()).add(fut);
}

return txOps;
});{code}
will still read {{txOps.state}} as {{PENDING}} and will allow this command to 
execute instead of throwing a {{TransactionException}}.

 

*_Please note there are tests muted with this task._*

  was:
If a cleanup operation crashes, it does not affect the transaction it was for 
called since the transaction has been finished already.
However under certain circumstances we may *get the validation that prevents 
commands from being executed on a finished transaction broken.*

The issue is that we have 
{{PartitionReplicaListener.TxCleanupReadyFutureList.state}} that duplicates 
local txState, and is updated in the cleanup command handler.

*Details*
{{PartitionReplicaListener.TxCleanupReadyFutureList.state}} is 
* +updated+ in {{PartitionReplicaListener.processTxCleanupAction}} and 
* +read+ in {{PartitionReplicaListener.appendTxCommand}}. 

If the update has not been called because of a crash, the code in 
{{appendTxCommand}}:
{code:java}
   txCleanupReadyFutures.compute(txId, (id, txOps) -> {
if (txOps == null) {
txOps = new TxCleanupReadyFutureList();
}

if (isFinalState(txOps.state)) {
fut.completeExceptionally(
new 
TransactionException(TX_FAILED_READ_WRITE_OPERATION_ERR, "Transaction is 
already finished."));
} else {
txOps.futures.computeIfAbsent(cmdType, type -> new 
ArrayList<>()).add(fut);
}

return txOps;
});{code}
will still read {{txOps.state}} as {{PENDING}} and will allow to execute this 
command instead of throwing a {{TransactionException}}.



> It's possible to execute commands on a finished transaction under certain 
> circumstances
> ---
>
> Key: IGNITE-20560
> URL: https://issues.apache.org/jira/browse/IGNITE-20560
> Project: Ignite
>  Issue Type: Task
>Reporter:  Kirill Sizov
>Priority: Major
>  Labels: ignite-3
>
> If a cleanup operation crashes, it does not affect the transaction it was 
> called for, since the transaction has already been finished.
> However, under certain circumstances *the validation that prevents commands 
> from being executed on a finished transaction can be broken.*
> The issue is that we have 
> {{PartitionReplicaListener.TxCleanupReadyFutureList.state}} that duplicates 
> local txState, and is updated in the cleanup command handler.
> *Details*
> {{PartitionReplicaListener.TxCleanupReadyFutureList.state}} is
>  * +updated+ in {{PartitionReplicaListener.processTxCleanupAction}} and
>  * +read+ in {{PartitionReplicaListener.appendTxCommand}}.
> If the update has not been called because of a crash, the code in 
> {{appendTxCommand}}:
> {code:java}
>txCleanupReadyFutures.compute(txId, (id, txOps) -> {
> if (txOps == null) {
> txOps = new TxCleanupReadyFutureList();
> }
> if (isFinalState(txOps.state)) {
> fut.completeExceptionally(
> new 
> TransactionException(TX_FAILED_READ_WRITE_OPERATION_ERR, "Transaction is 
> already finished."));
> } else {
> txOps.futures.computeIfAbsent(cmdType, type -> 

[jira] [Assigned] (IGNITE-20562) Documentation: Review and update cpp/DEVNOTES.md

2023-10-04 Thread Alex Levitski (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Levitski reassigned IGNITE-20562:
--

Assignee: Alex Levitski

> Documentation: Review and update cpp/DEVNOTES.md
> 
>
> Key: IGNITE-20562
> URL: https://issues.apache.org/jira/browse/IGNITE-20562
> Project: Ignite
>  Issue Type: Improvement
>  Components: documentation, platforms
>Reporter: Igor Sapego
>Assignee: Alex Levitski
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> We need someone with good English skills to review and, if necessary, fix 
> the 
> [modules/platforms/cpp/DEVNOTES.md|https://github.com/apache/ignite-3/blob/main/modules/platforms/cpp/DEVNOTES.md]
>  file in the Ignite 3 repo.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20562) Documentation: Review and update cpp/DEVNOTES.md

2023-10-04 Thread Igor Sapego (Jira)
Igor Sapego created IGNITE-20562:


 Summary: Documentation: Review and update cpp/DEVNOTES.md
 Key: IGNITE-20562
 URL: https://issues.apache.org/jira/browse/IGNITE-20562
 Project: Ignite
  Issue Type: Improvement
  Components: documentation, platforms
Reporter: Igor Sapego
 Fix For: 3.0.0-beta2


We need someone with good English skills to review and, if necessary, fix the 
[modules/platforms/cpp/DEVNOTES.md|https://github.com/apache/ignite-3/blob/main/modules/platforms/cpp/DEVNOTES.md]
 file in the Ignite 3 repo.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-16578) Sql. Implement check of constraints on validation phase.

2023-10-04 Thread Evgeny Stanilovsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgeny Stanilovsky resolved IGNITE-16578.
-
Resolution: Fixed

> Sql. Implement check of constraints on validation phase.
> ---
>
> Key: IGNITE-16578
> URL: https://issues.apache.org/jira/browse/IGNITE-16578
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Evgeny Stanilovsky
>Assignee: Evgeny Stanilovsky
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Partial PK update possibilities:
> {noformat}
> CREATE TABLE test1 (k1 int, k2 int, a int, b int, CONSTRAINT PK PRIMARY KEY 
> (k1, k2));
> INSERT INTO test1 (k2, b) VALUES (1, 1); <-- need to be rejected
> CREATE TABLE test2 (k1 int DEFAULT 0, k2 int, a int, b int, CONSTRAINT PK 
> PRIMARY KEY (k1, k2));
> INSERT INTO test2 (k2, b) VALUES (1, 1);  <-- it's all ok there.
> {noformat}
> In the first case we currently get the trace below; it seems helpful to 
> check constraints in the validation phase, before RowAssembler is called.
> {noformat}
> class org.apache.ignite.lang.IgniteInternalException: Unexpected exception
>   at 
> org.apache.ignite.internal.sql.engine.exec.ExecutionContext.lambda$execute$0(ExecutionContext.java:300)
>   at 
> org.apache.ignite.internal.sql.engine.exec.QueryTaskExecutorImpl.lambda$execute$0(QueryTaskExecutorImpl.java:75)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> Caused by: class org.apache.ignite.internal.schema.SchemaMismatchException: 
> Failed to set column (null was passed, but column is not nullable): Column 
> [schemaIndex=0, columnOrder=0, name=K1, type=NativeType [name=INT32, 
> sizeInBytes=4, fixed=true], nullable=false]
>   at 
> org.apache.ignite.internal.schema.row.RowAssembler.appendNull(RowAssembler.java:342)
>   at 
> org.apache.ignite.internal.schema.row.RowAssembler.writeValue(RowAssembler.java:166)
>   at 
> org.apache.ignite.internal.sql.engine.schema.IgniteTableImpl.insertTuple(IgniteTableImpl.java:310)
>   at 
> org.apache.ignite.internal.sql.engine.schema.IgniteTableImpl.toModifyRow(IgniteTableImpl.java:273)
>   at 
> org.apache.ignite.internal.sql.engine.exec.rel.ModifyNode.push(ModifyNode.java:122)
>   at 
> org.apache.ignite.internal.sql.engine.exec.rel.ProjectNode.push(ProjectNode.java:71)
>   at 
> org.apache.ignite.internal.sql.engine.exec.rel.ScanNode.push(ScanNode.java:113)
> {noformat}
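The validation-phase check proposed above boils down to: before assembling a row, reject the statement if any non-nullable column (such as a primary key part) would receive null and has no default. The sketch below is a hedged illustration with made-up types ({{Column}}, {{violations}}), not Ignite's schema classes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a pre-RowAssembler constraint check, not Ignite code.
public class KeyConstraintCheck {
    record Column(String name, boolean nullable, Object defaultValue) { }

    /** Returns the names of columns that would violate NOT NULL. */
    static List<String> violations(List<Column> columns, Map<String, ?> values) {
        List<String> bad = new ArrayList<>();
        for (Column c : columns) {
            Object v = values.containsKey(c.name()) ? values.get(c.name()) : c.defaultValue();
            if (v == null && !c.nullable()) {
                bad.add(c.name());
            }
        }
        return bad;
    }

    public static void main(String[] args) {
        // test1: K1 is a PK column without a default.
        List<Column> test1 = List.of(
                new Column("K1", false, null),
                new Column("K2", false, null),
                new Column("A", true, null),
                new Column("B", true, null));
        // INSERT INTO test1 (k2, b) VALUES (1, 1) -> K1 is missing: reject.
        System.out.println(violations(test1, Map.of("K2", 1, "B", 1))); // [K1]

        // test2: K1 has DEFAULT 0, so the same insert is fine.
        List<Column> test2 = List.of(
                new Column("K1", false, 0),
                new Column("K2", false, null),
                new Column("A", true, null),
                new Column("B", true, null));
        System.out.println(violations(test2, Map.of("K2", 1, "B", 1))); // []
    }
}
```

Running such a check during validation would turn the internal SchemaMismatchException above into an early, user-facing constraint error.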



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20560) It's possible to execute commands on a finished transaction under certain circumstances

2023-10-04 Thread Jira


 [ 
https://issues.apache.org/jira/browse/IGNITE-20560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

 Kirill Sizov updated IGNITE-20560:
---
Description: 
If a cleanup operation crashes, it does not affect the transaction it was 
called for, since the transaction has already been finished.
However, under certain circumstances *the validation that prevents commands 
from being executed on a finished transaction can be broken.*

The issue is that we have 
{{PartitionReplicaListener.TxCleanupReadyFutureList.state}} that duplicates 
local txState, and is updated in the cleanup command handler.

*Details*
{{PartitionReplicaListener.TxCleanupReadyFutureList.state}} is 
* +updated+ in {{PartitionReplicaListener.processTxCleanupAction}} and 
* +read+ in {{PartitionReplicaListener.appendTxCommand}}. 

If the update has not been called because of a crash, the code in 
{{appendTxCommand}}:
{code:java}
   txCleanupReadyFutures.compute(txId, (id, txOps) -> {
if (txOps == null) {
txOps = new TxCleanupReadyFutureList();
}

if (isFinalState(txOps.state)) {
fut.completeExceptionally(
new 
TransactionException(TX_FAILED_READ_WRITE_OPERATION_ERR, "Transaction is 
already finished."));
} else {
txOps.futures.computeIfAbsent(cmdType, type -> new 
ArrayList<>()).add(fut);
}

return txOps;
});{code}
will still read {{txOps.state}} as {{PENDING}} and will allow this command to 
execute instead of throwing a {{TransactionException}}.


  was:
If a cleanup operation crashes, it does not affect the transaction it was for 
called since the transaction has been finished already.
However under certain circumstances we may *get the validation that prevents 
commands from being executed on a finished transaction broken.*

The issue is that we have 
{{PartitionReplicaListener.TxCleanupReadyFutureList.state}} that duplicates 
local txState, and is updated in the cleanup command handler.

*Details*
{{PartitionReplicaListener.TxCleanupReadyFutureList.state}} is +updated+ in 
{{PartitionReplicaListener.processTxCleanupAction}} and +read+ in 
{{PartitionReplicaListener.appendTxCommand}}. 

If the update has not been called because of a crash, the code in 
{{appendTxCommand}}:
{code:java}
   txCleanupReadyFutures.compute(txId, (id, txOps) -> {
if (txOps == null) {
txOps = new TxCleanupReadyFutureList();
}

if (isFinalState(txOps.state)) {
fut.completeExceptionally(
new 
TransactionException(TX_FAILED_READ_WRITE_OPERATION_ERR, "Transaction is 
already finished."));
} else {
txOps.futures.computeIfAbsent(cmdType, type -> new 
ArrayList<>()).add(fut);
}

return txOps;
});{code}
will still read {{txOps.state}} as {{PENDING}} and will allow to execute this 
command instead of throwing a {{TransactionException}}.



> It's possible to execute commands on a finished transaction under certain 
> circumstances
> ---
>
> Key: IGNITE-20560
> URL: https://issues.apache.org/jira/browse/IGNITE-20560
> Project: Ignite
>  Issue Type: Task
>Reporter:  Kirill Sizov
>Priority: Major
>  Labels: ignite-3
>
> If a cleanup operation crashes, it does not affect the transaction it was 
> called for, since the transaction has already been finished.
> However, under certain circumstances *the validation that prevents commands 
> from being executed on a finished transaction can be broken.*
> The issue is that we have 
> {{PartitionReplicaListener.TxCleanupReadyFutureList.state}} that duplicates 
> local txState, and is updated in the cleanup command handler.
> *Details*
> {{PartitionReplicaListener.TxCleanupReadyFutureList.state}} is 
> * +updated+ in {{PartitionReplicaListener.processTxCleanupAction}} and 
> * +read+ in {{PartitionReplicaListener.appendTxCommand}}. 
> If the update has not been called because of a crash, the code in 
> {{appendTxCommand}}:
> {code:java}
>txCleanupReadyFutures.compute(txId, (id, txOps) -> {
> if (txOps == null) {
> txOps = new TxCleanupReadyFutureList();
> }
> if (isFinalState(txOps.state)) {
> fut.completeExceptionally(
> new 
> TransactionException(TX_FAILED_READ_WRITE_OPERATION_ERR, "Transaction is 
> already finished."));
> } else {
> txOps.futures.computeIfAbsent(cmdType, type -> new 
> ArrayList<>()).add(fut);
> }
> return 

[jira] [Updated] (IGNITE-20317) Meta storage invokes are not completed when events are handled in DZM

2023-10-04 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20317:
-
Epic Link: IGNITE-20166

> Meta storage invokes are not completed when events are handled in DZM 
> --
>
> Key: IGNITE-20317
> URL: https://issues.apache.org/jira/browse/IGNITE-20317
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
>
> h3. *Motivation*
> There are meta storage invokes in DistributionZoneManager's zone lifecycle. 
> The futures of these invokes are ignored, so after a lifecycle method 
> completes, not all of its actions are actually completed. Therefore several 
> invokes, for example on createZone and alterZone, can be reordered. 
> Currently the meta storage invokes happen in:
> # ZonesConfigurationListener#onCreate to init a zone.
> # ZonesConfigurationListener#onDelete to clean up the zone data.
> # DistributionZoneManager#onUpdateFilter to save data nodes in the meta 
> storage.
> # DistributionZoneManager#onUpdateScaleUp
> # DistributionZoneManager#onUpdateScaleDown
> -DistributionZoneRebalanceEngine#onUpdateReplicas to update assignments on 
> replicas update.-
> -LogicalTopologyEventListener to update logical topology.-
> -DistributionZoneRebalanceEngine#createDistributionZonesDataNodesListener 
> watch listener to update pending assignments.-
> h3. *Definition of Done*
> Event handling must be linearized. All futures of immediate data nodes 
> recalculation must be returned to the event handler.
> h3. *Implementation Notes*
> * ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete, 
> DistributionZoneManager#onUpdateFilter and 
> DistributionZoneRebalanceEngine#onUpdateReplicas are invoked in configuration 
> listeners, so we can just return the meta storage invoke future from these 
> methods; that ensures the invoke completes within the current event handling.
> * We cannot return a future from LogicalTopologyEventListener's methods, so 
> we can only ignore these futures. This has a drawback: we can skip a 
> topology update:
> # topology=[A,B], dataNodes=[A,B], scaleUp=0, scaleDown=100
> # Node C joined the topology and left quickly, and the meta storage invokes 
> updating the topology entry were reordered.
> # Data nodes were not updated immediately to [A,B,C].
> We think we can ignore this bug because it doesn't break the eventual 
> consistency of the data nodes. For this purpose we need to change the invoke 
> condition to
> `value(zonesLogicalTopologyVersionKey()).lt(longToBytes(newTopology.version()))`
>  instead of
> `value(zonesLogicalTopologyVersionKey()).eq(longToBytes(newTopology.version() 
> - 1))`
> * Need to return meta storage invoke futures from the WatchListener#onUpdate 
> method of the data nodes listener.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20559) Return metastorage invokes in DistributionZoneManager#createMetastorageTopologyListener

2023-10-04 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20559:
-
Epic Link: IGNITE-20166

> Return metastorage invokes in 
> DistributionZoneManager#createMetastorageTopologyListener
> ---
>
> Key: IGNITE-20559
> URL: https://issues.apache.org/jira/browse/IGNITE-20559
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>
> h3. *Motivation*
> There are meta storage invokes in DistributionZoneManager in a zone's 
> lifecycle. The futures of these invokes are ignored, so after a lifecycle 
> method completes, not all of its actions are actually completed. Therefore 
> several invokes, for example on createZone and alterZone, can be reordered. 
> Currently it does the meta storage invokes in:
> # LogicalTopologyEventListener to update logical topology.
> Also we need to save {{nodeAttributes}} and {{topologyAugmentationMap}} in MS.
> h3. *Definition of Done*
> Need to ensure event handling linearization. All immediate data node 
> recalculations must be returned to the event handler. Also 
> {{nodeAttributes}} and {{topologyAugmentationMap}} must be saved in MS, so 
> we can use these fields when recovering the DZM.





[jira] [Updated] (IGNITE-20561) Change condition for DistributionZonesUtil#triggerKeyConditionForZonesChanges to use ConditionType#TOMBSTONE

2023-10-04 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20561:
-
Epic Link: IGNITE-20166

> Change condition for DistributionZonesUtil#triggerKeyConditionForZonesChanges 
> to use  ConditionType#TOMBSTONE
> -
>
> Key: IGNITE-20561
> URL: https://issues.apache.org/jira/browse/IGNITE-20561
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>
> We need to use {{ConditionType#TOMBSTONE}} in 
> {{DistributionZonesUtil#triggerKeyConditionForZonesChanges}} when we 
> initialise keys for zones in MS





[jira] [Updated] (IGNITE-20560) It's possible to execute commands on a finished transaction under certain circumstances

2023-10-04 Thread Jira


 [ 
https://issues.apache.org/jira/browse/IGNITE-20560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

 Kirill Sizov updated IGNITE-20560:
---
Description: 
If a cleanup operation crashes, it does not affect the transaction it was 
called for, since the transaction has already been finished.
However, under certain circumstances *the validation that prevents commands 
from being executed on a finished transaction may be broken.*

The issue is that we have 
{{PartitionReplicaListener.TxCleanupReadyFutureList.state}} that duplicates 
local txState, and is updated in the cleanup command handler.

*Details*
{{PartitionReplicaListener.TxCleanupReadyFutureList.state}} is +updated+ in 
{{PartitionReplicaListener.processTxCleanupAction}} and +read+ in 
{{PartitionReplicaListener.appendTxCommand}}. 

If the update has not been called because of a crash, the code in 
{{appendTxCommand}}:
{code:java}
txCleanupReadyFutures.compute(txId, (id, txOps) -> {
    if (txOps == null) {
        txOps = new TxCleanupReadyFutureList();
    }

    if (isFinalState(txOps.state)) {
        fut.completeExceptionally(
                new TransactionException(TX_FAILED_READ_WRITE_OPERATION_ERR,
                        "Transaction is already finished."));
    } else {
        txOps.futures.computeIfAbsent(cmdType, type -> new ArrayList<>()).add(fut);
    }

    return txOps;
});
{code}
will still read {{txOps.state}} as {{PENDING}} and will allow this command to 
execute instead of throwing a {{TransactionException}}.
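The race can be reproduced with a minimal, self-contained sketch (the `TxCommandGate` class, its `TxState` enum, and the string commands below are illustrative stand-ins, not the actual Ignite types):

```java
// Hypothetical sketch of the compute-based guard in appendTxCommand: if the
// cleanup handler crashes before flipping the local state, the check keeps
// seeing PENDING and wrongly lets new commands through.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class TxCommandGate {
    enum TxState { PENDING, COMMITTED, ABORTED }

    static final class TxOps {
        TxState state = TxState.PENDING;
        final List<String> pendingCommands = new ArrayList<>();
    }

    private final Map<Long, TxOps> txCleanupReadyFutures = new ConcurrentHashMap<>();

    /** Returns true if the command was enqueued, false if the tx is already finished. */
    boolean appendTxCommand(long txId, String cmd) {
        boolean[] accepted = {false};
        txCleanupReadyFutures.compute(txId, (id, txOps) -> {
            if (txOps == null) {
                txOps = new TxOps();
            }
            if (txOps.state == TxState.COMMITTED || txOps.state == TxState.ABORTED) {
                accepted[0] = false; // real code completes the future exceptionally
            } else {
                txOps.pendingCommands.add(cmd);
                accepted[0] = true;
            }
            return txOps;
        });
        return accepted[0];
    }

    /** Normally called by the cleanup handler; skipped if the cleanup crashes. */
    void markFinished(long txId, TxState finalState) {
        txCleanupReadyFutures.compute(txId, (id, txOps) -> {
            if (txOps == null) {
                txOps = new TxOps();
            }
            txOps.state = finalState;
            return txOps;
        });
    }
}
```

If `markFinished` is never called because the cleanup handler crashed, the compute-based check keeps seeing {{PENDING}} and accepts commands for a transaction that is in fact finished.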


  was:
If a cleanup operation crashes, it does not affect the transaction it was 
called for, since the transaction has already been finished.
However, under certain circumstances the validation that prevents commands 
from being executed on a finished transaction may be *broken*. 

The issue is that we have 
{{PartitionReplicaListener.TxCleanupReadyFutureList.state}} that duplicates 
local txState, and is updated in the cleanup command handler:
{code:java}

{code}




> It's possible to execute commands on a finished transaction under certain 
> circumstances
> ---
>
> Key: IGNITE-20560
> URL: https://issues.apache.org/jira/browse/IGNITE-20560
> Project: Ignite
>  Issue Type: Task
>Reporter:  Kirill Sizov
>Priority: Major
>  Labels: ignite-3
>
> If a cleanup operation crashes, it does not affect the transaction it was 
> called for, since the transaction has already been finished.
> However, under certain circumstances *the validation that prevents commands 
> from being executed on a finished transaction may be broken.*
> The issue is that we have 
> {{PartitionReplicaListener.TxCleanupReadyFutureList.state}} that duplicates 
> local txState, and is updated in the cleanup command handler.
> *Details*
> {{PartitionReplicaListener.TxCleanupReadyFutureList.state}} is +updated+ in 
> {{PartitionReplicaListener.processTxCleanupAction}} and +read+ in 
> {{PartitionReplicaListener.appendTxCommand}}. 
> If the update has not been called because of a crash, the code in 
> {{appendTxCommand}}:
> {code:java}
> txCleanupReadyFutures.compute(txId, (id, txOps) -> {
>     if (txOps == null) {
>         txOps = new TxCleanupReadyFutureList();
>     }
>     if (isFinalState(txOps.state)) {
>         fut.completeExceptionally(
>                 new TransactionException(TX_FAILED_READ_WRITE_OPERATION_ERR,
>                         "Transaction is already finished."));
>     } else {
>         txOps.futures.computeIfAbsent(cmdType, type -> new ArrayList<>()).add(fut);
>     }
>     return txOps;
> });
> {code}
> will still read {{txOps.state}} as {{PENDING}} and will allow this command 
> to execute instead of throwing a {{TransactionException}}.





[jira] [Updated] (IGNITE-20559) Return metastorage invokes in DistributionZoneManager#createMetastorageTopologyListener

2023-10-04 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20559:
-
Description: 
h3. *Motivation*
There are meta storage invokes in DistributionZoneManager in a zone's 
lifecycle. The futures of these invokes are ignored, so after a lifecycle 
method completes, not all of its actions are actually completed. Therefore 
several invokes, for example on createZone and alterZone, can be reordered. 
Currently it does the meta storage invokes in:

# LogicalTopologyEventListener to update logical topology.

Also we need to save {{nodeAttributes}} and {{topologyAugmentationMap}} in MS.


h3. *Definition of Done*
Need to ensure event handling linearization. All immediate data node 
recalculations must be returned to the event handler. Also {{nodeAttributes}} 
and {{topologyAugmentationMap}} must be saved in MS, so we can use these 
fields when recovering the DZM.

  was:
h3. *Motivation*
There are meta storage invokes in DistributionZoneManager in a zone's 
lifecycle. The futures of these invokes are ignored, so after a lifecycle 
method completes, not all of its actions are actually completed. Therefore 
several invokes, for example on createZone and alterZone, can be reordered. 
Currently it does the meta storage invokes in:

# LogicalTopologyEventListener to update logical topology.

Also we need to save {{nodeAttributes}} and {{topologyAugmentationMap}} in MS.


h3. *Definition of Done*
Need to ensure event handling linearization. All immediate data node 
recalculations must be returned to the event handler. 


> Return metastorage invokes in 
> DistributionZoneManager#createMetastorageTopologyListener
> ---
>
> Key: IGNITE-20559
> URL: https://issues.apache.org/jira/browse/IGNITE-20559
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>
> h3. *Motivation*
> There are meta storage invokes in DistributionZoneManager in a zone's 
> lifecycle. The futures of these invokes are ignored, so after a lifecycle 
> method completes, not all of its actions are actually completed. Therefore 
> several invokes, for example on createZone and alterZone, can be reordered. 
> Currently it does the meta storage invokes in:
> # LogicalTopologyEventListener to update logical topology.
> Also we need to save {{nodeAttributes}} and {{topologyAugmentationMap}} in MS.
> h3. *Definition of Done*
> Need to ensure event handling linearization. All immediate data node 
> recalculations must be returned to the event handler. Also 
> {{nodeAttributes}} and {{topologyAugmentationMap}} must be saved in MS, so 
> we can use these fields when recovering the DZM.





[jira] [Created] (IGNITE-20561) Change condition for DistributionZonesUtil#triggerKeyConditionForZonesChanges to use ConditionType#TOMBSTONE

2023-10-04 Thread Mirza Aliev (Jira)
Mirza Aliev created IGNITE-20561:


 Summary: Change condition for 
DistributionZonesUtil#triggerKeyConditionForZonesChanges to use  
ConditionType#TOMBSTONE
 Key: IGNITE-20561
 URL: https://issues.apache.org/jira/browse/IGNITE-20561
 Project: Ignite
  Issue Type: Bug
Reporter: Mirza Aliev


We need to use {{ConditionType#TOMBSTONE}} in 
{{DistributionZonesUtil#triggerKeyConditionForZonesChanges}} when we initialise 
keys for zones in MS





[jira] [Updated] (IGNITE-20559) Return metastorage invokes in DistributionZoneManager#createMetastorageTopologyListener

2023-10-04 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20559:
-
Description: 
h3. *Motivation*
There are meta storage invokes in DistributionZoneManager in a zone's 
lifecycle. The futures of these invokes are ignored, so after a lifecycle 
method completes, not all of its actions are actually completed. Therefore 
several invokes, for example on createZone and alterZone, can be reordered. 
Currently it does the meta storage invokes in:

# LogicalTopologyEventListener to update logical topology.

Also we need to save {{nodeAttributes}} and {{topologyAugmentationMap}} in MS.


h3. *Definition of Done*
Need to ensure event handling linearization. All immediate data node 
recalculations must be returned to the event handler. 

> Return metastorage invokes in 
> DistributionZoneManager#createMetastorageTopologyListener
> ---
>
> Key: IGNITE-20559
> URL: https://issues.apache.org/jira/browse/IGNITE-20559
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>
> h3. *Motivation*
> There are meta storage invokes in DistributionZoneManager in a zone's 
> lifecycle. The futures of these invokes are ignored, so after a lifecycle 
> method completes, not all of its actions are actually completed. Therefore 
> several invokes, for example on createZone and alterZone, can be reordered. 
> Currently it does the meta storage invokes in:
> # LogicalTopologyEventListener to update logical topology.
> Also we need to save {{nodeAttributes}} and {{topologyAugmentationMap}} in MS.
> h3. *Definition of Done*
> Need to ensure event handling linearization. All immediate data node 
> recalculations must be returned to the event handler. 





[jira] [Created] (IGNITE-20560) It's possible to execute commands on a finished transaction under certain circumstances

2023-10-04 Thread Jira
 Kirill Sizov created IGNITE-20560:
--

 Summary: It's possible to execute commands on a finished 
transaction under certain circumstances
 Key: IGNITE-20560
 URL: https://issues.apache.org/jira/browse/IGNITE-20560
 Project: Ignite
  Issue Type: Task
Reporter:  Kirill Sizov


If a cleanup operation crashes, it does not affect the transaction it was 
called for, since the transaction has already been finished.
However, under certain circumstances the validation that prevents commands 
from being executed on a finished transaction may be *broken*. 

The issue is that we have 
{{PartitionReplicaListener.TxCleanupReadyFutureList.state}} that duplicates 
local txState, and is updated in the cleanup command handler:
{code:java}

{code}







[jira] [Created] (IGNITE-20559) Return metastorage invokes in DistributionZoneManager#createMetastorageTopologyListener

2023-10-04 Thread Mirza Aliev (Jira)
Mirza Aliev created IGNITE-20559:


 Summary: Return metastorage invokes in 
DistributionZoneManager#createMetastorageTopologyListener
 Key: IGNITE-20559
 URL: https://issues.apache.org/jira/browse/IGNITE-20559
 Project: Ignite
  Issue Type: Bug
Reporter: Mirza Aliev








[jira] [Updated] (IGNITE-20317) Meta storage invokes are not completed when events are handled in DZM

2023-10-04 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20317:
-
Description: 
h3. *Motivation*
There are meta storage invokes in DistributionZoneManager in a zone's 
lifecycle. The futures of these invokes are ignored, so after a lifecycle 
method completes, not all of its actions are actually completed. Therefore 
several invokes, for example on createZone and alterZone, can be reordered. 
Currently it does the meta storage invokes in:
# ZonesConfigurationListener#onCreate to init a zone.
# ZonesConfigurationListener#onDelete to clean up the zone data.
# DistributionZoneManager#onUpdateFilter to save data nodes in the meta storage.
# DistributionZoneManager#onUpdateScaleUp
# DistributionZoneManager#onUpdateScaleDown
- DistributionZoneRebalanceEngine#onUpdateReplicas to update assignments on 
replicas update.-
- LogicalTopologyEventListener to update logical topology.-
- DistributionZoneRebalanceEngine#createDistributionZonesDataNodesListener 
watch listener to update pending assignments.-

All immediate data node recalculations must be returned to the event handler.

h3. *Definition of Done*
Need to ensure event handling linearization. All immediate data node 
recalculations must be returned to the event handler.

h3. *Implementation Notes*
* ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete, 
DistributionZoneManager#onUpdateFilter and 
DistributionZoneRebalanceEngine#onUpdateReplicas are invoked in configuration 
listeners, so we can just return the ms invoke future from these methods; this 
ensures that the invoke will be completed within the current event handling.

* We cannot return a future from LogicalTopologyEventListener's methods, so we 
can ignore these futures. This has a drawback: we can skip a topology update:
# topology=[A,B], dataNodes=[A,B], scaleUp=0, scaleDown=100
# Node C joined the topology and left quickly, and the ms invokes that update 
the topology entry were reordered.
# Data nodes were not updated immediately to [A,B,C].
We think we can ignore this bug because it does not eventually break the 
consistency of the data nodes. For this purpose we need to change the invoke 
condition:
`value(zonesLogicalTopologyVersionKey()).lt(longToBytes(newTopology.version()))`
 instead of
`value(zonesLogicalTopologyVersionKey()).eq(longToBytes(newTopology.version() - 
1))`

* Need to return ms invoke futures from the WatchListener#onUpdate method of 
the data nodes listener.

  was:
h3. *Motivation*
There are meta storage invokes in DistributionZoneManager in a zone's 
lifecycle. The futures of these invokes are ignored, so after a lifecycle 
method completes, not all of its actions are actually completed. Therefore 
several invokes, for example on createZone and alterZone, can be reordered. 
Currently it does the meta storage invokes in:
# ZonesConfigurationListener#onCreate to init a zone.
# ZonesConfigurationListener#onDelete to clean up the zone data.
# DistributionZoneManager#onUpdateFilter to save data nodes in the meta storage.
# DistributionZoneManager#onUpdateScaleUp
# DistributionZoneManager#onUpdateScaleDown
-# DistributionZoneRebalanceEngine#onUpdateReplicas to update assignments on 
replicas update.-
-# LogicalTopologyEventListener to update logical topology.-
-# DistributionZoneRebalanceEngine#createDistributionZonesDataNodesListener 
watch listener to update pending assignments.-

All immediate data node recalculations must be returned to the event handler.

h3. *Definition of Done*
Need to ensure event handling linearization. All immediate data node 
recalculations must be returned to the event handler.

h3. *Implementation Notes*
* ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete, 
DistributionZoneManager#onUpdateFilter and 
DistributionZoneRebalanceEngine#onUpdateReplicas are invoked in configuration 
listeners, so we can just return the ms invoke future from these methods; this 
ensures that the invoke will be completed within the current event handling.

* We cannot return a future from LogicalTopologyEventListener's methods, so we 
can ignore these futures. This has a drawback: we can skip a topology update:
# topology=[A,B], dataNodes=[A,B], scaleUp=0, scaleDown=100
# Node C joined the topology and left quickly, and the ms invokes that update 
the topology entry were reordered.
# Data nodes were not updated immediately to [A,B,C].
We think we can ignore this bug because it does not eventually break the 
consistency of the data nodes. For this purpose we need to change the invoke 
condition:
`value(zonesLogicalTopologyVersionKey()).lt(longToBytes(newTopology.version()))`
 instead of
`value(zonesLogicalTopologyVersionKey()).eq(longToBytes(newTopology.version() - 
1))`

* Need to return ms invoke futures from the WatchListener#onUpdate method of 
the data nodes listener.


> Meta storage invokes are not completed when events are handled in DZM 
> 

[jira] [Updated] (IGNITE-20317) Meta storage invokes are not completed when events are handled in DZM

2023-10-04 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20317:
-
Description: 
h3. *Motivation*
There are meta storage invokes in DistributionZoneManager in a zone's 
lifecycle. The futures of these invokes are ignored, so after a lifecycle 
method completes, not all of its actions are actually completed. Therefore 
several invokes, for example on createZone and alterZone, can be reordered. 
Currently it does the meta storage invokes in:
# ZonesConfigurationListener#onCreate to init a zone.
# ZonesConfigurationListener#onDelete to clean up the zone data.
# DistributionZoneManager#onUpdateFilter to save data nodes in the meta storage.
# DistributionZoneManager#onUpdateScaleUp
# DistributionZoneManager#onUpdateScaleDown
-DistributionZoneRebalanceEngine#onUpdateReplicas to update assignments on 
replicas update.-
-LogicalTopologyEventListener to update logical topology.-
-DistributionZoneRebalanceEngine#createDistributionZonesDataNodesListener watch 
listener to update pending assignments.-


h3. *Definition of Done*
Need to ensure event handling linearization. All immediate data node 
recalculations must be returned to the event handler.

h3. *Implementation Notes*
* ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete, 
DistributionZoneManager#onUpdateFilter and 
DistributionZoneRebalanceEngine#onUpdateReplicas are invoked in configuration 
listeners, so we can just return the ms invoke future from these methods; this 
ensures that the invoke will be completed within the current event handling.

* We cannot return a future from LogicalTopologyEventListener's methods, so we 
can ignore these futures. This has a drawback: we can skip a topology update:
# topology=[A,B], dataNodes=[A,B], scaleUp=0, scaleDown=100
# Node C joined the topology and left quickly, and the ms invokes that update 
the topology entry were reordered.
# Data nodes were not updated immediately to [A,B,C].
We think we can ignore this bug because it does not eventually break the 
consistency of the data nodes. For this purpose we need to change the invoke 
condition:
`value(zonesLogicalTopologyVersionKey()).lt(longToBytes(newTopology.version()))`
 instead of
`value(zonesLogicalTopologyVersionKey()).eq(longToBytes(newTopology.version() - 
1))`

* Need to return ms invoke futures from the WatchListener#onUpdate method of 
the data nodes listener.

  was:
h3. *Motivation*
There are meta storage invokes in DistributionZoneManager in a zone's 
lifecycle. The futures of these invokes are ignored, so after a lifecycle 
method completes, not all of its actions are actually completed. Therefore 
several invokes, for example on createZone and alterZone, can be reordered. 
Currently it does the meta storage invokes in:
# ZonesConfigurationListener#onCreate to init a zone.
# ZonesConfigurationListener#onDelete to clean up the zone data.
# DistributionZoneManager#onUpdateFilter to save data nodes in the meta storage.
# DistributionZoneManager#onUpdateScaleUp
# DistributionZoneManager#onUpdateScaleDown
-DistributionZoneRebalanceEngine#onUpdateReplicas to update assignments on 
replicas update.-
-LogicalTopologyEventListener to update logical topology.-
-DistributionZoneRebalanceEngine#createDistributionZonesDataNodesListener watch 
listener to update pending assignments.-

All immediate data node recalculations must be returned to the event handler.

h3. *Definition of Done*
Need to ensure event handling linearization. All immediate data node 
recalculations must be returned to the event handler.

h3. *Implementation Notes*
* ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete, 
DistributionZoneManager#onUpdateFilter and 
DistributionZoneRebalanceEngine#onUpdateReplicas are invoked in configuration 
listeners, so we can just return the ms invoke future from these methods; this 
ensures that the invoke will be completed within the current event handling.

* We cannot return a future from LogicalTopologyEventListener's methods, so we 
can ignore these futures. This has a drawback: we can skip a topology update:
# topology=[A,B], dataNodes=[A,B], scaleUp=0, scaleDown=100
# Node C joined the topology and left quickly, and the ms invokes that update 
the topology entry were reordered.
# Data nodes were not updated immediately to [A,B,C].
We think we can ignore this bug because it does not eventually break the 
consistency of the data nodes. For this purpose we need to change the invoke 
condition:
`value(zonesLogicalTopologyVersionKey()).lt(longToBytes(newTopology.version()))`
 instead of
`value(zonesLogicalTopologyVersionKey()).eq(longToBytes(newTopology.version() - 
1))`

* Need to return ms invoke futures from the WatchListener#onUpdate method of 
the data nodes listener.


> Meta storage invokes are not completed when events are handled in DZM 
> --
>
> 

[jira] [Updated] (IGNITE-20317) Meta storage invokes are not completed when events are handled in DZM

2023-10-04 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20317:
-
Description: 
h3. *Motivation*
There are meta storage invokes in DistributionZoneManager in a zone's 
lifecycle. The futures of these invokes are ignored, so after a lifecycle 
method completes, not all of its actions are actually completed. Therefore 
several invokes, for example on createZone and alterZone, can be reordered. 
Currently it does the meta storage invokes in:
# ZonesConfigurationListener#onCreate to init a zone.
# ZonesConfigurationListener#onDelete to clean up the zone data.
# DistributionZoneManager#onUpdateFilter to save data nodes in the meta storage.
# DistributionZoneManager#onUpdateScaleUp
# DistributionZoneManager#onUpdateScaleDown
-# DistributionZoneRebalanceEngine#onUpdateReplicas to update assignments on 
replicas update.-
-# LogicalTopologyEventListener to update logical topology.-
-# DistributionZoneRebalanceEngine#createDistributionZonesDataNodesListener 
watch listener to update pending assignments.-

All immediate data node recalculations must be returned to the event handler.

h3. *Definition of Done*
Need to ensure event handling linearization. All immediate data node 
recalculations must be returned to the event handler.

h3. *Implementation Notes*
* ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete, 
DistributionZoneManager#onUpdateFilter and 
DistributionZoneRebalanceEngine#onUpdateReplicas are invoked in configuration 
listeners, so we can just return the ms invoke future from these methods; this 
ensures that the invoke will be completed within the current event handling.

* We cannot return a future from LogicalTopologyEventListener's methods, so we 
can ignore these futures. This has a drawback: we can skip a topology update:
# topology=[A,B], dataNodes=[A,B], scaleUp=0, scaleDown=100
# Node C joined the topology and left quickly, and the ms invokes that update 
the topology entry were reordered.
# Data nodes were not updated immediately to [A,B,C].
We think we can ignore this bug because it does not eventually break the 
consistency of the data nodes. For this purpose we need to change the invoke 
condition:
`value(zonesLogicalTopologyVersionKey()).lt(longToBytes(newTopology.version()))`
 instead of
`value(zonesLogicalTopologyVersionKey()).eq(longToBytes(newTopology.version() - 
1))`

* Need to return ms invoke futures from the WatchListener#onUpdate method of 
the data nodes listener.

  was:
h3. *Motivation*
There are meta storage invokes in DistributionZoneManager in a zone's 
lifecycle. The futures of these invokes are ignored, so after a lifecycle 
method completes, not all of its actions are actually completed. Therefore 
several invokes, for example on createZone and alterZone, can be reordered. 
Currently it does the meta storage invokes in:
# ZonesConfigurationListener#onCreate to init a zone.
# ZonesConfigurationListener#onDelete to clean up the zone data.
# DistributionZoneManager#onUpdateFilter to save data nodes in the meta storage.
# DistributionZoneRebalanceEngine#onUpdateReplicas to update assignments on 
replicas update.
# LogicalTopologyEventListener to update logical topology.
# DistributionZoneRebalanceEngine#createDistributionZonesDataNodesListener 
watch listener to update pending assignments.

h3. *Definition of Done*
Need to ensure event handling linearization.

h3. *Implementation Notes*
* ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete, 
DistributionZoneManager#onUpdateFilter and 
DistributionZoneRebalanceEngine#onUpdateReplicas are invoked in configuration 
listeners, so we can just return the ms invoke future from these methods; this 
ensures that the invoke will be completed within the current event handling.

* We cannot return a future from LogicalTopologyEventListener's methods, so we 
can ignore these futures. This has a drawback: we can skip a topology update:
# topology=[A,B], dataNodes=[A,B], scaleUp=0, scaleDown=100
# Node C joined the topology and left quickly, and the ms invokes that update 
the topology entry were reordered.
# Data nodes were not updated immediately to [A,B,C].
We think we can ignore this bug because it does not eventually break the 
consistency of the data nodes. For this purpose we need to change the invoke 
condition:
`value(zonesLogicalTopologyVersionKey()).lt(longToBytes(newTopology.version()))`
 instead of
`value(zonesLogicalTopologyVersionKey()).eq(longToBytes(newTopology.version() - 
1))`

* Need to return ms invoke futures from the WatchListener#onUpdate method of 
the data nodes listener.


> Meta storage invokes are not completed when events are handled in DZM 
> --
>
> Key: IGNITE-20317
> URL: https://issues.apache.org/jira/browse/IGNITE-20317
> Project: Ignite
>  Issue Type: Bug

[jira] [Updated] (IGNITE-20317) Meta storage invokes are not completed when events are handled in DZM

2023-10-04 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20317:
-
Description: 
h3. *Motivation*
There are meta storage invokes in DistributionZoneManager in a zone's 
lifecycle. The futures of these invokes are ignored, so after a lifecycle 
method completes, not all of its actions are actually completed. Therefore 
several invokes, for example on createZone and alterZone, can be reordered. 
Currently it does the meta storage invokes in:
# ZonesConfigurationListener#onCreate to init a zone.
# ZonesConfigurationListener#onDelete to clean up the zone data.
# DistributionZoneManager#onUpdateFilter to save data nodes in the meta storage.
# DistributionZoneManager#onUpdateScaleUp
# DistributionZoneManager#onUpdateScaleDown
-DistributionZoneRebalanceEngine#onUpdateReplicas to update assignments on 
replicas update.-
-LogicalTopologyEventListener to update logical topology.-
-DistributionZoneRebalanceEngine#createDistributionZonesDataNodesListener watch 
listener to update pending assignments.-

All immediate data node recalculations must be returned to the event handler.

h3. *Definition of Done*
Need to ensure event handling linearization. All immediate data node 
recalculations must be returned to the event handler.

h3. *Implementation Notes*
* ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete, 
DistributionZoneManager#onUpdateFilter and 
DistributionZoneRebalanceEngine#onUpdateReplicas are invoked in configuration 
listeners, so we can just return the ms invoke future from these methods; this 
ensures that the invoke will be completed within the current event handling.

* We cannot return a future from LogicalTopologyEventListener's methods, so we 
can ignore these futures. This has a drawback: we can skip a topology update:
# topology=[A,B], dataNodes=[A,B], scaleUp=0, scaleDown=100
# Node C joined the topology and left quickly, and the ms invokes that update 
the topology entry were reordered.
# Data nodes were not updated immediately to [A,B,C].
We think we can ignore this bug because it does not eventually break the 
consistency of the data nodes. For this purpose we need to change the invoke 
condition:
`value(zonesLogicalTopologyVersionKey()).lt(longToBytes(newTopology.version()))`
 instead of
`value(zonesLogicalTopologyVersionKey()).eq(longToBytes(newTopology.version() - 
1))`

* Need to return ms invoke futures from the WatchListener#onUpdate method of 
the data nodes listener.

  was:
h3. *Motivation*
There are meta storage invokes in DistributionZoneManager in zone's lifecycle. 
The futures of these invokes are ignored, so after the lifecycle method is 
completed actually not all its actions are completed. Therefore several invokes 
for example on createZone and alterZone can be reordered. Currently it does the 
meta storage invokes in:
# ZonesConfigurationListener#onCreate to init a zone.
# ZonesConfigurationListener#onDelete to clean up the zone data.
# DistributionZoneManager#onUpdateFilter to save data nodes in the meta storage.
# DistributionZoneManager#onUpdateScaleUp
# DistributionZoneManager#onUpdateScaleDown
- DistributionZoneRebalanceEngine#onUpdateReplicas to apdate assignment on 
replicas update.-
- LogicalTopologyEventListener to update logical topology.-
- DistributionZoneRebalanceEngine#createDistributionZonesDataNodesListener 
watch listener to update pending assignments.-

All immediate data nodes recalculation must be returned  

h3. *Definition of Done*
Need to ensure event handling linearization. All immediate data nodes 
recalculation must be returned  to the event handler.

h3. *Implementation Notes*
* ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete, 
DistributionZoneManager#onUpdateFilter and 
DistributionZoneRebalanceEngine#onUpdateReplicas are invoked in configuration 
listeners. So we can just return the ms invoke future from these methods, which 
ensures that the invoke will be completed within the current event handling.

* We cannot return a future from LogicalTopologyEventListener's methods. We can 
ignore these futures, but this has a drawback: we can skip a topology update:
# topology=[A,B], dataNodes=[A,B], scaleUp=0, scaleDown=100
# Node C joined the topology and left quickly, and the ms invokes that update 
the topology entry were reordered.
# Data nodes were not updated immediately to [A,B,C].
We think that we can ignore this bug because eventually it does not break the 
consistency of the data nodes. For this purpose we need to change the invoke 
condition:
`value(zonesLogicalTopologyVersionKey()).lt(longToBytes(newTopology.version()))`
 instead of
`value(zonesLogicalTopologyVersionKey()).eq(longToBytes(newTopology.version() - 
1))`

* Need to return ms invoke futures from WatchListener#onUpdate method of the 
data nodes listener.


> Meta storage invokes are not completed when events are handled in DZM 
> 

[jira] [Updated] (IGNITE-20451) Introduce WorkerRegistery

2023-10-04 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-20451:
-
Summary: Introduce WorkerRegistery  (was: Introduce Introduce 
WorkerRegistery)

> Introduce WorkerRegistery
> -
>
> Key: IGNITE-20451
> URL: https://issues.apache.org/jira/browse/IGNITE-20451
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vyacheslav Koptilin
>Priority: Major
>  Labels: ignite-3
>
> Each Ignite node has a number of system-critical threads. We should implement 
> a periodic check that calls the failure handler when one of the following 
> conditions has been detected:
>  - Critical thread is not alive anymore.
>  - Critical thread 'hangs' for a long time, e.g. while executing a task 
> extracted from the task queue.
> In case of a failure condition, the call stacks of all threads should be 
> logged before invoking the failure handler.
> Implementations based on a separate diagnostic thread seem fragile, because 
> that thread becomes a vulnerable point with respect to thread termination and 
> CPU resource starvation. So we are to use a self-monitoring approach: 
> critical threads themselves should monitor each other.
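A minimal sketch of the self-monitoring idea in plain Java (names are illustrative and do not reflect the actual WorkerRegistery API): each critical thread records a heartbeat at the top of its work loop and, while doing so, checks whether any peer has been silent for too long.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of self-monitoring critical threads: workers record
// heartbeats and check each other, with no separate diagnostic thread.
public class WorkerRegistrySketch {
    private final Map<String, Long> heartbeats = new ConcurrentHashMap<>();
    private final long maxSilenceMillis;

    public WorkerRegistrySketch(long maxSilenceMillis) {
        this.maxSilenceMillis = maxSilenceMillis;
    }

    /** Called by each critical thread at the top of its work loop. */
    public void heartbeat(String workerName) {
        heartbeats.put(workerName, System.currentTimeMillis());
    }

    /**
     * Called by each critical thread after its heartbeat: returns the name
     * of the first worker that has been silent longer than the threshold
     * (a candidate for the failure handler), or null if all are alive.
     */
    public String findStalledWorker() {
        long now = System.currentTimeMillis();
        for (Map.Entry<String, Long> e : heartbeats.entrySet()) {
            if (now - e.getValue() > maxSilenceMillis) {
                return e.getKey();
            }
        }
        return null;
    }
}
```

A real implementation would additionally log thread dumps before invoking the failure handler and distinguish a dead thread from a merely blocked one.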



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20543) CacheStoreFactory should be invoked within sandbox

2023-10-04 Thread Maksim Timonin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Timonin updated IGNITE-20543:

Description: 
A user can invoke arbitrary code while configuring 
CacheConfiguration#CacheStoreFactory. The factory code is user-defined and is 
invoked on Ignite server nodes during cache start.

This code should be wrapped into the sandbox like any other user code (compute 
jobs, etc.).

> CacheStoreFactory should be invoked within sandbox
> --
>
> Key: IGNITE-20543
> URL: https://issues.apache.org/jira/browse/IGNITE-20543
> Project: Ignite
>  Issue Type: Bug
>Reporter: Maksim Timonin
>Assignee: Maksim Timonin
>Priority: Major
>  Labels: ise
> Fix For: 2.16
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> A user can invoke arbitrary code while configuring 
> CacheConfiguration#CacheStoreFactory. The factory code is user-defined and 
> is invoked on Ignite server nodes during cache start. 
>  
> This code should be wrapped into the sandbox like any other user code 
> (compute jobs, etc.).





[jira] [Commented] (IGNITE-20546) Entries aren't expired while re-inserting with SQL

2023-10-04 Thread Maksim Timonin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17771781#comment-17771781
 ] 

Maksim Timonin commented on IGNITE-20546:
-

[~ivandasch] thanks for review! Merged to master

> Entries aren't expired while re-inserting with SQL
> --
>
> Key: IGNITE-20546
> URL: https://issues.apache.org/jira/browse/IGNITE-20546
> Project: Ignite
>  Issue Type: Bug
>Reporter: Maksim Timonin
>Assignee: Maksim Timonin
>Priority: Major
>  Labels: ise
> Fix For: 2.16
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> TTL isn't set for entries inserted with "insert from select" query. 





[jira] [Updated] (IGNITE-20546) Entries aren't expired while re-inserting with SQL

2023-10-04 Thread Maksim Timonin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Timonin updated IGNITE-20546:

Description: TTL isn't set for entries inserted with "insert from select" 
query. 

> Entries aren't expired while re-inserting with SQL
> --
>
> Key: IGNITE-20546
> URL: https://issues.apache.org/jira/browse/IGNITE-20546
> Project: Ignite
>  Issue Type: Bug
>Reporter: Maksim Timonin
>Assignee: Maksim Timonin
>Priority: Major
>  Labels: ise
> Fix For: 2.16
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> TTL isn't set for entries inserted with "insert from select" query. 





[jira] [Commented] (IGNITE-20543) CacheStoreFactory should be invoked within sandbox

2023-10-04 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17771779#comment-17771779
 ] 

Ignite TC Bot commented on IGNITE-20543:


{panel:title=Branch: [pull/10971/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10971/head] Base: [master] : New Tests 
(2)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}Security{color} [[tests 
2|https://ci2.ignite.apache.org/viewLog.html?buildId=7361269]]
* {color:#013220}SecurityTestSuite: 
CacheStoreFactorySandboxTest.testStaticCacheStoreFactory - PASSED{color}
* {color:#013220}SecurityTestSuite: 
CacheStoreFactorySandboxTest.testDynamicCacheStoreFactory - PASSED{color}

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7361291&buildTypeId=IgniteTests24Java8_RunAll]

> CacheStoreFactory should be invoked within sandbox
> --
>
> Key: IGNITE-20543
> URL: https://issues.apache.org/jira/browse/IGNITE-20543
> Project: Ignite
>  Issue Type: Bug
>Reporter: Maksim Timonin
>Assignee: Maksim Timonin
>Priority: Major
>  Labels: ise
> Fix For: 2.16
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Commented] (IGNITE-20546) Entries aren't expired while re-inserting with SQL

2023-10-04 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17771773#comment-17771773
 ] 

Ignite TC Bot commented on IGNITE-20546:


{panel:title=Branch: [pull/10972/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10972/head] Base: [master] : New Tests 
(16)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}Cache 5{color} [[tests 
16|https://ci2.ignite.apache.org/viewLog.html?buildId=7361307]]
* {color:#013220}IgniteCacheWithIndexingTestSuite: 
CacheQueryFilterExpiredTest.testFilterExpired[cacheMode=REPLICATED, 
atomicityMode=ATOMIC, eagerTtl=true] - PASSED{color}
* {color:#013220}IgniteCacheWithIndexingTestSuite: 
CacheQueryFilterExpiredTest.testInsertExpired[cacheMode=REPLICATED, 
atomicityMode=ATOMIC, eagerTtl=true] - PASSED{color}
* {color:#013220}IgniteCacheWithIndexingTestSuite: 
CacheQueryFilterExpiredTest.testFilterExpired[cacheMode=REPLICATED, 
atomicityMode=TRANSACTIONAL, eagerTtl=false] - PASSED{color}
* {color:#013220}IgniteCacheWithIndexingTestSuite: 
CacheQueryFilterExpiredTest.testInsertExpired[cacheMode=REPLICATED, 
atomicityMode=TRANSACTIONAL, eagerTtl=false] - PASSED{color}
* {color:#013220}IgniteCacheWithIndexingTestSuite: 
CacheQueryFilterExpiredTest.testFilterExpired[cacheMode=REPLICATED, 
atomicityMode=ATOMIC, eagerTtl=false] - PASSED{color}
* {color:#013220}IgniteCacheWithIndexingTestSuite: 
CacheQueryFilterExpiredTest.testInsertExpired[cacheMode=REPLICATED, 
atomicityMode=ATOMIC, eagerTtl=false] - PASSED{color}
* {color:#013220}IgniteCacheWithIndexingTestSuite: 
CacheQueryFilterExpiredTest.testFilterExpired[cacheMode=PARTITIONED, 
atomicityMode=ATOMIC, eagerTtl=true] - PASSED{color}
* {color:#013220}IgniteCacheWithIndexingTestSuite: 
CacheQueryFilterExpiredTest.testInsertExpired[cacheMode=PARTITIONED, 
atomicityMode=ATOMIC, eagerTtl=true] - PASSED{color}
* {color:#013220}IgniteCacheWithIndexingTestSuite: 
CacheQueryFilterExpiredTest.testFilterExpired[cacheMode=PARTITIONED, 
atomicityMode=TRANSACTIONAL, eagerTtl=false] - PASSED{color}
* {color:#013220}IgniteCacheWithIndexingTestSuite: 
CacheQueryFilterExpiredTest.testInsertExpired[cacheMode=PARTITIONED, 
atomicityMode=TRANSACTIONAL, eagerTtl=false] - PASSED{color}
* {color:#013220}IgniteCacheWithIndexingTestSuite: 
CacheQueryFilterExpiredTest.testFilterExpired[cacheMode=REPLICATED, 
atomicityMode=TRANSACTIONAL, eagerTtl=true] - PASSED{color}
... and 5 new tests

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7361400&buildTypeId=IgniteTests24Java8_RunAll]

> Entries aren't expired while re-inserting with SQL
> --
>
> Key: IGNITE-20546
> URL: https://issues.apache.org/jira/browse/IGNITE-20546
> Project: Ignite
>  Issue Type: Bug
>Reporter: Maksim Timonin
>Assignee: Maksim Timonin
>Priority: Major
>  Labels: ise
> Fix For: 2.16
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[jira] [Updated] (IGNITE-20553) Unexpected rebalancing immediately after table creation

2023-10-04 Thread Sergey Chugunov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Chugunov updated IGNITE-20553:
-
Priority: Blocker  (was: Major)

> Unexpected rebalancing immediately after table creation
> ---
>
> Key: IGNITE-20553
> URL: https://issues.apache.org/jira/browse/IGNITE-20553
> Project: Ignite
>  Issue Type: Bug
>Reporter: Kirill Tkalenko
>Priority: Blocker
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> During the implementation of IGNITE-20330, it was discovered that when 
> running 
> {*}org.apache.ignite.internal.runner.app.ItDataSchemaSyncTest#checkSchemasCorrectlyRestore{*},
>  a situation may occur in which, after creating the table, rebalancing begins 
> and this test freezes on the first insert ({*}sql(ignite1, 
> String.format("INSERT INTO " + TABLE_NAME + " VALUES(%d, %d, %d)", i, i, 2 * 
> i));{*}). The situation is not reproduced often; you need to run the test 
> several times.
> h3. Upd#1
> It's a known issue that node restart is broken. Before proceeding with the 
> given ticket, the metastorage compaction epic should be finished, especially 
> https://issues.apache.org/jira/browse/IGNITE-20210





[jira] [Commented] (IGNITE-20510) Java thin 3.0: ClientMetricsTest.testConnectionMetrics is flaky

2023-10-04 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17771758#comment-17771758
 ] 

Pavel Tupitsyn commented on IGNITE-20510:
-

Merged to main: 75901c4412d411319b130cfac6ca9324ca6b20bc

> Java thin 3.0: ClientMetricsTest.testConnectionMetrics is flaky
> ---
>
> Key: IGNITE-20510
> URL: https://issues.apache.org/jira/browse/IGNITE-20510
> Project: Ignite
>  Issue Type: Bug
>  Components: thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code}
> org.opentest4j.AssertionFailedError: expected: <1> but was: <0>
>   at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>   at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>   at 
> app//org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)
>   at 
> app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:166)
>   at 
> app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:161)
>   at app//org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:629)
>   at 
> app//org.apache.ignite.client.ClientMetricsTest.testConnectionMetrics(ClientMetricsTest.java:86)
> {code}
> https://ci.ignite.apache.org/test/5232970473067276194?currentProjectId=ApacheIgnite3xGradle_Test=true





[jira] [Updated] (IGNITE-20411) IndexOutOfBoundsException in SqlRowHandler$BinaryTupleRowWrapper

2023-10-04 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-20411:
--
Description: 
*Exception:*
{code}
java.lang.IndexOutOfBoundsException: Index 2 out of bounds for length 2
at 
java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64)
at 
java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70)
at 
java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:248)
at java.base/java.util.Objects.checkIndex(Objects.java:372)
at java.base/java.util.ArrayList.get(ArrayList.java:459)
at 
org.apache.ignite.internal.sql.engine.exec.SqlRowHandler$BinaryTupleRowWrapper.get(SqlRowHandler.java:357)
at 
org.apache.ignite.internal.sql.engine.exec.SqlRowHandler.get(SqlRowHandler.java:74)
at 
org.apache.ignite.internal.sql.engine.exec.SqlRowHandler.get(SqlRowHandler.java:65)
at 
org.apache.ignite.internal.sql.engine.exec.UpdatableTableImpl.convertRow(UpdatableTableImpl.java:337)
at 
org.apache.ignite.internal.sql.engine.exec.UpdatableTableImpl.insertAll(UpdatableTableImpl.java:242)
at 
org.apache.ignite.internal.sql.engine.exec.rel.ModifyNode.flushTuples(ModifyNode.java:219)
at 
org.apache.ignite.internal.sql.engine.exec.rel.ModifyNode.tryEnd(ModifyNode.java:190)
at 
org.apache.ignite.internal.sql.engine.exec.rel.ModifyNode.end(ModifyNode.java:163)
at 
org.apache.ignite.internal.sql.engine.exec.rel.Inbox.pushUnordered(Inbox.java:344)
at 
org.apache.ignite.internal.sql.engine.exec.rel.Inbox.push(Inbox.java:202)
at 
org.apache.ignite.internal.sql.engine.exec.rel.Inbox.onBatchReceived(Inbox.java:180)
at 
org.apache.ignite.internal.sql.engine.exec.ExchangeServiceImpl.onMessage(ExchangeServiceImpl.java:167)
at 
org.apache.ignite.internal.sql.engine.exec.ExchangeServiceImpl.lambda$start$1(ExchangeServiceImpl.java:73)
at 
org.apache.ignite.internal.sql.engine.message.MessageServiceImpl.onMessageInternal(MessageServiceImpl.java:150)
at 
org.apache.ignite.internal.sql.engine.message.MessageServiceImpl.lambda$onMessage$0(MessageServiceImpl.java:119)
at 
org.apache.ignite.internal.sql.engine.exec.QueryTaskExecutorImpl.lambda$execute$0(QueryTaskExecutorImpl.java:81)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
{code}

*Reproducer:*
Add this test to *ItSqlSynchronousApiTest*
{code:java}
@Test
public void testUpdateTable() {
IgniteSql sql = igniteSql();
Session ses = sql.createSession();
checkDdl(true, ses, "CREATE TABLE TEST(ID INT PRIMARY KEY, VAL0 INT)");

var upsertFut = CompletableFuture.runAsync(() -> {
for (int i = 0; i < 1000; i++) {
checkDml(1, ses, "INSERT INTO TEST VALUES (?, ?)", i, i);
}
});

checkDdl(true, ses, "ALTER TABLE TEST ADD COLUMN VAL1 INT DEFAULT -1");

upsertFut.join();
}
{code}


*NOTE*
The original exception stack trace seems to be lost if you just run the test; I 
had to use a debugger to get it. Consider addressing this too.



This issue was fixed in IGNITE-20520. This ticket just removes the 
UpdateSchemaListener that is no longer in use.

  was:
*Exception:*
{code}
java.lang.IndexOutOfBoundsException: Index 2 out of bounds for length 2
at 
java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64)
at 
java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70)
at 
java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:248)
at java.base/java.util.Objects.checkIndex(Objects.java:372)
at java.base/java.util.ArrayList.get(ArrayList.java:459)
at 
org.apache.ignite.internal.sql.engine.exec.SqlRowHandler$BinaryTupleRowWrapper.get(SqlRowHandler.java:357)
at 
org.apache.ignite.internal.sql.engine.exec.SqlRowHandler.get(SqlRowHandler.java:74)
at 
org.apache.ignite.internal.sql.engine.exec.SqlRowHandler.get(SqlRowHandler.java:65)
at 
org.apache.ignite.internal.sql.engine.exec.UpdatableTableImpl.convertRow(UpdatableTableImpl.java:337)
at 
org.apache.ignite.internal.sql.engine.exec.UpdatableTableImpl.insertAll(UpdatableTableImpl.java:242)
at 
org.apache.ignite.internal.sql.engine.exec.rel.ModifyNode.flushTuples(ModifyNode.java:219)
at 
org.apache.ignite.internal.sql.engine.exec.rel.ModifyNode.tryEnd(ModifyNode.java:190)
at 
org.apache.ignite.internal.sql.engine.exec.rel.ModifyNode.end(ModifyNode.java:163)
at 

[jira] [Commented] (IGNITE-20357) Design node config, zone and table storage relations

2023-10-04 Thread Alexey Scherbakov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17771746#comment-17771746
 ] 

Alexey Scherbakov commented on IGNITE-20357:


LGTM

> Design node config, zone and table storage relations
> 
>
> Key: IGNITE-20357
> URL: https://issues.apache.org/jira/browse/IGNITE-20357
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Gusakov
>Assignee: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
> Attachments: Copy of Unify storage configurations for the 
> table_zone_node levels.pdf
>
>
> *Motivation*
> We need to clarify the UX around the table storage, zone and node configs 
> according to zone-based collocation.
> *Definition of done*
> User has a simple and predictable flow to:
> - Configure the node storage from the point of view of which tables, with 
> which storage requirements, can use this node.
> - Describe, on zone creation, which nodes with which table storages can be 
> part of this zone.
> - Describe, on table creation, which storage is needed for this table, and 
> receive an error as soon as possible if the chosen zone cannot guarantee 
> that its nodes have this storage.





[jira] [Updated] (IGNITE-20357) Design node config, zone and table storage relations

2023-10-04 Thread Kirill Gusakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Gusakov updated IGNITE-20357:

Issue Type: Improvement  (was: Task)

> Design node config, zone and table storage relations
> 
>
> Key: IGNITE-20357
> URL: https://issues.apache.org/jira/browse/IGNITE-20357
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Gusakov
>Assignee: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
> Attachments: Copy of Unify storage configurations for the 
> table_zone_node levels.pdf
>
>
> *Motivation*
> We need to clarify the UX around the table storage, zone and node configs 
> according to zone-based collocation.
> *Definition of done*
> User has a simple and predictable flow to:
> - Configure the node storage from the point of view of which tables, with 
> which storage requirements, can use this node.
> - Describe, on zone creation, which nodes with which table storages can be 
> part of this zone.
> - Describe, on table creation, which storage is needed for this table, and 
> receive an error as soon as possible if the chosen zone cannot guarantee 
> that its nodes have this storage.





[jira] [Updated] (IGNITE-20357) Design node config, zone and table storage relations

2023-10-04 Thread Kirill Gusakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Gusakov updated IGNITE-20357:

Issue Type: Task  (was: Improvement)

> Design node config, zone and table storage relations
> 
>
> Key: IGNITE-20357
> URL: https://issues.apache.org/jira/browse/IGNITE-20357
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Gusakov
>Assignee: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
> Attachments: Copy of Unify storage configurations for the 
> table_zone_node levels.pdf
>
>
> *Motivation*
> We need to clarify the UX around the table storage, zone and node configs 
> according to zone-based collocation.
> *Definition of done*
> User has a simple and predictable flow to:
> - Configure the node storage from the point of view of which tables, with 
> which storage requirements, can use this node.
> - Describe, on zone creation, which nodes with which table storages can be 
> part of this zone.
> - Describe, on table creation, which storage is needed for this table, and 
> receive an error as soon as possible if the chosen zone cannot guarantee 
> that its nodes have this storage.





[jira] [Created] (IGNITE-20558) Test plan for zone storage profile filters

2023-10-04 Thread Kirill Gusakov (Jira)
Kirill Gusakov created IGNITE-20558:
---

 Summary: Test plan for zone storage profile filters
 Key: IGNITE-20558
 URL: https://issues.apache.org/jira/browse/IGNITE-20558
 Project: Ignite
  Issue Type: Task
Reporter: Kirill Gusakov
Assignee: Kirill Gusakov








[jira] [Updated] (IGNITE-20519) Add causality token of the last update of catalog descriptors to CatalogObjectDescriptor

2023-10-04 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20519:
-
Reviewer: Ivan Bessonov

> Add causality token of the last update of catalog descriptors to 
> CatalogObjectDescriptor
> 
>
> Key: IGNITE-20519
> URL: https://issues.apache.org/jira/browse/IGNITE-20519
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mirza Aliev
>Assignee: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> *Motivation*
> It could be useful to add the causality token of the last update of a 
> {{CatalogObjectDescriptor}}. For example, this will help us to call
> {{DistributionZoneManager#dataNodes(long causalityToken, int zoneId)}} for 
> the specified {{CatalogZoneDescriptor}}, so we could receive data nodes in 
> accordance with the correct version of the filter from 
> {{CatalogZoneDescriptor}}.
> *Implementation notes*
> This could be done by enriching {{UpdateEntry#applyUpdate(Catalog catalog)}} 
> with a {{causalityToken}} parameter, so we could propagate the 
> {{causalityToken}} to every {{UpdateEntry}} where we recreate a 
> {{CatalogObjectDescriptor}} and create a new version of the {{Catalog}}.
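A simplified plain-Java sketch of the implementation note (the types below are stand-ins, not the actual Ignite catalog classes): the causality token is passed into applyUpdate, and every recreated descriptor is stamped with it.

```java
// Illustrative sketch, not the Ignite catalog API: a causality token is
// threaded through applyUpdate so recreated descriptors remember the
// token of their last update.
public class CausalityTokenSketch {
    static class ZoneDescriptor {
        final String name;
        final long updateToken; // causality token of the last update

        ZoneDescriptor(String name, long updateToken) {
            this.name = name;
            this.updateToken = updateToken;
        }
    }

    interface UpdateEntry {
        /** Recreates the descriptor, stamping it with the token. */
        ZoneDescriptor applyUpdate(ZoneDescriptor current, long causalityToken);
    }

    public static void main(String[] args) {
        UpdateEntry alterFilter =
                (current, token) -> new ZoneDescriptor(current.name, token);

        ZoneDescriptor zone = new ZoneDescriptor("zone0", 10L);
        ZoneDescriptor updated = alterFilter.applyUpdate(zone, 42L);

        // Data-nodes lookups can now use the descriptor's own token.
        System.out.println(updated.updateToken); // 42
    }
}
```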





[jira] [Commented] (IGNITE-20510) Java thin 3.0: ClientMetricsTest.testConnectionMetrics is flaky

2023-10-04 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17771740#comment-17771740
 ] 

Igor Sapego commented on IGNITE-20510:
--

Looks good to me

> Java thin 3.0: ClientMetricsTest.testConnectionMetrics is flaky
> ---
>
> Key: IGNITE-20510
> URL: https://issues.apache.org/jira/browse/IGNITE-20510
> Project: Ignite
>  Issue Type: Bug
>  Components: thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code}
> org.opentest4j.AssertionFailedError: expected: <1> but was: <0>
>   at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>   at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>   at 
> app//org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)
>   at 
> app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:166)
>   at 
> app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:161)
>   at app//org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:629)
>   at 
> app//org.apache.ignite.client.ClientMetricsTest.testConnectionMetrics(ClientMetricsTest.java:86)
> {code}
> https://ci.ignite.apache.org/test/5232970473067276194?currentProjectId=ApacheIgnite3xGradle_Test=true





[jira] [Deleted] (IGNITE-20557) Embedded Continuous Query: add filter by event type

2023-10-04 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn deleted IGNITE-20557:



> Embedded Continuous Query: add filter by event type
> ---
>
> Key: IGNITE-20557
> URL: https://issues.apache.org/jira/browse/IGNITE-20557
> Project: Ignite
>  Issue Type: New Feature
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
>
> Add a possibility to receive events of specific types only (any combination 
> of CREATE, UPDATE, DELETE) with *ContinuousQueryOptions.filter*





[jira] [Created] (IGNITE-20557) Embedded Continuous Query: add filter by event type

2023-10-04 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-20557:
---

 Summary: Embedded Continuous Query: add filter by event type
 Key: IGNITE-20557
 URL: https://issues.apache.org/jira/browse/IGNITE-20557
 Project: Ignite
  Issue Type: New Feature
Affects Versions: 3.0.0-beta1
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 3.0.0-beta2


Add a possibility to receive events of specific types only (any combination of 
CREATE, UPDATE, DELETE) with *ContinuousQueryOptions.filter*
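The intended semantics can be sketched in plain Java (the option and type names are hypothetical, taken from the ticket text rather than a shipped API): only events whose type is in the requested set are delivered.

```java
import java.util.EnumSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Illustrative sketch of event-type filtering for a continuous query;
// EventType and the filter set mirror the ticket's proposal, not a real API.
public class EventTypeFilterSketch {
    enum EventType { CREATE, UPDATE, DELETE }

    /** Keeps only the events whose type is in the allowed set. */
    static List<EventType> filter(List<EventType> events, Set<EventType> allowed) {
        return events.stream().filter(allowed::contains).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<EventType> stream = List.of(
                EventType.CREATE, EventType.UPDATE, EventType.DELETE, EventType.CREATE);

        // A subscriber interested only in creations and deletions.
        System.out.println(filter(stream, EnumSet.of(EventType.CREATE, EventType.DELETE)));
        // prints [CREATE, DELETE, CREATE]
    }
}
```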





[jira] [Deleted] (IGNITE-20555) Basic Embedded Continuous Query

2023-10-04 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn deleted IGNITE-20555:



> Basic Embedded Continuous Query
> ---
>
> Key: IGNITE-20555
> URL: https://issues.apache.org/jira/browse/IGNITE-20555
> Project: Ignite
>  Issue Type: New Feature
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
>






[jira] [Deleted] (IGNITE-20556) Embedded Continuous Query: add columnNames option

2023-10-04 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn deleted IGNITE-20556:



> Embedded Continuous Query: add columnNames option
> -
>
> Key: IGNITE-20556
> URL: https://issues.apache.org/jira/browse/IGNITE-20556
> Project: Ignite
>  Issue Type: New Feature
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
>
> Add an ability to receive a subset of columns in a Continuous Query with 
> *ContinuousQueryOptions.columnNames*.





[jira] [Deleted] (IGNITE-20554) 3.0 Continuous Query

2023-10-04 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn deleted IGNITE-20554:



> 3.0 Continuous Query
> 
>
> Key: IGNITE-20554
> URL: https://issues.apache.org/jira/browse/IGNITE-20554
> Project: Ignite
>  Issue Type: Epic
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
>






[jira] [Created] (IGNITE-20556) Embedded Continuous Query: add columnNames option

2023-10-04 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-20556:
---

 Summary: Embedded Continuous Query: add columnNames option
 Key: IGNITE-20556
 URL: https://issues.apache.org/jira/browse/IGNITE-20556
 Project: Ignite
  Issue Type: New Feature
Affects Versions: 3.0.0-beta1
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 3.0.0-beta2


Add an ability to receive a subset of columns in a Continuous Query with 
*ContinuousQueryOptions.columnNames*.





[jira] [Updated] (IGNITE-20554) 3.0 Continuous Query

2023-10-04 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-20554:

Labels: ignite-3  (was: )

> 3.0 Continuous Query
> 
>
> Key: IGNITE-20554
> URL: https://issues.apache.org/jira/browse/IGNITE-20554
> Project: Ignite
>  Issue Type: Epic
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>






[jira] [Created] (IGNITE-20555) Basic Embedded Continuous Query

2023-10-04 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-20555:
---

 Summary: Basic Embedded Continuous Query
 Key: IGNITE-20555
 URL: https://issues.apache.org/jira/browse/IGNITE-20555
 Project: Ignite
  Issue Type: New Feature
Affects Versions: 3.0.0-beta1
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 3.0.0-beta2








[jira] [Updated] (IGNITE-19537) 3.0 Data Streamer

2023-10-04 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-19537:

Summary: 3.0 Data Streamer  (was: 3.0 Data streamer)

> 3.0 Data Streamer
> -
>
> Key: IGNITE-19537
> URL: https://issues.apache.org/jira/browse/IGNITE-19537
> Project: Ignite
>  Issue Type: Epic
>  Components: thin client
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: iep-102, ignite-3
> Fix For: 3.0.0-beta2
>
>
> Design and implement data streamer for Ignite 3.0 clients and embedded API.





[jira] [Created] (IGNITE-20554) 3.0 Continuous Query

2023-10-04 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-20554:
---

 Summary: 3.0 Continuous Query
 Key: IGNITE-20554
 URL: https://issues.apache.org/jira/browse/IGNITE-20554
 Project: Ignite
  Issue Type: Epic
Affects Versions: 3.0.0-beta1
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 3.0.0-beta2








[jira] [Updated] (IGNITE-19537) 3.0 Data streamer

2023-10-04 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-19537:

Description: Design and implement data streamer for Ignite 3.0 clients and 
embedded API.  (was: Design and implement data streamer for Ignite 3.0 clients.)

> 3.0 Data streamer
> -
>
> Key: IGNITE-19537
> URL: https://issues.apache.org/jira/browse/IGNITE-19537
> Project: Ignite
>  Issue Type: Epic
>  Components: thin client
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: iep-102, ignite-3
> Fix For: 3.0.0-beta2
>
>
> Design and implement data streamer for Ignite 3.0 clients and embedded API.





[jira] [Updated] (IGNITE-19537) 3.0 Data streamer

2023-10-04 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-19537:

Summary: 3.0 Data streamer  (was: Thin 3.0: Data streamer)

> 3.0 Data streamer
> -
>
> Key: IGNITE-19537
> URL: https://issues.apache.org/jira/browse/IGNITE-19537
> Project: Ignite
>  Issue Type: Epic
>  Components: thin client
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: iep-102, ignite-3
> Fix For: 3.0.0-beta2
>
>
> Design and implement data streamer for Ignite 3.0 clients.





[jira] [Updated] (IGNITE-20553) Unexpected rebalancing immediately after table creation

2023-10-04 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-20553:
-
Description: 
During the implementation of IGNITE-20330, it was discovered that when running 
{*}org.apache.ignite.internal.runner.app.ItDataSchemaSyncTest#checkSchemasCorrectlyRestore{*},
 rebalancing may start immediately after the table is created, causing the test 
to freeze on the first insert ({*}sql(ignite1, String.format("INSERT INTO " + 
TABLE_NAME + " VALUES(%d, %d, %d)", i, i, 2 * i));{*}). The issue does not 
reproduce reliably; the test needs to be run several times.
h3. Upd#1

It is a known issue that node restart is broken. Before proceeding with this 
ticket, the metastorage compaction epic should be finished, especially 
https://issues.apache.org/jira/browse/IGNITE-20210

  was:During the implementation of IGNITE-20330, it was discovered that when 
running 
*org.apache.ignite.internal.runner.app.ItDataSchemaSyncTest#checkSchemasCorrectlyRestore*,
 a situation may occur that after creating the table, rebalancing will begin 
and this test will freeze on the first insert (*sql(ignite1, 
String.format("INSERT INTO " + TABLE_NAME + " VALUES(%d, %d, %d)", i, i, 2 * 
i));*). The situation is not reproduced often; you need to run the test several 
times.


> Unexpected rebalancing immediately after table creation
> ---
>
> Key: IGNITE-20553
> URL: https://issues.apache.org/jira/browse/IGNITE-20553
> Project: Ignite
>  Issue Type: Bug
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> During the implementation of IGNITE-20330, it was discovered that when 
> running 
> {*}org.apache.ignite.internal.runner.app.ItDataSchemaSyncTest#checkSchemasCorrectlyRestore{*},
>  rebalancing may start immediately after the table is created, causing the 
> test to freeze on the first insert ({*}sql(ignite1, String.format("INSERT 
> INTO " + TABLE_NAME + " VALUES(%d, %d, %d)", i, i, 2 * i));{*}). The issue 
> does not reproduce reliably; the test needs to be run several times.
> h3. Upd#1
> It is a known issue that node restart is broken. Before proceeding with this 
> ticket, the metastorage compaction epic should be finished, especially 
> https://issues.apache.org/jira/browse/IGNITE-20210





[jira] [Updated] (IGNITE-20541) Watch Processor performs unnecessary work in case of empty events

2023-10-04 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-20541:
-
Reviewer: Kirill Tkalenko

> Watch Processor performs unnecessary work in case of empty events
> -
>
> Key: IGNITE-20541
> URL: https://issues.apache.org/jira/browse/IGNITE-20541
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Polovtcev
>Assignee: Aleksandr Polovtcev
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> If a Meta Storage event does not match any of the Watch Listeners, the Watch 
> Processor creates a bunch of empty futures for no reason; such events can 
> simply be skipped.
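As a generic illustration (not the actual WatchProcessor code; the class and method names below are assumptions), the optimization amounts to an early return on an empty batch of matched listeners, before any per-listener futures are allocated:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class EventDispatcher {
    /** Dispatches an event batch; returns an already-completed future if nothing matched. */
    static CompletableFuture<Void> dispatch(List<Runnable> matchedListeners) {
        if (matchedListeners.isEmpty()) {
            // Skip: no listener matched this event, so avoid creating per-listener futures.
            return CompletableFuture.completedFuture(null);
        }

        // Normal path: run every matched listener and join the results.
        CompletableFuture<?>[] futures = matchedListeners.stream()
                .map(CompletableFuture::runAsync)
                .toArray(CompletableFuture[]::new);
        return CompletableFuture.allOf(futures);
    }
}
```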





[jira] [Created] (IGNITE-20553) Unexpected rebalancing immediately after table creation

2023-10-04 Thread Kirill Tkalenko (Jira)
Kirill Tkalenko created IGNITE-20553:


 Summary: Unexpected rebalancing immediately after table creation
 Key: IGNITE-20553
 URL: https://issues.apache.org/jira/browse/IGNITE-20553
 Project: Ignite
  Issue Type: Bug
Reporter: Kirill Tkalenko
 Fix For: 3.0.0-beta2


During the implementation of IGNITE-20330, it was discovered that when running 
*org.apache.ignite.internal.runner.app.ItDataSchemaSyncTest#checkSchemasCorrectlyRestore*,
 rebalancing may start immediately after the table is created, causing the test 
to freeze on the first insert (*sql(ignite1, String.format("INSERT INTO " + 
TABLE_NAME + " VALUES(%d, %d, %d)", i, i, 2 * i));*). The issue does not 
reproduce reliably; the test needs to be run several times.
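For illustration, a minimal standalone sketch of the statements the quoted String.format call produces (the table name and loop bound here are assumptions for the sketch, not values from the actual test, which issues these statements via sql(ignite1, ...) against a running cluster):

```java
// Hypothetical sketch of the insert loop from the description above.
public class InsertRepro {
    // Assumed table name; the real test uses its own TABLE_NAME constant.
    static final String TABLE_NAME = "test_table";

    // Mirrors the quoted String.format call from the test.
    static String insertSql(int i) {
        return String.format("INSERT INTO " + TABLE_NAME + " VALUES(%d, %d, %d)", i, i, 2 * i);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            System.out.println(insertSql(i));
        }
    }
}
```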


