[jira] [Updated] (IGNITE-20834) SQL query may hang forever after node restart

2023-11-12 Thread Andrey Khitrin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Khitrin updated IGNITE-20834:

Description: 
How to reproduce:

1. Start a 1-node cluster
2. Create several simple tables (usually 5-10 is enough to reproduce):
{code:sql}
create table failoverTest00(k1 INTEGER not null, k2 INTEGER not null, v1 
VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
create table failoverTest01(k1 INTEGER not null, k2 INTEGER not null, v1 
VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
...
{code}
3. Fill every table with 1000 rows (see the sketch after this list).
4. Ensure that every table contains 1000 rows:
{code:sql}
SELECT COUNT(*) FROM failoverTest00;
...
{code}
5. Restart node (kill a Java process and start node again).
6. Check all tables again.
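
For step 3, a minimal sketch of filling the tables over JDBC (hypothetical:
the connection URL and row values are illustrative, not taken from the
original report):
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;

public class FillTables {
    public static void main(String[] args) throws Exception {
        // Hypothetical thin-client JDBC URL; adjust host/port to your cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800")) {
            for (int t = 0; t < 10; t++) {
                String table = String.format("failoverTest%02d", t);
                String sql = "INSERT INTO " + table
                        + " (k1, k2, v1, v2, v3) VALUES (?, ?, ?, ?, ?)";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    for (int i = 0; i < 1000; i++) {
                        ps.setInt(1, i);
                        ps.setInt(2, i);
                        ps.setString(3, "v1-" + i);
                        ps.setString(4, "v2-" + i);
                        ps.setTimestamp(5, new Timestamp(System.currentTimeMillis()));
                        ps.addBatch();
                    }
                    ps.executeBatch(); // 1000 rows per table
                }
            }
        }
    }
}
{code}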

Expected behavior: after restart, all tables still contain the same data as
before.

Actual behavior: SQL queries cannot be performed after restart; they hang for a
long time. The Ignite log is overwhelmed with "Primary replica expired" messages.

This bug was first observed soon after the fixes in
https://issues.apache.org/jira/browse/IGNITE-20116.

  was:
How to reproduce:

1. Start a 1-node cluster
2. Create several simple tables (usually 5 is enough to reproduce):
{code:sql}
create table failoverTest00(k1 INTEGER not null, k2 INTEGER not null, v1 
VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
create table failoverTest01(k1 INTEGER not null, k2 INTEGER not null, v1 
VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
...
{code}
3. Fill every table with 1000 rows.
4. Ensure that every table contains 1000 rows:
{code:sql}
SELECT COUNT(*) FROM failoverTest00;
...
{code}
5. Restart node (kill a Java process and start node again).
6. Check all tables again.

Expected behavior: after restart, all tables still contain the same data as
before.

Actual behavior: for some tables, 1 or 2 rows may be missing if steps 3-5 are
performed quickly enough. Some tables contain 1000 rows, others 999 or 998.

This bug was first observed around Sep 15, 2023, and was most probably
introduced near that date. It may be another facet of IGNITE-20425 (I'm not
sure, though). No errors were observed in the logs.

*UPD*: The problem is caused by
https://issues.apache.org/jira/browse/IGNITE-20116; the current issue will be
solved once IGNITE-20116 is done.


> SQL query may hang forever after node restart
> --
>
> Key: IGNITE-20834
> URL: https://issues.apache.org/jira/browse/IGNITE-20834
> Project: Ignite
>  Issue Type: Bug
>Reporter: Andrey Khitrin
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> How to reproduce:
> 1. Start a 1-node cluster
> 2. Create several simple tables (usually 5-10 is enough to reproduce):
> {code:sql}
> create table failoverTest00(k1 INTEGER not null, k2 INTEGER not null, v1 
> VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
> create table failoverTest01(k1 INTEGER not null, k2 INTEGER not null, v1 
> VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
> ...
> {code}
> 3. Fill every table with 1000 rows.
> 4. Ensure that every table contains 1000 rows:
> {code:sql}
> SELECT COUNT(*) FROM failoverTest00;
> ...
> {code}
> 5. Restart node (kill a Java process and start node again).
> 6. Check all tables again.
> Expected behavior: after restart, all tables still contain the same data as 
> before.
> Actual behavior: SQL queries cannot be performed after restart; they hang for 
> a long time. The Ignite log is overwhelmed with "Primary replica expired" messages.
> This bug was first observed soon after the fixes in 
> https://issues.apache.org/jira/browse/IGNITE-20116.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20822) Refactoring WALIterator filter by higher bound

2023-11-12 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785399#comment-17785399
 ] 

Ignite TC Bot commented on IGNITE-20822:


{panel:title=Branch: [pull/11034/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/11034/head] Base: [master] : No new tests 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel}
[TeamCity *--> Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7606677&buildTypeId=IgniteTests24Java8_RunAll]

> Refactoring WALIterator filter by higher bound
> --
>
> Key: IGNITE-20822
> URL: https://issues.apache.org/jira/browse/IGNITE-20822
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Maksim Timonin
>Assignee: Maksim Timonin
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, the filter by higher bound is implemented differently across the 
> WAL iterator implementations:
>  # Standalone filters by itself. It has a small bug: one excess iteration 
> over the underlying iterator to check bounds.
>  # RecordsIterator filters only by the index of higherBound. That might be 
> dangerous in a concurrent scenario. It's better to filter by a fixed position, 
> as that guarantees this part of the segment is completed (see the sketch below).
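
A minimal, hypothetical Java sketch of the position-based filtering suggested
in item 2 above (the class and field names are illustrative, not the actual
Ignite WAL types):
{code:java}
import java.util.Iterator;
import java.util.Map;
import java.util.NoSuchElementException;

// Wraps an iterator of (position -> record) entries and stops as soon as an
// entry past the fixed higher-bound position is seen, instead of comparing
// raw record indexes.
class BoundedWalIterator<R> implements Iterator<Map.Entry<Long, R>> {
    private final Iterator<Map.Entry<Long, R>> delegate;
    private final long higherBoundPosition; // fixed position, inclusive
    private Map.Entry<Long, R> next;

    BoundedWalIterator(Iterator<Map.Entry<Long, R>> delegate, long higherBoundPosition) {
        this.delegate = delegate;
        this.higherBoundPosition = higherBoundPosition;
        advance();
    }

    private void advance() {
        next = delegate.hasNext() ? delegate.next() : null;
        if (next != null && next.getKey() > higherBoundPosition)
            next = null; // bound crossed: iteration is over
    }

    @Override public boolean hasNext() {
        return next != null;
    }

    @Override public Map.Entry<Long, R> next() {
        if (next == null)
            throw new NoSuchElementException();
        Map.Entry<Long, R> cur = next;
        advance();
        return cur;
    }
}
{code}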



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20834) SQL query may hang forever after node restart

2023-11-12 Thread Andrey Khitrin (Jira)
Andrey Khitrin created IGNITE-20834:
---

 Summary: SQL query may hang forever after node restart
 Key: IGNITE-20834
 URL: https://issues.apache.org/jira/browse/IGNITE-20834
 Project: Ignite
  Issue Type: Bug
Reporter: Andrey Khitrin
 Fix For: 3.0.0-beta2


How to reproduce:

1. Start a 1-node cluster
2. Create several simple tables (usually 5 is enough to reproduce):
{code:sql}
create table failoverTest00(k1 INTEGER not null, k2 INTEGER not null, v1 
VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
create table failoverTest01(k1 INTEGER not null, k2 INTEGER not null, v1 
VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
...
{code}
3. Fill every table with 1000 rows.
4. Ensure that every table contains 1000 rows:
{code:sql}
SELECT COUNT(*) FROM failoverTest00;
...
{code}
5. Restart node (kill a Java process and start node again).
6. Check all tables again.

Expected behavior: after restart, all tables still contain the same data as
before.

Actual behavior: for some tables, 1 or 2 rows may be missing if steps 3-5 are
performed quickly enough. Some tables contain 1000 rows, others 999 or 998.

This bug was first observed around Sep 15, 2023, and was most probably
introduced near that date. It may be another facet of IGNITE-20425 (I'm not
sure, though). No errors were observed in the logs.

*UPD*: The problem is caused by
https://issues.apache.org/jira/browse/IGNITE-20116; the current issue will be
solved once IGNITE-20116 is done.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (IGNITE-20577) Partial data loss after node restart

2023-11-12 Thread Andrey Khitrin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Khitrin closed IGNITE-20577.
---

> Partial data loss after node restart
> 
>
> Key: IGNITE-20577
> URL: https://issues.apache.org/jira/browse/IGNITE-20577
> Project: Ignite
>  Issue Type: Bug
>Reporter: Andrey Khitrin
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> How to reproduce:
> 1. Start a 1-node cluster
> 2. Create several simple tables (usually 5 is enough to reproduce):
> {code:sql}
> create table failoverTest00(k1 INTEGER not null, k2 INTEGER not null, v1 
> VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
> create table failoverTest01(k1 INTEGER not null, k2 INTEGER not null, v1 
> VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
> ...
> {code}
> 3. Fill every table with 1000 rows.
> 4. Ensure that every table contains 1000 rows:
> {code:sql}
> SELECT COUNT(*) FROM failoverTest00;
> ...
> {code}
> 5. Restart node (kill a Java process and start node again).
> 6. Check all tables again.
> Expected behavior: after restart, all tables still contain the same data as 
> before.
> Actual behavior: for some tables, 1 or 2 rows may be missing if steps 3-5 are 
> performed quickly enough. Some tables contain 1000 rows, others 999 or 998.
> This bug was first observed around Sep 15, 2023, and was most probably 
> introduced near that date. It may be another facet of IGNITE-20425 (I'm not 
> sure, though). No errors were observed in the logs.
> *UPD*: The problem is caused by 
> https://issues.apache.org/jira/browse/IGNITE-20116; the current issue will be 
> solved once IGNITE-20116 is done.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20685) Implement ability to trigger transaction recovery

2023-11-12 Thread Kirill Sizov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

 Kirill Sizov updated IGNITE-20685:
---
Description: 
h3. Motivation

Let's assume that the datanode somehow found out that the transaction
coordinator is dead, but the products of its activity, such as locks and write
intents, are still present. In that case it's necessary to check whether the
corresponding transaction was actually finished and, if not, finish it.
h3. Definition of Done
 * A transaction X that detects that the coordinator is dead (the detection
logic will be covered in a separate ticket) awaits the commitPartition primary
replica and sends an initiateRecoveryReplicaRequest to it in a fully
asynchronous manner. This means that transaction X should behave as specified
in the deadlock prevention engine and not explicitly wait for the
initiateRecovery result. In fact, we do not expect any direct response from
initiate recovery; initiate recovery failover will be implemented in a
different way (see the sketch after the Implementation Notes).
 * The commit partition handles the given request somewhere. No-op handling is
expected for now; a proper one will be added in IGNITE-20735. Let's consider
either TransactionStateResolver or TxManagerImpl as the initiateRecovery
handler. TransactionStateResolver seems like the best choice here; however, it
should be refactored a bit, basically because it won't be only a state
resolver any longer.

h3. Implementation Notes
 * This ticket is trivial and should be considered a bridge between durable tx
coordinator liveness detection and the corresponding
initiateRecoveryReplicaRequest handling. Both items will be covered in
separate tickets.
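
A minimal, hypothetical Java sketch of the fire-and-forget flow described in
the first Definition of Done item (the method and message names are
placeholders, not the actual Ignite 3 internals):
{code:java}
import java.util.UUID;
import java.util.concurrent.CompletableFuture;

class RecoveryTrigger {
    /**
     * Called when transaction {@code txId} detects that its coordinator is dead.
     * Awaits the commit-partition primary replica, then sends the recovery
     * request fully asynchronously: the caller never blocks on the result.
     */
    void onCoordinatorDeath(UUID txId, CompletableFuture<String> primaryReplicaFut) {
        primaryReplicaFut
                .thenAccept(primaryNode -> sendInitiateRecoveryRequest(primaryNode, txId))
                .exceptionally(err -> {
                    // No direct response is expected; recovery failover is handled elsewhere.
                    return null;
                });
    }

    private void sendInitiateRecoveryRequest(String primaryNode, UUID txId) {
        // Placeholder for dispatching initiateRecoveryReplicaRequest to the replica.
        System.out.printf("initiateRecoveryReplicaRequest -> %s for tx %s%n", primaryNode, txId);
    }
}
{code}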

  was:
h3. Motivation

Let's assume that the date node somehow found out that the transaction
coordinator is dead, but the products of its activity, such as locks and write
intents, are still present. In that case it's necessary to check whether the
corresponding transaction was actually finished and, if not, finish it.
h3. Definition of Done
 * A transaction X that detects that the coordinator is dead (the detection
logic will be covered in a separate ticket) awaits the commitPartition primary
replica and sends an initiateRecoveryReplicaRequest to it in a fully
asynchronous manner. This means that transaction X should behave as specified
in the deadlock prevention engine and not explicitly wait for the
initiateRecovery result. In fact, we do not expect any direct response from
initiate recovery; initiate recovery failover will be implemented in a
different way.
 * The commit partition handles the given request somewhere. No-op handling is
expected for now; a proper one will be added in IGNITE-20735. Let's consider
either TransactionStateResolver or TxManagerImpl as the initiateRecovery
handler. TransactionStateResolver seems like the best choice here; however, it
should be refactored a bit, basically because it won't be only a state
resolver any longer.

h3. Implementation Notes
 * This ticket is trivial and should be considered a bridge between durable tx
coordinator liveness detection and the corresponding
initiateRecoveryReplicaRequest handling. Both items will be covered in
separate tickets.


> Implement ability to trigger transaction recovery
> -
>
> Key: IGNITE-20685
> URL: https://issues.apache.org/jira/browse/IGNITE-20685
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Assignee: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> Let's assume that the datanode somehow found out that the transaction 
> coordinator is dead, but the products of its activity, such as locks and 
> write intents, are still present. In that case it's necessary to check 
> whether the corresponding transaction was actually finished and, if not, 
> finish it.
> h3. Definition of Done
>  * A transaction X that detects that the coordinator is dead (the detection 
> logic will be covered in a separate ticket) awaits the commitPartition 
> primary replica and sends an initiateRecoveryReplicaRequest to it in a fully 
> asynchronous manner. This means that transaction X should behave as 
> specified in the deadlock prevention engine and not explicitly wait for the 
> initiateRecovery result. In fact, we do not expect any direct response from 
> initiate recovery; initiate recovery failover will be implemented in a 
> different way.
>  * The commit partition handles the given request somewhere. No-op handling 
> is expected for now; a proper one will be added in IGNITE-20735. Let's 
> consider either TransactionStateResolver or TxManagerImpl as the 
> initiateRecovery handler. TransactionStateResolver seems like the best 
> choice here; however, it should be refactored a bit, basically because it 
> won't be only a state resolver any longer.

[jira] [Commented] (IGNITE-20749) .NET: Thin 3.0: TestReconnectAfterFullClusterRestart is still flaky

2023-11-12 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785369#comment-17785369
 ] 

Pavel Tupitsyn commented on IGNITE-20749:
-

There is an actual bug in `ClientFailoverSocket.ConnectAsync`: the lock is not
released when we exit early. This causes all operations to hang, and the test
times out.
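
The affected code is C#, but the bug pattern is language-agnostic: an early
return that skips releasing the lock. A minimal Java illustration of the fix
shape (all names here are hypothetical), using try/finally so that every exit
path unlocks:
{code:java}
import java.util.concurrent.locks.ReentrantLock;

class FailoverSocketSketch {
    private final ReentrantLock socketLock = new ReentrantLock();
    private volatile boolean connected;

    void connect() {
        socketLock.lock();
        try {
            if (connected) {
                return; // early exit is safe: the finally block still runs
            }
            // ... establish the connection ...
            connected = true;
        } finally {
            socketLock.unlock(); // released on every path, including exceptions
        }
    }
}
{code}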

> .NET: Thin 3.0: TestReconnectAfterFullClusterRestart is still flaky
> ---
>
> Key: IGNITE-20749
> URL: https://issues.apache.org/jira/browse/IGNITE-20749
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://ci.ignite.apache.org/test/5088070784193128850?currentProjectId=ApacheIgnite3xGradle_Test=true
> {code}
> Expected: No Exception to be thrown
>   But was:   to endpoint: 127.0.0.1:42477
>  ---> System.TimeoutException: The operation has timed out.
>at Apache.Ignite.Internal.ClientSocket.ConnectAsync(SocketEndpoint 
> endPoint, IgniteClientConfiguration configuration, Action`1 
> assignmentChangeCallback) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientSocket.cs:line
>  195
>--- End of inner exception stack trace ---
>at Apache.Ignite.Internal.ClientSocket.ConnectAsync(SocketEndpoint 
> endPoint, IgniteClientConfiguration configuration, Action`1 
> assignmentChangeCallback) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientSocket.cs:line
>  213
>at Apache.Ignite.Internal.ClientFailoverSocket.ConnectAsync(SocketEndpoint 
> endpoint) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientFailoverSocket.cs:line
>  414
>at Apache.Ignite.Internal.ClientFailoverSocket.GetNextSocketAsync() in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientFailoverSocket.cs:line
>  365
>at 
> Apache.Ignite.Internal.ClientFailoverSocket.GetSocketAsync(PreferredNode 
> preferredNode) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientFailoverSocket.cs:line
>  285
>at 
> Apache.Ignite.Internal.ClientFailoverSocket.DoOutInOpAndGetSocketAsync(ClientOp
>  clientOp, Transaction tx, PooledArrayBuffer request, PreferredNode 
> preferredNode) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientFailoverSocket.cs:line
>  183
>at Apache.Ignite.Internal.ClientFailoverSocket.DoOutInOpAsync(ClientOp 
> clientOp, PooledArrayBuffer request, PreferredNode preferredNode) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientFailoverSocket.cs:line
>  145
>at Apache.Ignite.Internal.Table.Tables.GetTablesAsync() in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/Table/Tables.cs:line
>  64
>at 
> Apache.Ignite.Tests.ReconnectTests.<>c__DisplayClass6_0.d.MoveNext()
>  in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite.Tests/ReconnectTests.cs:line
>  162
> --- End of stack trace from previous location ---
>at 
> NUnit.Framework.Internal.TaskAwaitAdapter.GenericAdapter`1.BlockUntilCompleted()
>at 
> NUnit.Framework.Internal.MessagePumpStrategy.NoMessagePumpStrategy.WaitForCompletion(AwaitAdapter
>  awaiter)
>at NUnit.Framework.Internal.AsyncToSyncAdapter.Await(Func`1 invoke)
>at NUnit.Framework.Internal.ExceptionHelper.RecordException(Delegate 
> parameterlessDelegate, String parameterName)>
>at 
> Apache.Ignite.Tests.ReconnectTests.TestReconnectAfterFullClusterRestart() in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite.Tests/ReconnectTests.cs:line
>  162
>at 
> NUnit.Framework.Internal.TaskAwaitAdapter.GenericAdapter`1.BlockUntilCompleted()
>at 
> NUnit.Framework.Internal.MessagePumpStrategy.NoMessagePumpStrategy.WaitForCompletion(AwaitAdapter
>  awaiter)
>at NUnit.Framework.Internal.AsyncToSyncAdapter.Await(Func`1 invoke)
>at 
> NUnit.Framework.Internal.Commands.TestMethodCommand.RunTestMethod(TestExecutionContext
>  context)
>at 
> NUnit.Framework.Internal.Commands.TestMethodCommand.Execute(TestExecutionContext
>  context)
>at 
> NUnit.Framework.Internal.Execution.SimpleWorkItem.<>c__DisplayClass4_0.b__0()
>at 
> NUnit.Framework.Internal.ContextUtils.<>c__DisplayClass1_0`1.b__0(Object
>  _)
> 1)at 
> Apache.Ignite.Tests.ReconnectTests.TestReconnectAfterFullClusterRestart() in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite.Tests/ReconnectTests.cs:line
>  162
> {code}

[jira] [Commented] (IGNITE-20749) .NET: Thin 3.0: TestReconnectAfterFullClusterRestart is still flaky

2023-11-12 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785372#comment-17785372
 ] 

Pavel Tupitsyn commented on IGNITE-20749:
-

[~isapego] please review.

> .NET: Thin 3.0: TestReconnectAfterFullClusterRestart is still flaky
> ---
>
> Key: IGNITE-20749
> URL: https://issues.apache.org/jira/browse/IGNITE-20749
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://ci.ignite.apache.org/test/5088070784193128850?currentProjectId=ApacheIgnite3xGradle_Test=true
> {code}
> Expected: No Exception to be thrown
>   But was:   to endpoint: 127.0.0.1:42477
>  ---> System.TimeoutException: The operation has timed out.
>at Apache.Ignite.Internal.ClientSocket.ConnectAsync(SocketEndpoint 
> endPoint, IgniteClientConfiguration configuration, Action`1 
> assignmentChangeCallback) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientSocket.cs:line
>  195
>--- End of inner exception stack trace ---
>at Apache.Ignite.Internal.ClientSocket.ConnectAsync(SocketEndpoint 
> endPoint, IgniteClientConfiguration configuration, Action`1 
> assignmentChangeCallback) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientSocket.cs:line
>  213
>at Apache.Ignite.Internal.ClientFailoverSocket.ConnectAsync(SocketEndpoint 
> endpoint) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientFailoverSocket.cs:line
>  414
>at Apache.Ignite.Internal.ClientFailoverSocket.GetNextSocketAsync() in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientFailoverSocket.cs:line
>  365
>at 
> Apache.Ignite.Internal.ClientFailoverSocket.GetSocketAsync(PreferredNode 
> preferredNode) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientFailoverSocket.cs:line
>  285
>at 
> Apache.Ignite.Internal.ClientFailoverSocket.DoOutInOpAndGetSocketAsync(ClientOp
>  clientOp, Transaction tx, PooledArrayBuffer request, PreferredNode 
> preferredNode) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientFailoverSocket.cs:line
>  183
>at Apache.Ignite.Internal.ClientFailoverSocket.DoOutInOpAsync(ClientOp 
> clientOp, PooledArrayBuffer request, PreferredNode preferredNode) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientFailoverSocket.cs:line
>  145
>at Apache.Ignite.Internal.Table.Tables.GetTablesAsync() in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/Table/Tables.cs:line
>  64
>at 
> Apache.Ignite.Tests.ReconnectTests.<>c__DisplayClass6_0.d.MoveNext()
>  in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite.Tests/ReconnectTests.cs:line
>  162
> --- End of stack trace from previous location ---
>at 
> NUnit.Framework.Internal.TaskAwaitAdapter.GenericAdapter`1.BlockUntilCompleted()
>at 
> NUnit.Framework.Internal.MessagePumpStrategy.NoMessagePumpStrategy.WaitForCompletion(AwaitAdapter
>  awaiter)
>at NUnit.Framework.Internal.AsyncToSyncAdapter.Await(Func`1 invoke)
>at NUnit.Framework.Internal.ExceptionHelper.RecordException(Delegate 
> parameterlessDelegate, String parameterName)>
>at 
> Apache.Ignite.Tests.ReconnectTests.TestReconnectAfterFullClusterRestart() in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite.Tests/ReconnectTests.cs:line
>  162
>at 
> NUnit.Framework.Internal.TaskAwaitAdapter.GenericAdapter`1.BlockUntilCompleted()
>at 
> NUnit.Framework.Internal.MessagePumpStrategy.NoMessagePumpStrategy.WaitForCompletion(AwaitAdapter
>  awaiter)
>at NUnit.Framework.Internal.AsyncToSyncAdapter.Await(Func`1 invoke)
>at 
> NUnit.Framework.Internal.Commands.TestMethodCommand.RunTestMethod(TestExecutionContext
>  context)
>at 
> NUnit.Framework.Internal.Commands.TestMethodCommand.Execute(TestExecutionContext
>  context)
>at 
> NUnit.Framework.Internal.Execution.SimpleWorkItem.<>c__DisplayClass4_0.b__0()
>at 
> NUnit.Framework.Internal.ContextUtils.<>c__DisplayClass1_0`1.b__0(Object
>  _)
> 1)at 
> Apache.Ignite.Tests.ReconnectTests.TestReconnectAfterFullClusterRestart() in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite.Tests/ReconnectTests.cs:line
>  162
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

[jira] [Commented] (IGNITE-20749) .NET: Thin 3.0: TestReconnectAfterFullClusterRestart is still flaky

2023-11-12 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785371#comment-17785371
 ] 

Pavel Tupitsyn commented on IGNITE-20749:
-

100+ successful runs: 
https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_RunNetTests?branch=pull%2F2827=overview=builds#all-projects

> .NET: Thin 3.0: TestReconnectAfterFullClusterRestart is still flaky
> ---
>
> Key: IGNITE-20749
> URL: https://issues.apache.org/jira/browse/IGNITE-20749
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://ci.ignite.apache.org/test/5088070784193128850?currentProjectId=ApacheIgnite3xGradle_Test=true
> {code}
> Expected: No Exception to be thrown
>   But was:   to endpoint: 127.0.0.1:42477
>  ---> System.TimeoutException: The operation has timed out.
>at Apache.Ignite.Internal.ClientSocket.ConnectAsync(SocketEndpoint 
> endPoint, IgniteClientConfiguration configuration, Action`1 
> assignmentChangeCallback) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientSocket.cs:line
>  195
>--- End of inner exception stack trace ---
>at Apache.Ignite.Internal.ClientSocket.ConnectAsync(SocketEndpoint 
> endPoint, IgniteClientConfiguration configuration, Action`1 
> assignmentChangeCallback) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientSocket.cs:line
>  213
>at Apache.Ignite.Internal.ClientFailoverSocket.ConnectAsync(SocketEndpoint 
> endpoint) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientFailoverSocket.cs:line
>  414
>at Apache.Ignite.Internal.ClientFailoverSocket.GetNextSocketAsync() in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientFailoverSocket.cs:line
>  365
>at 
> Apache.Ignite.Internal.ClientFailoverSocket.GetSocketAsync(PreferredNode 
> preferredNode) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientFailoverSocket.cs:line
>  285
>at 
> Apache.Ignite.Internal.ClientFailoverSocket.DoOutInOpAndGetSocketAsync(ClientOp
>  clientOp, Transaction tx, PooledArrayBuffer request, PreferredNode 
> preferredNode) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientFailoverSocket.cs:line
>  183
>at Apache.Ignite.Internal.ClientFailoverSocket.DoOutInOpAsync(ClientOp 
> clientOp, PooledArrayBuffer request, PreferredNode preferredNode) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientFailoverSocket.cs:line
>  145
>at Apache.Ignite.Internal.Table.Tables.GetTablesAsync() in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/Table/Tables.cs:line
>  64
>at 
> Apache.Ignite.Tests.ReconnectTests.<>c__DisplayClass6_0.d.MoveNext()
>  in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite.Tests/ReconnectTests.cs:line
>  162
> --- End of stack trace from previous location ---
>at 
> NUnit.Framework.Internal.TaskAwaitAdapter.GenericAdapter`1.BlockUntilCompleted()
>at 
> NUnit.Framework.Internal.MessagePumpStrategy.NoMessagePumpStrategy.WaitForCompletion(AwaitAdapter
>  awaiter)
>at NUnit.Framework.Internal.AsyncToSyncAdapter.Await(Func`1 invoke)
>at NUnit.Framework.Internal.ExceptionHelper.RecordException(Delegate 
> parameterlessDelegate, String parameterName)>
>at 
> Apache.Ignite.Tests.ReconnectTests.TestReconnectAfterFullClusterRestart() in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite.Tests/ReconnectTests.cs:line
>  162
>at 
> NUnit.Framework.Internal.TaskAwaitAdapter.GenericAdapter`1.BlockUntilCompleted()
>at 
> NUnit.Framework.Internal.MessagePumpStrategy.NoMessagePumpStrategy.WaitForCompletion(AwaitAdapter
>  awaiter)
>at NUnit.Framework.Internal.AsyncToSyncAdapter.Await(Func`1 invoke)
>at 
> NUnit.Framework.Internal.Commands.TestMethodCommand.RunTestMethod(TestExecutionContext
>  context)
>at 
> NUnit.Framework.Internal.Commands.TestMethodCommand.Execute(TestExecutionContext
>  context)
>at 
> NUnit.Framework.Internal.Execution.SimpleWorkItem.<>c__DisplayClass4_0.b__0()
>at 
> NUnit.Framework.Internal.ContextUtils.<>c__DisplayClass1_0`1.b__0(Object
>  _)
> 1)at 
> Apache.Ignite.Tests.ReconnectTests.TestReconnectAfterFullClusterRestart() in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite.Tests/ReconnectTests.cs:line
>  162
> {code}

[jira] [Commented] (IGNITE-19218) ODBC 3.0: Implement special columns query

2023-11-12 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785368#comment-17785368
 ] 

Pavel Tupitsyn commented on IGNITE-19218:
-

[~isapego] looks good to me.

> ODBC 3.0: Implement special columns query
> -
>
> Key: IGNITE-19218
> URL: https://issues.apache.org/jira/browse/IGNITE-19218
> Project: Ignite
>  Issue Type: Improvement
>  Components: odbc
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Probably should just port dummy functionality and tests from Ignite 2.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20663) Thin 3.0: ItThinClientSchemaSynchronizationTest.testClientUsesLatestSchemaOnWrite is flaky

2023-11-12 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785332#comment-17785332
 ] 

Igor Sapego commented on IGNITE-20663:
--

Looks like the issue was fixed by IGNITE-20644.

> Thin 3.0: 
> ItThinClientSchemaSynchronizationTest.testClientUsesLatestSchemaOnWrite is 
> flaky
> --
>
> Key: IGNITE-20663
> URL: https://issues.apache.org/jira/browse/IGNITE-20663
> Project: Ignite
>  Issue Type: Bug
>  Components: thin client
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Major
>  Labels: ignite-3
>
> The following test is flaky:
> https://ci.ignite.apache.org/test/-4306071594745342563?currentProjectId=ApacheIgnite3xGradle_Test_IntegrationTests=true
> Need to investigate and fix.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-20663) Thin 3.0: ItThinClientSchemaSynchronizationTest.testClientUsesLatestSchemaOnWrite is flaky

2023-11-12 Thread Igor Sapego (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Sapego resolved IGNITE-20663.
--
Resolution: Fixed

> Thin 3.0: 
> ItThinClientSchemaSynchronizationTest.testClientUsesLatestSchemaOnWrite is 
> flaky
> --
>
> Key: IGNITE-20663
> URL: https://issues.apache.org/jira/browse/IGNITE-20663
> Project: Ignite
>  Issue Type: Bug
>  Components: thin client
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Major
>  Labels: ignite-3
>
> The following test is flaky:
> https://ci.ignite.apache.org/test/-4306071594745342563?currentProjectId=ApacheIgnite3xGradle_Test_IntegrationTests=true
> Need to investigate and fix.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)