[jira] [Updated] (IGNITE-22327) Error "StateMachine meet critical error" on restart

2024-05-24 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22327:
--
Description: 
*Steps to reproduce:*
 # Start a cluster of 3 nodes on 3 hosts.
 # Create 10 tables and insert 10 rows into each (see the reproduction sketch after this list).
 # Kill 1 server.
 # Start the killed server.
 # Check logs for errors.
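
A minimal JDBC sketch of step 2, assuming the cluster is reachable through the Ignite 3 thin driver at jdbc:ignite:thin://127.0.0.1:10800; the table and column names are illustrative, not taken from the test code:
{code:java}
// Hedged reproduction sketch for step 2 only: create 10 tables and insert 10 rows into each.
// The JDBC URL and identifiers below are assumptions for illustration.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RestartReproSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800");
             Statement stmt = conn.createStatement()) {
            for (int t = 0; t < 10; t++) {
                stmt.executeUpdate("CREATE TABLE repro_table_" + t + " (id INT PRIMARY KEY, val VARCHAR)");

                for (int r = 0; r < 10; r++) {
                    stmt.executeUpdate("INSERT INTO repro_table_" + t + " VALUES (" + r + ", 'row-" + r + "')");
                }
            }
        }
        // Steps 3-5 (kill one node, restart it, inspect its log for ERROR entries)
        // are performed outside this snippet, e.g. with the node's service scripts.
    }
}
{code}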

*Expected:*

No errors in logs.

*Actual:*
Errors in logs:
{code:java}
2024-05-23 21:09:52:473 +0300 
[ERROR][%ClusterFailover3NodesTest_cluster_0%JRaft-FSMCaller-Disruptor_stripe_3-0][StateMachineAdapter]
 Encountered an error=Status[ESTATEMACHINE<10002>: StateMachine meet critical 
error when applying one or more tasks since index=2, 
Status[ESTATEMACHINE<10002>: No serializer provider defined for group type 40 
and message type 8]] on StateMachine 
org.apache.ignite.internal.raft.server.impl.JraftServerImpl$DelegatingStateMachine,
 it's highly recommended to implement this method as raft stops working since 
some error occurs, you should figure out the cause and repair or remove this 
node.
Error [type=ERROR_TYPE_STATE_MACHINE, status=Status[ESTATEMACHINE<10002>: 
StateMachine meet critical error when applying one or more tasks since index=2, 
Status[ESTATEMACHINE<10002>: No serializer provider defined for group type 40 
and message type 8]]]
at 
org.apache.ignite.raft.jraft.core.IteratorImpl.getOrCreateError(IteratorImpl.java:156)
at 
org.apache.ignite.raft.jraft.core.IteratorImpl.setErrorAndRollback(IteratorImpl.java:147)
at 
org.apache.ignite.raft.jraft.core.IteratorWrapper.setErrorAndRollback(IteratorWrapper.java:72)
at 
org.apache.ignite.internal.raft.server.impl.JraftServerImpl$DelegatingStateMachine.onApply(JraftServerImpl.java:803)
at 
org.apache.ignite.raft.jraft.core.FSMCallerImpl.doApplyTasks(FSMCallerImpl.java:557)
at 
org.apache.ignite.raft.jraft.core.FSMCallerImpl.doCommitted(FSMCallerImpl.java:525)
at 
org.apache.ignite.raft.jraft.core.FSMCallerImpl.runApplyTask(FSMCallerImpl.java:444)
at 
org.apache.ignite.raft.jraft.core.FSMCallerImpl$ApplyTaskHandler.onEvent(FSMCallerImpl.java:136)
at 
org.apache.ignite.raft.jraft.core.FSMCallerImpl$ApplyTaskHandler.onEvent(FSMCallerImpl.java:130)
at 
org.apache.ignite.raft.jraft.disruptor.StripedDisruptor$StripeEntryHandler.onEvent(StripedDisruptor.java:340)
at 
org.apache.ignite.raft.jraft.disruptor.StripedDisruptor$StripeEntryHandler.onEvent(StripedDisruptor.java:278)
at 
com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:167)
at 
com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:122)
at java.base/java.lang.Thread.run(Thread.java:842)
2024-05-23 21:09:52:473 +0300 
[WARNING][%ClusterFailover3NodesTest_cluster_0%JRaft-FSMCaller-Disruptor_stripe_3-0][NodeImpl]
 Node <18_part_19/ClusterFailover3NodesTest_cluster_0> got error: Error 
[type=ERROR_TYPE_STATE_MACHINE, status=Status[ESTATEMACHINE<10002>: 
StateMachine meet critical error when applying one or more tasks since index=2, 
Status[ESTATEMACHINE<10002>: No serializer provider defined for group type 40 
and message type 8]]].
2024-05-23 21:09:52:473 +0300 
[WARNING][%ClusterFailover3NodesTest_cluster_0%JRaft-FSMCaller-Disruptor_stripe_3-0][FSMCallerImpl]
 FSMCaller already in error status, ignore new error
Error [type=ERROR_TYPE_STATE_MACHINE, status=Status[ESTATEMACHINE<10002>: 
StateMachine meet critical error when applying one or more tasks since index=2, 
Status[ESTATEMACHINE<10002>: No serializer provider defined for group type 40 
and message type 8]]]
at 
org.apache.ignite.raft.jraft.core.IteratorImpl.getOrCreateError(IteratorImpl.java:156)
at 
org.apache.ignite.raft.jraft.core.IteratorImpl.setErrorAndRollback(IteratorImpl.java:147)
at 
org.apache.ignite.raft.jraft.core.IteratorWrapper.setErrorAndRollback(IteratorWrapper.java:72)
at 
org.apache.ignite.internal.raft.server.impl.JraftServerImpl$DelegatingStateMachine.onApply(JraftServerImpl.java:803)
at 
org.apache.ignite.raft.jraft.core.FSMCallerImpl.doApplyTasks(FSMCallerImpl.java:557)
at 
org.apache.ignite.raft.jraft.core.FSMCallerImpl.doCommitted(FSMCallerImpl.java:525)
at 
org.apache.ignite.raft.jraft.core.FSMCallerImpl.runApplyTask(FSMCallerImpl.java:444)
at 
org.apache.ignite.raft.jraft.core.FSMCallerImpl$ApplyTaskHandler.onEvent(FSMCallerImpl.java:136)
at 
org.apache.ignite.raft.jraft.core.FSMCallerImpl$ApplyTaskHandler.onEvent(FSMCallerImpl.java:130)
at 
org.apache.ignite.raft.jraft.disruptor.StripedDisruptor$StripeEntryHandler.onEvent(StripedDisruptor.java:340)
at 
org.apache.ignite.raft.jraft.disruptor.StripedDisruptor$StripeEntryHandler.onEvent(StripedDisruptor.java:278)
at 
com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:167)
at 

[jira] [Updated] (IGNITE-22327) Error "StateMachine meet critical error" on restart

2024-05-24 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22327:
--
Description: 
*Steps to reproduce:*
 # Start a cluster of 3 nodes on 3 hosts.
 # Create 10 tables and insert 10 rows into each.
 # Kill 1 server.
 # Start the killed server.
 # Check logs for errors.

*Expected:*

No errors in logs.

*Actual:*
Errors in logs:
{code:java}
2024-05-17 04:26:37:808 + 
[ERROR][%ClusterFailover2NodesTest_cluster_0%common-scheduler-0][CriticalWorkerWatchdog]
 A critical thread is blocked for 688 ms that is more than the allowed 500 ms, 
it is "ClusterFailover2NodesTest_cluster_0-srv-worker-3" prio=10 Id=41 RUNNABLE
    at 
app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:25)
    at 
app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:11)
    at 
app//org.apache.ignite.internal.network.netty.InboundDecoder.decode(InboundDecoder.java:136)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at 
app//io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at 
app//io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at 
app//io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at 
app//io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at 
app//io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at 
app//io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at app//io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at 
app//io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at 
app//io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at 
app//io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base@17.0.6/java.lang.Thread.run(Thread.java:833) {code}
GC calls of node ClusterFailover2NodesTest_cluster_0 (LOG: [^ignite3db-0.log])
!image-2024-05-17-18-06-23-594.png! GC calls of node 
ClusterFailover2NodesTest_cluster_1 (LOG: [^ignite3db-0.log])
!image-2024-05-17-18-06-04-081.png!

  was:
*Steps to reproduce:*
 # Start cluster of 2 nodes on single host.
 # Create 5 tables and insert 1000 rows into each.
 # Kill 1 server.
 # Start the killed server.
 # Check logs for errors.

*Expected:*

No errors in logs.

*Actual:*
Errors in logs
{code:java}
2024-05-17 04:26:37:808 + 
[ERROR][%ClusterFailover2NodesTest_cluster_0%common-scheduler-0][CriticalWorkerWatchdog]
 A critical thread is blocked for 688 ms that is more than the allowed 500 ms, 
it is "ClusterFailover2NodesTest_cluster_0-srv-worker-3" prio=10 Id=41 RUNNABLE
    at 
app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:25)
    at 
app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:11)
    at 
app//org.apache.ignite.internal.network.netty.InboundDecoder.decode(InboundDecoder.java:136)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at 
app//io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at 

[jira] [Updated] (IGNITE-22327) Error "StateMachine meet critical error" on restart

2024-05-24 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22327:
--
Environment: 3 nodes (with arguments "-Xms4096m", "-Xmx4096m" ) on *3 host* 
 (was: 2 nodes (with arguments "-Xms4096m", "-Xmx4096m" ) on *1 host*
cpuCount=10
memorySizeMb=15360)

>  Error "StateMachine meet critical error" on restart
> 
>
> Key: IGNITE-22327
> URL: https://issues.apache.org/jira/browse/IGNITE-22327
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 3.0.0-beta2
> Environment: 3 nodes (with arguments "-Xms4096m", "-Xmx4096m" ) on *3 
> host*
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>
> *Steps to reproduce:*
>  # Start cluster of 2 nodes on single host.
>  # Create 5 tables and insert 1000 rows into each.
>  # Kill 1 server.
>  # Start the killed server.
>  # Check logs for errors.
> *Expected:*
> No errors in logs.
> *Actual:*
> Errors in logs
> {code:java}
> 2024-05-17 04:26:37:808 + 
> [ERROR][%ClusterFailover2NodesTest_cluster_0%common-scheduler-0][CriticalWorkerWatchdog]
>  A critical thread is blocked for 688 ms that is more than the allowed 500 
> ms, it is "ClusterFailover2NodesTest_cluster_0-srv-worker-3" prio=10 Id=41 
> RUNNABLE
>     at 
> app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:25)
>     at 
> app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:11)
>     at 
> app//org.apache.ignite.internal.network.netty.InboundDecoder.decode(InboundDecoder.java:136)
>     at 
> app//io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529)
>     at 
> app//io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468)
>     at 
> app//io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
>     at 
> app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>     at 
> app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>     at 
> app//io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>     at 
> app//io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>     at 
> app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
>     at 
> app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>     at 
> app//io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>     at 
> app//io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
>     at 
> app//io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
>     at 
> app//io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
>     at 
> app//io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
>     at app//io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
>     at 
> app//io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>     at 
> app//io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>     at 
> app//io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>     at java.base@17.0.6/java.lang.Thread.run(Thread.java:833) {code}
> GC calls of node ClusterFailover2NodesTest_cluster_0 (LOG: [^ignite3db-0.log])
> !image-2024-05-17-18-06-23-594.png! GC calls of node 
> ClusterFailover2NodesTest_cluster_1 (LOG: [^ignite3db-0.log])
> !image-2024-05-17-18-06-04-081.png!





[jira] [Updated] (IGNITE-22327) Error "StateMachine meet critical error" on restart

2024-05-24 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22327:
--
Environment: 3 nodes (with arguments "-Xms4096m", "-Xmx4096m" ) on *3 
hosts*  (was: 3 nodes (with arguments "-Xms4096m", "-Xmx4096m" ) on *3 host*)

>  Error "StateMachine meet critical error" on restart
> 
>
> Key: IGNITE-22327
> URL: https://issues.apache.org/jira/browse/IGNITE-22327
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 3.0.0-beta2
> Environment: 3 nodes (with arguments "-Xms4096m", "-Xmx4096m" ) on *3 
> hosts*
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>
> *Steps to reproduce:*
>  # Start cluster of 2 nodes on single host.
>  # Create 5 tables and insert 1000 rows into each.
>  # Kill 1 server.
>  # Start the killed server.
>  # Check logs for errors.
> *Expected:*
> No errors in logs.
> *Actual:*
> Errors in logs
> {code:java}
> 2024-05-17 04:26:37:808 + 
> [ERROR][%ClusterFailover2NodesTest_cluster_0%common-scheduler-0][CriticalWorkerWatchdog]
>  A critical thread is blocked for 688 ms that is more than the allowed 500 
> ms, it is "ClusterFailover2NodesTest_cluster_0-srv-worker-3" prio=10 Id=41 
> RUNNABLE
>     at 
> app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:25)
>     at 
> app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:11)
>     at 
> app//org.apache.ignite.internal.network.netty.InboundDecoder.decode(InboundDecoder.java:136)
>     at 
> app//io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529)
>     at 
> app//io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468)
>     at 
> app//io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
>     at 
> app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>     at 
> app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>     at 
> app//io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>     at 
> app//io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>     at 
> app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
>     at 
> app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>     at 
> app//io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>     at 
> app//io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
>     at 
> app//io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
>     at 
> app//io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
>     at 
> app//io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
>     at app//io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
>     at 
> app//io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>     at 
> app//io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>     at 
> app//io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>     at java.base@17.0.6/java.lang.Thread.run(Thread.java:833) {code}
> GC calls of node ClusterFailover2NodesTest_cluster_0 (LOG: [^ignite3db-0.log])
> !image-2024-05-17-18-06-23-594.png! GC calls of node 
> ClusterFailover2NodesTest_cluster_1 (LOG: [^ignite3db-0.log])
> !image-2024-05-17-18-06-04-081.png!





[jira] [Created] (IGNITE-22327) Error "StateMachine meet critical error" on restart

2024-05-24 Thread Igor (Jira)
Igor created IGNITE-22327:
-

 Summary:  Error "StateMachine meet critical error" on restart
 Key: IGNITE-22327
 URL: https://issues.apache.org/jira/browse/IGNITE-22327
 Project: Ignite
  Issue Type: Bug
  Components: general
Affects Versions: 3.0.0-beta2
 Environment: 2 nodes (with arguments "-Xms4096m", "-Xmx4096m" ) on *1 
host*
cpuCount=10
memorySizeMb=15360
Reporter: Igor


*Steps to reproduce:*
 # Start a cluster of 2 nodes on a single host.
 # Create 5 tables and insert 1000 rows into each (see the reproduction sketch after this list).
 # Kill 1 server.
 # Start the killed server.
 # Check logs for errors.
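
A minimal JDBC sketch of step 2 for this two-node scenario, using a prepared statement for the inserts; the thin-driver URL and table names are assumptions for illustration, not the test's own code:
{code:java}
// Hedged reproduction sketch: 5 tables with 1000 rows each, inserted via PreparedStatement.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class TwoNodeRestartReproSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800")) {
            for (int t = 0; t < 5; t++) {
                try (Statement ddl = conn.createStatement()) {
                    ddl.executeUpdate("CREATE TABLE repro_table_" + t + " (id INT PRIMARY KEY, val VARCHAR)");
                }

                try (PreparedStatement ins = conn.prepareStatement(
                        "INSERT INTO repro_table_" + t + " (id, val) VALUES (?, ?)")) {
                    for (int r = 0; r < 1000; r++) {
                        ins.setInt(1, r);
                        ins.setString(2, "row-" + r);
                        ins.executeUpdate();
                    }
                }
            }
        }
        // The kill/restart steps and the log check are done outside this snippet.
    }
}
{code}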

*Expected:*

No errors in logs.

*Actual:*
Errors in logs:
{code:java}
2024-05-17 04:26:37:808 + 
[ERROR][%ClusterFailover2NodesTest_cluster_0%common-scheduler-0][CriticalWorkerWatchdog]
 A critical thread is blocked for 688 ms that is more than the allowed 500 ms, 
it is "ClusterFailover2NodesTest_cluster_0-srv-worker-3" prio=10 Id=41 RUNNABLE
    at 
app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:25)
    at 
app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:11)
    at 
app//org.apache.ignite.internal.network.netty.InboundDecoder.decode(InboundDecoder.java:136)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at 
app//io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at 
app//io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at 
app//io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at 
app//io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at 
app//io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at 
app//io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at app//io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at 
app//io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at 
app//io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at 
app//io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base@17.0.6/java.lang.Thread.run(Thread.java:833) {code}
GC calls of node ClusterFailover2NodesTest_cluster_0 (LOG: [^ignite3db-0.log])
!image-2024-05-17-18-06-23-594.png! GC calls of node 
ClusterFailover2NodesTest_cluster_1 (LOG: [^ignite3db-0.log])
!image-2024-05-17-18-06-04-081.png!





[jira] [Updated] (IGNITE-22324) The exception "The primary replica has changed" on creation of 1000 tables

2024-05-24 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22324:
--
Description: 
*Steps to reproduce:*

1. Start a cluster with 1 node with JVM options "-Xms4096m -Xmx4096m".

2. Create 1000 tables with 200 varchar columns each and insert 1 row into each, one by one (see the sketch after this list).
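
A minimal JDBC sketch of step 2, assuming the thin-driver URL jdbc:ignite:thin://127.0.0.1:10800 and illustrative identifiers (not the test's own code):
{code:java}
// Hedged reproduction sketch: 1000 tables with 200 VARCHAR columns, one row inserted into each,
// created strictly one by one.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class WideTablesReproSketch {
    public static void main(String[] args) throws Exception {
        String varcharColumns = IntStream.range(0, 200)
                .mapToObj(i -> "col_" + i + " VARCHAR")
                .collect(Collectors.joining(", "));

        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800");
             Statement stmt = conn.createStatement()) {
            for (int t = 0; t < 1000; t++) {
                stmt.executeUpdate("CREATE TABLE wide_table_" + t + " (id INT PRIMARY KEY, " + varcharColumns + ")");
                // A single row per table; the VARCHAR columns are left NULL for brevity.
                stmt.executeUpdate("INSERT INTO wide_table_" + t + " (id) VALUES (1)");
            }
        }
    }
}
{code}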

*Expected result:*
Tables are created.

*Actual result:*

On table 850, the following exception is thrown:
{code:java}
java.sql.SQLException: The primary replica has changed 
[expectedLeaseholderName=TablesAmountCapacityTest_cluster_0, 
currentLeaseholderName=null, 
expectedLeaseholderId=bf69f842-d6c8-4f7a-b7e4-96458a4d92cb, 
currentLeaseholderId=null, 
expectedEnlistmentConsistencyToken=112491691050598880, 
currentEnlistmentConsistencyToken=null]  at 
org.apache.ignite.internal.jdbc.proto.IgniteQueryErrorCode.createJdbcSqlException(IgniteQueryErrorCode.java:57)
  at 
org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:154)  
at 
org.apache.ignite.internal.jdbc.JdbcPreparedStatement.executeWithArguments(JdbcPreparedStatement.java:765)
  at 
org.apache.ignite.internal.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:173)
  at 
org.gridgain.ai3tests.tests.amountcapacity.TablesAmountCapacityBaseTest.lambda$insertRowAndAssertTimeout$2(TablesAmountCapacityBaseTest.java:92)
  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at java.base/java.lang.Thread.run(Thread.java:834){code}
In the server logs there is an exception:
{code:java}
2024-05-23 17:57:19:570 + 
[WARNING][CompletableFutureDelayScheduler][RaftGroupServiceImpl] Recoverable 
error during the request occurred (will be retried on the randomly selected 
node) [request=WriteActionRequestImpl [command=[0, 9, 41, -58, -128, -112, -21, 
-103, -45, -23, -57, 1], deserializedCommand=SafeTimeSyncCommandImpl 
[safeTimeLong=112491694408335429], groupId=3402_part_7], peer=Peer 
[consistentId=TablesAmountCapacityTest_cluster_0, idx=0], newPeer=Peer 
[consistentId=TablesAmountCapacityTest_cluster_0, idx=0]].
java.util.concurrent.CompletionException: java.util.concurrent.TimeoutException
at 
java.base/java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:367)
at 
java.base/java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:376)
at 
java.base/java.util.concurrent.CompletableFuture$UniRelay.tryFire(CompletableFuture.java:1019)
at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
at 
java.base/java.util.concurrent.CompletableFuture$Timeout.run(CompletableFuture.java:2792)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.util.concurrent.TimeoutException
... 7 more
2024-05-23 17:57:19:570 + 
[WARNING][CompletableFutureDelayScheduler][RaftGroupServiceImpl] Recoverable 
error during the request occurred (will be retried on the randomly selected 
node) [request=WriteActionRequestImpl [command=[0, 9, 41, -106, -128, -108, 
-21, -103, -45, -23, -57, 1], deserializedCommand=SafeTimeSyncCommandImpl 
[safeTimeLong=112491694408400917], groupId=3402_part_21], peer=Peer 
[consistentId=TablesAmountCapacityTest_cluster_0, idx=0], newPeer=Peer 
[consistentId=TablesAmountCapacityTest_cluster_0, idx=0]].
java.util.concurrent.CompletionException: java.util.concurrent.TimeoutException
at 
java.base/java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:367)
at 
java.base/java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:376)
at 
java.base/java.util.concurrent.CompletableFuture$UniRelay.tryFire(CompletableFuture.java:1019)
at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
at 
java.base/java.util.concurrent.CompletableFuture$Timeout.run(CompletableFuture.java:2792)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
 

[jira] [Created] (IGNITE-22324) The exception "The primary replica has changed" on creation of 1000 tables

2024-05-24 Thread Igor (Jira)
Igor created IGNITE-22324:
-

 Summary: The exception "The primary replica has changed" on 
creation of 1000 tables
 Key: IGNITE-22324
 URL: https://issues.apache.org/jira/browse/IGNITE-22324
 Project: Ignite
  Issue Type: Bug
  Components: general, persistence
Affects Versions: 3.0.0-beta1
Reporter: Igor


*Steps to reproduce:*

1. Start a cluster with 1 node with JVM options "-Xms4096m -Xmx4096m".

2. Create 1000 tables with 200 varchar columns each and insert 1 row into each, one by one.

*Expected result:*
Tables are created.

*Actual result:*

On table 949, the following exception is thrown:
{code:java}
java.sql.SQLException: Primary replica has expired, transaction will be rolled 
back: [groupId = 1850_part_21, expected enlistment consistency token = 
112069202113202526, commit timestamp = HybridTimestamp [physical=2024-03-10 
03:13:16:057 +, logical=396, composite=112069207395991948], current primary 
replica = null]
  at 
org.apache.ignite.internal.jdbc.proto.IgniteQueryErrorCode.createJdbcSqlException(IgniteQueryErrorCode.java:57)
  at 
org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:154)
  at 
org.apache.ignite.internal.jdbc.JdbcPreparedStatement.executeWithArguments(JdbcPreparedStatement.java:765)
  at 
org.apache.ignite.internal.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:173)
  at 
org.gridgain.ai3tests.tests.TablesAmountCapacityTest.lambda$insertRowAndAssertTimeout$1(TablesAmountCapacityTest.java:166)
  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at java.base/java.lang.Thread.run(Thread.java:834) {code}
In the server logs there is an exception:
{code:java}
2024-03-10 03:13:24:222 + 
[WARNING][%TablesAmountCapacityTest_cluster_0%partition-operations-8][TxManagerImpl]
 Failed to finish Tx. The operation will be retried 
[txId=018e2659-b09f-009c-23c0-6ab50001].
java.util.concurrent.CompletionException: 
org.apache.ignite.internal.replicator.exception.ReplicationTimeoutException: 
IGN-REP-3 TraceId:7ff7e851-9f18-4212-b317-a70a0a92fdfe Replication is timed out 
[replicaGrpId=1850_part_21]
    at 
java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
    at 
java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
    at 
java.base/java.util.concurrent.CompletableFuture$UniAccept.tryFire(CompletableFuture.java:704)
    at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
    at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
    at 
org.apache.ignite.internal.replicator.ReplicaService.lambda$sendToReplica$0(ReplicaService.java:110)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: 
org.apache.ignite.internal.replicator.exception.ReplicationTimeoutException: 
IGN-REP-3 TraceId:7ff7e851-9f18-4212-b317-a70a0a92fdfe Replication is timed out 
[replicaGrpId=1850_part_21]
    ... 4 more
2024-03-10 03:13:24:290 + 
[WARNING][%TablesAmountCapacityTest_cluster_0%partition-operations-22][TrackableNetworkMessageHandler]
 Message handling has been too long [duration=67ms, message=[class 
org.apache.ignite.raft.jraft.rpc.WriteActionRequestImpl]]
2024-03-10 03:13:24:290 + 
[WARNING][%TablesAmountCapacityTest_cluster_0%partition-operations-11][TrackableNetworkMessageHandler]
 Message handling has been too long [duration=67ms, message=[class 
org.apache.ignite.raft.jraft.rpc.WriteActionRequestImpl]]
2024-03-10 03:13:24:290 + 
[WARNING][%TablesAmountCapacityTest_cluster_0%partition-operations-19][TrackableNetworkMessageHandler]
 Message handling has been too long [duration=67ms, message=[class 
org.apache.ignite.raft.jraft.rpc.WriteActionRequestImpl]]
2024-03-10 03:13:24:290 + 
[WARNING][%TablesAmountCapacityTest_cluster_0%partition-operations-17][TrackableNetworkMessageHandler]
 Message handling has been too long [duration=67ms, message=[class 
org.apache.ignite.raft.jraft.rpc.WriteActionRequestImpl]]
2024-03-10 03:13:24:290 + 
[WARNING][%TablesAmountCapacityTest_cluster_0%partition-operations-23][TrackableNetworkMessageHandler]
 Message handling has been too long [duration=67ms, message=[class 
org.apache.ignite.raft.jraft.rpc.WriteActionRequestImpl]]
2024-03-10 03:13:24:290 + 

[jira] [Updated] (IGNITE-22324) The exception "The primary replica has changed" on creation of 1000 tables

2024-05-24 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22324:
--
Affects Version/s: 3.0.0-beta2
   (was: 3.0.0-beta1)

> The exception "The primary replica has changed" on creation of 1000 tables
> --
>
> Key: IGNITE-22324
> URL: https://issues.apache.org/jira/browse/IGNITE-22324
> Project: Ignite
>  Issue Type: Bug
>  Components: general, persistence
>Affects Versions: 3.0.0-beta2
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>
> *Steps to reproduce:*
> 1. Start cluster with 1 node with JVM options: "-Xms4096m -Xmx4096m"
> 2. Create 1000 tables with 200 varchar columns each  and insert 1 row into 
> each. One by one.
> *Expected result:*
> Tables are created.
> *Actual result:*
> On table 949 the exception is thrown:
> {code:java}
> java.sql.SQLException: Primary replica has expired, transaction will be 
> rolled back: [groupId = 1850_part_21, expected enlistment consistency token = 
> 112069202113202526, commit timestamp = HybridTimestamp [physical=2024-03-10 
> 03:13:16:057 +, logical=396, composite=112069207395991948], current 
> primary replica = null]
>   at 
> org.apache.ignite.internal.jdbc.proto.IgniteQueryErrorCode.createJdbcSqlException(IgniteQueryErrorCode.java:57)
>   at 
> org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:154)
>   at 
> org.apache.ignite.internal.jdbc.JdbcPreparedStatement.executeWithArguments(JdbcPreparedStatement.java:765)
>   at 
> org.apache.ignite.internal.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:173)
>   at 
> org.gridgain.ai3tests.tests.TablesAmountCapacityTest.lambda$insertRowAndAssertTimeout$1(TablesAmountCapacityTest.java:166)
>   at 
> java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:834) {code}
> In server logs there is an exception:
> {code:java}
> 2024-03-10 03:13:24:222 + 
> [WARNING][%TablesAmountCapacityTest_cluster_0%partition-operations-8][TxManagerImpl]
>  Failed to finish Tx. The operation will be retried 
> [txId=018e2659-b09f-009c-23c0-6ab50001].
> java.util.concurrent.CompletionException: 
> org.apache.ignite.internal.replicator.exception.ReplicationTimeoutException: 
> IGN-REP-3 TraceId:7ff7e851-9f18-4212-b317-a70a0a92fdfe Replication is timed 
> out [replicaGrpId=1850_part_21]
>     at 
> java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
>     at 
> java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
>     at 
> java.base/java.util.concurrent.CompletableFuture$UniAccept.tryFire(CompletableFuture.java:704)
>     at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>     at 
> java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
>     at 
> org.apache.ignite.internal.replicator.ReplicaService.lambda$sendToReplica$0(ReplicaService.java:110)
>     at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>     at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>     at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: 
> org.apache.ignite.internal.replicator.exception.ReplicationTimeoutException: 
> IGN-REP-3 TraceId:7ff7e851-9f18-4212-b317-a70a0a92fdfe Replication is timed 
> out [replicaGrpId=1850_part_21]
>     ... 4 more
> 2024-03-10 03:13:24:290 + 
> [WARNING][%TablesAmountCapacityTest_cluster_0%partition-operations-22][TrackableNetworkMessageHandler]
>  Message handling has been too long [duration=67ms, message=[class 
> org.apache.ignite.raft.jraft.rpc.WriteActionRequestImpl]]
> 2024-03-10 03:13:24:290 + 
> [WARNING][%TablesAmountCapacityTest_cluster_0%partition-operations-11][TrackableNetworkMessageHandler]
>  Message handling has been too long [duration=67ms, message=[class 
> org.apache.ignite.raft.jraft.rpc.WriteActionRequestImpl]]
> 2024-03-10 03:13:24:290 + 
> [WARNING][%TablesAmountCapacityTest_cluster_0%partition-operations-19][TrackableNetworkMessageHandler]
>  Message handling has been too long [duration=67ms, message=[class 
> org.apache.ignite.raft.jraft.rpc.WriteActionRequestImpl]]
> 2024-03-10 03:13:24:290 + 
> [WARNING][%TablesAmountCapacityTest_cluster_0%partition-operations-17][TrackableNetworkMessageHandler]
>  Message handling has 

[jira] [Updated] (IGNITE-22280) Error "A critical thread is blocked" on restart

2024-05-17 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22280:
--
Attachment: image-2024-05-17-18-06-23-594.png

> Error "A critical thread is blocked" on restart
> ---
>
> Key: IGNITE-22280
> URL: https://issues.apache.org/jira/browse/IGNITE-22280
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 3.0.0-beta2
> Environment: 2 nodes (with arguments "-Xms4096m", "-Xmx4096m" ) on *1 
> host*
> cpuCount=10
> memorySizeMb=15360
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
> Attachments: ignite3db-0-1.log, ignite3db-0.log, 
> image-2024-05-17-17-57-18-759.png, image-2024-05-17-17-57-32-913.png, 
> image-2024-05-17-17-58-12-428.png, image-2024-05-17-18-06-04-081.png, 
> image-2024-05-17-18-06-23-594.png
>
>
> *Steps to reproduce:*
>  # Start cluster of 2 nodes on single host.
>  # Create 5 tables and insert 1000 rows into each.
>  # Kill 1 server.
>  # Start the killed server.
>  # Check logs for errors.
> *Expected:*
> No errors in logs.
> *Actual:*
> Errors in logs
> {code:java}
> 2024-05-17 04:26:37:808 + 
> [ERROR][%ClusterFailover2NodesTest_cluster_0%common-scheduler-0][CriticalWorkerWatchdog]
>  A critical thread is blocked for 688 ms that is more than the allowed 500 
> ms, it is "ClusterFailover2NodesTest_cluster_0-srv-worker-3" prio=10 Id=41 
> RUNNABLE
>     at 
> app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:25)
>     at 
> app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:11)
>     at 
> app//org.apache.ignite.internal.network.netty.InboundDecoder.decode(InboundDecoder.java:136)
>     at 
> app//io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529)
>     at 
> app//io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468)
>     at 
> app//io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
>     at 
> app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>     at 
> app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>     at 
> app//io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>     at 
> app//io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>     at 
> app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
>     at 
> app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>     at 
> app//io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>     at 
> app//io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
>     at 
> app//io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
>     at 
> app//io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
>     at 
> app//io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
>     at app//io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
>     at 
> app//io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>     at 
> app//io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>     at 
> app//io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>     at java.base@17.0.6/java.lang.Thread.run(Thread.java:833) {code}
> GC calls of node ClusterFailover2NodesTest_cluster_0 (LOG: [^ignite3db-0.log])
> !image-2024-05-17-17-57-32-913.png! GC calls of node 
> ClusterFailover2NodesTest_cluster_1 (LOG: [^ignite3db-0.log])
> !image-2024-05-17-17-58-12-428.png!





[jira] [Updated] (IGNITE-22280) Error "A critical thread is blocked" on restart

2024-05-17 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22280:
--
Description: 
*Steps to reproduce:*
 # Start a cluster of 2 nodes on a single host.
 # Create 5 tables and insert 1000 rows into each.
 # Kill 1 server.
 # Start the killed server.
 # Check logs for errors (see the log-scan sketch after this list).
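
A minimal sketch of step 5, scanning a node log (the attached files are named ignite3db-0.log) for ERROR entries and the watchdog message quoted below; the log path is an assumption:
{code:java}
// Hedged helper: print every log line that contains an ERROR marker or the blocked-thread warning.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class LogErrorScan {
    public static void main(String[] args) throws IOException {
        Path log = Path.of(args.length > 0 ? args[0] : "ignite3db-0.log");

        try (Stream<String> lines = Files.lines(log)) {
            lines.filter(l -> l.contains("[ERROR]") || l.contains("A critical thread is blocked"))
                 .forEach(System.out::println);
        }
    }
}
{code}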

*Expected:*

No errors in logs.

*Actual:*
Errors in logs:
{code:java}
2024-05-17 04:26:37:808 + 
[ERROR][%ClusterFailover2NodesTest_cluster_0%common-scheduler-0][CriticalWorkerWatchdog]
 A critical thread is blocked for 688 ms that is more than the allowed 500 ms, 
it is "ClusterFailover2NodesTest_cluster_0-srv-worker-3" prio=10 Id=41 RUNNABLE
    at 
app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:25)
    at 
app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:11)
    at 
app//org.apache.ignite.internal.network.netty.InboundDecoder.decode(InboundDecoder.java:136)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at 
app//io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at 
app//io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at 
app//io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at 
app//io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at 
app//io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at 
app//io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at app//io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at 
app//io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at 
app//io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at 
app//io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base@17.0.6/java.lang.Thread.run(Thread.java:833) {code}
GC calls of node ClusterFailover2NodesTest_cluster_0 (LOG: [^ignite3db-0.log])
!image-2024-05-17-18-06-23-594.png! GC calls of node 
ClusterFailover2NodesTest_cluster_1 (LOG: [^ignite3db-0.log])
!image-2024-05-17-18-06-04-081.png!

  was:
*Steps to reproduce:*
 # Start cluster of 2 nodes on single host.
 # Create 5 tables and insert 1000 rows into each.
 # Kill 1 server.
 # Start the killed server.
 # Check logs for errors.

*Expected:*

No errors in logs.

*Actual:*
Errors in logs
{code:java}
2024-05-17 04:26:37:808 + 
[ERROR][%ClusterFailover2NodesTest_cluster_0%common-scheduler-0][CriticalWorkerWatchdog]
 A critical thread is blocked for 688 ms that is more than the allowed 500 ms, 
it is "ClusterFailover2NodesTest_cluster_0-srv-worker-3" prio=10 Id=41 RUNNABLE
    at 
app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:25)
    at 
app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:11)
    at 
app//org.apache.ignite.internal.network.netty.InboundDecoder.decode(InboundDecoder.java:136)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at 
app//io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at 

[jira] [Updated] (IGNITE-22280) Error "A critical thread is blocked" on restart

2024-05-17 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22280:
--
Attachment: image-2024-05-17-18-06-04-081.png

> Error "A critical thread is blocked" on restart
> ---
>
> Key: IGNITE-22280
> URL: https://issues.apache.org/jira/browse/IGNITE-22280
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 3.0.0-beta2
> Environment: 2 nodes (with arguments "-Xms4096m", "-Xmx4096m" ) on *1 
> host*
> cpuCount=10
> memorySizeMb=15360
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
> Attachments: ignite3db-0-1.log, ignite3db-0.log, 
> image-2024-05-17-17-57-18-759.png, image-2024-05-17-17-57-32-913.png, 
> image-2024-05-17-17-58-12-428.png, image-2024-05-17-18-06-04-081.png, 
> image-2024-05-17-18-06-23-594.png
>
>
> *Steps to reproduce:*
>  # Start cluster of 2 nodes on single host.
>  # Create 5 tables and insert 1000 rows into each.
>  # Kill 1 server.
>  # Start the killed server.
>  # Check logs for errors.
> *Expected:*
> No errors in logs.
> *Actual:*
> Errors in logs
> {code:java}
> 2024-05-17 04:26:37:808 + 
> [ERROR][%ClusterFailover2NodesTest_cluster_0%common-scheduler-0][CriticalWorkerWatchdog]
>  A critical thread is blocked for 688 ms that is more than the allowed 500 
> ms, it is "ClusterFailover2NodesTest_cluster_0-srv-worker-3" prio=10 Id=41 
> RUNNABLE
>     at 
> app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:25)
>     at 
> app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:11)
>     at 
> app//org.apache.ignite.internal.network.netty.InboundDecoder.decode(InboundDecoder.java:136)
>     at 
> app//io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529)
>     at 
> app//io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468)
>     at 
> app//io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
>     at 
> app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>     at 
> app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>     at 
> app//io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>     at 
> app//io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>     at 
> app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
>     at 
> app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>     at 
> app//io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>     at 
> app//io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
>     at 
> app//io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
>     at 
> app//io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
>     at 
> app//io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
>     at app//io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
>     at 
> app//io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>     at 
> app//io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>     at 
> app//io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>     at java.base@17.0.6/java.lang.Thread.run(Thread.java:833) {code}
> GC calls of node ClusterFailover2NodesTest_cluster_0 (LOG: [^ignite3db-0.log])
> !image-2024-05-17-17-57-32-913.png! GC calls of node 
> ClusterFailover2NodesTest_cluster_1 (LOG: [^ignite3db-0.log])
> !image-2024-05-17-17-58-12-428.png!





[jira] [Updated] (IGNITE-22280) Error "A critical thread is blocked" on restart

2024-05-17 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22280:
--
Description: 
*Steps to reproduce:*
 # Start a cluster of 2 nodes on a single host.
 # Create 5 tables and insert 1000 rows into each.
 # Kill 1 server.
 # Start the killed server.
 # Check logs for errors.

*Expected:*

No errors in logs.

*Actual:*
Errors in logs:
{code:java}
2024-05-17 04:26:37:808 + 
[ERROR][%ClusterFailover2NodesTest_cluster_0%common-scheduler-0][CriticalWorkerWatchdog]
 A critical thread is blocked for 688 ms that is more than the allowed 500 ms, 
it is "ClusterFailover2NodesTest_cluster_0-srv-worker-3" prio=10 Id=41 RUNNABLE
    at 
app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:25)
    at 
app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:11)
    at 
app//org.apache.ignite.internal.network.netty.InboundDecoder.decode(InboundDecoder.java:136)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at 
app//io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at 
app//io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at 
app//io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at 
app//io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at 
app//io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at 
app//io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at app//io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at 
app//io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at 
app//io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at 
app//io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base@17.0.6/java.lang.Thread.run(Thread.java:833) {code}
GC calls of node ClusterFailover2NodesTest_cluster_0 (LOG: [^ignite3db-0.log])
!image-2024-05-17-17-57-32-913.png! GC calls of node 
ClusterFailover2NodesTest_cluster_1 (LOG: [^ignite3db-0.log])
!image-2024-05-17-17-58-12-428.png!

  was:
*Steps to reproduce:*
 # Start cluster of 2 nodes on single host.
 # Insert create 5 tables and insert 1000 rows into each.
 # Kill 1 server.
 # Start the killed server.
 # Check logs for errors.

*Expected:*

No errors in logs.

*Actual:*
Errors in logs
{code:java}
2024-05-17 04:26:37:808 + 
[ERROR][%ClusterFailover2NodesTest_cluster_0%common-scheduler-0][CriticalWorkerWatchdog]
 A critical thread is blocked for 688 ms that is more than the allowed 500 ms, 
it is "ClusterFailover2NodesTest_cluster_0-srv-worker-3" prio=10 Id=41 RUNNABLE
    at 
app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:25)
    at 
app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:11)
    at 
app//org.apache.ignite.internal.network.netty.InboundDecoder.decode(InboundDecoder.java:136)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at 
app//io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at 

[jira] [Created] (IGNITE-22280) Error "A critical thread is blocked" on restart

2024-05-17 Thread Igor (Jira)
Igor created IGNITE-22280:
-

 Summary: Error "A critical thread is blocked" on restart
 Key: IGNITE-22280
 URL: https://issues.apache.org/jira/browse/IGNITE-22280
 Project: Ignite
  Issue Type: Bug
  Components: general
Affects Versions: 3.0.0-beta2
 Environment: 2 nodes (with arguments "-Xms4096m", "-Xmx4096m" ) on *1 
host*
cpuCount=10
memorySizeMb=15360
Reporter: Igor
 Attachments: ignite3db-0-1.log, ignite3db-0.log, 
image-2024-05-17-17-57-18-759.png, image-2024-05-17-17-57-32-913.png, 
image-2024-05-17-17-58-12-428.png

*Steps to reproduce:*
 # Start a cluster of 2 nodes on a single host.
 # Create 5 tables and insert 1000 rows into each.
 # Kill 1 server.
 # Start the killed server.
 # Check logs for errors.

*Expected:*

No errors in logs.

*Actual:*
Errors in logs:
{code:java}
2024-05-17 04:26:37:808 + 
[ERROR][%ClusterFailover2NodesTest_cluster_0%common-scheduler-0][CriticalWorkerWatchdog]
 A critical thread is blocked for 688 ms that is more than the allowed 500 ms, 
it is "ClusterFailover2NodesTest_cluster_0-srv-worker-3" prio=10 Id=41 RUNNABLE
    at 
app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:25)
    at 
app//org.apache.ignite.internal.network.message.InvokeResponseDeserializer.getMessage(InvokeResponseDeserializer.java:11)
    at 
app//org.apache.ignite.internal.network.netty.InboundDecoder.decode(InboundDecoder.java:136)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468)
    at 
app//io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at 
app//io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at 
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at 
app//io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at 
app//io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at 
app//io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at 
app//io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at 
app//io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at app//io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at 
app//io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at 
app//io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at 
app//io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base@17.0.6/java.lang.Thread.run(Thread.java:833) {code}

GC calls of node ClusterFailover2NodesTest_cluster_0 (LOG: [^ignite3db-0.log]):
!image-2024-05-17-17-57-32-913.png!
GC calls of node ClusterFailover2NodesTest_cluster_1 (LOG: [^ignite3db-0.log]):
!image-2024-05-17-17-58-12-428.png!





[jira] [Updated] (IGNITE-22248) Creation of new tables in 1 node cluster stuck after 850+ tables

2024-05-15 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22248:
--
Description: 
*Steps to reproduce:*
 # Start a single-node cluster with JVM arguments "-Xms4096m", "-Xmx4096m".
 # Create tables one by one, up to 1000 (see the timing sketch below).
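
A minimal timing sketch for step 2, assuming the same Java client API; the 
address and table schema are illustrative assumptions:
{code:java}
import org.apache.ignite.client.IgniteClient;

/** Hypothetical loop that creates tables one by one and logs each duration. */
public class TableCreationTiming {
    public static void main(String[] args) {
        try (IgniteClient client = IgniteClient.builder()
                .addresses("localhost:10800")
                .build()) {
            for (int i = 0; i < 1000; i++) {
                long start = System.nanoTime();
                client.sql().execute(null,
                        "CREATE TABLE table_" + i
                                + " (id INTEGER PRIMARY KEY, val VARCHAR(200))");
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                // The report observes this exceeding 30 seconds after 850+ tables.
                System.out.println("table_" + i + " created in " + elapsedMs + " ms");
            }
        }
    }
}
{code}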

*Expected:*
1000 tables are created.

*Actual:*
After 850+ tables, creating each new table takes longer than 30 seconds.

!image-2024-05-15-13-22-40-059.png!

The server logs show continuous errors:
{code:java}
2024-05-15 04:11:58:116 + 
[WARNING][CompletableFutureDelayScheduler][RaftGroupServiceImpl] Recoverable 
error during the request occurred (will be retried on the randomly selected 
node) [request=WriteActionRequestImpl [command=[0, 9, 41, -126, -128, -36, -49, 
-79, -50, -34, -57, 1], deserializedCommand=SafeTimeSyncCommandImpl 
[safeTimeLong=112443150482997249], groupId=950_part_21], peer=Peer 
[consistentId=TablesAmountCapacityTest_cluster_0, idx=0], newPeer=Peer 
[consistentId=TablesAmountCapacityTest_cluster_0, idx=0]].
java.util.concurrent.CompletionException: java.util.concurrent.TimeoutException
at 
java.base/java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:368)
at 
java.base/java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:377)
at 
java.base/java.util.concurrent.CompletableFuture$UniRelay.tryFire(CompletableFuture.java:1097)
at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2162)
at 
java.base/java.util.concurrent.CompletableFuture$Timeout.run(CompletableFuture.java:2874)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.util.concurrent.TimeoutException
... 7 more {code}

  was:
*Steps to reproduce:*
 # Multinode cluster (3 nodes) with arguments 
"-Xms4096m", "-Xmx4096m"
 # Create tables one by one up to 1000

*Expected:*
1000 tables are created.

*Actual:*
After 150+ tables the creation time is higher than 30 seconds.

!image-2024-05-13-10-22-06-994.png!


> Creation of new tables in 1 node cluster stuck after 850+ tables
> 
>
> Key: IGNITE-22248
> URL: https://issues.apache.org/jira/browse/IGNITE-22248
> Project: Ignite
>  Issue Type: Bug
>  Components: general, persistence
>Affects Versions: 3.0.0-beta2
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
> Attachments: image-2024-05-15-13-22-40-059.png
>
>
> *Steps to reproduce:*
>  # Single node cluster with arguments "-Xms4096m", "-Xmx4096m"
>  # Create tables one by one up to 1000
> *Expected:*
> 1000 tables are created.
> *Actual:*
> After 850+ tables the creation time is higher than 30 seconds.
> !image-2024-05-15-13-22-40-059.png!
> In the server logs continuous errors:
> {code:java}
> 2024-05-15 04:11:58:116 + 
> [WARNING][CompletableFutureDelayScheduler][RaftGroupServiceImpl] Recoverable 
> error during the request occurred (will be retried on the randomly selected 
> node) [request=WriteActionRequestImpl [command=[0, 9, 41, -126, -128, -36, 
> -49, -79, -50, -34, -57, 1], deserializedCommand=SafeTimeSyncCommandImpl 
> [safeTimeLong=112443150482997249], groupId=950_part_21], peer=Peer 
> [consistentId=TablesAmountCapacityTest_cluster_0, idx=0], newPeer=Peer 
> [consistentId=TablesAmountCapacityTest_cluster_0, idx=0]].
> java.util.concurrent.CompletionException: 
> java.util.concurrent.TimeoutException
>   at 
> java.base/java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:368)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:377)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniRelay.tryFire(CompletableFuture.java:1097)
>   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2162)
>   at 
> java.base/java.util.concurrent.CompletableFuture$Timeout.run(CompletableFuture.java:2874)
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
>   at 

[jira] [Updated] (IGNITE-22248) Creation of new tables in 1 node cluster stuck after 850+ tables

2024-05-15 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22248:
--
Attachment: image-2024-05-15-13-22-40-059.png

> Creation of new tables in 1 node cluster stuck after 850+ tables
> 
>
> Key: IGNITE-22248
> URL: https://issues.apache.org/jira/browse/IGNITE-22248
> Project: Ignite
>  Issue Type: Bug
>  Components: general, persistence
>Affects Versions: 3.0.0-beta2
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
> Attachments: image-2024-05-15-13-22-40-059.png
>
>
> *Steps to reproduce:*
>  # Multinode cluster (3 nodes) with arguments 
> "-Xms4096m", "-Xmx4096m"
>  # Create tables one by one up to 1000
> *Expected:*
> 1000 tables are created.
> *Actual:*
> After 150+ tables the creation time is higher than 30 seconds.
> !image-2024-05-13-10-22-06-994.png!





[jira] [Created] (IGNITE-22248) Creation of new tables in 1 node cluster stuck after 850+ tables

2024-05-15 Thread Igor (Jira)
Igor created IGNITE-22248:
-

 Summary: Creation of new tables in 1 node cluster stuck after 850+ 
tables
 Key: IGNITE-22248
 URL: https://issues.apache.org/jira/browse/IGNITE-22248
 Project: Ignite
  Issue Type: Bug
  Components: general, persistence
Affects Versions: 3.0.0-beta2
Reporter: Igor


*Steps to reproduce:*
 # Multinode cluster (3 nodes) with arguments 
"-Xms4096m", "-Xmx4096m"
 # Create tables one by one up to 1000

*Expected:*
1000 tables are created.

*Actual:*
After 150+ tables, creating each new table takes longer than 30 seconds.

!image-2024-05-13-10-22-06-994.png!





[jira] [Commented] (IGNITE-22137) Rename RocksDb storage engine to "rocksdb" in configuration

2024-05-13 Thread Igor (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-22137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845871#comment-17845871
 ] 

Igor commented on IGNITE-22137:
---

[~apolovtcev] 
The file 
`modules/storage-rocksdb/src/main/java/org/apache/ignite/internal/storage/rocksdb/configuration/schema/RocksDbStorageEngineExtensionConfigurationSchema.java`
{code:java}
package org.apache.ignite.internal.storage.rocksdb.configuration.schema;

import org.apache.ignite.configuration.annotation.ConfigValue;
import org.apache.ignite.configuration.annotation.ConfigurationExtension;
import org.apache.ignite.internal.storage.configurations.StorageEngineConfigurationSchema;

/**
 * Storages configuration extension for rocksdb storage.
 */
@ConfigurationExtension
public class RocksDbStorageEngineExtensionConfigurationSchema
        extends StorageEngineConfigurationSchema {
    @ConfigValue
    public RocksDbStorageEngineConfigurationSchema rocksDb;
}
 {code}
This produces the following configuration in the distribution:
{code:java}
rocksDb {
    flushDelayMillis=100
} {code}
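For comparison, the section name requested by the ticket summary would 
presumably be all lowercase:
{code:java}
rocksdb {
    flushDelayMillis=100
}
{code}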
I think the ticket is not solved correctly.

> Rename RocksDb storage engine to "rocksdb" in configuration
> ---
>
> Key: IGNITE-22137
> URL: https://issues.apache.org/jira/browse/IGNITE-22137
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Polovtsev
>Assignee: Aleksandr Polovtsev
>Priority: Minor
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently {{RocksDbStorageEngine}} is called "rocksDb" in configuration which 
> is inconsistent  with other storage engines, like "aipersist" and "aimem". I 
> propose to rename to "rocksdb". However, this is an incompatible change in 
> terms of configuration API, so extra caution must be taken.





[jira] [Created] (IGNITE-22209) Creation of new tables in multinode cluster stuck after 150+ tables

2024-05-13 Thread Igor (Jira)
Igor created IGNITE-22209:
-

 Summary: Creation of new tables in multinode cluster stuck after 
150+ tables
 Key: IGNITE-22209
 URL: https://issues.apache.org/jira/browse/IGNITE-22209
 Project: Ignite
  Issue Type: Bug
  Components: general, persistence
Affects Versions: 3.0.0-beta2
Reporter: Igor
 Attachments: image-2024-05-13-10-22-06-994.png

*Steps to reproduce:*
 # Multinode cluster (3 nodes) with arguments 
"-Xms4096m", "-Xmx4096m"
 # Create tables one by one up to 1000

*Expected:*
1000 tables are created.

*Actual:*
After 150+ tables, creating each new table takes longer than 30 seconds.

!image-2024-05-13-10-22-06-994.png!





[jira] [Resolved] (IGNITE-22139) JDBC request to degraded cluster freezes forever

2024-05-08 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor resolved IGNITE-22139.
---
Resolution: Cannot Reproduce

> JDBC request to degraded cluster freezes forever
> 
>
> Key: IGNITE-22139
> URL: https://issues.apache.org/jira/browse/IGNITE-22139
> Project: Ignite
>  Issue Type: Bug
>  Components: general, jdbc, networking, persistence
>Affects Versions: 3.0.0-beta1
> Environment: The 2 or 3 nodes cluster running locally.
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>
> *Steps to reproduce:*
>  # Create zone with replication equals to amount of nodes (2 or 3 
> corresponding)
>  # Create 10 tables inside the zone.
>  # Insert 100 rows in every table.
>  # Await all tables*partitions*nodes local state is "HEALTHY"
>  # Await all tables*partitions*nodes global state is "AVAILABLE"
>  # Kill first node with kill -9.
>  # Assert all tables*partitions*nodes local state is "HEALTHY"
>  # Await all tables*partitions*nodes global state is "READ_ONLY" for 2 nodes 
> cluster or "DEGRADED" for 3 nodes cluster,
>  # Execute select query using JDBC connecting to the second node (which is 
> alive).
> *Expected:*
> Data is returned.
> *Actual:*
> The select query at step 9 freezes forever.
> The errors on the server side:
> {code:java}
> 2024-04-30 00:04:02:965 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-StepDownTimer-8][AbstractClientService]
>  Fail to connect ClusterFailover3NodesTest_cluster_0, exception: 
> java.util.concurrent.TimeoutException.
> 2024-04-30 00:04:02:965 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-StepDownTimer-8][ReplicatorGroupImpl]
>  Fail to check replicator connection to 
> peer=ClusterFailover3NodesTest_cluster_0, replicatorType=Follower.
> 2024-04-30 00:04:02:980 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][AbstractClientService]
>  Fail to connect ClusterFailover3NodesTest_cluster_0, exception: 
> java.util.concurrent.TimeoutException.
> 2024-04-30 00:04:02:980 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][ReplicatorGroupImpl]
>  Fail to check replicator connection to 
> peer=ClusterFailover3NodesTest_cluster_0, replicatorType=Follower.
> 2024-04-30 00:04:02:981 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][NodeImpl]
>  Fail to add a replicator, peer=ClusterFailover3NodesTest_cluster_0.
> 2024-04-30 00:04:02:981 +0200 
> [WARNING][ClusterFailover3NodesTest_cluster_1-client-8][RaftGroupServiceImpl] 
> Recoverable error during the request occurred (will be retried on the 
> randomly selected node) [request=WriteActionRequestImpl [command=[0, 9, 41, 
> -117, -128, -8, -15, -83, -4, -54, -57, 1], 
> deserializedCommand=SafeTimeSyncCommandImpl 
> [safeTimeLong=112356769098760202], groupId=26_part_10], peer=Peer 
> [consistentId=ClusterFailover3NodesTest_cluster_0, idx=0], newPeer=Peer 
> [consistentId=ClusterFailover3NodesTest_cluster_1, idx=0]].
> java.util.concurrent.CompletionException: 
> io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection 
> refused: no further information: /192.168.100.5:3344
>   at 
> java.base/java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:367)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:376)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1074)
>   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
>   at 
> org.apache.ignite.internal.network.netty.NettyUtils.lambda$toCompletableFuture$0(NettyUtils.java:74)
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583)
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559)
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
>   at 
> io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
>   at 
> io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629)
>   at 
> io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118)
>   at 
> io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:326)
>   at 
> io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:342)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776)
>   at 
> 

[jira] [Updated] (IGNITE-22187) Cluster of 2 or 3 nodes doesn't work if one node is down

2024-05-08 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22187:
--
Description: 
*Steps to reproduce:*
 # Create a zone with replication equal to the number of nodes (2 or 3, correspondingly).
 # Create 10 tables inside the zone.
 # Insert 100 rows into every table.
 # Await all tables*partitions*nodes local state is "HEALTHY".
 # Await all tables*partitions*nodes global state is "AVAILABLE".
 # Kill the first node with kill -9.
 # Assert all tables*partitions*nodes local state is "HEALTHY".
 # Await all tables*partitions*nodes global state is "READ_ONLY" for a 2-node cluster or "DEGRADED" for a 3-node cluster.
 # Execute a select query using JDBC, connecting to the second node (which is alive); see the sketch below.
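
A minimal sketch of step 9, assuming a JDBC thin connection to the surviving 
node; the URL, port, and table name are illustrative assumptions:
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/** Hypothetical step-9 check against the node that is still alive. */
public class SelectFromSurvivingNode {
    public static void main(String[] args) throws Exception {
        // The URL points at the surviving (second) node; adjust host and port.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:ignite:thin://localhost:10800");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM table_0")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
{code}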

*Expected:*

Data is returned.

*Actual:*
At step 7, the REST API returns an error:
{code:java}
{"title":"Internal Server 
Error","status":500,"code":"IGN-RECOVERY-3","type":null,"detail":"io.netty.channel.AbstractChannel$AnnotatedConnectException:
 Connection refused: 
/172.120.6.2:3344","node":null,"traceId":"2acb52fc-3275-411b-a4de-45f14873f15c","invalidParams":null}{code}
The server logs show continuous errors:
{code:java}
2024-05-08 10:37:19:796 +0200 
[ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-StepDownTimer-9][AbstractClientService]
 Fail to connect ClusterFailover3NodesTest_cluster_0, exception: 
java.net.ConnectException.
2024-05-08 10:37:19:796 +0200 
[ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-StepDownTimer-9][ReplicatorGroupImpl]
 Fail to check replicator connection to 
peer=ClusterFailover3NodesTest_cluster_0, replicatorType=Follower.
2024-05-08 10:37:19:796 +0200 
[ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-StepDownTimer-12][AbstractClientService]
 Fail to connect ClusterFailover3NodesTest_cluster_0, exception: 
java.net.ConnectException.
2024-05-08 10:37:19:796 +0200 
[ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-StepDownTimer-12][ReplicatorGroupImpl]
 Fail to check replicator connection to 
peer=ClusterFailover3NodesTest_cluster_0, replicatorType=Follower. {code}
If steps 7 and 8 are skipped, the following exception occurs at step 9:
{code:java}
java.sql.SQLException: Unable to send fragment 
[targetNode=ClusterFailover3NodesTest_cluster_0, fragmentId=1, 
cause=io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection 
refused: no further information: /192.168.100.5:3344]
    at 
org.apache.ignite.internal.jdbc.proto.IgniteQueryErrorCode.createJdbcSqlException(IgniteQueryErrorCode.java:57)
    at 
org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:154)
    at 
org.apache.ignite.internal.jdbc.JdbcStatement.executeQuery(JdbcStatement.java:111)
    at 
org.gridgain.ai3tests.tests.teststeps.JdbcSteps.executeQuery(JdbcSteps.java:91)
    at 
org.gridgain.ai3tests.tests.failover.ClusterFailoverTestBase.getActualResult(ClusterFailoverTestBase.java:336)
    at 
org.gridgain.ai3tests.tests.failover.ClusterFailoverTestBase.assertDataIsFilledWithoutErrors(ClusterFailoverTestBase.java:154)
    at 
org.gridgain.ai3tests.tests.failover.ClusterFailover3NodesTest.singleKillAndCheckOtherNodeWorks(ClusterFailover3NodesTest.java:96)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834) {code}

  was:
*Steps to reproduce:*
 # Create zone with replication equals to amount of nodes (2 or 3 corresponding)
 # Create 10 tables inside the zone.
 # Insert 100 rows in every table.
 # Await all tables*partitions*nodes local state is "HEALTHY"
 # Await all tables*partitions*nodes global state is "AVAILABLE"
 # Kill first node with kill -9.
 # Assert all tables*partitions*nodes local state is "HEALTHY"
 # Await all tables*partitions*nodes global state is "READ_ONLY" for 2 nodes 
cluster or "DEGRADED" for 3 nodes cluster,
 # Execute select query using JDBC connecting to the second node (which is 
alive).

*Expected:*

Data is returned.

*Actual:*
The select query at step 9 freezes forever.
The errors on the server side:
{code:java}
2024-04-30 00:04:02:965 +0200 
[ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-StepDownTimer-8][AbstractClientService]
 Fail to connect ClusterFailover3NodesTest_cluster_0, exception: 
java.util.concurrent.TimeoutException.
2024-04-30 00:04:02:965 +0200 
[ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-StepDownTimer-8][ReplicatorGroupImpl]
 Fail to check replicator connection to 
peer=ClusterFailover3NodesTest_cluster_0, replicatorType=Follower.
2024-04-30 00:04:02:980 +0200 
[ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][AbstractClientService]
 Fail to connect 

[jira] [Created] (IGNITE-22187) Cluster of 2 or 3 nodes doesn't work if one node is down

2024-05-08 Thread Igor (Jira)
Igor created IGNITE-22187:
-

 Summary: Cluster of 2 or 3 nodes doesn't work if one node is down
 Key: IGNITE-22187
 URL: https://issues.apache.org/jira/browse/IGNITE-22187
 Project: Ignite
  Issue Type: Bug
  Components: general, jdbc, networking, persistence
Affects Versions: 3.0.0-beta1
 Environment: The 2 or 3 nodes cluster running locally.
Reporter: Igor


*Steps to reproduce:*
 # Create zone with replication equals to amount of nodes (2 or 3 corresponding)
 # Create 10 tables inside the zone.
 # Insert 100 rows in every table.
 # Await all tables*partitions*nodes local state is "HEALTHY"
 # Await all tables*partitions*nodes global state is "AVAILABLE"
 # Kill first node with kill -9.
 # Assert all tables*partitions*nodes local state is "HEALTHY"
 # Await all tables*partitions*nodes global state is "READ_ONLY" for 2 nodes 
cluster or "DEGRADED" for 3 nodes cluster,
 # Execute select query using JDBC connecting to the second node (which is 
alive).

*Expected:*

Data is returned.

*Actual:*
The select query at step 9 freezes forever.
The errors on the server side:
{code:java}
2024-04-30 00:04:02:965 +0200 
[ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-StepDownTimer-8][AbstractClientService]
 Fail to connect ClusterFailover3NodesTest_cluster_0, exception: 
java.util.concurrent.TimeoutException.
2024-04-30 00:04:02:965 +0200 
[ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-StepDownTimer-8][ReplicatorGroupImpl]
 Fail to check replicator connection to 
peer=ClusterFailover3NodesTest_cluster_0, replicatorType=Follower.
2024-04-30 00:04:02:980 +0200 
[ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][AbstractClientService]
 Fail to connect ClusterFailover3NodesTest_cluster_0, exception: 
java.util.concurrent.TimeoutException.
2024-04-30 00:04:02:980 +0200 
[ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][ReplicatorGroupImpl]
 Fail to check replicator connection to 
peer=ClusterFailover3NodesTest_cluster_0, replicatorType=Follower.
2024-04-30 00:04:02:981 +0200 
[ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][NodeImpl]
 Fail to add a replicator, peer=ClusterFailover3NodesTest_cluster_0.
2024-04-30 00:04:02:981 +0200 
[WARNING][ClusterFailover3NodesTest_cluster_1-client-8][RaftGroupServiceImpl] 
Recoverable error during the request occurred (will be retried on the randomly 
selected node) [request=WriteActionRequestImpl [command=[0, 9, 41, -117, -128, 
-8, -15, -83, -4, -54, -57, 1], deserializedCommand=SafeTimeSyncCommandImpl 
[safeTimeLong=112356769098760202], groupId=26_part_10], peer=Peer 
[consistentId=ClusterFailover3NodesTest_cluster_0, idx=0], newPeer=Peer 
[consistentId=ClusterFailover3NodesTest_cluster_1, idx=0]].
java.util.concurrent.CompletionException: 
io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: 
no further information: /192.168.100.5:3344
  at 
java.base/java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:367)
  at 
java.base/java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:376)
  at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1074)
  at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
  at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
  at 
org.apache.ignite.internal.network.netty.NettyUtils.lambda$toCompletableFuture$0(NettyUtils.java:74)
  at 
io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
  at 
io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583)
  at 
io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559)
  at 
io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
  at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
  at 
io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629)
  at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118)
  at 
io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:326)
  at 
io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:342)
  at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776)
  at 
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
  at 
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
  at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
  at 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
  at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
  at 

[jira] [Updated] (IGNITE-22139) JDBC request to degraded cluster freezes

2024-04-29 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22139:
--
Summary: JDBC request to degraded cluster freezes  (was: JDBC request to 
degraded cluster stucks)

> JDBC request to degraded cluster freezes
> 
>
> Key: IGNITE-22139
> URL: https://issues.apache.org/jira/browse/IGNITE-22139
> Project: Ignite
>  Issue Type: Bug
>  Components: general, jdbc, networking, persistence
>Affects Versions: 3.0.0-beta1
> Environment: The 2 or 3 nodes cluster running locally.
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>
> *Steps to reproduce:*
>  # Create zone with replication equals to amount of nodes (2 or 3 
> corresponding)
>  # Create 10 tables inside the zone.
>  # Insert 100 rows in every table.
>  # Await all tables*partitions*nodes local state is "HEALTHY"
>  # Await all tables*partitions*nodes global state is "AVAILABLE"
>  # Kill first node with kill -9.
>  # Assert all tables*partitions*nodes local state is "HEALTHY"
>  # Await all tables*partitions*nodes global state is "READ_ONLY" for 2 nodes 
> cluster or "DEGRADED" for 3 nodes cluster,
>  # Execute select query using JDBC connecting to the second node (which is 
> alive).
> *Expected:*
> Data is returned.
> *Actual:*
> The select query at step 9 stucks forever.
> The errors on the server side:
> {code:java}
> 2024-04-30 00:04:02:965 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-StepDownTimer-8][AbstractClientService]
>  Fail to connect ClusterFailover3NodesTest_cluster_0, exception: 
> java.util.concurrent.TimeoutException.
> 2024-04-30 00:04:02:965 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-StepDownTimer-8][ReplicatorGroupImpl]
>  Fail to check replicator connection to 
> peer=ClusterFailover3NodesTest_cluster_0, replicatorType=Follower.
> 2024-04-30 00:04:02:980 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][AbstractClientService]
>  Fail to connect ClusterFailover3NodesTest_cluster_0, exception: 
> java.util.concurrent.TimeoutException.
> 2024-04-30 00:04:02:980 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][ReplicatorGroupImpl]
>  Fail to check replicator connection to 
> peer=ClusterFailover3NodesTest_cluster_0, replicatorType=Follower.
> 2024-04-30 00:04:02:981 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][NodeImpl]
>  Fail to add a replicator, peer=ClusterFailover3NodesTest_cluster_0.
> 2024-04-30 00:04:02:981 +0200 
> [WARNING][ClusterFailover3NodesTest_cluster_1-client-8][RaftGroupServiceImpl] 
> Recoverable error during the request occurred (will be retried on the 
> randomly selected node) [request=WriteActionRequestImpl [command=[0, 9, 41, 
> -117, -128, -8, -15, -83, -4, -54, -57, 1], 
> deserializedCommand=SafeTimeSyncCommandImpl 
> [safeTimeLong=112356769098760202], groupId=26_part_10], peer=Peer 
> [consistentId=ClusterFailover3NodesTest_cluster_0, idx=0], newPeer=Peer 
> [consistentId=ClusterFailover3NodesTest_cluster_1, idx=0]].
> java.util.concurrent.CompletionException: 
> io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection 
> refused: no further information: /192.168.100.5:3344
>   at 
> java.base/java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:367)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:376)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1074)
>   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
>   at 
> org.apache.ignite.internal.network.netty.NettyUtils.lambda$toCompletableFuture$0(NettyUtils.java:74)
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583)
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559)
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
>   at 
> io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
>   at 
> io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629)
>   at 
> io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118)
>   at 
> io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:326)
>   at 
> io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:342)
>   at 
> 

[jira] [Updated] (IGNITE-22139) JDBC request to degraded cluster freezes

2024-04-29 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22139:
--
Description: 
*Steps to reproduce:*
 # Create zone with replication equals to amount of nodes (2 or 3 corresponding)
 # Create 10 tables inside the zone.
 # Insert 100 rows in every table.
 # Await all tables*partitions*nodes local state is "HEALTHY"
 # Await all tables*partitions*nodes global state is "AVAILABLE"
 # Kill first node with kill -9.
 # Assert all tables*partitions*nodes local state is "HEALTHY"
 # Await all tables*partitions*nodes global state is "READ_ONLY" for 2 nodes 
cluster or "DEGRADED" for 3 nodes cluster,
 # Execute select query using JDBC connecting to the second node (which is 
alive).

*Expected:*

Data is returned.

*Actual:*
The select query at step 9 freezes forever.
The errors on the server side:
{code:java}
2024-04-30 00:04:02:965 +0200 
[ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-StepDownTimer-8][AbstractClientService]
 Fail to connect ClusterFailover3NodesTest_cluster_0, exception: 
java.util.concurrent.TimeoutException.
2024-04-30 00:04:02:965 +0200 
[ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-StepDownTimer-8][ReplicatorGroupImpl]
 Fail to check replicator connection to 
peer=ClusterFailover3NodesTest_cluster_0, replicatorType=Follower.
2024-04-30 00:04:02:980 +0200 
[ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][AbstractClientService]
 Fail to connect ClusterFailover3NodesTest_cluster_0, exception: 
java.util.concurrent.TimeoutException.
2024-04-30 00:04:02:980 +0200 
[ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][ReplicatorGroupImpl]
 Fail to check replicator connection to 
peer=ClusterFailover3NodesTest_cluster_0, replicatorType=Follower.
2024-04-30 00:04:02:981 +0200 
[ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][NodeImpl]
 Fail to add a replicator, peer=ClusterFailover3NodesTest_cluster_0.
2024-04-30 00:04:02:981 +0200 
[WARNING][ClusterFailover3NodesTest_cluster_1-client-8][RaftGroupServiceImpl] 
Recoverable error during the request occurred (will be retried on the randomly 
selected node) [request=WriteActionRequestImpl [command=[0, 9, 41, -117, -128, 
-8, -15, -83, -4, -54, -57, 1], deserializedCommand=SafeTimeSyncCommandImpl 
[safeTimeLong=112356769098760202], groupId=26_part_10], peer=Peer 
[consistentId=ClusterFailover3NodesTest_cluster_0, idx=0], newPeer=Peer 
[consistentId=ClusterFailover3NodesTest_cluster_1, idx=0]].
java.util.concurrent.CompletionException: 
io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: 
no further information: /192.168.100.5:3344
  at 
java.base/java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:367)
  at 
java.base/java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:376)
  at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1074)
  at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
  at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
  at 
org.apache.ignite.internal.network.netty.NettyUtils.lambda$toCompletableFuture$0(NettyUtils.java:74)
  at 
io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
  at 
io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583)
  at 
io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559)
  at 
io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
  at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
  at 
io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629)
  at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118)
  at 
io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:326)
  at 
io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:342)
  at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776)
  at 
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
  at 
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
  at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
  at 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
  at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
  at 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
  at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: 
Connection refused: no further information: /192.168.100.5:3344
Caused by: java.net.ConnectException: 

[jira] [Updated] (IGNITE-22139) JDBC request to degraded cluster freezes forever

2024-04-29 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22139:
--
Summary: JDBC request to degraded cluster freezes forever  (was: JDBC 
request to degraded cluster freezes)

> JDBC request to degraded cluster freezes forever
> 
>
> Key: IGNITE-22139
> URL: https://issues.apache.org/jira/browse/IGNITE-22139
> Project: Ignite
>  Issue Type: Bug
>  Components: general, jdbc, networking, persistence
>Affects Versions: 3.0.0-beta1
> Environment: The 2 or 3 nodes cluster running locally.
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>
> *Steps to reproduce:*
>  # Create zone with replication equals to amount of nodes (2 or 3 
> corresponding)
>  # Create 10 tables inside the zone.
>  # Insert 100 rows in every table.
>  # Await all tables*partitions*nodes local state is "HEALTHY"
>  # Await all tables*partitions*nodes global state is "AVAILABLE"
>  # Kill first node with kill -9.
>  # Assert all tables*partitions*nodes local state is "HEALTHY"
>  # Await all tables*partitions*nodes global state is "READ_ONLY" for 2 nodes 
> cluster or "DEGRADED" for 3 nodes cluster,
>  # Execute select query using JDBC connecting to the second node (which is 
> alive).
> *Expected:*
> Data is returned.
> *Actual:*
> The select query at step 9 freezes forever.
> The errors on the server side:
> {code:java}
> 2024-04-30 00:04:02:965 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-StepDownTimer-8][AbstractClientService]
>  Fail to connect ClusterFailover3NodesTest_cluster_0, exception: 
> java.util.concurrent.TimeoutException.
> 2024-04-30 00:04:02:965 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-StepDownTimer-8][ReplicatorGroupImpl]
>  Fail to check replicator connection to 
> peer=ClusterFailover3NodesTest_cluster_0, replicatorType=Follower.
> 2024-04-30 00:04:02:980 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][AbstractClientService]
>  Fail to connect ClusterFailover3NodesTest_cluster_0, exception: 
> java.util.concurrent.TimeoutException.
> 2024-04-30 00:04:02:980 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][ReplicatorGroupImpl]
>  Fail to check replicator connection to 
> peer=ClusterFailover3NodesTest_cluster_0, replicatorType=Follower.
> 2024-04-30 00:04:02:981 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][NodeImpl]
>  Fail to add a replicator, peer=ClusterFailover3NodesTest_cluster_0.
> 2024-04-30 00:04:02:981 +0200 
> [WARNING][ClusterFailover3NodesTest_cluster_1-client-8][RaftGroupServiceImpl] 
> Recoverable error during the request occurred (will be retried on the 
> randomly selected node) [request=WriteActionRequestImpl [command=[0, 9, 41, 
> -117, -128, -8, -15, -83, -4, -54, -57, 1], 
> deserializedCommand=SafeTimeSyncCommandImpl 
> [safeTimeLong=112356769098760202], groupId=26_part_10], peer=Peer 
> [consistentId=ClusterFailover3NodesTest_cluster_0, idx=0], newPeer=Peer 
> [consistentId=ClusterFailover3NodesTest_cluster_1, idx=0]].
> java.util.concurrent.CompletionException: 
> io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection 
> refused: no further information: /192.168.100.5:3344
>   at 
> java.base/java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:367)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:376)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1074)
>   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
>   at 
> org.apache.ignite.internal.network.netty.NettyUtils.lambda$toCompletableFuture$0(NettyUtils.java:74)
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583)
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559)
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
>   at 
> io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
>   at 
> io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629)
>   at 
> io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118)
>   at 
> io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:326)
>   at 
> io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:342)
>   at 
> 

[jira] [Updated] (IGNITE-22139) JDBC request to degraded cluster stucks

2024-04-29 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22139:
--
Summary: JDBC request to degraded cluster stucks  (was: JDBC request to 
degraded cluster stuck)

> JDBC request to degraded cluster stucks
> ---
>
> Key: IGNITE-22139
> URL: https://issues.apache.org/jira/browse/IGNITE-22139
> Project: Ignite
>  Issue Type: Bug
>  Components: general, jdbc, networking, persistence
>Affects Versions: 3.0.0-beta1
> Environment: The 2 or 3 nodes cluster running locally.
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>
> *Steps to reproduce:*
>  # Create zone with replication equals to amount of nodes (2 or 3 
> corresponding)
>  # Create 10 tables inside the zone.
>  # Insert 100 rows in every table.
>  # Await all tables*partitions*nodes local state is "HEALTHY"
>  # Await all tables*partitions*nodes global state is "AVAILABLE"
>  # Kill first node with kill -9.
>  # Assert all tables*partitions*nodes local state is "HEALTHY"
>  # Await all tables*partitions*nodes global state is "READ_ONLY" for 2 nodes 
> cluster or "DEGRADED" for 3 nodes cluster,
>  # Execute select query using JDBC connecting to the second node (which is 
> alive).
> *Expected:*
> Data is returned.
> *Actual:*
> The select query at step 9 stucks forever.
> The errors on the server side:
> {code:java}
> 2024-04-30 00:04:02:965 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-StepDownTimer-8][AbstractClientService]
>  Fail to connect ClusterFailover3NodesTest_cluster_0, exception: 
> java.util.concurrent.TimeoutException.
> 2024-04-30 00:04:02:965 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-StepDownTimer-8][ReplicatorGroupImpl]
>  Fail to check replicator connection to 
> peer=ClusterFailover3NodesTest_cluster_0, replicatorType=Follower.
> 2024-04-30 00:04:02:980 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][AbstractClientService]
>  Fail to connect ClusterFailover3NodesTest_cluster_0, exception: 
> java.util.concurrent.TimeoutException.
> 2024-04-30 00:04:02:980 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][ReplicatorGroupImpl]
>  Fail to check replicator connection to 
> peer=ClusterFailover3NodesTest_cluster_0, replicatorType=Follower.
> 2024-04-30 00:04:02:981 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][NodeImpl]
>  Fail to add a replicator, peer=ClusterFailover3NodesTest_cluster_0.
> 2024-04-30 00:04:02:981 +0200 
> [WARNING][ClusterFailover3NodesTest_cluster_1-client-8][RaftGroupServiceImpl] 
> Recoverable error during the request occurred (will be retried on the 
> randomly selected node) [request=WriteActionRequestImpl [command=[0, 9, 41, 
> -117, -128, -8, -15, -83, -4, -54, -57, 1], 
> deserializedCommand=SafeTimeSyncCommandImpl 
> [safeTimeLong=112356769098760202], groupId=26_part_10], peer=Peer 
> [consistentId=ClusterFailover3NodesTest_cluster_0, idx=0], newPeer=Peer 
> [consistentId=ClusterFailover3NodesTest_cluster_1, idx=0]].
> java.util.concurrent.CompletionException: 
> io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection 
> refused: no further information: /192.168.100.5:3344
>   at 
> java.base/java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:367)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:376)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1074)
>   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
>   at 
> org.apache.ignite.internal.network.netty.NettyUtils.lambda$toCompletableFuture$0(NettyUtils.java:74)
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583)
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559)
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
>   at 
> io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
>   at 
> io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629)
>   at 
> io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118)
>   at 
> io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:326)
>   at 
> io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:342)
>   at 
> 

[jira] [Created] (IGNITE-22139) JDBC request to degraded cluster stuck

2024-04-29 Thread Igor (Jira)
Igor created IGNITE-22139:
-

 Summary: JDBC request to degraded cluster stuck
 Key: IGNITE-22139
 URL: https://issues.apache.org/jira/browse/IGNITE-22139
 Project: Ignite
  Issue Type: Bug
  Components: general, jdbc, networking, persistence
Affects Versions: 3.0.0-beta1
 Environment: The 2 or 3 nodes cluster running locally.
Reporter: Igor


*Steps to reproduce:*
 # Create zone with replication equals to amount of nodes (2 or 3 corresponding)
 # Create 10 tables inside the zone.
 # Insert 100 rows in every table.
 # Await all tables*partitions*nodes local state is "HEALTHY"
 # Await all tables*partitions*nodes global state is "AVAILABLE"
 # Kill first node with kill -9.
 # Assert all tables*partitions*nodes local state is "HEALTHY"
 # Await all tables*partitions*nodes global state is "READ_ONLY" for 2 nodes 
cluster or "DEGRADED" for 3 nodes cluster,
 # Execute select query using JDBC connecting to the second node (which is 
alive).



*Expected:*

Data is returned.

*Actual:*
The select query at step 9 is stuck forever.
The errors on the server side:
{code:java}
2024-04-30 00:04:02:965 +0200 
[ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-StepDownTimer-8][AbstractClientService]
 Fail to connect ClusterFailover3NodesTest_cluster_0, exception: 
java.util.concurrent.TimeoutException.
2024-04-30 00:04:02:965 +0200 
[ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-StepDownTimer-8][ReplicatorGroupImpl]
 Fail to check replicator connection to 
peer=ClusterFailover3NodesTest_cluster_0, replicatorType=Follower.
2024-04-30 00:04:02:980 +0200 
[ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][AbstractClientService]
 Fail to connect ClusterFailover3NodesTest_cluster_0, exception: 
java.util.concurrent.TimeoutException.
2024-04-30 00:04:02:980 +0200 
[ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][ReplicatorGroupImpl]
 Fail to check replicator connection to 
peer=ClusterFailover3NodesTest_cluster_0, replicatorType=Follower.
2024-04-30 00:04:02:981 +0200 
[ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][NodeImpl]
 Fail to add a replicator, peer=ClusterFailover3NodesTest_cluster_0.
2024-04-30 00:04:02:981 +0200 
[WARNING][ClusterFailover3NodesTest_cluster_1-client-8][RaftGroupServiceImpl] 
Recoverable error during the request occurred (will be retried on the randomly 
selected node) [request=WriteActionRequestImpl [command=[0, 9, 41, -117, -128, 
-8, -15, -83, -4, -54, -57, 1], deserializedCommand=SafeTimeSyncCommandImpl 
[safeTimeLong=112356769098760202], groupId=26_part_10], peer=Peer 
[consistentId=ClusterFailover3NodesTest_cluster_0, idx=0], newPeer=Peer 
[consistentId=ClusterFailover3NodesTest_cluster_1, idx=0]].
java.util.concurrent.CompletionException: 
io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: 
no further information: /192.168.100.5:3344
  at 
java.base/java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:367)
  at 
java.base/java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:376)
  at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1074)
  at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
  at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
  at 
org.apache.ignite.internal.network.netty.NettyUtils.lambda$toCompletableFuture$0(NettyUtils.java:74)
  at 
io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
  at 
io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583)
  at 
io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559)
  at 
io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
  at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
  at 
io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629)
  at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118)
  at 
io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:326)
  at 
io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:342)
  at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776)
  at 
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
  at 
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
  at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
  at 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
  at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
  at 

[jira] [Resolved] (IGNITE-22088) retryPolicy of IgniteClient doesn't work on transaction fail

2024-04-29 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor resolved IGNITE-22088.
---
Resolution: Not A Bug

From the doc:
_Sets the retry policy. When a request fails due to a connection error, and 
multiple server connections are available, Ignite will retry the request if the 
specified policy allows it._

> retryPolicy of IgniteClient doesn't work on transaction fail
> 
>
> Key: IGNITE-22088
> URL: https://issues.apache.org/jira/browse/IGNITE-22088
> Project: Ignite
>  Issue Type: Bug
>  Components: clients, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>
> *Details:*
> IgniteClient does not retry with the configured retryPolicy on a transaction 
> lock. The default retry policy does not work either. Debugging also shows 
> that no code inside `RetryReadPolicy` is executed during the transaction 
> lock exception.
> *Steps to reproduce:*
> Run the next code:
> {code:java}
> AtomicInteger retriesCount = new AtomicInteger(0);
> RetryReadPolicy retry = new RetryReadPolicy() {
> @Override
> public boolean shouldRetry(RetryPolicyContext context) {
> System.out.println("CHECK IF RETRY SHOULD HAPPEN");
> retriesCount.addAndGet(1);
> return super.shouldRetry(context);
> }
> };
> try (IgniteClient igniteClient1 = 
> IgniteClient.builder().retryPolicy(retry).addresses("localhost:10800").build();
> IgniteClient igniteClient2 = 
> IgniteClient.builder().retryPolicy(retry).addresses("localhost:10800").build())
>  {
> igniteClient1.sql().execute(null, "CREATE TABLE teachers(id INTEGER 
> PRIMARY KEY, name VARCHAR(200))");
> Transaction tr1 = igniteClient1.transactions().begin();
> Transaction tr2 = igniteClient2.transactions().begin();
> igniteClient1.sql().execute(tr1, "INSERT INTO TEACHERS (id, name) VALUES 
> (" + 3 + ", '" + "Pavel" + "')");
> SqlException exception = assertThrows(SqlException.class, () -> 
> igniteClient2.sql().execute(tr2, "SELECT * FROM teachers"));
> assertTrue(exception.getMessage().contains("Failed to acquire a lock due 
> to a possible deadlock "));
> }
> assertEquals(16, retriesCount.get()); {code}
> *Expected:*
> Executed without errors.
> *Actual:*
> Fails on the last step expected 16 retries, actual 0.





[jira] [Commented] (IGNITE-22088) retryPolicy of IgniteClient doesn't work on transaction fail

2024-04-29 Thread Igor (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-22088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841970#comment-17841970
 ] 

Igor commented on IGNITE-22088:
---

[~isapego] Sorry, I misread the doc. Then this is not an issue.

> retryPolicy of IgniteClient doesn't work on transaction fail
> 
>
> Key: IGNITE-22088
> URL: https://issues.apache.org/jira/browse/IGNITE-22088
> Project: Ignite
>  Issue Type: Bug
>  Components: clients, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>
> *Details:*
> IgniteClient does not retry with the configured retryPolicy on a transaction 
> lock. The default retry policy does not work either. Debugging also shows 
> that no code inside `RetryReadPolicy` is executed during the transaction 
> lock exception.
> *Steps to reproduce:*
> Run the next code:
> {code:java}
> AtomicInteger retriesCount = new AtomicInteger(0);
> RetryReadPolicy retry = new RetryReadPolicy() {
> @Override
> public boolean shouldRetry(RetryPolicyContext context) {
> System.out.println("CHECK IF RETRY SHOULD HAPPEN");
> retriesCount.addAndGet(1);
> return super.shouldRetry(context);
> }
> };
> try (IgniteClient igniteClient1 = 
> IgniteClient.builder().retryPolicy(retry).addresses("localhost:10800").build();
> IgniteClient igniteClient2 = 
> IgniteClient.builder().retryPolicy(retry).addresses("localhost:10800").build())
>  {
> igniteClient1.sql().execute(null, "CREATE TABLE teachers(id INTEGER 
> PRIMARY KEY, name VARCHAR(200))");
> Transaction tr1 = igniteClient1.transactions().begin();
> Transaction tr2 = igniteClient2.transactions().begin();
> igniteClient1.sql().execute(tr1, "INSERT INTO TEACHERS (id, name) VALUES 
> (" + 3 + ", '" + "Pavel" + "')");
> SqlException exception = assertThrows(SqlException.class, () -> 
> igniteClient2.sql().execute(tr2, "SELECT * FROM teachers"));
> assertTrue(exception.getMessage().contains("Failed to acquire a lock due 
> to a possible deadlock "));
> }
> assertEquals(16, retriesCount.get()); {code}
> *Expected:*
> Executed without errors.
> *Actual:*
> Fails on the last step expected 16 retries, actual 0.





[jira] [Updated] (IGNITE-22117) Node restart fails due to error: marshaller mappings storage is broken

2024-04-26 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22117:
--
  Component/s: (was: persistence)
Affects Version/s: (was: 2.9.1)

> Node restart fails due to error: marshaller mappings storage is broken
> --
>
> Key: IGNITE-22117
> URL: https://issues.apache.org/jira/browse/IGNITE-22117
> Project: Ignite
>  Issue Type: Bug
>Reporter: Igor
>Priority: Major
>  Labels: ignite
>
> *Steps to reproduce:*
> 1. Start a cluster of 3 nodes.
> 2. Create 4 tables with row counts up to 10.
> 3. Continuously update data in the tables.
> 4. During the updates, randomly restart the node.
> *Expected:*
> The node starts successfully.
> *Actual:*
> The following error happens during the node start:
> {code:java}
> [2024-04-24T21:59:10.738+0300][ERROR][main] Exception during start 
> processors, node will be stopped and close connections
> org.apache.ignite.IgniteCheckedException: Failed to start processor: 
> GridProcessorAdapter []
>   at 
> org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1941)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1165) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1787)
>  [ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1709)
>  [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1146) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:637) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.IgniteSpring.start(IgniteSpring.java:65) 
> [ignite-spring-8.9.3.jar:8.9.3]
>   at 
> org.gridgain.poc.framework.starter.IgniteStarter.start(IgniteStarter.java:140)
>  [poc-tester-ignite2-0.5.0-SNAPSHOT.jar:?]
>   at 
> org.gridgain.poc.framework.starter.IgniteStarter.main(IgniteStarter.java:73) 
> [poc-tester-ignite2-0.5.0-SNAPSHOT.jar:?]
> Caused by: org.apache.ignite.IgniteCheckedException: Class name is null for 
> [platformId=0, typeId=-852964974], marshaller mappings storage is broken. 
> Clean up marshaller directory (/marshaller) and restart the node. 
> File name: -852964974.classname0, FileSize: 0
>   at 
> org.apache.ignite.internal.MarshallerMappingFileStore.restoreMappings(MarshallerMappingFileStore.java:218)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.MarshallerContextImpl.onMarshallerProcessorStarted(MarshallerContextImpl.java:536)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.processors.marshaller.GridMarshallerMappingProcessor.start(GridMarshallerMappingProcessor.java:114)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1938)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   ... 8 more
> [2024-04-24T21:59:10.751+0300][ERROR][main] Got exception while starting 
> (will rollback startup routine).
> org.apache.ignite.IgniteCheckedException: Failed to start processor: 
> GridProcessorAdapter []
>   at 
> org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1941)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1165) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1787)
>  [ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1709)
>  [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1146) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:637) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.IgniteSpring.start(IgniteSpring.java:65) 
> [ignite-spring-8.9.3.jar:8.9.3]
>   at 
> org.gridgain.poc.framework.starter.IgniteStarter.start(IgniteStarter.java:140)
>  [poc-tester-ignite2-0.5.0-SNAPSHOT.jar:?]
>   at 
> org.gridgain.poc.framework.starter.IgniteStarter.main(IgniteStarter.java:73) 
> [poc-tester-ignite2-0.5.0-SNAPSHOT.jar:?]
> Caused by: org.apache.ignite.IgniteCheckedException: Class name is null for 
> [platformId=0, typeId=-852964974], marshaller mappings storage is broken. 
> Clean up marshaller directory (/marshaller) and restart the node. 
> File name: -852964974.classname0, FileSize: 0
>   at 
> org.apache.ignite.internal.MarshallerMappingFileStore.restoreMappings(MarshallerMappingFileStore.java:218)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   at 
> 

[jira] [Updated] (IGNITE-22117) Node restart fails due to error: marshaller mappings storage is broken

2024-04-26 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22117:
--
Labels:   (was: ignite)

> Node restart fails due to error: marshaller mappings storage is broken
> --
>
> Key: IGNITE-22117
> URL: https://issues.apache.org/jira/browse/IGNITE-22117
> Project: Ignite
>  Issue Type: Bug
>Reporter: Igor
>Priority: Major
>
> *Steps to reproduce:*
> 1. Start a cluster of 3 nodes.
> 2. Create 4 tables with up to 10 rows each.
> 3. Continuously update data in the tables.
> 4. While the updates are running, randomly restart a node.
> *Expected:*
> The node starts successfully.
> *Actual:*
> The following error happens during node start:
> {code:java}
> [2024-04-24T21:59:10.738+0300][ERROR][main] Exception during start 
> processors, node will be stopped and close connections
> org.apache.ignite.IgniteCheckedException: Failed to start processor: 
> GridProcessorAdapter []
>   at 
> org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1941)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1165) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1787)
>  [ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1709)
>  [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1146) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:637) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.IgniteSpring.start(IgniteSpring.java:65) 
> [ignite-spring-8.9.3.jar:8.9.3]
>   at 
> org.gridgain.poc.framework.starter.IgniteStarter.start(IgniteStarter.java:140)
>  [poc-tester-ignite2-0.5.0-SNAPSHOT.jar:?]
>   at 
> org.gridgain.poc.framework.starter.IgniteStarter.main(IgniteStarter.java:73) 
> [poc-tester-ignite2-0.5.0-SNAPSHOT.jar:?]
> Caused by: org.apache.ignite.IgniteCheckedException: Class name is null for 
> [platformId=0, typeId=-852964974], marshaller mappings storage is broken. 
> Clean up marshaller directory (/marshaller) and restart the node. 
> File name: -852964974.classname0, FileSize: 0
>   at 
> org.apache.ignite.internal.MarshallerMappingFileStore.restoreMappings(MarshallerMappingFileStore.java:218)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.MarshallerContextImpl.onMarshallerProcessorStarted(MarshallerContextImpl.java:536)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.processors.marshaller.GridMarshallerMappingProcessor.start(GridMarshallerMappingProcessor.java:114)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1938)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   ... 8 more
> [2024-04-24T21:59:10.751+0300][ERROR][main] Got exception while starting 
> (will rollback startup routine).
> org.apache.ignite.IgniteCheckedException: Failed to start processor: 
> GridProcessorAdapter []
>   at 
> org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1941)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1165) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1787)
>  [ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1709)
>  [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1146) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:637) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.IgniteSpring.start(IgniteSpring.java:65) 
> [ignite-spring-8.9.3.jar:8.9.3]
>   at 
> org.gridgain.poc.framework.starter.IgniteStarter.start(IgniteStarter.java:140)
>  [poc-tester-ignite2-0.5.0-SNAPSHOT.jar:?]
>   at 
> org.gridgain.poc.framework.starter.IgniteStarter.main(IgniteStarter.java:73) 
> [poc-tester-ignite2-0.5.0-SNAPSHOT.jar:?]
> Caused by: org.apache.ignite.IgniteCheckedException: Class name is null for 
> [platformId=0, typeId=-852964974], marshaller mappings storage is broken. 
> Clean up marshaller directory (/marshaller) and restart the node. 
> File name: -852964974.classname0, FileSize: 0
>   at 
> org.apache.ignite.internal.MarshallerMappingFileStore.restoreMappings(MarshallerMappingFileStore.java:218)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   at 
> 

[jira] [Resolved] (IGNITE-22117) Node restart fails due to error: marshaller mappings storage is broken

2024-04-26 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor resolved IGNITE-22117.
---
Resolution: Abandoned

> Node restart fails due to error: marshaller mappings storage is broken
> --
>
> Key: IGNITE-22117
> URL: https://issues.apache.org/jira/browse/IGNITE-22117
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.9.1
>Reporter: Igor
>Priority: Major
>  Labels: ignite
>
> *Steps to reproduce:*
> 1. Start a cluster of 3 nodes.
> 2. Create 4 tables with up to 10 rows each.
> 3. Continuously update data in the tables.
> 4. While the updates are running, randomly restart a node.
> *Expected:*
> The node starts successfully.
> *Actual:*
> The following error happens during node start:
> {code:java}
> [2024-04-24T21:59:10.738+0300][ERROR][main] Exception during start 
> processors, node will be stopped and close connections
> org.apache.ignite.IgniteCheckedException: Failed to start processor: 
> GridProcessorAdapter []
>   at 
> org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1941)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1165) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1787)
>  [ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1709)
>  [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1146) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:637) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.IgniteSpring.start(IgniteSpring.java:65) 
> [ignite-spring-8.9.3.jar:8.9.3]
>   at 
> org.gridgain.poc.framework.starter.IgniteStarter.start(IgniteStarter.java:140)
>  [poc-tester-ignite2-0.5.0-SNAPSHOT.jar:?]
>   at 
> org.gridgain.poc.framework.starter.IgniteStarter.main(IgniteStarter.java:73) 
> [poc-tester-ignite2-0.5.0-SNAPSHOT.jar:?]
> Caused by: org.apache.ignite.IgniteCheckedException: Class name is null for 
> [platformId=0, typeId=-852964974], marshaller mappings storage is broken. 
> Clean up marshaller directory (/marshaller) and restart the node. 
> File name: -852964974.classname0, FileSize: 0
>   at 
> org.apache.ignite.internal.MarshallerMappingFileStore.restoreMappings(MarshallerMappingFileStore.java:218)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.MarshallerContextImpl.onMarshallerProcessorStarted(MarshallerContextImpl.java:536)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.processors.marshaller.GridMarshallerMappingProcessor.start(GridMarshallerMappingProcessor.java:114)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1938)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   ... 8 more
> [2024-04-24T21:59:10.751+0300][ERROR][main] Got exception while starting 
> (will rollback startup routine).
> org.apache.ignite.IgniteCheckedException: Failed to start processor: 
> GridProcessorAdapter []
>   at 
> org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1941)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1165) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1787)
>  [ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1709)
>  [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1146) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:637) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.IgniteSpring.start(IgniteSpring.java:65) 
> [ignite-spring-8.9.3.jar:8.9.3]
>   at 
> org.gridgain.poc.framework.starter.IgniteStarter.start(IgniteStarter.java:140)
>  [poc-tester-ignite2-0.5.0-SNAPSHOT.jar:?]
>   at 
> org.gridgain.poc.framework.starter.IgniteStarter.main(IgniteStarter.java:73) 
> [poc-tester-ignite2-0.5.0-SNAPSHOT.jar:?]
> Caused by: org.apache.ignite.IgniteCheckedException: Class name is null for 
> [platformId=0, typeId=-852964974], marshaller mappings storage is broken. 
> Clean up marshaller directory (/marshaller) and restart the node. 
> File name: -852964974.classname0, FileSize: 0
>   at 
> org.apache.ignite.internal.MarshallerMappingFileStore.restoreMappings(MarshallerMappingFileStore.java:218)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   

[jira] [Closed] (IGNITE-22117) Node restart fails due to error: marshaller mappings storage is broken

2024-04-26 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor closed IGNITE-22117.
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Node restart fails due to error: marshaller mappings storage is broken
> --
>
> Key: IGNITE-22117
> URL: https://issues.apache.org/jira/browse/IGNITE-22117
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.9.1
>Reporter: Igor
>Priority: Major
>  Labels: ignite
>
> *Steps to reproduce:*
> 1. Start a cluster of 3 nodes.
> 2. Create 4 tables with up to 10 rows each.
> 3. Continuously update data in the tables.
> 4. While the updates are running, randomly restart a node.
> *Expected:*
> The node starts successfully.
> *Actual:*
> The following error happens during node start:
> {code:java}
> [2024-04-24T21:59:10.738+0300][ERROR][main] Exception during start 
> processors, node will be stopped and close connections
> org.apache.ignite.IgniteCheckedException: Failed to start processor: 
> GridProcessorAdapter []
>   at 
> org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1941)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1165) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1787)
>  [ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1709)
>  [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1146) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:637) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.IgniteSpring.start(IgniteSpring.java:65) 
> [ignite-spring-8.9.3.jar:8.9.3]
>   at 
> org.gridgain.poc.framework.starter.IgniteStarter.start(IgniteStarter.java:140)
>  [poc-tester-ignite2-0.5.0-SNAPSHOT.jar:?]
>   at 
> org.gridgain.poc.framework.starter.IgniteStarter.main(IgniteStarter.java:73) 
> [poc-tester-ignite2-0.5.0-SNAPSHOT.jar:?]
> Caused by: org.apache.ignite.IgniteCheckedException: Class name is null for 
> [platformId=0, typeId=-852964974], marshaller mappings storage is broken. 
> Clean up marshaller directory (/marshaller) and restart the node. 
> File name: -852964974.classname0, FileSize: 0
>   at 
> org.apache.ignite.internal.MarshallerMappingFileStore.restoreMappings(MarshallerMappingFileStore.java:218)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.MarshallerContextImpl.onMarshallerProcessorStarted(MarshallerContextImpl.java:536)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.processors.marshaller.GridMarshallerMappingProcessor.start(GridMarshallerMappingProcessor.java:114)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1938)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   ... 8 more
> [2024-04-24T21:59:10.751+0300][ERROR][main] Got exception while starting 
> (will rollback startup routine).
> org.apache.ignite.IgniteCheckedException: Failed to start processor: 
> GridProcessorAdapter []
>   at 
> org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1941)
>  ~[ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1165) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1787)
>  [ignite-core-8.9.3.jar:8.9.3]
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1709)
>  [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1146) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:637) 
> [ignite-core-8.9.3.jar:8.9.3]
>   at org.apache.ignite.IgniteSpring.start(IgniteSpring.java:65) 
> [ignite-spring-8.9.3.jar:8.9.3]
>   at 
> org.gridgain.poc.framework.starter.IgniteStarter.start(IgniteStarter.java:140)
>  [poc-tester-ignite2-0.5.0-SNAPSHOT.jar:?]
>   at 
> org.gridgain.poc.framework.starter.IgniteStarter.main(IgniteStarter.java:73) 
> [poc-tester-ignite2-0.5.0-SNAPSHOT.jar:?]
> Caused by: org.apache.ignite.IgniteCheckedException: Class name is null for 
> [platformId=0, typeId=-852964974], marshaller mappings storage is broken. 
> Clean up marshaller directory (/marshaller) and restart the node. 
> File name: -852964974.classname0, FileSize: 0
>   at 
> org.apache.ignite.internal.MarshallerMappingFileStore.restoreMappings(MarshallerMappingFileStore.java:218)
>  

[jira] [Created] (IGNITE-22117) Node restart fails due to error: marshaller mappings storage is broken

2024-04-26 Thread Igor (Jira)
Igor created IGNITE-22117:
-

 Summary: Node restart fails due to error: marshaller mappings 
storage is broken
 Key: IGNITE-22117
 URL: https://issues.apache.org/jira/browse/IGNITE-22117
 Project: Ignite
  Issue Type: Bug
  Components: persistence
Affects Versions: 2.9.1
Reporter: Igor


*Steps to reproduce:*
1. Start a cluster of 3 nodes.
2. Create 4 tables with up to 10 rows each.
3. Continuously update data in the tables.
4. While the updates are running, randomly restart a node.

*Expected:*
The node starts successfully.

*Actual:*
The following error happens during node start:
{code:java}
[2024-04-24T21:59:10.738+0300][ERROR][main] Exception during start processors, 
node will be stopped and close connections
org.apache.ignite.IgniteCheckedException: Failed to start processor: 
GridProcessorAdapter []
at 
org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1941) 
~[ignite-core-8.9.3.jar:8.9.3]
at 
org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1165) 
[ignite-core-8.9.3.jar:8.9.3]
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1787)
 [ignite-core-8.9.3.jar:8.9.3]
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1709)
 [ignite-core-8.9.3.jar:8.9.3]
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1146) 
[ignite-core-8.9.3.jar:8.9.3]
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:637) 
[ignite-core-8.9.3.jar:8.9.3]
at org.apache.ignite.IgniteSpring.start(IgniteSpring.java:65) 
[ignite-spring-8.9.3.jar:8.9.3]
at 
org.gridgain.poc.framework.starter.IgniteStarter.start(IgniteStarter.java:140) 
[poc-tester-ignite2-0.5.0-SNAPSHOT.jar:?]
at 
org.gridgain.poc.framework.starter.IgniteStarter.main(IgniteStarter.java:73) 
[poc-tester-ignite2-0.5.0-SNAPSHOT.jar:?]
Caused by: org.apache.ignite.IgniteCheckedException: Class name is null for 
[platformId=0, typeId=-852964974], marshaller mappings storage is broken. Clean 
up marshaller directory (/marshaller) and restart the node. File 
name: -852964974.classname0, FileSize: 0
at 
org.apache.ignite.internal.MarshallerMappingFileStore.restoreMappings(MarshallerMappingFileStore.java:218)
 ~[ignite-core-8.9.3.jar:8.9.3]
at 
org.apache.ignite.internal.MarshallerContextImpl.onMarshallerProcessorStarted(MarshallerContextImpl.java:536)
 ~[ignite-core-8.9.3.jar:8.9.3]
at 
org.apache.ignite.internal.processors.marshaller.GridMarshallerMappingProcessor.start(GridMarshallerMappingProcessor.java:114)
 ~[ignite-core-8.9.3.jar:8.9.3]
at 
org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1938) 
~[ignite-core-8.9.3.jar:8.9.3]
... 8 more
[2024-04-24T21:59:10.751+0300][ERROR][main] Got exception while starting (will 
rollback startup routine).
org.apache.ignite.IgniteCheckedException: Failed to start processor: 
GridProcessorAdapter []
at 
org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1941) 
~[ignite-core-8.9.3.jar:8.9.3]
at 
org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1165) 
[ignite-core-8.9.3.jar:8.9.3]
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1787)
 [ignite-core-8.9.3.jar:8.9.3]
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1709)
 [ignite-core-8.9.3.jar:8.9.3]
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1146) 
[ignite-core-8.9.3.jar:8.9.3]
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:637) 
[ignite-core-8.9.3.jar:8.9.3]
at org.apache.ignite.IgniteSpring.start(IgniteSpring.java:65) 
[ignite-spring-8.9.3.jar:8.9.3]
at 
org.gridgain.poc.framework.starter.IgniteStarter.start(IgniteStarter.java:140) 
[poc-tester-ignite2-0.5.0-SNAPSHOT.jar:?]
at 
org.gridgain.poc.framework.starter.IgniteStarter.main(IgniteStarter.java:73) 
[poc-tester-ignite2-0.5.0-SNAPSHOT.jar:?]
Caused by: org.apache.ignite.IgniteCheckedException: Class name is null for 
[platformId=0, typeId=-852964974], marshaller mappings storage is broken. Clean 
up marshaller directory (/marshaller) and restart the node. File 
name: -852964974.classname0, FileSize: 0
at 
org.apache.ignite.internal.MarshallerMappingFileStore.restoreMappings(MarshallerMappingFileStore.java:218)
 ~[ignite-core-8.9.3.jar:8.9.3]
at 
org.apache.ignite.internal.MarshallerContextImpl.onMarshallerProcessorStarted(MarshallerContextImpl.java:536)
 ~[ignite-core-8.9.3.jar:8.9.3]
at 
org.apache.ignite.internal.processors.marshaller.GridMarshallerMappingProcessor.start(GridMarshallerMappingProcessor.java:114)
 ~[ignite-core-8.9.3.jar:8.9.3]
at 
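
// ---------------------------------------------------------------------------
// Illustrative aside, not part of the original report: a stand-alone sketch
// that scans the marshaller directory for the zero-length mapping files the
// exception above complains about ("FileSize: 0") and deletes them before a
// restart. The work-directory argument and the ".classname" file-name pattern
// are assumptions taken from the error message, not a documented Ignite API.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class MarshallerDirCheck {
    public static void main(String[] args) throws IOException {
        // Assumption: the Ignite work directory is passed as the first argument.
        Path marshallerDir = Paths.get(args[0], "marshaller");

        try (Stream<Path> files = Files.list(marshallerDir)) {
            files.filter(p -> p.getFileName().toString().contains(".classname"))
                 .filter(MarshallerDirCheck::isEmptyFile)
                 .forEach(MarshallerDirCheck::deleteBrokenMapping);
        }
    }

    // A zero-length mapping file is exactly what the exception reports as broken.
    private static boolean isEmptyFile(Path p) {
        try {
            return Files.size(p) == 0;
        } catch (IOException e) {
            return false;
        }
    }

    private static void deleteBrokenMapping(Path p) {
        System.out.println("Removing broken mapping file: " + p);
        try {
            Files.delete(p);
        } catch (IOException e) {
            System.err.println("Failed to delete " + p + ": " + e);
        }
    }
}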

[jira] [Updated] (IGNITE-22088) retryPolicy of IgniteClient doesn't work on transaction fail

2024-04-22 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22088:
--
Description: 
*Details:*
IgniteClient does not retry with the configured retryPolicy when a transaction 
lock fails. The default retry policy doesn't work either. Debugging shows that 
no code inside `RetryReadPolicy` is executed when the transaction lock exception 
is thrown.

*Steps to reproduce:*

Run the following code:
{code:java}
AtomicInteger retriesCount = new AtomicInteger(0);

RetryReadPolicy retry = new RetryReadPolicy() {
@Override
public boolean shouldRetry(RetryPolicyContext context) {
System.out.println("CHECK IF RETRY SHOULD HAPPEN");
retriesCount.addAndGet(1);
return super.shouldRetry(context);
}
};

try (IgniteClient igniteClient1 = 
IgniteClient.builder().retryPolicy(retry).addresses("localhost:10800").build();
IgniteClient igniteClient2 = 
IgniteClient.builder().retryPolicy(retry).addresses("localhost:10800").build()) 
{

igniteClient1.sql().execute(null, "CREATE TABLE teachers(id INTEGER PRIMARY 
KEY, name VARCHAR(200))");

Transaction tr1 = igniteClient1.transactions().begin();
Transaction tr2 = igniteClient2.transactions().begin();

igniteClient1.sql().execute(tr1, "INSERT INTO TEACHERS (id, name) VALUES (" 
+ 3 + ", '" + "Pavel" + "')");

SqlException exception = assertThrows(SqlException.class, () -> 
igniteClient2.sql().execute(tr2, "SELECT * FROM teachers"));

assertTrue(exception.getMessage().contains("Failed to acquire a lock due to 
a possible deadlock "));
}

assertEquals(16, retriesCount.get()); {code}
*Expected:*
Executed without errors.

*Actual:*
Fails on the last step: expected 16 retries, actual 0.
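
As an illustrative aside (not part of the original report), here is a minimal manual-retry sketch that reuses `igniteClient2`, `tr2` and `SqlException` from the snippet above; it only approximates the behaviour the configured `retryPolicy` is expected to provide, and the attempt count is an assumption mirroring the test's expectation.
{code:java}
// Hypothetical manual-retry workaround: retry the read a bounded number of
// times while the client-side retryPolicy is not being invoked.
int maxAttempts = 16; // assumption: mirrors the retry count the test expects
SqlException lastError = null;
for (int attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
        igniteClient2.sql().execute(tr2, "SELECT * FROM teachers");
        lastError = null;
        break; // the read succeeded, stop retrying
    } catch (SqlException e) {
        lastError = e; // remember the failure and try again
    }
}
if (lastError != null) {
    throw new RuntimeException("All " + maxAttempts + " attempts failed", lastError);
}
{code}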

  was:
*Details:*
IgniteClient does not retry with the configured retryPolicy when a transaction 
lock fails. The default retry policy doesn't work either. Debugging shows that 
no code inside `RetryReadPolicy` is executed when the transaction lock exception 
is thrown.


*Steps to reproduce:*

Run the following code:
{code:java}
AtomicInteger retriesCount = new AtomicInteger(0);

RetryReadPolicy retry = new RetryReadPolicy() {
@Override
public boolean shouldRetry(RetryPolicyContext context) {
System.out.println("CHECK IF RETRY SHOULD HAPPEN");
retriesCount.addAndGet(1);
return super.shouldRetry(context);
}
};

try (IgniteClient igniteClient1 = 
IgniteClient.builder().retryPolicy(retry).addresses("localhost:10800").build();
IgniteClient igniteClient2 = 
IgniteClient.builder().addresses("localhost:10800").build()) {

igniteClient1.sql().execute(null, "CREATE TABLE teachers(id INTEGER PRIMARY 
KEY, name VARCHAR(200))");

Transaction tr1 = igniteClient1.transactions().begin();
Transaction tr2 = igniteClient2.transactions().begin();

igniteClient1.sql().execute(tr1, "INSERT INTO TEACHERS (id, name) VALUES (" 
+ 3 + ", '" + "Pavel" + "')");

SqlException exception = assertThrows(SqlException.class, () -> 
igniteClient2.sql().execute(tr2, "SELECT * FROM teachers"));

assertTrue(exception.getMessage().contains("Failed to acquire a lock due to 
a possible deadlock "));
}

assertEquals(16, retriesCount.get()); {code}
*Expected:*
Executed without errors.

*Actual:*
Fails on the last step: expected 16 retries, actual 0.


> retryPolicy of IgniteClient doesn't work on transaction fail
> 
>
> Key: IGNITE-22088
> URL: https://issues.apache.org/jira/browse/IGNITE-22088
> Project: Ignite
>  Issue Type: Bug
>  Components: clients, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>
> *Details:*
> IgniteClient does not retry with the configured retryPolicy when a transaction 
> lock fails. The default retry policy doesn't work either. Debugging shows that 
> no code inside `RetryReadPolicy` is executed when the transaction lock 
> exception is thrown.
> *Steps to reproduce:*
> Run the following code:
> {code:java}
> AtomicInteger retriesCount = new AtomicInteger(0);
> RetryReadPolicy retry = new RetryReadPolicy() {
> @Override
> public boolean shouldRetry(RetryPolicyContext context) {
> System.out.println("CHECK IF RETRY SHOULD HAPPEN");
> retriesCount.addAndGet(1);
> return super.shouldRetry(context);
> }
> };
> try (IgniteClient igniteClient1 = 
> IgniteClient.builder().retryPolicy(retry).addresses("localhost:10800").build();
> IgniteClient igniteClient2 = 
> IgniteClient.builder().retryPolicy(retry).addresses("localhost:10800").build())
>  {
> igniteClient1.sql().execute(null, "CREATE TABLE teachers(id INTEGER 
> PRIMARY KEY, name VARCHAR(200))");
> Transaction tr1 = igniteClient1.transactions().begin();
> Transaction tr2 = igniteClient2.transactions().begin();
> 

[jira] [Created] (IGNITE-22088) retryPolicy of IgniteClient doesn't work on transaction fail

2024-04-22 Thread Igor (Jira)
Igor created IGNITE-22088:
-

 Summary: retryPolicy of IgniteClient doesn't work on transaction 
fail
 Key: IGNITE-22088
 URL: https://issues.apache.org/jira/browse/IGNITE-22088
 Project: Ignite
  Issue Type: Bug
  Components: clients, thin client
Affects Versions: 3.0.0-beta1
Reporter: Igor


*Details:*
IgniteClient does not retry with the configured retryPolicy when a transaction 
lock fails. The default retry policy doesn't work either. Debugging shows that 
no code inside `RetryReadPolicy` is executed when the transaction lock exception 
is thrown.


*Steps to reproduce:*

Run the following code:
{code:java}
AtomicInteger retriesCount = new AtomicInteger(0);

RetryReadPolicy retry = new RetryReadPolicy() {
@Override
public boolean shouldRetry(RetryPolicyContext context) {
System.out.println("CHECK IF RETRY SHOULD HAPPEN");
retriesCount.addAndGet(1);
return super.shouldRetry(context);
}
};

try (IgniteClient igniteClient1 = 
IgniteClient.builder().retryPolicy(retry).addresses("localhost:10800").build();
IgniteClient igniteClient2 = 
IgniteClient.builder().addresses("localhost:10800").build()) {

igniteClient1.sql().execute(null, "CREATE TABLE teachers(id INTEGER PRIMARY 
KEY, name VARCHAR(200))");

Transaction tr1 = igniteClient1.transactions().begin();
Transaction tr2 = igniteClient2.transactions().begin();

igniteClient1.sql().execute(tr1, "INSERT INTO TEACHERS (id, name) VALUES (" 
+ 3 + ", '" + "Pavel" + "')");

SqlException exception = assertThrows(SqlException.class, () -> 
igniteClient2.sql().execute(tr2, "SELECT * FROM teachers"));

assertTrue(exception.getMessage().contains("Failed to acquire a lock due to 
a possible deadlock "));
}

assertEquals(16, retriesCount.get()); {code}
*Expected:*
Executed without errors.

*Actual:*
Fails on the last step: expected 16 retries, actual 0.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22011) aimem: repeat of create table and drop column leads to Failed to get the primary replica

2024-04-09 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22011:
--
Description: 
*Comment:*
This is a flaky issue and can happen on any operation against a table with 
aimem persistence if the cluster lives long enough.
h3. Steps to reproduce:

Run the following queries using *IgniteSql* in a cycle of 50 repeats over a 
single connection (a loop sketch follows the query list):
{code:java}
create zone if not exists "AIMEM" engine aimem
create table selectFromDropMultipleJdbc(k1 INTEGER not null, k2 INTEGER not 
null, v1 VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, 
k2)) with PRIMARY_ZONE='AIMEM'
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3366, 3367, 
null, null, '1980-02-27 01:01:49.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3367, 3368, 
'1v1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_',
 
'1v2_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1',
 '1980-02-28 01:01:50.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3368, 3369, 
'2v1_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_',
 
'2v2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2',
 '1980-02-29 01:01:51.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3369, 3370, 
'3v1_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_',
 
'3v2_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3',
 '1980-03-01 01:01:52.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3370, 3371, 
null, null, '1980-03-02 01:01:53.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3371, 3372, 
'5v1_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_',
 
'5v2_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5',
 '1980-03-03 01:01:54.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3372, 3373, 
'6v1_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_',
 
'6v2_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6',
 '1980-03-04 01:01:55.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3373, 3374, 
'7v1_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_',
 
'7v2_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7',
 '1980-03-05 01:01:56.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3374, 3375, 
null, null, '1980-03-06 01:01:57.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3375, 3376, 
'9v1_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_',
 
'9v2_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9',
 '1980-03-07 01:01:58.0')
select * from selectFromDropMultipleJdbc
drop table selectFromDropMultipleJdbc {code}
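For illustration, a minimal loop sketch of the reproduction harness (an assumption, not part of the report); it reuses the IgniteClient builder and SQL calls quoted elsewhere in this thread, shortens the statement list, and assumes the default localhost:10800 address. Package names follow the Ignite 3 client API as quoted in this thread and may need adjusting.
{code:java}
import java.util.List;
import org.apache.ignite.client.IgniteClient;

// Hypothetical reproduction harness: run a shortened version of the statement
// list above 50 times over a single client connection, as the steps describe.
public class AimemRepeatRunner {
    public static void main(String[] args) throws Exception {
        List<String> statements = List.of(
                "create zone if not exists \"AIMEM\" engine aimem",
                "create table selectFromDropMultipleJdbc(k1 INTEGER not null, k2 INTEGER not null, "
                        + "v1 VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, "
                        + "primary key (k1, k2)) with PRIMARY_ZONE='AIMEM'",
                "insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) "
                        + "values (3366, 3367, null, null, '1980-02-27 01:01:49.0')",
                "select * from selectFromDropMultipleJdbc",
                "drop table selectFromDropMultipleJdbc");

        try (IgniteClient client = IgniteClient.builder()
                .addresses("localhost:10800") // assumption: default client port used in this thread
                .build()) {
            for (int repeat = 1; repeat <= 50; repeat++) {
                for (String sql : statements) {
                    client.sql().execute(null, sql); // null = no explicit transaction
                }
            }
        }
    }
}
{code}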
*Expected:*

All queries are executed.
h3. Actual:

On a random repeat, the client throws the following exception:
{code:java}
org.apache.ignite.sql.SqlException: IGN-PLACEMENTDRIVER-1 
TraceId:16e895ba-34d2-4aac-aeb5-4718a116a97d Failed to get the primary replica 
[tablePartitionId=18_part_22, awaitTimestamp=HybridTimestamp 
[physical=2024-04-09 10:38:37:478 +0200, logical=53, 
composite=112240356063838261]]
    at 

[jira] [Updated] (IGNITE-22009) aimem: repeat of create table and drop column leads to freeze of client

2024-04-09 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22009:
--
Description: 
h3. Steps to reproduce:

Run the following queries using *JDBC* in a cycle of 50 repeats over a single connection (a loop sketch follows the query list):
{code:java}
create zone if not exists "AIMEM" engine aimem
create table selectFromDropMultipleJdbc(k1 INTEGER not null, k2 INTEGER not 
null, v1 VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, 
k2)) with PRIMARY_ZONE='AIMEM'
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3366, 3367, 
null, null, '1980-02-27 01:01:49.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3367, 3368, 
'1v1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_',
 
'1v2_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1',
 '1980-02-28 01:01:50.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3368, 3369, 
'2v1_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_',
 
'2v2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2',
 '1980-02-29 01:01:51.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3369, 3370, 
'3v1_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_',
 
'3v2_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3',
 '1980-03-01 01:01:52.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3370, 3371, 
null, null, '1980-03-02 01:01:53.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3371, 3372, 
'5v1_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_',
 
'5v2_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5',
 '1980-03-03 01:01:54.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3372, 3373, 
'6v1_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_',
 
'6v2_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6',
 '1980-03-04 01:01:55.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3373, 3374, 
'7v1_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_',
 
'7v2_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7',
 '1980-03-05 01:01:56.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3374, 3375, 
null, null, '1980-03-06 01:01:57.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3375, 3376, 
'9v1_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_',
 
'9v2_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9',
 '1980-03-07 01:01:58.0')
select * from selectFromDropMultipleJdbc
drop table selectFromDropMultipleJdbc {code}
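For illustration, a minimal JDBC loop sketch of the reproduction harness (an assumption, not part of the report); the JDBC URL is assumed from the localhost:10800 address used elsewhere in this thread, the statement list is shortened, and the query timeout is only useful if the driver honours it.
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.List;

// Hypothetical JDBC reproduction harness: run a shortened version of the
// statement list above 50 times over a single connection.
public class JdbcRepeatRunner {
    public static void main(String[] args) throws Exception {
        List<String> statements = List.of(
                "create zone if not exists \"AIMEM\" engine aimem",
                "create table selectFromDropMultipleJdbc(k1 INTEGER not null, k2 INTEGER not null, "
                        + "v1 VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, "
                        + "primary key (k1, k2)) with PRIMARY_ZONE='AIMEM'",
                "insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) "
                        + "values (3366, 3367, null, null, '1980-02-27 01:01:49.0')",
                "select * from selectFromDropMultipleJdbc",
                "drop table selectFromDropMultipleJdbc");

        // Assumption: thin-driver style JDBC URL pointing at the node used in this thread.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://localhost:10800");
             Statement stmt = conn.createStatement()) {
            // Bound each statement so a hang surfaces as an exception instead of an
            // indefinite freeze (assuming the driver honours setQueryTimeout).
            stmt.setQueryTimeout(30);
            for (int repeat = 1; repeat <= 50; repeat++) {
                for (String sql : statements) {
                    stmt.execute(sql); // execute() covers DDL, DML and queries alike
                }
            }
        }
    }
}
{code}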
*Expected:*

All queries are executed.
h3. Actual:

On repeat 6 the client freezes indefinitely.
The error in the server log:
{code:java}
2024-04-09 09:27:27:955 +0200 
[ERROR][%DropTableMultipleTriesJdbcTest_cluster_0%JRaft-AppendEntries-Processor-0][AbstractClientService]
 Fail to run RpcResponseClosure, the request is AppendEntriesRequestImpl 
[committedIndex=183, 
data=org.apache.ignite.raft.jraft.util.ByteString@d86b8023, 
entriesList=ArrayList [EntryMetaImpl [checksum=0, dataLen=21, 
hasChecksum=false, learnersList=null, oldLearnersList=null, oldPeersList=null, 
peersList=null, term=1, 

[jira] [Updated] (IGNITE-22011) aimem: repeat of create table and drop column leads to Failed to get the primary replica

2024-04-09 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22011:
--
Environment: 2-node cluster running on a remote machine or locally

> aimem: repeat of create table and drop column leads to Failed to get the 
> primary replica
> 
>
> Key: IGNITE-22011
> URL: https://issues.apache.org/jira/browse/IGNITE-22011
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence, thin client
>Affects Versions: 3.0.0-beta1
> Environment: 2-node cluster running on a remote machine or locally
>Reporter: Igor
>Priority: Blocker
>  Labels: ignite-3
>
> *Comment:*
> This is a flaky issue and can happen on any operation against a table with 
> aimem persistence if the cluster lives long enough.
> h3. Steps to reproduce:
> Run the following queries using *IgniteSql* in a cycle of 50 repeats over a 
> single connection:
> {code:java}
> create zone if not exists "AIMEM" engine aimem
> create table selectFromDropMultipleJdbc(k1 INTEGER not null, k2 INTEGER not 
> null, v1 VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key 
> (k1, k2)) with PRIMARY_ZONE='AIMEM'
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3366, 
> 3367, null, null, '1980-02-27 01:01:49.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3367, 
> 3368, 
> '1v1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_',
>  
> '1v2_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1',
>  '1980-02-28 01:01:50.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3368, 
> 3369, 
> '2v1_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_',
>  
> '2v2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2',
>  '1980-02-29 01:01:51.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3369, 
> 3370, 
> '3v1_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_',
>  
> '3v2_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3',
>  '1980-03-01 01:01:52.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3370, 
> 3371, null, null, '1980-03-02 01:01:53.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3371, 
> 3372, 
> '5v1_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_',
>  
> '5v2_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5',
>  '1980-03-03 01:01:54.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3372, 
> 3373, 
> '6v1_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_',
>  
> '6v2_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6',
>  '1980-03-04 01:01:55.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3373, 
> 3374, 
> '7v1_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_',
>  
> '7v2_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7',
>  '1980-03-05 01:01:56.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3374, 
> 3375, null, null, '1980-03-06 01:01:57.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3375, 
> 3376, 
> '9v1_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_',
>  
> 

[jira] [Updated] (IGNITE-22009) aimem: repeat of create table and drop column leads to freeze of client

2024-04-09 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22009:
--
Description: 
h3. Steps to reproduce:

Run the following queries using *JDBC* in a cycle of 50 repeats over a single connection:
{code:java}
create zone if not exists "AIMEM" engine aimem
create table selectFromDropMultipleJdbc(k1 INTEGER not null, k2 INTEGER not 
null, v1 VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, 
k2)) with PRIMARY_ZONE='AIMEM'
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3366, 3367, 
null, null, '1980-02-27 01:01:49.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3367, 3368, 
'1v1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_',
 
'1v2_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1',
 '1980-02-28 01:01:50.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3368, 3369, 
'2v1_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_',
 
'2v2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2',
 '1980-02-29 01:01:51.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3369, 3370, 
'3v1_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_',
 
'3v2_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3',
 '1980-03-01 01:01:52.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3370, 3371, 
null, null, '1980-03-02 01:01:53.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3371, 3372, 
'5v1_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_',
 
'5v2_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5',
 '1980-03-03 01:01:54.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3372, 3373, 
'6v1_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_',
 
'6v2_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6',
 '1980-03-04 01:01:55.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3373, 3374, 
'7v1_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_',
 
'7v2_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7',
 '1980-03-05 01:01:56.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3374, 3375, 
null, null, '1980-03-06 01:01:57.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3375, 3376, 
'9v1_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_',
 
'9v2_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9',
 '1980-03-07 01:01:58.0')
select * from selectFromDropMultipleJdbc
drop table selectFromDropMultipleJdbc {code}
*Expected:*

All queries are executed.
h3. Actual:

On repeat 6 the client freezes indefinitely.
The error in the server log:
{code:java}
2024-04-09 09:27:27:955 +0200 
[ERROR][%DropTableMultipleTriesJdbcTest_cluster_0%JRaft-AppendEntries-Processor-0][AbstractClientService]
 Fail to run RpcResponseClosure, the request is AppendEntriesRequestImpl 
[committedIndex=183, 
data=org.apache.ignite.raft.jraft.util.ByteString@d86b8023, 
entriesList=ArrayList [EntryMetaImpl [checksum=0, dataLen=21, 
hasChecksum=false, learnersList=null, oldLearnersList=null, oldPeersList=null, 
peersList=null, term=1, 

[jira] [Created] (IGNITE-22011) aimem: repeat of create table and drop column leads to Failed to get the primary replica

2024-04-09 Thread Igor (Jira)
Igor created IGNITE-22011:
-

 Summary: aimem: repeat of create table and drop column leads to 
Failed to get the primary replica
 Key: IGNITE-22011
 URL: https://issues.apache.org/jira/browse/IGNITE-22011
 Project: Ignite
  Issue Type: Bug
  Components: persistence, thin client
Affects Versions: 3.0.0-beta1
Reporter: Igor


*Comment:*
This is a flaky issue and can happen on any operation against a table with 
aimem persistence if the cluster lives long enough.
h3. Steps to reproduce:

Run the following queries using *IgniteSql* in a cycle of 50 repeats over a 
single connection:
{code:java}
create zone if not exists "AIMEM" engine aimem
create table selectFromDropMultipleJdbc(k1 INTEGER not null, k2 INTEGER not 
null, v1 VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, 
k2)) with PRIMARY_ZONE='AIMEM'
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3366, 3367, 
null, null, '1980-02-27 01:01:49.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3367, 3368, 
'1v1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_',
 
'1v2_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1',
 '1980-02-28 01:01:50.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3368, 3369, 
'2v1_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_',
 
'2v2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2',
 '1980-02-29 01:01:51.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3369, 3370, 
'3v1_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_',
 
'3v2_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3',
 '1980-03-01 01:01:52.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3370, 3371, 
null, null, '1980-03-02 01:01:53.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3371, 3372, 
'5v1_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_',
 
'5v2_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5',
 '1980-03-03 01:01:54.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3372, 3373, 
'6v1_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_',
 
'6v2_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6',
 '1980-03-04 01:01:55.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3373, 3374, 
'7v1_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_',
 
'7v2_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7',
 '1980-03-05 01:01:56.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3374, 3375, 
null, null, '1980-03-06 01:01:57.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3375, 3376, 
'9v1_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_',
 
'9v2_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9',
 '1980-03-07 01:01:58.0')
select * from selectFromDropMultipleJdbc
drop table selectFromDropMultipleJdbc {code}
*Expected:*

All queries are executed.
h3. Actual:

On a random repeat, the client throws the following exception:
{code:java}
org.apache.ignite.sql.SqlException: IGN-PLACEMENTDRIVER-1 
TraceId:16e895ba-34d2-4aac-aeb5-4718a116a97d Failed to get 

[jira] [Updated] (IGNITE-22009) aimem: repeat of create table and drop column leads to freeze of client

2024-04-09 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22009:
--
Environment: 2-node cluster running on a remote machine or locally  (was: 1 
node cluster running on remote machine or locally)

> aimem: repeat of create table and drop column leads to freeze of client
> ---
>
> Key: IGNITE-22009
> URL: https://issues.apache.org/jira/browse/IGNITE-22009
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc, persistence
>Affects Versions: 3.0.0-beta1
> Environment: 2-node cluster running on a remote machine or locally
>Reporter: Igor
>Priority: Blocker
>  Labels: ignite-3
>
> h3. Steps to reproduce:
> Run the following queries in a cycle of 10 repeats over a single connection:
> {code:java}
> create zone if not exists "AIMEM" engine aimem
> create table selectFromDropMultipleJdbc(k1 INTEGER not null, k2 INTEGER not 
> null, v1 VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key 
> (k1, k2)) with PRIMARY_ZONE='AIMEM'
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3366, 
> 3367, null, null, '1980-02-27 01:01:49.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3367, 
> 3368, 
> '1v1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_',
>  
> '1v2_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1',
>  '1980-02-28 01:01:50.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3368, 
> 3369, 
> '2v1_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_',
>  
> '2v2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2',
>  '1980-02-29 01:01:51.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3369, 
> 3370, 
> '3v1_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_',
>  
> '3v2_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3',
>  '1980-03-01 01:01:52.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3370, 
> 3371, null, null, '1980-03-02 01:01:53.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3371, 
> 3372, 
> '5v1_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_',
>  
> '5v2_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5',
>  '1980-03-03 01:01:54.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3372, 
> 3373, 
> '6v1_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_',
>  
> '6v2_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6',
>  '1980-03-04 01:01:55.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3373, 
> 3374, 
> '7v1_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_',
>  
> '7v2_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7',
>  '1980-03-05 01:01:56.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3374, 
> 3375, null, null, '1980-03-06 01:01:57.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3375, 
> 3376, 
> '9v1_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_',
>  
> 

[jira] [Updated] (IGNITE-22009) aimem: repeat of create table and drop column leads to freeze of client

2024-04-09 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22009:
--
Description: 
h3. Steps to reproduce:

Run the following queries using *JDBC* in a cycle of 10 repeats on a single connection (a minimal sketch of the loop is shown after the query list):
{code:java}
create zone if not exists "AIMEM" engine aimem
create table selectFromDropMultipleJdbc(k1 INTEGER not null, k2 INTEGER not 
null, v1 VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, 
k2)) with PRIMARY_ZONE='AIMEM'
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3366, 3367, 
null, null, '1980-02-27 01:01:49.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3367, 3368, 
'1v1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_',
 
'1v2_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1',
 '1980-02-28 01:01:50.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3368, 3369, 
'2v1_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_',
 
'2v2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2',
 '1980-02-29 01:01:51.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3369, 3370, 
'3v1_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_',
 
'3v2_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3',
 '1980-03-01 01:01:52.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3370, 3371, 
null, null, '1980-03-02 01:01:53.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3371, 3372, 
'5v1_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_',
 
'5v2_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5',
 '1980-03-03 01:01:54.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3372, 3373, 
'6v1_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_',
 
'6v2_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6',
 '1980-03-04 01:01:55.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3373, 3374, 
'7v1_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_',
 
'7v2_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7',
 '1980-03-05 01:01:56.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3374, 3375, 
null, null, '1980-03-06 01:01:57.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3375, 3376, 
'9v1_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_',
 
'9v2_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9',
 '1980-03-07 01:01:58.0')
select * from selectFromDropMultipleJdbc
drop table selectFromDropMultipleJdbc {code}
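For reference, a minimal JDBC sketch of the loop described above, assuming the thin JDBC URL jdbc:ignite:thin://localhost:10800 and trimming the insert list; the URL, class name, and loop structure are illustrative and not taken from the original test:
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SelectFromDropRepro {
    public static void main(String[] args) throws Exception {
        // One connection reused for all repeats, as in the scenario above.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://localhost:10800");
             Statement stmt = conn.createStatement()) {
            for (int repeat = 0; repeat < 10; repeat++) {
                stmt.executeUpdate("create zone if not exists \"AIMEM\" engine aimem");
                stmt.executeUpdate("create table selectFromDropMultipleJdbc("
                        + "k1 INTEGER not null, k2 INTEGER not null, v1 VARCHAR(100), "
                        + "v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2)) "
                        + "with PRIMARY_ZONE='AIMEM'");
                stmt.executeUpdate("insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) "
                        + "values (3366, 3367, null, null, '1980-02-27 01:01:49.0')");
                // ... the remaining inserts from the list above go here ...
                stmt.executeQuery("select * from selectFromDropMultipleJdbc").close();
                stmt.executeUpdate("drop table selectFromDropMultipleJdbc");
            }
        }
    }
}
{code}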
*Expected:*

All queries are executed.
h3. Actual:

On repeat 6 the client freezes indefinitely.
The error in the server log:
{code:java}
2024-04-09 09:27:27:955 +0200 
[ERROR][%DropTableMultipleTriesJdbcTest_cluster_0%JRaft-AppendEntries-Processor-0][AbstractClientService]
 Fail to run RpcResponseClosure, the request is AppendEntriesRequestImpl 
[committedIndex=183, 
data=org.apache.ignite.raft.jraft.util.ByteString@d86b8023, 
entriesList=ArrayList [EntryMetaImpl [checksum=0, dataLen=21, 
hasChecksum=false, learnersList=null, oldLearnersList=null, oldPeersList=null, 
peersList=null, term=1, 

[jira] [Updated] (IGNITE-22009) aimem: repeat of create table and drop column leads to freeze of client

2024-04-09 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22009:
--
Environment: 1 node cluster running on remote machine  (was: 1 node cluster 
running locally)

> aimem: repeat of create table and drop column leads to freeze of client
> ---
>
> Key: IGNITE-22009
> URL: https://issues.apache.org/jira/browse/IGNITE-22009
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc, persistence
>Affects Versions: 3.0.0-beta1
> Environment: 1 node cluster running on remote machine
>Reporter: Igor
>Priority: Blocker
>  Labels: ignite-3
>
> h3. Steps to reproduce:
> Run the following queries in a cycle of 10 repeats using a single connection:
> {code:java}
> create zone if not exists "AIMEM" engine aimem
> create table selectFromDropMultipleJdbc(k1 INTEGER not null, k2 INTEGER not 
> null, v1 VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key 
> (k1, k2)) with PRIMARY_ZONE='AIMEM'
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3366, 
> 3367, null, null, '1980-02-27 01:01:49.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3367, 
> 3368, 
> '1v1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_',
>  
> '1v2_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1',
>  '1980-02-28 01:01:50.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3368, 
> 3369, 
> '2v1_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_',
>  
> '2v2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2',
>  '1980-02-29 01:01:51.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3369, 
> 3370, 
> '3v1_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_',
>  
> '3v2_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3',
>  '1980-03-01 01:01:52.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3370, 
> 3371, null, null, '1980-03-02 01:01:53.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3371, 
> 3372, 
> '5v1_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_',
>  
> '5v2_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5',
>  '1980-03-03 01:01:54.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3372, 
> 3373, 
> '6v1_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_',
>  
> '6v2_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6',
>  '1980-03-04 01:01:55.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3373, 
> 3374, 
> '7v1_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_',
>  
> '7v2_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7',
>  '1980-03-05 01:01:56.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3374, 
> 3375, null, null, '1980-03-06 01:01:57.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3375, 
> 3376, 
> '9v1_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_',
>  
> 

[jira] [Updated] (IGNITE-22009) aimem: repeat of create table and drop column leads to freeze of client

2024-04-09 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22009:
--
Environment: 1 node cluster running on remote machine or locally  (was: 1 
node cluster running on remote machine)

> aimem: repeat of create table and drop column leads to freeze of client
> ---
>
> Key: IGNITE-22009
> URL: https://issues.apache.org/jira/browse/IGNITE-22009
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc, persistence
>Affects Versions: 3.0.0-beta1
> Environment: 1 node cluster running on remote machine or locally
>Reporter: Igor
>Priority: Blocker
>  Labels: ignite-3
>
> h3. Steps to reproduce:
> Run the following queries in a cycle of 10 repeats using a single connection:
> {code:java}
> create zone if not exists "AIMEM" engine aimem
> create table selectFromDropMultipleJdbc(k1 INTEGER not null, k2 INTEGER not 
> null, v1 VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key 
> (k1, k2)) with PRIMARY_ZONE='AIMEM'
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3366, 
> 3367, null, null, '1980-02-27 01:01:49.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3367, 
> 3368, 
> '1v1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_',
>  
> '1v2_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1',
>  '1980-02-28 01:01:50.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3368, 
> 3369, 
> '2v1_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_',
>  
> '2v2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2',
>  '1980-02-29 01:01:51.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3369, 
> 3370, 
> '3v1_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_',
>  
> '3v2_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3',
>  '1980-03-01 01:01:52.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3370, 
> 3371, null, null, '1980-03-02 01:01:53.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3371, 
> 3372, 
> '5v1_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_',
>  
> '5v2_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5',
>  '1980-03-03 01:01:54.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3372, 
> 3373, 
> '6v1_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_',
>  
> '6v2_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6',
>  '1980-03-04 01:01:55.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3373, 
> 3374, 
> '7v1_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_',
>  
> '7v2_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7',
>  '1980-03-05 01:01:56.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3374, 
> 3375, null, null, '1980-03-06 01:01:57.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3375, 
> 3376, 
> '9v1_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_',
>  
> 

[jira] [Updated] (IGNITE-22009) aimem: repeat of create table and drop column leads to freeze of client

2024-04-09 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22009:
--
Component/s: jdbc

> aimem: repeat of create table and drop column leads to freeze of client
> ---
>
> Key: IGNITE-22009
> URL: https://issues.apache.org/jira/browse/IGNITE-22009
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc, persistence
>Affects Versions: 3.0.0-beta1
> Environment: 1 node cluster running locally
>Reporter: Igor
>Priority: Blocker
>  Labels: ignite-3
>
> h3. Steps to reproduce:
> Run the following queries in a cycle of 10 repeats using a single connection:
> {code:java}
> create zone if not exists "AIMEM" engine aimem
> create table selectFromDropMultipleJdbc(k1 INTEGER not null, k2 INTEGER not 
> null, v1 VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key 
> (k1, k2)) with PRIMARY_ZONE='AIMEM'
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3366, 
> 3367, null, null, '1980-02-27 01:01:49.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3367, 
> 3368, 
> '1v1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_',
>  
> '1v2_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1',
>  '1980-02-28 01:01:50.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3368, 
> 3369, 
> '2v1_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_',
>  
> '2v2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2',
>  '1980-02-29 01:01:51.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3369, 
> 3370, 
> '3v1_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_',
>  
> '3v2_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3',
>  '1980-03-01 01:01:52.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3370, 
> 3371, null, null, '1980-03-02 01:01:53.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3371, 
> 3372, 
> '5v1_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_',
>  
> '5v2_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5',
>  '1980-03-03 01:01:54.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3372, 
> 3373, 
> '6v1_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_',
>  
> '6v2_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6',
>  '1980-03-04 01:01:55.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3373, 
> 3374, 
> '7v1_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_',
>  
> '7v2_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7',
>  '1980-03-05 01:01:56.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3374, 
> 3375, null, null, '1980-03-06 01:01:57.0')
> insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3375, 
> 3376, 
> '9v1_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_',
>  
> '9v2_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9',
>  '1980-03-07 01:01:58.0')
> select * from selectFromDropMultipleJdbc
> 

[jira] [Created] (IGNITE-22009) aimem: repeat of create table and drop column leads to freeze of client

2024-04-09 Thread Igor (Jira)
Igor created IGNITE-22009:
-

 Summary: aimem: repeat of create table and drop column leads to 
freeze of client
 Key: IGNITE-22009
 URL: https://issues.apache.org/jira/browse/IGNITE-22009
 Project: Ignite
  Issue Type: Bug
  Components: persistence
Affects Versions: 3.0.0-beta1
 Environment: 1 node cluster running locally
Reporter: Igor


h3. Steps to reproduce:

Run the following queries in a cycle of 10 repeats on a single connection:
{code:java}
create zone if not exists "AIMEM" engine aimem
create table selectFromDropMultipleJdbc(k1 INTEGER not null, k2 INTEGER not 
null, v1 VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, 
k2)) with PRIMARY_ZONE='AIMEM'
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3366, 3367, 
null, null, '1980-02-27 01:01:49.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3367, 3368, 
'1v1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_',
 
'1v2_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1',
 '1980-02-28 01:01:50.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3368, 3369, 
'2v1_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_',
 
'2v2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2_2',
 '1980-02-29 01:01:51.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3369, 3370, 
'3v1_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_',
 
'3v2_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3_3',
 '1980-03-01 01:01:52.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3370, 3371, 
null, null, '1980-03-02 01:01:53.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3371, 3372, 
'5v1_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_',
 
'5v2_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5_5',
 '1980-03-03 01:01:54.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3372, 3373, 
'6v1_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_',
 
'6v2_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6_6',
 '1980-03-04 01:01:55.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3373, 3374, 
'7v1_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_',
 
'7v2_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7_7',
 '1980-03-05 01:01:56.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3374, 3375, 
null, null, '1980-03-06 01:01:57.0')
insert into selectFromDropMultipleJdbc(k1, k2, v1, v2, v3) values (3375, 3376, 
'9v1_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_',
 
'9v2_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9_9',
 '1980-03-07 01:01:58.0')
select * from selectFromDropMultipleJdbc
drop table selectFromDropMultipleJdbc {code}
*Expected:*

All queries are executed.
h3. Actual:

On repeat 6 the client freezes indefinitely.
The error in the server log:
{code:java}
2024-04-09 09:27:27:955 +0200 
[ERROR][%DropTableMultipleTriesJdbcTest_cluster_0%JRaft-AppendEntries-Processor-0][AbstractClientService]
 Fail to run RpcResponseClosure, the request is AppendEntriesRequestImpl 

[jira] [Updated] (IGNITE-21894) Undescriptive error when restart cluster node during open JDBC transaction

2024-04-01 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-21894:
--
Summary: Undescriptive error when restart cluster node during open JDBC 
transaction  (was: Undescriptive error when restart cluster during open JDBC 
transaction)

> Undescriptive error when restart cluster node during open JDBC transaction
> --
>
> Key: IGNITE-21894
> URL: https://issues.apache.org/jira/browse/IGNITE-21894
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc, sql
>Affects Versions: 3.0.0-beta1
> Environment: 2 nodes cluster
>Reporter: Igor
>Priority: Minor
>  Labels: ignite-3
>
> *Steps to reproduce:*
> 1. Start a 2-node cluster.
> 2. Open a JDBC connection and start a transaction (using `.setAutoCommit(false)`).
> 3. Execute some insert queries. Do not commit the transaction.
> 4. Restart the server node where the connection was established.
> 5. Close the JDBC statement and connection.
> *Expected:*
> The connection is closed with an understandable error or without any error.
> *Actual:*
> An unclear exception on the server side while closing the connection:
> {code:java}
> 2024-04-01 00:55:28:399 + 
> [WARNING][ClusterFailoverMultiNodeTest_cluster_0-srv-worker-3][ClientInboundMessageHandler]
>  Error processing client request [connectionId=1, id=1, op=55, 
> remoteAddress=/127.0.0.1:59430]:Failed to find resource with id: 1
> org.apache.ignite.internal.lang.IgniteInternalException: IGN-CMN-65535 
> TraceId:fe67e0da-5839-48c7-a59e-ca3465491698 Failed to find resource with id: 
> 1
>   at 
> org.apache.ignite.client.handler.ClientResourceRegistry.get(ClientResourceRegistry.java:82)
>   at 
> org.apache.ignite.client.handler.JdbcQueryEventHandlerImpl.finishTxAsync(JdbcQueryEventHandlerImpl.java:390)
>   at 
> org.apache.ignite.client.handler.requests.jdbc.ClientJdbcFinishTxRequest.process(ClientJdbcFinishTxRequest.java:42)
>   at 
> org.apache.ignite.client.handler.ClientInboundMessageHandler.processOperation(ClientInboundMessageHandler.java:785)
>   at 
> org.apache.ignite.client.handler.ClientInboundMessageHandler.processOperation(ClientInboundMessageHandler.java:581)
>   at 
> org.apache.ignite.client.handler.ClientInboundMessageHandler.lambda$channelRead$2(ClientInboundMessageHandler.java:328)
>   at 
> org.gridgain.internal.security.context.SecuredRunnable.run(SecuredRunnable.java:34)
>   at 
> org.apache.ignite.client.handler.ClientInboundMessageHandler.channelRead(ClientInboundMessageHandler.java:328)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>   at 
> io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
>   at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>   at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>   at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>   at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
>   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>   at 
> io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>   at 
> io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>   at java.base/java.lang.Thread.run(Thread.java:834) {code}
> The exception on client side:
> {code:java}
> 

[jira] [Updated] (IGNITE-21894) Undescriptive error when restart cluster during open JDBC transaction

2024-04-01 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-21894:
--
Description: 
*Steps to reproduce:*
1. Start a 2-node cluster.

2. Open a JDBC connection and start a transaction (using `.setAutoCommit(false)`).

3. Execute some insert queries. Do not commit the transaction.

4. Restart the server node where the connection was established.

5. Close the JDBC statement and connection (a minimal sketch of steps 2-5 follows).
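A minimal sketch of steps 2-5, assuming a thin JDBC connection on localhost:10800 and an already existing table named test; the URL, table name, and the console prompt are illustrative, not taken from the original test:
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class OpenTxRestartRepro {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:ignite:thin://localhost:10800");
        conn.setAutoCommit(false); // step 2: start an explicit transaction
        Statement stmt = conn.createStatement();
        stmt.executeUpdate("insert into test(id, val) values (1, 'a')"); // step 3: no commit
        // Step 4: restart the server node the connection points to (done outside this code).
        System.out.println("Restart the node now, then press Enter...");
        System.in.read();
        stmt.close(); // step 5: closing the statement and connection triggers the implicit
        conn.close(); // rollback that produces the server and client exceptions shown below
    }
}
{code}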
*Expected:*
The connection is closed with an understandable error or without any error.

*Actual:*
An unclear exception on the server side:
{code:java}
2024-04-01 00:55:28:399 + 
[WARNING][ClusterFailoverMultiNodeTest_cluster_0-srv-worker-3][ClientInboundMessageHandler]
 Error processing client request [connectionId=1, id=1, op=55, 
remoteAddress=/127.0.0.1:59430]:Failed to find resource with id: 1
org.apache.ignite.internal.lang.IgniteInternalException: IGN-CMN-65535 
TraceId:fe67e0da-5839-48c7-a59e-ca3465491698 Failed to find resource with id: 1
at 
org.apache.ignite.client.handler.ClientResourceRegistry.get(ClientResourceRegistry.java:82)
at 
org.apache.ignite.client.handler.JdbcQueryEventHandlerImpl.finishTxAsync(JdbcQueryEventHandlerImpl.java:390)
at 
org.apache.ignite.client.handler.requests.jdbc.ClientJdbcFinishTxRequest.process(ClientJdbcFinishTxRequest.java:42)
at 
org.apache.ignite.client.handler.ClientInboundMessageHandler.processOperation(ClientInboundMessageHandler.java:785)
at 
org.apache.ignite.client.handler.ClientInboundMessageHandler.processOperation(ClientInboundMessageHandler.java:581)
at 
org.apache.ignite.client.handler.ClientInboundMessageHandler.lambda$channelRead$2(ClientInboundMessageHandler.java:328)
at 
org.gridgain.internal.security.context.SecuredRunnable.run(SecuredRunnable.java:34)
at 
org.apache.ignite.client.handler.ClientInboundMessageHandler.channelRead(ClientInboundMessageHandler.java:328)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at 
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
at 
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at 
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at 
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at 
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
at 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834) {code}
The exception on client side:
{code:java}
java.sql.SQLException: The transaction rollback request failed.
    at 
org.apache.ignite.internal.jdbc.JdbcConnection.finishTx(JdbcConnection.java:425)
    at 
org.apache.ignite.internal.jdbc.JdbcConnection.close(JdbcConnection.java:441)
    at 
org.gridgain.ai3tests.tests.ThinClientRollbackTests.test(ThinClientRollbackTests.java:109)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.util.concurrent.ExecutionException: 

[jira] [Updated] (IGNITE-21894) Undescriptive error when restart cluster during open JDBC transaction

2024-04-01 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-21894:
--
Description: 
*Steps to reproduce:*
1. Start a 2-node cluster.

2. Open a JDBC connection and start a transaction (using `.setAutoCommit(false)`).

3. Execute some insert queries. Do not commit the transaction.

4. Restart the server node where the connection was established.

5. Close the JDBC statement and connection.
*Expected:*
The connection is closed with an understandable error or without any error.

*Actual:*
An unclear exception on the server side while closing the connection:
{code:java}
2024-04-01 00:55:28:399 + 
[WARNING][ClusterFailoverMultiNodeTest_cluster_0-srv-worker-3][ClientInboundMessageHandler]
 Error processing client request [connectionId=1, id=1, op=55, 
remoteAddress=/127.0.0.1:59430]:Failed to find resource with id: 1
org.apache.ignite.internal.lang.IgniteInternalException: IGN-CMN-65535 
TraceId:fe67e0da-5839-48c7-a59e-ca3465491698 Failed to find resource with id: 1
at 
org.apache.ignite.client.handler.ClientResourceRegistry.get(ClientResourceRegistry.java:82)
at 
org.apache.ignite.client.handler.JdbcQueryEventHandlerImpl.finishTxAsync(JdbcQueryEventHandlerImpl.java:390)
at 
org.apache.ignite.client.handler.requests.jdbc.ClientJdbcFinishTxRequest.process(ClientJdbcFinishTxRequest.java:42)
at 
org.apache.ignite.client.handler.ClientInboundMessageHandler.processOperation(ClientInboundMessageHandler.java:785)
at 
org.apache.ignite.client.handler.ClientInboundMessageHandler.processOperation(ClientInboundMessageHandler.java:581)
at 
org.apache.ignite.client.handler.ClientInboundMessageHandler.lambda$channelRead$2(ClientInboundMessageHandler.java:328)
at 
org.gridgain.internal.security.context.SecuredRunnable.run(SecuredRunnable.java:34)
at 
org.apache.ignite.client.handler.ClientInboundMessageHandler.channelRead(ClientInboundMessageHandler.java:328)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at 
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
at 
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at 
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at 
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at 
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
at 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834) {code}
The exception on client side:
{code:java}
java.sql.SQLException: The transaction rollback request failed.
    at 
org.apache.ignite.internal.jdbc.JdbcConnection.finishTx(JdbcConnection.java:425)
    at 
org.apache.ignite.internal.jdbc.JdbcConnection.close(JdbcConnection.java:441)
    at 
org.gridgain.ai3tests.tests.ThinClientRollbackTests.test(ThinClientRollbackTests.java:109)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: 

[jira] [Created] (IGNITE-21894) Undescriptive error when restart cluster during open JDBC transaction

2024-04-01 Thread Igor (Jira)
Igor created IGNITE-21894:
-

 Summary: Undescriptive error when restart cluster during open JDBC 
transaction
 Key: IGNITE-21894
 URL: https://issues.apache.org/jira/browse/IGNITE-21894
 Project: Ignite
  Issue Type: Bug
  Components: jdbc, sql
Affects Versions: 3.0.0-beta1
 Environment: 2 nodes cluster
Reporter: Igor


*Steps to reproduce:*
1. Start a 2-node cluster.

2. Open a JDBC connection and start a transaction (using `.setAutoCommit(false)`).

3. Restart the server node where the connection was established.

4. Close the JDBC statement and connection.
*Expected:*
The connection is closed with an understandable error or without any error.

*Actual:*
An unclear exception on the server side:
{code:java}
2024-04-01 00:55:28:399 + 
[WARNING][ClusterFailoverMultiNodeTest_cluster_0-srv-worker-3][ClientInboundMessageHandler]
 Error processing client request [connectionId=1, id=1, op=55, 
remoteAddress=/127.0.0.1:59430]:Failed to find resource with id: 1
org.apache.ignite.internal.lang.IgniteInternalException: IGN-CMN-65535 
TraceId:fe67e0da-5839-48c7-a59e-ca3465491698 Failed to find resource with id: 1
at 
org.apache.ignite.client.handler.ClientResourceRegistry.get(ClientResourceRegistry.java:82)
at 
org.apache.ignite.client.handler.JdbcQueryEventHandlerImpl.finishTxAsync(JdbcQueryEventHandlerImpl.java:390)
at 
org.apache.ignite.client.handler.requests.jdbc.ClientJdbcFinishTxRequest.process(ClientJdbcFinishTxRequest.java:42)
at 
org.apache.ignite.client.handler.ClientInboundMessageHandler.processOperation(ClientInboundMessageHandler.java:785)
at 
org.apache.ignite.client.handler.ClientInboundMessageHandler.processOperation(ClientInboundMessageHandler.java:581)
at 
org.apache.ignite.client.handler.ClientInboundMessageHandler.lambda$channelRead$2(ClientInboundMessageHandler.java:328)
at 
org.gridgain.internal.security.context.SecuredRunnable.run(SecuredRunnable.java:34)
at 
org.apache.ignite.client.handler.ClientInboundMessageHandler.channelRead(ClientInboundMessageHandler.java:328)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at 
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
at 
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at 
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at 
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at 
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
at 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834) {code}
The exception on client side:
{code:java}
java.sql.SQLException: The transaction rollback request failed.
    at 
org.apache.ignite.internal.jdbc.JdbcConnection.finishTx(JdbcConnection.java:425)
    at 
org.apache.ignite.internal.jdbc.JdbcConnection.close(JdbcConnection.java:441)
    at 
org.gridgain.ai3tests.tests.ThinClientRollbackTests.test(ThinClientRollbackTests.java:109)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at 

[jira] (IGNITE-21639) Server after kill does not start and stuck on election

2024-03-21 Thread Igor (Jira)


[ https://issues.apache.org/jira/browse/IGNITE-21639 ]


Igor deleted comment on IGNITE-21639:
---

was (Author: JIRAUSER299771):
The run with logs 
https://ggtc.gridgain.com/buildConfiguration/Qa_PocTesterAwsBuildTypeAI3/10704411?hideTestsFromDependencies=false=false=false=true

> Server after kill does not start and stuck on election 
> ---
>
> Key: IGNITE-21639
> URL: https://issues.apache.org/jira/browse/IGNITE-21639
> Project: Ignite
>  Issue Type: Improvement
>  Components: general, networking, platforms
>Affects Versions: 3.0.0-beta1
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
> Attachments: 
> poc-tester-SERVER-192.168.1.117-id-0-2024-02-29-22-56-11-client.log.0
>
>
> *Steps to reproduce:*
>  # Start a 3-node cluster, each node on a different machine (not in Docker).
>  # Insert about 500 000 rows across 500 tables. Replication factor is 3.
>  # Kill one node.
>  # Start the killed node.
> *Expected:*
> The node starts, joins the cluster, and works normally.
> *Actual:*
> The node gets stuck on startup with repeating messages like this:
> {code:java}
> 2024-02-29 23:06:21:261 +0300 
> [INFO][%poc-tester-SERVER-192.168.1.117-id-0%JRaft-ElectionTimer-18][NodeImpl]
>  Unsuccessful election round number 128
> 2024-02-29 23:06:21:261 +0300 
> [INFO][%poc-tester-SERVER-192.168.1.117-id-0%JRaft-ElectionTimer-18][NodeImpl]
>  Node <154_part_24/poc-tester-SERVER-192.168.1.117-id-0> term 3 start 
> preVote. 
> 2024-02-29 23:06:21:282 +0300 
> [ERROR][%poc-tester-SERVER-192.168.1.117-id-0%JRaft-FSMCaller-Disruptor_stripe_5-0][StripedDisruptor]
>  Handle disruptor event error 
> [name=%poc-tester-SERVER-192.168.1.117-id-0%JRaft-FSMCaller-Disruptor-, 
> event=org.apache.ignite.raft.jraft.core.FSMCallerImpl$ApplyTask@efb699b, 
> hasHandler=false]
> java.lang.AssertionError: Safe time reordering detected 
> [current=112016525904248838, proposed=112016523364991002]
>     at 
> org.apache.ignite.internal.table.distributed.raft.PartitionListener.lambda$onWrite$1(PartitionListener.java:169)
>     at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
>     at 
> org.apache.ignite.internal.table.distributed.raft.PartitionListener.onWrite(PartitionListener.java:159)
>     at 
> org.apache.ignite.internal.raft.server.impl.JraftServerImpl$DelegatingStateMachine.onApply(JraftServerImpl.java:674)
>     at 
> org.apache.ignite.raft.jraft.core.FSMCallerImpl.doApplyTasks(FSMCallerImpl.java:557)
>     at 
> org.apache.ignite.raft.jraft.core.FSMCallerImpl.doCommitted(FSMCallerImpl.java:525)
>     at 
> org.apache.ignite.raft.jraft.core.FSMCallerImpl.runApplyTask(FSMCallerImpl.java:444)
>     at 
> org.apache.ignite.raft.jraft.core.FSMCallerImpl$ApplyTaskHandler.onEvent(FSMCallerImpl.java:136)
>     at 
> org.apache.ignite.raft.jraft.core.FSMCallerImpl$ApplyTaskHandler.onEvent(FSMCallerImpl.java:130)
>     at 
> org.apache.ignite.raft.jraft.disruptor.StripedDisruptor$StripeEntryHandler.onEvent(StripedDisruptor.java:266)
>     at 
> org.apache.ignite.raft.jraft.disruptor.StripedDisruptor$StripeEntryHandler.onEvent(StripedDisruptor.java:231)
>     at 
> com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:137)
>     at java.base/java.lang.Thread.run(Thread.java:829){code}
>  
> [^poc-tester-SERVER-192.168.1.117-id-0-2024-02-29-22-56-11-client.log.0]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21725) The exception "Primary replica has expired" on creation of 1000 tables

2024-03-11 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-21725:
--
Summary: The exception "Primary replica has expired" on creation of 1000 
tables  (was: The exception "Primary replica has expired" on a lot creation of 
1000 tables)

> The exception "Primary replica has expired" on creation of 1000 tables
> --
>
> Key: IGNITE-21725
> URL: https://issues.apache.org/jira/browse/IGNITE-21725
> Project: Ignite
>  Issue Type: Bug
>  Components: general, persistence
>Affects Versions: 3.0.0-beta1
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>
> *Steps to reproduce:*
> 1. Start a cluster with 1 node with JVM options: "-Xms4096m -Xmx4096m".
> 2. Create 1000 tables with 200 varchar columns each and insert 1 row into each, one by one.
> *Expected result:*
> Tables are created.
> *Actual result:*
> On table 949 the exception is thrown:
> {code:java}
> java.sql.SQLException: Primary replica has expired, transaction will be 
> rolled back: [groupId = 1850_part_21, expected enlistment consistency token = 
> 112069202113202526, commit timestamp = HybridTimestamp [physical=2024-03-10 
> 03:13:16:057 +, logical=396, composite=112069207395991948], current 
> primary replica = null]
>   at 
> org.apache.ignite.internal.jdbc.proto.IgniteQueryErrorCode.createJdbcSqlException(IgniteQueryErrorCode.java:57)
>   at 
> org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:154)
>   at 
> org.apache.ignite.internal.jdbc.JdbcPreparedStatement.executeWithArguments(JdbcPreparedStatement.java:765)
>   at 
> org.apache.ignite.internal.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:173)
>   at 
> org.gridgain.ai3tests.tests.TablesAmountCapacityTest.lambda$insertRowAndAssertTimeout$1(TablesAmountCapacityTest.java:166)
>   at 
> java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:834) {code}
> In server logs there is an exception:
> {code:java}
> 2024-03-10 03:13:24:222 + 
> [WARNING][%TablesAmountCapacityTest_cluster_0%partition-operations-8][TxManagerImpl]
>  Failed to finish Tx. The operation will be retried 
> [txId=018e2659-b09f-009c-23c0-6ab50001].
> java.util.concurrent.CompletionException: 
> org.apache.ignite.internal.replicator.exception.ReplicationTimeoutException: 
> IGN-REP-3 TraceId:7ff7e851-9f18-4212-b317-a70a0a92fdfe Replication is timed 
> out [replicaGrpId=1850_part_21]
>     at 
> java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
>     at 
> java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
>     at 
> java.base/java.util.concurrent.CompletableFuture$UniAccept.tryFire(CompletableFuture.java:704)
>     at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>     at 
> java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
>     at 
> org.apache.ignite.internal.replicator.ReplicaService.lambda$sendToReplica$0(ReplicaService.java:110)
>     at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>     at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>     at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: 
> org.apache.ignite.internal.replicator.exception.ReplicationTimeoutException: 
> IGN-REP-3 TraceId:7ff7e851-9f18-4212-b317-a70a0a92fdfe Replication is timed 
> out [replicaGrpId=1850_part_21]
>     ... 4 more
> 2024-03-10 03:13:24:290 + 
> [WARNING][%TablesAmountCapacityTest_cluster_0%partition-operations-22][TrackableNetworkMessageHandler]
>  Message handling has been too long [duration=67ms, message=[class 
> org.apache.ignite.raft.jraft.rpc.WriteActionRequestImpl]]
> 2024-03-10 03:13:24:290 + 
> [WARNING][%TablesAmountCapacityTest_cluster_0%partition-operations-11][TrackableNetworkMessageHandler]
>  Message handling has been too long [duration=67ms, message=[class 
> org.apache.ignite.raft.jraft.rpc.WriteActionRequestImpl]]
> 2024-03-10 03:13:24:290 + 
> [WARNING][%TablesAmountCapacityTest_cluster_0%partition-operations-19][TrackableNetworkMessageHandler]
>  Message handling has been too long [duration=67ms, message=[class 
> org.apache.ignite.raft.jraft.rpc.WriteActionRequestImpl]]
> 2024-03-10 03:13:24:290 + 
> 

[jira] [Created] (IGNITE-21725) The exception "Primary replica has expired" on a lot creation of 1000 tables

2024-03-11 Thread Igor (Jira)
Igor created IGNITE-21725:
-

 Summary: The exception "Primary replica has expired" on a lot 
creation of 1000 tables
 Key: IGNITE-21725
 URL: https://issues.apache.org/jira/browse/IGNITE-21725
 Project: Ignite
  Issue Type: Bug
  Components: general, persistence
Affects Versions: 3.0.0-beta1
Reporter: Igor


*Steps to reproduce:*

1. Start a cluster with 1 node with JVM options: "-Xms4096m -Xmx4096m".

2. Create 1000 tables with 200 varchar columns each and insert 1 row into each, one by one (a minimal sketch of this loop follows).
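A minimal sketch of the creation loop from step 2, assuming the thin JDBC URL jdbc:ignite:thin://localhost:10800; the table and column names are illustrative and not taken from the original test:
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TablesAmountCapacityRepro {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://localhost:10800");
             Statement stmt = conn.createStatement()) {
            for (int t = 1; t <= 1000; t++) {
                // Each table has an integer key plus 200 varchar columns: c1 ... c200.
                StringBuilder ddl = new StringBuilder("create table tbl_" + t + " (id INTEGER primary key");
                StringBuilder ins = new StringBuilder("insert into tbl_" + t + " values (" + t);
                for (int c = 1; c <= 200; c++) {
                    ddl.append(", c").append(c).append(" VARCHAR(100)");
                    ins.append(", 'v").append(c).append("'");
                }
                stmt.executeUpdate(ddl.append(")").toString());
                stmt.executeUpdate(ins.append(")").toString()); // one row per table, one by one
            }
        }
    }
}
{code}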

*Expected result:*
Tables are created.

*Actual result:*

On table 949, the following exception is thrown:
{code:java}
java.sql.SQLException: Primary replica has expired, transaction will be rolled 
back: [groupId = 1850_part_21, expected enlistment consistency token = 
112069202113202526, commit timestamp = HybridTimestamp [physical=2024-03-10 
03:13:16:057 +, logical=396, composite=112069207395991948], current primary 
replica = null]
  at 
org.apache.ignite.internal.jdbc.proto.IgniteQueryErrorCode.createJdbcSqlException(IgniteQueryErrorCode.java:57)
  at 
org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:154)
  at 
org.apache.ignite.internal.jdbc.JdbcPreparedStatement.executeWithArguments(JdbcPreparedStatement.java:765)
  at 
org.apache.ignite.internal.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:173)
  at 
org.gridgain.ai3tests.tests.TablesAmountCapacityTest.lambda$insertRowAndAssertTimeout$1(TablesAmountCapacityTest.java:166)
  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at java.base/java.lang.Thread.run(Thread.java:834) {code}
In the server logs there is an exception:
{code:java}
2024-03-10 03:13:24:222 + 
[WARNING][%TablesAmountCapacityTest_cluster_0%partition-operations-8][TxManagerImpl]
 Failed to finish Tx. The operation will be retried 
[txId=018e2659-b09f-009c-23c0-6ab50001].
java.util.concurrent.CompletionException: 
org.apache.ignite.internal.replicator.exception.ReplicationTimeoutException: 
IGN-REP-3 TraceId:7ff7e851-9f18-4212-b317-a70a0a92fdfe Replication is timed out 
[replicaGrpId=1850_part_21]
    at 
java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
    at 
java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
    at 
java.base/java.util.concurrent.CompletableFuture$UniAccept.tryFire(CompletableFuture.java:704)
    at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
    at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
    at 
org.apache.ignite.internal.replicator.ReplicaService.lambda$sendToReplica$0(ReplicaService.java:110)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: 
org.apache.ignite.internal.replicator.exception.ReplicationTimeoutException: 
IGN-REP-3 TraceId:7ff7e851-9f18-4212-b317-a70a0a92fdfe Replication is timed out 
[replicaGrpId=1850_part_21]
    ... 4 more
2024-03-10 03:13:24:290 + 
[WARNING][%TablesAmountCapacityTest_cluster_0%partition-operations-22][TrackableNetworkMessageHandler]
 Message handling has been too long [duration=67ms, message=[class 
org.apache.ignite.raft.jraft.rpc.WriteActionRequestImpl]]
2024-03-10 03:13:24:290 + 
[WARNING][%TablesAmountCapacityTest_cluster_0%partition-operations-11][TrackableNetworkMessageHandler]
 Message handling has been too long [duration=67ms, message=[class 
org.apache.ignite.raft.jraft.rpc.WriteActionRequestImpl]]
2024-03-10 03:13:24:290 + 
[WARNING][%TablesAmountCapacityTest_cluster_0%partition-operations-19][TrackableNetworkMessageHandler]
 Message handling has been too long [duration=67ms, message=[class 
org.apache.ignite.raft.jraft.rpc.WriteActionRequestImpl]]
2024-03-10 03:13:24:290 + 
[WARNING][%TablesAmountCapacityTest_cluster_0%partition-operations-17][TrackableNetworkMessageHandler]
 Message handling has been too long [duration=67ms, message=[class 
org.apache.ignite.raft.jraft.rpc.WriteActionRequestImpl]]
2024-03-10 03:13:24:290 + 
[WARNING][%TablesAmountCapacityTest_cluster_0%partition-operations-23][TrackableNetworkMessageHandler]
 Message handling has been too long [duration=67ms, message=[class 
org.apache.ignite.raft.jraft.rpc.WriteActionRequestImpl]]
2024-03-10 03:13:24:290 + 

[jira] [Updated] (IGNITE-21663) Cluster load balancing when 1 node is killed doesn't work

2024-03-04 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-21663:
--
Description: 
*Steps to reproduce:*
 # Start cluster with 2 nodes running locally.
 # Make connection like this:

{code:java}
try (IgniteClient igniteClient = IgniteClient.builder()
        .retryPolicy(new RetryLimitPolicy())
        .addresses("localhost:10800", "localhost:10801")
        .build()) {
    try (Session session = igniteClient.sql().createSession()) {
        // code here
    }
} {code}
3. Create a table with replication factor 2 (a sketch of steps 3-6 follows this list).

4. Insert 1 row and select it from the table.
5. Kill the first node (the first one in the connection address list).
6. Execute a select from the table again.
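A minimal sketch of steps 3-6, continuing with the `session` opened in the snippet above; the zone/table DDL and the names are assumptions for illustration and are not taken from the original test:
{code:java}
// Assumption: zone-based replication DDL; only the execute(null, sql) calls mirror this thread.
session.execute(null, "CREATE ZONE IF NOT EXISTS two_replicas WITH REPLICAS=2");
session.execute(null, "CREATE TABLE person (id INT PRIMARY KEY, name VARCHAR) WITH PRIMARY_ZONE='TWO_REPLICAS'");
session.execute(null, "INSERT INTO person (id, name) VALUES (1, 'John')");
session.execute(null, "SELECT * FROM person"); // succeeds while both nodes are alive

// Kill the first node from the address list externally, then repeat the select:
session.execute(null, "SELECT * FROM person"); // expected to succeed, actually throws SqlException
{code}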

*Expected:*
The cluster keeps working with the remaining node.
*Actual:*
An exception is thrown on the select after the first node is killed; the select is not executed.
{code:java}
org.apache.ignite.sql.SqlException: IGN-CMN-65535 
TraceId:92e48867-2e6e-4730-9781-527a4e204b32 Unable to send fragment 
[targetNode=ConnectionTest_cluster_0, fragmentId=1, 
cause=io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection 
refused: no further information: /192.168.100.5:3344]
    at 
java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710)
    at 
org.apache.ignite.internal.util.ExceptionUtils$1.copy(ExceptionUtils.java:765)
    at 
org.apache.ignite.internal.util.ExceptionUtils$ExceptionFactory.createCopy(ExceptionUtils.java:699)
    at 
org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:525)
    at 
org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCauseInternal(ExceptionUtils.java:634)
    at 
org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:476)
    at 
org.apache.ignite.internal.sql.AbstractSession.execute(AbstractSession.java:63)
    at 
org.gridgain.ai3tests.tests.teststeps.ThinClientSteps.lambda$executeQuery$0(ThinClientSteps.java:61)
    at io.qameta.allure.Allure.lambda$step$1(Allure.java:127)
    at io.qameta.allure.Allure.step(Allure.java:181)
    at io.qameta.allure.Allure.step(Allure.java:125)
    at 
org.gridgain.ai3tests.tests.teststeps.ThinClientSteps.executeQuery(ThinClientSteps.java:61)
    at 
org.gridgain.ai3tests.tests.ConnectionTest.testThinClientConnectionToMultipleHost(ConnectionTest.java:93)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at java.base/java.util.ArrayList.forEach(ArrayList.java:1540)
    at java.base/java.util.ArrayList.forEach(ArrayList.java:1540)
Caused by: java.util.concurrent.CompletionException: 
org.apache.ignite.sql.SqlException: IGN-CMN-65535 
TraceId:92e48867-2e6e-4730-9781-527a4e204b32 Unable to send fragment 
[targetNode=ConnectionTest_cluster_0, fragmentId=1, 
cause=io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection 
refused: no further information: /192.168.100.5:3344]
    at 
java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
    at 
java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
    at 
java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870)
    at 
java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
    at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
    at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
    at 
org.apache.ignite.internal.client.TcpClientChannel.processNextMessage(TcpClientChannel.java:419)
    at 
org.apache.ignite.internal.client.TcpClientChannel.lambda$onMessage$3(TcpClientChannel.java:238)
    at 
java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1426)
    at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
    at 
java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
    at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
    at 
java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
    at 
java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177)
Caused by: org.apache.ignite.sql.SqlException: IGN-CMN-65535 
TraceId:92e48867-2e6e-4730-9781-527a4e204b32 Unable to send fragment 
[targetNode=ConnectionTest_cluster_0, fragmentId=1, 
cause=io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection 
refused: no further information: /192.168.100.5:3344]
    at 
java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710)
    at 
org.apache.ignite.internal.util.ExceptionUtils$1.copy(ExceptionUtils.java:765)
    at 
org.apache.ignite.internal.util.ExceptionUtils$ExceptionFactory.createCopy(ExceptionUtils.java:699)
    at 

[jira] [Created] (IGNITE-21663) Cluster load balancing when 1 node is killed doesn't work

2024-03-04 Thread Igor (Jira)
Igor created IGNITE-21663:
-

 Summary: Cluster load balancing when 1 node is killed doesn't work
 Key: IGNITE-21663
 URL: https://issues.apache.org/jira/browse/IGNITE-21663
 Project: Ignite
  Issue Type: Improvement
  Components: persistence, sql
Affects Versions: 3.0.0-beta1
Reporter: Igor
 Fix For: 3.0.0-beta1


*Steps to reproduce:*
 # Start cluster with 2 nodes running locally.
 # Make connection like this:

{code:java}
try (IgniteClient igniteClient = IgniteClient.builder()
        .retryPolicy(new RetryLimitPolicy())
        .addresses("localhost:10800", "localhost:10801")
        .build()) {
    try (Session session = igniteClient.sql().createSession()) {
        // code here
    }
} {code}
3. Create a table with replication factor 2.

4. Insert 1 row and select it from the table.
5. Kill the first node (the first one in the connection address list).
6. Execute a select from the table again.

*Expected:*
The cluster keeps working with the remaining node.
*Actual:*
An exception is thrown and the select is not executed.

*Comments:*
The Java client sends the request to the working node, but that node tries to connect to the killed one and gets a connection exception.

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21639) Server after kill does not start and stuck on election

2024-03-01 Thread Igor (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17822465#comment-17822465
 ] 

Igor commented on IGNITE-21639:
---

The run with logs 
https://ggtc.gridgain.com/buildConfiguration/Qa_PocTesterAwsBuildTypeAI3/10704411?hideTestsFromDependencies=false=false=false=true

> Server after kill does not start and stuck on election 
> ---
>
> Key: IGNITE-21639
> URL: https://issues.apache.org/jira/browse/IGNITE-21639
> Project: Ignite
>  Issue Type: Improvement
>  Components: general, networking, platforms
>Affects Versions: 3.0.0-beta1
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
> Attachments: 
> poc-tester-SERVER-192.168.1.117-id-0-2024-02-29-22-56-11-client.log.0
>
>
> *Steps to reproduce:*
>  # Start the 3 nodes cluster on different machine each (not in docker).
>  # Insert about 500 000 rows across 500 tables. Replication is 3.
>  # Kill one node.
>  # Start killed node.
> *Expected:*
> The node is started, joined to the cluster and works normally.
> Actual:
> The node stucks on starting with repeating messages like this:
> {code:java}
> 2024-02-29 23:06:21:261 +0300 
> [INFO][%poc-tester-SERVER-192.168.1.117-id-0%JRaft-ElectionTimer-18][NodeImpl]
>  Unsuccessful election round number 128
> 2024-02-29 23:06:21:261 +0300 
> [INFO][%poc-tester-SERVER-192.168.1.117-id-0%JRaft-ElectionTimer-18][NodeImpl]
>  Node <154_part_24/poc-tester-SERVER-192.168.1.117-id-0> term 3 start 
> preVote. 
> 2024-02-29 23:06:21:282 +0300 
> [ERROR][%poc-tester-SERVER-192.168.1.117-id-0%JRaft-FSMCaller-Disruptor_stripe_5-0][StripedDisruptor]
>  Handle disruptor event error 
> [name=%poc-tester-SERVER-192.168.1.117-id-0%JRaft-FSMCaller-Disruptor-, 
> event=org.apache.ignite.raft.jraft.core.FSMCallerImpl$ApplyTask@efb699b, 
> hasHandler=false]
> java.lang.AssertionError: Safe time reordering detected 
> [current=112016525904248838, proposed=112016523364991002]
>     at 
> org.apache.ignite.internal.table.distributed.raft.PartitionListener.lambda$onWrite$1(PartitionListener.java:169)
>     at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
>     at 
> org.apache.ignite.internal.table.distributed.raft.PartitionListener.onWrite(PartitionListener.java:159)
>     at 
> org.apache.ignite.internal.raft.server.impl.JraftServerImpl$DelegatingStateMachine.onApply(JraftServerImpl.java:674)
>     at 
> org.apache.ignite.raft.jraft.core.FSMCallerImpl.doApplyTasks(FSMCallerImpl.java:557)
>     at 
> org.apache.ignite.raft.jraft.core.FSMCallerImpl.doCommitted(FSMCallerImpl.java:525)
>     at 
> org.apache.ignite.raft.jraft.core.FSMCallerImpl.runApplyTask(FSMCallerImpl.java:444)
>     at 
> org.apache.ignite.raft.jraft.core.FSMCallerImpl$ApplyTaskHandler.onEvent(FSMCallerImpl.java:136)
>     at 
> org.apache.ignite.raft.jraft.core.FSMCallerImpl$ApplyTaskHandler.onEvent(FSMCallerImpl.java:130)
>     at 
> org.apache.ignite.raft.jraft.disruptor.StripedDisruptor$StripeEntryHandler.onEvent(StripedDisruptor.java:266)
>     at 
> org.apache.ignite.raft.jraft.disruptor.StripedDisruptor$StripeEntryHandler.onEvent(StripedDisruptor.java:231)
>     at 
> com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:137)
>     at java.base/java.lang.Thread.run(Thread.java:829){code}
>  
> [^poc-tester-SERVER-192.168.1.117-id-0-2024-02-29-22-56-11-client.log.0]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21639) Server after kill does not start and stuck on election

2024-02-29 Thread Igor (Jira)
Igor created IGNITE-21639:
-

 Summary: Server after kill does not start and stuck on election 
 Key: IGNITE-21639
 URL: https://issues.apache.org/jira/browse/IGNITE-21639
 Project: Ignite
  Issue Type: Improvement
  Components: general, networking, platforms
Affects Versions: 3.0.0-beta1
Reporter: Igor
 Attachments: 
poc-tester-SERVER-192.168.1.117-id-0-2024-02-29-22-56-11-client.log.0

*Steps to reproduce:*
 # Start a 3-node cluster, each node on a separate machine (not in Docker).
 # Insert about 500 000 rows across 500 tables (replication factor 3); a load sketch follows this list.
 # Kill one node.
 # Start the killed node.
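A rough sketch of the data load in step 2 as a JDBC fragment; the endpoint, schema, and the 500 x 1000 split are assumptions that only approximate the described volume:
{code:java}
// Assumption: java.sql imports (Connection, DriverManager, Statement) and a reachable thin JDBC endpoint.
try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://192.168.1.117:10800");
     Statement st = conn.createStatement()) {
    for (int t = 0; t < 500; t++) {
        st.executeUpdate("CREATE TABLE load_" + t + " (id INT PRIMARY KEY, payload VARCHAR)");
        for (int r = 0; r < 1000; r++) {
            // 500 tables x 1000 rows ~= 500 000 rows; replication is configured on the zone/table.
            st.executeUpdate("INSERT INTO load_" + t + " (id, payload) VALUES (" + r + ", 'row-" + r + "')");
        }
    }
}
{code}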

*Expected:*
The node starts, joins the cluster, and works normally.

*Actual:*
The node gets stuck on startup with repeating messages like this:
{code:java}
2024-02-29 23:06:21:261 +0300 
[INFO][%poc-tester-SERVER-192.168.1.117-id-0%JRaft-ElectionTimer-18][NodeImpl] 
Unsuccessful election round number 128
2024-02-29 23:06:21:261 +0300 
[INFO][%poc-tester-SERVER-192.168.1.117-id-0%JRaft-ElectionTimer-18][NodeImpl] 
Node <154_part_24/poc-tester-SERVER-192.168.1.117-id-0> term 3 start preVote. 
2024-02-29 23:06:21:282 +0300 
[ERROR][%poc-tester-SERVER-192.168.1.117-id-0%JRaft-FSMCaller-Disruptor_stripe_5-0][StripedDisruptor]
 Handle disruptor event error 
[name=%poc-tester-SERVER-192.168.1.117-id-0%JRaft-FSMCaller-Disruptor-, 
event=org.apache.ignite.raft.jraft.core.FSMCallerImpl$ApplyTask@efb699b, 
hasHandler=false]
java.lang.AssertionError: Safe time reordering detected 
[current=112016525904248838, proposed=112016523364991002]
    at 
org.apache.ignite.internal.table.distributed.raft.PartitionListener.lambda$onWrite$1(PartitionListener.java:169)
    at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
    at 
org.apache.ignite.internal.table.distributed.raft.PartitionListener.onWrite(PartitionListener.java:159)
    at 
org.apache.ignite.internal.raft.server.impl.JraftServerImpl$DelegatingStateMachine.onApply(JraftServerImpl.java:674)
    at 
org.apache.ignite.raft.jraft.core.FSMCallerImpl.doApplyTasks(FSMCallerImpl.java:557)
    at 
org.apache.ignite.raft.jraft.core.FSMCallerImpl.doCommitted(FSMCallerImpl.java:525)
    at 
org.apache.ignite.raft.jraft.core.FSMCallerImpl.runApplyTask(FSMCallerImpl.java:444)
    at 
org.apache.ignite.raft.jraft.core.FSMCallerImpl$ApplyTaskHandler.onEvent(FSMCallerImpl.java:136)
    at 
org.apache.ignite.raft.jraft.core.FSMCallerImpl$ApplyTaskHandler.onEvent(FSMCallerImpl.java:130)
    at 
org.apache.ignite.raft.jraft.disruptor.StripedDisruptor$StripeEntryHandler.onEvent(StripedDisruptor.java:266)
    at 
org.apache.ignite.raft.jraft.disruptor.StripedDisruptor$StripeEntryHandler.onEvent(StripedDisruptor.java:231)
    at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:137)
    at java.base/java.lang.Thread.run(Thread.java:829){code}
 

[^poc-tester-SERVER-192.168.1.117-id-0-2024-02-29-22-56-11-client.log.0]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21577) JDBC throws exception when multiple endpoints are used

2024-02-20 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-21577:
--
Description: 
*Steps to reproduce:*
1. Start a cluster of 2 nodes.

2. Execute the code:
{code:java}
try (Connection connection = 
DriverManager.getConnection("jdbc:ignite:thin://localhost:10800,localhost:10801");
Statement statement = connection.createStatement()) {
statement.executeUpdate("CREATE TABLE Person (id INT PRIMARY KEY, name 
VARCHAR)");
statement.executeUpdate("INSERT INTO Person (id, name) VALUES (1, 'John')");
} {code}
*Expected:*
The code is executed.

*Actual:*
The error is thrown on the last insert statement.
{code:java}
java.sql.SQLException: java.util.concurrent.ExecutionException: 
org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
TraceId:bcc4730a-471d-4f66-b637-fb083e1f88a2 Failed to find resource with id: 1
at 
org.apache.ignite.internal.jdbc.JdbcStatement.toSqlException(JdbcStatement.java:781)
at 
org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:148)
at 
org.apache.ignite.internal.jdbc.JdbcStatement.executeUpdate(JdbcStatement.java:181)
at 
org.gridgain.ai3tests.tests.ConnectionTest.testSaveAndGetFromCache(ConnectionTest.java:47)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1540)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1540)
Caused by: java.util.concurrent.ExecutionException: 
org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
TraceId:bcc4730a-471d-4f66-b637-fb083e1f88a2 Failed to find resource with id: 1
at 
java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
at 
java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
at 
org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:144)
... 5 more
Caused by: org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
TraceId:bcc4730a-471d-4f66-b637-fb083e1f88a2 Failed to find resource with id: 1
at 
java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710)
at 
org.apache.ignite.internal.util.ExceptionUtils$1.copy(ExceptionUtils.java:765)
at 
org.apache.ignite.internal.util.ExceptionUtils$ExceptionFactory.createCopy(ExceptionUtils.java:699)
at 
org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:525)
at 
org.apache.ignite.internal.client.TcpClientChannel.readError(TcpClientChannel.java:508)
at 
org.apache.ignite.internal.client.TcpClientChannel.processNextMessage(TcpClientChannel.java:397)
at 
org.apache.ignite.internal.client.TcpClientChannel.lambda$onMessage$3(TcpClientChannel.java:238)
at 
java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1426)
at 
java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
at 
java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
at 
java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
at 
java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
at 
java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177)
 {code}
*Comments:*

The code works fine if Java API is used.

  was:
*Steps to reproduce:*
1. Start cluster of 2 nodes.

2. Execute the code:
{code:java}
try (Connection connection = 
DriverManager.getConnection("jdbc:ignite:thin://localhost:10800,localhost:10801");
Statement statement = connection.createStatement()) {
statement.executeUpdate("CREATE TABLE Person (id INT PRIMARY KEY, name 
VARCHAR)");
statement.executeUpdate("INSERT INTO Person (id, name) VALUES (1, 'John')");
} {code}
*Expected:*
The code is executed.

*Actual:*
The error is thrown on last insert statement.
{code:java}
java.sql.SQLException: java.util.concurrent.ExecutionException: 
org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
TraceId:bcc4730a-471d-4f66-b637-fb083e1f88a2 Failed to find resource with id: 1
at 
org.apache.ignite.internal.jdbc.JdbcStatement.toSqlException(JdbcStatement.java:781)
at 
org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:148)
at 
org.apache.ignite.internal.jdbc.JdbcStatement.executeUpdate(JdbcStatement.java:181)
at 
org.gridgain.ai3tests.tests.ConnectionTest.testSaveAndGetFromCache(ConnectionTest.java:47)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1540)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1540)
Caused by: java.util.concurrent.ExecutionException: 
org.apache.ignite.lang.IgniteException: IGN-CMN-65535 

[jira] [Resolved] (IGNITE-21577) JDBC throws exception when multiple endpoints are used

2024-02-20 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor resolved IGNITE-21577.
---
Resolution: Won't Fix

The functionality is not supported.

> JDBC throws exception when multiple endpoints are used
> --
>
> Key: IGNITE-21577
> URL: https://issues.apache.org/jira/browse/IGNITE-21577
> Project: Ignite
>  Issue Type: Improvement
>  Components: jdbc
>Affects Versions: 3.0.0-beta1
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>
> *Steps to reproduce:*
> 1. Start cluster of 2 nodes.
> 2. Execute the code:
> {code:java}
> try (Connection connection = 
> DriverManager.getConnection("jdbc:ignite:thin://localhost:10800,localhost:10801");
> Statement statement = connection.createStatement()) {
> statement.executeUpdate("CREATE TABLE Person (id INT PRIMARY KEY, name 
> VARCHAR)");
> statement.executeUpdate("INSERT INTO Person (id, name) VALUES (1, 
> 'John')");
> } {code}
> *Expected:*
> The code is executed.
> *Actual:*
> The error is thrown on last insert statement.
> {code:java}
> java.sql.SQLException: java.util.concurrent.ExecutionException: 
> org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
> TraceId:bcc4730a-471d-4f66-b637-fb083e1f88a2 Failed to find resource with id: 
> 1
>   at 
> org.apache.ignite.internal.jdbc.JdbcStatement.toSqlException(JdbcStatement.java:781)
>   at 
> org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:148)
>   at 
> org.apache.ignite.internal.jdbc.JdbcStatement.executeUpdate(JdbcStatement.java:181)
>   at 
> org.gridgain.ai3tests.tests.ConnectionTest.testSaveAndGetFromCache(ConnectionTest.java:47)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at java.base/java.util.ArrayList.forEach(ArrayList.java:1540)
>   at java.base/java.util.ArrayList.forEach(ArrayList.java:1540)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
> TraceId:bcc4730a-471d-4f66-b637-fb083e1f88a2 Failed to find resource with id: 
> 1
>   at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
>   at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
>   at 
> org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:144)
>   ... 5 more
> Caused by: org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
> TraceId:bcc4730a-471d-4f66-b637-fb083e1f88a2 Failed to find resource with id: 
> 1
>   at 
> java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils$1.copy(ExceptionUtils.java:765)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils$ExceptionFactory.createCopy(ExceptionUtils.java:699)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:525)
>   at 
> org.apache.ignite.internal.client.TcpClientChannel.readError(TcpClientChannel.java:508)
>   at 
> org.apache.ignite.internal.client.TcpClientChannel.processNextMessage(TcpClientChannel.java:397)
>   at 
> org.apache.ignite.internal.client.TcpClientChannel.lambda$onMessage$3(TcpClientChannel.java:238)
>   at 
> java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1426)
>   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
>   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
>   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
>   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
>   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177)
>  {code}
> *Comments:*
> The multiple hosts for JDBC is supported according to documentation:
> [https://ignite.apache.org/docs/3.0.0-beta/sql/jdbc-driver]
> Also the code works fine if Java API is used.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (IGNITE-21577) JDBC throws exception when multiple endpoints are used

2024-02-20 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor closed IGNITE-21577.
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> JDBC throws exception when multiple endpoints are used
> --
>
> Key: IGNITE-21577
> URL: https://issues.apache.org/jira/browse/IGNITE-21577
> Project: Ignite
>  Issue Type: Improvement
>  Components: jdbc
>Affects Versions: 3.0.0-beta1
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>
> *Steps to reproduce:*
> 1. Start cluster of 2 nodes.
> 2. Execute the code:
> {code:java}
> try (Connection connection = 
> DriverManager.getConnection("jdbc:ignite:thin://localhost:10800,localhost:10801");
> Statement statement = connection.createStatement()) {
> statement.executeUpdate("CREATE TABLE Person (id INT PRIMARY KEY, name 
> VARCHAR)");
> statement.executeUpdate("INSERT INTO Person (id, name) VALUES (1, 
> 'John')");
> } {code}
> *Expected:*
> The code is executed.
> *Actual:*
> The error is thrown on last insert statement.
> {code:java}
> java.sql.SQLException: java.util.concurrent.ExecutionException: 
> org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
> TraceId:bcc4730a-471d-4f66-b637-fb083e1f88a2 Failed to find resource with id: 
> 1
>   at 
> org.apache.ignite.internal.jdbc.JdbcStatement.toSqlException(JdbcStatement.java:781)
>   at 
> org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:148)
>   at 
> org.apache.ignite.internal.jdbc.JdbcStatement.executeUpdate(JdbcStatement.java:181)
>   at 
> org.gridgain.ai3tests.tests.ConnectionTest.testSaveAndGetFromCache(ConnectionTest.java:47)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at java.base/java.util.ArrayList.forEach(ArrayList.java:1540)
>   at java.base/java.util.ArrayList.forEach(ArrayList.java:1540)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
> TraceId:bcc4730a-471d-4f66-b637-fb083e1f88a2 Failed to find resource with id: 
> 1
>   at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
>   at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
>   at 
> org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:144)
>   ... 5 more
> Caused by: org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
> TraceId:bcc4730a-471d-4f66-b637-fb083e1f88a2 Failed to find resource with id: 
> 1
>   at 
> java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils$1.copy(ExceptionUtils.java:765)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils$ExceptionFactory.createCopy(ExceptionUtils.java:699)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:525)
>   at 
> org.apache.ignite.internal.client.TcpClientChannel.readError(TcpClientChannel.java:508)
>   at 
> org.apache.ignite.internal.client.TcpClientChannel.processNextMessage(TcpClientChannel.java:397)
>   at 
> org.apache.ignite.internal.client.TcpClientChannel.lambda$onMessage$3(TcpClientChannel.java:238)
>   at 
> java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1426)
>   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
>   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
>   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
>   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
>   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177)
>  {code}
> *Comments:*
> The multiple hosts for JDBC is supported according to documentation:
> [https://ignite.apache.org/docs/3.0.0-beta/sql/jdbc-driver]
> Also the code works fine if Java API is used.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21577) JDBC throws exception when multiple endpoints are used

2024-02-20 Thread Igor (Jira)
Igor created IGNITE-21577:
-

 Summary: JDBC throws exception when multiple endpoints are used
 Key: IGNITE-21577
 URL: https://issues.apache.org/jira/browse/IGNITE-21577
 Project: Ignite
  Issue Type: Improvement
  Components: jdbc
Affects Versions: 3.0.0-beta1
Reporter: Igor


*Steps to reproduce:*
1. Start a cluster of 2 nodes.

2. Execute the code:
{code:java}
try (Connection connection = 
DriverManager.getConnection("jdbc:ignite:thin://localhost:10800,localhost:10801");
Statement statement = connection.createStatement()) {
statement.executeUpdate("CREATE TABLE Person (id INT PRIMARY KEY, name 
VARCHAR)");
statement.executeUpdate("INSERT INTO Person (id, name) VALUES (1, 'John')");
} {code}
*Expected:*
The code is executed.

*Actual:*
The error is thrown on the last insert statement.
{code:java}
java.sql.SQLException: java.util.concurrent.ExecutionException: 
org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
TraceId:bcc4730a-471d-4f66-b637-fb083e1f88a2 Failed to find resource with id: 1
at 
org.apache.ignite.internal.jdbc.JdbcStatement.toSqlException(JdbcStatement.java:781)
at 
org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:148)
at 
org.apache.ignite.internal.jdbc.JdbcStatement.executeUpdate(JdbcStatement.java:181)
at 
org.gridgain.ai3tests.tests.ConnectionTest.testSaveAndGetFromCache(ConnectionTest.java:47)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1540)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1540)
Caused by: java.util.concurrent.ExecutionException: 
org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
TraceId:bcc4730a-471d-4f66-b637-fb083e1f88a2 Failed to find resource with id: 1
at 
java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
at 
java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
at 
org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:144)
... 5 more
Caused by: org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
TraceId:bcc4730a-471d-4f66-b637-fb083e1f88a2 Failed to find resource with id: 1
at 
java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710)
at 
org.apache.ignite.internal.util.ExceptionUtils$1.copy(ExceptionUtils.java:765)
at 
org.apache.ignite.internal.util.ExceptionUtils$ExceptionFactory.createCopy(ExceptionUtils.java:699)
at 
org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:525)
at 
org.apache.ignite.internal.client.TcpClientChannel.readError(TcpClientChannel.java:508)
at 
org.apache.ignite.internal.client.TcpClientChannel.processNextMessage(TcpClientChannel.java:397)
at 
org.apache.ignite.internal.client.TcpClientChannel.lambda$onMessage$3(TcpClientChannel.java:238)
at 
java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1426)
at 
java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
at 
java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
at 
java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
at 
java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
at 
java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177)
 {code}
*Comments:*

Multiple hosts in the JDBC URL are supported according to the documentation:
[https://ignite.apache.org/docs/3.0.0-beta/sql/jdbc-driver]
Also, the same scenario works fine when the Java API is used (see the sketch below).
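For comparison, a sketch of the same flow through the Java client API, which this report says works; the builder and Session calls mirror other snippets in this thread, the rest is an assumption:
{code:java}
try (IgniteClient client = IgniteClient.builder()
        .addresses("localhost:10800", "localhost:10801")
        .build();
     Session session = client.sql().createSession()) {
    // The same DDL/DML that fails over JDBC with multiple endpoints.
    session.execute(null, "CREATE TABLE Person (id INT PRIMARY KEY, name VARCHAR)");
    session.execute(null, "INSERT INTO Person (id, name) VALUES (1, 'John')");
}
{code}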



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21489) Add running script files to .jar file

2024-02-07 Thread Igor (Jira)
Igor created IGNITE-21489:
-

 Summary: Add running script files to .jar file
 Key: IGNITE-21489
 URL: https://issues.apache.org/jira/browse/IGNITE-21489
 Project: Ignite
  Issue Type: Improvement
  Components: binary, build
Affects Versions: 3.0.0-beta1
Reporter: Igor


The `ignite-runner-3.0.0-SNAPSHOT.jar` requires a lot of custom setup to run the server. It would be useful to ship the setup scripts inside the jar so that they can be unpacked and used.

The files required to be added to `ignite-runner-3.0.0-SNAPSHOT.jar`:
 # bootstrap-functions.sh
 # ignite.java.util.logging.properties
 # ignite3db
 # ignite-config.conf
 # setup-java.sh
 # vars.env



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21472) Exception: Invalid length for a tuple element on prepared statement

2024-02-06 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-21472:
--
Description: 
h3. Steps to reproduce:

Run the following code:
{code:java}
try (Session session = 
IgniteClient.builder().addresses("localhost:10800").build().sql()
.createSession();) {

session.execute(null, "drop table if exists ttable");
session.execute(null, "create table ttable("
+ "keyTINYINT0 TINYINT not null, "
+ "keySMALLINT1 SMALLINT not null, "
+ "keyINTEGER2 INTEGER not null, "
+ "keyTINYINT3 TINYINT not null, "
+ "val INTEGER not null, "
+ "primary key (keyTINYINT0, keySMALLINT1, keyINTEGER2, 
keyTINYINT3))");
session.execute(null, "select keyTINYINT0, keySMALLINT1, keyINTEGER2, 
keyTINYINT3, val from ttable  "
+ "where keyTINYINT0 = ? AND keySMALLINT1 = ? AND 
keyINTEGER2 = ? AND keyTINYINT3 = ? AND val = ? ",
new Object[]{(byte) -87, (short)19507, 25781820, (byte)-84, 
116522});
} {code}
h3. Expected:

The code runs successfully.

*Actual:*
The exception is thrown on the last statement:
{code:java}
org.apache.ignite.sql.SqlException: IGN-CMN-65535 
TraceId:cfa96929-079c-4434-a826-1eea7d307d3f Invalid length for a tuple 
element: 4
    at 
java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710)
    at 
org.apache.ignite.internal.util.ExceptionUtils$1.copy(ExceptionUtils.java:765)
    at 
org.apache.ignite.internal.util.ExceptionUtils$ExceptionFactory.createCopy(ExceptionUtils.java:699)
    at 
org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:525)
    at 
org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCauseInternal(ExceptionUtils.java:634)
    at 
org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:476)
    at 
org.apache.ignite.internal.sql.AbstractSession.execute(AbstractSession.java:63)
    at 
org.gridgain.ai3tests.tests.BasicAi3OperationsTest.testSaveAndGetFromCachee(BasicAi3OperationsTest.java:66)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at java.base/java.util.ArrayList.forEach(ArrayList.java:1540)
    at java.base/java.util.ArrayList.forEach(ArrayList.java:1540)
Caused by: java.util.concurrent.CompletionException: 
org.apache.ignite.sql.SqlException: IGN-CMN-65535 
TraceId:cfa96929-079c-4434-a826-1eea7d307d3f Invalid length for a tuple 
element: 4
    at 
java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
    at 
java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
    at 
java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870)
    at 
java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
    at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
    at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
    at 
org.apache.ignite.internal.client.TcpClientChannel.processNextMessage(TcpClientChannel.java:419)
    at 
org.apache.ignite.internal.client.TcpClientChannel.lambda$onMessage$3(TcpClientChannel.java:238)
    at 
java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1426)
    at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
    at 
java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
    at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
    at 
java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
    at 
java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177)
Caused by: org.apache.ignite.sql.SqlException: IGN-CMN-65535 
TraceId:cfa96929-079c-4434-a826-1eea7d307d3f Invalid length for a tuple 
element: 4
    at 
java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710)
    at 
org.apache.ignite.internal.util.ExceptionUtils$1.copy(ExceptionUtils.java:765)
    at 
org.apache.ignite.internal.util.ExceptionUtils$ExceptionFactory.createCopy(ExceptionUtils.java:699)
    at 
org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:525)
    at 
org.apache.ignite.internal.client.TcpClientChannel.readError(TcpClientChannel.java:508)
    at 
org.apache.ignite.internal.client.TcpClientChannel.processNextMessage(TcpClientChannel.java:397)
    ... 7 more {code}

  was:
h3. Steps to reproduce:
h3. Run the next code:
{code:java}
try (Session session = 
IgniteClient.builder().addresses("localhost:10800").build().sql()
.createSession();) {

session.execute(null, "drop table if exists ttable");
session.execute(null, "create table ttable("
+ "keyTINYINT0 

[jira] [Created] (IGNITE-21472) Exception: Invalid length for a tuple element on prepared statement

2024-02-06 Thread Igor (Jira)
Igor created IGNITE-21472:
-

 Summary: Exception: Invalid length for a tuple element on prepared 
statement
 Key: IGNITE-21472
 URL: https://issues.apache.org/jira/browse/IGNITE-21472
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 3.0.0-beta1
Reporter: Igor


h3. Steps to reproduce:
Run the following code:
{code:java}
try (Session session = 
IgniteClient.builder().addresses("localhost:10800").build().sql()
.createSession();) {

session.execute(null, "drop table if exists ttable");
session.execute(null, "create table ttable("
+ "keyTINYINT0 TINYINT not null, "
+ "keySMALLINT1 SMALLINT not null, "
+ "keyINTEGER2 INTEGER not null, "
+ "keyTINYINT3 TINYINT not null, "
+ "val INTEGER not null, "
+ "primary key (keyTINYINT0, keySMALLINT1, keyINTEGER2, 
keyTINYINT3))");
session.execute(null, "select keyTINYINT0, keySMALLINT1, keyINTEGER2, 
keyTINYINT3, val from ttable  "
+ "where keyTINYINT0 = ? AND keySMALLINT1 = ? AND 
keyINTEGER2 = ? AND keyTINYINT3 = ? AND val = ? ",
new Object[]{(byte) -87, (short)19507, 25781820, (byte)-84, 
116522});
} {code}
h3. Expected:

The code runs successfully.

*Actual:*
The exception:
{code:java}
org.apache.ignite.sql.SqlException: IGN-CMN-65535 
TraceId:cfa96929-079c-4434-a826-1eea7d307d3f Invalid length for a tuple 
element: 4
    at 
java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710)
    at 
org.apache.ignite.internal.util.ExceptionUtils$1.copy(ExceptionUtils.java:765)
    at 
org.apache.ignite.internal.util.ExceptionUtils$ExceptionFactory.createCopy(ExceptionUtils.java:699)
    at 
org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:525)
    at 
org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCauseInternal(ExceptionUtils.java:634)
    at 
org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:476)
    at 
org.apache.ignite.internal.sql.AbstractSession.execute(AbstractSession.java:63)
    at 
org.gridgain.ai3tests.tests.BasicAi3OperationsTest.testSaveAndGetFromCachee(BasicAi3OperationsTest.java:66)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at java.base/java.util.ArrayList.forEach(ArrayList.java:1540)
    at java.base/java.util.ArrayList.forEach(ArrayList.java:1540)
Caused by: java.util.concurrent.CompletionException: 
org.apache.ignite.sql.SqlException: IGN-CMN-65535 
TraceId:cfa96929-079c-4434-a826-1eea7d307d3f Invalid length for a tuple 
element: 4
    at 
java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
    at 
java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
    at 
java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870)
    at 
java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
    at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
    at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
    at 
org.apache.ignite.internal.client.TcpClientChannel.processNextMessage(TcpClientChannel.java:419)
    at 
org.apache.ignite.internal.client.TcpClientChannel.lambda$onMessage$3(TcpClientChannel.java:238)
    at 
java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1426)
    at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
    at 
java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
    at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
    at 
java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
    at 
java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177)
Caused by: org.apache.ignite.sql.SqlException: IGN-CMN-65535 
TraceId:cfa96929-079c-4434-a826-1eea7d307d3f Invalid length for a tuple 
element: 4
    at 
java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710)
    at 
org.apache.ignite.internal.util.ExceptionUtils$1.copy(ExceptionUtils.java:765)
    at 
org.apache.ignite.internal.util.ExceptionUtils$ExceptionFactory.createCopy(ExceptionUtils.java:699)
    at 
org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:525)
    at 
org.apache.ignite.internal.client.TcpClientChannel.readError(TcpClientChannel.java:508)
    at 
org.apache.ignite.internal.client.TcpClientChannel.processNextMessage(TcpClientChannel.java:397)
    ... 7 more {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21451) RocksDB: repeat of create table and drop column leads to freeze of client

2024-02-05 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-21451:
--
Description: 
h3. Steps to reproduce:

Run the following queries in a cycle of 50 repeats over a single connection (see the loop sketch after the query list):
{code:java}
drop table if exists dropNoMoreIndexedColumn
create zone if not exists "rocksdb" engine rocksdb
create table dropNoMoreIndexedColumn(k1 TIMESTAMP not null, k2 INTEGER not 
null, v0 TINYINT not null, v1 SMALLINT not null, v2 INTEGER not null, v3 BIGINT 
not null, v4 VARCHAR not null, v5 TIMESTAMP not null, primary key (k1, k2)) 
with PRIMARY_ZONE='rocksdb'
create index dropNoMoreIndexedColumn_v1idx on dropNoMoreIndexedColumn using 
TREE (v1)
drop index dropNoMoreIndexedColumn_v1idx
alter table dropNoMoreIndexedColumn drop column v1 {code}
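A sketch of the driving loop, assuming a single SQL `Session` as in the other snippets in this thread; only the loop itself is new, the statements are quoted from the list above:
{code:java}
String[] statements = {
    "drop table if exists dropNoMoreIndexedColumn",
    "create zone if not exists \"rocksdb\" engine rocksdb",
    "create table dropNoMoreIndexedColumn(k1 TIMESTAMP not null, k2 INTEGER not null, v0 TINYINT not null, "
        + "v1 SMALLINT not null, v2 INTEGER not null, v3 BIGINT not null, v4 VARCHAR not null, "
        + "v5 TIMESTAMP not null, primary key (k1, k2)) with PRIMARY_ZONE='rocksdb'",
    "create index dropNoMoreIndexedColumn_v1idx on dropNoMoreIndexedColumn using TREE (v1)",
    "drop index dropNoMoreIndexedColumn_v1idx",
    "alter table dropNoMoreIndexedColumn drop column v1"
};
for (int repeat = 0; repeat < 50; repeat++) {
    for (String sql : statements) {
        session.execute(null, sql); // freezes around repeat 31 when the rocksdb zone is used
    }
}
{code}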
h3. Expected:

All queries are executed.
h3. Actual:

On repeat 31 the client freezes indefinitely.
h3. Analysis:

With the aimem and aipersist storages the issue does not happen.

The server logs contain a repeated error:
{code:java}
2024-02-05 13:47:24:812 +0100 
[ERROR][%DropColumnsTest_cluster_0%JRaft-FSMCaller-Disruptor-metastorage-_stripe_0-0][WatchProcessor]
 Error occurred when notifying safe time advanced callback
java.util.concurrent.CompletionException: 
java.lang.UnsupportedOperationException: Update log is not supported in RocksDB 
storage.
  at 
java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314)
  at 
java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:319)
  at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:645)
  at 
java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.UnsupportedOperationException: Update log is not supported 
in RocksDB storage.
  at 
org.apache.ignite.internal.storage.rocksdb.RocksDbMvPartitionStorage.trimUpdateLog(RocksDbMvPartitionStorage.java:908)
  at 
org.apache.ignite.internal.table.distributed.raft.snapshot.outgoing.SnapshotAwarePartitionDataStorage.trimUpdateLog(SnapshotAwarePartitionDataStorage.java:244)
  at 
org.apache.ignite.internal.table.distributed.gc.GcUpdateHandler.lambda$vacuumBatch$0(GcUpdateHandler.java:81)
  at 
org.apache.ignite.internal.storage.rocksdb.RocksDbMvPartitionStorage.lambda$runConsistently$2(RocksDbMvPartitionStorage.java:228)
  at 
org.apache.ignite.internal.storage.rocksdb.RocksDbMvPartitionStorage.busy(RocksDbMvPartitionStorage.java:1431)
  at 
org.apache.ignite.internal.storage.rocksdb.RocksDbMvPartitionStorage.runConsistently(RocksDbMvPartitionStorage.java:213)
  at 
org.apache.ignite.internal.table.distributed.raft.snapshot.outgoing.SnapshotAwarePartitionDataStorage.runConsistently(SnapshotAwarePartitionDataStorage.java:80)
  at 
org.apache.ignite.internal.table.distributed.gc.GcUpdateHandler.vacuumBatch(GcUpdateHandler.java:80)
  at 
org.apache.ignite.internal.table.distributed.gc.MvGc.lambda$scheduleGcForStorage$7(MvGc.java:242)
  at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
  ... 4 more  {code}

  was:
h3. Steps to reproduce:

Run the next queries in cycle with 50 repeats in single connection:
{code:java}
drop table if exists dropNoMoreIndexedColumn
create zone if not exists "rocksdb" engine rocksdb
create table dropNoMoreIndexedColumn(k1 TIMESTAMP not null, k2 INTEGER not 
null, v0 TINYINT not null, v1 SMALLINT not null, v2 INTEGER not null, v3 BIGINT 
not null, v4 VARCHAR not null, v5 TIMESTAMP not null, primary key (k1, k2)) 
with PRIMARY_ZONE='rocksdb'
create index dropNoMoreIndexedColumn_v1idx on dropNoMoreIndexedColumn using 
TREE (v1)
drop index dropNoMoreIndexedColumn_v1idx
alter table dropNoMoreIndexedColumn drop column v1 {code}
h3. Expected:

All queries are executed.
h3. Actual:

On repeat 31 the client freeze for infinite amount of time.
h3. Analysis:

The servers contain repeated error in logs:
{code:java}
2024-02-05 13:47:24:812 +0100 
[ERROR][%DropColumnsTest_cluster_0%JRaft-FSMCaller-Disruptor-metastorage-_stripe_0-0][WatchProcessor]
 Error occurred when notifying safe time advanced callback
java.util.concurrent.CompletionException: 
java.lang.UnsupportedOperationException: Update log is not supported in RocksDB 
storage.
  at 
java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314)
  at 
java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:319)
  at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:645)
  at 

[jira] [Created] (IGNITE-21451) RocksDB: repeat of create table and drop column leads to freeze of client

2024-02-05 Thread Igor (Jira)
Igor created IGNITE-21451:
-

 Summary: RocksDB: repeat of create table and drop column leads to 
freeze of client
 Key: IGNITE-21451
 URL: https://issues.apache.org/jira/browse/IGNITE-21451
 Project: Ignite
  Issue Type: Bug
  Components: persistence
Affects Versions: 3.0.0-beta1
 Environment: 2 nodes cluster running locally.
Reporter: Igor


h3. Steps to reproduce:

Run the following queries in a cycle of 50 repeats over a single connection:
{code:java}
drop table if exists dropNoMoreIndexedColumn
create zone if not exists "rocksdb" engine rocksdb
create table dropNoMoreIndexedColumn(k1 TIMESTAMP not null, k2 INTEGER not 
null, v0 TINYINT not null, v1 SMALLINT not null, v2 INTEGER not null, v3 BIGINT 
not null, v4 VARCHAR not null, v5 TIMESTAMP not null, primary key (k1, k2)) 
with PRIMARY_ZONE='rocksdb'
create index dropNoMoreIndexedColumn_v1idx on dropNoMoreIndexedColumn using 
TREE (v1)
drop index dropNoMoreIndexedColumn_v1idx
alter table dropNoMoreIndexedColumn drop column v1 {code}
h3. Expected:

All queries are executed.
h3. Actual:

On repeat 31 the client freezes indefinitely.
h3. Analysis:

The server logs contain a repeated error:
{code:java}
2024-02-05 13:47:24:812 +0100 
[ERROR][%DropColumnsTest_cluster_0%JRaft-FSMCaller-Disruptor-metastorage-_stripe_0-0][WatchProcessor]
 Error occurred when notifying safe time advanced callback
java.util.concurrent.CompletionException: 
java.lang.UnsupportedOperationException: Update log is not supported in RocksDB 
storage.
  at 
java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314)
  at 
java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:319)
  at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:645)
  at 
java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.UnsupportedOperationException: Update log is not supported 
in RocksDB storage.
  at 
org.apache.ignite.internal.storage.rocksdb.RocksDbMvPartitionStorage.trimUpdateLog(RocksDbMvPartitionStorage.java:908)
  at 
org.apache.ignite.internal.table.distributed.raft.snapshot.outgoing.SnapshotAwarePartitionDataStorage.trimUpdateLog(SnapshotAwarePartitionDataStorage.java:244)
  at 
org.apache.ignite.internal.table.distributed.gc.GcUpdateHandler.lambda$vacuumBatch$0(GcUpdateHandler.java:81)
  at 
org.apache.ignite.internal.storage.rocksdb.RocksDbMvPartitionStorage.lambda$runConsistently$2(RocksDbMvPartitionStorage.java:228)
  at 
org.apache.ignite.internal.storage.rocksdb.RocksDbMvPartitionStorage.busy(RocksDbMvPartitionStorage.java:1431)
  at 
org.apache.ignite.internal.storage.rocksdb.RocksDbMvPartitionStorage.runConsistently(RocksDbMvPartitionStorage.java:213)
  at 
org.apache.ignite.internal.table.distributed.raft.snapshot.outgoing.SnapshotAwarePartitionDataStorage.runConsistently(SnapshotAwarePartitionDataStorage.java:80)
  at 
org.apache.ignite.internal.table.distributed.gc.GcUpdateHandler.vacuumBatch(GcUpdateHandler.java:80)
  at 
org.apache.ignite.internal.table.distributed.gc.MvGc.lambda$scheduleGcForStorage$7(MvGc.java:242)
  at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
  ... 4 more  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21200) IgniteRunner start fails in Windows via git-bash

2024-01-04 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-21200:
--
Labels: ignite-3 windows  (was: ignite-3)

> IgniteRunner start fails in Windows via git-bash
> 
>
> Key: IGNITE-21200
> URL: https://issues.apache.org/jira/browse/IGNITE-21200
> Project: Ignite
>  Issue Type: Bug
>  Components: binary, cli
>Affects Versions: 3.0.0-beta2
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3, windows
> Fix For: 3.0.0-beta2
>
>
> h3. Steps to reproduce:
>  # Build ignite3-db-3.0.0-SNAPSHOT.zip distribution from sources.
>  # Use git-bash to run `./ignite3db start` in Windows.
> h3. Expected:
> IgniteRunner started.
> h3. Actual:
> Error:
> {code:java}
> ./ignite3db: line 38: C:\Program: No such file or directory{code}
> h3. Details:
> The space in path to java (C:\Program Files) is considered as separation 
> between command and arguments. To avoid it, usage of all variables have to 
> replaced to arrays. For example `${JAVA_CMD_WITH_ARGS}` have to be replaced 
> to `${JAVA_CMD_WITH_ARGS[@]}` and so on.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21200) IgniteRunner start fails in Windows via git-bash

2024-01-04 Thread Igor (Jira)
Igor created IGNITE-21200:
-

 Summary: IgniteRunner start fails in Windows via git-bash
 Key: IGNITE-21200
 URL: https://issues.apache.org/jira/browse/IGNITE-21200
 Project: Ignite
  Issue Type: Bug
  Components: binary, cli
Affects Versions: 3.0.0-beta2
Reporter: Igor
 Fix For: 3.0.0-beta2


h3. Steps to reproduce:
 # Build the ignite3-db-3.0.0-SNAPSHOT.zip distribution from sources.
 # Use git-bash to run `./ignite3db start` on Windows.

h3. Expected:

IgniteRunner started.
h3. Actual:

Error:
{code:java}
./ignite3db: line 38: C:\Program: No such file or directory{code}
h3. Details:

The space in the path to Java (C:\Program Files) is treated as a separator between the command and its arguments. To avoid this, all variable usages have to be replaced with arrays: for example, `${JAVA_CMD_WITH_ARGS}` has to be replaced with `${JAVA_CMD_WITH_ARGS[@]}`, and so on.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21088) Impossible to restart node with json config

2023-12-14 Thread Igor (Jira)
Igor created IGNITE-21088:
-

 Summary: Impossible to restart node with json config
 Key: IGNITE-21088
 URL: https://issues.apache.org/jira/browse/IGNITE-21088
 Project: Ignite
  Issue Type: Bug
  Components: general
Affects Versions: 3.0.0-beta2
Reporter: Igor
 Fix For: 3.0.0-beta2


*Steps:*
1. Create an ignite-config.json config instead of ignite-config.conf:
{code:java}
{
  "network" : {
    "nodeFinder" : {
      "netClusterNodes" : [ "localhost:3110", "localhost:3111" ]
    },
    "port" : 3110
  },
  "rest" : {
    "port" : 10550
  },
  "clientConnector" : {
    "port" : 2080
  }
} {code}
2. Start the node.
3. Stop the node.
4. Restart the node.
*Expected:*
The node is restarted.
*Actual:*
The config was rewritten into .conf format (but the filename wasn't changed), and the node didn't start because of the incorrectly formatted config.
{code:java}
aimem {
    defaultRegion {
        emptyPagesPoolSize=100
        evictionMode=DISABLED
        evictionThreshold=0.9
        initSize=13666140160
        maxSize=13666140160
        memoryAllocator {
            type=unsafe
        }
    }
    pageSize=16384
}
aipersist {
    checkpoint {
        checkpointDelayMillis=200
        checkpointThreads=4
        compactionThreads=4
        frequency=18
        frequencyDeviation=40
        logReadLockThresholdTimeout=0
        readLockTimeout=1
        useAsyncFileIoFactory=true
    }
    defaultRegion {
        memoryAllocator {
            type=unsafe
        }
        replacementMode=CLOCK
        size=13666140160
    }
    pageSize=16384
}
clientConnector {
    connectTimeout=5000
    idleTimeout=0
    metricsEnabled=false
    port=2080
    sendServerExceptionStackTraceToClient=false
    ssl {
        ciphers=""
        clientAuth=none
        enabled=false
        keyStore {
            password=""
            path=""
            type=PKCS12
        }
        trustStore {
            password=""
            path=""
            type=PKCS12
        }
    }
}
cluster {
    networkInvokeTimeout=500
}
compute {
    queueMaxSize=2147483647
    statesLifetimeMillis=6
    threadPoolSize=20
    threadPoolStopTimeoutMillis=1
}
deployment {
    deploymentLocation=deployment
}
network {
    fileTransfer {
        chunkSize=1048576
        maxConcurrentRequests=4
        responseTimeout=1
        threadPoolSize=8
    }
    inbound {
        soBacklog=128
        soKeepAlive=true
        soLinger=0
        soReuseAddr=true
        tcpNoDelay=true
    }
    membership {
        failurePingInterval=1000
        membershipSyncInterval=3
        scaleCube {
            failurePingRequestMembers=3
            gossipInterval=200
            gossipRepeatMult=3
            membershipSuspicionMultiplier=5
            metadataTimeout=3000
        }
    }
    nodeFinder {
        netClusterNodes=[
            "localhost:3110",
            "localhost:3111"
        ]
        type=STATIC
    }
    outbound {
        soKeepAlive=true
        soLinger=0
        tcpNoDelay=true
    }
    port=3110
    shutdownQuietPeriod=0
    shutdownTimeout=15000
    ssl {
        ciphers=""
        clientAuth=none
        enabled=false
        keyStore {
            password=""
            path=""
            type=PKCS12
        }
        trustStore {
            password=""
            path=""
            type=PKCS12
        }
    }
}
raft {
    fsync=true
    responseTimeout=3000
    retryDelay=200
    retryTimeout=1
    rpcInstallSnapshotTimeout=30
    volatileRaft {
        logStorage {
            name=unlimited
        }
    }
}
rest {
    dualProtocol=false
    httpToHttpsRedirection=false
    port=10550
    ssl {
        ciphers=""
        clientAuth=none
        enabled=false
        keyStore {
            password=""
            path=""
            type=PKCS12
        }
        port=10400
        trustStore {
            password=""
            path=""
            type=PKCS12
        }
    }
}
rocksDb {
    defaultRegion {
        cache=lru
        numShardBits=-1
        size=268435456
        writeBufferSize=67108864
    }
    flushDelayMillis=100
} {code}
The error while starting:
{code:java}
org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
TraceId:58e58a9a-e9a7-4d2e-bba6-9477d41d03b2 Unable to start [node=Cluster_0]
        at 
org.apache.ignite.internal.app.IgniteImpl.handleStartException(IgniteImpl.java:897)
        at org.apache.ignite.internal.app.IgniteImpl.start(IgniteImpl.java:886)
        at 
org.apache.ignite.internal.app.IgnitionImpl.doStart(IgnitionImpl.java:198)
        at 
org.apache.ignite.internal.app.IgnitionImpl.start(IgnitionImpl.java:99)
        at org.apache.ignite.IgnitionManager.start(IgnitionManager.java:72)
        at org.apache.ignite.IgnitionManager.start(IgnitionManager.java:51)
        at 
org.apache.ignite.internal.app.IgniteRunner.call(IgniteRunner.java:48)
        at 

[jira] [Updated] (IGNITE-20971) The Ignite process huge memory overhead for tables creation

2023-12-07 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-20971:
--
Attachment: gc.20231128_132812_386649.log

> The Ignite process huge memory overhead for tables creation
> ---
>
> Key: IGNITE-20971
> URL: https://issues.apache.org/jira/browse/IGNITE-20971
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 3.0.0-beta2
>Reporter: Igor
>Assignee: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3
> Attachments: gc.20231128_132812_386649.log
>
>
> Creating 1000 tables with 5 columns each.
> *Expected:*
> 1000 tables are created.
>  
> *Actual:*
> After some tables (in my case, after 75 tables) the Ignite runner process is 
> silently torn down, with no errors in the output. The GC log doesn't show any 
> problem.
>  
> *Additional information:*
> On more performant (in terms of CPU) servers it can create up to 855 tables on a 
> 4 GB heap and then tears down with 
> `java.lang.OutOfMemoryError: Java heap space`



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20971) The Ignite process huge memory overhead for tables creation

2023-12-07 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-20971:
--
Description: 
*Steps to reproduce*
 # Start process with `-Xms4096m -Xmx4096m`
 # Create tables with 5 columns one by one up to 1000.
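A minimal JDBC sketch of step 2 (assumptions: the default Ignite 3 client connector port 10800, which differs from the ports used in the attached configs, and illustrative table/column names):
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateManyTables {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800");
             Statement st = conn.createStatement()) {
            for (int i = 0; i < 1000; i++) {
                // Each table has 5 columns: an integer key and four varchar payload columns.
                st.executeUpdate("CREATE TABLE test_table_" + i
                        + "(id INTEGER NOT NULL, c1 VARCHAR(50), c2 VARCHAR(50),"
                        + " c3 VARCHAR(50), c4 VARCHAR(50), PRIMARY KEY (id))");
            }
        }
    }
}
{code}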

*Expected:*
1000 tables are created.

 

*Actual:*

After ~219 tables the process was killed by the OOM killer because the process took 
64 GB of the available 65 GB of memory.

 

*Additional information:*

OOM killer output:
{code:java}
[Tue Nov 28 18:35:37 2023] 
oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=user.slice,mems_allowed=0,global_oom,task_memcg=/user.slice/user-10002.slice/session-201.scope,task=java,pid=11748,uid=10002
[Tue Nov 28 18:35:37 2023] Out of memory: Killed process 11748 (java) 
total-vm:97160836kB, anon-rss:64192728kB, file-rss:0kB, shmem-rss:0kB, 
UID:10002 pgtables:127948kB oom_score_adj:0
[Tue Nov 28 18:35:39 2023] oom_reaper: reaped process 11748 (java), now 
anon-rss:0kB, file-rss:0kB, shmem-rss:0kB

pubagent:~$ grep MemTotal /proc/meminfo
MemTotal:   65038556 kB
pubagent:~$ free   
totalusedfree  shared  buff/cache   available
Mem:65038556  32184864557116 900  15959264174540
Swap:  0   0   0

{code}
GC log
[^gc.20231128_132812_386649.log]

  was:
Creating 1000 tables with 5 column each.

*Expected:*
1000 tables are created.

 

*Actual:*

After some tables (in my case after 75 tables) the Ignite runner process is 
silently teared down, no any errors in output. GC log doesn't show any problem.

 

*Additional information:*

On more performant (in CPU) servers it can create up to 855 tables on 4GB HEAP 
and then tearing down with 
`java.lang.OutOfMemoryError: Java heap space`


> The Ignite process huge memory overhead for tables creation
> ---
>
> Key: IGNITE-20971
> URL: https://issues.apache.org/jira/browse/IGNITE-20971
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 3.0.0-beta2
>Reporter: Igor
>Assignee: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3
> Attachments: gc.20231128_132812_386649.log
>
>
> *Steps to reproduce*
>  # Start process with `-Xms4096m -Xmx4096m`
>  # Create tables with 5 columns one by one up to 1000.
> *Expected:*
> 1000 tables are created.
>  
> *Actual:*
> After ~219 tables the process was killed by the OOM killer because the process took 
> 64 GB of the available 65 GB of memory.
>  
> *Additional information:*
> OOM killer output:
> {code:java}
> [Tue Nov 28 18:35:37 2023] 
> oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=user.slice,mems_allowed=0,global_oom,task_memcg=/user.slice/user-10002.slice/session-201.scope,task=java,pid=11748,uid=10002
> [Tue Nov 28 18:35:37 2023] Out of memory: Killed process 11748 (java) 
> total-vm:97160836kB, anon-rss:64192728kB, file-rss:0kB, shmem-rss:0kB, 
> UID:10002 pgtables:127948kB oom_score_adj:0
> [Tue Nov 28 18:35:39 2023] oom_reaper: reaped process 11748 (java), now 
> anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
> pubagent:~$ grep MemTotal /proc/meminfo
> MemTotal:   65038556 kB
> pubagent:~$ free   
> totalusedfree  shared  buff/cache   available
> Mem:65038556  32184864557116 900  159592
> 64174540
> Swap:  0   0   0
> {code}
> GC log
> [^gc.20231128_132812_386649.log]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20971) The Ignite process huge memory overhead for tables creation

2023-12-07 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-20971:
--
Summary: The Ignite process huge memory overhead for tables creation  (was: 
The Ignite process quietly tear down while creating a lot of tables)

> The Ignite process huge memory overhead for tables creation
> ---
>
> Key: IGNITE-20971
> URL: https://issues.apache.org/jira/browse/IGNITE-20971
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 3.0.0-beta2
>Reporter: Igor
>Assignee: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3
>
> Creating 1000 tables with 5 columns each.
> *Expected:*
> 1000 tables are created.
>  
> *Actual:*
> After some tables (in my case, after 75 tables) the Ignite runner process is 
> silently torn down, with no errors in the output. The GC log doesn't show any 
> problem.
>  
> *Additional information:*
> On more performant (in terms of CPU) servers it can create up to 855 tables on a 
> 4 GB heap and then tears down with 
> `java.lang.OutOfMemoryError: Java heap space`



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20971) The Ignite process quietly tear down while creating a lot of tables

2023-11-27 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-20971:
--
Description: 
Creating 1000 tables with 5 columns each.

*Expected:*
1000 tables are created.

 

*Actual:*

After some tables (in my case, after 75 tables) the Ignite runner process is 
silently torn down, with no errors in the output. The GC log doesn't show any problem.

 

*Additional information:*

On more performant (in terms of CPU) servers it can create up to 855 tables on a 4 GB 
heap and then tears down with 
`java.lang.OutOfMemoryError: Java heap space`

  was:
Creating 1000 tables with 5 column each.

Expected:
1000 tables are created.

 

Actual:

After some tables (in my case after 75 tables) the Ignite runner process is 
silently teared down, no any errors in output. GC log doesn't show any problem.

 

Additional information:

On more performant (in CPU) servers it can create up to 855 tables on 4GB HEAP 
and then tearing down with 
`java.lang.OutOfMemoryError: Java heap space`


> The Ignite process quietly tear down while creating a lot of tables
> ---
>
> Key: IGNITE-20971
> URL: https://issues.apache.org/jira/browse/IGNITE-20971
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 3.0.0-beta2
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>
> Creating 1000 tables with 5 columns each.
> *Expected:*
> 1000 tables are created.
>  
> *Actual:*
> After some tables (in my case, after 75 tables) the Ignite runner process is 
> silently torn down, with no errors in the output. The GC log doesn't show any 
> problem.
>  
> *Additional information:*
> On more performant (in terms of CPU) servers it can create up to 855 tables on a 
> 4 GB heap and then tears down with 
> `java.lang.OutOfMemoryError: Java heap space`



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20971) The Ignite process quietly tear down while creating a lot of tables

2023-11-27 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-20971:
--
Description: 
Creating 1000 tables with 5 columns each.

Expected:
1000 tables are created.

 

Actual:

After some tables (in my case, after 75 tables) the Ignite runner process is 
silently torn down, with no errors in the output. The GC log doesn't show any problem.

 

Additional information:

On more performant (in terms of CPU) servers it can create up to 855 tables on a 4 GB 
heap and then tears down with 
`java.lang.OutOfMemoryError: Java heap space`

  was:
Creating 1000 tables with 5 column each.

Expected:
1000 tables are created.

 

Actual:

After some tables (in my case after 75 tables) the Ignite runner process is 
silently teared down, no any errors in output. GC log doesn't show any problem.


> The Ignite process quietly tear down while creating a lot of tables
> ---
>
> Key: IGNITE-20971
> URL: https://issues.apache.org/jira/browse/IGNITE-20971
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 3.0.0-beta2
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>
> Creating 1000 tables with 5 columns each.
> Expected:
> 1000 tables are created.
>  
> Actual:
> After some tables (in my case, after 75 tables) the Ignite runner process is 
> silently torn down, with no errors in the output. The GC log doesn't show any 
> problem.
>  
> Additional information:
> On more performant (in terms of CPU) servers it can create up to 855 tables on a 
> 4 GB heap and then tears down with 
> `java.lang.OutOfMemoryError: Java heap space`



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20971) The Ignite process quietly tear down while creating a lot of tables

2023-11-27 Thread Igor (Jira)
Igor created IGNITE-20971:
-

 Summary: The Ignite process quietly tear down while creating a lot 
of tables
 Key: IGNITE-20971
 URL: https://issues.apache.org/jira/browse/IGNITE-20971
 Project: Ignite
  Issue Type: Bug
  Components: general
Affects Versions: 3.0.0-beta2
Reporter: Igor


Creating 1000 tables with 5 columns each.

Expected:
1000 tables are created.

 

Actual:

After some tables (in my case, after 75 tables) the Ignite runner process is 
silently torn down, with no errors in the output. The GC log doesn't show any problem.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20777) Exception while init cluster due to missed `add-opens` argument

2023-11-01 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-20777:
--
Description: 
*Steps to reproduce:*
 # Start cluster with 2 nodes.
 # Init cluster

*Expected result:*

Cluster started

*Actual result:*

An error appears in the log and the cluster shuts down.

[^ignite3db-0.log.txt] [^stderr.log.txt]

*Workaround:*

If the option *--add-opens=java.base/sun.nio.ch=ALL-UNNAMED* is added to the 
startup script, the cluster works fine.
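For illustration only, a hedged sketch of where the flag goes when launching the node from Java; the real ignite3db script builds its own command line, and the classpath, working directory, and runner arguments below are assumptions (the main class name is taken from the stack traces in this thread):
{code:java}
import java.util.List;

public class StartNodeWithAddOpens {
    public static void main(String[] args) throws Exception {
        ProcessBuilder pb = new ProcessBuilder(List.of(
                "java",
                "--add-opens=java.base/sun.nio.ch=ALL-UNNAMED", // the workaround flag
                "-cp", "libs/*",                                // assumed location of the Ignite 3 jars
                "org.apache.ignite.internal.app.IgniteRunner")); // runner arguments (config path, node name) omitted
        pb.inheritIO();
        int exitCode = pb.start().waitFor();
        System.out.println("Node process exited with code " + exitCode);
    }
}
{code}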

  was:
*Steps to reproduce:*
 # Start cluster with 2 nodes.
 # Init cluster

*Expected result:*

Cluster started

{*}Actual result:{*}{*}{*}

Error in log and cluster shutting down.

[^ignite3db-0.log.txt] [^stderr.log.txt]

*Workaround:*

If the option *--add-opens=java.base/sun.nio.ch=ALL-UNNAMED* added to startup 
script, cluster works fine.


> Exception while init cluster due to missed `add-opens` argument
> ---
>
> Key: IGNITE-20777
> URL: https://issues.apache.org/jira/browse/IGNITE-20777
> Project: Ignite
>  Issue Type: Bug
>  Components: cli, general
>Affects Versions: 3.0.0-beta2
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
> Attachments: ignite3db-0.log.txt, stderr.log.txt
>
>
> *Steps to reproduce:*
>  # Start cluster with 2 nodes.
>  # Init cluster
> *Expected result:*
> Cluster started
> *Actual result:*
> An error appears in the log and the cluster shuts down.
> [^ignite3db-0.log.txt] [^stderr.log.txt]
> *Workaround:*
> If the option *--add-opens=java.base/sun.nio.ch=ALL-UNNAMED* is added to the 
> startup script, the cluster works fine.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20777) Exception while init cluster due to missed `add-opens` argument

2023-11-01 Thread Igor (Jira)
Igor created IGNITE-20777:
-

 Summary: Exception while init cluster due to missed `add-opens` 
argument
 Key: IGNITE-20777
 URL: https://issues.apache.org/jira/browse/IGNITE-20777
 Project: Ignite
  Issue Type: Bug
  Components: cli, general
Affects Versions: 3.0.0-beta2
Reporter: Igor
 Attachments: ignite3db-0.log.txt, stderr.log.txt

*Steps to reproduce:*
 # Start cluster with 2 nodes.
 # Init cluster

*Expected result:*

Cluster started

*Actual result:*

An error appears in the log and the cluster shuts down.

[^ignite3db-0.log.txt] [^stderr.log.txt]

*Workaround:*

If the option *--add-opens=java.base/sun.nio.ch=ALL-UNNAMED* is added to the startup 
script, the cluster works fine.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20731) Exception "The primary replica has changed" on big amount of rows

2023-10-24 Thread Igor (Jira)
Igor created IGNITE-20731:
-

 Summary: Exception "The primary replica has changed" on big amount 
of rows
 Key: IGNITE-20731
 URL: https://issues.apache.org/jira/browse/IGNITE-20731
 Project: Ignite
  Issue Type: Bug
  Components: persistence
Affects Versions: 3.0.0-beta2
Reporter: Igor


*Steps to reproduce:*

1. Start cluster with 1 node with JVM options: "-Xms4096m -Xmx4096m"

2. Execute
{code:java}
create table rows_capacity_table(id INTEGER not null, column_1 VARCHAR(50) not 
null, column_2 VARCHAR(50) not null, column_3 VARCHAR(50) not null, column_4 
VARCHAR(50) not null, primary key (id)) {code}
3. Insert rows into the table up to 1 000 000 rows (see the sketch below).
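A minimal JDBC sketch of step 3 (the connection URL and the batch size of 1000 are assumptions; the table is the one created in step 2):
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsert {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO rows_capacity_table(id, column_1, column_2, column_3, column_4) "
                             + "VALUES (?, ?, ?, ?, ?)")) {
            for (int id = 1; id <= 1_000_000; id++) {
                ps.setInt(1, id);
                for (int col = 2; col <= 5; col++) {
                    ps.setString(col, "value_" + id);
                }
                ps.addBatch();
                if (id % 1000 == 0) {
                    ps.executeBatch(); // send one batch of 1000 rows
                }
            }
            ps.executeBatch(); // flush the remainder, if any
        }
    }
}
{code}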

*Expected result:*
Rows are inserted.

*Actual result:*
After 733000 rows the following exception is thrown.
Client:
{code:java}
java.sql.BatchUpdateException: 
org.apache.ignite.internal.replicator.exception.PrimaryReplicaMissException: 
IGN-REP-6 TraceId:9b8ef95a-bbbe-48cf-9c94-2e80d01c2033 The primary replica has 
changed [expectedLeaseholder=TablesAmountCapacityTest_cluster_0, 
currentLeaseholder=null]
at 
org.apache.ignite.internal.jdbc.JdbcPreparedStatement.executeBatch(JdbcPreparedStatement.java:155)
 {code}
Server:
{code:java}
2023-10-23 13:47:31:529 +0300 
[INFO][%TablesAmountCapacityTest_cluster_0%metastorage-watch-executor-0][PartitionReplicaListener]
 Primary replica expired [grp=5_part_12]
2023-10-23 13:47:31:532 +0300 
[INFO][%TablesAmountCapacityTest_cluster_0%metastorage-watch-executor-0][PartitionReplicaListener]
 Primary replica expired [grp=5_part_20]
2023-10-23 13:47:31:536 +0300 
[INFO][%TablesAmountCapacityTest_cluster_0%metastorage-watch-executor-0][PartitionReplicaListener]
 Primary replica expired [grp=5_part_24]
2023-10-23 13:47:31:539 +0300 
[INFO][%TablesAmountCapacityTest_cluster_0%metastorage-watch-executor-0][PartitionReplicaListener]
 Primary replica expired [grp=5_part_16]
2023-10-23 13:47:31:699 +0300 
[WARNING][%TablesAmountCapacityTest_cluster_0%metastorage-watch-executor-3][ReplicaManager]
 Failed to process replica request [request=TxFinishReplicaRequestImpl 
[commit=false, commitTimestampLong=111283931920007204, groupId=5_part_24, 
groups=HashSet [5_part_5, 5_part_4, 5_part_7, 5_part_6, 5_part_1, 5_part_0, 
5_part_3, 5_part_2, 5_part_13, 5_part_12, 5_part_15, 5_part_14, 5_part_9, 
5_part_8, 5_part_11, 5_part_10, 5_part_21, 5_part_20, 5_part_23, 5_part_22, 
5_part_17, 5_part_16, 5_part_19, 5_part_18, 5_part_24], 
term=111283839559532593, timestampLong=111283932466315264, 
txId=018b5c25-7653---23c06ab5]]
java.util.concurrent.CompletionException: 
org.apache.ignite.internal.replicator.exception.PrimaryReplicaMissException: 
IGN-REP-6 TraceId:9b8ef95a-bbbe-48cf-9c94-2e80d01c2033 The primary replica has 
changed [expectedLeaseholder=TablesAmountCapacityTest_cluster_0, 
currentLeaseholder=null]
at 
java.base/java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:367)
at 
java.base/java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:376)
at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1074)
at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at 
java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
at 
org.apache.ignite.internal.util.PendingComparableValuesTracker.lambda$completeWaitersOnUpdate$0(PendingComparableValuesTracker.java:169)
at 
java.base/java.util.concurrent.ConcurrentMap.forEach(ConcurrentMap.java:122)
at 
org.apache.ignite.internal.util.PendingComparableValuesTracker.completeWaitersOnUpdate(PendingComparableValuesTracker.java:169)
at 
org.apache.ignite.internal.util.PendingComparableValuesTracker.update(PendingComparableValuesTracker.java:103)
at 
org.apache.ignite.internal.metastorage.server.time.ClusterTimeImpl.updateSafeTime(ClusterTimeImpl.java:146)
at 
org.apache.ignite.internal.metastorage.impl.MetaStorageManagerImpl.onSafeTimeAdvanced(MetaStorageManagerImpl.java:849)
at 
org.apache.ignite.internal.metastorage.impl.MetaStorageManagerImpl$1.onSafeTimeAdvanced(MetaStorageManagerImpl.java:456)
at 
org.apache.ignite.internal.metastorage.server.WatchProcessor.lambda$advanceSafeTime$7(WatchProcessor.java:281)
at 
java.base/java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:783)
at 
java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: 

[jira] [Updated] (IGNITE-20724) Exception "Type conversion is not supported yet" after "ALTER TABLE DROP COLUMN"

2023-10-24 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-20724:
--
Description: 
*Steps to reproduce:*
Run the following sequence of separate queries one by one (via JDBC or ODBC); a minimal JDBC sketch is shown after the code block:
{code:java}
DROP TABLE IF EXISTS CAR;
CREATE TABLE CAR(ID INT, PARKINGID INT NOT NULL, NAME VARCHAR(255), CITY 
VARCHAR(20), PRIMARY KEY (ID, PARKINGID));
CREATE INDEX CAR_NAME_IDX ON PUBLIC.CAR(NAME);
INSERT INTO PUBLIC.CAR(ID, PARKINGID, NAME, CITY) VALUES(1, 0, 'car_1', 'New 
York');
ALTER TABLE PUBLIC.CAR ADD COLUMN MODEL_ID INT;
ALTER TABLE PUBLIC.CAR ADD COLUMN COUNTRY VARCHAR DEFAULT 'USA';
ALTER TABLE PUBLIC.CAR DROP COLUMN MODEL_ID;
SELECT * FROM PUBLIC.CAR WHERE ID <= 10 OR ID > 1000 ORDER BY ID;{code}
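A minimal JDBC sketch that runs the statements above one by one (the connection URL is an assumption; the last statement is where the error is observed):
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.List;

public class AlterTableDropColumnRepro {
    public static void main(String[] args) throws Exception {
        List<String> statements = List.of(
                "DROP TABLE IF EXISTS CAR",
                "CREATE TABLE CAR(ID INT, PARKINGID INT NOT NULL, NAME VARCHAR(255),"
                        + " CITY VARCHAR(20), PRIMARY KEY (ID, PARKINGID))",
                "CREATE INDEX CAR_NAME_IDX ON PUBLIC.CAR(NAME)",
                "INSERT INTO PUBLIC.CAR(ID, PARKINGID, NAME, CITY) VALUES(1, 0, 'car_1', 'New York')",
                "ALTER TABLE PUBLIC.CAR ADD COLUMN MODEL_ID INT",
                "ALTER TABLE PUBLIC.CAR ADD COLUMN COUNTRY VARCHAR DEFAULT 'USA'",
                "ALTER TABLE PUBLIC.CAR DROP COLUMN MODEL_ID",
                "SELECT * FROM PUBLIC.CAR WHERE ID <= 10 OR ID > 1000 ORDER BY ID");

        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800");
             Statement st = conn.createStatement()) {
            for (String sql : statements) {
                st.execute(sql); // the final SELECT throws "Type conversion is not supported yet"
            }
        }
    }
}
{code}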

*Expected result:*
Every command runs successfully.
*Actual result:*
The last one throws an exception:
{code:java}
java.sql.SQLException: Type conversion is not supported yet.
at 
org.apache.ignite.internal.jdbc.proto.IgniteQueryErrorCode.createJdbcSqlException(IgniteQueryErrorCode.java:57)
at 
org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:149)
at 
org.apache.ignite.internal.jdbc.JdbcStatement.executeQuery(JdbcStatement.java:108)
{code}
Exceptions from servers:
Server1:
{code:java}
2023-10-23 22:00:38:287 +0200 
[INFO][%BasicAi3OperationsTest_cluster_0%sql-execution-pool-3][JdbcQueryEventHandlerImpl]
 Exception while executing query [query=SELECT * FROM PUBLIC.CAR WHERE ID <= 10 
OR ID > 1000 ORDER BY ID;]
org.apache.ignite.sql.SqlException: IGN-CMN-65535 
TraceId:034a6b87-a7b3-408d-b41f-a5914ffec30d Type conversion is not supported 
yet.
at 
org.apache.ignite.internal.lang.SqlExceptionMapperUtil.mapToPublicSqlException(SqlExceptionMapperUtil.java:59)
at 
org.apache.ignite.internal.sql.engine.AsyncSqlCursorImpl.wrapIfNecessary(AsyncSqlCursorImpl.java:101)
at 
org.apache.ignite.internal.sql.engine.AsyncSqlCursorImpl.lambda$requestNextAsync$0(AsyncSqlCursorImpl.java:77)
at 
java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:934)
at 
java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:911)
at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2162)
at 
org.apache.ignite.internal.sql.engine.exec.rel.AsyncRootNode.lambda$closeAsync$0(AsyncRootNode.java:160)
at 
java.base/java.util.concurrent.ConcurrentLinkedQueue.forEachFrom(ConcurrentLinkedQueue.java:1037)
at 
java.base/java.util.concurrent.ConcurrentLinkedQueue.forEach(ConcurrentLinkedQueue.java:1054)
at 
org.apache.ignite.internal.sql.engine.exec.rel.AsyncRootNode.closeAsync(AsyncRootNode.java:160)
at 
org.apache.ignite.internal.sql.engine.exec.rel.AsyncRootNode.onError(AsyncRootNode.java:115)
at 
org.apache.ignite.internal.sql.engine.exec.ExecutionServiceImpl$DistributedQueryManager.lambda$onError$2(ExecutionServiceImpl.java:530)
at 
java.base/java.util.concurrent.CompletableFuture.uniAcceptNow(CompletableFuture.java:757)
at 
java.base/java.util.concurrent.CompletableFuture.uniAcceptStage(CompletableFuture.java:735)
at 
java.base/java.util.concurrent.CompletableFuture.thenAccept(CompletableFuture.java:2182)
at 
org.apache.ignite.internal.sql.engine.exec.ExecutionServiceImpl$DistributedQueryManager.onError(ExecutionServiceImpl.java:529)
at 
org.apache.ignite.internal.sql.engine.exec.ExecutionServiceImpl.onMessage(ExecutionServiceImpl.java:381)
at 
org.apache.ignite.internal.sql.engine.exec.ExecutionServiceImpl.lambda$start$4(ExecutionServiceImpl.java:220)
at 
org.apache.ignite.internal.sql.engine.message.MessageServiceImpl.onMessageInternal(MessageServiceImpl.java:139)
at 
org.apache.ignite.internal.sql.engine.message.MessageServiceImpl.lambda$onMessage$1(MessageServiceImpl.java:110)
at 
org.apache.ignite.internal.sql.engine.exec.QueryTaskExecutorImpl.lambda$execute$0(QueryTaskExecutorImpl.java:81)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
TraceId:034a6b87-a7b3-408d-b41f-a5914ffec30d Type conversion is not supported 
yet.
at 
org.apache.ignite.internal.sql.engine.util.SqlExceptionMapperProvider.lambda$mappers$1(SqlExceptionMapperProvider.java:53)
at 
org.apache.ignite.internal.lang.IgniteExceptionMapper.map(IgniteExceptionMapper.java:61)
at 
org.apache.ignite.internal.lang.IgniteExceptionMapperUtil.map(IgniteExceptionMapperUtil.java:149)
at 
org.apache.ignite.internal.lang.IgniteExceptionMapperUtil.mapToPublicException(IgniteExceptionMapperUtil.java:103)
at 
org.apache.ignite.internal.lang.SqlExceptionMapperUtil.mapToPublicSqlException(SqlExceptionMapperUtil.java:49)

[jira] [Created] (IGNITE-20724) Exception "Type conversion is not supported yet" after "ALTER TABLE DROP COLUMN"

2023-10-24 Thread Igor (Jira)
Igor created IGNITE-20724:
-

 Summary: Exception "Type conversion is not supported yet" after 
"ALTER TABLE DROP COLUMN"
 Key: IGNITE-20724
 URL: https://issues.apache.org/jira/browse/IGNITE-20724
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 3.0.0-beta2
Reporter: Igor


*Steps to reproduce:*
Run the following sequence of separate queries one by one:
DROP TABLE IF EXISTS CAR;
CREATE TABLE CAR(ID INT, PARKINGID INT NOT NULL, NAME VARCHAR(255), CITY 
VARCHAR(20), PRIMARY KEY (ID, PARKINGID));
CREATE INDEX CAR_NAME_IDX ON PUBLIC.CAR(NAME);
INSERT INTO PUBLIC.CAR(ID, PARKINGID, NAME, CITY) VALUES(1, 0, 'car_1', 'New 
York');
ALTER TABLE PUBLIC.CAR ADD COLUMN MODEL_ID INT;
ALTER TABLE PUBLIC.CAR ADD COLUMN COUNTRY VARCHAR DEFAULT 'USA';
ALTER TABLE PUBLIC.CAR DROP COLUMN MODEL_ID;
SELECT * FROM PUBLIC.CAR WHERE ID <= 10 OR ID > 1000 ORDER BY ID;
*Expected result:*
Every command runs successfully.
*Actual result:*
The last one throws an exception:

java.sql.SQLException: Type conversion is not supported yet.
at 
org.apache.ignite.internal.jdbc.proto.IgniteQueryErrorCode.createJdbcSqlException(IgniteQueryErrorCode.java:57)
at 
org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:149)
at 
org.apache.ignite.internal.jdbc.JdbcStatement.executeQuery(JdbcStatement.java:108)
Exceptions from servers:
Server1:

2023-10-23 22:00:38:287 +0200 
[INFO][%BasicAi3OperationsTest_cluster_0%sql-execution-pool-3][JdbcQueryEventHandlerImpl]
 Exception while executing query [query=SELECT * FROM PUBLIC.CAR WHERE ID <= 10 
OR ID > 1000 ORDER BY ID;]
org.apache.ignite.sql.SqlException: IGN-CMN-65535 
TraceId:034a6b87-a7b3-408d-b41f-a5914ffec30d Type conversion is not supported 
yet.
at 
org.apache.ignite.internal.lang.SqlExceptionMapperUtil.mapToPublicSqlException(SqlExceptionMapperUtil.java:59)
at 
org.apache.ignite.internal.sql.engine.AsyncSqlCursorImpl.wrapIfNecessary(AsyncSqlCursorImpl.java:101)
at 
org.apache.ignite.internal.sql.engine.AsyncSqlCursorImpl.lambda$requestNextAsync$0(AsyncSqlCursorImpl.java:77)
at 
java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:934)
at 
java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:911)
at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2162)
at 
org.apache.ignite.internal.sql.engine.exec.rel.AsyncRootNode.lambda$closeAsync$0(AsyncRootNode.java:160)
at 
java.base/java.util.concurrent.ConcurrentLinkedQueue.forEachFrom(ConcurrentLinkedQueue.java:1037)
at 
java.base/java.util.concurrent.ConcurrentLinkedQueue.forEach(ConcurrentLinkedQueue.java:1054)
at 
org.apache.ignite.internal.sql.engine.exec.rel.AsyncRootNode.closeAsync(AsyncRootNode.java:160)
at 
org.apache.ignite.internal.sql.engine.exec.rel.AsyncRootNode.onError(AsyncRootNode.java:115)
at 
org.apache.ignite.internal.sql.engine.exec.ExecutionServiceImpl$DistributedQueryManager.lambda$onError$2(ExecutionServiceImpl.java:530)
at 
java.base/java.util.concurrent.CompletableFuture.uniAcceptNow(CompletableFuture.java:757)
at 
java.base/java.util.concurrent.CompletableFuture.uniAcceptStage(CompletableFuture.java:735)
at 
java.base/java.util.concurrent.CompletableFuture.thenAccept(CompletableFuture.java:2182)
at 
org.apache.ignite.internal.sql.engine.exec.ExecutionServiceImpl$DistributedQueryManager.onError(ExecutionServiceImpl.java:529)
at 
org.apache.ignite.internal.sql.engine.exec.ExecutionServiceImpl.onMessage(ExecutionServiceImpl.java:381)
at 
org.apache.ignite.internal.sql.engine.exec.ExecutionServiceImpl.lambda$start$4(ExecutionServiceImpl.java:220)
at 
org.apache.ignite.internal.sql.engine.message.MessageServiceImpl.onMessageInternal(MessageServiceImpl.java:139)
at 
org.apache.ignite.internal.sql.engine.message.MessageServiceImpl.lambda$onMessage$1(MessageServiceImpl.java:110)
at 
org.apache.ignite.internal.sql.engine.exec.QueryTaskExecutorImpl.lambda$execute$0(QueryTaskExecutorImpl.java:81)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
TraceId:034a6b87-a7b3-408d-b41f-a5914ffec30d Type conversion is not supported 
yet.
at 
org.apache.ignite.internal.sql.engine.util.SqlExceptionMapperProvider.lambda$mappers$1(SqlExceptionMapperProvider.java:53)
at 

[jira] [Created] (IGNITE-20716) Partial data loss after node restart

2023-10-23 Thread Igor (Jira)
Igor created IGNITE-20716:
-

 Summary: Partial data loss after node restart
 Key: IGNITE-20716
 URL: https://issues.apache.org/jira/browse/IGNITE-20716
 Project: Ignite
  Issue Type: Bug
  Components: persistence
Affects Versions: 3.0.0-beta2
Reporter: Igor


How to reproduce:

1. Start a 1-node cluster
2. Create several simple tables (usually 5 is enough to reproduce):
{code:sql}
create table failoverTest00(k1 INTEGER not null, k2 INTEGER not null, v1 
VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
create table failoverTest01(k1 INTEGER not null, k2 INTEGER not null, v1 
VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
...
{code}
3. Fill every table with 1000 rows (a JDBC sketch of steps 3 and 4 follows this list).
4. Ensure that every table contains 1000 rows:
{code:sql}
SELECT COUNT(*) FROM failoverTest00;
...
{code}
5. Restart the node (kill the Java process and start the node again).
6. Check all tables again.
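A minimal JDBC sketch of steps 3 and 4 (the connection URL, the number of tables, and the row payload are assumptions; the table layout matches the DDL in step 2):
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.Timestamp;

public class FillAndVerifyTables {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800")) {
            for (int t = 0; t < 5; t++) {
                String table = String.format("failoverTest%02d", t);
                // Step 3: fill the table with 1000 rows.
                try (PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO " + table + "(k1, k2, v1, v2, v3) VALUES (?, ?, ?, ?, ?)")) {
                    for (int i = 0; i < 1000; i++) {
                        ps.setInt(1, i);
                        ps.setInt(2, i);
                        ps.setString(3, "v1_" + i);
                        ps.setString(4, "v2_" + i);
                        ps.setTimestamp(5, new Timestamp(System.currentTimeMillis()));
                        ps.addBatch();
                    }
                    ps.executeBatch();
                }
                // Step 4: verify the row count; repeat this query after the restart in step 5.
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM " + table)) {
                    rs.next();
                    System.out.println(table + " contains " + rs.getLong(1) + " rows"); // expected: 1000
                }
            }
        }
    }
}
{code}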

Expected behavior: after the restart, all tables still contain the same data as 
before.

Actual behavior: for some tables, 1 or 2 rows may be missing if we're fast 
enough on steps 3-4-5. Some contain 1000 rows, some contain 999 or 998.

No errors in logs observed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20625) SELECT MIN(column), MAX(column) by ODBC throws exception

2023-10-11 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-20625:
--
Summary: SELECT MIN(column), MAX(column) by ODBC throws exception  (was: 
SELECT MIN(column), MAX(column) by ODBC throws exceptions)

> SELECT MIN(column), MAX(column) by ODBC throws exception
> 
>
> Key: IGNITE-20625
> URL: https://issues.apache.org/jira/browse/IGNITE-20625
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Affects Versions: 3.0.0-beta2
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>
> h3. Steps to reproduce:
>  # Connect to Ignite using ODBC driver (Python).
>  # Execute separate queries one by one 
> {code:java}
> DROP TABLE IF EXISTS PUBLIC.PARKING;
> CREATE TABLE PUBLIC.PARKING(ID INT, NAME VARCHAR(255), CAPACITY INT NOT NULL, 
> b decimal,c date, CITY VARCHAR(20), PRIMARY KEY (ID, CITY));
> INSERT INTO PUBLIC.PARKING(ID, NAME, CAPACITY, CITY) VALUES(1, 'parking_1', 
> 1, 'New York');
> SELECT MIN(CAPACITY), MAX(CAPACITY) FROM PUBLIC.PARKING; {code}
> h3. Expected result:
> Query executed successfully.
> h3. Actual result:
> The last query throws an exception.
> {code:java}
> The value in stream is not a Binary data : 5{code}
> No errors in server log.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20625) SELECT MIN(column), MAX(column) by ODBC throws exceptions

2023-10-11 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-20625:
--
Description: 
h3. Steps to reproduce:
 # Connect to Ignite using ODBC driver (Python).
 # Execute separate queries one by one 
{code:java}
DROP TABLE IF EXISTS PUBLIC.PARKING;
CREATE TABLE PUBLIC.PARKING(ID INT, NAME VARCHAR(255), CAPACITY INT NOT NULL, b 
decimal,c date, CITY VARCHAR(20), PRIMARY KEY (ID, CITY));
INSERT INTO PUBLIC.PARKING(ID, NAME, CAPACITY, CITY) VALUES(1, 'parking_1', 1, 
'New York');
SELECT MIN(CAPACITY), MAX(CAPACITY) FROM PUBLIC.PARKING; {code}

h3. Expected result:

Query executed successfully.
h3. Actual result:

The last query throws an exception.
{code:java}
The value in stream is not a Binary data : 5{code}
No errors in server log.

  was:
h3. Steps to reproduce:
 # Connect to Ignite using ODBC driver (Python).
 # Execute `CREATE TABLE CAR(ID INT, PARKINGID INT NOT NULL, NAME VARCHAR(255), 
CITY VARCHAR(20), PRIMARY KEY (ID, PARKINGID))`

h3. Expected result:

Query executed successfully.
h3. Actual result:

Exception is thrown.
{code:java}
CmdResult{exitCode=-1, result=, error=error executing CREATE TABLE CAR(ID INT, 
PARKINGID INT NOT NULL, NAME VARCHAR(255), CITY VARCHAR(20), PRIMARY KEY (ID, 
PARKINGID)) got error ('HYC00', '[HYC00] Metadata for non-executed queries is 
not supported (0) (SQLNumResultCols)'){code}
No errors in server log.


> SELECT MIN(column), MAX(column) by ODBC throws exceptions
> -
>
> Key: IGNITE-20625
> URL: https://issues.apache.org/jira/browse/IGNITE-20625
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Affects Versions: 3.0.0-beta2
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>
> h3. Steps to reproduce:
>  # Connect to Ignite using ODBC driver (Python).
>  # Execute separate queries one by one 
> {code:java}
> DROP TABLE IF EXISTS PUBLIC.PARKING;
> CREATE TABLE PUBLIC.PARKING(ID INT, NAME VARCHAR(255), CAPACITY INT NOT NULL, 
> b decimal,c date, CITY VARCHAR(20), PRIMARY KEY (ID, CITY));
> INSERT INTO PUBLIC.PARKING(ID, NAME, CAPACITY, CITY) VALUES(1, 'parking_1', 
> 1, 'New York');
> SELECT MIN(CAPACITY), MAX(CAPACITY) FROM PUBLIC.PARKING; {code}
> h3. Expected result:
> Query executed successfully.
> h3. Actual result:
> The last query throws an exception.
> {code:java}
> The value in stream is not a Binary data : 5{code}
> No errors in server log.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20625) SELECT MIN(column), MAX(column) by ODBC throws exceptions

2023-10-11 Thread Igor (Jira)
Igor created IGNITE-20625:
-

 Summary: SELECT MIN(column), MAX(column) by ODBC throws exceptions
 Key: IGNITE-20625
 URL: https://issues.apache.org/jira/browse/IGNITE-20625
 Project: Ignite
  Issue Type: Bug
  Components: odbc
Affects Versions: 3.0.0-beta2
Reporter: Igor


h3. Steps to reproduce:
 # Connect to Ignite using ODBC driver (Python).
 # Execute `CREATE TABLE CAR(ID INT, PARKINGID INT NOT NULL, NAME VARCHAR(255), 
CITY VARCHAR(20), PRIMARY KEY (ID, PARKINGID))`

h3. Expected result:

Query executed successfully.
h3. Actual result:

Exception is thrown.
{code:java}
CmdResult{exitCode=-1, result=, error=error executing CREATE TABLE CAR(ID INT, 
PARKINGID INT NOT NULL, NAME VARCHAR(255), CITY VARCHAR(20), PRIMARY KEY (ID, 
PARKINGID)) got error ('HYC00', '[HYC00] Metadata for non-executed queries is 
not supported (0) (SQLNumResultCols)'){code}
No errors in server log.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-20615) SQL queries by ODBC throws exceptions

2023-10-11 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor resolved IGNITE-20615.
---
Resolution: Not A Problem

Pyodbc tries to get metadata about a non-executed query; this feature is not yet 
implemented.

> SQL queries by ODBC throws exceptions
> -
>
> Key: IGNITE-20615
> URL: https://issues.apache.org/jira/browse/IGNITE-20615
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Affects Versions: 3.0.0-beta2
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>
> h3. Steps to reproduce:
>  # Connect to Ignite using ODBC driver (Python).
>  # Execute `CREATE TABLE CAR(ID INT, PARKINGID INT NOT NULL, NAME 
> VARCHAR(255), CITY VARCHAR(20), PRIMARY KEY (ID, PARKINGID))`
> h3. Expected result:
> Query executed successfully.
> h3. Actual result:
> Exception is thrown.
> {code:java}
> CmdResult{exitCode=-1, result=, error=error executing CREATE TABLE CAR(ID 
> INT, PARKINGID INT NOT NULL, NAME VARCHAR(255), CITY VARCHAR(20), PRIMARY KEY 
> (ID, PARKINGID)) got error ('HYC00', '[HYC00] Metadata for non-executed 
> queries is not supported (0) (SQLNumResultCols)'){code}
> No errors in server log.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20615) SQL queries by ODBC throws exceptions

2023-10-11 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-20615:
--
Labels: ignite-3  (was: )

> SQL queries by ODBC throws exceptions
> -
>
> Key: IGNITE-20615
> URL: https://issues.apache.org/jira/browse/IGNITE-20615
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Affects Versions: 3.0.0-beta2
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>
> h3. Steps to reproduce:
>  # Connect to Ignite using ODBC driver (Python).
>  # Execute `CREATE TABLE CAR(ID INT, PARKINGID INT NOT NULL, NAME 
> VARCHAR(255), CITY VARCHAR(20), PRIMARY KEY (ID, PARKINGID))`
> h3. Expected result:
> Query executed successfully.
> h3. Actual result:
> Exception is thrown.
> {code:java}
> CmdResult{exitCode=-1, result=, error=error executing CREATE TABLE CAR(ID 
> INT, PARKINGID INT NOT NULL, NAME VARCHAR(255), CITY VARCHAR(20), PRIMARY KEY 
> (ID, PARKINGID)) got error ('HYC00', '[HYC00] Metadata for non-executed 
> queries is not supported (0) (SQLNumResultCols)'){code}
> No errors in server log.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20615) SQL queries by ODBC throws exceptions

2023-10-10 Thread Igor (Jira)
Igor created IGNITE-20615:
-

 Summary: SQL queries by ODBC throws exceptions
 Key: IGNITE-20615
 URL: https://issues.apache.org/jira/browse/IGNITE-20615
 Project: Ignite
  Issue Type: Bug
  Components: odbc
Affects Versions: 3.0.0-beta2
Reporter: Igor


h3. Steps to reproduce:
 # Connect to Ignite using ODBC driver (Python).
 # Execute `CREATE TABLE CAR(ID INT, PARKINGID INT NOT NULL, NAME VARCHAR(255), 
CITY VARCHAR(20), PRIMARY KEY (ID, PARKINGID))`

h3. Expected result:

Query executed successfully.
h3. Actual result:

Exception is thrown.
{code:java}
CmdResult{exitCode=-1, result=, error=error executing CREATE TABLE CAR(ID INT, 
PARKINGID INT NOT NULL, NAME VARCHAR(255), CITY VARCHAR(20), PRIMARY KEY (ID, 
PARKINGID)) got error ('HYC00', '[HYC00] Metadata for non-executed queries is 
not supported (0) (SQLNumResultCols)'){code}
No errors in server log.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20286) Information about REST port disappeared from logs

2023-08-25 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-20286:
--
Description: 
The information about the REST port actually taken has disappeared from the logs.
Previously this string was present:
{code:java}
2023-08-24 05:01:08:983 +0300 [INFO][main][Micronaut] Startup completed in 
118ms. Server Running: http://5e85d1e4d3d1:10301{code}
Now it is impossible to determine the port taken by a node (because the REST endpoint 
supports `portRange`).

A possible reason for the problem: IgniteRunner doesn't have an SLF4J 
implementation among its dependencies.
!image-2023-08-25-16-20-23-118.png!

  was:
The information about taken REST port is disappeared from logs.
Previously the string was present:

{code:java}
2023-08-24 05:01:08:983 +0300 [INFO][main][Micronaut] Startup completed in 
118ms. Server Running: http://5e85d1e4d3d1:10301{code}
{{}}Now it is impossible to determine the taken port by node (because REST 
endpoint support `portRange`).

The possible reason of the problem: IgniteRunner doesn't have SL4J 
implementations in dependencies.
!image-2023-08-25-16-20-23-118.png!


> Information about REST port disappeared from logs
> -
>
> Key: IGNITE-20286
> URL: https://issues.apache.org/jira/browse/IGNITE-20286
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta2
>Reporter: Igor
>Priority: Major
> Attachments: image-2023-08-25-16-20-23-118.png
>
>
> The information about the REST port actually taken has disappeared from the logs.
> Previously this string was present:
> {code:java}
> 2023-08-24 05:01:08:983 +0300 [INFO][main][Micronaut] Startup completed in 
> 118ms. Server Running: http://5e85d1e4d3d1:10301{code}
> Now it is impossible to determine the port taken by a node (because the REST 
> endpoint supports `portRange`).
> A possible reason for the problem: IgniteRunner doesn't have an SLF4J 
> implementation among its dependencies.
> !image-2023-08-25-16-20-23-118.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20286) Information about REST port disappeared from logs

2023-08-25 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-20286:
--
Description: 
The information about the REST port actually taken has disappeared from the logs.
Previously this string was present:

{code:java}
2023-08-24 05:01:08:983 +0300 [INFO][main][Micronaut] Startup completed in 
118ms. Server Running: http://5e85d1e4d3d1:10301{code}
{{}}Now it is impossible to determine the port taken by a node (because the REST 
endpoint supports `portRange`).

A possible reason for the problem: IgniteRunner doesn't have an SLF4J 
implementation among its dependencies.
!image-2023-08-25-16-20-23-118.png!

  was:
The information about taken REST port is disappeared from logs.
Previously the string was present:
{{2023-08-24 05:01:08:983 +0300 [INFO][main][Micronaut] Startup completed in 
118ms. Server Running: 
}}{{[http://5e85d1e4d3d1:10301|http://5e85d1e4d3d1:10301/]}}
Now it is impossible to determine the taken port by node (because REST endpoint 
support `portRange`).

The possible reason of the problem: IgniteRunner doesn't have SL4J 
implementations in dependencies.
!image-2023-08-25-16-20-23-118.png!


> Information about REST port disappeared from logs
> -
>
> Key: IGNITE-20286
> URL: https://issues.apache.org/jira/browse/IGNITE-20286
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta2
>Reporter: Igor
>Priority: Major
> Attachments: image-2023-08-25-16-20-23-118.png
>
>
> The information about the REST port actually taken has disappeared from the logs.
> Previously this string was present:
> {code:java}
> 2023-08-24 05:01:08:983 +0300 [INFO][main][Micronaut] Startup completed in 
> 118ms. Server Running: http://5e85d1e4d3d1:10301{code}
> {{}}Now it is impossible to determine the port taken by a node (because the REST 
> endpoint supports `portRange`).
> A possible reason for the problem: IgniteRunner doesn't have an SLF4J 
> implementation among its dependencies.
> !image-2023-08-25-16-20-23-118.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20286) Information about REST port disappeared from logs

2023-08-25 Thread Igor (Jira)
Igor created IGNITE-20286:
-

 Summary: Information about REST port disappeared from logs
 Key: IGNITE-20286
 URL: https://issues.apache.org/jira/browse/IGNITE-20286
 Project: Ignite
  Issue Type: Bug
Affects Versions: 3.0.0-beta2
Reporter: Igor
 Attachments: image-2023-08-25-16-20-23-118.png

The information about the REST port actually taken has disappeared from the logs.
Previously this string was present:
{{2023-08-24 05:01:08:983 +0300 [INFO][main][Micronaut] Startup completed in 
118ms. Server Running: 
}}{{[http://5e85d1e4d3d1:10301|http://5e85d1e4d3d1:10301/]}}
Now it is impossible to determine the port taken by a node (because the REST endpoint 
supports `portRange`).

A possible reason for the problem: IgniteRunner doesn't have an SLF4J 
implementation among its dependencies.
!image-2023-08-25-16-20-23-118.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-19727) Server nodes cannot find each other and log NullPointerException

2023-06-13 Thread Igor (Jira)
Igor created IGNITE-19727:
-

 Summary: Server nodes cannot find each other and log 
NullPointerException
 Key: IGNITE-19727
 URL: https://issues.apache.org/jira/browse/IGNITE-19727
 Project: Ignite
  Issue Type: Bug
Affects Versions: 3.0.0-beta2
Reporter: Igor
 Attachments: server1.log.zip, server2.log.zip

h2. Steps to reproduce
 # Version 3.0.0-SNAPSHOT commit hash 006ddb06e1deb6788e1b2796bc033af14758b132
 # Copy the DB distribution onto 2 servers.
 # Set the log level to FINE.
 # Set up node lookup by changing ignite-config.conf on both servers to:
{code:java}
{
network: {
port: 3344,
portRange: 10,
nodeFinder: {
netClusterNodes: [
"172.24.1.2:3344,172.24.1.4:3344"
]
}
}
} {code}

 # Start both servers with the command 
{code:java}
sh ./ignite3db  start {code}
 

h2. Expected behavior

Servers are joined into a single cluster.
h2. Actual behavior

Two separate clusters are created, with errors in the log such as:
{code:java}
2023-06-13 16:21:07:178 + [WARNING][main][MembershipProtocol] 
[default:defaultNode:57294ce834dc4730@172.24.1.2:3344] Exception on initial 
Sync, cause: java.lang.NullPointerException

...

2023-06-13 16:21:37:185 + [DEBUG][sc-cluster-3344-1][MembershipProtocol] 
[default:defaultNode:57294ce834dc4730@172.24.1.2:3344][doSync] Send Sync to 
172.24.1.2:3344,172.24.1.4:3344
2023-06-13 16:21:37:186 + [DEBUG][sc-cluster-3344-1][MembershipProtocol] 
[default:defaultNode:57294ce834dc4730@172.24.1.2:3344][doSync] Failed to send 
Sync to 172.24.1.2:3344,172.24.1.4:3344, cause: java.lang.NullPointerException 
{code}
Logs are in the attachments: [^server1.log.zip] [^server2.log.zip]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (IGNITE-19488) RemoteFragmentExecutionException when inserting more than 30 000 rows into one table

2023-05-18 Thread Igor (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17723807#comment-17723807
 ] 

Igor edited comment on IGNITE-19488 at 5/18/23 7:01 AM:


If errors in `SELECT count( * )` are ignored, then it is possible to insert at 
least 1 million rows into one table (the test didn't try to insert more). Logs are in 
the attachment, but the error happens pretty frequently.
 [^logs_with_ignored_erorr.zip]
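A minimal sketch of what "ignored" means here: the verification query is wrapped so that its failure does not stop the insert loop (the table name matches the issue body; the helper and its error handling are illustrative):
{code:java}
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CountIgnoringErrors {
    /** Runs SELECT COUNT(*) and swallows failures so that the batch inserts can continue. */
    static long countRows(Connection conn) {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM rows_capacity_table")) {
            rs.next();
            return rs.getLong(1);
        } catch (SQLException e) {
            // The remote fragment execution error surfaces here as a SQLException; log it and move on.
            System.err.println("COUNT(*) failed, ignoring: " + e.getMessage());
            return -1;
        }
    }
}
{code}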


was (Author: JIRAUSER299771):
If errors in `SELECT count(*)` are ignored then it is possible to insert at 
least 1 mln rows into one table (test didn't try to insert more). Logs are in 
attachment. But the error happens pretty frequently.
 [^logs_with_ignored_erorr.zip]

> RemoteFragmentExecutionException when inserting more than 30 000 rows into 
> one table
> 
>
> Key: IGNITE-19488
> URL: https://issues.apache.org/jira/browse/IGNITE-19488
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc, sql
>Reporter: Igor
>Priority: Critical
>  Labels: ignite-3
> Attachments: logs.zip, logs_with_ignored_erorr.zip
>
>
> h1. Steps to reproduce
> Ignite 3 main branch commit 45380a6c802203dab0d72bd1eb9fb202b2a345b0
>  # Create a table with 5 columns.
>  # Insert rows into the table in batches of 1000 rows each.
>  # Repeat the previous step until an exception is thrown.
> h1. Expected behaviour
> More than 30 000 rows are created.
> h1. Actual behaviour
> An exception is thrown after 29 000 rows are inserted:
> {code:java}
> Exception while executing query [query=SELECT COUNT(*) FROM 
> rows_capacity_table]. Error message:IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
> failed: nodeName=TablesAmountCapacityTest_cluster_0, 
> queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, 
> originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907
> java.sql.SQLException: Exception while executing query [query=SELECT COUNT(*) 
> FROM rows_capacity_table]. Error message:IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
> failed: nodeName=TablesAmountCapacityTest_cluster_0, 
> queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, 
> originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907
>   at 
> org.apache.ignite.internal.jdbc.proto.IgniteQueryErrorCode.createJdbcSqlException(IgniteQueryErrorCode.java:57)
>   at 
> org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:149)
>   at 
> org.apache.ignite.internal.jdbc.JdbcStatement.executeQuery(JdbcStatement.java:108)
>  {code}
> Logs are in the attachment.
> [^logs.zip]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-19488) RemoteFragmentExecutionException when inserting more than 30 000 rows into one table

2023-05-18 Thread Igor (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17723807#comment-17723807
 ] 

Igor commented on IGNITE-19488:
---

If errors in `SELECT count(*)` are ignored, then it is possible to insert at 
least 1 million rows into one table (the test didn't try to insert more). Logs are in 
the attachment, but the error happens pretty frequently.
 [^logs_with_ignored_erorr.zip]

> RemoteFragmentExecutionException when inserting more than 30 000 rows into 
> one table
> 
>
> Key: IGNITE-19488
> URL: https://issues.apache.org/jira/browse/IGNITE-19488
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc, sql
>Reporter: Igor
>Priority: Critical
>  Labels: ignite-3
> Attachments: logs.zip, logs_with_ignored_erorr.zip
>
>
> h1. Steps to reproduce
> Ignite 3 main branch commit 45380a6c802203dab0d72bd1eb9fb202b2a345b0
>  # Create a table with 5 columns.
>  # Insert rows into the table in batches of 1000 rows each.
>  # Repeat the previous step until an exception is thrown.
> h1. Expected behaviour
> More than 30 000 rows are created.
> h1. Actual behaviour
> An exception is thrown after 29 000 rows are inserted:
> {code:java}
> Exception while executing query [query=SELECT COUNT(*) FROM 
> rows_capacity_table]. Error message:IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
> failed: nodeName=TablesAmountCapacityTest_cluster_0, 
> queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, 
> originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907
> java.sql.SQLException: Exception while executing query [query=SELECT COUNT(*) 
> FROM rows_capacity_table]. Error message:IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
> failed: nodeName=TablesAmountCapacityTest_cluster_0, 
> queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, 
> originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907
>   at 
> org.apache.ignite.internal.jdbc.proto.IgniteQueryErrorCode.createJdbcSqlException(IgniteQueryErrorCode.java:57)
>   at 
> org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:149)
>   at 
> org.apache.ignite.internal.jdbc.JdbcStatement.executeQuery(JdbcStatement.java:108)
>  {code}
> Logs are in the attachment.
> [^logs.zip]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19488) RemoteFragmentExecutionException when inserting more than 30 000 rows into one table

2023-05-18 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-19488:
--
Attachment: logs_with_ignored_erorr.zip

> RemoteFragmentExecutionException when inserting more than 30 000 rows into 
> one table
> 
>
> Key: IGNITE-19488
> URL: https://issues.apache.org/jira/browse/IGNITE-19488
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc, sql
>Reporter: Igor
>Priority: Critical
>  Labels: ignite-3
> Attachments: logs.zip, logs_with_ignored_erorr.zip
>
>
> h1. Steps to reproduce
> Ignite 3 main branch commit 45380a6c802203dab0d72bd1eb9fb202b2a345b0
>  # Create a table with 5 columns.
>  # Insert rows into the table in batches of 1000 rows each.
>  # Repeat the previous step until an exception is thrown.
> h1. Expected behaviour
> More than 30 000 rows are created.
> h1. Actual behaviour
> An exception is thrown after 29 000 rows are inserted:
> {code:java}
> Exception while executing query [query=SELECT COUNT(*) FROM 
> rows_capacity_table]. Error message:IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
> failed: nodeName=TablesAmountCapacityTest_cluster_0, 
> queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, 
> originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907
> java.sql.SQLException: Exception while executing query [query=SELECT COUNT(*) 
> FROM rows_capacity_table]. Error message:IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
> failed: nodeName=TablesAmountCapacityTest_cluster_0, 
> queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, 
> originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907
>   at 
> org.apache.ignite.internal.jdbc.proto.IgniteQueryErrorCode.createJdbcSqlException(IgniteQueryErrorCode.java:57)
>   at 
> org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:149)
>   at 
> org.apache.ignite.internal.jdbc.JdbcStatement.executeQuery(JdbcStatement.java:108)
>  {code}
> Logs are in the attachment.
> [^logs.zip]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-19488) RemoteFragmentExecutionException when inserting more than 30 000 rows into one table

2023-05-17 Thread Igor (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17723474#comment-17723474
 ] 

Igor commented on IGNITE-19488:
---

[~xtern] here is the code to reproduce the error: 
https://github.com/Lunigorn/ignite3test/tree/rows-capacity-test

> RemoteFragmentExecutionException when inserting more than 30 000 rows into 
> one table
> 
>
> Key: IGNITE-19488
> URL: https://issues.apache.org/jira/browse/IGNITE-19488
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc, sql
>Reporter: Igor
>Priority: Critical
>  Labels: ignite-3
> Attachments: logs.zip
>
>
> h1. Steps to reproduce
> Ignite 3 main branch commit 45380a6c802203dab0d72bd1eb9fb202b2a345b0
>  # Create a table with 5 columns.
>  # Insert rows into the table in batches of 1000 rows each.
>  # Repeat the previous step until an exception is thrown.
> h1. Expected behaviour
> More than 30 000 rows are created.
> h1. Actual behaviour
> An exception is thrown after 29 000 rows are inserted:
> {code:java}
> Exception while executing query [query=SELECT COUNT(*) FROM 
> rows_capacity_table]. Error message:IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
> failed: nodeName=TablesAmountCapacityTest_cluster_0, 
> queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, 
> originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907
> java.sql.SQLException: Exception while executing query [query=SELECT COUNT(*) 
> FROM rows_capacity_table]. Error message:IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
> failed: nodeName=TablesAmountCapacityTest_cluster_0, 
> queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, 
> originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907
>   at 
> org.apache.ignite.internal.jdbc.proto.IgniteQueryErrorCode.createJdbcSqlException(IgniteQueryErrorCode.java:57)
>   at 
> org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:149)
>   at 
> org.apache.ignite.internal.jdbc.JdbcStatement.executeQuery(JdbcStatement.java:108)
>  {code}
> Logs are in the attachment.
> [^logs.zip]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19488) RemoteFragmentExecutionException when inserting more than 30 000 rows into one table

2023-05-16 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-19488:
--
Description: 
h1. Steps to reproduce

Ignite 3 main branch commit 45380a6c802203dab0d72bd1eb9fb202b2a345b0
 # Create a table with 5 columns.
 # Insert rows into the table in batches of 1000 rows each.
 # Repeat the previous step until an exception is thrown.

h1. Expected behaviour

More than 30 000 rows are created.
h1. Actual behaviour

An exception is thrown after 29 000 rows are inserted:
{code:java}
Exception while executing query [query=SELECT COUNT(*) FROM 
rows_capacity_table]. Error message:IGN-CMN-1 
TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
failed: nodeName=TablesAmountCapacityTest_cluster_0, 
queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, 
originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907
java.sql.SQLException: Exception while executing query [query=SELECT COUNT(*) 
FROM rows_capacity_table]. Error message:IGN-CMN-1 
TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
failed: nodeName=TablesAmountCapacityTest_cluster_0, 
queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, 
originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907
at 
org.apache.ignite.internal.jdbc.proto.IgniteQueryErrorCode.createJdbcSqlException(IgniteQueryErrorCode.java:57)
at 
org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:149)
at 
org.apache.ignite.internal.jdbc.JdbcStatement.executeQuery(JdbcStatement.java:108)
 {code}
Logs are in the attachment.

[^logs.zip]

  was:
h1. Steps to reproduce
 # Create a table with 5 columns.
 # Insert rows into the table in batches of 1000 rows each.
 # Repeat the previous step until an exception is thrown.

h1. Expected behaviour

More than 30 000 rows are created.
h1. Actual behaviour

An exception is thrown after 29 000 rows are inserted:
{code:java}
Exception while executing query [query=SELECT COUNT(*) FROM 
rows_capacity_table]. Error message:IGN-CMN-1 
TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
failed: nodeName=TablesAmountCapacityTest_cluster_0, 
queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, 
originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907
java.sql.SQLException: Exception while executing query [query=SELECT COUNT(*) 
FROM rows_capacity_table]. Error message:IGN-CMN-1 
TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
failed: nodeName=TablesAmountCapacityTest_cluster_0, 
queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, 
originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907
at 
org.apache.ignite.internal.jdbc.proto.IgniteQueryErrorCode.createJdbcSqlException(IgniteQueryErrorCode.java:57)
at 
org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:149)
at 
org.apache.ignite.internal.jdbc.JdbcStatement.executeQuery(JdbcStatement.java:108)
 {code}
Logs are in the attachment.

[^logs.zip]


> RemoteFragmentExecutionException when inserting more than 30 000 rows into 
> one table
> 
>
> Key: IGNITE-19488
> URL: https://issues.apache.org/jira/browse/IGNITE-19488
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc, sql
>Reporter: Igor
>Priority: Critical
> Attachments: logs.zip
>
>
> h1. Steps to reproduce
> Ignite 3 main branch commit 45380a6c802203dab0d72bd1eb9fb202b2a345b0
>  # Create a table with 5 columns.
>  # Insert rows into the table in batches of 1000 rows each.
>  # Repeat the previous step until an exception is thrown.
> h1. Expected behaviour
> More than 30 000 rows are created.
> h1. Actual behaviour
> An exception is thrown after 29 000 rows are inserted:
> {code:java}
> Exception while executing query [query=SELECT COUNT(*) FROM 
> rows_capacity_table]. Error message:IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
> failed: nodeName=TablesAmountCapacityTest_cluster_0, 
> queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, 
> originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907
> java.sql.SQLException: Exception while executing query [query=SELECT COUNT(*) 
> FROM rows_capacity_table]. Error message:IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
> failed: 

[jira] [Commented] (IGNITE-19247) BatchUpdateException: Replication is timed out" upon inserting rows in batches via JDBC

2023-05-16 Thread Igor (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17723215#comment-17723215
 ] 

Igor commented on IGNITE-19247:
---

[~xtern] there are no such words in the logs. The full bug description and logs are here: 
https://issues.apache.org/jira/browse/IGNITE-19488

> BatchUpdateException: Replication is timed out" upon inserting rows in 
> batches via JDBC
> ---
>
> Key: IGNITE-19247
> URL: https://issues.apache.org/jira/browse/IGNITE-19247
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 3.0
>Reporter: Alexander Belyak
>Assignee: Pavel Pereslegin
>Priority: Critical
>  Labels: ignite-3
> Fix For: 3.0
>
> Attachments: ReplicationTimeoutReproducerClientLog.zip, 
> node_0.log.zip, node_1.log.zip, serverLog.zip, test.log
>
>
> Start single node cluster:
> {noformat}
> git commit 78946d4c
> https://github.com/apache/ignite-3.git branch main
> build by:
>     ./gradlew clean allDistZip -x test -x integrationTest -x check -x 
> modernizer 
> start by:  
>     /tmp/ignite3-3.0.0-SNAPSHOT/ignite3-db-3.0.0-SNAPSHOT$ export 
> IGNITE_HOME=$(pwd)
>     /tmp/ignite3-3.0.0-SNAPSHOT/ignite3-db-3.0.0-SNAPSHOT$ bin/ignite3db start
>         Starting Ignite 3...
>         Node named defaultNode started successfully. REST addresses are 
> [http://127.0.1.1:10300]
>     /tmp/ignite3-3.0.0-SNAPSHOT/ignite3-cli-3.0.0-SNAPSHOT$ bin/ignite3 
> cluster init --cluster-endpoint-url=http://localhost:10300 --cluster-name=c1 
> --meta-storage-node=defaultNode
>         Cluster was initialized successfully{noformat}
> The code below creates TABLES tables with COLUMNS columns each (an int key and
> varchar cols) and inserts ROWS rows into each table, with a SLEEP ms interval
> between operations and up to RETRY attempts per batch (see the constants in the code).
>  
> {noformat}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.PreparedStatement;
> import java.sql.ResultSet;
> import java.sql.SQLException;
> import java.sql.Statement;
> public class TimeoutExceptionReproducer {
>     private static final String DB_URL = "jdbc:ignite:thin://127.0.0.1:10800";
>     private static final int COLUMNS = 10;
>     private static final String TABLE_NAME = "K";
>     private static final int ROWS = 10;
>     private static final int TABLES = 3;
>     private static final int BATCH_SIZE = 100;
>     private static final int SLEEP = 0;
>     private static final int RETRY = 1;
>
>     // Builds "create table <name> (id int primary key, col0 varchar NOT NULL, ...)".
>     private static String getCreateSql(String tableName) {
>         StringBuilder sql = new StringBuilder("create table ").append(tableName).append(" (id int primary key");
>         for (int i = 0; i < COLUMNS; i++) {
>             sql.append(", col").append(i).append(" varchar NOT NULL");
>         }
>         sql.append(")");
>         return sql.toString();
>     }
>
>     // Optional pause between operations, controlled by SLEEP.
>     private static final void s() {
>         if (SLEEP > 0) {
>             try {
>                 Thread.sleep(SLEEP);
>             } catch (InterruptedException e) {
>                 // NoOp
>             }
>         }
>     }
>
>     // Drops the table if it already exists and recreates it.
>     private static void createTables(Connection connection, String tableName) throws SQLException {
>         try (Statement stmt = connection.createStatement()) {
>             System.out.println("Creating " + tableName);
>             stmt.executeUpdate("drop table if exists " + tableName);
>             s();
>             stmt.executeUpdate(getCreateSql(tableName));
>             s();
>         }
>     }
>
>     // Builds "insert into <name> values(?, ?, ...)".
>     private static String getInsertSql(String tableName) {
>         StringBuilder sql = new StringBuilder("insert into ").append(tableName).append(" values(?");
>         for (int i = 0; i < COLUMNS; i++) {
>             sql.append(", ?");
>         }
>         sql.append(")");
>         return sql.toString();
>     }
>
>     // Executes the accumulated batch, retrying up to RETRY times on SQLException.
>     private static void insertBatch(PreparedStatement ps) {
>         int retryCounter = 0;
>         while (retryCounter <= RETRY) {
>             try {
>                 ps.executeBatch();
>                 return;
>             } catch (SQLException e) {
>                 System.err.println(retryCounter + " error while executing " + ps + ":" + e);
>                 retryCounter++;
>             }
>         }
>     }
>
>     // Fills the table row by row, adding each row to the JDBC batch.
>     private static void insertData(Connection connection, String tableName) throws SQLException {
>         long ts = System.currentTimeMillis();
>         try (PreparedStatement ps = connection.prepareStatement(getInsertSql(tableName))) {
>             int batch = 0;
>             for (int i = 0; i < ROWS; i++) {
>                 ps.setInt(1, i);
>                 for (int j = 2; j < COLUMNS + 2; j++) {
>                     ps.setString(j, "value" + i + "_" + j);
>                 }
>                 ps.addBatch();
>
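
The quoted reproducer is cut off above by the mail archive before insertData finishes. For readers who want something runnable, below is a compact, self-contained sketch of the same pattern (several tables, batched inserts flushed every BATCH_SIZE rows, one retry on failure); the class and method names, and everything past the visible ps.addBatch() call, are assumptions based on the constants shown above, not the missing part of the original code.
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class BatchInsertRepro {
    private static final String DB_URL = "jdbc:ignite:thin://127.0.0.1:10800";
    private static final int TABLES = 3;
    private static final int COLUMNS = 10;
    private static final int ROWS = 10;
    private static final int BATCH_SIZE = 100;
    private static final int RETRY = 1;

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(DB_URL)) {
            for (int t = 0; t < TABLES; t++) {
                String table = "K" + t;

                // Recreate the table with an int key and COLUMNS varchar columns.
                try (Statement stmt = conn.createStatement()) {
                    stmt.executeUpdate("drop table if exists " + table);
                    StringBuilder create = new StringBuilder("create table " + table + " (id int primary key");
                    for (int c = 0; c < COLUMNS; c++) {
                        create.append(", col").append(c).append(" varchar NOT NULL");
                    }
                    stmt.executeUpdate(create.append(")").toString());
                }

                // Parameterized insert with one placeholder per column plus the key.
                StringBuilder insert = new StringBuilder("insert into " + table + " values(?");
                for (int c = 0; c < COLUMNS; c++) {
                    insert.append(", ?");
                }
                insert.append(")");

                try (PreparedStatement ps = conn.prepareStatement(insert.toString())) {
                    for (int i = 0; i < ROWS; i++) {
                        ps.setInt(1, i);
                        for (int j = 2; j < COLUMNS + 2; j++) {
                            ps.setString(j, "value" + i + "_" + j);
                        }
                        ps.addBatch();
                        // Flush every BATCH_SIZE rows and at the end; retry once on failure
                        // (e.g. the reported "BatchUpdateException: Replication is timed out").
                        if ((i + 1) % BATCH_SIZE == 0 || i == ROWS - 1) {
                            executeWithRetry(ps);
                        }
                    }
                }
            }
        }
    }

    private static void executeWithRetry(PreparedStatement ps) {
        for (int attempt = 0; attempt <= RETRY; attempt++) {
            try {
                ps.executeBatch();
                return;
            } catch (SQLException e) {
                System.err.println("attempt " + attempt + " failed: " + e);
            }
        }
    }
}
{code}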

[jira] [Updated] (IGNITE-19488) RemoteFragmentExecutionException when inserting more than 30 000 rows into one table

2023-05-16 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-19488:
--
Description: 
h1. Steps to reproduce
 # Create a table with 5 columns.
 # Insert rows into the table in batches of 1000 rows each.
 # Repeat the previous step until an exception is thrown.

h1. Expected behaviour

More than 30 000 rows are created.
h1. Actual behaviour

An exception is thrown after 29 000 rows are inserted:
{code:java}
Exception while executing query [query=SELECT COUNT(*) FROM 
rows_capacity_table]. Error message:IGN-CMN-1 
TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
failed: nodeName=TablesAmountCapacityTest_cluster_0, 
queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, 
originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907
java.sql.SQLException: Exception while executing query [query=SELECT COUNT(*) 
FROM rows_capacity_table]. Error message:IGN-CMN-1 
TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
failed: nodeName=TablesAmountCapacityTest_cluster_0, 
queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, 
originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907
at 
org.apache.ignite.internal.jdbc.proto.IgniteQueryErrorCode.createJdbcSqlException(IgniteQueryErrorCode.java:57)
at 
org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:149)
at 
org.apache.ignite.internal.jdbc.JdbcStatement.executeQuery(JdbcStatement.java:108)
 {code}
Logs are in the attachment.

[^logs.zip]

  was:
h1. Steps to reproduce
 # Create a table with 5 columns.
 # Insert rows into the table in batches of 1000 rows each.
 # Repeat the previous step until an exception is thrown.

h1. Expected behaviour

More than 30 000 rows are created.
h1. Actual behaviour

An exception:
{code:java}
Exception while executing query [query=SELECT COUNT(*) FROM 
rows_capacity_table]. Error message:IGN-CMN-1 
TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
failed: nodeName=TablesAmountCapacityTest_cluster_0, 
queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, 
originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907
java.sql.SQLException: Exception while executing query [query=SELECT COUNT(*) 
FROM rows_capacity_table]. Error message:IGN-CMN-1 
TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
failed: nodeName=TablesAmountCapacityTest_cluster_0, 
queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, 
originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907
at 
org.apache.ignite.internal.jdbc.proto.IgniteQueryErrorCode.createJdbcSqlException(IgniteQueryErrorCode.java:57)
at 
org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:149)
at 
org.apache.ignite.internal.jdbc.JdbcStatement.executeQuery(JdbcStatement.java:108)
 {code}
Logs are in the attachment.

[^logs.zip]


> RemoteFragmentExecutionException when inserting more than 30 000 rows into 
> one table
> 
>
> Key: IGNITE-19488
> URL: https://issues.apache.org/jira/browse/IGNITE-19488
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc, sql
>Reporter: Igor
>Priority: Critical
> Attachments: logs.zip
>
>
> h1. Steps to reproduce
>  # Create a table with 5 columns.
>  # Insert rows into the table in batches of 1000 rows each.
>  # Repeat the previous step until an exception is thrown.
> h1. Expected behaviour
> More than 30 000 rows are created.
> h1. Actual behaviour
> An exception is thrown after 29 000 rows are inserted:
> {code:java}
> Exception while executing query [query=SELECT COUNT(*) FROM 
> rows_capacity_table]. Error message:IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
> failed: nodeName=TablesAmountCapacityTest_cluster_0, 
> queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, 
> originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907
> java.sql.SQLException: Exception while executing query [query=SELECT COUNT(*) 
> FROM rows_capacity_table]. Error message:IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
> failed: nodeName=TablesAmountCapacityTest_cluster_0, 
> queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, 
> originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907
>   at 
> 

[jira] [Created] (IGNITE-19488) RemoteFragmentExecutionException when inserting more than 30 000 rows into one table

2023-05-16 Thread Igor (Jira)
Igor created IGNITE-19488:
-

 Summary: RemoteFragmentExecutionException when inserting more than 
30 000 rows into one table
 Key: IGNITE-19488
 URL: https://issues.apache.org/jira/browse/IGNITE-19488
 Project: Ignite
  Issue Type: Bug
  Components: jdbc, sql
Reporter: Igor
 Attachments: logs.zip

h1. Steps to reproduce
 # Create a table with 5 columns.
 # Insert rows into the table in batches of 1000 rows each.
 # Repeat the previous step until an exception is thrown.

h1. Expected behaviour

More than 30 000 rows are created.
h1. Actual behaviour

An exception:
{code:java}
Exception while executing query [query=SELECT COUNT(*) FROM 
rows_capacity_table]. Error message:IGN-CMN-1 
TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
failed: nodeName=TablesAmountCapacityTest_cluster_0, 
queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, 
originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907
java.sql.SQLException: Exception while executing query [query=SELECT COUNT(*) 
FROM rows_capacity_table]. Error message:IGN-CMN-1 
TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
failed: nodeName=TablesAmountCapacityTest_cluster_0, 
queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, 
originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907
at 
org.apache.ignite.internal.jdbc.proto.IgniteQueryErrorCode.createJdbcSqlException(IgniteQueryErrorCode.java:57)
at 
org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:149)
at 
org.apache.ignite.internal.jdbc.JdbcStatement.executeQuery(JdbcStatement.java:108)
 {code}
Logs are in the attachment.

[^logs.zip]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19488) RemoteFragmentExecutionException when inserting more than 30 000 rows into one table

2023-05-16 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-19488:
--
Priority: Critical  (was: Major)

> RemoteFragmentExecutionException when inserting more than 30 000 rows into 
> one table
> 
>
> Key: IGNITE-19488
> URL: https://issues.apache.org/jira/browse/IGNITE-19488
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc, sql
>Reporter: Igor
>Priority: Critical
> Attachments: logs.zip
>
>
> h1. Steps to reproduce
>  # Create a table with 5 columns.
>  # Insert rows into the table in batches of 1000 rows each.
>  # Repeat the previous step until an exception is thrown.
> h1. Expected behaviour
> More than 30 000 rows are created.
> h1. Actual behaviour
> An exception:
> {code:java}
> Exception while executing query [query=SELECT COUNT(*) FROM 
> rows_capacity_table]. Error message:IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
> failed: nodeName=TablesAmountCapacityTest_cluster_0, 
> queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, 
> originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907
> java.sql.SQLException: Exception while executing query [query=SELECT COUNT(*) 
> FROM rows_capacity_table]. Error message:IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 
> TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution 
> failed: nodeName=TablesAmountCapacityTest_cluster_0, 
> queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, 
> originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907
>   at 
> org.apache.ignite.internal.jdbc.proto.IgniteQueryErrorCode.createJdbcSqlException(IgniteQueryErrorCode.java:57)
>   at 
> org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:149)
>   at 
> org.apache.ignite.internal.jdbc.JdbcStatement.executeQuery(JdbcStatement.java:108)
>  {code}
> Logs are in the attachment.
> [^logs.zip]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

