[jira] [Updated] (IGNITE-19587) Sql. Remove execution-related part from IgniteTable

2023-06-13 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-19587:
--
Priority: Major  (was: Minor)

> Sql. Remove execution-related part from IgniteTable 
> 
>
> Key: IGNITE-19587
> URL: https://issues.apache.org/jira/browse/IGNITE-19587
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Assignee: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Currently, the {{org.apache.ignite.internal.sql.engine.schema.IgniteTable}} 
> interface exposes the internal table that is used to access the data. This was 
> convenient because the event produced by {{TableManager}} contains both the table 
> object and its descriptor.
> With the upcoming {{CatalogService}} this will no longer be the case, because the 
> catalog manages objects' descriptors only.
> We need to rework {{LogicalRelImplementor}} so that it acquires the table object 
> from a manager rather than from {{IgniteTable}}. This makes the migration to 
> {{CatalogService}} possible.
> h4. Implementation Note
>  * It would be nice to keep {{LogicalRelImplementor}} synchronous, awaiting the 
> objects' futures outside of it (see the sketch below)
>  * The method {{org.apache.ignite.internal.sql.engine.schema.IgniteTable#table}} 
> should be removed, as well as {{UpdateableTable}} from the extends list of 
> {{org.apache.ignite.internal.sql.engine.schema.IgniteTableImpl}}
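> A minimal sketch (not part of the original ticket) of the intended call pattern; 
> {{ExecutableTable}} and {{ExecutableTableRegistry}} are illustration-only names used 
> here as assumptions, not confirmed APIs:
> {code:java}
> import java.util.concurrent.CompletableFuture;
> 
> interface ExecutableTable { }
> 
> interface ExecutableTableRegistry {
>     CompletableFuture<ExecutableTable> executableTable(int tableId);
> }
> 
> class Caller {
>     ExecutableTable resolve(ExecutableTableRegistry registry, int tableId) {
>         // The future is awaited here, outside of LogicalRelImplementor, so the
>         // implementor itself stays synchronous and receives a resolved object.
>         return registry.executableTable(tableId).join();
>     }
> }
> {code}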



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19587) Sql. Remove execution-related part from IgniteTable

2023-06-13 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-19587:
--
Priority: Minor  (was: Major)

> Sql. Remove execution-related part from IgniteTable 
> 
>
> Key: IGNITE-19587
> URL: https://issues.apache.org/jira/browse/IGNITE-19587
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Assignee: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Currently, the {{org.apache.ignite.internal.sql.engine.schema.IgniteTable}} 
> interface exposes the internal table that is used to access the data. This was 
> convenient because the event produced by {{TableManager}} contains both the table 
> object and its descriptor.
> With the upcoming {{CatalogService}} this will no longer be the case, because the 
> catalog manages objects' descriptors only.
> We need to rework {{LogicalRelImplementor}} so that it acquires the table object 
> from a manager rather than from {{IgniteTable}}. This makes the migration to 
> {{CatalogService}} possible.
> h4. Implementation Note
>  * It would be nice to keep {{LogicalRelImplementor}} synchronous, awaiting the 
> objects' futures outside of it
>  * The method {{org.apache.ignite.internal.sql.engine.schema.IgniteTable#table}} 
> should be removed, as well as {{UpdateableTable}} from the extends list of 
> {{org.apache.ignite.internal.sql.engine.schema.IgniteTableImpl}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-19729) [ducktests] Support sqlConfiguration field in ignite configuration

2023-06-13 Thread Sergey Korotkov (Jira)
Sergey Korotkov created IGNITE-19729:


 Summary: [ducktests] Support sqlConfiguration field in ignite 
configuration
 Key: IGNITE-19729
 URL: https://issues.apache.org/jira/browse/IGNITE-19729
 Project: Ignite
  Issue Type: Task
Reporter: Sergey Korotkov
Assignee: Sergey Korotkov






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19726) Sql. Migrate index operations to ScanableTable.

2023-06-13 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-19726:
--
Description: 
When https://issues.apache.org/jira/browse/IGNITE-19587 is complete, it becomes 
possible to move execution-related index methods to ScanableTable, thus 
removing the remaining execution-related objects from IgniteTableImpl:

- IndexScanNode: index scans/lookups should be performed via ScanableTable APIs.
- StorageNode.convertPublisher should be moved to ScanableTableImpl.
- The TableRowConverter accessor should be removed from ExecutableTable.


  was:
When https://issues.apache.org/jira/browse/IGNITE-19587 is complete, it becomes 
possible to move execution-related index methods to ScanableTable, thus 
removing the remaining execution-related objects from IgniteTableImpl:

- IndexScanNode: index scans/lookups should be performed via ScanableTable APIs.
- StorageNode.convertPublisher should be moved to ScanableTableImpl.



> Sql. Migrate index operations to ScanableTable.
> ---
>
> Key: IGNITE-19726
> URL: https://issues.apache.org/jira/browse/IGNITE-19726
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
>
> When https://issues.apache.org/jira/browse/IGNITE-19587 is complete, it becomes 
> possible to move execution-related index methods to ScanableTable, thus 
> removing the remaining execution-related objects from IgniteTableImpl:
> - IndexScanNode: index scans/lookups should be performed via ScanableTable APIs (a 
> rough sketch follows below).
> - StorageNode.convertPublisher should be moved to ScanableTableImpl.
> - The TableRowConverter accessor should be removed from ExecutableTable.
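> A rough sketch of what such an API could look like; the interface name 
> {{ScanableTableSketch}} and the method signatures are assumptions for illustration, 
> not the actual ScanableTable interface:
> {code:java}
> import java.util.BitSet;
> import java.util.concurrent.Flow.Publisher;
> 
> // Hypothetical shape: both range scans and point lookups over an index go through
> // a scan-oriented facade, so IgniteTableImpl no longer needs execution-time state.
> interface ScanableTableSketch<RowT> {
>     /** Range scan over an index in the given partition; rows are published asynchronously. */
>     Publisher<RowT> indexRangeScan(int partitionId, int indexId, BitSet requiredColumns);
> 
>     /** Point lookup by an index key. */
>     Publisher<RowT> indexLookup(int partitionId, int indexId, RowT key);
> }
> {code}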



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-19728) .NET: DataStreamerTests.TestAutoFlushFrequency is flaky

2023-06-13 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn reassigned IGNITE-19728:
---

Assignee: Pavel Tupitsyn

> .NET: DataStreamerTests.TestAutoFlushFrequency is flaky
> ---
>
> Key: IGNITE-19728
> URL: https://issues.apache.org/jira/browse/IGNITE-19728
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms, thin client
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, ignite-3
>
> *DataStreamerTests.TestAutoFlushFrequency(True)*:
> * History: 
> https://ci.ignite.apache.org/test/4035794459336688174?currentProjectId=ApacheIgnite3xGradle_Test=true
> * Failure: 
> https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_RunNetTests/7291573?hideProblemsFromDependencies=false=false=true=true=overview=7291573_31617_.1117=debug=flowAware
> {code}
>  Failed TestAutoFlushFrequency(True) [256 ms]
> 14:54:54   Error Message:
> 14:54:54  Expected: True
> 14:54:54   But was:  False
> 14:54:54 
> 14:54:54   Stack Trace:
> 14:54:54  at 
> Apache.Ignite.Tests.Table.DataStreamerTests.TestAutoFlushFrequency(Boolean 
> enabled) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite.Tests/Table/DataStreamerTests.cs:line
>  103
> 14:54:54at 
> NUnit.Framework.Internal.TaskAwaitAdapter.GenericAdapter`1.BlockUntilCompleted()
> 14:54:54at 
> NUnit.Framework.Internal.MessagePumpStrategy.NoMessagePumpStrategy.WaitForCompletion(AwaitAdapter
>  awaiter)
> 14:54:54at NUnit.Framework.Internal.AsyncToSyncAdapter.Await(Func`1 
> invoke)
> 14:54:54at 
> NUnit.Framework.Internal.Commands.TestMethodCommand.RunTestMethod(TestExecutionContext
>  context)
> 14:54:54at 
> NUnit.Framework.Internal.Commands.TestMethodCommand.Execute(TestExecutionContext
>  context)
> 14:54:54at 
> NUnit.Framework.Internal.Commands.BeforeAndAfterTestCommand.<>c__DisplayClass1_0.b__0()
> 14:54:54at 
> NUnit.Framework.Internal.Commands.DelegatingTestCommand.RunTestMethodInThreadAbortSafeZone(TestExecutionContext
>  context, Action action)
> 14:54:54 
> 14:54:54 1)at 
> Apache.Ignite.Tests.Table.DataStreamerTests.TestAutoFlushFrequency(Boolean 
> enabled) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite.Tests/Table/DataStreamerTests.cs:line
>  103
> 14:54:54 
> 14:54:54 
> 14:54:54   Standard Output Messages:
> 14:54:54  [17:54:53] [Trace] [ClientSocket-3984] Sending request 
> [op=SchemasGet, remoteAddress=127.0.0.1:10942, requestId=3]
> 14:54:54  [17:54:53] [Debug] [Table] Schema loaded [tableId=1, 
> schemaVersion=1]
> 14:54:54  [17:54:53] [Trace] [ClientSocket-3985] Sending request 
> [op=PartitionAssignmentGet, remoteAddress=127.0.0.1:10943, requestId=2]
> 14:54:54  [17:54:53] [Trace] [ClientSocket-3984] Sending request 
> [op=TupleDeleteAll, remoteAddress=127.0.0.1:10942, requestId=4]
> 14:54:54  [17:54:53] [Trace] [ClientSocket-3985] Sending request 
> [op=TupleUpsertAll, remoteAddress=127.0.0.1:10943, requestId=3]
> 14:54:54  [17:54:53] [Trace] [ClientSocket-3984] Sending request 
> [op=TupleContainsKey, remoteAddress=127.0.0.1:10942, requestId=5]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-19728) .NET: DataStreamerTests.TestAutoFlushFrequency is flaky

2023-06-13 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-19728:
---

 Summary: .NET: DataStreamerTests.TestAutoFlushFrequency is flaky
 Key: IGNITE-19728
 URL: https://issues.apache.org/jira/browse/IGNITE-19728
 Project: Ignite
  Issue Type: Bug
  Components: platforms, thin client
Reporter: Pavel Tupitsyn


*DataStreamerTests.TestAutoFlushFrequency(True)*:

* History: 
https://ci.ignite.apache.org/test/4035794459336688174?currentProjectId=ApacheIgnite3xGradle_Test=true
* Failure: 
https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_RunNetTests/7291573?hideProblemsFromDependencies=false=false=true=true=overview=7291573_31617_.1117=debug=flowAware

{code}
 Failed TestAutoFlushFrequency(True) [256 ms]
14:54:54   Error Message:
14:54:54  Expected: True
14:54:54   But was:  False
14:54:54 
14:54:54   Stack Trace:
14:54:54  at 
Apache.Ignite.Tests.Table.DataStreamerTests.TestAutoFlushFrequency(Boolean 
enabled) in 
/opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite.Tests/Table/DataStreamerTests.cs:line
 103
14:54:54at 
NUnit.Framework.Internal.TaskAwaitAdapter.GenericAdapter`1.BlockUntilCompleted()
14:54:54at 
NUnit.Framework.Internal.MessagePumpStrategy.NoMessagePumpStrategy.WaitForCompletion(AwaitAdapter
 awaiter)
14:54:54at NUnit.Framework.Internal.AsyncToSyncAdapter.Await(Func`1 
invoke)
14:54:54at 
NUnit.Framework.Internal.Commands.TestMethodCommand.RunTestMethod(TestExecutionContext
 context)
14:54:54at 
NUnit.Framework.Internal.Commands.TestMethodCommand.Execute(TestExecutionContext
 context)
14:54:54at 
NUnit.Framework.Internal.Commands.BeforeAndAfterTestCommand.<>c__DisplayClass1_0.b__0()
14:54:54at 
NUnit.Framework.Internal.Commands.DelegatingTestCommand.RunTestMethodInThreadAbortSafeZone(TestExecutionContext
 context, Action action)
14:54:54 
14:54:54 1)at 
Apache.Ignite.Tests.Table.DataStreamerTests.TestAutoFlushFrequency(Boolean 
enabled) in 
/opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite.Tests/Table/DataStreamerTests.cs:line
 103
14:54:54 
14:54:54 
14:54:54   Standard Output Messages:
14:54:54  [17:54:53] [Trace] [ClientSocket-3984] Sending request 
[op=SchemasGet, remoteAddress=127.0.0.1:10942, requestId=3]
14:54:54  [17:54:53] [Debug] [Table] Schema loaded [tableId=1, 
schemaVersion=1]
14:54:54  [17:54:53] [Trace] [ClientSocket-3985] Sending request 
[op=PartitionAssignmentGet, remoteAddress=127.0.0.1:10943, requestId=2]
14:54:54  [17:54:53] [Trace] [ClientSocket-3984] Sending request 
[op=TupleDeleteAll, remoteAddress=127.0.0.1:10942, requestId=4]
14:54:54  [17:54:53] [Trace] [ClientSocket-3985] Sending request 
[op=TupleUpsertAll, remoteAddress=127.0.0.1:10943, requestId=3]
14:54:54  [17:54:53] [Trace] [ClientSocket-3984] Sending request 
[op=TupleContainsKey, remoteAddress=127.0.0.1:10942, requestId=5]
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19728) .NET: DataStreamerTests.TestAutoFlushFrequency is flaky

2023-06-13 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-19728:

Labels: .NET ignite-3  (was: )

> .NET: DataStreamerTests.TestAutoFlushFrequency is flaky
> ---
>
> Key: IGNITE-19728
> URL: https://issues.apache.org/jira/browse/IGNITE-19728
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms, thin client
>Reporter: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, ignite-3
>
> *DataStreamerTests.TestAutoFlushFrequency(True)*:
> * History: 
> https://ci.ignite.apache.org/test/4035794459336688174?currentProjectId=ApacheIgnite3xGradle_Test=true
> * Failure: 
> https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_RunNetTests/7291573?hideProblemsFromDependencies=false=false=true=true=overview=7291573_31617_.1117=debug=flowAware
> {code}
>  Failed TestAutoFlushFrequency(True) [256 ms]
> 14:54:54   Error Message:
> 14:54:54  Expected: True
> 14:54:54   But was:  False
> 14:54:54 
> 14:54:54   Stack Trace:
> 14:54:54  at 
> Apache.Ignite.Tests.Table.DataStreamerTests.TestAutoFlushFrequency(Boolean 
> enabled) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite.Tests/Table/DataStreamerTests.cs:line
>  103
> 14:54:54at 
> NUnit.Framework.Internal.TaskAwaitAdapter.GenericAdapter`1.BlockUntilCompleted()
> 14:54:54at 
> NUnit.Framework.Internal.MessagePumpStrategy.NoMessagePumpStrategy.WaitForCompletion(AwaitAdapter
>  awaiter)
> 14:54:54at NUnit.Framework.Internal.AsyncToSyncAdapter.Await(Func`1 
> invoke)
> 14:54:54at 
> NUnit.Framework.Internal.Commands.TestMethodCommand.RunTestMethod(TestExecutionContext
>  context)
> 14:54:54at 
> NUnit.Framework.Internal.Commands.TestMethodCommand.Execute(TestExecutionContext
>  context)
> 14:54:54at 
> NUnit.Framework.Internal.Commands.BeforeAndAfterTestCommand.<>c__DisplayClass1_0.b__0()
> 14:54:54at 
> NUnit.Framework.Internal.Commands.DelegatingTestCommand.RunTestMethodInThreadAbortSafeZone(TestExecutionContext
>  context, Action action)
> 14:54:54 
> 14:54:54 1)at 
> Apache.Ignite.Tests.Table.DataStreamerTests.TestAutoFlushFrequency(Boolean 
> enabled) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite.Tests/Table/DataStreamerTests.cs:line
>  103
> 14:54:54 
> 14:54:54 
> 14:54:54   Standard Output Messages:
> 14:54:54  [17:54:53] [Trace] [ClientSocket-3984] Sending request 
> [op=SchemasGet, remoteAddress=127.0.0.1:10942, requestId=3]
> 14:54:54  [17:54:53] [Debug] [Table] Schema loaded [tableId=1, 
> schemaVersion=1]
> 14:54:54  [17:54:53] [Trace] [ClientSocket-3985] Sending request 
> [op=PartitionAssignmentGet, remoteAddress=127.0.0.1:10943, requestId=2]
> 14:54:54  [17:54:53] [Trace] [ClientSocket-3984] Sending request 
> [op=TupleDeleteAll, remoteAddress=127.0.0.1:10942, requestId=4]
> 14:54:54  [17:54:53] [Trace] [ClientSocket-3985] Sending request 
> [op=TupleUpsertAll, remoteAddress=127.0.0.1:10943, requestId=3]
> 14:54:54  [17:54:53] [Trace] [ClientSocket-3984] Sending request 
> [op=TupleContainsKey, remoteAddress=127.0.0.1:10942, requestId=5]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19728) .NET: DataStreamerTests.TestAutoFlushFrequency is flaky

2023-06-13 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-19728:

Ignite Flags:   (was: Docs Required,Release Notes Required)

> .NET: DataStreamerTests.TestAutoFlushFrequency is flaky
> ---
>
> Key: IGNITE-19728
> URL: https://issues.apache.org/jira/browse/IGNITE-19728
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms, thin client
>Reporter: Pavel Tupitsyn
>Priority: Major
>
> *DataStreamerTests.TestAutoFlushFrequency(True)*:
> * History: 
> https://ci.ignite.apache.org/test/4035794459336688174?currentProjectId=ApacheIgnite3xGradle_Test=true
> * Failure: 
> https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_RunNetTests/7291573?hideProblemsFromDependencies=false=false=true=true=overview=7291573_31617_.1117=debug=flowAware
> {code}
>  Failed TestAutoFlushFrequency(True) [256 ms]
> 14:54:54   Error Message:
> 14:54:54  Expected: True
> 14:54:54   But was:  False
> 14:54:54 
> 14:54:54   Stack Trace:
> 14:54:54  at 
> Apache.Ignite.Tests.Table.DataStreamerTests.TestAutoFlushFrequency(Boolean 
> enabled) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite.Tests/Table/DataStreamerTests.cs:line
>  103
> 14:54:54at 
> NUnit.Framework.Internal.TaskAwaitAdapter.GenericAdapter`1.BlockUntilCompleted()
> 14:54:54at 
> NUnit.Framework.Internal.MessagePumpStrategy.NoMessagePumpStrategy.WaitForCompletion(AwaitAdapter
>  awaiter)
> 14:54:54at NUnit.Framework.Internal.AsyncToSyncAdapter.Await(Func`1 
> invoke)
> 14:54:54at 
> NUnit.Framework.Internal.Commands.TestMethodCommand.RunTestMethod(TestExecutionContext
>  context)
> 14:54:54at 
> NUnit.Framework.Internal.Commands.TestMethodCommand.Execute(TestExecutionContext
>  context)
> 14:54:54at 
> NUnit.Framework.Internal.Commands.BeforeAndAfterTestCommand.<>c__DisplayClass1_0.b__0()
> 14:54:54at 
> NUnit.Framework.Internal.Commands.DelegatingTestCommand.RunTestMethodInThreadAbortSafeZone(TestExecutionContext
>  context, Action action)
> 14:54:54 
> 14:54:54 1)at 
> Apache.Ignite.Tests.Table.DataStreamerTests.TestAutoFlushFrequency(Boolean 
> enabled) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite.Tests/Table/DataStreamerTests.cs:line
>  103
> 14:54:54 
> 14:54:54 
> 14:54:54   Standard Output Messages:
> 14:54:54  [17:54:53] [Trace] [ClientSocket-3984] Sending request 
> [op=SchemasGet, remoteAddress=127.0.0.1:10942, requestId=3]
> 14:54:54  [17:54:53] [Debug] [Table] Schema loaded [tableId=1, 
> schemaVersion=1]
> 14:54:54  [17:54:53] [Trace] [ClientSocket-3985] Sending request 
> [op=PartitionAssignmentGet, remoteAddress=127.0.0.1:10943, requestId=2]
> 14:54:54  [17:54:53] [Trace] [ClientSocket-3984] Sending request 
> [op=TupleDeleteAll, remoteAddress=127.0.0.1:10942, requestId=4]
> 14:54:54  [17:54:53] [Trace] [ClientSocket-3985] Sending request 
> [op=TupleUpsertAll, remoteAddress=127.0.0.1:10943, requestId=3]
> 14:54:54  [17:54:53] [Trace] [ClientSocket-3984] Sending request 
> [op=TupleContainsKey, remoteAddress=127.0.0.1:10942, requestId=5]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19560) Thin 3.0: Netty buffer leak in ConfigurationTest

2023-06-13 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-19560:

Priority: Critical  (was: Major)

> Thin 3.0: Netty buffer leak in ConfigurationTest
> 
>
> Key: IGNITE-19560
> URL: https://issues.apache.org/jira/browse/IGNITE-19560
> Project: Ignite
>  Issue Type: Bug
>  Components: thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Critical
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
> Attachments: _Test_Run_Unit_Tests_14560.log
>
>
> {code}
> ClientTupleTest > testTypedGetters() PASSED
>   org.apache.ignite.client.ClientTupleTest.testTypedGettersWithIncorrectType()
>   ClientTupleTest > testTypedGettersWithIncorrectType() PASSED
> org.apache.ignite.client.ConfigurationTest
>   ConfigurationTest STANDARD_ERROR
>   2023-05-24 13:53:59:238 +0300 [INFO][Test worker][ClientHandlerModule] 
> Thin client protocol started successfully [port=10800]
>   2023-05-24 13:53:59:249 +0300 
> [ERROR][nioEventLoopGroup-168-1][ResourceLeakDetector] LEAK: 
> ByteBuf.release() was not called before it's garbage-collected. See 
> https://netty.io/wiki/reference-counted-objects.html for more information.
>   Recent access records:
>   #1:
> 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:300)
> 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
> 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
> 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
> 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
> 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
> 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
> {code}
> https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_RunUnitTests/7247037?hideProblemsFromDependencies=false=false
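> For reference, a minimal, generic Netty sketch (not the Ignite client handler code) 
> of the pattern the leak detector expects: whoever consumes an inbound {{ByteBuf}} 
> must release it:
> {code:java}
> import io.netty.buffer.ByteBuf;
> import io.netty.channel.ChannelHandlerContext;
> import io.netty.channel.ChannelInboundHandlerAdapter;
> import io.netty.util.ReferenceCountUtil;
> 
> class LeakFreeHandler extends ChannelInboundHandlerAdapter {
>     @Override
>     public void channelRead(ChannelHandlerContext ctx, Object msg) {
>         ByteBuf buf = (ByteBuf) msg;
>         try {
>             // ... decode / process the buffer ...
>         } finally {
>             // Without this release the buffer is garbage-collected unreleased,
>             // which is exactly what ResourceLeakDetector reports above.
>             ReferenceCountUtil.release(buf);
>         }
>     }
> }
> {code}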



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19727) Server nodes cannot find each other and log NullPointerException

2023-06-13 Thread Alexander Belyak (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Belyak updated IGNITE-19727:
--
Labels: ignite-3  (was: )

> Server nodes cannot find each other and log NullPointerException
> 
>
> Key: IGNITE-19727
> URL: https://issues.apache.org/jira/browse/IGNITE-19727
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta2
>Reporter: Igor
>Priority: Critical
>  Labels: ignite-3
> Attachments: server1.log.zip, server2.log.zip
>
>
> h2. Steps to reproduce
>  # Version 3.0.0-SNAPSHOT, commit hash 006ddb06e1deb6788e1b2796bc033af14758b132
>  # Copy the DB distribution onto 2 servers.
>  # Set the log level to FINE.
>  # Set up node lookup by changing ignite-config.conf on both servers to
> {code:java}
> {
> network: {
> port: 3344,
> portRange: 10,
> nodeFinder: {
> netClusterNodes: [
> "172.24.1.2:3344,172.24.1.4:3344"
> ]
> }
> }
> } {code}
>  # Start both servers with the command
> {code:java}
> sh ./ignite3db start {code}
>  
> h2. Expected behavior
> The servers join into a single cluster.
> h2. Actual behavior
> Two separate clusters are created, with errors in the log such as:
> {code:java}
> 2023-06-13 16:21:07:178 + [WARNING][main][MembershipProtocol] 
> [default:defaultNode:57294ce834dc4730@172.24.1.2:3344] Exception on initial 
> Sync, cause: java.lang.NullPointerException
> ...
> 2023-06-13 16:21:37:185 + [DEBUG][sc-cluster-3344-1][MembershipProtocol] 
> [default:defaultNode:57294ce834dc4730@172.24.1.2:3344][doSync] Send Sync to 
> 172.24.1.2:3344,172.24.1.4:3344
> 2023-06-13 16:21:37:186 + [DEBUG][sc-cluster-3344-1][MembershipProtocol] 
> [default:defaultNode:57294ce834dc4730@172.24.1.2:3344][doSync] Failed to send 
> Sync to 172.24.1.2:3344,172.24.1.4:3344, cause: 
> java.lang.NullPointerException {code}
> Logs are attached: [^server1.log.zip] [^server2.log.zip]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18390) Calcite engine. Reduce count of created spools during planning

2023-06-13 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-18390:
---
Labels: calcite calcite2-required ise  (was: calcite calcite2-required)

> Calcite engine. Reduce count of created spools during planning
> --
>
> Key: IGNITE-18390
> URL: https://issues.apache.org/jira/browse/IGNITE-18390
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Labels: calcite, calcite2-required, ise
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, when trying to convert traits, a table spool is created for each 
> exchange node, but it is not always required. The root node is not required to be 
> rewindable; rewindability is required only on the right-hand side of a correlated 
> nested loop join node. However, the rewindability trait is propagated from the bottom 
> nodes (table and index scans) to the top nodes, and all nodes are converted to 
> rewindable, which causes redundant spool creation.
> Investigate the possibility of reducing the count of table spools to reduce 
> planning time.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19562) Calcite engine. Make sure all diagnostic tools work

2023-06-13 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-19562:
---
Labels: calcite ise  (was: calcite)

> Calcite engine. Make sure all diagnostic tools work 
> 
>
> Key: IGNITE-19562
> URL: https://issues.apache.org/jira/browse/IGNITE-19562
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Labels: calcite, ise
>
> The Calcite-based SQL engine and the H2-based SQL engine use different paths to run 
> queries. For the H2-based engine we have a lot of diagnostic tools, and some of 
> them may not work with the Calcite-based SQL engine.
> We need to check (write tests) and fix (where broken) the following 
> instruments:
> * Metrics
> * Events
> * Long-running query warnings in log messages
> * Performance statistics
> * Tracing
> * Hiding of sensitive information in diagnostic tools



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-19727) Server nodes cannot find each other and log NullPointerException

2023-06-13 Thread Igor (Jira)
Igor created IGNITE-19727:
-

 Summary: Server nodes cannot find each other and log 
NullPointerException
 Key: IGNITE-19727
 URL: https://issues.apache.org/jira/browse/IGNITE-19727
 Project: Ignite
  Issue Type: Bug
Affects Versions: 3.0.0-beta2
Reporter: Igor
 Attachments: server1.log.zip, server2.log.zip

h2. Steps to reproduce
 # Version 3.0.0-SNAPSHOT, commit hash 006ddb06e1deb6788e1b2796bc033af14758b132
 # Copy the DB distribution onto 2 servers.
 # Set the log level to FINE.
 # Set up node lookup by changing ignite-config.conf on both servers to
{code:java}
{
network: {
port: 3344,
portRange: 10,
nodeFinder: {
netClusterNodes: [
"172.24.1.2:3344,172.24.1.4:3344"
]
}
}
} {code}

 # Start both servers with the command
{code:java}
sh ./ignite3db start {code}
 

h2. Expected behavior

The servers join into a single cluster.
h2. Actual behavior

Two separate clusters are created, with errors in the log such as:
{code:java}
2023-06-13 16:21:07:178 + [WARNING][main][MembershipProtocol] 
[default:defaultNode:57294ce834dc4730@172.24.1.2:3344] Exception on initial 
Sync, cause: java.lang.NullPointerException

...

2023-06-13 16:21:37:185 + [DEBUG][sc-cluster-3344-1][MembershipProtocol] 
[default:defaultNode:57294ce834dc4730@172.24.1.2:3344][doSync] Send Sync to 
172.24.1.2:3344,172.24.1.4:3344
2023-06-13 16:21:37:186 + [DEBUG][sc-cluster-3344-1][MembershipProtocol] 
[default:defaultNode:57294ce834dc4730@172.24.1.2:3344][doSync] Failed to send 
Sync to 172.24.1.2:3344,172.24.1.4:3344, cause: java.lang.NullPointerException 
{code}
Logs are attached: [^server1.log.zip] [^server2.log.zip]
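For comparison, a sketch of the same nodeFinder section with the addresses written as 
separate list entries instead of a single comma-joined string; this is shown only as an 
illustration of the configuration format, not as a confirmed cause of or fix for the 
reported behavior:
{code:java}
{
    network: {
        port: 3344,
        portRange: 10,
        nodeFinder: {
            netClusterNodes: [
                "172.24.1.2:3344",
                "172.24.1.4:3344"
            ]
        }
    }
}
{code}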



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19725) Calcite-2. Add local flag support

2023-06-13 Thread Ivan Daschinsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Daschinsky updated IGNITE-19725:
-
Labels: calcite calcite2-required ise  (was: calcite calcite2-required)

> Calcite-2. Add local flag support
> -
>
> Key: IGNITE-19725
> URL: https://issues.apache.org/jira/browse/IGNITE-19725
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Daschinsky
>Assignee: Ivan Daschinsky
>Priority: Major
>  Labels: calcite, calcite2-required, ise
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19725) Calcite-2. Add local flag support

2023-06-13 Thread Ivan Daschinsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Daschinsky updated IGNITE-19725:
-
Component/s: sql

> Calcite-2. Add local flag support
> -
>
> Key: IGNITE-19725
> URL: https://issues.apache.org/jira/browse/IGNITE-19725
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Ivan Daschinsky
>Assignee: Ivan Daschinsky
>Priority: Major
>  Labels: calcite, calcite2-required, ise
> Fix For: 2.16
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19725) Calcite-2. Add local flag support

2023-06-13 Thread Ivan Daschinsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Daschinsky updated IGNITE-19725:
-
Fix Version/s: 2.16

> Calcite-2. Add local flag support
> -
>
> Key: IGNITE-19725
> URL: https://issues.apache.org/jira/browse/IGNITE-19725
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Daschinsky
>Assignee: Ivan Daschinsky
>Priority: Major
>  Labels: calcite, calcite2-required, ise
> Fix For: 2.16
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-19373) Get rid of waitForIndex() from tests

2023-06-13 Thread Evgeny Stanilovsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgeny Stanilovsky reassigned IGNITE-19373:
---

Assignee: Evgeny Stanilovsky

> Get rid of waitForIndex() from tests
> 
>
> Key: IGNITE-19373
> URL: https://issues.apache.org/jira/browse/IGNITE-19373
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Yury Gerzhedovich
>Assignee: Evgeny Stanilovsky
>Priority: Major
>  Labels: ignite-3
>
> We have a crutch in our tests that waits for indexes to be created on all nodes in a 
> cluster. It seems the other crutches we already have make it possible to get rid of 
> this one.
> See the usage of 
> org.apache.ignite.internal.sql.engine.ClusterPerClassIntegrationTest#waitForIndex



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-19665) High performance drop in key-value put() operations introduced between May 23 and June 5

2023-06-13 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy reassigned IGNITE-19665:
--

Assignee: Roman Puchkovskiy

> High performance drop in key-value put() operations introduced between May 23 
> and June 5
> 
>
> Key: IGNITE-19665
> URL: https://issues.apache.org/jira/browse/IGNITE-19665
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ivan Artiukhov
>Assignee: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3, performance
> Attachments: ai3_embedded_20230606_144606.jfr.zip, 
> ai3_thin_client_20230606_075104.jfr.zip, gc.log.20230606_075104, 
> ignite-config.conf, ignite3db-0.log, run11-new-emb-jfr.txt, ycsb-run10.log
>
>
> This ticket is a product of subsequent work on 
> https://issues.apache.org/jira/browse/IGNITE-19664.
> There is a high (more than 4x on my local machine) performance drop in 
> {{KeyValueView#put}} operations, introduced somewhere between the following 
> commit:
> {noformat}
> commit 0c68cbe3f016e508bd9d53ce5320c88acba1acff (HEAD)
> Author: Slava Koptilin 
> Date:   Tue May 23 10:17:53 2023 +0300
> IGNITE-17883 Removed not implemented 'invoke' functionality (#2090)
> {noformat}
> and the following one:
> {code:java}
> commit a2254434c403bc54685f05e0d6f51bef56abea2a (HEAD -> main, origin/main, 
> origin/HEAD)
> Author: Vadim Pakhnushev <8614891+valep...@users.noreply.github.com>
> Date:   Mon Jun 5 17:43:07 2023 +0300
> IGNITE-19559 NPE in deploy/undeploy calls in non-REPL mode (#2131)
> {code}
> The test is the "Test 1" from 
> https://issues.apache.org/jira/browse/IGNITE-19664, i.e.: 
> 1. Start an Ignite 3 server node with attached {{ignite-config.conf}}. 
> {{raft.fsync=false}} is set in the config.
> 2. Start YCSB client which makes {{KeyValueView#put}} operations within a 
> "100% insert" profile.
> Results for {{0c68cbe3f016e508bd9d53ce5320c88acba1acff}} were as follows:
> {noformat}
> [OVERALL], RunTime(ms), 282482
> [OVERALL], Throughput(ops/sec), 3540.048569466373
> [INSERT], Operations, 100
> [INSERT], AverageLatency(us), 1067.488346
> [INSERT], MinLatency(us), 492
> [INSERT], MaxLatency(us), 421375
> [INSERT], 95thPercentileLatency(us), 2059
> [INSERT], 99thPercentileLatency(us), 5151
> [INSERT], Return=OK, 100
> {noformat}
> Results for {{a2254434c403bc54685f05e0d6f51bef56abea2a}} are more than 4x 
> worse in terms of throughput:
> {code:java}
> [OVERALL], RunTime(ms), 1325870
> [OVERALL], Throughput(ops/sec), 754.2217562807816
> [INSERT], Operations, 100
> [INSERT], AverageLatency(us), 5229.54584
> [INSERT], MinLatency(us), 1297
> [INSERT], MaxLatency(us), 164223
> [INSERT], 95thPercentileLatency(us), 9871
> [INSERT], 99thPercentileLatency(us), 14271
> [INSERT], Return=OK, 100
> {code}
> Logs for {{0c68cbe3f016e508bd9d53ce5320c88acba1acff}}: see 
> https://issues.apache.org/jira/browse/IGNITE-19664
> Logs for {{a2254434c403bc54685f05e0d6f51bef56abea2a}}:
> - node's config:  [^ignite-config.conf] 
> - node's log:  [^ignite3db-0.log] 
> - node's GC log:  [^gc.log.20230606_075104] 
> - YCSB client log:  [^ycsb-run10.log] 
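> For reference, a small snippet (not part of the original report) that simply recomputes 
> the regression factors from the YCSB numbers quoted above:
> {code:java}
> public class PutRegressionRatios {
>     public static void main(String[] args) {
>         double oldThroughput = 3540.048569466373; // ops/sec at commit 0c68cbe
>         double newThroughput = 754.2217562807816; // ops/sec at commit a225443
>         double oldAvgLatencyUs = 1067.488346;
>         double newAvgLatencyUs = 5229.54584;
> 
>         // Prints roughly a 4.69x throughput drop and a 4.90x average latency growth.
>         System.out.printf("Throughput drop: %.2fx%n", oldThroughput / newThroughput);
>         System.out.printf("Avg latency growth: %.2fx%n", newAvgLatencyUs / oldAvgLatencyUs);
>     }
> }
> {code}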



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-19079) ExecutionTimeout in ItIgniteNodeRestartTest

2023-06-13 Thread Denis Chudov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17732142#comment-17732142
 ] 

Denis Chudov commented on IGNITE-19079:
---

Could not reproduce the test failure in several hundred runs of these tests on 
TeamCity: 
https://ci.ignite.apache.org/viewType.html?buildTypeId=ApacheIgnite3xGradle_Test_IntegrationTests_ModuleRunner_ApacheIgnite3xGradle_Test_IntegrationTests=pull%2F2179=buildTypeStatusDiv

> ExecutionTimeout in ItIgniteNodeRestartTest
> ---
>
> Key: IGNITE-19079
> URL: https://issues.apache.org/jira/browse/IGNITE-19079
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Assignee: Denis Chudov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> ItIgniteNodeRestartTest#testTwoNodesRestartDirect fails with ExecutionTimeout
> {code:java}
> 023-03-20 03:52:36:208 +0300 
> [INFO][%iinrt_ttnrd_0%JRaft-ElectionTimer-17][NodeImpl] Unsuccessful election 
> round number 662
>     2023-03-20 03:52:36:209 +0300 
> [INFO][%iinrt_ttnrd_0%JRaft-ElectionTimer-17][NodeImpl] Node 
>  term 1 start 
> preVote.
>     2023-03-20 03:52:36:601 +0300 
> [INFO][%iinrt_ttnrd_0%JRaft-ElectionTimer-18][NodeImpl] Unsuccessful election 
> round number 659
>     2023-03-20 03:52:36:601 +0300 
> [INFO][%iinrt_ttnrd_0%JRaft-ElectionTimer-18][NodeImpl] Node 
> <4d8ec640-9e96-4939-86e8-acb0c9460da8_part_1/iinrt_ttnrd_0> term 1 start 
> preVote.
>     2023-03-20 03:52:37:992 +0300 
> [INFO][%iinrt_ttnrd_0%JRaft-ElectionTimer-6][NodeImpl] Unsuccessful election 
> round number 663
>     2023-03-20 03:52:37:992 +0300 
> [INFO][%iinrt_ttnrd_0%JRaft-ElectionTimer-6][NodeImpl] Node 
>  term 1 start 
> preVote.
>     2023-03-20 03:52:38:049 +0300 
> [INFO][%iinrt_ttnrd_0%JRaft-ElectionTimer-19][NodeImpl] Unsuccessful election 
> round number 660
>     2023-03-20 03:52:38:049 +0300 
> [INFO][%iinrt_ttnrd_0%JRaft-ElectionTimer-19][NodeImpl] Node 
> <4d8ec640-9e96-4939-86e8-acb0c9460da8_part_1/iinrt_ttnrd_0> term 1 start 
> preVote.
>     2023-03-20 03:52:38:299 +0300 
> [INFO][%iinrt_ttnrd_0%JRaft-ElectionTimer-2][NodeImpl] Unsuccessful election 
> round number 659
>     2023-03-20 03:52:38:300 +0300 
> [INFO][%iinrt_ttnrd_0%JRaft-ElectionTimer-2][NodeImpl] Node 
> <4d8ec640-9e96-4939-86e8-acb0c9460da8_part_0/iinrt_ttnrd_0> term 1 start 
> preVote.
>     2023-03-20 03:52:42:870 +0300 
> [INFO][%iinrt_ttnrd_0%JRaft-ElectionTimer-1][NodeImpl] Unsuccessful election 
> round number 662 {code}
> https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_RunAllTests/7138347?expandCode+Inspection=true=true=false=false



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19726) Sql. Migrate index operations to ScanableTable.

2023-06-13 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-19726:
--
Epic Link: IGNITE-19502

> Sql. Migrate index operations to ScanableTable.
> ---
>
> Key: IGNITE-19726
> URL: https://issues.apache.org/jira/browse/IGNITE-19726
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
>
> When https://issues.apache.org/jira/browse/IGNITE-19587 is complete, it becomes 
> possible to move execution-related index methods to ScanableTable, thus 
> removing the remaining execution-related objects from IgniteTableImpl:
> - IndexScanNode: index scans/lookups should be performed via ScanableTable APIs.
> - StorageNode.convertPublisher should be moved to ScanableTableImpl.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19726) Sql. Migrate index operations to ScanableTable.

2023-06-13 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-19726:
--
Description: 
When https://issues.apache.org/jira/browse/IGNITE-19587 is complete, it becomes 
possible to move execution-related index methods to ScanableTable, thus 
removing the remaining execution-related objects from IgniteTableImpl:

- IndexScanNode: index scans/lookups should be performed via ScanableTable APIs.
- StorageNode.convertPublisher should be moved to ScanableTableImpl.


  was:
When https://issues.apache.org/jira/browse/IGNITE-19587 is complete, it becomes 
possible to move execution-related index methods to ScanableTable, thus 
removing the remaining execution-related methods from IgniteTableImpl:

- IndexScanNode: index scans/lookups should be performed via ScanableTable APIs.
- StorageNode.convertPublisher should be moved to ScanableTableImpl.



> Sql. Migrate index operations to ScanableTable.
> ---
>
> Key: IGNITE-19726
> URL: https://issues.apache.org/jira/browse/IGNITE-19726
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
>
> When https://issues.apache.org/jira/browse/IGNITE-19587 is complete, it becomes 
> possible to move execution-related index methods to ScanableTable, thus 
> removing the remaining execution-related objects from IgniteTableImpl:
> - IndexScanNode: index scans/lookups should be performed via ScanableTable APIs.
> - StorageNode.convertPublisher should be moved to ScanableTableImpl.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19726) Sql. Migrate index operations to ScanableTable.

2023-06-13 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-19726:
--
Description: 
When https://issues.apache.org/jira/browse/IGNITE-19587 is complete, it becomes 
possible to move execution-related index methods to ScanableTable, thus 
removing the remaining execution-related methods from IgniteTableImpl:

- IndexScanNode: index scans/lookups should be performed via ScanableTable APIs.
- StorageNode.convertPublisher should be moved to ScanableTableImpl.


  was:
When https://issues.apache.org/jira/browse/IGNITE-19587 is complete, it becomes 
possible to move execution-related index methods to ScanableTable, thus 
removing the remaining execution-related methods from IgniteTableImpl.




> Sql. Migrate index operations to ScanableTable.
> ---
>
> Key: IGNITE-19726
> URL: https://issues.apache.org/jira/browse/IGNITE-19726
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
>
> When https://issues.apache.org/jira/browse/IGNITE-19587 is complete, it becomes 
> possible to move execution-related index methods to ScanableTable, thus 
> removing the remaining execution-related methods from IgniteTableImpl:
> - IndexScanNode: index scans/lookups should be performed via ScanableTable APIs.
> - StorageNode.convertPublisher should be moved to ScanableTableImpl.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19726) Sql. Migrate index operations to ScanableTable.

2023-06-13 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-19726:
--
Description: 
When https://issues.apache.org/jira/browse/IGNITE-19587 is complete, it becomes 
possible to move execution-related index methods to ScanableTable, thus 
removing the remaining execution-related methods from IgniteTableImpl.



  was:
When https://issues.apache.org/jira/browse/IGNITE-19587 is complete, it becomes 
possible to move execution-related index methods to ScanableTable (*), thus 
removing the remaining execution-related methods from IgniteTableImpl.




> Sql. Migrate index operations to ScanableTable.
> ---
>
> Key: IGNITE-19726
> URL: https://issues.apache.org/jira/browse/IGNITE-19726
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
>
> When https://issues.apache.org/jira/browse/IGNITE-19587 is complete, it becomes 
> possible to move execution-related index methods to ScanableTable, thus 
> removing the remaining execution-related methods from IgniteTableImpl.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-19726) Sql. Migrate index operations to ScanableTable.

2023-06-13 Thread Maksim Zhuravkov (Jira)
Maksim Zhuravkov created IGNITE-19726:
-

 Summary: Sql. Migrate index operations to ScanableTable.
 Key: IGNITE-19726
 URL: https://issues.apache.org/jira/browse/IGNITE-19726
 Project: Ignite
  Issue Type: Improvement
Affects Versions: 3.0.0-beta2
Reporter: Maksim Zhuravkov
Assignee: Maksim Zhuravkov


When https://issues.apache.org/jira/browse/IGNITE-19587 is complete, it becomes 
possible to move execution-related index methods to ScanableTable (*), thus 
removing the remaining execution-related methods from IgniteTableImpl.





--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-19483) Transform TableManager and IndexManager to internally work against Catalog event types

2023-06-13 Thread Roman Puchkovskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17732126#comment-17732126
 ] 

Roman Puchkovskiy commented on IGNITE-19483:


The patch looks good to me

> Transform TableManager and IndexManager to internally work against Catalog 
> event types
> --
>
> Key: IGNITE-19483
> URL: https://issues.apache.org/jira/browse/IGNITE-19483
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Currently, when an event like 'a table was added to the configuration' happens, the 
> listener polls the table config by itself and then uses it to create the 
> table.
> This should be changed: the table configuration object should be converted to 
> an object from the Catalog domain and pushed to the listeners.
> The same should be done for indices.
> This requires investigation.
> Also, we need to stop passing the configuration to deeply nested components (like 
> storages). This also requires investigation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-19212) ODBC 3.0: Implement basic query execution

2023-06-13 Thread Igor Sapego (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Sapego reassigned IGNITE-19212:


Assignee: Igor Sapego

> ODBC 3.0: Implement basic query execution
> -
>
> Key: IGNITE-19212
> URL: https://issues.apache.org/jira/browse/IGNITE-19212
> Project: Ignite
>  Issue Type: Improvement
>  Components: odbc
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> Scope:
> - Implement server part;
> - Implement client part;
> - Implement metadata passing about result set;
> - Port applicable tests.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19536) Introduce a "recoverable" flag to differentiate recoverable and non-recoverable exceptions

2023-06-13 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-19536:
-
Description: 
It seems useful to introduce a marker interface in order to differentiate 
recoverable and non-recoverable errors. This approach should simplify exception 
handling on the client side.
Something as follows:
{code:java}
try {
igniteCompute.execute();
}
catch (IgniteComputeException error) {
if (error instanceof RecoverableException) {
// Put retry logic here.
}
}
{code}

  was:
It seems useful to introduce a marker/flag in order to differentiate 
recoverable and non-recoverable errors. This approach should simplify exception 
handling on the client side.
Something as follows:
{code:java}
try {
igniteCompute.execute();
}
catch (IgniteComputeException error) {
if (error is recoverable) {
// Put retry logic here.
}
}
{code}


> Introduce a "recoverable" flag to differentiate recoverable and 
> non-recoverable exceptions
> --
>
> Key: IGNITE-19536
> URL: https://issues.apache.org/jira/browse/IGNITE-19536
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vyacheslav Koptilin
>Assignee: Vyacheslav Koptilin
>Priority: Major
>  Labels: iep-84, ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> It seems useful to introduce a marker interface in order to differentiate 
> recoverable and non-recoverable errors. This approach should simplify 
> exception handling on the client side.
> Something as follows:
> {code:java}
> try {
> igniteCompute.execute();
> }
> catch (IgniteComputeException error) {
> if (error instanceof RecoverableException) {
> // Put retry logic here.
> }
> }
> {code}
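> A minimal sketch of the marker-interface idea described above; {{RecoverableException}} 
> is the marker name from the ticket, while {{ConnectionLostException}} and its 
> {{RuntimeException}} parent are illustration-only stand-ins:
> {code:java}
> /** Marker interface: exceptions implementing it are considered safe to retry. */
> public interface RecoverableException {
> }
> 
> /** Example of a concrete exception tagged as recoverable (RuntimeException is a stand-in parent). */
> public class ConnectionLostException extends RuntimeException implements RecoverableException {
>     public ConnectionLostException(String message) {
>         super(message);
>     }
> }
> {code}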



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19536) Introduce a "recoverable" flag to differentiate recoverable and non-recoverable exceptions

2023-06-13 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-19536:
-
Description: 
It seems useful to introduce a marker/flag in order to differentiate 
recoverable and non-recoverable errors. This approach should simplify exception 
handling on the client side.
Something as follows:
{code:java}
try {
igniteCompute.execute();
}
catch (IgniteComputeException error) {
if (error is recoverable) {
// Put retry logic here.
}
}
{code}

  was:
It seems useful to introduce a marker/flag in order to differentiate 
recoverable and non-recoverable errors. This approach should simplify exception 
handling on the client side.
Something as follows:

{code:java}
try {
igniteCompute.execute();
}
catch (IgniteComputeException error) {
if (error is recoverable) {
// Put retry logic here.
}
}
{code}



> Introduce a "recoverable" flag to differentiate recoverable and 
> non-recoverable exceptions
> --
>
> Key: IGNITE-19536
> URL: https://issues.apache.org/jira/browse/IGNITE-19536
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vyacheslav Koptilin
>Assignee: Vyacheslav Koptilin
>Priority: Major
>  Labels: iep-84, ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> It seems useful to introduce a marker/flag in order to differentiate 
> recoverable and non-recoverable errors. This approach should simplify 
> exception handling on the client side.
> Something as follows:
> {code:java}
> try {
> igniteCompute.execute();
> }
> catch (IgniteComputeException error) {
> if (error is recoverable) {
> // Put retry logic here.
> }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19715) Thin client operations can take a long time if PA is enabled and some cluster nodes are not network reachable.

2023-06-13 Thread Mikhail Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Petrov updated IGNITE-19715:

Description: 
Thin client operations can take a long time if PA is enabled and some cluster 
nodes are not reachable over network.

Consider the following scenario:

1. The thin client have already sucessfully established connection to all 
configured node addresses.
2. A particular cluster node becomes unreachable over network. It can be 
reproduced with iptables -A INPUT -p tcp --dport for Linux.
3. The thin client periodically sends put request which is mapped by PA to the 
unreachable node.
4. Firstly  all attempts to perform put will lead to `ClientException: Timeout 
was reached before computation completed.` exception. But eventually the 
connection to the unreachable node will be closed by OS (see tcp_keepalive_time 
for Linux).

This will lead to reestablishing connection to the unreachable node during 
handling of the next put (see ReliableChannel.java:1012)

We currently do not set a timeout for the open connection operation (see 
GridNioClientConnectionMultiplexer#open, here we use Integer.MAX_VALUE for 
Socket#connect(java.net.SocketAddress, int))

As a result socket#connect operation (and hence put operation) hangs for a 
significant amount of time (it depends on OS parameters, usually it is couple 
of minutes). This is confusing for users because a single put may take much 
longer than the configured ClientConfiguration#setTimeout property.

  was:
Thin client operations can take a long time if PA is enabled and some cluster 
nodes are not reachable over the network.

Consider the following scenario:

1. The thin client has already successfully established connections to all 
configured node addresses.
2. A particular cluster node becomes unreachable over the network. It can be 
reproduced with iptables -A INPUT -p tcp --dport for Linux.
3. The thin client periodically sends a put request which is mapped by PA to the 
unreachable node.
4. At first, all attempts to perform the put will lead to a `ClientException: Timeout 
was reached before computation completed.` exception. But eventually the 
connection to the unreachable node will be closed by the OS (see tcp_keepalive_time 
for Linux).

This will lead to re-establishing the connection to the unreachable node during 
handling of the next put (see ReliableChannel.java:1012).

We currently do not set a timeout for the open-connection operation (see 
GridNioClientConnectionMultiplexer#open, where we use Integer.MAX_VALUE for 
Socket#connect(java.net.SocketAddress, int)).

As a result, the put operation hangs for a significant amount of time (it depends on 
OS parameters, usually a couple of minutes). This is confusing for users 
because a single PUT takes much longer than the configured 
ClientConfiguration#setTimeout property.


> Thin client operations can take a long time if PA is enabled and some cluster 
> nodes are not network reachable.
> --
>
> Key: IGNITE-19715
> URL: https://issues.apache.org/jira/browse/IGNITE-19715
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Petrov
>Assignee: Mikhail Petrov
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Thin client operations can take a long time if PA is enabled and some cluster 
> nodes are not reachable over the network.
> Consider the following scenario:
> 1. The thin client has already successfully established connections to all 
> configured node addresses.
> 2. A particular cluster node becomes unreachable over the network. It can be 
> reproduced with iptables -A INPUT -p tcp --dport for Linux.
> 3. The thin client periodically sends a put request which is mapped by PA to 
> the unreachable node.
> 4. At first, all attempts to perform the put will lead to a `ClientException: 
> Timeout was reached before computation completed.` exception. But eventually 
> the connection to the unreachable node will be closed by the OS (see 
> tcp_keepalive_time for Linux).
> This will lead to re-establishing the connection to the unreachable node during 
> handling of the next put (see ReliableChannel.java:1012).
> We currently do not set a timeout for the open-connection operation (see 
> GridNioClientConnectionMultiplexer#open, where we use Integer.MAX_VALUE for 
> Socket#connect(java.net.SocketAddress, int)).
> As a result, the socket#connect operation (and hence the put operation) hangs for 
> a significant amount of time (it depends on OS parameters, usually a couple of 
> minutes). This is confusing for users because a single put may take much 
> longer than the configured ClientConfiguration#setTimeout property.
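> For illustration only, a bounded connect using the standard JDK API; the helper name 
> and the 5 000 ms value are assumptions for the example, not a proposed default:
> {code:java}
> import java.io.IOException;
> import java.net.InetSocketAddress;
> import java.net.Socket;
> 
> class ConnectExample {
>     static void connectWithTimeout(String host, int port) throws IOException {
>         try (Socket socket = new Socket()) {
>             // Socket#connect(SocketAddress, int) fails with SocketTimeoutException
>             // after the given timeout instead of hanging for the OS-level default
>             // when the node is unreachable.
>             socket.connect(new InetSocketAddress(host, port), 5_000);
>         }
>     }
> }
> {code}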



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-19715) Thin client operations can take a long time if PA is enabled and some cluster nodes are not network reachable.

2023-06-13 Thread Mikhail Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Petrov reassigned IGNITE-19715:
---

Assignee: Mikhail Petrov

> Thin client operations can take a long time if PA is enabled and some cluster 
> nodes are not network reachable.
> --
>
> Key: IGNITE-19715
> URL: https://issues.apache.org/jira/browse/IGNITE-19715
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Petrov
>Assignee: Mikhail Petrov
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Thin client operations can take a long time if PA is enabled and some cluster 
> nodes are not reachable over the network.
> Consider the following scenario:
> 1. The thin client has already successfully established connections to all 
> configured node addresses.
> 2. A particular cluster node becomes unreachable over the network. It can be 
> reproduced with iptables -A INPUT -p tcp --dport for Linux.
> 3. The thin client periodically sends a put request which is mapped by PA to 
> the unreachable node.
> 4. At first, all attempts to perform the put will lead to a `ClientException: 
> Timeout was reached before computation completed.` exception. But eventually 
> the connection to the unreachable node will be closed by the OS (see 
> tcp_keepalive_time for Linux).
> This will lead to re-establishing the connection to the unreachable node during 
> handling of the next put (see ReliableChannel.java:1012).
> We currently do not set a timeout for the open-connection operation (see 
> GridNioClientConnectionMultiplexer#open, where we use Integer.MAX_VALUE for 
> Socket#connect(java.net.SocketAddress, int)).
> As a result, the put operation hangs for a significant amount of time (it depends 
> on OS parameters, usually a couple of minutes). This is confusing for 
> users because a single PUT takes much longer than the configured 
> ClientConfiguration#setTimeout property.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (IGNITE-19665) High performance drop in key-value put() operations introduced between May 23 and June 5

2023-06-13 Thread Roman Puchkovskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17730530#comment-17730530
 ] 

Roman Puchkovskiy edited comment on IGNITE-19665 at 6/13/23 2:12 PM:
-

There are two problems:
 # We go to the configuration to check for table existence each time we obtain 
the table
 # The check was made less efficient due to changes introduced in IGNITE-19531

The introduction of Catalog and Schema sync will allow us to remove the checks in 
item 1 completely, which will automatically solve the problem in item 2.

It is suggested to leave this on hold for now until Catalog and Schema sync 
arrive or it suddenly turns out that we need all the performance we might get 
'right here, right now'.


was (Author: rpuch):
There are two problems:
 # We go to the configuration to check for table existence each time we obtain 
the table
 # The check was made less efficient due to changes introduces in IGNITE-19531

Introduction of Catalog and Schema sync will allow us to remove the checks in 
item 1 completely, which will automaticlly solve the problem in item 2.

It is suggested to leave this on hold for now until Catalog and Schema sync 
arrive or it suddenly turns out that we need all the performance we might get 
'right here, right now'.

> High performance drop in key-value put() operations introduced between May 23 
> and June 5
> 
>
> Key: IGNITE-19665
> URL: https://issues.apache.org/jira/browse/IGNITE-19665
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ivan Artiukhov
>Priority: Major
>  Labels: ignite-3, performance
> Attachments: ai3_embedded_20230606_144606.jfr.zip, 
> ai3_thin_client_20230606_075104.jfr.zip, gc.log.20230606_075104, 
> ignite-config.conf, ignite3db-0.log, run11-new-emb-jfr.txt, ycsb-run10.log
>
>
> This ticket is a product of subsequent work on 
> https://issues.apache.org/jira/browse/IGNITE-19664.
> There is a high (more than 4x on my local machine) performance drop in 
> {{KeyValueView#put}} operations introduced somewhere between the following 
> commit:
> {noformat}
> commit 0c68cbe3f016e508bd9d53ce5320c88acba1acff (HEAD)
> Author: Slava Koptilin 
> Date:   Tue May 23 10:17:53 2023 +0300
> IGNITE-17883 Removed not implemented 'invoke' functionality (#2090)
> {noformat}
> and the following one:
> {code:java}
> commit a2254434c403bc54685f05e0d6f51bef56abea2a (HEAD -> main, origin/main, 
> origin/HEAD)
> Author: Vadim Pakhnushev <8614891+valep...@users.noreply.github.com>
> Date:   Mon Jun 5 17:43:07 2023 +0300
> IGNITE-19559 NPE in deploy/undeploy calls in non-REPL mode (#2131)
> {code}
> The test is the "Test 1" from 
> https://issues.apache.org/jira/browse/IGNITE-19664, i.e.: 
> 1. Start an Ignite 3 server node with attached {{ignite-config.conf}}. 
> {{raft.fsync=false}} is set in the config.
> 2. Start YCSB client which makes {{KeyValueView#put}} operations within a 
> "100% insert" profile.
> Results for {{0c68cbe3f016e508bd9d53ce5320c88acba1acff}} were as follows:
> {noformat}
> [OVERALL], RunTime(ms), 282482
> [OVERALL], Throughput(ops/sec), 3540.048569466373
> [INSERT], Operations, 100
> [INSERT], AverageLatency(us), 1067.488346
> [INSERT], MinLatency(us), 492
> [INSERT], MaxLatency(us), 421375
> [INSERT], 95thPercentileLatency(us), 2059
> [INSERT], 99thPercentileLatency(us), 5151
> [INSERT], Return=OK, 100
> {noformat}
> Results for {{a2254434c403bc54685f05e0d6f51bef56abea2a}} are more than 4x 
> worse in terms of throughput:
> {code:java}
> [OVERALL], RunTime(ms), 1325870
> [OVERALL], Throughput(ops/sec), 754.2217562807816
> [INSERT], Operations, 100
> [INSERT], AverageLatency(us), 5229.54584
> [INSERT], MinLatency(us), 1297
> [INSERT], MaxLatency(us), 164223
> [INSERT], 95thPercentileLatency(us), 9871
> [INSERT], 99thPercentileLatency(us), 14271
> [INSERT], Return=OK, 100
> {code}
> Logs for {{0c68cbe3f016e508bd9d53ce5320c88acba1acff}}: see 
> https://issues.apache.org/jira/browse/IGNITE-19664
> Logs for {{a2254434c403bc54685f05e0d6f51bef56abea2a}}:
> - node's config:  [^ignite-config.conf] 
> - node's log:  [^ignite3db-0.log] 
> - node's GC log:  [^gc.log.20230606_075104] 
> - YCSB client log:  [^ycsb-run10.log] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-19725) Calcite-2. Add local flag support

2023-06-13 Thread Ivan Daschinsky (Jira)
Ivan Daschinsky created IGNITE-19725:


 Summary: Calcite-2. Add local flag support
 Key: IGNITE-19725
 URL: https://issues.apache.org/jira/browse/IGNITE-19725
 Project: Ignite
  Issue Type: Improvement
Reporter: Ivan Daschinsky
Assignee: Ivan Daschinsky






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-19600) Remove the DistributionZoneManager#topologyVersionedDataNodes and connected logic

2023-06-13 Thread Mirza Aliev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17732108#comment-17732108
 ] 

Mirza Aliev commented on IGNITE-19600:
--

[~kgusakov] Thank you, LGTM! 

> Remove the DistributionZoneManager#topologyVersionedDataNodes and connected 
> logic
> -
>
> Key: IGNITE-19600
> URL: https://issues.apache.org/jira/browse/IGNITE-19600
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Gusakov
>Assignee: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> *Motivation*
> Under IGNITE-18756 we introduced the logic for awaiting the right dataNodes 
> list, which is synchronized with the appropriate topology version. But at the 
> moment this method is not needed anymore.
> Definition of done
> - The method itself and connected logic from IGNITE-18756 are removed
> - All needed tests fixed after that



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19519) Deployment unit removal

2023-06-13 Thread Vadim Pakhnushev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Pakhnushev updated IGNITE-19519:
--
Description: 
h3. Deployment unit removal process

The deployment unit must be removed from all cluster nodes. In order to 
achieve this, the following must be implemented:
 # Change {{clusterDURecord.status}} to the {{OBSOLETE}} value. This operation 
could fail because another process has already changed the status to the 
{{OBSOLETE}} or {{REMOVING}} value. It is also impossible to start an 
undeployment process in case the deployment process is still in progress.

After this step the deployment unit is not available for new code execution. 
Code execution in progress can still use this deployment unit.
 # A meta storage event must be fired to all target nodes due to the change of 
{{clusterDURecord.status}}.
 # After receiving this event, the target node must change 
{{nodeDURecord.status}} to the {{OBSOLETE}} value.
 # The node waits for all code executions in progress that depend on this 
deployment unit to finish. As soon as all code executions are finished, 
{{nodeDURecord.status}} must be changed to the {{REMOVING}} value.

From this point it is impossible to use the deployment unit for code execution, 
neither for new tasks nor for old tasks (the latter is impossible due to the 
invariant that all old tasks are finished).
 # For each change of {{nodeDURecord.status}} to the {{REMOVING}} value the 
system is able to receive an event from meta storage and check that all nodes 
have {{nodeDURecord.status == REMOVING}}. If the condition is met, then 
{{clusterDURecord.status}} must be changed to {{REMOVING}} too.
 # Now the deployment unit can be removed from each target node and, after 
that, the corresponding status records are removed.
 # For each removal of a {{nodeDURecord}} record from meta storage the system 
is able to receive an event from meta storage and check that there are no 
{{nodeDURecord}} records left for the given deployment unit. Now the system 
must remove the {{clusterDURecord}} record for the deployment unit.

Note that if the deployment unit was removed, then there are no class loaders 
associated with this deployment unit. Eventually the class loader should be 
collected by GC and all classes must be unloaded from the JVM. This is a 
critical requirement in order to avoid memory leaks related to repeated class 
loading/unloading.
h3. Node restart during unit removal process

If a target node was restarted during the deployment unit removal process, then 
the node must find all deployment units with clusterDURecord.status == OBSOLETE 
or clusterDURecord.status == REMOVING for the restarted node and finish the 
deployment unit removal process as described in the previous section.

  was:
h3. Deployment unit removal process

The deployment unit must be removed from all cluster nodes. In order to achieve 
this the following must be implemented:
 # Change {{clusterDURecord.status}} to {{OBSOLETE}} value. This operation 
could fail because another process has already changed status to {{OBSOLETE}} 
or {{REMOVING}} value. It is also impossible to start an undeployment process 
in case the deployment process is still in progress.

After this step the deployment unit is not available for new code execution. 
Code execution in progress still can use this deployment unit.
 # Meta storage event must be fired to all target nodes due to a change of 
{{{}clusterDURecord.status{}}}.
 # After receiving this event by the target node the system must change 
{{nodeDURecord.status}} to{{ }}{{OBSOLETE}} value.
 # The node waits for finishing of all code executions in progress that depend 
on this deployment unit. As soon as all code executions are finished 
{{nodeDURecord.status}} must be changed to {{REMOVING}} value.

>From this point it is impossible to use the deployment unit for code execution 
>neither for new tasks nor for old tasks (the second is impossible due to the 
>invariant that all old tasks are finished).
 # For each change of {{nodeDURecord.status}} to {{REMOVING}} value the system 
is able to receive an event from meta storage and check that all nodes have 
{{{}nodeDURecord.status == REMOVING{}}}. If the condition is met then 
{{clusterDURecord.status}} must be changed to {{REMOVING}} too.
 # Now the deployment unit can be removed from each target node and, after it, 
remove corresponding status records.
 # For each removal of {{nodeDURecord}} record from meta storage the system is 
able to receive an event from meta storage and check that there are no any 
{{nodeDURecord}} records for the given deployment unit. Now the system must 
remove the {{clusterDURecord}} record for the deployment unit.

 

Note that If the deployment unit was removed then there are no any class 
loaders associated with this deployment unit. Eventually the class loader 
should be collected by GC and all classes must be 

[jira] [Updated] (IGNITE-19519) Deployment unit removal

2023-06-13 Thread Vadim Pakhnushev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Pakhnushev updated IGNITE-19519:
--
Description: 
h3. Deployment unit removal process

The deployment unit must be removed from all cluster nodes. In order to 
achieve this, the following must be implemented:
 # Change {{clusterDURecord.status}} to the {{OBSOLETE}} value. This operation 
could fail because another process has already changed the status to the 
{{OBSOLETE}} or {{REMOVING}} value. It is also impossible to start an 
undeployment process in case the deployment process is still in progress.

After this step the deployment unit is not available for new code execution. 
Code execution in progress can still use this deployment unit.
 # A meta storage event must be fired to all target nodes due to the change of 
{{clusterDURecord.status}}.
 # After receiving this event, the target node must change 
{{nodeDURecord.status}} to the {{OBSOLETE}} value.
 # The node waits for all code executions in progress that depend on this 
deployment unit to finish. As soon as all code executions are finished, 
{{nodeDURecord.status}} must be changed to the {{REMOVING}} value.

From this point it is impossible to use the deployment unit for code execution, 
neither for new tasks nor for old tasks (the latter is impossible due to the 
invariant that all old tasks are finished).
 # For each change of {{nodeDURecord.status}} to the {{REMOVING}} value the 
system is able to receive an event from meta storage and check that all nodes 
have {{nodeDURecord.status == REMOVING}}. If the condition is met, then 
{{clusterDURecord.status}} must be changed to {{REMOVING}} too.
 # Now the deployment unit can be removed from each target node and, after 
that, the corresponding status records are removed.
 # For each removal of a {{nodeDURecord}} record from meta storage the system 
is able to receive an event from meta storage and check that there are no 
{{nodeDURecord}} records left for the given deployment unit. Now the system 
must remove the {{clusterDURecord}} record for the deployment unit.

Note that if the deployment unit was removed, then there are no class loaders 
associated with this deployment unit. Eventually the class loader should be 
collected by GC and all classes must be unloaded from the JVM. This is a 
critical requirement in order to avoid memory leaks related to repeated class 
loading/unloading.
h3. Node restart during unit removal process

If a target node was restarted during the deployment unit removal process, then 
the node must find all deployment units with {{clusterDURecord.status == 
OBSOLETE}} or {{clusterDURecord.status == REMOVING}} for the restarted node and 
finish the deployment unit removal process as described in the previous section.

  was:
h3. Deployment unit removal process

The deployment unit must be removed from all cluster nodes. In order to achieve 
this the following must be implemented:
 # Change {{clusterDURecord.status}} to {{OBSOLETE}} value. This operation 
could fail because another process has already changed status to {{OBSOLETE}} 
or {{REMOVING}} value. It is also impossible to start an undeployment process 
in case the deployment process is still in progress.

After this step the deployment unit is not available for new code execution. 
Code execution in progress still can use this deployment unit.
 # Meta storage event must be fired to all target nodes due to a change of 
{{{}clusterDURecord.status{}}}.
 # After receiving this event by the target node the system must change 
{{nodeDURecord.status}} to {{OBSOLETE}} value.
 # The node waits for finishing of all code executions in progress that depend 
on this deployment unit. As soon as all code executions are finished 
{{nodeDURecord.status}} must be changed to {{REMOVING}} value.

>From this point it is impossible to use the deployment unit for code execution 
>neither for new tasks nor for old tasks (the second is impossible due to the 
>invariant that all old tasks are finished).
 # For each change of {{nodeDURecord.status}} to {{REMOVING}} value the system 
is able to receive an event from meta storage and check that all nodes have 
{{{}nodeDURecord.status == REMOVING{}}}. If the condition is met then 
{{clusterDURecord.status}} must be changed to {{REMOVING}} too.
 # Now the deployment unit can be removed from each target node and, after it, 
remove corresponding status records.
 # For each removal of {{nodeDURecord}} record from meta storage the system is 
able to receive an event from meta storage and check that there are no any 
{{nodeDURecord}} records for the given deployment unit. Now the system must 
remove the {{clusterDURecord}} record for the deployment unit.

 

Note that If the deployment unit was removed then there are no any class 
loaders associated with this deployment unit. Eventually the class loader 
should be collected by GC and all classes must be 

[jira] [Updated] (IGNITE-19519) Add remove verification after restart

2023-06-13 Thread Vadim Pakhnushev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Pakhnushev updated IGNITE-19519:
--
Description: 
h3. Deployment unit removal process

The deployment unit must be removed from all cluster nodes. In order to 
achieve this, the following must be implemented:
 # Change {{clusterDURecord.status}} to the {{OBSOLETE}} value. This operation 
could fail because another process has already changed the status to the 
{{OBSOLETE}} or {{REMOVING}} value. It is also impossible to start an 
undeployment process in case the deployment process is still in progress.

After this step the deployment unit is not available for new code execution. 
Code execution in progress can still use this deployment unit.
 # A meta storage event must be fired to all target nodes due to the change of 
{{clusterDURecord.status}}.
 # After receiving this event, the target node must change 
{{nodeDURecord.status}} to the {{OBSOLETE}} value.
 # The node waits for all code executions in progress that depend on this 
deployment unit to finish. As soon as all code executions are finished, 
{{nodeDURecord.status}} must be changed to the {{REMOVING}} value.

From this point it is impossible to use the deployment unit for code execution, 
neither for new tasks nor for old tasks (the latter is impossible due to the 
invariant that all old tasks are finished).
 # For each change of {{nodeDURecord.status}} to the {{REMOVING}} value the 
system is able to receive an event from meta storage and check that all nodes 
have {{nodeDURecord.status == REMOVING}}. If the condition is met, then 
{{clusterDURecord.status}} must be changed to {{REMOVING}} too.
 # Now the deployment unit can be removed from each target node and, after 
that, the corresponding status records are removed.
 # For each removal of a {{nodeDURecord}} record from meta storage the system 
is able to receive an event from meta storage and check that there are no 
{{nodeDURecord}} records left for the given deployment unit. Now the system 
must remove the {{clusterDURecord}} record for the deployment unit.

Note that if the deployment unit was removed, then there are no class loaders 
associated with this deployment unit. Eventually the class loader should be 
collected by GC and all classes must be unloaded from the JVM. This is a 
critical requirement in order to avoid memory leaks related to repeated class 
loading/unloading.
h3. Node restart during unit removal process

If a target node was restarted during the deployment unit removal process, then 
the node must find all deployment units with clusterDURecord.status == OBSOLETE 
or clusterDURecord.status == REMOVING for the restarted node and finish the 
deployment unit removal process as described in the previous section.

  was:
h3. Node restart during unit removal process

If a target node was restarted during deployment unit removal process then the 
node must find all deployment units with clusterDURecord.status == OBSOLETE or 
clusterDURecord.status == REMOVING for restarted node and finish deployment 
unit removal process as described in the previous section.


> Add remove verification after restart 
> --
>
> Key: IGNITE-19519
> URL: https://issues.apache.org/jira/browse/IGNITE-19519
> Project: Ignite
>  Issue Type: New Feature
>Reporter: Mikhail Pochatkin
>Assignee: Vadim Pakhnushev
>Priority: Major
>  Labels: iep-103, ignite-3
>
> h3. Deployment unit removal process
> The deployment unit must be removed from all cluster nodes. In order to 
> achieve this the following must be implemented:
>  # Change {{clusterDURecord.status}} to {{OBSOLETE}} value. This operation 
> could fail because another process has already changed status to {{OBSOLETE}} 
> or {{REMOVING}} value. It is also impossible to start an undeployment process 
> in case the deployment process is still in progress.
> After this step the deployment unit is not available for new code execution. 
> Code execution in progress still can use this deployment unit.
>  # Meta storage event must be fired to all target nodes due to a change of 
> {{{}clusterDURecord.status{}}}.
>  # After receiving this event by the target node the system must change 
> {{nodeDURecord.status}} to{{ }}{{OBSOLETE}} value.
>  # The node waits for finishing of all code executions in progress that 
> depend on this deployment unit. As soon as all code executions are finished 
> {{nodeDURecord.status}} must be changed to {{REMOVING}} value.
> From this point it is impossible to use the deployment unit for code 
> execution neither for new tasks nor for old tasks (the second is impossible 
> due to the invariant that all old tasks are finished).
>  # For each change of {{nodeDURecord.status}} to {{REMOVING}} value the 
> system is able to receive an event from meta 

[jira] [Updated] (IGNITE-19519) Deployment unit removal

2023-06-13 Thread Vadim Pakhnushev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Pakhnushev updated IGNITE-19519:
--
Summary: Deployment unit removal  (was: Add remove verification after 
restart )

> Deployment unit removal
> ---
>
> Key: IGNITE-19519
> URL: https://issues.apache.org/jira/browse/IGNITE-19519
> Project: Ignite
>  Issue Type: New Feature
>Reporter: Mikhail Pochatkin
>Assignee: Vadim Pakhnushev
>Priority: Major
>  Labels: iep-103, ignite-3
>
> h3. Deployment unit removal process
> The deployment unit must be removed from all cluster nodes. In order to 
> achieve this, the following must be implemented:
>  # Change {{clusterDURecord.status}} to the {{OBSOLETE}} value. This operation 
> could fail because another process has already changed the status to the 
> {{OBSOLETE}} or {{REMOVING}} value. It is also impossible to start an 
> undeployment process in case the deployment process is still in progress.
> After this step the deployment unit is not available for new code execution. 
> Code execution in progress can still use this deployment unit.
>  # A meta storage event must be fired to all target nodes due to the change of 
> {{clusterDURecord.status}}.
>  # After receiving this event, the target node must change 
> {{nodeDURecord.status}} to the {{OBSOLETE}} value.
>  # The node waits for all code executions in progress that depend on this 
> deployment unit to finish. As soon as all code executions are finished, 
> {{nodeDURecord.status}} must be changed to the {{REMOVING}} value.
> From this point it is impossible to use the deployment unit for code 
> execution, neither for new tasks nor for old tasks (the latter is impossible 
> due to the invariant that all old tasks are finished).
>  # For each change of {{nodeDURecord.status}} to the {{REMOVING}} value the 
> system is able to receive an event from meta storage and check that all nodes 
> have {{nodeDURecord.status == REMOVING}}. If the condition is met, then 
> {{clusterDURecord.status}} must be changed to {{REMOVING}} too.
>  # Now the deployment unit can be removed from each target node and, after 
> that, the corresponding status records are removed.
>  # For each removal of a {{nodeDURecord}} record from meta storage the system 
> is able to receive an event from meta storage and check that there are no 
> {{nodeDURecord}} records left for the given deployment unit. Now the system 
> must remove the {{clusterDURecord}} record for the deployment unit.
>  
> Note that if the deployment unit was removed, then there are no class loaders 
> associated with this deployment unit. Eventually the class loader should be 
> collected by GC and all classes must be unloaded from the JVM. This is a 
> critical requirement in order to avoid memory leaks related to repeated class 
> loading/unloading.
> h3. Node restart during unit removal process
> If a target node was restarted during the deployment unit removal process, 
> then the node must find all deployment units with clusterDURecord.status == 
> OBSOLETE or clusterDURecord.status == REMOVING for the restarted node and 
> finish the deployment unit removal process as described in the previous section.
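For illustration only, a minimal sketch of the status transitions described above. The DEPLOYED starting state and the compare-and-set field are assumptions made for the example; in the real process these statuses live in meta storage records (clusterDURecord / nodeDURecord), not in an in-memory field:

{code:java}
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the OBSOLETE -> REMOVING transitions; not the actual Ignite code.
public class DeploymentUnitStatusExample {
    enum Status { DEPLOYED, OBSOLETE, REMOVING }

    // Stands in for clusterDURecord.status / nodeDURecord.status.
    private final AtomicReference<Status> status = new AtomicReference<>(Status.DEPLOYED);

    /** Step 1: mark the unit OBSOLETE; fails if another process already changed it. */
    boolean markObsolete() {
        return status.compareAndSet(Status.DEPLOYED, Status.OBSOLETE);
    }

    /** Step 4: once all in-progress executions have finished, move to REMOVING. */
    boolean markRemoving() {
        return status.compareAndSet(Status.OBSOLETE, Status.REMOVING);
    }

    public static void main(String[] args) {
        DeploymentUnitStatusExample unit = new DeploymentUnitStatusExample();
        System.out.println("obsolete: " + unit.markObsolete());       // true
        System.out.println("obsolete again: " + unit.markObsolete()); // false, already changed
        System.out.println("removing: " + unit.markRemoving());       // true
    }
}
{code}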



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19715) Thin client operations can take a long time if PA is enabled and some cluster nodes are not network reachable.

2023-06-13 Thread Mikhail Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Petrov updated IGNITE-19715:

Description: 
Thin client operations can take a long time if PA is enabled and some cluster 
nodes are not reachable over the network.

Consider the following scenario:

1. The thin client has already successfully established connections to all 
configured node addresses.
2. A particular cluster node becomes unreachable over the network. This can be 
reproduced with iptables -A INPUT -p tcp --dport on Linux.
3. The thin client periodically sends put requests which are mapped by PA to 
the unreachable node.
4. At first, all attempts to perform the put lead to a `ClientException: 
Timeout was reached before computation completed.` exception. But eventually 
the connection to the unreachable node is closed by the OS (see 
tcp_keepalive_time on Linux).

This leads to re-establishing the connection to the unreachable node while 
handling the next put (see ReliableChannel.java:1012).

We currently do not set a timeout for the open connection operation (see 
GridNioClientConnectionMultiplexer#open, where Integer.MAX_VALUE is used for 
Socket#connect(java.net.SocketAddress, int)).

As a result, the put operation hangs for a significant amount of time (it 
depends on OS parameters, usually a couple of minutes). This is confusing for 
users because a single put takes much longer than the configured 
ClientConfiguration#setTimeout property.

  was:
Thin client operations can take a long time if PA is enabled and some cluster 
nodes are not reachable over network.

Consider the following scenario:

1. The thin client have already sucessfully established connection to all 
configured node addresses.
2. A particular cluster node becomes unreachable over network. It can be 
reproduced with iptables -A INPUT -p tcp --dport for Linux.
3. The thin client periodically sends put request which is mapped by PA to the 
unreachable node.
4. Firstly  all attempts to perform put will lead to `ClientException: Timeout 
was reached before computation completed.` exception. But eventually the 
connection to the unreachable node will be closed by OS (see tcp_keepalive_time 
for Linux).

This will lead to reestablishing connection to the unreachable node during 
handling of the next put (see ReliableChannel.java:1012)

We currently do not set a timeout for the open connection operation (see 
GridNioClientConnectionMultiplexer#open, here we use Integer.MAX_VALUE for 
Socket#connect(java.net.SocketAddress, int))

As a result put operation hangs for a significant amount of time (it depends on 
OS parameters, usually it is couple of minutes) and ignores the 
ClientConfiguration#setTimeout property.


> Thin client operations can take a long time if PA is enabled and some cluster 
> nodes are not network reachable.
> --
>
> Key: IGNITE-19715
> URL: https://issues.apache.org/jira/browse/IGNITE-19715
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Petrov
>Priority: Major
>
> Thin client operations can take a long time if PA is enabled and some cluster 
> nodes are not reachable over the network.
> Consider the following scenario:
> 1. The thin client has already successfully established connections to all 
> configured node addresses.
> 2. A particular cluster node becomes unreachable over the network. This can 
> be reproduced with iptables -A INPUT -p tcp --dport on Linux.
> 3. The thin client periodically sends put requests which are mapped by PA to 
> the unreachable node.
> 4. At first, all attempts to perform the put lead to a `ClientException: 
> Timeout was reached before computation completed.` exception. But eventually 
> the connection to the unreachable node is closed by the OS (see 
> tcp_keepalive_time on Linux).
> This leads to re-establishing the connection to the unreachable node while 
> handling the next put (see ReliableChannel.java:1012).
> We currently do not set a timeout for the open connection operation (see 
> GridNioClientConnectionMultiplexer#open, where Integer.MAX_VALUE is used for 
> Socket#connect(java.net.SocketAddress, int)).
> As a result, the put operation hangs for a significant amount of time (it 
> depends on OS parameters, usually a couple of minutes). This is confusing for 
> users because a single put takes much longer than the configured 
> ClientConfiguration#setTimeout property.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19715) Thin client operations can take a long time if PA is enabled and some cluster nodes are not network reachable.

2023-06-13 Thread Mikhail Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Petrov updated IGNITE-19715:

Description: 
Thin client operations can take a long time if PA is enabled and some cluster 
nodes are not reachable over the network.

Consider the following scenario:

1. The thin client has already successfully established connections to all 
configured node addresses.
2. A particular cluster node becomes unreachable over the network. This can be 
reproduced with iptables -A INPUT -p tcp --dport on Linux.
3. The thin client periodically sends put requests which are mapped by PA to 
the unreachable node.
4. At first, all attempts to perform the put lead to a `ClientException: 
Timeout was reached before computation completed.` exception. But eventually 
the connection to the unreachable node is closed by the OS (see 
tcp_keepalive_time on Linux).

This leads to re-establishing the connection to the unreachable node while 
handling the next put (see ReliableChannel.java:1012).

We currently do not set a timeout for the open connection operation (see 
GridNioClientConnectionMultiplexer#open, where Integer.MAX_VALUE is used for 
Socket#connect(java.net.SocketAddress, int)).

As a result, the put operation hangs for a significant amount of time (it 
depends on OS parameters, usually a couple of minutes) and ignores the 
ClientConfiguration#setTimeout property.

  was:
Thin client operations can take a long time if PA is enabled and some cluster 
nodes are not reachable over network.

Consider the following scenario:

1. The thin client have already sucessfully established connection to all 
configured node addresses.
2. A particular cluster node becomes unreachable over network. It can be 
reproduced with iptables -A INPUT -p tcp --dport for Linux.
3. The thin client periodically sends put request which is mapped by PA to the 
unreachable node.
4. Firstly  all attempts to perform put will lead to `ClientException: Timeout 
was reached before computation completed.` exception. But eventually the 
connection to the unreachable node will be closed by OS (see tcp_keepalive_time 
for Linux).

This will lead to reestablishing connection to the unreachable node during the 
next put (see ReliableChannel.java:1012)

We currently do not set a timeout for the open connection operation (see 
GridNioClientConnectionMultiplexer#open, here we use Integer.MAX_VALUE for 
Socket#connect(java.net.SocketAddress, int))

As a result put operation hangs for a significant amount of time and ignores 
the ClientConfiguration#setTimeout property.


> Thin client operations can take a long time if PA is enabled and some cluster 
> nodes are not network reachable.
> --
>
> Key: IGNITE-19715
> URL: https://issues.apache.org/jira/browse/IGNITE-19715
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Petrov
>Priority: Major
>
> Thin client operations can take a long time if PA is enabled and some cluster 
> nodes are not reachable over the network.
> Consider the following scenario:
> 1. The thin client has already successfully established connections to all 
> configured node addresses.
> 2. A particular cluster node becomes unreachable over the network. This can 
> be reproduced with iptables -A INPUT -p tcp --dport on Linux.
> 3. The thin client periodically sends put requests which are mapped by PA to 
> the unreachable node.
> 4. At first, all attempts to perform the put lead to a `ClientException: 
> Timeout was reached before computation completed.` exception. But eventually 
> the connection to the unreachable node is closed by the OS (see 
> tcp_keepalive_time on Linux).
> This leads to re-establishing the connection to the unreachable node while 
> handling the next put (see ReliableChannel.java:1012).
> We currently do not set a timeout for the open connection operation (see 
> GridNioClientConnectionMultiplexer#open, where Integer.MAX_VALUE is used for 
> Socket#connect(java.net.SocketAddress, int)).
> As a result, the put operation hangs for a significant amount of time (it 
> depends on OS parameters, usually a couple of minutes) and ignores the 
> ClientConfiguration#setTimeout property.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17684) Investigate of using BinaryTuple instead of array of objects in SQL execution

2023-06-13 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-17684:

Labels: ignite-3 perfomance tech-debt  (was: ignite-3 tech-debt)

> Investigate of using BinaryTuple instead of array of objects in SQL execution
> -
>
> Key: IGNITE-17684
> URL: https://issues.apache.org/jira/browse/IGNITE-17684
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Yury Gerzhedovich
>Priority: Major
>  Labels: ignite-3, perfomance, tech-debt
>
> Right now we internally use an array of objects to represent a row in SQL 
> instead of BinaryTuple and do unnecessary conversion.
> Let's investigate the possibility of migrating from an array of objects to 
> direct usage of BinaryTuple in the execution tree. There are possible issues 
> for some types of execution, like two-phase aggregates, and potentially we 
> should use different representations of a row for different parts.
> Start points are: 
> org.apache.ignite.internal.sql.engine.exec.ArrayRowHandler
> org.apache.ignite.internal.sql.engine.schema.IgniteTableImpl#toRow
> As a result of the task: a list of issues which need to be resolved to reuse 
> BinaryTuple
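A hypothetical sketch of the underlying idea: the execution tree accesses rows through a handler abstraction, so the concrete representation (Object[] today, possibly BinaryTuple later) can be swapped without touching the operators. The interface below is illustrative only and does not mirror the real ArrayRowHandler API:

{code:java}
// Sketch of a row-access abstraction; field accessors and types are assumptions.
public class RowHandlerSketch {
    interface RowHandler<RowT> {
        Object get(RowT row, int field);
        int columnCount(RowT row);
    }

    // Handler for the current Object[] representation.
    static final RowHandler<Object[]> ARRAY_HANDLER = new RowHandler<Object[]>() {
        @Override public Object get(Object[] row, int field) {
            return row[field];
        }

        @Override public int columnCount(Object[] row) {
            return row.length;
        }
    };

    public static void main(String[] args) {
        Object[] row = {1, "Ignite", 3.0};
        System.out.println(ARRAY_HANDLER.get(row, 1) + ", columns: " + ARRAY_HANDLER.columnCount(row));
    }
}
{code}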



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19557) Sql. Insert through JDBC with batch optimization.

2023-06-13 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-19557:
---
Epic Link: IGNITE-19479

> Sql. Insert through JDBC with batch optimization.  
> ---
>
> Key: IGNITE-19557
> URL: https://issues.apache.org/jira/browse/IGNITE-19557
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta1
>Reporter: Evgeny Stanilovsky
>Priority: Major
>  Labels: calcite3-required, ignite-3, performance, sql-performance
>
> JdbcQueryEventHandlerImpl#batchPrepStatementAsync
> processes batch rows sequentially, row by row; it seems that an essential 
> throughput boost can be gained if the rows are processed as a batch.
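For context, a minimal sketch of what such a client-side batch looks like with the standard JDBC API; the connection string and the table are assumptions for the example. The ticket itself is about handling the accumulated batch as a whole on the server side instead of row by row:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Sketch of a client-side JDBC batch insert; URL and table name are hypothetical.
public class BatchInsertExample {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:ignite:thin://127.0.0.1:10800"; // assumed connection string

        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement ps = conn.prepareStatement("INSERT INTO person (id, name) VALUES (?, ?)")) {
            for (int i = 0; i < 1_000; i++) {
                ps.setInt(1, i);
                ps.setString(2, "name-" + i);
                ps.addBatch(); // accumulate rows on the client side
            }
            int[] updateCounts = ps.executeBatch(); // sent to the server as one batch request
            System.out.println("Inserted rows: " + updateCounts.length);
        }
    }
}
{code}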



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19664) Insufficient performance of key-value operations via Java thin client

2023-06-13 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-19664:

Fix Version/s: 3.0.0-beta2

> Insufficient performance of key-value operations via Java thin client
> -
>
> Key: IGNITE-19664
> URL: https://issues.apache.org/jira/browse/IGNITE-19664
> Project: Ignite
>  Issue Type: Bug
>  Components: thin client
>Reporter: Ivan Artiukhov
>Priority: Major
>  Labels: ignite-3, performance
> Fix For: 3.0.0-beta2
>
> Attachments: ai3_embedded_20230606_083100.jfr.zip, 
> ai3_thin_client_20230606_074305.jfr.zip, gc.log.20230606_074305, 
> ignite-config.conf, ignite3db-0.log, ycsb-run8-thin.txt, 
> ycsb-run9-embedded.txt
>
>
> Apache Ignite 3, rev. 0c68cbe3f016e508bd9d53ce5320c88acba1acff
> YCSB key-value benchmarks: 
> https://github.com/gridgain/YCSB/tree/ae687c3bbd82eb7ce7b886af9a2ae2757457097c/ignite3
> h1. Summary
> The performance of key-value {{put()}} operations may be ~1.5x worse when 
> performed via the Java thin client in comparison to similar {{put()}} 
> operations performed within an embedded node. 
> h1. Test 1. Thin client node
> h2. Steps
> Start a separate Ignite 3 node and a YCSB client running the "100% inserts" 
> workload.
> 1. Start an Apache Ignite 3 server node with the attached 
> {{ignite-config.conf}}.
> 2. Start a YCSB client node which performs {{KeyValueView#put}} operations. 
> YCSB command line options: {{-db site.ycsb.db.ignite3.IgniteClient -p 
> hosts=127.0.0.1 -s -P ./workloads/workloadc -threads 4 -p dataintegrity=true 
> -p operationcount=100 -p recordcount=100 -p disableFsync=true -p 
> useEmbedded=false -load}}
> h2. Results
> {noformat}
> [OVERALL], RunTime(ms), 282482
> [OVERALL], Throughput(ops/sec), 3540.048569466373
> [INSERT], Operations, 100
> [INSERT], AverageLatency(us), 1067.488346
> [INSERT], MinLatency(us), 492
> [INSERT], MaxLatency(us), 421375
> [INSERT], 95thPercentileLatency(us), 2059
> [INSERT], 99thPercentileLatency(us), 5151
> [INSERT], Return=OK, 100
> {noformat}
> Node's log:  [^ignite3db-0.log] 
> Node's GC log:  [^gc.log.20230606_074305] 
> Node's config:  [^ignite-config.conf] 
> YCSB log:  [^ycsb-run8-thin.txt] 
> h1. Test 2. Embedded node
> h2. Steps
> The following step will start YCSB with an embedded Ignite 3 node within the 
> same JVM and the "100% insert" workload on that node. 
> 1. Run YCSB with the {{useEmbedded=true}} parameter: {{-db 
> site.ycsb.db.ignite3.IgniteClient -p hosts=127.0.0.1 -s -P 
> ./workloads/workloadc -threads 4 -p dataintegrity=true -p 
> operationcount=100 -p recordcount=100 -p disableFsync=true -p 
> useEmbedded=true -load}}
> h2. Results
> {noformat}
> [OVERALL], RunTime(ms), 173993
> [OVERALL], Throughput(ops/sec), 5747.357652319346
> [INSERT], Operations, 100
> [INSERT], AverageLatency(us), 614.723711
> [INSERT], MinLatency(us), 284
> [INSERT], MaxLatency(us), 342271
> [INSERT], 95thPercentileLatency(us), 1182
> [INSERT], 99thPercentileLatency(us), 3357
> [INSERT], Return=OK, 100
> {noformat}
> Whole YCSB log:  [^ycsb-run9-embedded.txt] 
> h1. Local machine specs
> Lenovo ThinkPad T15 Gen 1
> CPU: Intel i7-10510U (4 cores, 8 threads)
> RAM: 32 GiB DDR4-2666
> SSD: 512 GiB M.2 2242



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19664) Insufficient performance of key-value operations via Java thin client

2023-06-13 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-19664:

Component/s: thin client

> Insufficient performance of key-value operations via Java thin client
> -
>
> Key: IGNITE-19664
> URL: https://issues.apache.org/jira/browse/IGNITE-19664
> Project: Ignite
>  Issue Type: Bug
>  Components: thin client
>Reporter: Ivan Artiukhov
>Priority: Major
>  Labels: ignite-3, performance
> Attachments: ai3_embedded_20230606_083100.jfr.zip, 
> ai3_thin_client_20230606_074305.jfr.zip, gc.log.20230606_074305, 
> ignite-config.conf, ignite3db-0.log, ycsb-run8-thin.txt, 
> ycsb-run9-embedded.txt
>
>
> Apache Ignite 3, rev. 0c68cbe3f016e508bd9d53ce5320c88acba1acff
> YCSB key-value benchmarks: 
> https://github.com/gridgain/YCSB/tree/ae687c3bbd82eb7ce7b886af9a2ae2757457097c/ignite3
> h1. Summary
> The performance of key-value {{put()}} operations may be ~1.5x worse when 
> performed via the Java thin client in comparison to similar {{put()}} 
> operations performed within an embedded node. 
> h1. Test 1. Thin client node
> h2. Steps
> Start a separate Ignite 3 node and a YCSB client running the "100% inserts" 
> workload.
> 1. Start an Apache Ignite 3 server node with the attached 
> {{ignite-config.conf}}.
> 2. Start a YCSB client node which performs {{KeyValueView#put}} operations. 
> YCSB command line options: {{-db site.ycsb.db.ignite3.IgniteClient -p 
> hosts=127.0.0.1 -s -P ./workloads/workloadc -threads 4 -p dataintegrity=true 
> -p operationcount=100 -p recordcount=100 -p disableFsync=true -p 
> useEmbedded=false -load}}
> h2. Results
> {noformat}
> [OVERALL], RunTime(ms), 282482
> [OVERALL], Throughput(ops/sec), 3540.048569466373
> [INSERT], Operations, 100
> [INSERT], AverageLatency(us), 1067.488346
> [INSERT], MinLatency(us), 492
> [INSERT], MaxLatency(us), 421375
> [INSERT], 95thPercentileLatency(us), 2059
> [INSERT], 99thPercentileLatency(us), 5151
> [INSERT], Return=OK, 100
> {noformat}
> Node's log:  [^ignite3db-0.log] 
> Node's GC log:  [^gc.log.20230606_074305] 
> Node's config:  [^ignite-config.conf] 
> YCSB log:  [^ycsb-run8-thin.txt] 
> h1. Test 2. Embedded node
> h2. Steps
> The following step will start YCSB with an embedded Ignite 3 node within the 
> same JVM and the "100% insert" workload on that node. 
> 1. Run YCSB with the {{useEmbedded=true}} parameter: {{-db 
> site.ycsb.db.ignite3.IgniteClient -p hosts=127.0.0.1 -s -P 
> ./workloads/workloadc -threads 4 -p dataintegrity=true -p 
> operationcount=100 -p recordcount=100 -p disableFsync=true -p 
> useEmbedded=true -load}}
> h2. Results
> {noformat}
> [OVERALL], RunTime(ms), 173993
> [OVERALL], Throughput(ops/sec), 5747.357652319346
> [INSERT], Operations, 100
> [INSERT], AverageLatency(us), 614.723711
> [INSERT], MinLatency(us), 284
> [INSERT], MaxLatency(us), 342271
> [INSERT], 95thPercentileLatency(us), 1182
> [INSERT], 99thPercentileLatency(us), 3357
> [INSERT], Return=OK, 100
> {noformat}
> Whole YCSB log:  [^ycsb-run9-embedded.txt] 
> h1. Local machine specs
> Lenovo ThinkPad T15 Gen 1
> CPU: Intel i7-10510U (4 cores, 8 threads)
> RAM: 32 GiB DDR4-2666
> SSD: 512 GiB M.2 2242



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-14757) Calcite. CorrelatedNestedLoopJoinRule with batched instantiation fail with current tests.

2023-06-13 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-14757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-14757:
---
Epic Link: IGNITE-19479

> Calcite. CorrelatedNestedLoopJoinRule with batched instantiation fail with 
> current tests.
> -
>
> Key: IGNITE-14757
> URL: https://issues.apache.org/jira/browse/IGNITE-14757
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Evgeny Stanilovsky
>Priority: Major
>  Labels: calcite, calcite2-required, calcite3-required
>
> changing CorrelatedNestedLoopJoinRule#INSTANCE into 
> CorrelatedNestedLoopJoinRule#INSTANCE_BATCHED and further call of 
> IgniteCalciteTestSuite will stops all tets progress.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19621) Slow query planning

2023-06-13 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-19621:
---
Epic Link: IGNITE-19479

> Slow query planning
> ---
>
> Key: IGNITE-19621
> URL: https://issues.apache.org/jira/browse/IGNITE-19621
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0
>Reporter: Alexander Belyak
>Priority: Major
>  Labels: ignite-3
> Attachments: ddl-ignite3.sql
>
>
> A query from the TPC-H benchmark took 13 to 18 seconds to plan (try to 
> execute it on empty TPC-H tables).
> Problem query is:
> {code:java}
> select   sum(case when nation = '' then volume else 0 end) / sum(volume) 
> as mkt_share , o_year
> from ( 
>    select floor(o_orderdate / (cast (365 as bigint) * 8640))  as o_year, 
>    l_extendedprice * (1 - l_discount) as volume, 
>    n2.n_name as nation 
>    from part, supplier, lineitem, orders, customer, nation n1, nation n2, 
> region 
>    where p_partkey = l_partkey and s_suppkey = l_suppkey and l_orderkey = 
> o_orderkey and o_custkey = c_custkey and c_nationkey = n1.n_nationkey 
>        and n1.n_regionkey = r_regionkey and r_name = 'rrr2' and s_nationkey = 
> n2.n_nationkey 
>        and o_orderdate between 78890400 and 85197240 
>        and p_type = 
> ) as all_nations 
> group by o_year 
> order by o_year     
> {code}
> Second run took about 50ms (query cache works fine).
> See ddl in attachment.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17699) Improve implementation of QueryTaskExecuter

2023-06-13 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-17699:
---
Epic Link: IGNITE-19479

> Improve implementation of QueryTaskExecuter
> ---
>
> Key: IGNITE-17699
> URL: https://issues.apache.org/jira/browse/IGNITE-17699
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Yury Gerzhedovich
>Priority: Major
>  Labels: calcite2-required, calcite3-required, ignite-3, tech-debt
>
> The current implementation is based on StrippedThreadExecutor, which has a 
> major downside: a long-living task prevents other tasks from the same stripe 
> from being executed. This means that the SQL engine can be blocked by any 
> query that takes a thread for a long time (the simplest example is a UDF 
> invoking {{Thread.sleep()}}).
> Let's think about how we can improve this.
> Every implementation of QueryTaskExecuter must meet the following 
> requirements (a sketch of an executor with these properties follows below):
> # tasks with the same (queryId, fragmentId) can't be reordered
> # tasks with the same (queryId, fragmentId) can't be executed in parallel: if 
> T1 and T2 are tasks and submit_time(T1) < submit_time(T2), then end_time(T1) < 
> start_time(T2)
> # there must be a 'happens-before' relation between executions of tasks with 
> the same (queryId, fragmentId)
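A minimal sketch (an assumption for illustration, not the actual Ignite executor) of a per-key serial executor that satisfies these requirements: tasks sharing a key such as "queryId/fragmentId" are chained one after another, so they are never reordered, never run in parallel, and each completion happens-before the next task, while different keys still run concurrently on the shared pool:

{code:java}
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch only: completed tails are never pruned here; a real implementation would clean them up.
public class PerKeySerialExecutor {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<String, CompletableFuture<Void>> tails = new ConcurrentHashMap<>();

    /** Chains the task after the current tail for the given key. */
    public void submit(String key, Runnable task) {
        tails.compute(key, (k, tail) ->
                (tail == null ? CompletableFuture.completedFuture((Void) null) : tail)
                        .thenRunAsync(task, pool));
    }

    public static void main(String[] args) throws InterruptedException {
        PerKeySerialExecutor executor = new PerKeySerialExecutor();
        for (int i = 0; i < 5; i++) {
            int n = i;
            // Same key => tasks 0..4 run strictly one after another, in submit order.
            executor.submit("query-1/fragment-0", () -> System.out.println("task " + n));
        }
        Thread.sleep(500);
        executor.pool.shutdown();
    }
}
{code}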



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18517) Sql. SELECT with LIMIT prefetch more data than necessary.

2023-06-13 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-18517:
---
Epic Link: IGNITE-19479

> Sql. SELECT with LIMIT prefetch more data than necessary.
> -
>
> Key: IGNITE-18517
> URL: https://issues.apache.org/jira/browse/IGNITE-18517
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Evgeny Stanilovsky
>Priority: Major
>  Labels: calcite2-required, calcite3-required
>
> This is just an optimization issue, not a bug.
> A simple distributed request like: SELECT * FROM T LIMIT N;
> {noformat}
> IgniteLimit
>   IgniteExchange
> IgniteTableScan
> {noformat}
> needs to request (at most) N rows from each TableScan, but for now it 
> requests: Outbox#flush -> source().request(waiting = IN_BUFFER_SIZE);
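A hypothetical illustration of the suggested optimization: cap the number of rows requested from the upstream source by the rows still needed to satisfy the LIMIT. The IN_BUFFER_SIZE name and the request() call come from the ticket; everything else (the Source interface, the buffer size value) is assumed for the example and is not the actual Outbox code:

{code:java}
// Sketch only: bound the demand by the remaining LIMIT instead of a full buffer.
public class LimitAwareRequestExample {
    static final int IN_BUFFER_SIZE = 512; // buffer size assumed for the example

    interface Source {
        void request(int waiting);
    }

    static void requestNextBatch(Source source, long limit, long alreadyEmitted) {
        long remaining = limit - alreadyEmitted;
        int waiting = (int) Math.min(IN_BUFFER_SIZE, Math.max(remaining, 0));
        if (waiting > 0) {
            source.request(waiting); // ask for at most the rows still needed
        }
    }

    public static void main(String[] args) {
        requestNextBatch(w -> System.out.println("requested " + w + " rows"), 10, 0); // requests 10
        requestNextBatch(w -> System.out.println("requested " + w + " rows"), 10, 7); // requests 3
    }
}
{code}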



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18691) Sql. Reduce overhead of sorting assignments.

2023-06-13 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-18691:
---
Epic Link: IGNITE-19479

> Sql. Reduce overhead of sorting assignments.
> 
>
> Key: IGNITE-18691
> URL: https://issues.apache.org/jira/browse/IGNITE-18691
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Pavel Pereslegin
>Priority: Major
>  Labels: ignite-3
>
> When we map a query, we need to know which node is the primary replica for 
> each partition.
> The current implementation is based on storing such leaders in an "ordered" 
> list, in which the element index corresponds to the partition number.
> InternalTable has several methods to get current assignments (see 
> {{InternalTable#assignments}} and {{InternalTable#primaryReplicas}}).
> Currently each of them explicitly sorts the result by partition number.
> This looks suboptimal, since we call "assignments" for each table while 
> executing a single query.
> One possible solution is to simply change the type of paritionMap inside 
> InternalTableImpl to some kind of sorted implementation, but the 
> disadvantages of this change should be carefully investigated.
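An illustration of the suggested direction (assumed, not the actual InternalTableImpl code): keep the partition-to-primary-replica mapping in a structure that is already ordered by partition number, so assignments can be returned without re-sorting on every query:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

// Sketch only: node names and the map layout are hypothetical.
public class SortedAssignmentsExample {
    // Keys are partition numbers; iteration over values follows key order.
    private final Map<Integer, String> primaryReplicas = new ConcurrentSkipListMap<>();

    void updatePrimary(int partition, String nodeName) {
        primaryReplicas.put(partition, nodeName);
    }

    /** Iteration order follows partition numbers, no explicit sort needed. */
    Iterable<String> assignments() {
        return primaryReplicas.values();
    }

    public static void main(String[] args) {
        SortedAssignmentsExample table = new SortedAssignmentsExample();
        table.updatePrimary(2, "node-C");
        table.updatePrimary(0, "node-A");
        table.updatePrimary(1, "node-B");
        table.assignments().forEach(System.out::println); // node-A, node-B, node-C
    }
}
{code}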



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17684) Investigate of using BinaryTuple instead of array of objects in SQL execution

2023-06-13 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-17684:
---
Epic Link: IGNITE-19479

> Investigate of using BinaryTuple instead of array of objects in SQL execution
> -
>
> Key: IGNITE-17684
> URL: https://issues.apache.org/jira/browse/IGNITE-17684
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Yury Gerzhedovich
>Priority: Major
>  Labels: ignite-3, tech-debt
>
> Right now we internally use an array of objects to represent a row in SQL 
> instead of BinaryTuple and do unnecessary conversion.
> Let's investigate the possibility of migrating from an array of objects to 
> direct usage of BinaryTuple in the execution tree. There are possible issues 
> for some types of execution, like two-phase aggregates, and potentially we 
> should use different representations of a row for different parts.
> Start points are: 
> org.apache.ignite.internal.sql.engine.exec.ArrayRowHandler
> org.apache.ignite.internal.sql.engine.schema.IgniteTableImpl#toRow
> As a result of the task: a list of issues which need to be resolved to reuse 
> BinaryTuple



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-13568) Calcite integration. Decrease bounds of index scan

2023-06-13 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-13568:
---
Epic Link: IGNITE-19479

> Calcite integration. Decrease bounds of index scan
> --
>
> Key: IGNITE-13568
> URL: https://issues.apache.org/jira/browse/IGNITE-13568
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Yury Gerzhedovich
>Priority: Minor
>  Labels: calcite2-required, calcite3-required
>
> As of now, we analyze just the first column from the predicate to determine 
> lower and upper bounds for complex indexes in case the first predicate is not 
> an equality. This decreases the search space; however, in particular cases it 
> may not be very effective, for example when the selectivity of the first 
> column is very low. We need to take into account all columns from the 
> predicate and a suitable index. Keep in mind that each column in the index 
> could be sorted in a different manner.
> The start point right now is 
> org.apache.ignite.internal.processors.query.calcite.util.RexUtils#buildIndexConditions,
>  which could be refactored before the ticket is started.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18689) Sql. Introduce heuristics to put spools into the plan

2023-06-13 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-18689:
---
Epic Link: IGNITE-19479

> Sql. Introduce heuristics to put spools into the plan
> -
>
> Key: IGNITE-18689
> URL: https://issues.apache.org/jira/browse/IGNITE-18689
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> The rewindability trait was eliminated in IGNITE-18213, and the spool node no 
> longer participates in the query optimisation process. Let's do some research 
> on whether there are queries which may leverage table/index spools in order 
> to speed up execution, and if there are, let's introduce a (perhaps heuristic) 
> rule to put a spool node into the plan for such queries.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19557) Sql. Insert through JDBC with batch optimization.

2023-06-13 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-19557:

Labels: calcite3-required ignite-3 performance sql-performance  (was: 
calcite3-required ignite-3 perfomance sql-performance)

> Sql. Insert through JDBC with batch optimization.  
> ---
>
> Key: IGNITE-19557
> URL: https://issues.apache.org/jira/browse/IGNITE-19557
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta1
>Reporter: Evgeny Stanilovsky
>Priority: Major
>  Labels: calcite3-required, ignite-3, performance, sql-performance
>
> JdbcQueryEventHandlerImpl#batchPrepStatementAsync
> processes batch rows sequentially, row by row; it seems that an essential 
> throughput boost can be gained if the rows are processed as a batch.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17244) .NET: Thin 3.0: Optimize async request handling

2023-06-13 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-17244:

Labels: .NET ignite-3 performance  (was: .NET ignite-3 perfomance)

> .NET: Thin 3.0: Optimize async request handling
> ---
>
> Key: IGNITE-17244
> URL: https://issues.apache.org/jira/browse/IGNITE-17244
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms, thin client
>Reporter: Pavel Tupitsyn
>Assignee: Sergey Stronchinskiy
>Priority: Minor
>  Labels: .NET, ignite-3, performance
>
> Reduce allocations when handling async requests in *ClientSocket*.
> Look into combining the following functionality into a single object that can 
> be pooled:
> * IBufferWriter - to write the request
> * IValueTaskSource - to represent the task completion
> * IThreadPoolWorkItem - to handle response on thread pool efficiently
> See PoolingAsyncValueTaskMethodBuilder details: 
> https://devblogs.microsoft.com/dotnet/how-async-await-really-works/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18937) Sql. Join of big number of table function takes unreasonable amount of time

2023-06-13 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-18937:
---
Epic Link: IGNITE-19479

> Sql. Join of big number of table function takes unreasonable amount of time
> ---
>
> Key: IGNITE-18937
> URL: https://issues.apache.org/jira/browse/IGNITE-18937
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> The test 
> {{org.apache.ignite.internal.sql.engine.planner.JoinCommutePlannerTest#commuteIsDisabledForBigJoinsOfTableFunctions}}
>  takes too much time to finish (don't know actual timing, just killed the 
> test after a minute), whereas the similar one but with tables (instead of 
> table function) takes only a few seconds. Let's investigate this problem and 
> fix it. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17623) .NET: Thin 3.0: Perf: review exception throw sites

2023-06-13 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-17623:

Labels: .NET ignite-3 performance  (was: .NET ignite-3 perfomance)

> .NET: Thin 3.0: Perf: review exception throw sites
> --
>
> Key: IGNITE-17623
> URL: https://issues.apache.org/jira/browse/IGNITE-17623
> Project: Ignite
>  Issue Type: Task
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Minor
>  Labels: .NET, ignite-3, performance
> Fix For: 3.0.0-beta2
>
>
> *throw* statement prevents inlining. Review all throw statements:
> * Internal sanity checks can be replaced with Debug.Assert
> * When *throw* is still necessary, and the method is small (candidate for 
> inlining) - move throw logic into a separate method.
> https://devblogs.microsoft.com/dotnet/performance_improvements_in_net_7/#exceptions



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17361) Sql. Investigate low performance of sql parser

2023-06-13 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-17361:
---
Epic Link: IGNITE-19479

> Sql. Investigate low performance of sql parser
> --
>
> Key: IGNITE-17361
> URL: https://issues.apache.org/jira/browse/IGNITE-17361
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> Despite the workaround suggested in IGNITE-17360, low performance of the parser 
> is still a problem. Let's find out whether it's possible to optimise 
> the parser somehow.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19461) The composite index is used when single column index expected to be used

2023-06-13 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-19461:
---
Epic Link: IGNITE-19479

> The composite index is used when single column index expected to be used
> 
>
> Key: IGNITE-19461
> URL: https://issues.apache.org/jira/browse/IGNITE-19461
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc, sql, thin client
>Reporter: Igor
>Priority: Minor
>  Labels: ignite-3
>
> h1. Steps to reproduce:
> 1. Create table.
> {code:java}
> CREATE TABLE index_test_table_5(id INT PRIMARY KEY, field_1 TINYINT, field_2 
> SMALLINT, field_3 INT, field_4 FLOAT, field_5 VARCHAR){code}
> 2. Create index:
> {code:java}
> CREATE INDEX index_test_index_5_1 ON index_test_table_5(field_2){code}
> 3. Create composite index:
> {code:java}
> CREATE INDEX index_test_index_5_2 ON index_test_table_5(field_2, field_3, 
> field_5){code}
> 4. Insert some rows.
> 5. Explain plan for query with filter by column contained in both indexes:
> {code:java}
> EXPLAIN PLAN FOR SELECT * FROM index_test_table_5 WHERE field_2 = 50{code}
> h1. Expected result:
> The index for single column is used (index_test_index_5_1)
> h1. Actual result:
> Randomly, either the single-column index (index_test_index_5_1) or the 
> composite index (index_test_index_5_2) can be used.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19542) .NET: Thin 3.0: Refactor IgniteTuple to wrap BinaryTuple

2023-06-13 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-19542:

Labels: .NET ignite-3 performance  (was: .NET ignite-3 perfomance)

> .NET: Thin 3.0: Refactor IgniteTuple to wrap BinaryTuple
> 
>
> Key: IGNITE-19542
> URL: https://issues.apache.org/jira/browse/IGNITE-19542
> Project: Ignite
>  Issue Type: Task
>  Components: platforms, thin client
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, ignite-3, performance
> Fix For: 3.0.0-beta2
>
>
> * Client protocol uses *BinaryTuple* to exchange data (IGNITE-17297)
> * When reading data from server, currently we unpack all elements of the 
> incoming *BinaryTuple* into *IgniteTuple*
> This is extra work and extra allocations. Instead, wrap the incoming 
> *BinaryTuple* like *MutableRowTupleAdapter* does in Java.
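For illustration only, a minimal sketch of the lazy-wrapping pattern the ticket describes (hypothetical interfaces and names; the real client and Java MutableRowTupleAdapter types differ): instead of eagerly unpacking every element, the tuple adapter keeps a field reader and decodes values on demand.

{code:java}
import java.util.function.IntFunction;

/** Minimal tuple view used only for this sketch. */
interface TupleView {
    Object value(int index);
    int columnCount();
}

/** Lazily adapts a binary row to the tuple view instead of copying every element up front. */
final class LazyBinaryTupleAdapter implements TupleView {
    private final int columnCount;
    private final IntFunction<Object> fieldReader; // stands in for a BinaryTuple reader

    LazyBinaryTupleAdapter(int columnCount, IntFunction<Object> fieldReader) {
        this.columnCount = columnCount;
        this.fieldReader = fieldReader;
    }

    @Override public Object value(int index) {
        // Decode only the requested field; no eager unpacking, no extra allocations.
        return fieldReader.apply(index);
    }

    @Override public int columnCount() {
        return columnCount;
    }
}
{code}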



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-16639) Sql. Index on DATE/TIME/TIMESTAMP fields cannot be used.

2023-06-13 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-16639:
---
Epic Link: IGNITE-19479

> Sql. Index on DATE/TIME/TIMESTAMP fields cannot be used. 
> -
>
> Key: IGNITE-16639
> URL: https://issues.apache.org/jira/browse/IGNITE-16639
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Evgeny Stanilovsky
>Assignee: Evgeny Stanilovsky
>Priority: Major
>  Labels: ignite-3
>
> Need to address [Index on DATE/TIME/TIMESTAMP fields cannot be 
> used|https://issues.apache.org/jira/browse/IGNITE-16077].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18890) Sql. Avoid full index scans in case of null dynamic parameter

2023-06-13 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-18890:
---
Epic Link: IGNITE-19479

> Sql. Avoid full index scans in case of null dynamic parameter
> -
>
> Key: IGNITE-18890
> URL: https://issues.apache.org/jira/browse/IGNITE-18890
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Yury Gerzhedovich
>Priority: Major
>  Labels: ignite-3
>
> The ticket is a copy of IGNITE-17889 to port it from AI2 to AI3.
> Currently, queries like:
> {code:java}
> SELECT * FROM tbl WHERE a >= ?
> {code}
> should return no rows if the dynamic parameter is null, but can be downgraded to 
> a full index scan in case the table has an index on column {{a}} (ASCENDING order, 
> NULLS FIRST).
> We should analyse nulls in search bounds and return an empty rows 
> iterator for regular field conditions (`=`, `<`, `>`, etc.). But nulls 
> should still be processed as-is in search bounds for conditions like `IS NULL`, `IS 
> NOT NULL`, `IS NOT DISTINCT FROM` (the last one is not supported currently).  
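A minimal sketch of the intended handling (hypothetical names, not the actual engine code): before opening an index cursor, inspect the search bound and short-circuit to an empty iterator when a regular comparison is bound to a null parameter.

{code:java}
import java.util.Collections;
import java.util.Iterator;
import java.util.List;

final class SearchBoundCheck {
    /** Condition kinds for this sketch; only the null-accepting ones keep nulls in search bounds. */
    enum ConditionKind { EQUALS, LESS_THAN, GREATER_THAN, IS_NULL, IS_NOT_NULL }

    /**
     * Returns an empty iterator when a regular comparison (=, <, >, ...) is bound to a null
     * dynamic parameter, since such a predicate can never match a row; otherwise falls back
     * to the normal index scan (represented here by a plain list).
     */
    static <RowT> Iterator<RowT> scan(ConditionKind kind, Object bound, List<RowT> indexScan) {
        boolean nullAccepting = kind == ConditionKind.IS_NULL || kind == ConditionKind.IS_NOT_NULL;
        if (bound == null && !nullAccepting) {
            return Collections.emptyIterator(); // no full index scan needed
        }
        return indexScan.iterator(); // stands in for the real range scan
    }
}
{code}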



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18896) Sql. SortNode optimization for queries with limit.

2023-06-13 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-18896:
---
Epic Link: IGNITE-19479

> Sql. SortNode optimization for queries with limit.
> --
>
> Key: IGNITE-18896
> URL: https://issues.apache.org/jira/browse/IGNITE-18896
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Andrey Mashenkov
>Priority: Major
>  Labels: ignite-3
>
> As of now we use two structures to get the top-n (highest or lowest) rows from an 
> unsorted stream.
> We use a PriorityQueue, which is backed by a heap, to hold the top-n rows partially 
> sorted in backward order, and an array (as a stack) to sort and reverse them to get 
> the expected order.
> It seems we can use a single array here (a sketch of the approach follows below).
> * Pass the array to the constructor of fastutil's ObjectHeapPriorityQueue.
> * After the stream is processed, the array will be partially sorted.
> * Then we can sort the array using the heap-sort algorithm in place, without creating 
> a new array.
> * Return the result from the end of the array as we already do.
> Also, I'd suggest using ObjectArrayList.wrap() to expose the array through a Stack 
> interface. 
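A self-contained sketch of the single-array idea in plain JDK code (the ticket proposes the fastutil-based ObjectHeapPriorityQueue; this only illustrates the same keep-top-n-then-heap-sort-in-place pattern on a primitive array):

{code:java}
import java.util.Arrays;

final class TopN {
    /** Returns the n smallest values of {@code input} in ascending order, using one array. */
    static int[] nSmallest(int[] input, int n) {
        int[] heap = new int[Math.min(n, input.length)]; // max-heap of the current candidates
        int size = 0;
        for (int v : input) {
            if (size < heap.length) {
                heap[size++] = v;
                siftUp(heap, size - 1);
            } else if (heap.length > 0 && v < heap[0]) {
                heap[0] = v;              // replace the current maximum
                siftDown(heap, 0, size);
            }
        }
        // In-place heap sort: repeatedly move the maximum to the end of the same array.
        for (int end = size - 1; end > 0; end--) {
            int tmp = heap[0]; heap[0] = heap[end]; heap[end] = tmp;
            siftDown(heap, 0, end);
        }
        return heap; // ascending order, no extra array allocated
    }

    private static void siftUp(int[] h, int i) {
        while (i > 0) {
            int parent = (i - 1) / 2;
            if (h[parent] >= h[i]) break;
            int tmp = h[parent]; h[parent] = h[i]; h[i] = tmp;
            i = parent;
        }
    }

    private static void siftDown(int[] h, int i, int size) {
        while (true) {
            int left = 2 * i + 1, right = left + 1, largest = i;
            if (left < size && h[left] > h[largest]) largest = left;
            if (right < size && h[right] > h[largest]) largest = right;
            if (largest == i) break;
            int tmp = h[largest]; h[largest] = h[i]; h[i] = tmp;
            i = largest;
        }
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(nSmallest(new int[] {9, 1, 7, 3, 8, 2, 6}, 3))); // [1, 2, 3]
    }
}
{code}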



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19632) .NET: Thin 3.0: Optimize Data Streamer for single connection use case

2023-06-13 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-19632:

Labels: .NET iep-102 ignite-3 performance  (was: .NET iep-102 ignite-3 
perfomance)

> .NET: Thin 3.0: Optimize Data Streamer for single connection use case
> -
>
> Key: IGNITE-19632
> URL: https://issues.apache.org/jira/browse/IGNITE-19632
> Project: Ignite
>  Issue Type: Task
>  Components: thin client
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, iep-102, ignite-3, performance
> Fix For: 3.0.0-beta2
>
>
> Optimize .NET client data streamer for a use case when only one connection 
> exists. In this case we don't need to deal with partition awareness and 
> per-node buffers.
> This can be detected automatically or by a flag in DataStreamerOptions.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18635) Sql. Extra comparison in case of index scan and simple predicate.

2023-06-13 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-18635:
---
Epic Link: IGNITE-19479

> Sql. Extra comparison in case of index scan and simple predicate.
> -
>
> Key: IGNITE-18635
> URL: https://issues.apache.org/jira/browse/IGNITE-18635
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta1
>Reporter: Evgeny Stanilovsky
>Priority: Major
>  Labels: calcite3-required, ignite-3
>
> IndexScanNode contains :
> {noformat}
> IgniteIndex schemaIndex
> RangeIterable rangeConditions
> Predicate filters
> {noformat}
> It seems that for some simple predicates no additional filter comparison is 
> needed.
> For example :
> {noformat}
> create table t (a int);
> create index a_idx on t (a);
> select a from t where a = 1;
> {noformat}
> If the correct index scan is used here, there is no need for additional comparisons:
> {noformat}
> if (filters != null && !filters.test(row)) {
> continue;
> }
> {noformat}
> It seems this optimization is still applicable for more complex cases, such as 
> sorted indexes and ranges:
> {noformat}
> create table t (a int);
> create index a_idx on t (a);
> select a from t where a > 5 and a < 10;
> {noformat}
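A minimal sketch of the idea (hypothetical names, mirroring the filter test shown above): when the range condition alone is known to be equivalent to the predicate, the scan can drop the per-row filter entirely.

{code:java}
import java.util.function.Predicate;

final class IndexScanFilter<RowT> {
    private final Predicate<RowT> filters;

    /**
     * If the index range condition already guarantees the predicate (e.g. an exact bound
     * for "a = 1"), pass true for rangeCoversPredicate and the per-row comparison disappears.
     */
    IndexScanFilter(Predicate<RowT> filters, boolean rangeCoversPredicate) {
        this.filters = rangeCoversPredicate ? null : filters;
    }

    boolean accept(RowT row) {
        // Equivalent to the existing "filters != null && !filters.test(row)" check, skipped
        // entirely when the range condition covers the predicate.
        return filters == null || filters.test(row);
    }
}
{code}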



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19682) .NET: Thin 3.0: Combine tx.begin with first enlisted operation

2023-06-13 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-19682:

Labels: .NET ignite-3 performance  (was: .NET ignite-3 perfomance)

> .NET: Thin 3.0: Combine tx.begin with first enlisted operation
> --
>
> Key: IGNITE-19682
> URL: https://issues.apache.org/jira/browse/IGNITE-19682
> Project: Ignite
>  Issue Type: Improvement
>  Components: thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, ignite-3, performance
> Fix For: 3.0.0-beta2
>
>
> Currently, client sends a separate *TX_BEGIN* request when the user invokes 
> *ITransactions.BeginAsync* API:
> * Extra network request.
> * Chosen tx coordinator (server node that handles TX_BEGIN request) is random 
> and in most cases won't be the primary node for enlisted keys.
> Solution:
> * On the client, do not send *TX_BEGIN* request when the user invokes 
> *ITransactions.BeginAsync*. Instead, start the tx "on demand" when it is 
> first used in some API.
> * Send two requests at once to the same node where the first enlisted 
> operation goes (according to partition awareness, if applicable).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19479) Performance improvements

2023-06-13 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-19479:

Labels: ignite-3 performance  (was: ignite-3)

> Performance improvements
> 
>
> Key: IGNITE-19479
> URL: https://issues.apache.org/jira/browse/IGNITE-19479
> Project: Ignite
>  Issue Type: Epic
>Reporter: Alexey Scherbakov
>Assignee: Alexey Scherbakov
>Priority: Major
>  Labels: ignite-3, performance
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19664) Insufficient performance of key-value operations via Java thin client

2023-06-13 Thread Alexey Scherbakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Scherbakov updated IGNITE-19664:
---
Epic Link: IGNITE-19479

> Insufficient performance of key-value operations via Java thin client
> -
>
> Key: IGNITE-19664
> URL: https://issues.apache.org/jira/browse/IGNITE-19664
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ivan Artiukhov
>Priority: Major
>  Labels: ignite-3, performance
> Attachments: ai3_embedded_20230606_083100.jfr.zip, 
> ai3_thin_client_20230606_074305.jfr.zip, gc.log.20230606_074305, 
> ignite-config.conf, ignite3db-0.log, ycsb-run8-thin.txt, 
> ycsb-run9-embedded.txt
>
>
> Apache Ignite 3, rev. 0c68cbe3f016e508bd9d53ce5320c88acba1acff
> YCSB key-value benchmarks: 
> https://github.com/gridgain/YCSB/tree/ae687c3bbd82eb7ce7b886af9a2ae2757457097c/ignite3
> h1. Summary
> The performance of key-value {{put()}} operations may be ~1.5x worse when 
> performed via the Java thin client in comparison to similar {{put()}} operations 
> performed within an embedded node. 
> h1. Test 1. Thin client node
> h2. Steps
> Start a separate Ignite 3 node and a YCSB client which runs the "100% inserts" 
> workload.
> 1. Start an Apache Ignite 3 server node with the attached 
> {{ignite-config.conf}}.
> 2. Start a YCSB client node which performs {{KeyValueView#put}} operations. 
> YCSB command line options: {{-db site.ycsb.db.ignite3.IgniteClient -p 
> hosts=127.0.0.1 -s -P ./workloads/workloadc -threads 4 -p dataintegrity=true 
> -p operationcount=100 -p recordcount=100 -p disableFsync=true -p 
> useEmbedded=false -load}}
> h2. Results
> {noformat}
> [OVERALL], RunTime(ms), 282482
> [OVERALL], Throughput(ops/sec), 3540.048569466373
> [INSERT], Operations, 100
> [INSERT], AverageLatency(us), 1067.488346
> [INSERT], MinLatency(us), 492
> [INSERT], MaxLatency(us), 421375
> [INSERT], 95thPercentileLatency(us), 2059
> [INSERT], 99thPercentileLatency(us), 5151
> [INSERT], Return=OK, 100
> {noformat}
> Node's log:  [^ignite3db-0.log] 
> Node's GC log:  [^gc.log.20230606_074305] 
> Node's config:  [^ignite-config.conf] 
> YCSB log:  [^ycsb-run8-thin.txt] 
> h1. Test 2. Embedded node
> h2. Steps
> The following step will start YCSB with an embedded Ignite 3 node within the 
> same JVM and the "100% insert" workload on that node. 
> 1. Run YCSB with the {{useEmbedded=true}} parameter: {{-db 
> site.ycsb.db.ignite3.IgniteClient -p hosts=127.0.0.1 -s -P 
> ./workloads/workloadc -threads 4 -p dataintegrity=true -p 
> operationcount=100 -p recordcount=100 -p disableFsync=true -p 
> useEmbedded=true -load}}
> h2. Results
> {noformat}
> [OVERALL], RunTime(ms), 173993
> [OVERALL], Throughput(ops/sec), 5747.357652319346
> [INSERT], Operations, 100
> [INSERT], AverageLatency(us), 614.723711
> [INSERT], MinLatency(us), 284
> [INSERT], MaxLatency(us), 342271
> [INSERT], 95thPercentileLatency(us), 1182
> [INSERT], 99thPercentileLatency(us), 3357
> [INSERT], Return=OK, 100
> {noformat}
> Whole YCSB log:  [^ycsb-run9-embedded.txt] 
> h1. Local machine specs
> Lenovo ThinkPad T15 Gen 1
> CPU: Intel i7-10510U (4 cores, 8 threads)
> RAM: 32 GiB DDR4-2666
> SSD: 512 GiB M.2 2242



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-19588) Calcite engine. Long running queries warning

2023-06-13 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17732078#comment-17732078
 ] 

Ignite TC Bot commented on IGNITE-19588:


{panel:title=Branch: [pull/10769/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10769/head] Base: [master] : New Tests 
(5)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}Calcite SQL{color} [[tests 
5|https://ci2.ignite.apache.org/viewLog.html?buildId=7208272]]
* {color:#013220}IgniteCalciteTestSuite: 
TimeCalculationExecutionTest.testTime[Execution strategy = FIFO] - PASSED{color}
* {color:#013220}IgniteCalciteTestSuite: 
TimeCalculationExecutionTest.testTime[Execution strategy = LIFO] - PASSED{color}
* {color:#013220}IgniteCalciteTestSuite: 
TimeCalculationExecutionTest.testTime[Execution strategy = RANDOM] - 
PASSED{color}
* {color:#013220}IgniteCalciteTestSuite: 
SqlDiagnosticIntegrationTest.testBigResultSet - PASSED{color}
* {color:#013220}IgniteCalciteTestSuite: 
SqlDiagnosticIntegrationTest.testLongRunningQueries - PASSED{color}

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7208353buildTypeId=IgniteTests24Java8_RunAll]

> Calcite engine. Long running queries warning
> 
>
> Key: IGNITE-19588
> URL: https://issues.apache.org/jira/browse/IGNITE-19588
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> For the H2-based SQL engine we have warnings in the log files if a query executes 
> for too long (see 
> {{IgniteConfiguration#setLongQueryWarningTimeout(long)}}).
> We need the same functionality for the Calcite-based SQL engine.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19724) Remove redundant joins

2023-06-13 Thread Kirill Gusakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Gusakov updated IGNITE-19724:

Description: 
We have some redundant joins:
- 
[this|https://github.com/apache/ignite-3/blob/7bcea31c9eb6350120584c1ca060131504927d04/modules/table/src/integrationTest/java/org/apache/ignite/distributed/ReplicaUnavailableTest.java#L199]
 must be replaced by assertThat(replicaManager.stopReplica(testGrpId), 
willSuccseesFast());
- 
[this|https://github.com/apache/ignite-3/blob/7bcea31c9eb6350120584c1ca060131504927d04/modules/replicator/src/main/java/org/apache/ignite/internal/replicator/ReplicaManager.java#L409]
 can be replaced by chaining maybe, need to check
- In the TableManager#cleanUpTablesResources replace the join with get+timeout
- Also need to review tests from 
https://github.com/apache/ignite-3/commit/7bcea31c9eb6350120584c1ca060131504927d04
 for join

  was:
We have some redundant joins:
- 
[this|https://github.com/apache/ignite-3/blob/7bcea31c9eb6350120584c1ca060131504927d04/modules/table/src/integrationTest/java/org/apache/ignite/distributed/ReplicaUnavailableTest.java#L199]
 must be replaced by assertThat(replicaManager.stopReplica(testGrpId), 
willSuccseesFast());
- 
[this|https://github.com/apache/ignite-3/blob/7bcea31c9eb6350120584c1ca060131504927d04/modules/replicator/src/main/java/org/apache/ignite/internal/replicator/ReplicaManager.java#L409]
 can be replaced by chaining maybe, need to check
- Also need to review tests from 
https://github.com/apache/ignite-3/commit/7bcea31c9eb6350120584c1ca060131504927d04
 for join


> Remove redundant joins
> --
>
> Key: IGNITE-19724
> URL: https://issues.apache.org/jira/browse/IGNITE-19724
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Gusakov
>Assignee: Kirill Gusakov
>Priority: Major
>
> We have some redundant joins:
> - 
> [this|https://github.com/apache/ignite-3/blob/7bcea31c9eb6350120584c1ca060131504927d04/modules/table/src/integrationTest/java/org/apache/ignite/distributed/ReplicaUnavailableTest.java#L199]
>  must be replaced by assertThat(replicaManager.stopReplica(testGrpId), 
> willSuccseesFast());
> - 
> [this|https://github.com/apache/ignite-3/blob/7bcea31c9eb6350120584c1ca060131504927d04/modules/replicator/src/main/java/org/apache/ignite/internal/replicator/ReplicaManager.java#L409]
>  can be replaced by chaining maybe, need to check
> - In the TableManager#cleanUpTablesResources replace the join with get+timeout
> - Also need to review tests from 
> https://github.com/apache/ignite-3/commit/7bcea31c9eb6350120584c1ca060131504927d04
>  for join
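For the TableManager#cleanUpTablesResources item above, a minimal illustration of the join-vs-get-with-timeout difference using plain JDK futures (names here are illustrative, not the actual Ignite code):

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

final class StopFutureAwait {
    static void await(CompletableFuture<Void> stopFuture) throws Exception {
        // Before: waits forever if the future never completes.
        // stopFuture.join();

        // After: bounded wait, so a hung component cannot block clean-up indefinitely.
        try {
            stopFuture.get(10, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            // Fail the clean-up explicitly (or log and continue) instead of hanging.
            throw new IllegalStateException("Resource clean-up did not finish in time", e);
        }
    }
}
{code}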



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17623) .NET: Thin 3.0: Perf: review exception throw sites

2023-06-13 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-17623:

Labels: .NET ignite-3 perfomance  (was: .NET ignite-3)

> .NET: Thin 3.0: Perf: review exception throw sites
> --
>
> Key: IGNITE-17623
> URL: https://issues.apache.org/jira/browse/IGNITE-17623
> Project: Ignite
>  Issue Type: Task
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Minor
>  Labels: .NET, ignite-3, perfomance
> Fix For: 3.0.0-beta2
>
>
> *throw* statement prevents inlining. Review all throw statements:
> * Internal sanity checks can be replaced with Debug.Assert
> * When *throw* is still necessary, and the method is small (candidate for 
> inlining) - move throw logic into a separate method.
> https://devblogs.microsoft.com/dotnet/performance_improvements_in_net_7/#exceptions



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19682) .NET: Thin 3.0: Combine tx.begin with first enlisted operation

2023-06-13 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-19682:

Labels: .NET ignite-3 perfomance  (was: .NET ignite-3)

> .NET: Thin 3.0: Combine tx.begin with first enlisted operation
> --
>
> Key: IGNITE-19682
> URL: https://issues.apache.org/jira/browse/IGNITE-19682
> Project: Ignite
>  Issue Type: Improvement
>  Components: thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, ignite-3, perfomance
> Fix For: 3.0.0-beta2
>
>
> Currently, client sends a separate *TX_BEGIN* request when the user invokes 
> *ITransactions.BeginAsync* API:
> * Extra network request.
> * Chosen tx coordinator (server node that handles TX_BEGIN request) is random 
> and in most cases won't be the primary node for enlisted keys.
> Solution:
> * On the client, do not send *TX_BEGIN* request when the user invokes 
> *ITransactions.BeginAsync*. Instead, start the tx "on demand" when it is 
> first used in some API.
> * Send two requests at once to the same node where the first enlisted 
> operation goes (according to partition awareness, if applicable).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19632) .NET: Thin 3.0: Optimize Data Streamer for single connection use case

2023-06-13 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-19632:

Labels: .NET iep-102 ignite-3 perfomance  (was: .NET iep-102 ignite-3)

> .NET: Thin 3.0: Optimize Data Streamer for single connection use case
> -
>
> Key: IGNITE-19632
> URL: https://issues.apache.org/jira/browse/IGNITE-19632
> Project: Ignite
>  Issue Type: Task
>  Components: thin client
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, iep-102, ignite-3, perfomance
> Fix For: 3.0.0-beta2
>
>
> Optimize .NET client data streamer for a use case when only one connection 
> exists. In this case we don't need to deal with partition awareness and 
> per-node buffers.
> This can be detected automatically or by a flag in DataStreamerOptions.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19542) .NET: Thin 3.0: Refactor IgniteTuple to wrap BinaryTuple

2023-06-13 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-19542:

Labels: .NET ignite-3 perfomance  (was: .NET ignite-3)

> .NET: Thin 3.0: Refactor IgniteTuple to wrap BinaryTuple
> 
>
> Key: IGNITE-19542
> URL: https://issues.apache.org/jira/browse/IGNITE-19542
> Project: Ignite
>  Issue Type: Task
>  Components: platforms, thin client
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, ignite-3, perfomance
> Fix For: 3.0.0-beta2
>
>
> * Client protocol uses *BinaryTuple* to exchange data (IGNITE-17297)
> * When reading data from server, currently we unpack all elements of the 
> incoming *BinaryTuple* into *IgniteTuple*
> This is extra work and extra allocations. Instead, wrap the incoming 
> *BinaryTuple* like *MutableRowTupleAdapter* does in Java.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17244) .NET: Thin 3.0: Optimize async request handling

2023-06-13 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-17244:

Labels: .NET ignite-3 perfomance  (was: .NET ignite-3)

> .NET: Thin 3.0: Optimize async request handling
> ---
>
> Key: IGNITE-17244
> URL: https://issues.apache.org/jira/browse/IGNITE-17244
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms, thin client
>Reporter: Pavel Tupitsyn
>Assignee: Sergey Stronchinskiy
>Priority: Minor
>  Labels: .NET, ignite-3, perfomance
>
> Reduce allocations when handling async requests in *ClientSocket*.
> Look into combining the following functionality into a single object that can 
> be pooled:
> * IBufferWriter - to write the request
> * IValueTaskSource - to represent the task completion
> * IThreadPoolWorkItem - to handle response on thread pool efficiently
> See PoolingAsyncValueTaskMethodBuilder details: 
> https://devblogs.microsoft.com/dotnet/how-async-await-really-works/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19542) .NET: Thin 3.0: Refactor IgniteTuple to wrap BinaryTuple

2023-06-13 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-19542:

Ignite Flags:   (was: Docs Required,Release Notes Required)

> .NET: Thin 3.0: Refactor IgniteTuple to wrap BinaryTuple
> 
>
> Key: IGNITE-19542
> URL: https://issues.apache.org/jira/browse/IGNITE-19542
> Project: Ignite
>  Issue Type: Task
>  Components: platforms, thin client
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, ignite-3
> Fix For: 3.0.0-beta2
>
>
> * Client protocol uses *BinaryTuple* to exchange data (IGNITE-17297)
> * When reading data from server, currently we unpack all elements of the 
> incoming *BinaryTuple* into *IgniteTuple*
> This is extra work and extra allocations. Instead, wrap the incoming 
> *BinaryTuple* like *MutableRowTupleAdapter* does in Java.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19724) Remove redundant joins

2023-06-13 Thread Kirill Gusakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Gusakov updated IGNITE-19724:

Description: 
We have some redundant joins:
- 
[this|https://github.com/apache/ignite-3/blob/7bcea31c9eb6350120584c1ca060131504927d04/modules/table/src/integrationTest/java/org/apache/ignite/distributed/ReplicaUnavailableTest.java#L199]
 must be replaced by assertThat(replicaManager.stopReplica(testGrpId), 
willSuccseesFast());
- 
[this|https://github.com/apache/ignite-3/blob/7bcea31c9eb6350120584c1ca060131504927d04/modules/replicator/src/main/java/org/apache/ignite/internal/replicator/ReplicaManager.java#L409]
 can be replaced by chaining maybe, need to check
- Also need to review tests from 
https://github.com/apache/ignite-3/commit/7bcea31c9eb6350120584c1ca060131504927d04
 for join

  was:
We have some redundant joins:
- 
[this|https://github.com/apache/ignite-3/blob/7bcea31c9eb6350120584c1ca060131504927d04/modules/table/src/integrationTest/java/org/apache/ignite/distributed/ReplicaUnavailableTest.java#L199]
 must be replaced by assertThat(replicaManager.stopReplica(testGrpId), 
willSuccseesFast());
- 
[this|https://github.com/apache/ignite-3/blob/7bcea31c9eb6350120584c1ca060131504927d04/modules/replicator/src/main/java/org/apache/ignite/internal/replicator/ReplicaManager.java#L409]
 can be replaced by chaining maybe, need to check


> Remove redundant joins
> --
>
> Key: IGNITE-19724
> URL: https://issues.apache.org/jira/browse/IGNITE-19724
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Gusakov
>Assignee: Kirill Gusakov
>Priority: Major
>
> We have some redundant joins:
> - 
> [this|https://github.com/apache/ignite-3/blob/7bcea31c9eb6350120584c1ca060131504927d04/modules/table/src/integrationTest/java/org/apache/ignite/distributed/ReplicaUnavailableTest.java#L199]
>  must be replaced by assertThat(replicaManager.stopReplica(testGrpId), 
> willSuccseesFast());
> - 
> [this|https://github.com/apache/ignite-3/blob/7bcea31c9eb6350120584c1ca060131504927d04/modules/replicator/src/main/java/org/apache/ignite/internal/replicator/ReplicaManager.java#L409]
>  can be replaced by chaining maybe, need to check
> - Also need to review tests from 
> https://github.com/apache/ignite-3/commit/7bcea31c9eb6350120584c1ca060131504927d04
>  for join



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-19724) Remove redundant joins

2023-06-13 Thread Kirill Gusakov (Jira)
Kirill Gusakov created IGNITE-19724:
---

 Summary: Remove redundant joins
 Key: IGNITE-19724
 URL: https://issues.apache.org/jira/browse/IGNITE-19724
 Project: Ignite
  Issue Type: Task
Reporter: Kirill Gusakov


We have some redundant joins:
- 
[this|https://github.com/apache/ignite-3/blob/7bcea31c9eb6350120584c1ca060131504927d04/modules/table/src/integrationTest/java/org/apache/ignite/distributed/ReplicaUnavailableTest.java#L199]
 must be replaced by assertThat(replicaManager.stopReplica(testGrpId), 
willSuccseesFast());
- 
[this|https://github.com/apache/ignite-3/blob/7bcea31c9eb6350120584c1ca060131504927d04/modules/replicator/src/main/java/org/apache/ignite/internal/replicator/ReplicaManager.java#L409]
 can be replaced by chaining maybe, need to check



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-19724) Remove redundant joins

2023-06-13 Thread Kirill Gusakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Gusakov reassigned IGNITE-19724:
---

Assignee: Kirill Gusakov

> Remove redundant joins
> --
>
> Key: IGNITE-19724
> URL: https://issues.apache.org/jira/browse/IGNITE-19724
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Gusakov
>Assignee: Kirill Gusakov
>Priority: Major
>
> We have some redundant joins:
> - 
> [this|https://github.com/apache/ignite-3/blob/7bcea31c9eb6350120584c1ca060131504927d04/modules/table/src/integrationTest/java/org/apache/ignite/distributed/ReplicaUnavailableTest.java#L199]
>  must be replaced by assertThat(replicaManager.stopReplica(testGrpId), 
> willSuccseesFast());
> - 
> [this|https://github.com/apache/ignite-3/blob/7bcea31c9eb6350120584c1ca060131504927d04/modules/replicator/src/main/java/org/apache/ignite/internal/replicator/ReplicaManager.java#L409]
>  can be replaced by chaining maybe, need to check



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-19723) All REST endpoints should return valid Problem json

2023-06-13 Thread Aleksandr (Jira)
Aleksandr created IGNITE-19723:
--

 Summary: All REST endpoints should return valid Problem json
 Key: IGNITE-19723
 URL: https://issues.apache.org/jira/browse/IGNITE-19723
 Project: Ignite
  Issue Type: Bug
  Components: rest
Reporter: Aleksandr


We have to develop a common mechanism for validating that every REST endpoint 
returns a valid Problem JSON. 

Here is the definition of a valid Problem: 
https://datatracker.ietf.org/doc/html/rfc7807 

Currently we do not set the application/problem+json content type. Also, all cases should 
be carefully reviewed.
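For reference, a minimal example of a Problem document as defined by RFC 7807, served with the application/problem+json content type; the concrete field values below are illustrative only:

{noformat}
HTTP/1.1 404 Not Found
Content-Type: application/problem+json

{
  "type": "about:blank",
  "title": "Not Found",
  "status": 404,
  "detail": "Deployment unit 'unit1:1.0.0' does not exist.",
  "instance": "/management/v1/deployment/units/unit1/1.0.0"
}
{noformat}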





--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19199) Idle safe time propagation for the metastorage

2023-06-13 Thread Sergey Chugunov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Chugunov updated IGNITE-19199:
-
Epic Link: IGNITE-18733

> Idle safe time propagation for the metastorage
> --
>
> Key: IGNITE-19199
> URL: https://issues.apache.org/jira/browse/IGNITE-19199
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Semyon Danilov
>Priority: Major
>  Labels: ignite-3
>
> In https://issues.apache.org/jira/browse/IGNITE-19028 safe time is propagated 
> from the leader every 1 second, which is sub-optimal. The timeout should be 
> configurable + safe time should be only propagated if the metastorage is 
> really idle (no updates were made in this timeout period)
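A minimal sketch of the intended behaviour (illustrative names and plain JDK scheduling, not the actual metastorage code): propagate safe time only when no metastorage updates have arrived within the configurable idle interval.

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

final class IdleSafeTimeScheduler {
    private final AtomicLong lastUpdateNanos = new AtomicLong(System.nanoTime());
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    /** Called on every real metastorage update; such updates already carry safe time. */
    void onMetastorageUpdate() {
        lastUpdateNanos.set(System.nanoTime());
    }

    /** Starts periodic checks with a configurable idle interval instead of a hard-coded 1 second. */
    void start(long idleIntervalMillis, Runnable propagateSafeTime) {
        scheduler.scheduleAtFixedRate(() -> {
            long idleNanos = System.nanoTime() - lastUpdateNanos.get();
            if (idleNanos >= TimeUnit.MILLISECONDS.toNanos(idleIntervalMillis)) {
                propagateSafeTime.run(); // only sent when the metastorage is really idle
            }
        }, idleIntervalMillis, idleIntervalMillis, TimeUnit.MILLISECONDS);
    }

    void stop() {
        scheduler.shutdownNow();
    }
}
{code}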



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19199) Idle safe time propagation for the metastorage

2023-06-13 Thread Sergey Chugunov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Chugunov updated IGNITE-19199:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Idle safe time propagation for the metastorage
> --
>
> Key: IGNITE-19199
> URL: https://issues.apache.org/jira/browse/IGNITE-19199
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Semyon Danilov
>Priority: Major
>  Labels: ignite-3
>
> In https://issues.apache.org/jira/browse/IGNITE-19028 safe time is propagated 
> from the leader every 1 second, which is sub-optimal. The timeout should be 
> configurable + safe time should be only propagated if the metastorage is 
> really idle (no updates were made in this timeout period)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19363) Split start of indexes and start of partition raft group nodes

2023-06-13 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-19363:
-
Reviewer: Kirill Tkalenko

> Split start of indexes and start of partition raft group nodes
> --
>
> Key: IGNITE-19363
> URL: https://issues.apache.org/jira/browse/IGNITE-19363
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Semyon Danilov
>Assignee: Semyon Danilov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Right now there is a cyclic dependency between raft group node recovery on 
> start and the start of indexes. To start indexes, all raft nodes should be 
> started. And raft nodes perform index rebuild on start. Index rebuild is a 
> blocking operation which waits for the table to appear. That can't happen 
> until all raft nodes are started. As there is a limited number of stripes in 
> the disruptor, blocking one disruptor blocks the start of another raft node, so 
> the table can't appear and the index rebuild can't be finished.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19363) Split start of indexes and start of partition raft group nodes

2023-06-13 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-19363:
-
Fix Version/s: 3.0.0-beta2

> Split start of indexes and start of partition raft group nodes
> --
>
> Key: IGNITE-19363
> URL: https://issues.apache.org/jira/browse/IGNITE-19363
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Semyon Danilov
>Assignee: Semyon Danilov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Right now there is a cyclic dependency between raft group node recovery on 
> start and the start of indexes. To start indexes, all raft nodes should be 
> started. And raft nodes perform index rebuild on start. Index rebuild is a 
> blocking operation which waits for the table to appear. That can't happen 
> until all raft nodes are started. As there is a limited number of stripes in 
> the disruptor, blocking one disruptor blocks the start of another raft node, so 
> the table can't appear and the index rebuild can't be finished.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19363) Split start of indexes and start of partition raft group nodes

2023-06-13 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-19363:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Split start of indexes and start of partition raft group nodes
> --
>
> Key: IGNITE-19363
> URL: https://issues.apache.org/jira/browse/IGNITE-19363
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Semyon Danilov
>Assignee: Semyon Danilov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Right now there is a cyclic dependency between raft group node recovery on 
> start and the start of indexes. To start indexes, all raft nodes should be 
> started. And raft nodes perform index rebuild on start. Index rebuild is a 
> blocking operation which waits for the table to appear. That can't happen 
> until all raft nodes are started. As there is a limited number of stripes in 
> the disruptor, blocking one disruptor blocks the start of another raft node, so 
> the table can't appear and the index rebuild can't be finished.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-18692) Rebalance test is failed

2023-06-13 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-18692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17732054#comment-17732054
 ] 

Vladislav Pyatkov commented on IGNITE-18692:


Merged 7bcea31c9eb6350120584c1ca060131504927d04

> Rebalance test is failed
> 
>
> Key: IGNITE-18692
> URL: https://issues.apache.org/jira/browse/IGNITE-18692
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Uttsel
>Assignee: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> org.apache.ignite.internal.rebalance.ItRebalanceTest#assignmentsChangingOnNodeLeaveNodeJoin
>  failed.
> The failure is caused by commits:
> db8f1e38 "IGNITE-18397 Rework Watches based on Raft Learners (#1490)"
> ff27d76d "IGNITE-18598 Fix compilation after merge (#1560)"
> I created separated branch with this test: 
> [https://github.com/gridgain/apache-ignite-3/tree/ignite-18088_test] which 
> based on ff27d76d "IGNITE-18598 Fix compilation after merge (#1560)"
>  
> {code:java}
> org.opentest4j.AssertionFailedError: expected:  but was: 
>     at app//org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:55)
>     at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:40)
>     at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:35)
>     at app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:179)
>     at 
> app//org.apache.ignite.internal.rebalance.ItRebalanceTest.assignmentsChangingOnNodeLeaveNodeJoin(ItRebalanceTest.java:132)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19633) Exclude org.apache.calcite.plan.volcano from Javadoc

2023-06-13 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-19633:
-
Labels: ignite-3  (was: )

> Exclude org.apache.calcite.plan.volcano from Javadoc 
> -
>
> Key: IGNITE-19633
> URL: https://issues.apache.org/jira/browse/IGNITE-19633
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Mikhail Pochatkin
>Priority: Major
>  Labels: ignite-3
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19619) Term-bases to lease-based switch in SQL engine

2023-06-13 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-19619:
-
Labels: ignite-3  (was: )

> Term-bases to lease-based  switch in SQL engine
> ---
>
> Key: IGNITE-19619
> URL: https://issues.apache.org/jira/browse/IGNITE-19619
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19600) Remove the DistributionZoneManager#topologyVersionedDataNodes and connected logic

2023-06-13 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-19600:
-
Labels: ignite-3  (was: )

> Remove the DistributionZoneManager#topologyVersionedDataNodes and connected 
> logic
> -
>
> Key: IGNITE-19600
> URL: https://issues.apache.org/jira/browse/IGNITE-19600
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Gusakov
>Assignee: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> *Motivation*
> Under IGNITE-18756 we introduced the logic for awaiting the right 
> dataNodes list, which is synchronized with the appropriate topology version. But 
> at the moment this method is not needed anymore.
> Definition of done
> - The method itself and the connected logic from IGNITE-18756 are removed
> - All affected tests are fixed after that



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19583) Benchmark & optimize writing into RAFT log

2023-06-13 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-19583:
-
Labels: ignite-3  (was: )

> Benchmark & optimize writing into RAFT log
> --
>
> Key: IGNITE-19583
> URL: https://issues.apache.org/jira/browse/IGNITE-19583
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3
>
> We should investigate the log storage performance and consider replacing the 
> current rocksdb-based implementation with the newer one from jraft - 
> [https://github.com/sofastack/sofa-jraft/issues/453.]
> Given that we use a shared log storage for multiple groups, we shouldn't 
> blindly compare our storage with the one from jraft; investigation is 
> required. Maybe we should port and update another storage before measuring 
> anything.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19721) Sql. TypeCoercion. Move type validation checks to SqlValidator.

2023-06-13 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-19721:
--
Description: https://issues.apache.org/jira/browse/IGNITE-18831 introduced 
additional type checking for dynamic parameters and added validation checks, 
that throw exceptions, to IgniteTypeCoercion. It would be better to move those 
checks to IgniteSqlValidation (if it is possible).  (was: 
https://issues.apache.org/jira/browse/IGNITE-18831 introduced additional type 
checking for dynamic parameters and added validation checks, that throw 
exceptions, to IgniteTypeCoercion. It would be better that those checks would 
be part of IgniteSqlValidation (if it is possible).)

> Sql. TypeCoercion. Move type validation checks to SqlValidator.
> ---
>
> Key: IGNITE-19721
> URL: https://issues.apache.org/jira/browse/IGNITE-19721
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
>
> https://issues.apache.org/jira/browse/IGNITE-18831 introduced additional type 
> checking for dynamic parameters and added validation checks, that throw 
> exceptions, to IgniteTypeCoercion. It would be better to move those checks to 
> IgniteSqlValidation (if it is possible).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19721) Sql. TypeCoercion. Move type validation checks to SqlValidator.

2023-06-13 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-19721:
--
Description: https://issues.apache.org/jira/browse/IGNITE-18831 introduced 
additional type checking for dynamic parameters and added validation checks, 
that throw exceptions, to IgniteTypeCoercion. It would be better that those 
checks would be part of IgniteSqlValidation (if it is possible).  (was: 
https://issues.apache.org/jira/browse/IGNITE-18831 [Sql. Dynamic parameters. 
Inferred types of dynamic parameters are not used by the execution runtime] 
introduced additional type checking for dynamic parameters and added validation 
checks, that throw exceptions, to IgniteTypeCoercion. It would be better that 
those checks would be part of IgniteSqlValidation (if it is possible).)

> Sql. TypeCoercion. Move type validation checks to SqlValidator.
> ---
>
> Key: IGNITE-19721
> URL: https://issues.apache.org/jira/browse/IGNITE-19721
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
>
> https://issues.apache.org/jira/browse/IGNITE-18831 introduced additional type 
> checking for dynamic parameters and added validation checks, that throw 
> exceptions, to IgniteTypeCoercion. It would be better that those checks would 
> be part of IgniteSqlValidation (if it is possible).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19721) Sql. TypeCoercion. Move type validation checks to SqlValidator.

2023-06-13 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-19721:
--
Description: https://issues.apache.org/jira/browse/IGNITE-18831 [Sql. 
Dynamic parameters. Inferred types of dynamic parameters are not used by the 
execution runtime] introduced additional type checking for dynamic parameters 
and added validation checks, that throw exceptions, to IgniteTypeCoercion. It 
would be better that those checks would be part of IgniteSqlValidation (if it 
is possible).  (was: https://issues.apache.org/jira/browse/IGNITE-18831 
Introduced additional type checking for dynamic parameters and added validation 
checks, that throw exceptions, to IgniteTypeCoercion. It would be better that 
those checks would be part of IgniteSqlValidation (if it is possible).)

> Sql. TypeCoercion. Move type validation checks to SqlValidator.
> ---
>
> Key: IGNITE-19721
> URL: https://issues.apache.org/jira/browse/IGNITE-19721
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
>
> https://issues.apache.org/jira/browse/IGNITE-18831 [Sql. Dynamic parameters. 
> Inferred types of dynamic parameters are not used by the execution runtime] 
> introduced additional type checking for dynamic parameters and added 
> validation checks, that throw exceptions, to IgniteTypeCoercion. It would be 
> better that those checks would be part of IgniteSqlValidation (if it is 
> possible).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19721) Sql. TypeCoercion. Move type validation checks to SqlValidator.

2023-06-13 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-19721:
--
Description: https://issues.apache.org/jira/browse/IGNITE-18831 Introduced 
additional type checking for dynamic parameters and added validation checks, 
that throw exceptions, to IgniteTypeCoercion. It would be better that those 
checks would be part of IgniteSqlValidation (if it is possible).  (was: 
Introduced additional type checking for dynamic parameters and added validation 
checks, that throw exceptions, to IgniteTypeCoercion. It would be better that 
those checks would be part of IgniteSqlValidation (if it is possible).)

> Sql. TypeCoercion. Move type validation checks to SqlValidator.
> ---
>
> Key: IGNITE-19721
> URL: https://issues.apache.org/jira/browse/IGNITE-19721
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
>
> https://issues.apache.org/jira/browse/IGNITE-18831 Introduced additional type 
> checking for dynamic parameters and added validation checks, that throw 
> exceptions, to IgniteTypeCoercion. It would be better that those checks would 
> be part of IgniteSqlValidation (if it is possible).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-19545) .NET: Thin 3.0: Basic Data Streamer

2023-06-13 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17732047#comment-17732047
 ] 

Pavel Tupitsyn commented on IGNITE-19545:
-

Merged to main: 79a841d95247c338b80ab91a93ec18ee7c0344ac

> .NET: Thin 3.0: Basic Data Streamer
> ---
>
> Key: IGNITE-19545
> URL: https://issues.apache.org/jira/browse/IGNITE-19545
> Project: Ignite
>  Issue Type: Task
>  Components: thin client
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, iep-102, ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Implement basic data streamer in .NET client without receiver - see Use Case 
> 1 in the 
> [IEP-102|https://cwiki.apache.org/confluence/display/IGNITE/IEP-102%3A+Data+Streamer].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-19709) Sql. Remove reflection call from mapping implementation.

2023-06-13 Thread Evgeny Stanilovsky (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17732046#comment-17732046
 ] 

Evgeny Stanilovsky commented on IGNITE-19709:
-

SqlBenchmark with a slightly changed query: 

{noformat}
plan = gatewayNode.prepare("select substring('long_string', 1, 5)");
{noformat}

i.e. single-node mapping, shows:

{noformat}
ignite-19709
Benchmark  Mode  Cnt  Score  Error  Units
SqlBenchmark.selectAllSimple  thrpt   20  30138.583 ± 1041.664  ops/s

main: 89eb752c9981f0880de7024932
Benchmark  Mode  Cnt  Score  Error  Units
SqlBenchmark.selectAllSimple  thrpt   20  28720.337 ± 1005.199  ops/s
{noformat}


> Sql. Remove reflection call from mapping implementation.
> 
>
> Key: IGNITE-19709
> URL: https://issues.apache.org/jira/browse/IGNITE-19709
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta1
>Reporter: Evgeny Stanilovsky
>Assignee: Evgeny Stanilovsky
>Priority: Major
>  Labels: calcite3-required, ignite-3
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Mapping is implemented through a reflection call (IgniteMdFragmentMapping); it's 
> hard to debug and it seems it can be implemented more clearly. Needs refactoring.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-19709) Sql. Remove reflection call from mapping implementation.

2023-06-13 Thread Evgeny Stanilovsky (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17732040#comment-17732040
 ] 

Evgeny Stanilovsky commented on IGNITE-19709:
-

Makes sense, reverting the changes.

> Sql. Remove reflection call from mapping implementation.
> 
>
> Key: IGNITE-19709
> URL: https://issues.apache.org/jira/browse/IGNITE-19709
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta1
>Reporter: Evgeny Stanilovsky
>Assignee: Evgeny Stanilovsky
>Priority: Major
>  Labels: calcite3-required, ignite-3
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Mapping is implemented through a reflection call (IgniteMdFragmentMapping); it's 
> hard to debug and it seems it can be implemented more clearly. Needs refactoring.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19709) Sql. Remove reflection call from mapping implementation.

2023-06-13 Thread Evgeny Stanilovsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgeny Stanilovsky updated IGNITE-19709:

Description: Mapping is implemented through a reflection call 
(IgniteMdFragmentMapping); it's hard to debug and it seems it can be implemented more 
clearly. Needs refactoring.  (was: 1. For sql requests like SELECT 
SUBSTRING('text', 1, 3); no need to start implicit tx.
2. mapping implemented through reflection call, it`s hard to debug and seems 
can be implemented more clear, need to refactor.)

> Sql. Remove reflection call from mapping implementation.
> 
>
> Key: IGNITE-19709
> URL: https://issues.apache.org/jira/browse/IGNITE-19709
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta1
>Reporter: Evgeny Stanilovsky
>Assignee: Evgeny Stanilovsky
>Priority: Major
>  Labels: calcite3-required, ignite-3
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Mapping is implemented through a reflection call (IgniteMdFragmentMapping); it's 
> hard to debug and it seems it can be implemented more clearly. Needs refactoring.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19709) Sql. Remove reflection call from mapping implementation.

2023-06-13 Thread Evgeny Stanilovsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgeny Stanilovsky updated IGNITE-19709:

Summary: Sql. Remove reflection call from mapping implementation.  (was: 
Sql. Get rid of starting implicit transactions for SELECT without FROM 
statements, change mapping.)

> Sql. Remove reflection call from mapping implementation.
> 
>
> Key: IGNITE-19709
> URL: https://issues.apache.org/jira/browse/IGNITE-19709
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta1
>Reporter: Evgeny Stanilovsky
>Assignee: Evgeny Stanilovsky
>Priority: Major
>  Labels: calcite3-required, ignite-3
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> 1. For sql requests like SELECT SUBSTRING('text', 1, 3); no need to start 
> implicit tx.
> 2. mapping implemented through reflection call, it`s hard to debug and seems 
> can be implemented more clear, need to refactor.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-19651) ignite-code-deployment module shouldn't depend on ignite-rest-api

2023-06-13 Thread Ivan Gagarkin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Gagarkin reassigned IGNITE-19651:
--

Assignee: Ivan Gagarkin

> ignite-code-deployment module shouldn't depend on ignite-rest-api
> --
>
> Key: IGNITE-19651
> URL: https://issues.apache.org/jira/browse/IGNITE-19651
> Project: Ignite
>  Issue Type: Bug
>  Components: compute
>Reporter: Ivan Gagarkin
>Assignee: Ivan Gagarkin
>Priority: Major
>  Labels: ignite-3
>
> Currently, the ignite-code-deployment module depends on the ignite-rest-api module 
> because it needs 
> org.apache.ignite.internal.rest.api.deployment.DeploymentStatus. 
> We need to split the internal and external presentation of the deployment status. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19720) ODBC 3.0: Implement retrieval of Ignite version on handshake

2023-06-13 Thread Igor Sapego (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Sapego updated IGNITE-19720:
-
Labels: ignite-3  (was: )

> ODBC 3.0: Implement retrieval of Ignite version on handshake
> 
>
> Key: IGNITE-19720
> URL: https://issues.apache.org/jira/browse/IGNITE-19720
> Project: Ignite
>  Issue Type: New Feature
>  Components: odbc
>Affects Versions: 3.0.0-beta1
>Reporter: Igor Sapego
>Priority: Major
>  Labels: ignite-3
>
> SQLGetInfo(SQL_DBMS_VER) should return the current version of the cluster. Currently, 
> the ODBC driver does not have this information. We need to implement retrieval of this 
> information on handshake.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-19708) Check refcounter of unit before undeploy

2023-06-13 Thread Ivan Gagarkin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Gagarkin reassigned IGNITE-19708:
--

Assignee: Ivan Gagarkin

> Check refcounter of unit before undeploy
> 
>
> Key: IGNITE-19708
> URL: https://issues.apache.org/jira/browse/IGNITE-19708
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Mikhail Pochatkin
>Assignee: Ivan Gagarkin
>Priority: Major
>  Labels: ignite-3
>
> # Change clusterDURecord.status to OBSOLETE value. This operation could fail 
> because another process has already changed status to OBSOLETE or REMOVING 
> value. It is also impossible to start an undeployment process in case the 
> deployment process is still in progress.
> After this step the deployment unit is not available for new code execution. 
> Code execution in progress still can use this deployment unit.
>  # Meta storage event must be fired to all target nodes due to a change of 
> clusterDURecord.status.
>  # After receiving this event by the target node the system must change 
> nodeDURecord.status to OBSOLETE value.
>  # The node waits for all code executions in progress that 
> depend on this deployment unit to finish. As soon as all code executions are finished, 
> nodeDURecord.status must be changed to the REMOVING value.
> From this point on it is impossible to use the deployment unit for code 
> execution, either for new tasks or for old tasks (the latter is impossible 
> due to the invariant that all old tasks are finished).
>  # For each change of nodeDURecord.status to REMOVING value the system is 
> able to receive an event from meta storage and check that all nodes have 
> nodeDURecord.status == REMOVING. If the condition is met then 
> clusterDURecord.status must be changed to REMOVING too.
>  # Now the deployment unit can be removed from each target node and, after 
> it, remove corresponding status records.
>  # For each removal of a nodeDURecord record from meta storage the system is 
> able to receive an event from meta storage and check that there are no 
> nodeDURecord records left for the given deployment unit. Now the system must 
> remove the clusterDURecord record for the deployment unit.
>  
> Note that if the deployment unit was removed then there are no class 
> loaders associated with this deployment unit. Eventually the class loader 
> should be collected by GC and all classes must be unloaded from the JVM. This is 
> a critical requirement in order to avoid memory leaks related to multiple 
> class loading/unloading. A sketch of these status transitions is shown below.
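A minimal sketch of the status transitions described above (illustrative enum and guards; in the real flow the records live in the meta storage and are changed with conditional invokes):

{code:java}
import java.util.concurrent.atomic.AtomicReference;

final class DeploymentUnitStatusMachine {
    /** Statuses used in the undeploy flow described above. */
    enum Status { DEPLOYED, OBSOLETE, REMOVING, REMOVED }

    private final AtomicReference<Status> status = new AtomicReference<>(Status.DEPLOYED);

    /**
     * Step 1: mark the unit OBSOLETE. Fails (returns false) if another process already
     * moved it to OBSOLETE or REMOVING, or if the deployment is still in progress.
     */
    boolean markObsolete(boolean deploymentInProgress) {
        return !deploymentInProgress && status.compareAndSet(Status.DEPLOYED, Status.OBSOLETE);
    }

    /** Step 4: allowed only once all in-flight executions using the unit have finished. */
    boolean markRemoving(int inFlightExecutions) {
        return inFlightExecutions == 0 && status.compareAndSet(Status.OBSOLETE, Status.REMOVING);
    }

    /** Steps 6-7: the unit files and the corresponding status records are removed. */
    boolean markRemoved() {
        return status.compareAndSet(Status.REMOVING, Status.REMOVED);
    }
}
{code}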



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

