[jira] [Created] (IGNITE-20162) Sql. Support collection types in RowSchema.

2023-08-03 Thread Maksim Zhuravkov (Jira)
Maksim Zhuravkov created IGNITE-20162:
-

 Summary: Sql. Support collection types in RowSchema.
 Key: IGNITE-20162
 URL: https://issues.apache.org/jira/browse/IGNITE-20162
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Affects Versions: 3.0.0-beta1
Reporter: Maksim Zhuravkov


After migrating to BinaryTuple/RowSchema in the sql-engine, RowSchema should be 
updated to support collection types such as Calcite's ARRAY/MAP. These types 
currently work with array-backed rows because the array-backed RowHandler is 
untyped.

1. Add types that describe collection types.
2. Update the conversion from RelDataType to RowSchema types to cover these types.
3. Update the runtime code that reads/writes collection types from/to 
BinaryTuple-backed rows.
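A minimal sketch of step 1, with hypothetical names (the real RowSchema type classes may look different): collection types can be described by element-typed specs that sit alongside the existing scalar ones.

```java
// Hypothetical sketch only: TypeSpec/ArrayTypeSpec/MapTypeSpec are
// illustrative names, not the actual RowSchema API.
class RowTypeSketch {
    interface TypeSpec { }

    /** Placeholder for an existing scalar type. */
    record BaseTypeSpec(String name) implements TypeSpec { }

    /** Describes Calcite's ARRAY: carries the element type. */
    record ArrayTypeSpec(TypeSpec elementType) implements TypeSpec { }

    /** Describes Calcite's MAP: carries key and value types. */
    record MapTypeSpec(TypeSpec keyType, TypeSpec valueType) implements TypeSpec { }
}
```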



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20086) Ensure mockito resources cleaned after tests.

2023-08-03 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov updated IGNITE-20086:
--
Labels: ignite-3 stability  (was: ignite-3 stability tech-debt-test)

> Ensure mockito resources cleaned after tests. 
> --
>
> Key: IGNITE-20086
> URL: https://issues.apache.org/jira/browse/IGNITE-20086
> Project: Ignite
>  Issue Type: Test
>Reporter: Andrey Mashenkov
>Assignee: Andrey Mashenkov
>Priority: Major
>  Labels: ignite-3, stability
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Tests must clean up inline mocks on tear-down.
> Otherwise this may lead to OOM; see IGNITE-20065.
> Let's add an arch-test to make sure all tests that use mocks inherit the 
> BaseIgniteAbstractTest class.
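The proposed arch-test can be sketched in plain-JDK terms (names are illustrative; a real implementation would likely use a library such as ArchUnit, and the base class would clear inline mocks via Mockito's `Mockito.framework().clearInlineMocks()`):

```java
// Sketch only: a real check would scan the classpath; BaseIgniteAbstractTest
// is assumed to clear inline mocks on tear-down.
class MockCleanupArchCheckSketch {
    static class BaseIgniteAbstractTest {
        // In the real base class, an @AfterEach hook would call
        // Mockito.framework().clearInlineMocks() to avoid OOM (IGNITE-20065).
    }

    static class GoodMockingTest extends BaseIgniteAbstractTest { }

    static class BadMockingTest { }

    /** Returns true iff a mock-using test class inherits the base test class. */
    static boolean inheritsBase(Class<?> testClass) {
        return BaseIgniteAbstractTest.class.isAssignableFrom(testClass);
    }
}
```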



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20105) Fix the bug in CatalogUtils#fromParams(ColumnParams)

2023-08-03 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov reassigned IGNITE-20105:
-

Assignee: Andrey Mashenkov

> Fix the bug in CatalogUtils#fromParams(ColumnParams)
> 
>
> Key: IGNITE-20105
> URL: https://issues.apache.org/jira/browse/IGNITE-20105
> Project: Ignite
>  Issue Type: Bug
>Reporter: Kirill Tkalenko
>Assignee: Andrey Mashenkov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> It was discovered that there is a bug in 
> *org.apache.ignite.internal.catalog.commands.CatalogUtils#fromParams(org.apache.ignite.internal.catalog.commands.ColumnParams)*: 
> it uses the fields (precision, scale, length) with a default value of *0*, 
> whereas the constants from 
> *org.apache.ignite.internal.schema.TemporalNativeType* should be used.
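The fix direction can be sketched as follows; the constant name and value below are assumptions for illustration, not the real `TemporalNativeType` constants: an unset precision/scale/length should fall back to a type-specific default rather than 0.

```java
// Hypothetical sketch: DEFAULT_TIME_PRECISION is an assumed value, not the
// actual constant from org.apache.ignite.internal.schema.TemporalNativeType.
class ColumnDefaultsSketch {
    static final int DEFAULT_TIME_PRECISION = 6; // assumption for illustration

    /** Falls back to the type default when precision was not specified. */
    static int effectivePrecision(Integer declaredPrecision) {
        return declaredPrecision != null ? declaredPrecision : DEFAULT_TIME_PRECISION;
    }
}
```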



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20148) Explicit writeIntent cleanup on primary replica

2023-08-03 Thread Denis Chudov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17750866#comment-17750866
 ] 

Denis Chudov commented on IGNITE-20148:
---

[~alapin] lgtm.

> Explicit writeIntent cleanup on primary replica
> ---
>
> Key: IGNITE-20148
> URL: https://issues.apache.org/jira/browse/IGNITE-20148
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Assignee: Alexander Lapin
>Priority: Major
>  Labels: ignite-3, transactions
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> Currently, locks are released on primary when cleanup is replicated over 
> majority
> {code:java}
> private CompletableFuture 
> processTxCleanupAction(TxCleanupReplicaRequest request) {
> ...
> return raftClient
> .run(txCleanupCmd)
> .thenCompose(ignored -> 
> allOffFuturesExceptionIgnored(txReadFutures, request)
> .thenRun(() -> releaseTxLocks(request.txId())));
> ...
> } {code}
> That is actually incorrect, because the primary may not be part of the 
> majority, meaning that we would release the locks while still having a 
> writeIntent locally. Generally speaking, this should be resolved by 
> implementing writeIntent resolution for RW transactions; however, that ticket 
> is not implemented yet. In any case, it is worth cleaning up writeIntents on 
> the primary replica explicitly, for performance, to eliminate excessive 
> writeIntent resolutions.
> h3. Definition of Done
>  * Explicit writeIntent cleanup on primary replica prior to locks release is 
> implemented.
>  
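The proposed ordering can be sketched as below; all names are hypothetical stand-ins for the replica-side code. The point is that local writeIntents are cleaned up explicitly before the locks are released:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch of the proposed ordering; the real replica code differs.
class CleanupOrderSketch {
    final List<String> events = new ArrayList<>();

    CompletableFuture<Void> replicateCleanup() {
        events.add("replicate");
        return CompletableFuture.completedFuture(null);
    }

    CompletableFuture<Void> cleanupLocalWriteIntents() {
        events.add("local-write-intent-cleanup");
        return CompletableFuture.completedFuture(null);
    }

    void releaseTxLocks() {
        events.add("release-locks");
    }

    /** Replicate, then clean up local writeIntents, and only then release locks. */
    CompletableFuture<Void> processTxCleanup() {
        return replicateCleanup()
                .thenCompose(ignored -> cleanupLocalWriteIntents())
                .thenRun(this::releaseTxLocks);
    }
}
```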



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20161) Fix NPE in AppendEntriesRequestProcessor

2023-08-03 Thread Roman Puchkovskiy (Jira)
Roman Puchkovskiy created IGNITE-20161:
--

 Summary: Fix NPE in AppendEntriesRequestProcessor
 Key: IGNITE-20161
 URL: https://issues.apache.org/jira/browse/IGNITE-20161
 Project: Ignite
  Issue Type: Bug
Reporter: Roman Puchkovskiy
Assignee: Roman Puchkovskiy
 Fix For: 3.0.0-beta2


https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_RunAllTests/7408964?hideProblemsFromDependencies=false=false=true=true+Inspection=true

2023-08-03 11:57:25:040 +0300 
[INFO][%int_tsdlttn_5005%JRaft-FSMCaller-Disruptor-_stripe_0-0][StateMachineAdapter]
 onStartFollowing: LeaderChangeContext [leaderId=int_tsdlttn_5003, term=2, 
status=Status[ENEWLEADER<10011>: Follower receives message from new leader with 
the same term.]].
  2023-08-03 11:57:25:040 +0300 
[ERROR][%int_tsdlttn_5004%MessagingService-inbound--0][DefaultMessagingService] 
onMessage() failed while processing InvokeRequestImpl [correlationId=4, 
message=AppendEntriesRequestImpl [committedIndex=0, 
data=org.apache.ignite.raft.jraft.util.ByteString@1, entriesList=null, 
groupId=unitest, peerId=int_tsdlttn_5004, prevLogIndex=1, prevLogTerm=1, 
serverId=int_tsdlttn_5003, term=2, timestampLong=110824852359479296]] from 
int_tsdlttn_5003
  java.lang.NullPointerException
at 
org.apache.ignite.raft.jraft.rpc.impl.core.AppendEntriesRequestProcessor.getOrCreatePeerRequestContext(AppendEntriesRequestProcessor.java:351)
at 
org.apache.ignite.raft.jraft.rpc.impl.core.AppendEntriesRequestProcessor$PeerExecutorSelector.select(AppendEntriesRequestProcessor.java:72)
at 
org.apache.ignite.raft.jraft.rpc.impl.IgniteRpcServer$RpcMessageHandler.onReceived(IgniteRpcServer.java:182)
at 
org.apache.ignite.network.DefaultMessagingService.onMessage(DefaultMessagingService.java:375)
at 
org.apache.ignite.network.DefaultMessagingService.lambda$onMessage$4(DefaultMessagingService.java:335)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20142) Introduce changes for JDK17 tests run

2023-08-03 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17750826#comment-17750826
 ] 

Ignite TC Bot commented on IGNITE-20142:


{panel:title=Branch: [pull/10873/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10873/head] Base: [master] : No new tests 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7282379buildTypeId=IgniteTests24Java8_RunAll]

> Introduce changes for JDK17 tests run
> -
>
> Key: IGNITE-20142
> URL: https://issues.apache.org/jira/browse/IGNITE-20142
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Petr Ivanov
>Assignee: Petr Ivanov
>Priority: Major
>  Labels: ise
> Fix For: 2.16
>
>
> Introduce several changes and extend current WAs for ability to run tests 
> under JDK17.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20160) NullPointerException in FSMCallerImpl.doCommitted

2023-08-03 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-20160:
---

 Summary: NullPointerException in FSMCallerImpl.doCommitted
 Key: IGNITE-20160
 URL: https://issues.apache.org/jira/browse/IGNITE-20160
 Project: Ignite
  Issue Type: Bug
Affects Versions: 3.0.0-beta1
Reporter: Pavel Tupitsyn
 Fix For: 3.0.0-beta2


{code}
java.lang.NullPointerException
at 
org.apache.ignite.raft.jraft.core.FSMCallerImpl.doCommitted(FSMCallerImpl.java:496)
at 
org.apache.ignite.raft.jraft.core.FSMCallerImpl.runApplyTask(FSMCallerImpl.java:448)
at 
org.apache.ignite.raft.jraft.core.FSMCallerImpl$ApplyTaskHandler.onEvent(FSMCallerImpl.java:136)
at 
org.apache.ignite.raft.jraft.core.FSMCallerImpl$ApplyTaskHandler.onEvent(FSMCallerImpl.java:130)
at 
org.apache.ignite.raft.jraft.disruptor.StripedDisruptor$StripeEntryHandler.onEvent(StripedDisruptor.java:226)
at 
org.apache.ignite.raft.jraft.disruptor.StripedDisruptor$StripeEntryHandler.onEvent(StripedDisruptor.java:191)
at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:137)
{code}

https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_IntegrationTests_ModuleRunnerSqlLogic/7410174?hideProblemsFromDependencies=false=false=true=true



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20160) NullPointerException in FSMCallerImpl.doCommitted

2023-08-03 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-20160:

Ignite Flags:   (was: Docs Required,Release Notes Required)

> NullPointerException in FSMCallerImpl.doCommitted
> -
>
> Key: IGNITE-20160
> URL: https://issues.apache.org/jira/browse/IGNITE-20160
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Priority: Blocker
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> {code}
> java.lang.NullPointerException
> at 
> org.apache.ignite.raft.jraft.core.FSMCallerImpl.doCommitted(FSMCallerImpl.java:496)
> at 
> org.apache.ignite.raft.jraft.core.FSMCallerImpl.runApplyTask(FSMCallerImpl.java:448)
> at 
> org.apache.ignite.raft.jraft.core.FSMCallerImpl$ApplyTaskHandler.onEvent(FSMCallerImpl.java:136)
> at 
> org.apache.ignite.raft.jraft.core.FSMCallerImpl$ApplyTaskHandler.onEvent(FSMCallerImpl.java:130)
> at 
> org.apache.ignite.raft.jraft.disruptor.StripedDisruptor$StripeEntryHandler.onEvent(StripedDisruptor.java:226)
> at 
> org.apache.ignite.raft.jraft.disruptor.StripedDisruptor$StripeEntryHandler.onEvent(StripedDisruptor.java:191)
> at 
> com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:137)
> {code}
> https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_IntegrationTests_ModuleRunnerSqlLogic/7410174?hideProblemsFromDependencies=false=false=true=true



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20159) Add README.md for Ignite 3

2023-08-03 Thread Ivan Zlenko (Jira)
Ivan Zlenko created IGNITE-20159:


 Summary: Add README.md for Ignite 3
 Key: IGNITE-20159
 URL: https://issues.apache.org/jira/browse/IGNITE-20159
 Project: Ignite
  Issue Type: Task
  Components: documentation
Reporter: Ivan Zlenko


Right now Ignite 3 lacks a proper README, but it is an important piece of 
documentation for contributors because it usually serves as a hub for all 
necessary reference documentation and guides. We should consider adding a 
proper README for the project, with links to other in-project documentation 
such as the contribution guides or devnotes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20094) IgniteTxManager initial cleanup

2023-08-03 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17750814#comment-17750814
 ] 

Ignite TC Bot commented on IGNITE-20094:


{panel:title=Branch: [pull/10864/head] Base: [master] : Possible Blockers 
(4)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}Cache (Deadlock Detection){color} [[tests 
4|https://ci2.ignite.apache.org/viewLog.html?buildId=7282522]]
* TxDeadlockDetectionTestSuite: 
TxDeadlockDetectionNoHangsTest.testNoHangsPessimisticDetectionEnabled - New 
test duration 121s is more than 1 minute
* TxDeadlockDetectionTestSuite: 
TxDeadlockDetectionNoHangsTest.testNoHangsOptimisticDetectionDisabled - New 
test duration 122s is more than 1 minute
* TxDeadlockDetectionTestSuite: 
TxDeadlockDetectionNoHangsTest.testNoHangsOptimisticDetectionEnabled - New test 
duration 121s is more than 1 minute
* TxDeadlockDetectionTestSuite: 
TxDeadlockDetectionNoHangsTest.testNoHangsPessimisticDetectionDisabled - New 
test duration 123s is more than 1 minute

{panel}
{panel:title=Branch: [pull/10864/head] Base: [master] : New Tests 
(4)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}Cache (Deadlock Detection){color} [[tests 
4|https://ci2.ignite.apache.org/viewLog.html?buildId=7282522]]
* {color:#013220}TxDeadlockDetectionTestSuite: 
TxDeadlockDetectionNoHangsTest.testNoHangsPessimisticDetectionEnabled - 
PASSED{color}
* {color:#013220}TxDeadlockDetectionTestSuite: 
TxDeadlockDetectionNoHangsTest.testNoHangsOptimisticDetectionDisabled - 
PASSED{color}
* {color:#013220}TxDeadlockDetectionTestSuite: 
TxDeadlockDetectionNoHangsTest.testNoHangsOptimisticDetectionEnabled - 
PASSED{color}
* {color:#013220}TxDeadlockDetectionTestSuite: 
TxDeadlockDetectionNoHangsTest.testNoHangsPessimisticDetectionDisabled - 
PASSED{color}

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7279392buildTypeId=IgniteTests24Java8_RunAll]

> IgniteTxManager initial cleanup
> ---
>
> Key: IGNITE-20094
> URL: https://issues.apache.org/jira/browse/IGNITE-20094
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
> Fix For: 2.16
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20158) CLI: cluster/node config update is not user friendly

2023-08-03 Thread Aleksandr (Jira)
Aleksandr created IGNITE-20158:
--

 Summary: CLI: cluster/node config update is not user friendly
 Key: IGNITE-20158
 URL: https://issues.apache.org/jira/browse/IGNITE-20158
 Project: Ignite
  Issue Type: Task
Reporter: Aleksandr


The configuration update process can be painful because of the way the CLI 
parses the command line. For example:

{code:java}
[defaultNode]> cluster config update -u http://localhost:10300 
"{aipersist.regions: [{name: persistent_region,size: 25600}],aimem.regions: [{name: btree_volatile_region,maxSize: 25600}]}"
IGN-CMN-65535 Trace ID: 5430a4a7-b24d-4861-89aa-fdb84a17b199
com.typesafe.config.ConfigException$Parse: String: 1: Key '"{aipersist.regions: 
[{name: persistent_region,size: 25600}],aimem.regions: [{name: 
btree_volatile_region,maxSize: 25600}]}"' may not be followed by token: end 
of file
{code}

There is no way to understand what is going wrong.

I suggest improving the error text and showing a correct example of the command.

We also have to investigate the root cause of the issue and create a follow-up 
ticket to fix the way the CLI parses the command line; I would expect the 
example command above to be valid.
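One likely contributor to the confusion (an assumption here, not a confirmed root cause) is shell word-splitting: unless the whole HOCON blob is passed as a single quoted word, the CLI receives it as several arguments. A quick illustration, which does not invoke the real Ignite CLI:

```shell
#!/bin/sh
# Illustration only: shows how quoting changes the number of arguments
# a command receives.
cfg='{aipersist.regions: [{name: persistent_region, size: 25600}]}'

count_args() { echo "$#"; }

unquoted=$(count_args $cfg)    # word-splitting breaks the blob apart
quoted=$(count_args "$cfg")    # one single argument

echo "unquoted=$unquoted quoted=$quoted"
```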



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20157) Introduce LockTimeoutException

2023-08-03 Thread Denis Chudov (Jira)
Denis Chudov created IGNITE-20157:
-

 Summary: Introduce LockTimeoutException
 Key: IGNITE-20157
 URL: https://issues.apache.org/jira/browse/IGNITE-20157
 Project: Ignite
  Issue Type: Improvement
Reporter: Denis Chudov


*Motivation*

Currently we have lock timeouts only for specific implementations of 
DeadlockPreventionPolicy. At the same time, we have transaction request 
timeouts, and it makes no sense for such requests to wait to acquire locks 
longer than the request timeout.

*Definition of done*

The future returned by LockManager#acquire should be completed exceptionally if 
the lock has not been acquired within a given time interval (the lock 
acquisition timeout).

*Implementation notes*

This exception (or at least its message) should differ from the exception 
thrown because of a deadlock prevention policy with a timeout.
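A minimal sketch of the definition of done, assuming the acquisition result is a CompletableFuture (the real LockManager#acquire signature may differ). JDK 9+ `orTimeout` completes the future exceptionally with a TimeoutException once the deadline passes:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: bounds lock acquisition with a timeout so the returned
// future completes exceptionally instead of waiting forever.
class LockTimeoutSketch {
    static <T> CompletableFuture<T> acquireWithTimeout(CompletableFuture<T> acquireFuture, long timeoutMillis) {
        // A real implementation should complete with a dedicated
        // LockTimeoutException so callers can tell it apart from the
        // deadlock-prevention-policy timeout.
        return acquireFuture.orTimeout(timeoutMillis, TimeUnit.MILLISECONDS);
    }
}
```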



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20156) Fix public documentation for running Ignite using Docker

2023-08-03 Thread Ivan Zlenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Zlenko updated IGNITE-20156:
-
Component/s: documentation

> Fix public documentation for running Ignite using Docker
> 
>
> Key: IGNITE-20156
> URL: https://issues.apache.org/jira/browse/IGNITE-20156
> Project: Ignite
>  Issue Type: Task
>  Components: documentation
>Reporter: Ivan Zlenko
>Priority: Major
>  Labels: ignite-3
>
> The current documentation in the "Installing Using Docker" chapter has several 
> issues that could prevent users from successfully running Ignite in a Docker 
> environment. 
> 1. The example docker-compose file is incorrect. The correct one is: 
> {code:yaml}
> # Licensed to the Apache Software Foundation (ASF) under one or more
> # contributor license agreements.  See the NOTICE file distributed with
> # this work for additional information regarding copyright ownership.
> # The ASF licenses this file to You under the Apache License, Version 2.0
> # (the "License"); you may not use this file except in compliance with
> # the License.  You may obtain a copy of the License at
> #
> #  http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> version: "3.9"
> name: ignite3
> x-ignite-def: &ignite-def
>   image: apacheignite/ignite3:${IGNITE3_VERSION:-latest}
>   volumes:
> - ./cluster.conf:/opt/ignite/etc/ignite-config.conf
> services:
>   node1:
> << : *ignite-def
> command: --node-name node1
> ports:
>   - 10300:10300
>   - 10800:10800
>   node2:
> << : *ignite-def
> command: --node-name node2
> ports:
>   - 10301:10300
>   - 10801:10800
>   node3:
> << : *ignite-def
> command: --node-name node3
> ports:
>   - 10302:10300
>   - 10802:10800
> {code}
> 2. The example command for a single-node configuration is incorrect. The 
> correct one is: 
> {code}
> docker run -it --rm -p 10300:10300 -p 10800:10800 apacheignite/ignite3
> {code}
> 3. It may also be worth using the steps from DEVNOTES.md to show how to run 
> the CLI using Docker as well. 
> {code}
> docker compose -f packaging/docker/docker-compose.yml up -d
> docker run -it --rm --net ignite3_default apacheignite/ignite3 cli
> > connect http://node1:10300
> > cluster init --cluster-name cluster --meta-storage-node node1 
> > --meta-storage-node node2 --meta-storage-node node3
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20156) Fix public documentation for running Ignite using Docker

2023-08-03 Thread Ivan Zlenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Zlenko updated IGNITE-20156:
-
Labels: ignite-3  (was: )

> Fix public documentation for running Ignite using Docker
> 
>
> Key: IGNITE-20156
> URL: https://issues.apache.org/jira/browse/IGNITE-20156
> Project: Ignite
>  Issue Type: Task
>Reporter: Ivan Zlenko
>Priority: Major
>  Labels: ignite-3
>
> The current documentation in the "Installing Using Docker" chapter has several 
> issues that could prevent users from successfully running Ignite in a Docker 
> environment. 
> 1. The example docker-compose file is incorrect. The correct one is: 
> {code:yaml}
> # Licensed to the Apache Software Foundation (ASF) under one or more
> # contributor license agreements.  See the NOTICE file distributed with
> # this work for additional information regarding copyright ownership.
> # The ASF licenses this file to You under the Apache License, Version 2.0
> # (the "License"); you may not use this file except in compliance with
> # the License.  You may obtain a copy of the License at
> #
> #  http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> version: "3.9"
> name: ignite3
> x-ignite-def: &ignite-def
>   image: apacheignite/ignite3:${IGNITE3_VERSION:-latest}
>   volumes:
> - ./cluster.conf:/opt/ignite/etc/ignite-config.conf
> services:
>   node1:
> << : *ignite-def
> command: --node-name node1
> ports:
>   - 10300:10300
>   - 10800:10800
>   node2:
> << : *ignite-def
> command: --node-name node2
> ports:
>   - 10301:10300
>   - 10801:10800
>   node3:
> << : *ignite-def
> command: --node-name node3
> ports:
>   - 10302:10300
>   - 10802:10800
> {code}
> 2. The example command for a single-node configuration is incorrect. The 
> correct one is: 
> {code}
> docker run -it --rm -p 10300:10300 -p 10800:10800 apacheignite/ignite3
> {code}
> 3. It may also be worth using the steps from DEVNOTES.md to show how to run 
> the CLI using Docker as well. 
> {code}
> docker compose -f packaging/docker/docker-compose.yml up -d
> docker run -it --rm --net ignite3_default apacheignite/ignite3 cli
> > connect http://node1:10300
> > cluster init --cluster-name cluster --meta-storage-node node1 
> > --meta-storage-node node2 --meta-storage-node node3
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20148) Explicit writeIntent cleanup on primary replica

2023-08-03 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-20148:
-
Description: 
h3. Motivation

Currently, locks are released on primary when cleanup is replicated over 
majority
{code:java}
private CompletableFuture processTxCleanupAction(TxCleanupReplicaRequest 
request) {
...
return raftClient
.run(txCleanupCmd)
.thenCompose(ignored -> 
allOffFuturesExceptionIgnored(txReadFutures, request)
.thenRun(() -> releaseTxLocks(request.txId())));
...
} {code}
That is actually incorrect, because the primary may not be part of the 
majority, meaning that we would release the locks while still having a 
writeIntent locally. Generally speaking, this should be resolved by 
implementing writeIntent resolution for RW transactions; however, that ticket 
is not implemented yet. In any case, it is worth cleaning up writeIntents on 
the primary replica explicitly, for performance, to eliminate excessive 
writeIntent resolutions.
h3. Definition of Done
 * Explicit writeIntent cleanup on primary replica prior to locks release is 
implemented.

 

  was:
h3. Motivation

Currently, locks are released on primary when cleanup is replicated over 
majority
{code:java}
private CompletableFuture processTxCleanupAction(TxCleanupReplicaRequest 
request) {
...
return raftClient
.run(txCleanupCmd)
.thenCompose(ignored -> 
allOffFuturesExceptionIgnored(txReadFutures, request)
.thenRun(() -> releaseTxLocks(request.txId())));
...
} {code}
That is actually incorrect, because the primary may not be part of the 
majority, meaning that we would release the locks while still having a 
writeIntent locally. Generally speaking, this should be resolved by 
implementing [writeIntent resolution for RW 
transactions|https://issues.apache.org/jira/browse/IGNITE-19570]; however, 
that ticket is not implemented yet. In any case, it is worth cleaning up 
writeIntents on the primary replica explicitly, for performance, to eliminate 
excessive writeIntent resolutions.
h3. Definition of Done
 * Explicit writeIntent cleanup on primary replica prior to locks release is 
implemented.

 


> Explicit writeIntent cleanup on primary replica
> ---
>
> Key: IGNITE-20148
> URL: https://issues.apache.org/jira/browse/IGNITE-20148
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Assignee: Alexander Lapin
>Priority: Major
>  Labels: ignite-3, transactions
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> Currently, locks are released on primary when cleanup is replicated over 
> majority
> {code:java}
> private CompletableFuture 
> processTxCleanupAction(TxCleanupReplicaRequest request) {
> ...
> return raftClient
> .run(txCleanupCmd)
> .thenCompose(ignored -> 
> allOffFuturesExceptionIgnored(txReadFutures, request)
> .thenRun(() -> releaseTxLocks(request.txId())));
> ...
> } {code}
> That is actually incorrect, because the primary may not be part of the 
> majority, meaning that we would release the locks while still having a 
> writeIntent locally. Generally speaking, this should be resolved by 
> implementing writeIntent resolution for RW transactions; however, that ticket 
> is not implemented yet. In any case, it is worth cleaning up writeIntents on 
> the primary replica explicitly, for performance, to eliminate excessive 
> writeIntent resolutions.
> h3. Definition of Done
>  * Explicit writeIntent cleanup on primary replica prior to locks release is 
> implemented.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20156) Fix public documentation for running Ignite using Docker

2023-08-03 Thread Ivan Zlenko (Jira)
Ivan Zlenko created IGNITE-20156:


 Summary: Fix public documentation for running Ignite using Docker
 Key: IGNITE-20156
 URL: https://issues.apache.org/jira/browse/IGNITE-20156
 Project: Ignite
  Issue Type: Task
Reporter: Ivan Zlenko


The current documentation in the "Installing Using Docker" chapter has several 
issues that could prevent users from successfully running Ignite in a Docker 
environment. 
1. The example docker-compose file is incorrect. The correct one is: 
{code:yaml}
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

version: "3.9"

name: ignite3

x-ignite-def: &ignite-def
  image: apacheignite/ignite3:${IGNITE3_VERSION:-latest}
  volumes:
- ./cluster.conf:/opt/ignite/etc/ignite-config.conf

services:
  node1:
<< : *ignite-def
command: --node-name node1
ports:
  - 10300:10300
  - 10800:10800
  node2:
<< : *ignite-def
command: --node-name node2
ports:
  - 10301:10300
  - 10801:10800
  node3:
<< : *ignite-def
command: --node-name node3
ports:
  - 10302:10300
  - 10802:10800
{code}

2. The example command for a single-node configuration is incorrect. The 
correct one is: 
{code}
docker run -it --rm -p 10300:10300 -p 10800:10800 apacheignite/ignite3
{code}

3. It may also be worth using the steps from DEVNOTES.md to show how to run the 
CLI using Docker as well. 
{code}
docker compose -f packaging/docker/docker-compose.yml up -d
docker run -it --rm --net ignite3_default apacheignite/ignite3 cli
> connect http://node1:10300
> cluster init --cluster-name cluster --meta-storage-node node1 
> --meta-storage-node node2 --meta-storage-node node3
{code}




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18559) Sql. The least restrictive type between VARCHAR and DECIMAL is DECIMAL(precision=32767, scale=16383)

2023-08-03 Thread Evgeny Stanilovsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgeny Stanilovsky updated IGNITE-18559:

Ignite Flags:   (was: Docs Required,Release Notes Required)

> Sql. The least restrictive type between VARCHAR and DECIMAL is 
> DECIMAL(precision=32767, scale=16383)
> 
>
> Key: IGNITE-18559
> URL: https://issues.apache.org/jira/browse/IGNITE-18559
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
> Fix For: 3.0.0-beta1
>
>
> At the moment the least restrictive type between VARCHAR and DECIMAL is 
> DECIMAL(precision=32767, scale=16383). See TypeCoercionTest 
> testVarCharToNumeric.
> Investigate why that happens and whether it is a problem or not.
> Test query:
> {code:java}
> SELECT NULLIF(12.2, 'b') -- Should fail since types do not match {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20148) Explicit writeIntent cleanup on primary replica

2023-08-03 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin reassigned IGNITE-20148:


Assignee: Alexander Lapin

> Explicit writeIntent cleanup on primary replica
> ---
>
> Key: IGNITE-20148
> URL: https://issues.apache.org/jira/browse/IGNITE-20148
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Assignee: Alexander Lapin
>Priority: Major
>  Labels: ignite-3, transactions
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> Currently, locks are released on primary when cleanup is replicated over 
> majority
> {code:java}
> private CompletableFuture 
> processTxCleanupAction(TxCleanupReplicaRequest request) {
> ...
> return raftClient
> .run(txCleanupCmd)
> .thenCompose(ignored -> 
> allOffFuturesExceptionIgnored(txReadFutures, request)
> .thenRun(() -> releaseTxLocks(request.txId())));
> ...
> } {code}
> That is actually incorrect, because the primary may not be part of the 
> majority, meaning that we would release the locks while still having a 
> writeIntent locally. Generally speaking, this should be resolved by 
> implementing [writeIntent resolution for RW 
> transactions|https://issues.apache.org/jira/browse/IGNITE-19570]; however, 
> that ticket is not implemented yet. In any case, it is worth cleaning up 
> writeIntents on the primary replica explicitly, for performance, to eliminate 
> excessive writeIntent resolutions.
> h3. Definition of Done
>  * Explicit writeIntent cleanup on primary replica prior to locks release is 
> implemented.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20150) JDBC port not exposed in docker-compose.yml in Ignite 3

2023-08-03 Thread Aleksandr (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr reassigned IGNITE-20150:
--

Assignee: Aleksandr  (was: Ivan Zlenko)

> JDBC port not exposed in docker-compose.yml in Ignite 3
> ---
>
> Key: IGNITE-20150
> URL: https://issues.apache.org/jira/browse/IGNITE-20150
> Project: Ignite
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-beta2
>Reporter: Ivan Zlenko
>Assignee: Aleksandr
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Ignite has an option to run a cluster inside Docker containers. To run several 
> nodes, we can use a Docker Compose file: a pre-defined docker-compose.yml 
> exists in the repo, and an example can be found in the documentation. However, 
> both the file from the repo and the docs contain one simple error: the JDBC 
> port is not exposed. So as soon as someone tries to enter SQL mode inside the 
> CLI, the following error is received: 
> {code}
> [node1]> sql
> 196609 Trace ID: 2c6f842d-2d08-4b51-b1cf-307e664dc9ff
> Connection failed
> Client failed to connect: Connection refused: localhost/127.0.0.1:10800
> {code}
> Adding port 10800 to the docker-compose file fixes the problem. 
> On a side note: I'm not sure whether it is necessary to expose the ScaleCube 
> ports externally. As far as I understand, they exist only for internal 
> communication between nodes, and no one should connect to those ports 
> externally. 
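For illustration, the fix amounts to one extra port mapping per node in docker-compose.yml. The service and image names below are placeholders, not necessarily those used in the repo; 10800 is the default client/JDBC port from the error above:

```yaml
services:
  node1:
    image: apacheignite/ignite3    # placeholder image name
    ports:
      - "10300:10300"              # placeholder mapping for the REST port
      - "10800:10800"              # client/JDBC port -- the missing mapping
```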



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20150) JDBC port not exposed in docker-compose.yml in Ignite 3

2023-08-03 Thread Aleksandr (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr reassigned IGNITE-20150:
--

Assignee: (was: Aleksandr)

> JDBC port not exposed in docker-compose.yml in Ignite 3
> ---
>
> Key: IGNITE-20150
> URL: https://issues.apache.org/jira/browse/IGNITE-20150
> Project: Ignite
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-beta2
>Reporter: Ivan Zlenko
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Ignite has an option to run a cluster inside Docker containers. To run several 
> nodes, we can use a Docker Compose file: a pre-defined docker-compose.yml 
> exists in the repo, and an example can be found in the documentation. However, 
> both the file from the repo and the docs contain one simple error: the JDBC 
> port is not exposed. So as soon as someone tries to enter SQL mode inside the 
> CLI, the following error is received: 
> {code}
> [node1]> sql
> 196609 Trace ID: 2c6f842d-2d08-4b51-b1cf-307e664dc9ff
> Connection failed
> Client failed to connect: Connection refused: localhost/127.0.0.1:10800
> {code}
> Adding port 10800 to the docker-compose file fixes the problem. 
> On a side note: I'm not sure whether it is necessary to expose the ScaleCube 
> ports externally. As far as I understand, they exist only for internal 
> communication between nodes, and no one should connect to those ports 
> externally. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20150) JDBC port not exposed in docker-compose.yml in Ignite 3

2023-08-03 Thread Aleksandr (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17750796#comment-17750796
 ] 

Aleksandr commented on IGNITE-20150:


[~ivan.zlenko] could you create a follow-up ticket for the documentation?

> JDBC port not exposed in docker-compose.yml in Ignite 3
> ---
>
> Key: IGNITE-20150
> URL: https://issues.apache.org/jira/browse/IGNITE-20150
> Project: Ignite
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-beta2
>Reporter: Ivan Zlenko
>Assignee: Ivan Zlenko
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Ignite has an option to run a cluster inside Docker containers. To run several 
> nodes, we can use a Docker Compose file: a pre-defined docker-compose.yml 
> exists in the repo, and an example can be found in the documentation. However, 
> both the file from the repo and the docs contain one simple error: the JDBC 
> port is not exposed. So as soon as someone tries to enter SQL mode inside the 
> CLI, the following error is received: 
> {code}
> [node1]> sql
> 196609 Trace ID: 2c6f842d-2d08-4b51-b1cf-307e664dc9ff
> Connection failed
> Client failed to connect: Connection refused: localhost/127.0.0.1:10800
> {code}
> Adding port 10800 to the docker-compose file fixes the problem. 
> On a side note: I'm not sure whether it is necessary to expose the ScaleCube 
> ports externally. As far as I understand, they exist only for internal 
> communication between nodes, and no one should connect to those ports 
> externally. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20150) JDBC port not exposed in docker-compose.yml in Ignite 3

2023-08-03 Thread Aleksandr (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr reassigned IGNITE-20150:
--

Assignee: Ivan Zlenko

> JDBC port not exposed in docker-compose.yml in Ignite 3
> ---
>
> Key: IGNITE-20150
> URL: https://issues.apache.org/jira/browse/IGNITE-20150
> Project: Ignite
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-beta2
>Reporter: Ivan Zlenko
>Assignee: Ivan Zlenko
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Ignite has an option to run a cluster inside Docker containers. To run several 
> nodes, we can use a Docker Compose file: a pre-defined docker-compose.yml 
> exists in the repo, and an example can be found in the documentation. However, 
> both the file from the repo and the docs contain one simple error: the JDBC 
> port is not exposed. So as soon as someone tries to enter SQL mode inside the 
> CLI, the following error is received: 
> {code}
> [node1]> sql
> 196609 Trace ID: 2c6f842d-2d08-4b51-b1cf-307e664dc9ff
> Connection failed
> Client failed to connect: Connection refused: localhost/127.0.0.1:10800
> {code}
> Adding port 10800 to the docker-compose file fixes the problem. 
> On a side note: I'm not sure whether it is necessary to expose the ScaleCube 
> ports externally. As far as I understand, they exist only for internal 
> communication between nodes, and no one should connect to those ports 
> externally. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20155) Java client connector skips NOT NULL and other column checks

2023-08-03 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-20155:

Description: 
See *TupleMarshallerImpl#marshal* and *#binaryTupleRebuildRequired*: we pass 
BinaryTuple as is from the client, bypassing NOT NULL and other constraint 
checks.

We should validate the data from the client without reassembling the tuple.

  was:See *TupleMarshallerImpl#marshal* and *#binaryTupleRebuildRequired*: we 
pass BinaryTuple as is from the client, bypassing NOT NULL and other constraint 
checks.


> Java client connector skips NOT NULL and other column checks
> 
>
> Key: IGNITE-20155
> URL: https://issues.apache.org/jira/browse/IGNITE-20155
> Project: Ignite
>  Issue Type: Bug
>  Components: thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Blocker
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> See *TupleMarshallerImpl#marshal* and *#binaryTupleRebuildRequired*: we pass 
> BinaryTuple as is from the client, bypassing NOT NULL and other constraint 
> checks.
> We should validate the data from the client without reassembling the tuple.
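The direction suggested above can be sketched as a validation pass that inspects the client-supplied tuple in place instead of rebuilding it. The types below are simplified stand-ins for illustration, not Ignite's actual schema or BinaryTuple classes:

```java
import java.util.List;

public class NotNullCheckSketch {
    // Simplified stand-in for a schema column descriptor.
    record Column(String name, boolean nullable) {}

    // Validate NOT NULL constraints by reading values in place: no new tuple
    // is assembled, we only walk the existing one against the schema.
    static void validateNotNull(List<Column> schema, Object[] tuple) {
        for (int i = 0; i < schema.size(); i++) {
            Column col = schema.get(i);
            if (!col.nullable() && tuple[i] == null) {
                throw new IllegalArgumentException(
                        "Column '" + col.name() + "' does not allow NULLs");
            }
        }
    }

    public static void main(String[] args) {
        List<Column> schema = List.of(new Column("ID", false), new Column("NAME", true));
        validateNotNull(schema, new Object[] {1, null});        // passes: NAME is nullable
        try {
            validateNotNull(schema, new Object[] {null, "ok"}); // violates NOT NULL on ID
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```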



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20148) Explicit writeIntent cleanup on primary replica

2023-08-03 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-20148:
-
Description: 
h3. Motivation

Currently, locks are released on primary when cleanup is replicated over 
majority
{code:java}
private CompletableFuture processTxCleanupAction(TxCleanupReplicaRequest request) {
    ...
    return raftClient
            .run(txCleanupCmd)
            .thenCompose(ignored -> allOffFuturesExceptionIgnored(txReadFutures, request)
                    .thenRun(() -> releaseTxLocks(request.txId())));
    ...
} {code}
That is actually incorrect: the primary may not be part of the majority, meaning 
that we would release the locks while still having a writeIntent locally. 
Generally speaking, this should be resolved by implementing [writeIntent 
resolution for RW 
transactions|https://issues.apache.org/jira/browse/IGNITE-19570]. However, that 
ticket is not implemented yet. In any case, it is worth cleaning up writeIntents 
on the primary replica explicitly, for performance, to eliminate excessive 
writeIntent resolutions.
h3. Definition of Done
 * Explicit writeIntent cleanup on primary replica prior to locks release is 
implemented.

 

  was:
h3. Motivation

Currently, locks are released on primary when cleanup is replicated over 
majority
{code:java}
private CompletableFuture processTxCleanupAction(TxCleanupReplicaRequest 
request) {
...
return raftClient
.run(txCleanupCmd)
.thenCompose(ignored -> 
allOffFuturesExceptionIgnored(txReadFutures, request)
.thenRun(() -> releaseTxLocks(request.txId(;
...
} {code}
That is actually incorrect, because it's possible that primary won't be a part 
of majority, meaning that we will release lock still having writeIntent 
locally. Generally speaking that should be resolved by implementing writeIntent 
resolution for RW transactions However given ticket is not yet implemented. 
Anyway, it is worth to clean up writeIntents on primary replica explicitly for 
a sense of performance in order to eliminate excessive writeIntent resolutions.
h3. Definition of Done
 * Explicit writeIntent cleanup on primary replica prior to locks release is 
implemented.

 


> Explicit writeIntent cleanup on primary replica
> ---
>
> Key: IGNITE-20148
> URL: https://issues.apache.org/jira/browse/IGNITE-20148
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Priority: Major
>  Labels: ignite-3, transactions
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> Currently, locks are released on primary when cleanup is replicated over 
> majority
> {code:java}
> private CompletableFuture 
> processTxCleanupAction(TxCleanupReplicaRequest request) {
> ...
> return raftClient
> .run(txCleanupCmd)
> .thenCompose(ignored -> 
> allOffFuturesExceptionIgnored(txReadFutures, request)
> .thenRun(() -> releaseTxLocks(request.txId(;
> ...
> } {code}
> That is actually incorrect, because it's possible that primary won't be a 
> part of majority, meaning that we will release lock still having writeIntent 
> locally. Generally speaking that should be resolved by implementing 
> [writeIntent resolution for RW 
> transactions|https://issues.apache.org/jira/browse/IGNITE-19570]. However 
> given ticket is not yet implemented. Anyway, it is worth to clean up 
> writeIntents on primary replica explicitly for a sense of performance in 
> order to eliminate excessive writeIntent resolutions.
> h3. Definition of Done
>  * Explicit writeIntent cleanup on primary replica prior to locks release is 
> implemented.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20154) Line separator in ODBC errors text

2023-08-03 Thread Nikita Sivkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikita Sivkov updated IGNITE-20154:
---
Description: 
*Issue:*
Getting line separator symbols "\r\n" in ODBC error text.

*For example:*
{noformat}
ALTER TABLE PUBLIC.CAR DROP COLUMN IF EXISTS NAME
('HY000', '[HY000] org.apache.ignite.sql.SqlException: Failed to parse query: 
Encountered "" at line 1, column 36.\r\nWas expecting one of:\r\n     (262147, 
3e945522-7d71-48c0-a8cf-675bbc078bb0) (0) (SQLExecDirectW)'){noformat}
*Reproducer:*
 # Start Ignite node.
 # Execute command:
_python3 odbc_client.py -o smoke0.sql.actual localhost:10800 smoke0.sql_

*Commit id:*
2655e406b06a2605c2d5ad9402e06d81c1a168ef

 

 

  was:
*Issue:*
Getting line separator symbols "\r\n" in ODBC error text.

*For example:*
{noformat}

{noformat}
*ALTER TABLE PUBLIC.CAR DROP COLUMN IF EXISTS NAME
('HY000', '[HY000] org.apache.ignite.sql.SqlException: Failed to parse query: 
Encountered "" at line 1, column 36.\r\nWas expecting one of:\r\n     (262147, 
3e945522-7d71-48c0-a8cf-675bbc078bb0) (0) (SQLExecDirectW)')*

 

*Reproducer:*
 # Start Ignite node.
 # Execute command:
`python3 odbc_client.py -o smoke0.sql.actual localhost:10800 smoke0.sql`

*Commit id:*
2655e406b06a2605c2d5ad9402e06d81c1a168ef

 

 


> Line separator in ODBC errors text
> --
>
> Key: IGNITE-20154
> URL: https://issues.apache.org/jira/browse/IGNITE-20154
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Affects Versions: 3.0
>Reporter: Nikita Sivkov
>Priority: Major
>  Labels: ignite-3, odbc
> Attachments: odbc_client.py, smoke0.sql
>
>
> *Issue:*
> Getting line separator symbols "\r\n" in ODBC error text.
> *For example:*
> {noformat}
> ALTER TABLE PUBLIC.CAR DROP COLUMN IF EXISTS NAME
> ('HY000', '[HY000] org.apache.ignite.sql.SqlException: Failed to parse query: 
> Encountered "" at line 1, column 36.\r\nWas expecting one of:\r\n     
> (262147, 3e945522-7d71-48c0-a8cf-675bbc078bb0) (0) 
> (SQLExecDirectW)'){noformat}
> *Reproducer:*
>  # Start Ignite node.
>  # Execute command:
> _python3 odbc_client.py -o smoke0.sql.actual localhost:10800 smoke0.sql_
> *Commit id:*
> 2655e406b06a2605c2d5ad9402e06d81c1a168ef
>  
>  
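One plausible mitigation, shown here purely as an illustration of the desired behavior (the real fix presumably belongs in the ODBC driver's diagnostic-message formatting, not in client scripts), is to collapse line separators in the message before reporting it:

```python
def flatten_diag(message: str) -> str:
    """Collapse CR/LF line separators in an ODBC diagnostic message into spaces."""
    # Normalize \r\n and bare \r to \n, then join non-empty lines with a space.
    normalized = message.replace("\r\n", "\n").replace("\r", "\n")
    return " ".join(part for part in normalized.split("\n") if part)

# Shortened version of the message from the reproducer above.
print(flatten_diag('Failed to parse query: Encountered "" at line 1, column 36.'
                   '\r\nWas expecting one of:\r\n...'))
# -> Failed to parse query: Encountered "" at line 1, column 36. Was expecting one of: ...
```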



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20148) Explicit writeIntent cleanup on primary replica

2023-08-03 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-20148:
-
Labels: ignite-3 transactions  (was: )

> Explicit writeIntent cleanup on primary replica
> ---
>
> Key: IGNITE-20148
> URL: https://issues.apache.org/jira/browse/IGNITE-20148
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Priority: Major
>  Labels: ignite-3, transactions
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> Currently, locks are released on primary when cleanup is replicated over 
> majority
> {code:java}
> private CompletableFuture 
> processTxCleanupAction(TxCleanupReplicaRequest request) {
> ...
> return raftClient
> .run(txCleanupCmd)
> .thenCompose(ignored -> 
> allOffFuturesExceptionIgnored(txReadFutures, request)
> .thenRun(() -> releaseTxLocks(request.txId(;
> ...
> } {code}
> That is actually incorrect, because it's possible that primary won't be a 
> part of majority, meaning that we will release lock still having writeIntent 
> locally. Generally speaking that should be resolved by implementing 
> writeIntent resolution for RW transactions However given ticket is not yet 
> implemented. Anyway, it is worth to clean up writeIntents on primary replica 
> explicitly for a sense of performance in order to eliminate excessive 
> writeIntent resolutions.
> h3. Definition of Done
>  * Explicit writeIntent cleanup on primary replica prior to locks release is 
> implemented.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20148) Explicit writeIntent cleanup on primary replica

2023-08-03 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-20148:
-
Description: 
h3. Motivation

Currently, locks are released on primary when cleanup is replicated over 
majority
{code:java}
private CompletableFuture processTxCleanupAction(TxCleanupReplicaRequest 
request) {
...
return raftClient
.run(txCleanupCmd)
.thenCompose(ignored -> 
allOffFuturesExceptionIgnored(txReadFutures, request)
.thenRun(() -> releaseTxLocks(request.txId(;
...
} {code}
That is actually incorrect, because it's possible that primary won't be a part 
of majority, meaning that we will release lock still having writeIntent 
locally. Generally speaking that should be resolved by implementing writeIntent 
resolution for RW transactions However given ticket is not yet implemented. 
Anyway, it is worth to clean up writeIntents on primary replica explicitly for 
a sense of performance in order to eliminate excessive writeIntent resolutions.
h3. Definition of Done
 * Explicit writeIntent cleanup on primary replica prior to locks release is 
implemented.

 

  was:
h3. Motivation

Currently, locks are released on primary when cleanup is replicated over 
majority
{code:java}
private CompletableFuture processTxCleanupAction(TxCleanupReplicaRequest 
request) {
...
return raftClient
.run(txCleanupCmd)
.thenCompose(ignored -> 
allOffFuturesExceptionIgnored(txReadFutures, request)
.thenRun(() -> releaseTxLocks(request.txId(;
...
} {code}
That is actually incorrect, because it's possible that primary won't be a part 
of majority, meaning that we will release lock still having writeIntent 
locally. Generally speaking that should be resolved by implementing writeIntent 
resolution for RW transactions However given ticket is not yet implemented. 
Anyway, it is worth to clean up writeIntents on primary replica explicitly for 
a sense of performance in order to eliminate excessive writeIntent resolutions.
h3. Definition of Done

 


> Explicit writeIntent cleanup on primary replica
> ---
>
> Key: IGNITE-20148
> URL: https://issues.apache.org/jira/browse/IGNITE-20148
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> Currently, locks are released on primary when cleanup is replicated over 
> majority
> {code:java}
> private CompletableFuture 
> processTxCleanupAction(TxCleanupReplicaRequest request) {
> ...
> return raftClient
> .run(txCleanupCmd)
> .thenCompose(ignored -> 
> allOffFuturesExceptionIgnored(txReadFutures, request)
> .thenRun(() -> releaseTxLocks(request.txId(;
> ...
> } {code}
> That is actually incorrect, because it's possible that primary won't be a 
> part of majority, meaning that we will release lock still having writeIntent 
> locally. Generally speaking that should be resolved by implementing 
> writeIntent resolution for RW transactions However given ticket is not yet 
> implemented. Anyway, it is worth to clean up writeIntents on primary replica 
> explicitly for a sense of performance in order to eliminate excessive 
> writeIntent resolutions.
> h3. Definition of Done
>  * Explicit writeIntent cleanup on primary replica prior to locks release is 
> implemented.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20148) Explicit writeIntent cleanup on primary replica

2023-08-03 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-20148:
-
Description: 
h3. Motivation

Currently, locks are released on primary when cleanup is replicated over 
majority
{code:java}
private CompletableFuture processTxCleanupAction(TxCleanupReplicaRequest 
request) {
...
return raftClient
.run(txCleanupCmd)
.thenCompose(ignored -> 
allOffFuturesExceptionIgnored(txReadFutures, request)
.thenRun(() -> releaseTxLocks(request.txId(;
...
} {code}
That is actually incorrect, because it's possible that primary won't be a part 
of majority, meaning that we will release lock still having writeIntent 
locally. Generally speaking that should be resolved by implementing writeIntent 
resolution for RW transactions However given ticket is not yet implemented. 
Anyway, it is worth to clean up writeIntents on primary replica explicitly for 
a sense of performance in order to eliminate excessive writeIntent resolutions.
h3. Definition of Done

 

  was:
h3. Motivation

Currently, locks are released on primary when cleanup is replicated over 
majority
{code:java}
private CompletableFuture processTxCleanupAction(TxCleanupReplicaRequest 
request) {
...
return raftClient
.run(txCleanupCmd)
.thenCompose(ignored -> 
allOffFuturesExceptionIgnored(txReadFutures, request)
.thenRun(() -> releaseTxLocks(request.txId(;
...
} {code}
That is actually incorrect, because it's possible that primary won't be a part 
of majority, meaning that we will release lock still having writeIntent 
locally. Generally speaking that should be resolved by implementing writeIntent 
resolution for RW transactions


> Explicit writeIntent cleanup on primary replica
> ---
>
> Key: IGNITE-20148
> URL: https://issues.apache.org/jira/browse/IGNITE-20148
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> Currently, locks are released on primary when cleanup is replicated over 
> majority
> {code:java}
> private CompletableFuture 
> processTxCleanupAction(TxCleanupReplicaRequest request) {
> ...
> return raftClient
> .run(txCleanupCmd)
> .thenCompose(ignored -> 
> allOffFuturesExceptionIgnored(txReadFutures, request)
> .thenRun(() -> releaseTxLocks(request.txId(;
> ...
> } {code}
> That is actually incorrect, because it's possible that primary won't be a 
> part of majority, meaning that we will release lock still having writeIntent 
> locally. Generally speaking that should be resolved by implementing 
> writeIntent resolution for RW transactions However given ticket is not yet 
> implemented. Anyway, it is worth to clean up writeIntents on primary replica 
> explicitly for a sense of performance in order to eliminate excessive 
> writeIntent resolutions.
> h3. Definition of Done
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20155) Java client connector skips NOT NULL and other column checks

2023-08-03 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-20155:

Ignite Flags:   (was: Docs Required,Release Notes Required)

> Java client connector skips NOT NULL and other column checks
> 
>
> Key: IGNITE-20155
> URL: https://issues.apache.org/jira/browse/IGNITE-20155
> Project: Ignite
>  Issue Type: Bug
>  Components: thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Blocker
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> See *TupleMarshallerImpl#marshal* and *#binaryTupleRebuildRequired*: we pass 
> BinaryTuple as is from the client, bypassing NOT NULL and other constraint 
> checks.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20155) Java client connector skips NOT NULL and other column checks

2023-08-03 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-20155:

Description: See *TupleMarshallerImpl#marshal* and 
*#binaryTupleRebuildRequired*: we pass BinaryTuple as is from the client, 
bypassing NOT NULL and other constraint checks.

> Java client connector skips NOT NULL and other column checks
> 
>
> Key: IGNITE-20155
> URL: https://issues.apache.org/jira/browse/IGNITE-20155
> Project: Ignite
>  Issue Type: Bug
>  Components: thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Blocker
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> See *TupleMarshallerImpl#marshal* and *#binaryTupleRebuildRequired*: we pass 
> BinaryTuple as is from the client, bypassing NOT NULL and other constraint 
> checks.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20155) Java client connector skips NOT NULL and other column checks

2023-08-03 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-20155:
---

 Summary: Java client connector skips NOT NULL and other column 
checks
 Key: IGNITE-20155
 URL: https://issues.apache.org/jira/browse/IGNITE-20155
 Project: Ignite
  Issue Type: Bug
  Components: thin client
Affects Versions: 3.0.0-beta1
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 3.0.0-beta2






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20148) Explicit writeIntent cleanup on primary replica

2023-08-03 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-20148:
-
Description: 
h3. Motivation

Currently, locks are released on primary when cleanup is replicated over 
majority
{code:java}
private CompletableFuture processTxCleanupAction(TxCleanupReplicaRequest 
request) {
...
return raftClient
.run(txCleanupCmd)
.thenCompose(ignored -> 
allOffFuturesExceptionIgnored(txReadFutures, request)
.thenRun(() -> releaseTxLocks(request.txId(;
...
} {code}
That is actually incorrect, because it's possible that primary won't be a part 
of majority, meaning that we will release lock still having writeIntent 
locally. Generally speaking that should be resolved by implementing writeIntent 
resolution for RW transactions

> Explicit writeIntent cleanup on primary replica
> ---
>
> Key: IGNITE-20148
> URL: https://issues.apache.org/jira/browse/IGNITE-20148
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> Currently, locks are released on primary when cleanup is replicated over 
> majority
> {code:java}
> private CompletableFuture 
> processTxCleanupAction(TxCleanupReplicaRequest request) {
> ...
> return raftClient
> .run(txCleanupCmd)
> .thenCompose(ignored -> 
> allOffFuturesExceptionIgnored(txReadFutures, request)
> .thenRun(() -> releaseTxLocks(request.txId(;
> ...
> } {code}
> That is actually incorrect, because it's possible that primary won't be a 
> part of majority, meaning that we will release lock still having writeIntent 
> locally. Generally speaking that should be resolved by implementing 
> writeIntent resolution for RW transactions



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20154) Line separator in ODBC errors text

2023-08-03 Thread Nikita Sivkov (Jira)
Nikita Sivkov created IGNITE-20154:
--

 Summary: Line separator in ODBC errors text
 Key: IGNITE-20154
 URL: https://issues.apache.org/jira/browse/IGNITE-20154
 Project: Ignite
  Issue Type: Bug
  Components: odbc
Affects Versions: 3.0
Reporter: Nikita Sivkov
 Attachments: odbc_client.py, smoke0.sql

*Issue:*
Getting line separator symbols "\r\n" in ODBC error text.

*For example:*
{noformat}
ALTER TABLE PUBLIC.CAR DROP COLUMN IF EXISTS NAME
('HY000', '[HY000] org.apache.ignite.sql.SqlException: Failed to parse query: 
Encountered "" at line 1, column 36.\r\nWas expecting one of:\r\n     (262147, 
3e945522-7d71-48c0-a8cf-675bbc078bb0) (0) (SQLExecDirectW)'){noformat}

 

*Reproducer:*
 # Start Ignite node.
 # Execute command:
`python3 odbc_client.py -o smoke0.sql.actual localhost:10800 smoke0.sql`

*Commit id:*
2655e406b06a2605c2d5ad9402e06d81c1a168ef

 

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20153) Prepare existing tests for the distributed zone to switch to the catalog

2023-08-03 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko reassigned IGNITE-20153:


Assignee: Kirill Tkalenko

> Prepare existing tests for the distributed zone to switch to the catalog
> 
>
> Key: IGNITE-20153
> URL: https://issues.apache.org/jira/browse/IGNITE-20153
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Assignee: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> When trying to switch the *DistributionZoneManager* to the catalog, tests were 
> found that would require a lot of work to migrate to the catalog.
> The tests also contain a lot of similar code related to the creation / 
> modification / deletion of zones; this ticket proposes to fix that.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20153) Prepare existing tests for the distributed zone to switch to the catalog

2023-08-03 Thread Kirill Tkalenko (Jira)
Kirill Tkalenko created IGNITE-20153:


 Summary: Prepare existing tests for the distributed zone to switch 
to the catalog
 Key: IGNITE-20153
 URL: https://issues.apache.org/jira/browse/IGNITE-20153
 Project: Ignite
  Issue Type: Improvement
Reporter: Kirill Tkalenko
 Fix For: 3.0.0-beta2


When trying to switch the *DistributionZoneManager* to the catalog, tests were 
found that would require a lot of work to migrate to the catalog.

The tests also contain a lot of similar code related to the creation / 
modification / deletion of zones; this ticket proposes to fix that.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20141) Transaction will never become finished on timeout when deadlock detection is disabled.

2023-08-03 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17750753#comment-17750753
 ] 

Ignite TC Bot commented on IGNITE-20141:


{panel:title=Branch: [pull/10872/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10872/head] Base: [master] : No new tests 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7280849&buildTypeId=IgniteTests24Java8_RunAll]

> Transaction will never become finished on timeout when deadlock detection is 
> disabled.
> --
>
> Key: IGNITE-20141
> URL: https://issues.apache.org/jira/browse/IGNITE-20141
> Project: Ignite
>  Issue Type: Bug
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
> Fix For: 2.16
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> TxRollbackOnTimeoutNoDeadlockDetectionTest has no failures on CI because Ignite 
> ignores the IGNITE_TX_DEADLOCK_DETECTION_MAX_ITERS property when it is set 
> after it has already been read by previous tests (it is cached in a static 
> field).
> However, when you run this test separately, it fails because the transaction 
> never becomes finished on timeout when deadlock detection is disabled.
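The static-field pitfall described above can be modeled in a few lines. The class and property names are stand-ins for illustration (the real property is IGNITE_TX_DEADLOCK_DETECTION_MAX_ITERS, read once by Ignite's internals):

```java
public class StaticCacheDemo {
    static class DeadlockDetectionConfig {
        // Read once when the class is initialized; later System.setProperty
        // calls are never observed again, because the value lives in a static field.
        static final int MAX_ITERS =
                Integer.getInteger("DEMO_DEADLOCK_DETECTION_MAX_ITERS", 1000);
    }

    public static void main(String[] args) {
        int first = DeadlockDetectionConfig.MAX_ITERS;  // triggers class init, reads default
        System.setProperty("DEMO_DEADLOCK_DETECTION_MAX_ITERS", "0");
        int second = DeadlockDetectionConfig.MAX_ITERS; // still the cached value
        System.out.println(first + " " + second);
    }
}
```

In the real suite, an earlier test initializes the class, so a later test that sets the property (as this one does) never sees its value take effect.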



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20152) PartitionAwarenessTest.startServer2 is flaky

2023-08-03 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-20152:

Description: 
https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_RunUnitTests/7409734?expandBuildProblemsSection=true=false=true=false

{code}
PartitionAwarenessTest STANDARD_ERROR
  2023-08-03 16:28:06:595 +0300 [INFO][Test worker][ClientHandlerModule] 
Thin client protocol started successfully [port=45259]
  org.apache.ignite.client.PartitionAwarenessTest.initializationError
org.apache.ignite.lang.IgniteException: IGN-NETWORK-2 
TraceId:1e574be3-d6b5-4394-91a5-78aaf61991bd Cannot start thin client connector 
endpoint. Port 45260 is in use.
org.apache.ignite.lang.IgniteException: IGN-NETWORK-2 
TraceId:1e574be3-d6b5-4394-91a5-78aaf61991bd Cannot start thin client connector 
endpoint. Port 45260 is in use.
  at 
app//org.apache.ignite.client.handler.ClientHandlerModule.startEndpoint(ClientHandlerModule.java:273)
  at 
app//org.apache.ignite.client.handler.ClientHandlerModule.start(ClientHandlerModule.java:179)
  at app//org.apache.ignite.client.TestServer.&lt;init&gt;(TestServer.java:187)
  at 
app//org.apache.ignite.client.PartitionAwarenessTest.beforeAll(PartitionAwarenessTest.java:86)
  at 
java.base@11.0.17/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
  at 
java.base@11.0.17/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at 
java.base@11.0.17/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.base@11.0.17/java.lang.reflect.Method.invoke(Method.java:566)
  at 
app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727)
  at 
app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
  at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
  at 
app//org.junit.jupiter.engine.extension.SameThreadTimeoutInvocation.proceed(SameThreadTimeoutInvocation.java:45)
  at 
app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
  at 
app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:128)
  at 
app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptBeforeAllMethod(TimeoutExtension.java:70)
  at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
  at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
  at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
  at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
  at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
  at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
  at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
  at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
  at 
app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeBeforeAllMethods$13(ClassBasedTestDescriptor.java:411)
  at 
app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
  at 
app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeBeforeAllMethods(ClassBasedTestDescriptor.java:409)
  at 
app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:215)
  at 
app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:84)
  at 
app//org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:148)
  at 
app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
  at 
app//org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
  at 
app//org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
  at 
app//org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
  at 
app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
  at 

[jira] [Updated] (IGNITE-20152) PartitionAwarenessTest.startServer2 is flaky

2023-08-03 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-20152:

Description: 
{code}
PartitionAwarenessTest STANDARD_ERROR
  2023-08-03 16:28:06:595 +0300 [INFO][Test worker][ClientHandlerModule] 
Thin client protocol started successfully [port=45259]
  org.apache.ignite.client.PartitionAwarenessTest.initializationError
org.apache.ignite.lang.IgniteException: IGN-NETWORK-2 
TraceId:1e574be3-d6b5-4394-91a5-78aaf61991bd Cannot start thin client connector 
endpoint. Port 45260 is in use.
org.apache.ignite.lang.IgniteException: IGN-NETWORK-2 
TraceId:1e574be3-d6b5-4394-91a5-78aaf61991bd Cannot start thin client connector 
endpoint. Port 45260 is in use.
  at 
app//org.apache.ignite.client.handler.ClientHandlerModule.startEndpoint(ClientHandlerModule.java:273)
  at 
app//org.apache.ignite.client.handler.ClientHandlerModule.start(ClientHandlerModule.java:179)
  at app//org.apache.ignite.client.TestServer.&lt;init&gt;(TestServer.java:187)
  at 
app//org.apache.ignite.client.PartitionAwarenessTest.beforeAll(PartitionAwarenessTest.java:86)
  at 
java.base@11.0.17/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
  at 
java.base@11.0.17/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at 
java.base@11.0.17/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.base@11.0.17/java.lang.reflect.Method.invoke(Method.java:566)
  at 
app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727)
  at 
app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
  at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
  at 
app//org.junit.jupiter.engine.extension.SameThreadTimeoutInvocation.proceed(SameThreadTimeoutInvocation.java:45)
  at 
app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
  at 
app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:128)
  at 
app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptBeforeAllMethod(TimeoutExtension.java:70)
  at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
  at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
  at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
  at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
  at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
  at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
  at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
  at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
  at 
app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeBeforeAllMethods$13(ClassBasedTestDescriptor.java:411)
  at 
app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
  at 
app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeBeforeAllMethods(ClassBasedTestDescriptor.java:409)
  at 
app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:215)
  at 
app//org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:84)
  at 
app//org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:148)
  at 
app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
  at 
app//org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
  at 
app//org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
  at 
app//org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
  at 
app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
  at 
app//org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
  at 

[jira] [Created] (IGNITE-20152) PartitionAwarenessTest.startServer2 is flaky

2023-08-03 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-20152:
---

 Summary: PartitionAwarenessTest.startServer2 is flaky
 Key: IGNITE-20152
 URL: https://issues.apache.org/jira/browse/IGNITE-20152
 Project: Ignite
  Issue Type: Bug
  Components: thin client
Affects Versions: 3.0.0-beta1
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 3.0.0-beta2








[jira] [Assigned] (IGNITE-20127) Implement 1rtt RW transaction await logic in pre commit

2023-08-03 Thread Alexey Scherbakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Scherbakov reassigned IGNITE-20127:
--

Assignee: Alexey Scherbakov

> Implement 1rtt RW transaction await logic in pre commit
> ---
>
> Key: IGNITE-20127
> URL: https://issues.apache.org/jira/browse/IGNITE-20127
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Assignee: Alexey Scherbakov
>Priority: Major
>  Labels: ignite-3, transactions
>
> h3. Motivation
> Our transaction protocol assumes that all required request validations, lock 
> acquisitions, and similar activities are performed on a primary replica prior 
> to command replication. This means it's not necessary to await replication 
> for every request one by one; instead, all of them can be awaited at once in 
> the pre-commit phase. Most of what is required for such an all-at-once await 
> has already been implemented.
> h3. Definition of Done
>  * It's required to do the command replication in an async manner, meaning 
> that the result must be returned to the client right after replication is 
> triggered. Currently, we return the replication result in 
> PartitionReplicaListener#applyCmdWithExceptionHandling and await it in 
> ReplicaManager#onReplicaMessageReceive
> {code:java}
> CompletableFuture result = replica.processRequest(request);
> result.handle((res, ex) -> {
> ...
> msg = prepareReplicaResponse(requestTimestamp, res);
> ...
> clusterNetSvc.messagingService().respond(senderConsistentId, msg, 
> correlationId); {code}
>  * And, of course, it's required to await all commands' replication at once 
> in pre-commit. We already have such logic in ReadWriteTransactionImpl#finish
> {code:java}
> protected CompletableFuture finish(boolean commit) {
> ...
> CompletableFuture mainFinishFut = CompletableFuture
> .allOf(enlistedResults.toArray(new CompletableFuture[0]))
> .thenCompose( 
> ...
>                 return txManager.finish(
> ...{code}
> however, it should use not the result from the primary, but the 
> replication-completion one.
> h3. Implementation Notes
> I believe it's possible to implement it in the following way:
>  * ReplicaManager should await only primary-related actions like lock 
> acquisition and store the replication future in a map of sorts. It's 
> possible to use safeTime as the request id.
>  * The transaction should send replicationAwaitRequest in an async manner 
> right after the replicationResponse from the primary has been received.
>  * enlistedResults should be switched to replicationAwaitResponse.
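The implementation notes above can be sketched as follows. This is a hypothetical illustration (class and method names are invented, not the real Ignite API): the primary responds as soon as its local work is done, each command's replication future is stored under a request id (e.g. the command's safeTime), and the transaction awaits them all at once in pre-commit.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: replication futures are tracked per request id and
// awaited together in the transaction's pre-commit phase.
class ReplicationTracker {
    private final Map<Long, CompletableFuture<Void>> pending = new ConcurrentHashMap<>();

    /** Returns the replication future for a request, creating it on first access. */
    CompletableFuture<Void> replicationFuture(long requestId) {
        return pending.computeIfAbsent(requestId, id -> new CompletableFuture<>());
    }

    /** Called when replication of the command with the given id completes. */
    void onReplicated(long requestId) {
        CompletableFuture<Void> f = pending.remove(requestId);
        if (f != null) {
            f.complete(null);
        }
    }

    /** Pre-commit: await replication of all enlisted commands at once. */
    CompletableFuture<Void> awaitAll(List<Long> requestIds) {
        return CompletableFuture.allOf(
                requestIds.stream().map(this::replicationFuture).toArray(CompletableFuture[]::new));
    }
}
```

The key point is that `awaitAll` composes the per-command futures with `CompletableFuture.allOf`, so no request blocks on replication individually.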





[jira] [Commented] (IGNITE-20113) IgniteTxStateImpl initial cleanup

2023-08-03 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17750736#comment-17750736
 ] 

Ignite TC Bot commented on IGNITE-20113:


{panel:title=Branch: [pull/10868/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10868/head] Base: [master] : No new tests 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7279669&buildTypeId=IgniteTests24Java8_RunAll]

> IgniteTxStateImpl initial cleanup
> -
>
> Key: IGNITE-20113
> URL: https://issues.apache.org/jira/browse/IGNITE-20113
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
> Fix For: 2.16
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>






[jira] [Updated] (IGNITE-20080) Reduce the number of threads used by Raft in tests

2023-08-03 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-20080:
-
Fix Version/s: 3.0.0-beta2

> Reduce the number of threads used by Raft in tests
> ---
>
> Key: IGNITE-20080
> URL: https://issues.apache.org/jira/browse/IGNITE-20080
> Project: Ignite
>  Issue Type: Task
>Reporter: Aleksandr Polovtcev
>Assignee: Aleksandr Polovtcev
>Priority: Blocker
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> After Kubernetes-based agents were enabled on TC, unit test builds started to 
> fail with OOM errors. After inspecting the heap dump, we discovered that 
> Kubernetes agents report having 64 cores. Since the number of cores 
> influences the number of threads the Raft component creates, we ended up 
> with more than 500 Raft-related threads, which consumed all available memory.
> As a quick solution to this problem, I propose to reduce the number of 
> threads used by the Raft component, at least in tests.
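A minimal sketch of the quick fix described above (the system-property name here is hypothetical, not an actual Ignite setting): derive the pool size from the reported core count, but let tests cap it so a 64-core agent does not inflate thread counts.

```java
// Sketch: derive a Raft pool size from the core count, but allow tests to
// cap it via a system property (the property name is hypothetical).
final class RaftThreadPools {
    private RaftThreadPools() {
    }

    static int poolSize() {
        int cores = Runtime.getRuntime().availableProcessors();
        int cap = Integer.getInteger("test.raft.threads.cap", Integer.MAX_VALUE);
        return Math.max(1, Math.min(cores, cap));
    }
}
```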





[jira] [Commented] (IGNITE-19783) StripedScheduledExecutorService for DistributionZoneManager#executor

2023-08-03 Thread Denis Chudov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17750714#comment-17750714
 ] 

Denis Chudov commented on IGNITE-19783:
---

[~Sergey Uttsel] LGTM.

> StripedScheduledExecutorService for DistributionZoneManager#executor
> 
>
> Key: IGNITE-19783
> URL: https://issues.apache.org/jira/browse/IGNITE-19783
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Uttsel
>Assignee: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> h3. *Motivation*
> In https://issues.apache.org/jira/browse/IGNITE-19736 we set corePoolSize=1 
> for DistributionZoneManager#executor to ensure that all data-node 
> calculation tasks per zone are executed in order of creation. But we need 
> more threads to process these tasks, so we need to create a 
> StripedScheduledExecutorService in which all tasks for the same zone are 
> executed in one stripe. The stripe that executes a task is defined by the 
> zone id.
> h3. *Definition of Done*
>  # StripedScheduledExecutorService is created and used instead of the 
> single-thread executor in DistributionZoneManager.
>  # All tasks for the same zone must be executed in one stripe.
> h3. *Implementation Notes*
> I've created a draft StripedScheduledExecutorService in a branch 
> [https://github.com/gridgain/apache-ignite-3/tree/ignite-19783]
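The stripe-by-zone-id idea can be sketched like this (class and method names are illustrative; this is not the draft linked above): each zone always maps to the same single-threaded stripe, so per-zone ordering is preserved while different zones can run in parallel.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

// Sketch of a striped scheduler: tasks for the same zone land on one
// single-threaded stripe and therefore run in submission order.
class StripedScheduler {
    private final ScheduledExecutorService[] stripes;

    StripedScheduler(int stripeCount) {
        stripes = new ScheduledExecutorService[stripeCount];
        for (int i = 0; i < stripeCount; i++) {
            stripes[i] = Executors.newSingleThreadScheduledExecutor();
        }
    }

    /** The stripe is chosen by zone id, preserving per-zone ordering. */
    void execute(int zoneId, Runnable task) {
        stripes[Math.floorMod(zoneId, stripes.length)].execute(task);
    }

    void shutdown() {
        for (ScheduledExecutorService stripe : stripes) {
            stripe.shutdown();
        }
    }
}
```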





[jira] [Commented] (IGNITE-20058) NPE in DistributionZoneManagerAlterFilterTest#testAlterFilter

2023-08-03 Thread Denis Chudov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17750713#comment-17750713
 ] 

Denis Chudov commented on IGNITE-20058:
---

[~Sergey Uttsel] LGTM

> NPE in DistributionZoneManagerAlterFilterTest#testAlterFilter
> -
>
> Key: IGNITE-20058
> URL: https://issues.apache.org/jira/browse/IGNITE-20058
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mirza Aliev
>Assignee: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
>
> *{{Motivation}}*
> {{DistributionZoneManagerAlterFilterTest.testAlterFilter}} is flaky and with 
> very low failure rate it fails with NPE (1 fail in 1500 runs)
> {noformat}
> 2023-07-25 16:48:30:520 +0400 
> [ERROR][%test%metastorage-watch-executor-0][WatchProcessor] Error occurred 
> when processing a watch event
> java.lang.NullPointerException
>   at 
> org.apache.ignite.internal.distributionzones.DistributionZoneManager.lambda$onUpdateScaleDown$18(DistributionZoneManager.java:737)
>   at 
> org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier.notifyPublicListeners(ConfigurationNotifier.java:488)
>   at 
> org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:136)
>   at 
> org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:129)
>   at 
> org.apache.ignite.internal.distributionzones.configuration.DistributionZoneNode.traverseChildren(Unknown
>  Source)
> {noformat}
> {code:java}
> 2023-08-01 15:55:40:440 +0300 
> [INFO][%test%metastorage-watch-executor-1][ConfigurationRegistry] Failed to 
> notify configuration listener
> java.lang.NullPointerException
>     at 
> org.apache.ignite.internal.distributionzones.causalitydatanodes.CausalityDataNodesEngine.updateZoneConfiguration(CausalityDataNodesEngine.java:570)
>     at 
> org.apache.ignite.internal.distributionzones.causalitydatanodes.CausalityDataNodesEngine.onUpdateFilter(CausalityDataNodesEngine.java:557)
>     at 
> org.apache.ignite.internal.distributionzones.DistributionZoneManager.lambda$onUpdateFilter$18(DistributionZoneManager.java:774)
>     at 
> org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier.notifyPublicListeners(ConfigurationNotifier.java:488)
>     at 
> org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:136)
>     at 
> org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:129)
>     at 
> org.apache.ignite.internal.distributionzones.configuration.DistributionZoneNode.traverseChildren(Unknown
>  Source){code}
>  
> *Implementation Notes*
> The reason is the wrong start order of the components:
> # First, metastorage watch listeners are deployed.
> # Then DistributionZoneManager is started.
> I change this order to fix the issue.
> Also, I will close https://issues.apache.org/jira/browse/IGNITE-19403 when 
> this ticket is closed.





[jira] [Commented] (IGNITE-20080) Reduce the number of threads used by Raft in tests

2023-08-03 Thread Roman Puchkovskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17750711#comment-17750711
 ] 

Roman Puchkovskiy commented on IGNITE-20080:


The patch looks good to me

> Reduce the number of threads used by Raft in tests
> ---
>
> Key: IGNITE-20080
> URL: https://issues.apache.org/jira/browse/IGNITE-20080
> Project: Ignite
>  Issue Type: Task
>Reporter: Aleksandr Polovtcev
>Assignee: Aleksandr Polovtcev
>Priority: Blocker
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> After Kubernetes-based agents were enabled on TC, unit test builds started to 
> fail with OOM errors. After inspecting the heap dump, we discovered that 
> Kubernetes agents report having 64 cores. Since the number of cores 
> influences the number of threads the Raft component creates, we ended up 
> with more than 500 Raft-related threads, which consumed all available memory.
> As a quick solution to this problem, I propose to reduce the number of 
> threads used by the Raft component, at least in tests.





[jira] [Updated] (IGNITE-20080) Reduce the number of threads used by Raft in tests

2023-08-03 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-20080:
---
Reviewer: Roman Puchkovskiy

> Reduce the number of threads used by Raft in tests
> ---
>
> Key: IGNITE-20080
> URL: https://issues.apache.org/jira/browse/IGNITE-20080
> Project: Ignite
>  Issue Type: Task
>Reporter: Aleksandr Polovtcev
>Assignee: Aleksandr Polovtcev
>Priority: Blocker
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> After Kubernetes-based agents were enabled on TC, unit test builds started to 
> fail with OOM errors. After inspecting the heap dump, we discovered that 
> Kubernetes agents report having 64 cores. Since the number of cores 
> influences the number of threads the Raft component creates, we ended up 
> with more than 500 Raft-related threads, which consumed all available memory.
> As a quick solution to this problem, I propose to reduce the number of 
> threads used by the Raft component, at least in tests.





[jira] [Updated] (IGNITE-20019) Introduce SystemViewManager

2023-08-03 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-20019:
---
Description: 
SystemViewManager is a component responsible for managing the system views:
* During start, any component may register a view definition with the view 
manager. The view manager must ensure that there are no clashes in the views' 
names.
* Before the node is validated, the view manager must enhance the node 
attributes (LogicalNode class) with the list of cluster views registered so 
far. Later, the sql engine will use this information to map queries over 
cluster views properly.
* After the node has passed validation and is ready to join the logical 
topology, the view manager must register all views in the catalog.

  was:
SystemViewManager is a component responsible for managing the system views':
* during start, any component may register view definition to the view manager. 
View manager must ensure that there is no clashes in the views' names.
* before node is going to be validated, view manager must enhance node 
attributes with list of clusterViews registered so far. Later, sql engine will 
be using this information to map queries over cluster views properly
* after node has passed validation and ready to join logical topology, view 
manager must register all views to the catalog.


> Introduce SystemViewManager
> ---
>
> Key: IGNITE-20019
> URL: https://issues.apache.org/jira/browse/IGNITE-20019
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> SystemViewManager is a component responsible for managing the system views:
> * During start, any component may register a view definition with the view 
> manager. The view manager must ensure that there are no clashes in the 
> views' names.
> * Before the node is validated, the view manager must enhance the node 
> attributes (LogicalNode class) with the list of cluster views registered so 
> far. Later, the sql engine will use this information to map queries over 
> cluster views properly.
> * After the node has passed validation and is ready to join the logical 
> topology, the view manager must register all views in the catalog.





[jira] [Updated] (IGNITE-20141) Transaction will never become finished on timeout when deadlock detection is disabled.

2023-08-03 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-20141:
--
Description: 
TxRollbackOnTimeoutNoDeadlockDetectionTest has no failures at CI because Ignite 
ignores the IGNITE_TX_DEADLOCK_DETECTION_MAX_ITERS property when it is set 
after it has already been set by previous tests (it is a static field).

But when you run this test separately, it fails because the transaction never 
finishes on timeout when deadlock detection is disabled.

> Transaction will never become finished on timeout when deadlock detection is 
> disabled.
> --
>
> Key: IGNITE-20141
> URL: https://issues.apache.org/jira/browse/IGNITE-20141
> Project: Ignite
>  Issue Type: Bug
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
> Fix For: 2.16
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> TxRollbackOnTimeoutNoDeadlockDetectionTest has no failures at CI because 
> Ignite ignores the IGNITE_TX_DEADLOCK_DETECTION_MAX_ITERS property when it is 
> set after it has already been set by previous tests (it is a static field).
> But when you run this test separately, it fails because the transaction never 
> finishes on timeout when deadlock detection is disabled.





[jira] (IGNITE-20141) Transaction will never become finished on timeout when deadlock detection is disabled.

2023-08-03 Thread Anton Vinogradov (Jira)


[ https://issues.apache.org/jira/browse/IGNITE-20141 ]


Anton Vinogradov deleted comment on IGNITE-20141:
---

was (Author: av):
TxRollbackOnTimeoutNoDeadlockDetectionTest has no failures at CI because Ignite 
ignores the IGNITE_TX_DEADLOCK_DETECTION_MAX_ITERS property when it is set 
after it has already been set by previous tests (it is a static field).

But when you run this test separately, it fails because the transaction never 
finishes on timeout when deadlock detection is disabled.

> Transaction will never become finished on timeout when deadlock detection is 
> disabled.
> --
>
> Key: IGNITE-20141
> URL: https://issues.apache.org/jira/browse/IGNITE-20141
> Project: Ignite
>  Issue Type: Bug
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
> Fix For: 2.16
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> TxRollbackOnTimeoutNoDeadlockDetectionTest has no failures at CI because 
> Ignite ignores the IGNITE_TX_DEADLOCK_DETECTION_MAX_ITERS property when it is 
> set after it has already been set by previous tests (it is a static field).
> But when you run this test separately, it fails because the transaction never 
> finishes on timeout when deadlock detection is disabled.





[jira] (IGNITE-20094) IgniteTxManager initial cleanup

2023-08-03 Thread Anton Vinogradov (Jira)


[ https://issues.apache.org/jira/browse/IGNITE-20094 ]


Anton Vinogradov deleted comment on IGNITE-20094:
---

was (Author: ignitetcbot):
{panel:title=Branch: [pull/10864/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10864/head] Base: [master] : No new tests 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7274053&buildTypeId=IgniteTests24Java8_RunAll]

> IgniteTxManager initial cleanup
> ---
>
> Key: IGNITE-20094
> URL: https://issues.apache.org/jira/browse/IGNITE-20094
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
> Fix For: 2.16
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>






[jira] [Created] (IGNITE-20151) Minor fixes in Ignite 3 public documentation

2023-08-03 Thread Aleksandr (Jira)
Aleksandr created IGNITE-20151:
--

 Summary: Minor fixes in Ignite 3 public documentation
 Key: IGNITE-20151
 URL: https://issues.apache.org/jira/browse/IGNITE-20151
 Project: Ignite
  Issue Type: Task
  Components: documentation
Reporter: Aleksandr


# Search for the word "Alpha"; it is confused with "Beta".
# Installing using Docker: "Running In Memory Cluster?" is confusing. What 
does "In Memory" mean? I would suggest getting rid of this word.
# Rename "REPL mode" to "Interactive mode".
# In the CLI part: "Command options" is confused with "Interactive".





[jira] [Commented] (IGNITE-20150) JDBC port not exposed in docker-compose.yml in Ignite 3

2023-08-03 Thread Aleksandr (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17750695#comment-17750695
 ] 

Aleksandr commented on IGNITE-20150:


As a result, a documentation ticket should be created as well.

> JDBC port not exposed in docker-compose.yml in Ignite 3
> ---
>
> Key: IGNITE-20150
> URL: https://issues.apache.org/jira/browse/IGNITE-20150
> Project: Ignite
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-beta2
>Reporter: Ivan Zlenko
>Priority: Major
>  Labels: ignite-3
>
> Ignite has an option to run a cluster inside a Docker container. To run 
> several nodes, we can use a docker-compose file. A pre-defined 
> docker-compose.yml exists in the repo, and an example can be found in the 
> documentation. However, both the file from the repo and the docs contain one 
> simple error: the JDBC port is not exposed. So as soon as someone tries to 
> enter SQL mode inside the CLI, the following error is received: 
> {code}
> [node1]> sql
> 196609 Trace ID: 2c6f842d-2d08-4b51-b1cf-307e664dc9ff
> Connection failed
> Client failed to connect: Connection refused: localhost/127.0.0.1:10800
> {code}
> Adding port 10800 to the docker-compose file fixes that problem. 
> On a side note: I'm not sure it is necessary to expose the ScaleCube ports 
> externally. As far as I understand, they exist only for internal 
> communication between the nodes, and no one should connect to those ports 
> externally. 





[jira] [Updated] (IGNITE-20150) JDBC port not exposed in docker-compose.yml in Ignite 3

2023-08-03 Thread Aleksandr (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr updated IGNITE-20150:
---
Labels: ignite-3  (was: )

> JDBC port not exposed in docker-compose.yml in Ignite 3
> ---
>
> Key: IGNITE-20150
> URL: https://issues.apache.org/jira/browse/IGNITE-20150
> Project: Ignite
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-beta2
>Reporter: Ivan Zlenko
>Priority: Major
>  Labels: ignite-3
>
> Ignite has an option to run a cluster inside a Docker container. To run 
> several nodes, we can use a docker-compose file. A pre-defined 
> docker-compose.yml exists in the repo, and an example can be found in the 
> documentation. However, both the file from the repo and the docs contain one 
> simple error: the JDBC port is not exposed. So as soon as someone tries to 
> enter SQL mode inside the CLI, the following error is received: 
> {code}
> [node1]> sql
> 196609 Trace ID: 2c6f842d-2d08-4b51-b1cf-307e664dc9ff
> Connection failed
> Client failed to connect: Connection refused: localhost/127.0.0.1:10800
> {code}
> Adding port 10800 to the docker-compose file fixes that problem. 
> On a side note: I'm not sure it is necessary to expose the ScaleCube ports 
> externally. As far as I understand, they exist only for internal 
> communication between the nodes, and no one should connect to those ports 
> externally. 





[jira] [Created] (IGNITE-20150) JDBC port not exposed in docker-compose.yml in Ignite 3

2023-08-03 Thread Ivan Zlenko (Jira)
Ivan Zlenko created IGNITE-20150:


 Summary: JDBC port not exposed in docker-compose.yml in Ignite 3
 Key: IGNITE-20150
 URL: https://issues.apache.org/jira/browse/IGNITE-20150
 Project: Ignite
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0-beta2
Reporter: Ivan Zlenko


Ignite has an option to run a cluster inside a Docker container. To run several 
nodes we can use a Docker Compose file. A pre-defined docker-compose.yml exists 
in the repo, and an example can be found in the documentation. However, both the 
file from the repo and the docs contain one simple error: the JDBC port is not 
exposed. So as soon as someone tries to enter SQL mode inside the CLI, the 
following error is received: 
{code}
[node1]> sql
196609 Trace ID: 2c6f842d-2d08-4b51-b1cf-307e664dc9ff
Connection failed
Client failed to connect: Connection refused: localhost/127.0.0.1:10800
{code}
Adding port 10800 to the docker-compose file fixes the problem. 
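As a rough illustration (the service and image names below are hypothetical, not 
taken from the actual repo file), the fix amounts to adding 10800 to the ports 
list of each node:

```yaml
# Hypothetical docker-compose.yml fragment; service/image names are
# illustrative -- the actual file in the Ignite repo may differ.
services:
  node1:
    image: apacheignite/ignite3
    ports:
      - "10300:10300"   # REST port (assumed already exposed)
      - "10800:10800"   # JDBC/client port -- the missing mapping
```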

On a side note: I'm not sure it is necessary to expose the ScaleCube ports 
externally. As far as I understand, they exist only for internal communication 
between nodes, and no one should connect to those ports externally. 





[jira] [Created] (IGNITE-20149) Sql. Revise use INTERNAL_ERR in sql module

2023-08-03 Thread Yury Gerzhedovich (Jira)
Yury Gerzhedovich created IGNITE-20149:
--

 Summary: Sql. Revise use INTERNAL_ERR in sql module
 Key: IGNITE-20149
 URL: https://issues.apache.org/jira/browse/IGNITE-20149
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Yury Gerzhedovich


Error code Common.INTERNAL_ERR should be used only for internal errors that can 
be treated as bugs requiring attention from a developer. However, we often use 
this error code for normal situations as well, e.g. a node leaving during 
execution of a query.

Let's revise the SQL module's use of the INTERNAL_ERR error code according to 
the above.





[jira] [Commented] (IGNITE-20142) Introduce changes for JDK17 tests run

2023-08-03 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17750684#comment-17750684
 ] 

Ignite TC Bot commented on IGNITE-20142:


{panel:title=Branch: [pull/10873/head] Base: [master] : Possible Blockers 
(4)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}Platform .NET (Windows){color} [[tests 0 TIMEOUT , Exit Code 
|https://ci2.ignite.apache.org/viewLog.html?buildId=7280970]]

{color:#d04437}Cache 2{color} [[tests 
1|https://ci2.ignite.apache.org/viewLog.html?buildId=7280902]]
* IgniteCacheTestSuite2: 
GridCachePartitionedTxMultiThreadedSelfTest.testPessimisticReadCommittedCommitMultithreaded
 - Test has low fail rate in base branch 0,0% and is not flaky

{color:#d04437}Disk Page Compressions 1{color} [[tests 
1|https://ci2.ignite.apache.org/viewLog.html?buildId=7281000]]
* IgnitePdsCompressionTestSuite: 
IgniteClusterActivateDeactivateTestWithPersistence.testInactiveTopologyChangesReadOnly
 - Test has low fail rate in base branch 0,0% and is not flaky

{color:#d04437}Snapshots{color} [[tests 
1|https://ci2.ignite.apache.org/viewLog.html?buildId=7280982]]
* IgniteSnapshotTestSuite: 
IgniteSnapshotRestoreFromRemoteTest.testSnapshotCachesStoppedIfLoadingFailOnRemote[encryption=true,
 onlyPrimay=false] - Test has low fail rate in base branch 0,0% and is not flaky

{panel}
{panel:title=Branch: [pull/10873/head] Base: [master] : No new tests 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7281003&buildTypeId=IgniteTests24Java8_RunAll]

> Introduce changes for JDK17 tests run
> -
>
> Key: IGNITE-20142
> URL: https://issues.apache.org/jira/browse/IGNITE-20142
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Petr Ivanov
>Assignee: Petr Ivanov
>Priority: Major
>  Labels: ise
> Fix For: 2.16
>
>
> Introduce several changes and extend the current workarounds to enable 
> running tests under JDK 17.





[jira] [Created] (IGNITE-20148) Explicit writeIntent cleanup on primary replica

2023-08-03 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-20148:


 Summary: Explicit writeIntent cleanup on primary replica
 Key: IGNITE-20148
 URL: https://issues.apache.org/jira/browse/IGNITE-20148
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin








[jira] [Created] (IGNITE-20147) C++ tests with timestamp fail on new agents

2023-08-03 Thread Igor Sapego (Jira)
Igor Sapego created IGNITE-20147:


 Summary: C++ tests with timestamp fail on new agents
 Key: IGNITE-20147
 URL: https://issues.apache.org/jira/browse/IGNITE-20147
 Project: Ignite
  Issue Type: Bug
  Components: platforms
Reporter: Igor Sapego
Assignee: Igor Sapego
 Fix For: 3.0.0-beta2


ODBC tests get_timestamp_from_date and get_timestamp_from_timestamp fail on new 
TC agents. It probably has something to do with the timezone.





[jira] [Assigned] (IGNITE-20016) Introduce bulk operation to catalog

2023-08-03 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich reassigned IGNITE-20016:
--

Assignee: Konstantin Orlov

> Introduce bulk operation to catalog
> ---
>
> Key: IGNITE-20016
> URL: https://issues.apache.org/jira/browse/IGNITE-20016
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Assignee: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> To optimize registration of system views in the catalog, the latter should 
> support bulk updates.
> Let's extend the API of catalogManager with a method that accepts a list of 
> parameters and applies all changes as a single update to the catalog.
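A minimal sketch of the intended semantics: N single-command updates bump the
catalog version N times, while one bulk update applies every change and bumps
the version once. All names here (Catalog, CatalogManagerSketch, executeBulk)
are illustrative assumptions, not Ignite's actual CatalogManager API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical immutable catalog snapshot: a version plus registered views.
class Catalog {
    final int version;
    final List<String> views;

    Catalog(int version, List<String> views) {
        this.version = version;
        this.views = views;
    }
}

// Hypothetical manager showing single vs. bulk update semantics.
class CatalogManagerSketch {
    private Catalog catalog = new Catalog(0, List.of());

    // Existing style: one change -> one new catalog version.
    void execute(UnaryOperator<List<String>> change) {
        executeBulk(List.of(change));
    }

    // Proposed style: all changes applied as a single atomic update,
    // producing exactly one new catalog version.
    void executeBulk(List<UnaryOperator<List<String>>> changes) {
        List<String> views = new ArrayList<>(catalog.views);

        for (UnaryOperator<List<String>> change : changes) {
            views = change.apply(views);
        }

        catalog = new Catalog(catalog.version + 1, List.copyOf(views));
    }

    Catalog catalog() {
        return catalog;
    }
}
```

Registering many system views via executeBulk then yields a single catalog
version increment instead of one per view.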





[jira] [Updated] (IGNITE-19770) Add a mechanism to wait till a schema is available via Schema Sync at a ts

2023-08-03 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-19770:
---
Description: 
According to IEP-98, when obtaining a schema at a timestamp T, we need to wait 
till Meta-Storage SafeTime becomes >= T-DD. A mechanism for such waits needs to 
be implemented.

It can be implemented either as methods like 
{{CompletableFuture table(String tableName, HybridTimestamp 
ts)}} (for each getting method) or a single method like 
{{CompletableFuture waitForTs(HybridTimestamp)}} (then usual sync methods 
to be used to get the definitions).
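A minimal sketch of the second option (a single wait method), assuming a
hypothetical tracker that is advanced as Meta-Storage SafeTime moves forward.
Names and types are illustrative; a plain long stands in for HybridTimestamp:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentSkipListMap;

// Hypothetical sketch: futures keyed by timestamp complete once the
// tracked safe time reaches them. Not Ignite's actual implementation.
class SafeTimeTracker {
    private volatile long safeTime = Long.MIN_VALUE;

    private final ConcurrentSkipListMap<Long, CompletableFuture<Void>> waiters =
            new ConcurrentSkipListMap<>();

    /** Returns a future that completes when safe time becomes >= ts. */
    CompletableFuture<Void> waitForTs(long ts) {
        if (ts <= safeTime) {
            return CompletableFuture.completedFuture(null);
        }

        CompletableFuture<Void> fut = waiters.computeIfAbsent(ts, k -> new CompletableFuture<>());

        // Re-check after registering to avoid a missed wake-up if
        // advanceSafeTime ran between the first check and registration.
        if (ts <= safeTime) {
            fut.complete(null);
        }

        return fut;
    }

    /** Called as Meta-Storage SafeTime moves forward. */
    void advanceSafeTime(long newSafeTime) {
        safeTime = newSafeTime;

        // Wake up every waiter whose timestamp is now covered.
        Map<Long, CompletableFuture<Void>> ready = waiters.headMap(newSafeTime, true);
        ready.values().forEach(f -> f.complete(null));
        ready.clear();
    }
}
```

The per-getter variant would then just chain the usual sync lookup onto
waitForTs(ts).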

  was:
According to IEP-98, when obtaining a schema at a timestamp T, we need to wait 
till Meta-Storage SafeTime becomes >= T-DD. A mechanism for such waits needs to 
be implemented.

Also, we will probably need to wait for a specific version of a table/index/etc 
(or a version of a Catalog as a whole), so a way to wait for it (and not for a 
ts) is also needed.

Both can be implemented either as methods like 
{{CompletableFuture table(String tableName, HybridTimestamp 
ts)}} (for each getting method) or a single method like 
{{CompletableFuture waitForTs(HybridTimestamp)}} (then usual sync methods 
to be used to get the definitions).


> Add a mechanism to wait till a schema is available via Schema Sync at a ts
> --
>
> Key: IGNITE-19770
> URL: https://issues.apache.org/jira/browse/IGNITE-19770
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Roman Puchkovskiy
>Priority: Major
>  Labels: iep-98, ignite-3
> Fix For: 3.0.0-beta2
>
>
> According to IEP-98, when obtaining a schema at a timestamp T, we need to 
> wait till Meta-Storage SafeTime becomes >= T-DD. A mechanism for such waits 
> needs to be implemented.
> It can be implemented either as methods like 
> {{CompletableFuture table(String tableName, HybridTimestamp 
> ts)}} (for each getting method) or a single method like 
> {{CompletableFuture waitForTs(HybridTimestamp)}} (then usual sync 
> methods to be used to get the definitions).





[jira] [Updated] (IGNITE-20023) Unstable cluster topology leads to flaky test

2023-08-03 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-20023:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Unstable cluster topology leads to flaky test
> -
>
> Key: IGNITE-20023
> URL: https://issues.apache.org/jira/browse/IGNITE-20023
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 3.0
>Reporter: Alexey Scherbakov
>Assignee: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {noformat}
> 2023-07-21 18:30:15:108 +0300 
> [INFO][main][ItTxDistributedTestThreeNodesThreeReplicas] >>> Starting test: 
> ItTxDistributedTestThreeNodesThreeReplicas#testTxClosureAsync, displayName: 
> testTxClosureAsync(), workDir: 
> D:\work\ignite-3\modules\table\build\work\ItTxDistributedTestThreeNodesThreeReplicas\ttca_25996
> 2023-07-21 18:30:15:142 +0300 [INFO][main][ConnectionManager] Server started 
> [address=/0:0:0:0:0:0:0:0:20001]
> 2023-07-21 18:30:15:142 +0300 
> [INFO][ForkJoinPool.commonPool-worker-7][ConnectionManager] Server started 
> [address=/0:0:0:0:0:0:0:0:2]
> 2023-07-21 18:30:15:142 +0300 
> [INFO][ForkJoinPool.commonPool-worker-3][ConnectionManager] Server started 
> [address=/0:0:0:0:0:0:0:0:20002]
> 2023-07-21 18:30:15:145 +0300 
> [INFO][itdttntr_n_2-srv-worker-1][RecoveryServerHandshakeManager] Failed 
> to acquire recovery descriptor during handshake, it is held by: [id: 
> 0x348087cd, L:/192.168.0.138:63056 - R:/192.168.0.138:20001]
> 2023-07-21 18:30:15:147 +0300 
> [INFO][itdttntr_n_20002-client-1][RecoveryClientHandshakeManager] Failed to 
> acquire recovery descriptor during handshake, it is held by: [id: 0x8017c1b2, 
> L:/192.168.0.138:20002 - R:/192.168.0.138:63058]
> 2023-07-21 18:30:15:147 +0300 
> [INFO][itdttntr_n_20002-client-2][RecoveryClientHandshakeManager] Failed to 
> acquire recovery descriptor during handshake, it is held by: [id: 0xb26b9231, 
> L:/192.168.0.138:20002 - R:/192.168.0.138:63059]
> 2023-07-21 18:30:15:149 +0300 
> [INFO][itdttntr_n_20001-client-3][RecoveryClientHandshakeManager] Failed to 
> acquire recovery descriptor during handshake, it is held by: [id: 0x3dd8aa9e, 
> L:/192.168.0.138:20001 - R:/192.168.0.138:63056]
> 2023-07-21 18:30:15:149 +0300 
> [INFO][itdttntr_n_20002-client-3][RecoveryClientHandshakeManager] Failed to 
> acquire recovery descriptor during handshake, it is held by: [id: 0xb26b9231, 
> L:/192.168.0.138:20002 - R:/192.168.0.138:63059]
> 2023-07-21 18:30:15:149 +0300 
> [INFO][itdttntr_n_20002-client-4][RecoveryClientHandshakeManager] Failed to 
> acquire recovery descriptor during handshake, it is held by: [id: 0x8017c1b2, 
> L:/192.168.0.138:20002 - R:/192.168.0.138:63058]
> 2023-07-21 18:30:15:150 +0300 
> [INFO][sc-cluster-20001-107][ScaleCubeTopologyService] Node joined 
> [node=ClusterNode [id=ade225ca-5ef0-4b03-9dbe-edbe2e2ba928, 
> name=itdttntr_n_20002, address=192.168.0.138:20002, nodeMetadata=null]]
> 2023-07-21 18:30:15:150 +0300 
> [INFO][sc-cluster-20002-106][ScaleCubeTopologyService] Node joined 
> [node=ClusterNode [id=bee1e04c-e2f0-48c0-ad54-eefdd6928278, 
> name=itdttntr_n_20001, address=192.168.0.138:20001, nodeMetadata=null]]
> 2023-07-21 18:30:15:150 +0300 
> [INFO][sc-cluster-2-105][ScaleCubeTopologyService] Node joined 
> [node=ClusterNode [id=ade225ca-5ef0-4b03-9dbe-edbe2e2ba928, 
> name=itdttntr_n_20002, address=192.168.0.138:20002, nodeMetadata=null]]
> 2023-07-21 18:30:15:150 +0300 
> [INFO][sc-cluster-20001-107][ScaleCubeTopologyService] Topology snapshot 
> [nodes=[itdttntr_n_20002]]
> 2023-07-21 18:30:15:150 +0300 
> [INFO][sc-cluster-2-105][ScaleCubeTopologyService] Topology snapshot 
> [nodes=[itdttntr_n_20002]]
> 2023-07-21 18:30:15:150 +0300 
> [INFO][sc-cluster-20002-106][ScaleCubeTopologyService] Topology snapshot 
> [nodes=[itdttntr_n_20001]]
> 2023-07-21 18:30:15:150 +0300 
> [INFO][sc-cluster-20001-107][ScaleCubeTopologyService] Node joined 
> [node=ClusterNode [id=496bbe6d-66b7-4aa9-9807-e1dd5207d900, 
> name=itdttntr_n_2, address=192.168.0.138:2, nodeMetadata=null]]
> 2023-07-21 18:30:15:150 +0300 
> [INFO][sc-cluster-20002-106][ScaleCubeTopologyService] Node joined 
> [node=ClusterNode [id=496bbe6d-66b7-4aa9-9807-e1dd5207d900, 
> name=itdttntr_n_2, address=192.168.0.138:2, nodeMetadata=null]]
> 2023-07-21 18:30:15:150 +0300 
> [INFO][sc-cluster-20001-107][ScaleCubeTopologyService] Topology snapshot 
> [nodes=[itdttntr_n_2, itdttntr_n_20002]]
> 2023-07-21 18:30:15:150 +0300 
> [INFO][sc-cluster-20002-106][ScaleCubeTopologyService] Topology snapshot 
> [nodes=[itdttntr_n_2, itdttntr_n_20001]]
> 2023-07-21 18:30:15:151 +0300 
> 

[jira] [Updated] (IGNITE-20058) NPE in DistributionZoneManagerAlterFilterTest#testAlterFilter

2023-08-03 Thread Sergey Uttsel (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Uttsel updated IGNITE-20058:
---
Description: 
*{{Motivation}}*

{{DistributionZoneManagerAlterFilterTest.testAlterFilter}} is flaky: with a 
very low failure rate, it fails with an NPE (1 failure in 1500 runs)
{noformat}
2023-07-25 16:48:30:520 +0400 
[ERROR][%test%metastorage-watch-executor-0][WatchProcessor] Error occurred when 
processing a watch event
java.lang.NullPointerException
at 
org.apache.ignite.internal.distributionzones.DistributionZoneManager.lambda$onUpdateScaleDown$18(DistributionZoneManager.java:737)
at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier.notifyPublicListeners(ConfigurationNotifier.java:488)
at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:136)
at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:129)
at 
org.apache.ignite.internal.distributionzones.configuration.DistributionZoneNode.traverseChildren(Unknown
 Source)
{noformat}
{code:java}
2023-08-01 15:55:40:440 +0300 
[INFO][%test%metastorage-watch-executor-1][ConfigurationRegistry] Failed to 
notify configuration listener
java.lang.NullPointerException
    at 
org.apache.ignite.internal.distributionzones.causalitydatanodes.CausalityDataNodesEngine.updateZoneConfiguration(CausalityDataNodesEngine.java:570)
    at 
org.apache.ignite.internal.distributionzones.causalitydatanodes.CausalityDataNodesEngine.onUpdateFilter(CausalityDataNodesEngine.java:557)
    at 
org.apache.ignite.internal.distributionzones.DistributionZoneManager.lambda$onUpdateFilter$18(DistributionZoneManager.java:774)
    at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier.notifyPublicListeners(ConfigurationNotifier.java:488)
    at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:136)
    at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:129)
    at 
org.apache.ignite.internal.distributionzones.configuration.DistributionZoneNode.traverseChildren(Unknown
 Source){code}
 
*Implementation Notes*
The reason is the wrong start order of the components:
# First, the metastorage watch listeners are deployed.
# Then DistributionZoneManager is started.

So I changed this order to fix the issue.

Also, I will close https://issues.apache.org/jira/browse/IGNITE-19403 when this 
ticket is closed.

  was:
*{{Motivation}}*

{{DistributionZoneManagerAlterFilterTest.testAlterFilter}} is flaky and with 
very low failure rate it fails with NPE (1 fail in 1500 runs)
{noformat}
2023-07-25 16:48:30:520 +0400 
[ERROR][%test%metastorage-watch-executor-0][WatchProcessor] Error occurred when 
processing a watch event
java.lang.NullPointerException
at 
org.apache.ignite.internal.distributionzones.DistributionZoneManager.lambda$onUpdateScaleDown$18(DistributionZoneManager.java:737)
at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier.notifyPublicListeners(ConfigurationNotifier.java:488)
at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:136)
at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:129)
at 
org.apache.ignite.internal.distributionzones.configuration.DistributionZoneNode.traverseChildren(Unknown
 Source)
{noformat}
{code:java}
2023-08-01 15:55:40:440 +0300 
[INFO][%test%metastorage-watch-executor-1][ConfigurationRegistry] Failed to 
notify configuration listener
java.lang.NullPointerException
    at 
org.apache.ignite.internal.distributionzones.causalitydatanodes.CausalityDataNodesEngine.updateZoneConfiguration(CausalityDataNodesEngine.java:570)
    at 
org.apache.ignite.internal.distributionzones.causalitydatanodes.CausalityDataNodesEngine.onUpdateFilter(CausalityDataNodesEngine.java:557)
    at 
org.apache.ignite.internal.distributionzones.DistributionZoneManager.lambda$onUpdateFilter$18(DistributionZoneManager.java:774)
    at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier.notifyPublicListeners(ConfigurationNotifier.java:488)
    at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:136)
    at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:129)
    at 
org.apache.ignite.internal.distributionzones.configuration.DistributionZoneNode.traverseChildren(Unknown
 Source){code}
 
*Implementation Notes*
The reason is the wrong start order of the 

[jira] [Updated] (IGNITE-20058) NPE in DistributionZoneManagerAlterFilterTest#testAlterFilter

2023-08-03 Thread Sergey Uttsel (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Uttsel updated IGNITE-20058:
---
Description: 
*{{Motivation}}*

{{DistributionZoneManagerAlterFilterTest.testAlterFilter}} is flaky: with a 
very low failure rate, it fails with an NPE (1 failure in 1500 runs)
{noformat}
2023-07-25 16:48:30:520 +0400 
[ERROR][%test%metastorage-watch-executor-0][WatchProcessor] Error occurred when 
processing a watch event
java.lang.NullPointerException
at 
org.apache.ignite.internal.distributionzones.DistributionZoneManager.lambda$onUpdateScaleDown$18(DistributionZoneManager.java:737)
at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier.notifyPublicListeners(ConfigurationNotifier.java:488)
at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:136)
at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:129)
at 
org.apache.ignite.internal.distributionzones.configuration.DistributionZoneNode.traverseChildren(Unknown
 Source)
{noformat}
{code:java}
2023-08-01 15:55:40:440 +0300 
[INFO][%test%metastorage-watch-executor-1][ConfigurationRegistry] Failed to 
notify configuration listener
java.lang.NullPointerException
    at 
org.apache.ignite.internal.distributionzones.causalitydatanodes.CausalityDataNodesEngine.updateZoneConfiguration(CausalityDataNodesEngine.java:570)
    at 
org.apache.ignite.internal.distributionzones.causalitydatanodes.CausalityDataNodesEngine.onUpdateFilter(CausalityDataNodesEngine.java:557)
    at 
org.apache.ignite.internal.distributionzones.DistributionZoneManager.lambda$onUpdateFilter$18(DistributionZoneManager.java:774)
    at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier.notifyPublicListeners(ConfigurationNotifier.java:488)
    at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:136)
    at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:129)
    at 
org.apache.ignite.internal.distributionzones.configuration.DistributionZoneNode.traverseChildren(Unknown
 Source){code}
 
*Implementation Notes*
The reason is the wrong start order of the components:
# First, the metastorage watch listeners are deployed.
# Then DistributionZoneManager is started.
So I changed this order to fix the issue.

Also, I will close https://issues.apache.org/jira/browse/IGNITE-19403 when this 
ticket is closed.

  was:
{{MotivationDistributionZoneManagerAlterFilterTest.testAlterFilter}} is flaky 
and with very low failure rate it fails with NPE (1 fail in 1500 runs)
{noformat}
2023-07-25 16:48:30:520 +0400 
[ERROR][%test%metastorage-watch-executor-0][WatchProcessor] Error occurred when 
processing a watch event
java.lang.NullPointerException
at 
org.apache.ignite.internal.distributionzones.DistributionZoneManager.lambda$onUpdateScaleDown$18(DistributionZoneManager.java:737)
at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier.notifyPublicListeners(ConfigurationNotifier.java:488)
at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:136)
at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:129)
at 
org.apache.ignite.internal.distributionzones.configuration.DistributionZoneNode.traverseChildren(Unknown
 Source)
{noformat}
{code:java}
2023-08-01 15:55:40:440 +0300 
[INFO][%test%metastorage-watch-executor-1][ConfigurationRegistry] Failed to 
notify configuration listener
java.lang.NullPointerException
    at 
org.apache.ignite.internal.distributionzones.causalitydatanodes.CausalityDataNodesEngine.updateZoneConfiguration(CausalityDataNodesEngine.java:570)
    at 
org.apache.ignite.internal.distributionzones.causalitydatanodes.CausalityDataNodesEngine.onUpdateFilter(CausalityDataNodesEngine.java:557)
    at 
org.apache.ignite.internal.distributionzones.DistributionZoneManager.lambda$onUpdateFilter$18(DistributionZoneManager.java:774)
    at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier.notifyPublicListeners(ConfigurationNotifier.java:488)
    at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:136)
    at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:129)
    at 
org.apache.ignite.internal.distributionzones.configuration.DistributionZoneNode.traverseChildren(Unknown
 Source){code}


> NPE in DistributionZoneManagerAlterFilterTest#testAlterFilter
> 

[jira] [Updated] (IGNITE-20058) NPE in DistributionZoneManagerAlterFilterTest#testAlterFilter

2023-08-03 Thread Sergey Uttsel (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Uttsel updated IGNITE-20058:
---
Description: 
{{MotivationDistributionZoneManagerAlterFilterTest.testAlterFilter}} is flaky 
and with very low failure rate it fails with NPE (1 fail in 1500 runs)
{noformat}
2023-07-25 16:48:30:520 +0400 
[ERROR][%test%metastorage-watch-executor-0][WatchProcessor] Error occurred when 
processing a watch event
java.lang.NullPointerException
at 
org.apache.ignite.internal.distributionzones.DistributionZoneManager.lambda$onUpdateScaleDown$18(DistributionZoneManager.java:737)
at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier.notifyPublicListeners(ConfigurationNotifier.java:488)
at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:136)
at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:129)
at 
org.apache.ignite.internal.distributionzones.configuration.DistributionZoneNode.traverseChildren(Unknown
 Source)
{noformat}
{code:java}
2023-08-01 15:55:40:440 +0300 
[INFO][%test%metastorage-watch-executor-1][ConfigurationRegistry] Failed to 
notify configuration listener
java.lang.NullPointerException
    at 
org.apache.ignite.internal.distributionzones.causalitydatanodes.CausalityDataNodesEngine.updateZoneConfiguration(CausalityDataNodesEngine.java:570)
    at 
org.apache.ignite.internal.distributionzones.causalitydatanodes.CausalityDataNodesEngine.onUpdateFilter(CausalityDataNodesEngine.java:557)
    at 
org.apache.ignite.internal.distributionzones.DistributionZoneManager.lambda$onUpdateFilter$18(DistributionZoneManager.java:774)
    at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier.notifyPublicListeners(ConfigurationNotifier.java:488)
    at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:136)
    at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:129)
    at 
org.apache.ignite.internal.distributionzones.configuration.DistributionZoneNode.traverseChildren(Unknown
 Source){code}

  was:
{{DistributionZoneManagerAlterFilterTest.testAlterFilter}} is flaky and with 
very low failure rate it fails with NPE (1 fail in 1500 runs)
{noformat}
2023-07-25 16:48:30:520 +0400 
[ERROR][%test%metastorage-watch-executor-0][WatchProcessor] Error occurred when 
processing a watch event
java.lang.NullPointerException
at 
org.apache.ignite.internal.distributionzones.DistributionZoneManager.lambda$onUpdateScaleDown$18(DistributionZoneManager.java:737)
at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier.notifyPublicListeners(ConfigurationNotifier.java:488)
at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:136)
at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:129)
at 
org.apache.ignite.internal.distributionzones.configuration.DistributionZoneNode.traverseChildren(Unknown
 Source)
{noformat}
{code:java}
2023-08-01 15:55:40:440 +0300 
[INFO][%test%metastorage-watch-executor-1][ConfigurationRegistry] Failed to 
notify configuration listener
java.lang.NullPointerException
    at 
org.apache.ignite.internal.distributionzones.causalitydatanodes.CausalityDataNodesEngine.updateZoneConfiguration(CausalityDataNodesEngine.java:570)
    at 
org.apache.ignite.internal.distributionzones.causalitydatanodes.CausalityDataNodesEngine.onUpdateFilter(CausalityDataNodesEngine.java:557)
    at 
org.apache.ignite.internal.distributionzones.DistributionZoneManager.lambda$onUpdateFilter$18(DistributionZoneManager.java:774)
    at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier.notifyPublicListeners(ConfigurationNotifier.java:488)
    at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:136)
    at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:129)
    at 
org.apache.ignite.internal.distributionzones.configuration.DistributionZoneNode.traverseChildren(Unknown
 Source){code}


> NPE in DistributionZoneManagerAlterFilterTest#testAlterFilter
> -
>
> Key: IGNITE-20058
> URL: https://issues.apache.org/jira/browse/IGNITE-20058
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mirza Aliev
>Assignee: Sergey Uttsel
>Priority: Major
>