[jira] [Commented] (IGNITE-21437) Add bindHost configuration setting

2024-02-27 Thread Philipp Shergalis (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821552#comment-17821552
 ] 

Philipp Shergalis commented on IGNITE-21437:


https://github.com/apache/ignite-3/pull/3297

> Add bindHost configuration setting
> --
>
> Key: IGNITE-21437
> URL: https://issues.apache.org/jira/browse/IGNITE-21437
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Philipp Shergalis
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> Currently, we do not choose any specific interface when binding, so we bind 
> to all interfaces.
> bindHost (we should consider whether this is the best name and choose another if 
> not) should be added to NetworkConfigurationSchema to allow choosing the local 
> interface to bind to.
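
A hypothetical sketch of what the new setting could look like in the schema (the annotation style follows Ignite 3's configuration framework, but the field name, default value, and placement are assumptions here, not the final design):

```java
// Hypothetical sketch only: field name, default, and javadoc are assumptions;
// the actual change is whatever lands in the linked PR.
@Config
public class NetworkConfigurationSchema {
    /** Host name or IP of the local interface to bind to; "0.0.0.0" binds to all interfaces. */
    @Value(hasDefault = true)
    public final String bindHost = "0.0.0.0";

    // ... existing network settings ...
}
```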



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21619) "Failed to get the primary replica" after massive data insert and node restart

2024-02-27 Thread Andrey Khitrin (Jira)
Andrey Khitrin created IGNITE-21619:
---

 Summary: "Failed to get the primary replica" after massive data 
insert and node restart
 Key: IGNITE-21619
 URL: https://issues.apache.org/jira/browse/IGNITE-21619
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 3.0.0-beta2
Reporter: Andrey Khitrin
 Attachments: ignite-config.conf, ignite3db-0.log

Steps to reproduce:

1. Start a 1-node cluster.
2. Create several tables (for example, 5) in the aipersist zone.
3. Fill these tables with some data (for example, 1000 rows each).
4. Verify that the data is accessible via SQL.
5. Restart the node.
6. Try to fetch the same data again.

Expected result: the data can be fetched.

Actual result: the data is inaccessible.

Trace on the client side:
{code}
java.sql.SQLException: Failed to get the primary replica [tablePartitionId=6_part_1]
    at org.apache.ignite.internal.jdbc.proto.IgniteQueryErrorCode.createJdbcSqlException(IgniteQueryErrorCode.java:57)
    at org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:154)
    at org.apache.ignite.internal.jdbc.JdbcStatement.executeQuery(JdbcStatement.java:111)
    ...
{code}

Trace in node log (attached):
{code}
2024-02-28 12:36:34:807 +0500 [INFO][%ClusterFailoverTest_cluster_0%sql-execution-pool-0][JdbcQueryEventHandlerImpl] Exception while executing query [query=select sum(k1) from failoverTest00]
org.apache.ignite.sql.SqlException: IGN-CMN-65535 TraceId:8d366905-a4bb-4333-b0b3-c647a1cf943f Failed to get the primary replica [tablePartitionId=6_part_1]
    at org.apache.ignite.internal.lang.SqlExceptionMapperUtil.mapToPublicSqlException(SqlExceptionMapperUtil.java:61)
    at org.apache.ignite.internal.sql.engine.AsyncSqlCursorImpl.wrapIfNecessary(AsyncSqlCursorImpl.java:180)
    at org.apache.ignite.internal.sql.engine.AsyncSqlCursorImpl.handleError(AsyncSqlCursorImpl.java:157)
    at org.apache.ignite.internal.sql.engine.AsyncSqlCursorImpl.lambda$requestNextAsync$2(AsyncSqlCursorImpl.java:96)
    at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930)
    at java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:907)
    at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
    at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
    at org.apache.ignite.internal.sql.engine.exec.ExecutionServiceImpl$DistributedQueryManager.lambda$execute$18(ExecutionServiceImpl.java:864)
    at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
    at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
    at java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
    at org.apache.ignite.internal.sql.engine.exec.QueryTaskExecutorImpl.lambda$execute$0(QueryTaskExecutorImpl.java:83)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.ignite.lang.IgniteException: IGN-CMN-65535 TraceId:8d366905-a4bb-4333-b0b3-c647a1cf943f Failed to get the primary replica [tablePartitionId=6_part_1]
    at org.apache.ignite.internal.lang.IgniteExceptionMapperUtil.mapToPublicException(IgniteExceptionMapperUtil.java:117)
    at org.apache.ignite.internal.lang.SqlExceptionMapperUtil.mapToPublicSqlException(SqlExceptionMapperUtil.java:51)
    ... 15 more
Caused by: org.apache.ignite.internal.lang.IgniteInternalException: IGN-PLACEMENTDRIVER-1 TraceId:8d366905-a4bb-4333-b0b3-c647a1cf943f Failed to get the primary replica [tablePartitionId=6_part_1]
    at org.apache.ignite.internal.util.ExceptionUtils.lambda$withCause$1(ExceptionUtils.java:384)
    at org.apache.ignite.internal.util.ExceptionUtils.withCauseInternal(ExceptionUtils.java:446)
    at org.apache.ignite.internal.util.ExceptionUtils.withCause(ExceptionUtils.java:384)
    at org.apache.ignite.internal.sql.engine.SqlQueryProcessor.lambda$primaryReplicas$2(SqlQueryProcessor.java:402)
    at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930)
    at java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:907)
    at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
    at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
    at ...
{code}

[jira] [Updated] (IGNITE-18269) Use a pool of threads instead of creating threads for LongOperationAsyncExecutor

2024-02-27 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-18269:
---
Labels: ignite-3 tech-debt threading  (was: ignite-3 tech-debt)

> Use a pool of threads instead of creating threads for 
> LongOperationAsyncExecutor
> 
>
> Key: IGNITE-18269
> URL: https://issues.apache.org/jira/browse/IGNITE-18269
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3, tech-debt, threading
> Fix For: 3.0.0-beta2
>
>
> It was discovered that 
> *org.apache.ignite.internal.pagememory.persistence.store.LongOperationAsyncExecutor*
>  creates new threads instead of using a thread pool; this needs to be fixed.
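
The fix direction can be illustrated with a generic (non-Ignite) sketch; the class and method names below are invented for illustration:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Generic illustration (not Ignite's actual code): replace per-operation
// thread creation with a shared, bounded pool so long operations reuse workers.
class LongOpExecutorDemo {
    // Before: a brand-new thread is created for every long operation.
    static void runInNewThread(Runnable op) {
        new Thread(op, "long-operation").start();
    }

    // After: operations are submitted to one shared pool instead.
    static final ExecutorService POOL = Executors.newFixedThreadPool(4);

    static void runInPool(Runnable op) {
        POOL.submit(op);
    }
}
```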





[jira] [Updated] (IGNITE-21618) In-flights for read-only transactions

2024-02-27 Thread Denis Chudov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Chudov updated IGNITE-21618:
--
Description: 
*Motivation*

We need a solid mechanism for closing read-only transactions' resources 
(scan cursors, etc.) on remote servers after the transaction finishes. Resources 
are supposed to be closed by requests that the coordinator sends from a separate 
cleanup thread after the tx is finished, both to maximise the performance of the 
tx finish itself and because these requests are needed only for resource 
cleanup. But we need to prevent a race such as:
 * a tx request that is supposed to create a scan cursor on a remote server is sent
 * the tx is finished
 * the cleanup thread sends a cleanup request
 * the cleanup request reaches the remote server
 * the tx request reaches the remote server and opens a cursor that will never be 
closed.

We need to ensure that the cleanup request is not sent until the coordinator 
receives responses to all requests sent before the tx finish, and that no requests 
are allowed after the tx finish. Something similar to the RW in-flight request 
counter should be implemented for RO transactions.

*Definition of done*

The cleanup request from the cleanup thread is not sent until the coordinator 
receives responses to all requests sent before the tx finish, and no requests 
are allowed after the tx finish.

  was:
*Motivation*

We need a solid mechanism for closing read-only transactions' resources 
(cursors, etc.) on remote servers after the tx finishes. 
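
The mechanism described above (an RO analogue of the RW in-flight counter) can be sketched in plain Java; the class and method names below are illustrative, not Ignite's actual API:

```java
// Illustrative sketch only (names invented): an in-flight counter that delays
// the cleanup request until all requests sent before tx finish have completed,
// and rejects requests sent after tx finish.
class RoInflightTracker {
    private int inflights = 0;          // requests sent but not yet answered
    private boolean finished = false;   // tx finish observed on the coordinator
    private Runnable onQuiesce;         // cleanup action deferred until drained

    // Called before sending a request; returns false once the tx is finished.
    synchronized boolean tryStartRequest() {
        if (finished) {
            return false;               // no new requests after finish
        }
        inflights++;
        return true;
    }

    // Called when a response arrives; may trigger the deferred cleanup.
    synchronized void completeRequest() {
        if (--inflights == 0 && finished && onQuiesce != null) {
            onQuiesce.run();
        }
    }

    // Called on tx finish; cleanup runs immediately if nothing is in flight,
    // otherwise it is deferred until the last response arrives.
    synchronized void finish(Runnable cleanup) {
        finished = true;
        if (inflights == 0) {
            cleanup.run();
        } else {
            onQuiesce = cleanup;
        }
    }
}
```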


> In-flights for read-only transactions
> -
>
> Key: IGNITE-21618
> URL: https://issues.apache.org/jira/browse/IGNITE-21618
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Denis Chudov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> We need a solid mechanism for closing read-only transactions' resources 
> (scan cursors, etc.) on remote servers after the transaction finishes. 
> Resources are supposed to be closed by requests that the coordinator sends 
> from a separate cleanup thread after the tx is finished, both to maximise the 
> performance of the tx finish itself and because these requests are needed 
> only for resource cleanup. But we need to prevent a race such as:
>  * a tx request that is supposed to create a scan cursor on a remote server is sent
>  * the tx is finished
>  * the cleanup thread sends a cleanup request
>  * the cleanup request reaches the remote server
>  * the tx request reaches the remote server and opens a cursor that will never 
> be closed.
> We need to ensure that the cleanup request is not sent until the coordinator 
> receives responses to all requests sent before the tx finish, and that no 
> requests are allowed after the tx finish. Something similar to the RW 
> in-flight request counter should be implemented for RO transactions.
> *Definition of done*
> The cleanup request from the cleanup thread is not sent until the coordinator 
> receives responses to all requests sent before the tx finish, and no requests 
> are allowed after the tx finish.





[jira] [Updated] (IGNITE-21618) In-flights for read-only transactions

2024-02-27 Thread Denis Chudov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Chudov updated IGNITE-21618:
--
Epic Link: IGNITE-21221  (was: IGNITE-21174)

> In-flights for read-only transactions
> -
>
> Key: IGNITE-21618
> URL: https://issues.apache.org/jira/browse/IGNITE-21618
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Denis Chudov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> We need a solid mechanism for closing read-only transactions' resources 
> (scan cursors, etc.) on remote servers after the transaction finishes. 
> Resources are supposed to be closed by requests that the coordinator sends 
> from a separate cleanup thread after the tx is finished, both to maximise the 
> performance of the tx finish itself and because these requests are needed 
> only for resource cleanup. But we need to prevent a race such as:
>  * a tx request that is supposed to create a scan cursor on a remote server is sent
>  * the tx is finished
>  * the cleanup thread sends a cleanup request
>  * the cleanup request reaches the remote server
>  * the tx request reaches the remote server and opens a cursor that will never 
> be closed.
> We need to ensure that the cleanup request is not sent until the coordinator 
> receives responses to all requests sent before the tx finish, and that no 
> requests are allowed after the tx finish. Something similar to the RW 
> in-flight request counter should be implemented for RO transactions.
> *Definition of done*
> The cleanup request from the cleanup thread is not sent until the coordinator 
> receives responses to all requests sent before the tx finish, and no requests 
> are allowed after the tx finish.





[jira] [Updated] (IGNITE-21618) In-flights for read-only transactions

2024-02-27 Thread Denis Chudov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Chudov updated IGNITE-21618:
--
Description: 
*Motivation*

We need a solid mechanism for closing read-only transactions' resources 
(cursors, etc.) on remote servers after the tx finishes. 

  was:TBD


> In-flights for read-only transactions
> -
>
> Key: IGNITE-21618
> URL: https://issues.apache.org/jira/browse/IGNITE-21618
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Denis Chudov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> We need a solid mechanism for closing read-only transactions' resources 
> (cursors, etc.) on remote servers after the tx finishes. 





[jira] [Created] (IGNITE-21618) In-flights for read-only transactions

2024-02-27 Thread Denis Chudov (Jira)
Denis Chudov created IGNITE-21618:
-

 Summary: In-flights for read-only transactions
 Key: IGNITE-21618
 URL: https://issues.apache.org/jira/browse/IGNITE-21618
 Project: Ignite
  Issue Type: Improvement
Reporter: Denis Chudov


TBD





[jira] [Updated] (IGNITE-21617) Update to Gradle 9

2024-02-27 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-21617:
---
Description: We need this to fully support Java 21 (with the currently used 
Gradle 7.x, builds fail on the TC).

> Update to Gradle 9
> --
>
> Key: IGNITE-21617
> URL: https://issues.apache.org/jira/browse/IGNITE-21617
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We need this to fully support Java 21 (with the currently used Gradle 7.x, 
> builds fail on the TC).





[jira] [Created] (IGNITE-21617) Update to Gradle 9

2024-02-27 Thread Roman Puchkovskiy (Jira)
Roman Puchkovskiy created IGNITE-21617:
--

 Summary: Update to Gradle 9
 Key: IGNITE-21617
 URL: https://issues.apache.org/jira/browse/IGNITE-21617
 Project: Ignite
  Issue Type: Improvement
Reporter: Roman Puchkovskiy
Assignee: Roman Puchkovskiy
 Fix For: 3.0.0-beta2








[jira] [Commented] (IGNITE-21550) ignite-cdc doesn't expose metrics via push metric exporters

2024-02-27 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821512#comment-17821512
 ] 

Ignite TC Bot commented on IGNITE-21550:


{panel:title=Branch: [pull/11248/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/11248/head] Base: [master] : New Tests 
(2)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}PDS 2{color} [[tests 
1|https://ci2.ignite.apache.org/viewLog.html?buildId=7762353]]
* {color:#013220}IgnitePdsTestSuite2: 
CdcPushMetricsExporterTest.testPushMetricsExporter - PASSED{color}

{color:#8b}Disk Page Compressions 2{color} [[tests 
1|https://ci2.ignite.apache.org/viewLog.html?buildId=7762397]]
* {color:#013220}IgnitePdsCompressionTestSuite2: 
CdcPushMetricsExporterTest.testPushMetricsExporter - PASSED{color}

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7762401buildTypeId=IgniteTests24Java8_RunAll]

> ignite-cdc doesn't expose metrics via push metric exporters
> ---
>
> Key: IGNITE-21550
> URL: https://issues.apache.org/jira/browse/IGNITE-21550
> Project: Ignite
>  Issue Type: Task
>Reporter: Sergey Korotkov
>Assignee: Sergey Korotkov
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For example, CDC-related metrics are not exposed via the 
> OpenCensusMetricExporterSpi.





[jira] [Updated] (IGNITE-21610) Upgrade tuples when inserting into indexes on a full state transfer

2024-02-27 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-21610:
-
Fix Version/s: 3.0.0-beta2

> Upgrade tuples when inserting into indexes on a full state transfer
> ---
>
> Key: IGNITE-21610
> URL: https://issues.apache.org/jira/browse/IGNITE-21610
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> By analogy with IGNITE-21591, we need to update tuples when inserting into 
> the index on a full state transfer.





[jira] [Assigned] (IGNITE-21610) Upgrade tuples when inserting into indexes on a full state transfer

2024-02-27 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko reassigned IGNITE-21610:


Assignee: Kirill Tkalenko

> Upgrade tuples when inserting into indexes on a full state transfer
> ---
>
> Key: IGNITE-21610
> URL: https://issues.apache.org/jira/browse/IGNITE-21610
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Assignee: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> By analogy with IGNITE-21591, we need to update tuples when inserting into 
> the index on a full state transfer.





[jira] [Comment Edited] (IGNITE-21578) ItDurableFinishTest#testWaitForCleanup failed with NPE

2024-02-27 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821438#comment-17821438
 ] 

Vladislav Pyatkov edited comment on IGNITE-21578 at 2/27/24 10:26 PM:
--

Likely, we do not have to look at this error, because the exception occurs in a 
thread pool (ForkJoinPool.commonPool) whose lifecycle is different from the 
cluster's. Currently, we use another pool.
Moreover, the test does not perform a full operation; this invocation was 
probably inherited from a previous test.
We will worry only if this reproduces again in the new setup (in the cluster 
pool).


was (Author: v.pyatkov):
Likely, we do not have to look at this error, because the exception occurs in a 
thread pool (ForkJoinPool.commonPool) whose lifecycle is different from the 
cluster's. Currently, we use another pool.
Moreover, the test does not perform a full operation; this invocation was 
probably inherited from a previous test.

>  ItDurableFinishTest#testWaitForCleanup failed with NPE
> ---
>
> Key: IGNITE-21578
> URL: https://issues.apache.org/jira/browse/IGNITE-21578
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Assignee: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_RunAllTests/7870395?expandBuildDeploymentsSection=false=false=false=true+Inspection=true=true]
> {code:java}
>   Caused by: java.lang.NullPointerException
>     at 
> org.apache.ignite.internal.tx.impl.TxManagerImpl.lambda$finishFull$3(TxManagerImpl.java:472)
>  ~[ignite-transactions-3.0.0-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.tx.impl.VolatileTxStateMetaStorage.lambda$updateMeta$0(VolatileTxStateMetaStorage.java:73)
>  ~[ignite-transactions-3.0.0-SNAPSHOT.jar:?]
>     at 
> java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1908) 
> ~[?:?]
>     at 
> org.apache.ignite.internal.tx.impl.VolatileTxStateMetaStorage.updateMeta(VolatileTxStateMetaStorage.java:72)
>  ~[ignite-transactions-3.0.0-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.tx.impl.TxManagerImpl.updateTxMeta(TxManagerImpl.java:455)
>  ~[ignite-transactions-3.0.0-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.tx.impl.TxManagerImpl.finishFull(TxManagerImpl.java:472)
>  ~[ignite-transactions-3.0.0-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.lambda$postEnlist$13(InternalTableImpl.java:593)
>  ~[ignite-table-3.0.0-SNAPSHOT.jar:?] {code}
> It seems the reason is that the old meta may be null in case of an exception:
> {code:java}
>     public void finishFull(HybridTimestampTracker timestampTracker, UUID 
> txId, boolean commit) {
>         ...
>         updateTxMeta(txId, old -> new TxStateMeta(finalState, 
> old.txCoordinatorId(), old.commitPartitionId(), old.commitTimestamp()));
>         ...
>     }
> {code}
> {code:java}
>         return fut.handle((BiFunction>) 
> (r, e) -> {
>             if (full) { // Full txn is already finished remotely. Just update 
> local state.
>                 txManager.finishFull(observableTimestampTracker, tx0.id(), e 
> == null);{code}
>  
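
The NPE above can be modelled with plain `ConcurrentHashMap.compute`, whose remapping function receives null when the key is absent; the classes below are simplified stand-ins, not Ignite's actual types:

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Simplified model of the failing update (not Ignite's actual classes):
// ConcurrentHashMap.compute passes null as the old value when the key is
// absent, so the remapping function must tolerate a null previous meta.
class TxMetaDemo {
    static class Meta {
        final String state;
        final String coordinatorId;

        Meta(String state, String coordinatorId) {
            this.state = state;
            this.coordinatorId = coordinatorId;
        }
    }

    static final ConcurrentHashMap<UUID, Meta> METAS = new ConcurrentHashMap<>();

    // Buggy shape from the report: dereferences `old` unconditionally,
    // which throws NPE when no meta exists yet for txId.
    static Meta updateUnsafe(UUID txId, String finalState) {
        return METAS.compute(txId, (id, old) ->
                new Meta(finalState, old.coordinatorId));
    }

    // Guarded shape: fall back to a null field when there is no previous meta.
    static Meta updateSafe(UUID txId, String finalState) {
        return METAS.compute(txId, (id, old) ->
                new Meta(finalState, old == null ? null : old.coordinatorId));
    }
}
```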





[jira] [Commented] (IGNITE-21578) ItDurableFinishTest#testWaitForCleanup failed with NPE

2024-02-27 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821438#comment-17821438
 ] 

Vladislav Pyatkov commented on IGNITE-21578:


Likely, we do not have to look at this error, because the exception occurs in a 
thread pool (ForkJoinPool.commonPool) whose lifecycle is different from the 
cluster's. Currently, we use another pool.
Moreover, the test does not perform a full operation; this invocation was 
probably inherited from a previous test.

>  ItDurableFinishTest#testWaitForCleanup failed with NPE
> ---
>
> Key: IGNITE-21578
> URL: https://issues.apache.org/jira/browse/IGNITE-21578
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Assignee: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_RunAllTests/7870395?expandBuildDeploymentsSection=false=false=false=true+Inspection=true=true]
> {code:java}
>   Caused by: java.lang.NullPointerException
>     at 
> org.apache.ignite.internal.tx.impl.TxManagerImpl.lambda$finishFull$3(TxManagerImpl.java:472)
>  ~[ignite-transactions-3.0.0-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.tx.impl.VolatileTxStateMetaStorage.lambda$updateMeta$0(VolatileTxStateMetaStorage.java:73)
>  ~[ignite-transactions-3.0.0-SNAPSHOT.jar:?]
>     at 
> java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1908) 
> ~[?:?]
>     at 
> org.apache.ignite.internal.tx.impl.VolatileTxStateMetaStorage.updateMeta(VolatileTxStateMetaStorage.java:72)
>  ~[ignite-transactions-3.0.0-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.tx.impl.TxManagerImpl.updateTxMeta(TxManagerImpl.java:455)
>  ~[ignite-transactions-3.0.0-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.tx.impl.TxManagerImpl.finishFull(TxManagerImpl.java:472)
>  ~[ignite-transactions-3.0.0-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.lambda$postEnlist$13(InternalTableImpl.java:593)
>  ~[ignite-table-3.0.0-SNAPSHOT.jar:?] {code}
> It seems the reason is that the old meta may be null in case of an exception:
> {code:java}
>     public void finishFull(HybridTimestampTracker timestampTracker, UUID 
> txId, boolean commit) {
>         ...
>         updateTxMeta(txId, old -> new TxStateMeta(finalState, 
> old.txCoordinatorId(), old.commitPartitionId(), old.commitTimestamp()));
>         ...
>     }
> {code}
> {code:java}
>         return fut.handle((BiFunction>) 
> (r, e) -> {
>             if (full) { // Full txn is already finished remotely. Just update 
> local state.
>                 txManager.finishFull(observableTimestampTracker, tx0.id(), e 
> == null);{code}
>  





[jira] [Updated] (IGNITE-21578) ItDurableFinishTest#testWaitForCleanup failed with NPE

2024-02-27 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-21578:
---
Summary:  ItDurableFinishTest#testWaitForCleanup failed with NPE  (was: 
ItDurableFinishTest#testCoordinatorMissedResponse failed with NPE)

>  ItDurableFinishTest#testWaitForCleanup failed with NPE
> ---
>
> Key: IGNITE-21578
> URL: https://issues.apache.org/jira/browse/IGNITE-21578
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Assignee: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_RunAllTests/7870395?expandBuildDeploymentsSection=false=false=false=true+Inspection=true=true]
> {code:java}
>   Caused by: java.lang.NullPointerException
>     at 
> org.apache.ignite.internal.tx.impl.TxManagerImpl.lambda$finishFull$3(TxManagerImpl.java:472)
>  ~[ignite-transactions-3.0.0-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.tx.impl.VolatileTxStateMetaStorage.lambda$updateMeta$0(VolatileTxStateMetaStorage.java:73)
>  ~[ignite-transactions-3.0.0-SNAPSHOT.jar:?]
>     at 
> java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1908) 
> ~[?:?]
>     at 
> org.apache.ignite.internal.tx.impl.VolatileTxStateMetaStorage.updateMeta(VolatileTxStateMetaStorage.java:72)
>  ~[ignite-transactions-3.0.0-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.tx.impl.TxManagerImpl.updateTxMeta(TxManagerImpl.java:455)
>  ~[ignite-transactions-3.0.0-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.tx.impl.TxManagerImpl.finishFull(TxManagerImpl.java:472)
>  ~[ignite-transactions-3.0.0-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.lambda$postEnlist$13(InternalTableImpl.java:593)
>  ~[ignite-table-3.0.0-SNAPSHOT.jar:?] {code}
> It seems the reason is that the old meta may be null in case of an exception:
> {code:java}
>     public void finishFull(HybridTimestampTracker timestampTracker, UUID 
> txId, boolean commit) {
>         ...
>         updateTxMeta(txId, old -> new TxStateMeta(finalState, 
> old.txCoordinatorId(), old.commitPartitionId(), old.commitTimestamp()));
>         ...
>     }
> {code}
> {code:java}
>         return fut.handle((BiFunction>) 
> (r, e) -> {
>             if (full) { // Full txn is already finished remotely. Just update 
> local state.
>                 txManager.finishFull(observableTimestampTracker, tx0.id(), e 
> == null);{code}
>  





[jira] [Updated] (IGNITE-21603) Incorrect backward connection check with loopback address.

2024-02-27 Thread Vladimir Steshin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Steshin updated IGNITE-21603:
--
Description: 
We may skip backward connection check of a previous node if it has the same 
loopback address as the current node.

Consider:
# Neither _IgniteConfiguration#setLocalHost()_ or 
_TcpDiscoverySpi#setLocalAddress()_ is set. Or the localhost parameter is  
"_0.0.0.0_". 
# Nodes start on different hosts. All the available host addresses are resolved.
# Among the other addresses, all nodes get the loopback address 
"127.0.0.1:47500" (47500 is the default tcp discovery port).
# Cluster starts and works. 
# Some node N (A) decides that the connection to node N+1 (B) is lost and tries 
to connect to node N+2 (C) and sends _TcpDiscoveryHandshakeRequest_.
# Before C accepts incoming A's connection, it decides to check B and pings it 
with _ServerImpl#checkConnection(List addrs, int timeout)_
# Around here, the network is restored, and A can now connect to B anew.
# "_127.0.0.1:47500_" is last in _List_ addrs by 
_IgniteUtils#inetAddressesComparator(boolean sameHost)_. But the connect 
attempts in _checkConnection(...)_ are parallel. "_127.0.0.1:47500_" answers 
first.
# C sees it can connect to "_127.0.0.1:47500_" and chooses it as the alive 
address of B. Other pings to rest of B's addresses are ignored.
# But "_127.0.0.1:47500_" is one of C's addresses. C realizes it pinged itself 
and marks that B is not reachable:
{code:java}
 // If local node was able to connect to previous, confirm that it's 
alive.
 ok = liveAddr != null && (!liveAddr.getAddress().isLoopbackAddress() 
|| !locNode.socketAddresses().contains(liveAddr));
{code}
# C accepts connection from A and answers with 
_TcpDiscoveryHandshakeResponse#previousNodeAlive() == false_
# B is ok now, but A connects to C and B is kicked from the ring.

The problem is that C pings itself with B's address "_127.0.0.1:47500_"

  was:
We may skip backward connection check to a previous node if it has the same 
loopback address as the current node.

Consider:
# Neither _IgniteConfiguration#setLocalHost()_ or 
_TcpDiscoverySpi#setLocalAddress()_ is set. Or the localhost parameter is set 
to "_0.0.0.0_". 
# Nodes start on different hosts. All the available host addresses are resolved 
and
# Among the other addresses, all nodes get the loopback address 
"127.0.0.1:47500" (47500 is the default tcp discovery port).
# Cluster starts and works. But 
# Some node N (A) decides the connection to node N+1 (B) is lost and tries to 
connect to node N+2 (C) and sends _TcpDiscoveryHandshakeRequest_.
# Before C accepts incoming A's connection, it decides to check B and pings it 
with _ServerImpl#checkConnection(List addrs, int timeout)_
# Around here, the network is restored, and A can now connect to B anew.
# "_127.0.0.1:47500_" is last in _List_ addrs by 
_IgniteUtils#inetAddressesComparator(boolean sameHost)_. But the connect 
attempts in _checkConnection(...)_ are parallel. "_127.0.0.1:47500_" answers 
first.
# C sees it can connect to "_127.0.0.1:47500_" and chooses it as the alive 
address of B. Other pings to rest of B's addresses are ignored.
# But "_127.0.0.1:47500_" is one of C's addresses. C realizes it pinged itself 
and decides that B is not reachable:
{code:java}
 // If local node was able to connect to previous, confirm that it's 
alive.
 ok = liveAddr != null && (!liveAddr.getAddress().isLoopbackAddress() 
|| !locNode.socketAddresses().contains(liveAddr));
{code}
# C accepts connection from A and answers with 
_TcpDiscoveryHandshakeResponse#previousNodeAlive() == false_
# But B is ok now. But A connects to C and B is kicked from the ring.

The problem is that C ping itself by B's address "_127.0.0.1:47500_"
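
The quoted check can be exercised in isolation; the sketch below reproduces its logic with standard JDK types (the class and method names are invented):

```java
import java.net.InetSocketAddress;
import java.util.Set;

// Illustrative reproduction of the check (not Ignite's actual code): a
// loopback liveAddr only proves the previous node alive if it is NOT also
// one of the local node's own addresses -- otherwise the node may have
// just pinged itself.
class BackwardCheckDemo {
    static boolean previousNodeAlive(InetSocketAddress liveAddr,
                                     Set<InetSocketAddress> localAddrs) {
        return liveAddr != null
                && (!liveAddr.getAddress().isLoopbackAddress()
                        || !localAddrs.contains(liveAddr));
    }
}
```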


> Incorrect backward connection check with loopback address.
> --
>
> Key: IGNITE-21603
> URL: https://issues.apache.org/jira/browse/IGNITE-21603
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Minor
>  Labels: ise
> Fix For: 2.17
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We may skip backward connection check of a previous node if it has the same 
> loopback address as the current node.
> Consider:
> # Neither _IgniteConfiguration#setLocalHost()_ or 
> _TcpDiscoverySpi#setLocalAddress()_ is set. Or the localhost parameter is  
> "_0.0.0.0_". 
> # Nodes start on different hosts. All the available host addresses are 
> resolved.
> # Among the other addresses, all nodes get the loopback address 
> "127.0.0.1:47500" (47500 is the default tcp discovery port).
> # Cluster starts and works. 
> # Some node N (A) decides that the connection to node N+1 (B) is lost and 
> tries to 

[jira] [Commented] (IGNITE-21606) Missing tuple update during filtering index scans

2024-02-27 Thread Roman Puchkovskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821277#comment-17821277
 ] 

Roman Puchkovskiy commented on IGNITE-21606:


The patch looks good to me

> Missing tuple update during filtering index scans
> -
>
> Key: IGNITE-21606
> URL: https://issues.apache.org/jira/browse/IGNITE-21606
> Project: Ignite
>  Issue Type: Bug
>Reporter: Kirill Tkalenko
>Assignee: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Consider the following test scenario:
> # Create a table with two columns.
> # We insert into this table.
> # Add a column to the table.
> # Create an index on the new column.
> # We make a “select” with a condition on the new column.
> We get an error:
> {noformat}
> [2024-02-26T11:30:37,822][WARN 
> ][%ibiont_n_0%scan-query-executor-0][ReplicaManager] Failed to process 
> replica request [request=ReadOnlyScanRetrieveBatchReplicaRequestImpl 
> [batchSize=100, columnsToInclude={0, 1, 2, 3}, exactKey=null, flags=3, 
> groupId=7_part_0, indexToUse=10, lowerBoundPrefix=BinaryTupleMessageImpl 
> [elementCount=1, tuple=java.nio.HeapByteBuffer[pos=0 lim=9 cap=9]], 
> readTimestampLong=111996845269843968, scanId=1, 
> transactionId=018de489-92b1--41c5-8f650001, 
> upperBoundPrefix=BinaryTupleMessageImpl [elementCount=1, 
> tuple=java.nio.HeapByteBuffer[pos=0 lim=9 cap=9.
>  java.util.concurrent.CompletionException: java.lang.AssertionError: 
> schemaVersion=1, column=SURNAME
>   at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314)
>  [?:?]
>   at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:319)
>  [?:?]
>   at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1081)
>  [?:?]
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
>  [?:?]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  [?:?]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  [?:?]
>   at java.lang.Thread.run(Thread.java:834) [?:?]
> Caused by: java.lang.AssertionError: schemaVersion=1, column=SURNAME
>   at 
> org.apache.ignite.internal.index.IndexManager$TableRowToIndexKeyConverter.resolveColumnIndexes(IndexManager.java:289)
>  ~[main/:?]
>   at 
> org.apache.ignite.internal.index.IndexManager$TableRowToIndexKeyConverter.createConverter(IndexManager.java:276)
>  ~[main/:?]
>   at 
> org.apache.ignite.internal.index.IndexManager$TableRowToIndexKeyConverter.converter(IndexManager.java:262)
>  ~[main/:?]
>   at 
> org.apache.ignite.internal.index.IndexManager$TableRowToIndexKeyConverter.extractColumns(IndexManager.java:249)
>  ~[main/:?]
>   at 
> org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener.indexRowMatches(PartitionReplicaListener.java:1369)
>  ~[main/:?]
>   at 
> org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener.lambda$continueReadOnlyIndexScan$51(PartitionReplicaListener.java:1295)
>  ~[main/:?]
>   at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
>  ~[?:?]
>   ... 4 more
> {noformat}
> This happens because when filtering index scans (IGNITE-18518), we do not 
> update the tuple to the required schema version.
> See the TODOs in the code.
> We also need to understand which schema version to use to update the tuple.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21606) Missing tuple update during filtering index scans

2024-02-27 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-21606:
-
Reviewer: Roman Puchkovskiy

> Missing tuple update during filtering index scans
> -
>
> Key: IGNITE-21606
> URL: https://issues.apache.org/jira/browse/IGNITE-21606
> Project: Ignite
>  Issue Type: Bug
>Reporter: Kirill Tkalenko
>Assignee: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Consider the following test scenario:
> # Create a table with two columns.
> # We insert into this table.
> # Add a column to the table.
> # Create an index on the new column.
> # We make a “select” with a condition on the new column.
> We get an error:
> {noformat}
> [2024-02-26T11:30:37,822][WARN 
> ][%ibiont_n_0%scan-query-executor-0][ReplicaManager] Failed to process 
> replica request [request=ReadOnlyScanRetrieveBatchReplicaRequestImpl 
> [batchSize=100, columnsToInclude={0, 1, 2, 3}, exactKey=null, flags=3, 
> groupId=7_part_0, indexToUse=10, lowerBoundPrefix=BinaryTupleMessageImpl 
> [elementCount=1, tuple=java.nio.HeapByteBuffer[pos=0 lim=9 cap=9]], 
> readTimestampLong=111996845269843968, scanId=1, 
> transactionId=018de489-92b1--41c5-8f650001, 
> upperBoundPrefix=BinaryTupleMessageImpl [elementCount=1, 
> tuple=java.nio.HeapByteBuffer[pos=0 lim=9 cap=9.
>  java.util.concurrent.CompletionException: java.lang.AssertionError: 
> schemaVersion=1, column=SURNAME
>   at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314)
>  [?:?]
>   at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:319)
>  [?:?]
>   at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1081)
>  [?:?]
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
>  [?:?]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  [?:?]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  [?:?]
>   at java.lang.Thread.run(Thread.java:834) [?:?]
> Caused by: java.lang.AssertionError: schemaVersion=1, column=SURNAME
>   at 
> org.apache.ignite.internal.index.IndexManager$TableRowToIndexKeyConverter.resolveColumnIndexes(IndexManager.java:289)
>  ~[main/:?]
>   at 
> org.apache.ignite.internal.index.IndexManager$TableRowToIndexKeyConverter.createConverter(IndexManager.java:276)
>  ~[main/:?]
>   at 
> org.apache.ignite.internal.index.IndexManager$TableRowToIndexKeyConverter.converter(IndexManager.java:262)
>  ~[main/:?]
>   at 
> org.apache.ignite.internal.index.IndexManager$TableRowToIndexKeyConverter.extractColumns(IndexManager.java:249)
>  ~[main/:?]
>   at 
> org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener.indexRowMatches(PartitionReplicaListener.java:1369)
>  ~[main/:?]
>   at 
> org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener.lambda$continueReadOnlyIndexScan$51(PartitionReplicaListener.java:1295)
>  ~[main/:?]
>   at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
>  ~[?:?]
>   ... 4 more
> {noformat}
> This happens because when filtering index scans (IGNITE-18518), we do not 
> update the tuple to the required schema version.
> See the TODOs in the code.
> We also need to understand which schema version to use to update the tuple.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21603) Incorrect backward connection check with loopback address.

2024-02-27 Thread Vladimir Steshin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Steshin updated IGNITE-21603:
--
Fix Version/s: 2.17

> Incorrect backward connection check with loopback address.
> --
>
> Key: IGNITE-21603
> URL: https://issues.apache.org/jira/browse/IGNITE-21603
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Minor
>  Labels: ise
> Fix For: 2.17
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We may skip backward connection check to a previous node if it has the same 
> loopback address as the current node.
> Consider:
> # Neither _IgniteConfiguration#setLocalHost()_ nor 
> _TcpDiscoverySpi#setLocalAddress()_ is set, or the localhost parameter is set 
> to "_0.0.0.0_".
> # Nodes start on different hosts. All the available host addresses are 
> resolved.
> # Among the other addresses, all nodes get the loopback address 
> "127.0.0.1:47500" (47500 is the default tcp discovery port).
> # Cluster starts and works.
> # Some node N (A) decides the connection to node N+1 (B) is lost and tries to 
> connect to node N+2 (C) and sends _TcpDiscoveryHandshakeRequest_.
> # Before C accepts A's incoming connection, it decides to check B and pings 
> it with _ServerImpl#checkConnection(List<InetSocketAddress> addrs, int 
> timeout)_
> # Around here, the network is restored, and A can now connect to B anew.
> # "_127.0.0.1:47500_" is last in the _List<InetSocketAddress>_ addrs ordered by 
> _IgniteUtils#inetAddressesComparator(boolean sameHost)_. But the connect 
> attempts in _checkConnection(...)_ are parallel, and "_127.0.0.1:47500_" answers 
> first.
> # C sees it can connect to "_127.0.0.1:47500_" and chooses it as the alive 
> address of B. Other pings to the rest of B's addresses are ignored.
> # But "_127.0.0.1:47500_" is one of C's addresses. C realizes it pinged 
> itself and decides that B is not reachable:
> {code:java}
>  // If local node was able to connect to previous, confirm that it's 
> alive.
>  ok = liveAddr != null && (!liveAddr.getAddress().isLoopbackAddress() 
> || !locNode.socketAddresses().contains(liveAddr));
> {code}
> # C accepts connection from A and answers with 
> _TcpDiscoveryHandshakeResponse#previousNodeAlive() == false_
> # But B is actually fine by now. Nevertheless, A connects to C and B is kicked 
> out of the ring.
> The problem is that C pings itself via B's address "_127.0.0.1:47500_".



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-15889) Add 'contains' method to Record API

2024-02-27 Thread Mikhail Efremov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-15889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Efremov updated IGNITE-15889:
-
Summary: Add 'contains' method to Record API  (was: Add 'contains' method 
to Record  API)

> Add 'contains' method to Record API
> ---
>
> Key: IGNITE-15889
> URL: https://issues.apache.org/jira/browse/IGNITE-15889
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Andrey Mashenkov
>Assignee: Mikhail Efremov
>Priority: Major
>  Labels: ignite-3, newbie
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> There is no method in the Record API with the same semantics as the 'contains' 
> method in KV views.
> Add *RecordView.contains* similar to *KeyValueView.contains*.
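A minimal sketch of the intended semantics (the interface below is a toy stand-in, not Ignite's actual RecordView; a production implementation would check key existence without fetching and deserializing the whole record):

```java
import java.util.Map;

// Toy stand-in for the Record view API; Ignite's real interface differs.
interface RecordView<R> {
    R get(R keyRec);

    // Proposed addition: true iff a record with the given key exists,
    // matching the semantics of KeyValueView.contains. This naive default
    // delegates to get(); a real implementation should avoid materializing
    // the full record just to test for existence.
    default boolean contains(R keyRec) {
        return get(keyRec) != null;
    }
}

class RecordViewDemo {
    public static void main(String[] args) {
        Map<String, String> store = Map.of("k1", "v1");
        RecordView<String> view = key -> store.get(key); // map-backed toy view
        System.out.println(view.contains("k1")); // true
        System.out.println(view.contains("k2")); // false
    }
}
```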



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21607) Code cleanup

2024-02-27 Thread Nikita Amelchev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikita Amelchev updated IGNITE-21607:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Code cleanup
> 
>
> Key: IGNITE-21607
> URL: https://issues.apache.org/jira/browse/IGNITE-21607
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Minor
> Fix For: 2.17
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Fixing of typos, abbreviations, unused code and untyped generics.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21346) Removal of MVCC code from GridCacheEntryEx and IgniteCacheOffheapManager

2024-02-27 Thread Ilya Shishkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821199#comment-17821199
 ] 

Ilya Shishkov commented on IGNITE-21346:


[~av], thank you a lot for the review!

> Removal of MVCC code from GridCacheEntryEx and IgniteCacheOffheapManager
> 
>
> Key: IGNITE-21346
> URL: https://issues.apache.org/jira/browse/IGNITE-21346
> Project: Ignite
>  Issue Type: Sub-task
>  Components: mvcc
>Reporter: Ilya Shishkov
>Assignee: Ilya Shishkov
>Priority: Minor
>  Labels: ise
> Fix For: 2.17
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Remove all MVCC code from GridCacheEntryEx and IgniteCacheOffheapManager and 
> their successors.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-21607) Code cleanup

2024-02-27 Thread Nikita Amelchev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikita Amelchev resolved IGNITE-21607.
--
Fix Version/s: 2.17
   Resolution: Fixed

Merged into the master.

[~vladsz83], thank you for the contribution.

> Code cleanup
> 
>
> Key: IGNITE-21607
> URL: https://issues.apache.org/jira/browse/IGNITE-21607
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Minor
> Fix For: 2.17
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Fixing of typos, abbreviations, unused code and untyped generics.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-15889) Add 'contains' method to Record API

2024-02-27 Thread Mikhail Efremov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-15889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Efremov updated IGNITE-15889:
-
Summary: Add 'contains' method to Record  API  (was: Add 'contains' method 
to Record API)

> Add 'contains' method to Record  API
> 
>
> Key: IGNITE-15889
> URL: https://issues.apache.org/jira/browse/IGNITE-15889
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Andrey Mashenkov
>Assignee: Mikhail Efremov
>Priority: Major
>  Labels: ignite-3, newbie
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> There is no method in the Record API with the same semantics as the 'contains' 
> method in KV views.
> Add *RecordView.contains* similar to *KeyValueView.contains*.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21387) Recovery is not possible, if node have no needed storage profile

2024-02-27 Thread Mikhail Efremov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Efremov updated IGNITE-21387:
-
Summary: Recovery is not possible, if node have no needed storage profile  
(was: Recovery is not possible, if  node have no needed storage profile)

> Recovery is not possible, if node have no needed storage profile
> 
>
> Key: IGNITE-21387
> URL: https://issues.apache.org/jira/browse/IGNITE-21387
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Gusakov
>Assignee: Mikhail Efremov
>Priority: Major
>  Labels: ignite-3
>
> Looks like every table tries to create its storages on the recovering node, even 
> if it shouldn't be there, because of the zone storage profile filter.
> The issue is reproduced by 
> ItDistributionZonesFiltersTest#testFilteredDataNodesPropagatedToStable, so the 
> test must be enabled or reworked in this ticket.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21387) Recovery is not possible, if node have no needed storage profile

2024-02-27 Thread Mikhail Efremov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Efremov updated IGNITE-21387:
-
Summary: Recovery is not possible, if  node have no needed storage profile  
(was: Recovery is not possible, if node have no needed storage profile)

> Recovery is not possible, if  node have no needed storage profile
> -
>
> Key: IGNITE-21387
> URL: https://issues.apache.org/jira/browse/IGNITE-21387
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Gusakov
>Assignee: Mikhail Efremov
>Priority: Major
>  Labels: ignite-3
>
> Looks like every table tries to create its storages on the recovering node, even 
> if it shouldn't be there, because of the zone storage profile filter.
> The issue is reproduced by 
> ItDistributionZonesFiltersTest#testFilteredDataNodesPropagatedToStable, so the 
> test must be enabled or reworked in this ticket.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-21558) Sql. Remove ExecutionContext dependency from ExpressionFactory.

2024-02-27 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov reassigned IGNITE-21558:
-

Assignee: (was: Maksim Zhuravkov)

> Sql. Remove ExecutionContext dependency from ExpressionFactory.
> ---
>
> Key: IGNITE-21558
> URL: https://issues.apache.org/jira/browse/IGNITE-21558
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> At the moment ExpressionFactory depends on `ExecutionContext`, so it is not 
> possible to compile expressions prior to execution. 
> Let's remove the `ExecutionContext` dependency from `ExpressionFactory` and 
> make it a standalone component that is created per SQL engine instance. In 
> addition, the static cache of compiled expressions should also be removed.
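The proposed change can be sketched as replacing a JVM-wide static cache with one scoped to the factory instance (all names here are illustrative, not Ignite's actual classes):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;
import java.util.function.Supplier;

// Illustrative sketch: one factory (and thus one cache) per SQL engine
// instance, instead of a static, JVM-wide expression cache.
class ExpressionFactorySketch {
    private final Map<String, Function<Object[], Object>> cache = new ConcurrentHashMap<>();

    // Compilation no longer needs an ExecutionContext: the compiled function
    // receives its runtime inputs as arguments at execution time instead,
    // so expressions can be compiled (and cached) ahead of execution.
    Function<Object[], Object> compile(String expression,
                                       Supplier<Function<Object[], Object>> compiler) {
        return cache.computeIfAbsent(expression, key -> compiler.get());
    }
}

class FactoryDemo {
    public static void main(String[] args) {
        ExpressionFactorySketch factory = new ExpressionFactorySketch();
        Function<Object[], Object> sum =
                factory.compile("a + b", () -> row -> (int) row[0] + (int) row[1]);
        System.out.println(sum.apply(new Object[]{2, 3})); // 5
    }
}
```

Because the cache dies with the factory (and the factory with the engine instance), stale compiled expressions cannot outlive the engine the way entries in a static cache can.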



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21613) Make CheckpointManagerTest run on Java 21

2024-02-27 Thread Roman Puchkovskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821169#comment-17821169
 ] 

Roman Puchkovskiy commented on IGNITE-21613:


Thanks!

> Make CheckpointManagerTest run on Java 21
> -
>
> Key: IGNITE-21613
> URL: https://issues.apache.org/jira/browse/IGNITE-21613
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The test mocks the ByteBuffer class, which does not work on Java 21. We can try 
> switching to spying instead of mocking.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-21558) Sql. Remove ExecutionContext dependency from ExpressionFactory.

2024-02-27 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov reassigned IGNITE-21558:
-

Assignee: Maksim Zhuravkov

> Sql. Remove ExecutionContext dependency from ExpressionFactory.
> ---
>
> Key: IGNITE-21558
> URL: https://issues.apache.org/jira/browse/IGNITE-21558
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> At the moment ExpressionFactory depends on `ExecutionContext`, so it is not 
> possible to compile expressions prior to execution. 
> Let's remove the `ExecutionContext` dependency from `ExpressionFactory` and 
> make it a standalone component that is created per SQL engine instance. In 
> addition, the static cache of compiled expressions should also be removed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21616) Test ClientLoggingTest.testBasicLogging is flaky

2024-02-27 Thread Igor Sapego (Jira)
Igor Sapego created IGNITE-21616:


 Summary: Test ClientLoggingTest.testBasicLogging is flaky
 Key: IGNITE-21616
 URL: https://issues.apache.org/jira/browse/IGNITE-21616
 Project: Ignite
  Issue Type: Bug
  Components: thin client
Reporter: Igor Sapego


The following test is flaky on TC: 
https://ci.ignite.apache.org/test/8764679646897676088?currentProjectId=ApacheIgnite3xGradle_Test_RunUnitTests_virtual=true



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (IGNITE-21615) Update the config updated message to correctly reflect the need to restart

2024-02-27 Thread Igor Gusev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821121#comment-17821121
 ] 

Igor Gusev edited comment on IGNITE-21615 at 2/27/24 9:41 AM:
--

Suggested message:
"Node configuration updated. Restart the node to apply changes."

We could also keep track of unapplied changes and display a warning when the user 
views the config.


was (Author: igusev):
Suggested message:
"Node configuration updated. Restart the node to apply changes."

> Update the config updated message to correctly reflect the need to restart
> --
>
> Key: IGNITE-21615
> URL: https://issues.apache.org/jira/browse/IGNITE-21615
> Project: Ignite
>  Issue Type: Task
>Reporter: Igor Gusev
>Priority: Major
>  Labels: ignite-3
>
> Currently, a number of configuration updates require a node restart to apply, 
> but the CLI message remains the same regardless. We should provide users with 
> more detailed information on when they need to restart the node.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21615) Update the config updated message to correctly reflect the need to restart

2024-02-27 Thread Igor Gusev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821121#comment-17821121
 ] 

Igor Gusev commented on IGNITE-21615:
-

Suggested message:
"Node configuration updated. Restart the node to apply changes."

> Update the config updated message to correctly reflect the need to restart
> --
>
> Key: IGNITE-21615
> URL: https://issues.apache.org/jira/browse/IGNITE-21615
> Project: Ignite
>  Issue Type: Task
>Reporter: Igor Gusev
>Priority: Major
>  Labels: ignite-3
>
> Currently, a number of configuration updates require a node restart to apply, 
> but the CLI message remains the same regardless. We should provide users with 
> more detailed information on when they need to restart the node.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21615) Update the config updated message to correctly reflect the need to restart

2024-02-27 Thread Igor Gusev (Jira)
Igor Gusev created IGNITE-21615:
---

 Summary: Update the config updated message to correctly reflect 
the need to restart
 Key: IGNITE-21615
 URL: https://issues.apache.org/jira/browse/IGNITE-21615
 Project: Ignite
  Issue Type: Task
Reporter: Igor Gusev


Currently, a number of configuration updates require a node restart to apply, but 
the CLI message remains the same regardless. We should provide users with more 
detailed information on when they need to restart the node.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21613) Make CheckpointManagerTest run on Java 21

2024-02-27 Thread Kirill Tkalenko (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1782#comment-1782
 ] 

Kirill Tkalenko commented on IGNITE-21613:
--

Looks good.

> Make CheckpointManagerTest run on Java 21
> -
>
> Key: IGNITE-21613
> URL: https://issues.apache.org/jira/browse/IGNITE-21613
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The test mocks the ByteBuffer class, which does not work on Java 21. We can try 
> switching to spying instead of mocking.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-21219) Write memory leak tests

2024-02-27 Thread Vadim Pakhnushev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Pakhnushev reassigned IGNITE-21219:
-

Assignee: Vadim Pakhnushev  (was: Dmitry Baranov)

> Write memory leak tests
> ---
>
> Key: IGNITE-21219
> URL: https://issues.apache.org/jira/browse/IGNITE-21219
> Project: Ignite
>  Issue Type: Improvement
>  Components: compute
>Reporter: Aleksandr
>Assignee: Vadim Pakhnushev
>Priority: Major
>  Labels: ignite-3
>
> The compute job handling logic is getting hard enough to follow that tracking 
> every reference we hold is difficult, so we could easily introduce a memory 
> leak. I wonder if we have any microbenchmarks that prove the absence of memory 
> leaks in the compute component.
> Important note: leaving and joining of several cluster nodes (candidates, 
> workers) should be simulated as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21614) Incorrect BindException handling in ClientHandlerModule

2024-02-27 Thread Philipp Shergalis (Jira)
Philipp Shergalis created IGNITE-21614:
--

 Summary: Incorrect BindException handling in ClientHandlerModule
 Key: IGNITE-21614
 URL: https://issues.apache.org/jira/browse/IGNITE-21614
 Project: Ignite
  Issue Type: Bug
  Components: networking
Reporter: Philipp Shergalis


[https://github.com/apache/ignite-3/blob/main/modules/client-handler/src/main/java/org/apache/ignite/client/handler/ClientHandlerModule.java#L327]

 

Any bind exception is reported as "PORT_IN_USE_ERR", although the failure could 
also be caused by a bad or unassignable bind address.
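The ambiguity is easy to demonstrate: both a port conflict and an unassignable address surface as the same java.net.BindException, so a catch-all mapping to PORT_IN_USE_ERR mislabels the second case. (203.0.113.1 below is a TEST-NET address assumed not to be configured on the local machine.)

```java
import java.io.IOException;
import java.net.BindException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class BindErrorsDemo {
    public static void main(String[] args) throws IOException {
        try (ServerSocket first = new ServerSocket()) {
            first.bind(new InetSocketAddress("127.0.0.1", 0)); // grab any free port

            // Case 1: the port really is in use.
            try (ServerSocket second = new ServerSocket()) {
                second.bind(new InetSocketAddress("127.0.0.1", first.getLocalPort()));
            } catch (BindException e) {
                System.out.println("port conflict: " + e.getMessage());
            }

            // Case 2: the address is not assigned to any local interface --
            // the same exception type, but not a port conflict at all.
            try (ServerSocket third = new ServerSocket()) {
                third.bind(new InetSocketAddress("203.0.113.1", 0));
            } catch (BindException e) {
                System.out.println("bad address: " + e.getMessage());
            }
        }
    }
}
```

Distinguishing the two would require inspecting the exception message or, better, validating the configured address before binding.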



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-21387) Recovery is not possible, if node have no needed storage profile

2024-02-27 Thread Vladimir Pligin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Pligin reassigned IGNITE-21387:


Assignee: Mikhail Efremov

> Recovery is not possible, if node have no needed storage profile
> 
>
> Key: IGNITE-21387
> URL: https://issues.apache.org/jira/browse/IGNITE-21387
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Gusakov
>Assignee: Mikhail Efremov
>Priority: Major
>  Labels: ignite-3
>
> Looks like every table tries to create its storages on the recovering node, even 
> if it shouldn't be there, because of the zone storage profile filter.
> The issue is reproduced by 
> ItDistributionZonesFiltersTest#testFilteredDataNodesPropagatedToStable, so the 
> test must be enabled or reworked in this ticket.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)