[jira] [Updated] (IGNITE-20718) DumpIterator for primary copies

2023-10-23 Thread Nikolay Izhikov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Izhikov updated IGNITE-20718:
-
Issue Type: Improvement  (was: Bug)

> DumpIterator for primary copies
> ---
>
> Key: IGNITE-20718
> URL: https://issues.apache.org/jira/browse/IGNITE-20718
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Nikolay Izhikov
>Assignee: Nikolay Izhikov
>Priority: Major
>  Labels: IEP-109, ise
>
> Primary copies of partitions don't contain entry duplicates, because entries 
> can be filtered by version at creation time.
> So, during iteration, we don't need to track possible duplicates inside the 
> {{DumpedPartitionIterator}} implementation in the {{partKeys}} variable. This 
> will decrease memory consumption and improve the performance of dump iteration 
> and, therefore, of dump check procedures.
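
A minimal sketch of the idea (hypothetical names and types; the real 
{{DumpedPartitionIterator}} API differs):
{code:java}
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: a primary partition copy holds no duplicates by
// construction, so the per-partition "seen keys" set (partKeys) can be
// dropped, saving one set entry per dumped key.
class DumpIterators {
    static <K> Iterator<K> dumpIterator(List<K> rawEntries, boolean primaryCopy) {
        if (primaryCopy)
            return rawEntries.iterator(); // no duplicate tracking: O(1) extra memory

        Set<K> partKeys = new HashSet<>(); // duplicates possible: filter them out
        return rawEntries.stream().filter(partKeys::add).iterator();
    }
}
{code}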



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20718) DumpIterator for primary copies

2023-10-23 Thread Nikolay Izhikov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Izhikov updated IGNITE-20718:
-
Description: 
Primary copies of partitions don't contain entry duplicates, because entries can 
be filtered by version at creation time.

So, during iteration, we don't need to track possible duplicates inside the 
{{DumpedPartitionIterator}} implementation in the {{partKeys}} variable. This will 
decrease memory consumption and improve the performance of dump iteration and, 
therefore, of dump check procedures.

  was:
Snapshots support the {{snapshotTransferRate}} distributed property to limit the 
number of bytes written to disk by the snapshot creation process.
Dumps must use the same property to limit disk usage during dump creation.


> DumpIterator for primary copies
> ---
>
> Key: IGNITE-20718
> URL: https://issues.apache.org/jira/browse/IGNITE-20718
> Project: Ignite
>  Issue Type: Bug
>Reporter: Nikolay Izhikov
>Assignee: Nikolay Izhikov
>Priority: Major
>  Labels: IEP-109, ise
>
> Primary copies of partitions don't contain entry duplicates, because entries 
> can be filtered by version at creation time.
> So, during iteration, we don't need to track possible duplicates inside the 
> {{DumpedPartitionIterator}} implementation in the {{partKeys}} variable. This 
> will decrease memory consumption and improve the performance of dump iteration 
> and, therefore, of dump check procedures.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20718) DumpIterator for primary copies

2023-10-23 Thread Nikolay Izhikov (Jira)
Nikolay Izhikov created IGNITE-20718:


 Summary: DumpIterator for primary copies
 Key: IGNITE-20718
 URL: https://issues.apache.org/jira/browse/IGNITE-20718
 Project: Ignite
  Issue Type: Bug
Reporter: Nikolay Izhikov
Assignee: Nikolay Izhikov


Snapshots support the {{snapshotTransferRate}} distributed property to limit the 
number of bytes written to disk by the snapshot creation process.
Dumps must use the same property to limit disk usage during dump creation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20528) CDC doesn't work if the "Cache objects transformation" is applied

2023-10-23 Thread Nikolay Izhikov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Izhikov updated IGNITE-20528:
-
Fix Version/s: 2.16

> CDC doesn't work if the "Cache objects transformation" is applied
> -
>
> Key: IGNITE-20528
> URL: https://issues.apache.org/jira/browse/IGNITE-20528
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Korotkov
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: iep-97, ise
> Fix For: 2.16
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> CDC doesn't work if cache objects transformation is applied (see 
> [IEP-97 Cache objects transformation|https://cwiki.apache.org/confluence/display/IGNITE/IEP-97+Cache+objects+transformation]).
> The ignite_cdc.sh utility produces an NPE (see below). The immediate reason for 
> the NPE is that ignite_cdc.sh uses a reduced version of the context 
> (StandaloneGridKernalContext), which doesn't contain the GridCacheProcessor.
>  
> {noformat}
> [2023-10-02T10:43:32,017][ERROR][Thread-1][] Unable to convert value 
> [CacheObjectImpl [val=null, hasValBytes=true]]
> java.lang.NullPointerException: null
>   at 
> org.apache.ignite.internal.processors.cache.CacheObjectTransformerUtils.transformer(CacheObjectTransformerUtils.java:32)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.processors.cache.CacheObjectTransformerUtils.restoreIfNecessary(CacheObjectTransformerUtils.java:120)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.processors.cache.CacheObjectAdapter.valueFromValueBytes(CacheObjectAdapter.java:73)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.processors.cache.CacheObjectImpl.value(CacheObjectImpl.java:92)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.processors.cache.CacheObjectImpl.value(CacheObjectImpl.java:58)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.pagemem.wal.record.UnwrapDataEntry.unwrappedValue(UnwrapDataEntry.java:104)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.cdc.WalRecordsConsumer.lambda$static$c56580e2$1(WalRecordsConsumer.java:99)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.util.lang.gridfunc.TransformFilteringIterator.nextX(TransformFilteringIterator.java:119)
>  [classes/:?]
>   at 
> org.apache.ignite.internal.util.lang.GridIteratorAdapter.next(GridIteratorAdapter.java:35)
>  [classes/:?]
>   at 
> org.apache.ignite.internal.util.lang.gridfunc.TransformFilteringIterator.hasNextX(TransformFilteringIterator.java:85)
>  [classes/:?]
>   at 
> org.apache.ignite.internal.util.lang.GridIteratorAdapter.hasNext(GridIteratorAdapter.java:45)
>  [classes/:?]
>   at 
> org.apache.ignite.cdc.AbstractCdcEventsApplier.apply(AbstractCdcEventsApplier.java:71)
>  [ignite-cdc-ext-1.0.0-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.cdc.AbstractIgniteCdcStreamer.onEvents(AbstractIgniteCdcStreamer.java:118)
>  [ignite-cdc-ext-1.0.0-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.cdc.WalRecordsConsumer.onRecords(WalRecordsConsumer.java:142)
>  [classes/:?]
>   at 
> org.apache.ignite.internal.cdc.CdcMain.consumeSegmentActively(CdcMain.java:557)
>  [classes/:?]
>   at 
> org.apache.ignite.internal.cdc.CdcMain.consumeWalSegmentsUntilStopped(CdcMain.java:496)
>  [classes/:?]
>   at org.apache.ignite.internal.cdc.CdcMain.runX(CdcMain.java:344) 
> [classes/:?]
>   at org.apache.ignite.internal.cdc.CdcMain.run(CdcMain.java:283) [classes/:?]
> {noformat}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20717) Release 2.16

2023-10-23 Thread Nikita Amelchev (Jira)
Nikita Amelchev created IGNITE-20717:


 Summary: Release 2.16
 Key: IGNITE-20717
 URL: https://issues.apache.org/jira/browse/IGNITE-20717
 Project: Ignite
  Issue Type: Task
Reporter: Nikita Amelchev
Assignee: Nikita Amelchev
 Fix For: 2.16


This is the umbrella ticket for the 2.16 release process.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20311) Sql. Fix behaviour of ROUND function.

2023-10-23 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-20311:
--
Ignite Flags: Release Notes Required  (was: Docs Required,Release Notes 
Required)

> Sql. Fix behaviour of ROUND function.
> -
>
> Key: IGNITE-20311
> URL: https://issues.apache.org/jira/browse/IGNITE-20311
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> The return type for ROUND(N)/ROUND(N, s) is equal to the type of `N`, which 
> causes issues when reading data from a `BinaryTuple` because this way 
> ROUND(DECIMAL(2,1)) has return type DECIMAL(2,1):
> {code}
>   SELECT ROUND(1.7)
>   # Although the implementation of the round function produces 2, RowSchema 
> has NativeType (precision=2, scale=1).
>   # Because of that this query returns 2.0 
> {code}
> Implementation we agreed upon:
> - For `ROUND(N)`, return DECIMAL(p, 0), where p is the precision of N's type.
> - For `ROUND(N, s)`, return DECIMAL(p, derived_s), where p is the precision of 
> N's type, and derived_s is the scale of N's type.
> Examples:
> {code}
> # ROUND(N):
> SELECT ROUND(1.1) 
> # Returns 1, Type: DECIMAL(p, 0)
> # ROUND(N, s):
> SELECT ROUND(1.123, s) FROM (VALUES (0), (1), (2), (3), (4) ) t(s)
> # Returns
> # 1.000
> # 1.100
> # 1.120
> # 1.123
> # 1.123
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20716) Partial data loss after node restart

2023-10-23 Thread Igor (Jira)
Igor created IGNITE-20716:
-

 Summary: Partial data loss after node restart
 Key: IGNITE-20716
 URL: https://issues.apache.org/jira/browse/IGNITE-20716
 Project: Ignite
  Issue Type: Bug
  Components: persistence
Affects Versions: 3.0.0-beta2
Reporter: Igor


How to reproduce:

1. Start a 1-node cluster
2. Create several simple tables (usually 5 is enough to reproduce):
{code:sql}
create table failoverTest00(k1 INTEGER not null, k2 INTEGER not null, v1 
VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
create table failoverTest01(k1 INTEGER not null, k2 INTEGER not null, v1 
VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
...
{code}
3. Fill every table with 1000 rows.
4. Ensure that every table contains 1000 rows:
{code:sql}
SELECT COUNT(*) FROM failoverTest00;
...
{code}
5. Restart node (kill a Java process and start node again).
6. Check all tables again.

Expected behavior: after the restart, all tables still contain the same data as 
before.

Actual behavior: for some tables, 1 or 2 rows may be missing if we're fast 
enough on steps 3-4-5. Some tables contain 1000 rows, some contain 999 or 998.

No errors are observed in the logs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-20106) Check that client schema version matches server-side schema version

2023-10-23 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy resolved IGNITE-20106.

Resolution: Fixed

> Check that client schema version matches server-side schema version
> ---
>
> Key: IGNITE-20106
> URL: https://issues.apache.org/jira/browse/IGNITE-20106
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Roman Puchkovskiy
>Priority: Major
>  Labels: iep-110, ignite-3
> Fix For: 3.0.0-beta2
>
>
> As per 
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-110%3A+Schema+synchronization%3A+basic+schema+changes#IEP110:Schemasynchronization:basicschemachanges-Overallflow
>  , the schema version that the client sends with each request should be 
> validated against the server-side schema version corresponding to the given 
> table in the tx. If it does not match, SCHEMA_VERSION_MISMATCH_ERR should be 
> sent to the client along with the correct server-side schema version.
> The check should be done on the tx coordinator. Also, the coordinator must 
> check that all the tuples sent by the client (in the same request) are 
> encoded using the same schema version.
> The IEP defines baseTs as tableEnlistTs(tx, table). On the first iteration, 
> we should implement a simpler way to calculate baseTs (max(beginTs, 
> tableCreationTs), to allow created tables to 'appear' in a transaction, or 
> even simply beginTs). The full-blown baseTs calculation will be implemented in 
> IGNITE-20108. It makes sense to do it later because it requires substantially 
> more work to only support a not-too-common use case (ALTER TABLEs after a 
> transaction has started, but before it has enlisted the table).
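
A minimal sketch of the simplified first-iteration calculation mentioned above 
(hypothetical names; timestamps reduced to longs):
{code:java}
// Hypothetical sketch of the simplified baseTs from the description:
// baseTs = max(beginTs, tableCreationTs), so a table created after the
// transaction began can still 'appear' in it; the full tableEnlistTs-based
// calculation is deferred to IGNITE-20108.
class BaseTs {
    static long baseTs(long beginTs, long tableCreationTs) {
        return Math.max(beginTs, tableCreationTs);
    }
}
{code}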



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20715) Check that versions of tuples sent to PartitionReplicaListener match tx-bound schema version

2023-10-23 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-20715:
---
Description: 
When record/KV views don't get an explicit transaction, they choose 'now' to 
get schema version for tuple marshalling. Only after that they call 
InternalTable methods, and those methods actually create an implicit 
transaction; so there is a gap between the moment at which a schema was 
obtained for marshalling and the moment corresponding to the transaction start. 
From the point of view of PartitionReplicaListener, the latter is the moment 
that should be used to obtain the table schema.

As there is a gap, those schemas (for marshalling and for processing) might be 
different (if a schema change activates in between).
 # We should check that the schema version of the tuples that arrive to 
PartitionReplicaListener match the schema version corresponding to the 
transaction; if not, a special exception has to be thrown.
 # Record/KV views must retry an operation that causes such an exception.
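
A rough sketch of the retry behaviour described in the two points above 
(hypothetical names and exception type; not the actual client API):
{code:java}
import java.util.function.Supplier;

// Hypothetical sketch: a record/KV view retries an operation when the replica
// reports that tuples were marshalled with a schema version that doesn't match
// the one bound to the transaction.
class SchemaRetry {
    static class SchemaVersionMismatchException extends RuntimeException {
        final int expectedVersion;

        SchemaVersionMismatchException(int expectedVersion) {
            this.expectedVersion = expectedVersion;
        }
    }

    static <T> T withSchemaRetry(Supplier<T> op) {
        while (true) {
            try {
                return op.get(); // marshal tuples and execute the operation
            }
            catch (SchemaVersionMismatchException e) {
                // Load the schema version the server expects, then re-marshal and retry.
                loadSchema(e.expectedVersion);
            }
        }
    }

    static void loadSchema(int version) {
        // Fetch the schema descriptor for the given version (hypothetical).
    }
}
{code}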

> Check that versions of tuples sent to PartitionReplicaListener match tx-bound 
> schema version
> 
>
> Key: IGNITE-20715
> URL: https://issues.apache.org/jira/browse/IGNITE-20715
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> When record/KV views don't get an explicit transaction, they choose 'now' to 
> get the schema version for tuple marshalling. Only after that do they call 
> InternalTable methods, and those methods actually create an implicit 
> transaction; so there is a gap between the moment at which a schema was 
> obtained for marshalling and the moment corresponding to the transaction 
> start. From the point of view of PartitionReplicaListener, the latter is the 
> moment that should be used to obtain the table schema.
> As there is a gap, those schemas (for marshalling and for processing) might 
> be different (if a schema change activates in between).
>  # We should check that the schema version of the tuples that arrive at 
> PartitionReplicaListener matches the schema version corresponding to the 
> transaction; if not, a special exception has to be thrown.
>  # Record/KV views must retry an operation that causes such an exception.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20715) Check that versions of tuples sent to PartitionReplicaListener match tx-bound schema version

2023-10-23 Thread Roman Puchkovskiy (Jira)
Roman Puchkovskiy created IGNITE-20715:
--

 Summary: Check that versions of tuples sent to 
PartitionReplicaListener match tx-bound schema version
 Key: IGNITE-20715
 URL: https://issues.apache.org/jira/browse/IGNITE-20715
 Project: Ignite
  Issue Type: Improvement
Reporter: Roman Puchkovskiy
Assignee: Roman Puchkovskiy
 Fix For: 3.0.0-beta2






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20714) Sql. Test Framework. Support DDL scripts to initialise schema

2023-10-23 Thread Konstantin Orlov (Jira)
Konstantin Orlov created IGNITE-20714:
-

 Summary: Sql. Test Framework. Support DDL scripts to initialise 
schema
 Key: IGNITE-20714
 URL: https://issues.apache.org/jira/browse/IGNITE-20714
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Konstantin Orlov


To make the Test Framework more useful when it comes to debugging real-world 
cases, it would be nice to be able to define the schema as a script containing 
DDL statements. As an example, there is a DDL script attached to IGNITE-19813, 
but currently, in order to check that scenario, we have to define all those 
tables and indices in code by hand. A sketch of such initialisation follows.
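
A minimal sketch of what schema initialisation from a script could look like 
(hypothetical helper; a plain JDBC-style API is assumed, not the framework's 
actual one):
{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.Statement;

// Hypothetical sketch: initialise a test schema from a DDL script instead of
// defining tables and indices in code by hand.
class DdlScriptRunner {
    static void initSchema(Connection conn, Path script) throws Exception {
        // A naive split on ';' is enough for simple CREATE TABLE / CREATE INDEX scripts.
        for (String stmt : Files.readString(script).split(";")) {
            if (!stmt.isBlank()) {
                try (Statement s = conn.createStatement()) {
                    s.execute(stmt);
                }
            }
        }
    }
}
{code}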



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20713) WalWriter doesn't poll all read segments

2023-10-23 Thread Maksim Timonin (Jira)
Maksim Timonin created IGNITE-20713:
---

 Summary: WalWriter doesn't poll all read segments
 Key: IGNITE-20713
 URL: https://issues.apache.org/jira/browse/IGNITE-20713
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.15
Reporter: Maksim Timonin


See FileHandleManagerImpl#WalWriter#body().

For WalWriter, #poll returns a cut segment, limited by the buffer `capacity` 
value. Therefore, #poll must be called in a loop while it returns a non-null 
segment (see the sketch below).
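
A minimal sketch of the fix (hypothetical queue and write method; the point is 
draining #poll until it returns null):
{code:java}
import java.util.Queue;

// Hypothetical sketch: a single #poll returns at most a buffer-capacity-sized
// segment, so one call may leave read segments behind. Drain in a loop instead.
class WalWriterLoop {
    static void drainAndWrite(Queue<byte[]> segments) {
        byte[] seg;

        while ((seg = segments.poll()) != null) // keep polling until exhausted
            write(seg);
    }

    static void write(byte[] seg) {
        // Write the segment to the WAL file (hypothetical).
    }
}
{code}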



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20713) WalWriter doesn't poll all read segments

2023-10-23 Thread Maksim Timonin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Timonin updated IGNITE-20713:

Ignite Flags:   (was: Docs Required,Release Notes Required)

> WalWriter doesn't poll all read segments
> 
>
> Key: IGNITE-20713
> URL: https://issues.apache.org/jira/browse/IGNITE-20713
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.15
>Reporter: Maksim Timonin
>Priority: Major
>
> See FileHandleManagerImpl#WalWriter#body()
> For WalWriter, #poll returns a cut segment, limited by the buffer 
> `capacity` value. Therefore, #poll must be called in a loop while it returns 
> a non-null segment.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20712) Incorrect error for any modification queries under RO transaction

2023-10-23 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-20712:
---
Summary: Incorrect error for any modification queries under RO transaction  
(was: incorrect error for any modification queries under RO transaction)

> Incorrect error for any modification queries under RO transaction
> -
>
> Key: IGNITE-20712
> URL: https://issues.apache.org/jira/browse/IGNITE-20712
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Yury Gerzhedovich
>Assignee: Yury Gerzhedovich
>Priority: Major
>  Labels: ignite-3
>
> Any DML statement run under an RO transaction leads to an INTERNAL error.
> Obviously, DML is prohibited in RO transactions, but instead of an internal 
> error the user should see a correct, descriptive error.
> Current error:
> {code:java}
> org.apache.ignite.sql.SqlException: IGN-CMN-65535 
> TraceId:17a52b57-9c70-45b1-b245-17219ab23da5
>   at 
> org.apache.ignite.internal.lang.SqlExceptionMapperUtil.mapToPublicSqlException(SqlExceptionMapperUtil.java:59)
>   at 
> org.apache.ignite.internal.sql.engine.AsyncSqlCursorImpl.wrapIfNecessary(AsyncSqlCursorImpl.java:101)
>   at 
> org.apache.ignite.internal.sql.engine.AsyncSqlCursorImpl.lambda$requestNextAsync$0(AsyncSqlCursorImpl.java:77)
>   at 
> java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:907)
>   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
>   at 
> org.apache.ignite.internal.sql.engine.exec.ExecutionServiceImpl$DistributedQueryManager.closeExecNode(ExecutionServiceImpl.java:954)
>   at 
> org.apache.ignite.internal.sql.engine.exec.ExecutionServiceImpl$DistributedQueryManager.close(ExecutionServiceImpl.java:854)
>   at 
> org.apache.ignite.internal.sql.engine.exec.ExecutionServiceImpl$DistributedQueryManager.lambda$onError$2(ExecutionServiceImpl.java:532)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniAccept.tryFire(CompletableFuture.java:714)
>   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>   at 
> java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
>   at 
> org.apache.ignite.internal.sql.engine.exec.ExecutionServiceImpl$DistributedQueryManager.executeFragment(ExecutionServiceImpl.java:574)
>   at 
> org.apache.ignite.internal.sql.engine.exec.ExecutionServiceImpl$DistributedQueryManager.lambda$submitFragment$9(ExecutionServiceImpl.java:624)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
>   at 
> java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
>   at 
> org.apache.ignite.internal.sql.engine.exec.ExecutionContext.lambda$execute$0(ExecutionContext.java:315)
>   at 
> org.apache.ignite.internal.sql.engine.exec.QueryTaskExecutorImpl.lambda$execute$0(QueryTaskExecutorImpl.java:81)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20712) incorrect error for any modification queries under RO transaction

2023-10-23 Thread Yury Gerzhedovich (Jira)
Yury Gerzhedovich created IGNITE-20712:
--

 Summary: incorrect error for any modification queries under RO 
transaction
 Key: IGNITE-20712
 URL: https://issues.apache.org/jira/browse/IGNITE-20712
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Yury Gerzhedovich
Assignee: Yury Gerzhedovich


Any DML statement run under an RO transaction leads to an INTERNAL error.
Obviously, DML is prohibited in RO transactions, but instead of an internal 
error the user should see a correct, descriptive error.

Current error:

{code:java}
org.apache.ignite.sql.SqlException: IGN-CMN-65535 
TraceId:17a52b57-9c70-45b1-b245-17219ab23da5
at 
org.apache.ignite.internal.lang.SqlExceptionMapperUtil.mapToPublicSqlException(SqlExceptionMapperUtil.java:59)
at 
org.apache.ignite.internal.sql.engine.AsyncSqlCursorImpl.wrapIfNecessary(AsyncSqlCursorImpl.java:101)
at 
org.apache.ignite.internal.sql.engine.AsyncSqlCursorImpl.lambda$requestNextAsync$0(AsyncSqlCursorImpl.java:77)
at 
java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930)
at 
java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:907)
at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
at 
org.apache.ignite.internal.sql.engine.exec.ExecutionServiceImpl$DistributedQueryManager.closeExecNode(ExecutionServiceImpl.java:954)
at 
org.apache.ignite.internal.sql.engine.exec.ExecutionServiceImpl$DistributedQueryManager.close(ExecutionServiceImpl.java:854)
at 
org.apache.ignite.internal.sql.engine.exec.ExecutionServiceImpl$DistributedQueryManager.lambda$onError$2(ExecutionServiceImpl.java:532)
at 
java.base/java.util.concurrent.CompletableFuture$UniAccept.tryFire(CompletableFuture.java:714)
at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at 
java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
at 
org.apache.ignite.internal.sql.engine.exec.ExecutionServiceImpl$DistributedQueryManager.executeFragment(ExecutionServiceImpl.java:574)
at 
org.apache.ignite.internal.sql.engine.exec.ExecutionServiceImpl$DistributedQueryManager.lambda$submitFragment$9(ExecutionServiceImpl.java:624)
at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
at 
java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
at 
org.apache.ignite.internal.sql.engine.exec.ExecutionContext.lambda$execute$0(ExecutionContext.java:315)
at 
org.apache.ignite.internal.sql.engine.exec.QueryTaskExecutorImpl.lambda$execute$0(QueryTaskExecutorImpl.java:81)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)

{code}




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20359) Expose node storage as a node attribute

2023-10-23 Thread Kirill Gusakov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17778640#comment-17778640
 ] 

Kirill Gusakov commented on IGNITE-20359:
-

LGTM

> Expose node storage as a node attribute
> ---
>
> Key: IGNITE-20359
> URL: https://issues.apache.org/jira/browse/IGNITE-20359
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Gusakov
>Assignee: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> *Motivation*
> To introduce filtering of nodes by storage type and profile, we need to 
> expose the appropriate node storage configurations as node attributes (a kind 
> of inner attributes, not suitable for general filters).
> *Definition of done*
> - Node storage and storage profile are exposed as node attributes for further 
> filtering during the zone dataNodes setup.
> *Implementation notes*
> - These attributes must be a separate list of inner attributes, which are 
> not exposed for the usual zone filtering. So, ClusterNodeMessage must be 
> extended with the appropriate field.
> - The attributes should look like a map of (String engineType -> String 
> profileName); see the sketch below.
> - While IGNITE-20564 is not done yet, the part about receiving attributes 
> from the node configuration can be implemented around the engine and 
> dataRegions, instead of profiles.
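
A tiny sketch of the attribute shape from the notes above (hypothetical engine 
and profile names):
{code:java}
import java.util.Map;

class NodeStorageAttributes {
    // Hypothetical sketch: inner storage attributes as a map of
    // (String engineType -> String profileName), as described above.
    static final Map<String, String> STORAGE_ATTRS = Map.of(
        "aipersist", "default",        // hypothetical engine/profile names
        "rocksdb", "in_memory_profile"
    );
}
{code}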



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20682) [ducktests] Add extension point to rebalance test

2023-10-23 Thread Sergey Korotkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Korotkov updated IGNITE-20682:
-
Fix Version/s: 2.16

> [ducktests] Add extension point to rebalance test
> -
>
> Key: IGNITE-20682
> URL: https://issues.apache.org/jira/browse/IGNITE-20682
> Project: Ignite
>  Issue Type: Task
>Reporter: Sergey Korotkov
>Assignee: Sergey Korotkov
>Priority: Minor
>  Labels: ducktests
> Fix For: 2.16
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> It's needed to add an extension point to the rebalance ducktests to be able 
> to create subclasses that test rebalance with some extension modules or 
> plugins applied.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20682) [ducktests] Add extension point to rebalance test

2023-10-23 Thread Sergey Korotkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Korotkov updated IGNITE-20682:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> [ducktests] Add extension point to rebalance test
> -
>
> Key: IGNITE-20682
> URL: https://issues.apache.org/jira/browse/IGNITE-20682
> Project: Ignite
>  Issue Type: Task
>Reporter: Sergey Korotkov
>Assignee: Sergey Korotkov
>Priority: Minor
>  Labels: ducktests
> Fix For: 2.16
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> It's needed to add an extension point to the rebalance ducktests to be able 
> to create subclasses that test rebalance with some extension modules or 
> plugins applied.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20514) Transaction becomes stuck after GridNearTxFinishRequest was lost

2023-10-23 Thread Ilya Shishkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Shishkov updated IGNITE-20514:
---
Description: 
In case of network failures, we can get into a situation where a 
{{GridNearTxFinishRequest}} sent from the transaction coordinator (near node) 
is lost.
For example:
{code:title=Near node - handshake failed}
2023-09-19 11:49:55.504 [WARN ] 
[org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi] 
[tcp-comm-worker-#1%Node%NodeName%-#138%Node%NodeName%] - Handshake timed out 
(will stop attempts to perform the handshake)
...
addr=/10.10.10.9:47100, failureDetectionTimeoutEnabled=true, timeout=28441]
{code}
{code:title=Near node - failed to send message}
2023-09-19 11:49:55.539 [ERROR] 
[org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi] 
[sys-stripe-39-#40%Node%NodeName%] - Failed to send message to remote node 
[node=TcpDiscoveryNode [id=537f0a80-cef0-44df-a082-2fd6652e3eee, 
consistentId=host.name, addrs=ArrayList [10.10.10.9, 127.0.0.1],
...
msg=GridNearTxFinishRequest
...
ver=GridCacheVersion [topVer=306492927, order=1695095952984, nodeOrder=63, 
dataCenterId=0]
...
org.apache.ignite.IgniteCheckedException: Failed to connect to node (is node 
still alive?). Make sure that each ComputeTask and cache Transaction has a 
timeout set in order to prevent parties from waiting forever in case of network 
issues [nodeId=537f0a80-cef0-44df-a082-2fd6652e3eee, addrs=[/10.10.10.9:47100]]
at 
org.apache.ignite.spi.communication.tcp.internal.GridNioServerWrapper.createNioSession(GridNioServerWrapper.java:565)
at 
org.apache.ignite.spi.communication.tcp.internal.GridNioServerWrapper.createTcpClient(GridNioServerWrapper.java:693)
at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:1181)
at 
org.apache.ignite.spi.communication.tcp.internal.GridNioServerWrapper.createTcpClient(GridNioServerWrapper.java:691)
at 
org.apache.ignite.spi.communication.tcp.internal.ConnectionClientPool.createCommunicationClient(ConnectionClientPool.java:442)
at 
org.apache.ignite.spi.communication.tcp.internal.ConnectionClientPool.reserveClient(ConnectionClientPool.java:231)
at 
org.apache.ignite.spi.communication.tcp.internal.CommunicationWorker.processDisconnect(CommunicationWorker.java:376)
at 
org.apache.ignite.spi.communication.tcp.internal.CommunicationWorker.body(CommunicationWorker.java:174)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$3.body(TcpCommunicationSpi.java:848)
at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:58)
Caused by: org.apache.ignite.spi.IgniteSpiOperationTimeoutException: Failed to 
perform handshake due to timeout (consider increasing 'connectionTimeout' 
configuration property).
at 
org.apache.ignite.spi.communication.tcp.internal.CommunicationTcpUtils.handshakeTimeoutException(CommunicationTcpUtils.java:156)
at 
org.apache.ignite.spi.communication.tcp.internal.GridNioServerWrapper.safeTcpHandshake(GridNioServerWrapper.java:1197)
at 
org.apache.ignite.spi.communication.tcp.internal.GridNioServerWrapper.createNioSession(GridNioServerWrapper.java:485)
... 10 common frames omitted
{code}
After such a message, you will get a long-running transaction on the primary and 
backups which will not roll back by itself. In order to stop the transaction, 
*_you have to kill it explicitly_* via {{control.sh}} (an example command 
follows the LRT log below).
{code:title=LRT on primary}
2023-09-19 12:23:39.915 [WARN 
][sys-#115589][org.apache.ignite.internal.diagnostic] >>> Transaction 
[startTime=11:49:16,483, curTime=12:23:39,913, tx=GridDhtTxLocal 
...
nearXidVer=GridCacheVersion [topVer=306492927, order=1695095952984, 
nodeOrder=63, dataCenterId=0]
...
isolation=REPEATABLE_READ, concurrency=PESSIMISTIC, 
timeout=30
...
state=PREPARED, 
timedOut=false, 
...
duration=2063430ms
...
{code}
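
For reference, a typical way to find and kill such a transaction (a sketch; 
exact flags depend on the Ignite version):
{noformat}
# List transactions running longer than 300 seconds, then kill one by its xid:
control.sh --tx --min-duration 300
control.sh --tx --xid <near-xid> --kill
{noformat}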



*Some points:*
# The transaction is stuck in the PREPARED state.
# The transaction was not rolled back after a timeout on the finish phase.
# The LRT goes away if the near node restarts, because of two-phase commit 
recovery.

*Reproducer:*  [^IGNITE-20514_NearFinishRequestDelayTest.patch] 

  was:
In case of network failures we can get into situation when 
{{GridNearTxFinishRequest}} which was sent from transaction coordinator (near 
node) is lost. 
For example:
{noformat:title=Near node - handshake failed}
2023-09-19 11:49:55.504 [WARN ] 
[org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi] 
[tcp-comm-worker-#1%Node%NodeName%-#138%Node%NodeName%] - Handshake timed out 
(will stop attempts to perform the handshake)
...
addr=/10.10.10.9:47100, failureDetectionTimeoutEnabled=true, timeout=28441]
{noformat}
{noformat:title=Near node - failed to send message}
2023-09-19 11:49:55.539 [ERROR] 

[jira] [Updated] (IGNITE-20697) Move physical records from WAL to another storage

2023-10-23 Thread Aleksey Demakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Demakov updated IGNITE-20697:
-
Description: 
Currentrly, physycal records take most of the WAL size. But physical records in 
WAL files required only for crash recovery and these records are useful only 
for a short period of time (since last checkpoint). 
Size of physical records during checkpoint is more than size of all modified 
pages between checkpoints, since we need to store page snapshot record for each 
modified page and page delta records, if page is modified more than once 
between checkpoints.
We process WAL file several times in stable workflow (without crashes and 
rebalances):
 # We write records to WAL files
 # We copy WAL files to archive
 # We compact WAL files (remove phisical records + compress)

So, totally we write all physical records twice and read physical records at 
least twice.

To reduce disc workload we can move physical records to another storage and 
don't write them to WAL files. To provide the same crash recovery guarantees we 
can write modified pages twice during checkpoint. First time to some delta file 
and second time to the page storage. In this case we can recover any page if we 
crash during write to page storage from delta file (instead of WAL, as we do 
now).

This proposal has pros and cons.
Pros:
 - Less size of stored data (we don't store page delta files, only final state 
of the page)
 - Reduced disc workload (we store additionally write once all modified pages 
instead of 2 writes and 2 reads of larger amount of data)
 - Potentially reduced latancy (instead of writing physical records 
synchronously during data modification we write to WAL only logical records and 
physical pages will be written by checkpointer threads)

Cons:
 - Increased checkpoint duration (we should write doubled amount of data during 
checkpoint)

Let's try to implement it and benchmark.
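
A rough sketch of the double-write scheme described above (hypothetical names; 
a sketch under the stated assumptions, not the actual implementation):
{code:java}
import java.nio.ByteBuffer;

// Hypothetical sketch of the proposed checkpoint write path: each dirty page is
// written twice, first to a per-checkpoint delta file (the crash-recovery copy),
// then to its final place in the page store. If we crash during the second
// write, the page is recovered from the delta file instead of WAL physical records.
class CheckpointWriter {
    interface DeltaFile {
        void append(long pageId, ByteBuffer page);
        void sync();
    }

    interface PageStore {
        void write(long pageId, ByteBuffer page);
    }

    static void writePage(long pageId, ByteBuffer page, DeltaFile delta, PageStore store) {
        delta.append(pageId, page.duplicate()); // 1st write: recovery copy
        delta.sync();                           // must be durable before the in-place write
        store.write(pageId, page);              // 2nd write: final page location
    }
}
{code}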

  was:
Currentrly, physycal records take most of the WAL size. But physical records in 
WAL files required only for crush recovery and these records are useful only 
for a short period of time (since last checkpoint). 
Size of physical records during checkpoint is more than size of all modified 
pages between checkpoints, since we need to store page snapshot record for each 
modified page and page delta records, if page is modified more than once 
between checkpoints.
We process WAL file several times in stable workflow (without crashes and 
rebalances):
 # We write records to WAL files
 # We copy WAL files to archive
 # We compact WAL files (remove phisical records + compress)

So, totally we write all physical records twice and read physical records at 
least twice.

To reduce disc workload we can move physical records to another storage and 
don't write them to WAL files. To provide the same crush recovery guarantees we 
can write modified pages twice during checkpoint. First time to some delta file 
and second time to the page storage. In this case we can recover any page if we 
crash during write to page storage from delta file (instead of WAL, as we do 
now).

This proposal has pros and cons.
Pros:
 - Less size of stored data (we don't store page delta files, only final state 
of the page)
 - Reduced disc workload (we store additionally write once all modified pages 
instead of 2 writes and 2 reads of larger amount of data)
 - Potentially reduced latancy (instead of writing physical records 
synchronously during data modification we write to WAL only logical records and 
physical pages will be written by checkpointer threads)

Cons:
 - Increased checkpoint duration (we should write doubled amount of data during 
checkpoint)

Let's try to implement it and benchmark.


> Move physical records from WAL to another storage 
> --
>
> Key: IGNITE-20697
> URL: https://issues.apache.org/jira/browse/IGNITE-20697
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>
> Currently, physical records take most of the WAL size. But physical records 
> in WAL files are required only for crash recovery, and these records are useful 
> only for a short period of time (since the last checkpoint). 
> The size of physical records written between checkpoints is larger than the 
> size of all pages modified between checkpoints, since we need to store a page 
> snapshot record for each modified page, plus page delta records if a page is 
> modified more than once between checkpoints.
> We process a WAL file several times in the stable workflow (without crashes 
> and rebalances):
>  # We write records to WAL files
>  # We copy WAL files to the archive
>  # We compact WAL files (remove physical records + compress)
> So, in total we write all physical records twice and read physical 

[jira] [Updated] (IGNITE-20697) Move physical records from WAL to another storage

2023-10-23 Thread Aleksey Demakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Demakov updated IGNITE-20697:
-
Description: 
Currentrly, physycal records take most of the WAL size. But physical records in 
WAL files required only for crash recovery and these records are useful only 
for a short period of time (since last checkpoint). 
Size of physical records during checkpoint is more than size of all modified 
pages between checkpoints, since we need to store page snapshot record for each 
modified page and page delta records, if page is modified more than once 
between checkpoints.
We process WAL file several times in stable workflow (without crashes and 
rebalances):
 # We write records to WAL files
 # We copy WAL files to archive
 # We compact WAL files (remove phisical records + compress)

So, totally we write all physical records twice and read physical records at 
least twice.

To reduce disc workload we can move physical records to another storage and 
don't write them to WAL files. To provide the same crash recovery guarantees we 
can write modified pages twice during checkpoint. First time to some delta file 
and second time to the page storage. In this case we can recover any page if we 
crash during write to page storage from delta file (instead of WAL, as we do 
now).

This proposal has pros and cons.
Pros:
 - Less size of stored data (we don't store page delta files, only final state 
of the page)
 - Reduced disc workload (we store additionally write once all modified pages 
instead of 2 writes and 2 reads of larger amount of data)
 - Potentially reduced latency (instead of writing physical records 
synchronously during data modification we write to WAL only logical records and 
physical pages will be written by checkpointer threads)

Cons:
 - Increased checkpoint duration (we should write doubled amount of data during 
checkpoint)

Let's try to implement it and benchmark.

  was:
Currentrly, physycal records take most of the WAL size. But physical records in 
WAL files required only for crash recovery and these records are useful only 
for a short period of time (since last checkpoint). 
Size of physical records during checkpoint is more than size of all modified 
pages between checkpoints, since we need to store page snapshot record for each 
modified page and page delta records, if page is modified more than once 
between checkpoints.
We process WAL file several times in stable workflow (without crashes and 
rebalances):
 # We write records to WAL files
 # We copy WAL files to archive
 # We compact WAL files (remove phisical records + compress)

So, totally we write all physical records twice and read physical records at 
least twice.

To reduce disc workload we can move physical records to another storage and 
don't write them to WAL files. To provide the same crash recovery guarantees we 
can write modified pages twice during checkpoint. First time to some delta file 
and second time to the page storage. In this case we can recover any page if we 
crash during write to page storage from delta file (instead of WAL, as we do 
now).

This proposal has pros and cons.
Pros:
 - Less size of stored data (we don't store page delta files, only final state 
of the page)
 - Reduced disc workload (we store additionally write once all modified pages 
instead of 2 writes and 2 reads of larger amount of data)
 - Potentially reduced latancy (instead of writing physical records 
synchronously during data modification we write to WAL only logical records and 
physical pages will be written by checkpointer threads)

Cons:
 - Increased checkpoint duration (we should write doubled amount of data during 
checkpoint)

Let's try to implement it and benchmark.


> Move physical records from WAL to another storage 
> --
>
> Key: IGNITE-20697
> URL: https://issues.apache.org/jira/browse/IGNITE-20697
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>
> Currently, physical records take most of the WAL size. But physical records 
> in WAL files are required only for crash recovery, and these records are useful 
> only for a short period of time (since the last checkpoint). 
> The size of physical records written between checkpoints is larger than the 
> size of all pages modified between checkpoints, since we need to store a page 
> snapshot record for each modified page, plus page delta records if a page is 
> modified more than once between checkpoints.
> We process a WAL file several times in the stable workflow (without crashes 
> and rebalances):
>  # We write records to WAL files
>  # We copy WAL files to the archive
>  # We compact WAL files (remove physical records + compress)
> So, in total we write all physical records twice and read physical 

[jira] [Commented] (IGNITE-20644) ItThinClientSchemaSynchronizationTest.testClientUsesLatestSchemaOnWrite is flaky

2023-10-23 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17778621#comment-17778621
 ] 

Pavel Tupitsyn commented on IGNITE-20644:
-

Flaky because:
* When *sendServerExceptionStackTraceToClient* is enabled, 
*ClientSchemaVersionMismatchException* is not detected correctly
* Different nodes in *ItAbstractThinClientTest* have different 
*sendServerExceptionStackTraceToClient* settings

> ItThinClientSchemaSynchronizationTest.testClientUsesLatestSchemaOnWrite is 
> flaky
> 
>
> Key: IGNITE-20644
> URL: https://issues.apache.org/jira/browse/IGNITE-20644
> Project: Ignite
>  Issue Type: Bug
>  Components: thin client
>Reporter: Roman Puchkovskiy
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> org.apache.ignite.internal.runner.app.client.ItThinClientSchemaSynchronizationTest.testClientUsesLatestSchemaOnWrite
>  sometimes fails, here is an example: 
> [https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_IntegrationTests_ModuleRunner/7559413?hideProblemsFromDependencies=false=false=true=true=true]
>  
> org.opentest4j.AssertionFailedError: Exception is neither of a specified 
> class, nor has a cause of the specified class: class 
> java.lang.IllegalArgumentException
>   at app//org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:42)
>   at app//org.junit.jupiter.api.Assertions.fail(Assertions.java:147)
>   at 
> app//org.apache.ignite.internal.testframework.IgniteTestUtils.assertThrowsWithCause(IgniteTestUtils.java:310)
>   at 
> app//org.apache.ignite.internal.testframework.IgniteTestUtils.assertThrowsWithCause(IgniteTestUtils.java:290)
>   at 
> app//org.apache.ignite.internal.runner.app.client.ItThinClientSchemaSynchronizationTest.testClientUsesLatestSchemaOnWrite(ItThinClientSchemaSynchronizationTest.java:61)
>   at 
> java.base@11.0.17/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
>  Method)
>   at 
> java.base@11.0.17/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base@11.0.17/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base@11.0.17/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727)
>   at 
> app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
>   at 
> app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
>   at 
> app//org.junit.jupiter.engine.extension.SameThreadTimeoutInvocation.proceed(SameThreadTimeoutInvocation.java:45)
>   at 
> app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
>   at 
> app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
>   at 
> app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86)
>   at 
> app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
>   at 
> app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
>   at 
> app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
>   at 
> app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
>   at 
> app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
>   at 
> app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
>   at 
> app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
>   at 
> app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
>   at 
> app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:217)
>   at 
> app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>   at 
> app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:213)
>   at 
> app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:138)
>   at 
> 

[jira] [Commented] (IGNITE-20576) Rework ClientTableTest# testGetReturningTupleWithUnknownSchemaRequestsNewSchema()

2023-10-23 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17778609#comment-17778609
 ] 

Pavel Tupitsyn commented on IGNITE-20576:
-

Merged to main: ccea9b68d428aea245042f96bc4dd71480268ef3

> Rework ClientTableTest# 
> testGetReturningTupleWithUnknownSchemaRequestsNewSchema()
> -
>
> Key: IGNITE-20576
> URL: https://issues.apache.org/jira/browse/IGNITE-20576
> Project: Ignite
>  Issue Type: Improvement
>  Components: thin client
>Reporter: Roman Puchkovskiy
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ClientTableTest#testGetReturningTupleWithUnknownSchemaRequestsNewSchema() 
> makes the latest schema version known to the server equal to 2, then lowers it 
> to 1, so the latest schema version number loses its monotonicity. This 
> monotonicity is now relied upon by the server, so the test should probably be 
> modified to do its job without breaking this invariant.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20576) Rework ClientTableTest# testGetReturningTupleWithUnknownSchemaRequestsNewSchema()

2023-10-23 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17778607#comment-17778607
 ] 

Igor Sapego commented on IGNITE-20576:
--

Looks good to me

> Rework ClientTableTest# 
> testGetReturningTupleWithUnknownSchemaRequestsNewSchema()
> -
>
> Key: IGNITE-20576
> URL: https://issues.apache.org/jira/browse/IGNITE-20576
> Project: Ignite
>  Issue Type: Improvement
>  Components: thin client
>Reporter: Roman Puchkovskiy
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ClientTableTest#testGetReturningTupleWithUnknownSchemaRequestsNewSchema() 
> makes the latest schema version known to the server equal to 2, then lowers it 
> to 1, so the latest schema version number loses its monotonicity. This 
> monotonicity is now relied upon by the server, so the test should probably be 
> modified to do its job without breaking this invariant.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20304) Documentation for system views module.

2023-10-23 Thread Yury Gerzhedovich (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17778602#comment-17778602
 ] 

Yury Gerzhedovich commented on IGNITE-20304:


[~xtern] LGTM

> Documentation for system views module.
> --
>
> Key: IGNITE-20304
> URL: https://issues.apache.org/jira/browse/IGNITE-20304
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Add documentation for system views module.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20711) Update Ignite dependency: Apache Commons Codec to 1.16.0

2023-10-23 Thread Aleksandr Nikolaev (Jira)
Aleksandr Nikolaev created IGNITE-20711:
---

 Summary: Update Ignite dependency: Apache Commons Codec to 1.16.0
 Key: IGNITE-20711
 URL: https://issues.apache.org/jira/browse/IGNITE-20711
 Project: Ignite
  Issue Type: Improvement
Reporter: Aleksandr Nikolaev
Assignee: Aleksandr Nikolaev
 Fix For: 2.16


Update Ignite dependency: Apache Commons Codec to 1.16.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20711) Update Ignite dependency: Apache Commons Codec to 1.16.0

2023-10-23 Thread Aleksandr Nikolaev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Nikolaev updated IGNITE-20711:

Labels: ise  (was: )

> Update Ignite dependency: Apache Commons Codec to 1.16.0
> 
>
> Key: IGNITE-20711
> URL: https://issues.apache.org/jira/browse/IGNITE-20711
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Nikolaev
>Assignee: Aleksandr Nikolaev
>Priority: Major
>  Labels: ise
> Fix For: 2.16
>
>
> Update Ignite dependency: Apache Commons Codec to 1.16.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20042) Check table existence before executing each operation in an RW transaction

2023-10-23 Thread Aleksandr Polovtcev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Polovtcev updated IGNITE-20042:
-
Release Note:   (was: Merged into main, thanks for the improvement)

> Check table existence before executing each operation in an RW transaction
> --
>
> Key: IGNITE-20042
> URL: https://issues.apache.org/jira/browse/IGNITE-20042
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Roman Puchkovskiy
>Priority: Major
>  Labels: iep-110, ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> As per 
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-110%3A+Schema+synchronization%3A+basic+schema+changes#IEP110:Schemasynchronization:basicschemachanges-Checkingtheexistenceofatablewhenreading/writing
>  , table existence must be checked before executing an operation 
> (read/write/commit) in a transaction. The table existence must be checked for 
> the operationTs.
> This requires IGNITE-19770 to be implemented.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20042) Check table existence before executing each operation in an RW transaction

2023-10-23 Thread Roman Puchkovskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17778564#comment-17778564
 ] 

Roman Puchkovskiy commented on IGNITE-20042:


Thanks!

> Check table existence before executing each operation in an RW transaction
> --
>
> Key: IGNITE-20042
> URL: https://issues.apache.org/jira/browse/IGNITE-20042
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Roman Puchkovskiy
>Priority: Major
>  Labels: iep-110, ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> As per 
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-110%3A+Schema+synchronization%3A+basic+schema+changes#IEP110:Schemasynchronization:basicschemachanges-Checkingtheexistenceofatablewhenreading/writing
>  , table existence must be checked before executing an operation 
> (read/write/commit) in a transaction. The table existence must be checked for 
> the operationTs.
> This requires IGNITE-19770 to be implemented.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20042) Check table existence before executing each operation in an RW transaction

2023-10-23 Thread Aleksandr Polovtcev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17778563#comment-17778563
 ] 

Aleksandr Polovtcev commented on IGNITE-20042:
--

Merged into main, thanks for the improvement

> Check table existence before executing each operation in an RW transaction
> --
>
> Key: IGNITE-20042
> URL: https://issues.apache.org/jira/browse/IGNITE-20042
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Roman Puchkovskiy
>Priority: Major
>  Labels: iep-110, ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> As per 
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-110%3A+Schema+synchronization%3A+basic+schema+changes#IEP110:Schemasynchronization:basicschemachanges-Checkingtheexistenceofatablewhenreading/writing
>  , table existence must be checked before executing an operation 
> (read/write/commit) in a transaction. The table existence must be checked for 
> the operationTs.
> This requires IGNITE-19770 to be implemented.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20700) Implement durable transaction coordinator finish

2023-10-23 Thread Jira


 [ 
https://issues.apache.org/jira/browse/IGNITE-20700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

 Kirill Sizov reassigned IGNITE-20700:
--

Assignee:  Kirill Sizov

> Implement durable transaction coordinator finish
> 
>
> Key: IGNITE-20700
> URL: https://issues.apache.org/jira/browse/IGNITE-20700
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Assignee:  Kirill Sizov
>Priority: Major
>  Labels: ignite-3
>
> h3. Motivation
> The transaction coordinator should properly handle:
>  * All kinds of exceptions that might be thrown while sending the finish 
> request and awaiting the finish response.
>  * Commit partition primary replica changes.
>  * Including the dedicated scenario of commit partition recovery after commit 
> partition majority loss.
> h3. Definition of Done
> The transaction finish request is sent in a durable manner.
> h3. Implementation Notes
>  * Commit timestamp, tx outcome (commit/abort), enlisted partitions set, etc. 
> are calculated only once and do not change over retries.
>  * The recipient, on the other hand, may change, and should be evaluated as 
> PD.awaitPrimaryReplica(commitPartition). Thus, we will handle both primary 
> replica changes and commit partition recovery after majority loss (see the 
> sketch below).
>  * On the commit partition side, the finish request should wait for all locks 
> to be released (see the lock-released flag in txnState).
>  * It's possible for the finish request to see a terminated transaction with 
> another outcome, meaning that recovery logic (which is not yet implemented) 
> will roll back the transaction while the finish request contains commit as the 
> desired outcome. In that case, we expect the user to receive a 
> tx-was-rolled-back exception. Any consecutive user calls, both commit and 
> rollback, should not throw exceptions.
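
A rough sketch of the durable finish loop from the notes above (hypothetical 
names and types; the payload is computed once, only the recipient is 
re-resolved per attempt):
{code:java}
// Hypothetical sketch: the finish payload (tx outcome, commit timestamp,
// enlisted partitions) never changes across retries; the recipient is
// re-resolved via awaitPrimaryReplica, which covers both primary replica
// changes and commit partition recovery after majority loss.
class DurableFinish {
    interface PlacementDriver {
        String awaitPrimaryReplica(int commitPartition);
    }

    static class RecoverableSendException extends Exception {}

    void finish(PlacementDriver pd, int commitPartition, Object immutablePayload) {
        while (true) {
            String primary = pd.awaitPrimaryReplica(commitPartition);
            try {
                send(primary, immutablePayload);
                return; // finish acknowledged
            }
            catch (RecoverableSendException e) {
                // Primary changed or majority was temporarily lost: retry with
                // the same payload against the newly resolved primary.
            }
        }
    }

    void send(String node, Object payload) throws RecoverableSendException {
        // Network send (hypothetical).
    }
}
{code}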



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20710) Chain Raft command execution in PartitionReplicaListener

2023-10-23 Thread Roman Puchkovskiy (Jira)
Roman Puchkovskiy created IGNITE-20710:
--

 Summary: Chain Raft command execution in PartitionReplicaListener
 Key: IGNITE-20710
 URL: https://issues.apache.org/jira/browse/IGNITE-20710
 Project: Ignite
  Issue Type: Improvement
Reporter: Roman Puchkovskiy
Assignee: Roman Puchkovskiy
 Fix For: 3.0.0-beta2






--
This message was sent by Atlassian Jira
(v8.20.10#820010)