[jira] [Updated] (IGNITE-17613) Create incremental snapshot

2023-07-06 Thread YuJue Li (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YuJue Li updated IGNITE-17613:
--
Fix Version/s: 2.15

> Create incremental snapshot
> ---
>
> Key: IGNITE-17613
> URL: https://issues.apache.org/jira/browse/IGNITE-17613
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Maksim Timonin
>Assignee: Maksim Timonin
>Priority: Major
>  Labels: IEP-89, ise
> Fix For: 2.15
>
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> An incremental snapshot is a lightweight alternative to a full snapshot. It is 
> based on the non-blocking Consistent Cut algorithm and provides a collection of 
> WAL segments that hold the logical changes made since the previous snapshot 
> (full or incremental).
> An incremental snapshot should contain:
>  * compacted WAL segments;
>  * a metafile with the Consistent Cut to restore on;
>  * binary_meta, if it has changed since the previous snapshot.
> An incremental snapshot is stored within the full snapshot directory.
> Before creation, an incremental snapshot checks that:
>  * the base snapshot (at least its metafile) exists, and a metafile exists for 
> every previous incremental snapshot;
>  * there are no gaps in the WAL segments since the previous snapshot;
>  * the new _ConsistentCutVersion_ is greater than the versions of the previous 
> snapshots;
>  * the baseline topology and cacheGroups are the same as in the base snapshot.
> More info in IEP: 
> [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=211884314]
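A minimal sketch of the pre-creation checks listed above, using plain Java collections and illustrative names (this is not Ignite's actual API):

```java
import java.util.List;
import java.util.Objects;

// Hypothetical sketch of the pre-creation validation; names are illustrative.
public class IncrementalSnapshotChecks {
    /** Checks that there are no gaps in the archived WAL segment indexes. */
    public static void checkNoWalGaps(List<Long> archivedSegments, long from, long to) {
        for (long idx = from; idx <= to; idx++) {
            if (!archivedSegments.contains(idx))
                throw new IllegalStateException("Missing WAL segment: " + idx);
        }
    }

    /** The new ConsistentCutVersion must be strictly greater than the previous one. */
    public static void checkCutVersion(long prevVer, long newVer) {
        if (newVer <= prevVer)
            throw new IllegalStateException("ConsistentCutVersion must grow: " + newVer);
    }

    /** Baseline topology must match the base snapshot's topology. */
    public static void checkTopology(List<String> baseNodes, List<String> curNodes) {
        if (!Objects.equals(baseNodes, curNodes))
            throw new IllegalStateException("Baseline topology changed since base snapshot");
    }
}
```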
> 
> High-level process of creating an incremental snapshot (without the Consistent 
> Cut algorithm):
> * An incremental snapshot is started with the create snapshot command and the 
> --incremental flag.
> * An incremental snapshot consists of:
>   ** WAL segments since the previous incremental snapshot (or the full 
> snapshot, in the case of the first incremental snapshot);
>   ** changed binary meta and marshaller files;
>   ** the snapshot metafile.
> * The incremental snapshot is placed in the 
> {{work/snapshots/mybackup/increments/node01/0001}} folder, where:
>   ** mybackup - the name of the full snapshot;
>   ** node01 - the consistent id of the node;
>   ** 0001, 0002, etc. - the number of the incremental snapshot.
> * Incremental snapshot creation consists of the following actions, executed on 
> each node. The whole process is orchestrated by the {{DistributedProcess}} in 
> the same manner as full snapshot creation:
>   ** creates the snapshot folder;
>   ** awaits archiving of the required WAL segments;
>   ** copies (hard links) the required WAL segments to the incremental snapshot 
> folder;
>   ** creates the snapshot metafile.
> * Failover guarantees (removal of a partially created snapshot, etc.) should 
> be the same as for the full snapshot.
> * Removal of the full snapshot must also remove all the incremental snapshots 
> based on it.
> * Only the last incremental snapshot may be removed. If a later incremental 
> snapshot exists (0003, for example), then removal of any previous one (0001 or 
> 0002) must be restricted.
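The per-node creation steps and directory layout above can be sketched with plain java.nio as follows. This is a simplified illustration, not Ignite's implementation; the metafile name and content here are made up:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Simplified sketch of per-node incremental snapshot creation; not Ignite's code.
public class IncrementCreationSketch {
    /** Builds work/snapshots/<name>/increments/<consistentId>/<NNNN>. */
    public static Path incrementDir(Path workDir, String snapshot, String consistentId, int idx) {
        return workDir.resolve("snapshots").resolve(snapshot)
            .resolve("increments").resolve(consistentId)
            .resolve(String.format("%04d", idx));
    }

    /** Hard-links already-archived WAL segments into the snapshot folder, then writes a metafile. */
    public static void create(Path dir, List<Path> archivedSegments) throws IOException {
        Files.createDirectories(dir);                       // 1. create the snapshot folder
        for (Path seg : archivedSegments)                   // 2. segments are assumed archived
            Files.createLink(dir.resolve(seg.getFileName()), seg); // 3. hard link, no copy
        // 4. write a (made-up) metafile marking the increment as complete.
        Files.writeString(dir.resolve("meta.txt"), "segments=" + archivedSegments.size());
    }
}
```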
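The removal restriction in the last bullet can be sketched as a simple guard (illustrative names, not Ignite's API):

```java
import java.util.SortedSet;

// Hypothetical guard: only the newest increment may be removed.
public class IncrementRemovalGuard {
    /** Returns true only if {@code idx} is the highest existing increment index. */
    public static boolean canRemove(SortedSet<Integer> increments, int idx) {
        return !increments.isEmpty() && increments.last() == idx;
    }
}
```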



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19915) Remove obsolete IgniteCacheSnapshotManager

2023-07-06 Thread Nikolay Izhikov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Izhikov updated IGNITE-19915:
-
Labels: IEP-80 iep-43 ise  (was: IEP-80 iep-43)

> Remove obsolete IgniteCacheSnapshotManager
> --
>
> Key: IGNITE-19915
> URL: https://issues.apache.org/jira/browse/IGNITE-19915
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Nikolay Izhikov
>Assignee: Nikolay Izhikov
>Priority: Major
>  Labels: IEP-80, iep-43, ise
> Fix For: 2.16
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> IgniteSnapshotManager implements snapshotting features. 
> IgniteCacheSnapshotManager is obsolete and can be removed.





[jira] [Updated] (IGNITE-19930) Node fails with assertion error when cache with node filter is updated.

2023-07-06 Thread Mikhail Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Petrov updated IGNITE-19930:

Description: 
Reproducer (does not consistently reproduce the problem; locally, the following 
exceptions occur in half of the test runs):
{code:java}
/** */
public class CacheWithNodeFilterTest extends GridCommonAbstractTest {
    /** */
    @Test
    public void createCacheWithNodeFilterTest() throws Exception {
        IgniteEx ignite = startGrids(3);

        ignite.createCache(new CacheConfiguration<>(DEFAULT_CACHE_NAME).setNodeFilter(new TestNodeFilter()));

        grid(2).cache(DEFAULT_CACHE_NAME).put(0, 0);

        assertEquals(0, grid(0).cache(DEFAULT_CACHE_NAME).get(0));
    }

    /** */
    public static class TestNodeFilter implements IgnitePredicate<ClusterNode> {
        /** {@inheritDoc} */
        @Override public boolean apply(ClusterNode e) {
            return e.id().toString().endsWith("1");
        }
    }
}
{code}
When re-running the test above, two exceptions were observed:
{code:java}
[2023-07-06T22:51:24,212][ERROR][sys-stripe-0-#179%cache.CreateCacheWithNodeFilterTest2%][GridCacheIoManager]
 Failed processing message [senderId=b02639c3-1ac0-43d3-af3f-429019e1, 
msg=GridNearAtomicUpdateResponse [nodeId=edb94847-7cc8-4c64-bcd2-d081ebe2, 
futId=1, errs=null, ret=null, remapTopVer=AffinityTopologyVersion [topVer=3, 
minorTopVer=2], nearUpdates=null, partId=0, mapping=null, nodeLeft=false, 
super=GridCacheIdMessage [cacheId=1544803905, super=GridCacheMessage [msgId=25, 
depInfo=null, lastAffChangedTopVer=AffinityTopologyVersion [topVer=-1, 
minorTopVer=0], err=null, skipPrepare=false
 java.lang.AssertionError: null
    at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.onPrimaryResponse(GridNearAtomicSingleUpdateFuture.java:241)
 ~[classes/:?]
    at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.processNearAtomicUpdateResponse(GridDhtAtomicCache.java:3197)
 ~[classes/:?]
    at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$300(GridDhtAtomicCache.java:147)
 ~[classes/:?]
    at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$4.apply(GridDhtAtomicCache.java:290)
 ~[classes/:?]
    at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$4.apply(GridDhtAtomicCache.java:285)
 ~[classes/:?]
    at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1164)
 [classes/:?]
    at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:605)
 [classes/:?]
    at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:406)
 [classes/:?]
    at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:324)
 [classes/:?]
    at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:112)
 [classes/:?]
    at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:314)
 [classes/:?]
    at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1907)
 [classes/:?]
    at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1528)
 [classes/:?]
    at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$5300(GridIoManager.java:243)
 [classes/:?]
    at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.execute(GridIoManager.java:1421)
 [classes/:?]
    at 
org.apache.ignite.internal.managers.communication.TraceRunnable.run(TraceRunnable.java:55)
 [classes/:?]
    at 
org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:637)
 [classes/:?]
    at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125) 
[classes/:?]
    at java.lang.Thread.run(Thread.java:750) [?:1.8.0_351]
[2023-07-06T22:51:24,218][ERROR][sys-stripe-0-#179%cache.CreateCacheWithNodeFilterTest2%][IgniteTestResources]
 Critical system error detected. Will be handled accordingly to configured 
handler [hnd=NoOpFailureHandler [super=AbstractFailureHandler 
[ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, 
SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext 
[type=CRITICAL_ERROR, err=java.lang.AssertionError]]
 java.lang.AssertionError: null
    at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.onPrimaryResponse(GridNearAtomicSingleUpdateFuture.java:241)
 ~[classes/:?]
    at 

[jira] [Created] (IGNITE-19930) Node fails with assertion error when cache with node filter is updated.

2023-07-06 Thread Mikhail Petrov (Jira)
Mikhail Petrov created IGNITE-19930:
---

 Summary: Node fails with assertion error when cache with node 
filter is updated. 
 Key: IGNITE-19930
 URL: https://issues.apache.org/jira/browse/IGNITE-19930
 Project: Ignite
  Issue Type: Bug
Reporter: Mikhail Petrov


Reproducer:


{code:java}
/** */
public class CacheWithNodeFilterTest extends GridCommonAbstractTest {
    /** */
    @Test
    public void createCacheWithNodeFilterTest() throws Exception {
        IgniteEx ignite = startGrids(3);

        ignite.createCache(new CacheConfiguration<>(DEFAULT_CACHE_NAME).setNodeFilter(new TestNodeFilter()));

        grid(2).cache(DEFAULT_CACHE_NAME).put(0, 0);

        assertEquals(0, grid(0).cache(DEFAULT_CACHE_NAME).get(0));
    }

    /** */
    public static class TestNodeFilter implements IgnitePredicate<ClusterNode> {
        /** {@inheritDoc} */
        @Override public boolean apply(ClusterNode e) {
            return e.id().toString().endsWith("1");
        }
    }
}
{code}

Exception:


{code:java}
[2023-07-06T22:51:24,212][ERROR][sys-stripe-0-#179%cache.CreateCacheWithNodeFilterTest2%][GridCacheIoManager]
 Failed processing message [senderId=b02639c3-1ac0-43d3-af3f-429019e1, 
msg=GridNearAtomicUpdateResponse [nodeId=edb94847-7cc8-4c64-bcd2-d081ebe2, 
futId=1, errs=null, ret=null, remapTopVer=AffinityTopologyVersion [topVer=3, 
minorTopVer=2], nearUpdates=null, partId=0, mapping=null, nodeLeft=false, 
super=GridCacheIdMessage [cacheId=1544803905, super=GridCacheMessage [msgId=25, 
depInfo=null, lastAffChangedTopVer=AffinityTopologyVersion [topVer=-1, 
minorTopVer=0], err=null, skipPrepare=false
 java.lang.AssertionError: null
    at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.onPrimaryResponse(GridNearAtomicSingleUpdateFuture.java:241)
 ~[classes/:?]
    at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.processNearAtomicUpdateResponse(GridDhtAtomicCache.java:3197)
 ~[classes/:?]
    at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$300(GridDhtAtomicCache.java:147)
 ~[classes/:?]
    at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$4.apply(GridDhtAtomicCache.java:290)
 ~[classes/:?]
    at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$4.apply(GridDhtAtomicCache.java:285)
 ~[classes/:?]
    at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1164)
 [classes/:?]
    at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:605)
 [classes/:?]
    at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:406)
 [classes/:?]
    at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:324)
 [classes/:?]
    at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:112)
 [classes/:?]
    at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:314)
 [classes/:?]
    at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1907)
 [classes/:?]
    at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1528)
 [classes/:?]
    at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$5300(GridIoManager.java:243)
 [classes/:?]
    at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.execute(GridIoManager.java:1421)
 [classes/:?]
    at 
org.apache.ignite.internal.managers.communication.TraceRunnable.run(TraceRunnable.java:55)
 [classes/:?]
    at 
org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:637)
 [classes/:?]
    at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125) 
[classes/:?]
    at java.lang.Thread.run(Thread.java:750) [?:1.8.0_351]
[2023-07-06T22:51:24,218][ERROR][sys-stripe-0-#179%cache.CreateCacheWithNodeFilterTest2%][IgniteTestResources]
 Critical system error detected. Will be handled accordingly to configured 
handler [hnd=NoOpFailureHandler [super=AbstractFailureHandler 
[ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, 
SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext 
[type=CRITICAL_ERROR, err=java.lang.AssertionError]]
 java.lang.AssertionError: null
    at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.onPrimaryResponse(GridNearAtomicSingleUpdateFuture.java:241)
 ~[classes/:?]
    at 

[jira] [Updated] (IGNITE-19915) Remove obsolete IgniteCacheSnapshotManager

2023-07-06 Thread Nikolay Izhikov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Izhikov updated IGNITE-19915:
-
Fix Version/s: 2.16

> Remove obsolete IgniteCacheSnapshotManager
> --
>
> Key: IGNITE-19915
> URL: https://issues.apache.org/jira/browse/IGNITE-19915
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Nikolay Izhikov
>Assignee: Nikolay Izhikov
>Priority: Major
>  Labels: IEP-80, iep-43
> Fix For: 2.16
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> IgniteSnapshotManager implements snapshotting features. 
> IgniteCacheSnapshotManager is obsolete and can be removed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-19915) Remove obsolete IgniteCacheSnapshotManager

2023-07-06 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740778#comment-17740778
 ] 

Ignite TC Bot commented on IGNITE-19915:


{panel:title=Branch: [pull/10824/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10824/head] Base: [master] : No new tests 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7246200&buildTypeId=IgniteTests24Java8_RunAll]

> Remove obsolete IgniteCacheSnapshotManager
> --
>
> Key: IGNITE-19915
> URL: https://issues.apache.org/jira/browse/IGNITE-19915
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Nikolay Izhikov
>Assignee: Nikolay Izhikov
>Priority: Major
>  Labels: IEP-80, iep-43
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> IgniteSnapshotManager implements snapshotting features. 
> IgniteCacheSnapshotManager is obsolete and can be removed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19887) Transfer observable timestamp to read-only transaction

2023-07-06 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-19887:
---
Description: 
*Motivation*
An RO transaction has a timestamp that determines the moment at which data is 
read. To avoid waiting on safe time, this timestamp is supposed to lie in the 
past. It is derived from the observable timestamp and the current time, so that 
all data already observed locally can be retrieved.

*Implementation notes*
* The observable timestamp would be provided externally.
* Read timestamp is determined as {{max(observableTs, now() - 
safeTimePropagationFrequency - maxClockSkew)}}.
* Add a new method to start a read-only transaction with a specific observable 
timestamp:
{code}
/**
 * Starts a readonly transaction with an observable timestamp.
 *
 * @param observableTs Observable timestamp.
 * @return Read-only transaction.
 */
public ReadOnlyTransactionImpl begin(HybridTimestamp observableTs)
{code}

*Definition of done*
API for RO transactions in the past is implemented. The read transaction 
timestamp should be evaluated by the formula {{max(observableTs, now() - 
safeTimePropagationFrequency - maxClockSkew)}} and be available through 
{{ReadOnlyTransactionImpl.readTimestamp()}}
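The rule in the implementation notes boils down to a single {{max()}}. A minimal, self-contained sketch (the method name and plain millisecond units are illustrative assumptions, not the actual Ignite 3 API):

```java
public class ReadTimestampSketch {
    /**
     * Read timestamp for an RO transaction:
     * max(observableTs, now() - safeTimePropagationFrequency - maxClockSkew).
     * All values are plain epoch milliseconds for illustration.
     */
    static long readTimestamp(long observableTs, long nowMs,
                              long safeTimePropagationFrequencyMs,
                              long maxClockSkewMs) {
        long pastBound = nowMs - safeTimePropagationFrequencyMs - maxClockSkewMs;
        return Math.max(observableTs, pastBound);
    }

    public static void main(String[] args) {
        long now = 1_000_000L;
        // A fresh observable timestamp wins over the "in the past" bound.
        System.out.println(readTimestamp(999_900L, now, 500L, 100L)); // 999900
        // A stale observable timestamp is clamped up to now() - freq - skew.
        System.out.println(readTimestamp(100L, now, 500L, 100L));     // 999400
    }
}
```

The clamp keeps the read point recent enough to avoid waiting on safe time, while still covering everything the client has already observed locally.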


  was:
*Motivation*
An RO transaction has a timestamp that determines the moment at which data is 
read. To avoid waiting on safe time, this timestamp is supposed to lie in the 
past. It is derived from the observable timestamp and the current time, so that 
all data already observed locally can be retrieved.

*Implementation notes*
* The observable timestamp would be provided externally.
* Read timestamp is determined as {{max(observableTs, now() - 
safeTimePropagationFrequency)}}.
* Add a new method to start a read-only transaction with a specific observable 
timestamp:
{code}
/**
 * Starts a readonly transaction with an observable timestamp.
 *
 * @param observableTs Observable timestamp.
 * @return Read-only transaction.
 */
public ReadOnlyTransactionImpl begin(HybridTimestamp observableTs)
{code}

*Definition of done*
API for RO transactions in the past is implemented. The read transaction 
timestamp should be evaluated by the formula {{max(observableTs, now() - 
safeTimePropagationFrequency)}} and be available through 
{{ReadOnlyTransactionImpl.readTimestamp()}}



> Transfer observable timestamp to read-only transaction
> --
>
> Key: IGNITE-19887
> URL: https://issues.apache.org/jira/browse/IGNITE-19887
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> An RO transaction has a timestamp that determines the moment at which data is 
> read. To avoid waiting on safe time, this timestamp is supposed to lie in the 
> past. It is derived from the observable timestamp and the current time, so 
> that all data already observed locally can be retrieved.
> *Implementation notes*
> * The observable timestamp would be provided externally.
> * Read timestamp is determined as {{max(observableTs, now() - 
> safeTimePropagationFrequency - maxClockSkew)}}.
> * Add a new method to start a read-only transaction with a specific observable 
> timestamp:
> {code}
> /**
>  * Starts a readonly transaction with an observable timestamp.
>  *
>  * @param observableTs Observable timestamp.
>  * @return Read-only transaction.
>  */
> public ReadOnlyTransactionImpl begin(HybridTimestamp observableTs)
> {code}
> *Definition of done*
> API for RO transactions in the past is implemented. The read transaction 
> timestamp should be evaluated by the formula {{max(observableTs, now() - 
> safeTimePropagationFrequency - maxClockSkew)}} and be available through 
> {{ReadOnlyTransactionImpl.readTimestamp()}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19869) Design safe time propagation outside of replication protocol

2023-07-06 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-19869:
---
Epic Link: IGNITE-19929

> Design safe time propagation outside of replication protocol
> 
>
> Key: IGNITE-19869
> URL: https://issues.apache.org/jira/browse/IGNITE-19869
> Project: Ignite
>  Issue Type: Task
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> The safe time propagation mechanism currently uses the replication protocol 
> (Raft), which propagates every command to the log storage and the state 
> machine. We want to write a protocol that proposes safe time using direct 
> messages (bypassing the replication protocol).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-19929) Direct safe time propagation

2023-07-06 Thread Vladislav Pyatkov (Jira)
Vladislav Pyatkov created IGNITE-19929:
--

 Summary: Direct safe time propagation
 Key: IGNITE-19929
 URL: https://issues.apache.org/jira/browse/IGNITE-19929
 Project: Ignite
  Issue Type: Epic
Reporter: Vladislav Pyatkov


Safe time propagation currently works over the replication layer. That process 
replicates the timestamp with strong guarantees: it clogs the replication log 
and takes time from the replication flow. Meanwhile, such reliability is not 
needed for safe time.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19924) Catalog tests shouldn't guess object ids.

2023-07-06 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov updated IGNITE-19924:
--
Priority: Minor  (was: Major)

> Catalog tests shouldn't guess object ids.
> -
>
> Key: IGNITE-19924
> URL: https://issues.apache.org/jira/browse/IGNITE-19924
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Andrey Mashenkov
>Assignee: Andrey Mashenkov
>Priority: Minor
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Catalog service generates identifiers for new catalog objects. The id 
> generation strategy is unspecified, so there is no reason to expect that `id` 
> values increase monotonically.
> Also, the Catalog may create objects implicitly during initialization, such 
> as system views and/or default entities (schema, zone, etc.).
> Let's improve the tests and avoid relying on specific `id` values.
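The "don't guess ids" point can be illustrated with a tiny stand-in; everything below (`CatalogStub`, its methods, the id step) is hypothetical and not the Ignite 3 Catalog API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical in-memory stand-in for the catalog: the id generation strategy
// is deliberately opaque, as in the real service.
class CatalogStub {
    private final AtomicInteger seq = new AtomicInteger(17); // arbitrary start
    private final Map<Integer, String> tables = new HashMap<>();

    int createTable(String name) {
        int id = seq.addAndGet(3); // step != 1: ids are not densely monotonic
        tables.put(id, name);
        return id;
    }

    String table(int id) {
        return tables.get(id);
    }
}

public class CatalogIdStyle {
    public static void main(String[] args) {
        CatalogStub catalog = new CatalogStub();
        // Fragile style: asserting a guessed literal (e.g. id == 1) breaks as
        // soon as the strategy changes or implicit objects consume ids.
        // Robust style: capture whatever id the service handed out, reuse it.
        int id = catalog.createTable("T");
        System.out.println(catalog.table(id)); // prints "T"
    }
}
```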



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-19927) Sql. Improve test coverage for CREATE TABLE operation.

2023-07-06 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov reassigned IGNITE-19927:
-

Assignee: Andrey Mashenkov

> Sql. Improve test coverage for CREATE TABLE operation.
> --
>
> Key: IGNITE-19927
> URL: https://issues.apache.org/jira/browse/IGNITE-19927
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Andrey Mashenkov
>Assignee: Andrey Mashenkov
>Priority: Minor
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> As of now we have the `CatalogServiceSelfTest` unit test and the 
> `ItCreateTableDdlTest` integration test, both of which exercise negative 
> scenarios, but the scenarios differ.
> Let's fix CREATE TABLE command validation in the Catalog and ensure the 
> scenarios are the same for the unit and integration tests.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-19928) Fix method signature related to creating a new error group and registering a new error code

2023-07-06 Thread Vyacheslav Koptilin (Jira)
Vyacheslav Koptilin created IGNITE-19928:


 Summary: Fix method signature related to creating a new error 
group and registering a new error code
 Key: IGNITE-19928
 URL: https://issues.apache.org/jira/browse/IGNITE-19928
 Project: Ignite
  Issue Type: Bug
Reporter: Vyacheslav Koptilin
Assignee: Vyacheslav Koptilin
 Fix For: 3.0.0-beta2


The error group and the error code in the group are defined by two bytes each. 
On the other hand, the following methods define these parameters as int:

{code:java}
/**
 * Creates a new error group with the given {@code groupName} and {@code 
groupCode}.
 *
 * @param groupName Group name to be created.
 * @param groupCode Group code to be created.
 * @return New error group.
 * @throws IllegalArgumentException If the specified name or group code is 
already registered.
 *  or {@code groupCode} is greater than 0x or less than or equal 
to 0.
 *  Also, this exception is thrown if the given {@code groupName} is 
{@code null} or empty.
 */
public static synchronized ErrorGroup newGroup(String groupName, int 
groupCode)

/**
 * Registers a new error code within this error group.
 *
 * @param errorCode Error code to be registered.
 * @return Full error code which includes group code and specific error 
code.
 * @throws IllegalArgumentException If the given {@code errorCode} is 
already registered
 *  or {@code errorCode} is greater than 0x or less than or equal 
to 0.
 */
public int registerErrorCode(int errorCode)
{code}

This leads to runtime checks that the error code and the error group identifier 
are greater than or equal to 0 and less than or equal to 0x. This can obviously 
be avoided by changing the type from `int` to `short`. All related places 
should be corrected accordingly.
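The two-byte constraint can be made structural instead of checked at runtime. A minimal sketch of the packing (illustrative only; the real error-group internals may differ):

```java
public class ErrorCodeSketch {
    /**
     * Packs a two-byte group code and a two-byte error code into one int:
     * high 16 bits carry the group, low 16 bits the specific error.
     * With short parameters, the "fits in two bytes" property is enforced by
     * the type system; only a positivity check would remain at runtime.
     */
    static int fullCode(short groupCode, short errorCode) {
        return ((groupCode & 0xFFFF) << 16) | (errorCode & 0xFFFF);
    }

    public static void main(String[] args) {
        // Group 1, error 2 combine into 0x10002.
        System.out.println(Integer.toHexString(fullCode((short) 1, (short) 2))); // 10002
    }
}
```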



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-19821) Introduce TraceableException

2023-07-06 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin resolved IGNITE-19821.
--
Resolution: Duplicate

duplicates https://issues.apache.org/jira/browse/IGNITE-19864

> Introduce TraceableException
> 
>
> Key: IGNITE-19821
> URL: https://issues.apache.org/jira/browse/IGNITE-19821
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vyacheslav Koptilin
>Assignee: Vyacheslav Koptilin
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-19927) Sql. Improve test coverage for CREATE TABLE operation.

2023-07-06 Thread Andrey Mashenkov (Jira)
Andrey Mashenkov created IGNITE-19927:
-

 Summary: Sql. Improve test coverage for CREATE TABLE operation.
 Key: IGNITE-19927
 URL: https://issues.apache.org/jira/browse/IGNITE-19927
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Andrey Mashenkov
 Fix For: 3.0.0-beta2


As of now we have the `CatalogServiceSelfTest` unit test and the 
`ItCreateTableDdlTest` integration test, both of which exercise negative 
scenarios, but the scenarios differ.
Let's fix CREATE TABLE command validation in the Catalog and ensure the 
scenarios are the same for the unit and integration tests.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-19801) Configuration prematurely executes metastore revision update listener

2023-07-06 Thread Ivan Bessonov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Bessonov reassigned IGNITE-19801:
--

Assignee: Ivan Bessonov  (was: Kirill Tkalenko)

> Configuration prematurely executes metastore revision update listener
> -
>
> Key: IGNITE-19801
> URL: https://issues.apache.org/jira/browse/IGNITE-19801
> Project: Ignite
>  Issue Type: Bug
>Reporter: Kirill Tkalenko
>Assignee: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Now there is a problem: the configuration completes the *VersionedValue* on 
> every metastore revision update, even if no changes were received for the 
> configuration.
> This especially manifests itself when, for example, we add a table to the 
> catalog and, while listening to this event, try to update any 
> *VersionedValue*: we get an error because the version has already been 
> completed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19924) Catalog tests shouldn't guess object ids.

2023-07-06 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov updated IGNITE-19924:
--
Fix Version/s: 3.0.0-beta2

> Catalog tests shouldn't guess object ids.
> -
>
> Key: IGNITE-19924
> URL: https://issues.apache.org/jira/browse/IGNITE-19924
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Andrey Mashenkov
>Assignee: Andrey Mashenkov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Catalog service generates identifiers for new catalog objects. The id 
> generation strategy is unspecified, so there is no reason to expect that `id` 
> values increase monotonically.
> Also, the Catalog may create objects implicitly during initialization, such 
> as system views and/or default entities (schema, zone, etc.).
> Let's improve the tests and avoid relying on specific `id` values.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19926) Netty buffer memory leak

2023-07-06 Thread Mikhail Pochatkin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Pochatkin updated IGNITE-19926:
---
Description: 
[TeamCity 
(apache.org)|https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_IntegrationTests_RunAllOther/7344461?buildTab=overview=true=false=false=true]

 

Please unpin the TC run after the problem is fixed.

  was:[TeamCity 
(apache.org)|https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_IntegrationTests_RunAllOther/7344461?buildTab=overview=true=false=false=true]


> Netty buffer memory leak 
> -
>
> Key: IGNITE-19926
> URL: https://issues.apache.org/jira/browse/IGNITE-19926
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Pochatkin
>Priority: Major
>  Labels: ignite-3
>
> [TeamCity 
> (apache.org)|https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_IntegrationTests_RunAllOther/7344461?buildTab=overview=true=false=false=true]
>  
> Please unpin the TC run after the problem is fixed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-19926) Netty buffer memory leak

2023-07-06 Thread Mikhail Pochatkin (Jira)
Mikhail Pochatkin created IGNITE-19926:
--

 Summary: Netty buffer memory leak 
 Key: IGNITE-19926
 URL: https://issues.apache.org/jira/browse/IGNITE-19926
 Project: Ignite
  Issue Type: Bug
Reporter: Mikhail Pochatkin


[TeamCity 
(apache.org)|https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_IntegrationTests_RunAllOther/7344461?buildTab=overview=true=false=false=true]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19925) NodeStoppingException upon stopping an embedded Ignite node

2023-07-06 Thread Ivan Artiukhov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Artiukhov updated IGNITE-19925:

Component/s: persistence
 (was: general)

> NodeStoppingException upon stopping an embedded Ignite node
> ---
>
> Key: IGNITE-19925
> URL: https://issues.apache.org/jira/browse/IGNITE-19925
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Reporter: Ivan Artiukhov
>Priority: Major
>  Labels: ignite-3
> Attachments: NodeStoppingExceptionTest.java, 
> NodeStoppingExceptionTest.log
>
>
> See the attached reproducer.
> Steps:
>  - Start an embedded Ignite node.
>  - Create a table via key-value: 11 columns of type VARCHAR
>  - Insert 10 sample rows
>  - Stop the node via {{IgnitionManager#stop}}
> Expected result:
> No exceptions in the node's log
> Actual result:
> The following exception is seen:
> {noformat}
> Caused by: org.apache.ignite.lang.NodeStoppingException: IGN-CMN-1 
> TraceId:65d933f8-94bd-41e6-928d-7defcf52744c Operation has been cancelled 
> (node is stopping).
> at 
> org.apache.ignite.network.DefaultMessagingService.invoke0(DefaultMessagingService.java:227)
> at 
> org.apache.ignite.network.DefaultMessagingService.invoke(DefaultMessagingService.java:159)
> at 
> org.apache.ignite.network.MessagingService.invoke(MessagingService.java:145)
> at 
> org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupService.sendWithRetry(TopologyAwareRaftGroupService.java:211)
> at 
> org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupService.sendSubscribeMessage(TopologyAwareRaftGroupService.java:197)
> at 
> org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupService.unsubscribeLeader(TopologyAwareRaftGroupService.java:329)
> at 
> org.apache.ignite.internal.replicator.Replica.shutdown(Replica.java:278)
> at 
> java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1106)
> at 
> java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2235)
> at 
> org.apache.ignite.internal.replicator.ReplicaManager.stopReplicaInternal(ReplicaManager.java:410)
> at 
> org.apache.ignite.internal.replicator.ReplicaManager.stopReplica(ReplicaManager.java:385)
> at 
> org.apache.ignite.internal.table.distributed.TableManager.lambda$cleanUpTablesResources$30(TableManager.java:1093)
> at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
> at 
> org.apache.ignite.internal.table.distributed.TableManager.cleanUpTablesResources(TableManager.java:1119)
> at 
> org.apache.ignite.internal.table.distributed.TableManager.stop(TableManager.java:1045)
> at 
> org.apache.ignite.internal.app.LifecycleManager.lambda$stopAllComponents$1(LifecycleManager.java:133)
> at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
> at 
> org.apache.ignite.internal.app.LifecycleManager.stopAllComponents(LifecycleManager.java:131)
> at 
> org.apache.ignite.internal.app.LifecycleManager.stopNode(LifecycleManager.java:115)
> at org.apache.ignite.internal.app.IgniteImpl.stop(IgniteImpl.java:807)
> at 
> org.apache.ignite.internal.app.IgnitionImpl.lambda$stop$0(IgnitionImpl.java:109)
> at 
> java.base/java.util.concurrent.ConcurrentHashMap.computeIfPresent(ConcurrentHashMap.java:1822)
> at 
> org.apache.ignite.internal.app.IgnitionImpl.stop(IgnitionImpl.java:108)
> at org.apache.ignite.IgnitionManager.stop(IgnitionManager.java:96)
> at org.apache.ignite.IgnitionManager.stop(IgnitionManager.java:82)
> at 
> org.apache.ignite.example.AbstractExamplesTest.stopNode(AbstractExamplesTest.java:76)
> {noformat}
> {{git bisect}} says that the following commit introduced the bug (belongs to 
> IGNITE-19199):
> {noformat}
> b6004047b3c3e9cd91b5ccf28c26ee206c1e3a7f is the first bad commit
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19925) NodeStoppingException upon stopping an embedded Ignite node

2023-07-06 Thread Ivan Artiukhov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Artiukhov updated IGNITE-19925:

Description: 
See the attached reproducer.

Steps:
 - Start an embedded Ignite node.
 - Create a table via key-value: 11 columns of type VARCHAR
 - Insert 10 sample rows
 - Stop the node via {{IgnitionManager#stop}}

Expected result:
No exceptions in the node's log

Actual result:
The following exception is seen:
{noformat}
Caused by: org.apache.ignite.lang.NodeStoppingException: IGN-CMN-1 
TraceId:65d933f8-94bd-41e6-928d-7defcf52744c Operation has been cancelled (node 
is stopping).
at 
org.apache.ignite.network.DefaultMessagingService.invoke0(DefaultMessagingService.java:227)
at 
org.apache.ignite.network.DefaultMessagingService.invoke(DefaultMessagingService.java:159)
at 
org.apache.ignite.network.MessagingService.invoke(MessagingService.java:145)
at 
org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupService.sendWithRetry(TopologyAwareRaftGroupService.java:211)
at 
org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupService.sendSubscribeMessage(TopologyAwareRaftGroupService.java:197)
at 
org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupService.unsubscribeLeader(TopologyAwareRaftGroupService.java:329)
at 
org.apache.ignite.internal.replicator.Replica.shutdown(Replica.java:278)
at 
java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1106)
at 
java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2235)
at 
org.apache.ignite.internal.replicator.ReplicaManager.stopReplicaInternal(ReplicaManager.java:410)
at 
org.apache.ignite.internal.replicator.ReplicaManager.stopReplica(ReplicaManager.java:385)
at 
org.apache.ignite.internal.table.distributed.TableManager.lambda$cleanUpTablesResources$30(TableManager.java:1093)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
at 
org.apache.ignite.internal.table.distributed.TableManager.cleanUpTablesResources(TableManager.java:1119)
at 
org.apache.ignite.internal.table.distributed.TableManager.stop(TableManager.java:1045)
at 
org.apache.ignite.internal.app.LifecycleManager.lambda$stopAllComponents$1(LifecycleManager.java:133)
at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
at 
org.apache.ignite.internal.app.LifecycleManager.stopAllComponents(LifecycleManager.java:131)
at 
org.apache.ignite.internal.app.LifecycleManager.stopNode(LifecycleManager.java:115)
at org.apache.ignite.internal.app.IgniteImpl.stop(IgniteImpl.java:807)
at 
org.apache.ignite.internal.app.IgnitionImpl.lambda$stop$0(IgnitionImpl.java:109)
at 
java.base/java.util.concurrent.ConcurrentHashMap.computeIfPresent(ConcurrentHashMap.java:1822)
at 
org.apache.ignite.internal.app.IgnitionImpl.stop(IgnitionImpl.java:108)
at org.apache.ignite.IgnitionManager.stop(IgnitionManager.java:96)
at org.apache.ignite.IgnitionManager.stop(IgnitionManager.java:82)
at 
org.apache.ignite.example.AbstractExamplesTest.stopNode(AbstractExamplesTest.java:76)
{noformat}
{{git bisect}} says that the following commit introduced the bug (belongs to 
IGNITE-19199):
{noformat}
b6004047b3c3e9cd91b5ccf28c26ee206c1e3a7f is the first bad commit
{noformat}

  was:
See the attached reproducer.

Steps:
 - Start an embedded Ignite node.
 - Create a table via key-value: 11 columns of type VARCHAR
 - Insert 10 sample rows
 - Stop the node via {{IgnitionManager#stop}}

Expected result:
No exceptions in the node's log

Actual result:
The following exception is seen:
{noformat}
Caused by: org.apache.ignite.lang.NodeStoppingException: IGN-CMN-1 
TraceId:65d933f8-94bd-41e6-928d-7defcf52744c Operation has been cancelled (node 
is stopping).
at 
org.apache.ignite.network.DefaultMessagingService.invoke0(DefaultMessagingService.java:227)
at 
org.apache.ignite.network.DefaultMessagingService.invoke(DefaultMessagingService.java:159)
at 
org.apache.ignite.network.MessagingService.invoke(MessagingService.java:145)
at 
org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupService.sendWithRetry(TopologyAwareRaftGroupService.java:211)
at 
org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupService.sendSubscribeMessage(TopologyAwareRaftGroupService.java:197)
at 
org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupService.unsubscribeLeader(TopologyAwareRaftGroupService.java:329)
at 
org.apache.ignite.internal.replicator.Replica.shutdown(Replica.java:278)
at 
java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1106)
at 

[jira] [Updated] (IGNITE-19925) NodeStoppingException upon stopping an embedded Ignite node

2023-07-06 Thread Ivan Artiukhov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Artiukhov updated IGNITE-19925:

Description: 
See the attached reproducer.

Steps:
 - Start an embedded Ignite node.
 - Create a table via key-value: 11 columns of type VARCHAR
 - Insert 10 sample rows
 - Stop the node via {{IgnitionManager#stop}}

Expected result:
No exceptions in the node's log

Actual result:
The following exception is seen:
{noformat}
Caused by: org.apache.ignite.lang.NodeStoppingException: IGN-CMN-1 
TraceId:65d933f8-94bd-41e6-928d-7defcf52744c Operation has been cancelled (node 
is stopping).
at 
org.apache.ignite.network.DefaultMessagingService.invoke0(DefaultMessagingService.java:227)
at 
org.apache.ignite.network.DefaultMessagingService.invoke(DefaultMessagingService.java:159)
at 
org.apache.ignite.network.MessagingService.invoke(MessagingService.java:145)
at 
org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupService.sendWithRetry(TopologyAwareRaftGroupService.java:211)
at 
org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupService.sendSubscribeMessage(TopologyAwareRaftGroupService.java:197)
at 
org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupService.unsubscribeLeader(TopologyAwareRaftGroupService.java:329)
at 
org.apache.ignite.internal.replicator.Replica.shutdown(Replica.java:278)
at 
java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1106)
at 
java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2235)
at 
org.apache.ignite.internal.replicator.ReplicaManager.stopReplicaInternal(ReplicaManager.java:410)
at 
org.apache.ignite.internal.replicator.ReplicaManager.stopReplica(ReplicaManager.java:385)
at 
org.apache.ignite.internal.table.distributed.TableManager.lambda$cleanUpTablesResources$30(TableManager.java:1093)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
at 
org.apache.ignite.internal.table.distributed.TableManager.cleanUpTablesResources(TableManager.java:1119)
at 
org.apache.ignite.internal.table.distributed.TableManager.stop(TableManager.java:1045)
at 
org.apache.ignite.internal.app.LifecycleManager.lambda$stopAllComponents$1(LifecycleManager.java:133)
at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
at 
org.apache.ignite.internal.app.LifecycleManager.stopAllComponents(LifecycleManager.java:131)
at 
org.apache.ignite.internal.app.LifecycleManager.stopNode(LifecycleManager.java:115)
at org.apache.ignite.internal.app.IgniteImpl.stop(IgniteImpl.java:807)
at 
org.apache.ignite.internal.app.IgnitionImpl.lambda$stop$0(IgnitionImpl.java:109)
at 
java.base/java.util.concurrent.ConcurrentHashMap.computeIfPresent(ConcurrentHashMap.java:1822)
at 
org.apache.ignite.internal.app.IgnitionImpl.stop(IgnitionImpl.java:108)
at org.apache.ignite.IgnitionManager.stop(IgnitionManager.java:96)
at org.apache.ignite.IgnitionManager.stop(IgnitionManager.java:82)
at 
org.apache.ignite.example.AbstractExamplesTest.stopNode(AbstractExamplesTest.java:76)
{noformat}
{{git bisect}} says that the following commit introduced the bug:
{noformat}
b6004047b3c3e9cd91b5ccf28c26ee206c1e3a7f is the first bad commit
{noformat}

  was:
See the attached reproducer.

Steps:
- Start an embedded Ignite node.
- Create a table via SQL API: 11 columns of type VARCHAR
- Insert 10 sample rows
- Stop the node via {{IgnitionManager#stop}}

Expected result:
No exceptions in the node's log

Actual result:
The following exception is seen:

{noformat}
Caused by: org.apache.ignite.lang.NodeStoppingException: IGN-CMN-1 
TraceId:65d933f8-94bd-41e6-928d-7defcf52744c Operation has been cancelled (node 
is stopping).
at 
org.apache.ignite.network.DefaultMessagingService.invoke0(DefaultMessagingService.java:227)
at 
org.apache.ignite.network.DefaultMessagingService.invoke(DefaultMessagingService.java:159)
at 
org.apache.ignite.network.MessagingService.invoke(MessagingService.java:145)
at 
org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupService.sendWithRetry(TopologyAwareRaftGroupService.java:211)
at 
org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupService.sendSubscribeMessage(TopologyAwareRaftGroupService.java:197)
at 
org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupService.unsubscribeLeader(TopologyAwareRaftGroupService.java:329)
at 
org.apache.ignite.internal.replicator.Replica.shutdown(Replica.java:278)
at 
java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1106)
at 
java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2235)
at 

[jira] [Commented] (IGNITE-19925) NodeStoppingException upon stopping an embedded Ignite node

2023-07-06 Thread Ivan Artiukhov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740618#comment-17740618
 ] 

Ivan Artiukhov commented on IGNITE-19925:
-

The same exception occurs if data is inserted via either the SQL API or JDBC.

> NodeStoppingException upon stopping an embedded Ignite node
> ---
>
> Key: IGNITE-19925
> URL: https://issues.apache.org/jira/browse/IGNITE-19925
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Reporter: Ivan Artiukhov
>Priority: Major
>  Labels: ignite-3
> Attachments: NodeStoppingExceptionTest.java, 
> NodeStoppingExceptionTest.log
>
>
> See the attached reproducer.
> Steps:
>  - Start an embedded Ignite node.
>  - Create a table via the key-value API: 11 columns of type VARCHAR
>  - Insert 10 sample rows
>  - Stop the node via {{IgnitionManager#stop}}
> Expected result:
> No exceptions in the node's log
> Actual result:
> The following exception is seen:
> {noformat}
> Caused by: org.apache.ignite.lang.NodeStoppingException: IGN-CMN-1 
> TraceId:65d933f8-94bd-41e6-928d-7defcf52744c Operation has been cancelled 
> (node is stopping).
> at 
> org.apache.ignite.network.DefaultMessagingService.invoke0(DefaultMessagingService.java:227)
> at 
> org.apache.ignite.network.DefaultMessagingService.invoke(DefaultMessagingService.java:159)
> at 
> org.apache.ignite.network.MessagingService.invoke(MessagingService.java:145)
> at 
> org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupService.sendWithRetry(TopologyAwareRaftGroupService.java:211)
> at 
> org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupService.sendSubscribeMessage(TopologyAwareRaftGroupService.java:197)
> at 
> org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupService.unsubscribeLeader(TopologyAwareRaftGroupService.java:329)
> at 
> org.apache.ignite.internal.replicator.Replica.shutdown(Replica.java:278)
> at 
> java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1106)
> at 
> java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2235)
> at 
> org.apache.ignite.internal.replicator.ReplicaManager.stopReplicaInternal(ReplicaManager.java:410)
> at 
> org.apache.ignite.internal.replicator.ReplicaManager.stopReplica(ReplicaManager.java:385)
> at 
> org.apache.ignite.internal.table.distributed.TableManager.lambda$cleanUpTablesResources$30(TableManager.java:1093)
> at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
> at 
> org.apache.ignite.internal.table.distributed.TableManager.cleanUpTablesResources(TableManager.java:1119)
> at 
> org.apache.ignite.internal.table.distributed.TableManager.stop(TableManager.java:1045)
> at 
> org.apache.ignite.internal.app.LifecycleManager.lambda$stopAllComponents$1(LifecycleManager.java:133)
> at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
> at 
> org.apache.ignite.internal.app.LifecycleManager.stopAllComponents(LifecycleManager.java:131)
> at 
> org.apache.ignite.internal.app.LifecycleManager.stopNode(LifecycleManager.java:115)
> at org.apache.ignite.internal.app.IgniteImpl.stop(IgniteImpl.java:807)
> at 
> org.apache.ignite.internal.app.IgnitionImpl.lambda$stop$0(IgnitionImpl.java:109)
> at 
> java.base/java.util.concurrent.ConcurrentHashMap.computeIfPresent(ConcurrentHashMap.java:1822)
> at 
> org.apache.ignite.internal.app.IgnitionImpl.stop(IgnitionImpl.java:108)
> at org.apache.ignite.IgnitionManager.stop(IgnitionManager.java:96)
> at org.apache.ignite.IgnitionManager.stop(IgnitionManager.java:82)
> at 
> org.apache.ignite.example.AbstractExamplesTest.stopNode(AbstractExamplesTest.java:76)
> {noformat}
> {{git bisect}} says that the following commit introduced the bug:
> {noformat}
> b6004047b3c3e9cd91b5ccf28c26ee206c1e3a7f is the first bad commit
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-19925) NodeStoppingException upon stopping an embedded Ignite node

2023-07-06 Thread Ivan Artiukhov (Jira)
Ivan Artiukhov created IGNITE-19925:
---

 Summary: NodeStoppingException upon stopping an embedded Ignite 
node
 Key: IGNITE-19925
 URL: https://issues.apache.org/jira/browse/IGNITE-19925
 Project: Ignite
  Issue Type: Bug
  Components: general
Reporter: Ivan Artiukhov
 Attachments: NodeStoppingExceptionTest.java, 
NodeStoppingExceptionTest.log

See the attached reproducer.

Steps:
- Start an embedded Ignite node.
- Create a table via SQL API: 11 columns of type VARCHAR
- Insert 10 sample rows
- Stop the node via {{IgnitionManager#stop}}

Expected result:
No exceptions in the node's log

Actual result:
The following exception is seen:

{noformat}
Caused by: org.apache.ignite.lang.NodeStoppingException: IGN-CMN-1 
TraceId:65d933f8-94bd-41e6-928d-7defcf52744c Operation has been cancelled (node 
is stopping).
at 
org.apache.ignite.network.DefaultMessagingService.invoke0(DefaultMessagingService.java:227)
at 
org.apache.ignite.network.DefaultMessagingService.invoke(DefaultMessagingService.java:159)
at 
org.apache.ignite.network.MessagingService.invoke(MessagingService.java:145)
at 
org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupService.sendWithRetry(TopologyAwareRaftGroupService.java:211)
at 
org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupService.sendSubscribeMessage(TopologyAwareRaftGroupService.java:197)
at 
org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupService.unsubscribeLeader(TopologyAwareRaftGroupService.java:329)
at 
org.apache.ignite.internal.replicator.Replica.shutdown(Replica.java:278)
at 
java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1106)
at 
java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2235)
at 
org.apache.ignite.internal.replicator.ReplicaManager.stopReplicaInternal(ReplicaManager.java:410)
at 
org.apache.ignite.internal.replicator.ReplicaManager.stopReplica(ReplicaManager.java:385)
at 
org.apache.ignite.internal.table.distributed.TableManager.lambda$cleanUpTablesResources$30(TableManager.java:1093)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
at 
org.apache.ignite.internal.table.distributed.TableManager.cleanUpTablesResources(TableManager.java:1119)
at 
org.apache.ignite.internal.table.distributed.TableManager.stop(TableManager.java:1045)
at 
org.apache.ignite.internal.app.LifecycleManager.lambda$stopAllComponents$1(LifecycleManager.java:133)
at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
at 
org.apache.ignite.internal.app.LifecycleManager.stopAllComponents(LifecycleManager.java:131)
at 
org.apache.ignite.internal.app.LifecycleManager.stopNode(LifecycleManager.java:115)
at org.apache.ignite.internal.app.IgniteImpl.stop(IgniteImpl.java:807)
at 
org.apache.ignite.internal.app.IgnitionImpl.lambda$stop$0(IgnitionImpl.java:109)
at 
java.base/java.util.concurrent.ConcurrentHashMap.computeIfPresent(ConcurrentHashMap.java:1822)
at 
org.apache.ignite.internal.app.IgnitionImpl.stop(IgnitionImpl.java:108)
at org.apache.ignite.IgnitionManager.stop(IgnitionManager.java:96)
at org.apache.ignite.IgnitionManager.stop(IgnitionManager.java:82)
at 
org.apache.ignite.example.AbstractExamplesTest.stopNode(AbstractExamplesTest.java:76)
{noformat}

{{git bisect}} says that the following commit introduced the bug:

{noformat}
b6004047b3c3e9cd91b5ccf28c26ee206c1e3a7f is the first bad commit
{noformat}





--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-19924) Catalog tests shouldn't guess object ids.

2023-07-06 Thread Andrey Mashenkov (Jira)
Andrey Mashenkov created IGNITE-19924:
-

 Summary: Catalog tests shouldn't guess object ids.
 Key: IGNITE-19924
 URL: https://issues.apache.org/jira/browse/IGNITE-19924
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Andrey Mashenkov


The catalog service generates identifiers for new catalog objects. The id 
generation strategy is unspecified.

Thus, there is no reason to expect that ids will increase monotonically.
Also, the catalog may create objects implicitly during initialization, such as 
system views and/or default entities (schema, zone, and so on).

Let's improve the tests and avoid relying on specific `id` values.
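As a purely hypothetical illustration of the testing pattern (none of these names come from the Ignite code base), a test can capture the id returned by the catalog instead of assuming a particular value:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for a catalog whose id generation strategy is opaque.
class CatalogSketch {
    private final AtomicInteger gen = new AtomicInteger(7); // starting id is unspecified
    private final Map<String, Integer> ids = new HashMap<>();

    int createObject(String name) {
        int id = gen.getAndAdd(3); // the step between ids is unspecified as well
        ids.put(name, id);
        return id;
    }

    int idOf(String name) {
        return ids.get(name);
    }
}
```

A test then asserts against the id it captured from `createObject`, never against a literal such as `0` or `1`.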



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-19924) Catalog tests shouldn't guess object ids.

2023-07-06 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov reassigned IGNITE-19924:
-

Assignee: Andrey Mashenkov

> Catalog tests shouldn't guess object ids.
> -
>
> Key: IGNITE-19924
> URL: https://issues.apache.org/jira/browse/IGNITE-19924
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Andrey Mashenkov
>Assignee: Andrey Mashenkov
>Priority: Major
>  Labels: ignite-3
>
> The catalog service generates identifiers for new catalog objects. The id 
> generation strategy is unspecified.
> Thus, there is no reason to expect that ids will increase monotonically.
> Also, the catalog may create objects implicitly during initialization, such as 
> system views and/or default entities (schema, zone, and so on).
> Let's improve the tests and avoid relying on specific `id` values.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-19578) Decrease count of lease messages to meta storage

2023-07-06 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740576#comment-17740576
 ] 

Vladislav Pyatkov commented on IGNITE-19578:


Thank you for the contribution
Merged 8586df07f81460c5e89846fa2fc7cb3eaaa3b6b6

> Decrease count of lease messages to meta storage
> 
>
> Key: IGNITE-19578
> URL: https://issues.apache.org/jira/browse/IGNITE-19578
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Denis Chudov
>Assignee: Denis Chudov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> *Motivation* 
> A huge number of lease messages causes a serious load on the meta storage, 
> which impacts the overall performance of the cluster. Leases can be sent as a 
> single message to the meta storage, and then the size of this message can be 
> reduced as described in IGNITE-19819.
> *Definition of done*
> The count of meta storage invokes from the lease updater is significantly reduced.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-19578) Decrease count of lease messages to meta storage

2023-07-06 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740574#comment-17740574
 ] 

Vladislav Pyatkov commented on IGNITE-19578:


LGTM

> Decrease count of lease messages to meta storage
> 
>
> Key: IGNITE-19578
> URL: https://issues.apache.org/jira/browse/IGNITE-19578
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Denis Chudov
>Assignee: Denis Chudov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> *Motivation* 
> A huge number of lease messages causes a serious load on the meta storage, 
> which impacts the overall performance of the cluster. Leases can be sent as a 
> single message to the meta storage, and then the size of this message can be 
> reduced as described in IGNITE-19819.
> *Definition of done*
> The count of meta storage invokes from the lease updater is significantly reduced.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-19915) Remove obsolete IgniteCacheSnapshotManager

2023-07-06 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740549#comment-17740549
 ] 

Ignite TC Bot commented on IGNITE-19915:


{panel:title=Branch: [pull/10824/head] Base: [master] : Possible Blockers 
(1)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}Platform .NET (Core Linux){color} [[tests 0 TIMEOUT , Exit Code 
, TC_SERVICE_MESSAGE 
|https://ci2.ignite.apache.org/viewLog.html?buildId=7246626]]

{panel}
{panel:title=Branch: [pull/10824/head] Base: [master] : No new tests 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7246200buildTypeId=IgniteTests24Java8_RunAll]

> Remove obsolete IgniteCacheSnapshotManager
> --
>
> Key: IGNITE-19915
> URL: https://issues.apache.org/jira/browse/IGNITE-19915
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Nikolay Izhikov
>Assignee: Nikolay Izhikov
>Priority: Major
>  Labels: IEP-80, iep-43
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> IgniteSnapshotManager implements snapshotting features. 
> IgniteCacheSnapshotManager is obsolete and can be removed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-19667) ClientTableCommon.readTable should be async

2023-07-06 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740548#comment-17740548
 ] 

Pavel Tupitsyn commented on IGNITE-19667:
-

Merged to main: 00bbecb8a5f1339d395ecadd2606d7a0ba77ebc3

> ClientTableCommon.readTable should be async
> ---
>
> Key: IGNITE-19667
> URL: https://issues.apache.org/jira/browse/IGNITE-19667
> Project: Ignite
>  Issue Type: Improvement
>  Components: thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> *IgniteTablesInternal.tableAsync* is available now, so we should retrieve the 
> table asynchronously to avoid blocking.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-19912) Duplicated index creation using SQL leads to node start-up failure

2023-07-06 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740545#comment-17740545
 ] 

Ignite TC Bot commented on IGNITE-19912:


{panel:title=Branch: [pull/10822/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10822/head] Base: [master] : New Tests 
(2)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}Queries 1 (lazy=true){color} [[tests 
1|https://ci2.ignite.apache.org/viewLog.html?buildId=7245335]]
* {color:#013220}IgniteBinaryCacheQueryLazyTestSuite: 
DuplicateIndexCreationTest.testIndexCreation - PASSED{color}

{color:#8b}Queries 1{color} [[tests 
1|https://ci2.ignite.apache.org/viewLog.html?buildId=7245334]]
* {color:#013220}IgniteBinaryCacheQueryTestSuite: 
DuplicateIndexCreationTest.testIndexCreation - PASSED{color}

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7245366buildTypeId=IgniteTests24Java8_RunAll]

> Duplicated index creation using SQL leads to node start-up failure
> --
>
> Key: IGNITE-19912
> URL: https://issues.apache.org/jira/browse/IGNITE-19912
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.15
>Reporter: Evgeny Stanilovsky
>Assignee: Evgeny Stanilovsky
>Priority: Major
> Attachments: DuplicateIndexCreationTest.java
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When an index for a field is specified using the QuerySqlFields(index=true) 
> annotation, it's possible to create multiple additional indices for the same 
> field using a CREATE INDEX IF NOT EXISTS statement without an explicit index 
> name. As a result, all indices that were created via SQL have the same name, 
> which leads to a node failure on the next restart with an "Index with name 
> 'person_name_asc_idx' already exists." exception.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19904) Assertion in defragmentation

2023-07-06 Thread Vladimir Steshin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Steshin updated IGNITE-19904:
--
Attachment: failure2.16_with_thread_dump.log

> Assertion in defragmentation
> 
>
> Key: IGNITE-19904
> URL: https://issues.apache.org/jira/browse/IGNITE-19904
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.12
>Reporter: Vladimir Steshin
>Priority: Major
>  Labels: ise
> Attachments: default-config.xml, failure2.16_with_thread_dump.log, 
> ignite.log, ignite_wierd_other_failureNPE.log, jvm.opts
>
>
> Defragmentaion fails with:
> {code:java}
> java.lang.AssertionError: Invalid state. Type is 0! pageId = 0001000d00024cbf
>   at 
> org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.copyPageForCheckpoint(PageMemoryImpl.java:1359)
>  ~[ignite-core-2.16.0-SNAPSHOT.jar:2.16.0-SNAPSHOT]
>   at 
> org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.checkpointWritePage(PageMemoryImpl.java:1277)
>  ~[ignite-core-2.16.0-SNAPSHOT.jar:2.16.0-SNAPSHOT]
>   at 
> org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointPagesWriter.writePages(CheckpointPagesWriter.java:208)
>  ~[ignite-core-2.16.0-SNAPSHOT.jar:2.16.0-SNAPSHOT]
>   at 
> org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointPagesWriter.run(CheckpointPagesWriter.java:150)
>  ~[ignite-core-2.16.0-SNAPSHOT.jar:2.16.0-SNAPSHOT]
> {code}
> It is difficult to write a test. I can't reproduce it on my computers :(. It 
> flakily appears on a server (4 cores x 4 CPUs) with 100G of test cache data and 
> a million+ pages to checkpoint during defragmentation. More often, this occurs 
> with pageSize 1024 (to produce more pages).
> Judging by my diagnostic build, I suppose that a fresh, empty page is caught 
> by defragmentation. Here is a page dump with a test-extended PAGE_OVERHEAD 
> (=64) and the same error a bit before copyPageForCheckpoint():
> {code:java}
> org.apache.ignite.IgniteException: Wrong page type in checkpointWritePage1. 
> Page: Data region = 'defragPartitionsDataRegion'.
>  FullPageId [pageId=281878703760205, effectivePageId=403727049549, 
> grpId=-1368047378].
>  PageDump = page_id: 281878703760205, rel_id: 48603, cache_id: -1368047378, 
> pin: 0, lock: 65536, tmp_buf: 72057594037927935, test_val: 1. data_hex: 
> 
>   at 
> org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.checkpointWritePage(PageMemoryImpl.java:1240)
>  ~[ignite-core-2.16.0-SNAPSHOT.jar:2.16.0-SNAPSHOT]
>   at 
> org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointPagesWriter.writePages(CheckpointPagesWriter.java:208)
>  

[jira] [Assigned] (IGNITE-19640) [IEP-104] Add ignite-cdc backup mode

2023-07-06 Thread Maksim Timonin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Timonin reassigned IGNITE-19640:
---

Assignee: Maksim Timonin

> [IEP-104] Add ignite-cdc backup mode
> 
>
> Key: IGNITE-19640
> URL: https://issues.apache.org/jira/browse/IGNITE-19640
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Maksim Timonin
>Assignee: Maksim Timonin
>Priority: Major
>  Labels: IEP-104, ise
>
> ignite-cdc should fetch IgniteConfiguration and run in backup mode if 
> RealtimeCdc is enabled:
>  # In backup mode, it should consume Cdc WAL records
>  # Switch to the active state after receiving StopRealtimeCdcRecord
>  # Persist the actual state.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19567) Wrong consistentId is used for verifying incremental snapshot

2023-07-06 Thread Maksim Timonin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Timonin updated IGNITE-19567:

Description: 
1.start a node with consistentId A;

2.insert some data;

3../control.sh --snapshot create full

4.kill A;

5.start a node with consistentId B;

6../control.sh --snapshot restore full.

this is OK.

But, if an incremental snapshot is created, an error will be reported during 
recovery:

1.start a node with consistentId A;

2.insert some data;

3../control.sh --snapshot create full --incremental

4.kill A;

5.start a node with consistentId B;

6../control.sh --snapshot restore full --increment 1.

There is a wrong check. It verifies that increments match the full snapshot. 
But instead of getting consistentId from the full snapshot metafile, it gets it 
from the local node.

 

---

There is a wrong check. It verifies that increments match the full snapshot. 
But instead of getting consistentId from the full snapshot metafile, it gets it 
from the local node.
 

A possible solution is:
 # Calculate the affinity assignment based on the existing snapshot data (partitions)
 # Validate that this affinity is correct
 # Check that the local snapshot and the incremental snapshots match
 # Restore the incremental snapshot in this case.

This can be implemented at the SnapshotRestoreProcess#preload stage.
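The core of the intended check can be sketched as follows. This is only an illustration with assumed names, not the actual Ignite implementation: the increment must be validated against the consistentId recorded in the full snapshot metafile, because the local node's consistentId may differ after a restart.

```java
// Sketch (assumed names): validate an increment against the consistentId
// stored in the full snapshot metafile, not against the local node's
// consistentId, which changes when the node restarts as a different node.
class IncrementCheckSketch {
    static void validateIncrement(String metafileConsistentId, String incrementConsistentId) {
        if (!metafileConsistentId.equals(incrementConsistentId))
            throw new IllegalStateException("Increment does not match base snapshot: "
                + incrementConsistentId + " != " + metafileConsistentId);
    }
}
```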

 

  was:
1.start a node with consistentId A;

2.insert some data;


3../control.sh --snapshot create full


4.kill A;

5.start a node with consistentId B;

6../control.sh --snapshot restore full.

this is OK.

But, if an incremental snapshot is created, an error will be reported during 
recovery:


1.start a node with consistentId A;

2.insert some data;


3../control.sh --snapshot create full --incremental


4.kill A;

5.start a node with consistentId B;

6../control.sh --snapshot restore full --increment 1.

There is a wrong check. It verifies that increments match the full snapshot. 
But instead of getting consistentId from the full snapshot metafile, it gets it 
from the local node.
 


> Wrong consistentId is used for verifying incremental snapshot
> -
>
> Key: IGNITE-19567
> URL: https://issues.apache.org/jira/browse/IGNITE-19567
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.15
>Reporter: Maksim Timonin
>Assignee: Maksim Timonin
>Priority: Major
>  Labels: IEP-89
> Fix For: 2.16
>
>
> 1.start a node with consistentId A;
> 2.insert some data;
> 3../control.sh --snapshot create full
> 4.kill A;
> 5.start a node with consistentId B;
> 6../control.sh --snapshot restore full.
> this is OK.
> But, if an incremental snapshot is created, an error will be reported during 
> recovery:
> 1.start a node with consistentId A;
> 2.insert some data;
> 3../control.sh --snapshot create full --incremental
> 4.kill A;
> 5.start a node with consistentId B;
> 6../control.sh --snapshot restore full --increment 1.
> There is a wrong check. It verifies that increments match the full snapshot. 
> But instead of getting consistentId from the full snapshot metafile, it gets 
> it from the local node.
>  
> ---
> There is a wrong check. It verifies that increments match the full snapshot. 
> But instead of getting consistentId from the full snapshot metafile, it gets 
> it from the local node.
>  
> A possible solution is:
>  # Calculate the affinity assignment based on the existing snapshot data (partitions)
>  # Validate that this affinity is correct
>  # Check that the local snapshot and the incremental snapshots match
>  # Restore the incremental snapshot in this case.
> This can be implemented at the SnapshotRestoreProcess#preload stage. 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-19888) Track observable timestamp on client

2023-07-06 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740543#comment-17740543
 ] 

Vladislav Pyatkov commented on IGNITE-19888:


[~isapego] Sure, I thought it would be done by you or someone from your team.
It is just a ticket to track all the client-side work that is supposed to be 
done within the scope of the epic activity.

> Track observable timestamp on client
> 
>
> Key: IGNITE-19888
> URL: https://issues.apache.org/jira/browse/IGNITE-19888
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Reporter: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> The read timestamp for an RO transaction is supposed to be determined taking 
> the client timestamp into account, in order to linearize client transactions.
> *Implementation notes*
> * Responses that start an RO transaction (IGNITE-19887) or commit an RW 
> transaction (IGNITE-19886) have to provide a timestamp.
> * Responses that start SQL might also provide a specific timestamp (if they 
> start an RO transaction internally); IGNITE-19898 will implement the concrete 
> method to retrieve the timestamp.
> * The current server timestamp ({{clock.now()}}) should be inserted into other 
> (except the cases above) transaction responses.
> * If a server response does not have a timestamp, or the timestamp is less 
> than the one the client already has, do nothing.
> * If the time is greater than what the client has, the client timestamp should 
> be updated.
> * The timestamp is used to start RO transactions (IGNITE-19887)
> *Definition of done*
> The timestamp is passed from the server side to the client. The client just 
> saves the timestamp and sends it in each request to the server side.
> All client-side created RO transactions should execute in the past, with a 
> timestamp determined by the client timestamp.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-19923) ODBC 3.0: Document ODBC in Ignite 3

2023-07-06 Thread Igor Gusev (Jira)
Igor Gusev created IGNITE-19923:
---

 Summary: ODBC 3.0: Document ODBC in Ignite 3
 Key: IGNITE-19923
 URL: https://issues.apache.org/jira/browse/IGNITE-19923
 Project: Ignite
  Issue Type: Task
  Components: documentation
Reporter: Igor Gusev






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19888) Track observable timestamp on client

2023-07-06 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-19888:
---
Description: 
*Motivation*
The read timestamp for an RO transaction is supposed to be determined taking 
the client timestamp into account, in order to linearize client transactions.

*Implementation notes*
* Responses that start an RO transaction (IGNITE-19887) or commit an RW 
transaction (IGNITE-19886) have to provide a timestamp.
* Responses that start SQL might also provide a specific timestamp (if they 
start an RO transaction internally); IGNITE-19898 will implement the concrete 
method to retrieve the timestamp.
* The current server timestamp ({{clock.now()}}) should be inserted into other 
(except the cases above) transaction responses.
* If a server response does not have a timestamp, or the timestamp is less than 
the one the client already has, do nothing.
* If the time is greater than what the client has, the client timestamp should 
be updated.
* The timestamp is used to start RO transactions (IGNITE-19887)

*Definition of done*
The timestamp is passed from the server side to the client. The client just 
saves the timestamp and sends it in each request to the server side.
All client-side created RO transactions should execute in the past, with a 
timestamp determined by the client timestamp.
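The client-side behavior described above can be sketched like this; it is a minimal illustration with assumed names, not the actual client code:

```java
// Tracks the greatest timestamp seen in server responses; the tracked value
// would be attached to every outgoing request and used to start RO transactions.
class ObservableTimestampTracker {
    private long observableTs;

    synchronized void onServerResponse(Long serverTs) {
        // Ignore responses without a timestamp or with an older one.
        if (serverTs != null && serverTs > observableTs)
            observableTs = serverTs;
    }

    synchronized long current() {
        return observableTs;
    }
}
```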

  was:
*Motivation*
The read timestamp for an RO transaction is supposed to be determined taking 
the client timestamp into account, in order to linearize client transactions.

*Implementation notes*
* Responses that start an RO transaction (IGNITE-19887) or commit an RW 
transaction (IGNITE-19886) have to provide a timestamp.
* The current server timestamp ({{clock.now()}}) should be inserted into other 
(except the cases above) transaction responses.
* If a server response does not have a timestamp, or the timestamp is less than 
the one the client already has, do nothing.
* If the time is greater than what the client has, the client timestamp should 
be updated.
* The timestamp is used to start RO transactions (IGNITE-19887)

*Definition of done*
The timestamp is passed from the server side to the client. The client just 
saves the timestamp and sends it in each request to the server side.
All client-side created RO transactions should execute in the past, with a 
timestamp determined by the client timestamp.


> Track observable timestamp on client
> 
>
> Key: IGNITE-19888
> URL: https://issues.apache.org/jira/browse/IGNITE-19888
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Reporter: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> The read timestamp for an RO transaction is supposed to be determined taking 
> the client timestamp into account, in order to linearize client transactions.
> *Implementation notes*
> * Responses that start an RO transaction (IGNITE-19887) or commit an RW 
> transaction (IGNITE-19886) have to provide a timestamp.
> * Responses that start SQL might also provide a specific timestamp (if they 
> start an RO transaction internally); IGNITE-19898 will implement the concrete 
> method to retrieve the timestamp.
> * The current server timestamp ({{clock.now()}}) should be inserted into other 
> (except the cases above) transaction responses.
> * If a server response does not have a timestamp, or the timestamp is less 
> than the one the client already has, do nothing.
> * If the time is greater than what the client has, the client timestamp should 
> be updated.
> * The timestamp is used to start RO transactions (IGNITE-19887)
> *Definition of done*
> The timestamp is passed from the server side to the client. The client just 
> saves the timestamp and sends it in each request to the server side.
> All client-side created RO transactions should execute in the past, with a 
> timestamp determined by the client timestamp.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19898) SQL implicit RO transaction should used observation timestamp

2023-07-06 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-19898:
---
Description: 
*Motivation*
Reorganizing the processing of RO transactions requires choosing a specific 
timestamp for an implicit RO SQL transaction.

*Implementation notes*
* The observable timestamp is passed through the API and propagated to the SQL 
engine.
* SQL uses the observable timestamp when it has to start an RO transaction 
(IGNITE-19887 is used to start an RO transaction with the specified observable 
timestamp).
* The read timestamp, which is obtained through the transaction API 
{{ReadOnlyTransactionImpl#readTimestamp}}, should be available to the invoking 
side.

*Definition of done*
A SELECT in an implicit transaction should execute with an observable timestamp. 
The read timestamp, which is used for the RO transaction calculation, is 
returned to the invoking side.

  was:
*Motivation*
Reorganizing the processing of RO transactions requires choosing a specific 
timestamp for an implicit RO SQL transaction.

*Implementation notes*
The observation timestamp is passed through the API and propagated to the SQL 
engine.
If the SQL script does not require starting an implicit RO transaction, the 
timestamp is not used.
If the SQL starts an RO transaction, the transaction should be created using 
the timestamp.

*Definition of done*
A SELECT in an implicit transaction should execute with an observation timestamp. 


> SQL implicit RO transaction should use observation timestamp
> -
>
> Key: IGNITE-19898
> URL: https://issues.apache.org/jira/browse/IGNITE-19898
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> Reorganization of RO transaction processing requires choosing a specific 
> timestamp for an implicit RO SQL transaction.
> *Implementation notes*
> * The observable timestamp is passed through the API and propagated to the 
> SQL engine.
> * SQL uses the observable timestamp when it has to start an RO transaction 
> (IGNITE-19887 is used to start the RO transaction with the specified 
> observable timestamp).
> * The read timestamp, obtained through the transaction API 
> {{ReadOnlyTransactionImpl#readTimestamp}}, should be available to the 
> invoking side.
> *Definition of done*
> A SELECT in an implicit transaction should execute with an observable 
> timestamp. The read timestamp, which is used for the RO transaction, is 
> returned to the invoking side.





[jira] [Updated] (IGNITE-19887) Transfer observable timestamp to read-only transaction

2023-07-06 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-19887:
---
Description: 
*Motivation*
An RO transaction has a timestamp that determines the moment at which data is 
read. To avoid waiting for safe time, the timestamp is supposed to be in the 
past. The timestamp is determined from the observable timestamp and the current 
time, so that all data that is locally viewed can be retrieved.

*Implementation notes*
* The observable timestamp is provided externally.
* The read timestamp is determined as {{max(observableTs, now() - 
safeTimePropagationFrequency)}}.
* Add a new method to start a read-only transaction with a specific observable 
timestamp:
{code}
/**
 * Starts a read-only transaction with an observable timestamp.
 *
 * @param observableTs Observable timestamp.
 * @return Read-only transaction.
 */
public ReadOnlyTransactionImpl begin(HybridTimestamp observableTs)
{code}

*Definition of done*
An API for RO transactions in the past is implemented. The read transaction 
timestamp should be evaluated by the formula {{max(observableTs, now() - 
safeTimePropagationFrequency)}} and be available through 
{{ReadOnlyTransactionImpl.readTimestamp()}}.
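The read-timestamp formula above can be sketched as follows. {{HybridTimestamp}} is simplified here to a plain {{long}}, and all names are illustrative rather than Ignite's actual internals:

```java
// Sketch of the rule: readTs = max(observableTs, now() - safeTimePropagationFrequency).
// Timestamps are simplified to plain longs; names are illustrative only.
final class ReadTimestamps {
    static long readTimestamp(long observableTs, long now, long safeTimePropagationFrequency) {
        // Pick a point in the past, but never before what was already observed.
        return Math.max(observableTs, now - safeTimePropagationFrequency);
    }
}
```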


  was:
*Motivation*
An RO transaction has a timestamp that determines the moment at which data is 
read. To avoid waiting for safe time, the timestamp is supposed to be in the 
past. The timestamp is determined from the latest observation timestamp and the 
current time, so that all data that is locally viewed can be retrieved.
*Implementation notes*
* The latest observable timestamp is provided externally (from the client, or 
from the local context for a server node).
* The read timestamp is determined as {{max(lastObservableTs, now() - 
safeTimePropagationFrequency)}}.
* Add a new method to start a read-only transaction with a specific timestamp:
{code}
/**
 * Starts a read-only transaction with the last observable timestamp.
 *
 * @param lastObservableTs Read timestamp.
 * @return Read-only transaction.
 */
public ReadOnlyTransactionImpl begin(HybridTimestamp lastObservableTs)
{code}

*Definition of done*
An API for RO transactions in the past is implemented. The read transaction 
timestamp should be evaluated by the formula {{max(lastObservableTs, now() - 
safeTimePropagationFrequency)}} and be available through 
{{ReadOnlyTransactionImpl.readTimestamp()}}.



> Transfer observable timestamp to read-only transaction
> --
>
> Key: IGNITE-19887
> URL: https://issues.apache.org/jira/browse/IGNITE-19887
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> An RO transaction has a timestamp that determines the moment at which data is 
> read. To avoid waiting for safe time, the timestamp is supposed to be in the 
> past. The timestamp is determined from the observable timestamp and the 
> current time, so that all data that is locally viewed can be retrieved.
> *Implementation notes*
> * The observable timestamp is provided externally.
> * The read timestamp is determined as {{max(observableTs, now() - 
> safeTimePropagationFrequency)}}.
> * Add a new method to start a read-only transaction with a specific 
> observable timestamp:
> {code}
> /**
>  * Starts a read-only transaction with an observable timestamp.
>  *
>  * @param observableTs Observable timestamp.
>  * @return Read-only transaction.
>  */
> public ReadOnlyTransactionImpl begin(HybridTimestamp observableTs)
> {code}
> *Definition of done*
> An API for RO transactions in the past is implemented. The read transaction 
> timestamp should be evaluated by the formula {{max(observableTs, now() - 
> safeTimePropagationFrequency)}} and be available through 
> {{ReadOnlyTransactionImpl.readTimestamp()}}.





[jira] [Updated] (IGNITE-19889) Implement observable timestamp on server

2023-07-06 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-19889:
---
Summary: Implement observable timestamp on server  (was: Implement last 
oservable timestamp for server)

> Implement observable timestamp on server
> 
>
> Key: IGNITE-19889
> URL: https://issues.apache.org/jira/browse/IGNITE-19889
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> The client timestamp is used to determine a read timestamp for RO 
> transactions on the client side (IGNITE-19888). For consistent behavior, a 
> similar timestamp needs to be implemented on the server.
> *Implementation note*
> The last server observable timestamp should be updated at least when a 
> transaction is committed.
> Any RO transaction should use the timestamp: for SQL (IGNITE-19898) and 
> through the key-value API (IGNITE-19887).
> *Definition of done*
> All server-side created RO transactions should execute in the past, at a 
> timestamp determined by the last observation time.





[jira] [Updated] (IGNITE-19887) Transfer observable timestamp to read-only transaction

2023-07-06 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-19887:
---
Summary: Transfer observable timestamp to read-only transaction  (was: Add 
internal API to pass read timestamp to read-only transaction)

> Transfer observable timestamp to read-only transaction
> --
>
> Key: IGNITE-19887
> URL: https://issues.apache.org/jira/browse/IGNITE-19887
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> RO transaction has timestamp which determine a moment when data will be read. 
> To avoid waiting, safe time is supposed to provide the timestamp in the past. 
> The timestamp is determined by the latest observation timestamp and current 
> time in order to be available to retrieve all data which is locally viewed.
> *Implementation notes*
> * The latest observable timestamp would be provided externally (from client, 
> from local context for server node).
> * Read timestamp is determined as {{max(lastObservableTs, now() - 
> safeTimePropagationFrequency)}}
> * Add a new method to start read only transaction with specific timestamp:
> {code}
> /**
>  * Starts a readonly transaction with last observable timestamp.
>  *
>  * @param lastObservableTs Read timestamp.
>  * @return Reade only transaction.
>  */
> public ReadOnlyTransactionImpl begin(HybridTimestamp lastObservableTs)
> {code}
> *Definition of done*
> API for RO transaction in past is implemented. The read transaction timestamp 
> should evaluate by formula: {{max(lastObservableTs, now() - 
> safeTimePropagationFrequency)}} and available through 
> {{ReadOnlyTransactionImpl .readTimestamp()}}





[jira] [Updated] (IGNITE-19886) Retrieve commit timestamp

2023-07-06 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-19886:
---
Description: 
*Motivation*
The commit timestamp is generated by the transaction coordinator for a 
particular transaction at the time the transaction is committed. No other node 
can get the timestamp through the API. A client node cannot be the transaction 
coordinator (because that role is server-only), but the client has to use this 
timestamp to track the observable one.
*Implementation notes*
Extend the {{InternalTransaction}} interface with the method:
{code}
/**
 * Finishes the transaction.
 *
 * @param commit True if the transaction is committed, false if it is rolled back.
 * @return Future with the commit timestamp, or {@code null} if a timestamp is not 
 * specified for the transaction type.
 */
CompletableFuture finish(boolean commit);
{code}

*Definition of done*
The transaction commit timestamp can be obtained through the 
{{InternalTransaction}} API.
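A minimal sketch of the {{finish()}} contract described above; the class and field names are illustrative, not Ignite's real implementation, and the timestamp is simplified to a {{Long}}:

```java
import java.util.concurrent.CompletableFuture;

// Minimal sketch of the finish() contract: the future completes with the
// commit timestamp on commit, or with null on rollback. Names are illustrative.
final class SketchTransaction {
    private final Long commitTs; // would come from the coordinator's clock

    SketchTransaction(long coordinatorNow) {
        this.commitTs = coordinatorNow;
    }

    /** Completes with the commit timestamp, or null if none applies. */
    CompletableFuture<Long> finish(boolean commit) {
        return CompletableFuture.completedFuture(commit ? commitTs : null);
    }
}
```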

  was:
*Motivation*
The timestamp is generated in the commit partition for a particular transaction 
at the time the transaction is committed. No other node can get the timestamp 
through the API, but a client node has to have the possibility to get it.
*Implementation notes*
Extend the {{InternalTransaction}} interface with the method:
{code}
/**
 * Finishes the transaction.
 *
 * @param commit True if the transaction is committed, false if it is rolled back.
 * @return Future with the commit timestamp, or {@code null} if a timestamp is not 
 * specified for the transaction type.
 */
CompletableFuture finish(boolean commit);
{code}

*Definition of done*
The transaction commit timestamp can be obtained through the API.


> Retrieve commit timestamp
> 
>
> Key: IGNITE-19886
> URL: https://issues.apache.org/jira/browse/IGNITE-19886
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> The commit timestamp is generated by the transaction coordinator for a 
> particular transaction at the time the transaction is committed. No other 
> node can get the timestamp through the API. A client node cannot be the 
> transaction coordinator (because that role is server-only), but the client 
> has to use this timestamp to track the observable one.
> *Implementation notes*
> Extend the {{InternalTransaction}} interface with the method:
> {code}
> /**
>  * Finishes the transaction.
>  *
>  * @param commit True if the transaction is committed, false if it is rolled 
> back.
>  * @return Future with the commit timestamp, or {@code null} if a timestamp 
> is not specified for the transaction type.
>  */
> CompletableFuture finish(boolean commit);
> {code}
> *Definition of done*
> The transaction commit timestamp can be obtained through the 
> {{InternalTransaction}} API.





[jira] [Resolved] (IGNITE-19741) Epic for restoring Distribution Zone Manager states after restart

2023-07-06 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev resolved IGNITE-19741.
--
Fix Version/s: 3.0.0-beta2
 Assignee: Mirza Aliev
   Resolution: Fixed

> Epic for restoring Distribution Zone Manager states after restart
> -
>
> Key: IGNITE-19741
> URL: https://issues.apache.org/jira/browse/IGNITE-19741
> Project: Ignite
>  Issue Type: Epic
>Reporter: Mirza Aliev
>Assignee: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> In this epic we want to provide the correct behaviour of the Distribution 
> Zone Manager after restart, so that all of the manager's states are restored 
> correctly.





[jira] [Updated] (IGNITE-19886) Retrieve commit timestamp

2023-07-06 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-19886:
---
Summary: Retrieve commit timestamp  (was: Add method to receive commit 
transaction timestamp)

> Retrieve commit timestamp
> 
>
> Key: IGNITE-19886
> URL: https://issues.apache.org/jira/browse/IGNITE-19886
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> The timestamp is generated in the commit partition for a particular 
> transaction at the time the transaction is committed. No other node can get 
> the timestamp through the API, but a client node has to have the possibility 
> to get it.
> *Implementation notes*
> Extend the {{InternalTransaction}} interface with the method:
> {code}
> /**
>  * Finishes the transaction.
>  *
>  * @param commit True if the transaction is committed, false if it is rolled 
> back.
>  * @return Future with the commit timestamp, or {@code null} if a timestamp 
> is not specified for the transaction type.
>  */
> CompletableFuture finish(boolean commit);
> {code}
> *Definition of done*
> The transaction commit timestamp can be obtained through the API.





[jira] [Commented] (IGNITE-19667) ClientTableCommon.readTable should be async

2023-07-06 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740534#comment-17740534
 ] 

Igor Sapego commented on IGNITE-19667:
--

Looks good to me.

> ClientTableCommon.readTable should be async
> ---
>
> Key: IGNITE-19667
> URL: https://issues.apache.org/jira/browse/IGNITE-19667
> Project: Ignite
>  Issue Type: Improvement
>  Components: thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> *IgniteTablesInternal.tableAsync* is available now; we should retrieve the 
> table asynchronously to avoid blocking.





[jira] [Updated] (IGNITE-19888) Track observable timestamp on client

2023-07-06 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-19888:
---
Summary: Track observable timestamp on client  (was: Add tracking of last 
observed transaction timestamp to client)

> Track observable timestamp on client
> 
>
> Key: IGNITE-19888
> URL: https://issues.apache.org/jira/browse/IGNITE-19888
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Reporter: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> The read timestamp for an RO transaction is supposed to be determined taking 
> the client timestamp into account, to linearize client transactions.
> *Implementation notes*
> * Responses that start an RO transaction (IGNITE-19887) or commit an RW 
> transaction (IGNITE-19886) have to provide a timestamp.
> * The current server timestamp ({{clock.now()}}) should be inserted into all 
> other transaction responses (except the cases above).
> * If a server response does not have the timestamp, or the timestamp is less 
> than the one the client already has, do nothing.
> * If the timestamp is greater than the one the client has, the client 
> timestamp should be updated.
> * The timestamp is used to start RO transactions (IGNITE-19887).
> *Definition of done*
> The timestamp is passed from the server side to the client. The client just 
> saves the timestamp and sends it in each request to the server side.
> All client-side created RO transactions should execute in the past, at a 
> timestamp determined by the client timestamp.





[jira] [Updated] (IGNITE-19888) Add tracking of last observed transaction timestamp to client

2023-07-06 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-19888:
---
Description: 
*Motivation*
The read timestamp for an RO transaction is supposed to be determined taking 
the client timestamp into account, to linearize client transactions.

*Implementation notes*
* Responses that start an RO transaction (IGNITE-19887) or commit an RW 
transaction (IGNITE-19886) have to provide a timestamp.
* The current server timestamp ({{clock.now()}}) should be inserted into all 
other transaction responses (except the cases above).
* If a server response does not have the timestamp, or the timestamp is less 
than the one the client already has, do nothing.
* If the timestamp is greater than the one the client has, the client 
timestamp should be updated.
* The timestamp is used to start RO transactions (IGNITE-19887).

*Definition of done*
The timestamp is passed from the server side to the client. The client just 
saves the timestamp and sends it in each request to the server side.
All client-side created RO transactions should execute in the past, at a 
timestamp determined by the client timestamp.
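The client-side update rule in the notes above is a monotonic maximum, which can be sketched as follows. All names are illustrative stand-ins, not the actual client internals, and timestamps are simplified to {{long}} values:

```java
// Sketch of the client-side tracking rule: keep the maximum timestamp ever
// seen in server responses; ignore responses without one or with stale values.
// Names are illustrative only.
final class ObservableTimestampTracker {
    private long observed = Long.MIN_VALUE;

    /** Called for every server response; a null timestamp means "not provided". */
    synchronized void onResponse(Long serverTs) {
        if (serverTs != null && serverTs > observed)
            observed = serverTs;
    }

    /** Timestamp the client attaches to each request / uses to start RO transactions. */
    synchronized long current() {
        return observed;
    }
}
```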

  was:
*Motivation*
The read timestamp for an RO transaction is supposed to be determined taking 
the client timestamp into account, to linearize client transactions.

*Implementation notes*
* Responses that start an RO transaction (IGNITE-19887) or commit an RW 
transaction (IGNITE-19886) have to provide a timestamp.
* The current server timestamp should be inserted into all other transaction 
responses (except the cases above).
* If a server response does not have the timestamp, or the timestamp is less 
than the one the client already has, do nothing.
* If the timestamp is greater than the one the client has, the client 
timestamp should be updated.
* The timestamp is used to start RO transactions (IGNITE-19887).

*Definition of done*
The timestamp is passed from the server side to the client. The client just 
saves the timestamp and sends it in each request to the server side.
All client-side created RO transactions should execute in the past, at a 
timestamp determined by the client timestamp.


> Add tracking of last observed transaction timestamp to client
> -
>
> Key: IGNITE-19888
> URL: https://issues.apache.org/jira/browse/IGNITE-19888
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Reporter: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> The read timestamp for an RO transaction is supposed to be determined taking 
> the client timestamp into account, to linearize client transactions.
> *Implementation notes*
> * Responses that start an RO transaction (IGNITE-19887) or commit an RW 
> transaction (IGNITE-19886) have to provide a timestamp.
> * The current server timestamp ({{clock.now()}}) should be inserted into all 
> other transaction responses (except the cases above).
> * If a server response does not have the timestamp, or the timestamp is less 
> than the one the client already has, do nothing.
> * If the timestamp is greater than the one the client has, the client 
> timestamp should be updated.
> * The timestamp is used to start RO transactions (IGNITE-19887).
> *Definition of done*
> The timestamp is passed from the server side to the client. The client just 
> saves the timestamp and sends it in each request to the server side.
> All client-side created RO transactions should execute in the past, at a 
> timestamp determined by the client timestamp.





[jira] [Updated] (IGNITE-19888) Add tracking of last observed transaction timestamp to client

2023-07-06 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-19888:
---
Description: 
*Motivation*
The read timestamp for an RO transaction is supposed to be determined taking 
the client timestamp into account, to linearize client transactions.

*Implementation notes*
* Responses that start an RO transaction (IGNITE-19887) or commit an RW 
transaction (IGNITE-19886) have to provide a timestamp.
* The current server timestamp should be inserted into all other transaction 
responses (except the cases above).
* If a server response does not have the timestamp, or the timestamp is less 
than the one the client already has, do nothing.
* If the timestamp is greater than the one the client has, the client 
timestamp should be updated.
* The timestamp is used to start RO transactions (IGNITE-19887).

*Definition of done*
The timestamp is passed from the server side to the client. The client just 
saves the timestamp and sends it in each request to the server side.
All client-side created RO transactions should execute in the past, at a 
timestamp determined by the client timestamp.

  was:
*Motivation*
The read timestamp for an RO transaction is supposed to be determined taking 
the client timestamp into account, to linearize client transactions.

*Implementation notes*
* Responses that start an RO transaction (IGNITE-19887) or commit an RW 
transaction (IGNITE-19886) have to provide a timestamp.
* If a server response does not have the timestamp, or the timestamp is less 
than the one the client already has, do nothing.
* If the timestamp is greater than the one the client has, the client 
timestamp should be updated.
* The timestamp is used to start RO transactions (IGNITE-19887).

*Definition of done*
The timestamp is passed from the server side to the client. The client just 
saves the timestamp and sends it in each request to the server side.
All client-side created RO transactions should execute in the past, at a 
timestamp determined by the client timestamp.


> Add tracking of last observed transaction timestamp to client
> -
>
> Key: IGNITE-19888
> URL: https://issues.apache.org/jira/browse/IGNITE-19888
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Reporter: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> The read timestamp for an RO transaction is supposed to be determined taking 
> the client timestamp into account, to linearize client transactions.
> *Implementation notes*
> * Responses that start an RO transaction (IGNITE-19887) or commit an RW 
> transaction (IGNITE-19886) have to provide a timestamp.
> * The current server timestamp should be inserted into all other transaction 
> responses (except the cases above).
> * If a server response does not have the timestamp, or the timestamp is less 
> than the one the client already has, do nothing.
> * If the timestamp is greater than the one the client has, the client 
> timestamp should be updated.
> * The timestamp is used to start RO transactions (IGNITE-19887).
> *Definition of done*
> The timestamp is passed from the server side to the client. The client just 
> saves the timestamp and sends it in each request to the server side.
> All client-side created RO transactions should execute in the past, at a 
> timestamp determined by the client timestamp.





[jira] [Updated] (IGNITE-19888) Add tracking of last observed transaction timestamp to client

2023-07-06 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-19888:
---
Description: 
*Motivation*
The read timestamp for an RO transaction is supposed to be determined taking 
the client timestamp into account, to linearize client transactions.

*Implementation notes*
* Responses that start an RO transaction (IGNITE-19887) or commit an RW 
transaction (IGNITE-19886) have to provide a timestamp.
* If a server response does not have the timestamp, or the timestamp is less 
than the one the client already has, do nothing.
* If the timestamp is greater than the one the client has, the client 
timestamp should be updated.
* The timestamp is used to start RO transactions (IGNITE-19887).

*Definition of done*
The timestamp is passed from the server side to the client. The client just 
saves the timestamp and sends it in each request to the server side.
All client-side created RO transactions should execute in the past, at a 
timestamp determined by the client timestamp.

  was:
*Motivation*
The read timestamp for an RO transaction is supposed to be determined taking 
the client timestamp into account, to linearize client transactions.

*Implementation notes*
Responses that start an RO transaction (IGNITE-19887) or commit an RW 
transaction (IGNITE-19886) have to provide a timestamp.
If a server response does not have the timestamp, or the timestamp is less 
than the one the client already has, do nothing.
If the timestamp is greater than the one the client has, the client timestamp 
should be updated.
The timestamp is used to start RO transactions (IGNITE-19887).

*Definition of done*
The timestamp is passed from the server side to the client. The client just 
saves the timestamp and sends it in each request to the server side.
All client-side created RO transactions should execute in the past, at a 
timestamp determined by the client timestamp.


> Add tracking of last observed transaction timestamp to client
> -
>
> Key: IGNITE-19888
> URL: https://issues.apache.org/jira/browse/IGNITE-19888
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Reporter: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> The read timestamp for an RO transaction is supposed to be determined taking 
> the client timestamp into account, to linearize client transactions.
> *Implementation notes*
> * Responses that start an RO transaction (IGNITE-19887) or commit an RW 
> transaction (IGNITE-19886) have to provide a timestamp.
> * If a server response does not have the timestamp, or the timestamp is less 
> than the one the client already has, do nothing.
> * If the timestamp is greater than the one the client has, the client 
> timestamp should be updated.
> * The timestamp is used to start RO transactions (IGNITE-19887).
> *Definition of done*
> The timestamp is passed from the server side to the client. The client just 
> saves the timestamp and sends it in each request to the server side.
> All client-side created RO transactions should execute in the past, at a 
> timestamp determined by the client timestamp.





[jira] [Updated] (IGNITE-19910) CDC through Kafka: refactor timeouts

2023-07-06 Thread Ilya Shishkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Shishkov updated IGNITE-19910:
---
Description: 
Currently, in the CDC through Kafka applications, a single timeout property 
({{kafkaRequestTimeout}}) is used for all Kafka-related operations instead of 
the built-in timeouts of the Kafka clients API (moreover, its default value of 
3 seconds does not correspond to the Kafka clients' defaults):
||Client||Timeout||Default value, s||
|{{KafkaProducer}}|{{delivery.timeout.ms}}|120|
|{{KafkaProducer}}|{{request.timeout.ms}}|30|
|{{KafkaConsumer}}|{{default.api.timeout.ms}}|60|
|{{KafkaConsumer}}|{{request.timeout.ms}}|30|


The table below describes the places where {{kafkaRequestTimeout}} is 
_explicitly specified_ as the total operation timeout instead of using the 
default timeouts:
||CDC application||API||Default value ||
|ignite-cdc.sh: 
{{IgniteToKafkaCdcStreamer}}|{{KafkaProducer#send}}|{{delivery.timeout.ms}} *|
|kafka-to-ignite.sh: 
{{KafkaToIgniteCdcStreamerApplier}}|{{KafkaConsumer#commitSync}}|{{default.api.timeout.ms}}|
|kafka-to-ignite.sh: 
{{KafkaToIgniteCdcStreamerApplier}}|{{KafkaConsumer#close}}|{{KafkaConsumer#DEFAULT_CLOSE_TIMEOUT_MS}}
 (30s)|
|kafka-to-ignite.sh: 
{{KafkaToIgniteMetadataUpdater}}|{{KafkaConsumer#partitionsFor}}|{{default.api.timeout.ms}}|
|kafka-to-ignite.sh: 
{{KafkaToIgniteMetadataUpdater}}|{{KafkaConsumer#endOffsets}}|{{request.timeout.ms}}|

\* - waits for the future during the specified timeout ({{kafkaRequestTimeout}}), 
but the future fails on its own if the delivery timeout is exceeded.


*Timeouts for KafkaConsumer*
All of the above methods fail with an exception when the specified timeout is 
exceeded; thus, the specified timeout *_should not be too low_*.

On the other hand, kafka-to-ignite.sh also invokes {{KafkaConsumer#poll}} with 
the {{kafkaRequestTimeout}} timeout, which blocks until data becomes available 
or the specified timeout expires [5]. So, {{#poll}} should be called quite 
often, and we *_should not set too large a timeout_* for it; otherwise, we can 
face replication delays when some topic partitions have no new data. This is 
not the desired behavior, because in that case some partitions will wait to be 
processed.


*Kafka clients request retries*
Each single request will be retried if {{request.timeout.ms}} is exceeded 
[2, 4]. The retry behavior is similar for both {{KafkaConsumer}} and 
{{KafkaProducer}}. The minimum number of retries approximately equals the 
ratio of the total operation timeout to {{request.timeout.ms}}. The total 
timeout is an explicitly specified argument of the API method or a default 
value (described in the tables above).
Obviously, {{kafkaRequestTimeout}} currently has to be N times greater than 
{{request.timeout.ms}} in order to make request retries possible, i.e. most of 
the time we have to override the default value of 3 s in the CDC configuration.


*Conclusion*
# It seems that the better approach is to rely only on the built-in Kafka 
client timeouts, because the Kafka clients already provide connection 
reliability features. These timeouts should be configured according to the 
Kafka documentation.
# {{kafkaRequestTimeout}} should be used only for {{KafkaConsumer#poll}}; the 
default value of 3 s can remain the same.
# As an alternative to points 1 and 2, we can add a separate timeout for 
{{KafkaConsumer#poll}}. The default timeouts for all other operations would 
have to be increased.
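The built-in client timeouts discussed above are ordinary Kafka client configuration properties. A sketch of setting them explicitly (the values mirror the defaults from the first table and are shown for illustration, not as a recommendation):

```java
import java.util.Properties;

// Illustrative sketch: relying on the built-in Kafka client timeouts by
// setting them explicitly. Values mirror the client defaults listed above.
final class KafkaTimeoutConfig {
    static Properties consumerProps() {
        Properties props = new Properties();
        props.setProperty("request.timeout.ms", "30000");     // per-request bound; exceeding it triggers a retry
        props.setProperty("default.api.timeout.ms", "60000"); // total bound for blocking consumer APIs
        return props;
    }

    static Properties producerProps() {
        Properties props = new Properties();
        props.setProperty("request.timeout.ms", "30000");
        props.setProperty("delivery.timeout.ms", "120000");   // total bound for send(), including retries
        return props;
    }
}
```

These properties would be passed to the {{KafkaConsumer}}/{{KafkaProducer}} constructors alongside the usual connection settings.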



Links:
# 
https://kafka.apache.org/27/documentation.html#producerconfigs_delivery.timeout.ms
# 
https://kafka.apache.org/27/documentation.html#producerconfigs_request.timeout.ms
# 
https://kafka.apache.org/27/documentation.html#consumerconfigs_default.api.timeout.ms
# 
https://kafka.apache.org/27/documentation.html#consumerconfigs_request.timeout.ms
# 
https://kafka.apache.org/27/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#poll-java.time.Duration-

  was:
Currently, in the CDC through Kafka applications, a single timeout property 
({{kafkaRequestTimeout}}) is used for all Kafka-related operations instead of 
the built-in timeouts of the Kafka clients API (moreover, its default value of 
3 seconds does not correspond to the Kafka clients' defaults):
||Client||Timeout||Default value, s||
|{{KafkaProducer}}|{{delivery.timeout.ms}}|120|
|{{KafkaProducer}}|{{request.timeout.ms}}|30|
|{{KafkaConsumer}}|{{default.api.timeout.ms}}|60|
|{{KafkaConsumer}}|{{request.timeout.ms}}|30|


The table below describes the places where {{kafkaRequestTimeout}} is 
_explicitly specified_ as the total operation timeout instead of using the 
default timeouts:
||CDC application||API||Default value ||
|ignite-cdc.sh: 
{{IgniteToKafkaCdcStreamer}}|{{KafkaProducer#send}}|{{delivery.timeout.ms}} *|
|kafka-to-ignite.sh: 
{{KafkaToIgniteCdcStreamerApplier}}|{{KafkaConsumer#commitSync}}|{{default.api.timeout.ms}}|
|kafka-to-ignite.sh: 
{{KafkaToIgniteCdcStreamerApplier}}|{{KafkaConsumer#close}}|{{KafkaConsumer#DEFAULT_CLOSE_TIMEOUT_MS}}
 (30s)|
|kafka-to-ignite.sh: 

[jira] [Updated] (IGNITE-19910) CDC through Kafka: refactor timeouts

2023-07-06 Thread Ilya Shishkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Shishkov updated IGNITE-19910:
---
Description: 
Currently, in the CDC through Kafka applications, a single timeout property 
({{kafkaRequestTimeout}}) is used for all Kafka-related operations instead of 
the built-in timeouts of the Kafka clients API (moreover, its default value of 
3 seconds does not correspond to the Kafka clients' defaults):
||Client||Timeout||Default value, s||
|{{KafkaProducer}}|{{delivery.timeout.ms}}|120|
|{{KafkaProducer}}|{{request.timeout.ms}}|30|
|{{KafkaConsumer}}|{{default.api.timeout.ms}}|60|
|{{KafkaConsumer}}|{{request.timeout.ms}}|30|


The table below describes the places where {{kafkaRequestTimeout}} is _explicitly 
specified_ as the total operation timeout instead of using the default timeouts:
||CDC application||API||Default value ||
|ignite-cdc.sh: 
{{IgniteToKafkaCdcStreamer}}|{{KafkaProducer#send}}|{{delivery.timeout.ms}} *|
|kafka-to-ignite.sh: 
{{KafkaToIgniteCdcStreamerApplier}}|{{KafkaConsumer#commitSync}}|{{default.api.timeout.ms}}|
|kafka-to-ignite.sh: 
{{KafkaToIgniteCdcStreamerApplier}}|{{KafkaConsumer#close}}|{{KafkaConsumer#DEFAULT_CLOSE_TIMEOUT_MS}}
 (30s)|
|kafka-to-ignite.sh: 
{{KafkaToIgniteMetadataUpdater}}|{{KafkaConsumer#partitionsFor}}|{{default.api.timeout.ms}}|
|kafka-to-ignite.sh: 
{{KafkaToIgniteMetadataUpdater}}|{{KafkaConsumer#endOffsets}}|{{request.timeout.ms}}|

\* - waits for the future during the specified timeout ({{kafkaRequestTimeout}}), 
but the future itself fails if the delivery timeout is exceeded.


*Timeouts for KafkaConsumer*
All of the above methods fail with an exception when the specified timeout is 
exceeded; thus, the timeout *_should not be too low_*.

On the other hand, kafka-to-ignite.sh also invokes {{KafkaConsumer#poll}} with 
the {{kafkaRequestTimeout}} timeout, which blocks until data becomes available 
or the specified timeout expires. So, {{#poll}} should be called quite often, 
and we *_should not set too large a timeout_* for it; otherwise, we can face 
replication delays when some topic partitions have no new data. This is not the 
desired behavior, because in that case some partitions will wait to be 
processed.


*Kafka clients request retries*
Each single request is retried if {{request.timeout.ms}} is exceeded [2, 4]. 
Retry behavior is similar for both {{KafkaConsumer}} and {{KafkaProducer}}. The 
minimal number of retries approximately equals the ratio of the total operation 
timeout to {{request.timeout.ms}}. The total timeout is either an explicitly 
specified argument of the API method or the default value (described in the 
tables above).
Thus, {{kafkaRequestTimeout}} currently has to be N times greater than 
{{request.timeout.ms}} in order to make request retries possible, i.e., most of 
the time we have to override the default value of 3 s in the CDC configuration.
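
The retry estimate above can be sketched with simple arithmetic; the class and 
method names below are illustrative only, not part of the Ignite code base:

{code:java}
/** Illustrative helper: the minimal number of request retries is roughly the
 *  total operation timeout divided by request.timeout.ms. */
public class KafkaRetryMath {
    public static long minRetries(long totalTimeoutMs, long requestTimeoutMs) {
        if (requestTimeoutMs <= 0)
            throw new IllegalArgumentException("request.timeout.ms must be positive");

        return totalTimeoutMs / requestTimeoutMs;
    }

    public static void main(String[] args) {
        // With the current 3 s kafkaRequestTimeout default and the 30 s
        // request.timeout.ms default, no retry fits into the total timeout:
        System.out.println(KafkaRetryMath.minRetries(3_000, 30_000));  // 0

        // With the KafkaConsumer default.api.timeout.ms default of 60 s,
        // roughly two request attempts fit:
        System.out.println(KafkaRetryMath.minRetries(60_000, 30_000)); // 2
    }
}
{code}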


*Conclusion*
# It seems that the better approach is to rely only on the built-in Kafka client 
timeouts, because the Kafka clients already provide connection-reliability 
features. These timeouts should be configured according to the Kafka 
documentation.
# {{kafkaRequestTimeout}} should be used only for {{KafkaConsumer#poll}}; its 
default value of 3 s can remain the same.
# As an alternative to points 1 and 2, we can add a separate timeout for 
{{KafkaConsumer#poll}}. The default timeouts for all other operations would have 
to be increased.
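
Under point 1, the timeouts would be tuned directly in the Kafka client 
properties passed to the CDC applications rather than via 
{{kafkaRequestTimeout}}; a sketch using the default values from the table above 
(illustrative, not a tuning recommendation):

{noformat}
# KafkaProducer properties (ignite-cdc.sh)
delivery.timeout.ms=120000
request.timeout.ms=30000

# KafkaConsumer properties (kafka-to-ignite.sh)
default.api.timeout.ms=60000
request.timeout.ms=30000
{noformat}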



Links:
# 
https://kafka.apache.org/27/documentation.html#producerconfigs_delivery.timeout.ms
# 
https://kafka.apache.org/27/documentation.html#producerconfigs_request.timeout.ms
# 
https://kafka.apache.org/27/documentation.html#consumerconfigs_default.api.timeout.ms
# 
https://kafka.apache.org/27/documentation.html#consumerconfigs_request.timeout.ms

  was:
Currently, in CDC-through-Kafka applications, a single timeout property 
({{kafkaRequestTimeout}}) is used for all Kafka-related operations instead of the 
built-in timeouts of the Kafka clients API (moreover, its default value of 3 
seconds does not correspond to the Kafka client defaults):
||Client||Timeout||Default value, s||
|{{KafkaProducer}}|{{delivery.timeout.ms}}|120|
|{{KafkaProducer}}|{{request.timeout.ms}}|30|
|{{KafkaConsumer}}|{{default.api.timeout.ms}}|60|
|{{KafkaConsumer}}|{{request.timeout.ms}}|30|


The table below describes the places where {{kafkaRequestTimeout}} is _explicitly 
specified_ as the total operation timeout instead of using the default timeouts:
||CDC application||API||Default value ||
|ignite-cdc.sh: 
{{IgniteToKafkaCdcStreamer}}|{{KafkaProducer#send}}|{{delivery.timeout.ms}} *|
|kafka-to-ignite.sh: 
{{KafkaToIgniteCdcStreamerApplier}}|{{KafkaConsumer#commitSync}}|{{default.api.timeout.ms}}|
|kafka-to-ignite.sh: 
{{KafkaToIgniteCdcStreamerApplier}}|{{KafkaConsumer#close}}|{{KafkaConsumer#DEFAULT_CLOSE_TIMEOUT_MS}}
 (30s)|
|kafka-to-ignite.sh: 
{{KafkaToIgniteMetadataUpdater}}|{{KafkaConsumer#partitionsFor}}|{{default.api.timeout.ms}}|
|kafka-to-ignite.sh: 

[jira] [Updated] (IGNITE-19910) CDC through Kafka: refactor timeouts

2023-07-06 Thread Ilya Shishkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Shishkov updated IGNITE-19910:
---
Description: 
Currently, in CDC-through-Kafka applications, a single timeout property 
({{kafkaRequestTimeout}}) is used for all Kafka-related operations instead of the 
built-in timeouts of the Kafka clients API (moreover, its default value of 3 
seconds does not correspond to the Kafka client defaults):
||Client||Timeout||Default value, s||
|{{KafkaProducer}}|{{delivery.timeout.ms}}|120|
|{{KafkaProducer}}|{{request.timeout.ms}}|30|
|{{KafkaConsumer}}|{{default.api.timeout.ms}}|60|
|{{KafkaConsumer}}|{{request.timeout.ms}}|30|


The table below describes the places where {{kafkaRequestTimeout}} is _explicitly 
specified_ as the total operation timeout instead of using the default timeouts:
||CDC application||API||Default value ||
|ignite-cdc.sh: 
{{IgniteToKafkaCdcStreamer}}|{{KafkaProducer#send}}|{{delivery.timeout.ms}} *|
|kafka-to-ignite.sh: 
{{KafkaToIgniteCdcStreamerApplier}}|{{KafkaConsumer#commitSync}}|{{default.api.timeout.ms}}|
|kafka-to-ignite.sh: 
{{KafkaToIgniteCdcStreamerApplier}}|{{KafkaConsumer#close}}|{{KafkaConsumer#DEFAULT_CLOSE_TIMEOUT_MS}}
 (30s)|
|kafka-to-ignite.sh: 
{{KafkaToIgniteMetadataUpdater}}|{{KafkaConsumer#partitionsFor}}|{{default.api.timeout.ms}}|
|kafka-to-ignite.sh: 
{{KafkaToIgniteMetadataUpdater}}|{{KafkaConsumer#endOffsets}}|{{request.timeout.ms}}|

\* - waits for the future during the specified timeout ({{kafkaRequestTimeout}}), 
but the future itself fails if the delivery timeout is exceeded.


*Timeouts for KafkaConsumer*
All of the above methods fail with an exception when the specified timeout is 
exceeded; thus, the timeout *_should not be too low_*.

On the other hand, kafka-to-ignite.sh also invokes {{KafkaConsumer#poll}} with 
the {{kafkaRequestTimeout}} timeout, but it just blocks until data becomes 
available or the specified timeout expires. So, {{#poll}} should be called 
quite often, and we *_should not set too large a timeout_* for it; otherwise, 
we can face replication delays when some topic partitions have no new data. 
This is not the desired behavior, because in that case some partitions will 
wait to be processed.


*Kafka clients request retries*
Each single request is retried if {{request.timeout.ms}} is exceeded [2, 4]. 
Retry behavior is similar for both {{KafkaConsumer}} and {{KafkaProducer}}. The 
minimal number of retries approximately equals the ratio of the total operation 
timeout to {{request.timeout.ms}}. The total timeout is either an explicitly 
specified argument of the API method or the default value (described in the 
tables above).
Thus, {{kafkaRequestTimeout}} currently has to be N times greater than 
{{request.timeout.ms}} in order to make request retries possible, i.e., most of 
the time we have to override the default value of 3 s in the CDC configuration.


*Conclusion*
# It seems that the better approach is to rely only on the built-in Kafka client 
timeouts, because the Kafka clients already provide connection-reliability 
features. These timeouts should be configured according to the Kafka 
documentation.
# {{kafkaRequestTimeout}} should be used only for {{KafkaConsumer#poll}}; its 
default value of 3 s can remain the same.
# As an alternative to points 1 and 2, we can add a separate timeout for 
{{KafkaConsumer#poll}}. The default timeouts for all other operations would have 
to be increased.



Links:
# 
https://kafka.apache.org/27/documentation.html#producerconfigs_delivery.timeout.ms
# 
https://kafka.apache.org/27/documentation.html#producerconfigs_request.timeout.ms
# 
https://kafka.apache.org/27/documentation.html#consumerconfigs_default.api.timeout.ms
# 
https://kafka.apache.org/27/documentation.html#consumerconfigs_request.timeout.ms

  was:
Currently, in CDC-through-Kafka applications, a single timeout property 
({{kafkaRequestTimeout}}) is used for all Kafka-related operations instead of the 
built-in timeouts of the Kafka clients API (moreover, its default value of 3 
seconds does not correspond to the Kafka client defaults):
||Client||Timeout||Default value, s||
|{{KafkaProducer}}|{{delivery.timeout.ms}}|120|
|{{KafkaProducer}}|{{request.timeout.ms}}|30|
|{{KafkaConsumer}}|{{default.api.timeout.ms}}|60|
|{{KafkaConsumer}}|{{request.timeout.ms}}|30|


The table below describes the places where {{kafkaRequestTimeout}} is _explicitly 
specified_ as the total operation timeout instead of using the default timeouts:
||CDC application||API||Default value ||
|ignite-cdc.sh: 
{{IgniteToKafkaCdcStreamer}}|{{KafkaProducer#send}}|{{delivery.timeout.ms}} *|
|kafka-to-ignite.sh: 
{{KafkaToIgniteCdcStreamerApplier}}|{{KafkaConsumer#commitSync}}|{{default.api.timeout.ms}}|
|kafka-to-ignite.sh: 
{{KafkaToIgniteCdcStreamerApplier}}|{{KafkaConsumer#close}}|{{KafkaConsumer#DEFAULT_CLOSE_TIMEOUT_MS}}
 (30s)|
|kafka-to-ignite.sh: 
{{KafkaToIgniteMetadataUpdater}}|{{KafkaConsumer#partitionsFor}}|{{default.api.timeout.ms}}|
|kafka-to-ignite.sh: 

[jira] [Comment Edited] (IGNITE-19877) Sql. Erroneous cast possibility Custom object to Numeric.

2023-07-06 Thread Pavel Pereslegin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740485#comment-17740485
 ] 

Pavel Pereslegin edited comment on IGNITE-19877 at 7/6/23 8:36 AM:
---

We also need to align the BOOLEAN cast according to the standard.
For example, currently we have
{code:java}
1::BOOLEAN -> false
1.0::BOOLEAN -> throws NoSuchMethodException: 
java.math.BigDecimal.booleanValue(){code}
It is suggested to forbid casting to boolean from other types (other than the 
'{{true}}'/'{{false}}' char literals) and to throw a user-friendly type-cast 
exception instead.
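
The suggested rule can be sketched as follows; this is a plain-Java illustration 
of the behavior, not the actual Ignite implementation:

{code:java}
/** Sketch: only the character literals 'true'/'false' may be cast to BOOLEAN;
 *  any other source value yields a user-friendly cast error. */
public class BooleanCastSketch {
    public static boolean castToBoolean(String literal) {
        if ("true".equalsIgnoreCase(literal))
            return true;

        if ("false".equalsIgnoreCase(literal))
            return false;

        throw new IllegalArgumentException("Cannot cast '" + literal + "' to BOOLEAN");
    }
}
{code}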


was (Author: xtern):
We also need to align the BOOLEAN cast according to the standard.
For example, currently we have
{code:java}
1::BOOLEAN -> false
1.0::BOOLEAN -> throws NoSuchMethodException: 
java.math.BigDecimal.booleanValue(){code}
It is suggested to forbid casting to boolean from other types (other than the 
'{{true}}'/'{{false}}' literals) and to throw a user-friendly type-cast exception instead.

> Sql. Erroneous cast possibility Custom object to Numeric.
> -
>
> Key: IGNITE-19877
> URL: https://issues.apache.org/jira/browse/IGNITE-19877
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta1
>Reporter: Evgeny Stanilovsky
>Priority: Major
>  Labels: ignite-3
>
> {code:java}
> @Test
> public void test0() \{
> String query = format("SELECT CAST(? AS DECIMAL(5, 1))");
> sql(query).withParams(LocalDateTime.now()).returns(2).ok();
> }
> {code}
> Throws a numeric overflow exception; this seems to be incorrect behavior.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (IGNITE-19877) Sql. Erroneous cast possibility Custom object to Numeric.

2023-07-06 Thread Pavel Pereslegin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740485#comment-17740485
 ] 

Pavel Pereslegin edited comment on IGNITE-19877 at 7/6/23 8:35 AM:
---

We also need to align the BOOLEAN cast according to the standard.
For example, currently we have
{code:java}
1::BOOLEAN -> false
1.0::BOOLEAN -> throws NoSuchMethodException: 
java.math.BigDecimal.booleanValue(){code}
It is suggested to forbid casting to boolean from other types (other than the 
'{{true}}'/'{{false}}' literals) and to throw a user-friendly type-cast exception instead.


was (Author: xtern):
We also need to align the BOOLEAN cast according to the standard.
For example, currently we have

{code:java}
1::BOOLEAN -> false
1.0::BOOLEAN -> throws NoSuchMethodException: 
java.math.BigDecimal.booleanValue(){code}
It is suggested to forbid casting to boolean from other types (other than the 
true/false literals) and to throw a user-friendly type-cast exception instead.

> Sql. Erroneous cast possibility Custom object to Numeric.
> -
>
> Key: IGNITE-19877
> URL: https://issues.apache.org/jira/browse/IGNITE-19877
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta1
>Reporter: Evgeny Stanilovsky
>Priority: Major
>  Labels: ignite-3
>
> {code:java}
> @Test
> public void test0() \{
> String query = format("SELECT CAST(? AS DECIMAL(5, 1))");
> sql(query).withParams(LocalDateTime.now()).returns(2).ok();
> }
> {code}
> Throws a numeric overflow exception; this seems to be incorrect behavior.





[jira] [Commented] (IGNITE-19877) Sql. Erroneous cast possibility Custom object to Numeric.

2023-07-06 Thread Pavel Pereslegin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740485#comment-17740485
 ] 

Pavel Pereslegin commented on IGNITE-19877:
---

We also need to align the BOOLEAN cast according to the standard.
For example, currently we have

{code:java}
1::BOOLEAN -> false
1.0::BOOLEAN -> throws NoSuchMethodException: 
java.math.BigDecimal.booleanValue(){code}
It is suggested to forbid casting to boolean from other types (other than the 
true/false literals) and to throw a user-friendly type-cast exception instead.

> Sql. Erroneous cast possibility Custom object to Numeric.
> -
>
> Key: IGNITE-19877
> URL: https://issues.apache.org/jira/browse/IGNITE-19877
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta1
>Reporter: Evgeny Stanilovsky
>Priority: Major
>  Labels: ignite-3
>
> {code:java}
> @Test
> public void test0() \{
> String query = format("SELECT CAST(? AS DECIMAL(5, 1))");
> sql(query).withParams(LocalDateTime.now()).returns(2).ok();
> }
> {code}
> Throws a numeric overflow exception; this seems to be incorrect behavior.





[jira] [Updated] (IGNITE-19922) Gradle checkstyle tasks are greedy

2023-07-06 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-19922:

Attachment: screenshot-1.png

> Gradle checkstyle tasks are greedy
> --
>
> Key: IGNITE-19922
> URL: https://issues.apache.org/jira/browse/IGNITE-19922
> Project: Ignite
>  Issue Type: New Feature
>Reporter: Mikhail Pochatkin
>Priority: Major
>  Labels: ignite-3
> Attachments: image-2023-07-06-11-18-40-515.png, screenshot-1.png
>
>
> This is the memory consumption during {{gradlew checkstyleMain}} execution - it 
> goes from ~10 GB to 30 GB. All CPU cores are also at 100%. This causes Chrome 
> tabs to unload and puts overall stress on the system. 
> Also, RAM usage does not go down after this command unless I kill/stop the 
> Gradle daemons.
> !image-2023-07-06-11-18-40-515.png!





[jira] [Comment Edited] (IGNITE-19922) Gradle checkstyle tasks are greedy

2023-07-06 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740479#comment-17740479
 ] 

Pavel Tupitsyn edited comment on IGNITE-19922 at 7/6/23 8:21 AM:
-

A lot of processes are spawned and they don't go away after the scan:

 !screenshot-1.png! 


was (Author: ptupitsyn):
A lot of processes are spawned and they don't go away:

 !screenshot-1.png! 

> Gradle checkstyle tasks are greedy
> --
>
> Key: IGNITE-19922
> URL: https://issues.apache.org/jira/browse/IGNITE-19922
> Project: Ignite
>  Issue Type: New Feature
>Reporter: Mikhail Pochatkin
>Priority: Major
>  Labels: ignite-3
> Attachments: image-2023-07-06-11-18-40-515.png, screenshot-1.png
>
>
> This is the memory consumption during {{gradlew checkstyleMain}} execution - it 
> goes from ~10 GB to 30 GB. All CPU cores are also at 100%. This causes Chrome 
> tabs to unload and puts overall stress on the system. 
> Also, RAM usage does not go down after this command unless I kill/stop the 
> Gradle daemons.
> !image-2023-07-06-11-18-40-515.png!





[jira] [Commented] (IGNITE-19922) Gradle checkstyle tasks are greedy

2023-07-06 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740479#comment-17740479
 ] 

Pavel Tupitsyn commented on IGNITE-19922:
-

A lot of processes are spawned and they don't go away:

 !screenshot-1.png! 

> Gradle checkstyle tasks are greedy
> --
>
> Key: IGNITE-19922
> URL: https://issues.apache.org/jira/browse/IGNITE-19922
> Project: Ignite
>  Issue Type: New Feature
>Reporter: Mikhail Pochatkin
>Priority: Major
>  Labels: ignite-3
> Attachments: image-2023-07-06-11-18-40-515.png, screenshot-1.png
>
>
> This is the memory consumption during {{gradlew checkstyleMain}} execution - it 
> goes from ~10 GB to 30 GB. All CPU cores are also at 100%. This causes Chrome 
> tabs to unload and puts overall stress on the system. 
> Also, RAM usage does not go down after this command unless I kill/stop the 
> Gradle daemons.
> !image-2023-07-06-11-18-40-515.png!





[jira] [Created] (IGNITE-19922) Gradle checkstyle tasks are greedy

2023-07-06 Thread Mikhail Pochatkin (Jira)
Mikhail Pochatkin created IGNITE-19922:
--

 Summary: Gradle checkstyle tasks are greedy
 Key: IGNITE-19922
 URL: https://issues.apache.org/jira/browse/IGNITE-19922
 Project: Ignite
  Issue Type: New Feature
Reporter: Mikhail Pochatkin
 Attachments: image-2023-07-06-11-18-40-515.png

This is the memory consumption during {{gradlew checkstyleMain}} execution - it 
goes from ~10 GB to 30 GB. All CPU cores are also at 100%. This causes Chrome 
tabs to unload and puts overall stress on the system. 
Also, RAM usage does not go down after this command unless I kill/stop the 
Gradle daemons.
!image-2023-07-06-11-18-40-515.png!





[jira] [Created] (IGNITE-19921) Add thin client support for Spring Session.

2023-07-06 Thread Andrey Novikov (Jira)
Andrey Novikov created IGNITE-19921:
---

 Summary: Add thin client support for Spring Session.
 Key: IGNITE-19921
 URL: https://issues.apache.org/jira/browse/IGNITE-19921
 Project: Ignite
  Issue Type: Improvement
  Components: extensions
Reporter: Andrey Novikov


Thin client support for Spring Session needs to be added.

To work with a thin client, it is proposed to:
 # Configure a bean of the IgniteClient type.
 # Mark the bean from step 1 with the SpringSessionIgnite annotation.
 # Create the session cache via a CREATE TABLE query.

At the moment, the repository configuration that uses a node to access the 
cluster is performed in the same way.
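
The proposed steps might be wired roughly as follows; the {{SpringSessionIgnite}} 
annotation is taken from the proposal above, while the configuration class name 
and connection address are assumptions made for illustration:

{code:java}
import org.apache.ignite.Ignition;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SessionClientConfig {
    /** Steps 1 and 2: an IgniteClient bean marked for Spring Session.
     *  Step 3 (creating the session cache via a CREATE TABLE query) would then
     *  be performed against this client. */
    @Bean
    @SpringSessionIgnite // proposed annotation, see step 2
    public IgniteClient igniteClient() {
        return Ignition.startClient(new ClientConfiguration()
            .setAddresses("127.0.0.1:10800")); // assumed address
    }
}
{code}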





[jira] [Assigned] (IGNITE-19921) Add thin client support for Spring Session.

2023-07-06 Thread Andrey Novikov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Novikov reassigned IGNITE-19921:
---

Assignee: Andrey Novikov

> Add thin client support for Spring Session.
> ---
>
> Key: IGNITE-19921
> URL: https://issues.apache.org/jira/browse/IGNITE-19921
> Project: Ignite
>  Issue Type: Improvement
>  Components: extensions
>Reporter: Andrey Novikov
>Assignee: Andrey Novikov
>Priority: Major
>
> Thin client support for Spring Session needs to be added.
> To work with a thin client, it is proposed to:
>  # Configure a bean of the IgniteClient type.
>  # Mark the bean from step 1 with the SpringSessionIgnite annotation.
>  # Create the session cache via a CREATE TABLE query.
> At the moment, the repository configuration that uses a node to access the 
> cluster is performed in the same way.





[jira] [Commented] (IGNITE-19912) Duplicated index creation using SQL leads to node start-up failure

2023-07-06 Thread Evgeny Stanilovsky (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740465#comment-17740465
 ] 

Evgeny Stanilovsky commented on IGNITE-19912:
-

[~xtern], can you make a review, please? The bot report will be available soon.

> Duplicated index creation using SQL leads to node start-up failure
> --
>
> Key: IGNITE-19912
> URL: https://issues.apache.org/jira/browse/IGNITE-19912
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.15
>Reporter: Evgeny Stanilovsky
>Assignee: Evgeny Stanilovsky
>Priority: Major
> Attachments: DuplicateIndexCreationTest.java
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In case an index for the field is specified using QuerySqlFields(index=true) 
> annotation, it's possible to create multiple additional indices for the same 
> field using CREATE INDEX IF NOT EXISTS statement without explicit index name 
> specification. As a result, all indices that were created via SQL have the 
> same name, which leads to node failure on the next restart due to Index with 
> name 'person_name_asc_idx' already exists. exception.





[jira] [Updated] (IGNITE-19910) CDC through Kafka: refactor timeouts

2023-07-06 Thread Ilya Shishkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Shishkov updated IGNITE-19910:
---
Description: 
Currently, in CDC-through-Kafka applications, a single timeout property 
({{kafkaRequestTimeout}}) is used for all Kafka-related operations instead of the 
built-in timeouts of the Kafka clients API (moreover, its default value of 3 
seconds does not correspond to the Kafka client defaults):
||Client||Timeout||Default value, s||
|{{KafkaProducer}}|{{delivery.timeout.ms}}|120|
|{{KafkaProducer}}|{{request.timeout.ms}}|30|
|{{KafkaConsumer}}|{{default.api.timeout.ms}}|60|
|{{KafkaConsumer}}|{{request.timeout.ms}}|30|


The table below describes the places where {{kafkaRequestTimeout}} is _explicitly 
specified_ as the total operation timeout instead of using the default timeouts:
||CDC application||API||Default value ||
|ignite-cdc.sh: 
{{IgniteToKafkaCdcStreamer}}|{{KafkaProducer#send}}|{{delivery.timeout.ms}} *|
|kafka-to-ignite.sh: 
{{KafkaToIgniteCdcStreamerApplier}}|{{KafkaConsumer#commitSync}}|{{default.api.timeout.ms}}|
|kafka-to-ignite.sh: 
{{KafkaToIgniteCdcStreamerApplier}}|{{KafkaConsumer#close}}|{{KafkaConsumer#DEFAULT_CLOSE_TIMEOUT_MS}}
 (30s)|
|kafka-to-ignite.sh: 
{{KafkaToIgniteMetadataUpdater}}|{{KafkaConsumer#partitionsFor}}|{{default.api.timeout.ms}}|
|kafka-to-ignite.sh: 
{{KafkaToIgniteMetadataUpdater}}|{{KafkaConsumer#endOffsets}}|{{request.timeout.ms}}|

\* - waits for the future during the specified timeout ({{kafkaRequestTimeout}}), 
but the future itself fails if the delivery timeout is exceeded.


*Timeouts for KafkaConsumer*
All of the above methods fail with an exception when the specified timeout is 
exceeded; thus, the timeout *_should not be too low_*.

On the other hand, kafka-to-ignite.sh also invokes {{KafkaConsumer#poll}} with 
the {{kafkaRequestTimeout}} timeout, but it just waits for data until the 
specified timeout expires. So, {{#poll}} should be called quite often, and we 
*_should not set too large a timeout_* for it; otherwise, we can face 
replication delays when some topic partitions have no new data. This is not the 
desired behavior, because in that case some partitions will wait to be processed.


*Kafka clients request retries*
Each single request is retried if {{request.timeout.ms}} is exceeded [2, 4]. 
Retry behavior is similar for both {{KafkaConsumer}} and {{KafkaProducer}}. The 
minimal number of retries approximately equals the ratio of the total operation 
timeout to {{request.timeout.ms}}. The total timeout is either an explicitly 
specified argument of the API method or the default value (described in the 
tables above).
Thus, {{kafkaRequestTimeout}} currently has to be N times greater than 
{{request.timeout.ms}} in order to make request retries possible, i.e., most of 
the time we have to override the default value of 3 s in the CDC configuration.


*Conclusion*
# It seems that the better approach is to rely only on the built-in Kafka client 
timeouts, because the Kafka clients already provide connection-reliability 
features. These timeouts should be configured according to the Kafka 
documentation.
# {{kafkaRequestTimeout}} should be used only for {{KafkaConsumer#poll}}; its 
default value of 3 s can remain the same.
# As an alternative to points 1 and 2, we can add a separate timeout for 
{{KafkaConsumer#poll}}. The default timeouts for all other operations would have 
to be increased.



Links:
# 
https://kafka.apache.org/27/documentation.html#producerconfigs_delivery.timeout.ms
# 
https://kafka.apache.org/27/documentation.html#producerconfigs_request.timeout.ms
# 
https://kafka.apache.org/27/documentation.html#consumerconfigs_default.api.timeout.ms
# 
https://kafka.apache.org/27/documentation.html#consumerconfigs_request.timeout.ms

  was:
Currently, in CDC-through-Kafka applications, a single timeout property 
({{kafkaRequestTimeout}}) is used for all Kafka-related operations instead of the 
built-in timeouts of the Kafka clients API (moreover, its default value of 3 
seconds does not correspond to the Kafka client defaults):
||Client||Timeout||Default value, s||
|{{KafkaProducer}}|{{delivery.timeout.ms}}|120|
|{{KafkaProducer}}|{{request.timeout.ms}}|30|
|{{KafkaConsumer}}|{{default.api.timeout.ms}}|60|
|{{KafkaConsumer}}|{{request.timeout.ms}}|30|


The table below describes the places where {{kafkaRequestTimeout}} is _explicitly 
specified_ as the total operation timeout instead of using the default timeouts:
||CDC application||API||Default value ||
|ignite-cdc.sh: 
{{IgniteToKafkaCdcStreamer}}|{{KafkaProducer#send}}|{{delivery.timeout.ms}} *|
|kafka-to-ignite.sh: 
{{KafkaToIgniteCdcStreamerApplier}}|{{KafkaConsumer#commitSync}}|{{default.api.timeout.ms}}|
|kafka-to-ignite.sh: 
{{KafkaToIgniteCdcStreamerApplier}}|{{KafkaConsumer#close}}|{{KafkaConsumer#DEFAULT_CLOSE_TIMEOUT_MS}}
 (30s)|
|kafka-to-ignite.sh: 
{{KafkaToIgniteMetadataUpdater}}|{{KafkaConsumer#partitionsFor}}|{{default.api.timeout.ms}}|
|kafka-to-ignite.sh: 

[jira] [Updated] (IGNITE-19910) CDC through Kafka: refactor timeouts

2023-07-06 Thread Ilya Shishkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Shishkov updated IGNITE-19910:
---
Description: 
Currently, in CDC-through-Kafka applications, a single timeout property 
({{kafkaRequestTimeout}}) is used for all Kafka-related operations instead of the 
built-in timeouts of the Kafka clients API (moreover, its default value of 3 
seconds does not correspond to the Kafka client defaults):
||Client||Timeout||Default value, s||
|{{KafkaProducer}}|{{delivery.timeout.ms}}|120|
|{{KafkaProducer}}|{{request.timeout.ms}}|30|
|{{KafkaConsumer}}|{{default.api.timeout.ms}}|60|
|{{KafkaConsumer}}|{{request.timeout.ms}}|30|


The table below describes the places where {{kafkaRequestTimeout}} is _explicitly 
specified_ as the total operation timeout instead of using the default timeouts:
||CDC application||API||Default value ||
|ignite-cdc.sh: 
{{IgniteToKafkaCdcStreamer}}|{{KafkaProducer#send}}|{{delivery.timeout.ms}} *|
|kafka-to-ignite.sh: 
{{KafkaToIgniteCdcStreamerApplier}}|{{KafkaConsumer#commitSync}}|{{default.api.timeout.ms}}|
|kafka-to-ignite.sh: 
{{KafkaToIgniteCdcStreamerApplier}}|{{KafkaConsumer#close}}|{{KafkaConsumer#DEFAULT_CLOSE_TIMEOUT_MS}}
 (30s)|
|kafka-to-ignite.sh: 
{{KafkaToIgniteMetadataUpdater}}|{{KafkaConsumer#partitionsFor}}|{{default.api.timeout.ms}}|
|kafka-to-ignite.sh: 
{{KafkaToIgniteMetadataUpdater}}|{{KafkaConsumer#endOffsets}}|{{request.timeout.ms}}|

\* - waits for the future during the specified timeout ({{kafkaRequestTimeout}}), 
but the future itself fails if the delivery timeout is exceeded.


*Timeouts for KafkaConsumer*
All of the above methods fail with an exception when the specified timeout is 
exceeded; thus, the timeout *_should not be too low_*.

On the other hand, kafka-to-ignite.sh also invokes {{KafkaConsumer#poll}} with 
the {{kafkaRequestTimeout}} timeout, but it just waits for data until the 
specified timeout expires. So, {{#poll}} should be called quite often, and we 
*_should not set too large a timeout_* for it; otherwise, we can face 
replication delays when some topic partitions have no new data. This is not the 
desired behavior, because in that case some partitions will wait to be processed.


*Kafka clients request retries*
Each single request is retried if {{request.timeout.ms}} is exceeded [2, 4]. 
Retry behavior is similar for both {{KafkaConsumer}} and {{KafkaProducer}}. The 
minimal number of retries approximately equals the ratio of the total operation 
timeout to {{request.timeout.ms}}. The total timeout is either an explicitly 
specified argument of the API method or the default value (described in the 
tables above).
Thus, {{kafkaRequestTimeout}} currently has to be N times greater than 
{{request.timeout.ms}} in order to make request retries possible, i.e., most of 
the time we have to override the default value of 3 s in the CDC configuration.


*Conclusion*
# It seems that the better approach is to rely only on the built-in Kafka client 
timeouts, because the Kafka clients already provide connection-reliability 
features. These timeouts should be configured according to the Kafka 
documentation.
# As an alternative to points 1 and 2, we can add a separate timeout for 
{{KafkaConsumer#poll}}. The default timeouts for all other operations would have 
to be increased.



Links:
# 
https://kafka.apache.org/27/documentation.html#producerconfigs_delivery.timeout.ms
# 
https://kafka.apache.org/27/documentation.html#producerconfigs_request.timeout.ms
# 
https://kafka.apache.org/27/documentation.html#consumerconfigs_default.api.timeout.ms
# 
https://kafka.apache.org/27/documentation.html#consumerconfigs_request.timeout.ms

  was:
Currently, in CDC-through-Kafka applications, a single timeout property 
({{kafkaRequestTimeout}}) is used for all Kafka-related operations instead of the 
built-in timeouts of the Kafka clients API (moreover, its default value of 3 
seconds does not correspond to the Kafka client defaults):
||Client||Timeout||Default value, s||
|{{KafkaProducer}}|{{delivery.timeout.ms}}|120|
|{{KafkaProducer}}|{{request.timeout.ms}}|30|
|{{KafkaConsumer}}|{{default.api.timeout.ms}}|60|
|{{KafkaConsumer}}|{{request.timeout.ms}}|30|


The table below describes the places where {{kafkaRequestTimeout}} is _explicitly 
specified_ as the total operation timeout instead of using the default timeouts:
||CDC application||API||Default value ||
|ignite-cdc.sh: 
{{IgniteToKafkaCdcStreamer}}|{{KafkaProducer#send}}|{{delivery.timeout.ms}} *|
|kafka-to-ignite.sh: 
{{KafkaToIgniteCdcStreamerApplier}}|{{KafkaConsumer#commitSync}}|{{default.api.timeout.ms}}|
|kafka-to-ignite.sh: 
{{KafkaToIgniteCdcStreamerApplier}}|{{KafkaConsumer#close}}|{{KafkaConsumer#DEFAULT_CLOSE_TIMEOUT_MS}}
 (30s)|
|kafka-to-ignite.sh: 
{{KafkaToIgniteMetadataUpdater}}|{{KafkaConsumer#partitionsFor}}|{{default.api.timeout.ms}}|
|kafka-to-ignite.sh: 
{{KafkaToIgniteMetadataUpdater}}|{{KafkaConsumer#endOffsets}}|{{request.timeout.ms}}|

\* - waits for the future during the specified timeout 

[jira] [Created] (IGNITE-19920) Change documentation after ignite-19644 (add column if not exists)

2023-07-06 Thread Evgeny Stanilovsky (Jira)
Evgeny Stanilovsky created IGNITE-19920:
---

 Summary: Change documentation after ignite-19644 (add column if 
not exists)
 Key: IGNITE-19920
 URL: https://issues.apache.org/jira/browse/IGNITE-19920
 Project: Ignite
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.0.0-beta1
Reporter: Evgeny Stanilovsky
Assignee: Igor Gusev


After merging [1], the following syntax is no longer available:

{noformat}
ADD COLUMN IF NOT EXISTS 
DROP COLUMN IF EXISTS
{noformat}

The documentation needs to be fixed.

[1] ignite-19644





[jira] [Resolved] (IGNITE-19612) Drop IF EXISTS clause from add/drop column syntax.

2023-07-06 Thread Evgeny Stanilovsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgeny Stanilovsky resolved IGNITE-19612.
-
Resolution: Duplicate

> Drop IF EXISTS clause from add/drop column syntax.
> --
>
> Key: IGNITE-19612
> URL: https://issues.apache.org/jira/browse/IGNITE-19612
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Andrey Mashenkov
>Priority: Major
>  Labels: ignite-3, tech-debt
>
> Using the IF EXISTS/IF NOT EXISTS clause in the ADD/DROP COLUMN DDL command 
> looks ambiguous when adding/dropping multiple columns.
> Let's drop IF EXISTS/IF NOT EXISTS clause support from the SQL syntax and drop 
> the ifExists flags from the AlterTableAddCommand and AlterTableDropCommand 
> classes.


