[jira] [Commented] (IGNITE-17003) SqlDataTypesCoverageTests.testDecimalDataType failes flaky

2022-05-20 Thread Yury Gerzhedovich (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17540256#comment-17540256
 ] 

Yury Gerzhedovich commented on IGNITE-17003:


[~tledkov-gridgain] , LGTM. Thanks for your efforts to contribute.

> SqlDataTypesCoverageTests.testDecimalDataType failes flaky
> --
>
> Key: IGNITE-17003
> URL: https://issues.apache.org/jira/browse/IGNITE-17003
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
> Fix For: 2.14
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There is a problem with this critical test: when the full test class is 
> started, there is a high probability of getting an error in the first tests. 
> By default the first test is testDecimalDataType. Every time I get different 
> errors, which suggests a side effect of the started cluster. The test doesn't 
> fail when started separately. 
> Case of fail: [atomicityMode=ATOMIC, cacheMode=PARTITIONED, ttlFactory=null, 
> backups=2, evictionFactory=null, onheapCacheEnabled=false, 
> writeSyncMode=FULL_ASYNC, persistenceEnabled=false]
> *Root cause:*
> The test is invalid for {{FULL_ASYNC}} mode. 
> The sequential execution of
> UPDATE …
> SELECT …
> cannot guarantee visibility of the UPDATE changes to the SELECT query when 
> FULL_ASYNC mode is used.
> *Possible fixes:*
> There are two possible ways:
> don't use FULL_ASYNC mode in the test;
> wait until the UPDATE changes are applied.
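The second option can be sketched as a small polling helper. This is only a sketch: the helper name and back-off value are illustrative, not Ignite test-framework API; the `Supplier` stands in for re-running the SELECT query.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.LockSupport;
import java.util.function.Supplier;

/**
 * Sketch of the "wait for UPDATE changes" fix: poll the SELECT until the
 * updated value becomes visible, or a timeout expires.
 */
public final class AwaitVisibility {
    /** Returns true once {@code select} yields {@code expected}, false on timeout. */
    public static <T> boolean waitForValue(Supplier<T> select, T expected, long timeoutMs) {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
        while (System.nanoTime() < deadline) {
            if (expected.equals(select.get()))
                return true; // The UPDATE is now visible to the SELECT.
            // Back off briefly before re-running the query.
            LockSupport.parkNanos(TimeUnit.MILLISECONDS.toNanos(20));
        }
        return false; // Visibility was never observed; fail the test explicitly.
    }
}
```

With FULL_ASYNC left enabled, the test would assert that the helper returns true instead of reading immediately after the UPDATE.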



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-17015) Perform rebalance on node recovery before join

2022-05-20 Thread Denis Chudov (Jira)
Denis Chudov created IGNITE-17015:
-

 Summary: Perform rebalance on node recovery before join 
 Key: IGNITE-17015
 URL: https://issues.apache.org/jira/browse/IGNITE-17015
 Project: Ignite
  Issue Type: Bug
Reporter: Denis Chudov


For now, there is only one condition for distributed recovery completion: meta 
storage catch-up (see 
{{org.apache.ignite.internal.recovery.RecoveryCompletionFutureFactory#create}}). 
During recovery, Raft groups for tables are started, so a rebalance is able to 
start during recovery and we can just wait for it in the recovery completion 
future.
Rebalance should be performed for those tables whose assignments are up to date 
with the cluster, i.e. it should start after meta storage catch-up.
Rebalance should be considered complete after a successful read from the local 
node. This can be implemented after consistent read from backups is implemented 
(see IGNITE-16767), which encapsulates the following logic: a consistent read 
is possible either when the local Raft log is replicated and consistent with 
the leader, i.e. the local index and the leader index are the same, or when a 
read with safe-time* is successful. 

*Safe-time is a timestamp: all updates whose timestamp is less than or equal 
to the safe-time are already replicated on this node.
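The consistent-read condition above can be sketched as a pure predicate. The method and parameter names here are illustrative, not the IGNITE-16767 API:

```java
/**
 * Sketch of the consistent-read condition: a read from the local node is
 * consistent when either the local Raft log has caught up with the leader,
 * or the read timestamp is covered by safe-time.
 */
public final class ConsistentReadCheck {
    /** True when the local and leader Raft indexes match: the log is fully replicated. */
    static boolean logReplicated(long localIndex, long leaderIndex) {
        return localIndex == leaderIndex;
    }

    /** True when every update at or below {@code readTs} is already replicated here. */
    static boolean coveredBySafeTime(long readTs, long safeTime) {
        return readTs <= safeTime;
    }

    /** Combined condition under which the local read (and hence rebalance) can complete. */
    public static boolean canReadLocally(long localIndex, long leaderIndex, long readTs, long safeTime) {
        return logReplicated(localIndex, leaderIndex) || coveredBySafeTime(readTs, safeTime);
    }
}
```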





[jira] [Commented] (IGNITE-17003) SqlDataTypesCoverageTests.testDecimalDataType failes flaky

2022-05-20 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17540153#comment-17540153
 ] 

Ignite TC Bot commented on IGNITE-17003:


{panel:title=Branch: [pull/10027/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10027/head] Base: [master] : No new tests 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=6578257&buildTypeId=IgniteTests24Java8_RunAll]

> SqlDataTypesCoverageTests.testDecimalDataType failes flaky
> --
>
> Key: IGNITE-17003
> URL: https://issues.apache.org/jira/browse/IGNITE-17003
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
> Fix For: 2.14
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There is a problem with this critical test: when the full test class is 
> started, there is a high probability of getting an error in the first tests. 
> By default the first test is testDecimalDataType. Every time I get different 
> errors, which suggests a side effect of the started cluster. The test doesn't 
> fail when started separately. 
> Case of fail: [atomicityMode=ATOMIC, cacheMode=PARTITIONED, ttlFactory=null, 
> backups=2, evictionFactory=null, onheapCacheEnabled=false, 
> writeSyncMode=FULL_ASYNC, persistenceEnabled=false]
> *Root cause:*
> The test is invalid for {{FULL_ASYNC}} mode. 
> The sequential execution of
> UPDATE …
> SELECT …
> cannot guarantee visibility of the UPDATE changes to the SELECT query when 
> FULL_ASYNC mode is used.
> *Possible fixes:*
> There are two possible ways:
> don't use FULL_ASYNC mode in the test;
> wait until the UPDATE changes are applied.





[jira] [Assigned] (IGNITE-17002) Indexes rebuild in Maintenance Mode

2022-05-20 Thread Semyon Danilov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Semyon Danilov reassigned IGNITE-17002:
---

Assignee: Semyon Danilov

> Indexes rebuild in Maintenance Mode
> ---
>
> Key: IGNITE-17002
> URL: https://issues.apache.org/jira/browse/IGNITE-17002
> Project: Ignite
>  Issue Type: Improvement
>  Components: control.sh, persistence
>Reporter: Sergey Chugunov
>Assignee: Semyon Danilov
>Priority: Major
> Fix For: 2.14
>
>
> Now Ignite supports entering Maintenance Mode automatically after index 
> corruption - this was implemented in the linked issue.
> But there are use cases when a user needs to request rebuilding of specific 
> indexes in MM, so we need to provide a control.sh API to make these requests.
> Also, for better integration with monitoring tools, it would be nice to 
> provide an API to check the status of a rebuild task, and to print a message 
> to the logs when each task finishes and when all tasks have finished.





[jira] [Comment Edited] (IGNITE-16922) Getting an entry with expiry policy causes IgniteOutOfMemoryException

2022-05-20 Thread Denis Chudov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-16922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17540101#comment-17540101
 ] 

Denis Chudov edited comment on IGNITE-16922 at 5/20/22 12:30 PM:
-

This happens because Ignite stores the TTL in the row itself and needs to 
update the row, and the in-place update doesn't happen because of the condition 
in {{IgniteCacheOffheapManagerImpl.CacheDataStoreImpl#createRow}}:
{code:java}
if (canUpdateOldRow(cctx, oldRow, dataRow) && 
rowStore.updateRow(oldRow.link(), dataRow, grp.statisticsHolderData()))
{code}
which returns false and makes Ignite add the new row first, and then remove 
the old one.
The problem is that for large entries, fragmented across several pages, Ignite 
can't update the row in place, because in-place operations are performed under 
a write lock on a single page (see {{AbstractDataPageIO#updateRow}}).
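Simplified, the decision described above looks like this. This is a sketch with hypothetical names; the real logic lives in canUpdateOldRow and AbstractDataPageIO#updateRow:

```java
/**
 * Sketch of the update-strategy choice: an in-place update is only attempted
 * when the new payload fits the old slot and the row occupies a single page,
 * because the write lock is taken on one page at a time.
 */
public final class InPlaceUpdateSketch {
    /** Rough stand-in for canUpdateOldRow: same-size payload, not fragmented. */
    public static boolean canUpdateInPlace(int oldRowSize, int newRowSize, boolean fragmented) {
        return !fragmented && newRowSize <= oldRowSize;
    }

    /**
     * When an in-place update is impossible, the store falls back to
     * "add new row, then remove old row", which needs free space for a
     * second copy of the entry.
     */
    public static String updateStrategy(int oldRowSize, int newRowSize, boolean fragmented) {
        return canUpdateInPlace(oldRowSize, newRowSize, fragmented)
            ? "in-place"
            : "add-then-remove";
    }
}
```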


was (Author: denis chudov):
This happens because Ignite stores TTL in row itself and needs to update row, 
and in-place update doesn't happen because of the condition in 
{{IgniteCacheOffheapManagerImpl.CacheDataStoreImpl#createRow}}:
{code:java}
if (canUpdateOldRow(cctx, oldRow, dataRow) && 
rowStore.updateRow(oldRow.link(), dataRow, grp.statisticsHolderData()))
{code}
which returns false and makes Ignite add new row first, and then remove old one.
Problem is that for large entries, fragmented by several pages, Ignite can't 
update the row in-place, because in-place operations are made under write lock 
on single page (see {{ AbstractDataPageIO#updateRow }} )

> Getting an entry with expiry policy causes IgniteOutOfMemoryException
> -
>
> Key: IGNITE-16922
> URL: https://issues.apache.org/jira/browse/IGNITE-16922
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.13
>Reporter: Alexey Kukushkin
>Priority: Major
>  Labels: cggg
>   Original Estimate: 64h
>  Remaining Estimate: 64h
>
> {{IgniteCache#get(key)}} operation causes {{IgniteOutOfMemoryException}} if 
> {{AccessedExpiryPolicy}} or {{TouchedExpiryPolicy}} is enabled for the 
> {{key}} and Ignite does not have enough storage for another entry of the 
> same or bigger size.
> This happens because:
> # Ignite needs to update TTL
> # TTL is part of the entry and Ignite overwrites full entry to update the TTL
> # The problem is Ignite runs common code that checks if Ignite has enough 
> storage to write the entry with updated TTL back. The check fails causing the 
> {{IgniteCache#get(key)}} operation to throw {{IgniteOutOfMemoryException}}.
> # This behavior is very confusing for Ignite users: why would a "read" 
> operation throw Ignite OOM?
> Can we update the TTL atomically and skip the storage size check?
> Please enhance Ignite not to throw Ignite OOM on {{get}}. 
> Stack trace:
> {code:java}
> [2022-05-20 
> 15:08:20,025][ERROR][sys-stripe-6-#8%ignite.IgniteOOMOnGet0%][IgniteTestResources]
>  Critical system error detected. Will be handled accordingly to configured 
> handler [hnd=NoOpFailureHandler [super=AbstractFailureHandler 
> [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, 
> SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext 
> [type=CRITICAL_ERROR, err=class o.a.i.i.mem.IgniteOutOfMemoryException: Out 
> of memory in data region [name=default, initSize=18.1 MiB, maxSize=18.1 MiB, 
> persistenceEnabled=false] Try the following:
>   ^-- Increase maximum off-heap memory size (DataRegionConfiguration.maxSize)
>   ^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
>   ^-- Enable eviction or expiration policies]]
> class org.apache.ignite.internal.mem.IgniteOutOfMemoryException: Out of 
> memory in data region [name=default, initSize=18.1 MiB, maxSize=18.1 MiB, 
> persistenceEnabled=false] Try the following:
>   ^-- Increase maximum off-heap memory size (DataRegionConfiguration.maxSize)
>   ^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
>   ^-- Enable eviction or expiration policies
>   at 
> org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager.ensureFreeSpaceForInsert(IgniteCacheDatabaseSharedManager.java:1234)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.RowStore.addRow(RowStore.java:108)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.createRow(IgniteCacheOffheapManagerImpl.java:1962)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$UpdateClosure.call(GridCacheMapEntry.java:5767)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$UpdateClosure.call(GridCacheMapEntry.java:5695)
>   at 
> 

[jira] [Comment Edited] (IGNITE-16922) Getting an entry with expiry policy causes IgniteOutOfMemoryException

2022-05-20 Thread Denis Chudov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-16922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17540101#comment-17540101
 ] 

Denis Chudov edited comment on IGNITE-16922 at 5/20/22 12:29 PM:
-

This happens because Ignite stores the TTL in the row itself and needs to 
update the row, and the in-place update doesn't happen because of the condition 
in {{IgniteCacheOffheapManagerImpl.CacheDataStoreImpl#createRow}}:
{code:java}
if (canUpdateOldRow(cctx, oldRow, dataRow) && 
rowStore.updateRow(oldRow.link(), dataRow, grp.statisticsHolderData()))
{code}
which returns false and makes Ignite add the new row first, and then remove 
the old one.
The problem is that for large entries, fragmented across several pages, Ignite 
can't update the row in place, because in-place operations are performed under 
a write lock on a single page (see {{AbstractDataPageIO#updateRow}}).


was (Author: denis chudov):
This happens because Ignite stores TTL in row itself and needs to update row, 
and in-place update doesn't happen because of the condition in 
{{IgniteCacheOffheapManagerImpl.CacheDataStoreImpl#createRow}}:
{code:java}
if (canUpdateOldRow(cctx, oldRow, dataRow) && 
rowStore.updateRow(oldRow.link(), dataRow, grp.statisticsHolderData()))
{code}
which returns false and makes Ignite add new row first, and then remove old one.

> Getting an entry with expiry policy causes IgniteOutOfMemoryException
> -
>
> Key: IGNITE-16922
> URL: https://issues.apache.org/jira/browse/IGNITE-16922
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.13
>Reporter: Alexey Kukushkin
>Priority: Major
>  Labels: cggg
>   Original Estimate: 64h
>  Remaining Estimate: 64h
>
> {{IgniteCache#get(key)}} operation causes {{IgniteOutOfMemoryException}} if 
> {{AccessedExpiryPolicy}} or {{TouchedExpiryPolicy}} is enabled for the 
> {{key}} and Ignite does not have enough storage for another entry of the 
> same or bigger size.
> This happens because:
> # Ignite needs to update TTL
> # TTL is part of the entry and Ignite overwrites full entry to update the TTL
> # The problem is Ignite runs common code that checks if Ignite has enough 
> storage to write the entry with updated TTL back. The check fails causing the 
> {{IgniteCache#get(key)}} operation to throw {{IgniteOutOfMemoryException}}.
> # This behavior is very confusing for Ignite users: why would a "read" 
> operation throw Ignite OOM?
> Can we update the TTL atomically and skip the storage size check?
> Please enhance Ignite not to throw Ignite OOM on {{get}}. 
> Stack trace:
> {code:java}
> [2022-05-20 
> 15:08:20,025][ERROR][sys-stripe-6-#8%ignite.IgniteOOMOnGet0%][IgniteTestResources]
>  Critical system error detected. Will be handled accordingly to configured 
> handler [hnd=NoOpFailureHandler [super=AbstractFailureHandler 
> [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, 
> SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext 
> [type=CRITICAL_ERROR, err=class o.a.i.i.mem.IgniteOutOfMemoryException: Out 
> of memory in data region [name=default, initSize=18.1 MiB, maxSize=18.1 MiB, 
> persistenceEnabled=false] Try the following:
>   ^-- Increase maximum off-heap memory size (DataRegionConfiguration.maxSize)
>   ^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
>   ^-- Enable eviction or expiration policies]]
> class org.apache.ignite.internal.mem.IgniteOutOfMemoryException: Out of 
> memory in data region [name=default, initSize=18.1 MiB, maxSize=18.1 MiB, 
> persistenceEnabled=false] Try the following:
>   ^-- Increase maximum off-heap memory size (DataRegionConfiguration.maxSize)
>   ^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
>   ^-- Enable eviction or expiration policies
>   at 
> org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager.ensureFreeSpaceForInsert(IgniteCacheDatabaseSharedManager.java:1234)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.RowStore.addRow(RowStore.java:108)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.createRow(IgniteCacheOffheapManagerImpl.java:1962)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$UpdateClosure.call(GridCacheMapEntry.java:5767)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$UpdateClosure.call(GridCacheMapEntry.java:5695)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:4131)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:2121)
>   at 
> 

[jira] [Comment Edited] (IGNITE-16922) Getting an entry with expiry policy causes IgniteOutOfMemoryException

2022-05-20 Thread Denis Chudov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-16922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17540101#comment-17540101
 ] 

Denis Chudov edited comment on IGNITE-16922 at 5/20/22 12:21 PM:
-

This happens because Ignite stores the TTL in the row itself and needs to 
update the row, and the in-place update doesn't happen because of the condition 
in {{IgniteCacheOffheapManagerImpl.CacheDataStoreImpl#createRow}}:
{code:java}
if (canUpdateOldRow(cctx, oldRow, dataRow) && 
rowStore.updateRow(oldRow.link(), dataRow, grp.statisticsHolderData()))
{code}
which returns false and makes Ignite add the new row first, and then remove the old one.


was (Author: denis chudov):
This happens because Ignite stores TTL in row itself and needs to update row, 
and in-place update doesn't happen because of the condition in 
{{IgniteCacheOffheapManagerImpl.CacheDataStoreImpl#createRow}}:
{code:java}
if (canUpdateOldRow(cctx, oldRow, dataRow) && 
rowStore.updateRow(oldRow.link(), dataRow, grp.statisticsHolderData()))
{code}
which return false and makes Ignite add new row first, and then remove old one.

> Getting an entry with expiry policy causes IgniteOutOfMemoryException
> -
>
> Key: IGNITE-16922
> URL: https://issues.apache.org/jira/browse/IGNITE-16922
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.13
>Reporter: Alexey Kukushkin
>Priority: Major
>  Labels: cggg
>   Original Estimate: 64h
>  Remaining Estimate: 64h
>
> {{IgniteCache#get(key)}} operation causes {{IgniteOutOfMemoryException}} if 
> {{AccessedExpiryPolicy}} or {{TouchedExpiryPolicy}} is enabled for the 
> {{key}} and Ignite does not have enough storage for another entry of the 
> same or bigger size.
> This happens because:
> # Ignite needs to update TTL
> # TTL is part of the entry and Ignite overwrites full entry to update the TTL
> # The problem is Ignite runs common code that checks if Ignite has enough 
> storage to write the entry with updated TTL back. The check fails causing the 
> {{IgniteCache#get(key)}} operation to throw {{IgniteOutOfMemoryException}}.
> # This behavior is very confusing for Ignite users: why would a "read" 
> operation throw Ignite OOM?
> Can we update the TTL atomically and skip the storage size check?
> Please enhance Ignite not to throw Ignite OOM on {{get}}. 
> Stack trace:
> {code:java}
> [2022-05-20 
> 15:08:20,025][ERROR][sys-stripe-6-#8%ignite.IgniteOOMOnGet0%][IgniteTestResources]
>  Critical system error detected. Will be handled accordingly to configured 
> handler [hnd=NoOpFailureHandler [super=AbstractFailureHandler 
> [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, 
> SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext 
> [type=CRITICAL_ERROR, err=class o.a.i.i.mem.IgniteOutOfMemoryException: Out 
> of memory in data region [name=default, initSize=18.1 MiB, maxSize=18.1 MiB, 
> persistenceEnabled=false] Try the following:
>   ^-- Increase maximum off-heap memory size (DataRegionConfiguration.maxSize)
>   ^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
>   ^-- Enable eviction or expiration policies]]
> class org.apache.ignite.internal.mem.IgniteOutOfMemoryException: Out of 
> memory in data region [name=default, initSize=18.1 MiB, maxSize=18.1 MiB, 
> persistenceEnabled=false] Try the following:
>   ^-- Increase maximum off-heap memory size (DataRegionConfiguration.maxSize)
>   ^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
>   ^-- Enable eviction or expiration policies
>   at 
> org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager.ensureFreeSpaceForInsert(IgniteCacheDatabaseSharedManager.java:1234)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.RowStore.addRow(RowStore.java:108)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.createRow(IgniteCacheOffheapManagerImpl.java:1962)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$UpdateClosure.call(GridCacheMapEntry.java:5767)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$UpdateClosure.call(GridCacheMapEntry.java:5695)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:4131)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:2121)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1997)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke0(IgniteCacheOffheapManagerImpl.java:1860)
>   at 
> 

[jira] [Commented] (IGNITE-16922) Getting an entry with expiry policy causes IgniteOutOfMemoryException

2022-05-20 Thread Denis Chudov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-16922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17540101#comment-17540101
 ] 

Denis Chudov commented on IGNITE-16922:
---

This happens because Ignite stores the TTL in the row itself and needs to 
update the row, and the in-place update doesn't happen because of the condition 
in {{IgniteCacheOffheapManagerImpl.CacheDataStoreImpl#createRow}}:
{code:java}
if (canUpdateOldRow(cctx, oldRow, dataRow) && 
rowStore.updateRow(oldRow.link(), dataRow, grp.statisticsHolderData()))
{code}
which returns false and makes Ignite add the new row first, and then remove the old one.

> Getting an entry with expiry policy causes IgniteOutOfMemoryException
> -
>
> Key: IGNITE-16922
> URL: https://issues.apache.org/jira/browse/IGNITE-16922
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.13
>Reporter: Alexey Kukushkin
>Priority: Major
>  Labels: cggg
>   Original Estimate: 64h
>  Remaining Estimate: 64h
>
> {{IgniteCache#get(key)}} operation causes {{IgniteOutOfMemoryException}} if 
> {{AccessedExpiryPolicy}} or {{TouchedExpiryPolicy}} is enabled for the 
> {{key}} and Ignite does not have enough storage for another entry of the 
> same or bigger size.
> This happens because:
> # Ignite needs to update TTL
> # TTL is part of the entry and Ignite overwrites full entry to update the TTL
> # The problem is Ignite runs common code that checks if Ignite has enough 
> storage to write the entry with updated TTL back. The check fails causing the 
> {{IgniteCache#get(key)}} operation to throw {{IgniteOutOfMemoryException}}.
> # This behavior is very confusing for Ignite users: why would a "read" 
> operation throw Ignite OOM?
> Can we update the TTL atomically and skip the storage size check?
> Please enhance Ignite not to throw Ignite OOM on {{get}}. 
> Stack trace:
> {code:java}
> [2022-05-20 
> 15:08:20,025][ERROR][sys-stripe-6-#8%ignite.IgniteOOMOnGet0%][IgniteTestResources]
>  Critical system error detected. Will be handled accordingly to configured 
> handler [hnd=NoOpFailureHandler [super=AbstractFailureHandler 
> [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, 
> SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext 
> [type=CRITICAL_ERROR, err=class o.a.i.i.mem.IgniteOutOfMemoryException: Out 
> of memory in data region [name=default, initSize=18.1 MiB, maxSize=18.1 MiB, 
> persistenceEnabled=false] Try the following:
>   ^-- Increase maximum off-heap memory size (DataRegionConfiguration.maxSize)
>   ^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
>   ^-- Enable eviction or expiration policies]]
> class org.apache.ignite.internal.mem.IgniteOutOfMemoryException: Out of 
> memory in data region [name=default, initSize=18.1 MiB, maxSize=18.1 MiB, 
> persistenceEnabled=false] Try the following:
>   ^-- Increase maximum off-heap memory size (DataRegionConfiguration.maxSize)
>   ^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
>   ^-- Enable eviction or expiration policies
>   at 
> org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager.ensureFreeSpaceForInsert(IgniteCacheDatabaseSharedManager.java:1234)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.RowStore.addRow(RowStore.java:108)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.createRow(IgniteCacheOffheapManagerImpl.java:1962)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$UpdateClosure.call(GridCacheMapEntry.java:5767)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$UpdateClosure.call(GridCacheMapEntry.java:5695)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:4131)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:2121)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1997)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke0(IgniteCacheOffheapManagerImpl.java:1860)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1843)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:471)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:4164)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:4140)
>   at 
> 

[jira] [Commented] (IGNITE-16937) [Versioned Storage] A multi version TableStorage for MvPartitionStorage partitions

2022-05-20 Thread Sergey Uttsel (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-16937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17540099#comment-17540099
 ] 

Sergey Uttsel commented on IGNITE-16937:


LGTM. I think it will unblock the 
https://issues.apache.org/jira/browse/IGNITE-16881 ticket.

> [Versioned Storage] A multi version TableStorage for MvPartitionStorage 
> partitions
> --
>
> Key: IGNITE-16937
> URL: https://issues.apache.org/jira/browse/IGNITE-16937
> Project: Ignite
>  Issue Type: Task
>  Components: persistence
>Reporter: Sergey Uttsel
>Assignee: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Need to create a multi-version table storage which aggregates 
> MvPartitionStorage partitions.
> Need to think about how to integrate the multi-version table storage into 
> Ignite. It may be necessary to create, for example, a multi-version 
> StorageEngine.
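The shape described in the ticket might look like the following sketch. All names here are hypothetical, not the ignite-3 API: a table-level storage that lazily creates and aggregates per-partition MvPartitionStorage instances.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Sketch of a multi-version table storage that aggregates per-partition
 * storages, keyed by partition id.
 */
public final class MvTableStorageSketch {
    /** Stand-in for the per-partition multi-version storage. */
    interface MvPartitionStorage {
        int partitionId();
    }

    private final Map<Integer, MvPartitionStorage> partitions = new ConcurrentHashMap<>();

    /** Returns the storage for the partition, creating it on first access. */
    public MvPartitionStorage getOrCreatePartition(int partId) {
        return partitions.computeIfAbsent(partId, id -> () -> id);
    }

    /** Number of partitions created so far. */
    public int partitionCount() {
        return partitions.size();
    }
}
```

A multi-version StorageEngine could then act as the factory that hands out such table storages.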





[jira] [Updated] (IGNITE-16922) Getting an entry with expiry policy causes IgniteOutOfMemoryException

2022-05-20 Thread Denis Chudov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Chudov updated IGNITE-16922:
--
Description: 
{{IgniteCache#get(key)}} operation causes {{IgniteOutOfMemoryException}} if 
{{AccessedExpiryPolicy}} or {{TouchedExpiryPolicy}} is enabled for the {{key}} 
and Ignite does not have enough storage for another entry of the same or bigger size.

This happens because:
# Ignite needs to update TTL
# TTL is part of the entry and Ignite overwrites full entry to update the TTL
# The problem is Ignite runs common code that checks if Ignite has enough 
storage to write the entry with updated TTL back. The check fails causing the 
{{IgniteCache#get(key)}} operation to throw {{IgniteOutOfMemoryException}}.
# This behavior is very confusing for Ignite users: why would a "read" 
operation throw Ignite OOM?

Can we update the TTL atomically and skip the storage size check?
Please enhance Ignite not to throw Ignite OOM on {{get}}. 
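The failure mode can be modeled with a toy store (hypothetical names, not Ignite API): the read path refreshes the TTL by rewriting the entry through the same free-space check as an insert, so it needs room for a second copy of the entry.

```java
import java.util.HashMap;
import java.util.Map;

/** Minimal model of a "read" failing the insert-path free-space check. */
public final class TtlOnReadModel {
    private final int capacity;                              // Region size in "bytes".
    private int used;                                        // Currently allocated.
    private final Map<String, Integer> sizes = new HashMap<>(); // key -> entry size.

    public TtlOnReadModel(int capacity) { this.capacity = capacity; }

    /** Insert path: fails (like IgniteOutOfMemoryException) when the entry does not fit. */
    public void put(String key, int size) {
        if (used + size > capacity)
            throw new IllegalStateException("Out of memory in data region (model)");
        sizes.put(key, size);
        used += size;
    }

    /**
     * Read path with an access-based expiry policy: refreshing the TTL
     * rewrites the entry (new copy first, old copy removed after), so it
     * runs the same free-space check as put().
     */
    public int getWithTtlRefresh(String key) {
        int size = sizes.get(key);
        if (used + size > capacity)          // Same check as the insert path.
            throw new IllegalStateException("Out of memory in data region (model)");
        used += size;                        // New copy written...
        used -= size;                        // ...then the old one removed.
        return size;
    }
}
```

In the model, an entry that uses more than half the region makes every TTL-refreshing read fail, which mirrors the reported behavior.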

Stack trace:

{code:java}
[2022-05-20 
15:08:20,025][ERROR][sys-stripe-6-#8%ignite.IgniteOOMOnGet0%][IgniteTestResources]
 Critical system error detected. Will be handled accordingly to configured 
handler [hnd=NoOpFailureHandler [super=AbstractFailureHandler 
[ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, 
SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext 
[type=CRITICAL_ERROR, err=class o.a.i.i.mem.IgniteOutOfMemoryException: Out of 
memory in data region [name=default, initSize=18.1 MiB, maxSize=18.1 MiB, 
persistenceEnabled=false] Try the following:
  ^-- Increase maximum off-heap memory size (DataRegionConfiguration.maxSize)
  ^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
  ^-- Enable eviction or expiration policies]]
class org.apache.ignite.internal.mem.IgniteOutOfMemoryException: Out of memory 
in data region [name=default, initSize=18.1 MiB, maxSize=18.1 MiB, 
persistenceEnabled=false] Try the following:
  ^-- Increase maximum off-heap memory size (DataRegionConfiguration.maxSize)
  ^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
  ^-- Enable eviction or expiration policies
at 
org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager.ensureFreeSpaceForInsert(IgniteCacheDatabaseSharedManager.java:1234)
at 
org.apache.ignite.internal.processors.cache.persistence.RowStore.addRow(RowStore.java:108)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.createRow(IgniteCacheOffheapManagerImpl.java:1962)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry$UpdateClosure.call(GridCacheMapEntry.java:5767)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry$UpdateClosure.call(GridCacheMapEntry.java:5695)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:4131)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:2121)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1997)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke0(IgniteCacheOffheapManagerImpl.java:1860)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1843)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:471)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:4164)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:4140)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.updateTtl(GridCacheMapEntry.java:2961)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.updateTtl(GridCacheMapEntry.java:2934)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerGet0(GridCacheMapEntry.java:825)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerGetVersioned(GridCacheMapEntry.java:704)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.getAllAsync0(GridDhtCacheAdapter.java:851)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.getDhtAllAsync(GridDhtCacheAdapter.java:691)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtGetSingleFuture.getAsync(GridDhtGetSingleFuture.java:413)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtGetSingleFuture.map0(GridDhtGetSingleFuture.java:289)
at 

[jira] [Updated] (IGNITE-16922) Getting an entry with expiry policy causes IgniteOutOfMemoryException

2022-05-20 Thread Denis Chudov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Chudov updated IGNITE-16922:
--
Description: 
{{IgniteCache#get(key)}} operation causes {{IgniteOutOfMemoryException}} if 
{{AccessedExpiryPolicy}} or {{TouchedExpiryPolicy}} is enabled for the {{key}} 
and Ignite does not have enough storage for another entry of the same or bigger size.

This happens because:
# Ignite needs to update TTL
# TTL is part of the entry and Ignite overwrites full entry to update the TTL
# The problem is Ignite runs common code that checks if Ignite has enough 
storage to write the entry with updated TTL back. The check fails causing the 
{{IgniteCache#get(key)}} operation to throw {{IgniteOutOfMemoryException}}.
# This behavior is very confusing for Ignite users: why would a "read" 
operation throw Ignite OOM?

Can we update the TTL atomically and skip the storage size check?
Please enhance Ignite not to throw Ignite OOM on {{get}}. 
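The failure mode can be modeled without a cluster. Below is a minimal, hypothetical Java sketch (all class and method names, such as {{FixedRegion}} and {{getAndTouch}}, are invented for illustration and are not Ignite code): a read path that rewrites the full entry to refresh its TTL, and therefore re-runs the same free-space check as a write.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical entry whose TTL lives inside the stored value,
// as described above for Ignite's data pages.
class Entry {
    final byte[] value;
    long ttl;

    Entry(byte[] value, long ttl) {
        this.value = value;
        this.ttl = ttl;
    }
}

// Hypothetical fixed-size data region with a write-path free-space check.
class FixedRegion {
    private final int capacity;                   // bytes available in the region
    private int used;                             // bytes currently occupied
    private final Map<String, Entry> data = new HashMap<>();

    FixedRegion(int capacity) {
        this.capacity = capacity;
    }

    void put(String key, Entry e) {
        ensureFreeSpaceForInsert(e.value.length); // normal write-path check
        data.put(key, e);
        used += e.value.length;
    }

    // Read path with a "touched" expiry policy: the TTL update is implemented
    // as a full rewrite of the entry, so it re-runs the write-path free-space
    // check even though no new data is actually stored.
    Entry getAndTouch(String key, long newTtl) {
        Entry e = data.get(key);
        if (e == null)
            return null;
        ensureFreeSpaceForInsert(e.value.length); // <-- this is what makes get() fail
        e.ttl = newTtl;
        return e;
    }

    private void ensureFreeSpaceForInsert(int size) {
        if (used + size > capacity)
            throw new IllegalStateException("Out of memory in data region");
    }
}

public class OomOnGetModel {
    public static void main(String[] args) {
        FixedRegion region = new FixedRegion(12);
        region.put("k", new Entry(new byte[10], 1_000));

        try {
            region.getAndTouch("k", 2_000);       // a plain read...
        } catch (IllegalStateException ex) {
            System.out.println(ex.getMessage());  // ...fails with an OOM-style error
        }
    }
}
```

In this model, updating {{e.ttl}} in place without calling the free-space check would make the read succeed, which is the essence of the proposed fix.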

Stack trace:

{code:java}
[2022-05-20 
15:08:20,025][ERROR][sys-stripe-6-#8%ignite.IgniteOOMOnGet0%][IgniteTestResources]
 Critical system error detected. Will be handled accordingly to configured 
handler [hnd=NoOpFailureHandler [super=AbstractFailureHandler 
[ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, 
SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext 
[type=CRITICAL_ERROR, err=class o.a.i.i.mem.IgniteOutOfMemoryException: Out of 
memory in data region [name=default, initSize=18.1 MiB, maxSize=18.1 MiB, 
persistenceEnabled=false] Try the following:
  ^-- Increase maximum off-heap memory size (DataRegionConfiguration.maxSize)
  ^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
  ^-- Enable eviction or expiration policies]]
class org.apache.ignite.internal.mem.IgniteOutOfMemoryException: Out of memory 
in data region [name=default, initSize=18.1 MiB, maxSize=18.1 MiB, 
persistenceEnabled=false] Try the following:
  ^-- Increase maximum off-heap memory size (DataRegionConfiguration.maxSize)
  ^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
  ^-- Enable eviction or expiration policies
at 
org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager.ensureFreeSpaceForInsert(IgniteCacheDatabaseSharedManager.java:1234)
at 
org.apache.ignite.internal.processors.cache.persistence.RowStore.addRow(RowStore.java:108)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.createRow(IgniteCacheOffheapManagerImpl.java:1962)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry$UpdateClosure.call(GridCacheMapEntry.java:5767)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry$UpdateClosure.call(GridCacheMapEntry.java:5695)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:4131)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:2121)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1997)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke0(IgniteCacheOffheapManagerImpl.java:1860)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1843)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:471)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:4164)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:4140)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.updateTtl(GridCacheMapEntry.java:2961)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.updateTtl(GridCacheMapEntry.java:2934)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerGet0(GridCacheMapEntry.java:825)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerGetVersioned(GridCacheMapEntry.java:704)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.getAllAsync0(GridDhtCacheAdapter.java:851)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.getDhtAllAsync(GridDhtCacheAdapter.java:691)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtGetSingleFuture.getAsync(GridDhtGetSingleFuture.java:413)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtGetSingleFuture.map0(GridDhtGetSingleFuture.java:289)
at 

[jira] [Updated] (IGNITE-14972) Thin 3.0: Implement SQL API for java thin client

2022-05-20 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-14972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-14972:
---
Labels: ignite-3  (was: )

> Thin 3.0: Implement SQL API for java thin client
> 
>
> Key: IGNITE-14972
> URL: https://issues.apache.org/jira/browse/IGNITE-14972
> Project: Ignite
>  Issue Type: New Feature
>  Components: sql, thin client
>Reporter: Igor Sapego
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
>
> We need to implement a basic SQL API for the java thin client in 3.0. Maybe 
> this task should involve creating an IEP and a discussion on the dev list.
> Also, keep in mind that the protocol messages themselves should re-use the 
> JDBC protocol as much as possible.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (IGNITE-16966) SQL API: Add SQL queries support to client protocol.

2022-05-20 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich resolved IGNITE-16966.

Resolution: Duplicate

> SQL API: Add SQL queries support to client protocol.
> 
>
> Key: IGNITE-16966
> URL: https://issues.apache.org/jira/browse/IGNITE-16966
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql, thin client
>Reporter: Andrey Mashenkov
>Priority: Major
>  Labels: ignite-3
>
> Add SQL queries support to the client protocol.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Comment Edited] (IGNITE-14341) Reduce contention in the PendingEntriesTree when cleaning up expired entries.

2022-05-20 Thread Pavel Pereslegin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-14341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17539666#comment-17539666
 ] 

Pavel Pereslegin edited comment on IGNITE-14341 at 5/20/22 10:17 AM:
-

Currently, to clean up expired entries, we obtain a cursor and delete rows one 
by one.
Thus, for each row, we perform a search from the root, lock the page for 
writing, and delete a single row.
 !expire1.png!

Instead of deleting a single row, we can delete all rows from a specified range 
on the page. 
This reduces the number of page write locks.
 !expire2.png! 

The proposed changes extend the {{remove}} (rather than the {{cursor}}) 
operation.
That is, after deleting rows from the page, the remove operation (the search 
from the root) is repeated.
This is also why this operation does not support the tree mode in which an 
inner page may contain a key that is not present in the leaf page 
({{canGetRowFromInner = false}}).
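As an illustration only (this is a hypothetical sketch, not the actual BPlusTree code; the {{LeafPage}} class and its methods are invented), the difference in lock traffic can be shown with a leaf page that counts write-lock acquisitions: removing N expired rows one by one takes N acquisitions, while removing the whole expired range takes one.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical leaf page: expiry timestamps in ascending order,
// guarded by a write lock whose acquisitions are counted.
class LeafPage {
    private final ReentrantLock writeLock = new ReentrantLock();
    final List<Long> rows = new ArrayList<>();
    int lockAcquisitions;

    void lock() {
        writeLock.lock();
        lockAcquisitions++;
    }

    void unlock() {
        writeLock.unlock();
    }

    // Current approach: one page write lock per expired row.
    int removeOneByOne(long now) {
        int removed = 0;
        while (!rows.isEmpty() && rows.get(0) <= now) {
            lock();
            try {
                rows.remove(0);
                removed++;
            } finally {
                unlock();
            }
        }
        return removed;
    }

    // Proposed approach: delete the whole expired range under a single lock.
    int removeRange(long now) {
        lock();
        try {
            int removed = 0;
            while (!rows.isEmpty() && rows.get(0) <= now) {
                rows.remove(0);
                removed++;
            }
            return removed;
        } finally {
            unlock();
        }
    }
}
```

For a page holding 100 expired rows, {{removeOneByOne}} acquires the page lock 100 times while {{removeRange}} acquires it once, which is where the reduced contention on the first leaf page comes from.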


Benchmark results on a local machine (i7-8700 CPU @ 3.20GHz, 6 cores, 12 threads).
 !bench_diagram.png! 

Since this patch only speeds up pending tree clearing, it would be more correct 
to compare the speed of inserting into a cache with an expiry policy, with and 
without pending tree cleanup (excluding the code that removes entries from the 
page store).
 !bench3.png! 


was (Author: xtern):
Currently to clean up expired entries we are getting the cursor and deleting 
rows one by one.
Thus, for each row, we perform a search from the root, lock the page for 
writing, and delete single row.
 !expire1.png!

Instead of deleting a single row, we can delete all rows from a specified range 
on the page. 
This reduces the number of page write locks.
 !expire2.png! 

The proposed changes extend the {{remove}} (rather than the {{cursor}}) 
operation.
Those. after deleting rows from the page, the removal operation (search) is 
repeated from the root.
This is also why this operation does not support tree mode, in which the inner 
page may contain a key that is not present in the leaf page 
({{canGetRowFromInner = false}}).


Benchmark results on local machine (i7-8700 CPU @ 3.20GHz, 6 cores, 12 threads).
 !bench_diagram.png! 

> Reduce contention in the PendingEntriesTree when cleaning up expired entries.
> -
>
> Key: IGNITE-14341
> URL: https://issues.apache.org/jira/browse/IGNITE-14341
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksey Plekhanov
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: ise
> Attachments: JmhCacheExpireBenchmark.java, bench3.png, 
> bench_diagram.png, expire1.png, expire2.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, there is a significant performance drop when expired entries are 
> concurrently evicted by threads that perform some actions with the cache (see 
> attached reproducer):
> {noformat}
> Benchmark  Mode  Cnt Score Error   
> Units
> JmhCacheExpireBenchmark.putWithExpire thrpt3   100,132 ±  21,025  
> ops/ms
> JmhCacheExpireBenchmark.putWithoutExpire  thrpt3  2133,122 ± 559,694  
> ops/ms{noformat}
> Root cause: the pending entries tree (an offheap BPlusTree) is used to track 
> expired entries. After each cache operation (and by the timeout thread) there 
> is an attempt to evict some expired entries. These entries are looked up from 
> the start of the pending entries tree, so there is contention on the first 
> leaf page of that tree.
> All threads waiting for the same page lock:
> {noformat}
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>   at 
> org.apache.ignite.internal.util.OffheapReadWriteLock.waitAcquireWriteLock(OffheapReadWriteLock.java:503)
>   at 
> org.apache.ignite.internal.util.OffheapReadWriteLock.writeLock(OffheapReadWriteLock.java:244)
>   at 
> org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.writeLock(PageMemoryNoStoreImpl.java:528)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writeLock(PageHandler.java:422)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writePage(PageHandler.java:350)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.DataStructure.write(DataStructure.java:325)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.access$13200(BPlusTree.java:100)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.doRemoveFromLeaf(BPlusTree.java:4588)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.removeFromLeaf(BPlusTree.java:4567)
>   at 
> 

[jira] [Updated] (IGNITE-14341) Reduce contention in the PendingEntriesTree when cleaning up expired entries.

2022-05-20 Thread Pavel Pereslegin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-14341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin updated IGNITE-14341:
--
Attachment: bench3.png

> Reduce contention in the PendingEntriesTree when cleaning up expired entries.
> -
>
> Key: IGNITE-14341
> URL: https://issues.apache.org/jira/browse/IGNITE-14341
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksey Plekhanov
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: ise
> Attachments: JmhCacheExpireBenchmark.java, bench3.png, 
> bench_diagram.png, expire1.png, expire2.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, there is a significant performance drop when expired entries are 
> concurrently evicted by threads that perform some actions with the cache (see 
> attached reproducer):
> {noformat}
> Benchmark  Mode  Cnt Score Error   
> Units
> JmhCacheExpireBenchmark.putWithExpire thrpt3   100,132 ±  21,025  
> ops/ms
> JmhCacheExpireBenchmark.putWithoutExpire  thrpt3  2133,122 ± 559,694  
> ops/ms{noformat}
> Root cause: the pending entries tree (an offheap BPlusTree) is used to track 
> expired entries. After each cache operation (and by the timeout thread) there 
> is an attempt to evict some expired entries. These entries are looked up from 
> the start of the pending entries tree, so there is contention on the first 
> leaf page of that tree.
> All threads waiting for the same page lock:
> {noformat}
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>   at 
> org.apache.ignite.internal.util.OffheapReadWriteLock.waitAcquireWriteLock(OffheapReadWriteLock.java:503)
>   at 
> org.apache.ignite.internal.util.OffheapReadWriteLock.writeLock(OffheapReadWriteLock.java:244)
>   at 
> org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.writeLock(PageMemoryNoStoreImpl.java:528)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writeLock(PageHandler.java:422)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writePage(PageHandler.java:350)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.DataStructure.write(DataStructure.java:325)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.access$13200(BPlusTree.java:100)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.doRemoveFromLeaf(BPlusTree.java:4588)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.removeFromLeaf(BPlusTree.java:4567)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.tryRemoveFromLeaf(BPlusTree.java:5196)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.access$6800(BPlusTree.java:4209)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.removeDown(BPlusTree.java:2189)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.removeDown(BPlusTree.java:2165)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.removeDown(BPlusTree.java:2165)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doRemove(BPlusTree.java:2076)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.removex(BPlusTree.java:1905)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.expireInternal(IgniteCacheOffheapManagerImpl.java:1426)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.expire(IgniteCacheOffheapManagerImpl.java:1375)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:246)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.unwindEvicts(GridCacheUtils.java:882){noformat}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-14341) Reduce contention in the PendingEntriesTree when cleaning up expired entries.

2022-05-20 Thread Pavel Pereslegin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-14341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin updated IGNITE-14341:
--
Attachment: (was: bench2.png)

> Reduce contention in the PendingEntriesTree when cleaning up expired entries.
> -
>
> Key: IGNITE-14341
> URL: https://issues.apache.org/jira/browse/IGNITE-14341
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksey Plekhanov
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: ise
> Attachments: JmhCacheExpireBenchmark.java, bench_diagram.png, 
> expire1.png, expire2.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, there is a significant performance drop when expired entries are 
> concurrently evicted by threads that perform some actions with the cache (see 
> attached reproducer):
> {noformat}
> Benchmark  Mode  Cnt Score Error   
> Units
> JmhCacheExpireBenchmark.putWithExpire thrpt3   100,132 ±  21,025  
> ops/ms
> JmhCacheExpireBenchmark.putWithoutExpire  thrpt3  2133,122 ± 559,694  
> ops/ms{noformat}
> Root cause: the pending entries tree (an offheap BPlusTree) is used to track 
> expired entries. After each cache operation (and by the timeout thread) there 
> is an attempt to evict some expired entries. These entries are looked up from 
> the start of the pending entries tree, so there is contention on the first 
> leaf page of that tree.
> All threads waiting for the same page lock:
> {noformat}
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>   at 
> org.apache.ignite.internal.util.OffheapReadWriteLock.waitAcquireWriteLock(OffheapReadWriteLock.java:503)
>   at 
> org.apache.ignite.internal.util.OffheapReadWriteLock.writeLock(OffheapReadWriteLock.java:244)
>   at 
> org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.writeLock(PageMemoryNoStoreImpl.java:528)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writeLock(PageHandler.java:422)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writePage(PageHandler.java:350)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.DataStructure.write(DataStructure.java:325)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.access$13200(BPlusTree.java:100)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.doRemoveFromLeaf(BPlusTree.java:4588)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.removeFromLeaf(BPlusTree.java:4567)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.tryRemoveFromLeaf(BPlusTree.java:5196)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.access$6800(BPlusTree.java:4209)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.removeDown(BPlusTree.java:2189)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.removeDown(BPlusTree.java:2165)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.removeDown(BPlusTree.java:2165)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doRemove(BPlusTree.java:2076)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.removex(BPlusTree.java:1905)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.expireInternal(IgniteCacheOffheapManagerImpl.java:1426)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.expire(IgniteCacheOffheapManagerImpl.java:1375)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:246)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.unwindEvicts(GridCacheUtils.java:882){noformat}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-16936) Incorrect DML syntax error message contains sensitive information

2022-05-20 Thread Yury Gerzhedovich (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-16936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17540028#comment-17540028
 ] 

Yury Gerzhedovich commented on IGNITE-16936:


[~AldoRaine], https://issues.apache.org/jira/browse/IGNITE-17001 - please check 
the ticket; it seems it fixes your issue.

> Incorrect DML syntax error message contains sensitive information
> -
>
> Key: IGNITE-16936
> URL: https://issues.apache.org/jira/browse/IGNITE-16936
> Project: Ignite
>  Issue Type: Bug
>Reporter: Luchnikov Alexander
>Priority: Major
>  Labels: ise
> Attachments: 
> IGNITE-16936_Ignore_IGNITE_TO_STRING_INCLUDE_SENSITIVE_in_wrong_syntax_DML_error_message_-.patch
>
>
> An incorrect DML syntax error message contains sensitive information, 
> regardless of the value of IGNITE_TO_STRING_INCLUDE_SENSITIVE.
> The reproducer 
> [^IGNITE-16936_Ignore_IGNITE_TO_STRING_INCLUDE_SENSITIVE_in_wrong_syntax_DML_error_message_-.patch]
>  shows what sensitive data the message contains.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-16965) SQL API: Implement synchronous SQL API.

2022-05-20 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-16965:
---
Labels: ignite-3  (was: )

> SQL API: Implement synchronous SQL API.
> ---
>
> Key: IGNITE-16965
> URL: https://issues.apache.org/jira/browse/IGNITE-16965
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Andrey Mashenkov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-alpha5
>
>
> Implement the session and session builder interfaces (excl. async/reactive).
> Implement the statement and statement builder (excl. prepared).
> Implement the client-server protocol and messages, and integrate them with 
> the query engine.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17014) [Native Persistence 3.0] Porting FilePageStoreFactory and FilePageStore from 2.0

2022-05-20 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-17014:
-
Reviewer: Ivan Bessonov

> [Native Persistence 3.0] Porting FilePageStoreFactory and FilePageStore from 
> 2.0
> 
>
> Key: IGNITE-17014
> URL: https://issues.apache.org/jira/browse/IGNITE-17014
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Tkalenko
>Assignee: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-alpha5
>
>
> To continue porting the checkpoint from 2.0, we need to port:
> * 
> *org.apache.ignite.internal.processors.cache.persistence.file.FileVersionCheckingFactory*
> * *org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore*



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-17014) [Native Persistence 3.0] Porting FilePageStoreFactory and FilePageStore from 2.0

2022-05-20 Thread Kirill Tkalenko (Jira)
Kirill Tkalenko created IGNITE-17014:


 Summary: [Native Persistence 3.0] Porting FilePageStoreFactory and 
FilePageStore from 2.0
 Key: IGNITE-17014
 URL: https://issues.apache.org/jira/browse/IGNITE-17014
 Project: Ignite
  Issue Type: Task
Reporter: Kirill Tkalenko
Assignee: Kirill Tkalenko
 Fix For: 3.0.0-alpha5


To continue porting the checkpoint from 2.0, we need to port:
* 
*org.apache.ignite.internal.processors.cache.persistence.file.FileVersionCheckingFactory*
* *org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore*



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-14341) Reduce contention in the PendingEntriesTree when cleaning up expired entries.

2022-05-20 Thread Pavel Pereslegin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-14341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin updated IGNITE-14341:
--
Attachment: bench2.png

> Reduce contention in the PendingEntriesTree when cleaning up expired entries.
> -
>
> Key: IGNITE-14341
> URL: https://issues.apache.org/jira/browse/IGNITE-14341
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksey Plekhanov
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: ise
> Attachments: JmhCacheExpireBenchmark.java, bench2.png, 
> bench_diagram.png, expire1.png, expire2.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, there is a significant performance drop when expired entries are 
> concurrently evicted by threads that perform some actions with the cache (see 
> attached reproducer):
> {noformat}
> Benchmark  Mode  Cnt Score Error   
> Units
> JmhCacheExpireBenchmark.putWithExpire thrpt3   100,132 ±  21,025  
> ops/ms
> JmhCacheExpireBenchmark.putWithoutExpire  thrpt3  2133,122 ± 559,694  
> ops/ms{noformat}
> Root cause: the pending entries tree (an offheap BPlusTree) is used to track 
> expired entries. After each cache operation (and by the timeout thread) there 
> is an attempt to evict some expired entries. These entries are looked up from 
> the start of the pending entries tree, so there is contention on the first 
> leaf page of that tree.
> All threads waiting for the same page lock:
> {noformat}
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>   at 
> org.apache.ignite.internal.util.OffheapReadWriteLock.waitAcquireWriteLock(OffheapReadWriteLock.java:503)
>   at 
> org.apache.ignite.internal.util.OffheapReadWriteLock.writeLock(OffheapReadWriteLock.java:244)
>   at 
> org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.writeLock(PageMemoryNoStoreImpl.java:528)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writeLock(PageHandler.java:422)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writePage(PageHandler.java:350)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.DataStructure.write(DataStructure.java:325)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.access$13200(BPlusTree.java:100)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.doRemoveFromLeaf(BPlusTree.java:4588)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.removeFromLeaf(BPlusTree.java:4567)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.tryRemoveFromLeaf(BPlusTree.java:5196)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.access$6800(BPlusTree.java:4209)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.removeDown(BPlusTree.java:2189)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.removeDown(BPlusTree.java:2165)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.removeDown(BPlusTree.java:2165)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doRemove(BPlusTree.java:2076)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.removex(BPlusTree.java:1905)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.expireInternal(IgniteCacheOffheapManagerImpl.java:1426)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.expire(IgniteCacheOffheapManagerImpl.java:1375)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:246)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.unwindEvicts(GridCacheUtils.java:882){noformat}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-16856) Sql. Ability to create table without specifying PK

2022-05-20 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-16856:
---
Fix Version/s: 3.0.0-alpha5

> Sql. Ability to create table without specifying PK
> --
>
> Key: IGNITE-16856
> URL: https://issues.apache.org/jira/browse/IGNITE-16856
> Project: Ignite
>  Issue Type: New Feature
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-alpha5
>
>
> Although the keyless use case is currently not supported by Ignite-3, the SQL 
> standard allows creating such tables, and many external tests (TPC-H, for 
> instance) take advantage of this.
> To make it easier to adopt such tests, let's provide a special mode for 
> Ignite where an implicit PK is created when an explicit one is missing.
> Key points to consider:
>  * This mode is intended for test purposes only, hence the implementation 
> should be as non-invasive as possible.
>  * The implicit key column should not be returned by {{SELECT *}} queries. It 
> may be accessed by its name, though.
>  * The type of this column doesn't matter, but the range of possible values 
> should be big enough to support billions of unique values.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-16962) SQL API: Implement query metadata.

2022-05-20 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-16962:
---
Fix Version/s: 3.0.0-alpha5

> SQL API: Implement query metadata.
> --
>
> Key: IGNITE-16962
> URL: https://issues.apache.org/jira/browse/IGNITE-16962
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Andrey Mashenkov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-alpha5
>
>
> Implement query result metadata.
> Add public classes for SQL types (if needed) and map them to Calcite types.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-16965) SQL API: Implement synchronous SQL API.

2022-05-20 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-16965:
---
Fix Version/s: 3.0.0-alpha5

> SQL API: Implement synchronous SQL API.
> ---
>
> Key: IGNITE-16965
> URL: https://issues.apache.org/jira/browse/IGNITE-16965
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Andrey Mashenkov
>Priority: Major
> Fix For: 3.0.0-alpha5
>
>
> Implement the session and session builder interfaces (excl. async/reactive).
> Implement the statement and statement builder (excl. prepared).
> Implement the client-server protocol and messages, and integrate them with 
> the query engine.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-16963) SQL API: Add batched DML queries support.

2022-05-20 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-16963:
---
Fix Version/s: 3.0.0-alpha5

> SQL API: Add batched DML queries support.
> -
>
> Key: IGNITE-16963
> URL: https://issues.apache.org/jira/browse/IGNITE-16963
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Andrey Mashenkov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-alpha5
>
>
> Add batching for DML queries.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-16937) [Versioned Storage] A multi version TableStorage for MvPartitionStorage partitions

2022-05-20 Thread Semyon Danilov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Semyon Danilov updated IGNITE-16937:

Reviewer: Semyon Danilov

> [Versioned Storage] A multi version TableStorage for MvPartitionStorage 
> partitions
> --
>
> Key: IGNITE-16937
> URL: https://issues.apache.org/jira/browse/IGNITE-16937
> Project: Ignite
>  Issue Type: Task
>  Components: persistence
>Reporter: Sergey Uttsel
>Assignee: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We need to create a multi-version table storage that aggregates 
> MvPartitionStorage partitions.
> We need to think about how to integrate the multi-version table storage into 
> Ignite. Maybe it is necessary to create, for example, a multi-version StorageEngine.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-16937) [Versioned Storage] A multi version TableStorage for MvPartitionStorage partitions

2022-05-20 Thread Semyon Danilov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-16937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17540012#comment-17540012
 ] 

Semyon Danilov commented on IGNITE-16937:
-

The patch looks good to me!

> [Versioned Storage] A multi version TableStorage for MvPartitionStorage 
> partitions
> --
>
> Key: IGNITE-16937
> URL: https://issues.apache.org/jira/browse/IGNITE-16937
> Project: Ignite
>  Issue Type: Task
>  Components: persistence
>Reporter: Sergey Uttsel
>Assignee: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Need to create a multi-version table storage that aggregates
> MvPartitionStorage partitions.
> We also need to think about how to integrate the multi-version table storage into
> Ignite; it may be necessary to create, for example, a multi-version StorageEngine.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-16964) SQL API: Implement async SQL API

2022-05-20 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-16964:
---
Fix Version/s: 3.0.0-alpha5

> SQL API: Implement async SQL API
> 
>
> Key: IGNITE-16964
> URL: https://issues.apache.org/jira/browse/IGNITE-16964
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Andrey Mashenkov
>Assignee: Taras Ledkov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-alpha5
>
>
> Implement async SQL API.
> Starting point: the AsyncResultSet and AsyncSession interfaces.
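As a rough illustration of what an async result-set API enables (the interface below is a guess at the general shape, not the actual Ignite 3 AsyncResultSet), pages can be drained with CompletableFuture composition instead of blocking between fetches:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

/** Assumed shape of an async, paged result set (illustrative only). */
interface PagedAsyncResultSet {
    List<Object> currentPage();
    boolean hasMorePages();
    CompletableFuture<PagedAsyncResultSet> fetchNextPage();
}

/** In-memory stand-in so the sketch is runnable without a cluster. */
class InMemoryResultSet implements PagedAsyncResultSet {
    private final List<List<Object>> pages;
    private final int idx;

    InMemoryResultSet(List<List<Object>> pages, int idx) {
        this.pages = pages;
        this.idx = idx;
    }

    public List<Object> currentPage() { return pages.get(idx); }
    public boolean hasMorePages() { return idx + 1 < pages.size(); }
    public CompletableFuture<PagedAsyncResultSet> fetchNextPage() {
        return CompletableFuture.completedFuture(new InMemoryResultSet(pages, idx + 1));
    }
}

class AsyncSqlSketch {
    /** Count all rows across pages without blocking between fetches. */
    static CompletableFuture<Integer> countRows(PagedAsyncResultSet rs, int acc) {
        int total = acc + rs.currentPage().size();
        if (!rs.hasMorePages())
            return CompletableFuture.completedFuture(total);
        return rs.fetchNextPage().thenCompose(next -> countRows(next, total));
    }

    public static void main(String[] args) {
        List<List<Object>> pages = List.of(List.of((Object) 1, 2, 3), List.of((Object) 4, 5));
        System.out.println(countRows(new InMemoryResultSet(pages, 0), 0).join()); // prints 5
    }
}
```

The thenCompose chain is the essential design choice: each page fetch schedules the next step as a continuation, so no caller thread is parked while a page travels over the network.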



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Comment Edited] (IGNITE-14341) Reduce contention in the PendingEntriesTree when cleaning up expired entries.

2022-05-20 Thread Pavel Pereslegin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-14341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17539666#comment-17539666
 ] 

Pavel Pereslegin edited comment on IGNITE-14341 at 5/20/22 9:16 AM:


Currently, to clean up expired entries, we get a cursor and delete rows
one by one.
Thus, for each row, we perform a search from the root, lock the page for
writing, and delete a single row.
 !expire1.png!

Instead of deleting a single row, we can delete all rows from a specified range 
on the page. 
This reduces the number of page write locks.
 !expire2.png! 

The proposed changes extend the {{remove}} (rather than the {{cursor}})
operation.
That is, after deleting the rows from a page, the removal operation (the search)
is repeated from the root.
This is also why this operation does not support the tree mode in which an inner
page may contain a key that is not present in the leaf page
({{canGetRowFromInner = false}}).


Benchmark results on local machine (i7-8700 CPU @ 3.20GHz, 6 cores, 12 threads).
 !bench_diagram.png! 
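The difference between the two strategies can be illustrated with an ordinary TreeMap standing in for the pending-entries tree (this is an analogy only, not the BPlusTree code): per-key removal pays one search-plus-lock per expired entry, while a range removal clears the same entries in a single operation.

```java
import java.util.TreeMap;

/** Toy analogy: expired cache entries keyed by expiration time. */
class RangeRemoveSketch {
    /** Old scheme: one search from the root and one write lock per entry. */
    static int removeOneByOne(TreeMap<Long, String> pending, long now) {
        int lockAcquisitions = 0;
        while (!pending.isEmpty() && pending.firstKey() <= now) {
            pending.remove(pending.firstKey()); // search + lock + delete one row
            lockAcquisitions++;
        }
        return lockAcquisitions;
    }

    /** Proposed scheme: delete the whole expired range under one lock. */
    static int removeRange(TreeMap<Long, String> pending, long now) {
        pending.headMap(now, true).clear(); // drop every key <= now at once
        return 1; // a single lock acquisition in this analogy
    }

    public static void main(String[] args) {
        TreeMap<Long, String> a = new TreeMap<>();
        a.put(1L, "x"); a.put(2L, "y"); a.put(10L, "z");
        TreeMap<Long, String> b = new TreeMap<>(a);

        System.out.println(removeOneByOne(a, 5L)); // prints 2
        System.out.println(removeRange(b, 5L));    // prints 1
        System.out.println(a.equals(b));           // prints true
    }
}
```

Both paths leave the map in the same state; the range form simply amortizes the lock and search cost, which is exactly the contention the benchmark above measures.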


was (Author: xtern):
Currently, to clean up expired entries, we get a cursor and delete rows
one by one.
Thus, for each row, we perform a search from the root, lock the page for
writing, and delete a single row.
 !expire1.png!

Instead of deleting a single row, we can delete all rows from a specified range 
on the page. 
This reduces the number of page write locks.
 !expire2.png! 

The proposed changes extend the {{remove}} (rather than the {{cursor}})
operation.
That is, after deleting the rows from a page, the removal operation (the search)
is repeated from the root.
This is also why this operation does not support the tree mode in which an inner
page may contain a key that is not present in the leaf page
({{canGetRowFromInner = false}}).

> Reduce contention in the PendingEntriesTree when cleaning up expired entries.
> -
>
> Key: IGNITE-14341
> URL: https://issues.apache.org/jira/browse/IGNITE-14341
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksey Plekhanov
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: ise
> Attachments: JmhCacheExpireBenchmark.java, bench_diagram.png, 
> expire1.png, expire2.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, there is a significant performance drop when expired entries are
> concurrently evicted by threads that perform cache operations (see the
> attached reproducer):
> {noformat}
> Benchmark  Mode  Cnt Score Error   
> Units
> JmhCacheExpireBenchmark.putWithExpire thrpt3   100,132 ±  21,025  
> ops/ms
> JmhCacheExpireBenchmark.putWithoutExpire  thrpt3  2133,122 ± 559,694  
> ops/ms{noformat}
> Root cause: the pending entries tree (an offheap BPlusTree) is used to track
> expired entries; after each cache operation (and from a timeout thread) there is an
> attempt to evict some amount of expired entries. These entries are looked up from
> the start of the pending entries tree, so there is contention on the first
> leaf page of that tree.
> All threads waiting for the same page lock:
> {noformat}
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>   at 
> org.apache.ignite.internal.util.OffheapReadWriteLock.waitAcquireWriteLock(OffheapReadWriteLock.java:503)
>   at 
> org.apache.ignite.internal.util.OffheapReadWriteLock.writeLock(OffheapReadWriteLock.java:244)
>   at 
> org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.writeLock(PageMemoryNoStoreImpl.java:528)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writeLock(PageHandler.java:422)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writePage(PageHandler.java:350)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.DataStructure.write(DataStructure.java:325)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.access$13200(BPlusTree.java:100)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.doRemoveFromLeaf(BPlusTree.java:4588)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.removeFromLeaf(BPlusTree.java:4567)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.tryRemoveFromLeaf(BPlusTree.java:5196)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.access$6800(BPlusTree.java:4209)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.removeDown(BPlusTree.java:2189)
>   at 
> 

[jira] [Updated] (IGNITE-14341) Reduce contention in the PendingEntriesTree when cleaning up expired entries.

2022-05-20 Thread Pavel Pereslegin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-14341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin updated IGNITE-14341:
--
Attachment: bench_diagram.png

> Reduce contention in the PendingEntriesTree when cleaning up expired entries.
> -
>
> Key: IGNITE-14341
> URL: https://issues.apache.org/jira/browse/IGNITE-14341
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksey Plekhanov
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: ise
> Attachments: JmhCacheExpireBenchmark.java, bench_diagram.png, 
> expire1.png, expire2.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, there is a significant performance drop when expired entries are
> concurrently evicted by threads that perform cache operations (see the
> attached reproducer):
> {noformat}
> Benchmark  Mode  Cnt Score Error   
> Units
> JmhCacheExpireBenchmark.putWithExpire thrpt3   100,132 ±  21,025  
> ops/ms
> JmhCacheExpireBenchmark.putWithoutExpire  thrpt3  2133,122 ± 559,694  
> ops/ms{noformat}
> Root cause: the pending entries tree (an offheap BPlusTree) is used to track
> expired entries; after each cache operation (and from a timeout thread) there is an
> attempt to evict some amount of expired entries. These entries are looked up from
> the start of the pending entries tree, so there is contention on the first
> leaf page of that tree.
> All threads waiting for the same page lock:
> {noformat}
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>   at 
> org.apache.ignite.internal.util.OffheapReadWriteLock.waitAcquireWriteLock(OffheapReadWriteLock.java:503)
>   at 
> org.apache.ignite.internal.util.OffheapReadWriteLock.writeLock(OffheapReadWriteLock.java:244)
>   at 
> org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.writeLock(PageMemoryNoStoreImpl.java:528)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writeLock(PageHandler.java:422)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writePage(PageHandler.java:350)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.DataStructure.write(DataStructure.java:325)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.access$13200(BPlusTree.java:100)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.doRemoveFromLeaf(BPlusTree.java:4588)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.removeFromLeaf(BPlusTree.java:4567)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.tryRemoveFromLeaf(BPlusTree.java:5196)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.access$6800(BPlusTree.java:4209)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.removeDown(BPlusTree.java:2189)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.removeDown(BPlusTree.java:2165)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.removeDown(BPlusTree.java:2165)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doRemove(BPlusTree.java:2076)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.removex(BPlusTree.java:1905)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.expireInternal(IgniteCacheOffheapManagerImpl.java:1426)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.expire(IgniteCacheOffheapManagerImpl.java:1375)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:246)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.unwindEvicts(GridCacheUtils.java:882){noformat}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-17001) Useless ERROR in server logs like Duplicate key during INSERT

2022-05-20 Thread Yury Gerzhedovich (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17539997#comment-17539997
 ] 

Yury Gerzhedovich commented on IGNITE-17001:


[~tledkov-gridgain], LGTM. Thanks for the contribution!

> Useless ERROR in server logs like Duplicate key during INSERT
> -
>
> Key: IGNITE-17001
> URL: https://issues.apache.org/jira/browse/IGNITE-17001
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
> Fix For: 2.14
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> During SQL execution we get exceptions that do not appear to require any
> action from the administrator, so it looks like we do not need them in the
> logs. For example:
> {code}
> [2022-03-22T14:16:09,423][ERROR][client-connector-#187%nebula-node%][JdbcRequestHandler]
>  Failed to execute SQL query [reqId=5411, req=JdbcQueryExecuteRequest 
> [schemaName=PUBLIC, pageSize=1024, maxRows=0, sqlQry=I
> NSERT INTO ref_bid_prd_crew_dtl VALUES 
> ('201904','U174148','Initload','2019-07-03 07:48:10',NULL,NULL), 
> args=Object[] [], stmtType=ANY_STATEMENT_TYPE, autoCommit=true, 
> partResReq=false, explicitTimeout=false, sup
> er=JdbcRequest [type=2, reqId=5411]]]
> org.apache.ignite.transactions.TransactionDuplicateKeyException: Duplicate 
> key during INSERT 
> [key=SQL_PUBLIC_REF_BID_PRD_CREW_DTL_e46b9a63d00b9bb81931d447fdf13491_KEY 
> [idHash=1167531818, hash=2095161702, BID_PRD=
> 201904, FILE_NBR=U174148]]
> at 
> org.apache.ignite.internal.processors.query.h2.dml.DmlUtils.dmlDoInsert(DmlUtils.java:206)
>  ~[ignite-indexing-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.h2.dml.DmlUtils.processSelectResult(DmlUtils.java:169)
>  ~[ignite-indexing-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdateNonTransactional(IgniteH2Indexing.java:3199)
>  ~[ignite-indexing-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdate(IgniteH2Indexing.java:3043)
>  ~[ignite-indexing-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdateDistributed(IgniteH2Indexing.java:2970)
>  ~[ignite-indexing-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeDml(IgniteH2Indexing.java:1319)
>  ~[ignite-indexing-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1240)
>  ~[ignite-indexing-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor$2.applyx(GridQueryProcessor.java:2887)
>  ~[ignite-core-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor$2.applyx(GridQueryProcessor.java:2883)
>  ~[ignite-core-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:35)
>  ~[ignite-core-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:3435)
>  ~[ignite-core-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.lambda$querySqlFields$3(GridQueryProcessor.java:2903)
>  ~[ignite-core-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuerySafe(GridQueryProcessor.java:2941)
>  ~[ignite-core-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2877)
>  ~[ignite-core-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2835)
>  ~[ignite-core-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.executeQuery(JdbcRequestHandler.java:656)
>  [ignite-core-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.doHandle(JdbcRequestHandler.java:321)
>  [ignite-core-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.handle(JdbcRequestHandler.java:258)
>  [ignite-core-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:210)
>  [ignite-core-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:58)
>  [ignite-core-8.8.16.jar:8.8.16]
> at 
> 
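One common way to silence this kind of noise (a sketch of the general idea only, not the actual patch; the class and names below are hypothetical) is to classify exceptions that represent ordinary user errors and route them below the ERROR level:

```java
/** Hypothetical sketch: choose a log level based on whether the failure
 *  needs administrator attention. */
class SqlErrorLogLevel {
    enum Level { ERROR, INFO }

    /** Duplicate-key and similar constraint violations are caused by the
     *  user's data, not by a server fault, so no admin action is needed. */
    static boolean isExpectedUserError(Throwable t) {
        String name = t.getClass().getSimpleName();
        return name.contains("DuplicateKey") || name.contains("ConstraintViolation");
    }

    static Level levelFor(Throwable t) {
        return isExpectedUserError(t) ? Level.INFO : Level.ERROR;
    }

    public static void main(String[] args) {
        class DuplicateKeyException extends RuntimeException { }
        System.out.println(levelFor(new DuplicateKeyException())); // prints INFO
        System.out.println(levelFor(new IllegalStateException())); // prints ERROR
    }
}
```

Unexpected failures still surface at ERROR for the administrator; only the errors already reported back to the SQL client are demoted.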

[jira] [Commented] (IGNITE-17001) Useless ERROR in server logs like Duplicate key during INSERT

2022-05-20 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17539971#comment-17539971
 ] 

Ignite TC Bot commented on IGNITE-17001:


{panel:title=Branch: [pull/10026/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10026/head] Base: [master] : No new tests 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=6578073&buildTypeId=IgniteTests24Java8_RunAll]

> Useless ERROR in server logs like Duplicate key during INSERT
> -
>
> Key: IGNITE-17001
> URL: https://issues.apache.org/jira/browse/IGNITE-17001
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
> Fix For: 2.14
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> During SQL execution we get exceptions that do not appear to require any
> action from the administrator, so it looks like we do not need them in the
> logs. For example:
> {code}
> [2022-03-22T14:16:09,423][ERROR][client-connector-#187%nebula-node%][JdbcRequestHandler]
>  Failed to execute SQL query [reqId=5411, req=JdbcQueryExecuteRequest 
> [schemaName=PUBLIC, pageSize=1024, maxRows=0, sqlQry=I
> NSERT INTO ref_bid_prd_crew_dtl VALUES 
> ('201904','U174148','Initload','2019-07-03 07:48:10',NULL,NULL), 
> args=Object[] [], stmtType=ANY_STATEMENT_TYPE, autoCommit=true, 
> partResReq=false, explicitTimeout=false, sup
> er=JdbcRequest [type=2, reqId=5411]]]
> org.apache.ignite.transactions.TransactionDuplicateKeyException: Duplicate 
> key during INSERT 
> [key=SQL_PUBLIC_REF_BID_PRD_CREW_DTL_e46b9a63d00b9bb81931d447fdf13491_KEY 
> [idHash=1167531818, hash=2095161702, BID_PRD=
> 201904, FILE_NBR=U174148]]
> at 
> org.apache.ignite.internal.processors.query.h2.dml.DmlUtils.dmlDoInsert(DmlUtils.java:206)
>  ~[ignite-indexing-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.h2.dml.DmlUtils.processSelectResult(DmlUtils.java:169)
>  ~[ignite-indexing-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdateNonTransactional(IgniteH2Indexing.java:3199)
>  ~[ignite-indexing-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdate(IgniteH2Indexing.java:3043)
>  ~[ignite-indexing-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdateDistributed(IgniteH2Indexing.java:2970)
>  ~[ignite-indexing-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeDml(IgniteH2Indexing.java:1319)
>  ~[ignite-indexing-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1240)
>  ~[ignite-indexing-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor$2.applyx(GridQueryProcessor.java:2887)
>  ~[ignite-core-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor$2.applyx(GridQueryProcessor.java:2883)
>  ~[ignite-core-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:35)
>  ~[ignite-core-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:3435)
>  ~[ignite-core-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.lambda$querySqlFields$3(GridQueryProcessor.java:2903)
>  ~[ignite-core-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuerySafe(GridQueryProcessor.java:2941)
>  ~[ignite-core-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2877)
>  ~[ignite-core-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2835)
>  ~[ignite-core-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.executeQuery(JdbcRequestHandler.java:656)
>  [ignite-core-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.doHandle(JdbcRequestHandler.java:321)
>  [ignite-core-8.8.16.jar:8.8.16]
> at 
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.handle(JdbcRequestHandler.java:258)
>  [ignite-core-8.8.16.jar:8.8.16]
> at 
>