[jira] [Assigned] (IGNITE-7953) MVCC TX: continuous queries

2018-08-13 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov reassigned IGNITE-7953:


Assignee: Igor Seliverstov

> MVCC TX: continuous queries
> ---
>
> Key: IGNITE-7953
> URL: https://issues.apache.org/jira/browse/IGNITE-7953
> Project: Ignite
>  Issue Type: Task
>  Components: cache
>Reporter: Alexander Paschenko
>Assignee: Igor Seliverstov
>Priority: Major
> Fix For: 2.7
>
>
> We need to implement MVCC-compatible continuous queries.
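
For context, a minimal sketch of the API that has to keep working on MVCC-enabled (TRANSACTIONAL_SNAPSHOT) caches; the cache name and listener are illustrative assumptions, not taken from the ticket:

{code:java}
import javax.cache.Cache;
import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

public class MvccContinuousQueryExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, Integer> cache = ignite.getOrCreateCache("mvccCache");

            ContinuousQuery<Integer, Integer> qry = new ContinuousQuery<>();

            // Under MVCC the listener should observe only committed, snapshot-visible updates.
            qry.setLocalListener(evts -> {
                for (CacheEntryEvent<? extends Integer, ? extends Integer> e : evts)
                    System.out.println("Updated: " + e.getKey() + " -> " + e.getValue());
            });

            try (QueryCursor<Cache.Entry<Integer, Integer>> cur = cache.query(qry)) {
                cache.put(1, 1); // the update is delivered to the listener once committed
            }
        }
    }
}
{code}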



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9265) MVCC TX: Two rows with the same key in one MERGE statement produce an exception

2018-08-14 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-9265:


 Summary: MVCC TX: Two rows with the same key in one MERGE 
statement produce an exception
 Key: IGNITE-9265
 URL: https://issues.apache.org/jira/browse/IGNITE-9265
 Project: Ignite
  Issue Type: Bug
Reporter: Igor Seliverstov


When an operation like {{MERGE INTO INTEGER (_key, _val) KEY(_key) VALUES 
(1,1),(1,2)}} is executed, an exception occurs.

Correct behavior: each subsequent update on the same key overwrites the previous one.
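
A hedged reproducer sketch showing the expected outcome; the cache setup via indexed types and the TRANSACTIONAL_SNAPSHOT mode are assumptions, not taken from the ticket:

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class MergeSameKeyReproducer {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, Integer> ccfg = new CacheConfiguration<Integer, Integer>("ints")
                .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT); // MVCC-enabled cache
            ccfg.setIndexedTypes(Integer.class, Integer.class);               // exposes table INTEGER(_key, _val)

            IgniteCache<Integer, Integer> cache = ignite.getOrCreateCache(ccfg);

            // Two rows with the same key in one statement: no exception is expected,
            // the second row should simply win.
            cache.query(new SqlFieldsQuery(
                "MERGE INTO INTEGER (_key, _val) KEY(_key) VALUES (1,1),(1,2)")).getAll();

            System.out.println("Value for key 1: " + cache.get(1)); // expected: 2
        }
    }
}
{code}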



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-5935) MVCC TX: Tx recovery protocol

2018-08-17 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-5935:
-
Summary: MVCC TX: Tx recovery protocol  (was: Integrate mvcc support in tx 
recovery protocol)

> MVCC TX: Tx recovery protocol
> -
>
> Key: IGNITE-5935
> URL: https://issues.apache.org/jira/browse/IGNITE-5935
> Project: Ignite
>  Issue Type: Task
>  Components: cache, mvcc
>Reporter: Semen Boikov
>Assignee: Igor Seliverstov
>Priority: Major
> Fix For: 2.7
>
>
> Need to make sure tx recovery works properly with MVCC enabled:
> - tx IDs are generated and not lost if a transaction is committed by the recovery 
> procedure
> - tx should be removed from the list of active transactions on the coordinator



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-5935) MVCC TX: Tx recovery protocol

2018-08-17 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-5935:
-
Description: 
Tx recovery doesn't work properly for txs over MVCC-enabled caches using the Cache 
API: it requires an MvccSnapshot, which may not have been acquired by recovery time.
Need to implement logic that checks whether a snapshot was already obtained by one 
of the tx participants and reuses it; otherwise a new snapshot should be requested 
and propagated to the participants.

  was:
Need make sure tx recovery work properly with mvcc enabled:
- tx IDs are generated and not lost if transaction is committed by recovery 
procedure
- tx should be removed from list of active transactions on coordinator


> MVCC TX: Tx recovery protocol
> -
>
> Key: IGNITE-5935
> URL: https://issues.apache.org/jira/browse/IGNITE-5935
> Project: Ignite
>  Issue Type: Task
>  Components: cache, mvcc
>Reporter: Semen Boikov
>Assignee: Igor Seliverstov
>Priority: Major
> Fix For: 2.7
>
>
> Tx recovery doesn't work properly for txs over MVCC-enabled caches using the 
> Cache API: it requires an MvccSnapshot, which may not have been acquired by 
> recovery time.
> Need to implement logic that checks whether a snapshot was already obtained by 
> one of the tx participants and reuses it; otherwise a new snapshot should be 
> requested and propagated to the participants.
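
A pseudocode-style sketch of the described check; only {{MvccSnapshot}} is a real Ignite type, the other interfaces are hypothetical stand-ins introduced purely for illustration:

{code:java}
import java.util.List;
import org.apache.ignite.internal.processors.cache.mvcc.MvccSnapshot;

/** Hypothetical stand-in for a tx participant known to the recovery procedure. */
interface TxParticipant {
    /** @return Snapshot already acquired by this participant, or {@code null}. */
    MvccSnapshot acquiredSnapshot();
}

/** Hypothetical stand-in for the recovery-time state of a single transaction. */
interface RecoverySession {
    List<TxParticipant> participants();

    /** Requests a fresh snapshot from the MVCC coordinator. */
    MvccSnapshot requestSnapshot();

    /** Propagates the chosen snapshot to all participants. */
    void propagate(MvccSnapshot snap);
}

final class RecoverySnapshotResolver {
    static MvccSnapshot resolve(RecoverySession ses) {
        // Reuse an existing snapshot if any participant already acquired one.
        for (TxParticipant p : ses.participants()) {
            MvccSnapshot snap = p.acquiredSnapshot();

            if (snap != null) {
                ses.propagate(snap);

                return snap;
            }
        }

        // Otherwise request a new snapshot and spread it between the participants.
        MvccSnapshot snap = ses.requestSnapshot();

        ses.propagate(snap);

        return snap;
    }
}
{code}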



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-5935) MVCC TX: Tx recovery protocol

2018-08-17 Thread Igor Seliverstov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16583787#comment-16583787
 ] 

Igor Seliverstov commented on IGNITE-5935:
--

Since Cache API internals are going to be rethought and reworked it makes sense 
to suspend the task and return to it after IGNITE-7764 is done.

> MVCC TX: Tx recovery protocol
> -
>
> Key: IGNITE-5935
> URL: https://issues.apache.org/jira/browse/IGNITE-5935
> Project: Ignite
>  Issue Type: Task
>  Components: cache, mvcc
>Reporter: Semen Boikov
>Assignee: Igor Seliverstov
>Priority: Major
> Fix For: 2.7
>
>
> Tx recovery doesn't work properly for txs over MVCC-enabled caches using the 
> Cache API: it requires an MvccSnapshot, which may not have been acquired by 
> recovery time.
> Need to implement logic that checks whether a snapshot was already obtained by 
> one of the tx participants and reuses it; otherwise a new snapshot should be 
> requested and propagated to the participants.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-9310) SQL: throw exception when missing cache is attempted to be created inside a transaction

2018-08-17 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov reassigned IGNITE-9310:


Assignee: Igor Seliverstov

> SQL: throw exception when missing cache is attempted to be created inside a 
> transaction
> ---
>
> Key: IGNITE-9310
> URL: https://issues.apache.org/jira/browse/IGNITE-9310
> Project: Ignite
>  Issue Type: Task
>  Components: mvcc, sql
>Affects Versions: 2.6
>Reporter: Vladimir Ozerov
>Assignee: Igor Seliverstov
>Priority: Critical
> Fix For: 2.7
>
>
> See 
> \{{org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing#prepareStatementAndCaches}}.
>  This method might be called inside a transaction (both MVCC and non-MVCC 
> modes). If we do not have any protective mechanics (need to check), then this 
> call may lead to cache creation on a client, which in turn will wait for all 
> TXes to finish, including current one, leading to a deadlock.
>  # Create tests confirming the problem
>  # If hang is reproduced - add a check for ongoing transaction and throw an 
> exception



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9314) MVCC TX: Datastreamer operations

2018-08-17 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-9314:


 Summary: MVCC TX: Datastreamer operations
 Key: IGNITE-9314
 URL: https://issues.apache.org/jira/browse/IGNITE-9314
 Project: Ignite
  Issue Type: Task
Reporter: Igor Seliverstov


Need to change DataStreamer semantics (make it transactional).

Currently clients can see DataStreamer partial writes, so two subsequent 
selects, run in the scope of one transaction at load time, may return 
different results.

Related thread:
http://apache-ignite-developers.2346864.n4.nabble.com/MVCC-and-IgniteDataStreamer-td32340.html
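
A hedged illustration of the visibility problem (cache setup, table name and the surrounding code are assumptions, not an actual Ignite test): a streamer load races with a transactional read, so the two counts below may differ.

{code:java}
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;

public class StreamerVisibilityExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, Integer> ccfg = new CacheConfiguration<Integer, Integer>("ints")
                .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT); // MVCC-enabled cache
            ccfg.setIndexedTypes(Integer.class, Integer.class);               // exposes table INTEGER(_key, _val)

            IgniteCache<Integer, Integer> cache = ignite.getOrCreateCache(ccfg);

            // Load data concurrently through a streamer: its partial writes become visible as they land.
            new Thread(() -> {
                try (IgniteDataStreamer<Integer, Integer> streamer = ignite.dataStreamer("ints")) {
                    for (int i = 0; i < 1_000_000; i++)
                        streamer.addData(i, i);
                }
            }).start();

            // Two identical selects inside one transaction may observe different counts,
            // because streamer writes bypass the transaction's MVCC snapshot.
            try (Transaction tx = ignite.transactions().txStart()) {
                List<List<?>> first = cache.query(new SqlFieldsQuery("SELECT COUNT(*) FROM INTEGER")).getAll();
                List<List<?>> second = cache.query(new SqlFieldsQuery("SELECT COUNT(*) FROM INTEGER")).getAll();

                System.out.println("first=" + first.get(0).get(0) + ", second=" + second.get(0).get(0));

                tx.commit();
            }
        }
    }
}
{code}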



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-9314) MVCC TX: Datastreamer operations

2018-08-17 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-9314:
-
Component/s: mvcc

> MVCC TX: Datastreamer operations
> 
>
> Key: IGNITE-9314
> URL: https://issues.apache.org/jira/browse/IGNITE-9314
> Project: Ignite
>  Issue Type: Task
>  Components: mvcc
>Reporter: Igor Seliverstov
>Priority: Major
> Fix For: 2.7
>
>
> Need to change DataStreamer semantics (make it transactional).
> Currently clients can see DataStreamer partial writes, so two subsequent 
> selects, run in the scope of one transaction at load time, may return 
> different results.
> Related thread:
> http://apache-ignite-developers.2346864.n4.nabble.com/MVCC-and-IgniteDataStreamer-td32340.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-9314) MVCC TX: Datastreamer operations

2018-08-17 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-9314:
-
Fix Version/s: 2.7

> MVCC TX: Datastreamer operations
> 
>
> Key: IGNITE-9314
> URL: https://issues.apache.org/jira/browse/IGNITE-9314
> Project: Ignite
>  Issue Type: Task
>  Components: mvcc
>Reporter: Igor Seliverstov
>Priority: Major
> Fix For: 2.7
>
>
> Need to change DataStreamer semantics (make it transactional).
> Currently clients can see DataStreamer partial writes, so two subsequent 
> selects, run in the scope of one transaction at load time, may return 
> different results.
> Related thread:
> http://apache-ignite-developers.2346864.n4.nabble.com/MVCC-and-IgniteDataStreamer-td32340.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-9314) MVCC TX: Datastreamer operations

2018-08-17 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-9314:
-
Component/s: sql

> MVCC TX: Datastreamer operations
> 
>
> Key: IGNITE-9314
> URL: https://issues.apache.org/jira/browse/IGNITE-9314
> Project: Ignite
>  Issue Type: Task
>  Components: mvcc, sql
>Reporter: Igor Seliverstov
>Priority: Major
> Fix For: 2.7
>
>
> Need to change DataStreamer semantics (make it transactional).
> Currently clients can see DataStreamer partial writes, so two subsequent 
> selects, run in the scope of one transaction at load time, may return 
> different results.
> Related thread:
> http://apache-ignite-developers.2346864.n4.nabble.com/MVCC-and-IgniteDataStreamer-td32340.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-9310) SQL: throw exception when missing cache is attempted to be created inside a transaction

2018-08-17 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-9310:
-
Component/s: (was: mvcc)

> SQL: throw exception when missing cache is attempted to be created inside a 
> transaction
> ---
>
> Key: IGNITE-9310
> URL: https://issues.apache.org/jira/browse/IGNITE-9310
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.6
>Reporter: Vladimir Ozerov
>Assignee: Igor Seliverstov
>Priority: Critical
> Fix For: 2.7
>
>
> See 
> \{{org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing#prepareStatementAndCaches}}.
>  This method might be called inside a transaction (both MVCC and non-MVCC 
> modes). If we do not have any protective mechanics (need to check), then this 
> call may lead to cache creation on a client, which in turn will wait for all 
> TXes to finish, including current one, leading to a deadlock.
>  # Create tests confirming the problem
>  # If hang is reproduced - add a check for ongoing transaction and throw an 
> exception



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-9310) SQL: throw exception when missing cache is attempted to be created inside a transaction

2018-08-17 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-9310:
-
Attachment: CacheCreationItTransactionSelfTest.java

> SQL: throw exception when missing cache is attempted to be created inside a 
> transaction
> ---
>
> Key: IGNITE-9310
> URL: https://issues.apache.org/jira/browse/IGNITE-9310
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.6
>Reporter: Vladimir Ozerov
>Assignee: Igor Seliverstov
>Priority: Critical
> Fix For: 2.7
>
> Attachments: CacheCreationItTransactionSelfTest.java
>
>
> See 
> \{{org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing#prepareStatementAndCaches}}.
>  This method might be called inside a transaction (both MVCC and non-MVCC 
> modes). If we do not have any protective mechanics (need to check), then this 
> call may lead to cache creation on a client, which in turn will wait for all 
> TXes to finish, including current one, leading to a deadlock.
>  # Create tests confirming the problem
>  # If hang is reproduced - add a check for ongoing transaction and throw an 
> exception



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-9310) SQL: throw exception when missing cache is attempted to be created inside a transaction

2018-08-17 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-9310:
-
Priority: Minor  (was: Critical)

> SQL: throw exception when missing cache is attempted to be created inside a 
> transaction
> ---
>
> Key: IGNITE-9310
> URL: https://issues.apache.org/jira/browse/IGNITE-9310
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.6
>Reporter: Vladimir Ozerov
>Assignee: Igor Seliverstov
>Priority: Minor
> Fix For: 2.7
>
> Attachments: CacheCreationItTransactionSelfTest.java
>
>
> See 
> \{{org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing#prepareStatementAndCaches}}.
>  This method might be called inside a transaction (both MVCC and non-MVCC 
> modes). If we do not have any protective mechanics (need to check), then this 
> call may lead to cache creation on a client, which in turn will wait for all 
> TXes to finish, including current one, leading to a deadlock.
>  # Create tests confirming the problem
>  # If hang is reproduced - add a check for ongoing transaction and throw an 
> exception



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-9310) SQL: throw exception when missing cache is attempted to be created inside a transaction

2018-08-17 Thread Igor Seliverstov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584002#comment-16584002
 ] 

Igor Seliverstov commented on IGNITE-9310:
--

The attached test shows there is no hang on an attempt to create a missing client 
cache in the middle of a transaction. However, the thrown exception isn't 
descriptive; I think we need to include more details in it (whether it is a missing 
client cache or there is no such cache at all, which table in the query requires the 
cache, the cache name, etc.).

[~vozerov], your thoughts?

> SQL: throw exception when missing cache is attempted to be created inside a 
> transaction
> ---
>
> Key: IGNITE-9310
> URL: https://issues.apache.org/jira/browse/IGNITE-9310
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.6
>Reporter: Vladimir Ozerov
>Assignee: Igor Seliverstov
>Priority: Minor
> Fix For: 2.7
>
> Attachments: CacheCreationItTransactionSelfTest.java
>
>
> See 
> \{{org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing#prepareStatementAndCaches}}.
>  This method might be called inside a transaction (both MVCC and non-MVCC 
> modes). If we do not have any protective mechanics (need to check), then this 
> call may lead to cache creation on a client, which in turn will wait for all 
> TXes to finish, including current one, leading to a deadlock.
>  # Create tests confirming the problem
>  # If hang is reproduced - add a check for ongoing transaction and throw an 
> exception



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8149) MVCC TX: Size method should use tx snapshot

2018-08-30 Thread Igor Seliverstov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16597429#comment-16597429
 ] 

Igor Seliverstov commented on IGNITE-8149:
--

[~Pavlukhin], see my comments below:
1) Move {{applyAndCollectLocalUpdateCounters()}} and 
{{txCounters().updateCounters(updCntrs)}} to 
{{org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter#applyTxCounters}}
2) Inline 
{{org.apache.ignite.internal.processors.cache.distributed.GridDistributedTxRemoteAdapter#applyUpdateCounters}}
 method
3) Inline 
{{org.apache.ignite.internal.processors.cache.transactions.IgniteTxAdapter#applyCacheSizeDeltas}}
 method
4) 
{{org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalState#updatedCachePartitions}}
 - {{involvedPartitions}} would be, I think, a much better name
5) 
{{org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalStateAdapter#updCacheParts}}
 makes no sense for non-MVCC transactions and should be initialized lazily
6) What about changes made by the DataStreamer? Appropriate tests are needed.
7) There are no tests for size consistency after a just-joined node is fully 
rebalanced.
8) Inline 
{{org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.CacheDataStoreImpl#updateSize0}}
 and use 
{{org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.CacheDataStoreImpl#updateSize}}
 instead
9) Check imports.

> MVCC TX: Size method should use tx snapshot
> ---
>
> Key: IGNITE-8149
> URL: https://issues.apache.org/jira/browse/IGNITE-8149
> Project: Ignite
>  Issue Type: Task
>  Components: cache, mvcc
>Reporter: Igor Seliverstov
>Assignee: Ivan Pavlukhin
>Priority: Major
> Fix For: 2.7
>
>
> Currently cache.size() returns the number of entries in the cache trees, while 
> there can be several versions of one key-value pair.
> We should use the tx snapshot and count all entries passing the mvcc filter instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8149) MVCC TX: Size method should use tx snapshot

2018-08-30 Thread Igor Seliverstov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16597508#comment-16597508
 ] 

Igor Seliverstov commented on IGNITE-8149:
--

[~Pavlukhin], "updated rows" sounds natural, but I cant say the same about 
"updated partitions". I would insist to rename the method.

> MVCC TX: Size method should use tx snapshot
> ---
>
> Key: IGNITE-8149
> URL: https://issues.apache.org/jira/browse/IGNITE-8149
> Project: Ignite
>  Issue Type: Task
>  Components: cache, mvcc
>Reporter: Igor Seliverstov
>Assignee: Ivan Pavlukhin
>Priority: Major
> Fix For: 2.7
>
>
> Currently cache.size() returns the number of entries in the cache trees, while 
> there can be several versions of one key-value pair.
> We should use the tx snapshot and count all entries passing the mvcc filter instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8149) MVCC TX: Size method should use tx snapshot

2018-08-31 Thread Igor Seliverstov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16598492#comment-16598492
 ] 

Igor Seliverstov commented on IGNITE-8149:
--

[~vozerov], could you take a look?

> MVCC TX: Size method should use tx snapshot
> ---
>
> Key: IGNITE-8149
> URL: https://issues.apache.org/jira/browse/IGNITE-8149
> Project: Ignite
>  Issue Type: Task
>  Components: cache, mvcc
>Reporter: Igor Seliverstov
>Assignee: Ivan Pavlukhin
>Priority: Major
> Fix For: 2.7
>
>
> Currently cache.size() returns the number of entries in the cache trees, while 
> there can be several versions of one key-value pair.
> We should use the tx snapshot and count all entries passing the mvcc filter instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9483) JDBC/ODBC thin drivers protocol versions compatibility

2018-09-06 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-9483:


 Summary: JDBC/ODBC thin drivers protocol versions compatibility
 Key: IGNITE-9483
 URL: https://issues.apache.org/jira/browse/IGNITE-9483
 Project: Ignite
  Issue Type: Bug
  Components: mvcc
Affects Versions: 2.7
Reporter: Igor Seliverstov
 Fix For: 2.7


Initially the MVCC feature was aimed at version 2.5, but it cannot be released 
earlier than in the scope of 2.7.

There are several protocol version checks that act on version 2.5.0 instead of 
2.7.0 (for example, 
{{JdbcConnectionContext#initializeFromHandshake}}).

Need to identify such places and fix them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-9483) JDBC/ODBC thin drivers protocol versions compatibility

2018-09-06 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov reassigned IGNITE-9483:


Assignee: Igor Seliverstov

> JDBC/ODBC thin drivers protocol versions compatibility
> --
>
> Key: IGNITE-9483
> URL: https://issues.apache.org/jira/browse/IGNITE-9483
> Project: Ignite
>  Issue Type: Bug
>  Components: mvcc
>Affects Versions: 2.7
>Reporter: Igor Seliverstov
>Assignee: Igor Seliverstov
>Priority: Major
> Fix For: 2.7
>
>
> Initially the MVCC feature was aimed at version 2.5, but it cannot be released 
> earlier than in the scope of 2.7.
> There are several protocol version checks that act on version 2.5.0 instead of 
> 2.7.0 (for example, 
> {{JdbcConnectionContext#initializeFromHandshake}})
> Need to identify such places and fix them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9489) CorruptedTreeException on index create.

2018-09-06 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-9489:


 Summary: CorruptedTreeException on index create.
 Key: IGNITE-9489
 URL: https://issues.apache.org/jira/browse/IGNITE-9489
 Project: Ignite
  Issue Type: Bug
  Components: cache, sql
Affects Versions: 2.6, 2.5, 2.4
Reporter: Igor Seliverstov
 Attachments: Test.java

Currently, on dynamic index drop with persistence enabled, H2TreeIndex instances 
aren't destroyed. That means their root pages aren't removed from the meta 
tree (see 
{{org.apache.ignite.internal.processors.cache.persistence.IndexStorageImpl#getOrAllocateForTree}})
 and are reused on a subsequent dynamic index create, which leads to a 
CorruptedTreeException on the initial index rebuild because there are some items 
with broken links on the root page.

Reproducer attached.
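
The attached Test.java is not reproduced here; the following is a hedged sketch of the reproduce sequence under assumed table/index names:

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class IndexDropCreateSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(new DataStorageConfiguration()
                .setDefaultDataRegionConfiguration(
                    new DataRegionConfiguration().setPersistenceEnabled(true)));

        try (Ignite ignite = Ignition.start(cfg)) {
            ignite.cluster().active(true);

            IgniteCache<?, ?> cache = ignite.getOrCreateCache("ddl");

            cache.query(new SqlFieldsQuery(
                "CREATE TABLE Person(id INT PRIMARY KEY, name VARCHAR)").setSchema("PUBLIC")).getAll();
            cache.query(new SqlFieldsQuery(
                "INSERT INTO Person(id, name) VALUES (1, 'val1')").setSchema("PUBLIC")).getAll();

            // Drop a dynamic index: with persistence enabled its root page is not removed from the meta tree.
            cache.query(new SqlFieldsQuery("CREATE INDEX name_idx ON Person(name)").setSchema("PUBLIC")).getAll();
            cache.query(new SqlFieldsQuery("DROP INDEX name_idx").setSchema("PUBLIC")).getAll();

            // Re-creating the index reuses the stale root page, and the initial rebuild
            // fails with CorruptedTreeException.
            cache.query(new SqlFieldsQuery("CREATE INDEX name_idx ON Person(name)").setSchema("PUBLIC")).getAll();
        }
    }
}
{code}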



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-9489) CorruptedTreeException on index create.

2018-09-06 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-9489:
-
Description: 
Currently, on dynamic index drop with persistence enabled, H2TreeIndex instances 
aren't destroyed. That means their root pages aren't removed from the meta 
tree (see 
{{org.apache.ignite.internal.processors.cache.persistence.IndexStorageImpl#getOrAllocateForTree}})
 and are reused on a subsequent dynamic index create, which leads to a 
CorruptedTreeException on the initial index rebuild because there are some items 
with broken links on the root page.

Reproducer attached.

Error log:

{noformat}
Error during parallel index create/rebuild.
org.h2.message.DbException: Внутренняя ошибка: "class 
org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
 Runtime failure on row: Row@7745722d[ key: 
org.apache.ignite.internal.processors.cache.index.AbstractSchemaSelfTest$KeyClass
 [idHash=2038596277, hash=-1388553726, id=1], val: 
org.apache.ignite.internal.processors.cache.index.AbstractSchemaSelfTest$ValueClass
 [idHash=2109544797, hash=-898815788, field1=val1], ver: GridCacheVersion 
[topVer=147733489, order=1536253488473, nodeOrder=2] ][ 1, val1, null ]"
General error: "class 
org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
 Runtime failure on row: Row@7745722d[ key: 
org.apache.ignite.internal.processors.cache.index.AbstractSchemaSelfTest$KeyClass
 [idHash=2038596277, hash=-1388553726, id=1], val: 
org.apache.ignite.internal.processors.cache.index.AbstractSchemaSelfTest$ValueClass
 [idHash=2109544797, hash=-898815788, field1=val1], ver: GridCacheVersion 
[topVer=147733489, order=1536253488473, nodeOrder=2] ][ 1, val1, null ]" 
[5-195]
at org.h2.message.DbException.get(DbException.java:168)
at org.h2.message.DbException.convert(DbException.java:295)
at 
org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.putx(H2TreeIndex.java:251)
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$3.apply(IgniteH2Indexing.java:890)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.updateIndex(GridCacheMapEntry.java:4320)
at 
org.apache.ignite.internal.processors.query.schema.SchemaIndexCacheVisitorImpl.processKey(SchemaIndexCacheVisitorImpl.java:244)
at 
org.apache.ignite.internal.processors.query.schema.SchemaIndexCacheVisitorImpl.processPartition(SchemaIndexCacheVisitorImpl.java:207)
at 
org.apache.ignite.internal.processors.query.schema.SchemaIndexCacheVisitorImpl.processPartitions(SchemaIndexCacheVisitorImpl.java:166)
at 
org.apache.ignite.internal.processors.query.schema.SchemaIndexCacheVisitorImpl.access$100(SchemaIndexCacheVisitorImpl.java:50)
at 
org.apache.ignite.internal.processors.query.schema.SchemaIndexCacheVisitorImpl$AsyncWorker.body(SchemaIndexCacheVisitorImpl.java:317)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.h2.jdbc.JdbcSQLException: Внутренняя ошибка: "class 
org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
 Runtime failure on row: Row@7745722d[ key: 
org.apache.ignite.internal.processors.cache.index.AbstractSchemaSelfTest$KeyClass
 [idHash=2038596277, hash=-1388553726, id=1], val: 
org.apache.ignite.internal.processors.cache.index.AbstractSchemaSelfTest$ValueClass
 [idHash=2109544797, hash=-898815788, field1=val1], ver: GridCacheVersion 
[topVer=147733489, order=1536253488473, nodeOrder=2] ][ 1, val1, null ]"
General error: "class 
org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
 Runtime failure on row: Row@7745722d[ key: 
org.apache.ignite.internal.processors.cache.index.AbstractSchemaSelfTest$KeyClass
 [idHash=2038596277, hash=-1388553726, id=1], val: 
org.apache.ignite.internal.processors.cache.index.AbstractSchemaSelfTest$ValueClass
 [idHash=2109544797, hash=-898815788, field1=val1], ver: GridCacheVersion 
[topVer=147733489, order=1536253488473, nodeOrder=2] ][ 1, val1, null ]" 
[5-195]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
... 12 more
Caused by: class 
org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
 Runtime failure on row: Row@7745722d[ key: 
org.apache.ignite.internal.processors.cache.index.AbstractSchemaSelfTest$KeyClass
 [idHash=2038596277, hash=-1388553726, id=1], val: 
org.apache.ignite.internal.processors.cache.index.AbstractSchemaSelfTest$ValueClass
 [idHash=2109544797, hash=-898815788, field1=val1], ver: GridCacheVersion 
[topVer=147733489, order=1536253488473, nodeOrder=2] ][ 1, val1, null ]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doPut(BPlusTree.java:2285)
at 
org.apache.i

[jira] [Updated] (IGNITE-9489) CorruptedTreeException on index create.

2018-09-06 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-9489:
-
Ignite Flags:   (was: Docs Required)

> CorruptedTreeException on index create.
> ---
>
> Key: IGNITE-9489
> URL: https://issues.apache.org/jira/browse/IGNITE-9489
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, sql
>Affects Versions: 2.4, 2.5, 2.6
>Reporter: Igor Seliverstov
>Priority: Major
> Attachments: Test.java
>
>
> Currently, on dynamic index drop with persistence enabled, H2TreeIndex 
> instances aren't destroyed. That means their root pages aren't removed 
> from the meta tree (see 
> {{org.apache.ignite.internal.processors.cache.persistence.IndexStorageImpl#getOrAllocateForTree}})
>  and are reused on a subsequent dynamic index create, which leads to a 
> CorruptedTreeException on the initial index rebuild because there are some items 
> with broken links on the root page.
> Reproducer attached.
> Error log:
> {noformat}
> Error during parallel index create/rebuild.
> org.h2.message.DbException: Внутренняя ошибка: "class 
> org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
>  Runtime failure on row: Row@7745722d[ key: 
> org.apache.ignite.internal.processors.cache.index.AbstractSchemaSelfTest$KeyClass
>  [idHash=2038596277, hash=-1388553726, id=1], val: 
> org.apache.ignite.internal.processors.cache.index.AbstractSchemaSelfTest$ValueClass
>  [idHash=2109544797, hash=-898815788, field1=val1], ver: GridCacheVersion 
> [topVer=147733489, order=1536253488473, nodeOrder=2] ][ 1, val1, null ]"
> General error: "class 
> org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
>  Runtime failure on row: Row@7745722d[ key: 
> org.apache.ignite.internal.processors.cache.index.AbstractSchemaSelfTest$KeyClass
>  [idHash=2038596277, hash=-1388553726, id=1], val: 
> org.apache.ignite.internal.processors.cache.index.AbstractSchemaSelfTest$ValueClass
>  [idHash=2109544797, hash=-898815788, field1=val1], ver: GridCacheVersion 
> [topVer=147733489, order=1536253488473, nodeOrder=2] ][ 1, val1, null ]" 
> [5-195]
>   at org.h2.message.DbException.get(DbException.java:168)
>   at org.h2.message.DbException.convert(DbException.java:295)
>   at 
> org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.putx(H2TreeIndex.java:251)
>   at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$3.apply(IgniteH2Indexing.java:890)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.updateIndex(GridCacheMapEntry.java:4320)
>   at 
> org.apache.ignite.internal.processors.query.schema.SchemaIndexCacheVisitorImpl.processKey(SchemaIndexCacheVisitorImpl.java:244)
>   at 
> org.apache.ignite.internal.processors.query.schema.SchemaIndexCacheVisitorImpl.processPartition(SchemaIndexCacheVisitorImpl.java:207)
>   at 
> org.apache.ignite.internal.processors.query.schema.SchemaIndexCacheVisitorImpl.processPartitions(SchemaIndexCacheVisitorImpl.java:166)
>   at 
> org.apache.ignite.internal.processors.query.schema.SchemaIndexCacheVisitorImpl.access$100(SchemaIndexCacheVisitorImpl.java:50)
>   at 
> org.apache.ignite.internal.processors.query.schema.SchemaIndexCacheVisitorImpl$AsyncWorker.body(SchemaIndexCacheVisitorImpl.java:317)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.h2.jdbc.JdbcSQLException: Внутренняя ошибка: "class 
> org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
>  Runtime failure on row: Row@7745722d[ key: 
> org.apache.ignite.internal.processors.cache.index.AbstractSchemaSelfTest$KeyClass
>  [idHash=2038596277, hash=-1388553726, id=1], val: 
> org.apache.ignite.internal.processors.cache.index.AbstractSchemaSelfTest$ValueClass
>  [idHash=2109544797, hash=-898815788, field1=val1], ver: GridCacheVersion 
> [topVer=147733489, order=1536253488473, nodeOrder=2] ][ 1, val1, null ]"
> General error: "class 
> org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
>  Runtime failure on row: Row@7745722d[ key: 
> org.apache.ignite.internal.processors.cache.index.AbstractSchemaSelfTest$KeyClass
>  [idHash=2038596277, hash=-1388553726, id=1], val: 
> org.apache.ignite.internal.processors.cache.index.AbstractSchemaSelfTest$ValueClass
>  [idHash=2109544797, hash=-898815788, field1=val1], ver: GridCacheVersion 
> [topVer=147733489, order=1536253488473, nodeOrder=2] ][ 1, val1, null ]" 
> [5-195]
>   at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
>   ... 12 more
> Caused by: class 
> org.apache.ignite.internal.processors.cache.persistence.tree.Corrupt

[jira] [Updated] (IGNITE-9484) MVCC TX: Handling transactions from multiple threads in jdbc requests handler.

2018-09-07 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-9484:
-
Description: 
Currently JDBC/ODBC requests are handled in NioWorker threads; that means that 
sessions may be rebalanced between NIO threads, which makes it impossible to 
associate a NearTransaction with the caller thread.

There are two possible solutions:
 # Process client messages like regular grid ones in a separate thread pool, on a 
thread-per-connection basis
 # Implement Suspend/Resume functionality for pessimistic transactions as well 
as for optimistic ones (only pessimistic transactions are supported by 
mvcc-enabled caches)

  was:
JDBC requests may be handled by the different threads of thread pool even if 
they belong to the same transaction. As a workaround a dedicated worker thread 
is created for each session, which is not the best solution.

it is much better to have an abiltity to drive {{Near}} transactions from 
different threads in the cases when transaction actions are applied 
sequentially.


> MVCC TX: Handling transactions from multiple threads in jdbc requests handler.
> --
>
> Key: IGNITE-9484
> URL: https://issues.apache.org/jira/browse/IGNITE-9484
> Project: Ignite
>  Issue Type: Task
>  Components: jdbc, mvcc
>Reporter: Roman Kondakov
>Priority: Major
>
> Currently JDBC/ODBC requests are handled in NioWorker threads; that means 
> that sessions may be rebalanced between NIO threads, which makes it impossible to 
> associate a NearTransaction with the caller thread.
> There are two possible solutions:
>  # Process client messages like regular grid ones in a separate thread pool, 
> on a thread-per-connection basis
>  # Implement Suspend/Resume functionality for pessimistic transactions as 
> well as for optimistic ones (only pessimistic transactions are supported by 
> mvcc-enabled caches)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-9484) MVCC TX: Handling transactions from multiple threads in jdbc requests handler.

2018-09-07 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-9484:
-
Description: 
Currently JDBC/ODBC requests are handled in NioWorker threads; that means that 
sessions may be rebalanced between NIO threads, which makes it impossible to 
associate a NearTransaction with the caller thread.

There are two possible solutions:
 # Process client messages like regular grid ones in a separate thread pool, on a 
thread-per-connection basis
 # Implement Suspend/Resume functionality for pessimistic transactions like it 
was done for optimistic ones (only pessimistic transactions are supported by 
mvcc-enabled caches)

  was:
Currently JDBC/ODBC requests are handled in NioWorker threads; that means what 
sessions may be rebalanced between nio threads which makes impossible to 
associate a NearTransaction with caller thread.

There are two possible solutions:
 # To process client messages like regular grid ones in separate thread pool in 
thread-per-connection basis
 # Implement Suspend/Resume functionality for pessimistic transactions as well 
as for optimistic ones (only pessimistic transactions are supported by 
mvcc-enabled caches)


> MVCC TX: Handling transactions from multiple threads in jdbc requests handler.
> --
>
> Key: IGNITE-9484
> URL: https://issues.apache.org/jira/browse/IGNITE-9484
> Project: Ignite
>  Issue Type: Task
>  Components: jdbc, mvcc
>Reporter: Roman Kondakov
>Priority: Major
>
> Currently JDBC/ODBC requests are handled in NioWorker threads; that means 
> that sessions may be rebalanced between NIO threads, which makes it impossible to 
> associate a NearTransaction with the caller thread.
> There are two possible solutions:
>  # Process client messages like regular grid ones in a separate thread pool, 
> on a thread-per-connection basis
>  # Implement Suspend/Resume functionality for pessimistic transactions like 
> it was done for optimistic ones (only pessimistic transactions are supported 
> by mvcc-enabled caches)
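
A hedged sketch of option 2 above: the handler/session types are assumptions, and pessimistic suspend/resume is exactly what this ticket asks to add (today {{Transaction#suspend()}} / {{resume()}} work only for optimistic transactions).

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

/** Per-JDBC-session transaction holder (the class itself is hypothetical). */
class JdbcTxSessionSketch {
    private final Ignite ignite;
    private Transaction tx;

    JdbcTxSessionSketch(Ignite ignite) {
        this.ignite = ignite;
    }

    /** Called by whichever worker thread starts processing a request of this session. */
    synchronized void onRequestStart() {
        if (tx == null)
            tx = ignite.transactions().txStart(
                TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ);
        else
            tx.resume();  // re-attach the tx to the current worker thread
    }

    /** Called when the worker finishes the request but the transaction is still open. */
    synchronized void onRequestEnd() {
        tx.suspend();     // detach the tx so another worker thread can pick it up later
    }
}
{code}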



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-9320) MVCC: finalize configuration

2018-09-11 Thread Igor Seliverstov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16610277#comment-16610277
 ] 

Igor Seliverstov commented on IGNITE-9320:
--

[~rkondakov], configuration validation on initialization is a wrong pattern: we 
already have a validation step in the flow. You should move all validation checks 
there and not do it twice.

See 
\{{org.apache.ignite.internal.processors.cache.mvcc.MvccProcessorImpl#validateCacheConfiguration}}
 usages.

> MVCC: finalize configuration
> 
>
> Key: IGNITE-9320
> URL: https://issues.apache.org/jira/browse/IGNITE-9320
> Project: Ignite
>  Issue Type: Task
>  Components: mvcc
>Reporter: Vladimir Ozerov
>Assignee: Roman Kondakov
>Priority: Major
> Fix For: 2.7
>
>
> We need to find a way to configure MVCC caches. Currently this is a global 
> setting, which is not very convenient. Proposed solution:
>  # Introduce new {{CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT}}
>  # Do not allow to change cache this mode during restart (when persistence is 
> enabled)
>  # Do not allow transactions between {{TRANSACTIONAL}} and 
> {{TRANSACTIONAL_SNAPSHOT}} caches
>  # Add limitation to cache group - all caches within a group should have the 
> same mode
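
A minimal sketch of the configuration style proposed in the list above, assuming the new {{CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT}} value; cache and group names are illustrative:

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class MvccCacheConfigExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, String> mvccCfg = new CacheConfiguration<Integer, String>("mvccCache")
                .setGroupName("snapshotGroup")                                 // all caches in a group share the mode
                .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT); // per-cache MVCC instead of a global switch

            // Transactions spanning TRANSACTIONAL and TRANSACTIONAL_SNAPSHOT caches
            // are not allowed under the proposal above.
            ignite.getOrCreateCache(mvccCfg);
        }
    }
}
{code}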



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9538) MVCC TX: Send partition update counters to backup nodes on prepare state.

2018-09-11 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-9538:


 Summary: MVCC TX: Send partition update counters to backup nodes 
on prepare state.
 Key: IGNITE-9538
 URL: https://issues.apache.org/jira/browse/IGNITE-9538
 Project: Ignite
  Issue Type: Task
  Components: cache, mvcc
Reporter: Igor Seliverstov
Assignee: Igor Seliverstov
 Fix For: 2.7


There are several issues with partition update counter consistency in 
transactional caches. The following approach solves most of them:
 # count per-partition updates;
 # on prepare, on the primary node, increment the current partition counter by the 
per-partition updates count and send the initial value together with the updates 
count to the backup nodes;
 # on the backup nodes, hold all pending updates and advance the partition update 
counter by applying the lowest gapless update on tx finish (see the sketch below);
 # on historical rebalance, use the partition update counter as the start point.
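
An illustrative sketch (not Ignite code) of item 3: the backup keeps out-of-order counter ranges and advances the partition update counter only through gapless prefixes.

{code:java}
import java.util.Map;
import java.util.TreeMap;

/** Tracks per-partition update counter ranges received out of order (illustration only). */
class PartitionUpdateCounterSketch {
    /** Highest counter value applied without gaps. */
    private long applied;

    /** Pending ranges received out of order: start value -> updates count. */
    private final TreeMap<Long, Long> pending = new TreeMap<>();

    /** Called on tx finish with the initial value and updates count sent by the primary on prepare. */
    synchronized void onTxFinished(long start, long cnt) {
        pending.put(start, cnt);

        // Apply the lowest gapless ranges only; anything after a gap stays pending.
        for (Map.Entry<Long, Long> e = pending.firstEntry(); e != null && e.getKey() == applied;
            e = pending.firstEntry()) {
            applied += e.getValue();

            pending.pollFirstEntry();
        }
    }

    /** @return Current partition update counter, usable as a historical rebalance start point. */
    synchronized long get() {
        return applied;
    }
}
{code}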



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-4645) Best effort to avoid extra copying in binary marshaller

2017-02-13 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15864058#comment-15864058
 ] 

Igor Seliverstov commented on IGNITE-4645:
--

[~vozerov], answering your message:

1) It's done for us inside the {{BinaryContext#descriptorForClass}} method; it 
caches resolved class descriptors in the {{descByCls}} field.

All the other issues are fixed.

> Best effort to avoid extra copying in binary marshaller
> ---
>
> Key: IGNITE-4645
> URL: https://issues.apache.org/jira/browse/IGNITE-4645
> Project: Ignite
>  Issue Type: Bug
>  Components: binary
>Reporter: Yakov Zhdanov
>Assignee: Igor Seliverstov
> Fix For: 2.0
>
>
> If we marshal a class that contains only primitives then we can predict the 
> final byte array size and avoid copies to grow the array and final trimming.
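
A back-of-the-envelope sketch of the size-prediction idea quoted above; the field sizes are standard Java primitive widths, while the header length parameter is an assumption about the binary format:

{code:java}
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

final class PrimitiveSizePredictor {
    /**
     * @return Exact serialized size for a class with only primitive instance fields,
     *         or -1 if the size cannot be predicted up front.
     */
    static int predictedSize(Class<?> cls, int headerLen) {
        int size = headerLen; // headerLen stands in for the fixed binary object header

        for (Field f : cls.getDeclaredFields()) {
            if (Modifier.isStatic(f.getModifiers()))
                continue; // static fields are not serialized

            Class<?> t = f.getType();

            if (t == byte.class || t == boolean.class)
                size += 1;
            else if (t == short.class || t == char.class)
                size += 2;
            else if (t == int.class || t == float.class)
                size += 4;
            else if (t == long.class || t == double.class)
                size += 8;
            else
                return -1; // non-primitive field: the array may still need to grow and be trimmed
        }

        return size;
    }
}
{code}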



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-4645) Best effort to avoid extra copying in binary marshaller

2017-02-14 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15865679#comment-15865679
 ] 

Igor Seliverstov commented on IGNITE-4645:
--

[~vozerov], solved.

> Best effort to avoid extra copying in binary marshaller
> ---
>
> Key: IGNITE-4645
> URL: https://issues.apache.org/jira/browse/IGNITE-4645
> Project: Ignite
>  Issue Type: Bug
>  Components: binary
>Reporter: Yakov Zhdanov
>Assignee: Igor Seliverstov
> Fix For: 1.9
>
>
> If we marshal a class that contains only primitives then we can predict the 
> final byte array size and avoid copies to grow the array and final trimming.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-4695) Write primitive fields before during binary object marshalling

2017-02-14 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-4695:


 Summary: Write primitive fields before during binary object 
marshalling
 Key: IGNITE-4695
 URL: https://issues.apache.org/jira/browse/IGNITE-4695
 Project: Ignite
  Issue Type: Improvement
  Components: binary
Reporter: Igor Seliverstov


Currently, when serializing objects, we sort fields on the basis of their names. To 
provide better performance it makes sense to change this behavior.

The main idea is to enable streaming deserialization by putting primitive fields at 
the start of the resulting array, in a fixed order, at serialization time.

This way, all we need to do to deserialize the object is read its fields in the same 
order, without getting their positions from the footer.
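
An illustrative sketch of one possible ordering (the comparator is an assumption, not the actual marshaller code): primitives go first in a fixed type-then-name order so a reader can consume them sequentially without consulting the footer.

{code:java}
import java.lang.reflect.Field;
import java.util.Arrays;
import java.util.Comparator;

final class FieldOrdering {
    /** @return Fields in the proposed marshalling order: primitives first, in a fixed order. */
    static Field[] marshallingOrder(Class<?> cls) {
        Field[] fields = cls.getDeclaredFields();

        Arrays.sort(fields, Comparator
            .comparing((Field f) -> !f.getType().isPrimitive()) // primitives first
            .thenComparing(f -> f.getType().getName())          // fixed order: by type...
            .thenComparing(Field::getName));                    // ...then by name

        return fields;
    }
}
{code}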



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (IGNITE-4661) Optimizations: optimize PagesList.removeDataPage

2017-02-14 Thread Igor Seliverstov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-4661:
-
Attachment: Pagemem_benchmark_results.xlsx

> Optimizations: optimize PagesList.removeDataPage
> 
>
> Key: IGNITE-4661
> URL: https://issues.apache.org/jira/browse/IGNITE-4661
> Project: Ignite
>  Issue Type: Task
>  Components: cache
>Reporter: Semen Boikov
>Assignee: Igor Seliverstov
> Fix For: 2.0
>
> Attachments: Pagemem_benchmark_results.xlsx
>
>
> Optimization for new PageMemory approach (IGNITE-3477, branch ignite-3477).
> Currently PagesList.removeDataPage requires linear search by page ID, need 
> check if it makes sense to change structure of PagesList's element from list 
> to hash table.
> Here are links to the proposed hash table algorithm:
> http://codecapsule.com/2013/11/11/robin-hood-hashing
> http://codecapsule.com/2013/11/17/robin-hood-hashing-backward-shift-deletion/
> Note: with hash table approach 'take' from PagesList will require linear 
> search, so we'll also need some heuristic to make it more optimal.
> For more details see:
> IgniteCacheOffheapManagerImpl.update -> FreeListImpl.insertDataRow, 
> IgniteCacheOffheapManagerImpl.update -> FreeListImpl.removeDataRowByLink.
> To check result of optimization IgnitePutRandomValueSizeBenchmark can be used.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-4661) Optimizations: optimize PagesList.removeDataPage

2017-02-14 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15865828#comment-15865828
 ] 

Igor Seliverstov commented on IGNITE-4661:
--

I've added local benchmark results (with attempts to optimize dealing with 
stripes).

It seems that such optimization doesn't bring any improvements with the current page 
size (perhaps it will if we have longer pages with huge amounts of data page 
IDs).

> Optimizations: optimize PagesList.removeDataPage
> 
>
> Key: IGNITE-4661
> URL: https://issues.apache.org/jira/browse/IGNITE-4661
> Project: Ignite
>  Issue Type: Task
>  Components: cache
>Reporter: Semen Boikov
>Assignee: Igor Seliverstov
> Fix For: 2.0
>
> Attachments: Pagemem_benchmark_results.xlsx
>
>
> Optimization for new PageMemory approach (IGNITE-3477, branch ignite-3477).
> Currently PagesList.removeDataPage requires linear search by page ID, need 
> check if it makes sense to change structure of PagesList's element from list 
> to hash table.
> Here are links to the proposed hash table algorithm:
> http://codecapsule.com/2013/11/11/robin-hood-hashing
> http://codecapsule.com/2013/11/17/robin-hood-hashing-backward-shift-deletion/
> Note: with hash table approach 'take' from PagesList will require linear 
> search, so we'll also need some heuristic to make it more optimal.
> For more details see:
> IgniteCacheOffheapManagerImpl.update -> FreeListImpl.insertDataRow, 
> IgniteCacheOffheapManagerImpl.update -> FreeListImpl.removeDataRowByLink.
> To check result of optimization IgnitePutRandomValueSizeBenchmark can be used.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-4694) Add tests to check there are no memory leaks in PageMemory

2017-02-14 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15865835#comment-15865835
 ] 

Igor Seliverstov commented on IGNITE-4694:
--

Which branch is the base for the changes?

> Add tests to check there are no memory leaks in PageMemory
> --
>
> Key: IGNITE-4694
> URL: https://issues.apache.org/jira/browse/IGNITE-4694
> Project: Ignite
>  Issue Type: Task
>  Components: cache
>Reporter: Semen Boikov
>Assignee: Igor Seliverstov
> Fix For: 2.0
>
>
> Need to add several tests running for 5-10 minutes verifying there are no memory 
> leaks in new data structures based on PageMemory:
> - test various page size
> - test various objects size
> - test put/get/remove operations
> - test create cache/destroy cache operations
> - test cases when indexes enabled/disabled
> - test case when cache expiry policy is used
> New tests should be added in new test suite.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-4694) Add tests to check there are no memory leaks in PageMemory

2017-02-15 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15867589#comment-15867589
 ] 

Igor Seliverstov commented on IGNITE-4694:
--

- test create cache/destroy cache operations - *already in place* 
{{org.apache.ignite.internal.processors.database.IgniteDbDynamicCacheSelfTest}}

> Add tests to check there are no memory leaks in PageMemory
> --
>
> Key: IGNITE-4694
> URL: https://issues.apache.org/jira/browse/IGNITE-4694
> Project: Ignite
>  Issue Type: Task
>  Components: cache
>Reporter: Semen Boikov
>Assignee: Igor Seliverstov
> Fix For: 2.0
>
>
> Need to add several tests running for 5-10 minutes verifying there are no memory 
> leaks in new data structures based on PageMemory:
> - test various page size
> - test various objects size
> - test put/get/remove operations
> - test create cache/destroy cache operations
> - test cases when indexes enabled/disabled
> - test case when cache expiry policy is used
> New tests should be added in new test suite.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-4712) Memory leaks in PageMemory

2017-02-16 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-4712:


 Summary: Memory leaks in PageMemory
 Key: IGNITE-4712
 URL: https://issues.apache.org/jira/browse/IGNITE-4712
 Project: Ignite
  Issue Type: Bug
Reporter: Igor Seliverstov
 Fix For: 2.0


While performing put/get/remove operations on large objects (objects whose size is 
more than 1 KB), the number of allocated pages constantly grows.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (IGNITE-4694) Add tests to check there are no memory leaks in PageMemory

2017-02-16 Thread Igor Seliverstov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-4694:
-
Issue Type: Sub-task  (was: Task)
Parent: IGNITE-4712

> Add tests to check there are no memory leaks in PageMemory
> --
>
> Key: IGNITE-4694
> URL: https://issues.apache.org/jira/browse/IGNITE-4694
> Project: Ignite
>  Issue Type: Sub-task
>  Components: cache
>Reporter: Semen Boikov
>Assignee: Igor Seliverstov
> Fix For: 2.0
>
>
> Need to add several tests running for 5-10 minutes verifying there are no memory 
> leaks in new data structures based on PageMemory:
> - test various page size
> - test various objects size
> - test put/get/remove operations
> - test create cache/destroy cache operations
> - test cases when indexes enabled/disabled
> - test case when cache expiry policy is used
> New tests should be added in new test suite.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (IGNITE-4712) Memory leaks in PageMemory

2017-02-16 Thread Igor Seliverstov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov reassigned IGNITE-4712:


   Assignee: Igor Seliverstov
Component/s: cache

> Memory leaks in PageMemory
> --
>
> Key: IGNITE-4712
> URL: https://issues.apache.org/jira/browse/IGNITE-4712
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Reporter: Igor Seliverstov
>Assignee: Igor Seliverstov
> Fix For: 2.0
>
>
> While performing put/get/remove operations on large objects (objects whose size 
> is more than 1 KB), the number of allocated pages constantly grows.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-4694) Add tests to check there are no memory leaks in PageMemory

2017-02-16 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15870140#comment-15870140
 ] 

Igor Seliverstov commented on IGNITE-4694:
--

To write the test in a proper way, we first need to resolve the related issue.

> Add tests to check there are no memory leaks in PageMemory
> --
>
> Key: IGNITE-4694
> URL: https://issues.apache.org/jira/browse/IGNITE-4694
> Project: Ignite
>  Issue Type: Sub-task
>  Components: cache
>Reporter: Semen Boikov
>Assignee: Igor Seliverstov
> Fix For: 2.0
>
>
> Need to add several tests running for 5-10 minutes verifying there are no memory 
> leaks in new data structures based on PageMemory:
> - test various page size
> - test various objects size
> - test put/get/remove operations
> - test create cache/destroy cache operations
> - test cases when indexes enabled/disabled
> - test case when cache expiry policy is used
> New tests should be added in new test suite.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-4645) Best effort to avoid extra copying in binary marshaller

2017-02-16 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15870188#comment-15870188
 ] 

Igor Seliverstov commented on IGNITE-4645:
--

[~vozerov], inside the {{GridBinaryMarshaller#marshal}} method a 
{{BinaryWriterExImpl}} is created for each marshalled object; I see no way to 
reuse it except for the case where we marshal a complex object and use the writer 
to marshal the object's fields, but there we don't pass the descriptor.

Despite that, your approach looks better, so I updated the request according to 
your comment.

> Best effort to avoid extra copying in binary marshaller
> ---
>
> Key: IGNITE-4645
> URL: https://issues.apache.org/jira/browse/IGNITE-4645
> Project: Ignite
>  Issue Type: Bug
>  Components: binary
>Reporter: Yakov Zhdanov
>Assignee: Igor Seliverstov
> Fix For: 1.9
>
>
> If we marshal a class that contains only primitives then we can predict the 
> final byte array size and avoid copies to grow the array and final trimming.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (IGNITE-4694) Add tests to check there are no memory leaks in PageMemory

2017-02-21 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15875716#comment-15875716
 ] 

Igor Seliverstov edited comment on IGNITE-4694 at 2/21/17 10:03 AM:


I've created two test suits:
- {{org.apache.ignite.testsuites.IgniteDbMemoryLeakTestSuite}}
- {{org.apache.ignite.testsuites.IgniteDbMemoryLeakTestSuite}}

But I cannot add them into the TeamCity priject
Seems there is a lack of permissions


was (Author: gvvinblade):
I've created two test suits:
- {{org.apache.ignite.testsuites.IgniteDbMemoryLeakTestSuite}}
- {{org.apache.ignite.testsuites.IgniteDbMemoryLeakTestSuite}}

But I cannot add them into the TeamCity priject

> Add tests to check there are no memory leaks in PageMemory
> --
>
> Key: IGNITE-4694
> URL: https://issues.apache.org/jira/browse/IGNITE-4694
> Project: Ignite
>  Issue Type: Sub-task
>  Components: cache
>Reporter: Semen Boikov
>Assignee: Igor Seliverstov
> Fix For: 2.0
>
>
> Need to add several tests running for 5-10 minutes verifying there are no memory 
> leaks in new data structures based on PageMemory:
> - test various page size
> - test various objects size
> - test put/get/remove operations
> - test create cache/destroy cache operations
> - test cases when indexes enabled/disabled
> - test case when cache expiry policy is used
> New tests should be added in new test suite.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (IGNITE-4694) Add tests to check there are no memory leaks in PageMemory

2017-02-21 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15875716#comment-15875716
 ] 

Igor Seliverstov edited comment on IGNITE-4694 at 2/21/17 10:13 AM:


I've created two test suits:
- {{org.apache.ignite.testsuites.IgniteDbMemoryLeakTestSuite}}
- {{org.apache.ignite.testsuites.IgniteDbMemoryLeakTestSuite}}

But I cannot add them into the TeamCity project
Seems there is a lack of permissions


was (Author: gvvinblade):
I've created two test suits:
- {{org.apache.ignite.testsuites.IgniteDbMemoryLeakTestSuite}}
- {{org.apache.ignite.testsuites.IgniteDbMemoryLeakTestSuite}}

But I cannot add them into the TeamCity priject
Seems there is a lack of permissions

> Add tests to check there are no memory leaks in PageMemory
> --
>
> Key: IGNITE-4694
> URL: https://issues.apache.org/jira/browse/IGNITE-4694
> Project: Ignite
>  Issue Type: Sub-task
>  Components: cache
>Reporter: Semen Boikov
>Assignee: Igor Seliverstov
> Fix For: 2.0
>
>
> Need to add several tests running for 5-10 minutes verifying there are no memory 
> leaks in new data structures based on PageMemory:
> - test various page size
> - test various objects size
> - test put/get/remove operations
> - test create cache/destroy cache operations
> - test cases when indexes enabled/disabled
> - test case when cache expiry policy is used
> New tests should be added in new test suite.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-4694) Add tests to check there are no memory leaks in PageMemory

2017-03-01 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889943#comment-15889943
 ] 

Igor Seliverstov commented on IGNITE-4694:
--

Changes are placed with IGNITE-4712 request tohether
https://github.com/apache/ignite/pull/1559

> Add tests to check there are no memory leaks in PageMemory
> --
>
> Key: IGNITE-4694
> URL: https://issues.apache.org/jira/browse/IGNITE-4694
> Project: Ignite
>  Issue Type: Sub-task
>  Components: cache
>Reporter: Semen Boikov
>Assignee: Semen Boikov
> Fix For: 2.0
>
>
> Need to add several tests running for 5-10 minutes verifying there are no memory 
> leaks in new data structures based on PageMemory:
> - test various page size
> - test various objects size
> - test put/get/remove operations
> - test create cache/destroy cache operations
> - test cases when indexes enabled/disabled
> - test case when cache expiry policy is used
> New tests should be added in new test suite.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (IGNITE-4694) Add tests to check there are no memory leaks in PageMemory

2017-03-01 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889943#comment-15889943
 ] 

Igor Seliverstov edited comment on IGNITE-4694 at 3/1/17 10:38 AM:
---

Changes are placed with IGNITE-4712 request together
https://github.com/apache/ignite/pull/1559


was (Author: gvvinblade):
Changes are placed with IGNITE-4712 request tohether
https://github.com/apache/ignite/pull/1559

> Add tests to check there are no memory leaks in PageMemory
> --
>
> Key: IGNITE-4694
> URL: https://issues.apache.org/jira/browse/IGNITE-4694
> Project: Ignite
>  Issue Type: Sub-task
>  Components: cache
>Reporter: Semen Boikov
>Assignee: Semen Boikov
> Fix For: 2.0
>
>
> Need to add several tests running for 5-10 minutes verifying there are no memory 
> leaks in new data structures based on PageMemory:
> - test various page size
> - test various objects size
> - test put/get/remove operations
> - test create cache/destroy cache operations
> - test cases when indexes enabled/disabled
> - test case when cache expiry policy is used
> New tests should be added in new test suite.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (IGNITE-4681) Apply new future adapter

2017-03-02 Thread Igor Seliverstov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov reassigned IGNITE-4681:


Assignee: Igor Seliverstov  (was: Yakov Zhdanov)

> Apply new future adapter
> 
>
> Key: IGNITE-4681
> URL: https://issues.apache.org/jira/browse/IGNITE-4681
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Yakov Zhdanov
>Assignee: Igor Seliverstov
> Fix For: 2.0
>
> Attachments: GridFutAdapter over jdk8.java
>
>
> Attached is reference future adapter implementation. It is proven to consume 
> less memory and it does not require explicit locking on listen(). We need to 
> apply it.
> Known threats:
> # if future is completed normally, but with Throwable as result, get() throws 
> exception. This can be fixed with internal wrapper class
> # listener notification order changes - this is known to cause problems 
> org.apache.ignite.internal.processors.rest.ClientMemcachedProtocolSelfTest#testGetBulk
>  which is minor but still.
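
A minimal sketch of the two ideas mentioned above (this is not the attached GridFutAdapter): listeners are registered with a CAS loop so listen() needs no explicit locking, and the result is kept in an internal wrapper so a Throwable passed as a normal result is returned from get() rather than thrown. The reversed notification order illustrates the second known threat.
{noformat}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;

/** Lock-free future sketch: CAS-chained listeners, result kept in a wrapper. */
public class CasFuture<T> {
    /** Marks completion; wrapping the value distinguishes "Throwable as result" from "failure". */
    private static final class Result {
        final Object val;
        Result(Object val) { this.val = val; }
    }

    /** Immutable listener node for a Treiber-style stack. */
    private static final class Node {
        final Consumer<Object> lsnr;
        final Node next;
        Node(Consumer<Object> lsnr, Node next) { this.lsnr = lsnr; this.next = next; }
    }

    /** Either a (possibly null) Node chain of listeners or a Result once completed. */
    private final AtomicReference<Object> state = new AtomicReference<>();

    private final CountDownLatch done = new CountDownLatch(1);

    /** Registers a listener without taking a lock; runs it immediately if already completed. */
    public void listen(Consumer<Object> lsnr) {
        for (;;) {
            Object cur = state.get();
            if (cur instanceof Result) {
                lsnr.accept(((Result)cur).val);
                return;
            }
            if (state.compareAndSet(cur, new Node(lsnr, (Node)cur)))
                return;
        }
    }

    /** Completes the future once; listeners are notified in reverse registration order (known caveat). */
    public boolean complete(Object res) {
        for (;;) {
            Object cur = state.get();
            if (cur instanceof Result)
                return false;
            if (state.compareAndSet(cur, new Result(res))) {
                done.countDown();
                for (Node n = (Node)cur; n != null; n = n.next)
                    n.lsnr.accept(res);
                return true;
            }
        }
    }

    /** Blocks until completion; a Throwable passed as a normal result is returned, not thrown. */
    @SuppressWarnings("unchecked")
    public T get() throws InterruptedException {
        done.await();
        return (T)((Result)state.get()).val;
    }
}
{noformat}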



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (IGNITE-4683) Need to avoid extra-copy to byte array when marshalling to cache object (e.g. return ByteBuffer)

2017-03-02 Thread Igor Seliverstov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov reassigned IGNITE-4683:


Assignee: Igor Seliverstov  (was: Yakov Zhdanov)

> Need to avoid extra-copy to byte array when marshalling to cache object (e.g. 
> return ByteBuffer)
> 
>
> Key: IGNITE-4683
> URL: https://issues.apache.org/jira/browse/IGNITE-4683
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Yakov Zhdanov
>Assignee: Igor Seliverstov
> Fix For: 2.0
>
>
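
The ticket has no description, but the title suggests returning a ByteBuffer view over the marshaller's internal buffer instead of copying the marshalled bytes into a fresh byte[]. A generic sketch of that difference (not Ignite's actual marshaller API):
{noformat}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

/** Contrast between copying the marshalled bytes into a new byte[] and handing out a ByteBuffer view. */
public class NoCopyMarshal {
    private final ByteBuffer buf = ByteBuffer.allocate(4096); // reusable output buffer

    /** Copying variant: every call allocates a fresh array and copies the payload into it. */
    public byte[] marshalToArray(String val) {
        byte[] payload = val.getBytes(StandardCharsets.UTF_8);
        byte[] out = new byte[4 + payload.length];            // extra allocation + copy
        ByteBuffer.wrap(out).putInt(payload.length).put(payload);
        return out;
    }

    /** View variant: the caller gets a read-only window over the internal buffer, no extra copy. */
    public ByteBuffer marshalToBuffer(String val) {
        byte[] payload = val.getBytes(StandardCharsets.UTF_8);
        buf.clear();
        buf.putInt(payload.length).put(payload);
        buf.flip();
        return buf.asReadOnlyBuffer();                        // valid only until the buffer is reused
    }

    public static void main(String[] args) {
        NoCopyMarshal m = new NoCopyMarshal();
        System.out.println(m.marshalToArray("key").length);       // 7
        System.out.println(m.marshalToBuffer("key").remaining()); // 7
    }
}
{noformat}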




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-4681) Apply new future adapter

2017-03-03 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15894019#comment-15894019
 ] 

Igor Seliverstov commented on IGNITE-4681:
--

Some fixes made after the test run are included

> Apply new future adapter
> 
>
> Key: IGNITE-4681
> URL: https://issues.apache.org/jira/browse/IGNITE-4681
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Yakov Zhdanov
>Assignee: Igor Seliverstov
> Fix For: 2.0
>
> Attachments: GridFutAdapter over jdk8.java
>
>
> Attached is reference future adapter implementation. It is proven to consume 
> less memory and it does not require explicit locking on listen(). We need to 
> apply it.
> Known threats:
> # if future is completed normally, but with Throwable as result, get() throws 
> exception. This can be fixed with internal wrapper class
> # listener notification order changes - this is known to cause problems 
> org.apache.ignite.internal.processors.rest.ClientMemcachedProtocolSelfTest#testGetBulk
>  which is minor but still.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-4803) Create a test to check processing doesn't break when discovery cache history overflows

2017-03-09 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-4803:


 Summary: Create a test to check processing doesn't break when 
discovery cache history overflows
 Key: IGNITE-4803
 URL: https://issues.apache.org/jira/browse/IGNITE-4803
 Project: Ignite
  Issue Type: Test
Reporter: Igor Seliverstov
Assignee: Igor Seliverstov
 Fix For: 2.0






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-4811) Reduce GC pressure using page memory

2017-03-10 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-4811:


 Summary: Reduce GC pressure using page memory
 Key: IGNITE-4811
 URL: https://issues.apache.org/jira/browse/IGNITE-4811
 Project: Ignite
  Issue Type: Improvement
Reporter: Igor Seliverstov
Assignee: Igor Seliverstov
 Fix For: 2.0






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-4694) Add tests to check there are no memory leaks in PageMemory

2017-03-19 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15931760#comment-15931760
 ] 

Igor Seliverstov commented on IGNITE-4694:
--

[~sboikov],

I've checked the tests. The free lists work fine; there are no longer huge numbers of 
available pages alongside lots of allocations at the same time (as there were before), but 
more pages are now required than before (for example, 26000 pages for 
IgniteDbMemoryLeakIndexedTest against the previous 24000).

It definitely isn't an issue in the FreeListImpl logic: after I added more space 
to the cache, the test passed, and there are no significant allocations after the 
test has been running for a couple of minutes.

We should probably investigate more carefully why we need more pages than 
before.

> Add tests to check there are no memory leaks in PageMemory
> --
>
> Key: IGNITE-4694
> URL: https://issues.apache.org/jira/browse/IGNITE-4694
> Project: Ignite
>  Issue Type: Sub-task
>  Components: cache
>Reporter: Semen Boikov
>Assignee: Igor Seliverstov
> Fix For: 2.0
>
>
> Need to add several tests running for 5-10 minutes verifying there are no memory 
> leaks in new data structures based on PageMemory:
> - test various page size
> - test various objects size
> - test put/get/remove operations
> - test create cache/destroy cache operations
> - test cases when indexes enabled/disabled
> - test case when cache expiry policy is used
> New tests should be added in new test suite.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-4860) Empty leafs in BPlusTree

2017-03-24 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-4860:


 Summary: Empty leafs in BPlusTree
 Key: IGNITE-4860
 URL: https://issues.apache.org/jira/browse/IGNITE-4860
 Project: Ignite
  Issue Type: Bug
  Components: cache
Reporter: Igor Seliverstov
Assignee: Sergi Vladykin


{{org.apache.ignite.internal.processors.database.BPlusTreeSelfTest#testMassiveRemove2_true}}

The test fails approximately once out of 500 times with following error:
{noformat}
10:34:30,394][INFO ][main][root] >>> Stopping test: 
BPlusTreeSelfTest#testMassiveRemove2_true in 49 ms <<<
[10:34:30,392][ERROR][main][root] Test failed.
java.lang.AssertionError: Empty leaf page.
at 
org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.fail(BPlusTree.java:1214)
at 
org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.validateDownPages(BPlusTree.java:1285)
at 
org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.validateDownPages(BPlusTree.java:1311)
at 
org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.validateTree(BPlusTree.java:1102)
at 
org.apache.ignite.internal.processors.database.BPlusTreeSelfTest.doTestMassiveRemove(BPlusTreeSelfTest.java:853)
at 
org.apache.ignite.internal.processors.database.BPlusTreeSelfTest.testMassiveRemove2_true(BPlusTreeSelfTest.java:774)
at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at junit.framework.TestCase.runTest(TestCase.java:176)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:1811)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:118)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$4.run(GridAbstractTest.java:1726)
at java.lang.Thread.run(Thread.java:745)

java.lang.AssertionError: Empty leaf page.

at 
org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.fail(BPlusTree.java:1214)
at 
org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.validateDownPages(BPlusTree.java:1285)
at 
org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.validateDownPages(BPlusTree.java:1311)
at 
org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.validateTree(BPlusTree.java:1102)
at 
org.apache.ignite.internal.processors.database.BPlusTreeSelfTest.doTestMassiveRemove(BPlusTreeSelfTest.java:853)
at 
org.apache.ignite.internal.processors.database.BPlusTreeSelfTest.testMassiveRemove2_true(BPlusTreeSelfTest.java:774)
at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at junit.framework.TestCase.runTest(TestCase.java:176)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:1811)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:118)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$4.run(GridAbstractTest.java:1726)
at java.lang.Thread.run(Thread.java:745)
{noformat}

I've checked the test; it seems a race happens while deleting pages, which 
prevents the empty leaf from being merged properly.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-4888) An assertion error in TcpDiscoverySelfTest

2017-03-30 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-4888:


 Summary: An assertion error in TcpDiscoverySelfTest
 Key: IGNITE-4888
 URL: https://issues.apache.org/jira/browse/IGNITE-4888
 Project: Ignite
  Issue Type: Bug
Reporter: Igor Seliverstov
Assignee: Igor Seliverstov


The exception as shown below sometimes appears in output:

{noformat}
java.lang.AssertionError
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:735)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:503)
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1678)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-4930) Try to move free lists into a thread-local structure to avoid contention

2017-04-07 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-4930:


 Summary: Try to move free lists into a thread-local structure to 
avoid contention
 Key: IGNITE-4930
 URL: https://issues.apache.org/jira/browse/IGNITE-4930
 Project: Ignite
  Issue Type: Improvement
  Components: cache
Reporter: Igor Seliverstov






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (IGNITE-4930) Try to move free lists into a thread-local structure to avoid contention

2017-04-07 Thread Igor Seliverstov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov reassigned IGNITE-4930:


Assignee: Igor Seliverstov

> Try to move free lists into a thread-local structure to avoid contention
> ---
>
> Key: IGNITE-4930
> URL: https://issues.apache.org/jira/browse/IGNITE-4930
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Igor Seliverstov
>Assignee: Igor Seliverstov
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-4930) Try to move free lists into a thread-local structure to avoid contention

2017-04-07 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15960561#comment-15960561
 ] 

Igor Seliverstov commented on IGNITE-4930:
--

This possible solution gives roughly a 6-7% improvement in some benchmarks 
(put-random-size) that actively use free lists.
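
A generic sketch of the idea, assuming free pages are identified by long page IDs; this is not FreeListImpl itself. Each thread serves allocations from its own list and only touches the shared, contended list as a fallback or overflow:
{noformat}
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.ConcurrentLinkedDeque;

/** Sketch: each thread reuses page IDs from its own free list and falls back to a shared one. */
public class ThreadLocalFreeList {
    private final ConcurrentLinkedDeque<Long> shared = new ConcurrentLinkedDeque<>(); // contended fallback
    private final ThreadLocal<Deque<Long>> local = ThreadLocal.withInitial(ArrayDeque::new);

    /** Takes a free page without touching shared state in the common case. */
    public Long take() {
        Long pageId = local.get().pollFirst();
        return pageId != null ? pageId : shared.pollFirst();
    }

    /** Returns a page to the caller's own list; spills to the shared list when the local one grows too big. */
    public void release(long pageId) {
        Deque<Long> mine = local.get();
        if (mine.size() < 256)
            mine.addFirst(pageId);
        else
            shared.addFirst(pageId);
    }
}
{noformat}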

> Try to move free lists into a thread-local structure to avoid contention
> ---
>
> Key: IGNITE-4930
> URL: https://issues.apache.org/jira/browse/IGNITE-4930
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Igor Seliverstov
>Assignee: Igor Seliverstov
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-4930) Try to move free lists into a thread-local structure to avoid contention

2017-04-07 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15960564#comment-15960564
 ] 

Igor Seliverstov commented on IGNITE-4930:
--

[~sboikov] Could you look at it?

> Try to move free lists into a thread-local structure to avoid contention
> ---
>
> Key: IGNITE-4930
> URL: https://issues.apache.org/jira/browse/IGNITE-4930
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Igor Seliverstov
>Assignee: Igor Seliverstov
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (IGNITE-4930) Try to move free lists into a thread-local structure to avoid contention

2017-04-07 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15960561#comment-15960561
 ] 

Igor Seliverstov edited comment on IGNITE-4930 at 4/7/17 9:54 AM:
--

This possible solution gives roughly a 6-7% improvement in some benchmarks 
(put-random-size) that actively use free lists.

Used topology: 4 servers, 8 clients, full sync


was (Author: gvvinblade):
This possible solution gives roughly a 6-7% improvement in some benchmarks 
(put-random-size) that actively use free lists.

> Try to move free lists into a thread-local structure to avoid contention
> ---
>
> Key: IGNITE-4930
> URL: https://issues.apache.org/jira/browse/IGNITE-4930
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Igor Seliverstov
>Assignee: Igor Seliverstov
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (IGNITE-3488) Prohibit null as name in all the components (cache name first of all).

2017-04-10 Thread Igor Seliverstov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov reassigned IGNITE-3488:


Assignee: Igor Seliverstov

> Prohibit null as name in all the components (cache name first of all).
> --
>
> Key: IGNITE-3488
> URL: https://issues.apache.org/jira/browse/IGNITE-3488
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 1.8
>Reporter: Sergi Vladykin
>Assignee: Igor Seliverstov
>Priority: Critical
>  Labels: important
> Fix For: 2.0
>
>
> Need to create a list of all the affected components.
> 2.0 migration guide has to be updated: 
> https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.0+Migration+Guide



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-4946) GridCacheP2PUndeploySelfTest became failed

2017-04-11 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-4946:


 Summary: GridCacheP2PUndeploySelfTest became failed
 Key: IGNITE-4946
 URL: https://issues.apache.org/jira/browse/IGNITE-4946
 Project: Ignite
  Issue Type: Bug
Reporter: Igor Seliverstov
Assignee: Igor Seliverstov


There is a race between GridDhtPartitionsExchangeFuture, which cleans caches 
after undeployment, and GridDeploymentPerVersionStore, which performs the 
undeployment.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (IGNITE-3488) Prohibit null as name in all the components (cache name first of all).

2017-04-12 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15965617#comment-15965617
 ] 

Igor Seliverstov edited comment on IGNITE-3488 at 4/12/17 9:41 AM:
---

Temporarily on hold due to work on IGNITE-4946


was (Author: gvvinblade):
Temporary hold due to works on IGNITE-4946

> Prohibit null as name in all the components (cache name first of all).
> --
>
> Key: IGNITE-3488
> URL: https://issues.apache.org/jira/browse/IGNITE-3488
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 1.8
>Reporter: Sergi Vladykin
>Assignee: Igor Seliverstov
>Priority: Critical
>  Labels: important
> Fix For: 2.0
>
>
> Need to create a list of all the affected components.
> 2.0 migration guide has to be updated: 
> https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.0+Migration+Guide



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-3488) Prohibit null as name in all the components (cache name first of all).

2017-04-12 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15965617#comment-15965617
 ] 

Igor Seliverstov commented on IGNITE-3488:
--

Temporary hold due to works on IGNITE-4946

> Prohibit null as name in all the components (cache name first of all).
> --
>
> Key: IGNITE-3488
> URL: https://issues.apache.org/jira/browse/IGNITE-3488
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 1.8
>Reporter: Sergi Vladykin
>Assignee: Igor Seliverstov
>Priority: Critical
>  Labels: important
> Fix For: 2.0
>
>
> Need to create a list of all the affected components.
> 2.0 migration guide has to be updated: 
> https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.0+Migration+Guide



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-3488) Prohibit null as name in all the components (cache name first of all).

2017-04-14 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15968744#comment-15968744
 ] 

Igor Seliverstov commented on IGNITE-3488:
--

Cachges in ignite-core are ready
I hope I've fixed all places where null cache name is used in tests (about 600 
java classes), waiting for results from TC

> Prohibit null as name in all the components (cache name first of all).
> --
>
> Key: IGNITE-3488
> URL: https://issues.apache.org/jira/browse/IGNITE-3488
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 1.8
>Reporter: Sergi Vladykin
>Assignee: Igor Seliverstov
>Priority: Critical
>  Labels: important
> Fix For: 2.0
>
>
> Need to create a list of all the affected components.
> 2.0 migration guide has to be updated: 
> https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.0+Migration+Guide



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (IGNITE-3488) Prohibit null as name in all the components (cache name first of all).

2017-04-14 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15968744#comment-15968744
 ] 

Igor Seliverstov edited comment on IGNITE-3488 at 4/14/17 7:41 AM:
---

Changes in ignite-core are ready
I hope I've fixed all places where null cache name is used in tests (about 600 
java classes), waiting for results from TC


was (Author: gvvinblade):
Cachges in ignite-core are ready
I hope I've fixed all places where null cache name is used in tests (about 600 
java classes), waiting for results from TC

> Prohibit null as name in all the components (cache name first of all).
> --
>
> Key: IGNITE-3488
> URL: https://issues.apache.org/jira/browse/IGNITE-3488
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 1.8
>Reporter: Sergi Vladykin
>Assignee: Igor Seliverstov
>Priority: Critical
>  Labels: important
> Fix For: 2.0
>
>
> Need to create a list of all the affected components.
> 2.0 migration guide has to be updated: 
> https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.0+Migration+Guide



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-4694) Add tests to check there are no memory leaks in PageMemory

2017-04-14 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15968960#comment-15968960
 ] 

Igor Seliverstov commented on IGNITE-4694:
--

The required number of pages increased because of changes in H2TreeIndex. 
The index became segmented, so it requires a bit more space now (a tree 
per segment).

Fixed the tests.

> Add tests to check there are no memory leaks in PageMemory
> --
>
> Key: IGNITE-4694
> URL: https://issues.apache.org/jira/browse/IGNITE-4694
> Project: Ignite
>  Issue Type: Sub-task
>  Components: cache
>Reporter: Semen Boikov
>Assignee: Igor Seliverstov
> Fix For: 2.1
>
>
> Need to add several tests running for 5-10 minutes verifying there are no memory 
> leaks in new data structures based on PageMemory:
> - test various page size
> - test various objects size
> - test put/get/remove operations
> - test create cache/destroy cache operations
> - test cases when indexes enabled/disabled
> - test case when cache expiry policy is used
> New tests should be added in new test suite.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-3488) Prohibit null as name in all the components (cache name first of all).

2017-04-17 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15970871#comment-15970871
 ] 

Igor Seliverstov commented on IGNITE-3488:
--

Lots of tests failed, fixing each of them...

> Prohibit null as name in all the components (cache name first of all).
> --
>
> Key: IGNITE-3488
> URL: https://issues.apache.org/jira/browse/IGNITE-3488
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 1.8
>Reporter: Sergi Vladykin
>Assignee: Igor Seliverstov
>Priority: Critical
>  Labels: important
> Fix For: 2.0
>
>
> Need to create a list of all the affected components.
> 2.0 migration guide has to be updated: 
> https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.0+Migration+Guide



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-3488) Prohibit null as name in all the components (cache name first of all).

2017-04-17 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15971089#comment-15971089
 ] 

Igor Seliverstov commented on IGNITE-3488:
--

I've merged the branch with the current master (lots of conflicts) and rerun the 
tests after the changes; waiting for results.

> Prohibit null as name in all the components (cache name first of all).
> --
>
> Key: IGNITE-3488
> URL: https://issues.apache.org/jira/browse/IGNITE-3488
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 1.8
>Reporter: Sergi Vladykin
>Assignee: Igor Seliverstov
>Priority: Critical
>  Labels: important
> Fix For: 2.0
>
>
> Need to create a list of all the affected components.
> 2.0 migration guide has to be updated: 
> https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.0+Migration+Guide



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (IGNITE-3488) Prohibit null as name in all the components (cache name first of all).

2017-04-17 Thread Igor Seliverstov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-3488:
-
Description: 
Need to create a list of all the affected components.

2.0 migration guide has to be updated: 
https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.0+Migration+Guide

Check REST API and update documentation (cacheName becomes mandatory)

  was:
Need to create a list of all the affected components.

2.0 migration guide has to be updated: 
https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.0+Migration+Guide


> Prohibit null as name in all the components (cache name first of all).
> --
>
> Key: IGNITE-3488
> URL: https://issues.apache.org/jira/browse/IGNITE-3488
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 1.8
>Reporter: Sergi Vladykin
>Assignee: Igor Seliverstov
>Priority: Critical
>  Labels: important
> Fix For: 2.0
>
>
> Need to create a list of all the affected components.
> 2.0 migration guide has to be updated: 
> https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.0+Migration+Guide
> Check REST API and update documentation (cacheName becomes mandatory)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-3488) Prohibit null as name in all the components (cache name first of all).

2017-04-18 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15972309#comment-15972309
 ] 

Igor Seliverstov commented on IGNITE-3488:
--

Updated pull request, waiting for new test results

> Prohibit null as name in all the components (cache name first of all).
> --
>
> Key: IGNITE-3488
> URL: https://issues.apache.org/jira/browse/IGNITE-3488
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 1.8
>Reporter: Sergi Vladykin
>Assignee: Igor Seliverstov
>Priority: Critical
>  Labels: important
> Fix For: 2.0
>
>
> Need to create a list of all the affected components.
> 2.0 migration guide has to be updated: 
> https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.0+Migration+Guide
> Check REST API and update documentation (cacheName becomes mandatory)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (IGNITE-4695) Write primitive fields first during binary object marshalling

2017-04-18 Thread Igor Seliverstov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov resolved IGNITE-4695.
--
Resolution: Won't Fix

> Write primitive fields first during binary object marshalling
> --
>
> Key: IGNITE-4695
> URL: https://issues.apache.org/jira/browse/IGNITE-4695
> Project: Ignite
>  Issue Type: Improvement
>  Components: binary
>Reporter: Igor Seliverstov
> Fix For: 2.1
>
>
> Currently, when serializing objects, we sort fields by their names. To provide 
> better performance it makes sense to change this behavior.
> The main idea is to enable streaming deserialization by putting 
> primitive fields at the start of the result array, in a fixed order, at 
> serialization time. 
> This way, all we need to do to deserialize the object is read its 
> fields in the same order, without getting their positions from the footer.
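
A hand-written sketch of the proposed layout for a hypothetical object with fields (int id, long ts, String name): primitives are written first in a fixed order, so the reader simply consumes them in the same order without consulting a footer:
{noformat}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

/** Sketch of the proposed layout: primitives first, fixed order, no per-field offsets needed to read. */
public class PrimitivesFirstLayout {
    static ByteBuffer write(int id, long ts, String name) {
        byte[] nameBytes = name.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(4 + 8 + 4 + nameBytes.length);
        buf.putInt(id);               // primitives go first, in a fixed order...
        buf.putLong(ts);
        buf.putInt(nameBytes.length); // ...variable-length fields follow
        buf.put(nameBytes);
        buf.flip();
        return buf;
    }

    static void read(ByteBuffer buf) {
        int id = buf.getInt();        // streamed back in exactly the same order,
        long ts = buf.getLong();      // no field-position lookup required
        byte[] nameBytes = new byte[buf.getInt()];
        buf.get(nameBytes);
        System.out.println(id + " " + ts + " " + new String(nameBytes, StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        read(write(1, System.currentTimeMillis(), "sample"));
    }
}
{noformat}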



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-3488) Prohibit null as name in all the components (cache name first of all).

2017-04-21 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978354#comment-15978354
 ] 

Igor Seliverstov commented on IGNITE-3488:
--

Almost finished

There are a couple of important questions... It seems that we aren't able to 
set the cache used by the memcached and Redis protocols. It must be configurable 
somehow.

> Prohibit null as name in all the components (cache name first of all).
> --
>
> Key: IGNITE-3488
> URL: https://issues.apache.org/jira/browse/IGNITE-3488
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 1.8
>Reporter: Sergi Vladykin
>Assignee: Igor Seliverstov
>Priority: Critical
>  Labels: important
> Fix For: 2.0
>
>
> Need to create a list of all the affected components.
> 2.0 migration guide has to be updated: 
> https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.0+Migration+Guide
> Check REST API and update documentation (cacheName becomes mandatory)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-3488) Prohibit null as name in all the components (cache name first of all).

2017-04-25 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982568#comment-15982568
 ] 

Igor Seliverstov commented on IGNITE-3488:
--

[~ptupitsyn] Please take a look. There are quite a lot of failures in the .NET tests. 
I'd appreciate your help with that.

> Prohibit null as name in all the components (cache name first of all).
> --
>
> Key: IGNITE-3488
> URL: https://issues.apache.org/jira/browse/IGNITE-3488
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 1.8
>Reporter: Sergi Vladykin
>Assignee: Igor Seliverstov
>Priority: Critical
>  Labels: important
> Fix For: 2.0
>
>
> Need to create a list of all the affected components.
> 2.0 migration guide has to be updated: 
> https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.0+Migration+Guide
> Check REST API and update documentation (cacheName becomes mandatory)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-5134) Explicit cast to PageMemoryNoStoreImpl in IgniteCacheDatabaseSharedManager

2017-05-02 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-5134:


 Summary: Explicit cast to PageMemoryNoStoreImpl in 
IgniteCacheDatabaseSharedManager
 Key: IGNITE-5134
 URL: https://issues.apache.org/jira/browse/IGNITE-5134
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.0
Reporter: Igor Seliverstov


The method {{IgniteCacheDatabaseSharedManager#initMemory}} contains an explicit 
cast that makes it impossible to change the {{PageMemory}} implementation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-5175) Performance degradation using evictions in near-enabled caches

2017-05-05 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-5175:


 Summary: Performance degradation using evictions in near-enabled 
caches
 Key: IGNITE-5175
 URL: https://issues.apache.org/jira/browse/IGNITE-5175
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.0
Reporter: Igor Seliverstov
Assignee: Ivan Rakov


Both RandomLruNearEnabledPageEvictionMultinodeTest.testPageEviction and 
Random2LruNearEnabledPageEvictionMultinodeTest.testPageEviction fail with 
timeout exceptions. 

It seems that the execution time of the eviction operation depends non-linearly on 
the number of elements in the cache (the more elements, the longer the operation).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-4646) Try to unmarshall direct messages in striped pool

2017-05-11 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16006329#comment-16006329
 ] 

Igor Seliverstov commented on IGNITE-4646:
--

Here are numbers:
|| ||ignite-4646 rev: bbabcbb||master rev: 46ff66||delta||  
|*atomic-put*|85046.6|101168|{color:red}-18.96%{color}|

> Try to unmarshall direct messages in striped pool
> -
>
> Key: IGNITE-4646
> URL: https://issues.apache.org/jira/browse/IGNITE-4646
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Reporter: Yakov Zhdanov
>Assignee: Igor Seliverstov
> Fix For: 2.1
>
>
> During marshalling in NIO thread the following should be added to the write 
> buffer and sent to peer:
> 1. chunk size - 16 bits (this probably puts limitation on max write buffer 
> size of 64k or will require some changes to direct writer)
> 2. last  chunk - 1 bit
> 3. pool policy - 8 bits
> Here is the scheme to explain how this should work.
> {noformat}
> [chunk size] [pool policy] [partition] [last flag] [chunk data] X <-- no more 
> space in write buffer
> [next chunk size] [last flag] [chunk data] <<-- we write next chunk once some 
> space is available in write buffer, but we skip partition and policy flags 
> and maybe others that should be sent only once.
> ...
> ...
> [next chunk size] [last flag] [chunk data] <<-- last flag is true here
> {noformat}
> Examples
> Write buffer - 64k
> Message - 84k
> # sender reserves space for chunk size
> # reserves space for policy and last chunk flag
> # marshalls message to buffer while it has free space (64k - SPACE will be 
> written to buffer)
> # puts size and flags to reserved space in the beginning
> # sends buffer or part of it which makes some space available to further 
> writes
> # reserves space for next chunk size and flags
> # marshalls message to buffer while it has free space (lets assume the rest 
> of message fits)
> # puts size and last=true to the reserved space and sends
> Receiver:
> # reads chunk size, stores the target pool and partition
> # allocates heap buffer and copies chunk data to it from read buffer
> # once all message chunks are fully read message should be submitted to a 
> pool where it will be unmarshalled and processed
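
One possible packing of the header described above, shown as a hedged sketch rather than the actual protocol: the last-chunk flag is folded into the top bit of the 16-bit size field, and the pool policy and partition are written only with the first chunk:
{noformat}
import java.nio.ByteBuffer;

/** Sketch of the chunk framing: size + last flag, with pool policy and partition only in the first chunk. */
public class ChunkWriter {
    private static final int LAST_FLAG = 1 << 15; // sketch choice: steal the top bit of the size field

    /** Writes one chunk; the once-per-message fields (policy, partition) are emitted for the first chunk only. */
    static void writeChunk(ByteBuffer out, byte[] msg, int off, int len, boolean first, byte poolPolicy, int partition) {
        boolean last = off + len == msg.length;
        out.putShort((short)(len | (last ? LAST_FLAG : 0))); // chunk size + last-chunk flag
        if (first) {
            out.put(poolPolicy);                             // 8-bit pool policy, sent once
            out.putInt(partition);                           // partition, sent once
        }
        out.put(msg, off, len);                              // chunk payload
    }

    public static void main(String[] args) {
        byte[] msg = new byte[100];              // pretend a 100-byte message split into two chunks
        ByteBuffer wire = ByteBuffer.allocate(256);
        writeChunk(wire, msg, 0, 64, true, (byte)1, 7);
        writeChunk(wire, msg, 64, 36, false, (byte)1, 7);
        System.out.println("bytes on wire: " + wire.position());
    }
}
{noformat}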



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (IGNITE-4646) Try to unmarshall direct messages in striped pool

2017-05-11 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16006329#comment-16006329
 ] 

Igor Seliverstov edited comment on IGNITE-4646 at 5/11/17 12:27 PM:


[~yzhdanov] Here are numbers:
|| ||ignite-4646 rev: bbabcbb||master rev: 46ff66||delta||  
|*atomic-put*|85046.6|101168|{color:red}-18.96%{color}|


was (Author: gvvinblade):
[~yzhdanov]] Here are numbers:
|| ||ignite-4646 rev: bbabcbb||master rev: 46ff66||delta||  
|*atomic-put*|85046.6|101168|{color:red}-18.96%{color}|

> Try to unmarshall direct messages in striped pool
> -
>
> Key: IGNITE-4646
> URL: https://issues.apache.org/jira/browse/IGNITE-4646
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Reporter: Yakov Zhdanov
>Assignee: Igor Seliverstov
> Fix For: 2.1
>
>
> During marshalling in NIO thread the following should be added to the write 
> buffer and sent to peer:
> 1. chunk size - 16 bits (this probably puts limitation on max write buffer 
> size of 64k or will require some changes to direct writer)
> 2. last  chunk - 1 bit
> 3. pool policy - 8 bits
> Here is the scheme to explain how this should work.
> {noformat}
> [chunk size] [pool policy] [partition] [last flag] [chunk data] X <-- no more 
> space in write buffer
> [next chunk size] [last flag] [chunk data] <<-- we write next chunk once some 
> space is available in write buffer, but we skip partition and policy flags 
> and maybe others that should be sent only once.
> ...
> ...
> [next chunk size] [last flag] [chunk data] <<-- last flag is true here
> {noformat}
> Examples
> Write buffer - 64k
> Message - 84k
> # sender reserves space for chunk size
> # reserves space for policy and last chunk flag
> # marshalls message to buffer while it has free space (64k - SPACE will be 
> written to buffer)
> # puts size and flags to reserved space in the beginning
> # sends buffer or part of it which makes some space available to further 
> writes
> # reserves space for next chunk size and flags
> # marshalls message to buffer while it has free space (lets assume the rest 
> of message fits)
> # puts size and last=true to the reserved space and sends
> Receiver:
> # reads chunk size, stores the target pool and partition
> # allocates heap buffer and copies chunk data to it from read buffer
> # once all message chunks are fully read message should be submitted to a 
> pool where it will be unmarshalled and processed



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (IGNITE-4646) Try to unmarshall direct messages in striped pool

2017-05-11 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16006329#comment-16006329
 ] 

Igor Seliverstov edited comment on IGNITE-4646 at 5/11/17 12:27 PM:


[~yzhdanov]] Here are numbers:
|| ||ignite-4646 rev: bbabcbb||master rev: 46ff66||delta||  
|*atomic-put*|85046.6|101168|{color:red}-18.96%{color}|


was (Author: gvvinblade):
Here are numbers:
|| ||ignite-4646 rev: bbabcbb||master rev: 46ff66||delta||  
|*atomic-put*|85046.6|101168|{color:red}-18.96%{color}|

> Try to unmarshall direct messages in striped pool
> -
>
> Key: IGNITE-4646
> URL: https://issues.apache.org/jira/browse/IGNITE-4646
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Reporter: Yakov Zhdanov
>Assignee: Igor Seliverstov
> Fix For: 2.1
>
>
> During marshalling in NIO thread the following should be added to the write 
> buffer and sent to peer:
> 1. chunk size - 16 bits (this probably puts limitation on max write buffer 
> size of 64k or will require some changes to direct writer)
> 2. last  chunk - 1 bit
> 3. pool policy - 8 bits
> Here is the scheme to explain how this should work.
> {noformat}
> [chunk size] [pool policy] [partition] [last flag] [chunk data] X <-- no more 
> space in write buffer
> [next chunk size] [last flag] [chunk data] <<-- we write next chunk once some 
> space is available in write buffer, but we skip partition and policy flags 
> and maybe others that should be sent only once.
> ...
> ...
> [next chunk size] [last flag] [chunk data] <<-- last flag is true here
> {noformat}
> Examples
> Write buffer - 64k
> Message - 84k
> # sender reserves space for chunk size
> # reserves space for policy and last chunk flag
> # marshalls message to buffer while it has free space (64k - SPACE will be 
> written to buffer)
> # puts size and flags to reserved space in the beginning
> # sends buffer or part of it which makes some space available to further 
> writes
> # reserves space for next chunk size and flags
> # marshalls message to buffer while it has free space (lets assume the rest 
> of message fits)
> # puts size and last=true to the reserved space and sends
> Receiver:
> # reads chunk size, stores the target pool and partition
> # allocates heap buffer and copies chunk data to it from read buffer
> # once all message chunks are fully read message should be submitted to a 
> pool where it will be unmarshalled and processed



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (IGNITE-4646) Try to unmarshall direct messages in striped pool

2017-05-11 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16006329#comment-16006329
 ] 

Igor Seliverstov edited comment on IGNITE-4646 at 5/11/17 12:27 PM:


[~yzhdanov], Here are numbers:
|| ||ignite-4646 rev: bbabcbb||master rev: 46ff66||delta||  
|*atomic-put*|85046.6|101168|{color:red}-18.96%{color}|


was (Author: gvvinblade):
[~yzhdanov] Here are numbers:
|| ||ignite-4646 rev: bbabcbb||master rev: 46ff66||delta||  
|*atomic-put*|85046.6|101168|{color:red}-18.96%{color}|

> Try to unmarshall direct messages in striped pool
> -
>
> Key: IGNITE-4646
> URL: https://issues.apache.org/jira/browse/IGNITE-4646
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Reporter: Yakov Zhdanov
>Assignee: Igor Seliverstov
> Fix For: 2.1
>
>
> During marshalling in NIO thread the following should be added to the write 
> buffer and sent to peer:
> 1. chunk size - 16 bits (this probably puts limitation on max write buffer 
> size of 64k or will require some changes to direct writer)
> 2. last  chunk - 1 bit
> 3. pool policy - 8 bits
> Here is the scheme to explain how this should work.
> {noformat}
> [chunk size] [pool policy] [partition] [last flag] [chunk data] X <-- no more 
> space in write buffer
> [next chunk size] [last flag] [chunk data] <<-- we write next chunk once some 
> space is available in write buffer, but we skip partition and policy flags 
> and maybe others that should be sent only once.
> ...
> ...
> [next chunk size] [last flag] [chunk data] <<-- last flag is true here
> {noformat}
> Examples
> Write buffer - 64k
> Message - 84k
> # sender reserves space for chunk size
> # reserves space for policy and last chunk flag
> # marshalls message to buffer while it has free space (64k - SPACE will be 
> written to buffer)
> # puts size and flags to reserved space in the beginning
> # sends buffer or part of it which makes some space available to further 
> writes
> # reserves space for next chunk size and flags
> # marshalls message to buffer while it has free space (lets assume the rest 
> of message fits)
> # puts size and last=true to the reserved space and sends
> Receiver:
> # reads chunk size, stores the target pool and partition
> # allocates heap buffer and copies chunk data to it from read buffer
> # once all message chunks are fully read message should be submitted to a 
> pool where it will be unmarshalled and processed



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-5301) JVM crushes on H2TreeIndex destroy

2017-05-26 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-5301:


 Summary: JVM crushes on H2TreeIndex destroy
 Key: IGNITE-5301
 URL: https://issues.apache.org/jira/browse/IGNITE-5301
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.1
Reporter: Igor Seliverstov
 Attachments: hs_err_pid9664.log





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (IGNITE-5301) JVM crushes on H2TreeIndex destroy

2017-05-26 Thread Igor Seliverstov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-5301:
-
Attachment: hs_err_pid9664.log

> JVM crushes on H2TreeIndex destroy
> --
>
> Key: IGNITE-5301
> URL: https://issues.apache.org/jira/browse/IGNITE-5301
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.1
>Reporter: Igor Seliverstov
> Attachments: hs_err_pid9664.log
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (IGNITE-5301) JVM crashes on H2TreeIndex destroy

2017-05-26 Thread Igor Seliverstov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-5301:
-
Description: 
There is a bug in destroy method because of which 
{noformat}cctx.offheap().dropRootPageForIndex(idxName){noformat} method 
actually does nothing. It happens because idx names on create RootPage and 
destroy it are different (unlike creation a root page, a segment suffix isn't 
added to tree name on destroy, so that it can't delete the page from metastore 
by different key).

After fixing this behavior I faced JVM crash. 

I'm quite not familiar with the code, but I suppose something is wrong in 
MetaStoreInnerIO logic.

Crash report is attached.

How to reproduce:

just create and destroy a cache with indexed types and enabled PDS feature 
after the fix I provided above.


  was:See the attached log.


> JVM crashes on H2TreeIndex destroy
> --
>
> Key: IGNITE-5301
> URL: https://issues.apache.org/jira/browse/IGNITE-5301
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.1
>Reporter: Igor Seliverstov
> Attachments: hs_err_pid9664.log
>
>
> There is a bug in destroy method because of which 
> {noformat}cctx.offheap().dropRootPageForIndex(idxName){noformat} method 
> actually does nothing. It happens because idx names on create RootPage and 
> destroy it are different (unlike creation a root page, a segment suffix isn't 
> added to tree name on destroy, so that it can't delete the page from 
> metastore by different key).
> After fixing this behavior I faced JVM crash. 
> I'm quite not familiar with the code, but I suppose something is wrong in 
> MetaStoreInnerIO logic.
> Crash report is attached.
> How to reproduce:
> just create and destroy a cache with indexed types and enabled PDS feature 
> after the fix I provided above.
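
A simplified model of the bug (plain Java maps, a hypothetical "%segment" suffix format): the root page is registered under the per-segment tree name on create, so a destroy that looks up the bare index name removes nothing, while rebuilding the same key works:
{noformat}
import java.util.HashMap;
import java.util.Map;

/** Simplified model: the root page is stored under "idxName%segment" but dropped by "idxName". */
public class TreeNameMismatch {
    private final Map<String, Long> metastore = new HashMap<>(); // tree name -> root page id

    /** Mirrors index creation: one tree (and one metastore entry) per segment. */
    void createIndex(String idxName, int segments) {
        for (int seg = 0; seg < segments; seg++)
            metastore.put(treeName(idxName, seg), (long)(1000 + seg));
    }

    /** Buggy destroy: looks up the plain index name, so nothing is ever removed. */
    boolean dropRootPageBuggy(String idxName) {
        return metastore.remove(idxName) != null;
    }

    /** Fixed destroy: rebuilds the same per-segment key that was used on create. */
    boolean dropRootPageFixed(String idxName, int segments) {
        boolean removed = false;
        for (int seg = 0; seg < segments; seg++)
            removed |= metastore.remove(treeName(idxName, seg)) != null;
        return removed;
    }

    private static String treeName(String idxName, int seg) {
        return idxName + "%" + seg; // hypothetical suffix format, for illustration only
    }

    public static void main(String[] args) {
        TreeNameMismatch m = new TreeNameMismatch();
        m.createIndex("PERSON_NAME_IDX", 2);
        System.out.println("buggy drop removed anything: " + m.dropRootPageBuggy("PERSON_NAME_IDX"));    // false
        System.out.println("fixed drop removed anything: " + m.dropRootPageFixed("PERSON_NAME_IDX", 2)); // true
    }
}
{noformat}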



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (IGNITE-5301) JVM crashes on H2TreeIndex destroy

2017-05-26 Thread Igor Seliverstov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-5301:
-
Description: 
There is a bug in destroy method because of which 
{noformat}cctx.offheap().dropRootPageForIndex(idxName){noformat} method 
actually does nothing. It happens because idx names on create RootPage and 
destroy it are different (unlike creation a root page, a segment suffix isn't 
added to tree name on destroy, so that it can't delete the page from metastore 
by different key).

After fixing this behavior I faced JVM crash. 

I'm quite not familiar with the code, but I suppose something is wrong in 
MetaStoreInnerIO logic.

Crash report is attached.

How to reproduce:

just create and destroy a cache with indexed types and enabled PDS feature 
after the fix I provided above is applied.


  was:
There is a bug in destroy method because of which 
{noformat}cctx.offheap().dropRootPageForIndex(idxName){noformat} method 
actually does nothing. It happens because idx names on create RootPage and 
destroy it are different (unlike creation a root page, a segment suffix isn't 
added to tree name on destroy, so that it can't delete the page from metastore 
by different key).

After fixing this behavior I faced JVM crash. 

I'm quite not familiar with the code, but I suppose something is wrong in 
MetaStoreInnerIO logic.

Crash report is attached.

How to reproduce:

just create and destroy a cache with indexed types and enabled PDS feature 
after the fix I provided above.



> JVM crashes on H2TreeIndex destroy
> --
>
> Key: IGNITE-5301
> URL: https://issues.apache.org/jira/browse/IGNITE-5301
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.1
>Reporter: Igor Seliverstov
> Attachments: hs_err_pid9664.log
>
>
> There is a bug in destroy method because of which 
> {noformat}cctx.offheap().dropRootPageForIndex(idxName){noformat} method 
> actually does nothing. It happens because idx names on create RootPage and 
> destroy it are different (unlike creation a root page, a segment suffix isn't 
> added to tree name on destroy, so that it can't delete the page from 
> metastore by different key).
> After fixing this behavior I faced JVM crash. 
> I'm quite not familiar with the code, but I suppose something is wrong in 
> MetaStoreInnerIO logic.
> Crash report is attached.
> How to reproduce:
> just create and destroy a cache with indexed types and enabled PDS feature 
> after the fix I provided above is applied.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-5370) IgniteSet misses its size after cluster restart with enabled PDS feature

2017-06-01 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-5370:


 Summary: IgniteSet misses its size after cluster restart with 
enabled PDS feature
 Key: IGNITE-5370
 URL: https://issues.apache.org/jira/browse/IGNITE-5370
 Project: Ignite
  Issue Type: Bug
  Components: data structures
Affects Versions: 2.0
Reporter: Igor Seliverstov


{{CacheDataStructuresManager}} tracks {{IgniteSet}} updates using an on-heap 
map ({{CacheDataStructuresManager#setDataMap}}), which is used for several 
operations (including {{IgniteSet.size()}}) and isn't restored after cluster 
restart.
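
A simplified model of the problem using plain Java collections (not the real CacheDataStructuresManager): size() is served from an on-heap structure that is repopulated only by add/remove calls, so after a restart it reports 0 even though the backing data survived:
{noformat}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Simplified model: size() reads an on-heap map that is not rebuilt from the persisted data. */
public class SetSizeAfterRestart {
    static class SetManager {
        final Map<Integer, Boolean> persisted;       // stands in for the cache contents on disk
        final Set<Integer> onHeap = new HashSet<>(); // stands in for setDataMap; lost on restart

        SetManager(Map<Integer, Boolean> persisted) { this.persisted = persisted; }

        void add(int v) { persisted.put(v, true); onHeap.add(v); }

        int size() { return onHeap.size(); }         // only consults the on-heap structure
    }

    public static void main(String[] args) {
        Map<Integer, Boolean> disk = new HashMap<>();
        SetManager before = new SetManager(disk);
        before.add(1);
        before.add(2);
        System.out.println("before restart: " + before.size()); // 2

        SetManager after = new SetManager(disk);                // "restart": on-heap map starts empty
        System.out.println("after restart: " + after.size());   // 0, although the data survived
    }
}
{noformat}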



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (IGNITE-5196) Concurrent modification in .GridDiscoveryManager.nodeCaches

2017-06-05 Thread Igor Seliverstov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov reassigned IGNITE-5196:


Assignee: Igor Seliverstov  (was: Semen Boikov)

> Concurrent modification in .GridDiscoveryManager.nodeCaches
> ---
>
> Key: IGNITE-5196
> URL: https://issues.apache.org/jira/browse/IGNITE-5196
> Project: Ignite
>  Issue Type: Bug
>Reporter: Yakov Zhdanov
>Assignee: Igor Seliverstov
> Fix For: 2.1
>
>
> {noformat}
> ./grid149.tar.gz:org.apache.ignite.IgniteCheckedException: null
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7281) 
> ~[ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.processors.rest.GridRestProcessor$2.body(GridRestProcessor.java:171)
>  [ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
> [ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_121]
> ./grid149.tar.gz:   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_121]
> ./grid149.tar.gz:   at java.lang.Thread.run(Thread.java:745) 
> [na:1.8.0_121]
> ./grid149.tar.gz:Caused by: java.util.ConcurrentModificationException: null
> ./grid149.tar.gz:   at 
> java.util.HashMap$HashIterator.nextNode(HashMap.java:1437) ~[na:1.8.0_121]
> ./grid149.tar.gz:   at 
> java.util.HashMap$EntryIterator.next(HashMap.java:1471) ~[na:1.8.0_121]
> ./grid149.tar.gz:   at 
> java.util.HashMap$EntryIterator.next(HashMap.java:1469) ~[na:1.8.0_121]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.nodeCaches(GridDiscoveryManager.java:1733)
>  ~[ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.processors.rest.handlers.top.GridTopologyCommandHandler.createNodeBean(GridTopologyCommandHandler.java:219)
>  ~[ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHO
> T]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.processors.rest.handlers.top.GridTopologyCommandHandler.handleAsync(GridTopologyCommandHandler.java:109)
>  ~[ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.processors.rest.GridRestProcessor.handleRequest(GridRestProcessor.java:265)
>  ~[ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.processors.rest.GridRestProcessor.access$100(GridRestProcessor.java:88)
>  ~[ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.processors.rest.GridRestProcessor$2.body(GridRestProcessor.java:154)
>  [ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   ... 4 common frames omitted
> {noformat}
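
A generic illustration of the failure mode in the trace above (iterating a plain HashMap while another thread modifies it) and two common ways to avoid it; this is not the actual Ignite fix:
{noformat}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Two common ways to iterate a map that another thread may modify without ConcurrentModificationException. */
public class ConcurrentIterationDemo {
    public static void main(String[] args) {
        Map<String, Integer> nodeCaches = new HashMap<>();
        nodeCaches.put("cacheA", 1);

        // Option 1: iterate over a defensive copy taken under the same lock that guards writers.
        Map<String, Integer> snapshot;
        synchronized (nodeCaches) {
            snapshot = new HashMap<>(nodeCaches);
        }
        for (Map.Entry<String, Integer> e : snapshot.entrySet())
            System.out.println("snapshot: " + e.getKey());

        // Option 2: keep the data in a ConcurrentHashMap, whose iterators are weakly
        // consistent and never throw ConcurrentModificationException.
        Map<String, Integer> concurrent = new ConcurrentHashMap<>(nodeCaches);
        for (Map.Entry<String, Integer> e : concurrent.entrySet())
            System.out.println("concurrent: " + e.getKey());
    }
}
{noformat}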



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-5196) Concurrent modification in .GridDiscoveryManager.nodeCaches

2017-06-06 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038863#comment-16038863
 ] 

Igor Seliverstov commented on IGNITE-5196:
--

[~sboikov], I have just added the tests and updated the request.

> Concurrent modification in .GridDiscoveryManager.nodeCaches
> ---
>
> Key: IGNITE-5196
> URL: https://issues.apache.org/jira/browse/IGNITE-5196
> Project: Ignite
>  Issue Type: Bug
>Reporter: Yakov Zhdanov
>Assignee: Igor Seliverstov
> Fix For: 2.1
>
>
> {noformat}
> ./grid149.tar.gz:org.apache.ignite.IgniteCheckedException: null
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7281) 
> ~[ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.processors.rest.GridRestProcessor$2.body(GridRestProcessor.java:171)
>  [ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
> [ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_121]
> ./grid149.tar.gz:   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_121]
> ./grid149.tar.gz:   at java.lang.Thread.run(Thread.java:745) 
> [na:1.8.0_121]
> ./grid149.tar.gz:Caused by: java.util.ConcurrentModificationException: null
> ./grid149.tar.gz:   at 
> java.util.HashMap$HashIterator.nextNode(HashMap.java:1437) ~[na:1.8.0_121]
> ./grid149.tar.gz:   at 
> java.util.HashMap$EntryIterator.next(HashMap.java:1471) ~[na:1.8.0_121]
> ./grid149.tar.gz:   at 
> java.util.HashMap$EntryIterator.next(HashMap.java:1469) ~[na:1.8.0_121]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.nodeCaches(GridDiscoveryManager.java:1733)
>  ~[ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.processors.rest.handlers.top.GridTopologyCommandHandler.createNodeBean(GridTopologyCommandHandler.java:219)
>  ~[ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHO
> T]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.processors.rest.handlers.top.GridTopologyCommandHandler.handleAsync(GridTopologyCommandHandler.java:109)
>  ~[ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.processors.rest.GridRestProcessor.handleRequest(GridRestProcessor.java:265)
>  ~[ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.processors.rest.GridRestProcessor.access$100(GridRestProcessor.java:88)
>  ~[ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.processors.rest.GridRestProcessor$2.body(GridRestProcessor.java:154)
>  [ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   ... 4 common frames omitted
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (IGNITE-5155) Need to improve stats dump on exchange timeout

2017-06-06 Thread Igor Seliverstov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov reassigned IGNITE-5155:


Assignee: Igor Seliverstov  (was: Semen Boikov)

> Need to improve stats dump on exchange timeout
> --
>
> Key: IGNITE-5155
> URL: https://issues.apache.org/jira/browse/IGNITE-5155
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Yakov Zhdanov
>Assignee: Igor Seliverstov
> Fix For: 2.1
>
>
> Currently, on large topologies the info dumped on "Failed to wait for partition 
> map exchange" 
> (org/apache/ignite/internal/processors/cache/GridCachePartitionExchangeManager.java:1713)
> floods the log, and the amount of dumped information needs to be reduced.
> 1. Reduce output for exchange futures that are already done. Keep the event, 
> topology version, server count and client count (more?) - a minimal dump sketch 
> follows below.
> 2. Do not dump the whole communication stats; instead send a message to the 
> exchange coordinator asking for its status, the number of messages received and 
> the messages acked from the local node.
> 3. We can also think of sending a new message from a cache node to the 
> coordinator signalling a possible problem on that node (e.g. unreleased tx locks 
> or still renting partitions); the coordinator may include this info in its 
> status, so every Ignite node can point to the problem node in the logs.
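
For item 1, a minimal illustration of what a compact dump for completed exchange futures could look like. The ExchangeInfo type and its fields are hypothetical, not actual Ignite classes; this is a sketch of the idea, not the real logging code.

{code}
import java.util.List;

/** Illustrative sketch: one short line per completed exchange, full dump only for pending ones. */
public class ExchangeDumpSketch {
    /** Minimal data we would keep for a finished exchange (hypothetical type). */
    static final class ExchangeInfo {
        final String evt;    // Discovery event that triggered the exchange.
        final long topVer;   // Topology version.
        final int servers;   // Server node count.
        final int clients;   // Client node count.
        final boolean done;  // Whether the exchange future is completed.

        ExchangeInfo(String evt, long topVer, int servers, int clients, boolean done) {
            this.evt = evt;
            this.topVer = topVer;
            this.servers = servers;
            this.clients = clients;
            this.done = done;
        }
    }

    /** Dumps a compact one-liner for done exchanges, leaving full state only for pending ones. */
    static void dump(List<ExchangeInfo> futs) {
        for (ExchangeInfo f : futs) {
            if (f.done)
                System.out.println("Done exchange [evt=" + f.evt + ", topVer=" + f.topVer +
                    ", srvs=" + f.servers + ", clients=" + f.clients + ']');
            else
                System.out.println("PENDING exchange [evt=" + f.evt + ", topVer=" + f.topVer +
                    ", srvs=" + f.servers + ", clients=" + f.clients + "] <- dump full state here");
        }
    }

    public static void main(String[] args) {
        dump(List.of(
            new ExchangeInfo("NODE_JOINED", 41, 16, 120, true),
            new ExchangeInfo("NODE_FAILED", 42, 15, 120, false)));
    }
}
{code}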



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (IGNITE-5196) Concurrent modification in .GridDiscoveryManager.nodeCaches

2017-06-07 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16040428#comment-16040428
 ] 

Igor Seliverstov commented on IGNITE-5196:
--

[~sboikov], I've fixed the issue. 

While checking each test separately, I forgot to run all of them together.

Master is merged, and the tests pass (checked twice).

> Concurrent modification in .GridDiscoveryManager.nodeCaches
> ---
>
> Key: IGNITE-5196
> URL: https://issues.apache.org/jira/browse/IGNITE-5196
> Project: Ignite
>  Issue Type: Bug
>Reporter: Yakov Zhdanov
>Assignee: Igor Seliverstov
> Fix For: 2.1
>
>
> {noformat}
> ./grid149.tar.gz:org.apache.ignite.IgniteCheckedException: null
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7281) 
> ~[ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.processors.rest.GridRestProcessor$2.body(GridRestProcessor.java:171)
>  [ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
> [ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_121]
> ./grid149.tar.gz:   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_121]
> ./grid149.tar.gz:   at java.lang.Thread.run(Thread.java:745) 
> [na:1.8.0_121]
> ./grid149.tar.gz:Caused by: java.util.ConcurrentModificationException: null
> ./grid149.tar.gz:   at 
> java.util.HashMap$HashIterator.nextNode(HashMap.java:1437) ~[na:1.8.0_121]
> ./grid149.tar.gz:   at 
> java.util.HashMap$EntryIterator.next(HashMap.java:1471) ~[na:1.8.0_121]
> ./grid149.tar.gz:   at 
> java.util.HashMap$EntryIterator.next(HashMap.java:1469) ~[na:1.8.0_121]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.nodeCaches(GridDiscoveryManager.java:1733)
>  ~[ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.processors.rest.handlers.top.GridTopologyCommandHandler.createNodeBean(GridTopologyCommandHandler.java:219)
>  ~[ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHO
> T]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.processors.rest.handlers.top.GridTopologyCommandHandler.handleAsync(GridTopologyCommandHandler.java:109)
>  ~[ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.processors.rest.GridRestProcessor.handleRequest(GridRestProcessor.java:265)
>  ~[ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.processors.rest.GridRestProcessor.access$100(GridRestProcessor.java:88)
>  ~[ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   at 
> org.apache.ignite.internal.processors.rest.GridRestProcessor$2.body(GridRestProcessor.java:154)
>  [ignite-core-1.10.3.ea6.jar:2.0.0-SNAPSHOT]
> ./grid149.tar.gz:   ... 4 common frames omitted
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-5523) GridCachePreloader.onInitialExchangeComplete() is not called in certain cases

2017-06-16 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-5523:


 Summary: GridCachePreloader.onInitialExchangeComplete() is not 
called in certain cases
 Key: IGNITE-5523
 URL: https://issues.apache.org/jira/browse/IGNITE-5523
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.1
Reporter: Igor Seliverstov
Assignee: Igor Seliverstov
 Fix For: 2.1






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-5498) Failures in GridNioSslSelfTest

2017-06-20 Thread Igor Seliverstov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov reassigned IGNITE-5498:


Assignee: Igor Seliverstov  (was: Semen Boikov)

> Failures in GridNioSslSelfTest
> --
>
> Key: IGNITE-5498
> URL: https://issues.apache.org/jira/browse/IGNITE-5498
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Reporter: Vladimir Ozerov
>Assignee: Igor Seliverstov
> Fix For: 2.1
>
>
> Affected tests:
> {{GridNioSslSelfTest.testConcurrentConnects}}
> {{GridNioSslSelfTest.testSimpleMessages}}
> {code}
> junit.framework.AssertionFailedError: Unexpected exception occurred while 
> handling connection: class 
> org.apache.ignite.internal.util.nio.GridNioException: An established 
> connection was aborted by the software in your host machine
> at junit.framework.Assert.fail(Assert.java:57)
> at junit.framework.TestCase.fail(TestCase.java:227)
> at 
> org.apache.ignite.internal.util.nio.GridNioSelfTest$EchoListener.onDisconnected(GridNioSelfTest.java:1361)
> at 
> org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onExceptionCaught(GridNioFilterChain.java:261)
> at 
> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedExceptionCaught(GridNioFilterAdapter.java:102)
> at 
> org.apache.ignite.internal.util.nio.GridNioCodecFilter.onExceptionCaught(GridNioCodecFilter.java:80)
> at 
> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedExceptionCaught(GridNioFilterAdapter.java:102)
> at 
> org.apache.ignite.internal.util.nio.ssl.GridNioSslFilter.onExceptionCaught(GridNioSslFilter.java:241)
> at 
> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedExceptionCaught(GridNioFilterAdapter.java:102)
> at 
> org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onExceptionCaught(GridNioServer.java:3188)
> at 
> org.apache.ignite.internal.util.nio.GridNioFilterChain.onExceptionCaught(GridNioFilterChain.java:160)
> at 
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.close(GridNioServer.java:2437)
> at 
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2199)
> at 
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:1968)
> at 
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1669)
> at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-5498) Failures in GridNioSslSelfTest

2017-06-20 Thread Igor Seliverstov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16055532#comment-16055532
 ] 

Igor Seliverstov commented on IGNITE-5498:
--

Cannot reproduce on either Windows or Linux; is there a link to the TeamCity suite?

> Failures in GridNioSslSelfTest
> --
>
> Key: IGNITE-5498
> URL: https://issues.apache.org/jira/browse/IGNITE-5498
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Reporter: Vladimir Ozerov
>Assignee: Igor Seliverstov
> Fix For: 2.1
>
>
> Affected tests:
> {{GridNioSslSelfTest.testConcurrentConnects}}
> {{GridNioSslSelfTest.testSimpleMessages}}
> {code}
> junit.framework.AssertionFailedError: Unexpected exception occurred while 
> handling connection: class 
> org.apache.ignite.internal.util.nio.GridNioException: An established 
> connection was aborted by the software in your host machine
> at junit.framework.Assert.fail(Assert.java:57)
> at junit.framework.TestCase.fail(TestCase.java:227)
> at 
> org.apache.ignite.internal.util.nio.GridNioSelfTest$EchoListener.onDisconnected(GridNioSelfTest.java:1361)
> at 
> org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onExceptionCaught(GridNioFilterChain.java:261)
> at 
> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedExceptionCaught(GridNioFilterAdapter.java:102)
> at 
> org.apache.ignite.internal.util.nio.GridNioCodecFilter.onExceptionCaught(GridNioCodecFilter.java:80)
> at 
> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedExceptionCaught(GridNioFilterAdapter.java:102)
> at 
> org.apache.ignite.internal.util.nio.ssl.GridNioSslFilter.onExceptionCaught(GridNioSslFilter.java:241)
> at 
> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedExceptionCaught(GridNioFilterAdapter.java:102)
> at 
> org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onExceptionCaught(GridNioServer.java:3188)
> at 
> org.apache.ignite.internal.util.nio.GridNioFilterChain.onExceptionCaught(GridNioFilterChain.java:160)
> at 
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.close(GridNioServer.java:2437)
> at 
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2199)
> at 
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:1968)
> at 
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1669)
> at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-10873) CorruptedTreeException during simultaneous cache put operations

2019-02-20 Thread Igor Seliverstov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773116#comment-16773116
 ] 

Igor Seliverstov commented on IGNITE-10873:
---

[~ivan.glukos], looks OK; at least I don't see any possible issue arising from the 
change.

> CorruptedTreeException during simultaneous cache put operations
> ---
>
> Key: IGNITE-10873
> URL: https://issues.apache.org/jira/browse/IGNITE-10873
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, persistence, sql
>Affects Versions: 2.7
>Reporter: Pavel Vinokurov
>Assignee: Ivan Rakov
>Priority: Critical
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code}
> [2019-01-09 20:47:04,376][ERROR][pool-9-thread-9][GridDhtAtomicCache]  
> Unexpected exception during cache update
> org.h2.message.DbException: General error: "class 
> org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
>  Runtime failure on row: Row@780acfb4[ key: .. ][ GTEST, null, 254, null, 
> null, null, null, 0, null, null, null, null, null, null, null, 0, 0, 0, null, 
> 0, 0, 0, 0, 0, 0, 0, null, 0, 0, null, 0, null, 0, null, 0, null, null, null, 
> 0, 0, 0, 0, 0, 0, null, null, null, null, null, null, null, 0.0, 0, 0.0, 0, 
> 0.0, 0, null, 0, 0, 0, 0, null, null, null, null, null, null, null, null, 
> null, null, null, null, null, null, null, null ]" [5-197]
>   at org.h2.message.DbException.get(DbException.java:168)
>   at org.h2.message.DbException.convert(DbException.java:307)
>   at 
> org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.putx(H2TreeIndex.java:302)
>   at 
> org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.addToIndex(GridH2Table.java:546)
>   at 
> org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:479)
>   at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.store(IgniteH2Indexing.java:768)
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.store(GridQueryProcessor.java:1905)
>   at 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.store(GridCacheQueryManager.java:404)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishUpdate(IgniteCacheOffheapManagerImpl.java:2633)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke0(IgniteCacheOffheapManagerImpl.java:1646)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1621)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.invoke(GridCacheOffheapManager.java:1935)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:428)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:2295)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2494)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update(GridDhtAtomicCache.java:1951)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1780)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1668)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:299)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:483)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:443)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:248)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1153)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put0(GridDhtAtomicCache.java:611)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2449)
>   at 
> org.apach

[jira] [Created] (IGNITE-11433) MVCC: Link entry versions at the Data Store layer.

2019-02-27 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-11433:
-

 Summary: MVCC: Link entry versions at the Data Store layer.
 Key: IGNITE-11433
 URL: https://issues.apache.org/jira/browse/IGNITE-11433
 Project: Ignite
  Issue Type: Improvement
  Components: mvcc, sql
Reporter: Igor Seliverstov






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11433) MVCC: Link entry versions at the Data Store layer.

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-11433:
--
Description: 
At now all entry versions are placed inside index trees. CacheDataTree is used 
to link versions each to other (using their order inside a data page).

Despite the fact that this approach is easy to implement and preferable at the 
first point, it brings several disadvantages:

1) We need to iterate over versions at update time under a read (or even write) 
lock on an index page which blocks other write (read) operations for a 
relatively long period of time.
2) We hold all versions of row in all indexes which makes them use much more 
space than needed
3) We cannot implement several important improvements (data streamer 
optimizations) because having several versions of one key in an index page 
doesn't allow using of Invoke operations.
3) Write amplification suffers not only Data Store layer, but indexes as well, 
which makes read/lookup ops into indexes much slower.

> MVCC: Link entry versions at the Data Store layer.
> --
>
> Key: IGNITE-11433
> URL: https://issues.apache.org/jira/browse/IGNITE-11433
> Project: Ignite
>  Issue Type: Improvement
>  Components: mvcc, sql
>Reporter: Igor Seliverstov
>Priority: Major
>
> At now all entry versions are placed inside index trees. CacheDataTree is 
> used to link versions each to other (using their order inside a data page).
> Despite the fact that this approach is easy to implement and preferable at 
> the first point, it brings several disadvantages:
> 1) We need to iterate over versions at update time under a read (or even 
> write) lock on an index page which blocks other write (read) operations for a 
> relatively long period of time.
> 2) We hold all versions of row in all indexes which makes them use much more 
> space than needed
> 3) We cannot implement several important improvements (data streamer 
> optimizations) because having several versions of one key in an index page 
> doesn't allow using of Invoke operations.
> 3) Write amplification suffers not only Data Store layer, but indexes as 
> well, which makes read/lookup ops into indexes much slower.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11433) MVCC: Link entry versions at the Data Store layer.

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-11433:
--
Description: 
At now all entry versions are placed inside index trees. CacheDataTree is used 
to link versions each to other (using their order inside a data page).

Despite the fact that this approach is easy to implement and preferable at the 
first point, it brings several disadvantages:

1) We need to iterate over versions at update time under a read (or even write) 
lock on an index page which blocks other write (read) operations for a 
relatively long period of time.
2) We hold all versions of row in all indexes which makes them use much more 
space than needed
3) We cannot implement several important improvements (data streamer 
optimizations) because having several versions of one key in an index page 
doesn't allow using of Invoke operations.
3) Write amplification suffers not only Data Store layer, but indexes as well, 
which makes read/lookup ops into indexes much slower.

Using versions linking at the Data Store only (like it do other vendors) solves 
or decreases impact of that issues.

So, the proposed changes:

1) Change data page layout adding two fields into its header: {{link}} (a link 
to the next entry in the versions chain) and {{lock}} (a tx, which holds a 
write lock on the entry) There are several possible optimizations: 1) leave 
lock as is (in index leaf item) 2) use max version as lock version as well
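
A rough sketch of what such a per-entry header could look like, assuming two extra 8-byte fields ({{link}} and {{lock}}); the class name, offsets and page size are illustrative only, not the real Ignite page format.

{code}
import java.nio.ByteBuffer;

/**
 * Sketch of the proposed per-entry header: LINK points to the next (older)
 * tuple version, LOCK stores the version of the tx holding the write lock.
 */
public class VersionChainHeaderSketch {
    private static final int LINK_OFF = 0;  // 8 bytes: link to the older version (0 = none).
    private static final int LOCK_OFF = 8;  // 8 bytes: lock owner tx version (0 = unlocked).
    private static final int HDR_SIZE = 16;

    private final ByteBuffer page = ByteBuffer.allocateDirect(4096);

    long link(int entryOff) {
        return page.getLong(entryOff + LINK_OFF);
    }

    void link(int entryOff, long olderVerLink) {
        page.putLong(entryOff + LINK_OFF, olderVerLink);
    }

    long lock(int entryOff) {
        return page.getLong(entryOff + LOCK_OFF);
    }

    /** A CAS-like acquire would be needed in reality; a plain write is shown for brevity. */
    void lock(int entryOff, long txVer) {
        page.putLong(entryOff + LOCK_OFF, txVer);
    }

    int headerSize() {
        return HDR_SIZE;
    }

    public static void main(String[] args) {
        VersionChainHeaderSketch hdr = new VersionChainHeaderSketch();
        hdr.link(0, 123456L);   // Link the entry at offset 0 to an older version.
        hdr.lock(0, 42L);       // Tx 42 takes the write lock on the chain head.
        System.out.println("link=" + hdr.link(0) + ", lock=" + hdr.lock(0) +
            ", hdrSize=" + hdr.headerSize());
    }
}
{code}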

  was:
At now all entry versions are placed inside index trees. CacheDataTree is used 
to link versions each to other (using their order inside a data page).

Despite the fact that this approach is easy to implement and preferable at the 
first point, it brings several disadvantages:

1) We need to iterate over versions at update time under a read (or even write) 
lock on an index page which blocks other write (read) operations for a 
relatively long period of time.
2) We hold all versions of row in all indexes which makes them use much more 
space than needed
3) We cannot implement several important improvements (data streamer 
optimizations) because having several versions of one key in an index page 
doesn't allow using of Invoke operations.
3) Write amplification suffers not only Data Store layer, but indexes as well, 
which makes read/lookup ops into indexes much slower.


> MVCC: Link entry versions at the Data Store layer.
> --
>
> Key: IGNITE-11433
> URL: https://issues.apache.org/jira/browse/IGNITE-11433
> Project: Ignite
>  Issue Type: Improvement
>  Components: mvcc, sql
>Reporter: Igor Seliverstov
>Priority: Major
>
> At now all entry versions are placed inside index trees. CacheDataTree is 
> used to link versions each to other (using their order inside a data page).
> Despite the fact that this approach is easy to implement and preferable at 
> the first point, it brings several disadvantages:
> 1) We need to iterate over versions at update time under a read (or even 
> write) lock on an index page which blocks other write (read) operations for a 
> relatively long period of time.
> 2) We hold all versions of row in all indexes which makes them use much more 
> space than needed
> 3) We cannot implement several important improvements (data streamer 
> optimizations) because having several versions of one key in an index page 
> doesn't allow using of Invoke operations.
> 3) Write amplification suffers not only Data Store layer, but indexes as 
> well, which makes read/lookup ops into indexes much slower.
> Using versions linking at the Data Store only (like it do other vendors) 
> solves or decreases impact of that issues.
> So, the proposed changes:
> 1) Change data page layout adding two fields into its header: {{link}} (a 
> link to the next entry in the versions chain) and {{lock}} (a tx, which holds 
> a write lock on the entry) There are several possible optimizations: 1) leave 
> lock as is (in index leaf item) 2) use max version as lock version as well



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11433) MVCC: Link entry versions at the Data Store layer.

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-11433:
--
Description: 
Currently all tuple versions are placed inside index trees. CacheDataTree is used 
to link versions to each other (using their order inside a data page).

Although this approach is easy to implement and preferable at first, it brings 
several disadvantages:

1) We need to iterate over tuple versions at update time under a read (or even 
write) lock on an index page, which blocks other write (read) operations for a 
relatively long period of time.
2) We index all tuple versions, which makes indexes use much more space than 
needed.
3) We cannot implement several important improvements (data streamer 
optimizations), because having several versions of one key in an index page 
doesn't allow the use of Invoke operations.
4) Write amplification hits not only the Data Store layer but the indexes as well, 
which makes read/lookup operations on indexes much slower.

Linking versions at the Data Store layer only (as other vendors do) solves these 
issues or decreases their impact.

So, the proposed changes:

1) Change the data page layout by adding two fields to its header: {{link}} (a link 
to the next tuple in the version chain) and {{lock}} (the tx which holds a write 
lock on the HEAD of the chain). There are several possible optimizations: 1) leave 
the lock as is (in the cache index item); 2) use the max version as the lock 
version as well.
2) Do not save all versions of a tuple in indexes; this means removing the version 
from the key - the newest version will overwrite the existing entry.

There are two approaches to linking versions, each with its pros and cons (a 
traversal sketch for the first one follows below):

1) N2O (newer to older) - a reader (writer) gets the newest tuple version first 
and iterates over tuple versions from newer to older until it reaches a position 
where its snapshot lies between the min and max versions of the examined tuple. 
This approach implies faster reads (the most recent versions are fetched first) and 
the need to update all involved indexes on each write operation - slower writes, 
in other words (this may be optimized using logical pointers to the head of the 
tuple version chain). Cooperative VAC (update operations remove tuple versions 
invisible to all readers) is possible.
2) O2N (older to newer) - a reader starts from the oldest tuple version and 
iterates over versions until it reaches one visible to its snapshot. It allows not 
updating all indexes (except when an indexed value changes), so write operations 
become lighter. Cooperative VAC is almost impossible.

We need to decide which approach to use depending on which load profile is 
preferable (OLTP/OLAP).
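
To make the N2O option concrete, here is a simplified sketch of the read path under the assumptions above: each version carries [minVer, maxVer) visibility bounds and a link to the next (older) version. The types and the visibility rule are illustrative, not the actual MVCC snapshot protocol.

{code}
/** Sketch of N2O traversal: walk from the newest version towards older ones. */
public class N2oTraversalSketch {
    static final class Version {
        final long minVer;   // Version (tx) that created this tuple version.
        final long maxVer;   // Version that overwrote it; Long.MAX_VALUE if current.
        final Object value;
        final Version older; // N2O link: next, older version in the chain.

        Version(long minVer, long maxVer, Object value, Version older) {
            this.minVer = minVer;
            this.maxVer = maxVer;
            this.value = value;
            this.older = older;
        }
    }

    /** Returns the value of the first version visible to the given snapshot, or null. */
    static Object read(Version newest, long snapshotVer) {
        for (Version v = newest; v != null; v = v.older) {
            // Visible if created at or before the snapshot and not yet overwritten at snapshot time.
            if (v.minVer <= snapshotVer && snapshotVer < v.maxVer)
                return v.value;
        }
        return null; // No version is visible to this snapshot.
    }

    public static void main(String[] args) {
        Version v1 = new Version(10, 20, "old", null);
        Version v2 = new Version(20, Long.MAX_VALUE, "new", v1);

        System.out.println(read(v2, 15)); // "old" - snapshot taken before the update.
        System.out.println(read(v2, 25)); // "new" - snapshot sees the latest version.
    }
}
{code}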

  was:
At now all entry versions are placed inside index trees. CacheDataTree is used 
to link versions each to other (using their order inside a data page).

Despite the fact that this approach is easy to implement and preferable at the 
first point, it brings several disadvantages:

1) We need to iterate over versions at update time under a read (or even write) 
lock on an index page which blocks other write (read) operations for a 
relatively long period of time.
2) We hold all versions of row in all indexes which makes them use much more 
space than needed
3) We cannot implement several important improvements (data streamer 
optimizations) because having several versions of one key in an index page 
doesn't allow using of Invoke operations.
3) Write amplification suffers not only Data Store layer, but indexes as well, 
which makes read/lookup ops into indexes much slower.

Using versions linking at the Data Store only (like it do other vendors) solves 
or decreases impact of that issues.

So, the proposed changes:

1) Change data page layout adding two fields into its header: {{link}} (a link 
to the next entry in the versions chain) and {{lock}} (a tx, which holds a 
write lock on the entry) There are several possible optimizations: 1) leave 
lock as is (in index leaf item) 2) use max version as lock version as well


> MVCC: Link entry versions at the Data Store layer.
> --
>
> Key: IGNITE-11433
> URL: https://issues.apache.org/jira/browse/IGNITE-11433
> Project: Ignite
>  Issue Type: Improvement
>  Components: mvcc, sql
>Reporter: Igor Seliverstov
>Priority: Major
>
> At now all tuple versions are placed inside index trees. CacheDataTree is 
> used to link versions each to other (using their order inside a data page).
> Despite the fact that this approach is easy to implement and preferable at 
> the first point, it brings several disadvantages:
> 1) We need to iterate over tuple versions at update time under a read (or 
> even write) lock on an index page which blocks other write (read) operations 
> for a relatively long period of time.
> 2) We index all tup

[jira] [Updated] (IGNITE-11433) MVCC: Link entry versions at the Data Store layer.

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-11433:
--
Description: 
Currently all tuple versions are placed inside index trees. CacheDataTree is used 
to link versions to each other (using their order inside an index page).

Although this approach is easy to implement and preferable at first, it brings 
several disadvantages:

1) We need to iterate over tuple versions at update time under a read (or even 
write) lock on an index page, which blocks other write (read) operations for a 
relatively long period of time.
2) Write amplification hits not only the Data Store layer but the indexes as well, 
which makes read/lookup operations on indexes much slower.
3) We cannot implement several important improvements (data streamer 
optimizations), because having several versions of one key in an index page 
doesn't allow the use of Invoke operations.


Linking versions at the Data Store layer only (as other vendors do) solves these 
issues or decreases their impact.

So, the proposed changes:

1) Change the data page layout by adding two fields to its header: {{link}} (a link 
to the next tuple in the version chain) and {{lock}} (the tx which holds a write 
lock on the HEAD of the chain). There are several possible optimizations: 1) leave 
the lock as is (in the cache index item); 2) use the max version as the lock 
version as well.
2) Do not save all versions of a tuple in indexes; this means removing the version 
from the key - the newest version will overwrite the existing entry.

There are two approaches to linking versions, each with its pros and cons:

1) N2O (newer to older) - a reader (writer) gets the newest tuple version first 
and iterates over tuple versions from newer to older until it reaches a position 
where its snapshot lies between the min and max versions of the examined tuple. 
This approach implies faster reads (the most recent versions are fetched first) and 
the need to update all involved indexes on each write operation - slower writes, 
in other words (this may be optimized using logical pointers to the head of the 
tuple version chain). Cooperative VAC (update operations remove tuple versions 
invisible to all readers) is possible.
2) O2N (older to newer) - a reader starts from the oldest tuple version and 
iterates over versions until it reaches one visible to its snapshot. It allows not 
updating all indexes (except when an indexed value changes), so write operations 
become lighter. Cooperative VAC is almost impossible.

We need to decide which approach to use depending on which load profile is 
preferable (OLTP/OLAP).

  was:
At now all tuple versions are placed inside index trees. CacheDataTree is used 
to link versions each to other (using their order inside an index page).

Despite the fact that this approach is easy to implement and preferable at the 
first point, it brings several disadvantages:

1) We need to iterate over tuple versions at update time under a read (or even 
write) lock on an index page which blocks other write (read) operations for a 
relatively long period of time.
2) We index all tuple versions which makes indexes use much more space than 
needed
3) We cannot implement several important improvements (data streamer 
optimizations) because having several versions of one key in an index page 
doesn't allow using of Invoke operations.
3) Write amplification suffers not only Data Store layer, but indexes as well, 
which makes read/lookup ops into indexes much slower.

Using versions linking at the Data Store only (like it do other vendors) solves 
or decreases impact of that issues.

So, the proposed changes:

1) Change data page layout adding two fields into its header: {{link}} (a link 
to the next tuple in a versions chain) and {{lock}} (a tx, which holds a write 
lock on the HEAD of the chain) There are several possible optimizations: 1) 
leave lock as is (in the cache index item) 2) use max version as lock version 
as well
2) Do not save all versions of a tuple in indexes; this mean removing version 
from key - newest version will overwrite an existing entry

There are two approaches with some pros and cons of how to link versions:

1) N2O (newer to older) - a reader (writer) gets the newest tuple version first 
and iterates over tuple versions from newer to older until it gets a position 
where it's snapshot placed between min and max versions of the examined tuple. 
Approach implies faster reads (more actual versions are get first) and 
necessity of updating all involved indexes on each write operation - slower 
writes in other words (may be optimized using logical pointers to the head of 
tuple versions chain). Cooperative VAC (update operations remove invisible for 
all readers tuple versions) is possible.
2) O2N (older to newer) - a reader gets the oldest visible tuple version and 
iterates over versions until it gets visible version. It allows not to update 
a

[jira] [Updated] (IGNITE-11433) MVCC: Link entry versions at the Data Store layer.

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-11433:
--
Description: 
Currently all tuple versions are placed inside index trees. CacheDataTree is used 
to link versions to each other (using their order inside an index page).

Although this approach is easy to implement and preferable at first, it brings 
several disadvantages:

1) We need to iterate over tuple versions at update time under a read (or even 
write) lock on an index page, which blocks other write (read) operations for a 
relatively long period of time.
2) We index all tuple versions, which makes indexes use much more space than 
needed.
3) We cannot implement several important improvements (data streamer 
optimizations), because having several versions of one key in an index page 
doesn't allow the use of Invoke operations.
4) Write amplification hits not only the Data Store layer but the indexes as well, 
which makes read/lookup operations on indexes much slower.

Linking versions at the Data Store layer only (as other vendors do) solves these 
issues or decreases their impact.

So, the proposed changes:

1) Change the data page layout by adding two fields to its header: {{link}} (a link 
to the next tuple in the version chain) and {{lock}} (the tx which holds a write 
lock on the HEAD of the chain). There are several possible optimizations: 1) leave 
the lock as is (in the cache index item); 2) use the max version as the lock 
version as well.
2) Do not save all versions of a tuple in indexes; this means removing the version 
from the key - the newest version will overwrite the existing entry.

There are two approaches to linking versions, each with its pros and cons:

1) N2O (newer to older) - a reader (writer) gets the newest tuple version first 
and iterates over tuple versions from newer to older until it reaches a position 
where its snapshot lies between the min and max versions of the examined tuple. 
This approach implies faster reads (the most recent versions are fetched first) and 
the need to update all involved indexes on each write operation - slower writes, 
in other words (this may be optimized using logical pointers to the head of the 
tuple version chain). Cooperative VAC (update operations remove tuple versions 
invisible to all readers) is possible.
2) O2N (older to newer) - a reader starts from the oldest tuple version and 
iterates over versions until it reaches one visible to its snapshot. It allows not 
updating all indexes (except when an indexed value changes), so write operations 
become lighter. Cooperative VAC is almost impossible.

We need to decide which approach to use depending on which load profile is 
preferable (OLTP/OLAP).

  was:
At now all tuple versions are placed inside index trees. CacheDataTree is used 
to link versions each to other (using their order inside a data page).

Despite the fact that this approach is easy to implement and preferable at the 
first point, it brings several disadvantages:

1) We need to iterate over tuple versions at update time under a read (or even 
write) lock on an index page which blocks other write (read) operations for a 
relatively long period of time.
2) We index all tuple versions which makes indexes use much more space than 
needed
3) We cannot implement several important improvements (data streamer 
optimizations) because having several versions of one key in an index page 
doesn't allow using of Invoke operations.
3) Write amplification suffers not only Data Store layer, but indexes as well, 
which makes read/lookup ops into indexes much slower.

Using versions linking at the Data Store only (like it do other vendors) solves 
or decreases impact of that issues.

So, the proposed changes:

1) Change data page layout adding two fields into its header: {{link}} (a link 
to the next tuple in a versions chain) and {{lock}} (a tx, which holds a write 
lock on the HEAD of the chain) There are several possible optimizations: 1) 
leave lock as is (in the cache index item) 2) use max version as lock version 
as well
2) Do not save all versions of a tuple in indexes; this mean removing version 
from key - newest version will overwrite an existing entry

There are two approaches with some pros and cons of how to link versions:

1) N2O (newer to older) - a reader (writer) gets the newest tuple version first 
and iterates over tuple versions from newer to older until it gets a position 
where it's snapshot placed between min and max versions of the examined tuple. 
Approach implies faster reads (more actual versions are get first) and 
necessity of updating all involved indexes on each write operation - slower 
writes in other words (may be optimized using logical pointers to the head of 
tuple versions chain). Cooperative VAC (update operations remove invisible for 
all readers tuple versions) is possible.
2) O2N (older to newer) - a reader gets the oldest visible tuple version and 

[jira] [Updated] (IGNITE-10729) MVCC TX: Improvements.

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-10729:
--
Description: 
1) Vacuum doesn't have a change set, which means it traverses all data to find 
invisible entries; hence it breaks read statistics and makes the whole data set 
"hot". We should traverse data entries instead, and only those entries which were 
updated (linked to newer versions); moreover, vacuum should traverse only those 
data pages which were updated after the last successful vacuum (at least one entry 
on the data page was linked to a newer one).

2) Vacuum traverses partitions instead of data entries, so races like the 
following are possible: a reader checks an entry; an updater removes this entry 
from the partition; vacuum doesn't see the entry and cleans the TxLog -> the 
reader cannot check the entry state against the TxLog and gets an exception. This 
race prevents an optimization where all entries older than the last successful 
vacuum version are considered COMMITTED (see the previous suggestion).

  was:
Currently there are several problems:
1) vacuum doesn't have change set, this means it travers all data to find 
invisible entries; hanse it breaks read statistics and make all data set "hot" 
- we should travers data entries instead, and only those entries, which was 
updated (linked to newer versions), moreover, vacuum should travers only those 
data pages, which were updated after last successful vacuum (at least one entry 
on the data page was linked to a never one)

2) vacuum travers over partitions instead of data entries, so, there possible 
some races like: reader checks an entry; updater removes this entry from 
partition; vacuum doesn't see the entry and clean TxLog -> reader cannot check 
the entry state with TxLog and gets an exception. This race prevents an 
optimization when all entries, older than last successful vacuum version, are 
considered as COMMITTED (see previous suggestion)

3) all entry versions are placed in BTrees, so, we cannot do updates like PG - 
just adding a new version and linking the old one to it. Having only one 
unversioned item per row in all indexes making possible fast invoke operations 
on such indexes in MVCC mode. Also it let us not to update all indexes on each 
update operation (partition index isn't updated at all, only SQL indexes, built 
over changed fields need to be updated) - this dramatically reduces write 
operations, hence it reduces amount of pages to be "checkpointed" and reduces 
checkpoint mark phase.


> MVCC TX: Improvements.
> --
>
> Key: IGNITE-10729
> URL: https://issues.apache.org/jira/browse/IGNITE-10729
> Project: Ignite
>  Issue Type: Improvement
>  Components: mvcc
>Reporter: Igor Seliverstov
>Priority: Major
>
> 1) vacuum doesn't have change set, this means it travers all data to find 
> invisible entries; hanse it breaks read statistics and make all data set 
> "hot" - we should travers data entries instead, and only those entries, which 
> was updated (linked to newer versions), moreover, vacuum should travers only 
> those data pages, which were updated after last successful vacuum (at least 
> one entry on the data page was linked to a never one)
> 2) vacuum travers over partitions instead of data entries, so, there possible 
> some races like: reader checks an entry; updater removes this entry from 
> partition; vacuum doesn't see the entry and clean TxLog -> reader cannot 
> check the entry state with TxLog and gets an exception. This race prevents an 
> optimization when all entries, older than last successful vacuum version, are 
> considered as COMMITTED (see previous suggestion)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10729) MVCC TX: Improve VAC

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-10729:
--
Summary: MVCC TX: Improve VAC  (was: MVCC TX: Improvements.)

> MVCC TX: Improve VAC
> 
>
> Key: IGNITE-10729
> URL: https://issues.apache.org/jira/browse/IGNITE-10729
> Project: Ignite
>  Issue Type: Improvement
>  Components: mvcc
>Reporter: Igor Seliverstov
>Priority: Major
>
> 1) vacuum doesn't have change set, this means it travers all data to find 
> invisible entries; hanse it breaks read statistics and make all data set 
> "hot" - we should travers data entries instead, and only those entries, which 
> was updated (linked to newer versions), moreover, vacuum should travers only 
> those data pages, which were updated after last successful vacuum (at least 
> one entry on the data page was linked to a never one)
> 2) vacuum travers over partitions instead of data entries, so, there possible 
> some races like: reader checks an entry; updater removes this entry from 
> partition; vacuum doesn't see the entry and clean TxLog -> reader cannot 
> check the entry state with TxLog and gets an exception. This race prevents an 
> optimization when all entries, older than last successful vacuum version, are 
> considered as COMMITTED (see previous suggestion)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10729) MVCC TX: Improve VAC

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-10729:
--
Description: 
1) Vacuum doesn't have a change set, which means it traverses all data to find 
invisible entries; hence it breaks read statistics and makes the whole data set 
"hot". We should traverse data entries instead, and only those entries which were 
updated (linked to newer versions); moreover, vacuum should traverse only those 
data pages which were updated after the last successful vacuum (at least one entry 
on the data page was linked to a newer one).

2) Vacuum traverses partitions instead of data entries, so races like the 
following are possible: a reader checks an entry; an updater removes this entry 
from the partition; vacuum doesn't see the entry and cleans the TxLog -> the 
reader cannot check the entry state against the TxLog and gets an exception. This 
race prevents an optimization where all entries older than the last successful 
vacuum version are considered COMMITTED (see the previous suggestion).

We need to implement a special structure, like visibility maps in PG, to iterate 
over updated data pages only and avoid using the cache data tree.

  was:
1) vacuum doesn't have change set, this means it travers all data to find 
invisible entries; hanse it breaks read statistics and make all data set "hot" 
- we should travers data entries instead, and only those entries, which was 
updated (linked to newer versions), moreover, vacuum should travers only those 
data pages, which were updated after last successful vacuum (at least one entry 
on the data page was linked to a never one)

2) vacuum travers over partitions instead of data entries, so, there possible 
some races like: reader checks an entry; updater removes this entry from 
partition; vacuum doesn't see the entry and clean TxLog -> reader cannot check 
the entry state with TxLog and gets an exception. This race prevents an 
optimization when all entries, older than last successful vacuum version, are 
considered as COMMITTED (see previous suggestion)


> MVCC TX: Improve VAC
> 
>
> Key: IGNITE-10729
> URL: https://issues.apache.org/jira/browse/IGNITE-10729
> Project: Ignite
>  Issue Type: Improvement
>  Components: mvcc
>Reporter: Igor Seliverstov
>Priority: Major
>
> 1) vacuum doesn't have change set, this means it travers all data to find 
> invisible entries; hanse it breaks read statistics and make all data set 
> "hot" - we should travers data entries instead, and only those entries, which 
> was updated (linked to newer versions), moreover, vacuum should travers only 
> those data pages, which were updated after last successful vacuum (at least 
> one entry on the data page was linked to a never one)
> 2) vacuum travers over partitions instead of data entries, so, there possible 
> some races like: reader checks an entry; updater removes this entry from 
> partition; vacuum doesn't see the entry and clean TxLog -> reader cannot 
> check the entry state with TxLog and gets an exception. This race prevents an 
> optimization when all entries, older than last successful vacuum version, are 
> considered as COMMITTED (see previous suggestion)
> We need to implement a special structure like visibility maps in PG to 
> iterate on updated data pages only and do not use cache data tree.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10729) MVCC TX: Improve VAC using visibility maps

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-10729:
--
Summary: MVCC TX: Improve VAC using visibility maps  (was: MVCC TX: Improve 
VAC)

> MVCC TX: Improve VAC using visibility maps
> --
>
> Key: IGNITE-10729
> URL: https://issues.apache.org/jira/browse/IGNITE-10729
> Project: Ignite
>  Issue Type: Improvement
>  Components: mvcc
>Reporter: Igor Seliverstov
>Priority: Major
>
> 1) vacuum doesn't have change set, this means it travers all data to find 
> invisible entries; hanse it breaks read statistics and make all data set 
> "hot" - we should travers data entries instead, and only those entries, which 
> was updated (linked to newer versions), moreover, vacuum should travers only 
> those data pages, which were updated after last successful vacuum (at least 
> one entry on the data page was linked to a never one)
> 2) vacuum travers over partitions instead of data entries, so, there possible 
> some races like: reader checks an entry; updater removes this entry from 
> partition; vacuum doesn't see the entry and clean TxLog -> reader cannot 
> check the entry state with TxLog and gets an exception. This race prevents an 
> optimization when all entries, older than last successful vacuum version, are 
> considered as COMMITTED (see previous suggestion)
> We need to implement a special structure like visibility maps in PG to 
> iterate on updated data pages only and do not use cache data tree.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10729) MVCC TX: Improve VAC using visibility maps

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-10729:
--
Description: 
Currently we have several issues:

1) Vacuum doesn't have a change set, which means it traverses all data to find 
invisible entries; hence it breaks read statistics and makes the whole data set 
"hot". We should traverse data entries instead, and only those entries which were 
updated (linked to newer versions); moreover, vacuum should traverse only those 
data pages which were updated after the last successful vacuum (at least one entry 
on the data page was linked to a newer one). This can easily be done with a 
special bit on the data page: any update resets the bit, vacuum traverses only 
data pages with a zero bit and sets it to 1 after processing.

2) Vacuum traverses partitions instead of data entries, so races like the 
following are possible: a reader checks an entry; an updater removes this entry 
from the partition; vacuum doesn't see the entry and cleans the TxLog -> the 
reader cannot check the entry state against the TxLog and gets an exception. This 
race prevents an optimization where all entries older than the last successful 
vacuum version are considered COMMITTED (see the previous suggestion).

We need to implement a special structure, like visibility maps in PG, to reduce 
the number of examined pages, iterate over updated data pages only and avoid using 
the cache data tree.
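
A minimal sketch of such a visibility map, assuming one "clean" bit per data page that any update clears and vacuum sets back after processing; the class and method names, and the per-page granularity, are illustrative only, not the actual design.

{code}
import java.util.BitSet;

/** PG-style visibility map sketch: vacuum visits only pages touched since the last run. */
public class VisibilityMapSketch {
    private final int pages;
    private final BitSet clean;   // Bit set => no dead versions on the page since the last vacuum.

    VisibilityMapSketch(int pages) {
        this.pages = pages;
        clean = new BitSet(pages);
        clean.set(0, pages);      // Freshly vacuumed state: every page is "clean".
    }

    /** Any update on a page clears its bit, marking the page as a vacuum candidate. */
    void onUpdate(int pageIdx) {
        clean.clear(pageIdx);
    }

    /** Vacuum visits dirty pages only, skipping the cache data tree entirely. */
    void vacuum() {
        for (int p = clean.nextClearBit(0); p < pages; p = clean.nextClearBit(p + 1)) {
            cleanDeadVersions(p);  // Hypothetical per-page cleanup.
            clean.set(p);          // Mark the page clean after processing.
        }
    }

    private void cleanDeadVersions(int pageIdx) {
        System.out.println("vacuuming data page " + pageIdx);
    }

    public static void main(String[] args) {
        VisibilityMapSketch vm = new VisibilityMapSketch(8);
        vm.onUpdate(2);
        vm.onUpdate(5);
        vm.vacuum();  // Prints pages 2 and 5 only.
    }
}
{code}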

  was:
1) vacuum doesn't have change set, this means it travers all data to find 
invisible entries; hanse it breaks read statistics and make all data set "hot" 
- we should travers data entries instead, and only those entries, which was 
updated (linked to newer versions), moreover, vacuum should travers only those 
data pages, which were updated after last successful vacuum (at least one entry 
on the data page was linked to a never one)

2) vacuum travers over partitions instead of data entries, so, there possible 
some races like: reader checks an entry; updater removes this entry from 
partition; vacuum doesn't see the entry and clean TxLog -> reader cannot check 
the entry state with TxLog and gets an exception. This race prevents an 
optimization when all entries, older than last successful vacuum version, are 
considered as COMMITTED (see previous suggestion)

We need to implement a special structure like visibility maps in PG to iterate 
on updated data pages only and do not use cache data tree.


> MVCC TX: Improve VAC using visibility maps
> --
>
> Key: IGNITE-10729
> URL: https://issues.apache.org/jira/browse/IGNITE-10729
> Project: Ignite
>  Issue Type: Improvement
>  Components: mvcc
>Reporter: Igor Seliverstov
>Priority: Major
>
> Currently we have several issues:
> 1) vacuum doesn't have change set, this means it travers all data to find 
> invisible entries; hanse it breaks read statistics and make all data set 
> "hot" - we should travers data entries instead, and only those entries, which 
> was updated (linked to newer versions), moreover, vacuum should travers only 
> those data pages, which were updated after last successful vacuum (at least 
> one entry on the data page was linked to a never one) - this can be easily 
> done by just having a special bit at the data page, so - any update resets 
> this bit, vacuum travers only data pages with zero value bit and sets it to 1 
> after processing.
> 2) vacuum travers over partitions instead of data entries, so, there possible 
> some races like: reader checks an entry; updater removes this entry from 
> partition; vacuum doesn't see the entry and clean TxLog -> reader cannot 
> check the entry state with TxLog and gets an exception. This race prevents an 
> optimization when all entries, older than last successful vacuum version, are 
> considered as COMMITTED (see previous suggestion)
> We need to implement a special structure like visibility maps in PG to reduce 
> examined pages amount, iterate over updated data pages only and do not use 
> cache data tree.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10829) MVCC TX: Lazy query execution for query enlists.

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-10829:
--
Description: 
When running query enlist operations (GridNearTxQueryEnlistFuture) we push query 
execution to the data nodes; such execution runs a local select 
(GridDhtTxQueryEnlistFuture), gets a cursor and executes a write operation for 
each select result row.

The main difficulty starts when we cannot execute the whole operation at once (due 
to a lock conflict or backup message queue overflow). In that case we break 
iteration and save a context (detach the H2 connection for further exclusive usage 
and save the current position in the cursor). This is not an issue in non-lazy 
mode, since the cursor internally has a list of all needed entries and doesn't 
hold any resources, but in lazy mode we may face two issues:
1) A schema change in the middle of iteration.
2) Possible starvation because of heavy, time-consuming operations in the cache 
pool, which is used by default for operation continuation. 

As soon as IGNITE-9171 is implemented, possible lazy execution has to be taken 
into consideration. This means (a minimal suspend/resume sketch follows below):

1) before breaking iteration we need to release all held shared locks on the 
tables being iterated;
2) before continuing iteration we need to acquire shared locks on all needed 
tables and check that the schema wasn't changed while the locks were released;
3) the operation should be continued in the same pool it was started in, to 
prevent possible starvation of concurrent cache operations (see IGNITE-10597).
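
A minimal sketch of the suspend/resume contract above, assuming a read-write lock guards the table schema and a single-threaded executor stands in for "the same pool"; all class and method names are hypothetical, not Ignite classes.

{code}
import java.util.Iterator;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/** Sketch: process a cursor in batches, releasing shared locks between batches. */
public class LazyEnlistSketch {
    private final ReadWriteLock schemaLock = new ReentrantReadWriteLock();
    private volatile long schemaVer;                  // Bumped on DDL (not shown).
    private final ExecutorService pool = Executors.newSingleThreadExecutor(); // "Same pool" rule.

    void process(Iterator<Object> cursor, int batchSize, long expSchemaVer) {
        schemaLock.readLock().lock();                 // (2) acquire the shared lock before touching data.
        try {
            if (schemaVer != expSchemaVer)
                throw new IllegalStateException("Schema changed while iteration was suspended");

            int processed = 0;
            while (cursor.hasNext() && processed++ < batchSize)
                write(cursor.next());                 // Enlist/write one select result row.
        }
        finally {
            schemaLock.readLock().unlock();           // (1) release shared locks before breaking iteration.
        }

        if (cursor.hasNext())                         // (3) continue in the same pool, not the cache pool.
            pool.submit(() -> process(cursor, batchSize, expSchemaVer));
        else
            pool.shutdown();
    }

    private void write(Object row) {
        System.out.println("enlisted " + row);
    }

    public static void main(String[] args) throws InterruptedException {
        LazyEnlistSketch s = new LazyEnlistSketch();
        s.process(java.util.List.of((Object) "r1", "r2", "r3").iterator(), 2, s.schemaVer);
        s.pool.awaitTermination(1, TimeUnit.SECONDS);
    }
}
{code}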

  was:
Running query enlist operations (GridNearTxQueryEnlistFuture) we put query 
execution to data nodes, such execution runs a local select 
(GridDhtTxQueryEnlistFuture), gets a cursor and executes write operation for 
each select result row.

The main difficult starts when we cannot execute whole operation at once (due 
to lock conflict or backup message queue overflow). Such case we break 
iteration and save a context (detach H2 connection for further exclusive usage 
and save current position in cursor). There is no issue since in non-lazy mode 
the cursor internally have a list of all needed entries and doesn't hold any 
resources but in lazy mode we may face two issues:
1) Schema change in between of iteration
2) Possible starvation because of heavy time consuming operations in cache 
pool, which used by default for operation continuation. 

As soon as IGNITE-9171 is implemented, possible lazy execution is had to be 
taken into consideration. This mean:

1) before braking iteration we need to release all holding shared locks on on 
being iterated tables.
2) before continue iteration we need to acquire shared locks on all needed 
tables and check the schema wasn't changed in between locks were acquired.
3) the operation should be continued in the same pool it was started to prevent 
possible starvation of concurrent cache operations.


> MVCC TX: Lazy query execution for query enlists.
> 
>
> Key: IGNITE-10829
> URL: https://issues.apache.org/jira/browse/IGNITE-10829
> Project: Ignite
>  Issue Type: Improvement
>  Components: mvcc
>Affects Versions: 2.7
>Reporter: Igor Seliverstov
>Priority: Major
> Fix For: 2.8
>
>
> Running query enlist operations (GridNearTxQueryEnlistFuture) we put query 
> execution to data nodes, such execution runs a local select 
> (GridDhtTxQueryEnlistFuture), gets a cursor and executes write operation for 
> each select result row.
> The main difficult starts when we cannot execute whole operation at once (due 
> to lock conflict or backup message queue overflow). Such case we break 
> iteration and save a context (detach H2 connection for further exclusive 
> usage and save current position in cursor). There is no issue since in 
> non-lazy mode the cursor internally have a list of all needed entries and 
> doesn't hold any resources but in lazy mode we may face two issues:
> 1) Schema change in between of iteration
> 2) Possible starvation because of heavy time consuming operations in cache 
> pool, which used by default for operation continuation. 
> As soon as IGNITE-9171 is implemented, possible lazy execution is had to be 
> taken into consideration. This mean:
> 1) before braking iteration we need to release all holding shared locks on on 
> being iterated tables.
> 2) before continue iteration we need to acquire shared locks on all needed 
> tables and check the schema wasn't changed in between locks were acquired.
> 3) the operation should be continued in the same pool it was started to 
> prevent possible starvation of concurrent cache operations (See IGNITE-10597).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

