[jira] [Updated] (IGNITE-11210) SQL: Introduce common logical execution plan for all query types

2019-02-27 Thread Vladimir Ozerov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-11210:
-
Ignite Flags:   (was: Docs Required)

> SQL: Introduce common logical execution plan for all query types
> 
>
> Key: IGNITE-11210
> URL: https://issues.apache.org/jira/browse/IGNITE-11210
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Vladimir Ozerov
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> At the moment we cache several different artifacts for different SQL query 
> types: prepared statements for local queries, two-step queries for 
> distributed queries, and update plans for DML.
> Instead of keeping multiple caches, we should create a common execution 
> plan for every query, which will hold both the DML and SELECT parts. Approximate 
> content of such a plan:
> # Two-step plan
> # DML plan
> # Partition pruning data
> # Maybe even the cached physical node distribution (for reduce queries) for the 
> given {{AffinityTopologyVersion}}
> # Probably the {{AffinityTopologyVersion}} itself
> Then we will perform a single plan lookup/build per query execution. In the 
> future we will probably display these plans in SQL views.
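For illustration only, a minimal sketch of such a unified plan holder with a single
lookup per execution; the class and field names below are hypothetical, not Ignite
internals:

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** Sketch of a single cached plan object per query (illustrative names only). */
public class QueryPlanCacheSketch {
    /** Hypothetical unified plan; field types are placeholders, not real Ignite classes. */
    static final class QueryPlan {
        final Object twoStepPlan;      // map/reduce plan for distributed SELECT
        final Object dmlPlan;          // update plan for DML
        final Object partitionInfo;    // partition pruning data
        final Object nodeDistribution; // cached node distribution for reduce queries
        final Object topVer;           // AffinityTopologyVersion the distribution was built for

        QueryPlan(Object twoStepPlan, Object dmlPlan, Object partitionInfo,
            Object nodeDistribution, Object topVer) {
            this.twoStepPlan = twoStepPlan;
            this.dmlPlan = dmlPlan;
            this.partitionInfo = partitionInfo;
            this.nodeDistribution = nodeDistribution;
            this.topVer = topVer;
        }
    }

    /** One cache instead of several, keyed here by schema and SQL text for brevity. */
    private final ConcurrentMap<String, QueryPlan> plans = new ConcurrentHashMap<>();

    /** Single lookup/build per query execution. */
    QueryPlan planFor(String schema, String sql) {
        return plans.computeIfAbsent(schema + '\u0000' + sql,
            k -> new QueryPlan(null, null, null, null, null)); // build the real plan here
    }
}
{code}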





[jira] [Commented] (IGNITE-10261) MVCC: cache operation may hang during late affinity assignment.

2019-02-27 Thread Roman Kondakov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780182#comment-16780182
 ] 

Roman Kondakov commented on IGNITE-10261:
-

[~amashenkov], I've fixed this test. Please take a look.

> MVCC: cache operation may hang during late affinity assignment.
> ---
>
> Key: IGNITE-10261
> URL: https://issues.apache.org/jira/browse/IGNITE-10261
> Project: Ignite
>  Issue Type: Bug
>  Components: mvcc
>Affects Versions: 2.7
>Reporter: Andrew Mashenkov
>Assignee: Roman Kondakov
>Priority: Critical
>  Labels: failover, mvcc_stabilization_stage_1
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ForceKey response processing fails with a ClassCastException in MVCC mode, 
> which causes the test to hang.
> The issue can easily be reproduced with backups > 0 and rebalancing disabled. See 
> GridCacheDhtPreloadPutGetSelfTest.testPutGetNone1().
> CacheLateAffinityAssignmentTest.testRandomOperations() also hangs sometimes 
> for the same reason.
>  
> {noformat}
> java.lang.ClassCastException: 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry 
> cannot be cast to 
> org.apache.ignite.internal.processors.cache.mvcc.MvccVersionAware
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtForceKeysFuture$MiniFuture.onResult(GridDhtForceKeysFuture.java:545)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtForceKeysFuture.onResult(GridDhtForceKeysFuture.java:202)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.processForceKeyResponse(GridDhtCacheAdapter.java:180)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter$11.onMessage(GridDhtTransactionalCacheAdapter.java:208)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter$11.onMessage(GridDhtTransactionalCacheAdapter.java:206)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter$MessageHandler.apply(GridDhtCacheAdapter.java:1434)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter$MessageHandler.apply(GridDhtCacheAdapter.java:1416)
>  at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1054)
> {noformat}
>  
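Not the committed fix, only the shape of the failure: the force-keys path casts a
preloaded entry to MvccVersionAware unconditionally, which breaks for non-MVCC entry
types. A defensive sketch with placeholder types (nothing below is Ignite code):

{code:java}
/** Illustrative guard around a cast that is unconditional in the failing code path. */
final class MvccCastGuardSketch {
    /** Placeholder for o.a.i.internal.processors.cache.mvcc.MvccVersionAware. */
    interface MvccVersionAware { /* MVCC version accessors */ }

    /** Placeholder for a cache entry that may or may not be MVCC-aware. */
    static class CacheEntry { }

    static void onForceKeyEntry(CacheEntry entry, boolean mvccEnabled) {
        // The reported ClassCastException occurs when a plain entry type reaches
        // a code path that assumes every entry implements MvccVersionAware.
        if (mvccEnabled && entry instanceof MvccVersionAware) {
            MvccVersionAware mvccEntry = (MvccVersionAware)entry;
            // ... MVCC-specific handling of the force-key response ...
        }
        else {
            // ... non-MVCC handling instead of an unconditional cast ...
        }
    }
}
{code}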





[jira] [Updated] (IGNITE-11085) MVCC: Mute GridCachePartitionEvictionDuringReadThroughSelfTest

2019-02-27 Thread Ivan Pavlukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Pavlukhin updated IGNITE-11085:

Summary: MVCC: Mute GridCachePartitionEvictionDuringReadThroughSelfTest  
(was: MVCC: Mute GridCachePartitionEvictionDuringReadThroughSelfTest.)

> MVCC: Mute GridCachePartitionEvictionDuringReadThroughSelfTest
> --
>
> Key: IGNITE-11085
> URL: https://issues.apache.org/jira/browse/IGNITE-11085
> Project: Ignite
>  Issue Type: Test
>  Components: mvcc
>Reporter: Andrew Mashenkov
>Assignee: Ivan Pavlukhin
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, mvcc_stabilization_stage_1
>
> There is no need to run GridCachePartitionEvictionDuringReadThroughSelfTest 
> in the MvccCache 6 suite.
> The test uses an ATOMIC cache, so we should either remove it from the MVCC run 
> or switch it to a TRANSACTIONAL cache and mute it with an explicit reason, since 
> CacheStore is not supported in MVCC mode.
>  
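A minimal sketch of the second option, modeled on how IgnitePdsWithTtlTest (quoted
later in this digest) skips unsupported MVCC features; the CACHE_STORE feature
constant is an assumption, not a verified API:

{code:java}
import org.apache.ignite.testframework.MvccFeatureChecker;
import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;

/** Sketch only: mute the test under MVCC with an explicit reason. */
public class MuteUnderMvccSketch extends GridCommonAbstractTest {
    /** {@inheritDoc} */
    @Override protected void beforeTest() throws Exception {
        // Skip when cache store read-through is not supported in MVCC mode.
        MvccFeatureChecker.skipIfNotSupported(MvccFeatureChecker.Feature.CACHE_STORE);

        super.beforeTest();
    }
}
{code}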





[jira] [Commented] (IGNITE-11085) MVCC: Mute GridCachePartitionEvictionDuringReadThroughSelfTest

2019-02-27 Thread Ivan Pavlukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780180#comment-16780180
 ] 

Ivan Pavlukhin commented on IGNITE-11085:
-

The test has been excluded from the MVCC run in the PR.

> MVCC: Mute GridCachePartitionEvictionDuringReadThroughSelfTest
> --
>
> Key: IGNITE-11085
> URL: https://issues.apache.org/jira/browse/IGNITE-11085
> Project: Ignite
>  Issue Type: Test
>  Components: mvcc
>Reporter: Andrew Mashenkov
>Assignee: Ivan Pavlukhin
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, mvcc_stabilization_stage_1
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There is no need to run GridCachePartitionEvictionDuringReadThroughSelfTest 
> in the MvccCache 6 suite.
> The test uses an ATOMIC cache, so we should either remove it from the MVCC run 
> or switch it to a TRANSACTIONAL cache and mute it with an explicit reason, since 
> CacheStore is not supported in MVCC mode.
>  





[jira] [Commented] (IGNITE-10104) MVCC TX: client SFU doesn't work on replicated caches

2019-02-27 Thread Roman Kondakov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780178#comment-16780178
 ] 

Roman Kondakov commented on IGNITE-10104:
-

[~gvvinblade], patch is ready for review. Tests are ok.

> MVCC TX: client SFU doesn't work on replicated caches
> -
>
> Key: IGNITE-10104
> URL: https://issues.apache.org/jira/browse/IGNITE-10104
> Project: Ignite
>  Issue Type: Bug
>  Components: mvcc, sql
>Reporter: Igor Seliverstov
>Assignee: Roman Kondakov
>Priority: Major
>  Labels: mvcc_stabilization_stage_1, transactions
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When SELECT FOR UPDATE is executed from a client node, the execution is sent to 
> a random owning node. On that node a DHT enlist operation is started, which causes 
> an assertion error because the DHT enlist operation implies that the node is 
> primary for all processed keys.
> see 
> {{CacheMvccReplicatedBackupsTest.testBackupsCoherenceWithLargeOperations}} 
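For context, the invariant the DHT enlist path relies on, expressed with the public
Affinity API (illustration only, not the fix):

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.cache.affinity.Affinity;
import org.apache.ignite.cluster.ClusterNode;

/** Sketch: a node may enlist keys locally only if it is primary for all of them. */
final class SfuPrimaryInvariantSketch {
    static boolean canEnlistLocally(Ignite ignite, String cacheName, Iterable<?> keys) {
        Affinity<Object> aff = ignite.affinity(cacheName);
        ClusterNode locNode = ignite.cluster().localNode();

        for (Object key : keys) {
            if (!aff.isPrimary(locNode, key))
                return false; // a random owning node (e.g. a backup in REPLICATED mode) breaks this
        }

        return true;
    }
}
{code}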





[jira] [Commented] (IGNITE-10261) MVCC: cache operation may hang during late affinity assignment.

2019-02-27 Thread Ignite TC Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780176#comment-16780176
 ] 

Ignite TC Bot commented on IGNITE-10261:


{panel:title=-- Run :: All: No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=3194790&buildTypeId=IgniteTests24Java8_RunAll]

> MVCC: cache operation may hang during late affinity assignment.
> ---
>
> Key: IGNITE-10261
> URL: https://issues.apache.org/jira/browse/IGNITE-10261
> Project: Ignite
>  Issue Type: Bug
>  Components: mvcc
>Affects Versions: 2.7
>Reporter: Andrew Mashenkov
>Assignee: Roman Kondakov
>Priority: Critical
>  Labels: failover, mvcc_stabilization_stage_1
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ForceKey response processing fails with a ClassCastException in MVCC mode, 
> which causes the test to hang.
> The issue can easily be reproduced with backups > 0 and rebalancing disabled. See 
> GridCacheDhtPreloadPutGetSelfTest.testPutGetNone1().
> CacheLateAffinityAssignmentTest.testRandomOperations() also hangs sometimes 
> for the same reason.
>  
> {noformat}
> java.lang.ClassCastException: 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry 
> cannot be cast to 
> org.apache.ignite.internal.processors.cache.mvcc.MvccVersionAware
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtForceKeysFuture$MiniFuture.onResult(GridDhtForceKeysFuture.java:545)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtForceKeysFuture.onResult(GridDhtForceKeysFuture.java:202)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.processForceKeyResponse(GridDhtCacheAdapter.java:180)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter$11.onMessage(GridDhtTransactionalCacheAdapter.java:208)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter$11.onMessage(GridDhtTransactionalCacheAdapter.java:206)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter$MessageHandler.apply(GridDhtCacheAdapter.java:1434)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter$MessageHandler.apply(GridDhtCacheAdapter.java:1416)
>  at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1054)
> {noformat}
>  





[jira] [Commented] (IGNITE-10104) MVCC TX: client SFU doesn't work on replicated caches

2019-02-27 Thread Ignite TC Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780177#comment-16780177
 ] 

Ignite TC Bot commented on IGNITE-10104:


{panel:title=-- Run :: All: No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=3194165&buildTypeId=IgniteTests24Java8_RunAll]

> MVCC TX: client SFU doesn't work on replicated caches
> -
>
> Key: IGNITE-10104
> URL: https://issues.apache.org/jira/browse/IGNITE-10104
> Project: Ignite
>  Issue Type: Bug
>  Components: mvcc, sql
>Reporter: Igor Seliverstov
>Assignee: Roman Kondakov
>Priority: Major
>  Labels: mvcc_stabilization_stage_1, transactions
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When SELECT FOR UPDATE is executed from a client node, the execution is sent to 
> a random owning node. On that node a DHT enlist operation is started, which causes 
> an assertion error because the DHT enlist operation implies that the node is 
> primary for all processed keys.
> see 
> {{CacheMvccReplicatedBackupsTest.testBackupsCoherenceWithLargeOperations}} 





[jira] [Assigned] (IGNITE-11085) MVCC: Mute GridCachePartitionEvictionDuringReadThroughSelfTest.

2019-02-27 Thread Ivan Pavlukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Pavlukhin reassigned IGNITE-11085:
---

Assignee: Ivan Pavlukhin

> MVCC: Mute GridCachePartitionEvictionDuringReadThroughSelfTest.
> ---
>
> Key: IGNITE-11085
> URL: https://issues.apache.org/jira/browse/IGNITE-11085
> Project: Ignite
>  Issue Type: Test
>  Components: mvcc
>Reporter: Andrew Mashenkov
>Assignee: Ivan Pavlukhin
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, mvcc_stabilization_stage_1
>
> There is no need to run GridCachePartitionEvictionDuringReadThroughSelfTest 
> in the MvccCache 6 suite.
> The test uses an ATOMIC cache, so we should either remove it from the MVCC run 
> or switch it to a TRANSACTIONAL cache and mute it with an explicit reason, since 
> CacheStore is not supported in MVCC mode.
>  





[jira] [Commented] (IGNITE-11088) Flacky LocalWalModeChangeDuringRebalancingSelfTest.testWithExchangesMerge

2019-02-27 Thread Ivan Pavlukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780165#comment-16780165
 ] 

Ivan Pavlukhin commented on IGNITE-11088:
-

It is worth rechecking this issue after the global problems in MVCC cache 
rebalancing are fixed.

> Flacky LocalWalModeChangeDuringRebalancingSelfTest.testWithExchangesMerge
> -
>
> Key: IGNITE-11088
> URL: https://issues.apache.org/jira/browse/IGNITE-11088
> Project: Ignite
>  Issue Type: Test
>  Components: mvcc
>Reporter: Andrew Mashenkov
>Assignee: Ivan Pavlukhin
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, mvcc_stabilization_stage_1
>
> [LocalWalModeChangeDuringRebalancingSelfTest.testWithExchangesMerge|https://ci.ignite.apache.org/viewLog.html?buildId=2895774&tab=buildResultsDiv&buildTypeId=IgniteTests24Java8_MvccPds2#testNameId-6585115376754732686]
>  fails sporadically in the MvccPds 2 suite.
> I've found no failures in the non-MVCC Pds 2 suite, so it is probably an MVCC issue.
> See the stack traces from two failures that may share the same root cause. We have to 
> investigate this.
> {noformat}
> java.lang.AssertionError: nodeIdx=2, key=6606 expected:<13212> but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.LocalWalModeChangeDuringRebalancingSelfTest.testWithExchangesMerge(LocalWalModeChangeDuringRebalancingSelfTest.java:473)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest$6.run(GridAbstractTest.java:2088)
>   at java.lang.Thread.run(Thread.java:748) {noformat}
> {noformat}
> [2019-01-23 
> 11:26:25,287][ERROR][sys-stripe-5-#6186%persistence.LocalWalModeChangeDuringRebalancingSelfTest3%][GridDhtColocatedCache]
>   Failed processing get request: GridNearSingleGetRequest 
> [futId=1548243502606, key=KeyCacheObjectImpl [part=381, val=7037, 
> hasValBytes=true], flags=1, topVer=AffinityTopologyVersion [topVer=8, 
> minorTopVer=1], subjId=f1fbb371-3232-4bfa-a20a-d4cad4b2, taskNameHash=0, 
> createTtl=-1, accessTtl=-1, txLbl=null, mvccSnapshot=MvccSnapshotResponse 
> [futId=7040, crdVer=1548242747966, cntr=20023, opCntr=1073741823, txs=null, 
> cleanupVer=0, tracking=0]] class org.apache.ignite.IgniteCheckedException: 
> Runtime failure on bounds: [lower=MvccSnapshotSearchRow [res=null, 
> snapshot=MvccSnapshotResponse [futId=7040, crdVer=1548242747966, cntr=20023, 
> opCntr=1073741823, txs=null, cleanupVer=0, tracking=0]], 
> upper=MvccMinSearchRow []] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.iterate(BPlusTree.java:1043)
>  at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.mvccFind(IgniteCacheOffheapManagerImpl.java:2683)
>  at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.mvccFind(GridCacheOffheapManager.java:2141)
>  at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.mvccRead(IgniteCacheOffheapManagerImpl.java:666)
>  at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.getAllAsync0(GridCacheAdapter.java:2023)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.getDhtAllAsync(GridDhtCacheAdapter.java:807)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtGetSingleFuture.getAsync(GridDhtGetSingleFuture.java:399)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtGetSingleFuture.map0(GridDhtGetSingleFuture.java:277)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtGetSingleFuture.map(GridDhtGetSingleFuture.java:259)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtGetSingleFuture.init(GridDhtGetSingleFuture.java:182)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.getDhtSingleAsync(GridDhtCacheAdapter.java:918)
>  at 
> 

[jira] [Updated] (IGNITE-11438) TTL manager may not clear entries from the underlying CacheDataStore

2019-02-27 Thread Vyacheslav Koptilin (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-11438:
-
Summary: TTL manager may not clear entries from the underlying 
CacheDataStore  (was: TTL manager may not clean entries from the underlying 
CacheDataStore)

> TTL manager may not clear entries from the underlying CacheDataStore
> 
>
> Key: IGNITE-11438
> URL: https://issues.apache.org/jira/browse/IGNITE-11438
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7
>Reporter: Vyacheslav Koptilin
>Assignee: Vyacheslav Koptilin
>Priority: Major
>
> Please see the attached test:
> {code:java}
> /*
>  * Licensed to the Apache Software Foundation (ASF) under one or more
>  * contributor license agreements. See the NOTICE file distributed with
>  * this work for additional information regarding copyright ownership.
>  * The ASF licenses this file to You under the Apache License, Version 2.0
>  * (the "License"); you may not use this file except in compliance with
>  * the License. You may obtain a copy of the License at
>  *
>  *  http://www.apache.org/licenses/LICENSE-2.0
>  *
>  * Unless required by applicable law or agreed to in writing, software
>  * distributed under the License is distributed on an "AS IS" BASIS,
>  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>  * See the License for the specific language governing permissions and
>  * limitations under the License.
>  */
> package org.apache.ignite.internal.processors.cache.persistence.db;
> import java.util.TreeMap;
> import java.util.concurrent.TimeUnit;
> import javax.cache.expiry.AccessedExpiryPolicy;
> import javax.cache.expiry.CreatedExpiryPolicy;
> import javax.cache.expiry.Duration;
> import javax.cache.expiry.ExpiryPolicy;
> import org.apache.ignite.IgniteCache;
> import org.apache.ignite.IgniteCheckedException;
> import org.apache.ignite.cache.CachePeekMode;
> import org.apache.ignite.cache.CacheRebalanceMode;
> import org.apache.ignite.cache.CacheWriteSynchronizationMode;
> import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
> import org.apache.ignite.configuration.CacheConfiguration;
> import org.apache.ignite.configuration.DataRegionConfiguration;
> import org.apache.ignite.configuration.DataStorageConfiguration;
> import org.apache.ignite.configuration.IgniteConfiguration;
> import org.apache.ignite.configuration.WALMode;
> import org.apache.ignite.internal.IgniteEx;
> import org.apache.ignite.internal.IgniteInterruptedCheckedException;
> import org.apache.ignite.internal.processors.cache.GridCacheContext;
> import org.apache.ignite.internal.processors.cache.GridCacheSharedContext;
> import org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManager;
> import org.apache.ignite.internal.processors.cache.IgniteCacheProxy;
> import 
> org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition;
> import org.apache.ignite.internal.util.lang.GridAbsPredicate;
> import org.apache.ignite.internal.util.lang.GridCursor;
> import org.apache.ignite.internal.util.typedef.PA;
> import org.apache.ignite.internal.util.typedef.internal.CU;
> import org.apache.ignite.testframework.GridTestUtils;
> import org.apache.ignite.testframework.MvccFeatureChecker;
> import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
> import org.junit.Test;
> import static 
> org.apache.ignite.IgniteSystemProperties.IGNITE_BASELINE_AUTO_ADJUST_ENABLED;
> /**
>  * Test TTL worker with persistence enabled
>  */
> public class IgnitePdsWithTtlTest extends GridCommonAbstractTest {
> /** */
> public static final String CACHE_NAME = "expirableCache";
> /** */
> public static final String GROUP_NAME = "group1";
> /** */
> public static final int PART_SIZE = 32;
> /** */
> private static final int EXPIRATION_TIMEOUT = 10;
> /** */
> public static final int ENTRIES = 100_000;
> /** {@inheritDoc} */
> @Override protected void beforeTestsStarted() throws Exception {
> System.setProperty(IGNITE_BASELINE_AUTO_ADJUST_ENABLED, "false");
> super.beforeTestsStarted();
> }
> /** {@inheritDoc} */
> @Override protected void afterTestsStopped() throws Exception {
> super.afterTestsStopped();
> System.clearProperty(IGNITE_BASELINE_AUTO_ADJUST_ENABLED);
> }
> /** {@inheritDoc} */
> @Override protected void beforeTest() throws Exception {
> 
> MvccFeatureChecker.skipIfNotSupported(MvccFeatureChecker.Feature.EXPIRATION);
> super.beforeTest();
> cleanPersistenceDir();
> }
> /** {@inheritDoc} */
> @Override protected void afterTest() throws 

[jira] [Assigned] (IGNITE-11438) TTL manager may not clean entries from the underlying CacheDataStore

2019-02-27 Thread Vyacheslav Koptilin (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin reassigned IGNITE-11438:


Assignee: Vyacheslav Koptilin
Ignite Flags:   (was: Docs Required)

> TTL manager may not clean entries from the underlying CacheDataStore
> 
>
> Key: IGNITE-11438
> URL: https://issues.apache.org/jira/browse/IGNITE-11438
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7
>Reporter: Vyacheslav Koptilin
>Assignee: Vyacheslav Koptilin
>Priority: Major
>
> Please see the attached test:
> {code:java}
> /*
>  * Licensed to the Apache Software Foundation (ASF) under one or more
>  * contributor license agreements. See the NOTICE file distributed with
>  * this work for additional information regarding copyright ownership.
>  * The ASF licenses this file to You under the Apache License, Version 2.0
>  * (the "License"); you may not use this file except in compliance with
>  * the License. You may obtain a copy of the License at
>  *
>  *  http://www.apache.org/licenses/LICENSE-2.0
>  *
>  * Unless required by applicable law or agreed to in writing, software
>  * distributed under the License is distributed on an "AS IS" BASIS,
>  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>  * See the License for the specific language governing permissions and
>  * limitations under the License.
>  */
> package org.apache.ignite.internal.processors.cache.persistence.db;
> import java.util.TreeMap;
> import java.util.concurrent.TimeUnit;
> import javax.cache.expiry.AccessedExpiryPolicy;
> import javax.cache.expiry.CreatedExpiryPolicy;
> import javax.cache.expiry.Duration;
> import javax.cache.expiry.ExpiryPolicy;
> import org.apache.ignite.IgniteCache;
> import org.apache.ignite.IgniteCheckedException;
> import org.apache.ignite.cache.CachePeekMode;
> import org.apache.ignite.cache.CacheRebalanceMode;
> import org.apache.ignite.cache.CacheWriteSynchronizationMode;
> import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
> import org.apache.ignite.configuration.CacheConfiguration;
> import org.apache.ignite.configuration.DataRegionConfiguration;
> import org.apache.ignite.configuration.DataStorageConfiguration;
> import org.apache.ignite.configuration.IgniteConfiguration;
> import org.apache.ignite.configuration.WALMode;
> import org.apache.ignite.internal.IgniteEx;
> import org.apache.ignite.internal.IgniteInterruptedCheckedException;
> import org.apache.ignite.internal.processors.cache.GridCacheContext;
> import org.apache.ignite.internal.processors.cache.GridCacheSharedContext;
> import org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManager;
> import org.apache.ignite.internal.processors.cache.IgniteCacheProxy;
> import 
> org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition;
> import org.apache.ignite.internal.util.lang.GridAbsPredicate;
> import org.apache.ignite.internal.util.lang.GridCursor;
> import org.apache.ignite.internal.util.typedef.PA;
> import org.apache.ignite.internal.util.typedef.internal.CU;
> import org.apache.ignite.testframework.GridTestUtils;
> import org.apache.ignite.testframework.MvccFeatureChecker;
> import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
> import org.junit.Test;
> import static 
> org.apache.ignite.IgniteSystemProperties.IGNITE_BASELINE_AUTO_ADJUST_ENABLED;
> /**
>  * Test TTL worker with persistence enabled
>  */
> public class IgnitePdsWithTtlTest extends GridCommonAbstractTest {
> /** */
> public static final String CACHE_NAME = "expirableCache";
> /** */
> public static final String GROUP_NAME = "group1";
> /** */
> public static final int PART_SIZE = 32;
> /** */
> private static final int EXPIRATION_TIMEOUT = 10;
> /** */
> public static final int ENTRIES = 100_000;
> /** {@inheritDoc} */
> @Override protected void beforeTestsStarted() throws Exception {
> System.setProperty(IGNITE_BASELINE_AUTO_ADJUST_ENABLED, "false");
> super.beforeTestsStarted();
> }
> /** {@inheritDoc} */
> @Override protected void afterTestsStopped() throws Exception {
> super.afterTestsStopped();
> System.clearProperty(IGNITE_BASELINE_AUTO_ADJUST_ENABLED);
> }
> /** {@inheritDoc} */
> @Override protected void beforeTest() throws Exception {
> 
> MvccFeatureChecker.skipIfNotSupported(MvccFeatureChecker.Feature.EXPIRATION);
> super.beforeTest();
> cleanPersistenceDir();
> }
> /** {@inheritDoc} */
> @Override protected void afterTest() throws Exception {
> super.afterTest();
> //protection if test 

[jira] [Updated] (IGNITE-11438) TTL manager may not clean entries from the underlying CacheDataStore

2019-02-27 Thread Vyacheslav Koptilin (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-11438:
-
Description: 
Please see the attached test:
{code:java}
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *  http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.ignite.internal.processors.cache.persistence.db;

import java.util.TreeMap;
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.AccessedExpiryPolicy;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import javax.cache.expiry.ExpiryPolicy;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteCheckedException;
import org.apache.ignite.cache.CachePeekMode;
import org.apache.ignite.cache.CacheRebalanceMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.WALMode;
import org.apache.ignite.internal.IgniteEx;
import org.apache.ignite.internal.IgniteInterruptedCheckedException;
import org.apache.ignite.internal.processors.cache.GridCacheContext;
import org.apache.ignite.internal.processors.cache.GridCacheSharedContext;
import org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManager;
import org.apache.ignite.internal.processors.cache.IgniteCacheProxy;
import 
org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition;
import org.apache.ignite.internal.util.lang.GridAbsPredicate;
import org.apache.ignite.internal.util.lang.GridCursor;
import org.apache.ignite.internal.util.typedef.PA;
import org.apache.ignite.internal.util.typedef.internal.CU;
import org.apache.ignite.testframework.GridTestUtils;
import org.apache.ignite.testframework.MvccFeatureChecker;
import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
import org.junit.Test;

import static 
org.apache.ignite.IgniteSystemProperties.IGNITE_BASELINE_AUTO_ADJUST_ENABLED;

/**
 * Test TTL worker with persistence enabled
 */
public class IgnitePdsWithTtlTest extends GridCommonAbstractTest {
/** */
public static final String CACHE_NAME = "expirableCache";

/** */
public static final String GROUP_NAME = "group1";

/** */
public static final int PART_SIZE = 32;

/** */
private static final int EXPIRATION_TIMEOUT = 10;

/** */
public static final int ENTRIES = 100_000;

/** {@inheritDoc} */
@Override protected void beforeTestsStarted() throws Exception {
System.setProperty(IGNITE_BASELINE_AUTO_ADJUST_ENABLED, "false");

super.beforeTestsStarted();
}

/** {@inheritDoc} */
@Override protected void afterTestsStopped() throws Exception {
super.afterTestsStopped();

System.clearProperty(IGNITE_BASELINE_AUTO_ADJUST_ENABLED);
}

/** {@inheritDoc} */
@Override protected void beforeTest() throws Exception {

MvccFeatureChecker.skipIfNotSupported(MvccFeatureChecker.Feature.EXPIRATION);

super.beforeTest();

cleanPersistenceDir();
}

/** {@inheritDoc} */
@Override protected void afterTest() throws Exception {
super.afterTest();

//protection if test failed to finish, e.g. by error
stopAllGrids();

cleanPersistenceDir();
}

/** {@inheritDoc} */
@Override protected IgniteConfiguration getConfiguration(String 
igniteInstanceName) throws Exception {
final IgniteConfiguration cfg = 
super.getConfiguration(igniteInstanceName);

final CacheConfiguration ccfg = new CacheConfiguration();
ccfg.setName(CACHE_NAME);
ccfg.setGroupName(GROUP_NAME);
ccfg.setAffinity(new RendezvousAffinityFunction(false, PART_SIZE));
ccfg.setExpiryPolicyFactory(AccessedExpiryPolicy.factoryOf(new 
Duration(TimeUnit.SECONDS, EXPIRATION_TIMEOUT)));
ccfg.setEagerTtl(true);


[jira] [Commented] (IGNITE-11262) Compression on Discovery data bag

2019-02-27 Thread Vladislav Pyatkov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779636#comment-16779636
 ] 

Vladislav Pyatkov commented on IGNITE-11262:


[~ivan.glukos] Done.
Waiting for the TC re-run.

> Compression on Discovery data bag
> -
>
> Key: IGNITE-11262
> URL: https://issues.apache.org/jira/browse/IGNITE-11262
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
>Priority: Major
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The size of the GridComponent discovery data may increase significantly in 
> large deployments.
> Examples:
> 1) With more than 3K caches that have QueryEntity configured, the 
> {{GridCacheProcessor}} part of the {{DiscoveryDataBag}} consumes more than 20 MB.
> 2) If the cluster contains more than 13K objects, the 
> {{GridMarshallerMappingProcessor}} data exceeds 1 MB.
> 3) In a cluster with more than 3K types in binary format, the 
> {{CacheObjectBinaryProcessorImpl}} data can grow to 10 MB.
> In most cases the data contains duplicated structures, so simple zip 
> compression can reduce its size considerably.
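A minimal sketch of the kind of compression being proposed, using plain java.util.zip
(this is not the actual patch, which has to integrate with the discovery data bag
marshalling):

{code:java}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

/** Sketch: zip-compress serialized discovery data before adding it to the data bag. */
final class DiscoveryDataCompressionSketch {
    static byte[] compress(byte[] marshalled) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();

        try (DeflaterOutputStream out =
            new DeflaterOutputStream(bos, new Deflater(Deflater.BEST_SPEED))) {
            out.write(marshalled);
        }

        return bos.toByteArray();
    }

    static byte[] decompress(byte[] compressed) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();

        try (InflaterInputStream in =
            new InflaterInputStream(new ByteArrayInputStream(compressed))) {
            byte[] buf = new byte[8192];

            for (int n; (n = in.read(buf)) > 0; )
                bos.write(buf, 0, n);
        }

        return bos.toByteArray();
    }
}
{code}
Since the duplicated structures mentioned above compress well, even the fastest
deflate level should cut the multi-megabyte components substantially.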





[jira] [Commented] (IGNITE-11059) Print information about pending locks queue in case of dht local tx timeout.

2019-02-27 Thread Ivan Daschinskiy (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779623#comment-16779623
 ] 

Ivan Daschinskiy commented on IGNITE-11059:
---

[~ivan.glukos] Is it ok?
{noformat}
Transaction tx=GridNearTxLocal 
[xid=d63ad303961--09b3-b63c--0002, xidVer=GridCacheVersion 
[topVer=162772540, order=1551292539757, nodeOrder=2], 
nearXid=d63ad303961--09b3-b63c--0002, 
nearXidVer=GridCacheVersion [topVer=162772540, order=1551292539757, 
nodeOrder=2], nearNodeId=a2b85431-604d-4a60-a985-f7a0e291, label=lock] 
timed out, can't acquire lock for key=KeyCacheObjectImpl [part=1, val=1, 
hasValBytes=true], owner=[xid=c63ad303961--09b3-b63c--0002, 
xidVer=GridCacheVersion [topVer=162772540, order=1551292539756, nodeOrder=2], 
nearXid=b63ad303961--09b3-b63c--0001, 
nearXidVer=GridCacheVersion [topVer=162772540, order=1551292539755, 
nodeOrder=1], label=lock, nearNodeId=174c06e1-4115-4569-a8bc-664a92b0], 
queueSize=1
{noformat}

> Print information about pending locks queue in case of dht local tx timeout.
> 
>
> Key: IGNITE-11059
> URL: https://issues.apache.org/jira/browse/IGNITE-11059
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexei Scherbakov
>Assignee: Ivan Daschinskiy
>Priority: Major
>  Labels: newbie
> Fix For: 2.8
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Currently, in case of a DHT local tx timeout it is hard to understand which keys 
> were not locked.
> Additional information about the pending keys should be printed in the log on 
> timeout:
> the key and the info of the tx holding the lock (xid, and label if present).
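A sketch of the kind of message proposed here, matching the sample pasted in the
comment above; the field names and parameters are illustrative, not the final format:

{code:java}
/** Sketch: diagnostic line printed when a DHT local tx times out waiting for a lock. */
final class TxTimeoutMessageSketch {
    static String lockTimeoutMessage(Object tx, Object key, Object lockOwnerTx, int queueSize) {
        return new StringBuilder()
            .append("Transaction tx=").append(tx)               // xid, nearXid, label, near node id
            .append(" timed out, can't acquire lock for key=").append(key)
            .append(", owner=").append(lockOwnerTx)             // tx currently holding the lock
            .append(", queueSize=").append(queueSize)           // pending lock candidates for the key
            .toString();
    }
}
{code}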





[jira] [Updated] (IGNITE-11436) sqlline is not working on Java 9+

2019-02-27 Thread Ilya Kasnacheev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Kasnacheev updated IGNITE-11436:
-
Ignite Flags:   (was: Docs Required)

> sqlline is not working on Java 9+
> -
>
> Key: IGNITE-11436
> URL: https://issues.apache.org/jira/browse/IGNITE-11436
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7
>Reporter: Anton Kurbanov
>Assignee: Anton Kurbanov
>Priority: Major
> Fix For: 2.8
>
>
> {code}
> WARNING: An illegal reflective access operation has occurred
> WARNING: Illegal reflective access by 
> org.apache.ignite.internal.util.GridUnsafe$2 
> (file:/var/lib/teamcity/data/work/ead1d0aeaa1f7813/i2test/var/suite-client/art-gg-pro/libs/ignite-core-2.7.2-p1.jar)
>  to field java.nio.Buffer.address
> WARNING: Please consider reporting this to the maintainers of 
> org.apache.ignite.internal.util.GridUnsafe$2
> WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
> WARNING: All illegal access operations will be denied in a future release
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.ignite.internal.util.IgniteUtils
>   at org.apache.ignite.internal.IgnitionEx.(IgnitionEx.java:209)
>   at 
> org.apache.ignite.internal.jdbc2.JdbcConnection.loadConfiguration(JdbcConnection.java:323)
>   at 
> org.apache.ignite.internal.jdbc2.JdbcConnection.getIgnite(JdbcConnection.java:295)
>   at 
> org.apache.ignite.internal.jdbc2.JdbcConnection.(JdbcConnection.java:229)
>   at org.apache.ignite.IgniteJdbcDriver.connect(IgniteJdbcDriver.java:437)
>   at sqlline.DatabaseConnection.connect(DatabaseConnection.java:156)
>   at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:204)
>   at sqlline.Commands.connect(Commands.java:1095)
>   at sqlline.Commands.connect(Commands.java:1001)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
>   at sqlline.SqlLine.dispatch(SqlLine.java:791)
>   at sqlline.SqlLine.initArgs(SqlLine.java:566)
>   at sqlline.SqlLine.begin(SqlLine.java:643)
>   at sqlline.SqlLine.start(SqlLine.java:373)
>   at sqlline.SqlLine.main(SqlLine.java:265)
> {code}
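A typical remedy on Java 9+ is to pass module-access options of this kind to the JVM
(listed purely for illustration; the exact options applied by the fix are not shown in
this digest):

{noformat}
--add-exports=java.base/jdk.internal.misc=ALL-UNNAMED
--add-exports=java.base/sun.nio.ch=ALL-UNNAMED
--add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED
--add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED
--illegal-access=permit
{noformat}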





[jira] [Resolved] (IGNITE-11436) sqlline is not working on Java 9+

2019-02-27 Thread Ilya Kasnacheev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Kasnacheev resolved IGNITE-11436.
--
   Resolution: Fixed
Fix Version/s: 2.8

Thank you for your contribution! I have merged it to master.

> sqlline is not working on Java 9+
> -
>
> Key: IGNITE-11436
> URL: https://issues.apache.org/jira/browse/IGNITE-11436
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7
>Reporter: Anton Kurbanov
>Assignee: Anton Kurbanov
>Priority: Major
> Fix For: 2.8
>
>
> {code}
> WARNING: An illegal reflective access operation has occurred
> WARNING: Illegal reflective access by 
> org.apache.ignite.internal.util.GridUnsafe$2 
> (file:/var/lib/teamcity/data/work/ead1d0aeaa1f7813/i2test/var/suite-client/art-gg-pro/libs/ignite-core-2.7.2-p1.jar)
>  to field java.nio.Buffer.address
> WARNING: Please consider reporting this to the maintainers of 
> org.apache.ignite.internal.util.GridUnsafe$2
> WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
> WARNING: All illegal access operations will be denied in a future release
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.ignite.internal.util.IgniteUtils
>   at org.apache.ignite.internal.IgnitionEx.(IgnitionEx.java:209)
>   at 
> org.apache.ignite.internal.jdbc2.JdbcConnection.loadConfiguration(JdbcConnection.java:323)
>   at 
> org.apache.ignite.internal.jdbc2.JdbcConnection.getIgnite(JdbcConnection.java:295)
>   at 
> org.apache.ignite.internal.jdbc2.JdbcConnection.(JdbcConnection.java:229)
>   at org.apache.ignite.IgniteJdbcDriver.connect(IgniteJdbcDriver.java:437)
>   at sqlline.DatabaseConnection.connect(DatabaseConnection.java:156)
>   at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:204)
>   at sqlline.Commands.connect(Commands.java:1095)
>   at sqlline.Commands.connect(Commands.java:1001)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
>   at sqlline.SqlLine.dispatch(SqlLine.java:791)
>   at sqlline.SqlLine.initArgs(SqlLine.java:566)
>   at sqlline.SqlLine.begin(SqlLine.java:643)
>   at sqlline.SqlLine.start(SqlLine.java:373)
>   at sqlline.SqlLine.main(SqlLine.java:265)
> {code}





[jira] [Created] (IGNITE-11438) TTL manager may not clean entries from the underlying CacheDataStore

2019-02-27 Thread Vyacheslav Koptilin (JIRA)
Vyacheslav Koptilin created IGNITE-11438:


 Summary: TTL manager may not clean entries from the underlying 
CacheDataStore
 Key: IGNITE-11438
 URL: https://issues.apache.org/jira/browse/IGNITE-11438
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.7
Reporter: Vyacheslav Koptilin


Please see the attached test:
{code:java}
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *  http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.ignite.internal.processors.cache.persistence.db;

import java.util.TreeMap;
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.AccessedExpiryPolicy;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import javax.cache.expiry.ExpiryPolicy;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteCheckedException;
import org.apache.ignite.cache.CachePeekMode;
import org.apache.ignite.cache.CacheRebalanceMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.WALMode;
import org.apache.ignite.internal.IgniteEx;
import org.apache.ignite.internal.IgniteInterruptedCheckedException;
import org.apache.ignite.internal.processors.cache.GridCacheContext;
import org.apache.ignite.internal.processors.cache.GridCacheSharedContext;
import org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManager;
import org.apache.ignite.internal.processors.cache.IgniteCacheProxy;
import 
org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition;
import org.apache.ignite.internal.util.lang.GridAbsPredicate;
import org.apache.ignite.internal.util.lang.GridCursor;
import org.apache.ignite.internal.util.typedef.PA;
import org.apache.ignite.internal.util.typedef.internal.CU;
import org.apache.ignite.testframework.GridTestUtils;
import org.apache.ignite.testframework.MvccFeatureChecker;
import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
import org.junit.Test;

import static 
org.apache.ignite.IgniteSystemProperties.IGNITE_BASELINE_AUTO_ADJUST_ENABLED;

/**
 * Test TTL worker with persistence enabled
 */
public class IgnitePdsWithTtlTest extends GridCommonAbstractTest {
/** */
public static final String CACHE_NAME = "expirableCache";

/** */
public static final String GROUP_NAME = "group1";

/** */
public static final int PART_SIZE = 32;

/** */
private static final int EXPIRATION_TIMEOUT = 10;

/** */
public static final int ENTRIES = 100_000;

/** {@inheritDoc} */
@Override protected void beforeTestsStarted() throws Exception {
System.setProperty(IGNITE_BASELINE_AUTO_ADJUST_ENABLED, "false");

super.beforeTestsStarted();
}

/** {@inheritDoc} */
@Override protected void afterTestsStopped() throws Exception {
super.afterTestsStopped();

System.clearProperty(IGNITE_BASELINE_AUTO_ADJUST_ENABLED);
}

/** {@inheritDoc} */
@Override protected void beforeTest() throws Exception {

MvccFeatureChecker.skipIfNotSupported(MvccFeatureChecker.Feature.EXPIRATION);

super.beforeTest();

cleanPersistenceDir();
}

/** {@inheritDoc} */
@Override protected void afterTest() throws Exception {
super.afterTest();

//protection if test failed to finish, e.g. by error
stopAllGrids();

cleanPersistenceDir();
}

/** {@inheritDoc} */
@Override protected IgniteConfiguration getConfiguration(String 
igniteInstanceName) throws Exception {
final IgniteConfiguration cfg = 
super.getConfiguration(igniteInstanceName);

final CacheConfiguration ccfg = new CacheConfiguration();
ccfg.setName(CACHE_NAME);
ccfg.setGroupName(GROUP_NAME);
ccfg.setAffinity(new 

[jira] [Commented] (IGNITE-11262) Compression on Discovery data bag

2019-02-27 Thread Ivan Rakov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779496#comment-16779496
 ] 

Ivan Rakov commented on IGNITE-11262:
-

[~v.pyatkov], please merge fresh master to PR branch.

> Compression on Discovery data bag
> -
>
> Key: IGNITE-11262
> URL: https://issues.apache.org/jira/browse/IGNITE-11262
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
>Priority: Major
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The size of the GridComponent discovery data may increase significantly in 
> large deployments.
> Examples:
> 1) With more than 3K caches that have QueryEntity configured, the 
> {{GridCacheProcessor}} part of the {{DiscoveryDataBag}} consumes more than 20 MB.
> 2) If the cluster contains more than 13K objects, the 
> {{GridMarshallerMappingProcessor}} data exceeds 1 MB.
> 3) In a cluster with more than 3K types in binary format, the 
> {{CacheObjectBinaryProcessorImpl}} data can grow to 10 MB.
> In most cases the data contains duplicated structures, so simple zip 
> compression can reduce its size considerably.





[jira] [Commented] (IGNITE-11059) Print information about pending locks queue in case of dht local tx timeout.

2019-02-27 Thread Ivan Rakov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779494#comment-16779494
 ] 

Ivan Rakov commented on IGNITE-11059:
-

[~agoncharuk], [~ascherbakov], what do you think?

> Print information about pending locks queue in case of dht local tx timeout.
> 
>
> Key: IGNITE-11059
> URL: https://issues.apache.org/jira/browse/IGNITE-11059
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexei Scherbakov
>Assignee: Ivan Daschinskiy
>Priority: Major
>  Labels: newbie
> Fix For: 2.8
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Currently, in case of a DHT local tx timeout it is hard to understand which keys 
> were not locked.
> Additional information about the pending keys should be printed in the log on 
> timeout:
> the key and the info of the tx holding the lock (xid, and label if present).





[jira] [Commented] (IGNITE-11059) Print information about pending locks queue in case of dht local tx timeout.

2019-02-27 Thread Ivan Rakov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779492#comment-16779492
 ] 

Ivan Rakov commented on IGNITE-11059:
-

I think we can still make the message more informative.
{code:java}
StringBuilder sb = new StringBuilder().append("Timed out waiting for lock response, holding lock on: ");
{code}
I'd describe the situation in more detail and add more info about the current 
transaction, something like "Transaction [nearXid=<>, xid=<>, nearNodeId=<>] commit on 
primary node timed out, can't acquire lock for key: ".

> Print information about pending locks queue in case of dht local tx timeout.
> 
>
> Key: IGNITE-11059
> URL: https://issues.apache.org/jira/browse/IGNITE-11059
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexei Scherbakov
>Assignee: Ivan Daschinskiy
>Priority: Major
>  Labels: newbie
> Fix For: 2.8
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Currently, in case of a DHT local tx timeout it is hard to understand which keys 
> were not locked.
> Additional information about the pending keys should be printed in the log on 
> timeout:
> the key and the info of the tx holding the lock (xid, and label if present).





[jira] [Commented] (IGNITE-10925) Failure to submit affinity task from client node

2019-02-27 Thread Andrey Gura (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779470#comment-16779470
 ] 

Andrey Gura commented on IGNITE-10925:
--

[~ilyak] LGTM. Please merge. Thanks!

> Failure to submit affinity task from client node
> 
>
> Key: IGNITE-10925
> URL: https://issues.apache.org/jira/browse/IGNITE-10925
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.7
>Reporter: Prasad
>Assignee: Ilya Kasnacheev
>Priority: Blocker
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Getting the following exception while submitting an affinity task from a client 
> node to a server node.
> Before submitting the affinity task, Ignite first obtains the cached affinity 
> function (AffinityInfo) by submitting the cluster-wide task "AffinityJob". 
> While retrieving the output of this AffinityJob, Ignite deserializes the 
> output, and that is where the exception is thrown.
> The code fails while unmarshalling the cache snapshot metrics on the client node.
>  
> [Userlist 
> Discussion|http://apache-ignite-users.70518.x6.nabble.com/After-upgrading-2-7-getting-Unexpected-error-occurred-during-unmarshalling-td26262.html]
> [Reproducer 
> Project|https://github.com/prasadbhalerao1983/IgniteIssueReproducer.git]
>  
> Step to Reproduce:
> 1) First Run com.example.demo.Server class as a java program
> 2) Then run com.example.demo.Client as java program.
>  
> {noformat}
> 2019-01-14 15:37:02.723 ERROR 10712 --- [springDataNode%] 
> o.a.i.i.processors.task.GridTaskWorker   : Error deserializing job response: 
> GridJobExecuteResponse [nodeId=e9a24c20-0d00-4808-b2f5-13e1ce35496a, 
> sesId=76324db4861-1d85ad49-5b25-454a-b69c-d8685cfc73b0, 
> jobId=86324db4861-1d85ad49-5b25-454a-b69c-d8685cfc73b0, gridEx=null, 
> isCancelled=false, retry=null]
> org.apache.ignite.IgniteCheckedException: Failed to unmarshal object with 
> optimized marshaller
>  at 
> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10146) 
> ~[ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:831)
>  ~[ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1081)
>  [ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1316)
>  [ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1569)
>  [ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1197)
>  [ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:127)
>  [ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1093)
>  [ignite-core-2.7.0.jar:2.7.0]
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [na:1.8.0_144]
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [na:1.8.0_144]
>  at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]
> Caused by: org.apache.ignite.binary.BinaryObjectException: Failed to 
> unmarshal object with optimized marshaller
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadOptimized(BinaryUtils.java:1765)
>  ~[ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1964)
>  ~[ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
>  ~[ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:313)
>  ~[ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:102)
>  ~[ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:82)
>  ~[ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10140) 
> ~[ignite-core-2.7.0.jar:2.7.0]
>  ... 10 common frames omitted
> Caused by: org.apache.ignite.IgniteCheckedException: Failed to deserialize 
> object with given class loader: 
> [clsLdr=sun.misc.Launcher$AppClassLoader@18b4aac2, err=Failed to deserialize 
> object [typeName=org.apache.ignite.internal.util.lang.GridTuple3]]
>  at 
> 

[jira] [Commented] (IGNITE-6563) .NET: ICache.GetLongSize

2019-02-27 Thread Pavel Tupitsyn (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-6563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779443#comment-16779443
 ] 

Pavel Tupitsyn commented on IGNITE-6563:


[~ivandasch] merged to master, thank you!

> .NET: ICache.GetLongSize
> 
>
> Key: IGNITE-6563
> URL: https://issues.apache.org/jira/browse/IGNITE-6563
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Ivan Daschinskiy
>Priority: Major
>  Labels: .NET
> Fix For: 2.8
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {{IgniteCache}} in Java has {{sizeLong}} and {{localSizeLong}}. Add similar 
> methods to {{ICache}}.
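For reference, a quick illustration of the existing Java-side methods that the .NET
ICache counterpart would mirror:

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CachePeekMode;

/** Quick illustration of the Java methods referenced above. */
final class SizeLongExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("test");

            long total = cache.sizeLong(CachePeekMode.PRIMARY);      // cluster-wide size as long
            long local = cache.localSizeLong(CachePeekMode.PRIMARY); // this node's size as long

            System.out.println("size=" + total + ", localSize=" + local);
        }
    }
}
{code}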





[jira] [Updated] (IGNITE-7384) MVCC TX: Support historical rebalance

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-7384:
-
Description: 
Currently MVCC doesn't support historical (delta) rebalance.

The main difficulty is that MVCC writes changes during the tx active phase, while 
the partition update version (aka the update counter) is applied on tx finish. 
This means we cannot start iterating over the WAL right from the pointer where the 
update counter was updated, but must also include the updates made by the 
transaction that advanced the counter.

Currently proposed approach:
 * Maintain a list of active TXs with update counter (UC) which was actual at 
the time before TX did its first update (on per partition basis)
 * on each checkpoint save two counters - update counter (UC) and back counter 
(BC) which is earliest UC mapped to a tx from active list at checkpoint time.
 * during local restore move UC and BC forward as far as possible.
 * send BC instead of update counter in demand message.
 * start iteration from a first checkpoint having UC less or equal received BC

See [linked dev list 
thread|http://apache-ignite-developers.2346864.n4.nabble.com/Historical-rebalance-td38380.html]
 for details

  was:
Currently MVCC doesn't support historical (delta) rebalance.

The main difficulty is that MVCC writes changes on tx active phase while 
partition update version, aka update counter, is being applied on tx finish. 
This means we cannot start iteration over WAL right from the pointer where the 
update counter updated, but should include updates, which the transaction that 
updated the counter did.

Currently proposed approach:

Maintain a list of active TXs with update counter (UC) which was actual at the 
time before TX did its first update (on per partition basis)
 on each checkpoint save two counters - update counter (UC) and back counter 
(BC) which is earliest UC mapped to a tx from active list at checkpoint time.
 during local restore move UC and BC forward as far as possible. 
 send BC instead of update counter in demand message.
 start iteration from a first checkpoint having UC less or equal received BC

See [linked dev list 
thread|http://apache-ignite-developers.2346864.n4.nabble.com/Historical-rebalance-td38380.html]
 for details


> MVCC TX: Support historical rebalance
> -
>
> Key: IGNITE-7384
> URL: https://issues.apache.org/jira/browse/IGNITE-7384
> Project: Ignite
>  Issue Type: Task
>  Components: mvcc
>Reporter: Igor Seliverstov
>Priority: Major
> Fix For: 2.8
>
>
> Currently MVCC doesn't support historical (delta) rebalance.
> The main difficulty is that MVCC writes changes on tx active phase while 
> partition update version, aka update counter, is being applied on tx finish. 
> This means we cannot start iteration over WAL right from the pointer where 
> the update counter updated, but should include updates, which the transaction 
> that updated the counter did.
> Currently proposed approach:
>  * Maintain a list of active TXs with update counter (UC) which was actual at 
> the time before TX did its first update (on per partition basis)
>  * on each checkpoint save two counters - update counter (UC) and back 
> counter (BC) which is earliest UC mapped to a tx from active list at 
> checkpoint time.
>  * during local restore move UC and BC forward as far as possible.
>  * send BC instead of update counter in demand message.
>  * start iteration from a first checkpoint having UC less or equal received BC
> See [linked dev list 
> thread|http://apache-ignite-developers.2346864.n4.nabble.com/Historical-rebalance-td38380.html]
>  for details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7384) MVCC TX: Support historical rebalance

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-7384:
-
Description: 
Currently MVCC doesn't support historical (delta) rebalance.

The main difficulty is that MVCC writes changes on tx active phase while 
partition update version, aka update counter, is being applied on tx finish. 
This means we cannot start iteration over WAL right from the pointer where the 
update counter updated, but should include updates, which the transaction that 
updated the counter did.

Currently proposed approach:

Maintain a list of active TXs with update counter (UC) which was actual at the 
time before TX did its first update (on per partition basis)
 on each checkpoint save two counters - update counter (UC) and back counter 
(BC) which is earliest UC mapped to a tx from active list at checkpoint time.
 during local restore move UC and BC forward as far as possible. 
 send BC instead of update counter in demand message.
 start iteration from a first checkpoint having UC less or equal received BC

See [linked dev list 
thread|http://apache-ignite-developers.2346864.n4.nabble.com/Historical-rebalance-td38380.html]
 for details

  was:
Currently MVCC doesn't support historical (delta) rebalance.

The main difficulty is that MVCC writes changes on tx active phase while 
partition update version, aka update counter, is being applied on tx finish. 
This means we cannot start iteration over WAL right from the pointer where the 
update counter updated, but should include updates, which the transaction that 
updated the counter did.

Currently proposed approach:

Maintain a list of active TXs with update counter (UC) which was actual at the 
time before TX did its first update (on per partition basis)
on each checkpoint save two counters - update counter (UC) and back counter 
(BC) which is earliest UC mapped to a tx from active list at checkpoint time.
during local restore move UC and BC forward as far as possible. 
send BC instead of update counter in demand message.
start iteration from a first checkpoint having UC less or equal received BC

See linked dev list thread for details


> MVCC TX: Support historical rebalance
> -
>
> Key: IGNITE-7384
> URL: https://issues.apache.org/jira/browse/IGNITE-7384
> Project: Ignite
>  Issue Type: Task
>  Components: mvcc
>Reporter: Igor Seliverstov
>Priority: Major
> Fix For: 2.8
>
>
> Currently MVCC doesn't support historical (delta) rebalance.
> The main difficulty is that MVCC writes changes on tx active phase while 
> partition update version, aka update counter, is being applied on tx finish. 
> This means we cannot start iteration over WAL right from the pointer where 
> the update counter updated, but should include updates, which the transaction 
> that updated the counter did.
> Currently proposed approach:
> Maintain a list of active TXs with update counter (UC) which was actual at 
> the time before TX did its first update (on per partition basis)
>  on each checkpoint save two counters - update counter (UC) and back counter 
> (BC) which is earliest UC mapped to a tx from active list at checkpoint time.
>  during local restore move UC and BC forward as far as possible. 
>  send BC instead of update counter in demand message.
>  start iteration from a first checkpoint having UC less or equal received BC
> See [linked dev list 
> thread|http://apache-ignite-developers.2346864.n4.nabble.com/Historical-rebalance-td38380.html]
>  for details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7384) MVCC TX: Support historical rebalance

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-7384:
-
Description: 
Currently MVCC doesn't support historical (delta) rebalance.

The main difficulty is that MVCC writes changes on tx active phase while 
partition update version, aka update counter, is being applied on tx finish. 
This means we cannot start iteration over WAL right from the pointer where the 
update counter updated, but should include updates, which the transaction that 
updated the counter did.

Currently proposed approach:

Maintain a list of active TXs with update counter (UC) which was actual at the 
time before TX did its first update (on per partition basis)
on each checkpoint save two counters - update counter (UC) and back counter 
(BC) which is earliest UC mapped to a tx from active list at checkpoint time.
during local restore move UC and BC forward as far as possible. 
send BC instead of update counter in demand message.
start iteration from a first checkpoint having UC less or equal received BC

See linked dev list thread for details

  was:
In case a node returns to topology it requests a delta instead of full 
partition, WAL-based iterator is used there 
({{o.a.i.i.processors.cache.persistence.GridCacheOffheapManager#rebalanceIterator}})

WAL-based iterator doesn't contain MVCC versions which causes issues.


> MVCC TX: Support historical rebalance
> -
>
> Key: IGNITE-7384
> URL: https://issues.apache.org/jira/browse/IGNITE-7384
> Project: Ignite
>  Issue Type: Task
>  Components: mvcc
>Reporter: Igor Seliverstov
>Priority: Major
> Fix For: 2.8
>
>
> Currently MVCC doesn't support historical (delta) rebalance.
> The main difficulty is that MVCC writes changes on tx active phase while 
> partition update version, aka update counter, is being applied on tx finish. 
> This means we cannot start iteration over WAL right from the pointer where 
> the update counter updated, but should include updates, which the transaction 
> that updated the counter did.
> Currently proposed approach:
> Maintain a list of active TXs with update counter (UC) which was actual at 
> the time before TX did its first update (on per partition basis)
> on each checkpoint save two counters - update counter (UC) and back counter 
> (BC) which is earliest UC mapped to a tx from active list at checkpoint time.
> during local restore move UC and BC forward as far as possible. 
> send BC instead of update counter in demand message.
> start iteration from a first checkpoint having UC less or equal received BC
> See linked dev list thread for details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11437) Start grid in remote JVM in test framework fails if TDE is enabled

2019-02-27 Thread Aleksey Plekhanov (JIRA)
Aleksey Plekhanov created IGNITE-11437:
--

 Summary: Start grid in remote JVM in test framework fails if TDE 
is enabled
 Key: IGNITE-11437
 URL: https://issues.apache.org/jira/browse/IGNITE-11437
 Project: Ignite
  Issue Type: Bug
Reporter: Aleksey Plekhanov


When we start a grid in a remote JVM with TDE enabled, it fails with an exception:
{noformat}
java.lang.NullPointerException
at java.lang.ThreadLocal$SuppliedThreadLocal.initialValue(ThreadLocal.java:284)
at java.lang.ThreadLocal.setInitialValue(ThreadLocal.java:180)
at java.lang.ThreadLocal.get(ThreadLocal.java:170)
at 
org.apache.ignite.spi.encryption.keystore.KeystoreEncryptionSpi.encrypt(KeystoreEncryptionSpi.java:211){noformat}
The test framework uses {{XStream}} to pass the Ignite configuration to the remote JVM. 
{{XStream}} cannot serialize a lambda expression and replaces the lambda with {{null}}. 
So, after deserialization the {{ThreadLocal}} object has {{supplier == null}}.

Reproducer:
{code:java}
public class TdeTest extends GridCommonAbstractTest {
    /** {@inheritDoc} */
    @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception {
        IgniteConfiguration cfg = super.getConfiguration(gridName);

        // Persistent data region: TDE encrypts data on disk.
        cfg.setDataStorageConfiguration(new DataStorageConfiguration()
            .setDefaultDataRegionConfiguration(new DataRegionConfiguration().setPersistenceEnabled(true)));

        KeystoreEncryptionSpi encSpi = new KeystoreEncryptionSpi();

        encSpi.setKeyStorePath(AbstractEncryptionTest.KEYSTORE_PATH);
        encSpi.setKeyStorePassword(AbstractEncryptionTest.KEYSTORE_PASSWORD.toCharArray());

        cfg.setEncryptionSpi(encSpi);

        // Encrypted cache triggers KeystoreEncryptionSpi.encrypt(...) on the remote node.
        cfg.setCacheConfiguration(new CacheConfiguration().setName("cache").setEncryptionEnabled(true));

        return cfg;
    }

    /** {@inheritDoc} */
    @Override protected boolean isMultiJvm() {
        return true;
    }

    /** Starting the second grid in a separate JVM reproduces the NPE. */
    @Test
    public void testTdeMultiJvm() throws Exception {
        startGrids(2);
    }
}
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IGNITE-9530) MVCC TX: Local caches support.

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov resolved IGNITE-9530.
--
Resolution: Won't Do

> MVCC TX: Local caches support.
> --
>
> Key: IGNITE-9530
> URL: https://issues.apache.org/jira/browse/IGNITE-9530
> Project: Ignite
>  Issue Type: Task
>  Components: cache, mvcc
>Reporter: Roman Kondakov
>Priority: Major
>
> Mvcc support for local caches is turned off now. We need to consider 
> implementing it in the future.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IGNITE-8865) SQL Transactions: Set sequential false if update query has no order

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov resolved IGNITE-8865.
--
Resolution: Duplicate

> SQL Transactions: Set sequential false if update query has no order
> ---
>
> Key: IGNITE-8865
> URL: https://issues.apache.org/jira/browse/IGNITE-8865
> Project: Ignite
>  Issue Type: Task
>  Components: mvcc
>Reporter: Igor Seliverstov
>Priority: Major
>
> See appropriate TODO in DmlStatementsProcessor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IGNITE-7998) SQL: Improve MVCC vacuum performance by iterating over data pages instead of cache tree.

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov resolved IGNITE-7998.
--
Resolution: Duplicate

> SQL: Improve MVCC vacuum performance by iterating over data pages instead of 
> cache tree. 
> -
>
> Key: IGNITE-7998
> URL: https://issues.apache.org/jira/browse/IGNITE-7998
> Project: Ignite
>  Issue Type: Improvement
>  Components: mvcc
>Reporter: Roman Kondakov
>Priority: Major
>
> At the moment the vacuum process uses cache trees to find outdated (dead) entries 
> and cache and index trees to clean them up. This is not efficient for several 
> reasons. For example, we have to lock a data page for each cache tree entry to 
> find out whether the entry is dead.
> We can consider direct iteration over data pages as a possible improvement 
> of the vacuum process. A data page iteration prototype demonstrated a 5-10x 
> improvement over the tree iteration.
> At the first stage we need to implement direct data page iteration only for 
> collecting dead entry links.
> At the second stage we need to consider removing links to dead entries from 
> index pages directly. In other words, we need to efficiently remove batches 
> of dead links from indexes without traversing the cache and index trees one dead 
> link at a time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IGNITE-9735) Determine partitions during parsing for MVCC DML statements

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov resolved IGNITE-9735.
--
Resolution: Duplicate

Already done in scope of IGNITE-10559

> Determine partitions during parsing for MVCC DML statements
> ---
>
> Key: IGNITE-9735
> URL: https://issues.apache.org/jira/browse/IGNITE-9735
> Project: Ignite
>  Issue Type: Improvement
>  Components: mvcc
>Reporter: Ivan Pavlukhin
>Priority: Major
>  Labels: mvcc_stabilization_stage_1, transactions
>
> Now, for MVCC caches, a query like the one below is broadcast instead of being 
> sent to a single node only.
> {code:java}
> update table set _val = _val + 1 where _key = ?{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11429) Error on querying IGNITE.LOCAL_SQL_RUNNING_QUERIES SQL view

2019-02-27 Thread Anton Vinogradov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779412#comment-16779412
 ] 

Anton Vinogradov commented on IGNITE-11429:
---

Duplicates IGNITE-11430?

> Error on querying IGNITE.LOCAL_SQL_RUNNING_QUERIES SQL view
> -
>
> Key: IGNITE-11429
> URL: https://issues.apache.org/jira/browse/IGNITE-11429
> Project: Ignite
>  Issue Type: Bug
>Reporter: Yury Gerzhedovich
>Priority: Major
>
> Need to fix TIMESTAMP_WITH_TIMEZONE issue in the LOCAL_SQL_RUNNING_QUERIES 
> view.
> SELECT * FROM IGNITE.LOCAL_SQL_RUNNING_QUERIES;
>  
> [2019-02-27 
> 11:28:24,357][ERROR][client-connector-#56][ClientListenerNioListener] Failed 
> to process client request [req=JdbcQueryExecuteRequest [schemaName=PUBLIC, 
> pageSize=1024, maxRows=200, sqlQry=SELECT * FROM 
> IGNITE.LOCAL_SQL_RUNNING_QUERIES, args=Object[] [], 
> stmtType=ANY_STATEMENT_TYPE, autoCommit=true]]
> class org.apache.ignite.binary.BinaryObjectException: Custom objects are not 
> supported
>  at 
> org.apache.ignite.internal.processors.odbc.SqlListenerUtils.writeObject(SqlListenerUtils.java:219)
>  at 
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcUtils.writeItems(JdbcUtils.java:44)
>  at 
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcQueryExecuteResult.writeBinary(JdbcQueryExecuteResult.java:128)
>  at 
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcResponse.writeBinary(JdbcResponse.java:88)
>  at 
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcMessageParser.encode(JdbcMessageParser.java:91)
>  at 
> org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:198)
>  at 
> org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:48)
>  at 
> org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
>  at 
> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
>  at 
> org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
>  at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
>  at 
> org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IGNITE-10248) MVCC TX: remove redundant partition checking from GridDhtTxAbstractEnlistFuture

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov resolved IGNITE-10248.
---
Resolution: Duplicate

Already done in scope of IGNITE-10752

> MVCC TX: remove redundant partition checking from 
> GridDhtTxAbstractEnlistFuture
> ---
>
> Key: IGNITE-10248
> URL: https://issues.apache.org/jira/browse/IGNITE-10248
> Project: Ignite
>  Issue Type: Improvement
>  Components: mvcc
>Reporter: Igor Seliverstov
>Assignee: Igor Seliverstov
>Priority: Major
> Fix For: 2.8
>
>
> We need to ensure that on unstable topology all queries (even those that 
> don't require a reducer) execute with a reducer (which supports execution 
> on unstable topology).
> All verifications should be done inside the 
> {{*IgniteH2Indexing#prepareDistributedUpdate*}} method.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10829) MVCC TX: Lazy query execution for query enlists.

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-10829:
--
Description: 
When running query enlist operations (GridNearTxQueryEnlistFuture) we push query 
execution to data nodes; such execution runs a local select 
(GridDhtTxQueryEnlistFuture), gets a cursor and executes a write operation for 
each select result row.

The main difficulty starts when we cannot execute the whole operation at once (due 
to a lock conflict or backup message queue overflow). In such a case we break 
iteration and save a context (detach the H2 connection for further exclusive usage 
and save the current position in the cursor). This is not an issue in non-lazy mode, 
since the cursor internally holds a list of all needed entries and doesn't hold any 
resources, but in lazy mode we may face two issues:
1) Schema change in the middle of iteration
2) Possible starvation because of heavy, time-consuming operations in the cache 
pool, which is used by default for operation continuation.

As soon as IGNITE-9171 is implemented, possible lazy execution has to be 
taken into consideration. This means (see the sketch below):

1) before breaking iteration we need to release all shared locks held on the 
tables being iterated.
2) before continuing iteration we need to acquire shared locks on all needed 
tables and check that the schema wasn't changed while the locks were released.
3) the operation should be continued in the same pool it was started in to prevent 
possible starvation of concurrent cache operations (see IGNITE-10597).
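
A sketch of the break/resume protocol from the three points above, assuming a 
simple per-table shared-lock abstraction; all names are illustrative, not actual 
Ignite internals:
{code:java}
import java.util.List;
import java.util.concurrent.Executor;

class LazyEnlistContinuation {
    /** Simplified stand-in for a table with a shared lock and a schema version. */
    interface TableLock {
        void lockShared();
        void unlockShared();
        int schemaVersion();
    }

    private final List<TableLock> tables;
    private final Executor startedIn;       // The pool the operation was started in (point 3, IGNITE-10597).
    private final int[] schemaVersAtStart;

    LazyEnlistContinuation(List<TableLock> tables, Executor startedIn) {
        this.tables = tables;
        this.startedIn = startedIn;
        this.schemaVersAtStart = tables.stream().mapToInt(TableLock::schemaVersion).toArray();
    }

    /** Point 1: release all shared locks on the iterated tables before breaking the iteration. */
    void park() {
        tables.forEach(TableLock::unlockShared);
    }

    /** Points 2 and 3: re-lock, verify the schema didn't change, and continue in the original pool. */
    void resume(Runnable continuation) {
        startedIn.execute(() -> {
            tables.forEach(TableLock::lockShared);

            for (int i = 0; i < tables.size(); i++) {
                if (tables.get(i).schemaVersion() != schemaVersAtStart[i])
                    throw new IllegalStateException("Schema changed while the iteration was parked.");
            }

            continuation.run();
        });
    }
}
{code}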

  was:
Running query enlist operations (GridNearTxQueryEnlistFuture) we put query 
execution to data nodes, such execution runs a local select 
(GridDhtTxQueryEnlistFuture), gets a cursor and executes write operation for 
each select result row.

The main difficult starts when we cannot execute whole operation at once (due 
to lock conflict or backup message queue overflow). Such case we break 
iteration and save a context (detach H2 connection for further exclusive usage 
and save current position in cursor). There is no issue since in non-lazy mode 
the cursor internally have a list of all needed entries and doesn't hold any 
resources but in lazy mode we may face two issues:
1) Schema change in between of iteration
2) Possible starvation because of heavy time consuming operations in cache 
pool, which used by default for operation continuation. 

As soon as IGNITE-9171 is implemented, possible lazy execution is had to be 
taken into consideration. This mean:

1) before braking iteration we need to release all holding shared locks on on 
being iterated tables.
2) before continue iteration we need to acquire shared locks on all needed 
tables and check the schema wasn't changed in between locks were acquired.
3) the operation should be continued in the same pool it was started to prevent 
possible starvation of concurrent cache operations.


> MVCC TX: Lazy query execution for query enlists.
> 
>
> Key: IGNITE-10829
> URL: https://issues.apache.org/jira/browse/IGNITE-10829
> Project: Ignite
>  Issue Type: Improvement
>  Components: mvcc
>Affects Versions: 2.7
>Reporter: Igor Seliverstov
>Priority: Major
> Fix For: 2.8
>
>
> Running query enlist operations (GridNearTxQueryEnlistFuture) we put query 
> execution to data nodes, such execution runs a local select 
> (GridDhtTxQueryEnlistFuture), gets a cursor and executes write operation for 
> each select result row.
> The main difficult starts when we cannot execute whole operation at once (due 
> to lock conflict or backup message queue overflow). Such case we break 
> iteration and save a context (detach H2 connection for further exclusive 
> usage and save current position in cursor). There is no issue since in 
> non-lazy mode the cursor internally have a list of all needed entries and 
> doesn't hold any resources but in lazy mode we may face two issues:
> 1) Schema change in between of iteration
> 2) Possible starvation because of heavy time consuming operations in cache 
> pool, which used by default for operation continuation. 
> As soon as IGNITE-9171 is implemented, possible lazy execution is had to be 
> taken into consideration. This mean:
> 1) before braking iteration we need to release all holding shared locks on on 
> being iterated tables.
> 2) before continue iteration we need to acquire shared locks on all needed 
> tables and check the schema wasn't changed in between locks were acquired.
> 3) the operation should be continued in the same pool it was started to 
> prevent possible starvation of concurrent cache operations (See IGNITE-10597).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11396) Actualize JUnit3TestLegacyAssert

2019-02-27 Thread Ivan Fedotov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Fedotov updated IGNITE-11396:
--
Description: Rename JUnit3TestLegacyAssert class and actualize methods.  
(was: Replace assert methods by imports.

That will lead to full remove JUnit3TestLegacyAssert class.)

> Actualize JUnit3TestLegacyAssert
> 
>
> Key: IGNITE-11396
> URL: https://issues.apache.org/jira/browse/IGNITE-11396
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Fedotov
>Assignee: Ivan Fedotov
>Priority: Major
>  Labels: iep-30
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Rename JUnit3TestLegacyAssert class and actualize methods.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10729) MVCC TX: Improve VAC using visibility maps

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-10729:
--
Description: 
Currently we have several issues:

1) Vacuum doesn't have a change set, which means it traverses all data to find 
invisible entries; hence it breaks read statistics and makes the whole data set "hot". 
We should traverse data entries instead, and only those entries that were 
updated (linked to newer versions). Moreover, vacuum should traverse only those 
data pages that were updated after the last successful vacuum (at least one entry 
on the data page was linked to a newer one). This can easily be done by 
having a special bit on the data page: any update resets this bit, and vacuum 
traverses only data pages with a zero bit and sets it to 1 after processing 
(see the sketch below).

2) Vacuum traverses partitions instead of data entries, so races like the following 
are possible: a reader checks an entry; an updater removes this entry from the 
partition; vacuum doesn't see the entry and cleans TxLog -> the reader cannot check 
the entry state with TxLog and gets an exception. This race prevents an 
optimization where all entries older than the last successful vacuum version are 
considered COMMITTED (see the previous suggestion).

We need to implement a special structure like visibility maps in PG to reduce 
the amount of examined pages, iterate over updated data pages only and not use 
the cache data tree.
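
A sketch of the per-page bit from point 1, assuming an in-memory bit set keyed by 
page index; this illustrates the bookkeeping only and is not the real data page header:
{code:java}
import java.util.BitSet;

class PageVisibilityMap {
    private final int pageCnt;
    private final BitSet cleanBits;   // Bit == 1 => no updates on the page since the last vacuum pass.

    PageVisibilityMap(int pageCnt) {
        this.pageCnt = pageCnt;
        this.cleanBits = new BitSet(pageCnt);
    }

    /** Any update resets the bit for the page. */
    synchronized void onPageUpdated(int pageIdx) {
        cleanBits.clear(pageIdx);
    }

    /** Vacuum visits only pages with a zero bit and sets the bit to 1 after processing. */
    synchronized void vacuum(PageCleaner cleaner) {
        for (int i = cleanBits.nextClearBit(0); i < pageCnt; i = cleanBits.nextClearBit(i + 1)) {
            cleaner.cleanPage(i);     // Remove row versions invisible to all readers from the page.
            cleanBits.set(i);         // The next update of the page will clear the bit again.
        }
    }

    interface PageCleaner {
        void cleanPage(int pageIdx);
    }
}
{code}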

  was:
1) vacuum doesn't have change set, this means it travers all data to find 
invisible entries; hanse it breaks read statistics and make all data set "hot" 
- we should travers data entries instead, and only those entries, which was 
updated (linked to newer versions), moreover, vacuum should travers only those 
data pages, which were updated after last successful vacuum (at least one entry 
on the data page was linked to a never one)

2) vacuum travers over partitions instead of data entries, so, there possible 
some races like: reader checks an entry; updater removes this entry from 
partition; vacuum doesn't see the entry and clean TxLog -> reader cannot check 
the entry state with TxLog and gets an exception. This race prevents an 
optimization when all entries, older than last successful vacuum version, are 
considered as COMMITTED (see previous suggestion)

We need to implement a special structure like visibility maps in PG to iterate 
on updated data pages only and do not use cache data tree.


> MVCC TX: Improve VAC using visibility maps
> --
>
> Key: IGNITE-10729
> URL: https://issues.apache.org/jira/browse/IGNITE-10729
> Project: Ignite
>  Issue Type: Improvement
>  Components: mvcc
>Reporter: Igor Seliverstov
>Priority: Major
>
> Currently we have several issues:
> 1) vacuum doesn't have change set, this means it travers all data to find 
> invisible entries; hanse it breaks read statistics and make all data set 
> "hot" - we should travers data entries instead, and only those entries, which 
> was updated (linked to newer versions), moreover, vacuum should travers only 
> those data pages, which were updated after last successful vacuum (at least 
> one entry on the data page was linked to a never one) - this can be easily 
> done by just having a special bit at the data page, so - any update resets 
> this bit, vacuum travers only data pages with zero value bit and sets it to 1 
> after processing.
> 2) vacuum travers over partitions instead of data entries, so, there possible 
> some races like: reader checks an entry; updater removes this entry from 
> partition; vacuum doesn't see the entry and clean TxLog -> reader cannot 
> check the entry state with TxLog and gets an exception. This race prevents an 
> optimization when all entries, older than last successful vacuum version, are 
> considered as COMMITTED (see previous suggestion)
> We need to implement a special structure like visibility maps in PG to reduce 
> examined pages amount, iterate over updated data pages only and do not use 
> cache data tree.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10729) MVCC TX: Improve VAC using visibility maps

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-10729:
--
Summary: MVCC TX: Improve VAC using visibility maps  (was: MVCC TX: Improve 
VAC)

> MVCC TX: Improve VAC using visibility maps
> --
>
> Key: IGNITE-10729
> URL: https://issues.apache.org/jira/browse/IGNITE-10729
> Project: Ignite
>  Issue Type: Improvement
>  Components: mvcc
>Reporter: Igor Seliverstov
>Priority: Major
>
> 1) vacuum doesn't have change set, this means it travers all data to find 
> invisible entries; hanse it breaks read statistics and make all data set 
> "hot" - we should travers data entries instead, and only those entries, which 
> was updated (linked to newer versions), moreover, vacuum should travers only 
> those data pages, which were updated after last successful vacuum (at least 
> one entry on the data page was linked to a never one)
> 2) vacuum travers over partitions instead of data entries, so, there possible 
> some races like: reader checks an entry; updater removes this entry from 
> partition; vacuum doesn't see the entry and clean TxLog -> reader cannot 
> check the entry state with TxLog and gets an exception. This race prevents an 
> optimization when all entries, older than last successful vacuum version, are 
> considered as COMMITTED (see previous suggestion)
> We need to implement a special structure like visibility maps in PG to 
> iterate on updated data pages only and do not use cache data tree.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10729) MVCC TX: Improve VAC

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-10729:
--
Description: 
1) vacuum doesn't have change set, this means it travers all data to find 
invisible entries; hanse it breaks read statistics and make all data set "hot" 
- we should travers data entries instead, and only those entries, which was 
updated (linked to newer versions), moreover, vacuum should travers only those 
data pages, which were updated after last successful vacuum (at least one entry 
on the data page was linked to a never one)

2) vacuum travers over partitions instead of data entries, so, there possible 
some races like: reader checks an entry; updater removes this entry from 
partition; vacuum doesn't see the entry and clean TxLog -> reader cannot check 
the entry state with TxLog and gets an exception. This race prevents an 
optimization when all entries, older than last successful vacuum version, are 
considered as COMMITTED (see previous suggestion)

We need to implement a special structure like visibility maps in PG to iterate 
on updated data pages only and do not use cache data tree.

  was:
1) vacuum doesn't have change set, this means it travers all data to find 
invisible entries; hanse it breaks read statistics and make all data set "hot" 
- we should travers data entries instead, and only those entries, which was 
updated (linked to newer versions), moreover, vacuum should travers only those 
data pages, which were updated after last successful vacuum (at least one entry 
on the data page was linked to a never one)

2) vacuum travers over partitions instead of data entries, so, there possible 
some races like: reader checks an entry; updater removes this entry from 
partition; vacuum doesn't see the entry and clean TxLog -> reader cannot check 
the entry state with TxLog and gets an exception. This race prevents an 
optimization when all entries, older than last successful vacuum version, are 
considered as COMMITTED (see previous suggestion)


> MVCC TX: Improve VAC
> 
>
> Key: IGNITE-10729
> URL: https://issues.apache.org/jira/browse/IGNITE-10729
> Project: Ignite
>  Issue Type: Improvement
>  Components: mvcc
>Reporter: Igor Seliverstov
>Priority: Major
>
> 1) vacuum doesn't have change set, this means it travers all data to find 
> invisible entries; hanse it breaks read statistics and make all data set 
> "hot" - we should travers data entries instead, and only those entries, which 
> was updated (linked to newer versions), moreover, vacuum should travers only 
> those data pages, which were updated after last successful vacuum (at least 
> one entry on the data page was linked to a never one)
> 2) vacuum travers over partitions instead of data entries, so, there possible 
> some races like: reader checks an entry; updater removes this entry from 
> partition; vacuum doesn't see the entry and clean TxLog -> reader cannot 
> check the entry state with TxLog and gets an exception. This race prevents an 
> optimization when all entries, older than last successful vacuum version, are 
> considered as COMMITTED (see previous suggestion)
> We need to implement a special structure like visibility maps in PG to 
> iterate on updated data pages only and do not use cache data tree.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-11300) MVCC: forbid using DataStreamer with allowOverwrite=true

2019-02-27 Thread Ivan Pavlukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767211#comment-16767211
 ] 

Ivan Pavlukhin edited comment on IGNITE-11300 at 2/27/19 2:16 PM:
--

[~amashenkov], {{allowOverwrite=true}} has not been explicitly tested. 
Moreover, a streamer implementation using single {{cache.put}} operations will 
most likely have very poor performance.

{{allowOverwrite=false}} should be addressed in the scope of IGNITE-9314. 
Currently, a streamer in such mode will not insert a tuple if there is anything 
(e.g. aborted versions) for a given key in the BPlusTree.


was (Author: pavlukhin):
[~amashenkov], {{allowOverwrite=true}} have not been explicitly tested. 
Moreover, streamer implementation using single {{cache.put}} operations will 
most likely have very poor performance.

{{allowOverwrite=false}} should be addressed in scope of IGNITE-9314. 
Currently, streamer in such mode will not insert a tuple if there is anything 
(e.g. aborted versions) for a give key in BPlusTree.

> MVCC: forbid using DataStreamer with allowOverwrite=true
> 
>
> Key: IGNITE-11300
> URL: https://issues.apache.org/jira/browse/IGNITE-11300
> Project: Ignite
>  Issue Type: Task
>  Components: mvcc
>Affects Versions: 2.7
>Reporter: Ivan Pavlukhin
>Priority: Major
> Fix For: 2.8
>
>
> Calling {{IgniteDataStreamer.allowOverwrite(true)}} configures a streamer to 
> use single-key cache put/remove operations for data modification. But 
> put/remove operations on MVCC caches can be aborted due to write conflicts. 
> So, some development effort is needed to support that mode properly. Let's 
> throw an exception in such a case for MVCC caches (see the sketch below).
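
A minimal sketch of the proposed guard, assuming the MVCC flag is available where 
the streamer mode is configured; this is not the actual Ignite validation code:
{code:java}
class StreamerModeValidator {
    /** Fails fast when allowOverwrite=true is requested for an MVCC (TRANSACTIONAL_SNAPSHOT) cache. */
    static void validate(boolean allowOverwrite, boolean mvccEnabled) {
        if (allowOverwrite && mvccEnabled) {
            throw new IllegalStateException(
                "allowOverwrite=true is not supported for MVCC (TRANSACTIONAL_SNAPSHOT) caches");
        }
    }
}
{code}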



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10729) MVCC TX: Improve VAC

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-10729:
--
Summary: MVCC TX: Improve VAC  (was: MVCC TX: Improvements.)

> MVCC TX: Improve VAC
> 
>
> Key: IGNITE-10729
> URL: https://issues.apache.org/jira/browse/IGNITE-10729
> Project: Ignite
>  Issue Type: Improvement
>  Components: mvcc
>Reporter: Igor Seliverstov
>Priority: Major
>
> 1) vacuum doesn't have change set, this means it travers all data to find 
> invisible entries; hanse it breaks read statistics and make all data set 
> "hot" - we should travers data entries instead, and only those entries, which 
> was updated (linked to newer versions), moreover, vacuum should travers only 
> those data pages, which were updated after last successful vacuum (at least 
> one entry on the data page was linked to a never one)
> 2) vacuum travers over partitions instead of data entries, so, there possible 
> some races like: reader checks an entry; updater removes this entry from 
> partition; vacuum doesn't see the entry and clean TxLog -> reader cannot 
> check the entry state with TxLog and gets an exception. This race prevents an 
> optimization when all entries, older than last successful vacuum version, are 
> considered as COMMITTED (see previous suggestion)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10729) MVCC TX: Improvements.

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-10729:
--
Description: 
1) vacuum doesn't have change set, this means it travers all data to find 
invisible entries; hanse it breaks read statistics and make all data set "hot" 
- we should travers data entries instead, and only those entries, which was 
updated (linked to newer versions), moreover, vacuum should travers only those 
data pages, which were updated after last successful vacuum (at least one entry 
on the data page was linked to a never one)

2) vacuum travers over partitions instead of data entries, so, there possible 
some races like: reader checks an entry; updater removes this entry from 
partition; vacuum doesn't see the entry and clean TxLog -> reader cannot check 
the entry state with TxLog and gets an exception. This race prevents an 
optimization when all entries, older than last successful vacuum version, are 
considered as COMMITTED (see previous suggestion)

  was:
Currently there are several problems:
1) vacuum doesn't have change set, this means it travers all data to find 
invisible entries; hanse it breaks read statistics and make all data set "hot" 
- we should travers data entries instead, and only those entries, which was 
updated (linked to newer versions), moreover, vacuum should travers only those 
data pages, which were updated after last successful vacuum (at least one entry 
on the data page was linked to a never one)

2) vacuum travers over partitions instead of data entries, so, there possible 
some races like: reader checks an entry; updater removes this entry from 
partition; vacuum doesn't see the entry and clean TxLog -> reader cannot check 
the entry state with TxLog and gets an exception. This race prevents an 
optimization when all entries, older than last successful vacuum version, are 
considered as COMMITTED (see previous suggestion)

3) all entry versions are placed in BTrees, so, we cannot do updates like PG - 
just adding a new version and linking the old one to it. Having only one 
unversioned item per row in all indexes making possible fast invoke operations 
on such indexes in MVCC mode. Also it let us not to update all indexes on each 
update operation (partition index isn't updated at all, only SQL indexes, built 
over changed fields need to be updated) - this dramatically reduces write 
operations, hence it reduces amount of pages to be "checkpointed" and reduces 
checkpoint mark phase.


> MVCC TX: Improvements.
> --
>
> Key: IGNITE-10729
> URL: https://issues.apache.org/jira/browse/IGNITE-10729
> Project: Ignite
>  Issue Type: Improvement
>  Components: mvcc
>Reporter: Igor Seliverstov
>Priority: Major
>
> 1) vacuum doesn't have change set, this means it travers all data to find 
> invisible entries; hanse it breaks read statistics and make all data set 
> "hot" - we should travers data entries instead, and only those entries, which 
> was updated (linked to newer versions), moreover, vacuum should travers only 
> those data pages, which were updated after last successful vacuum (at least 
> one entry on the data page was linked to a never one)
> 2) vacuum travers over partitions instead of data entries, so, there possible 
> some races like: reader checks an entry; updater removes this entry from 
> partition; vacuum doesn't see the entry and clean TxLog -> reader cannot 
> check the entry state with TxLog and gets an exception. This race prevents an 
> optimization when all entries, older than last successful vacuum version, are 
> considered as COMMITTED (see previous suggestion)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-11088) Flaky LocalWalModeChangeDuringRebalancingSelfTest.testWithExchangesMerge

2019-02-27 Thread Ivan Pavlukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Pavlukhin reassigned IGNITE-11088:
---

Assignee: Ivan Pavlukhin

> Flaky LocalWalModeChangeDuringRebalancingSelfTest.testWithExchangesMerge
> -
>
> Key: IGNITE-11088
> URL: https://issues.apache.org/jira/browse/IGNITE-11088
> Project: Ignite
>  Issue Type: Test
>  Components: mvcc
>Reporter: Andrew Mashenkov
>Assignee: Ivan Pavlukhin
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, mvcc_stabilization_stage_1
>
> [LocalWalModeChangeDuringRebalancingSelfTest.testWithExchangesMerge|https://ci.ignite.apache.org/viewLog.html?buildId=2895774=buildResultsDiv=IgniteTests24Java8_MvccPds2#testNameId-6585115376754732686]
>  fails sporadically in the MvccPds 2 suite.
> I've found no failures in the non-mvcc Pds 2 suite, so it is probably an mvcc issue.
> See stack traces from 2 failures that may have the same reason. We have to 
> investigate this.
> {noformat}
> java.lang.AssertionError: nodeIdx=2, key=6606 expected:<13212> but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.LocalWalModeChangeDuringRebalancingSelfTest.testWithExchangesMerge(LocalWalModeChangeDuringRebalancingSelfTest.java:473)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest$6.run(GridAbstractTest.java:2088)
>   at java.lang.Thread.run(Thread.java:748) {noformat}
> {noformat}
> [2019-01-23 
> 11:26:25,287][ERROR][sys-stripe-5-#6186%persistence.LocalWalModeChangeDuringRebalancingSelfTest3%][GridDhtColocatedCache]
>   Failed processing get request: GridNearSingleGetRequest 
> [futId=1548243502606, key=KeyCacheObjectImpl [part=381, val=7037, 
> hasValBytes=true], flags=1, topVer=AffinityTopologyVersion [topVer=8, 
> minorTopVer=1], subjId=f1fbb371-3232-4bfa-a20a-d4cad4b2, taskNameHash=0, 
> createTtl=-1, accessTtl=-1, txLbl=null, mvccSnapshot=MvccSnapshotResponse 
> [futId=7040, crdVer=1548242747966, cntr=20023, opCntr=1073741823, txs=null, 
> cleanupVer=0, tracking=0]] class org.apache.ignite.IgniteCheckedException: 
> Runtime failure on bounds: [lower=MvccSnapshotSearchRow [res=null, 
> snapshot=MvccSnapshotResponse [futId=7040, crdVer=1548242747966, cntr=20023, 
> opCntr=1073741823, txs=null, cleanupVer=0, tracking=0]], 
> upper=MvccMinSearchRow []] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.iterate(BPlusTree.java:1043)
>  at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.mvccFind(IgniteCacheOffheapManagerImpl.java:2683)
>  at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.mvccFind(GridCacheOffheapManager.java:2141)
>  at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.mvccRead(IgniteCacheOffheapManagerImpl.java:666)
>  at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.getAllAsync0(GridCacheAdapter.java:2023)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.getDhtAllAsync(GridDhtCacheAdapter.java:807)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtGetSingleFuture.getAsync(GridDhtGetSingleFuture.java:399)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtGetSingleFuture.map0(GridDhtGetSingleFuture.java:277)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtGetSingleFuture.map(GridDhtGetSingleFuture.java:259)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtGetSingleFuture.init(GridDhtGetSingleFuture.java:182)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.getDhtSingleAsync(GridDhtCacheAdapter.java:918)
>  at 
> 

[jira] [Updated] (IGNITE-11436) sqlline is not working on Java 9+

2019-02-27 Thread Stanislav Lukyanov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanislav Lukyanov updated IGNITE-11436:

Description: 
{code}
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by 
org.apache.ignite.internal.util.GridUnsafe$2 
(file:/var/lib/teamcity/data/work/ead1d0aeaa1f7813/i2test/var/suite-client/art-gg-pro/libs/ignite-core-2.7.2-p1.jar)
 to field java.nio.Buffer.address
WARNING: Please consider reporting this to the maintainers of 
org.apache.ignite.internal.util.GridUnsafe$2
WARNING: Use --illegal-access=warn to enable warnings of further illegal 
reflective access operations
WARNING: All illegal access operations will be denied in a future release
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.ignite.internal.util.IgniteUtils
at org.apache.ignite.internal.IgnitionEx.(IgnitionEx.java:209)
at 
org.apache.ignite.internal.jdbc2.JdbcConnection.loadConfiguration(JdbcConnection.java:323)
at 
org.apache.ignite.internal.jdbc2.JdbcConnection.getIgnite(JdbcConnection.java:295)
at 
org.apache.ignite.internal.jdbc2.JdbcConnection.(JdbcConnection.java:229)
at org.apache.ignite.IgniteJdbcDriver.connect(IgniteJdbcDriver.java:437)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:156)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:204)
at sqlline.Commands.connect(Commands.java:1095)
at sqlline.Commands.connect(Commands.java:1001)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
at sqlline.SqlLine.dispatch(SqlLine.java:791)
at sqlline.SqlLine.initArgs(SqlLine.java:566)
at sqlline.SqlLine.begin(SqlLine.java:643)
at sqlline.SqlLine.start(SqlLine.java:373)
at sqlline.SqlLine.main(SqlLine.java:265)
{code}

> sqlline is not working on Java 9+
> -
>
> Key: IGNITE-11436
> URL: https://issues.apache.org/jira/browse/IGNITE-11436
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7
>Reporter: Anton Kurbanov
>Assignee: Anton Kurbanov
>Priority: Major
>
> {code}
> WARNING: An illegal reflective access operation has occurred
> WARNING: Illegal reflective access by 
> org.apache.ignite.internal.util.GridUnsafe$2 
> (file:/var/lib/teamcity/data/work/ead1d0aeaa1f7813/i2test/var/suite-client/art-gg-pro/libs/ignite-core-2.7.2-p1.jar)
>  to field java.nio.Buffer.address
> WARNING: Please consider reporting this to the maintainers of 
> org.apache.ignite.internal.util.GridUnsafe$2
> WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
> WARNING: All illegal access operations will be denied in a future release
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.ignite.internal.util.IgniteUtils
>   at org.apache.ignite.internal.IgnitionEx.(IgnitionEx.java:209)
>   at 
> org.apache.ignite.internal.jdbc2.JdbcConnection.loadConfiguration(JdbcConnection.java:323)
>   at 
> org.apache.ignite.internal.jdbc2.JdbcConnection.getIgnite(JdbcConnection.java:295)
>   at 
> org.apache.ignite.internal.jdbc2.JdbcConnection.(JdbcConnection.java:229)
>   at org.apache.ignite.IgniteJdbcDriver.connect(IgniteJdbcDriver.java:437)
>   at sqlline.DatabaseConnection.connect(DatabaseConnection.java:156)
>   at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:204)
>   at sqlline.Commands.connect(Commands.java:1095)
>   at sqlline.Commands.connect(Commands.java:1001)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
>   at sqlline.SqlLine.dispatch(SqlLine.java:791)
>   at sqlline.SqlLine.initArgs(SqlLine.java:566)
>   at sqlline.SqlLine.begin(SqlLine.java:643)
>   at sqlline.SqlLine.start(SqlLine.java:373)
>   at sqlline.SqlLine.main(SqlLine.java:265)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11436) sqlline is not working on Java 9+

2019-02-27 Thread Anton Kurbanov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Kurbanov updated IGNITE-11436:

Component/s: (was: sql)

> sqlline is not working on Java 9+
> -
>
> Key: IGNITE-11436
> URL: https://issues.apache.org/jira/browse/IGNITE-11436
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7
>Reporter: Anton Kurbanov
>Assignee: Anton Kurbanov
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11396) Actualize JUnit3TestLegacyAssert

2019-02-27 Thread Ivan Fedotov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Fedotov updated IGNITE-11396:
--
Summary: Actualize JUnit3TestLegacyAssert  (was: Remove 
JUnit3TestLegacyAssert)

> Actualize JUnit3TestLegacyAssert
> 
>
> Key: IGNITE-11396
> URL: https://issues.apache.org/jira/browse/IGNITE-11396
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Fedotov
>Assignee: Ivan Fedotov
>Priority: Major
>  Labels: iep-30
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Replace assert methods by imports.
> That will lead to the full removal of the JUnit3TestLegacyAssert class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11436) sqlline is not working on Java 9+

2019-02-27 Thread Anton Kurbanov (JIRA)
Anton Kurbanov created IGNITE-11436:
---

 Summary: sqlline is not working on Java 9+
 Key: IGNITE-11436
 URL: https://issues.apache.org/jira/browse/IGNITE-11436
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.7
Reporter: Anton Kurbanov
Assignee: Anton Kurbanov






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10476) Merge similar tests.

2019-02-27 Thread Vladislav Pyatkov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779288#comment-16779288
 ] 

Vladislav Pyatkov commented on IGNITE-10476:


Looks good to me.

> Merge similar tests.
> 
>
> Key: IGNITE-10476
> URL: https://issues.apache.org/jira/browse/IGNITE-10476
> Project: Ignite
>  Issue Type: Test
>Reporter: Andrew Mashenkov
>Assignee: Andrey Kalinin
>Priority: Minor
>  Labels: MakeTeamcityGreenAgain
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> CacheNamesSelfTest and CacheNamesWithSpecialCharactersTest look similar and 
> can be merged.
> We already have a test suite these tests are related to, so we can merge them 
> into GridCacheConfigurationValidationSelfTest.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11245) Replace unused IGNITE_BINARY_META_UPDATE_TIMEOUT parameter.

2019-02-27 Thread Vladislav Pyatkov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779302#comment-16779302
 ] 

Vladislav Pyatkov commented on IGNITE-11245:


Hi [~6uest], I could not find any test for the {{IGNITE_WAIT_SCHEMA_UPDATE}} 
attribute.
It would be great if you could add new tests.

> Replace unused IGNITE_BINARY_META_UPDATE_TIMEOUT parameter.
> ---
>
> Key: IGNITE-11245
> URL: https://issues.apache.org/jira/browse/IGNITE-11245
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.7
>Reporter: Stanilovsky Evgeny
>Assignee: Andrey Kalinin
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Replace unused IGNITE_BINARY_META_UPDATE_TIMEOUT with 
> IGNITE_WAIT_SCHEMA_UPDATE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11435) SQL: Create a view with query history

2019-02-27 Thread Yury Gerzhedovich (JIRA)
Yury Gerzhedovich created IGNITE-11435:
--

 Summary: SQL: Create a view with query history
 Key: IGNITE-11435
 URL: https://issues.apache.org/jira/browse/IGNITE-11435
 Project: Ignite
  Issue Type: Task
  Components: sql
Reporter: Yury Gerzhedovich
 Fix For: 1.8


Need to expose a Query History view - NODE_SQL_QUERY_HISTORY (a usage sketch follows below).

See the QueryHistoryMetrics class.
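
A usage sketch for when the view is implemented, assuming it lives in the IGNITE 
schema like the existing LOCAL_SQL_RUNNING_QUERIES view and is reachable over the 
thin JDBC driver (ignite-core on the classpath); the exact columns are not defined 
yet, so only the first one is printed:
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class QueryHistoryViewExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM IGNITE.NODE_SQL_QUERY_HISTORY")) {
            while (rs.next())
                System.out.println(rs.getString(1)); // Print the first column of each history row.
        }
    }
}
{code}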



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11433) MVCC: Link entry versions at the Data Store layer.

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-11433:
--
Description: 
Currently all tuple versions are placed inside index trees. CacheDataTree is used 
to link versions to each other (using their order inside an index page).

Although this approach is easy to implement and preferable at first, it brings 
several disadvantages:

1) We need to iterate over tuple versions at update time under a read (or even 
write) lock on an index page, which blocks other write (read) operations for a 
relatively long period of time.
2) We index all tuple versions, which makes indexes use much more space than 
needed.
3) We cannot implement several important improvements (data streamer 
optimizations), because having several versions of one key in an index page 
doesn't allow using Invoke operations.
4) Write amplification affects not only the Data Store layer, but indexes as 
well, which makes read/lookup operations on indexes much slower.

Linking versions at the Data Store layer only (as other vendors do) solves these 
issues or reduces their impact.

So, the proposed changes are:

1) Change the data page layout, adding two fields to its header: {{link}} (a link 
to the next tuple in a versions chain) and {{lock}} (a tx which holds a write 
lock on the HEAD of the chain). Possible optimizations: a) leave the lock as is 
(in the cache index item); b) use the max version as the lock version as well.
2) Do not save all versions of a tuple in indexes; this means removing the 
version from the key - the newest version will overwrite the existing entry.

There are two approaches to linking versions, each with pros and cons:

1) N2O (newer to older) - a reader (writer) gets the newest tuple version first 
and iterates from newer to older versions until it reaches a position where its 
snapshot falls between the min and max versions of the examined tuple. This 
approach implies faster reads (more recent versions are fetched first) and the 
need to update all involved indexes on each write operation - slower writes, in 
other words (this may be optimized with logical pointers to the head of the 
tuple versions chain). Cooperative VAC (update operations remove tuple versions 
invisible to all readers) is possible.
2) O2N (older to newer) - a reader gets the oldest visible tuple version and 
iterates over versions until it gets a visible version. It allows not updating 
all indexes (except when an indexed value changes), so write operations become 
lighter. Cooperative VAC is almost impossible.

We need to decide which approach to use depending on which load profile is 
preferable (OLTP/OLAP).
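
For illustration only, a minimal sketch of the N2O lookup described above. All 
names (VersionChainEntry, N2OReader, the plain long version fields) are 
hypothetical and do not come from the Ignite codebase; real MVCC versions are 
composite, not single longs.

{noformat}
/** Hypothetical node of a version chain linked from newest to oldest (N2O). */
class VersionChainEntry {
    long minVer;            // version that created this tuple version
    long maxVer;            // version that replaced it (Long.MAX_VALUE for the chain head)
    VersionChainEntry next; // link to the next (older) version
    Object row;             // payload

    VersionChainEntry(long minVer, long maxVer, VersionChainEntry next, Object row) {
        this.minVer = minVer;
        this.maxVer = maxVer;
        this.next = next;
        this.row = row;
    }
}

class N2OReader {
    /**
     * Walks the chain from the newest version towards older ones and returns
     * the first version whose [minVer, maxVer) interval contains the reader's
     * snapshot version, i.e. the version visible to that snapshot.
     */
    static Object read(VersionChainEntry head, long snapshotVer) {
        for (VersionChainEntry e = head; e != null; e = e.next) {
            if (e.minVer <= snapshotVer && snapshotVer < e.maxVer)
                return e.row;
        }

        return null; // No version is visible to this snapshot.
    }
}
{noformat}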

  was:
Currently all tuple versions are placed inside index trees. CacheDataTree is used 
to link versions to each other (using their order inside a data page).

Although this approach is easy to implement and preferable at first, it brings 
several disadvantages:

1) We need to iterate over tuple versions at update time under a read (or even 
write) lock on an index page, which blocks other write (read) operations for a 
relatively long period of time.
2) We index all tuple versions, which makes indexes use much more space than 
needed.
3) We cannot implement several important improvements (data streamer 
optimizations), because having several versions of one key in an index page 
doesn't allow using Invoke operations.
4) Write amplification affects not only the Data Store layer, but indexes as 
well, which makes read/lookup operations on indexes much slower.

Linking versions at the Data Store layer only (as other vendors do) solves these 
issues or reduces their impact.

So, the proposed changes are:

1) Change the data page layout, adding two fields to its header: {{link}} (a link 
to the next tuple in a versions chain) and {{lock}} (a tx which holds a write 
lock on the HEAD of the chain). Possible optimizations: a) leave the lock as is 
(in the cache index item); b) use the max version as the lock version as well.
2) Do not save all versions of a tuple in indexes; this means removing the 
version from the key - the newest version will overwrite the existing entry.

There are two approaches to linking versions, each with pros and cons:

1) N2O (newer to older) - a reader (writer) gets the newest tuple version first 
and iterates from newer to older versions until it reaches a position where its 
snapshot falls between the min and max versions of the examined tuple. This 
approach implies faster reads (more recent versions are fetched first) and the 
need to update all involved indexes on each write operation - slower writes, in 
other words (this may be optimized with logical pointers to the head of the 
tuple versions chain). Cooperative VAC (update operations remove tuple versions 
invisible to all readers) is possible.
2) O2N (older to newer) - a reader gets the oldest visible tuple version and 

[jira] [Commented] (IGNITE-10138) Description is not provided for operations of org.apache.ignite.mxbean.TransactionMetricsMxBean

2019-02-27 Thread Vladislav Pyatkov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779198#comment-16779198
 ] 

Vladislav Pyatkov commented on IGNITE-10138:


[~6uest], looks good to me.

> Description is not provided for operations of 
> org.apache.ignite.mxbean.TransactionMetricsMxBean
> ---
>
> Key: IGNITE-10138
> URL: https://issues.apache.org/jira/browse/IGNITE-10138
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Max Shonichev
>Assignee: Andrey Kalinin
>Priority: Minor
> Fix For: 2.5
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Description mismatch for bean 
> 'TransactionMetrics.TransactionMetricsMxBeanImpl' 
> operation 'commitTime()': expected 'Last commit time.', actual 'Operation 
> exposed for management'
> operation 'rollbackTime()': expected 'Last rollback time.', actual 'Operation 
> exposed for management'
> operation 'txCommits()': expected 'Number of transaction commits.', actual 
> 'Operation exposed for management'
> operation 'txRollbacks()': expected 'Number of transaction rollbacks.', 
> actual 'Operation exposed for management'
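
A minimal sketch of the likely fix direction, assuming the expected descriptions 
are attached with the public {{@MXBeanDescription}} annotation from 
{{org.apache.ignite.mxbean}} (the fragment below is illustrative only, not the 
actual interface):

{noformat}
import org.apache.ignite.mxbean.MXBeanDescription;

/** Illustrative fragment only; not the real TransactionMetricsMxBean interface. */
interface TransactionMetricsMxBeanFragment {
    @MXBeanDescription("Last commit time.")
    long commitTime();

    @MXBeanDescription("Last rollback time.")
    long rollbackTime();

    @MXBeanDescription("Number of transaction commits.")
    int txCommits();

    @MXBeanDescription("Number of transaction rollbacks.")
    int txRollbacks();
}
{noformat}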



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11433) MVCC: Link entry versions at the Data Store layer.

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-11433:
--
Description: 
Currently all tuple versions are placed inside index trees. CacheDataTree is used 
to link versions to each other (using their order inside an index page).

Although this approach is easy to implement and preferable at first, it brings 
several disadvantages:

1) We need to iterate over tuple versions at update time under a read (or even 
write) lock on an index page, which blocks other write (read) operations for a 
relatively long period of time.
2) Write amplification affects not only the Data Store layer, but indexes as 
well, which makes read/lookup operations on indexes much slower.
3) We cannot implement several important improvements (data streamer 
optimizations), because having several versions of one key in an index page 
doesn't allow using Invoke operations.

Linking versions at the Data Store layer only (as other vendors do) solves these 
issues or reduces their impact.

So, the proposed changes are:

1) Change the data page layout, adding two fields to its header: {{link}} (a link 
to the next tuple in a versions chain) and {{lock}} (a tx which holds a write 
lock on the HEAD of the chain). Possible optimizations: a) leave the lock as is 
(in the cache index item); b) use the max version as the lock version as well.
2) Do not save all versions of a tuple in indexes; this means removing the 
version from the key - the newest version will overwrite the existing entry.

There are two approaches to linking versions, each with pros and cons:

1) N2O (newer to older) - a reader (writer) gets the newest tuple version first 
and iterates from newer to older versions until it reaches a position where its 
snapshot falls between the min and max versions of the examined tuple. This 
approach implies faster reads (more recent versions are fetched first) and the 
need to update all involved indexes on each write operation - slower writes, in 
other words (this may be optimized with logical pointers to the head of the 
tuple versions chain). Cooperative VAC (update operations remove tuple versions 
invisible to all readers) is possible.
2) O2N (older to newer) - a reader gets the oldest visible tuple version and 
iterates over versions until it gets a visible version. It allows not updating 
all indexes (except when an indexed value changes), so write operations become 
lighter. Cooperative VAC is almost impossible.

We need to decide which approach to use depending on which load profile is 
preferable (OLTP/OLAP).
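
A rough sketch of the proposed per-entry header extension, with purely 
illustrative offsets and helper names; nothing here mirrors the real 
PageIO/DataPageIO layout:

{noformat}
import java.nio.ByteBuffer;

/**
 * Illustrative layout of the two extra header fields proposed above:
 * 'link' points to the next (older) tuple version in the chain, 'lock'
 * stores the version of the tx holding the write lock on the chain head.
 * Offsets and sizes are assumptions for the sketch only.
 */
final class MvccEntryHeaderSketch {
    static final int LINK_OFF = 0;  // 8 bytes: link to the next version in the chain
    static final int LOCK_OFF = 8;  // 8 bytes: lock owner (tx) version
    static final int HDR_SIZE = 16;

    static long link(ByteBuffer pageBuf, int entryOff) {
        return pageBuf.getLong(entryOff + LINK_OFF);
    }

    static void link(ByteBuffer pageBuf, int entryOff, long link) {
        pageBuf.putLong(entryOff + LINK_OFF, link);
    }

    static long lock(ByteBuffer pageBuf, int entryOff) {
        return pageBuf.getLong(entryOff + LOCK_OFF);
    }

    static void lock(ByteBuffer pageBuf, int entryOff, long lockVer) {
        pageBuf.putLong(entryOff + LOCK_OFF, lockVer);
    }
}
{noformat}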

  was:
Currently all tuple versions are placed inside index trees. CacheDataTree is used 
to link versions to each other (using their order inside an index page).

Although this approach is easy to implement and preferable at first, it brings 
several disadvantages:

1) We need to iterate over tuple versions at update time under a read (or even 
write) lock on an index page, which blocks other write (read) operations for a 
relatively long period of time.
2) We index all tuple versions, which makes indexes use much more space than 
needed.
3) We cannot implement several important improvements (data streamer 
optimizations), because having several versions of one key in an index page 
doesn't allow using Invoke operations.
4) Write amplification affects not only the Data Store layer, but indexes as 
well, which makes read/lookup operations on indexes much slower.

Linking versions at the Data Store layer only (as other vendors do) solves these 
issues or reduces their impact.

So, the proposed changes are:

1) Change the data page layout, adding two fields to its header: {{link}} (a link 
to the next tuple in a versions chain) and {{lock}} (a tx which holds a write 
lock on the HEAD of the chain). Possible optimizations: a) leave the lock as is 
(in the cache index item); b) use the max version as the lock version as well.
2) Do not save all versions of a tuple in indexes; this means removing the 
version from the key - the newest version will overwrite the existing entry.

There are two approaches to linking versions, each with pros and cons:

1) N2O (newer to older) - a reader (writer) gets the newest tuple version first 
and iterates from newer to older versions until it reaches a position where its 
snapshot falls between the min and max versions of the examined tuple. This 
approach implies faster reads (more recent versions are fetched first) and the 
need to update all involved indexes on each write operation - slower writes, in 
other words (this may be optimized with logical pointers to the head of the 
tuple versions chain). Cooperative VAC (update operations remove tuple versions 
invisible to all readers) is possible.
2) O2N (older to newer) - a reader gets the oldest visible tuple version and 
iterates over versions until it gets a visible version. It allows not to update 

[jira] [Assigned] (IGNITE-11432) Add ability to specify auto-generated consistent ID in IgniteConfiguration

2019-02-27 Thread Dmitriy Pavlov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov reassigned IGNITE-11432:
---

Assignee: Dmitriy Pavlov

> Add ability to specify auto-generated consistent ID in IgniteConfiguration
> --
>
> Key: IGNITE-11432
> URL: https://issues.apache.org/jira/browse/IGNITE-11432
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Assignee: Dmitriy Pavlov
>Priority: Major
>
> Let's consider the following scenario:
> 1) A user starts an empty node; the node generates a consistent ID as a UUID 
> and creates a persistence folder {{node00-UUID}}.
> 2) If the user cleans up the persistence directory, the node will generate 
> another consistent ID.
> Now the user has no option to specify the old consistent ID in 
> configuration: if we set the consistent ID to the UUID, the persistence folder 
> will be named {{UUID}}. If the user specifies {{node00-UUID}}, the folder 
> will be named properly, but the actual consistent ID will be {{node00-UUID}}.
> We need to add an option to specify the proper consistent ID.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11190) Fix Apache Ignite tests of Camel Streamer under Java 11

2019-02-27 Thread Roman Shtykh (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779181#comment-16779181
 ] 

Roman Shtykh commented on IGNITE-11190:
---

[~dpavlov] How do you build Ignite with Java 11 locally?
I used
{quote}mvn clean package -Pjava-9+ -DskipTests 
{quote}
with OpenJDK 11.0.2, but it gives me lots of errors related to 'sun' packages, 
etc.

> Fix Apache Ignite tests of Camel Streamer under Java 11
> ---
>
> Key: IGNITE-11190
> URL: https://issues.apache.org/jira/browse/IGNITE-11190
> Project: Ignite
>  Issue Type: Bug
>Reporter: Dmitriy Pavlov
>Assignee: Roman Shtykh
>Priority: Major
>
> Under Java 11 tests failed with an Error 500 - internal server error
> https://ci.ignite.apache.org/viewLog.html?buildId=2973663=buildResultsDiv=IgniteTests24Java8_Streamers
> Probably we need to pass startup parameters to 3rd party product/JVM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11433) MVCC: Link entry versions at the Data Store layer.

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-11433:
--
Description: 
Currently all tuple versions are placed inside index trees. CacheDataTree is used 
to link versions to each other (using their order inside a data page).

Although this approach is easy to implement and preferable at first, it brings 
several disadvantages:

1) We need to iterate over tuple versions at update time under a read (or even 
write) lock on an index page, which blocks other write (read) operations for a 
relatively long period of time.
2) We index all tuple versions, which makes indexes use much more space than 
needed.
3) We cannot implement several important improvements (data streamer 
optimizations), because having several versions of one key in an index page 
doesn't allow using Invoke operations.
4) Write amplification affects not only the Data Store layer, but indexes as 
well, which makes read/lookup operations on indexes much slower.

Linking versions at the Data Store layer only (as other vendors do) solves these 
issues or reduces their impact.

So, the proposed changes are:

1) Change the data page layout, adding two fields to its header: {{link}} (a link 
to the next tuple in a versions chain) and {{lock}} (a tx which holds a write 
lock on the HEAD of the chain). Possible optimizations: a) leave the lock as is 
(in the cache index item); b) use the max version as the lock version as well.
2) Do not save all versions of a tuple in indexes; this means removing the 
version from the key - the newest version will overwrite the existing entry.

There are two approaches to linking versions, each with pros and cons:

1) N2O (newer to older) - a reader (writer) gets the newest tuple version first 
and iterates from newer to older versions until it reaches a position where its 
snapshot falls between the min and max versions of the examined tuple. This 
approach implies faster reads (more recent versions are fetched first) and the 
need to update all involved indexes on each write operation - slower writes, in 
other words (this may be optimized with logical pointers to the head of the 
tuple versions chain). Cooperative VAC (update operations remove tuple versions 
invisible to all readers) is possible.
2) O2N (older to newer) - a reader gets the oldest visible tuple version and 
iterates over versions until it gets a visible version. It allows not updating 
all indexes (except when an indexed value changes), so write operations become 
lighter. Cooperative VAC is almost impossible.

We need to decide which approach to use depending on which load profile is 
preferable (OLTP/OLAP).
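
For symmetry with the N2O sketch earlier in this thread, a hypothetical O2N walk 
(names and the single long version field are illustrative only, not from the 
codebase): the reader starts at the oldest version and keeps the last version 
that its snapshot can see.

{noformat}
/** Hypothetical node of a version chain linked from oldest to newest (O2N). */
class O2NEntry {
    long createVer;  // version that created this tuple version
    O2NEntry next;   // link to the next (newer) version
    Object row;      // payload

    O2NEntry(long createVer, O2NEntry next, Object row) {
        this.createVer = createVer;
        this.next = next;
        this.row = row;
    }
}

class O2NReader {
    /**
     * Walks from the oldest version towards newer ones and returns the newest
     * version that was created at or before the reader's snapshot version.
     */
    static Object read(O2NEntry oldest, long snapshotVer) {
        Object visible = null;

        for (O2NEntry e = oldest; e != null && e.createVer <= snapshotVer; e = e.next)
            visible = e.row;

        return visible;
    }
}
{noformat}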

  was:
Currently all entry versions are placed inside index trees. CacheDataTree is used 
to link versions to each other (using their order inside a data page).

Although this approach is easy to implement and preferable at first, it brings 
several disadvantages:

1) We need to iterate over versions at update time under a read (or even write) 
lock on an index page, which blocks other write (read) operations for a 
relatively long period of time.
2) We hold all versions of a row in all indexes, which makes them use much more 
space than needed.
3) We cannot implement several important improvements (data streamer 
optimizations), because having several versions of one key in an index page 
doesn't allow using Invoke operations.
4) Write amplification affects not only the Data Store layer, but indexes as 
well, which makes read/lookup operations on indexes much slower.

Linking versions at the Data Store layer only (as other vendors do) solves these 
issues or reduces their impact.

So, the proposed changes are:

1) Change the data page layout, adding two fields to its header: {{link}} (a link 
to the next entry in the versions chain) and {{lock}} (a tx which holds a write 
lock on the entry). Possible optimizations: a) leave the lock as is (in the 
index leaf item); b) use the max version as the lock version as well.


> MVCC: Link entry versions at the Data Store layer.
> --
>
> Key: IGNITE-11433
> URL: https://issues.apache.org/jira/browse/IGNITE-11433
> Project: Ignite
>  Issue Type: Improvement
>  Components: mvcc, sql
>Reporter: Igor Seliverstov
>Priority: Major
>
> Currently all tuple versions are placed inside index trees. CacheDataTree is 
> used to link versions to each other (using their order inside a data page).
> Although this approach is easy to implement and preferable at first, it 
> brings several disadvantages:
> 1) We need to iterate over tuple versions at update time under a read (or 
> even write) lock on an index page, which blocks other write (read) operations 
> for a relatively long period of time.
> 2) We index all 

[jira] [Updated] (IGNITE-11433) MVCC: Link entry versions at the Data Store layer.

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-11433:
--
Description: 
Currently all entry versions are placed inside index trees. CacheDataTree is used 
to link versions to each other (using their order inside a data page).

Although this approach is easy to implement and preferable at first, it brings 
several disadvantages:

1) We need to iterate over versions at update time under a read (or even write) 
lock on an index page, which blocks other write (read) operations for a 
relatively long period of time.
2) We hold all versions of a row in all indexes, which makes them use much more 
space than needed.
3) We cannot implement several important improvements (data streamer 
optimizations), because having several versions of one key in an index page 
doesn't allow using Invoke operations.
4) Write amplification affects not only the Data Store layer, but indexes as 
well, which makes read/lookup operations on indexes much slower.

Linking versions at the Data Store layer only (as other vendors do) solves these 
issues or reduces their impact.

So, the proposed changes are:

1) Change the data page layout, adding two fields to its header: {{link}} (a link 
to the next entry in the versions chain) and {{lock}} (a tx which holds a write 
lock on the entry). Possible optimizations: a) leave the lock as is (in the 
index leaf item); b) use the max version as the lock version as well.

  was:
Currently all entry versions are placed inside index trees. CacheDataTree is used 
to link versions to each other (using their order inside a data page).

Although this approach is easy to implement and preferable at first, it brings 
several disadvantages:

1) We need to iterate over versions at update time under a read (or even write) 
lock on an index page, which blocks other write (read) operations for a 
relatively long period of time.
2) We hold all versions of a row in all indexes, which makes them use much more 
space than needed.
3) We cannot implement several important improvements (data streamer 
optimizations), because having several versions of one key in an index page 
doesn't allow using Invoke operations.
4) Write amplification affects not only the Data Store layer, but indexes as 
well, which makes read/lookup operations on indexes much slower.


> MVCC: Link entry versions at the Data Store layer.
> --
>
> Key: IGNITE-11433
> URL: https://issues.apache.org/jira/browse/IGNITE-11433
> Project: Ignite
>  Issue Type: Improvement
>  Components: mvcc, sql
>Reporter: Igor Seliverstov
>Priority: Major
>
> Currently all entry versions are placed inside index trees. CacheDataTree is 
> used to link versions to each other (using their order inside a data page).
> Although this approach is easy to implement and preferable at first, it 
> brings several disadvantages:
> 1) We need to iterate over versions at update time under a read (or even 
> write) lock on an index page, which blocks other write (read) operations for a 
> relatively long period of time.
> 2) We hold all versions of a row in all indexes, which makes them use much 
> more space than needed.
> 3) We cannot implement several important improvements (data streamer 
> optimizations), because having several versions of one key in an index page 
> doesn't allow using Invoke operations.
> 4) Write amplification affects not only the Data Store layer, but indexes as 
> well, which makes read/lookup operations on indexes much slower.
> Linking versions at the Data Store layer only (as other vendors do) solves 
> these issues or reduces their impact.
> So, the proposed changes are:
> 1) Change the data page layout, adding two fields to its header: {{link}} (a 
> link to the next entry in the versions chain) and {{lock}} (a tx which holds a 
> write lock on the entry). Possible optimizations: a) leave the lock as is (in 
> the index leaf item); b) use the max version as the lock version as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11433) MVCC: Link entry versions at the Data Store layer.

2019-02-27 Thread Igor Seliverstov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-11433:
--
Description: 
Currently all entry versions are placed inside index trees. CacheDataTree is used 
to link versions to each other (using their order inside a data page).

Although this approach is easy to implement and preferable at first, it brings 
several disadvantages:

1) We need to iterate over versions at update time under a read (or even write) 
lock on an index page, which blocks other write (read) operations for a 
relatively long period of time.
2) We hold all versions of a row in all indexes, which makes them use much more 
space than needed.
3) We cannot implement several important improvements (data streamer 
optimizations), because having several versions of one key in an index page 
doesn't allow using Invoke operations.
4) Write amplification affects not only the Data Store layer, but indexes as 
well, which makes read/lookup operations on indexes much slower.

> MVCC: Link entry versions at the Data Store layer.
> --
>
> Key: IGNITE-11433
> URL: https://issues.apache.org/jira/browse/IGNITE-11433
> Project: Ignite
>  Issue Type: Improvement
>  Components: mvcc, sql
>Reporter: Igor Seliverstov
>Priority: Major
>
> Currently all entry versions are placed inside index trees. CacheDataTree is 
> used to link versions to each other (using their order inside a data page).
> Although this approach is easy to implement and preferable at first, it 
> brings several disadvantages:
> 1) We need to iterate over versions at update time under a read (or even 
> write) lock on an index page, which blocks other write (read) operations for a 
> relatively long period of time.
> 2) We hold all versions of a row in all indexes, which makes them use much 
> more space than needed.
> 3) We cannot implement several important improvements (data streamer 
> optimizations), because having several versions of one key in an index page 
> doesn't allow using Invoke operations.
> 4) Write amplification affects not only the Data Store layer, but indexes as 
> well, which makes read/lookup operations on indexes much slower.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11434) SQL: Create a view with list of existing COLUMNS

2019-02-27 Thread Yury Gerzhedovich (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-11434:
---
Labels: iep-29  (was: )

> SQL: Create a view with list of existing COLUMNS
> 
>
> Key: IGNITE-11434
> URL: https://issues.apache.org/jira/browse/IGNITE-11434
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Yury Gerzhedovich
>Priority: Major
>  Labels: iep-29
>
> Need to expose an SQL system view with COLUMNS information.
> We need to investigate in more depth which information should be included.
>  
> As a starting point we can take 
> [https://dev.mysql.com/doc/refman/8.0/en/columns-table.html] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11434) SQL: Create a view with list of existing COLUMNS

2019-02-27 Thread Yury Gerzhedovich (JIRA)
Yury Gerzhedovich created IGNITE-11434:
--

 Summary: SQL: Create a view with list of existing COLUMNS
 Key: IGNITE-11434
 URL: https://issues.apache.org/jira/browse/IGNITE-11434
 Project: Ignite
  Issue Type: Task
  Components: sql
Reporter: Yury Gerzhedovich


Need to expose an SQL system view with COLUMNS information.

We need to investigate in more depth which information should be included.

As a starting point we can take 
[https://dev.mysql.com/doc/refman/8.0/en/columns-table.html] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11433) MVCC: Link entry versions at the Data Store layer.

2019-02-27 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-11433:
-

 Summary: MVCC: Link entry versions at the Data Store layer.
 Key: IGNITE-11433
 URL: https://issues.apache.org/jira/browse/IGNITE-11433
 Project: Ignite
  Issue Type: Improvement
  Components: mvcc, sql
Reporter: Igor Seliverstov






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-11190) Fix Apache Ignite tests of Camel Streamer under Java 11

2019-02-27 Thread Roman Shtykh (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779143#comment-16779143
 ] 

Roman Shtykh edited comment on IGNITE-11190 at 2/27/19 11:04 AM:
-

Looks like Camel 3.x is needed to work with Java 11
 
[http://camel.465427.n5.nabble.com/CAMEL-3-Java-8-and-Java-11-discussion-td5827333.html#a5827406]

But 3.0.0-M1, which can be found in the online Maven repository, seems to be 
built with Java 1.8:

{{file ./org/apache/camel/main/Main.class}}
 {{./org/apache/camel/main/Main.class: compiled Java class data, version 52.0 
(Java 1.8)}}
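
For reference, the same check can be done from Java without the {{file}} 
utility; a small sketch that reads the class-file major version (52 = Java 8, 
55 = Java 11):

{noformat}
import java.io.DataInputStream;
import java.io.FileInputStream;

public class ClassFileVersion {
    public static void main(String[] args) throws Exception {
        // Path is the one from the comment above; adjust as needed.
        try (DataInputStream in = new DataInputStream(
                 new FileInputStream("org/apache/camel/main/Main.class"))) {
            in.readInt();                       // magic 0xCAFEBABE
            int minor = in.readUnsignedShort(); // minor version
            int major = in.readUnsignedShort(); // major version: 52 -> Java 8, 55 -> Java 11
            System.out.println("major=" + major + ", minor=" + minor);
        }
    }
}
{noformat}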

  


was (Author: roman_s):
Looks like Camel 3.x is needed to work with Java 11
 
[http://camel.465427.n5.nabble.com/CAMEL-3-Java-8-and-Java-11-discussion-td5827333.html#a5827406]

But 3.0.0-M1, which can be found in the online Maven repository, seems to be 
built with Java 1.8:

{{file ./org/apache/camel/main/Main.class}}
{{./org/apache/camel/main/Main.class: compiled Java class data, version 52.0 
(Java 1.8)}}
{{  }}
 

> Fix Apache Ignite tests of Camel Streamer under Java 11
> ---
>
> Key: IGNITE-11190
> URL: https://issues.apache.org/jira/browse/IGNITE-11190
> Project: Ignite
>  Issue Type: Bug
>Reporter: Dmitriy Pavlov
>Assignee: Roman Shtykh
>Priority: Major
>
> Under Java 11 tests failed with an Error 500 - internal server error
> https://ci.ignite.apache.org/viewLog.html?buildId=2973663=buildResultsDiv=IgniteTests24Java8_Streamers
> Probably we need to pass startup parameters to 3rd party product/JVM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-11190) Fix Apache Ignite tests of Camel Streamer under Java 11

2019-02-27 Thread Roman Shtykh (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779143#comment-16779143
 ] 

Roman Shtykh edited comment on IGNITE-11190 at 2/27/19 11:04 AM:
-

Looks like Camel 3.x is needed to work with Java 11
 
[http://camel.465427.n5.nabble.com/CAMEL-3-Java-8-and-Java-11-discussion-td5827333.html#a5827406]

But 3.0.0-M1, which can be found in the online Maven repository, seems to be 
built with Java 1.8:

{{file ./org/apache/camel/main/Main.class}}
{{./org/apache/camel/main/Main.class: compiled Java class data, version 52.0 
(Java 1.8)}}
{{  }}
 


was (Author: roman_s):
Looks like Camel 3.x is needed to work with Java 11
 
[http://camel.465427.n5.nabble.com/CAMEL-3-Java-8-and-Java-11-discussion-td5827333.html#a5827406]

But 3.0.0-M1, which can be found in the online Maven repository, seems to be 
built with Java 1.8:

{{$ file ./org/apache/camel/main/Main.class }}
{{./org/apache/camel/main/Main.class: compiled Java class data, version 52.0 
(Java 1.8)}}
 

> Fix Apache Ignite tests of Camel Streamer under Java 11
> ---
>
> Key: IGNITE-11190
> URL: https://issues.apache.org/jira/browse/IGNITE-11190
> Project: Ignite
>  Issue Type: Bug
>Reporter: Dmitriy Pavlov
>Assignee: Roman Shtykh
>Priority: Major
>
> Under Java 11 tests failed with an Error 500 - internal server error
> https://ci.ignite.apache.org/viewLog.html?buildId=2973663=buildResultsDiv=IgniteTests24Java8_Streamers
> Probably we need to pass startup parameters to 3rd party product/JVM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-11190) Fix Apache Ignite tests of Camel Streamer under Java 11

2019-02-27 Thread Roman Shtykh (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779143#comment-16779143
 ] 

Roman Shtykh edited comment on IGNITE-11190 at 2/27/19 11:03 AM:
-

Looks like Camel 3.x is needed to work with Java 11
 
[http://camel.465427.n5.nabble.com/CAMEL-3-Java-8-and-Java-11-discussion-td5827333.html#a5827406]

But 3.0.0-M1, which can be found in the online Maven repository, seems to be 
built with Java 1.8:

{{$ file ./org/apache/camel/main/Main.class }}
{{./org/apache/camel/main/Main.class: compiled Java class data, version 52.0 
(Java 1.8)}}
 


was (Author: roman_s):
Looks like Camel 3.x is needed to work with Java 11
http://camel.465427.n5.nabble.com/CAMEL-3-Java-8-and-Java-11-discussion-td5827333.html#a5827406

> Fix Apache Ignite tests of Camel Streamer under Java 11
> ---
>
> Key: IGNITE-11190
> URL: https://issues.apache.org/jira/browse/IGNITE-11190
> Project: Ignite
>  Issue Type: Bug
>Reporter: Dmitriy Pavlov
>Assignee: Roman Shtykh
>Priority: Major
>
> Under Java 11 tests failed with an Error 500 - internal server error
> https://ci.ignite.apache.org/viewLog.html?buildId=2973663=buildResultsDiv=IgniteTests24Java8_Streamers
> Probably we need to pass startup parameters to 3rd party product/JVM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11190) Fix Apache Ignite tests of Camel Streamer under Java 11

2019-02-27 Thread Roman Shtykh (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779143#comment-16779143
 ] 

Roman Shtykh commented on IGNITE-11190:
---

Looks like Camel 3.x is needed to work with Java 11
http://camel.465427.n5.nabble.com/CAMEL-3-Java-8-and-Java-11-discussion-td5827333.html#a5827406

> Fix Apache Ignite tests of Camel Streamer under Java 11
> ---
>
> Key: IGNITE-11190
> URL: https://issues.apache.org/jira/browse/IGNITE-11190
> Project: Ignite
>  Issue Type: Bug
>Reporter: Dmitriy Pavlov
>Assignee: Roman Shtykh
>Priority: Major
>
> Under Java 11 tests failed with an Error 500 - internal server error
> https://ci.ignite.apache.org/viewLog.html?buildId=2973663=buildResultsDiv=IgniteTests24Java8_Streamers
> Probably we need to pass startup parameters to 3rd party product/JVM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11432) Add ability to specify auto-generated consistent ID in IgniteConfiguration

2019-02-27 Thread Alexey Goncharuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-11432:
--
Description: 
Let's consider the following scenario:
1) A user starts an empty node; the node generates a consistent ID as a UUID and 
creates a persistence folder {{node00-UUID}}.
2) If the user cleans up the persistence directory, the node will generate 
another consistent ID.

Now the user has no option to specify the old consistent ID in configuration: 
if we set the consistent ID to the UUID, the persistence folder will be named 
{{UUID}}. If the user specifies {{node00-UUID}}, the folder will be named 
properly, but the actual consistent ID will be {{node00-UUID}}.

We need to add an option to specify the proper consistent ID.
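
For context, a minimal sketch of how the consistent ID is set explicitly today 
via {{IgniteConfiguration#setConsistentId}}; the value below is illustrative 
only, and this ticket is about letting the previously auto-generated ID be 
reused without renaming the persistence folder.

{noformat}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ConsistentIdExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Explicitly set the consistent ID. As described above, setting it to
        // the bare UUID changes the persistence folder name, while setting it
        // to "node00-UUID" changes the consistent ID itself.
        cfg.setConsistentId("example-consistent-id"); // illustrative value

        try (Ignite ignite = Ignition.start(cfg)) {
            System.out.println("consistentId=" + ignite.cluster().localNode().consistentId());
        }
    }
}
{noformat}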

> Add ability to specify auto-generated consistent ID in IgniteConfiguration
> --
>
> Key: IGNITE-11432
> URL: https://issues.apache.org/jira/browse/IGNITE-11432
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Priority: Major
>
> Let's consider the following scenario:
> 1) A user starts an empty node, the node generates a consistent ID as UUID 
> and creates a persistence folder {{node00-UUID}}
> 2) If a user cleans up the persistence directory, the node will generate 
> another consistent ID.
> Now, the user has no option to specify the old consistent ID in 
> configuration: if we set the conistent ID to the UUD, the persistece folder 
> will be named {{UUID}}. If the user specifies {{node00-UUID}}, the folder 
> will be named properly, but the actual consistent ID will be {{node00-UUID}}.
> We need to add an option to specify the proper consistent ID.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11432) Add ability to specify auto-generated consistent ID in IgniteConfiguration

2019-02-27 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-11432:
-

 Summary: Add ability to specify auto-generated consistent ID in 
IgniteConfiguration
 Key: IGNITE-11432
 URL: https://issues.apache.org/jira/browse/IGNITE-11432
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexey Goncharuk






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11299) During SSL Handshake GridNioServer.processWrite is invoked constantly

2019-02-27 Thread Ilya Kasnacheev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779048#comment-16779048
 ] 

Ilya Kasnacheev commented on IGNITE-11299:
--

Also fixed TcpDiscoverySslParametersTest.testNonExistentCipherSuite under Java 
11.

Still waiting for review!

> During SSL Handshake GridNioServer.processWrite is invoked constantly
> -
>
> Key: IGNITE-11299
> URL: https://issues.apache.org/jira/browse/IGNITE-11299
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.7
>Reporter: Ilya Kasnacheev
>Assignee: Ilya Kasnacheev
>Priority: Major
>  Labels: ssl
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Causes busy looping in processSelectionKeyOptimized()
> This also causes problems on Windows/Java 11, since if a key is always ready 
> for writing it will never be reported as ready for reading.
> The reason for this behavior is that during the handshake we never stop 
> listening for OP_WRITE.
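
For context, a generic {{java.nio}} sketch (not Ignite's actual GridNioServer 
code) of what clearing OP_WRITE interest after the outbound buffer drains looks 
like:

{noformat}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

final class WriteInterestHelper {
    /**
     * Writes as much as possible and clears OP_WRITE interest once the buffer
     * is drained, so the selector stops reporting the key as write-ready and
     * read-readiness can be observed again.
     */
    static void writeAndClearInterest(SelectionKey key, ByteBuffer buf) throws IOException {
        SocketChannel ch = (SocketChannel)key.channel();

        ch.write(buf);

        if (!buf.hasRemaining())
            key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
    }
}
{noformat}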



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7664) SQL: throw sane exception on unsupported SQL statements

2019-02-27 Thread Ignite TC Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779041#comment-16779041
 ] 

Ignite TC Bot commented on IGNITE-7664:
---

{panel:title=-- Run :: All: No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=3185837buildTypeId=IgniteTests24Java8_RunAll]

> SQL: throw sane exception on unsupported SQL statements
> ---
>
> Key: IGNITE-7664
> URL: https://issues.apache.org/jira/browse/IGNITE-7664
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Alexander Paschenko
>Assignee: Taras Ledkov
>Priority: Major
>  Labels: sql-stability
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Inspired by this SO issue:
> [https://stackoverflow.com/questions/48708238/ignite-database-create-schema-assertionerror]
> We should handle unsupported stuff more gracefully both in core code and 
> drivers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11431) SQL: Create a view with list of existing SCHEMAS

2019-02-27 Thread Yury Gerzhedovich (JIRA)
Yury Gerzhedovich created IGNITE-11431:
--

 Summary: SQL: Create a view with list of existing SCHEMAS
 Key: IGNITE-11431
 URL: https://issues.apache.org/jira/browse/IGNITE-11431
 Project: Ignite
  Issue Type: Task
  Components: sql
Reporter: Yury Gerzhedovich
Assignee: Yury Gerzhedovich
 Fix For: 2.8


We need to create a system view of currently available SQL schemas.

The minimal required information is the schema name.

The following information may also be considered:

1) a flag indicating whether it is a system or user schema

2) the number of usages of a schema.

Starting point: {{SqlSystemView}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11322) [USABILITY] Extend Node FAILED message by add consistentId if it exist

2019-02-27 Thread Sergey Antonov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779035#comment-16779035
 ] 

Sergey Antonov commented on IGNITE-11322:
-

[~6uest] looks good to me!

> [USABILITY] Extend Node FAILED message by add consistentId if it exist
> --
>
> Key: IGNITE-11322
> URL: https://issues.apache.org/jira/browse/IGNITE-11322
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7
>Reporter: ARomantsov
>Assignee: Andrey Kalinin
>Priority: Major
>  Labels: newbie, usability
> Fix For: 2.8
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Now I have only this: 
> [GridDiscoveryManager] Node FAILED: TcpDiscoveryNode 
> [id=f8cd73a1-8da5-4a07-b298-55634dd7c9f8, addrs=ArrayList [127.0.0.1], 
> sockAddrs=HashSet [/127.0.0.1:47500], discPort=47500, order=1, intOrder=1, 
> lastExchangeTime=1550141566893, loc=false, isClient=false]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11430) Error on querying IGNITE.LOCAL_SQL_RUNNING_QUERIES SQL view

2019-02-27 Thread Yury Gerzhedovich (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-11430:
---
Labels: iep-29  (was: )

> Error on querying IGNITE.LOCAL_SQL_RUNNING_QUERIES SQL view
> ---
>
> Key: IGNITE-11430
> URL: https://issues.apache.org/jira/browse/IGNITE-11430
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Yury Gerzhedovich
>Assignee: Yury Gerzhedovich
>Priority: Blocker
>  Labels: iep-29
> Fix For: 2.8
>
>
> Need to fix TIMESTAMP_WITH_TIMEZONE issue in the LOCAL_SQL_RUNNING_QUERIES 
> view.
> It appears when querying through a JDBC client, such as DBeaver:
> SELECT * FROM IGNITE.LOCAL_SQL_RUNNING_QUERIES;
>  
> [2019-02-27 
> 11:28:24,357][ERROR][client-connector-#56][ClientListenerNioListener] Failed 
> to process client request [req=JdbcQueryExecuteRequest [schemaName=PUBLIC, 
> pageSize=1024, maxRows=200, sqlQry=SELECT * FROM 
> IGNITE.LOCAL_SQL_RUNNING_QUERIES, args=Object[] [], 
> stmtType=ANY_STATEMENT_TYPE, autoCommit=true]]
> class org.apache.ignite.binary.BinaryObjectException: Custom objects are not 
> supported
>  at 
> org.apache.ignite.internal.processors.odbc.SqlListenerUtils.writeObject(SqlListenerUtils.java:219)
>  at 
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcUtils.writeItems(JdbcUtils.java:44)
>  at 
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcQueryExecuteResult.writeBinary(JdbcQueryExecuteResult.java:128)
>  at 
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcResponse.writeBinary(JdbcResponse.java:88)
>  at 
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcMessageParser.encode(JdbcMessageParser.java:91)
>  at 
> org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:198)
>  at 
> org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:48)
>  at 
> org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
>  at 
> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
>  at 
> org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
>  at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
>  at 
> org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11430) Error on querying IGNITE.LOCAL_SQL_RUNNING_QUERIES SQL view

2019-02-27 Thread Yury Gerzhedovich (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-11430:
---
Ignite Flags:   (was: Docs Required)

> Error on querying IGNITE.LOCAL_SQL_RUNNING_QUERIES SQL view
> ---
>
> Key: IGNITE-11430
> URL: https://issues.apache.org/jira/browse/IGNITE-11430
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Yury Gerzhedovich
>Assignee: Yury Gerzhedovich
>Priority: Blocker
> Fix For: 2.8
>
>
> Need to fix TIMESTAMP_WITH_TIMEZONE issue in the LOCAL_SQL_RUNNING_QUERIES 
> view.
> It appears when querying through a JDBC client, such as DBeaver:
> SELECT * FROM IGNITE.LOCAL_SQL_RUNNING_QUERIES;
>  
> [2019-02-27 
> 11:28:24,357][ERROR][client-connector-#56][ClientListenerNioListener] Failed 
> to process client request [req=JdbcQueryExecuteRequest [schemaName=PUBLIC, 
> pageSize=1024, maxRows=200, sqlQry=SELECT * FROM 
> IGNITE.LOCAL_SQL_RUNNING_QUERIES, args=Object[] [], 
> stmtType=ANY_STATEMENT_TYPE, autoCommit=true]]
> class org.apache.ignite.binary.BinaryObjectException: Custom objects are not 
> supported
>  at 
> org.apache.ignite.internal.processors.odbc.SqlListenerUtils.writeObject(SqlListenerUtils.java:219)
>  at 
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcUtils.writeItems(JdbcUtils.java:44)
>  at 
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcQueryExecuteResult.writeBinary(JdbcQueryExecuteResult.java:128)
>  at 
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcResponse.writeBinary(JdbcResponse.java:88)
>  at 
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcMessageParser.encode(JdbcMessageParser.java:91)
>  at 
> org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:198)
>  at 
> org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:48)
>  at 
> org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
>  at 
> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
>  at 
> org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
>  at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
>  at 
> org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11430) Error on querying IGNITE.LOCAL_SQL_RUNNING_QUERIES SQL view

2019-02-27 Thread Yury Gerzhedovich (JIRA)
Yury Gerzhedovich created IGNITE-11430:
--

 Summary: Error on querying IGNITE.LOCAL_SQL_RUNNING_QUERIES SQL 
view
 Key: IGNITE-11430
 URL: https://issues.apache.org/jira/browse/IGNITE-11430
 Project: Ignite
  Issue Type: Bug
  Components: sql
Reporter: Yury Gerzhedovich
Assignee: Yury Gerzhedovich
 Fix For: 2.8


Need to fix TIMESTAMP_WITH_TIMEZONE issue in the LOCAL_SQL_RUNNING_QUERIES view.

It appears when querying through a JDBC client, such as DBeaver:

SELECT * FROM IGNITE.LOCAL_SQL_RUNNING_QUERIES;

 

[2019-02-27 
11:28:24,357][ERROR][client-connector-#56][ClientListenerNioListener] Failed to 
process client request [req=JdbcQueryExecuteRequest [schemaName=PUBLIC, 
pageSize=1024, maxRows=200, sqlQry=SELECT * FROM 
IGNITE.LOCAL_SQL_RUNNING_QUERIES, args=Object[] [], 
stmtType=ANY_STATEMENT_TYPE, autoCommit=true]]
class org.apache.ignite.binary.BinaryObjectException: Custom objects are not 
supported
 at 
org.apache.ignite.internal.processors.odbc.SqlListenerUtils.writeObject(SqlListenerUtils.java:219)
 at 
org.apache.ignite.internal.processors.odbc.jdbc.JdbcUtils.writeItems(JdbcUtils.java:44)
 at 
org.apache.ignite.internal.processors.odbc.jdbc.JdbcQueryExecuteResult.writeBinary(JdbcQueryExecuteResult.java:128)
 at 
org.apache.ignite.internal.processors.odbc.jdbc.JdbcResponse.writeBinary(JdbcResponse.java:88)
 at 
org.apache.ignite.internal.processors.odbc.jdbc.JdbcMessageParser.encode(JdbcMessageParser.java:91)
 at 
org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:198)
 at 
org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:48)
 at 
org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
 at 
org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
 at 
org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
 at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
 at 
org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
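
A minimal JDBC reproduction sketch; the thin-driver URL and host are assumptions 
for a local node, and the query is the one from the log above:

{noformat}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RunningQueriesViewRepro {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
             Statement stmt = conn.createStatement();
             // Currently fails with BinaryObjectException because of the
             // TIMESTAMP_WITH_TIMEZONE column, as shown in the trace above.
             ResultSet rs = stmt.executeQuery("SELECT * FROM IGNITE.LOCAL_SQL_RUNNING_QUERIES")) {
            while (rs.next())
                System.out.println(rs.getString(1));
        }
    }
}
{noformat}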



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11428) Schemas don't show in dbeaver

2019-02-27 Thread Yury Gerzhedovich (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-11428:
---
Summary: Schemas don't show in dbeaver  (was: schemas don't show in dbeaver)

> Schemas don't show in dbeaver
> -
>
> Key: IGNITE-11428
> URL: https://issues.apache.org/jira/browse/IGNITE-11428
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Yury Gerzhedovich
>Priority: Major
>  Labels: iep-29
> Fix For: 2.8
>
>
> On the Database Navigator tab we can see just a single schema, PUBLIC. We 
> need to add support to the JDBC driver for showing all schemas except 
> INFORMATIONAL, because it is an H2 schema and contains incorrect information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11428) Schemas don't show in Dbeaver

2019-02-27 Thread Yury Gerzhedovich (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-11428:
---
Summary: Schemas don't show in Dbeaver  (was: Schemas don't show in dbeaver)

> Schemas don't show in Dbeaver
> -
>
> Key: IGNITE-11428
> URL: https://issues.apache.org/jira/browse/IGNITE-11428
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Yury Gerzhedovich
>Priority: Major
>  Labels: iep-29
> Fix For: 2.8
>
>
> On the Database Navigator tab we can see just a single schema, PUBLIC. We 
> need to add support to the JDBC driver for showing all schemas except 
> INFORMATIONAL, because it is an H2 schema and contains incorrect information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11428) schemas don't show in dbeaver

2019-02-27 Thread Yury Gerzhedovich (JIRA)
Yury Gerzhedovich created IGNITE-11428:
--

 Summary: schemas don't show in dbeaver
 Key: IGNITE-11428
 URL: https://issues.apache.org/jira/browse/IGNITE-11428
 Project: Ignite
  Issue Type: Task
  Components: sql
Reporter: Yury Gerzhedovich
 Fix For: 2.8


On the Database Navigator tab we can see just a single schema, PUBLIC. We need 
to add support to the JDBC driver for showing all schemas except INFORMATIONAL, 
because it is an H2 schema and contains incorrect information.
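
A small sketch of how clients such as DBeaver discover schemas through standard 
JDBC metadata; the thin-driver URL is an assumption for a local node, and the 
fix would make this call return all Ignite schemas:

{noformat}
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ListSchemas {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1")) {
            DatabaseMetaData meta = conn.getMetaData();

            // DBeaver populates its Database Navigator from this call;
            // today it effectively shows only PUBLIC.
            try (ResultSet rs = meta.getSchemas()) {
                while (rs.next())
                    System.out.println(rs.getString("TABLE_SCHEM"));
            }
        }
    }
}
{noformat}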



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)