[jira] [Commented] (IGNITE-11512) Add counter of partitions left for index rebuild in CacheGroupMetricsMXBean

2019-04-23 Thread Ivan Rakov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824634#comment-16824634
 ] 

Ivan Rakov commented on IGNITE-11512:
-

[~a-polyakov], some comments:
1) CacheGroupMetricsMXBean lacks @MXBeanDescription for the new method (you can 
override the method the way it's done in CacheMetricsMXBean).
2) Please put spaces around the binary operator '=':
{code:java}
metrics=new CacheGroupMetricsImpl();
{code}
3) Both places in code where you call "setIndexBuildCountPartitionsLeft" use 
SchemaIndexCacheVisitorImpl. Can we move "setIndexBuildCountPartitionsLeft" 
inside SchemaIndexCacheVisitorImpl to avoid code duplication?
4) Please add public modifier to 
CacheGroupMetrics#getIndexBuildCountPartitionsLeft
5) CacheGroupMetricsImpl.java:26 - unnecessary blank line
6) The abbreviation plugin reports "indexBuildCountPartitionsLeft" in 
CacheGroupMetricsImpl and some field names in 
CacheGroupMetricsMBeanWithIndexTest; please fix them.
7) From my point of view, it would be better to move all logic from 
CacheGroupMetricsMXBeanImpl to CacheGroupMetricsImpl 
(CacheGroupMetricsMXBeanImpl would just delegate all method calls to 
CacheGroupMetricsImpl). Advantages: we'd be able to unit-test cache group 
metrics without registering an MXBean, post metrics to another 
metrics consumer without registering an MXBean, and so on. What do you think?
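
A minimal sketch of items 1 and 7 combined; only the class and method names mentioned above are taken from the patch under review, the rest (fields, constructor, method bodies) is illustrative and not the actual Ignite code:
{code:java}
// Sketch: the MXBean interface documents the new method with @MXBeanDescription,
// while the MXBean implementation only delegates to a plain metrics class that
// can be unit-tested (or exported to another consumer) without registering JMX.
import org.apache.ignite.mxbean.MXBeanDescription;

interface CacheGroupMetricsMXBean {
    /** Number of partitions whose indexes still need to be rebuilt. */
    @MXBeanDescription("Count of partitions left to be processed by index rebuild.")
    int getIndexBuildCountPartitionsLeft();
}

/** Plain metrics holder: all logic lives here, no JMX dependency. */
class CacheGroupMetricsImpl {
    private volatile int idxBuildPartsLeft;

    public void setIndexBuildCountPartitionsLeft(int partsLeft) {
        idxBuildPartsLeft = partsLeft;
    }

    public int getIndexBuildCountPartitionsLeft() {
        return idxBuildPartsLeft;
    }
}

/** JMX facade: every call is delegated to CacheGroupMetricsImpl. */
class CacheGroupMetricsMXBeanImpl implements CacheGroupMetricsMXBean {
    private final CacheGroupMetricsImpl metrics;

    CacheGroupMetricsMXBeanImpl(CacheGroupMetricsImpl metrics) {
        this.metrics = metrics;
    }

    @Override public int getIndexBuildCountPartitionsLeft() {
        return metrics.getIndexBuildCountPartitionsLeft();
    }
}
{code}
With this split, a unit test can instantiate CacheGroupMetricsImpl directly, and any other metrics consumer can read it without touching JMX.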

> Add counter of partitions left for index rebuild in CacheGroupMetricsMXBean
> ---
>
> Key: IGNITE-11512
> URL: https://issues.apache.org/jira/browse/IGNITE-11512
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 2.7
>Reporter: Alexand Polyakov
>Assignee: Alexand Polyakov
>Priority: Major
>
> Currently, if an index rebuild is running, this can be determined only from CPU 
> load and thread dumps. A "how many partitions are left to index" metric would 
> help determine whether the rebuild is in progress and for which cache, as well 
> as the percentage of completion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11579) Add new commands to deal with garbage in partitions left after cache destroy in shared cache groups

2019-04-23 Thread Ivan Rakov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824585#comment-16824585
 ] 

Ivan Rakov commented on IGNITE-11579:
-

[~EdShangGG], I've looked through the changes and left some comments in the PR.
Also, let's create a documentation ticket for the new command.

> Add new commands to deal with garbage in partitions left after cache 
> destroy in shared cache groups
> -
>
> Key: IGNITE-11579
> URL: https://issues.apache.org/jira/browse/IGNITE-11579
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Eduard Shangareev
>Assignee: Eduard Shangareev
>Priority: Major
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The scenario of how to get into this situation (garbage in a partition) is 
> described in IGNITE-11578.
> We need to add new commands for control.sh:
> - show such keys;
> - remove such keys.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11256) Implement read-only mode for grid

2019-04-23 Thread Ignite TC Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824549#comment-16824549
 ] 

Ignite TC Bot commented on IGNITE-11256:


{panel:title=-- Run :: All: Possible 
Blockers|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}Platform .NET{color} [[tests 
17|https://ci.ignite.apache.org/viewLog.html?buildId=3682257]]
* exe: ClusterMetricsParityTest.TestClusterMetrics - 0,0% fails in last 185 
master runs.
* exe: ClusterParityTest.TestCluster - 0,0% fails in last 185 master runs.

{color:#d04437}Basic 1{color} [[tests 
9|https://ci.ignite.apache.org/viewLog.html?buildId=3682261]]
* IgniteBasicTestSuite: 
BaselineAutoAdjustInMemoryTest.testBaselineAutoAdjustThrowExceptionWhenBaselineChangedManually
 - 0,0% fails in last 31 master runs.
* IgniteBasicTestSuite: 
BaselineAutoAdjustInMemoryTest.testBaselineAutoAdjustDisabledAfterGridHasLostPart
 - 0,0% fails in last 31 master runs.
* IgniteBasicTestSuite: 
BaselineAutoAdjustInMemoryTest.shouldJoinSuccessBecauseClusterHasPersistentNode 
- 0,0% fails in last 31 master runs.
* IgniteBasicTestSuite: 
BPlusTreeFakeReuseSelfTest.testTestRandomPutRemoveMultithreaded_1_30_0 - 0,0% 
fails in last 185 master runs.
* IgniteBasicTestSuite: 
BaselineAutoAdjustInMemoryTest.testBaselineAutoAdjustSinceSecondNodeLeft - 0,0% 
fails in last 31 master runs.
* IgniteBasicTestSuite: 
BaselineAutoAdjustInMemoryTest.testBaselineAutoAdjustSinceCoordinatorLeft - 
0,0% fails in last 31 master runs.
* IgniteBasicTestSuite: 
BaselineAutoAdjustInMemoryTest.testBaselineAutoAdjustAfterNodeLeft - 0,0% fails 
in last 31 master runs.
* IgniteBasicTestSuite: 
BaselineAutoAdjustInMemoryTest.testBaselineAutoAdjustAfterNodeJoin - 0,0% fails 
in last 31 master runs.

{color:#d04437}Start Nodes{color} [[tests 
14|https://ci.ignite.apache.org/viewLog.html?buildId=3682263]]
* IgniteStartStopRestartTestSuite: 
IgniteProjectionStartStopRestartSelfTest.testCustomScript - 0,0% fails in last 
132 master runs.
* IgniteStartStopRestartTestSuite: 
IgniteProjectionStartStopRestartSelfTest.testRestartNodesByIds - 0,0% fails in 
last 132 master runs.

{color:#d04437}Platform .NET (Core Linux){color} [[tests 0 Exit Code 
|https://ci.ignite.apache.org/viewLog.html?buildId=3682259]]

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=3679361&buildTypeId=IgniteTests24Java8_RunAll]

> Implement read-only mode for grid
> -
>
> Key: IGNITE-11256
> URL: https://issues.apache.org/jira/browse/IGNITE-11256
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexei Scherbakov
>Assignee: Sergey Antonov
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Should be triggered from control.sh utility.
> Useful for maintenance work, for example checking partition consistency 
> (idle_verify)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11103) "Control utility --cache idle_verify --dump --cache-filter ALL" command result doesn't contain ignite-sys-cache group

2019-04-23 Thread Sergey Antonov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824477#comment-16824477
 ] 

Sergey Antonov commented on IGNITE-11103:
-

[~ilyak] I left some comments. Please fix them!

> "Control utility --cache idle_verify --dump --cache-filter ALL" comand result 
> doesn't contain ignite-sys-cache group
> 
>
> Key: IGNITE-11103
> URL: https://issues.apache.org/jira/browse/IGNITE-11103
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8
>Reporter: ARomantsov
>Assignee: Ilya Kasnacheev
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Found this issue while looking at the functionality added in 
> https://issues.apache.org/jira/browse/IGNITE-9980.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11798) Memory leak on unstable topology caused by partition reservation

2019-04-23 Thread Pavel Vinokurov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Vinokurov updated IGNITE-11798:
-
Affects Version/s: 2.7

> Memory leak on unstable topology caused by partition reservation
> 
>
> Key: IGNITE-11798
> URL: https://issues.apache.org/jira/browse/IGNITE-11798
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, sql
>Affects Versions: 2.7
>Reporter: Pavel Vinokurov
>Priority: Major
> Attachments: PartitionReservationReproducer.java
>
>
> Executing queries on an unstable topology leads to OOM caused by a leak of 
> partition reservations.
> The reproducer is attached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11798) Memory leak on unstable topology caused by partition reservation

2019-04-23 Thread Pavel Vinokurov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Vinokurov updated IGNITE-11798:
-
Summary: Memory leak on unstable topology caused by partition reservation  
(was: Memory leak on unstable topology caused by reservation partitions)

> Memory leak on unstable topology caused by partition reservation
> 
>
> Key: IGNITE-11798
> URL: https://issues.apache.org/jira/browse/IGNITE-11798
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, sql
>Reporter: Pavel Vinokurov
>Priority: Major
> Attachments: PartitionReservationReproducer.java
>
>
> Executing queries on an unstable topology leads to OOM caused by a leak of 
> partition reservations.
> The reproducer is attached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11798) Memory leak on unstable topology caused by reservation partitions

2019-04-23 Thread Pavel Vinokurov (JIRA)
Pavel Vinokurov created IGNITE-11798:


 Summary: Memory leak on unstable topology caused by reservation 
partitions
 Key: IGNITE-11798
 URL: https://issues.apache.org/jira/browse/IGNITE-11798
 Project: Ignite
  Issue Type: Bug
  Components: cache, sql
Reporter: Pavel Vinokurov
 Attachments: PartitionReservationReproducer.java

Executing queries on an unstable topology leads to OOM caused by a leak of 
partition reservations.
The reproducer is attached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11256) Implement read-only mode for grid

2019-04-23 Thread Ignite TC Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824266#comment-16824266
 ] 

Ignite TC Bot commented on IGNITE-11256:


{panel:title=-- Run :: All: Possible 
Blockers|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}Cache 2{color} [[tests 0 TIMEOUT , Exit Code 
|https://ci.ignite.apache.org/viewLog.html?buildId=3679311]]
* IgniteClientCacheStartFailoverTest.testClientStartCloseServersRestart (last 
started)

{color:#d04437}Platform .NET (Core Linux){color} [[tests 0 Exit Code 
|https://ci.ignite.apache.org/viewLog.html?buildId=3679333]]

{color:#d04437}Scala (Visor Console){color} [[tests 0 Exit Code 
|https://ci.ignite.apache.org/viewLog.html?buildId=3679283]]

{color:#d04437}Platform .NET{color} [[tests 
6|https://ci.ignite.apache.org/viewLog.html?buildId=3679332]]
* exe: ClusterMetricsParityTest.TestClusterMetrics - 0,0% fails in last 185 
master runs.
* exe: ContinuousQueryAbstractTest.TestTimeout - 0,0% fails in last 185 master 
runs.
* exe: ClusterParityTest.TestCluster - 0,0% fails in last 185 master runs.

{color:#d04437}SPI{color} [[tests 
2|https://ci.ignite.apache.org/viewLog.html?buildId=3679277]]
* IgniteSpiTestSuite: 
IgniteClientReconnectMassiveShutdownTest.testMassiveServersShutdown2 - 0,0% 
fails in last 127 master runs.

{color:#d04437}Cache 3{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3679312]]
* IgniteBinaryObjectsCacheTestSuite3: 
IgniteCacheGroupsTest.testCreateDestroyCachesMvccTxPartitioned - 0,0% fails in 
last 129 master runs.

{color:#d04437}Cache 6{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3679315]]
* IgniteCacheTestSuite6: 
CacheExchangeMergeTest.testFailExchangeCoordinatorChange_NoMerge_2 - 0,0% fails 
in last 125 master runs.

{color:#d04437}Basic 1{color} [[tests 
8|https://ci.ignite.apache.org/viewLog.html?buildId=3679289]]
* IgniteBasicTestSuite: 
BaselineAutoAdjustInMemoryTest.testBaselineAutoAdjustThrowExceptionWhenBaselineChangedManually
 - 0,0% fails in last 28 master runs.
* IgniteBasicTestSuite: 
BaselineAutoAdjustInMemoryTest.testBaselineAutoAdjustDisabledAfterGridHasLostPart
 - 0,0% fails in last 28 master runs.
* IgniteBasicTestSuite: 
BaselineAutoAdjustInMemoryTest.shouldJoinSuccessBecauseClusterHasPersistentNode 
- 0,0% fails in last 28 master runs.
* IgniteBasicTestSuite: 
BaselineAutoAdjustInMemoryTest.testBaselineAutoAdjustSinceSecondNodeLeft - 0,0% 
fails in last 28 master runs.
* IgniteBasicTestSuite: 
BaselineAutoAdjustInMemoryTest.testBaselineAutoAdjustSinceCoordinatorLeft - 
0,0% fails in last 28 master runs.
* IgniteBasicTestSuite: 
BaselineAutoAdjustInMemoryTest.testBaselineAutoAdjustAfterNodeLeft - 0,0% fails 
in last 28 master runs.
* IgniteBasicTestSuite: 
BaselineAutoAdjustInMemoryTest.testBaselineAutoAdjustAfterNodeJoin - 0,0% fails 
in last 28 master runs.

{color:#d04437}Start Nodes{color} [[tests 
14|https://ci.ignite.apache.org/viewLog.html?buildId=3679276]]
* IgniteStartStopRestartTestSuite: 
IgniteProjectionStartStopRestartSelfTest.testRestartNodesByIds - 0,0% fails in 
last 130 master runs.

{color:#d04437}Cache 9{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3679318]]
* IgniteCacheTestSuite9: 
IgniteTxCacheWriteSynchronizationModesMultithreadedTest.testMultithreadedFullSyncRestart
 - 0,0% fails in last 128 master runs.

{color:#d04437}Platform C++ (Win x64 / Release){color} [[tests 1 
BuildFailureOnMessage 
|https://ci.ignite.apache.org/viewLog.html?buildId=3679287]]
* IgniteOdbcTest: SslQueriesTestSuite: TestConnectionSslSuccess - 2,2% fails in 
last 359 master runs.

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=3679361&buildTypeId=IgniteTests24Java8_RunAll]

> Implement read-only mode for grid
> -
>
> Key: IGNITE-11256
> URL: https://issues.apache.org/jira/browse/IGNITE-11256
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexei Scherbakov
>Assignee: Sergey Antonov
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Should be triggered from control.sh utility.
> Useful for maintenance work, for example checking partition consistency 
> (idle_verify)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11470) Views don't show in Dbeaver

2019-04-23 Thread Taras Ledkov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824248#comment-16824248
 ] 

Taras Ledkov commented on IGNITE-11470:
---

[~jooger], the patch looks OK to me.

> Views don't show in Dbeaver
> ---
>
> Key: IGNITE-11470
> URL: https://issues.apache.org/jira/browse/IGNITE-11470
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Yury Gerzhedovich
>Assignee: Yury Gerzhedovich
>Priority: Major
>  Labels: iep-29
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Database Navigator tab does not show any views. At a minimum, SQL system 
> views should be visible there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7285) Add default query timeout

2019-04-23 Thread Andrew Mashenkov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824241#comment-16824241
 ] 

Andrew Mashenkov commented on IGNITE-7285:
--

[~samaitra], I've left a few comments on the PR.

> Add default query timeout
> -
>
> Key: IGNITE-7285
> URL: https://issues.apache.org/jira/browse/IGNITE-7285
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache, sql
>Affects Versions: 2.3
>Reporter: Valentin Kulichenko
>Assignee: Saikat Maitra
>Priority: Major
>  Labels: sql-stability
> Fix For: 2.8
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently it's possible to provide a timeout only at the query level. It would be 
> very useful to have a default timeout value provided at cache startup. Let's 
> add a {{CacheConfiguration#defaultQueryTimeout}} configuration property.
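
For context, a minimal sketch of how the timeout is set today, per query. The proposed {{CacheConfiguration#defaultQueryTimeout}} property does not exist yet, so it appears only in a comment; the class, cache and table names are illustrative:
{code:java}
// Sketch: currently the timeout must be set on every query via
// SqlFieldsQuery#setTimeout; the proposed CacheConfiguration#defaultQueryTimeout
// would supply this value automatically for queries against the cache.
import java.util.List;
import java.util.concurrent.TimeUnit;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class QueryTimeoutSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("Person");
            ccfg.setIndexedTypes(Integer.class, String.class);
            // Proposed (not yet existing): ccfg.setDefaultQueryTimeout(5_000);

            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(ccfg);

            // Current approach: an explicit per-query timeout.
            SqlFieldsQuery qry = new SqlFieldsQuery("select count(*) from String")
                .setTimeout(5, TimeUnit.SECONDS);

            List<List<?>> rows = cache.query(qry).getAll();

            System.out.println("Row count: " + rows.get(0).get(0));
        }
    }
}
{code}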



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11796) WAL recovery stopped with 'Partition consistency failure: newPageId'

2019-04-23 Thread Dmitriy Pavlov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824205#comment-16824205
 ] 

Dmitriy Pavlov commented on IGNITE-11796:
-

[~agoncharuk] could you please take a look at [the 
PR|https://github.com/apache/ignite/pull/6496/files] ?

> WAL recovery stopped with 'Partition consistency failure: newPageId'
> 
>
> Key: IGNITE-11796
> URL: https://issues.apache.org/jira/browse/IGNITE-11796
> Project: Ignite
>  Issue Type: Bug
>Reporter: Dmitriy Pavlov
>Assignee: Dmitriy Pavlov
>Priority: Blocker
> Fix For: 2.8, 2.7.5
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Testing https://repository.apache.org/content/repositories/orgapacheignite-1436/ 
> with the TC bot showed that there can be a situation when the DB contains an 
> incorrect record. This record probably does not lead to corruption (because 
> 2.7.0 works well with this DB). 
> But the validation introduced in 
> https://issues.apache.org/jira/browse/IGNITE-11030 stops WAL recovery.
> {noformat}
> java.util.concurrent.ExecutionException: java.lang.AssertionError: Partition 
> consistency failure: newPageId=1f3d3 (newPartId: 0) 
> pageId=101001af3d3 (partId: 26)
>   at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205)
>   at 
> org.apache.ignite.ci.di.IgniteTcBotModule.lambda$configure$0(IgniteTcBotModule.java:61)
>   at 
> com.google.inject.internal.ProviderInternalFactory.provision(ProviderInternalFactory.java:85)
>   at 
> com.google.inject.internal.InternalFactoryToInitializableAdapter.provision(InternalFactoryToInitializableAdapter.java:57)
>   at 
> com.google.inject.internal.ProviderInternalFactory.circularGet(ProviderInternalFactory.java:59)
>   at 
> com.google.inject.internal.InternalFactoryToInitializableAdapter.get(InternalFactoryToInitializableAdapter.java:47)
>   at com.google.inject.internal.InjectorImpl$1.get(InjectorImpl.java:1050)
>   at 
> com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1086)
>   at 
> org.apache.ignite.ci.web.auth.AuthenticationFilter.filter(AuthenticationFilter.java:127)
>   at 
> org.glassfish.jersey.server.ContainerFilteringStage.apply(ContainerFilteringStage.java:108)
>   at 
> org.glassfish.jersey.server.ContainerFilteringStage.apply(ContainerFilteringStage.java:44)
>   at org.glassfish.jersey.process.internal.Stages.process(Stages.java:173)
>   at 
> org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:245)
>   at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
>   at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
>   at 
> org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
>   at 
> org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232)
>   at 
> org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:679)
>   at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:392)
>   at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:365)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:318)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:857)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:535)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1340)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   at 
> 

[jira] [Created] (IGNITE-11797) Repair historical rebalancing for atomic and mixed tx-atomic cache groups.

2019-04-23 Thread Alexei Scherbakov (JIRA)
Alexei Scherbakov created IGNITE-11797:
--

 Summary: Repair historical rebalancing for atomic and mixed 
tx-atomic cache groups.
 Key: IGNITE-11797
 URL: https://issues.apache.org/jira/browse/IGNITE-11797
 Project: Ignite
  Issue Type: Bug
Reporter: Alexei Scherbakov


IGNITE-10078 only solves consistency problems for tx mode.

For atomic caches the rebalance consistency issues still remain and should be 
fixed together with an improvement of the atomic cache protocol consistency.

Mixed tx-atomic mode for a cache group should not be allowed at all.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11470) Views don't show in Dbeaver

2019-04-23 Thread Yury Gerzhedovich (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824144#comment-16824144
 ] 

Yury Gerzhedovich commented on IGNITE-11470:


[~tledkov-gridgain],

Please review the patch again after some modifications. 

 

Bot visa is in progress.

> Views don't show in Dbeaver
> ---
>
> Key: IGNITE-11470
> URL: https://issues.apache.org/jira/browse/IGNITE-11470
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Yury Gerzhedovich
>Assignee: Yury Gerzhedovich
>Priority: Major
>  Labels: iep-29
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Database Navigator tab does not show any views. At a minimum, SQL system 
> views should be visible there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11794) Remove initial counter from update counter contract.

2019-04-23 Thread Alexei Scherbakov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexei Scherbakov updated IGNITE-11794:
---
Description: 
We have the 
org.apache.ignite.internal.processors.cache.PartitionUpdateCounter#initial and 
org.apache.ignite.internal.processors.cache.PartitionUpdateCounter#updateInitial
 methods in the partition update counter contract, but they are not needed.

LWM should be used as the initial update counter.

 

  was:We gave 
org.apache.ignite.internal.processors.cache.PartitionUpdateCounter#initial and 
org.apache.ignite.internal.processors.cache.PartitionUpdateCounter#updateInitial
 method in patition update counter contract but they are not needed.


> Remove initial counter from update counter contract.
> 
>
> Key: IGNITE-11794
> URL: https://issues.apache.org/jira/browse/IGNITE-11794
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexei Scherbakov
>Priority: Major
>
> We have the 
> org.apache.ignite.internal.processors.cache.PartitionUpdateCounter#initial 
> and 
> org.apache.ignite.internal.processors.cache.PartitionUpdateCounter#updateInitial
>  methods in the partition update counter contract, but they are not needed.
> LWM should be used as the initial update counter.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11412) Actualize JUnit3TestLegacySupport class

2019-04-23 Thread Ivan Pavlukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824100#comment-16824100
 ] 

Ivan Pavlukhin commented on IGNITE-11412:
-

[~ivanan.fed], merged to master. Thank you for the contribution!

> Actualize JUnit3TestLegacySupport class
> ---
>
> Key: IGNITE-11412
> URL: https://issues.apache.org/jira/browse/IGNITE-11412
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Fedotov
>Assignee: Ivan Fedotov
>Priority: Major
>  Labels: iep-30
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Refactor the JUnit3TestLegacySupport class and remove it, if possible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11796) WAL recovery stopped with 'Partition consistency failure: newPageId'

2019-04-23 Thread Dmitriy Pavlov (JIRA)
Dmitriy Pavlov created IGNITE-11796:
---

 Summary: WAL recovery stopped with 'Partition consistency failure: 
newPageId'
 Key: IGNITE-11796
 URL: https://issues.apache.org/jira/browse/IGNITE-11796
 Project: Ignite
  Issue Type: Bug
Reporter: Dmitriy Pavlov
Assignee: Dmitriy Pavlov
 Fix For: 2.8, 2.7.5


Testing https://repository.apache.org/content/repositories/orgapacheignite-1436/ 
with the TC bot showed that there can be a situation when the DB contains an incorrect 
record. This record probably does not lead to corruption (because 2.7.0 works 
well with this DB). 

But the validation introduced in https://issues.apache.org/jira/browse/IGNITE-11030 
stops WAL recovery.

{noformat}
java.util.concurrent.ExecutionException: java.lang.AssertionError: Partition 
consistency failure: newPageId=1f3d3 (newPartId: 0) 
pageId=101001af3d3 (partId: 26)
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205)
at 
org.apache.ignite.ci.di.IgniteTcBotModule.lambda$configure$0(IgniteTcBotModule.java:61)
at 
com.google.inject.internal.ProviderInternalFactory.provision(ProviderInternalFactory.java:85)
at 
com.google.inject.internal.InternalFactoryToInitializableAdapter.provision(InternalFactoryToInitializableAdapter.java:57)
at 
com.google.inject.internal.ProviderInternalFactory.circularGet(ProviderInternalFactory.java:59)
at 
com.google.inject.internal.InternalFactoryToInitializableAdapter.get(InternalFactoryToInitializableAdapter.java:47)
at com.google.inject.internal.InjectorImpl$1.get(InjectorImpl.java:1050)
at 
com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1086)
at 
org.apache.ignite.ci.web.auth.AuthenticationFilter.filter(AuthenticationFilter.java:127)
at 
org.glassfish.jersey.server.ContainerFilteringStage.apply(ContainerFilteringStage.java:108)
at 
org.glassfish.jersey.server.ContainerFilteringStage.apply(ContainerFilteringStage.java:44)
at org.glassfish.jersey.process.internal.Stages.process(Stages.java:173)
at 
org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:245)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
at 
org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
at 
org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232)
at 
org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:679)
at 
org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:392)
at 
org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
at 
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:365)
at 
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:318)
at 
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:857)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:535)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1340)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1242)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:503)

[jira] [Updated] (IGNITE-11786) Implement thread-local stack for tracking page locks

2019-04-23 Thread Dmitriy Govorukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Govorukhin updated IGNITE-11786:

Description: 
The new structure should work as a stack. 
When a thread obtains a lock, we push the pageId (+meta) onto the top of the stack; 
when the thread releases the lock, we pop the pageId from the stack. There are 
cases when a thread may unlock a page not from the current stack frame but from a 
previous one (e.g. some page splits in the B-tree); in this case we should go down 
the stack, find this page and update its meta.

The stack should implement PageLockListener and provide functionality for 
tracking page locks.
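
A minimal sketch of such a thread-local stack, reusing the push/pop signatures and R/W flags from the LockStack draft quoted below; everything else (class name, array layout, resize policy) is illustrative and not the actual Ignite implementation:
{code:java}
// Sketch: push on lock acquire, pop on release. If a page is released out of
// LIFO order (e.g. after a B-tree page split), scan down from the top, drop
// that entry and compact the stack. Names and layout are illustrative only.
import java.util.Arrays;

class ThreadLocalPageLockStack {
    /** Lock flags, as in the LockStack draft. */
    static final int R = 1;
    static final int W = 2;

    private static final ThreadLocal<ThreadLocalPageLockStack> STACK =
        ThreadLocal.withInitial(ThreadLocalPageLockStack::new);

    private long[] pageIds = new long[64];
    private int[] flags = new int[64];
    private int top;

    static ThreadLocalPageLockStack current() {
        return STACK.get();
    }

    /** Called when the current thread acquires a page lock. */
    void push(int cacheId, long pageId, int lockFlags) {
        if (top == pageIds.length) {
            pageIds = Arrays.copyOf(pageIds, top * 2);
            flags = Arrays.copyOf(flags, top * 2);
        }

        pageIds[top] = pageId;
        flags[top] = lockFlags;
        top++;
    }

    /** Called when the current thread releases a page lock. */
    void pop(int cacheId, long pageId, int lockFlags) {
        // Fast path: the lock is released from the current frame (LIFO order).
        if (top > 0 && pageIds[top - 1] == pageId) {
            top--;

            return;
        }

        // Slow path: the page was unlocked not from the current frame; find it
        // lower in the stack and compact the arrays.
        for (int i = top - 2; i >= 0; i--) {
            if (pageIds[i] == pageId) {
                System.arraycopy(pageIds, i + 1, pageIds, i, top - i - 1);
                System.arraycopy(flags, i + 1, flags, i, top - i - 1);
                top--;

                return;
            }
        }

        // A pop without a matching push indicates a tracking bug.
        throw new IllegalStateException("Unlocked page was not tracked: " + pageId);
    }
}
{code}
A production version would presumably also keep the cacheId and extra meta per entry and expose a dump of the stack for diagnostics.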

  was:
The new structure should work as a stack. 
When thread obtains lock we push pageId (+meta) on the top of the stack when 
thread release lock we pop pageId from the stack. Their cases when thread may 
unlock page not from current thread frame (some split pages in B-tree), from 
previous, in this case, we should go down to stack and find this page and 
update meta.

{code}
public interface LockStack {
void push(int cacheId, long pageId, int flags);

void pop(int cacheId, long pageId, int flags);

int R = 1;
int W = 2;
}
{code}


> Implement thread-local stack for tracking page locks
> 
>
> Key: IGNITE-11786
> URL: https://issues.apache.org/jira/browse/IGNITE-11786
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Dmitriy Govorukhin
>Assignee: Dmitriy Govorukhin
>Priority: Major
> Fix For: 2.8
>
>
> The new structure should work as a stack. 
> When a thread obtains a lock, we push the pageId (+meta) onto the top of the stack; 
> when the thread releases the lock, we pop the pageId from the stack. There are 
> cases when a thread may unlock a page not from the current stack frame but from a 
> previous one (e.g. some page splits in the B-tree); in this case we should go down 
> the stack, find this page and update its meta.
> The stack should implement PageLockListener and provide functionality for 
> tracking page locks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11412) Actualize JUnit3TestLegacySupport class

2019-04-23 Thread Ivan Fedotov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824054#comment-16824054
 ] 

Ivan Fedotov commented on IGNITE-11412:
---

[~Pavlukhin], sorry, I did not notice the latest master changes.

Conflicts are resolved.

> Actualize JUnit3TestLegacySupport class
> ---
>
> Key: IGNITE-11412
> URL: https://issues.apache.org/jira/browse/IGNITE-11412
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Fedotov
>Assignee: Ivan Fedotov
>Priority: Major
>  Labels: iep-30
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Refactor the JUnit3TestLegacySupport class and remove it, if possible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11790) Optimize rebalance history calculation.

2019-04-23 Thread Alexei Scherbakov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexei Scherbakov updated IGNITE-11790:
---
Description: 
Currently we pass initial update counters to the coordinator during PME.

But this is not needed for calculating the rebalance history.

It can be calculated as: maxCntr - updateCounter (the last counter for sequential 
history).

Moreover, this leads to excessive rebalance volume.

 

 

 

  was:
Currently we pass initial update counters to coordinator during PME.

But this is not needed for calculation rebalance history.

It can be calculated like: maxCntr - updateCounter(last counter for sequential 
history)

 

 

 


> Optimize rebalance history calculation.
> ---
>
> Key: IGNITE-11790
> URL: https://issues.apache.org/jira/browse/IGNITE-11790
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexei Scherbakov
>Priority: Major
>
> Currently we pass initial update counters to the coordinator during PME.
> But this is not needed for calculating the rebalance history.
> It can be calculated as: maxCntr - updateCounter (the last counter for 
> sequential history).
> Moreover, this leads to excessive rebalance volume.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11795) JDBC thin datastreamer doesn't throw an exception in case of problems on closing the streamer.

2019-04-23 Thread Yury Gerzhedovich (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-11795:
---
Component/s: sql

> JDBC thin datastreamer doesn't throw an exception in case of problems on closing 
> the streamer.
> --
>
> Key: IGNITE-11795
> URL: https://issues.apache.org/jira/browse/IGNITE-11795
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc, sql, thin client
>Reporter: Sergey Antonov
>Priority: Major
> Fix For: 2.8
>
>
> Currently we can't detect problems from code if errors occur while closing the 
> JDBC thin datastreamer; we can only detect them in the logs. The main reason for 
> this is using {{U.close()}} for streamers in 
> {{SqlClientContext#disableStreaming()}}. 
> You could add the test below to {{JdbcThinStreamingAbstractSelfTest}} and reproduce 
> the problem.
> {code:java}
> /**
>  * @throws Exception if failed.
>  */
> @Test
> public void testStreamedInsertFailsOnReadOnlyMode() throws Exception {
> for (Ignite grid : G.allGrids())
> ((IgniteEx) grid).context().cache().context().readOnlyMode(true);
> try {
> boolean failed = false;
> try (Connection ordinalCon = createOrdinaryConnection();
>  Statement selectStmt = ordinalCon.createStatement()
> ) {
> try (ResultSet rs = selectStmt.executeQuery("select count(*) 
> from PUBLIC.Person")) {
> assertTrue(rs.next());
> assertEquals(0, rs.getLong(1));
> }
> try (Connection conn = createStreamedConnection(true)) {
> try (PreparedStatement stmt =
>  conn.prepareStatement("insert into 
> PUBLIC.Person(\"id\", \"name\") values (?, ?)")
> ) {
> for (int i = 1; i <= 2; i++) {
> stmt.setInt(1, i);
> stmt.setString(2, nameForId(i));
> stmt.executeUpdate();
> }
> }
> }
> catch (Exception e) {
> log.error("Insert failed", e);
> failed = true;
> }
> try (ResultSet rs = selectStmt.executeQuery("select count(*) 
> from PUBLIC.Person")) {
> assertTrue(rs.next());
> assertEquals("Insert should be failed!", 0, 
> rs.getLong(1));
> }
> }
> assertTrue("Insert should be failed!", failed);
> }
> finally {
> for (Ignite grid : G.allGrids())
> ((IgniteEx) 
> grid).context().cache().context().readOnlyMode(false);
> }
> }
> {code}
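
To make the root cause concrete, a small hedged sketch of why a quiet close hides the failure from the caller; StreamerLike and quietClose are hypothetical stand-ins for the streamer and the {{U.close()}}-style helper, not Ignite's actual API:
{code:java}
// Hypothetical illustration: a quiet close only logs, so the caller cannot tell
// that flushing the streamer failed on close; propagating (or collecting and
// rethrowing) the exception makes the failure visible to the JDBC client.
import java.io.IOException;

public class QuietCloseSketch {
    /** Stand-in for a data streamer whose close() flushes buffered rows. */
    static class StreamerLike implements AutoCloseable {
        @Override public void close() throws IOException {
            throw new IOException("Flush on close failed");
        }
    }

    /** The quiet-close pattern: swallow and log. The caller never finds out. */
    static void quietClose(AutoCloseable c) {
        try {
            c.close();
        }
        catch (Exception e) {
            System.err.println("Failed to close resource: " + e); // log only
        }
    }

    public static void main(String[] args) {
        quietClose(new StreamerLike()); // the error is invisible to the caller

        System.out.println("Caller proceeded as if streaming succeeded");

        try {
            try (StreamerLike ignored = new StreamerLike()) {
                // no-op: the close() failure propagates out of try-with-resources
            }
        }
        catch (IOException e) {
            System.out.println("Caller observed the close failure: " + e.getMessage());
        }
    }
}
{code}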



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11795) JDBC thin datastreamer doesn't throw an exception in case of problems on closing the streamer.

2019-04-23 Thread Sergey Antonov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Antonov updated IGNITE-11795:

Description: 
Currently we can't detect problems from code if errors occur while closing the 
JDBC thin datastreamer; we can only detect them in the logs. The main reason for 
this is using {{U.close()}} for streamers in 
{{SqlClientContext#disableStreaming()}}. 

You could add the following test to {{JdbcThinStreamingAbstractSelfTest}} and reproduce 
the problem.
{code:java}
/**
 * @throws Exception if failed.
 */
@Test
public void testStreamedInsertFailsOnReadOnlyMode() throws Exception {
for (Ignite grid : G.allGrids())
((IgniteEx) grid).context().cache().context().readOnlyMode(true);

try {
boolean failed = false;

try (Connection ordinalCon = createOrdinaryConnection();
 Statement selectStmt = ordinalCon.createStatement()
) {
try (ResultSet rs = selectStmt.executeQuery("select count(*) 
from PUBLIC.Person")) {
assertTrue(rs.next());

assertEquals(0, rs.getLong(1));
}

try (Connection conn = createStreamedConnection(true)) {
try (PreparedStatement stmt =
 conn.prepareStatement("insert into 
PUBLIC.Person(\"id\", \"name\") values (?, ?)")
) {
for (int i = 1; i <= 2; i++) {
stmt.setInt(1, i);
stmt.setString(2, nameForId(i));

stmt.executeUpdate();
}
}
}
catch (Exception e) {
log.error("Insert failed", e);

failed = true;
}

try (ResultSet rs = selectStmt.executeQuery("select count(*) 
from PUBLIC.Person")) {
assertTrue(rs.next());

assertEquals("Insert should be failed!", 0, rs.getLong(1));
}
}

assertTrue("Insert should be failed!", failed);
}
finally {
for (Ignite grid : G.allGrids())
((IgniteEx) 
grid).context().cache().context().readOnlyMode(false);
}
}
{code}



  was:Now from code can't detect problems, If some errors occurs in closing 
jdbc thin datastreamer. Now we could detect it in logs only. The main reason is 
using `U.close()` for streamers in 


> JDBC thin datastreamer doesn't throw an exception in case of problems on closing 
> the streamer.
> --
>
> Key: IGNITE-11795
> URL: https://issues.apache.org/jira/browse/IGNITE-11795
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc, thin client
>Reporter: Sergey Antonov
>Priority: Major
> Fix For: 2.8
>
>
> Currently we can't detect problems from code if errors occur while closing the 
> JDBC thin datastreamer; we can only detect them in the logs. The main reason for 
> this is using {{U.close()}} for streamers in 
> {{SqlClientContext#disableStreaming()}}. 
> You could add the test below to {{JdbcThinStreamingAbstractSelfTest}} and reproduce 
> the problem.
> {code:java}
> /**
>  * @throws Exception if failed.
>  */
> @Test
> public void testStreamedInsertFailsOnReadOnlyMode() throws Exception {
> for (Ignite grid : G.allGrids())
> ((IgniteEx) grid).context().cache().context().readOnlyMode(true);
> try {
> boolean failed = false;
> try (Connection ordinalCon = createOrdinaryConnection();
>  Statement selectStmt = ordinalCon.createStatement()
> ) {
> try (ResultSet rs = selectStmt.executeQuery("select count(*) 
> from PUBLIC.Person")) {
> assertTrue(rs.next());
> assertEquals(0, rs.getLong(1));
> }
> try (Connection conn = createStreamedConnection(true)) {
> try (PreparedStatement stmt =
>  conn.prepareStatement("insert into 
> PUBLIC.Person(\"id\", \"name\") values (?, ?)")
> ) {
> for (int i = 1; i <= 2; i++) {
> stmt.setInt(1, i);
> stmt.setString(2, nameForId(i));
> stmt.executeUpdate();
> }
> }
> }
> catch (Exception e) {
> log.error("Insert failed", e);
> failed = true;
> }
> try (ResultSet rs = selectStmt.executeQuery("select count(*) 
> from PUBLIC.Person")) {
>

[jira] [Updated] (IGNITE-11795) JDBC thin datastreamer doesn't throw an exception in case of problems on closing the streamer.

2019-04-23 Thread Sergey Antonov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Antonov updated IGNITE-11795:

Description: Currently we can't detect problems from code if errors occur while 
closing the JDBC thin datastreamer; we can only detect them in the logs. The main 
reason is using `U.close()` for streamers in 

> JDBC thin datastreamer doesn't throw an exception in case of problems on closing 
> the streamer.
> --
>
> Key: IGNITE-11795
> URL: https://issues.apache.org/jira/browse/IGNITE-11795
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc, thin client
>Reporter: Sergey Antonov
>Priority: Major
> Fix For: 2.8
>
>
> Currently we can't detect problems from code if errors occur while closing the 
> JDBC thin datastreamer; we can only detect them in the logs. The main reason is 
> using `U.close()` for streamers in 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11795) JDBC thin datastreamer doesn't throw an exception in case of problems on closing the streamer.

2019-04-23 Thread Sergey Antonov (JIRA)
Sergey Antonov created IGNITE-11795:
---

 Summary: JDBC thin datastreamer doesn't throw an exception in case of 
problems on closing the streamer.
 Key: IGNITE-11795
 URL: https://issues.apache.org/jira/browse/IGNITE-11795
 Project: Ignite
  Issue Type: Bug
  Components: jdbc, thin client
Reporter: Sergey Antonov
 Fix For: 2.8






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-11412) Actualize JUnit3TestLegacySupport class

2019-04-23 Thread Ivan Pavlukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823963#comment-16823963
 ] 

Ivan Pavlukhin edited comment on IGNITE-11412 at 4/23/19 11:28 AM:
---

[~ivanan.fed] could you please merge fresh master into PR and resolve 
conflicts? Automatic merge is not possible without it.


was (Author: pavlukhin):
[~ivanan.fed] could you please merge fresh master into and resolve conflicts in 
your PR? Automatic merge is not possible without it.

> Actualize JUnit3TestLegacySupport class
> ---
>
> Key: IGNITE-11412
> URL: https://issues.apache.org/jira/browse/IGNITE-11412
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Fedotov
>Assignee: Ivan Fedotov
>Priority: Major
>  Labels: iep-30
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Refactor the JUnit3TestLegacySupport class and remove it, if possible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11657) Deadlock on Cache.putAll(Map)

2019-04-23 Thread Gaurav Aggarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823982#comment-16823982
 ] 

Gaurav Aggarwal commented on IGNITE-11657:
--

Even DataStreamers face the same kind of issues; that means we have to get rid 
of them too.

> Deadlock on Cache.putAll(Map)
> -
>
> Key: IGNITE-11657
> URL: https://issues.apache.org/jira/browse/IGNITE-11657
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.5
>Reporter: Gaurav Aggarwal
>Priority: Major
>
> We have been seeing consistent deadlocks with the latest Ignite versions on 
> Cache.putAll, as putAll tries to lock all the keys before updating them. 
>   
>  As per the documentation (below), putAll should be equivalent to individual 
> iterative puts, and these individual puts should behave atomically but not the 
> entire putAll. But the error logs pasted further below seem to suggest otherwise.
>   
>  *putAll Documentation*
> h5. void javax.cache.Cache.putAll(Map map)
> Copies all of the entries from the specified map to the {{Cache}}.
>  The effect of this call is equivalent to that of calling {{put(k, v)}} on 
> this cache once for each mapping from key {{k}} to value {{v}} in the 
> specified map.
>  The order in which the individual puts occur is undefined.
>  The behavior of this operation is undefined if entries in the cache 
> corresponding to entries in the map are modified or removed while this 
> operation is in progress. or if map is modified while the operation is in 
> progress.
>  In Default Consistency mode, individual puts occur atomically but not the 
> entire putAll. Listeners may observe individual updates.
>   
>   
>  *Error Log suggesting otherwise*
>   
>  Deadlock: true
>  Completed: 12808
>  Thread [name="sys-stripe-3-#4%VPS%", id=27, state=WAITING, blockCnt=3, 
> waitCnt=121340|#4%VPS%", id=27, state=WAITING, blockCnt=3, waitCnt=121340]
>    Lock 
> [object=java.util.concurrent.locks.ReentrantLock$NonfairSync@138205af, 
> ownerName=sys-stripe-26-#27%VPS%, ownerId=50]
>    at sun.misc.Unsafe.park(Native Method)
>    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>    at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>    at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
>    at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
>    at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
>     at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
>    at 
> o.a.i.i.processors.cache.GridCacheMapEntry.lockEntry(GridCacheMapEntry.java:4272)
>    at 
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.lockEntries(GridDhtAtomicCache.java:2848)
>    at 
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1706)
>    at 
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1628)
>    at 
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.processNearAtomicUpdateRequest(GridDhtAtomicCache.java:3056)
>    at 
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$400(GridDhtAtomicCache.java:130)
>    at 
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:266)
>    at 
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:261)
>    at 
> o.a.i.i.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1054)
>    at 
> o.a.i.i.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
>    at 
> o.a.i.i.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378)
>    at 
> o.a.i.i.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304)
>    at 
> o.a.i.i.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:99)
>    at 
> o.a.i.i.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:293)
>    at 
> o.a.i.i.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
>    at 
> o.a.i.i.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184)
>    at 
> o.a.i.i.managers.communication.GridIoManager.access$4200(GridIoManager.java:125)
>    at 
> o.a.i.i.managers.communication.GridIoManager$9.run(GridIoManager.java:1091)
>    at o.a.i.i.util.StripedExecutor$Stripe.run(StripedExecutor.java:511)
>    at java.lang.Thread.run(Thread.java:748)
>   
> I have tried sorting my keys but that doesn't help.
>   
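
For reference, the key-ordering mitigation mentioned above (feeding putAll a map with a consistent iteration order so that concurrent bulk updates take per-key locks in the same order) usually looks like the hedged sketch below; the cache name and keys are illustrative, and it evidently did not resolve the reporter's case:
{code:java}
// Sketch of the ordering-based mitigation: a TreeMap iterates keys in ascending
// order, so two threads calling putAll over overlapping key sets acquire the
// per-key locks in the same order. Cache name and keys are illustrative.
import java.util.Map;
import java.util.TreeMap;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class SortedPutAllSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("example");

            Map<Integer, String> batch = new TreeMap<>();

            for (int i = 0; i < 100; i++)
                batch.put(i, "value-" + i);

            // All writers build their batches as sorted maps before calling putAll.
            cache.putAll(batch);
        }
    }
}
{code}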



--
This message was 

[jira] [Updated] (IGNITE-11657) Deadlock on Cache.putAll(Map)

2019-04-23 Thread Gaurav Aggarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Aggarwal updated IGNITE-11657:
-
Affects Version/s: 2.7

> Deadlock on Cache.putAll(Map)
> -
>
> Key: IGNITE-11657
> URL: https://issues.apache.org/jira/browse/IGNITE-11657
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.5, 2.7
>Reporter: Gaurav Aggarwal
>Priority: Major
>
> We have been seeing consistent deadlocks with the latest Ignite versions on 
> Cache.putAll, as putAll tries to lock all the keys before updating them. 
>   
>  As per the documentation (below), putAll should be equivalent to individual 
> iterative puts, and these individual puts should behave atomically but not the 
> entire putAll. But the error logs pasted further below seem to suggest otherwise.
>   
>  *putAll Documentation*
> h5. void javax.cache.Cache.putAll(Map map)
> Copies all of the entries from the specified map to the {{Cache}}.
>  The effect of this call is equivalent to that of calling {{put(k, v)}} on 
> this cache once for each mapping from key {{k}} to value {{v}} in the 
> specified map.
>  The order in which the individual puts occur is undefined.
>  The behavior of this operation is undefined if entries in the cache 
> corresponding to entries in the map are modified or removed while this 
> operation is in progress. or if map is modified while the operation is in 
> progress.
>  In Default Consistency mode, individual puts occur atomically but not the 
> entire putAll. Listeners may observe individual updates.
>   
>   
>  *Error Log suggesting otherwise*
>   
>  Deadlock: true
>  Completed: 12808
>  Thread [name="sys-stripe-3-#4%VPS%", id=27, state=WAITING, blockCnt=3, 
> waitCnt=121340|#4%VPS%", id=27, state=WAITING, blockCnt=3, waitCnt=121340]
>    Lock 
> [object=java.util.concurrent.locks.ReentrantLock$NonfairSync@138205af, 
> ownerName=sys-stripe-26-#27%VPS%, ownerId=50]
>    at sun.misc.Unsafe.park(Native Method)
>    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>    at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>    at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
>    at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
>    at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
>     at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
>    at 
> o.a.i.i.processors.cache.GridCacheMapEntry.lockEntry(GridCacheMapEntry.java:4272)
>    at 
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.lockEntries(GridDhtAtomicCache.java:2848)
>    at 
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1706)
>    at 
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1628)
>    at 
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.processNearAtomicUpdateRequest(GridDhtAtomicCache.java:3056)
>    at 
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$400(GridDhtAtomicCache.java:130)
>    at 
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:266)
>    at 
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:261)
>    at 
> o.a.i.i.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1054)
>    at 
> o.a.i.i.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
>    at 
> o.a.i.i.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378)
>    at 
> o.a.i.i.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304)
>    at 
> o.a.i.i.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:99)
>    at 
> o.a.i.i.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:293)
>    at 
> o.a.i.i.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
>    at 
> o.a.i.i.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184)
>    at 
> o.a.i.i.managers.communication.GridIoManager.access$4200(GridIoManager.java:125)
>    at 
> o.a.i.i.managers.communication.GridIoManager$9.run(GridIoManager.java:1091)
>    at o.a.i.i.util.StripedExecutor$Stripe.run(StripedExecutor.java:511)
>    at java.lang.Thread.run(Thread.java:748)
>   
> I have tried sorting my keys but that doesn't help.
>   



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11657) Deadlock on Cache.putAll(Map)

2019-04-23 Thread Gaurav Aggarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Aggarwal updated IGNITE-11657:
-
Description: 
We have been seeing consistent deadlocks with the latest Ignite versions, as putAll 
tries to lock all the keys before updating them. 
  
 As per the documentation (below), putAll should be equivalent to individual 
iterative puts, and these individual puts should behave atomically but not the 
entire putAll. But the error logs pasted further below seem to suggest otherwise.
  
 *putAll Documentation*
h5. void javax.cache.Cache.putAll(Map map)

Copies all of the entries from the specified map to the {{Cache}}.
 The effect of this call is equivalent to that of calling {{put(k, v)}} on this 
cache once for each mapping from key {{k}} to value {{v}} in the specified map.
 The order in which the individual puts occur is undefined.
 The behavior of this operation is undefined if entries in the cache 
corresponding to entries in the map are modified or removed while this 
operation is in progress. or if map is modified while the operation is in 
progress.
 In Default Consistency mode, individual puts occur atomically but not the 
entire putAll. Listeners may observe individual updates.
  
  
 *Error Log suggesting otherwise*
  
 Deadlock: true
 Completed: 12808
 Thread [name="sys-stripe-3-#4%VPS%", id=27, state=WAITING, blockCnt=3, 
waitCnt=121340|#4%VPS%", id=27, state=WAITING, blockCnt=3, waitCnt=121340]
   Lock [object=java.util.concurrent.locks.ReentrantLock$NonfairSync@138205af, 
ownerName=sys-stripe-26-#27%VPS%, ownerId=50]
   at sun.misc.Unsafe.park(Native Method)
   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
   at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
   at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
   at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
   at 
java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
    at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
   at 
o.a.i.i.processors.cache.GridCacheMapEntry.lockEntry(GridCacheMapEntry.java:4272)
   at 
o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.lockEntries(GridDhtAtomicCache.java:2848)
   at 
o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1706)
   at 
o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1628)
   at 
o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.processNearAtomicUpdateRequest(GridDhtAtomicCache.java:3056)
   at 
o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$400(GridDhtAtomicCache.java:130)
   at 
o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:266)
   at 
o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:261)
   at 
o.a.i.i.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1054)
   at 
o.a.i.i.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
   at 
o.a.i.i.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378)
   at 
o.a.i.i.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304)
   at 
o.a.i.i.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:99)
   at 
o.a.i.i.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:293)
   at 
o.a.i.i.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
   at 
o.a.i.i.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184)
   at 
o.a.i.i.managers.communication.GridIoManager.access$4200(GridIoManager.java:125)
   at 
o.a.i.i.managers.communication.GridIoManager$9.run(GridIoManager.java:1091)
   at o.a.i.i.util.StripedExecutor$Stripe.run(StripedExecutor.java:511)
   at java.lang.Thread.run(Thread.java:748)
  
 I have tried sorting my keys but that doesn't help.
  

  was:
We have been seeing some consistent Deadlocks with Ignite Latest versions, as 
putAll tries to lock all the keys before updating them. 
 
As per the documentation (below) putAll should be Equivalent to individual 
iterative puts and these individual puts should behave atomically but not the 
entire pull, But the error logs pasted further below seem to suggest otherwise
 
*putAll Documentation*
h5. void javax.cache.Cache.putAll(Map map)
Copies all of the entries from the specified map to the {{Cache}}.
The effect of this call is equivalent to that of calling {{put(k, v)}} on this 
cache once for each mapping from key {{k}} to value {{v}} in the specified map.
The order 

[jira] [Updated] (IGNITE-11657) Deadlock on Cache.putAll(Map)

2019-04-23 Thread Gaurav Aggarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Aggarwal updated IGNITE-11657:
-
Description: 
We have been seeing consistent deadlocks with the latest Ignite versions on 
Cache.putAll, as putAll tries to lock all the keys before updating them.
  
 As per the documentation (below), putAll should be equivalent to individual 
iterative puts, and these individual puts should behave atomically but not the 
entire putAll. However, the error logs pasted further below seem to suggest otherwise.
  
 *putAll Documentation*
h5. void javax.cache.Cache.putAll(Map map)

Copies all of the entries from the specified map to the {{Cache}}.
 The effect of this call is equivalent to that of calling {{put(k, v)}} on this 
cache once for each mapping from key {{k}} to value {{v}} in the specified map.
 The order in which the individual puts occur is undefined.
 The behavior of this operation is undefined if entries in the cache 
corresponding to entries in the map are modified or removed while this 
operation is in progress. or if map is modified while the operation is in 
progress.
 In Default Consistency mode, individual puts occur atomically but not the 
entire putAll. Listeners may observe individual updates.
  
  
 *Error Log suggesting otherwise*
  
 Deadlock: true
 Completed: 12808
 Thread [name="sys-stripe-3-#4%VPS%", id=27, state=WAITING, blockCnt=3, 
waitCnt=121340]
   Lock [object=java.util.concurrent.locks.ReentrantLock$NonfairSync@138205af, 
ownerName=sys-stripe-26-#27%VPS%, ownerId=50]
   at sun.misc.Unsafe.park(Native Method)
   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
   at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
   at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
   at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
   at 
java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
    at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
   at 
o.a.i.i.processors.cache.GridCacheMapEntry.lockEntry(GridCacheMapEntry.java:4272)
   at 
o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.lockEntries(GridDhtAtomicCache.java:2848)
   at 
o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1706)
   at 
o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1628)
   at 
o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.processNearAtomicUpdateRequest(GridDhtAtomicCache.java:3056)
   at 
o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$400(GridDhtAtomicCache.java:130)
   at 
o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:266)
   at 
o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:261)
   at 
o.a.i.i.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1054)
   at 
o.a.i.i.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
   at 
o.a.i.i.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378)
   at 
o.a.i.i.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304)
   at 
o.a.i.i.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:99)
   at 
o.a.i.i.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:293)
   at 
o.a.i.i.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
   at 
o.a.i.i.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184)
   at 
o.a.i.i.managers.communication.GridIoManager.access$4200(GridIoManager.java:125)
   at 
o.a.i.i.managers.communication.GridIoManager$9.run(GridIoManager.java:1091)
   at o.a.i.i.util.StripedExecutor$Stripe.run(StripedExecutor.java:511)
   at java.lang.Thread.run(Thread.java:748)
  
 I have tried sorting my keys, but it doesn't help.
  

  was:
We have been seeing consistent Deadlocks with Ignite Latest versions, as putAll 
tries to lock all the keys before updating them. 
  
 As per the documentation (below) putAll should be Equivalent to individual 
iterative puts and these individual puts should behave atomically but not the 
entire pull, But the error logs pasted further below seem to suggest otherwise
  
 *putAll Documentation*
h5. void javax.cache.Cache.putAll(Map map)

Copies all of the entries from the specified map to the {{Cache}}.
 The effect of this call is equivalent to that of calling {{put(k, v)}} on this 
cache once for each mapping from key {{k}} to value {{v}} in the 

[jira] [Updated] (IGNITE-11657) Deadlock on Cache.putAll(Map)

2019-04-23 Thread Gaurav Aggarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Aggarwal updated IGNITE-11657:
-
Summary: Deadlock on Cache.putAll(Map)  (was: Stripe Threads Deadlock 
on Cache.putAll(Map))

> Deadlock on Cache.putAll(Map)
> -
>
> Key: IGNITE-11657
> URL: https://issues.apache.org/jira/browse/IGNITE-11657
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.5
>Reporter: Gaurav Aggarwal
>Priority: Major
>
> We have been seeing some consistent deadlocks with the latest Ignite versions, as 
> putAll tries to lock all the keys before updating them.
>  
> As per the documentation (below), putAll should be equivalent to individual 
> iterative puts, and these individual puts should behave atomically but not the 
> entire putAll. However, the error logs pasted further below seem to suggest otherwise.
>  
> *putAll Documentation*
> h5. void javax.cache.Cache.putAll(Map map)
> Copies all of the entries from the specified map to the {{Cache}}.
> The effect of this call is equivalent to that of calling {{put(k, v)}} on 
> this cache once for each mapping from key {{k}} to value {{v}} in the 
> specified map.
> The order in which the individual puts occur is undefined.
> The behavior of this operation is undefined if entries in the cache 
> corresponding to entries in the map are modified or removed while this 
> operation is in progress. or if map is modified while the operation is in 
> progress.
> In Default Consistency mode, individual puts occur atomically but not the 
> entire putAll. Listeners may observe individual updates.
>  
>  
> *Error Log suggesting otherwise*
>  
> Deadlock: true
> Completed: 12808
> Thread [name="sys-stripe-3-#4%VPS%", id=27, state=WAITING, blockCnt=3, 
> waitCnt=121340]
>   Lock [object=java.util.concurrent.locks.ReentrantLock$NonfairSync@138205af, 
> ownerName=sys-stripe-26-#27%VPS%, ownerId=50]
>   at sun.misc.Unsafe.park(Native Method)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
>   at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
>    at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
>   at 
> o.a.i.i.processors.cache.GridCacheMapEntry.lockEntry(GridCacheMapEntry.java:4272)
>   at 
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.lockEntries(GridDhtAtomicCache.java:2848)
>   at 
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1706)
>   at 
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1628)
>   at 
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.processNearAtomicUpdateRequest(GridDhtAtomicCache.java:3056)
>   at 
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$400(GridDhtAtomicCache.java:130)
>   at 
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:266)
>   at 
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:261)
>   at 
> o.a.i.i.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1054)
>   at 
> o.a.i.i.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
>   at 
> o.a.i.i.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378)
>   at 
> o.a.i.i.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304)
>   at 
> o.a.i.i.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:99)
>   at 
> o.a.i.i.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:293)
>   at 
> o.a.i.i.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
>   at 
> o.a.i.i.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184)
>   at 
> o.a.i.i.managers.communication.GridIoManager.access$4200(GridIoManager.java:125)
>   at 
> o.a.i.i.managers.communication.GridIoManager$9.run(GridIoManager.java:1091)
>   at o.a.i.i.util.StripedExecutor$Stripe.run(StripedExecutor.java:511)
>   at java.lang.Thread.run(Thread.java:748)
>  
> I have tried sorting my keys, but it doesn't help.
>  
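
For context, a minimal sketch of the lock-ordering hazard described above (the cache 
name, key range and node bootstrap below are invented for illustration, not taken from 
the report): two concurrent putAll calls whose maps iterate the same keys in opposite 
orders. The usual guidance for ATOMIC caches is to pass every batch in one consistent 
key order, e.g. via a natural-order TreeMap, although the reporter notes that sorting 
the keys did not help in this case.

{code:java}
import java.util.Comparator;
import java.util.Map;
import java.util.TreeMap;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class PutAllOrderingSketch {
    public static void main(String[] args) throws Exception {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, Integer> cache = ignite.getOrCreateCache("putAllDemo");

            // Same key set, opposite iteration (and hence lock-acquisition) order.
            Thread asc = new Thread(() -> cache.putAll(batch(false)));
            Thread desc = new Thread(() -> cache.putAll(batch(true)));

            asc.start();
            desc.start();

            asc.join();
            desc.join();
        }
    }

    /** Builds the same keys in ascending or descending iteration order. */
    private static Map<Integer, Integer> batch(boolean reversed) {
        Comparator<Integer> order = reversed ? Comparator.reverseOrder() : Comparator.naturalOrder();

        Map<Integer, Integer> m = new TreeMap<>(order);

        for (int i = 0; i < 1_000; i++)
            m.put(i, i);

        return m;
    }
}
{code}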



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11412) Actualize JUnit3TestLegacySupport class

2019-04-23 Thread Ivan Pavlukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823963#comment-16823963
 ] 

Ivan Pavlukhin commented on IGNITE-11412:
-

[~ivanan.fed] could you please merge fresh master into your PR and resolve the 
conflicts? Automatic merge is not possible without it.

> Actualize JUnit3TestLegacySupport class
> ---
>
> Key: IGNITE-11412
> URL: https://issues.apache.org/jira/browse/IGNITE-11412
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Fedotov
>Assignee: Ivan Fedotov
>Priority: Major
>  Labels: iep-30
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Refactor the JUnit3TestLegacySupport class and remove it, if possible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-8578) .NET: Add baseline auto-adjust parameters (definition, run-time change)

2019-04-23 Thread Igor Sapego (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823959#comment-16823959
 ] 

Igor Sapego edited comment on IGNITE-8578 at 4/23/19 10:47 AM:
---

[~ashapkin], I reviewed. Looks good, there are only 2 typos in the comments in 
ICluster.cs: "manuale".


was (Author: isapego):
[~ashapkin], I reviewed. Looks good, there is only 2 typos comments in 
ICluster.cs: "manuale".

> .NET: Add baseline auto-adjust parameters (definition, run-time change)
> ---
>
> Key: IGNITE-8578
> URL: https://issues.apache.org/jira/browse/IGNITE-8578
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms
>Reporter: Eduard Shangareev
>Assignee: Alexandr Shapkin
>Priority: Major
>  Labels: .NET, IEP-4, Phase-2
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We will introduce new parameters in IGNITE-8571. 
> We need to support them on the .NET side.
> See the new methods on the IgniteCluster interface in IGNITE-11509.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8578) .NET: Add baseline auto-adjust parameters (definition, run-time change)

2019-04-23 Thread Igor Sapego (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823959#comment-16823959
 ] 

Igor Sapego commented on IGNITE-8578:
-

[~ashapkin], I reviewed. Looks good, there is only 2 typos comments in 
ICluster.cs: "manuale".

> .NET: Add baseline auto-adjust parameters (definition, run-time change)
> ---
>
> Key: IGNITE-8578
> URL: https://issues.apache.org/jira/browse/IGNITE-8578
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms
>Reporter: Eduard Shangareev
>Assignee: Alexandr Shapkin
>Priority: Major
>  Labels: .NET, IEP-4, Phase-2
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We will introduce new parameters in IGNITE-8571. 
> We need to support them on the .NET side.
> See the new methods on the IgniteCluster interface in IGNITE-11509.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11412) Actualize JUnit3TestLegacySupport class

2019-04-23 Thread Ivan Fedotov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823947#comment-16823947
 ] 

Ivan Fedotov commented on IGNITE-11412:
---

[~Pavlukhin], Thank you for the review.

Could you please merge it?

> Actualize JUnit3TestLegacySupport class
> ---
>
> Key: IGNITE-11412
> URL: https://issues.apache.org/jira/browse/IGNITE-11412
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Fedotov
>Assignee: Ivan Fedotov
>Priority: Major
>  Labels: iep-30
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Refactor the JUnit3TestLegacySupport class and remove it, if possible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11794) Remove initial counter from update counter contract.

2019-04-23 Thread Alexei Scherbakov (JIRA)
Alexei Scherbakov created IGNITE-11794:
--

 Summary: Remove initial counter from update counter contract.
 Key: IGNITE-11794
 URL: https://issues.apache.org/jira/browse/IGNITE-11794
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexei Scherbakov


We have the 
org.apache.ignite.internal.processors.cache.PartitionUpdateCounter#initial and 
org.apache.ignite.internal.processors.cache.PartitionUpdateCounter#updateInitial
 methods in the partition update counter contract, but they are not needed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11731) CPP: Implement minimal Cluster API

2019-04-23 Thread Igor Sapego (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Sapego updated IGNITE-11731:
-
Description: 
Let's start implementing Cluster API for C++.

Desired functionality from Java: 
* {{IgniteCluster.active()}}
* {{IgniteClusterGroup.forAttribute()}}
* {{IgniteClusterGroup.forDataNodes()}}
* {{IgniteClusterGroup.forServers()}}
* {{ClusterNode.id()}}
* {{ClusterNode.attribute()}}
* {{IgniteCompute.compute(ClusterGroup)}}

Also, we need to have one platform-specific method:
* {{IgniteClusterGroup.forCpp()}}


  was:
Let's start implementing Cluster API for C++.

Desired functionality from Java: 
* {{ClusterNode.id()}}
* {{ClusterNode.attribute()}}
* {{IgniteCluster.active()}}
* {{IgniteCluster.disableWal()}}
* {{IgntieCluster.enableWal()}}
* {{IgniteCluster.isWalEnabled()}}
* {{IgniteClusterGroup.forAttribute()}}
* {{IgniteClusterGroup.forDataNodes()}}
* {{IgniteClusterGroup.forServers()}}

Also, we need to have one platform-specific method:
* {{IgniteClusterGroup.forCpp()}}



> CPP: Implement minimal Cluster API
> --
>
> Key: IGNITE-11731
> URL: https://issues.apache.org/jira/browse/IGNITE-11731
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms
>Affects Versions: 2.7
>Reporter: Igor Sapego
>Priority: Major
>  Labels: cpp
> Fix For: 2.8
>
>
> Let's start implementing Cluster API for C++.
> Desired functionality from Java: 
> * {{IgniteCluster.active()}}
> * {{IgniteClusterGroup.forAttribute()}}
> * {{IgniteClusterGroup.forDataNodes()}}
> * {{IgniteClusterGroup.forServers()}}
> * {{ClusterNode.id()}}
> * {{ClusterNode.attribute()}}
> * {{IgniteCompute.compute(ClusterGroup)}}
> Also, we need to have one platform-specific method:
> * {{IgniteClusterGroup.forCpp()}}
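
For reference, a hedged Java-side sketch of the calls listed above, roughly the shape 
the C++ API is expected to mirror (the attribute name, cache name and broadcast job 
are invented for illustration):

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterGroup;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.lang.IgniteRunnable;

public class ClusterApiSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // IgniteCluster.active()
            System.out.println("Cluster active: " + ignite.cluster().active());

            // Cluster group projections: forServers(), forAttribute(), forDataNodes().
            ClusterGroup servers = ignite.cluster().forServers();
            ClusterGroup workers = ignite.cluster().forAttribute("role", "worker");
            ClusterGroup dataNodes = ignite.cluster().forDataNodes("someCache");

            System.out.println("workers: " + workers.nodes().size()
                + ", data nodes: " + dataNodes.nodes().size());

            // ClusterNode.id() and ClusterNode.attribute().
            for (ClusterNode node : servers.nodes())
                System.out.println(node.id() + " -> " + node.attribute("org.apache.ignite.ips"));

            // Compute over a projection, i.e. compute(ClusterGroup).
            IgniteRunnable job = () -> System.out.println("Hello from a server node");

            ignite.compute(servers).broadcast(job);
        }
    }
}
{code}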



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11699) Node can't start after forced shutdown if the wal archiver disabled

2019-04-23 Thread Vyacheslav Koptilin (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-11699:
-
Ignite Flags:   (was: Docs Required)

> Node can't start after forced shutdown if the wal archiver disabled
> ---
>
> Key: IGNITE-11699
> URL: https://issues.apache.org/jira/browse/IGNITE-11699
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.7
>Reporter: Pavel Vinokurov
>Assignee: Vyacheslav Koptilin
>Priority: Major
> Fix For: 2.8
>
> Attachments: disabled-wal-archive-reproducer.zip
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If a server node is killed with the WAL archive disabled, it could fail on start 
> with the following exception:
> {code:java}
> [18:37:53,887][SEVERE][sys-stripe-1-#2][G] Failed to execute runnable.
> java.lang.IllegalStateException: Failed to get page IO instance (page content 
> is corrupted)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forVersion(IOVersions.java:85)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forPage(IOVersions.java:97)
>   at 
> org.apache.ignite.internal.pagemem.wal.record.delta.MetaPageUpdatePartitionDataRecord.applyDelta(MetaPageUpdatePartitionDataRecord.java:109)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.applyPageDelta(GridCacheDatabaseSharedManager.java:2532)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.lambda$performBinaryMemoryRestore$11(GridCacheDatabaseSharedManager.java:2327)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.lambda$stripedApplyPage$12(GridCacheDatabaseSharedManager.java:2441)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.lambda$stripedApply$13(GridCacheDatabaseSharedManager.java:2479)
>   at 
> org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:550)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> The reproducer is attached (it works only on Linux).
> Steps to run the reproducer:
> 1. Copy config/server.xml into the IGNITE_HOME/config folder;
> 2. Set IGNITE_HOME in the CorruptionReproducer class;
> 3. Launch CorruptionReproducer.
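
For anyone recreating the setup without the attachment, a minimal sketch of a 
persistent node with the WAL archive effectively disabled (the paths and instance 
name are invented; as far as I understand, Ignite treats archiving as disabled when 
walArchivePath points at the same directory as walPath, which is what "disabled wal 
archive" refers to here):

{code:java}
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class DisabledWalArchiveSketch {
    public static void main(String[] args) {
        DataStorageConfiguration dsCfg = new DataStorageConfiguration()
            .setDefaultDataRegionConfiguration(
                new DataRegionConfiguration().setPersistenceEnabled(true))
            .setWalPath("db/wal")          // WAL and "archive" share one directory,
            .setWalArchivePath("db/wal");  // which effectively disables archiving.

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setIgniteInstanceName("disabled-wal-archive-demo")
            .setDataStorageConfiguration(dsCfg);

        // Start and activate; a forced shutdown (kill -9) under load is the scenario above.
        Ignition.start(cfg).cluster().active(true);
    }
}
{code}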



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11699) Node can't start after forced shutdown if the wal archiver disabled

2019-04-23 Thread Ignite TC Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823818#comment-16823818
 ] 

Ignite TC Bot commented on IGNITE-11699:


{panel:title=-- Run :: All: Possible 
Blockers|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}Cache 2{color} [[tests 0 TIMEOUT , Exit Code 
|https://ci.ignite.apache.org/viewLog.html?buildId=3676723]]

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=3672996buildTypeId=IgniteTests24Java8_RunAll]

> Node can't start after forced shutdown if the wal archiver disabled
> ---
>
> Key: IGNITE-11699
> URL: https://issues.apache.org/jira/browse/IGNITE-11699
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.7
>Reporter: Pavel Vinokurov
>Assignee: Vyacheslav Koptilin
>Priority: Major
> Attachments: disabled-wal-archive-reproducer.zip
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If a server node is killed with the WAL archive disabled, it could fail on start 
> with the following exception:
> {code:java}
> [18:37:53,887][SEVERE][sys-stripe-1-#2][G] Failed to execute runnable.
> java.lang.IllegalStateException: Failed to get page IO instance (page content 
> is corrupted)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forVersion(IOVersions.java:85)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forPage(IOVersions.java:97)
>   at 
> org.apache.ignite.internal.pagemem.wal.record.delta.MetaPageUpdatePartitionDataRecord.applyDelta(MetaPageUpdatePartitionDataRecord.java:109)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.applyPageDelta(GridCacheDatabaseSharedManager.java:2532)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.lambda$performBinaryMemoryRestore$11(GridCacheDatabaseSharedManager.java:2327)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.lambda$stripedApplyPage$12(GridCacheDatabaseSharedManager.java:2441)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.lambda$stripedApply$13(GridCacheDatabaseSharedManager.java:2479)
>   at 
> org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:550)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> The reproducer is attached (it works only on Linux).
> Steps to run the reproducer:
> 1. Copy config/server.xml into the IGNITE_HOME/config folder;
> 2. Set IGNITE_HOME in the CorruptionReproducer class;
> 3. Launch CorruptionReproducer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11592) NPE in case of continuing tx and cache stop operation.

2019-04-23 Thread Ignite TC Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823787#comment-16823787
 ] 

Ignite TC Bot commented on IGNITE-11592:


{panel:title=-- Run :: All: Possible 
Blockers|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}Cache 2{color} [[tests 0 TIMEOUT , Exit Code 
|https://ci.ignite.apache.org/viewLog.html?buildId=3671356]]

{color:#d04437}Platform .NET (Core Linux){color} [[tests 0 Exit Code 
|https://ci.ignite.apache.org/viewLog.html?buildId=3671378]]

{color:#d04437}MVCC Cache{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3671338]]
* IgniteCacheMvccTestSuite: CacheMvccClientReconnectTest.testClientReconnect - 
0,0% fails in last 81 master runs.

{color:#d04437}Cache (Restarts) 1{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3671352]]
* IgniteCacheRestartTestSuite: 
GridCacheReplicatedNodeRestartSelfTest.testRestartWithPutFourNodesOneBackupsOffheapEvict
 - 0,0% fails in last 104 master runs.

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=3671407buildTypeId=IgniteTests24Java8_RunAll]

> NPE in case of continuing tx and cache stop operation. 
> ---
>
> Key: IGNITE-11592
> URL: https://issues.apache.org/jira/browse/IGNITE-11592
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7
>Reporter: Stanilovsky Evgeny
>Assignee: Stanilovsky Evgeny
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Parallel cache stop and tx operations may lead to NPE.
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.ignite.internal.processors.cache.CacheObjectImpl.finishUnmarshal(CacheObjectImpl.java:129)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.TxEntryValueHolder.unmarshal(TxEntryValueHolder.java:151)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxEntry.unmarshal(IgniteTxEntry.java:964)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.unmarshal(IgniteTxHandler.java:306)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.prepareNearTx(IgniteTxHandler.java:338)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processNearTxPrepareRequest0(IgniteTxHandler.java:154)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.lambda$null$0(IgniteTxHandler.java:580)
>   at 
> org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:496)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11793) Improve isolated updater mode.

2019-04-23 Thread Alexei Scherbakov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexei Scherbakov updated IGNITE-11793:
---
Summary: Improve isolated updater mode.  (was: Failover for isolated 
updater mode.)

> Improve isolated updater mode.
> --
>
> Key: IGNITE-11793
> URL: https://issues.apache.org/jira/browse/IGNITE-11793
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexei Scherbakov
>Priority: Major
>
> Currently, with the isolated updater (datastream + allowOverride=false), counters 
> are generated independently on all owners even in transactional mode.
> If some nodes fail, there is a high risk of partition desync.
> Also, this mode can't be used together with concurrent transactions after 
> IGNITE-10078.
> I suggest introducing a special loading mode for a cache where concurrent 
> updates are prohibited until initial data loading (using the isolated updater) is 
> completed.
>  
>  
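
For reference, a minimal sketch of the loading path in question (the cache name and 
key range are invented; note that the IgniteDataStreamer API calls the flag 
allowOverwrite, which is what "allowOverride=false" above refers to): with the flag 
left at its default false, the streamer applies updates through the isolated updater, 
bypassing the regular cache update protocol, and the proposal is to block ordinary 
cache updates until such an initial load completes.

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class IsolatedUpdaterLoadSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("preload");

            try (IgniteDataStreamer<Integer, Integer> streamer = ignite.dataStreamer("preload")) {
                // false is the default: entries are written via the isolated updater,
                // bypassing the normal atomic/transactional update path.
                streamer.allowOverwrite(false);

                for (int i = 0; i < 100_000; i++)
                    streamer.addData(i, i);
            } // close() flushes the remaining batches
        }
    }
}
{code}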



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11793) Improve isolated updater mode.

2019-04-23 Thread Alexei Scherbakov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexei Scherbakov updated IGNITE-11793:
---
Description: 
Currently, with the isolated updater (datastreamer + allowOverride=false), counters 
are generated independently on all owners even in transactional mode.

If some nodes fail, there is a high risk of partition desync.

Also, this mode can't be used together with concurrent transactions after 
IGNITE-10078.

I suggest introducing a special loading mode for a cache where concurrent updates 
are prohibited until initial data loading (using the isolated updater) is completed.

 

 

  was:
Currently with isolated updater (datastream + allowOverride=false) even for 
transactional mode counters are generated independently on all owners.

In case of some nodes fail there is high risk of partition desync.

Also this mode couldn't be used together with concurrent transactions after 
IGNITE-10078.

I suggest to introduce special loading mode for cache where concurrent updates 
are prohibited until initial data loading (using isolated updater) is completed.

 

 


> Improve isolated updater mode.
> --
>
> Key: IGNITE-11793
> URL: https://issues.apache.org/jira/browse/IGNITE-11793
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexei Scherbakov
>Priority: Major
>
> Currently, with the isolated updater (datastreamer + allowOverride=false), counters 
> are generated independently on all owners even in transactional mode.
> If some nodes fail, there is a high risk of partition desync.
> Also, this mode can't be used together with concurrent transactions after 
> IGNITE-10078.
> I suggest introducing a special loading mode for a cache where concurrent 
> updates are prohibited until initial data loading (using the isolated updater) is 
> completed.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11793) Failover for isolated updater mode.

2019-04-23 Thread Alexei Scherbakov (JIRA)
Alexei Scherbakov created IGNITE-11793:
--

 Summary: Failover for isolated updater mode.
 Key: IGNITE-11793
 URL: https://issues.apache.org/jira/browse/IGNITE-11793
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexei Scherbakov


Currently, with the isolated updater (datastream + allowOverride=false), counters 
are generated independently on all owners even in transactional mode.

If some nodes fail, there is a high risk of partition desync.

Also, this mode can't be used together with concurrent transactions after 
IGNITE-10078.

I suggest introducing a special loading mode for a cache where concurrent updates 
are prohibited until initial data loading (using the isolated updater) is completed.

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11776) IgnitePdsStartWIthEmptyArchive is flaky

2019-04-23 Thread Vyacheslav Koptilin (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-11776:
-
Affects Version/s: 2.8

> IgnitePdsStartWIthEmptyArchive is flaky
> ---
>
> Key: IGNITE-11776
> URL: https://issues.apache.org/jira/browse/IGNITE-11776
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8
>Reporter: Vyacheslav Koptilin
>Assignee: Vyacheslav Koptilin
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> It looks like the root cause of the issue is late registration of the 
> listener. It should be done statically via {{IgniteConfiguration}}, I think.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11776) IgnitePdsStartWIthEmptyArchive is flaky

2019-04-23 Thread Ignite TC Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823716#comment-16823716
 ] 

Ignite TC Bot commented on IGNITE-11776:


{panel:title=-- Run :: All: Possible 
Blockers|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}Platform .NET (Core Linux){color} [[tests 0 Exit Code 
|https://ci.ignite.apache.org/viewLog.html?buildId=3672833]]

{color:#d04437}_Check Code Style_{color} [[tests 0 Exit Code 
|https://ci.ignite.apache.org/viewLog.html?buildId=3672862]]

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=3672863buildTypeId=IgniteTests24Java8_RunAll]

> IgnitePdsStartWIthEmptyArchive is flaky
> ---
>
> Key: IGNITE-11776
> URL: https://issues.apache.org/jira/browse/IGNITE-11776
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vyacheslav Koptilin
>Assignee: Vyacheslav Koptilin
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> It looks like the root cause of the issue is late registration of the 
> listener. It should be done statically via {{IgniteConfiguration}}, I think.
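
A minimal sketch of the static registration suggested above (the event type and the 
listener body are placeholders chosen for illustration): the listener is handed to 
IgniteConfiguration before the node starts, so it cannot miss events fired during 
startup.

{code:java}
import java.util.HashMap;
import java.util.Map;

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class StaticListenerSketch {
    public static void main(String[] args) {
        IgnitePredicate<Event> lsnr = evt -> {
            System.out.println("Got event: " + evt);

            return true; // keep listening
        };

        Map<IgnitePredicate<? extends Event>, int[]> lsnrs = new HashMap<>();
        lsnrs.put(lsnr, new int[] {EventType.EVT_WAL_SEGMENT_ARCHIVED});

        IgniteConfiguration cfg = new IgniteConfiguration()
            // The event type must be enabled for it to be recorded and delivered.
            .setIncludeEventTypes(EventType.EVT_WAL_SEGMENT_ARCHIVED)
            // Registered before start, unlike events().localListen(...) after start.
            .setLocalEventListeners(lsnrs);

        Ignition.start(cfg);
    }
}
{code}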



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)