[jira] [Commented] (IGNITE-3303) Apache Flink Integration - Flink source to run a continuous query against one or multiple caches

2018-07-10 Thread Saikat Maitra (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16539556#comment-16539556
 ] 

Saikat Maitra commented on IGNITE-3303:
---

[~avinogradov]

Can you please review the changes and share feedback?

Regards,

Saikat

> Apache Flink Integration - Flink source to run a continuous query against one 
> or multiple caches
> 
>
> Key: IGNITE-3303
> URL: https://issues.apache.org/jira/browse/IGNITE-3303
> Project: Ignite
>  Issue Type: New Feature
>  Components: streaming
>Reporter: Saikat Maitra
>Priority: Major
> Attachments: Screen Shot 2016-10-07 at 12.44.47 AM.png, 
> testFlinkIgniteSourceWithLargeBatch.log, win7.PNG
>
>
> Apache Flink integration 
> +++ *Ignite as a bidirectional Connector* +++
> As a Flink source => run a continuous query against one or multiple
> caches [4].
> Related discussion : 
> http://apache-ignite-developers.2346864.n4.nabble.com/Apache-Flink-lt-gt-Apache-Ignite-integration-td8163.html
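
For context, a minimal sketch of the underlying mechanism, i.e. a continuous 
query that a Flink source could drain cache events from (the cache name and 
types here are illustrative, not taken from the patch):

{code:java}
import javax.cache.event.CacheEntryEvent;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

public class ContinuousQuerySketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("flink-source");

            ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

            // A Flink source would forward these events to its output context
            // instead of printing them.
            qry.setLocalListener(evts -> {
                for (CacheEntryEvent<? extends Integer, ? extends String> e : evts)
                    System.out.println("event: " + e.getKey() + " -> " + e.getValue());
            });

            try (QueryCursor<?> cur = cache.query(qry)) {
                cache.put(1, "hello"); // Triggers the listener above.
            }
        }
    }
}
{code}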



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8968) Failed to shutdown node due to "Error saving backup value"

2018-07-10 Thread Mikhail Cherkasov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16539304#comment-16539304
 ] 

Mikhail Cherkasov commented on IGNITE-8968:
---

[~pvinokurov]

I think just adding a break would mean that we don't process this properly and 
don't send a response to the primary (or maybe to the client?), and we will get 
a "long running atomic update future" warning or something like that.

I think we need to explicitly check that the error was caused by an Ignite stop 
and only then break the cycle. In that case, on the node_left event, all 
futures related to this node will be handled properly (cancelled with an error 
or finished?).
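
A minimal sketch of the kind of check meant here (the helper and class names 
are hypothetical, not the actual patch):

{code:java}
import org.apache.ignite.internal.NodeStoppingException;
import org.apache.ignite.internal.util.typedef.X;

public class StopCheckSketch {
    /** Break the update cycle only when the failure was caused by a local node stop. */
    static boolean causedByNodeStop(Throwable err) {
        return X.hasCause(err, NodeStoppingException.class);
    }

    public static void main(String[] args) {
        Throwable err = new RuntimeException("Error saving backup value",
            new NodeStoppingException("Operation has been cancelled (node is stopping)."));

        // true -> safe to break the cycle; any other cause keeps the current handling.
        System.out.println(causedByNodeStop(err));
    }
}
{code}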

> Failed to shutdown node due to "Error saving backup value"
> --
>
> Key: IGNITE-8968
> URL: https://issues.apache.org/jira/browse/IGNITE-8968
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, persistence
>Affects Versions: 2.4
>Reporter: Pavel Vinokurov
>Assignee: Pavel Vinokurov
>Priority: Major
>
> On node shutdown ignite prints following logs infinitely:
> org.apache.ignite.internal.NodeStoppingException: Operation has been 
> cancelled (node is stopping).
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1263)
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:370)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:3626)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.initialValue(GridCacheMapEntry.java:2783)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils$22.process(GridCacheUtils.java:1734)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils$22.apply(GridCacheUtils.java:1782)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils$22.apply(GridCacheUtils.java:1724)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8971) GridRestProcessor should propagate error message

2018-07-10 Thread Andrew Medvedev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16539111#comment-16539111
 ] 

Andrew Medvedev commented on IGNITE-8971:
-

[~kuaw26] thank you for your comments!

Should be fixed now!

> GridRestProcessor should propagate error message
> 
>
> Key: IGNITE-8971
> URL: https://issues.apache.org/jira/browse/IGNITE-8971
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Andrew Medvedev
>Assignee: Andrew Medvedev
>Priority: Major
>
> GridRestProcessor should propagate the error message (stack trace) so that 
> disk-full errors can be handled



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8978) Affinity throws "IgniteException: Failed to find cache" after an Ignite client re-connect

2018-07-10 Thread Dmitry Konstantinov (JIRA)
Dmitry Konstantinov created IGNITE-8978:
---

 Summary: Affinity throws "IgniteException: Failed to find cache" 
after an Ignite client re-connect
 Key: IGNITE-8978
 URL: https://issues.apache.org/jira/browse/IGNITE-8978
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.5
 Environment: ver. 2.5.0#20180523-sha1:86e110c7
OS: Windows 7 6.1 amd64
Java(TM) SE Runtime Environment 1.8.0_101-b13 Oracle Corporation Java 
HotSpot(TM) 64-Bit Server VM 25.101-b13
Reporter: Dmitry Konstantinov


Use case:
 # A single Ignite server node is deployed and running.
 # An Ignite Java client connects to the server node and starts to do cache 
operations (put/get) and to invoke the Affinity.mapKeyToNode() method.
 # The Ignite server process is killed.
 # Wait for some time.
 # Start the Ignite server back up.

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.affinity.Affinity;
import org.apache.ignite.cluster.ClusterNode;

public class Main {
    public static void main(String... args) throws InterruptedException {
        Ignition.setClientMode(true);
        String config = "ignite-config.xml";

        try (Ignite ignite = Ignition.start(config)) {
            String cacheName = "testCache";
            IgniteCache<String, String> cache = ignite.cache(cacheName);
            Affinity<String> affinity = ignite.affinity(cacheName);

            while (true) {
                try {
                    String key = "testKey";
                    cache.put(key, "testValue");
                    String value = cache.get(key);

                    // Keeps throwing after the client reconnects to the restarted server.
                    ClusterNode primary = affinity.mapKeyToNode(key);

                    System.out.println("read value: " + value + ", primary node: " + primary);
                }
                catch (Exception e) {
                    System.out.println("Error: " + e.toString());
                    e.printStackTrace();
                }
                finally {
                    Thread.sleep(1000);
                }
            }
        }
    }
}
{code}

Expected result:
 affinity.mapKeyToNode(key) starts working again after the client reconnects to 
the restarted server.

Actual result:
 affinity.mapKeyToNode(key) continues to throw the following exception:
{code:java}
class org.apache.ignite.IgniteException: Failed to find cache (cache was not 
started yet or cache was already stopped): testCache
at 
org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.affinityTopologyVersion(GridCacheAffinityManager.java:402)
at 
org.apache.ignite.internal.processors.cache.affinity.GridCacheAffinityImpl.topologyVersion(GridCacheAffinityImpl.java:241)
at 
org.apache.ignite.internal.processors.cache.affinity.GridCacheAffinityImpl.mapKeysToNodes(GridCacheAffinityImpl.java:189)
at 
org.apache.ignite.internal.processors.cache.affinity.GridCacheAffinityImpl.mapKeyToNode(GridCacheAffinityImpl.java:182)
at test.ignite.Main.main(Main.java:25)
{code}
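
A possible workaround (an untested assumption, not part of the report) is to 
re-acquire the Affinity proxy on each iteration instead of caching it across 
the reconnect:

{code:java}
// Inside the while loop of the reproducer above: fetch a fresh Affinity proxy
// per iteration so that a cache context re-created after reconnect is picked up.
Affinity<String> affinity = ignite.affinity(cacheName);
ClusterNode primary = affinity.mapKeyToNode(key);
{code}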



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8912) PartitionLossPolicy.READ_ONLY_SAFE does not detect partition loss

2018-07-10 Thread Vyacheslav Koptilin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538971#comment-16538971
 ] 

Vyacheslav Koptilin commented on IGNITE-8912:
-

Hi [~pvinokurov],
It seems to me that your changes are correct.
At the moment `GridDhtPartitionTopologyImpl.afterExchange()` is called, the 
`exchange` process is already completed, so I can imagine one possible 
situation in which we have a partition in the `MOVING` state and there is no 
owner: the owner node left the cluster and `discoCache` was updated by the 
`discovery` thread, and, as you mentioned, the new exchange will mark that 
partition as lost.

> PartitionLossPolicy.READ_ONLY_SAFE does not detect partition loss 
> --
>
> Key: IGNITE-8912
> URL: https://issues.apache.org/jira/browse/IGNITE-8912
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.5
>Reporter: Pavel Vinokurov
>Assignee: Pavel Vinokurov
>Priority: Major
> Attachments: MissedPartitionLostReproducer.java
>
>
> A cluster of 4 nodes with a cache with 1 backup and READ_ONLY_SAFE.
> After two nodes are forcefully killed, a partition is lost without being 
> included in the partitionsLost collection.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8912) PartitionLossPolicy.READ_ONLY_SAFE does not detect partition loss

2018-07-10 Thread Pavel Vinokurov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538902#comment-16538902
 ] 

Pavel Vinokurov commented on IGNITE-8912:
-

I removed the state change for a MOVING partition when all owners have left the 
topology in the GridDhtPartitionTopologyImpl#afterExchange method. If an owner 
node left the cluster during an exchange, then a new exchange should be 
scheduled that detects lost partitions. So it isn't necessary to detect the 
lost partition and raise the event in GridDhtPartitionTopologyImpl#afterExchange. 
[~slava.koptilin] Does that make sense?

> PartitionLossPolicy.READ_ONLY_SAFE does not detect partition loss 
> --
>
> Key: IGNITE-8912
> URL: https://issues.apache.org/jira/browse/IGNITE-8912
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.5
>Reporter: Pavel Vinokurov
>Assignee: Pavel Vinokurov
>Priority: Major
> Attachments: MissedPartitionLostReproducer.java
>
>
> A cluster of 4 nodes with a cache with 1 backup and READ_ONLY_SAFE.
> After two nodes are forcefully killed, a partition is lost without being 
> included in the partitionsLost collection.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8973) Need to support dump for idle_verify

2018-07-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538877#comment-16538877
 ] 

ASF GitHub Bot commented on IGNITE-8973:


GitHub user akalash opened a pull request:

https://github.com/apache/ignite/pull/4347

IGNITE-8973 calculate partition hash and print into standard output



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8973

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4347.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4347


commit 7cdf066a7dc7573bf350df8ce8394aae2bb13396
Author: Anton Kalashnikov 
Date:   2018-07-10T16:15:55Z

IGNITE-8973 calculate partition hash and print into standard output




> Need to support dump for idle_verify 
> -
>
> Key: IGNITE-8973
> URL: https://issues.apache.org/jira/browse/IGNITE-8973
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Dmitriy Govorukhin
>Assignee: Anton Kalashnikov
>Priority: Major
> Fix For: 2.7
>
>
> In the current implementation, idle_verify checks consistency between primary 
> and backup partitions. It would be useful to be able to dump the current 
> state of all partitions to a file or to standard output. Such a dump can help 
> investigate problems with partition counters or sizes, because it is a 
> snapshot of cluster partition hashes for a given partition state (the hash 
> includes all keys in the partition).
> idle_verify --dump - calculate partition hashes and print them to standard output
> idle_verify --dump {path} - calculate partition hashes and write the output to a file



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8935) IgniteConfiguration dependents should have toString

2018-07-10 Thread Ilya Kasnacheev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538853#comment-16538853
 ] 

Ilya Kasnacheev commented on IGNITE-8935:
-

[~dpavlov] please review the proposed fix

> IgniteConfiguration dependents should have toString
> ---
>
> Key: IGNITE-8935
> URL: https://issues.apache.org/jira/browse/IGNITE-8935
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.5
>Reporter: Ilya Kasnacheev
>Assignee: Ilya Kasnacheev
>Priority: Minor
>  Labels: usability
>
> Ignite configuration is printed on startup, but some classes that it commonly 
> refers to do not have toString() implemented, leading to gems such as 
> connectorCfg=org.apache.ignite.configuration.ConnectorConfiguration@7d8704ef
> Those classes should have toString() implemented, conforming to 
> https://cwiki.apache.org/confluence/display/IGNITE/Coding+Guidelines#CodingGuidelines-StringOutput
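
A minimal sketch of the string-output pattern those guidelines describe (the 
class and fields are illustrative):

{code:java}
import org.apache.ignite.internal.util.typedef.internal.S;

public class ConnectorConfigurationExample {
    private String host = "0.0.0.0";
    private int port = 11211;

    /** Prints something like "ConnectorConfigurationExample [host=0.0.0.0, port=11211]"
     * instead of the default Object#toString() hash. */
    @Override public String toString() {
        return S.toString(ConnectorConfigurationExample.class, this);
    }

    public static void main(String[] args) {
        System.out.println(new ConnectorConfigurationExample());
    }
}
{code}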



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8564) DataStreamer fails if client reconnected to the cluster and allowOverride = true

2018-07-10 Thread Ilya Kasnacheev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538852#comment-16538852
 ] 

Ilya Kasnacheev commented on IGNITE-8564:
-

[~slava.koptilin] I have updated pull request according to your suggestions.

[~agoncharuk] please review the proposed fix.

> DataStreamer fails if client reconnected to the cluster and allowOverride = 
> true
> 
>
> Key: IGNITE-8564
> URL: https://issues.apache.org/jira/browse/IGNITE-8564
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.6
>Reporter: Ilya Kasnacheev
>Assignee: Ilya Kasnacheev
>Priority: Major
>  Labels: test
> Fix For: 2.7
>
> Attachments: DataStreamerClientReconnectAfterClusterRestartTest.java
>
>
> But wait, there's more.
> This only happens when the client has reconnected to a new cluster and the 
> topology version after that is exactly the same as when it left.
> Please see the attached test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8975) Invalid initialization of compressed archived WAL segment when WAL compression is switched off.

2018-07-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538771#comment-16538771
 ] 

ASF GitHub Bot commented on IGNITE-8975:


GitHub user ivandasch opened a pull request:

https://github.com/apache/ignite/pull/4345

IGNITE-8975: Correct handling compressed archived wal segment when co…

…mpression is switched off.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8975

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4345.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4345


commit fb2019963a0e7408af39a27d816fec775652f661
Author: Ivan Daschinskiy 
Date:   2018-07-10T15:23:00Z

IGNITE-8975: Correct handling compressed archived wal segment when 
compression is switched off.




> Invalid initialization of compressed archived WAL segment when WAL 
> compression is switched off.
> ---
>
> Key: IGNITE-8975
> URL: https://issues.apache.org/jira/browse/IGNITE-8975
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Ivan Daschinskiy
>Assignee: Ivan Daschinskiy
>Priority: Major
> Fix For: 2.7
>
>
> After restarting a node with WAL compression disabled and with a compressed 
> WAL archive present, the current implementation of FileWriteAheadLogManager 
> ignores the existing compressed WAL segment and initializes a brand-new empty 
> one. This causes the following error:
> {code:java}
> 2018-07-05 16:14:25.761 
> [ERROR][exchange-worker-#153%DPL_GRID%DplGridNodeName%][o.a.i.i.p.c.p.c.CheckpointHistory]
>  Failed to process checkpoint: CheckpointEntry 
> [id=8dc4b1cc-dedd-4a57-8748-f5a7ecfd389d, timestamp=1530785506909, 
> ptr=FileWALPointer [idx=4520, fileOff=860507725, len=691515]]
> org.apache.ignite.IgniteCheckedException: Failed to find checkpoint record at 
> the given WAL pointer: FileWALPointer [idx=4520, fileOff=860507725, 
> len=691515]
> at 
> org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointEntry$GroupStateLazyStore.initIfNeeded(CheckpointEntry.java:346)
> at 
> org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointEntry$GroupStateLazyStore.access$300(CheckpointEntry.java:231)
> at 
> org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointEntry.initIfNeeded(CheckpointEntry.java:123)
> at 
> org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointEntry.groupState(CheckpointEntry.java:105)
> at 
> org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointHistory.isCheckpointApplicableForGroup(CheckpointHistory.java:377)
> at 
> org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointHistory.searchAndReserveCheckpoints(CheckpointHistory.java:304)
> at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.reserveHistoryForExchange(GridCacheDatabaseSharedManager.java:1614)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1139)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:724)
> at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2477)
> at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2357)
> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8976) Add native persistence section to Key-Value data grid page

2018-07-10 Thread Dmitriy Setrakyan (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Setrakyan updated IGNITE-8976:
--
Description: 
Main goal - less text, better structure.
 # We need to add a section for Ignite native persistence
 # We need to add a native persistence image to this section.
 # We need to place current 3rd party DB image next to the 3rd party DB section
 # We need to add a section for collocation - compute + data and data + data

  was:
# We need to add a section for Ignite native persistence
 # We need to add a native persistence image to this section.
 # We need to place current 3rd party DB image next to the 3rd party DB section
 # We need to add a section for collocation - compute + data and data + data


> Add native persistence section to Key-Value data grid page
> --
>
> Key: IGNITE-8976
> URL: https://issues.apache.org/jira/browse/IGNITE-8976
> Project: Ignite
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Dmitriy Setrakyan
>Assignee: Prachi Garg
>Priority: Blocker
> Fix For: 2.7
>
>
> Main goal - less text, better structure.
>  # We need to add a section for Ignite native persistence
>  # We need to add a native persistence image to this section.
>  # We need to place current 3rd party DB image next to the 3rd party DB 
> section
>  # We need to add a section for collocation - compute + data and data + data



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-6070) Infinite redirects at Spring Security Web Session Clustering with Tomcat

2018-07-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-6070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538752#comment-16538752
 ] 

ASF GitHub Bot commented on IGNITE-6070:


Github user alamar closed the pull request at:

https://github.com/apache/ignite/pull/2621


> Infinite redirects at Spring Security Web Session Clustering with Tomcat
> 
>
> Key: IGNITE-6070
> URL: https://issues.apache.org/jira/browse/IGNITE-6070
> Project: Ignite
>  Issue Type: Bug
>  Components: websession
>Affects Versions: 2.1
> Environment: Spring Security, Apache Tomcat
>Reporter: Ilya Kasnacheev
>Assignee: Ilya Kasnacheev
>Priority: Minor
>  Labels: easyfix, newbie
> Fix For: 2.3
>
> Attachments: IGNITE-6070.patch, webtest.zip
>
>
> See 
> https://stackoverflow.com/questions/45648884/apache-ignite-spring-secutiry-error
>  description.
> When a Session comes from Ignite but its Authentication is anonymous, Spring 
> Security does the following check:
> {code}
> } else if (request.getRequestedSessionId() != null && 
> !request.isRequestedSessionIdValid()) {
> this.logger.debug("Requested session ID " + 
> request.getRequestedSessionId() + " is invalid.");
> 
> this.invalidSessionStrategy.onInvalidSessionDetected(request, response);
> }
> {code}
> The problem is that in Ignite we never override isRequestedSessionIdValid() 
> in our request wrappers, so it falls back to Tomcat's own (session) Manager, 
> which will then find that it has never seen this Session and that it is 
> therefore invalid. Thus failover is triggered, and if there's an 
> "invalid-session-url" clause in the Spring Security config, a redirect will 
> be issued, possibly circular.
> I'm attaching a sample Maven WAR project. It should be deployed to two 
> different Tomcat instances operating on two different ports of the same 
> machine, e.g. 8080 and 8180, and then 
> http://localhost:PORT/webtest-1.0-SNAPSHOT/index.jsp should be opened in the 
> same web browser for the two ports, one after another. The second one should 
> cause an infinite 302 redirect to the same URL.
> There's also a minor bug in the same code: see session.jsp in the example. It 
> will needlessly throw an NPE at WebSessionFilter:1001 and confuse the web 
> server. It should output "OK" when fixed.
> Discussion:
> By the way, letting the web server get hold of Sessions that it creates for 
> us causes additional problems: it's going to store these sessions in parallel 
> with Ignite, consuming memory in the process that first saw a given session. 
> We should probably have an Ignite-based Manager implementation (possibly 
> third-party) for Tomcat specifically. It would be much simpler than the 
> filter-based approach while performing better.
> Or maybe we should create our own Sessions that we never allow the web server 
> to see.
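
A minimal sketch of the kind of override discussed above (assuming the wrapper 
can consult the Ignite-backed session; this is not the actual patch):

{code:java}
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;

public class IgniteRequestWrapperSketch extends HttpServletRequestWrapper {
    public IgniteRequestWrapperSketch(HttpServletRequest req) {
        super(req);
    }

    /** Treat the requested session ID as valid when an Ignite-backed session
     * exists for it, instead of falling back to Tomcat's own Manager. */
    @Override public boolean isRequestedSessionIdValid() {
        return getRequestedSessionId() != null && getSession(false) != null;
    }
}
{code}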



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8976) Add native persistence section to Key-Value data grid page

2018-07-10 Thread Dmitriy Setrakyan (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Setrakyan updated IGNITE-8976:
--
Description: 
# We need to add a section for Ignite native persistence
 # We need to add a native persistence image to this section.
 # We need to place current 3rd party DB image next to the 3rd party DB section
 # We need to add a section for collocation - compute + data and data + data

  was:
# We need to add a section for Ignite native persistence
 # We need to add a native persistence image to this section.
 # We need to place current 3rd party DB image next to the 3rd party DB section


> Add native persistence section to Key-Value data grid page
> --
>
> Key: IGNITE-8976
> URL: https://issues.apache.org/jira/browse/IGNITE-8976
> Project: Ignite
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Dmitriy Setrakyan
>Assignee: Prachi Garg
>Priority: Blocker
> Fix For: 2.7
>
>
> # We need to add a section for Ignite native persistence
>  # We need to add a native persistence image to this section.
>  # We need to place current 3rd party DB image next to the 3rd party DB 
> section
>  # We need to add a section for collocation - compute + data and data + data



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8977) Need to update distributed SQL page

2018-07-10 Thread Dmitriy Setrakyan (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Setrakyan updated IGNITE-8977:
--
Description: 
Main goal - less text, better structure.

Let's add the following sections:
 # Distributed SQL Indexes
 # {color:#33}Distributed SQL JOINs{color}
 # {color:#33}Memory-only{color}
 # {color:#33}Native Persistence{color}
 # {color:#33}Memory Management - here we should mention that due to Ignite 
memory management policies, the most used indexes will always end up being 
cached in memory, even when the persistence is enabled.{color}
 # {color:#33}3rd Party Databases - here we should mention that Ignite SQL 
does write-through to 3rd party databases, if needed.{color}

 

  was:
Main goal - less text, better structure.

Let's add the following sections:
 # Distributed SQL Indexes
 # {color:#33}Distributed SQL JOINs{color}
 # {color:#33}Memory-only{color}
 # {color:#33}Native Persistence{color}
 # {color:#33}Memory Management - here we should mention that due to Ignite 
memory management policies, the most used indexes will always end up being 
cached in memory, even when the persistence is enabled.{color}

 


> Need to update distributed SQL page
> ---
>
> Key: IGNITE-8977
> URL: https://issues.apache.org/jira/browse/IGNITE-8977
> Project: Ignite
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Dmitriy Setrakyan
>Assignee: Prachi Garg
>Priority: Blocker
> Fix For: 2.7
>
>
> Main goal - less text, better structure.
> Let's add the following sections:
>  # Distributed SQL Indexes
>  # {color:#33}Distributed SQL JOINs{color}
>  # {color:#33}Memory-only{color}
>  # {color:#33}Native Persistence{color}
>  # {color:#33}Memory Management - here we should mention that due to 
> Ignite memory management policies, the most used indexes will always end up 
> being cached in memory, even when the persistence is enabled.{color}
>  # {color:#33}3rd Party Databases - here we should mention that Ignite 
> SQL does write-through to 3rd party databases, if needed.{color}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-6711) DataRegionMetrics#totalAllocatedPages is not valid after node restart

2018-07-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-6711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538747#comment-16538747
 ] 

ASF GitHub Bot commented on IGNITE-6711:


Github user alamar closed the pull request at:

https://github.com/apache/ignite/pull/3668


> DataRegionMetrics#totalAllocatedPages is not valid after node restart
> -
>
> Key: IGNITE-6711
> URL: https://issues.apache.org/jira/browse/IGNITE-6711
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.2
>Reporter: Alexey Goncharuk
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: iep-6, newbie
> Fix For: 2.4
>
>
> Currently, the data region metric tracks total allocated pages via a callback 
> on page allocation. However, when a node with persistence enabled is started, 
> some of the pages are already allocated, which leads to an incorrect metric 
> value.
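
For reference, a minimal snippet reading the metric in question (assuming a 
started Ignite instance {{ignite}}):

{code:java}
import org.apache.ignite.DataRegionMetrics;

// After a restart with persistence enabled, getTotalAllocatedPages() should
// also account for the pages that were allocated before the callback existed.
for (DataRegionMetrics m : ignite.dataRegionMetrics())
    System.out.println(m.getName() + ": totalAllocatedPages=" + m.getTotalAllocatedPages());
{code}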



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8951) Need to validate nodes configuration across cluster and warn on different parameters value

2018-07-10 Thread Ivan Rakov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538746#comment-16538746
 ] 

Ivan Rakov commented on IGNITE-8951:


Off-topic, but still related to the config validation: 
we should extend GridCacheProcessor#checkMemoryConfiguration by checking that 
the persistence-enabled flag is the same for data regions with the same name. I 
propose to implement it under this ticket.
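
A rough sketch of the proposed check (the method and error message are 
hypothetical, not the actual implementation):

{code:java}
import java.util.HashMap;
import java.util.Map;

import org.apache.ignite.configuration.DataRegionConfiguration;

public class RegionCheckSketch {
    /** Fail when a data region with the same name has a different
     * persistence-enabled flag on the local and a remote node. */
    static void checkPersistenceFlags(DataRegionConfiguration[] locRegions,
        DataRegionConfiguration[] rmtRegions) {
        Map<String, Boolean> loc = new HashMap<>();

        for (DataRegionConfiguration r : locRegions)
            loc.put(r.getName(), r.isPersistenceEnabled());

        for (DataRegionConfiguration r : rmtRegions) {
            Boolean locFlag = loc.get(r.getName());

            if (locFlag != null && locFlag != r.isPersistenceEnabled())
                throw new IllegalStateException(
                    "Persistence flag mismatch for data region: " + r.getName());
        }
    }
}
{code}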

> Need to validate nodes configuration across cluster and warn on different 
> parameters value
> --
>
> Key: IGNITE-8951
> URL: https://issues.apache.org/jira/browse/IGNITE-8951
> Project: Ignite
>  Issue Type: Task
>Reporter: Yakov Zhdanov
>Priority: Major
>
> On node start, the node should output in its logs the list of parameters 
> having values different from the values on remote nodes. This should be 
> skipped for parameters that are always different, e.g. host name, node ID or 
> IP; however, there should be an option to include parameters from the default 
> ignore list as well.
> Another requirement is that the intended output may be fully suppressed by 
> setting the system property -DIGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK=true
> It seems that the implementation approach should be similar to the performance 
> suggestions Ignite currently has.
> The output may be as follows:
> {noformat}
> Local node has different configuration compared to remote nodes for 
> parameters (fix if possible). To disable, set 
> -DIGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK=true
>   ^-- rebalanceThreadPoolSize [locNodeVal=4, rmtNodeId=X1X1, rmtNodeVal=8]
>   ^-- commSpi.selectorsCnt [locNodeVal=2, rmtNodeId=Y1Y1, rmtNodeVal=4]
>   ^-- commSpi.selectorsCnt [locNodeVal=2, rmtNodeId=Z1Z1, rmtNodeVal=8]
> {noformat}
> All components should add messages to {{cfgConsistencyRegister}} on startup 
> and then all differences should be output in one step.
> If a node aborts startup due to any problem, the differences collected so far 
> should be output to the logs.
> If there is more than one value for some config parameter among remote nodes, 
> then all distinct options should be output (see {{commSpi.selectorsCnt}} in 
> the example above).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8977) Need to update distributed SQL page

2018-07-10 Thread Dmitriy Setrakyan (JIRA)
Dmitriy Setrakyan created IGNITE-8977:
-

 Summary: Need to update distributed SQL page
 Key: IGNITE-8977
 URL: https://issues.apache.org/jira/browse/IGNITE-8977
 Project: Ignite
  Issue Type: Improvement
  Components: documentation
Reporter: Dmitriy Setrakyan
Assignee: Prachi Garg
 Fix For: 2.7


Main goal - less text, better structure.

Let's add the following sections:
 # Distributed SQL Indexes
 # {color:#33}Distributed SQL JOINs{color}
 # {color:#33}Memory-only{color}
 # {color:#33}Native Persistence{color}
 # {color:#33}Memory Management - here we should mention that due to Ignite 
memory management policies, the most used indexes will always end up being 
cached in memory, even when the persistence is enabled.{color}

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8976) Add native persistence section to Key-Value data grid page

2018-07-10 Thread Dmitriy Setrakyan (JIRA)
Dmitriy Setrakyan created IGNITE-8976:
-

 Summary: Add native persistence section to Key-Value data grid page
 Key: IGNITE-8976
 URL: https://issues.apache.org/jira/browse/IGNITE-8976
 Project: Ignite
  Issue Type: Improvement
  Components: documentation
Reporter: Dmitriy Setrakyan
Assignee: Prachi Garg
 Fix For: 2.7


# We need to add a section for Ignite native persistence
 # We need to add a native persistence image to this section.
 # We need to place current 3rd party DB image next to the 3rd party DB section



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8875) Add JMX methods to block\unblock new incoming connections from thin clients.

2018-07-10 Thread Andrey Gura (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538729#comment-16538729
 ] 

Andrey Gura commented on IGNITE-8875:
-

[~amashenkov] Could you please add an issue description: idea, motivation, 
proposal?

> Add JMX methods to block\unblock new incoming connections from thin clients.
> 
>
> Key: IGNITE-8875
> URL: https://issues.apache.org/jira/browse/IGNITE-8875
> Project: Ignite
>  Issue Type: Improvement
>  Components: jdbc, odbc, thin client
>Reporter: Andrew Mashenkov
>Assignee: Andrew Mashenkov
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8935) IgniteConfiguration dependents should have toString

2018-07-10 Thread Stanislav Lukyanov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538728#comment-16538728
 ] 

Stanislav Lukyanov commented on IGNITE-8935:


[~ilyak], reviewed, looks great!

> IgniteConfiguration dependents should have toString
> ---
>
> Key: IGNITE-8935
> URL: https://issues.apache.org/jira/browse/IGNITE-8935
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.5
>Reporter: Ilya Kasnacheev
>Assignee: Ilya Kasnacheev
>Priority: Minor
>  Labels: usability
>
> Ignite configuration is printed on startup, but some classes that it commonly 
> refers to do not have toString() implemented, leading to gems such as 
> connectorCfg=org.apache.ignite.configuration.ConnectorConfiguration@7d8704ef
> Those classes should have toString() implemented, conforming to 
> https://cwiki.apache.org/confluence/display/IGNITE/Coding+Guidelines#CodingGuidelines-StringOutput



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8975) Invalid initialization of compressed archived WAL segment when WAL compression is switched off.

2018-07-10 Thread Ivan Daschinskiy (JIRA)
Ivan Daschinskiy created IGNITE-8975:


 Summary: Invalid initialization of compressed archived WAL segment 
when WAL compression is switched off.
 Key: IGNITE-8975
 URL: https://issues.apache.org/jira/browse/IGNITE-8975
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.5
Reporter: Ivan Daschinskiy
Assignee: Ivan Daschinskiy
 Fix For: 2.7


After restarting a node with WAL compression disabled and with a compressed WAL 
archive present, the current implementation of FileWriteAheadLogManager ignores 
the existing compressed WAL segment and initializes a brand-new empty one. This 
causes the following error:

{code:java}
2018-07-05 16:14:25.761 
[ERROR][exchange-worker-#153%DPL_GRID%DplGridNodeName%][o.a.i.i.p.c.p.c.CheckpointHistory]
 Failed to process checkpoint: CheckpointEntry 
[id=8dc4b1cc-dedd-4a57-8748-f5a7ecfd389d, timestamp=1530785506909, 
ptr=FileWALPointer [idx=4520, fileOff=860507725, len=691515]]
org.apache.ignite.IgniteCheckedException: Failed to find checkpoint record at 
the given WAL pointer: FileWALPointer [idx=4520, fileOff=860507725, len=691515]
at 
org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointEntry$GroupStateLazyStore.initIfNeeded(CheckpointEntry.java:346)
at 
org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointEntry$GroupStateLazyStore.access$300(CheckpointEntry.java:231)
at 
org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointEntry.initIfNeeded(CheckpointEntry.java:123)
at 
org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointEntry.groupState(CheckpointEntry.java:105)
at 
org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointHistory.isCheckpointApplicableForGroup(CheckpointHistory.java:377)
at 
org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointHistory.searchAndReserveCheckpoints(CheckpointHistory.java:304)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.reserveHistoryForExchange(GridCacheDatabaseSharedManager.java:1614)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1139)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:724)
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2477)
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2357)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
{code}
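
For context, a minimal configuration sketch of the restart scenario (assuming 
{{walCompactionEnabled}} is the WAL compression switch meant above):

{code:java}
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class WalCompactionRestartSketch {
    public static IgniteConfiguration config() {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();

        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        // Was true on the previous run, so compressed segments already sit in
        // the WAL archive; now the switch is turned off before the restart.
        storageCfg.setWalCompactionEnabled(false);

        return new IgniteConfiguration().setDataStorageConfiguration(storageCfg);
    }
}
{code}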




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8900) SqlFieldsQuery provides incorrect result when item size exceeds page size

2018-07-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538662#comment-16538662
 ] 

ASF GitHub Bot commented on IGNITE-8900:


GitHub user ilantukh opened a pull request:

https://github.com/apache/ignite/pull/4344

IGNITE-8900



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8900

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4344.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4344


commit b3707b44c89b3bb4cb30d07cb1c9936264b0666e
Author: devozerov 
Date:   2018-06-29T20:01:26Z

Reproducer for IGNITE-8900 issue with broken links.

commit 1e05b9963592e20150d6006752da8a7b9fc87b09
Author: Alexey Goncharuk 
Date:   2018-06-30T11:08:45Z

Merge branch 'master' of https://git-wip-us.apache.org/repos/asf/ignite 
into ignite-8900-repro

commit 1c9c746c9ebae59a83f030689d0d7e90a429152e
Author: Alexey Goncharuk 
Date:   2018-06-30T13:23:21Z

IGNITE-8900 Attempting to fix the link issue

commit f819202d254f3b52f34a68d29cb48e120dc45810
Author: Ilya Lantukh 
Date:   2018-07-10T14:12:32Z

IGNITE-8900 : Hotfix for JVM crash.




> SqlFieldsQuery provides incorrect result when item size exceeds page size
> -
>
> Key: IGNITE-8900
> URL: https://issues.apache.org/jira/browse/IGNITE-8900
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.4
>Reporter: Anton Kurbanov
>Assignee: Ilya Lantukh
>Priority: Blocker
> Attachments: Main.java, Node.java
>
>
> Start several server nodes, then start a client and execute queries with a 
> value range in the WHERE clause. Duplicate entries may be found, and some 
> entries may be missing.
> Example results:
> expected 5 results but got back 3 results (query range 61002664327616 to 
> 610026643276160004), cache.getAll returned 5 entries.
> expected 8 results but got back 7 results (query range 61002664327616 to 
> 610026643276160007), cache.getAll returned 8 entries.
>  Query results: [61002664327616, 610026643276160003, 610026643276160004, 
> 610026643276160005, 610026643276160005, 610026643276160006, 
> 610026643276160007]
> Please find the reproducer attached.
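
A sketch of the kind of range query the reproducer runs (the table name and key 
type are guesses; the real ones are in the attached Main.java / Node.java):

{code:java}
import java.util.List;

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class RangeQuerySketch {
    static List<List<?>> queryRange(IgniteCache<Long, ?> cache, long from, long to) {
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "select _key from Value where _key between ? and ?").setArgs(from, to);

        // May return duplicates or miss entries when an item exceeds the page size.
        return cache.query(qry).getAll();
    }
}
{code}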



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8899) IgniteJdbcDriver directly create JavaLogger in static context

2018-07-10 Thread Evgenii Zhuravlev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgenii Zhuravlev updated IGNITE-8899:
--
Affects Version/s: 2.5

> IgniteJdbcDriver directly create JavaLogger in static context
> -
>
> Key: IGNITE-8899
> URL: https://issues.apache.org/jira/browse/IGNITE-8899
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Evgenii Zhuravlev
>Assignee: Evgenii Zhuravlev
>Priority: Major
> Fix For: 2.7
>
>
> This means that it always prints an error to the logs if the JUL logging file 
> doesn't exist. I suggest using the same approach as in the thin driver:
> replace 
> new JavaLogger()
> with
> Logger.getLogger(IgniteJdbcDriver.class.getName())
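
A minimal sketch of the suggested replacement (a standalone illustration, not 
the driver code itself):

{code:java}
import java.util.logging.Logger;

public class JdbcDriverLoggerSketch {
    // new JavaLogger() tries to read a JUL configuration file eagerly and logs
    // an error when the file is missing; a plain JUL logger does not.
    private static final Logger LOG =
        Logger.getLogger("org.apache.ignite.IgniteJdbcDriver");

    public static void main(String[] args) {
        LOG.info("driver registered");
    }
}
{code}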



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8934) LongJVMPauseDetector prints error on thread interruption when node stopping

2018-07-10 Thread Evgenii Zhuravlev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgenii Zhuravlev updated IGNITE-8934:
--
Fix Version/s: 2.7

> LongJVMPauseDetector prints error on thread interruption when node stopping
> ---
>
> Key: IGNITE-8934
> URL: https://issues.apache.org/jira/browse/IGNITE-8934
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Evgenii Zhuravlev
>Assignee: Evgenii Zhuravlev
>Priority: Major
> Fix For: 2.7
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8899) IgniteJdbcDriver directly create JavaLogger in static context

2018-07-10 Thread Evgenii Zhuravlev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgenii Zhuravlev updated IGNITE-8899:
--
Fix Version/s: 2.7

> IgniteJdbcDriver directly create JavaLogger in static context
> -
>
> Key: IGNITE-8899
> URL: https://issues.apache.org/jira/browse/IGNITE-8899
> Project: Ignite
>  Issue Type: Bug
>Reporter: Evgenii Zhuravlev
>Assignee: Evgenii Zhuravlev
>Priority: Major
> Fix For: 2.7
>
>
> This means that it always prints an error to the logs if the JUL logging file 
> doesn't exist. I suggest using the same approach as in the thin driver:
> replace 
> new JavaLogger()
> with
> Logger.getLogger(IgniteJdbcDriver.class.getName())



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8973) Need to support dump for idle_verify

2018-07-10 Thread Anton Kalashnikov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Kalashnikov reassigned IGNITE-8973:
-

Assignee: Anton Kalashnikov

> Need to support dump for idle_verify 
> -
>
> Key: IGNITE-8973
> URL: https://issues.apache.org/jira/browse/IGNITE-8973
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Dmitriy Govorukhin
>Assignee: Anton Kalashnikov
>Priority: Major
> Fix For: 2.7
>
>
> In the current implementation, idle_verify checks consistency between primary 
> and backup partitions. It would be useful to be able to dump the current 
> state of all partitions to a file or to standard output. Such a dump can help 
> investigate problems with partition counters or sizes, because it is a 
> snapshot of cluster partition hashes for a given partition state (the hash 
> includes all keys in the partition).
> idle_verify --dump - calculate partition hashes and print them to standard output
> idle_verify --dump {path} - calculate partition hashes and write the output to a file



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8974) MVCC TX: Vacuum cleanup version obtaining optimization.

2018-07-10 Thread Roman Kondakov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Kondakov updated IGNITE-8974:
---
Description: 
At the moment the vacuum process obtains the cleanup version in the same way as 
transactions do. This implies some unnecessary complications and even a minor 
performance drop, due to calculating the entire tx snapshot instead of just a 
cleanup version number and due to sending unnecessary tx end acks back to the 
coordinator. Possible solutions are:
 * Locally cache the cleanup version from the last obtained tx snapshot and use 
it in the vacuum process. But in this way not all outdated versions could be 
cleaned (i.e. keys updated by that last tx).
 * Implement a special method for calculating the cleanup version on the 
coordinator site, with Request and Response messages for vacuum run on the 
non-coordinator site.

  was:
At the moment the vacuum process obtains the cleanup version in the same way as 
transactions do. This implies some unnecessary complications and even a minor 
performance drop, due to calculating the entire tx snapshot instead of just a 
cleanup version number and due to sending unnecessary tx end acks back to the 
coordinator. Possible solutions are:
 * Locally cache the cleanup version from the last obtained tx snapshot and use 
it in the vacuum process. But in this way not all outdated versions could be 
cleaned (i.e. keys updated by that last tx).
 * Implement a special method for calculating the cleanup version on the 
coordinator site, with Request and Response messages for vacuum run on the 
non-coordinator side.


> MVCC TX: Vacuum cleanup version obtaining optimization.
> ---
>
> Key: IGNITE-8974
> URL: https://issues.apache.org/jira/browse/IGNITE-8974
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache, sql
>Reporter: Roman Kondakov
>Assignee: Roman Kondakov
>Priority: Major
>  Labels: mvcc
>
> At the moment the vacuum process obtains the cleanup version in the same way 
> as transactions do. This implies some unnecessary complications and even a 
> minor performance drop, due to calculating the entire tx snapshot instead of 
> just a cleanup version number and due to sending unnecessary tx end acks back 
> to the coordinator. Possible solutions are:
>  * Locally cache the cleanup version from the last obtained tx snapshot and 
> use it in the vacuum process. But in this way not all outdated versions could 
> be cleaned (i.e. keys updated by that last tx).
>  * Implement a special method for calculating the cleanup version on the 
> coordinator site, with Request and Response messages for vacuum run on the 
> non-coordinator site.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8974) MVCC TX: Vacuum cleanup version obtaining optimization.

2018-07-10 Thread Roman Kondakov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Kondakov updated IGNITE-8974:
---
Description: 
At the moment the vacuum process obtains the cleanup version in the same way as 
transactions do. This implies some unnecessary complications and even a minor 
performance drop, due to calculating the entire tx snapshot instead of just a 
cleanup version number and due to sending unnecessary tx end acks back to the 
coordinator. Possible solutions are:
 * Locally cache the cleanup version from the last obtained tx snapshot and use 
it in the vacuum process. But in this way not all outdated versions could be 
cleaned (i.e. keys updated by that last tx).
 * Implement a special method for calculating the cleanup version on the 
coordinator site, with Request and Response messages for vacuum run on the 
non-coordinator side.

  was:
At the moment the vacuum process obtains the cleanup version in the same way as 
transactions do. This implies some unnecessary complications and even a minor 
performance drop, due to calculating the entire tx snapshot instead of just a 
cleanup version number and due to sending unnecessary tx end acks back to the 
coordinator. Possible solutions are:
 * Locally cache the cleanup version from the last obtained tx snapshot and use 
it in the vacuum process. But in this way not all outdated versions could be 
cleaned (i.e. keys updated by that last tx).
 * Implement a special method for calculating the cleanup version on the 
coordinator state, with Request and Response messages for vacuum run on the 
non-coordinator side.


> MVCC TX: Vacuum cleanup version obtaining optimization.
> ---
>
> Key: IGNITE-8974
> URL: https://issues.apache.org/jira/browse/IGNITE-8974
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache, sql
>Reporter: Roman Kondakov
>Assignee: Roman Kondakov
>Priority: Major
>  Labels: mvcc
>
> At the moment the vacuum process obtains the cleanup version in the same way 
> as transactions do. This implies some unnecessary complications and even a 
> minor performance drop, due to calculating the entire tx snapshot instead of 
> just a cleanup version number and due to sending unnecessary tx end acks back 
> to the coordinator. Possible solutions are:
>  * Locally cache the cleanup version from the last obtained tx snapshot and 
> use it in the vacuum process. But in this way not all outdated versions could 
> be cleaned (i.e. keys updated by that last tx).
>  * Implement a special method for calculating the cleanup version on the 
> coordinator site, with Request and Response messages for vacuum run on the 
> non-coordinator side.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8974) MVCC TX: Vacuum cleanup version obtaining optimization.

2018-07-10 Thread Roman Kondakov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Kondakov reassigned IGNITE-8974:
--

Assignee: Roman Kondakov

> MVCC TX: Vacuum cleanup version obtaining optimization.
> ---
>
> Key: IGNITE-8974
> URL: https://issues.apache.org/jira/browse/IGNITE-8974
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache, sql
>Reporter: Roman Kondakov
>Assignee: Roman Kondakov
>Priority: Major
>  Labels: mvcc
>
> At the moment the vacuum process obtains the cleanup version in the same way 
> as transactions do. This implies some unnecessary complications and even a 
> minor performance drop, due to calculating the entire tx snapshot instead of 
> just a cleanup version number and due to sending unnecessary tx end acks back 
> to the coordinator. Possible solutions are:
>  * Locally cache the cleanup version from the last obtained tx snapshot and 
> use it in the vacuum process. But in this way not all outdated versions could 
> be cleaned (i.e. keys updated by that last tx).
>  * Implement a special method for calculating the cleanup version on the 
> coordinator state, with Request and Response messages for vacuum run on the 
> non-coordinator side.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8974) MVCC TX: Vacuum cleanup version obtaining optimization.

2018-07-10 Thread Roman Kondakov (JIRA)
Roman Kondakov created IGNITE-8974:
--

 Summary: MVCC TX: Vacuum cleanup version obtaining optimization.
 Key: IGNITE-8974
 URL: https://issues.apache.org/jira/browse/IGNITE-8974
 Project: Ignite
  Issue Type: Improvement
  Components: cache, sql
Reporter: Roman Kondakov


At the moment the vacuum process obtains the cleanup version in the same way as 
transactions do. This implies some unnecessary complications and even a minor 
performance drop, due to calculating the entire tx snapshot instead of just a 
cleanup version number and due to sending unnecessary tx end acks back to the 
coordinator. Possible solutions are:
 * Locally cache the cleanup version from the last obtained tx snapshot and use 
it in the vacuum process. But in this way not all outdated versions could be 
cleaned (i.e. keys updated by that last tx).
 * Implement a special method for calculating the cleanup version on the 
coordinator state, with Request and Response messages for vacuum run on the 
non-coordinator side.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8973) Need to support dump for idle_verify

2018-07-10 Thread Dmitriy Govorukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Govorukhin updated IGNITE-8973:
---
Description: 
In the current implementation, idle_verify checks consistency between primary 
and backup partitions. It would be useful to be able to dump the current state 
of all partitions to a file or to standard output. Such a dump can help 
investigate problems with partition counters or sizes, because it is a snapshot 
of cluster partition hashes for a given partition state (the hash includes all 
keys in the partition).

idle_verify --dump - calculate partition hashes and print them to standard output
idle_verify --dump {path} - calculate partition hashes and write the output to a file


  was:
In the current implementation, idle_verify checks consistency between primary 
and backup partitions. It would be useful to be able to dump the current state 
of all partitions to a file. Such a dump can help investigate problems with 
partition counters or sizes, because it is a snapshot of cluster partition 
hashes for a given partition state (the hash includes all keys in the partition).

idle_verify --dump - calculate partition hashes and print them to standard output
idle_verify --dump {path} - calculate partition hashes and write the output to a file



> Need to support dump for idle_verify 
> -
>
> Key: IGNITE-8973
> URL: https://issues.apache.org/jira/browse/IGNITE-8973
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Dmitriy Govorukhin
>Priority: Major
> Fix For: 2.7
>
>
> In the current implementation, idle_verify checks consistency between primary 
> and backup partitions. It would be useful to be able to dump the current 
> state of all partitions to a file or to standard output. Such a dump can help 
> investigate problems with partition counters or sizes, because it is a 
> snapshot of cluster partition hashes for a given partition state (the hash 
> includes all keys in the partition).
> idle_verify --dump - calculate partition hashes and print them to standard output
> idle_verify --dump {path} - calculate partition hashes and write the output to a file



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8973) Need to support dump for idle_verify

2018-07-10 Thread Dmitriy Govorukhin (JIRA)
Dmitriy Govorukhin created IGNITE-8973:
--

 Summary: Need to support dump for idle_verify 
 Key: IGNITE-8973
 URL: https://issues.apache.org/jira/browse/IGNITE-8973
 Project: Ignite
  Issue Type: Bug
Reporter: Dmitriy Govorukhin


In the current implementation, idle_verify checks consistency between primary 
and backup partitions. It would be useful to be able to dump the current state 
of all partitions to a file. Such a dump can help investigate problems with 
partition counters or sizes, because it is a snapshot of cluster partition 
hashes for a given partition state (the hash includes all keys in the partition).

idle_verify --dump - calculate partition hashes and print them to standard output
idle_verify --dump {path} - calculate partition hashes and write the output to a file




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8973) Need to support dump for idle_verify

2018-07-10 Thread Dmitriy Govorukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Govorukhin updated IGNITE-8973:
---
Fix Version/s: 2.7

> Need to support dump for idle_verify 
> -
>
> Key: IGNITE-8973
> URL: https://issues.apache.org/jira/browse/IGNITE-8973
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Dmitriy Govorukhin
>Priority: Major
> Fix For: 2.7
>
>
> In the current implementation, idle_verify checks consistency between primary 
> and backup partitions. It would be useful to be able to dump the current 
> state of all partitions to a file. Such a dump can help investigate problems 
> with partition counters or sizes, because it is a snapshot of cluster 
> partition hashes for a given partition state (the hash includes all keys in 
> the partition).
> idle_verify --dump - calculate partition hashes and print them to standard output
> idle_verify --dump {path} - calculate partition hashes and write the output to a file



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8973) Need to support dump for idle_verify

2018-07-10 Thread Dmitriy Govorukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Govorukhin updated IGNITE-8973:
---
Issue Type: Improvement  (was: Bug)

> Need to support dump for idle_verify 
> -
>
> Key: IGNITE-8973
> URL: https://issues.apache.org/jira/browse/IGNITE-8973
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Dmitriy Govorukhin
>Priority: Major
> Fix For: 2.7
>
>
> In the current implementation, idle_verify checks consistency between primary 
> and backup partitions. It would be useful to be able to dump the current 
> state of all partitions to a file. Such a dump can help investigate problems 
> with partition counters or sizes, because it is a snapshot of cluster 
> partition hashes for a given partition state (the hash includes all keys in 
> the partition).
> idle_verify --dump - calculate partition hashes and print them to standard output
> idle_verify --dump {path} - calculate partition hashes and write the output to a file



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8973) Need to support dump for idle_verify

2018-07-10 Thread Dmitriy Govorukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Govorukhin updated IGNITE-8973:
---
Description: 
In the current implementation, idle_verify checks consistency between primary 
and backup partitions. It would be useful to be able to dump the current state 
of all partitions to a file. Such a dump can help investigate problems with 
partition counters or sizes, because it is a snapshot of cluster partition 
hashes for a given partition state (the hash includes all keys in the partition).

idle_verify --dump - calculate partition hashes and print them to standard output
idle_verify --dump {path} - calculate partition hashes and write the output to a file


  was:
In a current implementation, idle_verify checking consistency between primary 
and backup partitions will be useful to have ability dump current state for all 
partition to file. This dump can help an investigation of some kind of problem 
with partition counters or sizes because it is a cluster partition hash 
snapshot by some partition state (hash include all keys in the partition).

idle_verify --dump - calculate partition hash and print into standard output
idle_verify --dump {path} - calculate partition hash and write output to file



> Need to support dump for idle_verify 
> -
>
> Key: IGNITE-8973
> URL: https://issues.apache.org/jira/browse/IGNITE-8973
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Dmitriy Govorukhin
>Priority: Major
> Fix For: 2.7
>
>
> In the current implementation, idle_verify checks consistency between primary 
> and backup partitions. It would be useful to have the ability to dump the 
> current state of all partitions to a file. Such a dump can help investigate 
> problems with partition counters or sizes, because it is a snapshot of the 
> cluster's partition hashes for a given partition state (the hash includes all 
> keys in the partition).
> idle_verify --dump - calculate partition hashes and print them to standard output
> idle_verify --dump {path} - calculate partition hashes and write the output to a file



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8900) SqlFieldsQuery provides incorrect result when item size exceeds page size

2018-07-10 Thread Ilya Lantukh (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Lantukh reassigned IGNITE-8900:


Assignee: Ilya Lantukh

> SqlFieldsQuery provides incorrect result when item size exceeds page size
> -
>
> Key: IGNITE-8900
> URL: https://issues.apache.org/jira/browse/IGNITE-8900
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.4
>Reporter: Anton Kurbanov
>Assignee: Ilya Lantukh
>Priority: Blocker
> Attachments: Main.java, Node.java
>
>
> Start several server nodes, then start a client and execute queries with a 
> value range in the WHERE clause. Duplicate entries may be found and some 
> entries may be missing.
> Results as an example:
> expected 5 results but got back 3 results (query range 61002664327616 to 
> 610026643276160004), cache.getAll returned 5 entries.
> expected 8 results but got back 7 results (query range 61002664327616 to 
> 610026643276160007), cache.getAll returned 8 entries.
>  Query results: [61002664327616, 610026643276160003, 610026643276160004, 
> 610026643276160005, 610026643276160005, 610026643276160006, 
> 610026643276160007]
> Please find reproducer attached.
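
For reference, the failing pattern has roughly this shape (a sketch only: the 
real reproducer is in the attached Main.java/Node.java, so the cache and table 
names below are assumptions):

{code:java}
import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class RangeQueryCheck {
    /** Runs the range query that may return duplicates or miss rows. */
    static List<List<?>> query(Ignite ignite, long from, long to) {
        // Assumed cache/table names; the attached reproducer defines the real ones.
        IgniteCache<Long, Object> cache = ignite.cache("test");

        SqlFieldsQuery qry = new SqlFieldsQuery(
            "select _key from TestValue where _key between ? and ?").setArgs(from, to);

        // With items larger than the page size this result may contain
        // duplicates or miss rows, while cache.getAll(...) is correct.
        return cache.query(qry).getAll();
    }
}
{code}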



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8907) [ML] Using vectors in featureExtractor

2018-07-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538557#comment-16538557
 ] 

ASF GitHub Bot commented on IGNITE-8907:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4293


> [ML] Using vectors in featureExtractor
> --
>
> Key: IGNITE-8907
> URL: https://issues.apache.org/jira/browse/IGNITE-8907
> Project: Ignite
>  Issue Type: Improvement
>  Components: ml
>Reporter: Yury Babak
>Assignee: Alexey Platonov
>Priority: Major
> Fix For: 2.7
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8971) GridRestProcessor should propagate error message

2018-07-10 Thread Alexey Kuznetsov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538505#comment-16538505
 ] 

Alexey Kuznetsov commented on IGNITE-8971:
--

[~andmed] I left several comments on GitHub, please take a look.

> GridRestProcessor should propagate error message
> 
>
> Key: IGNITE-8971
> URL: https://issues.apache.org/jira/browse/IGNITE-8971
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Andrew Medvedev
>Assignee: Andrew Medvedev
>Priority: Major
>
> GridRestProcessor should propagate the error message (stack trace) to allow 
> handling disk-full errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8776) Eviction policy MBeans are never registered if evictionPolicyFactory is used

2018-07-10 Thread Stanislav Lukyanov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538496#comment-16538496
 ] 

Stanislav Lukyanov commented on IGNITE-8776:


[~kcheng.mvp], thanks for the changes, I have no more comments.

[~dpavlov], could you please also take a look and merge the fix if it's OK?

> Eviction policy MBeans are never registered if evictionPolicyFactory is used
> 
>
> Key: IGNITE-8776
> URL: https://issues.apache.org/jira/browse/IGNITE-8776
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Stanislav Lukyanov
>Assignee: kcheng.mvp
>Priority: Minor
>  Labels: newbie
> Fix For: 2.7
>
>
> Eviction policy MBeans, such as LruEvictionPolicyMBean, are never registered 
> if evictionPolicyFactory is set instead of evictionPolicy (the latter is 
> deprecated).
> This happens because GridCacheProcessor::registerMbean attempts to find 
> either an *MBean interface or IgniteMBeanAware interface on the passed 
> object. It works for LruEvictionPolicy but not for LruEvictionPolicyFactory 
> (which doesn't implement these interfaces).
> The code needs to be adjusted to handle factories correctly.
> New tests are needed to make sure that all standard beans are registered 
> (IgniteKernalMbeansTest does that for kernal mbeans - need the same for cache 
> beans).
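
To illustrate the affected configuration (a sketch, assuming the LRU classes 
from org.apache.ignite.cache.eviction.lru; the cache name is made up):

{code:java}
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");

// On-heap eviction requires the on-heap cache to be enabled.
ccfg.setOnheapCacheEnabled(true);

// Factory-based configuration: the policy MBean is currently never registered.
ccfg.setEvictionPolicyFactory(new LruEvictionPolicyFactory<>(1000));

// Deprecated instance-based configuration, for which the MBean is registered:
// ccfg.setEvictionPolicy(new LruEvictionPolicy<>(1000));
{code}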



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8972) CPP Thin: Add thin client example

2018-07-10 Thread Igor Sapego (JIRA)
Igor Sapego created IGNITE-8972:
---

 Summary: CPP Thin: Add thin client example
 Key: IGNITE-8972
 URL: https://issues.apache.org/jira/browse/IGNITE-8972
 Project: Ignite
  Issue Type: Improvement
  Components: platforms
Affects Versions: 2.5
Reporter: Igor Sapego
Assignee: Igor Sapego
 Fix For: 2.7


Add thin C++ client example that shows its basic functionality.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IGNITE-8494) CPP Thin: Implement Thin CPP client

2018-07-10 Thread Igor Sapego (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Sapego resolved IGNITE-8494.
-
Resolution: Fixed

Merged to master.

> CPP Thin: Implement Thin CPP client
> ---
>
> Key: IGNITE-8494
> URL: https://issues.apache.org/jira/browse/IGNITE-8494
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms
>Affects Versions: 2.4
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Major
>  Labels: cpp
> Fix For: 2.7
>
>
> We need a thin client for C++.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8946) AssertionError can occur during release of WAL history that was reserved for historical rebalance

2018-07-10 Thread Ivan Rakov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538491#comment-16538491
 ] 

Ivan Rakov commented on IGNITE-8946:


[~ilantukh], can you please review the changes?

> AssertionError can occur during release of WAL history that was reserved for 
> historical rebalance
> -
>
> Key: IGNITE-8946
> URL: https://issues.apache.org/jira/browse/IGNITE-8946
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ivan Rakov
>Assignee: Ivan Rakov
>Priority: Critical
> Fix For: 2.7
>
>
> Attempt to release WAL history after exchange may fail with AssertionError. 
> Seems like we have a bug and may try to release more WAL segments than we 
> have reserved:
> {noformat}
> java.lang.AssertionError: null
> at 
> org.apache.ignite.internal.processors.cache.persistence.wal.SegmentReservationStorage.release(SegmentReservationStorage.java:54)
> - locked <0x1c12> (a 
> org.apache.ignite.internal.processors.cache.persistence.wal.SegmentReservationStorage)
> at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager.release(FileWriteAheadLogManager.java:862)
> at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.releaseHistoryForExchange(GridCacheDatabaseSharedManager.java:1691)
> - locked <0x1c17> (a 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onDone(GridDhtPartitionsExchangeFuture.java:1751)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.finishExchangeOnCoordinator(GridDhtPartitionsExchangeFuture.java:2858)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onAllReceived(GridDhtPartitionsExchangeFuture.java:2591)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.processSingleMessage(GridDhtPartitionsExchangeFuture.java:2283)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.access$100(GridDhtPartitionsExchangeFuture.java:129)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$2.apply(GridDhtPartitionsExchangeFuture.java:2140)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$2.apply(GridDhtPartitionsExchangeFuture.java:2128)
> at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:383)
> at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:353)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onReceiveSingleMessage(GridDhtPartitionsExchangeFuture.java:2128)
> at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.processSinglePartitionUpdate(GridCachePartitionExchangeManager.java:1580)
> at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.access$1000(GridCachePartitionExchangeManager.java:138)
> at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$2.onMessage(GridCachePartitionExchangeManager.java:345)
> at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$2.onMessage(GridCachePartitionExchangeManager.java:325)
> at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$MessageHandler.apply(GridCachePartitionExchangeManager.java:2848)
> at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$MessageHandler.apply(GridCachePartitionExchangeManager.java:2827)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1056)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:581)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:380)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:306)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:101)
> at 
> 

[jira] [Commented] (IGNITE-8971) GridRestProcessor should propagate error message

2018-07-10 Thread Andrew Medvedev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538489#comment-16538489
 ] 

Andrew Medvedev commented on IGNITE-8971:
-

[~skosarev] please review

> GridRestProcessor should propagate error message
> 
>
> Key: IGNITE-8971
> URL: https://issues.apache.org/jira/browse/IGNITE-8971
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Andrew Medvedev
>Assignee: Andrew Medvedev
>Priority: Major
>
> GridRestProcessor should propagate the error message (stack trace) to allow 
> handling disk-full errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8946) AssertionError can occur during release of WAL history that was reserved for historical rebalance

2018-07-10 Thread Ivan Rakov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538488#comment-16538488
 ] 

Ivan Rakov commented on IGNITE-8946:


More recent TC: 
https://ci.ignite.apache.org/viewLog.html?buildId=1473232=buildResultsDiv=IgniteTests24Java8_RunAll

> AssertionError can occur during release of WAL history that was reserved for 
> historical rebalance
> -
>
> Key: IGNITE-8946
> URL: https://issues.apache.org/jira/browse/IGNITE-8946
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ivan Rakov
>Assignee: Ivan Rakov
>Priority: Critical
> Fix For: 2.7
>
>
> Attempt to release WAL history after exchange may fail with AssertionError. 
> Seems like we have a bug and may try to release more WAL segments than we 
> have reserved:
> {noformat}
> java.lang.AssertionError: null
> at 
> org.apache.ignite.internal.processors.cache.persistence.wal.SegmentReservationStorage.release(SegmentReservationStorage.java:54)
> - locked <0x1c12> (a 
> org.apache.ignite.internal.processors.cache.persistence.wal.SegmentReservationStorage)
> at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager.release(FileWriteAheadLogManager.java:862)
> at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.releaseHistoryForExchange(GridCacheDatabaseSharedManager.java:1691)
> - locked <0x1c17> (a 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onDone(GridDhtPartitionsExchangeFuture.java:1751)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.finishExchangeOnCoordinator(GridDhtPartitionsExchangeFuture.java:2858)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onAllReceived(GridDhtPartitionsExchangeFuture.java:2591)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.processSingleMessage(GridDhtPartitionsExchangeFuture.java:2283)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.access$100(GridDhtPartitionsExchangeFuture.java:129)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$2.apply(GridDhtPartitionsExchangeFuture.java:2140)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$2.apply(GridDhtPartitionsExchangeFuture.java:2128)
> at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:383)
> at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:353)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onReceiveSingleMessage(GridDhtPartitionsExchangeFuture.java:2128)
> at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.processSinglePartitionUpdate(GridCachePartitionExchangeManager.java:1580)
> at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.access$1000(GridCachePartitionExchangeManager.java:138)
> at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$2.onMessage(GridCachePartitionExchangeManager.java:345)
> at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$2.onMessage(GridCachePartitionExchangeManager.java:325)
> at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$MessageHandler.apply(GridCachePartitionExchangeManager.java:2848)
> at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$MessageHandler.apply(GridCachePartitionExchangeManager.java:2827)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1056)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:581)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:380)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:306)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:101)
> at 
> 

[jira] [Resolved] (IGNITE-8969) Unable to await partitions release latch

2018-07-10 Thread Anton Kalashnikov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Kalashnikov resolved IGNITE-8969.
---
Resolution: Incomplete

Fixed by https://issues.apache.org/jira/browse/IGNITE-8869

> Unable to await partitions release latch
> 
>
> Key: IGNITE-8969
> URL: https://issues.apache.org/jira/browse/IGNITE-8969
> Project: Ignite
>  Issue Type: Test
>Reporter: Anton Kalashnikov
>Assignee: Anton Kalashnikov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> Unable to await the partitions release latch within the timeout for a 
> ClientLatch after this node becomes the latch coordinator because the old 
> latch coordinator failed.
> Reproduced by 
> TcpDiscoverySslSelfTest.testNodeShutdownOnRingMessageWorkerStartNotFinished, 
> TcpDiscoverySslTrustedSelfTest.testNodeShutdownOnRingMessageWorkerStartNotFinished



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8971) GridRestProcessor should propagate error message

2018-07-10 Thread Andrew Medvedev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Medvedev reassigned IGNITE-8971:
---

Assignee: Andrew Medvedev

> GridRestProcessor should propagate error message
> 
>
> Key: IGNITE-8971
> URL: https://issues.apache.org/jira/browse/IGNITE-8971
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Andrew Medvedev
>Assignee: Andrew Medvedev
>Priority: Major
>
> GridRestProcessor should propagate the error message (stack trace) to allow 
> handling disk-full errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8905) Incorrect assertion in GridDhtPartitionsExchangeFuture

2018-07-10 Thread Sergey Chugunov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538475#comment-16538475
 ] 

Sergey Chugunov commented on IGNITE-8905:
-

IGNITE-8853 will be fixed as part of this issue.

> Incorrect assertion in GridDhtPartitionsExchangeFuture
> --
>
> Key: IGNITE-8905
> URL: https://issues.apache.org/jira/browse/IGNITE-8905
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Chugunov
>Assignee: Sergey Chugunov
>Priority: Major
> Fix For: 2.7
>
>   Original Estimate: 2h
>  Time Spent: 2h
>  Remaining Estimate: 59m
>
> An assertion was added as part of IGNITE-8657 to GridDhtPartitionsExchangeFuture 
> which is correct only in the situation when a client has to reconnect due to a 
> too short EXCHANGE_HISTORY.
> Exceptions from other situations, like not being able to acquire a file lock, 
> are also passed to the client node in FullMessage.
> This assertion should be removed and a check introduced instead: if the 
> exception is intended to be thrown on the current client node, we should do 
> so; otherwise the old program flow should be executed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8971) GridRestProcessor should propagate error message

2018-07-10 Thread Andrew Medvedev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538473#comment-16538473
 ] 

Andrew Medvedev commented on IGNITE-8971:
-

TC [https://ci.ignite.apache.org/viewQueued.html?itemId=1473893]

 

> GridRestProcessor should propagate error message
> 
>
> Key: IGNITE-8971
> URL: https://issues.apache.org/jira/browse/IGNITE-8971
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Andrew Medvedev
>Priority: Major
>
> GridRestProcessor should propagate the error message (stack trace) to allow 
> handling disk-full errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8971) GridRestProcessor should propagate error message

2018-07-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538458#comment-16538458
 ] 

ASF GitHub Bot commented on IGNITE-8971:


GitHub user andrewmed opened a pull request:

https://github.com/apache/ignite/pull/4342

IGNITE-8971 GridRestProcessor should propagate error message



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-gg-13472-3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4342.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4342


commit 39e586d74d39312485815f840ff0680cf4bd4512
Author: AMedvedev 
Date:   2018-07-10T11:25:48Z

GG-13472: handle disk full error on writing snapshot metadata




> GridRestProcessor should propagate error message
> 
>
> Key: IGNITE-8971
> URL: https://issues.apache.org/jira/browse/IGNITE-8971
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Andrew Medvedev
>Priority: Major
>
> GridRestProcessor should propagate the error message (stack trace) to allow 
> handling disk-full errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8971) GridRestProcessor should propagate error message

2018-07-10 Thread Andrew Medvedev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Medvedev updated IGNITE-8971:

Description: GridRestProcessor should propagate the error message (stack trace) 
to allow handling disk-full errors  (was: GridRestProcessor should propagate 
error message (stack trace))

> GridRestProcessor should propagate error message
> 
>
> Key: IGNITE-8971
> URL: https://issues.apache.org/jira/browse/IGNITE-8971
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Andrew Medvedev
>Priority: Major
>
> GridRestProcessor should propagate the error message (stack trace) to allow 
> handling disk-full errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8942) In some cases grid cannot be deactivated because of hanging CQ internal cleanup.

2018-07-10 Thread Alexei Scherbakov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538455#comment-16538455
 ] 

Alexei Scherbakov commented on IGNITE-8942:
---

[~agoncharuk],

Please review.

> In some cases grid cannot be deactivated because of hanging CQ internal 
> cleanup.
> 
>
> Key: IGNITE-8942
> URL: https://issues.apache.org/jira/browse/IGNITE-8942
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexei Scherbakov
>Assignee: Alexei Scherbakov
>Priority: Major
> Fix For: 2.7
>
> Attachments: thread_dump_eip-server_2018-07-05-18-02.log
>
>
> See the attachment for thread dump.
> Most probably caused by the message worker blocking while waiting for a 
> cluster state change:
> {noformat}
> "tcp-disco-msg-worker-#2%DPL_GRID%DplGridNodeName%" #380 daemon prio=10 
> os_prio=0 tid=0x7fe084c4c000 nid=0x39aa waiting on condition 
> [0x7fdcd76f5000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
> at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
> at 
> org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.publicApiActiveState(GridClusterStateProcessor.java:193)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFutureAdapter.validateCache(GridDhtTopologyFutureAdapter.java:83)
> at 
> org.apache.ignite.internal.processors.cache.CacheMetricsImpl.isValidForOperation(CacheMetricsImpl.java:715)
> at 
> org.apache.ignite.internal.processors.cache.CacheMetricsImpl.isValidForReading(CacheMetricsImpl.java:724)
> at 
> org.apache.ignite.internal.processors.cache.CacheMetricsSnapshot.(CacheMetricsSnapshot.java:334)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.localMetrics(GridCacheAdapter.java:3255)
> at 
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$7.cacheMetrics(GridDiscoveryManager.java:1098)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMetricsUpdateMessage(ServerImpl.java:5141)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2794)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2570)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorker.body(ServerImpl.java:6903)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2657)
> at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerThread.body(ServerImpl.java:6847)
> at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
> {noformat}
> Another problem: 
> org.apache.ignite.internal.processors.datastructures.DataStructuresProcessor#onDeActivate
>  is called during exchange before transactions have completed, which creates a 
> probability of losing CQ updates for current transactions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8971) GridRestProcessor should propagate error message

2018-07-10 Thread Andrew Medvedev (JIRA)
Andrew Medvedev created IGNITE-8971:
---

 Summary: GridRestProcessor should propagate error message
 Key: IGNITE-8971
 URL: https://issues.apache.org/jira/browse/IGNITE-8971
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.5
Reporter: Andrew Medvedev


GridRestProcessor should propagate the error message (stack trace)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7570) Client nodes failed with "Failed to process invalid partitions response" during failover

2018-07-10 Thread Oscar Torreno (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538423#comment-16538423
 ] 

Oscar Torreno commented on IGNITE-7570:
---

We are facing a similar situation.

Our setup:

JMeter JUnit tests
 - 6 server nodes running inside Kubernetes, 4 client nodes on 2 JMeter slaves
 - 1 cache: partitioned, atomic, 2 backups, read-from-backup set to true
 - keys: always the same (10 and 12 in this test)
 - operations: PUT, GET
 - 1 of the 6 servers (Kubernetes pods) is restarted every 2 minutes

The baseline topology is updated correctly on every node restart, with 6 
online servers after the restart takes place.

> Client nodes failed with "Failed to process invalid partitions response" 
> during failover
> 
>
> Key: IGNITE-7570
> URL: https://issues.apache.org/jira/browse/IGNITE-7570
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.4
>Reporter: Ksenia Rybakova
>Priority: Major
> Attachments: ignite-base-load-config.xml, run-load.properties, 
> run-load.xml
>
>
> Some client nodes fail with "Failed to process invalid partitions response" 
> during failover test:
> {noformat}
> [2018-01-30 16:27:58,610][INFO ][sys-#190][GridDhtPartitionsExchangeFuture] 
> Received full message, will finish exchange 
> [node=80ebd2ac-1432-4bfc-bab7-d9dbf56cdeb4, resVer=AffinityTopologyVersion 
> [topVer=37, minorTopVer=0]]
> [2018-01-30 16:27:58,688][INFO ][sys-#190][GridDhtPartitionsExchangeFuture] 
> Finish exchange future [startVer=AffinityTopologyVersion [topVer=37, 
> minorTopVer=0], resVer=AffinityTopologyVersion [topVer=37, minorTopVer=0], 
> err=null]
> <16:27:58> The benchmark of random operation 
> failed.
> javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException: 
> Failed to process invalid partitions response (remote node reported invalid 
> partitions but remote topology version does not differ from local) 
> [topVer=AffinityTopologyVersion [topVer=37, minorTopVer=0], 
> rmtTopVer=AffinityTopologyVersion [topVer=37, minorTopVer=0], part=204, 
> nodeId=80ebd2ac-1432-4bfc-bab7-d9dbf56cdeb4]
>  at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1294)
>  at 
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:1673)
>  at 
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.get(IgniteCacheProxyImpl.java:852)
>  at 
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.get(GatewayProtectedCacheProxy.java:676)
>  at 
> org.apache.ignite.yardstick.cache.load.IgniteCacheRandomOperationBenchmark.doGet(IgniteCacheRandomOperationBenchmark.java:776)
>  at 
> org.apache.ignite.yardstick.cache.load.IgniteCacheRandomOperationBenchmark.executeRandomOperation(IgniteCacheRandomOperationBenchmark.java:624)
>  at 
> org.apache.ignite.yardstick.cache.load.IgniteCacheRandomOperationBenchmark.executeOutOfTx(IgniteCacheRandomOperationBenchmark.java:602)
>  at 
> org.apache.ignite.yardstick.cache.load.IgniteCacheRandomOperationBenchmark.test(IgniteCacheRandomOperationBenchmark.java:207)
>  at 
> org.yardstickframework.impl.BenchmarkRunner$2.run(BenchmarkRunner.java:178)
>  at java.lang.Thread.run(Thread.java:748)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to process 
> invalid partitions response (remote node reported invalid partitions but 
> remote topology version does not differ from local) 
> [topVer=AffinityTopologyVersion [topVer=37, minorTopVer=0], 
> rmtTopVer=AffinityTopologyVersion [topVer=37, minorTopVer=0], part=204, 
> nodeId=80ebd2ac-1432-4bfc-bab7-d9dbf56cdeb4]
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.checkError(GridPartitionedSingleGetFuture.java:596)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.onResult(GridPartitionedSingleGetFuture.java:505)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.processNearSingleGetResponse(GridDhtCacheAdapter.java:349)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$1400(GridDhtAtomicCache.java:130)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$15.apply(GridDhtAtomicCache.java:422)
>  at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$15.apply(GridDhtAtomicCache.java:417)
>  at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1060)
>  at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
>  at 
> 

[jira] [Commented] (IGNITE-8873) Optimize cache scans with enabled persistence.

2018-07-10 Thread Vladislav Pyatkov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538387#comment-16538387
 ] 

Vladislav Pyatkov commented on IGNITE-8873:
---

I think this could be done by preloading every partition's pages before 
iterating over the entries.

Another idea: implement a specific configuration parameter (like 
{{CacheConfiguration#warmUpOnStart(boolean)}}) that preloads all pages from 
storage for a particular cache (cache group) after the cache starts.

Does anyone have an idea how to expose this in the public API?
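
For illustration, a configuration-level sketch of that idea ({{warmUpOnStart}} 
is a hypothetical property that does not exist in CacheConfiguration today; the 
cache name is made up):

{code:java}
import org.apache.ignite.configuration.CacheConfiguration;

// Hypothetical API: ask Ignite to preload all data pages of this cache
// (cache group) from the persistent store right after the cache starts.
CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("hotCache");
ccfg.setWarmUpOnStart(true); // hypothetical setter, not part of the real API
{code}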

> Optimize cache scans with enabled persistence.
> --
>
> Key: IGNITE-8873
> URL: https://issues.apache.org/jira/browse/IGNITE-8873
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexei Scherbakov
>Priority: Major
> Fix For: 2.7
>
>
> Currently cache scans with enabled persistence involve link resolution, which 
> can lead to random disk access resulting in bad performance on SAS disks.
> One possibility is to preload cache data pages to remove slow random disk 
> access.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8970) Web Console: Add the Clone action to a new configuration UI

2018-07-10 Thread Pavel Konstantinov (JIRA)
Pavel Konstantinov created IGNITE-8970:
--

 Summary: Web Console: Add the Clone action to a new configuration 
UI
 Key: IGNITE-8970
 URL: https://issues.apache.org/jira/browse/IGNITE-8970
 Project: Ignite
  Issue Type: Task
Reporter: Pavel Konstantinov
Assignee: Alexey Kuznetsov


The new configuration UI has no Clone action for the cluster's components 
(caches, schemas, ...). We need to add that action because it was present in 
the 'old' UI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8705) CacheMBStatisticsBeanTest.testClear failed

2018-07-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538369#comment-16538369
 ] 

ASF GitHub Bot commented on IGNITE-8705:


GitHub user dgarus opened a pull request:

https://github.com/apache/ignite/pull/4340

IGNITE-8705. Added the way to clean metrics on cluster.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dgarus/ignite ignite-8705

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4340.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4340


commit 1677e3cc906c2c3502da7826fa04ddf955593532
Author: Garus Denis 
Date:   2018-07-10T10:22:20Z

IGNITE-8705. Added the way to clean metrics on cluster.




> CacheMBStatisticsBeanTest.testClear failed
> --
>
> Key: IGNITE-8705
> URL: https://issues.apache.org/jira/browse/IGNITE-8705
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Alexander Menshikov
>Assignee: Denis Garus
>Priority: Major
>  Labels: iep-21
> Fix For: 2.7
>
>
> JCache TCK 1.1 now has a test for CacheClusterMetricsMXBeanImpl#clear(), but 
> we currently throw UnsupportedOperationException.
> *UPD1:*
> There are two options for fixing the problem:
>  # Maybe we can use a local MXBean for this test.
>  # Add a way to clear metrics across the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8969) Unable to await partitions release latch

2018-07-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538350#comment-16538350
 ] 

ASF GitHub Bot commented on IGNITE-8969:


GitHub user akalash opened a pull request:

https://github.com/apache/ignite/pull/4339

IGNITE-8969 Restore server latches if needed after node left the topology.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8969

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4339.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4339


commit e0f03efc8b10e6c630433f3e6ba5bb7218714b57
Author: Anton Kalashnikov 
Date:   2018-07-10T10:10:14Z

IGNITE-8969 Restore server latches if needed after node left the topology.




> Unable to await partitions release latch
> 
>
> Key: IGNITE-8969
> URL: https://issues.apache.org/jira/browse/IGNITE-8969
> Project: Ignite
>  Issue Type: Test
>Reporter: Anton Kalashnikov
>Assignee: Anton Kalashnikov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> Unable to await the partitions release latch within the timeout for a 
> ClientLatch after this node becomes the latch coordinator because the old 
> latch coordinator failed.
> Reproduced by 
> TcpDiscoverySslSelfTest.testNodeShutdownOnRingMessageWorkerStartNotFinished, 
> TcpDiscoverySslTrustedSelfTest.testNodeShutdownOnRingMessageWorkerStartNotFinished



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8969) Unable to await partitions release latch

2018-07-10 Thread Anton Kalashnikov (JIRA)
Anton Kalashnikov created IGNITE-8969:
-

 Summary: Unable to await partitions release latch
 Key: IGNITE-8969
 URL: https://issues.apache.org/jira/browse/IGNITE-8969
 Project: Ignite
  Issue Type: Test
Reporter: Anton Kalashnikov
Assignee: Anton Kalashnikov


Unable to await the partitions release latch within the timeout for a 
ClientLatch after this node becomes the latch coordinator because the old latch 
coordinator failed.

Reproduced by 
TcpDiscoverySslSelfTest.testNodeShutdownOnRingMessageWorkerStartNotFinished, 
TcpDiscoverySslTrustedSelfTest.testNodeShutdownOnRingMessageWorkerStartNotFinished



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-640) Implement IgniteMultimap data structures

2018-07-10 Thread Anton Vinogradov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538346#comment-16538346
 ] 

Anton Vinogradov commented on IGNITE-640:
-

1) We have to implement one improvement at a time.
In case you want to make size faster, let's do this for sets too, but later and 
inside another issue.

Pending issues:
IGNITE-7823 - almost ready to be merged (hopefully we'll merge it tomorrow)

IGNITE-5553 - should be ready in two weeks. 
Anyway, we can fix it after your commit (multimap and set will be fixed 
together)

2) A generic type for the key in MapItemKey sounds OK, as long as it does not 
cause a huge code refactoring.

3) I sent you an email with contacts and possible periods.

> Implement IgniteMultimap data structures
> 
>
> Key: IGNITE-640
> URL: https://issues.apache.org/jira/browse/IGNITE-640
> Project: Ignite
>  Issue Type: Sub-task
>  Components: data structures
>Reporter: Dmitriy Setrakyan
>Assignee: Amir Akhmedov
>Priority: Major
> Fix For: 2.7
>
>
> We need to add an {{IgniteMultimap}} data structure in addition to the other 
> data structures provided by Ignite. {{IgniteMultimap}} should have an API 
> similar to the {{java.util.Map}} class in the JDK, but support the semantics 
> of multiple values per key, similar to [Guava 
> Multimap|http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/collect/Multimap.html].
>  
> However, unlike Guava, our multimap should work with Lists, not 
> Collections. Lists make it possible to support the following methods:
> {code}
> // Gets the value at a certain index for a key.
> V get(K key, int index);
> // Gets all values for a collection of keys at a certain index.
> Map<K, V> getAll(Collection<K> keys, int index);
> // Gets values at the specified indexes for a key.
> List<V> get(K key, Iterable<Integer> indexes);
> // Gets all values for a collection of keys at the specified indexes.
> Map<K, List<V>> getAll(Collection<K> keys, Iterable<Integer> indexes);
> // Gets values for the specified range of indexes, between min and max.
> List<V> get(K key, int min, int max);
> // Gets all values for a collection of keys for the specified index range,
> // between min and max.
> Map<K, List<V>> getAll(Collection<K> keys, int min, int max);
> // Gets all values for a specific key.
> List<V> get(K key);
> // Gets all values for a collection of keys.
> Map<K, List<V>> getAll(Collection<K> keys);
> // Iterates through all elements at a certain index.
> Iterator<Map.Entry<K, V>> iterate(int idx);
> // Do we need this?
> Collection<V> get(K key, IgniteBiPredicate<K, V> filter);
> {code}
> Multimap should also support colocated and non-colocated modes, similar to 
> [IgniteQueue|https://github.com/apache/incubator-ignite/blob/master/modules/core/src/main/java/org/apache/ignite/IgniteQueue.java]
>  and its implementation, 
> [GridAtomicCacheQueueImpl|https://github.com/apache/incubator-ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/datastructures/GridAtomicCacheQueueImpl.java].
> h2. Design Details
> The most natural way to implement such a map would be to store every value 
> under a separate key in an Ignite cache. For example, let's say that we have 
> a key {{K}} with multiple values: {{V0, V1, V2, ...}}. Then the cache should 
> end up with the following entries: {{K0, V0}}, {{K1, V1}}, {{K2, V2}}, etc. 
> This means that we need to wrap the user key into our own internal key, which 
> will also have an {{index}} field. 
> Also note that we need to colocate all the values for the same key on the 
> same node, which means that we need to define the user key K as the affinity 
> key, like so:
> {code}
> class MultiKey<K> {
>     @CacheAffinityMapped
>     private K key;
>     private int index;
> }
> {code}
> Lookups of values at specific indexes become very simple: just attach a 
> specific index to a key and do a cache lookup. Lookups of all values for a 
> key should work as follows:
> {code}
> V v;
> int index = 0;
> List<V> res = new LinkedList<>();
> do {
>     v = cache.get(new MultiKey<>(k, index));
>     if (v != null)
>         res.add(v);
>     index++;
> }
> while (v != null);
> return res;
> {code}
> We could also use batching for performance reasons. In this case the batch 
> size should be configurable.
> {code}
> int index = 0;
> List<V> res = new LinkedList<>();
> while (true) {
>     // getAll() takes a Set, so collect the batch keys into a LinkedHashSet
>     // to preserve the index order.
>     Set<MultiKey<K>> batch = new LinkedHashSet<>(batchSize);
>     // Populate batch.
>     for (int end = index + batchSize; index < end; index++)
>         batch.add(new MultiKey<>(k, index));
>     Map<MultiKey<K>, V> batchRes = cache.getAll(batch);
>     // Potentially need to properly sort values, based on the key order,
>     // if the returning map does not do it automatically.
>     res.addAll(batchRes.values());
>     if (batchRes.size() < batch.size())
>         break;
> }
> return res;
> {code}
> h2. Evictions
> Evictions in the {{IgniteMultiMap}} should have 2 levels: maximum number of 
> keys, and maximum number 

[jira] [Commented] (IGNITE-8968) Failed to shutdown node due to "Error saving backup value"

2018-07-10 Thread Pavel Vinokurov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538218#comment-16538218
 ] 

Pavel Vinokurov commented on IGNITE-8968:
-

Probably the same issue could occur in GridDhtTransactionalCacheAdapter#removeLocks.

> Failed to shutdown node due to "Error saving backup value"
> --
>
> Key: IGNITE-8968
> URL: https://issues.apache.org/jira/browse/IGNITE-8968
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, persistence
>Affects Versions: 2.4
>Reporter: Pavel Vinokurov
>Assignee: Pavel Vinokurov
>Priority: Major
>
> On node shutdown Ignite prints the following logs infinitely:
> org.apache.ignite.internal.NodeStoppingException: Operation has been 
> cancelled (node is stopping).
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1263)
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:370)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:3626)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.initialValue(GridCacheMapEntry.java:2783)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils$22.process(GridCacheUtils.java:1734)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils$22.apply(GridCacheUtils.java:1782)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils$22.apply(GridCacheUtils.java:1724)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8869) PartitionsExchangeOnDiscoveryHistoryOverflowTest hangs on TeamCity

2018-07-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538211#comment-16538211
 ] 

ASF GitHub Bot commented on IGNITE-8869:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4337


> PartitionsExchangeOnDiscoveryHistoryOverflowTest hangs on TeamCity
> --
>
> Key: IGNITE-8869
> URL: https://issues.apache.org/jira/browse/IGNITE-8869
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Ivan Daschinskiy
>Assignee: Ivan Daschinskiy
>Priority: Critical
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.7
>
>
> After the introduction of ExchangeLatches, 
> PartitionsExchangeOnDiscoveryHistoryOverflowTest hangs permanently. In the 
> current implementation, ExchangeLatchManager retrieves alive nodes from 
> discoveryCache for a specific affinity topology version and fails because the 
> discovery history is too short. This causes the exchange worker to fail, and 
> NoOpFailureHandler therefore leaves the node in a hanging state.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8869) PartitionsExchangeOnDiscoveryHistoryOverflowTest hangs on TeamCity

2018-07-10 Thread Dmitriy Pavlov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538209#comment-16538209
 ] 

Dmitriy Pavlov commented on IGNITE-8869:


I've applied the patch with the revert of the changes, reopened the issue and retriggered TC.

> PartitionsExchangeOnDiscoveryHistoryOverflowTest hangs on TeamCity
> --
>
> Key: IGNITE-8869
> URL: https://issues.apache.org/jira/browse/IGNITE-8869
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Ivan Daschinskiy
>Assignee: Ivan Daschinskiy
>Priority: Critical
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.7
>
>
> After the introduction of ExchangeLatches, 
> PartitionsExchangeOnDiscoveryHistoryOverflowTest hangs permanently. In the 
> current implementation, ExchangeLatchManager retrieves alive nodes from 
> discoveryCache for a specific affinity topology version and fails because the 
> discovery history is too short. This causes the exchange worker to fail, and 
> NoOpFailureHandler therefore leaves the node in a hanging state.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (IGNITE-8869) PartitionsExchangeOnDiscoveryHistoryOverflowTest hangs on TeamCity

2018-07-10 Thread Dmitriy Pavlov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov reopened IGNITE-8869:


> PartitionsExchangeOnDiscoveryHistoryOverflowTest hangs on TeamCity
> --
>
> Key: IGNITE-8869
> URL: https://issues.apache.org/jira/browse/IGNITE-8869
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Ivan Daschinskiy
>Assignee: Ivan Daschinskiy
>Priority: Critical
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.7
>
>
> After the introduction of ExchangeLatches, 
> PartitionsExchangeOnDiscoveryHistoryOverflowTest hangs permanently. In the 
> current implementation, ExchangeLatchManager retrieves alive nodes from 
> discoveryCache for a specific affinity topology version and fails because the 
> discovery history is too short. This causes the exchange worker to fail, and 
> NoOpFailureHandler therefore leaves the node in a hanging state.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8957) testFailGetLock() constantly fails. Last entry checkpoint history can be empty

2018-07-10 Thread Andrew Medvedev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538196#comment-16538196
 ] 

Andrew Medvedev commented on IGNITE-8957:
-

[~Mmuzaf] Early exit vs. if/else is a matter of taste. No problem changing that for me.

> testFailGetLock() constantly fails. Last entry checkpoint history can be empty
> --
>
> Key: IGNITE-8957
> URL: https://issues.apache.org/jira/browse/IGNITE-8957
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.7
>Reporter: Maxim Muzafarov
>Assignee: Andrew Medvedev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> IgniteChangeGlobalStateTest#testFailGetLock constantly fails with exception:
> {code}
> java.lang.AssertionError
>   at 
> org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointHistory.onCheckpointFinished(CheckpointHistory.java:205)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.markCheckpointEnd(GridCacheDatabaseSharedManager.java:3654)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.doCheckpoint(GridCacheDatabaseSharedManager.java:3178)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.body(GridCacheDatabaseSharedManager.java:2953)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> As Sergey Chugunov 
> [mentioned|https://issues.apache.org/jira/browse/IGNITE-8737?focusedCommentId=16535062=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16535062],
>  issue can be solved different ways:
> {quote}
> It seems we missed a case when lastEntry may be empty. We may choose here 
> from two options:
> * Check if histMap is empty inside onCheckpointFinished. If it is just don't 
> log anything (it was the very first checkpoint).
> * Check in caller that there is no history, calculate necessary index in 
> caller and pass it to onCheckpointFinished to prepare correct log 
> message.{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8364) Propagate deployed services to joining nodes

2018-07-10 Thread Vyacheslav Daradur (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Daradur reassigned IGNITE-8364:
--

Assignee: Vyacheslav Daradur

> Propagate deployed services to joining nodes
> 
>
> Key: IGNITE-8364
> URL: https://issues.apache.org/jira/browse/IGNITE-8364
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Denis Mekhanikov
>Assignee: Vyacheslav Daradur
>Priority: Major
>  Labels: iep-17
> Fix For: 2.7
>
>
> Joining nodes should receive information about service configurations and 
> assignments in discovery data, and automatically deploy the services assigned 
> to them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8364) Propagate deployed services to joining nodes

2018-07-10 Thread Vyacheslav Daradur (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Daradur updated IGNITE-8364:
---
Fix Version/s: 2.7

> Propagate deployed services to joining nodes
> 
>
> Key: IGNITE-8364
> URL: https://issues.apache.org/jira/browse/IGNITE-8364
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Denis Mekhanikov
>Assignee: Vyacheslav Daradur
>Priority: Major
>  Labels: iep-17
> Fix For: 2.7
>
>
> Joining nodes should receive information about service configurations and 
> assignments in discovery data, and automatically deploy the services assigned 
> to them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8962) Web console: Failed to load blob on configuration pages

2018-07-10 Thread Vasiliy Sisko (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasiliy Sisko reassigned IGNITE-8962:
-

Assignee: Alexander Kalinin

> Web console: Failed to load blob on configuration pages
> ---
>
> Key: IGNITE-8962
> URL: https://issues.apache.org/jira/browse/IGNITE-8962
> Project: Ignite
>  Issue Type: Bug
>  Components: wizards
>Affects Versions: 2.7
>Reporter: Vasiliy Sisko
>Assignee: Alexander Kalinin
>Priority: Major
>
> On opening the *Advanced* tab of the cluster configuration, several messages 
> are printed in the log:
> {code:java}
> Refused to create a worker from 
> 'blob:http://localhost:9000/171b5d08-5d9a-4966-98ba-a0649cc433ab' because it 
> violates the following Content Security Policy directive: "script-src 'self' 
> 'unsafe-inline' 'unsafe-eval' data: http: https:". Note that 'worker-src' was 
> not explicitly set, so 'script-src' is used as a fallback.
> WorkerClient  @   index.js:16810
>   createWorker@   xml.js:647
>   $startWorker@   index.js:9120{code}
> and
> {code:java}
> Could not load worker DOMException: Failed to construct 'Worker': Access to 
> the script at 
> 'blob:http://localhost:9000/639cb195-acb1-4080-8983-91ca55f5b588' is denied 
> by the document's Content Security Policy.
>     at new WorkerClient 
> (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:114712:28)
>     at Mode.createWorker 
> (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:119990:22)
>     at EditSession.$startWorker 
> (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:107022:39)
>     at EditSession.$onChangeMode 
> (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:106978:18)
>     at EditSession.setMode 
> (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:106943:18)
>     at setOptions (http://localhost:9000/app.9504b7da2e0719a61777.js:28327:59)
>     at updateOptions 
> (http://localhost:9000/app.9504b7da2e0719a61777.js:28498:17)
>     at Object.link 
> (http://localhost:9000/app.9504b7da2e0719a61777.js:28505:13)
>     at http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:61725:18
>     at invokeLinkFn 
> (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:70885:9)
> warn  @   index.js:3532
>   $startWorker@   index.js:9122
>   $onChangeMode   @   index.js:9076{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8837) windows ignite.bat ignores command-line parameters with the count of arguments-J greater than 4

2018-07-10 Thread ARomantsov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ARomantsov updated IGNITE-8837:
---
Priority: Major  (was: Critical)

> windows ignite.bat ignores command-line parameters with the count of 
> arguments-J greater than 4
> ---
>
> Key: IGNITE-8837
> URL: https://issues.apache.org/jira/browse/IGNITE-8837
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.5
> Environment: Windows 10
> java version "1.8.0_171"
> Java(TM) SE Runtime Environment (build 1.8.0_171-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.171-b11, mixed mode)
>Reporter: ARomantsov
>Priority: Major
> Fix For: 2.7
>
> Attachments: run_with_4arg-J.txt, run_with_5arg-J.txt
>
>
> Try to run 
> C:\Users\artur\Downloads\apache-ignite-fabric-2.5.0-bin\apache-ignite-fabric-2.5.0-bin>bin\ignite.bat
>  
> C:\Users\artur\Downloads\apache-ignite-fabric-2.5.0-bin\apache-ignite-fabric-2.5.0-bin\examples\config\example-data-regions.xml
>  -v -J-Da1=1 -J-Da2=2 -J-Da3=3 -J-DA4=4 > run_with_4arg-J.txt 2>&1
> *Runs OK and takes the normal config*
> C:\Users\artur\Downloads\apache-ignite-fabric-2.5.0-bin\apache-ignite-fabric-2.5.0-bin>bin\ignite.bat
>  
> C:\Users\artur\Downloads\apache-ignite-fabric-2.5.0-bin\apache-ignite-fabric-2.5.0-bin\examples\config\example-data-regions.xml
>  -v -J-Da1=1 -J-Da2=2 -J-Da3=3 -J-DA4=4 -J-DA5=5 > run_with_5arg-J.txt
> *Does not run correctly: all -J options are ignored and the default config is used.*
>  
>  
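
A possible workaround, sketched here under an assumption: it presumes ignite.bat 
honors the JVM_OPTS environment variable for extra JVM arguments, which should be 
verified against the script itself.

{noformat}
set "JVM_OPTS=-Da1=1 -Da2=2 -Da3=3 -DA4=4 -DA5=5"
bin\ignite.bat examples\config\example-data-regions.xml -v
{noformat}

Passing the system properties through the environment bypasses the -J 
argument-counting logic in the batch script entirely.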



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8962) Web console: Failed to load blob on configuration pages

2018-07-10 Thread Vasiliy Sisko (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538165#comment-16538165
 ] 

Vasiliy Sisko commented on IGNITE-8962:
---

After the headers were updated, the exception is no longer reproduced.
It is also not reproduced in the Docker image.
[~alexdel] Please test the fix in the branch.
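
A minimal sketch of the kind of header change implied above (the exact directive 
values are an assumption, not the verified fix): declaring {{worker-src}} explicitly 
stops the browser from falling back to {{script-src}} when constructing {{blob:}} 
workers.

{noformat}
Content-Security-Policy: script-src 'self' 'unsafe-inline' 'unsafe-eval' data: http: https:; worker-src 'self' blob:
{noformat}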

> Web console: Failed to load blob on configuration pages
> ---
>
> Key: IGNITE-8962
> URL: https://issues.apache.org/jira/browse/IGNITE-8962
> Project: Ignite
>  Issue Type: Bug
>  Components: wizards
>Affects Versions: 2.7
>Reporter: Vasiliy Sisko
>Priority: Major
>
> On opening the *Advanced* tab of the cluster configuration, several messages 
> are printed in the log:
> {code:java}
> Refused to create a worker from 
> 'blob:http://localhost:9000/171b5d08-5d9a-4966-98ba-a0649cc433ab' because it 
> violates the following Content Security Policy directive: "script-src 'self' 
> 'unsafe-inline' 'unsafe-eval' data: http: https:". Note that 'worker-src' was 
> not explicitly set, so 'script-src' is used as a fallback.
> WorkerClient  @   index.js:16810
>   createWorker@   xml.js:647
>   $startWorker@   index.js:9120{code}
> and
> {code:java}
> Could not load worker DOMException: Failed to construct 'Worker': Access to 
> the script at 
> 'blob:http://localhost:9000/639cb195-acb1-4080-8983-91ca55f5b588' is denied 
> by the document's Content Security Policy.
>     at new WorkerClient 
> (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:114712:28)
>     at Mode.createWorker 
> (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:119990:22)
>     at EditSession.$startWorker 
> (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:107022:39)
>     at EditSession.$onChangeMode 
> (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:106978:18)
>     at EditSession.setMode 
> (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:106943:18)
>     at setOptions (http://localhost:9000/app.9504b7da2e0719a61777.js:28327:59)
>     at updateOptions 
> (http://localhost:9000/app.9504b7da2e0719a61777.js:28498:17)
>     at Object.link 
> (http://localhost:9000/app.9504b7da2e0719a61777.js:28505:13)
>     at http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:61725:18
>     at invokeLinkFn 
> (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:70885:9)
> warn  @   index.js:3532
>   $startWorker@   index.js:9122
>   $onChangeMode   @   index.js:9076{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8861) Wrong method call in IgniteService documentation snippet

2018-07-10 Thread Roman Shtykh (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shtykh reassigned IGNITE-8861:


Assignee: Roman Shtykh

> Wrong method call in IgniteService documentation snippet
> 
>
> Key: IGNITE-8861
> URL: https://issues.apache.org/jira/browse/IGNITE-8861
> Project: Ignite
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Oleg Ostanin
>Assignee: Roman Shtykh
>Priority: Minor
>
> [https://apacheignite.readme.io/docs/service-example]
> {{ClusterGroup cacheGrp = ignite.cluster().forCache("myCounterService");}}
> {{This line does not compile with version 2.5:}}
> {{[ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.6.1:compile 
> (default-compile) on project poc-tester: Compilation failure}}
> {{[ERROR] 
> /home/oostanin/gg-qa/poc-tester/src/main/java/org/apache/ignite/scenario/ServiceTask.java:[53,51]
>  cannot find symbol}}
> {{[ERROR]   symbol:   method forCache(java.lang.String)}}
> {{[ERROR]   location: interface org.apache.ignite.IgniteCluster}}
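
A hedged sketch of the likely correction (assuming the documentation meant 
{{forCacheNodes(String)}}, which is defined on 
{{org.apache.ignite.cluster.ClusterGroup}}, unlike {{forCache}}):

{code:java}
// Group over all nodes on which the "myCounterService" cache is deployed,
// using forCacheNodes(...) rather than the non-existent forCache(...).
ClusterGroup cacheGrp = ignite.cluster().forCacheNodes("myCounterService");
{code}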



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8918) Redis examples don't work

2018-07-10 Thread Roman Shtykh (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538102#comment-16538102
 ] 

Roman Shtykh commented on IGNITE-8918:
--

[~skozlov] The comments in the example scripts say:

{{To execute this script, run an Ignite instance with 
'redis-ignite-internal-cache-0' cache specified and configured.}}

Have you run the scripts with the {{'redis-ignite-internal-cache-0'}} cache configured?
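
For reference, a minimal sketch of a node started with that cache configured (the 
cache name comes from the script comment above; the rest is illustrative, not the 
examples' actual setup):

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class RedisCacheNode {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // The Redis protocol handler looks up this cache; without it the REST
        // layer fails with "Failed to find cache for given cache name".
        cfg.setCacheConfiguration(new CacheConfiguration<>("redis-ignite-internal-cache-0"));

        Ignite ignite = Ignition.start(cfg);
    }
}
{code}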

> Redis examples don't work
> -
>
> Key: IGNITE-8918
> URL: https://issues.apache.org/jira/browse/IGNITE-8918
> Project: Ignite
>  Issue Type: Bug
>  Components: examples
>Affects Versions: 1.9
>Reporter: Sergey Kozlov
>Priority: Major
>
> There is no default cache anymore, but the Redis examples (both PHP and Python) 
> are still based on that approach:
> {noformat}
> [08:23:32,245][SEVERE][rest-#70%null%][GridCacheCommandHandler] Failed to 
> execute cache command: GridRestCacheRequest [cacheName=null, cacheFlags=0, 
> ttl=null, super=GridRestRequest [destId=null, 
> clientId=d6594e45-41eb-46db-a77e-8cbcd2310a55, addr=null, cmd=CACHE_PUT]]
> class org.apache.ignite.IgniteCheckedException: Failed to find cache for 
> given cache name (null for default cache): null
>   at 
> org.apache.ignite.internal.processors.rest.handlers.cache.GridCacheCommandHandler.localCache(GridCacheCommandHandler.java:754)
>   at 
> org.apache.ignite.internal.processors.rest.handlers.cache.GridCacheCommandHandler.executeCommand(GridCacheCommandHandler.java:677)
>   at 
> org.apache.ignite.internal.processors.rest.handlers.cache.GridCacheCommandHandler.handleAsync(GridCacheCommandHandler.java:468)
>   at 
> org.apache.ignite.internal.processors.rest.GridRestProcessor.handleRequest(GridRestProcessor.java:264)
>   at 
> org.apache.ignite.internal.processors.rest.GridRestProcessor.access$100(GridRestProcessor.java:87)
>   at 
> org.apache.ignite.internal.processors.rest.GridRestProcessor$2.body(GridRestProcessor.java:153)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> [08:23:32,251][SEVERE][rest-#70%null%][GridRestProcessor] Failed to handle 
> request: CACHE_PUT
> class org.apache.ignite.IgniteCheckedException: Failed to find cache for 
> given cache name (null for default cache): null
>   at 
> org.apache.ignite.internal.processors.rest.handlers.cache.GridCacheCommandHandler.localCache(GridCacheCommandHandler.java:754)
>   at 
> org.apache.ignite.internal.processors.rest.handlers.cache.GridCacheCommandHandler.executeCommand(GridCacheCommandHandler.java:677)
>   at 
> org.apache.ignite.internal.processors.rest.handlers.cache.GridCacheCommandHandler.handleAsync(GridCacheCommandHandler.java:468)
>   at 
> org.apache.ignite.internal.processors.rest.GridRestProcessor.handleRequest(GridRestProcessor.java:264)
>   at 
> org.apache.ignite.internal.processors.rest.GridRestProcessor.access$100(GridRestProcessor.java:87)
>   at 
> org.apache.ignite.internal.processors.rest.GridRestProcessor$2.body(GridRestProcessor.java:153)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Fix the examples accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8968) Failed to shutdown node due to "Error saving backup value"

2018-07-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538097#comment-16538097
 ] 

ASF GitHub Bot commented on IGNITE-8968:


GitHub user pvinokurov opened a pull request:

https://github.com/apache/ignite/pull/4338

IGNITE-8968 Failed to shutdown node due to "Error saving backup value"



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8968

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4338.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4338


commit 1a5eb67a1814f3ca3e69860aaada329771435d11
Author: pvinokurov 
Date:   2018-07-10T06:11:23Z

IGNITE-8968 Failed to shutdown node due to "Error saving backup value"
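
As context for the patch, an illustrative sketch of the shutdown-safe retry shape 
the description calls for (all names below are hypothetical; this is not the actual 
change): {{NodeStoppingException}} should terminate the retry loop rather than be 
retried forever.

{code:java}
import org.apache.ignite.IgniteCheckedException;
import org.apache.ignite.internal.NodeStoppingException;

/** Hypothetical sketch, not the actual patch. */
public class BackupUpdateRetrySketch {
    /** Retries a backup update, treating a stopping node as a terminal state. */
    void updateWithRetry() {
        while (true) {
            try {
                storeBackupValue(); // may throw while the node is stopping
                return;             // success
            }
            catch (NodeStoppingException e) {
                onError(e);         // complete pending futures with the error
                return;             // terminal: never retry during shutdown
            }
            catch (IgniteCheckedException e) {
                // Transient failure: fall through and retry.
            }
        }
    }

    void storeBackupValue() throws IgniteCheckedException { /* hypothetical */ }

    void onError(Throwable t) { /* hypothetical */ }
}
{code}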




> Failed to shutdown node due to "Error saving backup value"
> --
>
> Key: IGNITE-8968
> URL: https://issues.apache.org/jira/browse/IGNITE-8968
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, persistence
>Affects Versions: 2.4
>Reporter: Pavel Vinokurov
>Assignee: Pavel Vinokurov
>Priority: Major
>
> On node shutdown, Ignite prints the following log messages infinitely:
> org.apache.ignite.internal.NodeStoppingException: Operation has been 
> cancelled (node is stopping).
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1263)
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:370)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:3626)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.initialValue(GridCacheMapEntry.java:2783)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils$22.process(GridCacheUtils.java:1734)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils$22.apply(GridCacheUtils.java:1782)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils$22.apply(GridCacheUtils.java:1724)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8968) Failed to shutdown node due to "Error saving backup value"

2018-07-10 Thread Pavel Vinokurov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Vinokurov updated IGNITE-8968:

Affects Version/s: 2.4
  Description: 
On node shutdown, Ignite prints the following log messages infinitely:

org.apache.ignite.internal.NodeStoppingException: Operation has been cancelled 
(node is stopping).
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1263)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:370)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:3626)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.initialValue(GridCacheMapEntry.java:2783)
at 
org.apache.ignite.internal.processors.cache.GridCacheUtils$22.process(GridCacheUtils.java:1734)
at 
org.apache.ignite.internal.processors.cache.GridCacheUtils$22.apply(GridCacheUtils.java:1782)
at 
org.apache.ignite.internal.processors.cache.GridCacheUtils$22.apply(GridCacheUtils.java:1724)

  was:



org.apache.ignite.internal.NodeStoppingException: Operation has been cancelled 
(node is stopping).
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1263)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:370)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:3626)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.initialValue(GridCacheMapEntry.java:2783)
at 
org.apache.ignite.internal.processors.cache.GridCacheUtils$22.process(GridCacheUtils.java:1734)
at 
org.apache.ignite.internal.processors.cache.GridCacheUtils$22.apply(GridCacheUtils.java:1782)
at 
org.apache.ignite.internal.processors.cache.GridCacheUtils$22.apply(GridCacheUtils.java:1724)


> Failed to shutdown node due to "Error saving backup value"
> --
>
> Key: IGNITE-8968
> URL: https://issues.apache.org/jira/browse/IGNITE-8968
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, persistence
>Affects Versions: 2.4
>Reporter: Pavel Vinokurov
>Assignee: Pavel Vinokurov
>Priority: Major
>
> On node shutdown, Ignite prints the following log messages infinitely:
> org.apache.ignite.internal.NodeStoppingException: Operation has been 
> cancelled (node is stopping).
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1263)
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:370)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:3626)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.initialValue(GridCacheMapEntry.java:2783)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils$22.process(GridCacheUtils.java:1734)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils$22.apply(GridCacheUtils.java:1782)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils$22.apply(GridCacheUtils.java:1724)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)