[jira] [Created] (IGNITE-8417) Intermittent failure to insert tuples using JDBC Thin driver

2018-04-27 Thread mark pettovello (JIRA)
mark pettovello created IGNITE-8417:
---

 Summary: Intermittent failure to insert tuples using JDBC Thin 
driver
 Key: IGNITE-8417
 URL: https://issues.apache.org/jira/browse/IGNITE-8417
 Project: Ignite
  Issue Type: Bug
  Components: clients
Affects Versions: 2.4
 Environment: Ignite version 2.4

 

Single Node Ignite running locally on Windows

 

Java 1.8.0_152

Ignite-core-2.4.0.jar

Class.forName("org.apache.ignite.IgniteJdbcThinDriver");

 

 

 
Reporter: mark pettovello


Insert Statement Failed.  Failed to execute DML statement.

Error: Failed to map keys for cache (all partition nodes left the grid)

Using basic Java SQL process: Connection, PreparedStatement, bind variables, 
execute()

Some inserts fail (approximately 30%), and the problem is repeatable.
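
For reference, a minimal sketch of the described insert flow (the reporter's actual SQL 
and schema are redacted; the connection URL, table and columns below are hypothetical):

{code:java}
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Hypothetical reproduction of the reported pattern: plain JDBC inserts through the
// thin driver. Table and column names are illustrative only.
public class ThinDriverInsertSketch {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.ignite.IgniteJdbcThinDriver");

        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO trade (id, amount) VALUES (?, ?)")) {
            for (int i = 0; i < 1000; i++) {
                ps.setInt(1, i);
                ps.setBigDecimal(2, BigDecimal.valueOf(i));
                ps.execute(); // Roughly 30% of these calls reportedly fail with the exception below.
            }
        }
    }
}
{code}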

 

[14:43:54,644][SEVERE][client-connector-#66][JdbcRequestHandler] Failed to 
execute SQL query [reqId=0, req=JdbcQueryExecuteRequest [schemaName=PUBLIC, 
pageSize=1024, maxRows=0, sqlQry=Insert Into ...redacted...]]
class org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to 
execute DML statement [stmt=Insert Into ...redacted... ]]
 at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.doRunPrepared(IgniteH2Indexing.java:1633)
 at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1577)
 at 
org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2039)
 at 
org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2034)
 at 
org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
 at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2555)
 at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2048)
 at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:1979)
 at 
org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.executeQuery(JdbcRequestHandler.java:310)
 at 
org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.handle(JdbcRequestHandler.java:169)
 at 
org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:148)
 at 
org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:41)
 at 
org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
 at 
org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
 at 
org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
 at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
 at 
org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
Caused by: class 
org.apache.ignite.internal.cluster.ClusterTopologyServerNotFoundException: 
Failed to map keys for cache (all partition nodes left the grid).
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapSingleUpdate(GridNearAtomicSingleUpdateFuture.java:562)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:454)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:443)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:248)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1117)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put0(GridDhtAtomicCache.java:606)
 at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2369)
 at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.putIfAbsent(GridCacheAdapter.java:2752)
 at 
org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.doInsert(DmlStatementsProcessor.java:795)
 at 
org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.processDmlSelectResult(DmlStatementsProcessor.java:576)
 at 
org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.executeUpdateStatement(DmlStatementsProcessor.java:534)
 at 

[GitHub] ignite pull request #3901: IGNITE-8156 Ignite Compatibility: common improvem...

2018-04-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3901


---


[GitHub] ignite pull request #3934: IGNITE-8416 CommandHandlerParsingTest stably fail...

2018-04-27 Thread glukos
GitHub user glukos opened a pull request:

https://github.com/apache/ignite/pull/3934

IGNITE-8416 CommandHandlerParsingTest stably fails with parsing error



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8416

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3934.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3934


commit bf77823ee3b669d7403ce29d2348e69597c71ab4
Author: Ivan Rakov 
Date:   2018-04-27T18:38:22Z

IGNITE-8416 CommandHandlerParsingTest stably fails with parsing error




---


Re: Apache Ignite 2.5 release

2018-04-27 Thread Andrey Gura
Of course I meant bug fixes, not bugs :)

On Fri, Apr 27, 2018 at 9:28 PM, Andrey Gura  wrote:
> Eduard, Pavel,
>
> mentioned bugs are good candidates for including to release.
>
> Targeted to 2.5.
>
> Thanks!
>
> On Fri, Apr 27, 2018 at 5:06 PM, Pavel Kovalenko  wrote:
>> Igniters,
>>
>> I would like to add this issue
>> https://issues.apache.org/jira/browse/IGNITE-8405 to 2.5 scope.
>>
>>
>> 2018-04-27 16:59 GMT+03:00 Eduard Shangareev :
>>
>>> Guys,
>>> I have finished
>>> https://issues.apache.org/jira/browse/IGNITE-7628.
>>>
>>> It looks like a good candidate to merge to 2.5.
>>>
>>> On Fri, Apr 27, 2018 at 1:32 PM, Ivan Rakov  wrote:
>>>
>>> > Dmitriy,
>>> >
>>> > There are some notes about WAL compaction in "Persistence Under the Hood"
>>> > article: https://cwiki.apache.org/confluence/display/IGNITE/Ignite+
>>> > Persistent+Store+-+under+the+hood#IgnitePersistentStore-und
>>> > erthehood-WALcompaction
>>> > Please take a look - it has all the answers.
>>> >
>>> > Best Regards,
>>> > Ivan Rakov
>>> >
>>> >
>>> > On 27.04.2018 2:28, Dmitriy Setrakyan wrote:
>>> >
>>> >> On Fri, Apr 27, 2018 at 6:38 AM, Ivan Rakov 
>>> >> wrote:
>>> >>
>>> >> Folks,
>>> >>>
>>> >>> I have a fix for a critical issue related to WAL compaction:
>>> >>> https://issues.apache.org/jira/browse/IGNITE-8393
>>> >>> In short, if part of WAL archive is broken, attempt to compress it may
>>> >>> result in spamming warnings in infinite loop.
>>> >>> I think it's also worth being included in 2.5 release.
>>> >>>
>>> >>> Let's include it, especially given that the fix is done.
>>> >>
>>> >> Ivan, on another note, is WAL compaction described anywhere? What do we
>>> do
>>> >> internally to compact WAL and by what factor are we able to reduce the
>>> WAL
>>> >> file size?
>>> >>
>>> >> D.
>>> >>
>>> >>
>>> >
>>>
>>>
>>> --
>>> Best regards,
>>> Eduard.
>>>


[jira] [Created] (IGNITE-8416) CommandHandlerParsingTest stably fails with parsing error

2018-04-27 Thread Ivan Rakov (JIRA)
Ivan Rakov created IGNITE-8416:
--

 Summary: CommandHandlerParsingTest stably fails with parsing error
 Key: IGNITE-8416
 URL: https://issues.apache.org/jira/browse/IGNITE-8416
 Project: Ignite
  Issue Type: Bug
Reporter: Ivan Rakov
Assignee: Ivan Rakov
 Fix For: 2.5


Example stacktrace:
{noformat}
java.lang.IllegalArgumentException: Arguments are expected for --cache 
subcommand, run --cache help for more info.
at 
org.apache.ignite.internal.commandline.CommandHandlerParsingTest.testConnectionSettings(CommandHandlerParsingTest.java:94)
--- Stderr: ---
java.lang.IllegalArgumentException: Invalid value for port: wrong-port
at 
org.apache.ignite.internal.commandline.CommandHandler.parseAndValidate(CommandHandler.java:1112)
at 
org.apache.ignite.internal.commandline.CommandHandlerParsingTest.testConnectionSettings(CommandHandlerParsingTest.java:110)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at junit.framework.TestCase.runTest(TestCase.java:176)
at junit.framework.TestCase.runBare(TestCase.java:141)
at junit.framework.TestResult$1.protect(TestResult.java:122)
at junit.framework.TestResult.runProtected(TestResult.java:142)
at junit.framework.TestResult.run(TestResult.java:125)
{noformat}





Re: Apache Ignite 2.5 release

2018-04-27 Thread Andrey Gura
Eduard, Pavel,

mentioned bugs are good candidates for including to release.

Targeted to 2.5.

Thanks!

On Fri, Apr 27, 2018 at 5:06 PM, Pavel Kovalenko  wrote:
> Igniters,
>
> I would like to add this issue
> https://issues.apache.org/jira/browse/IGNITE-8405 to 2.5 scope.
>
>
> 2018-04-27 16:59 GMT+03:00 Eduard Shangareev :
>
>> Guys,
>> I have finished
>> https://issues.apache.org/jira/browse/IGNITE-7628.
>>
>> It looks like a good candidate to merge to 2.5.
>>
>> On Fri, Apr 27, 2018 at 1:32 PM, Ivan Rakov  wrote:
>>
>> > Dmitriy,
>> >
>> > There are some notes about WAL compaction in "Persistence Under the Hood"
>> > article: https://cwiki.apache.org/confluence/display/IGNITE/Ignite+
>> > Persistent+Store+-+under+the+hood#IgnitePersistentStore-und
>> > erthehood-WALcompaction
>> > Please take a look - it has all the answers.
>> >
>> > Best Regards,
>> > Ivan Rakov
>> >
>> >
>> > On 27.04.2018 2:28, Dmitriy Setrakyan wrote:
>> >
>> >> On Fri, Apr 27, 2018 at 6:38 AM, Ivan Rakov 
>> >> wrote:
>> >>
>> >> Folks,
>> >>>
>> >>> I have a fix for a critical issue related to WAL compaction:
>> >>> https://issues.apache.org/jira/browse/IGNITE-8393
>> >>> In short, if part of WAL archive is broken, attempt to compress it may
>> >>> result in spamming warnings in infinite loop.
>> >>> I think it's also worth being included in 2.5 release.
>> >>>
>> >>> Let's include it, especially given that the fix is done.
>> >>
>> >> Ivan, on another note, is WAL compaction described anywhere? What do we
>> do
>> >> internally to compact WAL and by what factor are we able to reduce the
>> WAL
>> >> file size?
>> >>
>> >> D.
>> >>
>> >>
>> >
>>
>>
>> --
>> Best regards,
>> Eduard.
>>


Re: Transparent Data Encryption (TDE) in Apache Ignite

2018-04-27 Thread Nikolay Izhikov
Hello, Igniters.

We've discussed the TDE design privately with some respected community members, 
including Vladimir Ozerov and Alexey Goncharuk.
Here is the list of questions we have to address before starting the TDE 
implementation:

1. MEK, CEK rotation. 
Should we provide a way to change (regenerate) the MEK and CEKs from time to time? 
Is it required by the PCI DSS standard? (See the envelope-encryption sketch after this list.)

2. Do CEKs (table keys) relate to user access permissions? 
We need to study other vendors' implementations.

3. WAL encryption. How will it be implemented? What issues do we have to solve? 

4. We should keep CEKs in the MetaStore. 
Not a question, just writing down the decision.

5. How should we handle the following case: 
a. Node X leaves the cluster. 
b. Node X joins the cluster.
c. Between steps a and b the encryption keys have been changed. 

6. A public API to deal with CEKs should be provided. 
It looks like we need to support the following methods:
a. Generate a new CEK when an encrypted cache is created. 
b. Decrypt a CEK whenever needed. 

7. Can each node use its own CEK? What are the pros and cons of that? 

8. How can we ensure that decryption succeeds if a CEK is broken? 
It can be broken because of memory corruption, network errors, etc. 

9. A specific encryption algorithm has to be chosen prior to implementation. 
We also have to support the use of other algorithms. 

10. Page integrity is checked by CRC. How would this process change for 
encrypted pages?

11. The page header has well-known content. This can be used for known-plaintext 
attacks. 
We should assess the threat and find a way to deal with it.
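
To make questions 1, 4 and 6 more concrete, here is a minimal envelope-encryption 
sketch (plain JCE, nothing Ignite-specific, all names illustrative): the MEK only 
wraps per-cache CEKs, so rotating the MEK means re-wrapping the stored CEKs rather 
than re-encrypting cache data.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class KeyEnvelopeSketch {
    public static void main(String[] args) throws Exception {
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(256);

        SecretKey mek = gen.generateKey(); // Master encryption key, kept outside the cluster.
        SecretKey cek = gen.generateKey(); // Cache encryption key, stored wrapped in the MetaStore.

        Cipher wrap = Cipher.getInstance("AESWrap");
        wrap.init(Cipher.WRAP_MODE, mek);
        byte[] wrappedCek = wrap.wrap(cek); // What would be persisted per cache.

        Cipher unwrap = Cipher.getInstance("AESWrap");
        unwrap.init(Cipher.UNWRAP_MODE, mek);
        SecretKey restored = (SecretKey)unwrap.unwrap(wrappedCek, "AES", Cipher.SECRET_KEY);

        System.out.println(restored.equals(cek)); // true
    }
}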



Re: IEP-4, Phase 2. Using BL(A)T for in-memory caches.

2018-04-27 Thread Ivan Rakov

Eduard,

+1 to your proposed API for configuring Affinity Topology change policies.
Obviously we should use "auto" as the default behavior. I believe automatic 
rebalancing is expected and more convenient for the majority of users.


Best Regards,
Ivan Rakov

On 26.04.2018 19:27, Eduard Shangareev wrote:

Igniters,

Ok, I want to share my thoughts about "affinity topology (AT) changing
policies".


There would be three major options:
-auto;
-manual;
-custom.

1. Automatic change.
A user could set timeouts for:
a. change AT on any topology change after some timeout (setATChangeTimeout
in seconds);
b. change AT on node left after some timeout (setATChangeOnNodeLeftTimeout
in seconds);
c. change AT on node join after some timeout (setATChangeOnNodeJoinTimeout
in seconds).

b and c are more specific, so they would override a.

Also, I want to introduce a mechanism of merging AT changes, which would be
turned on by default.
In other words, when a timeout fires we would change AT to the current
topology, not the one that existed when the timeout was scheduled.

2. Manual change.

Current behavior. A user changes AT himself via console tools or Web Console.

3. Custom.

We would give the option to set a custom implementation of the changing policy
(class name in the config). An illustrative interface sketch follows at the end
of this proposal.
We should pass the following as incoming parameters:
- current topology (collection of cluster nodes);
- current AT (affinity topology);
- a map of group id to minimal alive backup count;
- the configuration from 1.a, 1.b and 1.c;
- a scheduler.

In addition to these configurations, I propose an orthogonal option.
4. Emergency affinity topology change.
It would change AT even if the MANUAL option is set, whenever at least one cache
group's backup factor drops to or below the chosen threshold (0 by default).
So, if after a node left only primary partitions (without backups) remained for
some cache group, we would change AT immediately.
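
As promised above, a purely illustrative sketch of what the custom policy interface
(option 3) could look like. Every name here is hypothetical and nothing below exists
in the codebase; it is only meant to make the proposal easier to discuss.

import java.util.Collection;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ScheduledExecutorService;

/** Illustrative sketch only; all names are hypothetical. */
public interface AffinityTopologyChangePolicy {
    /** Placeholder for a cluster node descriptor (in Ignite this would be ClusterNode). */
    interface NodeDescriptor {
        UUID id();
    }

    /** Placeholder for the current affinity topology (assignment). */
    interface AffinityAssignment {
    }

    /**
     * Called on every topology change; the policy decides whether to change the
     * affinity topology immediately, schedule the change, or skip it.
     *
     * @param currentTopology Current topology (collection of cluster nodes).
     * @param currentAffinity Current affinity topology.
     * @param minAliveBackups Cache group id to minimal alive backup count.
     * @param timeoutsSec Timeouts from options 1.a, 1.b and 1.c, in seconds.
     * @param scheduler Scheduler used to postpone the change.
     */
    void onTopologyChanged(
        Collection<NodeDescriptor> currentTopology,
        AffinityAssignment currentAffinity,
        Map<Integer, Integer> minAliveBackups,
        Map<String, Long> timeoutsSec,
        ScheduledExecutorService scheduler
    );
}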


Thank you for your attention.


On Thu, Apr 26, 2018 at 6:57 PM, Eduard Shangareev <
eduard.shangar...@gmail.com> wrote:


Dmitriy,

I also think that we should think about 2.6 as the target.


On Thu, Apr 26, 2018 at 3:27 PM, Alexey Goncharuk <
alexey.goncha...@gmail.com> wrote:


Dmitriy,

I doubt we will be able to fit this in 2.5 given that we did not even
agree
on the policy interface. Forcing in-memory caches to use baseline topology
will be an easy technical fix, however, we will need to update and
probably
fix lots of failover tests, add new ones.

I think it makes sense to target this change to 2.6.

2018-04-25 22:25 GMT+03:00 Ilya Lantukh :


Eduard,

I'm not sure I understand what you mean by "policy". Is it an interface
that will have a few default implementations, with the user able to create his
own? If so, could you please write an example of such an interface (how you see
it) and describe how and when its methods will be invoked.

On Wed, Apr 25, 2018 at 10:10 PM, Eduard Shangareev <
eduard.shangar...@gmail.com> wrote:


Igniters,
I have described the issue with the current approach in the "New definition
for affinity node (issues with baseline)" topic [1].

Now we have 2 different affinity topologies (one for in-memory, another for
persistent caches).

It causes problems:
- we lose (in general) co-location between different caches;
- we can't avoid PME when a non-BLAT node joins the cluster;
- implementation should consider 2 different approaches to affinity
calculation.

So, I suggest unifying the behavior of in-memory and persistent caches.
They should all use BLAT.

Their behaviors were different because we couldn't guarantee the safety of
in-memory data.
It should be fixed by a new mechanism of BLAT changing policy which was
already discussed there - "Triggering rebalancing on timeout or manually if
the baseline topology is not reassembled" [2].

And we should have a policy by default which is similar to the current one
(add nodes, remove nodes automatically but after some reasonable delay
[seconds]).

After this change, we could stop using the term 'BLAT', Baseline and so on.
Because there would not be an alternative. So, there would be only one
possible Affinity Topology.


[1]
http://apache-ignite-developers.2346864.n4.nabble.com/New-definition-for-affinity-node-issues-with-baseline-td29868.html
[2]
http://apache-ignite-developers.2346864.n4.nabble.com/Triggering-rebalancing-on-timeout-or-manually-if-the-baseline-topology-is-not-reassembled-td29299.html#none




--
Best regards,
Ilya







[GitHub] ignite pull request #3933: Ignite 8186 - test base for sql feature coverage

2018-04-27 Thread pavel-kuznetsov
GitHub user pavel-kuznetsov opened a pull request:

https://github.com/apache/ignite/pull/3933

Ignite 8186 - test base for sql feature coverage

Created a test base and basic tests to cover SQL features.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8186

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3933.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3933


commit 8da6b1c01b8d7583d2c3e39786c1f1bd133174a1
Author: Pavel Kuznetsov 
Date:   2018-04-13T17:40:30Z

IGNITE-8186: Draft of test base.

commit 6000d573c8d030fa471cb9ce1b7b2c05bb2209cf
Author: Pavel Kuznetsov 
Date:   2018-04-16T15:34:02Z

IGNITE-7743: Implemented base with ddl setup.

commit 7254770c69c7a7d0404b8dec55e61ae031e0382a
Author: Pavel Kuznetsov 
Date:   2018-04-16T16:36:38Z

IGNITE-8186: Updated doc, fixed grid startup.

commit 1c35a7a65844d049193fab5cb75c0d49c26a9824
Author: Pavel Kuznetsov 
Date:   2018-04-17T10:11:19Z

IGNITE-8186: Updated assert methods.

commit 604fd9c5b6ec3918b4737b61a980e52c3a7c7ca2
Author: Pavel Kuznetsov 
Date:   2018-04-17T10:52:49Z

IGNITE-8186: added age field; cleanup.

commit ed0970997ad2419b12417b97d5ccead65d05b2aa
Author: Pavel Kuznetsov 
Date:   2018-04-17T11:27:06Z

IGNITE-8186: Testing on all nodes.

commit b9b40e4572fe3fdba2496b8ed69685af7e63794a
Author: Pavel Kuznetsov 
Date:   2018-04-17T11:34:00Z

IGNITE-8186: use cache from node.

commit 568f7199c308070defbb1d976c072da54c146b7f
Author: Pavel Kuznetsov 
Date:   2018-04-17T12:40:43Z

Merge remote-tracking branch 'origin/master' into ignite-8186

commit f9449a89e654c985e628f386452724620e90d44e
Author: devozerov 
Date:   2018-04-17T15:09:20Z

Review.

commit ff03b8f73599dfd93d575cc72f5f7a702d02775e
Author: Pavel Kuznetsov 
Date:   2018-04-17T17:27:09Z

IGNITE-8186: test with scan query.

commit 3660ba07c085dab1bd7f0e88378da80f31f4d5bb
Author: Pavel Kuznetsov 
Date:   2018-04-17T17:33:57Z

IGNITE-8186: updated according to comments.

commit 88da5188cee104b0048bfaf1d0ea1a2552c44ba0
Author: Pavel Kuznetsov 
Date:   2018-04-17T18:41:27Z

IGNITE-8186: fixed select(); fixed test.

commit cee4e16e1bef02d04fc982486abd682eb61f156d
Author: Pavel Kuznetsov 
Date:   2018-04-18T14:12:17Z

IGNITE-8186: updated tests.

commit 44ef8e76a54f3e132dacc1e8af7f4d14427dc8b3
Author: Pavel Kuznetsov 
Date:   2018-04-18T17:36:12Z

IGNITE-8186: fixed assertion.

commit 14a59b21967c8a7cd0fe6cbf0977f2d314edb7ed
Author: Pavel Kuznetsov 
Date:   2018-04-18T17:41:47Z

IGNITE-8186: more stable comparison.

commit 067ae256ffc161c76c6d32c6edfb080d104657b9
Author: Pavel Kuznetsov 
Date:   2018-04-19T11:41:57Z

IGNITE-8186: fixed partitioned test setup.

commit 517cccf3cdc0c11ebc4d9a7d65212148137fe63d
Author: Pavel Kuznetsov 
Date:   2018-04-19T13:25:59Z

IGNITE-8186: Property injection; Added disk/inmemory configurations.

commit 3e63dd23250dbc79dffdccb9813dac5cfdb89316
Author: Pavel Kuznetsov 
Date:   2018-04-19T19:02:09Z

IGNITE-8186: Fixed lookup for fields in Cache.Entry.

commit d9e3776e841d900c5eca5bf538c2ab74e343be3b
Author: Pavel Kuznetsov 
Date:   2018-04-19T19:17:57Z

IGNITE-8186: Refactoring.

commit 1e96576a9d68fe9609174fcf8f3203da397afcc7
Author: Pavel Kuznetsov 
Date:   2018-04-19T19:27:14Z

IGNITE-8186: node -> cache for select method.

commit efb1ff3bd4851cee8726907ba724b8085c7f6062
Author: Pavel Kuznetsov 
Date:   2018-04-20T12:22:55Z

IGNITE-8186: Added create index.

commit b954815081d66288f087b98ab011b67976ccf5f8
Author: Pavel Kuznetsov 
Date:   2018-04-20T13:23:15Z

IGNITE-8186: separated create table and fill data.

commit 4369ef2b00aef4eed40736d9cdd346bdc6c45aba
Author: Pavel Kuznetsov 
Date:   2018-04-20T13:25:24Z

IGNITE-8186: fixed create index freeze.

commit 7a12b9ccebef5eb8a7f4c239c9ce9bfed6eb38c0
Author: Pavel Kuznetsov 
Date:   2018-04-20T18:19:45Z

IGNITE-8186: Refactored select method.

commit 8f86e3005a691121e1a07dc142000d4049b6bc78
Author: Pavel Kuznetsov 
Date:   2018-04-20T18:33:29Z

IGNITE-8186: added tests for in operator.

commit 266d9795adc29c9c005234d6c5079c879fbf1af6
Author: Pavel Kuznetsov 
Date:   2018-04-20T19:01:54Z

IGNITE-8186: Tests for ><= operators.

commit e49f387ac655e11b8393af31b510f54919965254
Author: Pavel Kuznetsov 
Date:   2018-04-23T14:33:40Z

IGNITE-8186: Got rid of 

Re: Looks like a bug in ServerImpl.joinTopology()

2018-04-27 Thread Александр Меньшиков
Yakov, thank you for the advice.

Thread.sleep alone is not enough, but a latch plus a future gave me a way to build
the reproducer.

I have created PR [1] against my fork's master, showing the test and a modification
of ServerImpl that lets me slow down execution inside the dangerous section.

The test code is a bit long, but it basically consists of two parts:

In the first part, I randomly start and stop nodes to catch the moment when
a server starts executing the dangerous code I described in the first message.

In the second part, I wait until the first part produces this situation
and then call a public method of ServerImpl, which fails with an
exception:

java.lang.AssertionError: Invalid node order: TcpDiscoveryNode
[id=f6bf048d-378b-4960-94cb-84e3d332, addrs=[127.0.0.1], sockAddrs=[/
127.0.0.1:47502], discPort=47502, order=0, intOrder=2,
lastExchangeTime=1524836605995, loc=false,
ver=2.5.0#20180426-sha1:34e22396, isClient=false]
at
org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNodesRing$1.apply(TcpDiscoveryNodesRing.java:52)
at
org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNodesRing$1.apply(TcpDiscoveryNodesRing.java:49)
at
org.apache.ignite.internal.util.lang.GridFunc.isAll(GridFunc.java:2014)
at
org.apache.ignite.internal.util.IgniteUtils.arrayList(IgniteUtils.java:9679)
at
org.apache.ignite.internal.util.IgniteUtils.arrayList(IgniteUtils.java:9652)
at
org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNodesRing.nodes(TcpDiscoveryNodesRing.java:590)
at
org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNodesRing.visibleRemoteNodes(TcpDiscoveryNodesRing.java:164)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.getRemoteNodes(ServerImpl.java:304)

As I said in the first message, the problem arises because the current code
changes the local node's internal order and breaks the sorting of the
TcpDiscoveryNodesRing.nodes collection.

Is this reproducer convincing enough?

[1] Reproducer: https://github.com/SharplEr/ignite/pull/10/files



2018-02-13 20:17 GMT+03:00 Yakov Zhdanov :

> Alex, you can alter ServerImpl and insert a latch or thread.sleep(xxx)
> anywhere you like to show the incorrect behavior you describe.
>
> --Yakov
>


[jira] [Created] (IGNITE-8415) Manual cache().rebalance() invocation may cancel currently running rebalance

2018-04-27 Thread Pavel Kovalenko (JIRA)
Pavel Kovalenko created IGNITE-8415:
---

 Summary: Manual cache().rebalance() invocation may cancel 
currently running rebalance
 Key: IGNITE-8415
 URL: https://issues.apache.org/jira/browse/IGNITE-8415
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.4
Reporter: Pavel Kovalenko
 Fix For: 2.6


If a historical rebalance is in progress and during it we manually invoke 
{noformat}
Ignite.cache(CACHE_NAME).rebalance().get();
{noformat}
then the currently running rebalance will be cancelled and a new one will be started, 
which does not seem right. Moreover, after the new rebalance finishes we can lose 
some data if entry removes are being rebalanced.





[jira] [Created] (IGNITE-8414) In-memory cache should use BLAT as their Affinity Topology

2018-04-27 Thread Eduard Shangareev (JIRA)
Eduard Shangareev created IGNITE-8414:
-

 Summary: In-memory cache should use BLAT as their Affinity Topology
 Key: IGNITE-8414
 URL: https://issues.apache.org/jira/browse/IGNITE-8414
 Project: Ignite
  Issue Type: Improvement
Reporter: Eduard Shangareev
Assignee: Eduard Shangareev


Currently, in-memory caches use all active server nodes as their affinity topology, 
and it changes with each node join and exit. This differs from the behavior of 
persistent caches, which use BLAT (BaseLine Affinity Topology) as their affinity 
topology.

It causes problems:
- we lose (in general) co-location between different caches;
- we can't avoid PME when a non-BLAT node joins the cluster;
- implementation should consider 2 different approaches to affinity
calculation.

To address these problems we should make in-memory and persistent caches work 
similarly.





Re: Apache Ignite 2.5 release

2018-04-27 Thread Pavel Kovalenko
Igniters,

I would like to add this issue
https://issues.apache.org/jira/browse/IGNITE-8405 to 2.5 scope.


2018-04-27 16:59 GMT+03:00 Eduard Shangareev :

> Guys,
> I have finished
> https://issues.apache.org/jira/browse/IGNITE-7628.
>
> It looks like a good candidate to merge to 2.5.
>
> On Fri, Apr 27, 2018 at 1:32 PM, Ivan Rakov  wrote:
>
> > Dmitriy,
> >
> > There are some notes about WAL compaction in "Persistence Under the Hood"
> > article: https://cwiki.apache.org/confluence/display/IGNITE/Ignite+
> > Persistent+Store+-+under+the+hood#IgnitePersistentStore-und
> > erthehood-WALcompaction
> > Please take a look - it has all the answers.
> >
> > Best Regards,
> > Ivan Rakov
> >
> >
> > On 27.04.2018 2:28, Dmitriy Setrakyan wrote:
> >
> >> On Fri, Apr 27, 2018 at 6:38 AM, Ivan Rakov 
> >> wrote:
> >>
> >> Folks,
> >>>
> >>> I have a fix for a critical issue related to WAL compaction:
> >>> https://issues.apache.org/jira/browse/IGNITE-8393
> >>> In short, if part of WAL archive is broken, attempt to compress it may
> >>> result in spamming warnings in infinite loop.
> >>> I think it's also worth being included in 2.5 release.
> >>>
> >>> Let's include it, especially given that the fix is done.
> >>
> >> Ivan, on another note, is WAL compaction described anywhere? What do we
> do
> >> internally to compact WAL and by what factor are we able to reduce the
> WAL
> >> file size?
> >>
> >> D.
> >>
> >>
> >
>
>
> --
> Best regards,
> Eduard.
>


Re: Apache Ignite 2.5 release

2018-04-27 Thread Eduard Shangareev
Guys,
I have finished
https://issues.apache.org/jira/browse/IGNITE-7628.

It looks like a good candidate to merge to 2.5.

On Fri, Apr 27, 2018 at 1:32 PM, Ivan Rakov  wrote:

> Dmitriy,
>
> There are some notes about WAL compaction in "Persistence Under the Hood"
> article: https://cwiki.apache.org/confluence/display/IGNITE/Ignite+
> Persistent+Store+-+under+the+hood#IgnitePersistentStore-und
> erthehood-WALcompaction
> Please take a look - it has all the answers.
>
> Best Regards,
> Ivan Rakov
>
>
> On 27.04.2018 2:28, Dmitriy Setrakyan wrote:
>
>> On Fri, Apr 27, 2018 at 6:38 AM, Ivan Rakov 
>> wrote:
>>
>> Folks,
>>>
>>> I have a fix for a critical issue related to WAL compaction:
>>> https://issues.apache.org/jira/browse/IGNITE-8393
>>> In short, if part of WAL archive is broken, attempt to compress it may
>>> result in spamming warnings in infinite loop.
>>> I think it's also worth being included in 2.5 release.
>>>
>>> Let's include it, especially given that the fix is done.
>>
>> Ivan, on another note, is WAL compaction described anywhere? What do we do
>> internally to compact WAL and by what factor are we able to reduce the WAL
>> file size?
>>
>> D.
>>
>>
>


-- 
Best regards,
Eduard.


[GitHub] ignite pull request #3932: IGNITE-8413: Fixed CacheGroupMetricsMBeanTest.tes...

2018-04-27 Thread AMashenkov
GitHub user AMashenkov opened a pull request:

https://github.com/apache/ignite/pull/3932

IGNITE-8413: Fixed CacheGroupMetricsMBeanTest.testCacheGroupMetrics.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8413

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3932.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3932


commit 805b262c5e206ebc1879904ae46606e4806d05bb
Author: Andrey V. Mashenkov 
Date:   2018-04-27T13:56:14Z

Fixed CacheGroupMetricsMBeanTest.testCacheGroupMetrics.




---


[GitHub] ignite pull request #3883: IGNITE-7909 Java code examples are needed for Spa...

2018-04-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3883


---


[jira] [Created] (IGNITE-8413) CacheGroupMetricsMBeanTest.testCacheGroupMetrics fails on master.

2018-04-27 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-8413:


 Summary: CacheGroupMetricsMBeanTest.testCacheGroupMetrics fails on 
master.
 Key: IGNITE-8413
 URL: https://issues.apache.org/jira/browse/IGNITE-8413
 Project: Ignite
  Issue Type: Test
Reporter: Andrew Mashenkov


CacheGroupMetricsMBeanTest.testCacheGroupMetrics fails due to a race.
It seems we should pause rebalancing to observe the expected instantaneous metrics.


{code:java}
junit.framework.AssertionFailedError
 at junit.framework.Assert.fail(Assert.java:55)
 at junit.framework.Assert.assertTrue(Assert.java:22)
 at junit.framework.Assert.assertTrue(Assert.java:31)
 at junit.framework.TestCase.assertTrue(TestCase.java:201)
 at 
org.apache.ignite.internal.processors.cache.CacheGroupMetricsMBeanTest.testCacheGroupMetrics(CacheGroupMetricsMBeanTest.java:262)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at junit.framework.TestCase.runTest(TestCase.java:176)
 at 
org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2080)
 at 
org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:140)
 at 
org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:1995)
 at java.lang.Thread.run(Thread.java:748){code}





[GitHub] ignite pull request #3931: IGNITE-8401: errorneous WalPointers comparation.

2018-04-27 Thread zstan
GitHub user zstan opened a pull request:

https://github.com/apache/ignite/pull/3931

IGNITE-8401: errorneous WalPointers comparation.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8401

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3931.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3931


commit 248b8247cde47f8e5a8e9c5152ed7c9817180b3e
Author: Evgeny Stanilovskiy 
Date:   2018-04-27T13:44:10Z

IGNITE-8401: errorneous WalPointers comparation.




---


management events for visor and web console tasks

2018-04-27 Thread Andrei Aleksandrov

Hi Igniters,

I would like to start a discussion about adding new management events for 
tasks produced by Web Console and Visor.


Problem:

There is currently no way to handle tasks invoked from Web Console or 
Visor. Useful information that could be handled:


1) Operations on cached data (read/write/delete)
2) Operations on cluster nodes (start/stop/restart/adding to a group/etc.)

and possibly other tasks that could influence Ignite cluster behavior, 
e.g. a garbage collector call.


I propose to add events that are fired for these tasks and give users the 
ability to process them; a listener sketch is shown below. Please share your 
thoughts about it.
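
For illustration only, a sketch of how such events could be consumed, assuming they 
were exposed through the existing events API. EVT_TASK_FINISHED is an existing task 
event used here purely as a stand-in for the proposed management event types:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.events.EventType;
import org.apache.ignite.events.TaskEvent;
import org.apache.ignite.lang.IgnitePredicate;

public class ManagementEventListenerSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Events are disabled by default; the desired types must be enabled explicitly.
        cfg.setIncludeEventTypes(EventType.EVT_TASK_FINISHED);

        Ignite ignite = Ignition.start(cfg);

        IgnitePredicate<TaskEvent> lsnr = evt -> {
            // A management-specific event would carry details of the Visor / Web Console task.
            System.out.println("Task finished: " + evt.taskName());

            return true; // Keep listening.
        };

        ignite.events().localListen(lsnr, EventType.EVT_TASK_FINISHED);
    }
}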


Best Regards,
Andrei



[jira] [Created] (IGNITE-8412) Bug with cache name in org.apache.ignite.util.GridCommandHandlerTest#testCacheContention brokes tests in security module.

2018-04-27 Thread Alexei Scherbakov (JIRA)
Alexei Scherbakov created IGNITE-8412:
-

 Summary: Bug with cache name in 
org.apache.ignite.util.GridCommandHandlerTest#testCacheContention brokes tests 
in security module.
 Key: IGNITE-8412
 URL: https://issues.apache.org/jira/browse/IGNITE-8412
 Project: Ignite
  Issue Type: Bug
Reporter: Alexei Scherbakov
Assignee: Alexei Scherbakov
 Fix For: 2.5








[jira] [Created] (IGNITE-8411) Binary Client Protocol spec: other parts clarifications

2018-04-27 Thread Alexey Kosenchuk (JIRA)
Alexey Kosenchuk created IGNITE-8411:


 Summary: Binary Client Protocol spec: other parts clarifications
 Key: IGNITE-8411
 URL: https://issues.apache.org/jira/browse/IGNITE-8411
 Project: Ignite
  Issue Type: Bug
  Components: documentation, thin client
Affects Versions: 2.4
Reporter: Alexey Kosenchuk


Cache Configuration
 ---
 
[https://apacheignite.readme.io/docs/binary-client-protocol-cache-configuration-operations]
 - OP_CACHE_GET_CONFIGURATION and OP_CACHE_CREATE_WITH_CONFIGURATION - 
QueryEntity - Structure of QueryField:
 the "default value" field (type Object) is absent; in reality it is the last field 
of QueryField.

 - OP_CACHE_GET_CONFIGURATION - Structure of Cache Configuration:
 CacheAtomicityMode is absent; in reality it is the first field.
 MaxConcurrentAsyncOperations is absent; in reality it sits between DefaultLockTimeout 
and MaxQueryIterators.
 The "Invalidate" field does not exist in reality.

 - the meaning and possible values of every configuration parameter must be 
clarified. If clarified in other docs, this spec must have link(s) to those docs.

 - I suggest combining the Cache Configuration descriptions in 
OP_CACHE_GET_CONFIGURATION and OP_CACHE_CREATE_WITH_CONFIGURATION to avoid 
duplicated descriptions.

 

SQL and Scan Queries
 
 [https://apacheignite.readme.io/docs/binary-client-protocol-sql-operations]
 - "Flag. Pass 0 for default, or 1 to keep the value in binary form.":
 "the value in binary form" flag should be left end clarified in the operations 
to which it is applicable for.

 - OP_QUERY_SQL:
 most of the fields in the request must be clarified. If clarified in other 
docs, this spec must have link(s) to that docs.
 For example:
 ** "Name of a type or SQL table": name of what type?

 - OP_QUERY_SQL_FIELDS:
 most of the fields in the request must be clarified. If clarified in other 
docs, this spec must have link(s) to that docs.
 For example:
 ** is there any correlation between "Query cursor page size" and "Max rows"?
 ** "Statement type": why there are only three types? what about INSERT, etc.?

 - OP_QUERY_SQL_FIELDS_CURSOR_GET_PAGE Response does not contain Cursor id. But 
responses for all other query operations contain it. Is it intentional?

 - OP_QUERY_SCAN_CURSOR_GET_PAGE Response - Row count field: says type "long". 
But Row count field in responses for all other query operations is "int".

 - OP_QUERY_SCAN:
 format and rules of the Filter object must be clarified. If clarified in other 
docs, this spec must have link(s) to that docs.

 - OP_QUERY_SCAN:
 in general, it's not clear how this operation should be supported on platforms 
other than the mentioned in "Filter platform" field.

 

Binary Types
 
 
[https://apacheignite.readme.io/docs/binary-client-protocol-binary-type-operations]
 - it should be explained somewhere when and why these operations need to be 
supported by a client.

 - Type id and Field id:
 it should be clarified that, before id calculation, type and field names must be 
converted to lower case (see the sketch below).
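
 For example, a sketch of the lower-case hash assumed above (this is what the 
default id mapper is believed to do; treat the exact algorithm as an assumption to 
be verified against the Ignite sources and stated explicitly in the spec):

{code:java}
// Hash of the lower-cased name; presumably used for both type ids and field ids
// by the default id mapper. Assumption - verify against BinaryBasicIdMapper.
static int lowerCaseHashCode(String name) {
    int h = 0;

    for (int i = 0; i < name.length(); i++)
        h = 31 * h + Character.toLowerCase(name.charAt(i));

    return h;
}
{code}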

 - OP_GET_BINARY_TYPE and OP_PUT_BINARY_TYPE - BinaryField - Type id:
 in reality it is not a type id (hash code) but a type code (1, 2,... 10,... 
103,...).

 - OP_GET_BINARY_TYPE and OP_PUT_BINARY_TYPE - "Affinity key field name":
 should be explained what is it. If explained in other docs, this spec must 
have link(s) to that docs.

 - OP_PUT_BINARY_TYPE - schema id:
 mandatory algorithm of schema Id calculation must be described somewhere. If 
described in other docs, this spec must have link(s) to that docs.

 - OP_REGISTER_BINARY_TYPE_NAME and OP_GET_BINARY_TYPE_NAME:
 it should be explained when and why these operations need to be supported by a 
client, and how they should be supported on platforms other than the one mentioned 
in the "Platform id" field.

 - OP_REGISTER_BINARY_TYPE_NAME:
 Type name - is it "full" or "short" name here?
 Type id - is it a hash from "full" or "short" name here?





[jira] [Created] (IGNITE-8410) [ML] Unify KNNClassification/KNNRegression Model Trainer .fit() signatures

2018-04-27 Thread Aleksey Zinoviev (JIRA)
Aleksey Zinoviev created IGNITE-8410:


 Summary: [ML] Unify KNNClassification/KNNRegression Model Trainer 
.fit() signatures
 Key: IGNITE-8410
 URL: https://issues.apache.org/jira/browse/IGNITE-8410
 Project: Ignite
  Issue Type: Improvement
  Components: ml
Affects Versions: 2.6
Reporter: Aleksey Zinoviev
Assignee: Aleksey Zinoviev


Make the fit() calls similar.

We should refactor one of the trainers and remove one signature. A possible solution 
is to pass dataCache and ignite separately.





[jira] [Created] (IGNITE-8409) Ignite gets stuck on IgniteDataStreamer.addData when using Object with AffinityKeyMapped

2018-04-27 Thread Andrey Aleksandrov (JIRA)
Andrey Aleksandrov created IGNITE-8409:
--

 Summary: Ignite gets stuck on IgniteDataStreamer.addData when 
using Object with AffinityKeyMapped
 Key: IGNITE-8409
 URL: https://issues.apache.org/jira/browse/IGNITE-8409
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.3
Reporter: Andrey Aleksandrov
 Fix For: 2.6
 Attachments: ContextCpty.java, TradeKey.java, TradeKeyNew.java

This problem reproduces from time to time when we stream data (TradeKey.java) into 
an Ignite SQL cache. As the AffinityKeyMapped field we used an object type 
(ContextCpty.java).

When we change the AffinityKeyMapped type from an object to a long 
(TradeKeyNew.java), the problem disappears. A hypothetical sketch of the two key 
shapes is shown below.
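
For readers without the attachments, a hypothetical sketch of the two key shapes 
(the real TradeKey.java, TradeKeyNew.java and ContextCpty.java are attached to the 
ticket and may differ):

{code:java}
import org.apache.ignite.cache.affinity.AffinityKeyMapped;

class ContextCpty { /* Placeholder for the attached ContextCpty.java. */ }

public class TradeKey {
    private long tradeId;

    @AffinityKeyMapped
    private ContextCpty cpty; // Object-typed affinity key - the variant that hangs.

    // In TradeKeyNew the affinity field is a primitive instead:
    // @AffinityKeyMapped
    // private long cptyId;   // long-typed affinity key - the variant that works.
}
{code}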

Investigation showed that we hang in the BPlusTree class in the following method:

private Result putDown(final Put p, final long pageId, final long fwdId, final 
int lvl)

In this method:

    res = p.tryReplaceInner(pageId, page, fwdId, lvl);

    if (res != RETRY) // Go down recursively.
        res = putDown(p, p.pageId, p.fwdId, lvl - 1);

    if (res == RETRY_ROOT || p.isFinished())
        return res;

    if (res == RETRY)
        checkInterrupted(); // WE ALWAYS GO TO THIS PLACE





Re: Apache Ignite 2.5 release

2018-04-27 Thread Ivan Rakov

Dmitriy,

There are some notes about WAL compaction in "Persistence Under the 
Hood" article: 
https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-WALcompaction

Please take a look - it has all the answers.

Best Regards,
Ivan Rakov

On 27.04.2018 2:28, Dmitriy Setrakyan wrote:

On Fri, Apr 27, 2018 at 6:38 AM, Ivan Rakov  wrote:


Folks,

I have a fix for a critical issue related to WAL compaction:
https://issues.apache.org/jira/browse/IGNITE-8393
In short, if part of WAL archive is broken, attempt to compress it may
result in spamming warnings in infinite loop.
I think it's also worth being included in 2.5 release.


Let's include it, especially given that the fix is done.

Ivan, on another note, is WAL compaction described anywhere? What do we do
internally to compact WAL and by what factor are we able to reduce the WAL
file size?

D.





[GitHub] ignite pull request #3924: IGNITE-8403: Added Logistic Regression

2018-04-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3924


---


[jira] [Created] (IGNITE-8408) IgniteUtils.invoke does not look list superclass methods

2018-04-27 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-8408:


 Summary: IgniteUtils.invoke does not look list superclass methods
 Key: IGNITE-8408
 URL: https://issues.apache.org/jira/browse/IGNITE-8408
 Project: Ignite
  Issue Type: Bug
Reporter: Alexey Goncharuk
Assignee: Alexey Goncharuk








Re: Apache Ignite 2.5 release

2018-04-27 Thread Andrey Gura
Hi there!

From my point of view it is a blocker too. Targeted to the 2.5 release.

Thanks, Ivan!

Fri, Apr 27, 2018, 2:29 Dmitriy Setrakyan :

> On Fri, Apr 27, 2018 at 6:38 AM, Ivan Rakov  wrote:
>
> > Folks,
> >
> > I have a fix for a critical issue related to WAL compaction:
> > https://issues.apache.org/jira/browse/IGNITE-8393
> > In short, if part of WAL archive is broken, attempt to compress it may
> > result in spamming warnings in infinite loop.
> > I think it's also worth being included in 2.5 release.
> >
>
> Let's include it, especially given that the fix is done.
>
> Ivan, on another note, is WAL compaction described anywhere? What do we do
> internally to compact WAL and by what factor are we able to reduce the WAL
> file size?
>
> D.
>


[jira] [Created] (IGNITE-8407) Wrong memory size printing in IgniteCacheDatabaseSnaredManager

2018-04-27 Thread Alexander Belyak (JIRA)
Alexander Belyak created IGNITE-8407:


 Summary: Wrong memory size printing in 
IgniteCacheDatabaseSnaredManager
 Key: IGNITE-8407
 URL: https://issues.apache.org/jira/browse/IGNITE-8407
 Project: Ignite
  Issue Type: Bug
  Components: general
Affects Versions: 2.4
Reporter: Alexander Belyak


In checkDataRegionSize, regCfg sizes are printed in SI format (based on 1000, not 
1024). We need to fix this and any other usages of getInitialSize()/getMaxSize() 
that are printed via U.readableSize(.., true).
{noformat}
throw new IgniteCheckedException("DataRegion must have size more than 10MB (use 
" +
"DataRegionConfiguration.initialSize and .maxSize properties to 
set correct size in bytes) " +
"[name=" + regCfg.getName() + ", initialSize=" + 
U.readableSize(regCfg.getInitialSize(), true) +
", maxSize=" + U.readableSize(regCfg.getMaxSize(), true) + "]"
{noformat}
should be replaced with
{noformat}
throw new IgniteCheckedException("DataRegion must have size more than 10MB (use 
" +
"DataRegionConfiguration.initialSize and .maxSize properties to 
set correct size in bytes) " +
"[name=" + regCfg.getName() + ", initialSize=" + 
U.readableSize(regCfg.getInitialSize(), false) +
", maxSize=" + U.readableSize(regCfg.getMaxSize(), false) + "]"
{noformat}





Re: Data Loss while upgrading custom jar from old jar in server and client nodes

2018-04-27 Thread Denis Garus
Ivan, 

there are some questions related to ticket IGNITE-8162:

1. Deleting a cache configuration file leads to successfully getting the cache
configuration. But is there a guarantee that the cache will work correctly?

2. If deleting the cache configuration file is enough to get the cache working,
can we delete this file in the catch block and continue the cache start?



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


[GitHub] ignite pull request #3930: IGNITE-8400 IgniteTopologyValidatorGridSplitCache...

2018-04-27 Thread alex-plekhanov
GitHub user alex-plekhanov opened a pull request:

https://github.com/apache/ignite/pull/3930

IGNITE-8400 
IgniteTopologyValidatorGridSplitCacheTest.testTopologyValidatorWithCacheGroup 
(30 times TC run)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alex-plekhanov/ignite ignite-8400-debug

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3930.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3930


commit 7c68ac55dd9ec9957b7bf0769811ee7a0663325a
Author: Aleksey Plekhanov 
Date:   2018-04-26T13:27:51Z

IGNITE-8400 
IgniteTopologyValidatorGridSplitCacheTest.testTopologyValidatorWithCacheGroup 
node failure fix (50 times)

commit 5c1c297f676441306967aecedbaff80f7139ca81
Author: Aleksey Plekhanov 
Date:   2018-04-26T15:52:43Z

IGNITE-8400 
IgniteTopologyValidatorGridSplitCacheTest.testTopologyValidatorWithCacheGroup 
node failure fix

commit 2118a9353dde9517e47cbda3beaacc6df4ea27f6
Author: Aleksey Plekhanov 
Date:   2018-04-27T06:59:39Z

IGNITE-8400 
IgniteTopologyValidatorGridSplitCacheTest.testTopologyValidatorWithCacheGroup 
node failure fix (30 times)




---