Re: MTCGA: JVM crash in .NET tests - IGNITE-7871

2018-04-11 Thread Pavel Tupitsyn
Looks good now, thanks!

On Wed, Apr 11, 2018 at 6:31 PM, Pavel Kovalenko  wrote:

> Hello Pavel,
>
> Seems the problem is gone after this hotfix:
> https://github.com/apache/ignite/commit/0e73fa2c10dcd96ff98279018bdd3f8b36568008
> Could you please double-check that everything is OK now? Here is the latest
> run, including the fix:
> https://ci.ignite.apache.org/viewLog.html?buildId=1192257&tab=buildResultsDiv&buildTypeId=IgniteTests24Java8_IgnitePlatformNetCoreLinux
>
> 2018-04-11 17:17 GMT+03:00 Pavel Tupitsyn :
>
> > Igniters,
> >
> > There is JVM crash in .NET Core Linux tests [1] after IGNITE-7871 merge
> > [2].
> >
> > Pavel Kovalenko, Alexey Goncharuk, please have a look.
> > In the log [3] there are some details:
> >
> > Unknown connection detected (is some other software connecting to this
> > Ignite port? missing SSL configuration on remote node?)
> > [rmtAddr=/127.0.0.1]
> >
> > ...
> > No verification for local node leave has been received from
> > coordinator (will stop node anyway).
> > Critical failure. Will be handled accordingly to configured handler
> > [hnd=class o.a.i.failure.StopNodeOrHaltFailureHandler,
> > failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION,
> > err=java.lang.IllegalStateException: Thread tcp-disco-srvr-#671%grid2%
> > is terminated unexpectedly.]]
> > JVM will be halted immediately due to the failure:
> > [failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION,
> > err=java.lang.IllegalStateException: Thread tcp-disco-srvr-#671%grid2%
> > is terminated unexpectedly.]]
> >
> >
> >
> >
> >
> > [1]
> > https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_IgnitePlatformNetCoreLinux&branch_IgniteTests24Java8=%3Cdefault%3E&tab=buildTypeStatusDiv
> >
> > [2]
> > https://github.com/apache/ignite/commit/da77b9818a70495b7afdf6899ebd9180dadd7f68
> >
> > [3]
> > https://ci.ignite.apache.org/repository/download/IgniteTests24Java8_IgnitePlatformNetCoreLinux/1191602:id/logs.zip%21/modules/platforms/dotnet/Apache.Ignite.Core.Tests.DotNetCore/bin/Debug/netcoreapp2.0/dotnet-test.log
> >
>


Node.js client update: rev. 2

2018-04-11 Thread Pavel Petroshenko
Igniters,

Just to give you an update on the next iteration of the Ignite Node.js thin
client implementation.

The second iteration is available for review/testing.

The changes are available in the pull request [1] or directly from our
repository [2].

The short README file [3] covers:

- the list of supported features
- simple instructions:
  * how to install the client
  * how to run the examples
  * how to run the tests

We encourage you to give it a look, or even a try.

The APIs are available both in the sources [4] and as a generated
specification produced automatically from the JSDoc comments [5].

Please let us know if you have any questions.

Thanks!

P.

[1] https://github.com/apache/ignite/pull/3680
[2] https://github.com/nobitlost/ignite/tree/master/modules/clients/nodejs
[3] https://github.com/nobitlost/ignite/blob/master/modules/clients/nodejs/README.md
[4] https://github.com/nobitlost/ignite/blob/master/modules/clients/nodejs/lib
[5] https://rawgit.com/nobitlost/ignite/master/modules/clients/nodejs/api_spec/index.html


Re: Triggering rebalancing on timeout or manually if the baseline topology is not reassembled

2018-04-11 Thread Denis Magda
Pavel, Val,

So, it means that the rebalancing will be initiated only after an
administrator removes the failed node from the topology, right?

Next, imagine that you are the IT administrator who has to automate
rebalancing activation if a node fails and is not recovered within 1
minute. What would you do, and what does Ignite provide to fulfill the task?
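
For illustration, a minimal sketch of such automation using the public
events and baseline APIs (assuming Ignite 2.4; the one-minute delay, class
names, and wiring are illustrative, and EVT_NODE_FAILED must be enabled in
the node configuration):

    import java.util.UUID;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.events.DiscoveryEvent;
    import org.apache.ignite.events.Event;
    import org.apache.ignite.events.EventType;
    import org.apache.ignite.lang.IgnitePredicate;

    public class BaselineAutoShrinkSketch {
        public static void main(String[] args) {
            // EVT_NODE_FAILED is not recorded unless explicitly enabled.
            IgniteConfiguration cfg = new IgniteConfiguration()
                .setIncludeEventTypes(EventType.EVT_NODE_FAILED);

            Ignite ignite = Ignition.start(cfg);

            ScheduledExecutorService timer =
                Executors.newSingleThreadScheduledExecutor();

            IgnitePredicate<Event> lsnr = evt -> {
                UUID failedId = ((DiscoveryEvent)evt).eventNode().id();

                // If the failed node has not rejoined after 1 minute, shrink
                // the baseline to the currently alive server nodes, which
                // triggers rebalancing within the new baseline. (A restarted
                // node may come back with a new ID; a production version
                // would match on consistent ID instead.)
                timer.schedule(() -> {
                    if (ignite.cluster().node(failedId) == null)
                        ignite.cluster().setBaselineTopology(
                            ignite.cluster().forServers().nodes());
                }, 1, TimeUnit.MINUTES);

                return true; // Keep listening.
            };

            ignite.events().localListen(lsnr, EventType.EVT_NODE_FAILED);
        }
    }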

--
Denis

On Wed, Apr 11, 2018 at 1:01 PM, Pavel Kovalenko  wrote:

> Denis,
>
> In case of an incomplete baseline topology, IgniteCache.rebalance() will do
> nothing, because this event doesn't trigger partitions exchange or affinity
> change, so the states of existing partitions are preserved.
>
> 2018-04-11 22:27 GMT+03:00 Valentin Kulichenko <
> valentin.kuliche...@gmail.com>:
>
> > Denis,
> >
> > In my understanding, in this case you should remove the node from the BLT,
> > and that will trigger the rebalancing, no?
> >
> > -Val
> >
> > On Wed, Apr 11, 2018 at 12:23 PM, Denis Magda 
> wrote:
> >
> > > Igniters,
> > >
> > > As we know, rebalancing doesn't happen if one of the nodes goes down,
> > > thus shrinking the baseline topology. This complies with our assumption
> > > that the node should be recovered soon and there is no need to waste
> > > CPU/memory/networking resources of the cluster shifting the data around.
> > >
> > > However, there are always edge cases. I was reasonably asked how to
> > > trigger rebalancing within the baseline topology manually or on timeout if:
> > >
> > >    - It's not expected that the failed node will be resurrected in the
> > >    near future, and
> > >    - It's not likely that the node will be replaced by another one.
> > >
> > > The question: if I call IgniteCache.rebalance() or configure
> > > CacheConfiguration.rebalanceTimeout, will rebalancing be fired within
> > > the baseline topology?
> > >
> > > --
> > > Denis
> > >
> >
>


[GitHub] ignite pull request #3799: Ignite-2.5.1.b4

2018-04-11 Thread DmitriyGovorukhin
GitHub user DmitriyGovorukhin opened a pull request:

https://github.com/apache/ignite/pull/3799

Ignite-2.5.1.b4



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-2.5.1.b4

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3799.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3799


commit 1cea80d29f4f1c61ed56ad1261b74ed42611bf64
Author: Ilya Lantukh 
Date:   2018-04-06T10:49:10Z

IGNITE-8018 Optimized GridCacheMapEntry initialValue() - Fixes #3686.

Signed-off-by: Alexey Goncharuk 

commit 37fc72542eb6baa8be8b41aecd08a194102d13c1
Author: Алексей Стельмак 
Date:   2018-04-06T15:28:22Z

IGNITE-8049 Limit the number of operation cycles in B+Tree - Fixes #3769.

Signed-off-by: dpavlov 

(cherry picked from commit e491f10)

commit 76e293654e34c927d6c9efc85a12e736b58a21f2
Author: Eduard Shangareev 
Date:   2018-04-06T16:22:07Z

IGNITE-8114 Add fail recovery mechanism to tracking pages - Fixes #3734.

Signed-off-by: dpavlov 

(cherry picked from commit 0829397)

commit 49f11db727febc83297c7f0f5de9e6f98f0197fa
Author: Alexey Kuznetsov 
Date:   2018-04-09T02:25:50Z

IGNITE-8159 control.sh: Fixed NPE on adding nodes on empty baseline and not 
active cluster.

(cherry picked from commit 834869c)

commit 9ad7be2f51b6dcdcdf43fedb298cd4e240f0adab
Author: Ilya Borisov 
Date:   2018-04-09T13:59:32Z

IGNITE-8155 Web Console: Fixed number pattern warning in browser console.

(cherry picked from commit 5d8f570)

commit 4aa56751906e5db7aad025a7193933fa929aae26
Author: Vasiliy Sisko 
Date:   2018-04-09T15:13:21Z

IGNITE-7940 Visor CMD: Added "cache -slp" and "cache -rlp" commands to show 
and reset lost partitions for specified cache.

(cherry picked from commit abfa0f5)

commit cc04c5c70af1bdbba834f73330e73277b60e23fc
Author: Eduard Shangareev 
Date:   2018-04-09T16:15:50Z

IGNITE-8114 Additional fix for Add fail recovery mechanism to tracking pages

(cherry picked from commit 961fc35)

commit c70d85aa36c702ea0f29bd8668e9bf0790f9ba11
Author: Vasiliy Sisko 
Date:   2018-04-10T08:42:24Z

IGNITE-8126 Web Console: Fixed code generation for cache load.

(cherry picked from commit a0a187b)

commit 8d3755b9c58eef12c5fc9cabfc0b1c05f6db716e
Author: Semyon Boikov 
Date:   2018-04-10T08:37:39Z

IGNITE-7222 Added ZooKeeper discovery SPI

commit b096a463c338565a7661f8a853a257518d872997
Author: Stanislav Lukyanov 
Date:   2018-04-09T11:33:13Z

IGNITE-7904: Changed IgniteUtils::cast not to trim exception chains. This 
closes #3683.

commit 82a4c024fe06ef8c8deeaf762f0cc20a8e481252
Author: Roman Guseinov 
Date:   2018-04-09T11:45:44Z

IGNITE-7944: Disconnected client node tries to send JOB_CANCEL message. 
Applied fix:
- Skip sending message if client disconnected;
- Throw IgniteCheckedException if a client node is disconnected and 
communication client is null.
This closes #3737.

commit c1745de37891026e0a719f0c1d1afe768dfccbf3
Author: Vasiliy Sisko 
Date:   2018-04-10T10:48:52Z

IGNITE-7927 Web Console: Fixed demo for non-collocated joins.

(cherry picked from commit 647620b)

commit b28287d1861fd841a18d0eef95eff309d21a55ef
Author: Alexey Goncharuk 
Date:   2018-04-10T13:22:28Z

IGNITE-8025 Future must fail if assertion error has been thrown in the 
worker thread

commit a832f2b2e5788c45114c3cb5529d7cf53d08f9a6
Author: Andrey Kuznetsov 
Date:   2018-04-10T14:30:12Z

ignite-7772 System workers critical failures handling

Signed-off-by: Andrey Gura 

commit 912433ba9aa113508d05930691b251eccd8f5870
Author: Aleksey Plekhanov 
Date:   2018-04-10T15:54:03Z

IGNITE-8069 IgniteOutOfMemoryException should be handled accordingly to 
provided failure handler

Signed-off-by: Andrey Gura 

commit 99feab6ace66d011b677fd4d57b44fc54da8fd4f
Author: Alexey Goncharuk 
Date:   2018-04-10T17:33:47Z

IGNITE-6430 Complete failing test early

commit 526fb0ee612ef71fde58a1274db35e8205304a63
Author: Dmitriy Sorokin 
Date:   2018-04-10T19:20:41Z

IGNITE-8101 Ability to terminate system workers by JMX for test purposes.

Signed-off-by: Andrey Gura 

commit b4cb2f0df944534743a9d73811e047eda572258c
Author: mcherkasov 
Date:   2018-04-11T00:27:20Z

IGNITE-8153 Nodes fail to connect each other when SSL is 

Re: Triggering rebalancing on timeout or manually if the baseline topology is not reassembled

2018-04-11 Thread Pavel Kovalenko
Denis,

In case of an incomplete baseline topology, IgniteCache.rebalance() will do
nothing, because this event doesn't trigger partitions exchange or affinity
change, so the states of existing partitions are preserved.

2018-04-11 22:27 GMT+03:00 Valentin Kulichenko <
valentin.kuliche...@gmail.com>:

> Denis,
>
> In my understanding, in this case you should remove the node from the BLT,
> and that will trigger the rebalancing, no?
>
> -Val
>
> On Wed, Apr 11, 2018 at 12:23 PM, Denis Magda  wrote:
>
> > Igniters,
> >
> > As we know, rebalancing doesn't happen if one of the nodes goes down,
> > thus shrinking the baseline topology. This complies with our assumption
> > that the node should be recovered soon and there is no need to waste
> > CPU/memory/networking resources of the cluster shifting the data around.
> >
> > However, there are always edge cases. I was reasonably asked how to
> > trigger rebalancing within the baseline topology manually or on timeout if:
> >
> >    - It's not expected that the failed node will be resurrected in the
> >    near future, and
> >    - It's not likely that the node will be replaced by another one.
> >
> > The question: if I call IgniteCache.rebalance() or configure
> > CacheConfiguration.rebalanceTimeout, will rebalancing be fired within
> > the baseline topology?
> >
> > --
> > Denis
> >
>


Re: Service grid redesign

2018-04-11 Thread Denis Magda
Denis,

I think that the service deployment state needs to be persisted cluster-wide.
I guess that our meta-store is capable of doing so. Alex G., Vladimir,
could you confirm?

As for the split-brain scenarios, I would put them aside for now because,
anyway, they have to be solved at lower levels (meta store, discovery,
etc.).

Also, I heard that presently we store a service configuration in the system
cache, which doesn't give us a way to deploy a new version of a service
without undeploying the previous one. Will this issue be addressed by the
new deployment approach?
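
For reference, a minimal sketch of how dynamic deployment and redeployment
look through the IgniteServices API today (the service name and the
MyCounterService implementation are illustrative):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.services.Service;
    import org.apache.ignite.services.ServiceConfiguration;
    import org.apache.ignite.services.ServiceContext;

    public class ServiceDeploySketch {
        // Illustrative no-op service implementation.
        static class MyCounterService implements Service {
            @Override public void init(ServiceContext ctx) { /* no-op */ }
            @Override public void execute(ServiceContext ctx) { /* no-op */ }
            @Override public void cancel(ServiceContext ctx) { /* no-op */ }
        }

        public static void main(String[] args) {
            Ignite ignite = Ignition.start();

            ServiceConfiguration cfg = new ServiceConfiguration();
            cfg.setName("counterService");
            cfg.setService(new MyCounterService());
            cfg.setTotalCount(1); // Cluster singleton.

            // The configuration ends up in the system cache.
            ignite.services().deploy(cfg);

            // Deploying a new version currently requires an explicit
            // undeploy of the previous one first.
            ignite.services().cancel("counterService");
            ignite.services().deploy(cfg);
        }
    }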

--
Denis

On Wed, Apr 11, 2018 at 1:28 AM, Denis Mekhanikov 
wrote:

> Denis,
>
> Sounds reasonable. It's not clear, though, what should happen if a joining
> node has some services persisted that are missing on other nodes.
> Should we deploy them?
> If we do so, it could lead to surprising behaviour. For example, you could
> kill a node, undeploy a service, then bring back an old node, and it would
> resurrect the service.
> We could store a deployment counter along with the service configurations
> on all nodes that would show how many times the service state has changed,
> i.e. how many times it has been undeployed/redeployed. It should be kept
> for undeployed services as well to avoid situations like the one I described.
>
> But it still leaves a possibility of incorrect behaviour if there was a
> split-brain situation at some point. I don't think we should process it
> somehow, though. If we choose to tackle it, it will overcomplicate things
> for the sake of a minor improvement.
>
> Denis
>
> Tue, Apr 10, 2018 at 0:55, Valentin Kulichenko <
> valentin.kuliche...@gmail.com>:
>
> > I was responding to another Denis :) Agree with you on your point though.
> >
> > -Val
> >
> > On Mon, Apr 9, 2018 at 2:48 PM, Denis Magda  wrote:
> >
> > > Val,
> > >
> > > Guess we're talking about other situations. I'm bringing up the case
> > > when a service was deployed dynamically and has to be brought up after
> > > a full cluster restart w/o user intervention. To achieve this we need
> > > to persist the service's configuration somewhere.
> > >
> > > --
> > > Denis
> > >
> > > On Mon, Apr 9, 2018 at 1:42 PM, Valentin Kulichenko <
> > > valentin.kuliche...@gmail.com> wrote:
> > >
> > > > Denis,
> > > >
> > > > EVT_CLASS_DEPLOYED should be fired every time a class is deployed or
> > > > redeployed. If this doesn't happen in some cases, I believe this
> > > > would be a bug. I don't think we need to add any new events.
> > > >
> > > > -Val
> > > >
> > > > On Mon, Apr 9, 2018 at 10:50 AM, Denis Magda 
> > wrote:
> > > >
> > > > > Denis,
> > > > >
> > > > > I would encourage us to persist a service configuration in the meta
> > > > > store and have this capability enabled by default. That's essential
> > > > > for services started dynamically. Moreover, we support similar
> > > > > behavior for caches, indexes, and other DDL changes that happen at
> > > > > runtime.
> > > > >
> > > > > --
> > > > > Denis
> > > > >
> > > > > On Mon, Apr 9, 2018 at 9:34 AM, Denis Mekhanikov <
> > > dmekhani...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Another question that I would like to discuss is whether services
> > > > > > should be preserved across cluster restarts.
> > > > > >
> > > > > > Currently it depends on the persistence configuration. If
> > > > > > persistence for any data region is enabled, then services will be
> > > > > > persisted as well. This is a pretty strange way of configuring
> > > > > > this behaviour.
> > > > > > I'm not sure if anybody relies on this functionality right now.
> > > > > > Should we support it at all? If yes, should we make it configurable?
> > > > > >
> > > > > > Denis
> > > > > >
> > > > > > Mon, Apr 9, 2018 at 19:27, Denis Mekhanikov <
> > dmekhani...@gmail.com
> > > >:
> > > > > >
> > > > > > > Val,
> > > > > > >
> > > > > > > Sounds reasonable. I just think that the user should have some
> > > > > > > way to know that a new version of a service class was deployed.
> > > > > > > One way to do it is to listen to *EVT_CLASS_DEPLOYED*. I'm not
> > > > > > > sure whether it is triggered on class redeployment, though. If
> > > > > > > not, then another event type should be added.
> > > > > > >
> > > > > > > I don't think that a lot of people will implement their own
> > > > > > > *DeploymentSpi*-s, so we should make working with
> > > > > > > *UriDeploymentSpi* as comfortable as possible.
> > > > > > >
> > > > > > > Denis
> > > > > > >
> > > > > > > Fri, Apr 6, 2018 at 23:40, Valentin Kulichenko <
> > > > > > > valentin.kuliche...@gmail.com>:
> > > > > > >
> > > > > > >> Yes, the class deployment itself has to be explicit. I.e., there
> > > > > > >> has to be a manual step where the user updates the class, and
> > > > > > >> the exact step required
> > 

Re: Triggering rebalancing on timeout or manually if the baseline topology is not reassembled

2018-04-11 Thread Valentin Kulichenko
Denis,

In my understanding, in this case you should remove the node from the BLT,
and that will trigger the rebalancing, no?

-Val

On Wed, Apr 11, 2018 at 12:23 PM, Denis Magda  wrote:

> Igniters,
>
> As we know, rebalancing doesn't happen if one of the nodes goes down,
> thus shrinking the baseline topology. This complies with our assumption that
> the node should be recovered soon and there is no need to waste
> CPU/memory/networking resources of the cluster shifting the data around.
>
> However, there are always edge cases. I was reasonably asked how to trigger
> rebalancing within the baseline topology manually or on timeout if:
>
>    - It's not expected that the failed node will be resurrected in the
>    near future, and
>    - It's not likely that the node will be replaced by another one.
>
> The question: if I call IgniteCache.rebalance() or configure
> CacheConfiguration.rebalanceTimeout, will rebalancing be fired within
> the baseline topology?
>
> --
> Denis
>


Triggering rebalancing on timeout or manually if the baseline topology is not reassembled

2018-04-11 Thread Denis Magda
Igniters,

As we know, rebalancing doesn't happen if one of the nodes goes down,
thus shrinking the baseline topology. This complies with our assumption that
the node should be recovered soon and there is no need to waste
CPU/memory/networking resources of the cluster shifting the data around.

However, there are always edge cases. I was reasonably asked how to trigger
rebalancing within the baseline topology manually or on timeout if:

   - It's not expected that the failed node will be resurrected in the
   near future, and
   - It's not likely that the node will be replaced by another one.

The question: if I call IgniteCache.rebalance() or configure
CacheConfiguration.rebalanceTimeout, will rebalancing be fired within
the baseline topology?
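
For reference, the two mechanisms in question, sketched against the public
API (assuming Ignite 2.4; the cache name and timeout value are illustrative):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class RebalanceSketch {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start();

            // Declarative knob: per-cache rebalance timeout, in milliseconds.
            CacheConfiguration<Integer, String> ccfg =
                new CacheConfiguration<Integer, String>("myCache")
                    .setRebalanceTimeout(60_000L);

            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(ccfg);

            // Manual trigger: ask the cache to rebalance and wait for it.
            cache.rebalance().get();
        }
    }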

--
Denis


[jira] [Created] (IGNITE-8230) SQL: CREATE TABLE doesn't take backups from template

2018-04-11 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-8230:
-

 Summary: SQL: CREATE TABLE doesn't take backups from template
 Key: IGNITE-8230
 URL: https://issues.apache.org/jira/browse/IGNITE-8230
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.4
Reporter: Evgenii Zhuravlev
 Fix For: 2.5
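
A hypothetical reproducer, assuming the standard cache-template mechanism
(cache and table names are illustrative; the expected behavior follows the
ticket summary):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.query.SqlFieldsQuery;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class TemplateBackupsSketch {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start();

            // Register a cache template with one backup.
            ignite.addCacheConfiguration(
                new CacheConfiguration<>("myTemplate").setBackups(1));

            // Expectation: the table's underlying cache inherits backups=1
            // from the template; per this ticket, the backups setting is
            // reportedly not applied.
            ignite.getOrCreateCache("default").query(new SqlFieldsQuery(
                "CREATE TABLE person (id INT PRIMARY KEY, name VARCHAR) " +
                "WITH \"template=myTemplate\"")).getAll();
        }
    }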






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #3798: IGNITE-7829

2018-04-11 Thread zaleslaw
GitHub user zaleslaw opened a pull request:

https://github.com/apache/ignite/pull/3798

IGNITE-7829



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-7829

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3798.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3798


commit 098d2832b3f4af23c72bc25f43e4ab8a95f2f416
Author: Zinoviev Alexey 
Date:   2018-04-11T18:40:27Z

IGNITE-7829: Added example

commit da58736d2c223061ebbc8e54a252661165d10919
Author: Zinoviev Alexey 
Date:   2018-04-11T18:47:59Z

IGNITE-7829: Added example




---


[jira] [Created] (IGNITE-8229) Warning: Ignoring query projection because it's executed over LOCAL cache

2018-04-11 Thread Aleksandr Tceluiko (JIRA)
Aleksandr Tceluiko created IGNITE-8229:
--

 Summary: Warning: Ignoring query projection because it's executed 
over LOCAL cache 
 Key: IGNITE-8229
 URL: https://issues.apache.org/jira/browse/IGNITE-8229
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.3
Reporter: Aleksandr Tceluiko


Every scan query gets a warning:

[13:26:25 WRN]  Ignoring query projection because it's
executed over LOCAL cache (only local node will be queried):
GridCacheQueryAdapter [type=SCAN, clsName=null, clause=null,
filter=o.a.i.i.processors.platform.cache.PlatformCacheEntryFilterImpl@7629939a,
transform=null, part=null, incMeta=false,
metrics=GridCacheQueryMetricsAdapter [minTime=9223372036854775807, maxTime=0,
sumTime=0, avgTime=0.0, execs=0, completed=0, fails=0], pageSize=1024,
timeout=0, keepAll=true, incBackups=false, dedup=false,
prj=o.a.i.i.cluster.ClusterGroupAdapter@708472f7, keepBinary=true, subjId=null,
taskHash=0]

 

Valentin Kulichenko wrote:
{quote}This looks like a bug as there is no way to provide cluster group for a 
query in the latest versions.
{quote}
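
For context, a minimal Java sketch of the setup under which the warning is
reported: a scan query over a cache in LOCAL mode (the cache name and
filter are illustrative):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.CacheMode;
    import org.apache.ignite.cache.query.ScanQuery;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class LocalScanSketch {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start();

            IgniteCache<Integer, String> cache = ignite.createCache(
                new CacheConfiguration<Integer, String>("localCache")
                    .setCacheMode(CacheMode.LOCAL));

            cache.put(1, "a");

            // The query runs only on the local node; per this ticket, the
            // warning about an ignored query projection is logged on every
            // such scan.
            cache.query(new ScanQuery<Integer, String>((k, v) -> k > 0)).getAll();
        }
    }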



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #3797: Ignite-gg-13703

2018-04-11 Thread DmitriyGovorukhin
GitHub user DmitriyGovorukhin opened a pull request:

https://github.com/apache/ignite/pull/3797

Ignite-gg-13703



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-gg-13703

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3797.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3797


commit 1cea80d29f4f1c61ed56ad1261b74ed42611bf64
Author: Ilya Lantukh 
Date:   2018-04-06T10:49:10Z

IGNITE-8018 Optimized GridCacheMapEntry initialValue() - Fixes #3686.

Signed-off-by: Alexey Goncharuk 

commit 37fc72542eb6baa8be8b41aecd08a194102d13c1
Author: Алексей Стельмак 
Date:   2018-04-06T15:28:22Z

IGNITE-8049 Limit the number of operation cycles in B+Tree - Fixes #3769.

Signed-off-by: dpavlov 

(cherry picked from commit e491f10)

commit 76e293654e34c927d6c9efc85a12e736b58a21f2
Author: Eduard Shangareev 
Date:   2018-04-06T16:22:07Z

IGNITE-8114 Add fail recovery mechanism to tracking pages - Fixes #3734.

Signed-off-by: dpavlov 

(cherry picked from commit 0829397)

commit 49f11db727febc83297c7f0f5de9e6f98f0197fa
Author: Alexey Kuznetsov 
Date:   2018-04-09T02:25:50Z

IGNITE-8159 control.sh: Fixed NPE on adding nodes on empty baseline and not 
active cluster.

(cherry picked from commit 834869c)

commit 9ad7be2f51b6dcdcdf43fedb298cd4e240f0adab
Author: Ilya Borisov 
Date:   2018-04-09T13:59:32Z

IGNITE-8155 Web Console: Fixed number pattern warning in browser console.

(cherry picked from commit 5d8f570)

commit 4aa56751906e5db7aad025a7193933fa929aae26
Author: Vasiliy Sisko 
Date:   2018-04-09T15:13:21Z

IGNITE-7940 Visor CMD: Added "cache -slp" and "cache -rlp" commands to show 
and reset lost partitions for specified cache.

(cherry picked from commit abfa0f5)

commit cc04c5c70af1bdbba834f73330e73277b60e23fc
Author: Eduard Shangareev 
Date:   2018-04-09T16:15:50Z

IGNITE-8114 Additional fix for Add fail recovery mechanism to tracking pages

(cherry picked from commit 961fc35)

commit c70d85aa36c702ea0f29bd8668e9bf0790f9ba11
Author: Vasiliy Sisko 
Date:   2018-04-10T08:42:24Z

IGNITE-8126 Web Console: Fixed code generation for cache load.

(cherry picked from commit a0a187b)

commit 8d3755b9c58eef12c5fc9cabfc0b1c05f6db716e
Author: Semyon Boikov 
Date:   2018-04-10T08:37:39Z

IGNITE-7222 Added ZooKeeper discovery SPI

commit b096a463c338565a7661f8a853a257518d872997
Author: Stanislav Lukyanov 
Date:   2018-04-09T11:33:13Z

IGNITE-7904: Changed IgniteUtils::cast not to trim exception chains. This 
closes #3683.

commit 82a4c024fe06ef8c8deeaf762f0cc20a8e481252
Author: Roman Guseinov 
Date:   2018-04-09T11:45:44Z

IGNITE-7944: Disconnected client node tries to send JOB_CANCEL message. 
Applied fix:
- Skip sending message if client disconnected;
- Throw IgniteCheckedException if a client node is disconnected and 
communication client is null.
This closes #3737.

commit c1745de37891026e0a719f0c1d1afe768dfccbf3
Author: Vasiliy Sisko 
Date:   2018-04-10T10:48:52Z

IGNITE-7927 Web Console: Fixed demo for non-collocated joins.

(cherry picked from commit 647620b)

commit b28287d1861fd841a18d0eef95eff309d21a55ef
Author: Alexey Goncharuk 
Date:   2018-04-10T13:22:28Z

IGNITE-8025 Future must fail if assertion error has been thrown in the 
worker thread

commit a832f2b2e5788c45114c3cb5529d7cf53d08f9a6
Author: Andrey Kuznetsov 
Date:   2018-04-10T14:30:12Z

ignite-7772 System workers critical failures handling

Signed-off-by: Andrey Gura 

commit 912433ba9aa113508d05930691b251eccd8f5870
Author: Aleksey Plekhanov 
Date:   2018-04-10T15:54:03Z

IGNITE-8069 IgniteOutOfMemoryException should be handled accordingly to 
provided failure handler

Signed-off-by: Andrey Gura 

commit 99feab6ace66d011b677fd4d57b44fc54da8fd4f
Author: Alexey Goncharuk 
Date:   2018-04-10T17:33:47Z

IGNITE-6430 Complete failing test early

commit 526fb0ee612ef71fde58a1274db35e8205304a63
Author: Dmitriy Sorokin 
Date:   2018-04-10T19:20:41Z

IGNITE-8101 Ability to terminate system workers by JMX for test purposes.

Signed-off-by: Andrey Gura 

commit b4cb2f0df944534743a9d73811e047eda572258c
Author: mcherkasov 
Date:   2018-04-11T00:27:20Z

IGNITE-8153 Nodes fail to connect each other when SSL is 

Re: Ignite documentation is broken

2018-04-11 Thread Denis Magda
You can come across some glitches on the readme side; I faced several
personally this week. It seems they are rolling out updates.

--
Denis

On Wed, Apr 11, 2018 at 1:49 AM, Dmitriy Setrakyan 
wrote:

> Igniters,
>
> The readme documentation seems broken. Is it only for me, or others
> experience the same thing?
>
> https://apacheignite.readme.io/docs
>
> Did anyone change anything in the docs settings?
>
> D.
>


Re: Please, add me to contributors

2018-04-11 Thread Denis Magda
Hello Evgenii,

You are all set and welcome to the community! Here are a few references
that should boost your onboarding.

Please subscribe to both dev and user lists:
https://ignite.apache.org/community/resources.html#mail-lists

Get familiar with Ignite development process described here:
https://cwiki.apache.org/confluence/display/IGNITE/Development+Process

Instructions on how to contribute can be found here:
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute

Project setup in Intellij IDEA:
https://cwiki.apache.org/confluence/display/IGNITE/Project+Setup

Once you are familiar with the project and have run a few examples, pick
a Jira ticket you would like to start on. Send an email to the dev list
sharing your JIRA ID, so we can add you as a contributor in Jira.

These are the easy tickets to start with:
https://issues.apache.org/jira/browse/IGNITE-4549?jql=project%20%3D%20IGNITE%20AND%20labels%20in%20(newbie)%20and%20status%20%3D%20OPEN

While those are more advanced but appealing:
https://ignite.apache.org/community/contribute.html#pick-tickets

Looking forward to your contributions!
Denis

On Wed, Apr 11, 2018 at 2:30 AM, Загумённов Евгений 
wrote:

> Hello, I'm Evgenii Zagumennov, My JiraID is "ezagumennov". Please, add me
> to contributors.
>
> Regards, Evgenii
>
>


[jira] [Created] (IGNITE-8228) Log exception stack trace in failure processor

2018-04-11 Thread Andrey Gura (JIRA)
Andrey Gura created IGNITE-8228:
---

 Summary: Log exception stack trace in failure processor
 Key: IGNITE-8228
 URL: https://issues.apache.org/jira/browse/IGNITE-8228
 Project: Ignite
  Issue Type: Improvement
Reporter: Andrey Gura
Assignee: Andrey Gura
 Fix For: 2.5


At present the failure processor prints only the exception class. It must
also log the stack trace.
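
For context, the failure processor delegates to a pluggable handler
configured per node; a minimal configuration sketch (assuming the
FailureHandler API introduced with IEP-14):

    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.failure.StopNodeOrHaltFailureHandler;

    public class FailureHandlerSketch {
        public static void main(String[] args) {
            // On a critical failure (e.g. SYSTEM_WORKER_TERMINATION) the
            // handler decides the node's fate; this one stops the node or
            // halts the JVM. The stack trace this ticket asks for would be
            // logged by the failure processor that invokes the handler.
            IgniteConfiguration cfg = new IgniteConfiguration()
                .setFailureHandler(new StopNodeOrHaltFailureHandler());

            Ignition.start(cfg);
        }
    }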



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8227) Research possibility and implement JUnit test failure handler for TeamCity

2018-04-11 Thread Dmitriy Pavlov (JIRA)
Dmitriy Pavlov created IGNITE-8227:
--

 Summary: Research possibility and implement JUnit test failure 
handler for TeamCity
 Key: IGNITE-8227
 URL: https://issues.apache.org/jira/browse/IGNITE-8227
 Project: Ignite
  Issue Type: Test
Reporter: Dmitriy Pavlov
Assignee: Dmitriy Pavlov
 Fix For: 2.6


After IEP-14 we found a lot of TC failures involving unexpected node stops.

To avoid suite exit codes, tests have NoOpFailureHandler as the default.

But instead of this, a better handler could be
stopNode + fail the currently running test with a message.

This default would allow identifying such failures without a log-message
fail condition.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Remove cache groups in AI 3.0

2018-04-11 Thread Anton Vinogradov
Vova,
thanks for the explanations.

The comments are really valuable to me.

>> 1) Please see my original message explaining how this could be fixed
>> without cache groups.

I have questions about your initial statements.

  >> 1) "Merge" partition data from different caches
  Is the proposal just to automate grouping?

  >> 2) Employ segment-extent based approach instead of file-per-partition
  Is the idea to keep all colocated partitions in one or a few files?
  Something like: keep some colocated partitions together (for example, to
have files of ~2 GB) with automatic grouping/splitting?

In case both answers are "yes":
There is no need to wait for 3.0 to implement this.

1) #2 sounds like a storage optimization and can be implemented not as a
cache groups replacement, but as a "too many fsyncs" solution.
It looks to be a good idea to keep all of a replicated cache's partitions
together.

2) We can just deprecate cache groups since caches will be grouped
automatically; there is no need to atomically replace groups with the
proposed solution.

>> Once we have p.1 and p.2 ready, cache groups could be removed, couldn't
>> they?
Sounds correct.
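
For reference, a minimal sketch of what the grouping looks like from the
configuration API today (cache and group names are illustrative):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class CacheGroupSketch {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start();

            // Two logical caches sharing one cache group: they share
            // partition files and affinity structures, which is what keeps
            // the per-cache overhead near zero.
            CacheConfiguration<Long, Object> users =
                new CacheConfiguration<Long, Object>("Users_OrgA")
                    .setGroupName("orgGroup");

            CacheConfiguration<Long, Object> loans =
                new CacheConfiguration<Long, Object>("Loans_OrgA")
                    .setGroupName("orgGroup");

            ignite.getOrCreateCache(users);
            ignite.getOrCreateCache(loans);
        }
    }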


2018-04-11 14:32 GMT+03:00 Vladimir Ozerov :

> Anton,
>
> Your example is an extremely unlikely use case which we've never seen in
> the wild. But nevertheless:
> 1) Please see my original message explaining how this could be fixed
> without cache groups.
> 2) Logical cache creation also causes PME.
> 3) Yes, it is real. No fundamental limitations. In addition, removal of a
> logical cache is a costly operation with O(N) complexity, where N is the
> number of records in the cache. Removal of a physical cache is a
> constant-time operation.
> 4) I do not see how monitoring is related to cache groups.
>
> On Wed, Apr 11, 2018 at 2:02 PM, Anton Vinogradov  wrote:
>
> > Vova,
> >
> > 1) Each real cache has some megabytes of memory overhead for affinity on
> > each node.
> > A virtual cache inside a cache group consumes much less memory (~0 MB).
> >
> > 2) Real cache creation causes PME;
> > virtual cache creation just causes a minor topology increment and does
> > not stop transactions.
> >
> > Not sure about this statement, is it correct?
> >
> > 3) In case we're talking about a multi-tenant environment, we can have
> > 10,000+ organisations (or even some millions) inside one cluster, and
> > each can have ~20 caches.
> > Is it realistic to have 200,000+ caches? I don't think so. Rebalancing
> > will freeze the cluster in that case.
> >
> > Also, organisation creation/removal is a regular operation (e.g. 100+ per
> > day), and it should be fast and not cause performance degradation.
> >
> > 4) It is very useful to have monitoring based on cache groups in a
> > multi-tenant environment.
> > Each organisation will consume some megabytes, but, for example, all
> > Loans will require terabytes or have an update rate over 9000 per second,
> > and you'll see that.
> >
> > The main idea is that a virtual cache inside a cache group requires
> > almost no space, but works as well as a real cache, and even better.
> >
> >
> > 2018-04-11 13:45 GMT+03:00 Dmitry Pavlov :
> >
> > > Hi Igniters,
> > >
> > > Actually I do not understand either point of view on whether we need
> > > to keep or remove cache groups.
> > >
> > > The only reason for refactoring I see is 'too many fsyncs', but it may
> > > be solved at the level of FilePageStoreV2 with a new virtual FS for
> > > partition/index data, without any other changes.
> > >
> > > Sincerely,
> > > Dmitriy Pavlov
> > >
> > > Wed, Apr 11, 2018 at 13:30, Vladimir Ozerov :
> > >
> > > > Anton,
> > > >
> > > > I do not see the point. What is the problem with creation or removal
> > > > of a real cache?
> > > >
> > > > On Wed, Apr 11, 2018 at 1:05 PM, Anton Vinogradov 
> > wrote:
> > > >
> > > > > Vova,
> > > > >
> > > > > Cache groups are very useful.
> > > > >
> > > > > For example, you can develop multi-tenant applications using cache
> > > > > groups as templates.
> > > > > In case you have some cache groups, e.g. Users, Loans, Deposits, you
> > > > > can keep records for Organisation_A, Organisation_B and
> > > > > Organisation_C in the same data structures, but logically separated.
> > > > > Addition/removal of an organisation will not cause creation or
> > > > > removal of real caches.
> > > > >
> > > > > AFAIK, you can use GridSecurity [1] over caches inside cache groups,
> > > > > and gain a secured multi-tenant environment as a result.
> > > > >
> > > > > Can you propose a better solution without cache group usage?
> > > > >
> > > > > [1] https://docs.gridgain.com/docs/security-concepts
> > > > >
> > > > > 2018-04-11 0:24 GMT+03:00 Denis Magda :
> > > > >
> > > > > > Vladimir,
> > > > > >
> > > > > > - Data size per-cache
> > > > > >
> > > > > >
> > > > > > Could you elaborate how the data size per-cache/table task will
> > > > > > be addressed with the proposed architecture? Are you going to store 

[GitHub] ignite pull request #3796: IGNITE-8226 Logs minor improvement.

2018-04-11 Thread Jokser
GitHub user Jokser opened a pull request:

https://github.com/apache/ignite/pull/3796

IGNITE-8226 Logs minor improvement.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8226

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3796.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3796


commit fb9372fde043bd349ab6a1660a1b7dabb7453b46
Author: Pavel Kovalenko 
Date:   2018-04-11T15:25:12Z

IGNITE-8226 Hide not important warn messages.




---


Re: MTCGA: JVM crash in .NET tests - IGNITE-7871

2018-04-11 Thread Pavel Kovalenko
Hello Pavel,

Seems the problem is gone after this hotfix:
https://github.com/apache/ignite/commit/0e73fa2c10dcd96ff98279018bdd3f8b36568008
Could you please double-check that everything is OK now? Here is the latest
run, including the fix:
https://ci.ignite.apache.org/viewLog.html?buildId=1192257&tab=buildResultsDiv&buildTypeId=IgniteTests24Java8_IgnitePlatformNetCoreLinux

2018-04-11 17:17 GMT+03:00 Pavel Tupitsyn :

> Igniters,
>
> There is JVM crash in .NET Core Linux tests [1] after IGNITE-7871 merge
> [2].
>
> Pavel Kovalenko, Alexey Goncharuk, please have a look.
> In the log [3] there are some details:
>
> Unknown connection detected (is some other software connecting to this
> Ignite port? missing SSL configuration on remote node?)
> [rmtAddr=/127.0.0.1]
>
> ...
> No verification for local node leave has been received from
> coordinator (will stop node anyway).
> Critical failure. Will be handled accordingly to configured handler
> [hnd=class o.a.i.failure.StopNodeOrHaltFailureHandler,
> failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION,
> err=java.lang.IllegalStateException: Thread tcp-disco-srvr-#671%grid2%
> is terminated unexpectedly.]]
> JVM will be halted immediately due to the failure:
> [failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION,
> err=java.lang.IllegalStateException: Thread tcp-disco-srvr-#671%grid2%
> is terminated unexpectedly.]]
>
>
>
>
>
> [1]
> https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_IgnitePlatformNetCoreLinux&branch_IgniteTests24Java8=%3Cdefault%3E&tab=buildTypeStatusDiv
>
> [2]
> https://github.com/apache/ignite/commit/da77b9818a70495b7afdf6899ebd9180dadd7f68
>
> [3]
> https://ci.ignite.apache.org/repository/download/IgniteTests24Java8_IgnitePlatformNetCoreLinux/1191602:id/logs.zip%21/modules/platforms/dotnet/Apache.Ignite.Core.Tests.DotNetCore/bin/Debug/netcoreapp2.0/dotnet-test.log
>


[jira] [Created] (IGNITE-8226) Thousands of warning messages per second in log files.

2018-04-11 Thread Oleg Ostanin (JIRA)
Oleg Ostanin created IGNITE-8226:


 Summary: Thousands of warning messages per second in log files.
 Key: IGNITE-8226
 URL: https://issues.apache.org/jira/browse/IGNITE-8226
 Project: Ignite
  Issue Type: Improvement
Reporter: Oleg Ostanin


Sometimes I see this message in the log file:

[2018-04-11 15:45:30,999][WARN ][sys-#454] Partition has been scheduled for 
rebalancing due to outdated update counter 
[nodeId=bed11708-090f-4e44-a1a7-e3d2b717fcb2, grp=cache_group_5, partId=239, 
haveHistory=false]

The problem is that there are about 4 messages per 2 seconds.

Also this message:

[2018-04-11 15:03:39,997][WARN ][sys-#75] Stale update for single partition map 
update (will ignore) [grp=cache_group_46, exchId=null, 
curMap=GridDhtPartitionMap [moving=1024, top=AffinityTopologyVersion [topVer=4, 
minorTopVer=1], updateSeq=6, size=1024], newMap=GridDhtPartitionMap 
[moving=1024, top=AffinityTopologyVersion [topVer=4, minorTopVer=1], 
updateSeq=6, size=1024]]

appears about once per 2 seconds.

Can we move these messages to debug level or do something else?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: IGNITE-3999 review

2018-04-11 Thread Nikolay Izhikov
Hello, Amir.

Sorry, but no.

I will take a look in the next few days.


В Ср, 11/04/2018 в 14:51 +, Amir Akhmedov пишет:
> Hi Nikolay,
> 
> Did you have a chance to check my changes?
> 
> Thanks,
> Amir



Re: IGNITE-3999 review

2018-04-11 Thread Amir Akhmedov
Hi Nikolay,

Did you have a chance to check my changes?

Thanks,
Amir


[jira] [Created] (IGNITE-8225) Add a command to control script to print current topology version

2018-04-11 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-8225:


 Summary: Add a command to control script to print current topology 
version
 Key: IGNITE-8225
 URL: https://issues.apache.org/jira/browse/IGNITE-8225
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexey Goncharuk






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8224) Print out a warning message if there are partitions mapped only to offline nodes

2018-04-11 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-8224:


 Summary: Print out a warning message if there are partitions 
mapped only to offline nodes
 Key: IGNITE-8224
 URL: https://issues.apache.org/jira/browse/IGNITE-8224
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexey Goncharuk






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


MTCGA: JVM crash in .NET tests - IGNITE-7871

2018-04-11 Thread Pavel Tupitsyn
Igniters,

There is JVM crash in .NET Core Linux tests [1] after IGNITE-7871 merge [2].

Pavel Kovalenko, Alexey Goncharuk, please have a look.
In the log [3] there are some details:

Unknown connection detected (is some other software connecting to this
Ignite port? missing SSL configuration on remote node?)
[rmtAddr=/127.0.0.1]

...
No verification for local node leave has been received from
coordinator (will stop node anyway).
Critical failure. Will be handled accordingly to configured handler
[hnd=class o.a.i.failure.StopNodeOrHaltFailureHandler,
failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION,
err=java.lang.IllegalStateException: Thread tcp-disco-srvr-#671%grid2%
is terminated unexpectedly.]]
JVM will be halted immediately due to the failure:
[failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION,
err=java.lang.IllegalStateException: Thread tcp-disco-srvr-#671%grid2%
is terminated unexpectedly.]]





[1]
https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_IgnitePlatformNetCoreLinux&branch_IgniteTests24Java8=%3Cdefault%3E&tab=buildTypeStatusDiv

[2]
https://github.com/apache/ignite/commit/da77b9818a70495b7afdf6899ebd9180dadd7f68

[3]
https://ci.ignite.apache.org/repository/download/IgniteTests24Java8_IgnitePlatformNetCoreLinux/1191602:id/logs.zip%21/modules/platforms/dotnet/Apache.Ignite.Core.Tests.DotNetCore/bin/Debug/netcoreapp2.0/dotnet-test.log


[GitHub] ignite pull request #2436: IGNITE-5439 JDBC thin: support query cancel

2018-04-11 Thread tledkov-gridgain
Github user tledkov-gridgain closed the pull request at:

https://github.com/apache/ignite/pull/2436


---


[jira] [Created] (IGNITE-8223) GridNearTxLocal.clearPrepareFuture does effectively nothing

2018-04-11 Thread Andrey Kuznetsov (JIRA)
Andrey Kuznetsov created IGNITE-8223:


 Summary: GridNearTxLocal.clearPrepareFuture does effectively 
nothing
 Key: IGNITE-8223
 URL: https://issues.apache.org/jira/browse/IGNITE-8223
 Project: Ignite
  Issue Type: Improvement
Affects Versions: 2.4
Reporter: Andrey Kuznetsov
 Fix For: 2.6


It's unclear whether {{GridNearTxLocal.clearPrepareFuture}} is called at all,
but the method does nothing, since its argument type is never used as the
target field value. The proposed change is to make the method an explicit
no-op.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #3794: IGNITE-8148

2018-04-11 Thread devozerov
Github user devozerov closed the pull request at:

https://github.com/apache/ignite/pull/3794


---


[GitHub] ignite-release pull request #1: IGNITE-8172

2018-04-11 Thread vveider
Github user vveider commented on a diff in the pull request:

https://github.com/apache/ignite-release/pull/1#discussion_r180753764
  
--- Diff: scripts/vote_3_step_1[rpm]create_repository.sh ---
@@ -19,17 +27,19 @@ then
 fi
 echo
 
+
 #
 # Build package
 #
 echo "# Building RPM package #"
-if [ ! -f rpmbuild ]; then rm -rf rpmbuild; fi
-mkdir -pv rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
+if [ -d rpmbuild ]; then rm -r rpmbuild; fi
+mkdir -pv rpmbuild/{BUILD,RPMS,SRPMS}
 cp -rfv git/packaging/rpm/* rpmbuild
-cp -rfv svn/vote/apache-ignite-fabric-${ignite_version}-bin.zip 
rpmbuild/SOURCES/apache-ignite.zip
+cp -rfv svn/vote/apache-ignite-fabric-${ignite_version}-bin.zip 
rpmbuild/SOURCES/
 rpmbuild -bb --define "_topdir $(pwd)/rpmbuild" 
rpmbuild/SPECS/apache-ignite.spec
--- End diff --

Not until https://issues.apache.org/jira/browse/IGNITE-7251 is reviewed and 
merged to master.


---


[GitHub] ignite pull request #3785: IGNITE-8204

2018-04-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3785


---


Re: Upsource update required

2018-04-11 Thread Vyacheslav Daradur
Vitaliy, thank you!

Dmitriy, recreating the review may help if Upsource has fixed the problem.

On Mon, Apr 9, 2018 at 4:40 PM, Dmitry Pavlov  wrote:
> Hi Vitaliy,
>
> Thank you for your time and updating Upsource.
>
> It seems that for PR branches where master was merged into the branch,
> Upsource still shows changes different from the PR (GitHub). Example:
> https://reviews.ignite.apache.org/ignite/review/IGNT-CR-556 &
> https://github.com/apache/ignite/pull/3243
>
> This means we probably need to re-create a PR when `master` was merged
> into the branch. I guess if such a PR is squash-merged to a new branch
> and a new PR is created, Upsource will show the correct picture.
>
> Sincerely,
> Dmitriy Pavlov
>
> Mon, Apr 9, 2018 at 14:56, Vitaliy Osipov :
>
>> Hi All
>>
>> Upsource server (https://reviews.ignite.apache.org) has been upgraded to
>> build 2017.3.2888
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>>



-- 
Best Regards, Vyacheslav D.


[GitHub] ignite-release pull request #1: IGNITE-8172

2018-04-11 Thread alamar
Github user alamar commented on a diff in the pull request:

https://github.com/apache/ignite-release/pull/1#discussion_r180746462
  
--- Diff: scripts/vote_3_step_1[rpm]create_repository.sh ---
@@ -19,17 +27,19 @@ then
 fi
 echo
 
+
 #
 # Build package
 #
 echo "# Building RPM package #"
-if [ ! -f rpmbuild ]; then rm -rf rpmbuild; fi
-mkdir -pv rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
+if [ -d rpmbuild ]; then rm -r rpmbuild; fi
+mkdir -pv rpmbuild/{BUILD,RPMS,SRPMS}
 cp -rfv git/packaging/rpm/* rpmbuild
-cp -rfv svn/vote/apache-ignite-fabric-${ignite_version}-bin.zip 
rpmbuild/SOURCES/apache-ignite.zip
+cp -rfv svn/vote/apache-ignite-fabric-${ignite_version}-bin.zip 
rpmbuild/SOURCES/
 rpmbuild -bb --define "_topdir $(pwd)/rpmbuild" 
rpmbuild/SPECS/apache-ignite.spec
--- End diff --

I thought we don't want to be a 'fabric' anymore.


---


[GitHub] ignite pull request #3795: IGNITE-8129: fix test: setup default SSL context ...

2018-04-11 Thread tledkov-gridgain
GitHub user tledkov-gridgain opened a pull request:

https://github.com/apache/ignite/pull/3795

IGNITE-8129: fix test: setup default SSL context at the test

(because sometimes default SSL context may be setup by build system)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8129

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3795.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3795


commit cb053d060d77d80bbb848578e72808ec0c8d16f5
Author: tledkov-gridgain 
Date:   2018-04-11T12:14:26Z

IGNITE-8129: fix test: setup default SSL context at the test
(because sometimes default SSL context may be setup by build system)




---


[GitHub] ignite pull request #3735: IGNITE-8106 Collect suppressed exceptions from ca...

2018-04-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3735


---


[jira] [Created] (IGNITE-8222) Add docker image build for Nightly Release

2018-04-11 Thread Peter Ivanov (JIRA)
Peter Ivanov created IGNITE-8222:


 Summary: Add docker image build for Nightly Release
 Key: IGNITE-8222
 URL: https://issues.apache.org/jira/browse/IGNITE-8222
 Project: Ignite
  Issue Type: New Feature
Reporter: Peter Ivanov
Assignee: Peter Ivanov


# Create a meta-runner for building Docker images in TeamCity.
# Add a new build on TeamCity which will build a Docker image from master.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #3794: IGNITE-8148

2018-04-11 Thread devozerov
GitHub user devozerov opened a pull request:

https://github.com/apache/ignite/pull/3794

IGNITE-8148



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8148

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3794.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3794


commit 97b7a5ec172a1bb8fa5ca05d56255d1b44411d57
Author: devozerov 
Date:   2018-04-11T12:06:21Z

Done.

commit 421494724879d14ba1694f91c50f4f3864d829ee
Author: devozerov 
Date:   2018-04-11T12:21:58Z

Tests.




---


[jira] [Created] (IGNITE-8221) Cache management and server node authorisation

2018-04-11 Thread Alexey Kukushkin (JIRA)
Alexey Kukushkin created IGNITE-8221:


 Summary: Cache management and server node authorisation
 Key: IGNITE-8221
 URL: https://issues.apache.org/jira/browse/IGNITE-8221
 Project: Ignite
  Issue Type: Task
Reporter: Alexey Kukushkin
Assignee: Vladimir Ozerov


Add new authorisation checks requested by multiple Apache Ignite users:

CACHE_CREATE

CACHE_DESTROY

JOIN_AS_SERVER

Also, create an Ignite system property to allow disabling the "on-heap"
cache feature.
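
For context, today this is a per-cache flag; a sketch is below (the
requested system property would act as a global override, and the cache
name here is illustrative):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class OnHeapSketch {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start();

            // On-heap caching is currently a per-cache opt-in; the system
            // property requested here would allow disabling the feature
            // globally, regardless of this flag.
            ignite.createCache(
                new CacheConfiguration<Integer, String>("myCache")
                    .setOnheapCacheEnabled(true));
        }
    }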

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Ignite documentation is broken

2018-04-11 Thread Dmitry Pavlov
Hi Dmitriy,

It seems the attachment is not displayed; probably image attachments are
not allowed by the dev list.

readme works for me too.

Sincerely,
Dmitriy Pavlov

Wed, Apr 11, 2018 at 11:56, Dmitriy Setrakyan :

> This is what I see (attached)
>
> On Wed, Apr 11, 2018 at 1:52 AM, Alexey Kuznetsov 
> wrote:
>
>> I have no problems.
>>
>> Try other browser  / or force refresh of page.
>>
>> On Wed, Apr 11, 2018 at 3:49 PM, Dmitriy Setrakyan > >
>> wrote:
>>
>> > Igniters,
>> >
>> > The readme documentation seems broken. Is it only for me, or others
>> > experience the same thing?
>> >
>> > https://apacheignite.readme.io/docs
>> >
>> > Did anyone change anything in the docs settings?
>> >
>> > D.
>> >
>>
>>
>>
>> --
>> Alexey Kuznetsov
>>
>
>


Re: Remove cache groups in AI 3.0

2018-04-11 Thread Vladimir Ozerov
Anton,

Your example is an extremely unlikely use case which we've never seen in
the wild. But nevertheless:
1) Please see my original message explaining how this could be fixed
without cache groups.
2) Logical cache creation also causes PME.
3) Yes, it is real. No fundamental limitations. In addition, removal of a
logical cache is a costly operation with O(N) complexity, where N is the
number of records in the cache. Removal of a physical cache is a
constant-time operation.
4) I do not see how monitoring is related to cache groups.

On Wed, Apr 11, 2018 at 2:02 PM, Anton Vinogradov  wrote:

> Vova,
>
> 1) Each real cache has some megabytes of memory overhead for affinity on
> each node.
> A virtual cache inside a cache group consumes much less memory (~0 MB).
>
> 2) Real cache creation causes PME;
> virtual cache creation just causes a minor topology increment and does not
> stop transactions.
>
> Not sure about this statement, is it correct?
>
> 3) In case we're talking about a multi-tenant environment, we can have
> 10,000+ organisations (or even some millions) inside one cluster, and each
> can have ~20 caches.
> Is it realistic to have 200,000+ caches? I don't think so. Rebalancing
> will freeze the cluster in that case.
>
> Also, organisation creation/removal is a regular operation (e.g. 100+ per
> day), and it should be fast and not cause performance degradation.
>
> 4) It is very useful to have monitoring based on cache groups in a
> multi-tenant environment.
> Each organisation will consume some megabytes, but, for example, all Loans
> will require terabytes or have an update rate over 9000 per second, and
> you'll see that.
>
> The main idea is that a virtual cache inside a cache group requires almost
> no space, but works as well as a real cache, and even better.
>
>
> 2018-04-11 13:45 GMT+03:00 Dmitry Pavlov :
>
> > Hi Igniters,
> >
> > Actually I do not understand either point of view on whether we need to
> > keep or remove cache groups.
> >
> > The only reason for refactoring I see is 'too many fsyncs', but it may be
> > solved at the level of FilePageStoreV2 with a new virtual FS for
> > partition/index data, without any other changes.
> >
> > Sincerely,
> > Dmitriy Pavlov
> >
> > Wed, Apr 11, 2018 at 13:30, Vladimir Ozerov :
> >
> > > Anton,
> > >
> > > I do not see the point. What is the problem with creation or removal
> > > of a real cache?
> > >
> > > On Wed, Apr 11, 2018 at 1:05 PM, Anton Vinogradov 
> wrote:
> > >
> > > > Vova,
> > > >
> > > > Cache groups are very useful.
> > > >
> > > > For example, you can develop multi-tenant applications using cache
> > > > groups as templates.
> > > > In case you have some cache groups, e.g. Users, Loans, Deposits, you
> > > > can keep records for Organisation_A, Organisation_B and
> > > > Organisation_C in the same data structures, but logically separated.
> > > > Addition/removal of an organisation will not cause creation or
> > > > removal of real caches.
> > > >
> > > > AFAIK, you can use GridSecurity [1] over caches inside cache groups,
> > > > and gain a secured multi-tenant environment as a result.
> > > >
> > > > Can you propose a better solution without cache group usage?
> > > >
> > > > [1] https://docs.gridgain.com/docs/security-concepts
> > > >
> > > > 2018-04-11 0:24 GMT+03:00 Denis Magda :
> > > >
> > > > > Vladimir,
> > > > >
> > > > > - Data size per-cache
> > > > >
> > > > >
> > > > > Could you elaborate how the data size per-cache/table task will be
> > > > > addressed with the proposed architecture? Are you going to store
> > > > > data of a specific cache in dedicated pages/segments? What about
> > > > > index size?
> > > > >
> > > > > --
> > > > > Denis
> > > > >
> > > > > On Tue, Apr 10, 2018 at 2:31 AM, Vladimir Ozerov <
> > voze...@gridgain.com
> > > >
> > > > > wrote:
> > > > >
> > > > > > Dima,
> > > > > >
> > > > > > 1) Easy to understand for users
> > > > > > AI 2.x: cluster -> cache group -> cache -> table
> > > > > > AI 3.x: cluster -> cache(==table)
> > > > > >
> > > > > > 2) Fine grained cache management
> > > > > > - MVCC on/off per-cache
> > > > > > - WAL mode on/off per-cache
> > > > > > - Data size per-cache
> > > > > >
> > > > > > 3) Performance:
> > > > > > - Efficient scans are not possible with cache groups
> > > > > > - Efficient destroy/DROP - O(N) now, O(1) afterwards
> > > > > >
> > > > > > "Huge refactoring" is not precise estimate. Let's think on how to
> > do
> > > > that
> > > > > > instead of how not to do :-)
> > > > > >
> > > > > > On Tue, Apr 10, 2018 at 11:41 AM, Dmitriy Setrakyan <
> > > > > dsetrak...@apache.org
> > > > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Vladimir, sounds like a huge refactoring. Other than "cache
> > groups
> > > > are
> > > > > > > confusing", are we solving any other big issues with the new
> > > proposed
> > > > > > > approach?
> > > > > > >
> > > > > > > (every time we try to refactor rebalancing, I get goose bumps)
> > > > > > >
> > > > 

Re: Remove cache groups in AI 3.0

2018-04-11 Thread Vladimir Ozerov
Dima,

The question is: would we need cache groups if physical caches had the
same performance as logical ones?

On Wed, Apr 11, 2018 at 1:45 PM, Dmitry Pavlov 
wrote:

> Hi Igniters,
>
> Actually I do not understand either point of view on whether we need to
> keep or remove cache groups.
>
> The only reason for refactoring I see is 'too many fsyncs', but it may be
> solved at the level of FilePageStoreV2 with a new virtual FS for
> partition/index data, without any other changes.
>
> Sincerely,
> Dmitriy Pavlov
>
> ср, 11 апр. 2018 г. в 13:30, Vladimir Ozerov :
>
> > Anton,
> >
> > I do not see the point. What is the problem with creation or removal
> > of a real cache?
> >
> > On Wed, Apr 11, 2018 at 1:05 PM, Anton Vinogradov  wrote:
> >
> > > Vova,
> > >
> > > Cache groups are very useful.
> > >
> > > For example, you can develop multi-tenant applications using cache
> > > groups as templates.
> > > In case you have some cache groups, e.g. Users, Loans, Deposits, you can
> > > keep records for Organisation_A, Organisation_B and Organisation_C in
> > > the same data structures, but logically separated.
> > > Addition/removal of an organisation will not cause creation or removal
> > > of real caches.
> > >
> > > AFAIK, you can use GridSecurity [1] over caches inside cache groups,
> > > and gain a secured multi-tenant environment as a result.
> > >
> > > Can you propose a better solution without cache group usage?
> > >
> > > [1] https://docs.gridgain.com/docs/security-concepts
> > >
> > > 2018-04-11 0:24 GMT+03:00 Denis Magda :
> > >
> > > > Vladimir,
> > > >
> > > > - Data size per-cache
> > > >
> > > >
> > > > Could you elaborate how the data size per-cache/table task will be
> > > > addressed with the proposed architecture? Are you going to store data
> > > > of a specific cache in dedicated pages/segments? What about index size?
> > > >
> > > > --
> > > > Denis
> > > >
> > > > On Tue, Apr 10, 2018 at 2:31 AM, Vladimir Ozerov <
> voze...@gridgain.com
> > >
> > > > wrote:
> > > >
> > > > > Dima,
> > > > >
> > > > > 1) Easy to understand for users
> > > > > AI 2.x: cluster -> cache group -> cache -> table
> > > > > AI 3.x: cluster -> cache(==table)
> > > > >
> > > > > 2) Fine grained cache management
> > > > > - MVCC on/off per-cache
> > > > > - WAL mode on/off per-cache
> > > > > - Data size per-cache
> > > > >
> > > > > 3) Performance:
> > > > > - Efficient scans are not possible with cache groups
> > > > > - Efficient destroy/DROP - O(N) now, O(1) afterwards
> > > > >
> > > > > "Huge refactoring" is not precise estimate. Let's think on how to
> do
> > > that
> > > > > instead of how not to do :-)
> > > > >
> > > > > On Tue, Apr 10, 2018 at 11:41 AM, Dmitriy Setrakyan <
> > > > dsetrak...@apache.org
> > > > > >
> > > > > wrote:
> > > > >
> > > > > > Vladimir, sounds like a huge refactoring. Other than "cache
> groups
> > > are
> > > > > > confusing", are we solving any other big issues with the new
> > proposed
> > > > > > approach?
> > > > > >
> > > > > > (every time we try to refactor rebalancing, I get goose bumps)
> > > > > >
> > > > > > D.
> > > > > >
> > > > > > On Tue, Apr 10, 2018 at 1:32 AM, Vladimir Ozerov <
> > > voze...@gridgain.com
> > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Igniters,
> > > > > > >
> > > > > > > Cache groups were implemented for a sole purpose - to hide
> > internal
> > > > > > > inefficiencies. Namely (add more if I missed something):
> > > > > > > 1) Excessive heap usage for affinity/partition data
> > > > > > > 2) Too much data files as we employ file-per-partition
> approach.
> > > > > > >
> > > > > > > These problems were resolved, but now cache groups are a great
> > > source
> > > > > of
> > > > > > > confusion both for users and us - hard to understand, no way to
> > > > > configure
> > > > > > > it in deterministic way. Should we resolve mentioned
> performance
> > > > issues
> > > > > > we
> > > > > > > would never had cache groups. I propose to think we would it
> take
> > > for
> > > > > us
> > > > > > to
> > > > > > > get rid of cache groups.
> > > > > > >
> > > > > > > Please provide your inputs to suggestions below.
> > > > > > >
> > > > > > > 1) "Merge" partition data from different caches
> > > > > > > Consider that we start a new cache with the same affinity
> > > > configuration
> > > > > > > (cache mode, partition number, affinity function) as some of
> > > already
> > > > > > > existing caches, Is it possible to re-use partition
> distribution
> > > and
> > > > > > > history of existing cache for a new cache? Think of it as a
> kind
> > of
> > > > > > > automatic cache grouping which is transparent to the user. This
> > > would
> > > > > > > remove heap pressure. Also it could resolve our long-standing
> > issue
> > > > > with
> > > > > > > FairAffinityFunction when tow caches with the same affinity
> > > > > configuration
> > > > > > > are not co-located when started on different 

[GitHub] ignite pull request #3793: IGNITE-7871 Check local join future on error.

2018-04-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3793


---


[GitHub] ignite pull request #3793: IGNITE-7871 Check local join future on error.

2018-04-11 Thread Jokser
GitHub user Jokser opened a pull request:

https://github.com/apache/ignite/pull/3793

IGNITE-7871 Check local join future on error.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-7871-micro-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3793.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3793


commit 427cb4c4025534134bb448ecbdc1172845a7adaa
Author: Pavel Kovalenko 
Date:   2018-04-11T11:11:22Z

IGNITE-7871 Check local join future on error.




---


[jira] [Created] (IGNITE-8220) Discovery worker termination in PDS test

2018-04-11 Thread Dmitriy Pavlov (JIRA)
Dmitriy Pavlov created IGNITE-8220:
--

 Summary: Discovery worker termination in PDS test
 Key: IGNITE-8220
 URL: https://issues.apache.org/jira/browse/IGNITE-8220
 Project: Ignite
  Issue Type: Test
  Components: persistence
Reporter: Dmitriy Pavlov
Assignee: Pavel Kovalenko
 Fix For: 2.6




https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_IgnitePds1_IgniteTests24Java8=%3Cdefault%3E=buildTypeStatusDiv
https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_PdsDirectIo1_IgniteTests24Java8=%3Cdefault%3E=buildTypeStatusDiv
https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_ActivateDeactivateCluster_IgniteTests24Java8=%3Cdefault%3E=buildTypeStatusDiv

{noformat}
[2018-04-11 
02:43:09,769][ERROR][tcp-disco-srvr-#2298%cache.IgniteClusterActivateDeactivateTestWithPersistence0%][IgniteTestResources]
 Critical failure. Will be handled accordingly to configured handler [hnd=class 
o.a.i.failure.NoOpFailureHandler, failureCtx=FailureContext 
[type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: Thread 
tcp-disco-srvr-#2298%cache.IgniteClusterActivateDeactivateTestWithPersistence0% 
is terminated unexpectedly.]] 
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Remove cache groups in AI 3.0

2018-04-11 Thread Anton Vinogradov
Vova,

1) Each real cache has a few megabytes of affinity-related memory overhead on
each node.
A virtual cache inside a cache group consumes much less memory (~0 MB).

2) Real cache creation causes a PME,
while virtual cache creation just causes a minor topology version increment and
does not stop transactions.

Not sure about this statement, is it correct?

3) In case we're talking about a multi-tenant environment, we can have
10_000+ organisations (or even some millions) inside one cluster, and each can
have ~20 caches.
Is it realistic to have 200_000+ caches? I don't think so. Rebalancing would
freeze the cluster in that case.

Also, organisation creation/removal is a regular operation (e.g. 100+ per
day), and it should be fast and not cause performance degradation.

4) It is very useful to have monitoring based on cache groups in a
multi-tenant environment.
Each organisation will consume some megabytes, but, for example, all Loans
together will require terabytes or have an update rate over 9000 per second,
and you'll see that.

The main idea is that a virtual cache inside a cache group requires almost no
space, but works as well as a real cache, and even better.


2018-04-11 13:45 GMT+03:00 Dmitry Pavlov :

> Hi Igniters,
>
> Actually I do not understand both points of view: we need to (keep/remove)
> cache groups.
>
> Only one reason for refactoring I see : 'too much fsyncs', but it may be
> solved at level of FilePageStoreV2 with new virtual FS for partitions/index
> data, without any other changes.
>
> Sincerely,
> Dmitriy Pavlov
>
> Wed, 11 Apr 2018 at 13:30, Vladimir Ozerov :
>
> > Anton,
> >
> > I do not see the point. What is the problem with creation or removal of
> > real cache?
> >
> > On Wed, Apr 11, 2018 at 1:05 PM, Anton Vinogradov  wrote:
> >
> > > Vova,
> > >
> > > Cache groups are very useful.
> > >
> > > For example, you can develop multi-tenant applications using cache
> groups
> > > as a templates.
> > > In case you have some cache groups, eg. Users, Loans, Deposits, you can
> > > keep records for Organisation_A, Organisation_B and Organisation_C at
> > same
> > > data sctuctures, but logically separated.
> > > Addition/Removal of orgatisation will not cause creation or removal of
> > real
> > > caches.
> > >
> > > ASAIK, you can use GridSecurity [1] over caches inside cache groups,
> and
> > > gain secured multi-tenant environment as a result.
> > >
> > > Can you propose better solution without cache groups usage?
> > >
> > > [1] https://docs.gridgain.com/docs/security-concepts
> > >
> > > 2018-04-11 0:24 GMT+03:00 Denis Magda :
> > >
> > > > Vladimir,
> > > >
> > > > - Data size per-cache
> > > >
> > > >
> > > > Could you elaborate how the data size per-cache/table task will be
> > > > addressed with proposed architecture? Are you going to store data of
> a
> > > > specific cache in dedicated pages/segments? What's about index size?
> > > >
> > > > --
> > > > Denis
> > > >
> > > > On Tue, Apr 10, 2018 at 2:31 AM, Vladimir Ozerov <
> voze...@gridgain.com
> > >
> > > > wrote:
> > > >
> > > > > Dima,
> > > > >
> > > > > 1) Easy to understand for users
> > > > > AI 2.x: cluster -> cache group -> cache -> table
> > > > > AI 3.x: cluster -> cache(==table)
> > > > >
> > > > > 2) Fine grained cache management
> > > > > - MVCC on/off per-cache
> > > > > - WAL mode on/off per-cache
> > > > > - Data size per-cache
> > > > >
> > > > > 3) Performance:
> > > > > - Efficient scans are not possible with cache groups
> > > > > - Efficient destroy/DROP - O(N) now, O(1) afterwards
> > > > >
> > > > > "Huge refactoring" is not precise estimate. Let's think on how to
> do
> > > that
> > > > > instead of how not to do :-)
> > > > >
> > > > > On Tue, Apr 10, 2018 at 11:41 AM, Dmitriy Setrakyan <
> > > > dsetrak...@apache.org
> > > > > >
> > > > > wrote:
> > > > >
> > > > > > Vladimir, sounds like a huge refactoring. Other than "cache
> groups
> > > are
> > > > > > confusing", are we solving any other big issues with the new
> > proposed
> > > > > > approach?
> > > > > >
> > > > > > (every time we try to refactor rebalancing, I get goose bumps)
> > > > > >
> > > > > > D.
> > > > > >
> > > > > > On Tue, Apr 10, 2018 at 1:32 AM, Vladimir Ozerov <
> > > voze...@gridgain.com
> > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Igniters,
> > > > > > >
> > > > > > > Cache groups were implemented for a sole purpose - to hide
> > internal
> > > > > > > inefficiencies. Namely (add more if I missed something):
> > > > > > > 1) Excessive heap usage for affinity/partition data
> > > > > > > 2) Too much data files as we employ file-per-partition
> approach.
> > > > > > >
> > > > > > > These problems were resolved, but now cache groups are a great
> > > source
> > > > > of
> > > > > > > confusion both for users and us - hard to understand, no way to
> > > > > configure
> > > > > > > it in deterministic way. Should we resolve mentioned
> performance
> > > > issues
> > > > > > we
> > > > > > > would never had cache 

Re: Remove cache groups in AI 3.0

2018-04-11 Thread Dmitry Pavlov
Hi Igniters,

Actually I do not fully understand either point of view: that we need to keep
cache groups, or that we need to remove them.

The only reason for refactoring I see is 'too many fsyncs', but that may be
solved at the level of FilePageStoreV2 with a new virtual FS for
partition/index data, without any other changes.

Sincerely,
Dmitriy Pavlov

Wed, 11 Apr 2018 at 13:30, Vladimir Ozerov :

> Anton,
>
> I do not see the point. What is the problem with creation or removal of
> real cache?
>
> On Wed, Apr 11, 2018 at 1:05 PM, Anton Vinogradov  wrote:
>
> > Vova,
> >
> > Cache groups are very useful.
> >
> > For example, you can develop multi-tenant applications using cache groups
> > as a templates.
> > In case you have some cache groups, eg. Users, Loans, Deposits, you can
> > keep records for Organisation_A, Organisation_B and Organisation_C at
> same
> > data sctuctures, but logically separated.
> > Addition/Removal of orgatisation will not cause creation or removal of
> real
> > caches.
> >
> > ASAIK, you can use GridSecurity [1] over caches inside cache groups, and
> > gain secured multi-tenant environment as a result.
> >
> > Can you propose better solution without cache groups usage?
> >
> > [1] https://docs.gridgain.com/docs/security-concepts
> >
> > 2018-04-11 0:24 GMT+03:00 Denis Magda :
> >
> > > Vladimir,
> > >
> > > - Data size per-cache
> > >
> > >
> > > Could you elaborate how the data size per-cache/table task will be
> > > addressed with proposed architecture? Are you going to store data of a
> > > specific cache in dedicated pages/segments? What's about index size?
> > >
> > > --
> > > Denis
> > >
> > > On Tue, Apr 10, 2018 at 2:31 AM, Vladimir Ozerov  >
> > > wrote:
> > >
> > > > Dima,
> > > >
> > > > 1) Easy to understand for users
> > > > AI 2.x: cluster -> cache group -> cache -> table
> > > > AI 3.x: cluster -> cache(==table)
> > > >
> > > > 2) Fine grained cache management
> > > > - MVCC on/off per-cache
> > > > - WAL mode on/off per-cache
> > > > - Data size per-cache
> > > >
> > > > 3) Performance:
> > > > - Efficient scans are not possible with cache groups
> > > > - Efficient destroy/DROP - O(N) now, O(1) afterwards
> > > >
> > > > "Huge refactoring" is not precise estimate. Let's think on how to do
> > that
> > > > instead of how not to do :-)
> > > >
> > > > On Tue, Apr 10, 2018 at 11:41 AM, Dmitriy Setrakyan <
> > > dsetrak...@apache.org
> > > > >
> > > > wrote:
> > > >
> > > > > Vladimir, sounds like a huge refactoring. Other than "cache groups
> > are
> > > > > confusing", are we solving any other big issues with the new
> proposed
> > > > > approach?
> > > > >
> > > > > (every time we try to refactor rebalancing, I get goose bumps)
> > > > >
> > > > > D.
> > > > >
> > > > > On Tue, Apr 10, 2018 at 1:32 AM, Vladimir Ozerov <
> > voze...@gridgain.com
> > > >
> > > > > wrote:
> > > > >
> > > > > > Igniters,
> > > > > >
> > > > > > Cache groups were implemented for a sole purpose - to hide
> internal
> > > > > > inefficiencies. Namely (add more if I missed something):
> > > > > > 1) Excessive heap usage for affinity/partition data
> > > > > > 2) Too much data files as we employ file-per-partition approach.
> > > > > >
> > > > > > These problems were resolved, but now cache groups are a great
> > source
> > > > of
> > > > > > confusion both for users and us - hard to understand, no way to
> > > > configure
> > > > > > it in deterministic way. Should we resolve mentioned performance
> > > issues
> > > > > we
> > > > > > would never had cache groups. I propose to think we would it take
> > for
> > > > us
> > > > > to
> > > > > > get rid of cache groups.
> > > > > >
> > > > > > Please provide your inputs to suggestions below.
> > > > > >
> > > > > > 1) "Merge" partition data from different caches
> > > > > > Consider that we start a new cache with the same affinity
> > > configuration
> > > > > > (cache mode, partition number, affinity function) as some of
> > already
> > > > > > existing caches, Is it possible to re-use partition distribution
> > and
> > > > > > history of existing cache for a new cache? Think of it as a kind
> of
> > > > > > automatic cache grouping which is transparent to the user. This
> > would
> > > > > > remove heap pressure. Also it could resolve our long-standing
> issue
> > > > with
> > > > > > FairAffinityFunction when tow caches with the same affinity
> > > > configuration
> > > > > > are not co-located when started on different topology versions.
> > > > > >
> > > > > > 2) Employ segment-extent based approach instead of
> > file-per-partition
> > > > > > - Every object (cache, index) reside in dedicated segment
> > > > > > - Segment consists of extents (minimal allocation units)
> > > > > > - Extents are allocated and deallocated as needed
> > > > > > - *Ignite specific*: particular extent can be used by only one
> > > > partition
> > > > > > - Segments may be located in any number of data files we find

Re: Remove cache groups in AI 3.0

2018-04-11 Thread Vladimir Ozerov
Denis,

Normally, every database object, whether it is a table or an index, is kept
in its own exclusive segment. A segment can span one or more real files.
A segment always has a kind of allocation map that allows quickly getting the
number of allocated pages for a specific object.
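
To make this concrete, here is a toy sketch of a segment with an extent
allocation map (an illustration of the concept only, not Ignite code; the
extent size is an arbitrary assumption):

{code:java}
import java.util.BitSet;

// Toy model: a segment owns fixed-size extents, and a bitmap records which
// extents are allocated, so "how many pages does object X use?" is answered
// from the map instead of by scanning the data files.
class Segment {
    static final int EXTENT_PAGES = 64; // pages per extent (assumed)

    private final BitSet allocated = new BitSet();

    /** Allocates the first free extent and returns its index. */
    int allocateExtent() {
        int idx = allocated.nextClearBit(0);

        allocated.set(idx);

        return idx;
    }

    /** Frees an extent in O(1), which is what makes destroy/DROP cheap. */
    void freeExtent(int idx) {
        allocated.clear(idx);
    }

    /** Pages currently allocated to this segment's object. */
    int allocatedPages() {
        return allocated.cardinality() * EXTENT_PAGES;
    }
}
{code}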

On Wed, Apr 11, 2018 at 12:24 AM, Denis Magda  wrote:

> Vladimir,
>
> - Data size per-cache
>
>
> Could you elaborate how the data size per-cache/table task will be
> addressed with proposed architecture? Are you going to store data of a
> specific cache in dedicated pages/segments? What's about index size?
>
> --
> Denis
>
> On Tue, Apr 10, 2018 at 2:31 AM, Vladimir Ozerov 
> wrote:
>
> > Dima,
> >
> > 1) Easy to understand for users
> > AI 2.x: cluster -> cache group -> cache -> table
> > AI 3.x: cluster -> cache(==table)
> >
> > 2) Fine grained cache management
> > - MVCC on/off per-cache
> > - WAL mode on/off per-cache
> > - Data size per-cache
> >
> > 3) Performance:
> > - Efficient scans are not possible with cache groups
> > - Efficient destroy/DROP - O(N) now, O(1) afterwards
> >
> > "Huge refactoring" is not precise estimate. Let's think on how to do that
> > instead of how not to do :-)
> >
> > On Tue, Apr 10, 2018 at 11:41 AM, Dmitriy Setrakyan <
> dsetrak...@apache.org
> > >
> > wrote:
> >
> > > Vladimir, sounds like a huge refactoring. Other than "cache groups are
> > > confusing", are we solving any other big issues with the new proposed
> > > approach?
> > >
> > > (every time we try to refactor rebalancing, I get goose bumps)
> > >
> > > D.
> > >
> > > On Tue, Apr 10, 2018 at 1:32 AM, Vladimir Ozerov  >
> > > wrote:
> > >
> > > > Igniters,
> > > >
> > > > Cache groups were implemented for a sole purpose - to hide internal
> > > > inefficiencies. Namely (add more if I missed something):
> > > > 1) Excessive heap usage for affinity/partition data
> > > > 2) Too much data files as we employ file-per-partition approach.
> > > >
> > > > These problems were resolved, but now cache groups are a great source
> > of
> > > > confusion both for users and us - hard to understand, no way to
> > configure
> > > > it in deterministic way. Should we resolve mentioned performance
> issues
> > > we
> > > > would never had cache groups. I propose to think we would it take for
> > us
> > > to
> > > > get rid of cache groups.
> > > >
> > > > Please provide your inputs to suggestions below.
> > > >
> > > > 1) "Merge" partition data from different caches
> > > > Consider that we start a new cache with the same affinity
> configuration
> > > > (cache mode, partition number, affinity function) as some of already
> > > > existing caches, Is it possible to re-use partition distribution and
> > > > history of existing cache for a new cache? Think of it as a kind of
> > > > automatic cache grouping which is transparent to the user. This would
> > > > remove heap pressure. Also it could resolve our long-standing issue
> > with
> > > > FairAffinityFunction when tow caches with the same affinity
> > configuration
> > > > are not co-located when started on different topology versions.
> > > >
> > > > 2) Employ segment-extent based approach instead of file-per-partition
> > > > - Every object (cache, index) reside in dedicated segment
> > > > - Segment consists of extents (minimal allocation units)
> > > > - Extents are allocated and deallocated as needed
> > > > - *Ignite specific*: particular extent can be used by only one
> > partition
> > > > - Segments may be located in any number of data files we find
> > convenient
> > > > With this approach "too many fsyncs" problem goes away automatically.
> > At
> > > > the same time it would be possible to implement efficient rebalance
> > still
> > > > as partition data will be split across moderate number of extents,
> not
> > > > chaotically.
> > > >
> > > > Once we have p.1 and p.2 ready cache groups could be removed,
> couldn't
> > > > they?
> > > >
> > > > Vladimir.
> > > >
> > >
> >
>


Re: Remove cache groups in AI 3.0

2018-04-11 Thread Vladimir Ozerov
Anton,

I do not see the point. What is the problem with the creation or removal of
a real cache?

On Wed, Apr 11, 2018 at 1:05 PM, Anton Vinogradov  wrote:

> Vova,
>
> Cache groups are very useful.
>
> For example, you can develop multi-tenant applications using cache groups
> as a templates.
> In case you have some cache groups, eg. Users, Loans, Deposits, you can
> keep records for Organisation_A, Organisation_B and Organisation_C at same
> data sctuctures, but logically separated.
> Addition/Removal of orgatisation will not cause creation or removal of real
> caches.
>
> ASAIK, you can use GridSecurity [1] over caches inside cache groups, and
> gain secured multi-tenant environment as a result.
>
> Can you propose better solution without cache groups usage?
>
> [1] https://docs.gridgain.com/docs/security-concepts
>
> 2018-04-11 0:24 GMT+03:00 Denis Magda :
>
> > Vladimir,
> >
> > - Data size per-cache
> >
> >
> > Could you elaborate how the data size per-cache/table task will be
> > addressed with proposed architecture? Are you going to store data of a
> > specific cache in dedicated pages/segments? What's about index size?
> >
> > --
> > Denis
> >
> > On Tue, Apr 10, 2018 at 2:31 AM, Vladimir Ozerov 
> > wrote:
> >
> > > Dima,
> > >
> > > 1) Easy to understand for users
> > > AI 2.x: cluster -> cache group -> cache -> table
> > > AI 3.x: cluster -> cache(==table)
> > >
> > > 2) Fine grained cache management
> > > - MVCC on/off per-cache
> > > - WAL mode on/off per-cache
> > > - Data size per-cache
> > >
> > > 3) Performance:
> > > - Efficient scans are not possible with cache groups
> > > - Efficient destroy/DROP - O(N) now, O(1) afterwards
> > >
> > > "Huge refactoring" is not precise estimate. Let's think on how to do
> that
> > > instead of how not to do :-)
> > >
> > > On Tue, Apr 10, 2018 at 11:41 AM, Dmitriy Setrakyan <
> > dsetrak...@apache.org
> > > >
> > > wrote:
> > >
> > > > Vladimir, sounds like a huge refactoring. Other than "cache groups
> are
> > > > confusing", are we solving any other big issues with the new proposed
> > > > approach?
> > > >
> > > > (every time we try to refactor rebalancing, I get goose bumps)
> > > >
> > > > D.
> > > >
> > > > On Tue, Apr 10, 2018 at 1:32 AM, Vladimir Ozerov <
> voze...@gridgain.com
> > >
> > > > wrote:
> > > >
> > > > > Igniters,
> > > > >
> > > > > Cache groups were implemented for a sole purpose - to hide internal
> > > > > inefficiencies. Namely (add more if I missed something):
> > > > > 1) Excessive heap usage for affinity/partition data
> > > > > 2) Too much data files as we employ file-per-partition approach.
> > > > >
> > > > > These problems were resolved, but now cache groups are a great
> source
> > > of
> > > > > confusion both for users and us - hard to understand, no way to
> > > configure
> > > > > it in deterministic way. Should we resolve mentioned performance
> > issues
> > > > we
> > > > > would never had cache groups. I propose to think we would it take
> for
> > > us
> > > > to
> > > > > get rid of cache groups.
> > > > >
> > > > > Please provide your inputs to suggestions below.
> > > > >
> > > > > 1) "Merge" partition data from different caches
> > > > > Consider that we start a new cache with the same affinity
> > configuration
> > > > > (cache mode, partition number, affinity function) as some of
> already
> > > > > existing caches, Is it possible to re-use partition distribution
> and
> > > > > history of existing cache for a new cache? Think of it as a kind of
> > > > > automatic cache grouping which is transparent to the user. This
> would
> > > > > remove heap pressure. Also it could resolve our long-standing issue
> > > with
> > > > > FairAffinityFunction when tow caches with the same affinity
> > > configuration
> > > > > are not co-located when started on different topology versions.
> > > > >
> > > > > 2) Employ segment-extent based approach instead of
> file-per-partition
> > > > > - Every object (cache, index) reside in dedicated segment
> > > > > - Segment consists of extents (minimal allocation units)
> > > > > - Extents are allocated and deallocated as needed
> > > > > - *Ignite specific*: particular extent can be used by only one
> > > partition
> > > > > - Segments may be located in any number of data files we find
> > > convenient
> > > > > With this approach "too many fsyncs" problem goes away
> automatically.
> > > At
> > > > > the same time it would be possible to implement efficient rebalance
> > > still
> > > > > as partition data will be split across moderate number of extents,
> > not
> > > > > chaotically.
> > > > >
> > > > > Once we have p.1 and p.2 ready cache groups could be removed,
> > couldn't
> > > > > they?
> > > > >
> > > > > Vladimir.
> > > > >
> > > >
> > >
> >
>


Re: Remove cache groups in AI 3.0

2018-04-11 Thread Vladimir Ozerov
Dmitry,

If you do this, why would you need cache groups at all?

On Tue, Apr 10, 2018 at 1:58 PM, Dmitry Pavlov 
wrote:

> Hi Vladimir,
>
> We can solve "too many fsyncs" or 'too many small files' by placing several
> partitions of cache group in one file.
>
> We don't need to get rid from cache groups in this case.
>
> It is not trivial task, but it is doable. We need to create simplest FS for
> paritition chunks inside one file.
>
> Sincerely,
> Dmitriy Pavlov
>
> Tue, 10 Apr 2018 at 12:31, Vladimir Ozerov :
>
> > Dima,
> >
> > 1) Easy to understand for users
> > AI 2.x: cluster -> cache group -> cache -> table
> > AI 3.x: cluster -> cache(==table)
> >
> > 2) Fine grained cache management
> > - MVCC on/off per-cache
> > - WAL mode on/off per-cache
> > - Data size per-cache
> >
> > 3) Performance:
> > - Efficient scans are not possible with cache groups
> > - Efficient destroy/DROP - O(N) now, O(1) afterwards
> >
> > "Huge refactoring" is not precise estimate. Let's think on how to do that
> > instead of how not to do :-)
> >
> > On Tue, Apr 10, 2018 at 11:41 AM, Dmitriy Setrakyan <
> dsetrak...@apache.org
> > >
> > wrote:
> >
> > > Vladimir, sounds like a huge refactoring. Other than "cache groups are
> > > confusing", are we solving any other big issues with the new proposed
> > > approach?
> > >
> > > (every time we try to refactor rebalancing, I get goose bumps)
> > >
> > > D.
> > >
> > > On Tue, Apr 10, 2018 at 1:32 AM, Vladimir Ozerov  >
> > > wrote:
> > >
> > > > Igniters,
> > > >
> > > > Cache groups were implemented for a sole purpose - to hide internal
> > > > inefficiencies. Namely (add more if I missed something):
> > > > 1) Excessive heap usage for affinity/partition data
> > > > 2) Too much data files as we employ file-per-partition approach.
> > > >
> > > > These problems were resolved, but now cache groups are a great source
> > of
> > > > confusion both for users and us - hard to understand, no way to
> > configure
> > > > it in deterministic way. Should we resolve mentioned performance
> issues
> > > we
> > > > would never had cache groups. I propose to think we would it take for
> > us
> > > to
> > > > get rid of cache groups.
> > > >
> > > > Please provide your inputs to suggestions below.
> > > >
> > > > 1) "Merge" partition data from different caches
> > > > Consider that we start a new cache with the same affinity
> configuration
> > > > (cache mode, partition number, affinity function) as some of already
> > > > existing caches, Is it possible to re-use partition distribution and
> > > > history of existing cache for a new cache? Think of it as a kind of
> > > > automatic cache grouping which is transparent to the user. This would
> > > > remove heap pressure. Also it could resolve our long-standing issue
> > with
> > > > FairAffinityFunction when tow caches with the same affinity
> > configuration
> > > > are not co-located when started on different topology versions.
> > > >
> > > > 2) Employ segment-extent based approach instead of file-per-partition
> > > > - Every object (cache, index) reside in dedicated segment
> > > > - Segment consists of extents (minimal allocation units)
> > > > - Extents are allocated and deallocated as needed
> > > > - *Ignite specific*: particular extent can be used by only one
> > partition
> > > > - Segments may be located in any number of data files we find
> > convenient
> > > > With this approach "too many fsyncs" problem goes away automatically.
> > At
> > > > the same time it would be possible to implement efficient rebalance
> > still
> > > > as partition data will be split across moderate number of extents,
> not
> > > > chaotically.
> > > >
> > > > Once we have p.1 and p.2 ready cache groups could be removed,
> couldn't
> > > > they?
> > > >
> > > > Vladimir.
> > > >
> > >
> >
>


Re: Remove cache groups in AI 3.0

2018-04-11 Thread Anton Vinogradov
Vova,

Cache groups are very useful.

For example, you can develop multi-tenant applications using cache groups
as templates.
In case you have some cache groups, e.g. Users, Loans, Deposits, you can
keep records for Organisation_A, Organisation_B and Organisation_C in the same
data structures, but logically separated.
Addition/removal of an organisation will not cause creation or removal of real
caches.
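
For illustration, per-tenant caches sharing one group could be declared like
this (a minimal sketch; the cache names are made up, and "ignite" is assumed
to be a started Ignite instance):

{code:java}
import org.apache.ignite.configuration.CacheConfiguration;

// Both tenants' "Users" caches join one cache group and therefore share
// partition structures, while remaining logically separate caches.
CacheConfiguration<Long, Object> usersA =
    new CacheConfiguration<Long, Object>("Organisation_A_Users").setGroupName("Users");

CacheConfiguration<Long, Object> usersB =
    new CacheConfiguration<Long, Object>("Organisation_B_Users").setGroupName("Users");

ignite.getOrCreateCache(usersA);
ignite.getOrCreateCache(usersB);
{code}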

AFAIK, you can use GridSecurity [1] over caches inside cache groups, and
gain a secured multi-tenant environment as a result.

Can you propose a better solution without using cache groups?

[1] https://docs.gridgain.com/docs/security-concepts

2018-04-11 0:24 GMT+03:00 Denis Magda :

> Vladimir,
>
> - Data size per-cache
>
>
> Could you elaborate how the data size per-cache/table task will be
> addressed with proposed architecture? Are you going to store data of a
> specific cache in dedicated pages/segments? What's about index size?
>
> --
> Denis
>
> On Tue, Apr 10, 2018 at 2:31 AM, Vladimir Ozerov 
> wrote:
>
> > Dima,
> >
> > 1) Easy to understand for users
> > AI 2.x: cluster -> cache group -> cache -> table
> > AI 3.x: cluster -> cache(==table)
> >
> > 2) Fine grained cache management
> > - MVCC on/off per-cache
> > - WAL mode on/off per-cache
> > - Data size per-cache
> >
> > 3) Performance:
> > - Efficient scans are not possible with cache groups
> > - Efficient destroy/DROP - O(N) now, O(1) afterwards
> >
> > "Huge refactoring" is not precise estimate. Let's think on how to do that
> > instead of how not to do :-)
> >
> > On Tue, Apr 10, 2018 at 11:41 AM, Dmitriy Setrakyan <
> dsetrak...@apache.org
> > >
> > wrote:
> >
> > > Vladimir, sounds like a huge refactoring. Other than "cache groups are
> > > confusing", are we solving any other big issues with the new proposed
> > > approach?
> > >
> > > (every time we try to refactor rebalancing, I get goose bumps)
> > >
> > > D.
> > >
> > > On Tue, Apr 10, 2018 at 1:32 AM, Vladimir Ozerov  >
> > > wrote:
> > >
> > > > Igniters,
> > > >
> > > > Cache groups were implemented for a sole purpose - to hide internal
> > > > inefficiencies. Namely (add more if I missed something):
> > > > 1) Excessive heap usage for affinity/partition data
> > > > 2) Too much data files as we employ file-per-partition approach.
> > > >
> > > > These problems were resolved, but now cache groups are a great source
> > of
> > > > confusion both for users and us - hard to understand, no way to
> > configure
> > > > it in deterministic way. Should we resolve mentioned performance
> issues
> > > we
> > > > would never had cache groups. I propose to think we would it take for
> > us
> > > to
> > > > get rid of cache groups.
> > > >
> > > > Please provide your inputs to suggestions below.
> > > >
> > > > 1) "Merge" partition data from different caches
> > > > Consider that we start a new cache with the same affinity
> configuration
> > > > (cache mode, partition number, affinity function) as some of already
> > > > existing caches, Is it possible to re-use partition distribution and
> > > > history of existing cache for a new cache? Think of it as a kind of
> > > > automatic cache grouping which is transparent to the user. This would
> > > > remove heap pressure. Also it could resolve our long-standing issue
> > with
> > > > FairAffinityFunction when tow caches with the same affinity
> > configuration
> > > > are not co-located when started on different topology versions.
> > > >
> > > > 2) Employ segment-extent based approach instead of file-per-partition
> > > > - Every object (cache, index) reside in dedicated segment
> > > > - Segment consists of extents (minimal allocation units)
> > > > - Extents are allocated and deallocated as needed
> > > > - *Ignite specific*: particular extent can be used by only one
> > partition
> > > > - Segments may be located in any number of data files we find
> > convenient
> > > > With this approach "too many fsyncs" problem goes away automatically.
> > At
> > > > the same time it would be possible to implement efficient rebalance
> > still
> > > > as partition data will be split across moderate number of extents,
> not
> > > > chaotically.
> > > >
> > > > Once we have p.1 and p.2 ready cache groups could be removed,
> couldn't
> > > > they?
> > > >
> > > > Vladimir.
> > > >
> > >
> >
>


[GitHub] ignite pull request #3768: IGNITE-8111 Add extra validation for WAL segment ...

2018-04-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3768


---


[jira] [Created] (IGNITE-8219) B+Tree operation may result in an infinite loop in some cases

2018-04-11 Thread Alexey Stelmak (JIRA)
Alexey Stelmak created IGNITE-8219:
--

 Summary:  B+Tree operation may result in an infinite loop in some cases
 Key: IGNITE-8219
 URL: https://issues.apache.org/jira/browse/IGNITE-8219
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.4
Reporter: Alexey Stelmak
Assignee: Alexey Stelmak
 Fix For: 2.5


A B+Tree operation may result in an infinite loop in some cases. Reproduced by the test 
DynamicIndexServerCoordinatorBasicSelfTest#testCreateIndexWithInlineSizePartitionedAtomic
 with region size = 512Mb, KEY_BEFORE=1, KEY_AFTER=2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8218) Add exchange latch state to diagnostic messages

2018-04-11 Thread Pavel Kovalenko (JIRA)
Pavel Kovalenko created IGNITE-8218:
---

 Summary: Add exchange latch state to diagnostic messages
 Key: IGNITE-8218
 URL: https://issues.apache.org/jira/browse/IGNITE-8218
 Project: Ignite
  Issue Type: Improvement
  Components: cache
Affects Versions: 2.5
Reporter: Pavel Kovalenko
Assignee: Pavel Kovalenko
 Fix For: 2.5






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8217) Example DbH2ServerStartup produces a huge annoying output

2018-04-11 Thread Sergey Kozlov (JIRA)
Sergey Kozlov created IGNITE-8217:
-

 Summary: Example DbH2ServerStartup produces a huge annoying output
 Key: IGNITE-8217
 URL: https://issues.apache.org/jira/browse/IGNITE-8217
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.4
Reporter: Sergey Kozlov
 Fix For: 2.5


Example DbH2ServerStartup produces a huge annoying output:
{noformat}
Type 'q' and press 'Enter' to stop H2 TCP server...
{noformat}
This is due to the following code:
{code:java}
try {
    do {
        System.out.println("Type 'q' and press 'Enter' to stop H2 TCP server...");
    }
    while ('q' != System.in.read());
}
catch (IOException ignored) {
    // No-op.
}
{code}
I suppose we can put {{Thread.sleep(1000)}} in the {{while}} loop and reduce
the repeated lines.
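
A minimal sketch of that change (assuming InterruptedException may be
swallowed the same way as IOException):
{code:java}
try {
    do {
        System.out.println("Type 'q' and press 'Enter' to stop H2 TCP server...");

        // Throttle the prompt: when stdin is at EOF, read() returns -1
        // immediately, and without the sleep the loop floods the console.
        Thread.sleep(1000);
    }
    while ('q' != System.in.read());
}
catch (IOException | InterruptedException ignored) {
    // No-op.
}
{code}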



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #3583: IGNITE-7830: Adopt kNN regression model to the ne...

2018-04-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3583


---


Please, add me to contributors

2018-04-11 Thread Загумённов Евгений
Hello, I'm Evgenii Zagumennov. My Jira ID is "ezagumennov". Please add me to
the contributors list.
 
Regards, Evgenii



Re: Ignite documentation is broken

2018-04-11 Thread Dmitriy Setrakyan
This is what I see (attached)

On Wed, Apr 11, 2018 at 1:52 AM, Alexey Kuznetsov 
wrote:

> I have no problems.
>
> Try other browser  / or force refresh of page.
>
> On Wed, Apr 11, 2018 at 3:49 PM, Dmitriy Setrakyan 
> wrote:
>
> > Igniters,
> >
> > The readme documentation seems broken. Is it only for me, or others
> > experience the same thing?
> >
> > https://apacheignite.readme.io/docs
> >
> > Did anyone change anything in the docs settings?
> >
> > D.
> >
>
>
>
> --
> Alexey Kuznetsov
>


Re: Ignite documentation is broken

2018-04-11 Thread Petr Ivanov
+1 for OK.
Successfully logged in to the admin panel; everything seems to be in the right place.



> On 11 Apr 2018, at 11:52, Alexey Kuznetsov  wrote:
> 
> I have no problems.
> 
> Try other browser  / or force refresh of page.
> 
> On Wed, Apr 11, 2018 at 3:49 PM, Dmitriy Setrakyan 
> wrote:
> 
>> Igniters,
>> 
>> The readme documentation seems broken. Is it only for me, or others
>> experience the same thing?
>> 
>> https://apacheignite.readme.io/docs
>> 
>> Did anyone change anything in the docs settings?
>> 
>> D.
>> 
> 
> 
> 
> -- 
> Alexey Kuznetsov



Re: Service grid redesign

2018-04-11 Thread Denis Mekhanikov
Guys,

I'm also thinking about the moment at which services should be deployed on
joining nodes.
It would be good to have all services deployed by the moment a node
is accepted into the topology.
I think it should work like this:

   1. the connecting node sends a *TcpDiscoveryJoinRequestMessage* with
   the persisted service configurations attached to it;
   2. the coordinator recalculates service assignments and attaches them to the
   subsequent *TcpDiscoveryNodeAddedMessage*;
   3. the connecting node receives the assignments, initialises all needed
   services and sends a confirmation to the coordinator on completion;
   4. the coordinator sends the *TcpDiscoveryNodeAddFinishedMessage* only when it
   receives the confirmation about deployed services from the joining node.

What do you think? Any pitfalls? A discovery expert should look at this
procedure and tell whether it is viable.

Denis

Wed, 11 Apr 2018 at 11:28, Denis Mekhanikov :

> Denis,
>
> Sounds reasonable. It's not clear, though, what should happen, if a
> joining node has some services persisted, that are missing on other nodes.
> Should we deploy them?
> If we do so, it could lead to surprising behaviour. For example you could
> kill a node, undeploy a service, then bring back an old node, and it would
> make the service resurrect.
> We could store some deployment counter along with the service
> configurations on all nodes, that would show how many times the service
> state has changed, i.e. it has been undeployed/redeployed. It should be
> kept for undeployed services as well to avoid situations like I described.
>
> But it still leaves a possibility of incorrect behaviour, if there was a
> split-brain situation at some point. I don't think we should precess it
> somehow, though. If we choose to tackle it, it will overcomplicate things
> for a sake of a minor improvement.
>
> Denis
>
> вт, 10 апр. 2018 г. в 0:55, Valentin Kulichenko <
> valentin.kuliche...@gmail.com>:
>
>> I was responding to another Denis :) Agree with you on your point though.
>>
>> -Val
>>
>> On Mon, Apr 9, 2018 at 2:48 PM, Denis Magda  wrote:
>>
>> > Val,
>> >
>> > Guess we're talking about other situations. I'm bringing up the case
>> when a
>> > service was deployed dynamically and has to be brought up after a full
>> > cluster restart w/o user intervention. To achieve this we need to
>> persist
>> > the service's configuration somewhere.
>> >
>> > --
>> > Denis
>> >
>> > On Mon, Apr 9, 2018 at 1:42 PM, Valentin Kulichenko <
>> > valentin.kuliche...@gmail.com> wrote:
>> >
>> > > Denis,
>> > >
>> > > EVT_CLASS_DEPLOYED should be fired every time a class is deployed or
>> > > redeployed. If this doesn't happen in some cases, I believe this would
>> > be a
>> > > bug. I don't think we need to add any new events.
>> > >
>> > > -Val
>> > >
>> > > On Mon, Apr 9, 2018 at 10:50 AM, Denis Magda 
>> wrote:
>> > >
>> > > > Denis,
>> > > >
>> > > > I would encourage us to persist a service configuration in the meta
>> > store
>> > > > and have this capability enabled by default. That's essential for
>> > > services
>> > > > started dynamically. Moreover, we support similar behavior for
>> caches,
>> > > > indexes, and other DDL changes happened at runtime.
>> > > >
>> > > > --
>> > > > Denis
>> > > >
>> > > > On Mon, Apr 9, 2018 at 9:34 AM, Denis Mekhanikov <
>> > dmekhani...@gmail.com>
>> > > > wrote:
>> > > >
>> > > > > Another question, that I would like to discuss is whether services
>> > > should
>> > > > > be preserved on cluster restarts.
>> > > > >
>> > > > > Currently it depends on persistence configuration. If persistence
>> for
>> > > any
>> > > > > data region is enabled, then services will be persisted as well.
>> This
>> > > is
>> > > > a
>> > > > > pretty strange way of configuring this behaviour.
>> > > > > I'm not sure, if anybody relies on this functionality right now.
>> > Should
>> > > > we
>> > > > > support it at all? If yes, should we make it configurable?
>> > > > >
>> > > > > Denis
>> > > > >
> > > > > > Mon, 9 Apr 2018 at 19:27, Denis Mekhanikov <
>> dmekhani...@gmail.com
>> > >:
>> > > > >
>> > > > > > Val,
>> > > > > >
>> > > > > > Sounds reasonable. I just think, that user should have some way
>> to
>> > > > know,
>> > > > > > that new version of a service class was deployed.
>> > > > > > One way to do it is to listen to *EVT_CLASS_DEPLOYED. *I'm not
>> > sure,
>> > > > > > whether it is triggered on class redeployment, though. If not,
>> then
>> > > > > another
>> > > > > > event type should be added.
>> > > > > >
>> > > > > > I don't think, that a lot of people will implement their own
>> > > > > > *DeploymentSpi*-s, so we should make work with
>> *UriDeploymentSpi*
>> > as
>> > > > > > comfortable as possible.
>> > > > > >
>> > > > > > Denis
>> > > > > >
> > > > > > Fri, 6 Apr 2018 at 23:40, Valentin Kulichenko <
>> > > > > > valentin.kuliche...@gmail.com>:
>> > > > > >
>> > > > > >> Yes, the class 

Re: Ignite documentation is broken

2018-04-11 Thread Alexey Kuznetsov
I have no problems.

Try other browser  / or force refresh of page.

On Wed, Apr 11, 2018 at 3:49 PM, Dmitriy Setrakyan 
wrote:

> Igniters,
>
> The readme documentation seems broken. Is it only for me, or others
> experience the same thing?
>
> https://apacheignite.readme.io/docs
>
> Did anyone change anything in the docs settings?
>
> D.
>



-- 
Alexey Kuznetsov


Ignite documentation is broken

2018-04-11 Thread Dmitriy Setrakyan
Igniters,

The readme documentation seems broken. Is it only for me, or others
experience the same thing?

https://apacheignite.readme.io/docs

Did anyone change anything in the docs settings?

D.


[jira] [Created] (IGNITE-8216) Zookeeper test-jar artifact building and minor javadoc improvement

2018-04-11 Thread Sergey Chugunov (JIRA)
Sergey Chugunov created IGNITE-8216:
---

 Summary: Zookeeper test-jar artifact building and minor javadoc 
improvement
 Key: IGNITE-8216
 URL: https://issues.apache.org/jira/browse/IGNITE-8216
 Project: Ignite
  Issue Type: Improvement
  Components: zookeeper
Reporter: Sergey Chugunov
Assignee: Sergey Chugunov
 Fix For: 2.5


The Zookeeper test-jar artifact should be built in the same way as the core module test-jar.

A javadoc title should be provided for the *org.apache.ignite.failure* package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Service grid redesign

2018-04-11 Thread Denis Mekhanikov
Denis,

Sounds reasonable. It's not clear, though, what should happen if a joining
node has some services persisted that are missing on the other nodes.
Should we deploy them?
If we do so, it could lead to surprising behaviour. For example, you could
kill a node, undeploy a service, then bring the old node back, and it would
make the service resurrect.
We could store a deployment counter along with the service
configurations on all nodes, which would show how many times the service
state has changed, i.e. how many times it has been undeployed/redeployed. It
should be kept for undeployed services as well, to avoid situations like the
one I described.
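
A tiny sketch of such a record (all names here are illustrative):

{code:java}
// Illustrative only. Each node would keep one record per service, including
// undeployed ones; on node join the record with the larger counter wins, so
// a stale node cannot resurrect an undeployed service.
class ServiceRecord {
    final String name;
    final long deployCounter; // incremented on every deploy/undeploy
    final boolean undeployed; // kept after undeploy on purpose

    ServiceRecord(String name, long deployCounter, boolean undeployed) {
        this.name = name;
        this.deployCounter = deployCounter;
        this.undeployed = undeployed;
    }

    /** Merges the local record with the one brought by a joining node. */
    static ServiceRecord merge(ServiceRecord local, ServiceRecord joined) {
        return joined.deployCounter > local.deployCounter ? joined : local;
    }
}
{code}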

But it still leaves a possibility of incorrect behaviour if there was a
split-brain situation at some point. I don't think we should handle that case
specially, though. If we choose to tackle it, it will overcomplicate things
for the sake of a minor improvement.

Denis

Tue, 10 Apr 2018 at 0:55, Valentin Kulichenko <
valentin.kuliche...@gmail.com>:

> I was responding to another Denis :) Agree with you on your point though.
>
> -Val
>
> On Mon, Apr 9, 2018 at 2:48 PM, Denis Magda  wrote:
>
> > Val,
> >
> > Guess we're talking about other situations. I'm bringing up the case
> when a
> > service was deployed dynamically and has to be brought up after a full
> > cluster restart w/o user intervention. To achieve this we need to persist
> > the service's configuration somewhere.
> >
> > --
> > Denis
> >
> > On Mon, Apr 9, 2018 at 1:42 PM, Valentin Kulichenko <
> > valentin.kuliche...@gmail.com> wrote:
> >
> > > Denis,
> > >
> > > EVT_CLASS_DEPLOYED should be fired every time a class is deployed or
> > > redeployed. If this doesn't happen in some cases, I believe this would
> > be a
> > > bug. I don't think we need to add any new events.
> > >
> > > -Val
> > >
> > > On Mon, Apr 9, 2018 at 10:50 AM, Denis Magda 
> wrote:
> > >
> > > > Denis,
> > > >
> > > > I would encourage us to persist a service configuration in the meta
> > store
> > > > and have this capability enabled by default. That's essential for
> > > services
> > > > started dynamically. Moreover, we support similar behavior for
> caches,
> > > > indexes, and other DDL changes happened at runtime.
> > > >
> > > > --
> > > > Denis
> > > >
> > > > On Mon, Apr 9, 2018 at 9:34 AM, Denis Mekhanikov <
> > dmekhani...@gmail.com>
> > > > wrote:
> > > >
> > > > > Another question, that I would like to discuss is whether services
> > > should
> > > > > be preserved on cluster restarts.
> > > > >
> > > > > Currently it depends on persistence configuration. If persistence
> for
> > > any
> > > > > data region is enabled, then services will be persisted as well.
> This
> > > is
> > > > a
> > > > > pretty strange way of configuring this behaviour.
> > > > > I'm not sure, if anybody relies on this functionality right now.
> > Should
> > > > we
> > > > > support it at all? If yes, should we make it configurable?
> > > > >
> > > > > Denis
> > > > >
> > > > > Mon, 9 Apr 2018 at 19:27, Denis Mekhanikov <
> dmekhani...@gmail.com
> > >:
> > > > >
> > > > > > Val,
> > > > > >
> > > > > > Sounds reasonable. I just think, that user should have some way
> to
> > > > know,
> > > > > > that new version of a service class was deployed.
> > > > > > One way to do it is to listen to *EVT_CLASS_DEPLOYED. *I'm not
> > sure,
> > > > > > whether it is triggered on class redeployment, though. If not,
> then
> > > > > another
> > > > > > event type should be added.
> > > > > >
> > > > > > I don't think, that a lot of people will implement their own
> > > > > > *DeploymentSpi*-s, so we should make work with *UriDeploymentSpi*
> > as
> > > > > > comfortable as possible.
> > > > > >
> > > > > > Denis
> > > > > >
> > > > > > Fri, 6 Apr 2018 at 23:40, Valentin Kulichenko <
> > > > > > valentin.kuliche...@gmail.com>:
> > > > > >
> > > > > >> Yes, the class deployment itself has to be explicit. I.e., there
> > has
> > > > to
> > > > > be
> > > > > >> a manual step where user updates the class, and the exact step
> > > > required
> > > > > >> would depend on DeploymentSpi implementation. But then Ignite
> > takes
> > > > care
> > > > > >> of
> > > > > >> everything else - service redeployment and restart is automatic.
> > > > > >>
> > > > > >> Dmitriy Pavlov, all this is going to be disabled if
> DeploymentSpi
> > is
> > > > not
> > > > > >> configured. In this case service class definitions have to be
> > > deployed
> > > > > on
> > > > > >> local classpath and can't be updated in runtime. Just like it
> > works
> > > > > right
> > > > > >> now.
> > > > > >>
> > > > > >> -Val
> > > > > >>
> > > > > >> On Fri, Apr 6, 2018 at 10:20 AM, Dmitriy Setrakyan <
> > > > > dsetrak...@apache.org
> > > > > >> >
> > > > > >> wrote:
> > > > > >>
> > > > > >> > On Fri, Apr 6, 2018 at 9:13 AM, Dmitry Pavlov <
> > > > dpavlov@gmail.com>
> > > > > >> > wrote:
> > > > > >> >
> > > > > >> > > Hi Igniters,
> > > > > >> > >
> > > > > >> > > I like 

Re: Reconsider default WAL mode: we need something between LOG_ONLY and FSYNC

2018-04-11 Thread Dmitriy Setrakyan
On Tue, Apr 10, 2018 at 11:57 PM, Ilya Suntsov 
wrote:

> Dmitriy,
>
> I've measured performance on the current master and haven't found any
> problems with in-memory mode.
>

Got it. I would still say that the performance drop is too big with
persistence turned on. It seems like we did not just fix the bug, we also
introduced some additional slowdown there. I would investigate whether we can
optimize it.
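
For reference, the persistent LOG_ONLY setup under discussion is configured
roughly like this (a sketch; region sizing and other tuning are omitted):

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.WALMode;

public class LogOnlyBenchSetup {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Persistence enabled, with the WAL in the LOG_ONLY mode that the
        // benchmarks in this thread exercise.
        cfg.setDataStorageConfiguration(new DataStorageConfiguration()
            .setWalMode(WALMode.LOG_ONLY)
            .setDefaultDataRegionConfiguration(new DataRegionConfiguration()
                .setPersistenceEnabled(true)));

        Ignite ignite = Ignition.start(cfg);

        // With persistence, the cluster must be activated before use.
        ignite.cluster().active(true);
    }
}
{code}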


Re: Reconsider default WAL mode: we need something between LOG_ONLY and FSYNC

2018-04-11 Thread Ilya Suntsov
Dmitriy,

I've measured performance on the current master and haven't found any
problems with in-memory mode.

On Tue, Apr 10, 2018, 20:33 Dmitriy Setrakyan  wrote:

> I am not convinced that the performance degradation is only due to the new
> change that fixes the incorrect behavior. To my knowledge, there is also a
> drop in memory-only mode. Can someone explain why do we have such a drop?
>
> D.
>
> On Tue, Apr 10, 2018 at 9:08 AM, Vladimir Ozerov 
> wrote:
>
> > 16% looks perfectly ok to me provided that we compare correct
> > implementation with incorrect one.
> >
> > Tue, 10 Apr 2018 at 18:24, Dmitriy Setrakyan :
> >
> > > Ilya, can we find out why pure in-memory scenario also had a
> performance
> > > drop and which commit caused it? It should not be affected by changes
> in
> > > persistence at all.
> > >
> > > D.
> > >
> > > On Tue, Apr 10, 2018 at 7:56 AM, Ilya Suntsov 
> > > wrote:
> > >
> > > > Igniters,
> > > >
> > > > Looks like commit:
> > > >
> > > > d0adb61ecd9af0d9907e480ec747ea1465f97cd7 is the first bad commit
> > > > > commit d0adb61ecd9af0d9907e480ec747ea1465f97cd7
> > > > > Author: Ivan Rakov 
> > > > > Date:   Tue Mar 27 20:11:52 2018 +0300
> > > > > IGNITE-7754 WAL in LOG_ONLY mode doesn't execute fsync on
> > > checkpoint
> > > > > begin - Fixes #3656.
> > > >
> > > >
> > > > was the cause of performance drop ( > 10% vs AI 2.4.0) on the
> following
> > > > benchmarks (LOG_ONLY):
> > > >
> > > >- atomic-put  (16 %)
> > > >- atomic-putAll (14 %)
> > > >- tx-putAll (11 %)
> > > >
> > > > As I understand it is greater than initial assessment.
> > > >
> > > > Thoughts?
> > > >
> > > > 2018-03-27 20:13 GMT+03:00 Dmitry Pavlov :
> > > >
> > > > > Ivan, sure :)
> > > > >
> > > > > Thank you for this contribution, merged to master.
> > > > >
> > > > > Tue, 27 Mar 2018 at 20:08, Ivan Rakov :
> > > > >
> > > > > > Dmitry,
> > > > > >
> > > > > > Firstly PR contained dirty fix for performance measurement, but
> now
> > > it
> > > > > > contains good fix. :) Sorry for inconvenience.
> > > > > > I've renamed the PR.
> > > > > >
> > > > > > Best Regards,
> > > > > > Ivan Rakov
> > > > > >
> > > > > > On 27.03.2018 19:40, Dmitry Pavlov wrote:
> > > > > > > Hi Eduard, thank you for review.
> > > > > > >
> > > > > > > Hi Ivan,
> > > > > > >
> > > > > > > I'm confused on PR naming
> > > > > > > https://github.com/apache/ignite/pull/3656
> > > > > > >
> > > > > > > Could you rename?
> > > > > > >
> > > > > > > Sincerely,
> > > > > > > Dmitriy Pavlov
> > > > > > >
> > > > > > Tue, 27 Mar 2018 at 19:38, Eduard Shangareev <
> > > > > > eduard.shangar...@gmail.com
> > > > > > >> :
> > > > > > >> Ivan, I have reviewed your changes, looks good.
> > > > > > >>
> > > > > > >> On Tue, Mar 27, 2018 at 2:56 PM, Ivan Rakov <
> > > ivan.glu...@gmail.com>
> > > > > > wrote:
> > > > > > >>
> > > > > > >>> Igniters,
> > > > > > >>>
> > > > > > >>> I've completed development of https://issues.apache.org/jira
> > > > > > >>> /browse/IGNITE-7754. TeamCity state is ok. Please, review my
> > > > changes.
> > > > > > >>> Please note that it will be possible to track time of WAL
> fsync
> > > on
> > > > > > >>> checkpoint begin by *walCpRecordFsyncDuration *metric in
> > > > "Checkpoint
> > > > > > >>> started" message.
> > > > > > >>>
> > > > > > >>> Also, I've created https://issues.apache.org/
> > > > jira/browse/IGNITE-8057
> > > > > > >> with
> > > > > > >>> description of possible further improvement of WAL fsync on
> > > > > checkpoint
> > > > > > >>> begin.
> > > > > > >>>
> > > > > > >>> Best Regards,
> > > > > > >>> Ivan Rakov
> > > > > > >>>
> > > > > > >>>
> > > > > > >>> On 26.03.2018 23:45, Valentin Kulichenko wrote:
> > > > > > >>>
> > > > > >  Ivan,
> > > > > > 
> > > > > >  It's all good then :) Thanks!
> > > > > > 
> > > > > >  -Val
> > > > > > 
> > > > > >  On Mon, Mar 26, 2018 at 1:50 AM, Ivan Rakov <
> > > > ivan.glu...@gmail.com>
> > > > > >  wrote:
> > > > > > 
> > > > > >  Val,
> > > > > > > There's no any sense to use WalMode.NONE in production
> > > > environment,
> > > > > > >> it's
> > > > > > > kept for testing and debugging purposes (including possible
> > > user
> > > > > > > activities
> > > > > > > like capacity planning).
> > > > > > > We already print a warning at node start in case
> WalMode.NONE
> > > is
> > > > > set:
> > > > > > >
> > > > > > > U.quietAndWarn(log,"Started write-ahead log manager in NONE
> > > mode,
> > > > > > >
> > > > > > >> persisted data may be lost in " +
> > > > > > >>"a case of unexpected node failure. Make sure to
> > > > deactivate
> > > > > > the
> > > > > > >> cluster before shutdown.");
> > > > > > >>
> > > > > > >> Best Regards,
> > > > > > > Ivan Rakov
> > > > > > >
> >