Re: Apache Ignite RPM packaging (Stage II)

2018-04-13 Thread Peter Ivanov
The current package design (after installation) does not differ from the binary
archive: everything works just the same way (except that you run the service
instead of ignite.sh), including libs/optional.

Also, there can be issues with the default system JDK version, but every
problem will show up in journalctl and/or /var/log, and the package has a strong
dependency on any implementation of JDK 1.8.


I am lacking feedback about the Apache Ignite packages from real
users and don't know the real use cases, so I am still "moving in the dark".


On Fri, 13 Apr 2018 at 22:18, Denis Magda  wrote:

> Ilya,
>
> Thanks for your inputs. The reason why we decided to split Ignite into
> several packages mimics the reason why Java community introduced modular
> subsystem for JDK. That's all about size. Ignite distribution is too big,
> and we're trying to separate it into several components so that people can
> install only the features they need.
>
> The point of a package is to ship something into root file system that can
> > be used from root file system. If cpp files require compilation we should
> > not ship them, or ship them to 'examples'. Ditto with benchmarks. If
> > there's no mechanism to add optional libs to Ignite classpath, we should
> > not ship optional libs. Moreover, some of 'optional' modules such as yarn
> > don't make sense here because they're not supposed to be used with
> > standalone Ignite.
>
>
> Agree that we need to ship the code that is ready to be run. As for the
> classpath thing, if an optional package is installed into the root (core)
> package directory, then its jars have to be added to "ignite/libs" folder.
> After that, one needs to restart the cluster node, and it will add the
> just-installed optional libs to the classpath. *Petr*, does it work this
> way or can be implemented this way to address Ilya's concerns?
>
> --
> Denis
>
> On Fri, Apr 13, 2018 at 7:00 AM, Ilya Kasnacheev <
> ilya.kasnach...@gmail.com>
> wrote:
>
> > 2018-04-13 7:42 GMT+03:00 Peter Ivanov :
> >
> > > On Thu, 12 Apr 2018 at 20:04, Ilya Kasnacheev <
> ilya.kasnach...@gmail.com
> > >
> > > wrote:
> > >
> > > >
> > > > Moreover I did not find a way to start service if default installed
> JVM
> > > is
> > > > Java 7 :( I understand it's EOL, still this is something that hit me.
> > >
> > >
> > > Apache Ignite >=2.4 does not support Java 7 - it is said in
> > documentation,
> > > DEVNOTES and even in startup scripts.
> > >
> > >  I have Java 8 too, but I could not get service from package to start
> > Ignite since there's nowhere to put JAVA_HOME (or JVM_ARGS for that
> > matter). Is it possible to specify it while running packaged Ignite?
> >
> >
> > >
> > > >
> > > > apache-ignite-libs is a totally unexpected package name.
> apache-ignite
> > > core
> > > > doesn't depend on it. It doesn't enable anything out of the box. The
> > > > package is huge.
> > >
> > > ‘apache-ignite-libs’ is an aggregation package (for now) for all
> optional
> > > libs we are delivering. Possibly later they will be split more granular
> > or
> > > even package per lib (like php, perl, python, etc. do for their libs).
> > > This package dependency on ‘apache-ignite-core’ may seem confusing
> > though,
> > > I will try to explain it in IEP at least for current iteration.
> > >
> >
> > Okay, but how do you add optional libs to be included into Ignite
> classpath
> > while being launched by service? Is it even possible? If it isn't, I
> think
> > it doesn't make sense to ship apache-ignite-libs at all.
> >
> >
> > >
> > > Further naming may become clear when we’ll start initiative on
> including
> > > packages to popular Linux distributions and theirs community will join
> > > naming discussions.
> > >
> > Renaming packages once they're deployed widely will be a pain point for
> our
> > users. Some things should probably be thought out in advance.
> >
> >
> > >
> > >
> > >
> > > >
> > > > Frankly speaking, I'm not sure that improvements over Stage I are
> > enough
> > > as
> > > > of now. For demo-like activity, we can probably go with one package
> > fits
> > > > all.
> > > >
> > >
> > > The process of finding the best package architecture is iterative, but
> > > previously community agreed in split design proposed for 2.5 release.
> > >
> > > Also, split architecture is half of proposed improvements. The other
> > half -
> > > new process for deploying packages to Bintray (with virtually
> indefinite
> > > storage capabilities).
> > >
> > I think we could drop the split for now, or at least drop
> > apache-ignite-libs package at all. Probably also drop apache-ignite-cpp
> > package and maybe apache-ignite-benchmarks.
> >
> > The point of a package is to ship something into root file system that
> can
> > be used from root file system. If cpp files require compilation we should
> > not ship them, or ship them to 'examples'. Ditto with benchmarks. If
> > there's no mechanism to add optional libs to Ignite classpath, 

Re: Service grid redesign

2018-04-13 Thread Denis Magda
It sounds like it's not a trivial thing to support automatic services
redeployment after a restart. Let's postpone it for now, guys, and concentrate
on more urgent things related to the services.

Alex, Vladimir,

Could you have a look at Denis's question about the discovery-based
deployment? I guess it's the only thing that prevents us from finalizing the
IEP.

--
Denis

On Fri, Apr 13, 2018 at 5:30 AM, Denis Mekhanikov 
wrote:

> Vladimir,
>
> Currently we don't save binary metadata to disk when persistence is
> disabled.
> But we still persist marshaller mappings for some reason, and I personally
> believe that we shouldn't.
>
> But I agree that we should separate data and service persistence
> configuration.
> Right now, persistence of services is configured in a pretty non-obvious
> manner.
> There should be a clear way to tell Ignite whether you want services to be
> persisted or not.
>
> I'm not sure that we should make "statefulness" configurable in general.
> Users don't care much whether metadata is preserved on restarts or not.
>
> Denis
>
> пт, 13 апр. 2018 г. в 14:29, Vladimir Ozerov :
>
> > Alex,
> >
> > I would say that we've already had this behavior for years - the marshaller
> > cache. I think it is time to agree that "in-memory" != stateless; instead,
> > "in-memory" means "data is not stored on disk".
> > Maybe we can have a flag which will wipe out all metadata on node
> restart
> > (e.g. it could make sense for embedded clients)?
> >
> > On Fri, Apr 13, 2018 at 12:48 PM, Alexey Goncharuk <
> > alexey.goncha...@gmail.com> wrote:
> >
> > > Denis,
> > >
> > > This is a subtle question. It looks like we now have a number of
> > use-cases
> > > where persistent storage is required even for a pure in-memory mode. One
> > of
> > > the use-cases is thin client authentication, the other is service grid
> > > configuration persistence.
> > >
> > > Generally, I would agree that this is an expected behavior. However,
> this
> > > means that a user cannot simply start and stop nodes randomly anymore.
> > > Ignite start will require some sort of installation or work folder
> > > initialization (sort of initdb in postgres) which is ok for
> > > persistence-enabled modes, but I am not sure if this is expected for
> > > in-memory. Of course, we can run this initialization automatically, but
> > it
> > > is not always a good idea.
> > >
> > > If we are OK with having these restrictions for in-memory mode, then service
> > > persistence makes sense.
> > >
> > > --AG
> > >
> > > 2018-04-11 22:36 GMT+03:00 Denis Magda :
> > >
> > >> Denis,
> > >>
> > >> I think that the service deployment state needs to be persisted
> > cluster-wide.
> > >> I guess that our meta-store is capable of doing so. Alex G., Vladimir,
> > >> could you confirm?
> > >>
> > >> As for the split-brain scenarios, I would put them aside for now
> > because,
> > >> anyway, they have to be solved at lower levels (meta store, discovery,
> > >> etc.).
> > >>
> > >> Also, I heard that presently we store a service configuration in the
> > >> system
> > >> cache that doesn't give us a way to deploy a new version of a service
> > >> without undeployment of the previous one. Will this issue be addressed
> > by
> > >> the new deployment approach?
> > >>
> > >> --
> > >> Denis
> > >>
> > >> On Wed, Apr 11, 2018 at 1:28 AM, Denis Mekhanikov <
> > dmekhani...@gmail.com>
> > >> wrote:
> > >>
> > >> > Denis,
> > >> >
> > >> > Sounds reasonable. It's not clear, though, what should happen if a
> > >> joining
> > >> > node has some services persisted that are missing on other nodes.
> > >> > Should we deploy them?
> > >> > If we do so, it could lead to surprising behaviour. For example, you
> > >> could
> > >> > kill a node, undeploy a service, then bring back an old node, and it
> > >> would
> > >> > resurrect the service.
> > >> > We could store a deployment counter along with the service
> > >> > configurations on all nodes, which would show how many times the
> > service
> > >> > state has changed, i.e. how many times it has been undeployed/redeployed. It should
> > be
> > >> > kept for undeployed services as well, to avoid situations like the one I
> > >> described.
> > >> >
> > >> > But it still leaves a possibility of incorrect behaviour if there
> > was a
> > >> > split-brain situation at some point. I don't think we should process
> > it
> > >> > somehow, though. If we choose to tackle it, it will overcomplicate
> > >> things
> > >> > for the sake of a minor improvement.
> > >> >
> > >> > Denis
> > >> >
> > >> > вт, 10 апр. 2018 г. в 0:55, Valentin Kulichenko <
> > >> > valentin.kuliche...@gmail.com>:
> > >> >
> > >> > > I was responding to another Denis :) Agree with you on your point
> > >> though.
> > >> > >
> > >> > > -Val
> > >> > >
> > >> > > On Mon, Apr 9, 2018 at 2:48 PM, Denis Magda 
> > >> wrote:
> > >> > >
> > >> > > > Val,
> > >> > > >
> > >> > > > Guess we're talking about 

[GitHub] ignite pull request #3417: IGNITE-2766 Opportunistically reopen cache after ...

2018-04-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3417


---


Re: Make TC Green in OSGI: IgniteKarafFeaturesInstallationTest

2018-04-13 Thread Raúl Kripalani
Hey Dmitry,

Loving the name of the endeavour [make TC green again] ;-)
Feel free to do that for now. I'll take a look as soon as I have some spare
cycles.

Cheers!

On Fri, Apr 13, 2018 at 3:24 PM, Dmitry Pavlov 
wrote:

> Hi Igniters,
>
> I've created https://issues.apache.org/jira/browse/IGNITE-8254 and muted
> test.
>
> The second test in the OSGI suite is also flaky, and probably we should remove
> the OSGI build from Run-All altogether. What do you think?
>
> Sincerely,
> Dmitriy Pavlov
>
>
> вт, 10 апр. 2018 г. в 19:54, Dmitry Pavlov :
>
>> Hi Raúl, Igniters,
>>
>> Test related to OSGI/Karaf (IgniteKarafFeaturesInstallationTest.
>> testAllBundlesActiveAndFeaturesInstalled) is currently failing
>> https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=451522206339372479=%3Cdefault%3E=testDetails with low success rate.
>>
>> Recently Igniters have made 2 fixes to make this test pass (
>> https://issues.apache.org/jira/browse/IGNITE-7646 ,
>> https://issues.apache.org/jira/browse/IGNITE-7814 ) but the test is failing
>> anyway.
>>
>> Could you please step in and help to make this test green?
>>
>> Sincerely,
>> Dmitriy Pavlov
>>
>>


Re: Apache Ignite RPM packaging (Stage II)

2018-04-13 Thread Denis Magda
Ilya,

Thanks for your input. The reason why we decided to split Ignite into
several packages mimics the reason why the Java community introduced the modular
subsystem for the JDK. It's all about size. The Ignite distribution is too big,
and we're trying to separate it into several components so that people can
install only the features they need.

The point of a package is to ship something into root file system that can
> be used from root file system. If cpp files require compilation we should
> not ship them, or ship them to 'examples'. Ditto with benchmarks. If
> there's no mechanism to add optional libs to Ignite classpath, we should
> not ship optional libs. Moreover, some of 'optional' modules such as yarn
> don't make sense here because they're not supposed to be used with
> standalone Ignite.


Agree that we need to ship the code that is ready to be run. As for the
classpath thing, if an optional package is installed into the root (core)
package directory, then its jars have to be added to the "ignite/libs" folder.
After that, one needs to restart the cluster node, and it will add the
just-installed optional libs to the classpath. *Petr*, does it work this
way or can be implemented this way to address Ilya's concerns?
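
To illustrate what that could look like in practice, here is a minimal sketch; the install prefix, module name and service unit below are assumptions for illustration only, not a description of the actual package layout:

    # Hedged sketch: enable an optional module on a packaged node by copying its
    # jars into the libs/ folder picked up by ignite.sh (and thus by the service).
    # Paths and the unit name are assumptions.
    IGNITE_HOME=/usr/share/apache-ignite            # assumed install prefix
    cp "$IGNITE_HOME"/libs/optional/ignite-rest-http/*.jar "$IGNITE_HOME"/libs/
    # Restart the node so the newly added jars end up on the classpath.
    sudo systemctl restart apache-ignite@default-config.xml   # assumed unit name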

--
Denis

On Fri, Apr 13, 2018 at 7:00 AM, Ilya Kasnacheev 
wrote:

> 2018-04-13 7:42 GMT+03:00 Peter Ivanov :
>
> > On Thu, 12 Apr 2018 at 20:04, Ilya Kasnacheev  >
> > wrote:
> >
> > >
> > > Moreover I did not find a way to start service if default installed JVM
> > is
> > > Java 7 :( I understand it's EOL, still this is something that hit me.
> >
> >
> > Apache Ignite >=2.4 does not support Java 7 - it is said in
> documentation,
> > DEVNOTES and even in startup scripts.
> >
> >  I have Java 8 too, but I could not get service from package to start
> Ignite since there's nowhere to put JAVA_HOME (or JVM_ARGS for that
> matter). Is it possible to specify it while running packaged Ignite?
>
>
> >
> > >
> > > apache-ignite-libs is a totally unexpected package name. apache-ignite
> > core
> > > doesn't depend on it. It doesn't enable anything out of the box. The
> > > package is huge.
> >
> > ‘apache-ignite-libs’ is an aggregation package (for now) for all optional
> > libs we are delivering. Possibly later they will be split more granular
> or
> > even package per lib (like php, perl, python, etc. do for their libs).
> > This package dependency on ‘apache-ignite-core’ may seem confusing
> though,
> > I will try to explain it in IEP at least for current iteration.
> >
>
> Okay, but how do you add optional libs to be included into Ignite classpath
> while being launched by service? Is it even possible? If it isn't, I think
> it doesn't make sense to ship apache-ignite-libs at all.
>
>
> >
> > Further naming may become clear when we’ll start initiative on including
> > packages to popular Linux distributions and theirs community will join
> > naming discussions.
> >
> Renaming packages once they're deployed widely will be a pain point for our
> users. Some things should probably be thought out in advance.
>
>
> >
> >
> >
> > >
> > > Frankly speaking, I'm not sure that improvements over Stage I are
> enough
> > as
> > > of now. For demo-like activity, we can probably go with one package
> fits
> > > all.
> > >
> >
> > The process of finding the best package architecture is iterative, but
> > previously community agreed in split design proposed for 2.5 release.
> >
> > Also, split architecture is half of proposed improvements. The other
> half -
> > new process for deploying packages to Bintray (with virtually indefinite
> > storage capabilities).
> >
> I think we could drop the split for now, or at least drop
> apache-ignite-libs package at all. Probably also drop apache-ignite-cpp
> package and maybe apache-ignite-benchmarks.
>
> The point of a package is to ship something into root file system that can
> be used from root file system. If cpp files require compilation we should
> not ship them, or ship them to 'examples'. Ditto with benchmarks. If
> there's no mechanism to add optional libs to Ignite classpath, we should
> not ship optional libs. Moreover, some of 'optional' modules such as yarn
> don't make sense here because they're not supposed to be used with
> standalone Ignite.
>
> IMO it is not right to try and shove every file from Ignite distribution
> into some package. We should only put in packages things that can be used.
> If something can't be used without copying it to a different FS location,
> it should be in examples or not packaged at all.
>
> In my opinion, it doesn't make sense to implement an underwhelming package
> split right now just because we have agreed to have *some* package split in
> 2.5. Let's aim for happiness.
>
>
> >
> >
> >
> > >
> > > --
> > > Ilya Kasnacheev
> >
>
>
>
> > >
> > > 2018-04-12 19:10 GMT+03:00 Petr Ivanov :
> > >
> > > > If someone from the PMC or Committers 

[jira] [Created] (IGNITE-8265) TDE - MEK replacement

2018-04-13 Thread Nikolay Izhikov (JIRA)
Nikolay Izhikov created IGNITE-8265:
---

 Summary: TDE - MEK replacement
 Key: IGNITE-8265
 URL: https://issues.apache.org/jira/browse/IGNITE-8265
 Project: Ignite
  Issue Type: Sub-task
Reporter: Nikolay Izhikov


If the MEK is lost or stolen while the cluster is alive, TDE should provide a way
to replace (regenerate) the MEK.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8264) TDE - Node join enhancements

2018-04-13 Thread Nikolay Izhikov (JIRA)
Nikolay Izhikov created IGNITE-8264:
---

 Summary: TDE - Node join enhancements
 Key: IGNITE-8264
 URL: https://issues.apache.org/jira/browse/IGNITE-8264
 Project: Ignite
  Issue Type: Sub-task
Reporter: Nikolay Izhikov


All nodes that join a cluster with TDE enabled should be configured so that TDE
works.
They need access to the MEK and the CEKs.
We should extend the node join mechanism to support TDE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8263) TDE - Encryption/Decryption of pages

2018-04-13 Thread Nikolay Izhikov (JIRA)
Nikolay Izhikov created IGNITE-8263:
---

 Summary: TDE - Encryption/Decryption of pages
 Key: IGNITE-8263
 URL: https://issues.apache.org/jira/browse/IGNITE-8263
 Project: Ignite
  Issue Type: Sub-task
Affects Versions: 2.4
Reporter: Nikolay Izhikov
 Fix For: 2.6


When data for an encrypted cache is written to the persistence store, the data
page should be encrypted through the configured encryption provider.

* Encryption/decryption should be implemented
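
As a rough, purely conceptual illustration of page-level symmetric encryption (OpenSSL here stands in for whatever provider Ignite ends up using; the key and IV are placeholders):

{noformat}
# Conceptual illustration only: round-trip a 4 KB "page" through AES-256-CBC.
dd if=/dev/urandom of=page.bin bs=4096 count=1
KEY=00112233445566778899aabbccddeeff00112233445566778899aabbccddeeff
IV=0102030405060708090a0b0c0d0e0f10
openssl enc -aes-256-cbc -K "$KEY" -iv "$IV" -in page.bin -out page.enc
openssl enc -d -aes-256-cbc -K "$KEY" -iv "$IV" -in page.enc -out page.dec
cmp page.bin page.dec && echo "round trip OK"
{noformat}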



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #3823: IGNITE-8232 ML package cleanup for 2.5 release.

2018-04-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3823


---


[jira] [Created] (IGNITE-8262) TDE - MEK and CEK processing

2018-04-13 Thread Nikolay Izhikov (JIRA)
Nikolay Izhikov created IGNITE-8262:
---

 Summary: TDE - MEK and CEK processing
 Key: IGNITE-8262
 URL: https://issues.apache.org/jira/browse/IGNITE-8262
 Project: Ignite
  Issue Type: Sub-task
Reporter: Nikolay Izhikov


To get TDE working, we should implement management of the MEK and the CEKs:

* The MEK should be loaded from the configured KeyStore.
* CEKs should be stored in some internal data storage and be encrypted with the
MEK.
* The cluster shouldn't get activated before the MEK is loaded.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8261) TDE - Configuration

2018-04-13 Thread Nikolay Izhikov (JIRA)
Nikolay Izhikov created IGNITE-8261:
---

 Summary: TDE - Configuration
 Key: IGNITE-8261
 URL: https://issues.apache.org/jira/browse/IGNITE-8261
 Project: Ignite
  Issue Type: Sub-task
Affects Versions: 2.4
Reporter: Nikolay Izhikov
 Fix For: 2.6


The Ignite configuration should be extended to support all TDE-specific
configuration parameters:

* KeyStore configuration. 
* New option for encrypted caches. 
* The default KeyStore implementation should use the JDK-provided KeyStore - 
https://docs.oracle.com/javase/8/docs/api/java/security/KeyStore.html. 
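
For reference, a minimal sketch of creating such a keystore with the JDK's keytool; the alias, path, store type and passwords below are placeholders for illustration, not part of the design:

{noformat}
# Illustrative only: create a JDK keystore holding an AES master encryption key.
keytool -genseckey -alias ignite.master.key -keyalg AES -keysize 256 \
  -keystore /etc/ignite/tde.jceks -storetype JCEKS \
  -storepass changeit -keypass changeit
# Inspect the keystore contents.
keytool -list -keystore /etc/ignite/tde.jceks -storetype JCEKS -storepass changeit
{noformat}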



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8260) Transparent data encryption

2018-04-13 Thread Nikolay Izhikov (JIRA)
Nikolay Izhikov created IGNITE-8260:
---

 Summary: Transparent data encryption
 Key: IGNITE-8260
 URL: https://issues.apache.org/jira/browse/IGNITE-8260
 Project: Ignite
  Issue Type: New Feature
Affects Versions: 2.4
Reporter: Nikolay Izhikov
Assignee: Nikolay Izhikov
 Fix For: 2.6


The TDE feature should allow the user to protect data stored in the persistence
storage with a cipher algorithm.
Design details are described in
[IEP-18|https://cwiki.apache.org/confluence/display/IGNITE/IEP-18%3A+Transparent+Data+Encryption].
When this task is done, a production-ready TDE implementation should be
available for Ignite.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Move documentation from readme.io to GitHub pages

2018-04-13 Thread Prachi Garg
I did some very shallow research on Jekyll. The workflow seemed similar to
Docusaurus but more flexible in terms of directory structure, i.e. how docs can
be placed in the repo (Docusaurus requires docs to be stored in a
particular directory structure), as well as the sidebar menu (Docusaurus does
not allow child pages). Additionally, if so many open source projects are
using Jekyll, then I think it's definitely worth a try.

-Prachi

On Tue, Apr 10, 2018 at 4:52 PM, Denis Magda  wrote:

> Look into both Docusaraus and Jekyll from the usage perspective. Here is
> my summary:
>
> *Documentation Sources *
>
> Will be stored on GitHub. My preference is to store them in "ignite/docs"
> folder as many other ASF projects do (Spark [1], Flink [2] and Storm [3]).
> If we need to update the sources of an already released version, then we
> can create ignite-{version}-docs branch, edit it directly and generate HTML
> pages from it.
>
> *Versioning*
>
> Since the docs are stored in the main repo, a doc version will correspond
> to an Ignite version. If changes incorporated in the master version of the
> docs have to be merged to a previous version and redeployed on the site, we
> will use standard 'git' facilities to propagate the changes whenever
> needed.
>
> *Documentation Deployment and Automation*
>
> Documentation engines usually go with a set of scripts that produce an
> HTML version of the docs out of the sources. In our scenario, we will use
> the scripts to convert the sources stored in GitHub to HTML pages stored in
> SVN repo of Ignite site. The docs' HTML pages will be hosted on
> ignite.apache.org.
>
> By default, one has to run the scripts on a local machine to produce
> the HTML pages. However, nothing prevents us from tweaking the scripts and
> using them in a way that would do this on the fly - "a page has changed in
> sources"->"update site button is pressed"->"HTML is generated and
> automatically deployed to the site".
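
For illustration, a manual run of that workflow could look roughly like the sketch below; the Jekyll-based layout, the local paths and the SVN working-copy location are assumptions, not a decided setup:

    # Rough sketch of a manual docs build-and-publish run; all paths are assumptions.
    cd ~/work/ignite/docs
    bundle exec jekyll build --destination _site      # generate HTML from the sources
    # Copy the generated pages into the SVN working copy of the ignite.apache.org site.
    rsync -a --delete _site/ ~/work/ignite-site/docs/latest/
    cd ~/work/ignite-site
    svn add --force docs/latest
    svn commit -m "Update documentation pages"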
>
>
> Btw, *Prachi*, have you checked out Jekyll [4]? It's used by Spark, Flink,
> Storm, and even GitHub Pages. It's simpler than Docusaurus and still gives a
> way to generate customized sites with navigation menus and table of
> contents: https://ci.apache.org/projects/flink/flink-docs-release-1.4/
>
>
> Does anyone else have any open questions we need to solve before starting
> a migration process?
>
>
>
> [1] https://github.com/apache/spark/tree/master/docs
> [2] https://github.com/apache/flink/tree/master/docs
> [3] https://github.com/apache/storm/tree/master/docs
> [4] https://github.com/jekyll/jekyll
>
> On Wed, Mar 21, 2018 at 6:15 PM, Dmitriy Setrakyan 
> wrote:
>
>> On Wed, Mar 21, 2018 at 9:27 PM, Prachi Garg  wrote:
>>
>> > We can store the project (Markdown & Docusaurus config files) in Git,
>> use
>> > Docusaurus to build html, and upload them to Ignite website.
>> >
>>
>> Sounds good!
>>
>
>


[GitHub] ignite pull request #3823: IGNITE-8232 ML package cleanup for 2.5 release.

2018-04-13 Thread ybabak
GitHub user ybabak opened a pull request:

https://github.com/apache/ignite/pull/3823

IGNITE-8232 ML package cleanup for 2.5 release.

fixed javadoc

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8232

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3823.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3823


commit 098d2832b3f4af23c72bc25f43e4ab8a95f2f416
Author: Zinoviev Alexey 
Date:   2018-04-11T18:40:27Z

IGNITE-7829: Added example

commit da58736d2c223061ebbc8e54a252661165d10919
Author: Zinoviev Alexey 
Date:   2018-04-11T18:47:59Z

IGNITE-7829: Added example

commit f806dcff52a51692c4d80d20d3d33c2220eb4a9c
Author: zaleslaw 
Date:   2018-04-12T12:10:05Z

IGNITE-7829: Fixed Anton's review

commit d81ee7f72b78820d664661ef269eeb4fe1178c04
Author: zaleslaw 
Date:   2018-04-12T12:12:54Z

IGNITE-7829: Fixed Anton's review 2

commit 550eb79d771c97bfb9cfb6dc24882af7d1215692
Author: Zinoviev Alexey 
Date:   2018-04-12T12:14:31Z

Merge branch 'master' into ignite-7829

commit 3ea8e3df0f7b72ea6f58e2a47d1677dd6863d077
Author: Anton Dmitriev 
Date:   2018-04-12T12:30:08Z

IGNITE-8232 Remove all trainers except DatasetTrainer.

commit a79f28d671a66c1c8722dbe036686fdb9427fc61
Author: zaleslaw 
Date:   2018-04-12T12:30:09Z

IGNITE-7829: Fixed tests

commit 652581ac57eb0741677f5bd727ccbd04cc86e4e8
Author: zaleslaw 
Date:   2018-04-12T12:30:35Z

Merge branch 'ignite-7829' of https://github.com/gridgain/apache-ignite 
into ignite-7829

commit 0b7ec05427015dd014c811468b037c7ae308028c
Author: Anton Dmitriev 
Date:   2018-04-12T12:40:45Z

IGNITE-8232 Remove estimators.

commit 053bfb72105be69b6032459377ec41e92ab0d5ac
Author: Anton Dmitriev 
Date:   2018-04-12T12:59:50Z

IGNITE-8232 Use SimpleLabeledDatasetData instead of dedicated
LinSysOnHeapData.

commit 14a7357afc0f33b07f5d4f56d6081222ec5bb437
Author: Anton Dmitriev 
Date:   2018-04-12T13:35:59Z

IGNITE-8233 Add protection of dataset compute method from empty data.

commit c63105c2cd4d4d8829ff300eaa7d11ed6620ebdb
Author: Anton Dmitriev 
Date:   2018-04-12T13:44:55Z

IGNITE-8233 Fix tests after adding protection of dataset compute method
from empty data.

commit da4baaf246b24867636578ea95fd53f2e92ea86c
Author: Anton Dmitriev 
Date:   2018-04-12T13:53:33Z

IGNITE-8233 Fix tests after adding protection of dataset compute method
from empty data.

commit 64656650ed4d2ebd165b7d62f59b9ce8cf027a6e
Author: Anton Dmitriev 
Date:   2018-04-12T13:55:57Z

IGNITE-8233 Fix tests after adding protection of dataset compute method
from empty data.

commit bd7aa5cf6da0b37c37862851d50b85fd822da344
Author: Anton Dmitriev 
Date:   2018-04-12T13:56:30Z

IGNITE-8233 Fix tests after adding protection of dataset compute method
from empty data.

commit 95d30e4175a55761760e7b3f8bb1d9a5bc00823f
Author: Anton Dmitriev 
Date:   2018-04-12T14:06:54Z

Merge branch 'ignite-8233' into ignite-8232

commit 02bd96b25568af2edf36965dfa0c7d66c0a4a256
Author: Anton Dmitriev 
Date:   2018-04-12T14:17:25Z

IGNITE-8233 Use Precision.EPSILON in AbstractLSQR.

commit e6066f572f40ebc709aa011ed1083d09303f601b
Author: dmitrievanthony 
Date:   2018-04-13T08:33:02Z

Revert "IGNITE-8233 Fix tests after adding protection of dataset compute 
method from empty data."

This reverts commit bd7aa5c

commit f1638a475826c37dbd0b0f71585a3c3ebfc37e2e
Author: dmitrievanthony 
Date:   2018-04-13T08:33:06Z

Revert "IGNITE-8233 Fix tests after adding protection of dataset compute 
method from empty data."

This reverts commit 6465665

commit 887b45fb659378f4a4b9e786b8169bd402688ffd
Author: dmitrievanthony 
Date:   2018-04-13T08:34:45Z

IGNITE-8233 Test SVM on 10 partitions instead of 1.

commit 85fa1771da8dc84ca6e35a10c7da596170f1637f
Author: dmitrievanthony 
Date:   2018-04-13T08:37:24Z

Merge remote-tracking branch 'prof/ignite-7829' into ignite-8233

# Conflicts:
#   
modules/ml/src/test/java/org/apache/ignite/ml/knn/KNNClassificationTest.java
#   modules/ml/src/test/java/org/apache/ignite/ml/knn/KNNRegressionTest.java
#   
modules/ml/src/test/java/org/apache/ignite/ml/knn/LabeledDatasetHelper.java

commit 91ac83c513f00ad0183ce6e48427c921d7dc0fee
Author: dmitrievanthony 
Date:   

[jira] [Created] (IGNITE-8259) Node join should fail if it has a cache not contained in the cluster

2018-04-13 Thread Anton Kalashnikov (JIRA)
Anton Kalashnikov created IGNITE-8259:
-

 Summary: Node join should fail if it has a cache not contained 
in the cluster
 Key: IGNITE-8259
 URL: https://issues.apache.org/jira/browse/IGNITE-8259
 Project: Ignite
  Issue Type: Bug
Reporter: Anton Kalashnikov


Node join should fail if the node has a cache that is not contained in the cluster;
otherwise it may cause cache corruption.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #3822: IGNITE-8097 Java thin client: throw handshake exc...

2018-04-13 Thread kukushal
GitHub user kukushal opened a pull request:

https://github.com/apache/ignite/pull/3822

IGNITE-8097 Java thin client: throw handshake exception on connect phase



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8097

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3822.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3822


commit 2286c06b8140cd2f86b404da2271e24b0a8fe26f
Author: Alexey Kukushkin 
Date:   2018-04-13T16:39:15Z

IGNITE-8097 Java thin client: throw handshake exception on connect phase




---


Re: Memory usage per cache

2018-04-13 Thread Denis Magda
Vladimir,

For now I would
> only show total size of all indexes, and add something like
> "indexSize(String indexName)" method later


Is there any technical or architectural limitation that prevents us from
adding this method right now? I thought that if we could show the size of a
PK, then we would know how to get this information for secondary indexes as well.

--
Denis

On Fri, Apr 13, 2018 at 2:52 AM, Vladimir Ozerov 
wrote:

> Igniters,
>
> I have several questions regarding overall metrics design:
> 1) Why do we split PK and non-PK indexes? This is merely an implementation detail,
> and it is not clear why we want to pin it to the public API forever. Other
> database vendors allow users to get the size of a specific index. For now I would
> only show the total size of all indexes, and add something like an
> "indexSize(String indexName)" method later
> 2) What is the purpose of the "reuseList" metric? Same as p.1 - this is
> internal stuff, so why do we think users need it? I think it makes sense to
> split the "public" and "private" parts. "Public" is what makes sense
> from the user perspective and will not change in the future. "Private" is our
> internal details, which we can show but do not guarantee will not
> change over time.
> 3) What is the difference between "data size" and "data pages size"?
>
> On Fri, Apr 13, 2018 at 1:41 AM, Denis Magda  wrote:
>
> > Alex, Dmitriy,
> >
> > Please clarify/consider the following:
> >
> >- Can we get the size of a particular secondary index with a method
> like
> >getIndexSize(indexName)? Vladimir Ozerov
> > >,
> >it should be feasible, right?
> >- The new DataRegionMXBean metrics list is not the same as that of the
> >DataRegionMetricsMXBean interface. Why is that so, and what's the
> >difference between such similar interfaces?
> >- I wouldn't do this - *Deprecate
> >CacheMetrics.getRebalancingPartitionsCount(); and move to
> >CacheGroupMetricsMXBean.getRebalancingPartitionsCount()*. If we
> > redesign
> >the way we store our data within data pages in the future, then
> >CacheMetrics.getRebalancingPartitionsCount() would make sense.
> >
> >
> > --
> > Denis
> >
> > On Thu, Apr 12, 2018 at 8:46 AM, Alexey Goncharuk <
> > alexey.goncha...@gmail.com> wrote:
> >
> > > Sounds good to me.
> > >
> > > Folks, any other feedback on metrics API in IGNITE-8078?
> > >
> > > 2018-04-06 21:36 GMT+03:00 Denis Magda :
> > >
> > > > Alex,
> > > >
> > > > Why not return cache group metrics from this method by default and
> > > properly
> > > > > document it?
> > > >
> > > >
> > > > What do you think about Dmitry's suggestion? It sounds reasonable to
> > me.
> > > >
> > > > --
> > > > Denis
> > > >
> > > > On Wed, Apr 4, 2018 at 12:22 PM, Dmitriy Setrakyan <
> > > dsetrak...@apache.org>
> > > > wrote:
> > > >
> > > > > On Wed, Apr 4, 2018 at 5:27 AM, Alexey Goncharuk <
> > > > > alexey.goncha...@gmail.com
> > > > > > wrote:
> > > > >
> > > > > > Denis,
> > > > > >
> > > > > > I think this particular metric should be deprecated. The most we
> > can
> > > do
> > > > > > about it is to return the actual allocated size when a cache is
> the
> > > > only
> > > > > > cache in a group and return -1 if there are multiple caches in a
> > > group.
> > > > > > However, this does not look like a consistent approach to me, so
> I
> > > > would
> > > > > > prefer to always return -1 and suggest that users use
> corresponding
> > > > cache
> > > > > > group metrics.
> > > > > >
> > > > >
> > > > > Why not return cache group metrics from this method by default and
> > > > properly
> > > > > document it?
> > > > >
> > > >
> > >
> >
>


[GitHub] ignite pull request #3821: IGNITE-8258 Fixed page acquire/write unlock order...

2018-04-13 Thread agoncharuk
GitHub user agoncharuk opened a pull request:

https://github.com/apache/ignite/pull/3821

IGNITE-8258 Fixed page acquire/write unlock order during checkpoint



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8258

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3821.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3821


commit 85f5ea404150d625f8c9d1ad172a3aff8f9c8dbc
Author: Alexey Goncharuk 
Date:   2018-04-13T15:58:49Z

IGNITE-8258 Fixed page acquire/write unlock order during checkpoint




---


Re: [ML] Remove Old FuzzyCMeans Implementation

2018-04-13 Thread Yury Babak
Hi Alexey,

That sounds reasonable to me, especially since we have bugs in the current
implementation. So I agree to remove FCM for now and return it in a
future release.

Regards,
Yury



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


[ML] Remove Old FuzzyCMeans Implementation

2018-04-13 Thread Alexey Zinoviev
Hi Igniters,

Currently, I'm working on adapting the clustering algorithms
(KMeans/FuzzyCMeans) to the new Partitioned Dataset.

KMeans was adapted without any trouble, but FuzzyCMeans couldn't be
adapted so easily:

1. It uses local data structures to collect the indices of the rows present in
the dataset. This works with the old matrix-style approach, but it doesn't work with
the new partitioned dataset (which supports close integration with Ignite
Cache and works with any type of data, not only matrices).

2. It doesn't predict the fuzzy membership of a vector in the clusters. The
apply() method is copy-pasted from KMeans, which is incorrect behaviour.

3. I found a few bugs in the weighted coefficient recalculation.

In summary, the algorithm could not be adapted quickly and does not work correctly
according to its specification.

I suggest removing the source files in the current release and returning them in
2.6 with a few fixes.

What do you think?

Sincerely,
Alexey Zinoviev


[jira] [Created] (IGNITE-8258) Ignite PDS 1 suite, test probably failed suite IgnitePdsPageReplacementTest.testPageReplacement (last started)

2018-04-13 Thread Dmitriy Pavlov (JIRA)
Dmitriy Pavlov created IGNITE-8258:
--

 Summary: Ignite PDS 1 suite, test probably failed suite 
IgnitePdsPageReplacementTest.testPageReplacement (last started)
 Key: IGNITE-8258
 URL: https://issues.apache.org/jira/browse/IGNITE-8258
 Project: Ignite
  Issue Type: Test
Reporter: Dmitriy Pavlov


https://ci.ignite.apache.org/viewLog.html?buildId=1199095=IgniteTests24Java8_IgnitePds1=buildLog

{noformat}
[2018-04-13 
03:06:28,479][ERROR][db-checkpoint-thread-#52662%file.IgnitePdsPageReplacementTest0%][IgniteTestResources]
 Critical failure. Will be handled accordingly to configured handler [hnd=class 
o.a.i.failure.NoOpFailureHandler, failureCtx=FailureContext 
[type=CRITICAL_ERROR, err=class o.a.i.IgniteCheckedException: Compound 
exception for CountDownFuture.]]
class org.apache.ignite.IgniteCheckedException: Compound exception for 
CountDownFuture.
at 
org.apache.ignite.internal.util.future.CountDownFuture.addError(CountDownFuture.java:72)
at 
org.apache.ignite.internal.util.future.CountDownFuture.onDone(CountDownFuture.java:46)
at 
org.apache.ignite.internal.util.future.CountDownFuture.onDone(CountDownFuture.java:28)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:462)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$WriteCheckpointPages.run(GridCacheDatabaseSharedManager.java:3545)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Suppressed: java.lang.IllegalMonitorStateException: Attempted to 
release write lock while not holding it [lock=7f360ad0d630, 
state=0001
at 
org.apache.ignite.internal.util.OffheapReadWriteLock.writeUnlock(OffheapReadWriteLock.java:266)
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.copyPageForCheckpoint(PageMemoryImpl.java:1185)
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.getForCheckpoint(PageMemoryImpl.java:1117)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$WriteCheckpointPages.run(GridCacheDatabaseSharedManager.java:3508)
... 3 more
[2018-04-13 
03:06:28,483][ERROR][db-checkpoint-thread-#52662%file.IgnitePdsPageReplacementTest0%][IgniteTestResources]
 Critical failure. Will be handled accordingly to configured handler [hnd=class 
o.a.i.failure.NoOpFailureHandler, failureCtx=FailureContext 
[type=SYSTEM_WORKER_TERMINATION, err=class o.a.i.IgniteException: Failed to 
begin checkpoint (it is already in progress).]]
class org.apache.ignite.IgniteException: Failed to begin checkpoint (it is 
already in progress).
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.beginCheckpoint(PageMemoryImpl.java:997)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.beginAllCheckpoints(GridCacheDatabaseSharedManager.java:3309)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.markCheckpointBegin(GridCacheDatabaseSharedManager.java:3183)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.doCheckpoint(GridCacheDatabaseSharedManager.java:2909)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.body(GridCacheDatabaseSharedManager.java:2808)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
[2018-04-13 
03:06:28,485][ERROR][db-checkpoint-thread-#52662%file.IgnitePdsPageReplacementTest0%][GridCacheDatabaseSharedManager]
 Runtime error caught during grid runnable execution: GridWorker 
[name=db-checkpoint-thread, 
igniteInstanceName=file.IgnitePdsPageReplacementTest0, finished=false, 
hashCode=564969718, interrupted=false, 
runner=db-checkpoint-thread-#52662%file.IgnitePdsPageReplacementTest0%]
class org.apache.ignite.IgniteException: Failed to begin checkpoint (it is 
already in progress).
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.beginCheckpoint(PageMemoryImpl.java:997)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.beginAllCheckpoints(GridCacheDatabaseSharedManager.java:3309)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.markCheckpointBegin(GridCacheDatabaseSharedManager.java:3183)
at 

[GitHub] ignite pull request #3820: IGNITE-8257: GridFutureAdapterSelfTest#testChaini...

2018-04-13 Thread BiryukovVA
GitHub user BiryukovVA opened a pull request:

https://github.com/apache/ignite/pull/3820

IGNITE-8257: GridFutureAdapterSelfTest#testChaining fixed.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/BiryukovVA/ignite IGNITE-8257

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3820.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3820


commit 7a2fe2b4a7d82b742f308936981149a4182a
Author: Vitaliy Biryukov 
Date:   2018-04-13T15:14:27Z

IGNITE-8257: Test fixed.




---


[GitHub] ignite pull request #3806: IGNITE-8232 ML package cleanup for 2.5 release.

2018-04-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3806


---


[GitHub] ignite pull request #3807: IGNITE-8233 KNN and SVM algorithms don't work whe...

2018-04-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3807


---


[jira] [Created] (IGNITE-8257) GridFutureAdapterSelfTest#testChaining flaky-fails on TC (rarely)

2018-04-13 Thread Vitaliy Biryukov (JIRA)
Vitaliy Biryukov created IGNITE-8257:


 Summary: GridFutureAdapterSelfTest#testChaining flaky-fails on TC 
(rarely)
 Key: IGNITE-8257
 URL: https://issues.apache.org/jira/browse/IGNITE-8257
 Project: Ignite
  Issue Type: Test
Reporter: Vitaliy Biryukov
Assignee: Vitaliy Biryukov
 Fix For: 2.6



{code:java}
class org.apache.ignite.internal.IgniteFutureTimeoutCheckedException: Timeout 
was reached before computation completed.
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:242)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:159)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:151)
at 
org.apache.ignite.internal.util.future.GridFutureAdapterSelfTest.checkChaining(GridFutureAdapterSelfTest.java:283)
at 
org.apache.ignite.internal.util.future.GridFutureAdapterSelfTest.testChaining(GridFutureAdapterSelfTest.java:237)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at junit.framework.TestCase.runTest(TestCase.java:176)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2080)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:140)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:1995)
at java.lang.Thread.run(Thread.java:745)
{code}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8256) TxRecoveryStoreEnabledTest.testPessimistic fails on TC

2018-04-13 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-8256:


 Summary: TxRecoveryStoreEnabledTest.testPessimistic fails on TC
 Key: IGNITE-8256
 URL: https://issues.apache.org/jira/browse/IGNITE-8256
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexey Goncharuk


The reason for the failure is that simulateNodeFailure does not work anymore if 
the failure handler is not a StopNodeFailureHandler.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Make TC Green in OSGI: IgniteKarafFeaturesInstallationTest

2018-04-13 Thread Vyacheslav Daradur
Hi, I've spent some time researching the issue. The main problem is
wrong dependencies on artifacts which are absent from the "ignite-osgi-karaf"
repo:

https://repository.apache.org/content/groups/snapshots-group/org/apache/ignite/ignite-osgi-karaf/

The needed artifacts have been absent there since Apache Ignite v2.1.0.

I vote for excluding/muting the test for now.


On Fri, Apr 13, 2018 at 5:42 PM, Ilya Kasnacheev
 wrote:
> Hello!
>
> I have tried this test and everything is very bad with it. As in, one slew
> of errors when running with mvn, different one when running from Idea,
> ungooglable errors as a result.
>
> I suggest remove this build for now. Wait if everybody with background in
> this technology cares enough to help us.
>
> --
> Ilya Kasnacheev
>
> 2018-04-13 17:24 GMT+03:00 Dmitry Pavlov :
>
>> Hi Igniters,
>>
>> I've created https://issues.apache.org/jira/browse/IGNITE-8254 and muted
>> test.
>>
>> Second test in OSGI suite is also flaky, and probably we should remove OSGI
>> build from Run-All at all. What do you think?
>>
>> Sincerely,
>> Dmitriy Pavlov
>>
>>
>> вт, 10 апр. 2018 г. в 19:54, Dmitry Pavlov :
>>
>> > Hi Raúl, Igniters,
>> >
>> > Test related to OSGI/Karaf
>> > (IgniteKarafFeaturesInstallationTest.testAllBundlesActiveAndFeature
>> sInstalled)
>> > is currently failing
>> > https://ci.ignite.apache.org/project.html?projectId=
>> IgniteTests24Java8=451522206339372479=%
>> 3Cdefault%3E=testDetails
>> > with low success rate.
>> >
>> > Recently Igniters have done 2 fixes for make this test passing (
>> > https://issues.apache.org/jira/browse/IGNITE-7646 ,
>> > https://issues.apache.org/jira/browse/IGNITE-7814 ) but test is failing
>> > anyway.
>> >
>> > Could you please step in and help to make this test green?
>> >
>> > Sincerely,
>> > Dmitriy Pavlov
>> >
>> >
>>



-- 
Best Regards, Vyacheslav D.


[GitHub] ignite pull request #3819: IGNITE-8255: Possible name collisions in WorkersR...

2018-04-13 Thread x-kreator
GitHub user x-kreator opened a pull request:

https://github.com/apache/ignite/pull/3819

IGNITE-8255: Possible name collisions in WorkersRegistry.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/x-kreator/ignite ignite-8255

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3819.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3819


commit cd5805c4e61254bcc9f69ca8d2a1b46115120fbe
Author: Dmitriy Sorokin 
Date:   2018-04-13T14:54:38Z

IGNITE-8255: Possible name collisions in WorkersRegistry.




---


Re: Make TC Green in OSGI: IgniteKarafFeaturesInstallationTest

2018-04-13 Thread Ilya Kasnacheev
Hello!

I have tried this test, and everything is very bad with it: one slew
of errors when running with mvn, a different one when running from IDEA, and
ungooglable errors as a result.

I suggest removing this build for now and waiting to see if anybody with a background in
this technology cares enough to help us.

-- 
Ilya Kasnacheev

2018-04-13 17:24 GMT+03:00 Dmitry Pavlov :

> Hi Igniters,
>
> I've created https://issues.apache.org/jira/browse/IGNITE-8254 and muted
> test.
>
> The second test in the OSGI suite is also flaky, and probably we should remove the OSGI
> build from Run-All altogether. What do you think?
>
> Sincerely,
> Dmitriy Pavlov
>
>
> вт, 10 апр. 2018 г. в 19:54, Dmitry Pavlov :
>
> > Hi Raúl, Igniters,
> >
> > Test related to OSGI/Karaf
> > (IgniteKarafFeaturesInstallationTest.testAllBundlesActiveAndFeature
> sInstalled)
> > is currently failing
> > https://ci.ignite.apache.org/project.html?projectId=
> IgniteTests24Java8=451522206339372479=%
> 3Cdefault%3E=testDetails
> > with low success rate.
> >
> > Recently Igniters have done 2 fixes for make this test passing (
> > https://issues.apache.org/jira/browse/IGNITE-7646 ,
> > https://issues.apache.org/jira/browse/IGNITE-7814 ) but test is failing
> > anyway.
> >
> > Could you please step in and help to make this test green?
> >
> > Sincerely,
> > Dmitriy Pavlov
> >
> >
>


[jira] [Created] (IGNITE-8255) Possible name collisions in WorkersRegistry

2018-04-13 Thread Dmitriy Sorokin (JIRA)
Dmitriy Sorokin created IGNITE-8255:
---

 Summary: Possible name collisions in WorkersRegistry
 Key: IGNITE-8255
 URL: https://issues.apache.org/jira/browse/IGNITE-8255
 Project: Ignite
  Issue Type: Bug
Reporter: Dmitriy Sorokin
Assignee: Dmitriy Sorokin
 Fix For: 2.5


 
{code:java}
java.lang.IllegalStateException: Worker is already registered 
[worker=GridWorker [name=ttl-cleanup-worker, igniteInstanceName=null, 
finished=false, hashCode=612569625, interrupted=true, 
runner=ttl-cleanup-worker-#66]]
at 
org.apache.ignite.internal.worker.WorkersRegistry.register(WorkersRegistry.java:40)
at 
org.apache.ignite.internal.worker.WorkersRegistry.onStarted(WorkersRegistry.java:73)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:108)
at java.lang.Thread.run(Thread.java:748){code}
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #3818: IGNITE-8237 Ignite blocks on SecurityException in...

2018-04-13 Thread kukushal
GitHub user kukushal opened a pull request:

https://github.com/apache/ignite/pull/3818

IGNITE-8237 Ignite blocks on SecurityException in exchange-worker due to 
unauthorised on-heap cache configuration 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8237

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3818.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3818


commit 2821e0411a84245e8ff25b5a58d01a791df84ff7
Author: Alexey Kukushkin 
Date:   2018-04-13T14:29:48Z

IGNITE-8237 Ignite blocks on SecurityException in exchange-worker due to 
unauthorised on-heap cache configuration

commit f1017230c38f96d0be3f3ebe133059761813a602
Author: Alexey Kukushkin 
Date:   2018-04-13T14:30:42Z

Merge remote-tracking branch 'origin/master' into ignite-8237




---


Re: Make TC Green in OSGI: IgniteKarafFeaturesInstallationTest

2018-04-13 Thread Dmitry Pavlov
Hi Igniters,

I've created https://issues.apache.org/jira/browse/IGNITE-8254 and muted
test.

The second test in the OSGI suite is also flaky, and probably we should remove the OSGI
build from Run-All altogether. What do you think?

Sincerely,
Dmitriy Pavlov


вт, 10 апр. 2018 г. в 19:54, Dmitry Pavlov :

> Hi Raúl, Igniters,
>
> Test related to OSGI/Karaf
> (IgniteKarafFeaturesInstallationTest.testAllBundlesActiveAndFeaturesInstalled)
> is currently failing
> https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=451522206339372479=%3Cdefault%3E=testDetails
> with low success rate.
>
> Recently Igniters have made 2 fixes to make this test pass (
> https://issues.apache.org/jira/browse/IGNITE-7646 ,
> https://issues.apache.org/jira/browse/IGNITE-7814 ) but the test is failing
> anyway.
>
> Could you please step in and help to make this test green?
>
> Sincerely,
> Dmitriy Pavlov
>
>


[jira] [Created] (IGNITE-8254) OSGI test fails almost every time: IgniteOsgiTestSuite: IgniteKarafFeaturesInstallationTest.testAllBundlesActiveAndFeaturesInstalled failed

2018-04-13 Thread Dmitriy Pavlov (JIRA)
Dmitriy Pavlov created IGNITE-8254:
--

 Summary: OSGI test fails almost every time: IgniteOsgiTestSuite: 
IgniteKarafFeaturesInstallationTest.testAllBundlesActiveAndFeaturesInstalled  
failed
 Key: IGNITE-8254
 URL: https://issues.apache.org/jira/browse/IGNITE-8254
 Project: Ignite
  Issue Type: Test
Reporter: Dmitriy Pavlov


Test related to OSGI/Karaf 
(IgniteKarafFeaturesInstallationTest.testAllBundlesActiveAndFeaturesInstalled) 
is currently failing 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=451522206339372479=%3Cdefault%3E=testDetails
with low success rate.

Recently Igniters have made 2 fixes to make this test pass 
(https://issues.apache.org/jira/browse/IGNITE-7646 , 
https://issues.apache.org/jira/browse/IGNITE-7814 ) but the test is failing anyway.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8253) CacheConfiguration.keyConfiguration is never documented

2018-04-13 Thread Ilya Kasnacheev (JIRA)
Ilya Kasnacheev created IGNITE-8253:
---

 Summary: CacheConfiguration.keyConfiguration is never documented
 Key: IGNITE-8253
 URL: https://issues.apache.org/jira/browse/IGNITE-8253
 Project: Ignite
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.4
Reporter: Ilya Kasnacheev


See 
http://apache-ignite-users.70518.x6.nabble.com/How-do-you-configure-affinityKey-in-xml-tp21165.html




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Apache Ignite RPM packaging (Stage II)

2018-04-13 Thread Ilya Kasnacheev
2018-04-13 7:42 GMT+03:00 Peter Ivanov :

> On Thu, 12 Apr 2018 at 20:04, Ilya Kasnacheev 
> wrote:
>
> >
> > Moreover I did not find a way to start service if default installed JVM
> is
> > Java 7 :( I understand it's EOL, still this is something that hit me.
>
>
> Apache Ignite >=2.4 does not support Java 7 - this is stated in the documentation,
> DEVNOTES, and even in the startup scripts.
>
I have Java 8 too, but I could not get the service from the package to start
Ignite, since there's nowhere to put JAVA_HOME (or JVM_ARGS, for that
matter). Is it possible to specify it while running packaged Ignite?
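
Since the package apparently uses systemd (journalctl is mentioned earlier in the thread), one generic workaround would be a drop-in override; the unit name, the JVM path, and whether the unit's wrapper script actually honours JAVA_HOME/JVM_OPTS are assumptions, so treat this as a sketch only:

    # Hypothetical workaround via a systemd drop-in; the unit name is an assumption.
    sudo systemctl edit apache-ignite@default-config.xml
    # In the editor that opens, add:
    #   [Service]
    #   Environment=JAVA_HOME=/usr/lib/jvm/java-1.8.0
    #   Environment="JVM_OPTS=-Xms1g -Xmx1g"
    sudo systemctl restart apache-ignite@default-config.xml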


>
> >
> > apache-ignite-libs is a totally unexpected package name. apache-ignite
> core
> > doesn't depend on it. It doesn't enable anything out of the box. The
> > package is huge.
>
> ‘apache-ignite-libs’ is an aggregation package (for now) for all optional
> libs we are delivering. Possibly later they will be split into more granular
> packages or even one package per lib (as php, perl, python, etc. do for their libs).
> This package's dependency on ‘apache-ignite-core’ may seem confusing, though;
> I will try to explain it in the IEP, at least for the current iteration.
>

Okay, but how do you add optional libs to the Ignite classpath
when Ignite is launched as a service? Is it even possible? If it isn't, I think
it doesn't make sense to ship apache-ignite-libs at all.


>
> Further naming may become clearer when we start the initiative to include
> packages in popular Linux distributions and their communities join the
> naming discussions.
>
Renaming packages once they're widely deployed will be a pain point for our
users. Some things should probably be thought out in advance.


>
>
>
> >
> > Frankly speaking, I'm not sure that improvements over Stage I are enough
> as
> > of now. For demo-like activity, we can probably go with one package fits
> > all.
> >
>
> The process of finding the best package architecture is iterative, but
> the community previously agreed on the split design proposed for the 2.5 release.
>
> Also, the split architecture is only half of the proposed improvements. The other half is
> the new process for deploying packages to Bintray (with virtually unlimited
> storage capacity).
>
I think we could drop the split for now, or at least drop the
apache-ignite-libs package altogether. Probably also drop the apache-ignite-cpp
package and maybe apache-ignite-benchmarks.

The point of a package is to ship something into root file system that can
be used from root file system. If cpp files require compilation we should
not ship them, or ship them to 'examples'. Ditto with benchmarks. If
there's no mechanism to add optional libs to Ignite classpath, we should
not ship optional libs. Moreover, some of 'optional' modules such as yarn
don't make sense here because they're not supposed to be used with
standalone Ignite.

IMO it is not right to try and shove every file from Ignite distribution
into some package. We should only put in packages things that can be used.
If something can't be used without copying it to a different FS location,
it should be in examples or not packaged at all.

In my opinion, it doesn't make sense to implement an underwhelming package
split right now just because we have agreed to have *some* package split in
2.5. Let's aim for happiness.


>
>
>
> >
> > --
> > Ilya Kasnacheev
>



> >
> > 2018-04-12 19:10 GMT+03:00 Petr Ivanov :
> >
> > > If someone from the PMC or Committers still sees the necessity of including
> > > these tasks in the Apache Ignite 2.5 release, this is the last chance to
> do
> > > so.
> > > Otherwise this task will be moved to the 2.6 release at least, or even
> > > moved to the backlog indefinitely.
> > >
> > >
> > >
> > > > On 9 Apr 2018, at 19:08, Petr Ivanov  wrote:
> > > >
> > > > To top the new RPM architecture off, an update to the release process is
> > introduced
> > > — [1] [2].
> > > >
> > > > Both tasks (this one and IGNITE-7647) are ready for review and should
> > be
> > > merged simultaneously.
> > > >
> > > >
> > > > [1] https://issues.apache.org/jira/browse/IGNITE-8172
> > > > [2] https://github.com/apache/ignite-release/pull/1
> > > >
> > > >
> > > >
> > > >
> > > >> On 2 Apr 2018, at 18:22, Ilya Kasnacheev  >
> > > wrote:
> > > >>
> > > >> Hello!
> > > >>
> > > >> Let me share my idea of how this should work. Splitting the package
> > > >> into sub-packages should be dependency-driven.
> > > >>
> > > >> It means that all Ignite modules without dependencies or with small
> > > >> dependencies (such as ignite-log4j) should be included in
> > > >> ignite-core. It doesn't make sense to make a zillion RPM packages.
> > > >>
> > > >> Critical things like ignite-spring and ignite-indexing should be in
> > > >> ignite-core of course, even if they have dependencies. Ignite-core
> > > >> should be fully self-sufficient and feature-complete.
> > > >>
> > > >> However, e.g. .net API should probably be in a 

[GitHub] ignite pull request #3817: IGNITE-8169: Adopt KMeans and remove FuzzyCMeans

2018-04-13 Thread zaleslaw
GitHub user zaleslaw opened a pull request:

https://github.com/apache/ignite/pull/3817

IGNITE-8169: Adopt KMeans and remove FuzzyCMeans



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8169

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3817.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3817


commit 897de46fec2b5474f39b0a3c232e1fdb43a51ab9
Author: zaleslaw 
Date:   2018-04-07T13:44:19Z

Added KMeans

commit 2d1763253a84447f0421afd23d01f27762fc61ee
Author: zaleslaw 
Date:   2018-04-10T08:24:25Z

Added KMeans

commit 5f5e9c9565fab39b17afc168ea9cdd6c31ee613e
Author: Zinoviev Alexey 
Date:   2018-04-11T18:32:39Z

Fixed KMeans

commit 64c09cbf962d6001b5cbad2f9556dd2be6bc3e3f
Author: zaleslaw 
Date:   2018-04-13T06:32:12Z

IGNITE-7829: Fixed tests

commit 5e5681c0e6db6540685bf27bfc7afc32856cbdce
Author: Zinoviev Alexey 
Date:   2018-04-13T12:54:55Z

IGNITE-7829: Fixed Trainer and cleaned up the cluster package

commit 269266785fb330283df4f13a6383e97288a074ce
Author: Zinoviev Alexey 
Date:   2018-04-13T13:17:22Z

IGNITE-7829: Added seed




---


[jira] [Created] (IGNITE-8252) NullPointerException is thrown during parallel massive start of nodes

2018-04-13 Thread Sergey Chugunov (JIRA)
Sergey Chugunov created IGNITE-8252:
---

 Summary: NullPointerException is thrown during parallel massive 
start of nodes
 Key: IGNITE-8252
 URL: https://issues.apache.org/jira/browse/IGNITE-8252
 Project: Ignite
  Issue Type: Bug
  Components: zookeeper
Reporter: Sergey Chugunov
Assignee: Sergey Chugunov


When many nodes are started in parallel and IGNITE_DISCOVERY_HISTORY_SIZE is 
set to a value that is too small (smaller than the size of the batch of nodes 
joining the cluster simultaneously), an NPE is thrown from the exchange thread:
{noformat}
[ERROR][exchange-worker-#62][GridDhtPartitionsExchangeFuture] Failed to 
reinitialize local partitions (preloading will be stopped): 
GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=5, 
minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=ZookeeperClusterNode 
[id=2f412f7f-d326-4303-86f9-91004c82aa7b, addrs=[172.25.1.18], order=5, 
loc=true, client=false], topVer=5, nodeId8=2f412f7f, msg=null, 
type=NODE_JOINED, tstamp=1523571878505], nodeId=2f412f7f, evt=NODE_JOINED]
java.lang.NullPointerException: null
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.latch.ExchangeLatchManager.getLatchCoordinator(ExchangeLatchManager.java:249)
 ~[ignite-core-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.latch.ExchangeLatchManager.getOrCreate(ExchangeLatchManager.java:207)
 ~[ignite-core-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.waitPartitionRelease(GridDhtPartitionsExchangeFuture.java:1227)
 ~[ignite-core-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1112)
 ~[ignite-core-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:713)
 [ignite-core-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2414)
 [ignite-core-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2294)
 [ignite-core-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
[ignite-core-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
{noformat}
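
A minimal sketch of the start pattern that can hit this (illustration only, not 
the attached reproducer; the affected setup uses ZooKeeper discovery, whose SPI 
configuration is omitted here):
{code:java}
import java.util.stream.IntStream;

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ParallelStartSketch {
    public static void main(String[] args) {
        // Keep the discovery history deliberately smaller than the join batch.
        System.setProperty("IGNITE_DISCOVERY_HISTORY_SIZE", "2");

        // Start many server nodes in parallel inside one JVM.
        IntStream.range(0, 8).parallel().forEach(i ->
            Ignition.start(new IgniteConfiguration().setIgniteInstanceName("node-" + i)));
    }
}
{code}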





[jira] [Created] (IGNITE-8251) Reduce testPageEviction run time

2018-04-13 Thread Dmitriy Pavlov (JIRA)
Dmitriy Pavlov created IGNITE-8251:
--

 Summary: Reduce testPageEviction run time
 Key: IGNITE-8251
 URL: https://issues.apache.org/jira/browse/IGNITE-8251
 Project: Ignite
  Issue Type: Test
  Components: persistence
Reporter: Dmitriy Pavlov


The Cache 3 suite and IgniteBinaryObjectsCacheTestSuite3 execute the 
testPageEviction test several times, and each run takes significant time:

RandomLruNearEnabledPageEvictionMultinodeTest.testPageEviction: 6m 9.41s
Random2LruNearEnabledPageEvictionMultinodeTest.testPageEviction: 5m 51.353s
RandomLruPageEvictionMultinodeTest.testPageEviction: 5m 36.529s

It is necessary to understand what exactly the test does and shorten the 
execution time by reducing:
- the number of objects,
- the region size,
- the desired run time,

or some other parameter which helps to run this test faster.

At the same time, it is necessary to leave the test coverage unchanged.
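
One possible direction, as a sketch with made-up names and sizes (assuming the 
tests can override their storage configuration): shrink the eviction-enabled 
region and the number of loaded entries so that each run touches far fewer 
pages while still triggering eviction.
{code:java}
import org.apache.ignite.configuration.DataPageEvictionMode;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class SmallEvictionRegionSketch {
    /** Builds a node config with a deliberately small eviction-enabled region. */
    public static IgniteConfiguration config() {
        DataRegionConfiguration regionCfg = new DataRegionConfiguration()
            .setName("eviction-region")                         // illustrative name
            .setInitialSize(32L * 1024 * 1024)
            .setMaxSize(64L * 1024 * 1024)                      // much smaller than today
            .setPageEvictionMode(DataPageEvictionMode.RANDOM_LRU); // or RANDOM_2_LRU

        return new IgniteConfiguration().setDataStorageConfiguration(
            new DataStorageConfiguration().setDataRegionConfigurations(regionCfg));
    }
}
{code}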





Re: Service grid redesign

2018-04-13 Thread Denis Mekhanikov
Vladimir,

Currently we don't save binary metadata to disk when persistence is
disabled.
But we still persist marshaller mappings for some reason, and I personally
believe that we shouldn't.

But I agree that we should separate data and service persistence
configuration.
Right now, persistence of services is configured in a pretty non-obvious
manner.
There should be a clear way to tell Ignite whether you want services to be
persisted or not.

I'm not sure that we should make "statefulness" in general configurable.
Users don't care much whether metadata is preserved on restarts or not.
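
For context, a minimal sketch (my own illustration; the names are made up) of
the kind of dynamically deployed service whose survival across a full restart
we are discussing:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.services.Service;
    import org.apache.ignite.services.ServiceContext;

    // Hypothetical no-op service, used only to illustrate dynamic deployment.
    public class NoopService implements Service {
        @Override public void init(ServiceContext ctx) { /* allocate resources */ }
        @Override public void execute(ServiceContext ctx) { /* do the actual work */ }
        @Override public void cancel(ServiceContext ctx) { /* release resources */ }

        public static void main(String[] args) {
            Ignite ignite = Ignition.start();

            // Deployed at runtime rather than via IgniteConfiguration#setServiceConfiguration(),
            // so today whether it survives a full cluster restart depends on whether
            // any data region has persistence enabled.
            ignite.services().deployClusterSingleton("noop-svc", new NoopService());
        }
    }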

Denis

Fri, Apr 13, 2018 at 14:29, Vladimir Ozerov :

> Alex,
>
> I would say that we've already had this behavior for years - marshaller
> cache. I think it is time to agree that "in-memory" != stateless. Instead
> "in-memory" means "data is not stored on disk".
> Maybe we can have a flag which will wipe out all metadata on node restart
> (e.g. it could make sense for embedded clients)?
>
> On Fri, Apr 13, 2018 at 12:48 PM, Alexey Goncharuk <
> alexey.goncha...@gmail.com> wrote:
>
> > Denis,
> >
> > This is a subtle question. It looks like we have now a number of
> use-cases
> > when persistent storage is required even for a pure in-memory mode. One
> of
> > the use-cases is thin client authentication, the other is service grid
> > configuration persistence.
> >
> > Generally, I would agree that this is an expected behavior. However, this
> > means that a user cannot simply start and stop nodes randomly anymore.
> > Ignite start will require some sort of installation or work folder
> > initialization (sort of initdb in postgres) which is ok for
> > persistence-enabled modes, but I am not sure if this is expected for
> > in-memory. Of course, we can run this initialization automatically, but
> it
> > is not always a good idea.
> >
> > If we are ok to have these restrictions for in-memory mode, then service
> > persistence makes sense.
> >
> > --AG
> >
> > 2018-04-11 22:36 GMT+03:00 Denis Magda :
> >
> >> Denis,
> >>
> >> I think that the service deployment state needs be persisted
> cluster-wide.
> >> I guess that our meta-store is capable of doing so. Alex G., Vladimir,
> >> could you confirm?
> >>
> >> As for the split-brain scenarios, I would put them aside for now
> because,
> >> anyway, they have to be solved at lower levels (meta store, discovery,
> >> etc.).
> >>
> >> Also, I heard that presently we store a service configuration in the
> >> system
> >> cache that doesn't give us a way to deploy a new version of a service
> >> without undeployment of the previous one. Will this issue be addressed
> by
> >> the new deployment approach?
> >>
> >> --
> >> Denis
> >>
> >> On Wed, Apr 11, 2018 at 1:28 AM, Denis Mekhanikov <
> dmekhani...@gmail.com>
> >> wrote:
> >>
> >> > Denis,
> >> >
> >> > Sounds reasonable. It's not clear, though, what should happen, if a
> >> joining
> >> > node has some services persisted, that are missing on other nodes.
> >> > Should we deploy them?
> >> > If we do so, it could lead to surprising behaviour. For example you
> >> could
> >> > kill a node, undeploy a service, then bring back an old node, and it
> >> would
> >> > make the service resurrect.
> >> > We could store some deployment counter along with the service
> >> > configurations on all nodes, that would show how many times the
> service
> >> > state has changed, i.e. it has been undeployed/redeployed. It should
> be
> >> > kept for undeployed services as well to avoid situations like I
> >> described.
> >> >
> >> > But it still leaves a possibility of incorrect behaviour, if there
> was a
> >> > split-brain situation at some point. I don't think we should precess
> it
> >> > somehow, though. If we choose to tackle it, it will overcomplicate
> >> things
> >> > for a sake of a minor improvement.
> >> >
> >> > Denis
> >> >
> >> > Tue, Apr 10, 2018 at 0:55, Valentin Kulichenko <
> >> > valentin.kuliche...@gmail.com>:
> >> >
> >> > > I was responding to another Denis :) Agree with you on your point
> >> though.
> >> > >
> >> > > -Val
> >> > >
> >> > > On Mon, Apr 9, 2018 at 2:48 PM, Denis Magda 
> >> wrote:
> >> > >
> >> > > > Val,
> >> > > >
> >> > > > Guess we're talking about other situations. I'm bringing up the
> case
> >> > > when a
> >> > > > service was deployed dynamically and has to be brought up after a
> >> full
> >> > > > cluster restart w/o user intervention. To achieve this we need to
> >> > persist
> >> > > > the service's configuration somewhere.
> >> > > >
> >> > > > --
> >> > > > Denis
> >> > > >
> >> > > > On Mon, Apr 9, 2018 at 1:42 PM, Valentin Kulichenko <
> >> > > > valentin.kuliche...@gmail.com> wrote:
> >> > > >
> >> > > > > Denis,
> >> > > > >
> >> > > > > EVT_CLASS_DEPLOYED should be fired every time a class is
> deployed
> >> or
> >> > > > > redeployed. If this doesn't happen in some cases, I believe this
> >> > would
> >> > > > be a
> >> > > > > 

[jira] [Created] (IGNITE-8250) Adopt Fuzzy CMeans to PartitionedDatasets

2018-04-13 Thread Aleksey Zinoviev (JIRA)
Aleksey Zinoviev created IGNITE-8250:


 Summary: Adopt Fuzzy CMeans to PartitionedDatasets
 Key: IGNITE-8250
 URL: https://issues.apache.org/jira/browse/IGNITE-8250
 Project: Ignite
  Issue Type: Improvement
  Components: ml
Reporter: Aleksey Zinoviev
Assignee: Aleksey Zinoviev








[jira] [Created] (IGNITE-8249) Web Console: Convert first letter case for all the inputs

2018-04-13 Thread Vica Abramova (JIRA)
Vica Abramova created IGNITE-8249:
-

 Summary: Web Console: Convert first letter case for all the inputs
 Key: IGNITE-8249
 URL: https://issues.apache.org/jira/browse/IGNITE-8249
 Project: Ignite
  Issue Type: Improvement
  Components: UI, wizards
Reporter: Vica Abramova
Assignee: Alexey Kuznetsov


We should begin all words with a capital letter (inputs/placeholders).





Re: Service grid redesign

2018-04-13 Thread Vladimir Ozerov
Alex,

I would say that we've already had this behavior for years - marshaller
cache. I think it is time to agree that "in-memory" != stateless. Instead
"in-memory" means "data is not stored on disk".
Maybe we can have a flag which will wipe out all metadata on node restart
(e.g. it could make sense for embedded clients)?

On Fri, Apr 13, 2018 at 12:48 PM, Alexey Goncharuk <
alexey.goncha...@gmail.com> wrote:

> Denis,
>
> This is a subtle question. It looks like we have now a number of use-cases
> when persistent storage is required even for a pure in-memory mode. One of
> the use-cases is thin client authentication, the other is service grid
> configuration persistence.
>
> Generally, I would agree that this is an expected behavior. However, this
> means that a user cannot simply start and stop nodes randomly anymore.
> Ignite start will require some sort of installation or work folder
> initialization (sort of initdb in postgres) which is ok for
> persistence-enabled modes, but I am not sure if this is expected for
> in-memory. Of course, we can run this initialization automatically, but it
> is not always a good idea.
>
> If we are ok to have these restrictions for in-memory mode, then service
> persistence makes sense.
>
> --AG
>
> 2018-04-11 22:36 GMT+03:00 Denis Magda :
>
>> Denis,
>>
>> I think that the service deployment state needs be persisted cluster-wide.
>> I guess that our meta-store is capable of doing so. Alex G., Vladimir,
>> could you confirm?
>>
>> As for the split-brain scenarios, I would put them aside for now because,
>> anyway, they have to be solved at lower levels (meta store, discovery,
>> etc.).
>>
>> Also, I heard that presently we store a service configuration in the
>> system
>> cache that doesn't give us a way to deploy a new version of a service
>> without undeployment of the previous one. Will this issue be addressed by
>> the new deployment approach?
>>
>> --
>> Denis
>>
>> On Wed, Apr 11, 2018 at 1:28 AM, Denis Mekhanikov 
>> wrote:
>>
>> > Denis,
>> >
>> > Sounds reasonable. It's not clear, though, what should happen, if a
>> joining
>> > node has some services persisted, that are missing on other nodes.
>> > Should we deploy them?
>> > If we do so, it could lead to surprising behaviour. For example you
>> could
>> > kill a node, undeploy a service, then bring back an old node, and it
>> would
>> > make the service resurrect.
>> > We could store some deployment counter along with the service
>> > configurations on all nodes, that would show how many times the service
>> > state has changed, i.e. it has been undeployed/redeployed. It should be
>> > kept for undeployed services as well to avoid situations like I
>> described.
>> >
>> > But it still leaves a possibility of incorrect behaviour, if there was a
>> > split-brain situation at some point. I don't think we should precess it
>> > somehow, though. If we choose to tackle it, it will overcomplicate
>> things
>> > for a sake of a minor improvement.
>> >
>> > Denis
>> >
>> > Tue, Apr 10, 2018 at 0:55, Valentin Kulichenko <
>> > valentin.kuliche...@gmail.com>:
>> >
>> > > I was responding to another Denis :) Agree with you on your point
>> though.
>> > >
>> > > -Val
>> > >
>> > > On Mon, Apr 9, 2018 at 2:48 PM, Denis Magda 
>> wrote:
>> > >
>> > > > Val,
>> > > >
>> > > > Guess we're talking about other situations. I'm bringing up the case
>> > > when a
>> > > > service was deployed dynamically and has to be brought up after a
>> full
>> > > > cluster restart w/o user intervention. To achieve this we need to
>> > persist
>> > > > the service's configuration somewhere.
>> > > >
>> > > > --
>> > > > Denis
>> > > >
>> > > > On Mon, Apr 9, 2018 at 1:42 PM, Valentin Kulichenko <
>> > > > valentin.kuliche...@gmail.com> wrote:
>> > > >
>> > > > > Denis,
>> > > > >
>> > > > > EVT_CLASS_DEPLOYED should be fired every time a class is deployed
>> or
>> > > > > redeployed. If this doesn't happen in some cases, I believe this
>> > would
>> > > > be a
>> > > > > bug. I don't think we need to add any new events.
>> > > > >
>> > > > > -Val
>> > > > >
>> > > > > On Mon, Apr 9, 2018 at 10:50 AM, Denis Magda 
>> > > wrote:
>> > > > >
>> > > > > > Denis,
>> > > > > >
>> > > > > > I would encourage us to persist a service configuration in the
>> meta
>> > > > store
>> > > > > > and have this capability enabled by default. That's essential
>> for
>> > > > > services
>> > > > > > started dynamically. Moreover, we support similar behavior for
>> > > caches,
>> > > > > > indexes, and other DDL changes happened at runtime.
>> > > > > >
>> > > > > > --
>> > > > > > Denis
>> > > > > >
>> > > > > > On Mon, Apr 9, 2018 at 9:34 AM, Denis Mekhanikov <
>> > > > dmekhani...@gmail.com>
>> > > > > > wrote:
>> > > > > >
>> > > > > > > Another question, that I would like to discuss is whether
>> > services
>> > > > > should
>> > > > > > > be preserved on cluster restarts.
>> > 

[jira] [Created] (IGNITE-8248) Web Console: NullPointException in agent in case of self-signed certificates.

2018-04-13 Thread Andrey Novikov (JIRA)
Andrey Novikov created IGNITE-8248:
--

 Summary: Web Console: NullPointException in agent in case of 
self-signed certificates.
 Key: IGNITE-8248
 URL: https://issues.apache.org/jira/browse/IGNITE-8248
 Project: Ignite
  Issue Type: Bug
  Components: wizards
Reporter: Andrey Novikov
Assignee: Andrey Novikov


[2018-04-13 02:01:12,387][ERROR][EventThread][EventThread] Task threw exception

java.lang.NullPointerException

at 
okhttp3.internal.tls.TrustRootIndex$BasicTrustRootIndex.(TrustRootIndex.java:108)

at okhttp3.internal.tls.TrustRootIndex.get(TrustRootIndex.java:48)

at okhttp3.internal.tls.TrustRootIndex.get(TrustRootIndex.java:43)

at 
okhttp3.internal.platform.Platform.buildCertificateChainCleaner(Platform.java:167)

at 
okhttp3.internal.tls.CertificateChainCleaner.get(CertificateChainCleaner.java:41)

at okhttp3.OkHttpClient$Builder.sslSocketFactory(OkHttpClient.java:656)

at io.socket.engineio.client.transports.WebSocket.doOpen(WebSocket.java:50)

at io.socket.engineio.client.Transport$1.run(Transport.java:82)

at io.socket.thread.EventThread.exec(EventThread.java:55)

at io.socket.engineio.client.Transport.open(Transport.java:77)

at io.socket.engineio.client.Socket.probe(Socket.java:472)

at io.socket.engineio.client.Socket.onOpen(Socket.java:485)

at io.socket.engineio.client.Socket.onHandshake(Socket.java:526)

at io.socket.engineio.client.Socket.onPacket(Socket.java:499)

at io.socket.engineio.client.Socket.access$1000(Socket.java:31)

at io.socket.engineio.client.Socket$5.call(Socket.java:313)

at io.socket.emitter.Emitter.emit(Emitter.java:117)

at io.socket.engineio.client.Transport.onPacket(Transport.java:134)

at io.socket.engineio.client.transports.Polling.access$700(Polling.java:17)

at io.socket.engineio.client.transports.Polling$2.call(Polling.java:124)

at io.socket.engineio.parser.Parser.decodePayload(Parser.java:251)

at io.socket.engineio.client.transports.Polling._onData(Polling.java:134)

at io.socket.engineio.client.transports.Polling.onData(Polling.java:106)

at io.socket.engineio.client.transports.PollingXHR$5$1.run(PollingXHR.java:111)

at io.socket.thread.EventThread$2.run(EventThread.java:80)

at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

at java.lang.Thread.run(Thread.java:748)
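
A likely cause, stated as an assumption only (not verified against the agent code): 
okhttp3's trust-root index expects X509TrustManager.getAcceptedIssuers() to return 
a non-null array, so a trust manager built for self-signed certificates that 
returns null there produces exactly this NPE. A minimal sketch of wiring a 
self-signed certificate into okhttp without hitting it:

{code:java}
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyStore;
import java.security.cert.CertificateFactory;

import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;
import javax.net.ssl.X509TrustManager;

import okhttp3.OkHttpClient;

public class SelfSignedClientSketch {
    public static OkHttpClient client(String certPath) throws Exception {
        // Put the self-signed certificate into an in-memory trust store.
        KeyStore ts = KeyStore.getInstance(KeyStore.getDefaultType());
        ts.load(null, null);
        try (InputStream in = Files.newInputStream(Paths.get(certPath))) {
            ts.setCertificateEntry("self-signed",
                CertificateFactory.getInstance("X.509").generateCertificate(in));
        }

        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(ts);
        X509TrustManager tm = (X509TrustManager)tmf.getTrustManagers()[0];

        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, new TrustManager[] {tm}, null);

        // The trust manager's getAcceptedIssuers() returns the self-signed cert,
        // never null, so okhttp can build its certificate chain cleaner.
        return new OkHttpClient.Builder()
            .sslSocketFactory(ctx.getSocketFactory(), tm)
            .build();
    }
}
{code}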

 





[jira] [Created] (IGNITE-8247) .NET: ICacheLock.TryEnter() returns false if client is disconnected

2018-04-13 Thread Roman Guseinov (JIRA)
Roman Guseinov created IGNITE-8247:
--

 Summary: .NET: ICacheLock.TryEnter() returns false if client is 
disconnected
 Key: IGNITE-8247
 URL: https://issues.apache.org/jira/browse/IGNITE-8247
 Project: Ignite
  Issue Type: Bug
  Components: cache
Reporter: Roman Guseinov
 Attachments: TryEnterIssue.cs

It seems that TryEnter() doesn't propagate an IgniteClientDisconnectedException 
from Java to .NET.

Reproducer is attached.





[GitHub] ignite pull request #3816: IGNITE-8246: update print errors

2018-04-13 Thread Mmuzaf
GitHub user Mmuzaf opened a pull request:

https://github.com/apache/ignite/pull/3816

IGNITE-8246: update print errors

Fix print error

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Mmuzaf/ignite ignite-8246

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3816.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3816


commit 925250633e008b04c12cbf2967cadfc9490714ea
Author: Maxim Muzafarov 
Date:   2018-04-13T10:04:15Z

IGNITE-8246: use proper generalization




---


[jira] [Created] (IGNITE-8246) Cast exception when using printPartitionState method

2018-04-13 Thread Maxim Muzafarov (JIRA)
Maxim Muzafarov created IGNITE-8246:
---

 Summary: Cast exception when using printPartitionState method
 Key: IGNITE-8246
 URL: https://issues.apache.org/jira/browse/IGNITE-8246
 Project: Ignite
  Issue Type: Bug
Reporter: Maxim Muzafarov
Assignee: Maxim Muzafarov
 Fix For: 2.6


Using {{printPartitionState}} produces an error which obstructs log analysis.

 
{code:java}
[2018-04-13 
12:53:33,055][ERROR][test-runner-#1%distributed.CacheBaselineTopologyTest%][root]
 org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion cannot 
be cast to [C
[2018-04-13 
12:53:33,055][ERROR][test-runner-#1%distributed.CacheBaselineTopologyTest%][root]
 org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion cannot 
be cast to [C
[2018-04-13 
12:53:33,055][ERROR][test-runner-#1%distributed.CacheBaselineTopologyTest%][root]
 org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion cannot 
be cast to [C
[2018-04-13 
12:53:33,055][ERROR][test-runner-#1%distributed.CacheBaselineTopologyTest%][root]
 org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion cannot 
be cast to [C
{code}





[GitHub] ignite pull request #3815: IGNITE-7024

2018-04-13 Thread NSAmelchev
GitHub user NSAmelchev opened a pull request:

https://github.com/apache/ignite/pull/3815

IGNITE-7024



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/NSAmelchev/ignite ignite-7024

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3815.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3815


commit 11424e9eeefc8fe1b8575623c18a8d565a376ab9
Author: NSAmelchev 
Date:   2017-11-28T14:25:13Z

Merge remote-tracking branch 'refs/remotes/apache/master'

commit 0b16700731bda414b8d7921f2c098c5ad1b6540b
Author: NSAmelchev 
Date:   2018-01-19T09:46:31Z

Merge remote-tracking branch 'refs/remotes/apache/master'

commit 0de054fe0a1ae41db58a67d3387d08deaf6d22e2
Author: NSAmelchev 
Date:   2018-02-27T11:00:41Z

Merge remote-tracking branch 'apache/master'

commit 29cb0f44d25621d1e41e00d504c3e7bf44c1d735
Author: NSAmelchev 
Date:   2018-03-15T10:58:37Z

Merge pull request #20 from apache/master

merge

commit 9b0f16930a5e1da4ef3a528b7879cd1fca5307f7
Author: NSAmelchev 
Date:   2018-03-20T08:17:26Z

Merge pull request #21 from apache/master

Merge

commit d888d0e2b5fda995d1aa8e59f51bc181cfd6ca27
Author: NSAmelchev 
Date:   2018-04-10T13:48:07Z

Merge pull request #23 from apache/master

Merge

commit 9d38430687843530d71b5fdb1218534cb2e4985d
Author: NSAmelchev 
Date:   2018-04-12T13:55:02Z

Merge pull request #24 from apache/master

Merge

commit 0e00f13c7ac7496d16ff31015e87572041facdee
Author: NSAmelchev 
Date:   2018-04-13T09:55:27Z

squash comression




---


[jira] [Created] (IGNITE-8245) Web console: "Warning" icon is displayed above "secured key" icon.

2018-04-13 Thread Andrey Novikov (JIRA)
Andrey Novikov created IGNITE-8245:
--

 Summary: Web console: "Warning" icon is displayed above "secured 
key" icon.
 Key: IGNITE-8245
 URL: https://issues.apache.org/jira/browse/IGNITE-8245
 Project: Ignite
  Issue Type: Bug
  Components: wizards
Affects Versions: 2.4
Reporter: Andrey Novikov
 Fix For: 2.5


See attachment. Reproduced in Safari.

Make the actual input borderless, move the border to the outer element, and shrink 
the input element when an error notification has to be shown.





[GitHub] ignite pull request #3798: IGNITE-7829

2018-04-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3798


---


Re: Memory usage per cache

2018-04-13 Thread Vladimir Ozerov
Igniters,

I have several questions regarding overall metrics design:
1) Why do we split PK and non-PK indexes? This is merely an implementation
detail, and it is not clear why we want to pin it on the public API forever.
Other database vendors allow users to get the size of a specific index. For
now I would only show the total size of all indexes, and add something like
an "indexSize(String indexName)" method later.
2) What is the purpose of the "reuseList" metric? Same as p.1 - this is
internal stuff; why do we think users need it? I think it makes sense to
split "public" and "private" parts. "Public" is what makes sense from the
user perspective and will not change in the future. "Private" is our
internal details, which we can show but do not guarantee will not change
over time.
3) What is the difference between "data size" and "data pages size"?
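
Regarding p.1, purely as an illustration of the shape I have in mind (a
hypothetical sketch, not an agreed API):

    // Hypothetical shape only - this method does not exist in the product today.
    public interface CacheGroupIndexMetrics {
        /** @return Total size, in bytes, of all SQL indexes of the cache group. */
        public long totalIndexesSize();

        /** @return Size, in bytes, of the given index, or -1 if there is no such index. */
        public long indexSize(String indexName);
    }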

On Fri, Apr 13, 2018 at 1:41 AM, Denis Magda  wrote:

> Alex, Dmitriy,
>
> Please clarify/consider the following:
>
>- Can we get the size of a particular secondary index with a method like
>getIndexSize(indexName)? Vladimir Ozerov, it should be feasible, right?
>- The new DataRegionMXBean metrics list is not the same as that of the
>DataRegionMetricsMXBean interface. Why is that so, and what's the
>difference between such similar interfaces?
>- I wouldn't do this - *Deprecate
>CacheMetrics.getRebalancingPartitionsCount(); and move to
>CacheGroupMetricsMXBean.getRebalancingPartitionsCount()*. If we redesign
>the way we store our data within data pages in the future, then
>CacheMetrics.getRebalancingPartitionsCount() would make sense.
>
>
> --
> Denis
>
> On Thu, Apr 12, 2018 at 8:46 AM, Alexey Goncharuk <
> alexey.goncha...@gmail.com> wrote:
>
> > Sounds good to me.
> >
> > Folks, any other feedback on metrics API in IGNITE-8078?
> >
> > 2018-04-06 21:36 GMT+03:00 Denis Magda :
> >
> > > Alex,
> > >
> > > Why not return cache group metrics from this method by default and
> > properly
> > > > document it?
> > >
> > >
> > > What do you think about Dmitry's suggestion? It sounds reasonable to
> me.
> > >
> > > --
> > > Denis
> > >
> > > On Wed, Apr 4, 2018 at 12:22 PM, Dmitriy Setrakyan <
> > dsetrak...@apache.org>
> > > wrote:
> > >
> > > > On Wed, Apr 4, 2018 at 5:27 AM, Alexey Goncharuk <
> > > > alexey.goncha...@gmail.com
> > > > > wrote:
> > > >
> > > > > Denis,
> > > > >
> > > > > I think this particular metric should be deprecated. The most we
> can
> > do
> > > > > about it is to return the actual allocated size when a cache is the
> > > only
> > > > > cache in a group and return -1 if there are multiple caches in a
> > group.
> > > > > However, this does not look like a consistent approach to me, so I
> > > would
> > > > > prefer to always return -1 and suggest that users use corresponding
> > > cache
> > > > > group metrics.
> > > > >
> > > >
> > > > Why not return cache group metrics from this method by default and
> > > properly
> > > > document it?
> > > >
> > >
> >
>


Re: Service grid redesign

2018-04-13 Thread Alexey Goncharuk
Denis,

This is a subtle question. It looks like we have now a number of use-cases
when persistent storage is required even for a pure in-memory mode. One of
the use-cases is thin client authentication, the other is service grid
configuration persistence.

Generally, I would agree that this is an expected behavior. However, this
means that a user cannot simply start and stop nodes randomly anymore.
Ignite start will require some sort of installation or work folder
initialization (sort of initdb in postgres) which is ok for
persistence-enabled modes, but I am not sure if this is expected for
in-memory. Of course, we can run this initialization automatically, but it
is not always a good idea.

If we are ok to have these restrictions for in-memory mode, then service
persistence makes sense.

--AG

2018-04-11 22:36 GMT+03:00 Denis Magda :

> Denis,
>
> I think that the service deployment state needs be persisted cluster-wide.
> I guess that our meta-store is capable of doing so. Alex G., Vladimir,
> could you confirm?
>
> As for the split-brain scenarios, I would put them aside for now because,
> anyway, they have to be solved at lower levels (meta store, discovery,
> etc.).
>
> Also, I heard that presently we store a service configuration in the system
> cache that doesn't give us a way to deploy a new version of a service
> without undeployment of the previous one. Will this issue be addressed by
> the new deployment approach?
>
> --
> Denis
>
> On Wed, Apr 11, 2018 at 1:28 AM, Denis Mekhanikov 
> wrote:
>
> > Denis,
> >
> > Sounds reasonable. It's not clear, though, what should happen, if a
> joining
> > node has some services persisted, that are missing on other nodes.
> > Should we deploy them?
> > If we do so, it could lead to surprising behaviour. For example you could
> > kill a node, undeploy a service, then bring back an old node, and it
> would
> > make the service resurrect.
> > We could store some deployment counter along with the service
> > configurations on all nodes, that would show how many times the service
> > state has changed, i.e. it has been undeployed/redeployed. It should be
> > kept for undeployed services as well to avoid situations like I
> described.
> >
> > But it still leaves a possibility of incorrect behaviour, if there was a
> > split-brain situation at some point. I don't think we should precess it
> > somehow, though. If we choose to tackle it, it will overcomplicate things
> > for a sake of a minor improvement.
> >
> > Denis
> >
> > Tue, Apr 10, 2018 at 0:55, Valentin Kulichenko <
> > valentin.kuliche...@gmail.com>:
> >
> > > I was responding to another Denis :) Agree with you on your point
> though.
> > >
> > > -Val
> > >
> > > On Mon, Apr 9, 2018 at 2:48 PM, Denis Magda  wrote:
> > >
> > > > Val,
> > > >
> > > > Guess we're talking about other situations. I'm bringing up the case
> > > when a
> > > > service was deployed dynamically and has to be brought up after a
> full
> > > > cluster restart w/o user intervention. To achieve this we need to
> > persist
> > > > the service's configuration somewhere.
> > > >
> > > > --
> > > > Denis
> > > >
> > > > On Mon, Apr 9, 2018 at 1:42 PM, Valentin Kulichenko <
> > > > valentin.kuliche...@gmail.com> wrote:
> > > >
> > > > > Denis,
> > > > >
> > > > > EVT_CLASS_DEPLOYED should be fired every time a class is deployed
> or
> > > > > redeployed. If this doesn't happen in some cases, I believe this
> > would
> > > > be a
> > > > > bug. I don't think we need to add any new events.
> > > > >
> > > > > -Val
> > > > >
> > > > > On Mon, Apr 9, 2018 at 10:50 AM, Denis Magda 
> > > wrote:
> > > > >
> > > > > > Denis,
> > > > > >
> > > > > > I would encourage us to persist a service configuration in the
> meta
> > > > store
> > > > > > and have this capability enabled by default. That's essential for
> > > > > services
> > > > > > started dynamically. Moreover, we support similar behavior for
> > > caches,
> > > > > > indexes, and other DDL changes happened at runtime.
> > > > > >
> > > > > > --
> > > > > > Denis
> > > > > >
> > > > > > On Mon, Apr 9, 2018 at 9:34 AM, Denis Mekhanikov <
> > > > dmekhani...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Another question, that I would like to discuss is whether
> > services
> > > > > should
> > > > > > > be preserved on cluster restarts.
> > > > > > >
> > > > > > > Currently it depends on persistence configuration. If
> persistence
> > > for
> > > > > any
> > > > > > > data region is enabled, then services will be persisted as
> well.
> > > This
> > > > > is
> > > > > > a
> > > > > > > pretty strange way of configuring this behaviour.
> > > > > > > I'm not sure, if anybody relies on this functionality right
> now.
> > > > Should
> > > > > > we
> > > > > > > support it at all? If yes, should we make it configurable?
> > > > > > >
> > > > > > > Denis
> > > > > > >
> > > > > > > Mon, Apr 9, 2018 at 19:27, 

[GitHub] ignite pull request #3812: IGNITE-8240 .NET: Use default scheduler when star...

2018-04-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3812


---


[GitHub] ignite pull request #3814: Fixed skipping of affinity calculation in case wh...

2018-04-13 Thread sergey-chugunov-1985
GitHub user sergey-chugunov-1985 opened a pull request:

https://github.com/apache/ignite/pull/3814

Fixed skipping of affinity calculation in case when eventNode is not 
affinityNode



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8210

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3814.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3814


commit 301b7f2fa2a4f758eba911c260df1b83a95e9272
Author: Sergey Chugunov 
Date:   2018-04-12T09:41:34Z

IGNITE-8210 test reproducing the problem is added

commit 769c6bb71896c13e45ca7de7c1db2b4680fa7115
Author: Ilya Lantukh 
Date:   2018-04-06T10:49:10Z

ignite-8210 : Fixed skipping of affinity calculation in case when eventNode 
is not affinityNode.

commit 91f0b9085ed4405c5a411b8c1e6ed0e56ecee57c
Author: Sergey Chugunov 
Date:   2018-04-13T08:55:49Z

IGNITE-8210 test was improved to performs actual checks instead of dumping 
info




---


[GitHub] ignite pull request #3691: IGNITE-7691: Provide info about DECIMAL column sc...

2018-04-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3691


---


[jira] [Created] (IGNITE-8244) Sporadic ClusterTopologyCheckedException for the example run

2018-04-13 Thread Sergey Kozlov (JIRA)
Sergey Kozlov created IGNITE-8244:
-

 Summary: Sporadic ClusterTopologyCheckedException for the example 
run
 Key: IGNITE-8244
 URL: https://issues.apache.org/jira/browse/IGNITE-8244
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.4
Reporter: Sergey Kozlov
 Fix For: 2.5


1. Start standalone node
2. Start example from the list:
{noformat}
org.apache.ignite.examples.binary.datagrid.store.auto.CacheBinaryAutoStoreExample
org.apache.ignite.examples.datagrid.store.CacheLoadOnlyStoreExample 
org.apache.ignite.examples.datastructures.IgniteSemaphoreExample    
org.apache.ignite.examples.java8.computegrid.ComputeCallableExample 
org.apache.ignite.examples.java8.datagrid.CacheApiExample   
org.apache.ignite.examples.datagrid.store.auto.CacheAutoStoreExample    
org.apache.ignite.examples.java8.cluster.ClusterGroupExample    
org.apache.ignite.examples.java8.computegrid.ComputeClosureExample  
org.apache.ignite.examples.java8.messaging.MessagingExample 
org.apache.ignite.examples.messaging.MessagingExample   
org.apache.ignite.scalar.examples.ScalarTaskExample 
org.apache.ignite.examples.datagrid.store.jdbc.CacheJdbcStoreExample    
org.apache.ignite.examples.java8.computegrid.ComputeAsyncExample    
org.apache.ignite.examples.java8.computegrid.ComputeRunnableExample 
org.apache.ignite.examples.java8.datagrid.CacheEntryProcessorExample    
org.apache.ignite.examples.java8.messaging.MessagingPingPongExample 
org.apache.ignite.examples.datagrid.store.spring.CacheSpringStoreExample    
org.apache.ignite.examples.java8.computegrid.ComputeBroadcastExample    
org.apache.ignite.examples.java8.datagrid.CacheAffinityExample  
org.apache.ignite.examples.java8.datastructures.IgniteExecutorServiceExample 
{noformat}
3. Sometimes the example ends up with the following exception:
{noformat}
class org.apache.ignite.internal.cluster.ClusterTopologyCheckedException: 
Failed to send message because node left grid 
[nodeId=178785f1-6df2-4395-82ba-5636b496f6cd, 
msg=GridDhtTxOnePhaseCommitAckRequest [vers=[GridCacheVersion 
[topVer=135009857, time=1523529863612, order=1523529860859, nodeOrder=1], 
GridCacheVersion [topVer=135009857, time=1523529863612, order=1523529860860, 
nodeOrder=1], GridCacheVersion [topVer=135009857, time=1523529863623, 
order=1523529860866, nodeOrder=1], GridCacheVersion [topVer=135009857, 
time=1523529863625, order=1523529860869, nodeOrder=1]], super=GridCacheMessage 
[msgId=-1, depInfo=null, err=null, skipPrepare=false, cacheId=0, cacheId=0]]]
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.send(GridCacheIoManager.java:1065)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager$2.finish(IgniteTxManager.java:283)
at 
org.apache.ignite.internal.processors.cache.GridDeferredAckMessageSender$DeferredAckMessageBuffer.finish0(GridDeferredAckMessageSender.java:214)
at 
org.apache.ignite.internal.processors.cache.GridDeferredAckMessageSender$DeferredAckMessageBuffer.access$000(GridDeferredAckMessageSender.java:111)
at 
org.apache.ignite.internal.processors.cache.GridDeferredAckMessageSender$DeferredAckMessageBuffer$1.run(GridDeferredAckMessageSender.java:159)
at 
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6640)
at 
org.apache.ignite.internal.processors.closure.GridClosureProcessor$1.body(GridClosureProcessor.java:788)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}





Re: Reconsider default WAL mode: we need something between LOG_ONLY and FSYNC

2018-04-13 Thread Ivan Rakov

Agree with Alex.

Now we perform an extra WAL fsync() at the beginning of a checkpoint. We 
*have* to wait for the call to complete before starting to write checkpoint 
pages - otherwise both the physical records in the WAL and the partition 
files in storage will be in a mess in case of power loss. User threads 
*don't* directly wait for this fsync(); however, the total throughput of 
user threads can't exceed the total throughput of the checkpoint, which is 
why the total throughput of user threads decreases.


Denis, regarding this:


Could we run Yardstick or YCSB benchmarks to see how the fixed LOG_ONLY
affected the performance under the operational load (after the preloading
part you're referring to is over)?


Please take a look at the benchmark results attached to the 
https://issues.apache.org/jira/browse/IGNITE-7754 ticket - the "put" 
benchmarks represent data loading, and the "put-get" benchmarks represent 
operational load. As you can see, the operational load degradation is 4-5 
times smaller than in the data loading case.
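
For readers following the thread, a minimal sketch (my own, with a made-up 
cache name) of the two knobs involved here - the WAL mode setting and the 
runtime WAL switch that can be used during data loading:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.configuration.WALMode;

    public class WalModeSketch {
        public static void main(String[] args) {
            DataStorageConfiguration dsCfg = new DataStorageConfiguration();
            dsCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
            dsCfg.setWalMode(WALMode.LOG_ONLY); // the default mode discussed in this thread

            Ignite ignite = Ignition.start(
                new IgniteConfiguration().setDataStorageConfiguration(dsCfg));

            ignite.cluster().active(true);
            ignite.getOrCreateCache("myCache"); // made-up cache name

            // The runtime switch mentioned further down in this thread: skip the WAL
            // while preloading, then turn it back on once the bulk load is finished.
            ignite.cluster().disableWal("myCache");
            // ... bulk data loading ...
            ignite.cluster().enableWal("myCache");
        }
    }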


Best Regards,
Ivan Rakov

On 13.04.2018 11:24, Alexey Goncharuk wrote:

Dmitriy,

The point of this fsync is to order FS disk writes to prevent data
corruption, so this fsync has to be synchronous and cannot be asynchronous
or delayed.

Given that we fix correctness, I believe that current results are
acceptable.

2018-04-13 2:48 GMT+03:00 Dmitriy Setrakyan :


On Thu, Apr 12, 2018 at 9:45 AM, Ivan Rakov  wrote:


Dmitriy,

fsync() is a really slow operation - it's the main reason why FSYNC mode is
way slower than LOG_ONLY.
The fix includes extra fsyncs in the necessary parts of the code and nothing more.
Every part is important - at the beginning of the thread I described why.

20% slower in a benchmark doesn't mean that Ignite itself will become 20%
slower. The benchmark replays only the "data loading" scenario. It signals that
the maximum throughput with the WAL enabled will be 20% lower. By the way, we
already have an option to disable the WAL at runtime for the period of data
loading.



Ivan, I get it, but I am sure that you can do more things in parallel. Do
we wait for the fsync call to complete? If yes, do we have to wait? Are
there other performance optimizations you can add, considering that we are
in LOG_ONLY or BACKGROUND modes and disk writes may be delayed.

D.





Re: Apache Ignite 2.5 release

2018-04-13 Thread Anton Vinogradov
Andrey, thanks for control :)

So, You'll fix broken versions eventually?

BTW, I don't think it's a good idea to merge issues with fix version 2.5 to
ignite-2.5. Good way is to fix version to 2.6 instead.

2018-04-12 21:34 GMT+03:00 Andrey Gura :

> Anton,
>
> all is under control.
>
> Branches will be compared and changes that should be included to AI
> 2.5 will be identified.
>
> On Thu, Apr 12, 2018 at 6:19 PM, Petr Ivanov  wrote:
> > Possibly it is Andrey Gura — he initiated this thread and created
> corresponding branch.
> >
> >
> >> On 12 Apr 2018, at 17:39, Anton Vinogradov  wrote:
> >>
> >> Release manager is responsible for this change.
> >> Do we have release manager for 2.5?
> >>
> >> 2018-04-12 17:35 GMT+03:00 Dmitry Pavlov :
> >>
> >>> I've changed my ticket version assignment, and a lot of Igniters
> changed.
> >>>
> >>> Filter for double-check tickets related to you
> >>> *https://issues.apache.org/jira/issues/?jql=project%
> >>> 3DIGNITE%20AND%20fixVersion%3D2.5%20and%20resolution%20is%
> >>> 20EMPTY%20%20and%20(assignee%3DcurrentUser()%20or%
> >>> 20reporter%3DcurrentUser())
> >>>  >>> 3DIGNITE%20AND%20fixVersion%3D2.5%20and%20resolution%20is%
> >>> 20EMPTY%20%20and%20(assignee%3DcurrentUser()%20or%
> >>> 20reporter%3DcurrentUser())>*
> >>>
> >>>
> >>> Thu, Apr 12, 2018 at 17:24, Anton Vinogradov :
> >>>
>  Folks,
>  I see a lot of issues resolved as 2.5 but not merged to ignite-2.5
> >>> branch.
> 
>  Who is in charge of release 2.5, why (first time in history) nobody
> >>> changes
>  all 2.5 to 2.6?
> 
>  2018-04-06 10:19 GMT+03:00 Petr Ivanov :
> 
> > Added corresponding triggers for ignite-2.5 in Ignite Tests 2.4+
> >>> project
> > in TC.
> >
> >
> >
> >> On 5 Apr 2018, at 21:55, Denis Magda  wrote:
> >>
> >> Thanks Andrey!
> >>
> >> Folks, if you'd like to add anything to 2.5 please make sure it gets
> > merged
> >> into 2.5 branch.
> >>
> >> --
> >> Denis
> >>
> >> On Thu, Apr 5, 2018 at 11:29 AM, Andrey Gura 
> >>> wrote:
> >>
> >>> Hi,
> >>>
> >>> I've created branch ignite-2.5 for Apache Ignite 2.5 release.
> >>>
> >>> As always please follow the rules below when merging new commits to
> > master:
> >>>
> >>> 1) If ticket is targeted for 2.5 release, please merge to master,
> >>> then
> >>> cherry-pick to ignite-2.5
> >>> 2) Otherwise just merge to master.
> >>>
> >>>
> >>>
> >>> On Wed, Apr 4, 2018 at 9:11 PM, Andrey Gura 
> >>> wrote:
>  Igniters,
> 
>  It's time to create branch for upcoming Apache Ignite 2.5 release
> >>> in
>  order to start stabilization process.
> 
>  If there are no any objections I'll create ignite-2.5 branch
>  tomorrow.
> 
>  Also please check JIRA issues assigned to you and move it to the
> >>> next
>  version if this issues shouldn't be included to 2.5 release.
> 
>  Release page on wiki [1] contains all issues targeted to 2.5 (fix
>  version field).
> 
>  [1] https://cwiki.apache.org/confluence/display/IGNITE/
> > Apache+Ignite+2.5
> >>>
> >
> >
> 
> >>>
> >
>


Re: Reconsider default WAL mode: we need something between LOG_ONLY and FSYNC

2018-04-13 Thread Alexey Goncharuk
Dmitriy,

The point of this fsync is to order FS disk writes to prevent data
corruption, so this fsync has to be synchronous and cannot be asynchronous
or delayed.

Given that we fix correctness, I believe that current results are
acceptable.

2018-04-13 2:48 GMT+03:00 Dmitriy Setrakyan :

> On Thu, Apr 12, 2018 at 9:45 AM, Ivan Rakov  wrote:
>
> > Dmitriy,
> >
> > fsync() is a really slow operation - it's the main reason why FSYNC mode
> > is way slower than LOG_ONLY.
> > The fix includes extra fsyncs in the necessary parts of the code and
> > nothing more.
> > Every part is important - at the beginning of the thread I described why.
> >
> > 20% slower in a benchmark doesn't mean that Ignite itself will become 20%
> > slower. The benchmark replays only the "data loading" scenario. It signals
> > that the maximum throughput with the WAL enabled will be 20% lower. By the
> > way, we already have an option to disable the WAL at runtime for the
> > period of data loading.
> >
> >
> Ivan, I get it, but I am sure that you can do more things in parallel. Do
> we wait for the fsync call to complete? If yes, do we have to wait? Are
> there other performance optimizations you can add, considering that we are
> in LOG_ONLY or BACKGROUND modes and disk writes may be delayed.
>
> D.
>


[jira] [Created] (IGNITE-8243) Possible memory leak at ExchangeLatchManager during dynamic creating/removing of the local caches

2018-04-13 Thread Andrey Aleksandrov (JIRA)
Andrey Aleksandrov created IGNITE-8243:
--

 Summary: Possible memory leak at ExchangeLatchManager during 
dynamic creating/removing of the local caches
 Key: IGNITE-8243
 URL: https://issues.apache.org/jira/browse/IGNITE-8243
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.5
Reporter: Andrey Aleksandrov
 Attachments: image.png, reproducer.java

A reproducer is attached. A memory analyzer report is attached too.

It looks like the following collection never removes its items in the case of 
dynamic creation/removal of local caches:



/** Server latches collection. */
private final ConcurrentMap<T2<String, AffinityTopologyVersion>, ServerLatch> 
serverLatches = new ConcurrentHashMap<>();

To see it, you can modify the source code a little:



private Latch createServerLatch(String id, AffinityTopologyVersion topVer, 
Collection<ClusterNode> participants) {
 final T2<String, AffinityTopologyVersion> latchId = new T2<>(id, topVer);

 if (serverLatches.containsKey(latchId))
 return serverLatches.get(latchId);

 ServerLatch latch = new ServerLatch(id, topVer, participants);

 serverLatches.put(latchId, latch);

 if (log.isDebugEnabled())
 log.debug("Server latch is created [latch=" + latchId + ", participantsSize=" 
+ participants.size() + "]");

 log.error("Server latch is created [size=" + serverLatches.size() +
 ", latchId = " + latchId + "]");

And add some breakpoints in places where removing can be done.

The log should look like this:

[2018-04-13 09:55:44,911][ERROR][exchange-worker-#42][ExchangeLatchManager] 
Server latch is created [size=1990, latchId = IgniteBiTuple [val1=exchange, 
val2=AffinityTopologyVersion [topVer=1, minorTopVer=1989]]]
[2018-04-13 09:55:44,911][ERROR][exchange-worker-#42][ExchangeLatchManager] 
Server latch is created [size=1991, latchId = IgniteBiTuple [val1=exchange, 
val2=AffinityTopologyVersion [topVer=1, minorTopVer=1990]]]
[2018-04-13 09:55:44,911][ERROR][exchange-worker-#42][ExchangeLatchManager] 
Server latch is created [size=1992, latchId = IgniteBiTuple [val1=exchange, 
val2=AffinityTopologyVersion [topVer=1, minorTopVer=1991]]]
[2018-04-13 09:55:44,926][ERROR][exchange-worker-#42][ExchangeLatchManager] 
Server latch is created [size=1993, latchId = IgniteBiTuple [val1=exchange, 
val2=AffinityTopologyVersion [topVer=1, minorTopVer=1992]]]
[2018-04-13 09:55:44,926][ERROR][exchange-worker-#42][ExchangeLatchManager] 
Server latch is created [size=1994, latchId = IgniteBiTuple [val1=exchange, 
val2=AffinityTopologyVersion [topVer=1, minorTopVer=1993]]]
[2018-04-13 09:55:44,926][ERROR][exchange-worker-#42][ExchangeLatchManager] 
Server latch is created [size=1995, latchId = IgniteBiTuple [val1=exchange, 
val2=AffinityTopologyVersion [topVer=1, minorTopVer=1994]]]
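
For reference, a sketch of the create/destroy loop that drives this growth (an 
illustration only; the actual reproducer.java is attached to the ticket):
{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class LocalCacheChurnSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Each create/destroy of a LOCAL cache bumps the minor topology version,
        // producing a new exchange latch entry per iteration.
        for (int i = 0; i < 2_000; i++) {
            String name = "local-cache-" + i;

            ignite.createCache(new CacheConfiguration<>(name).setCacheMode(CacheMode.LOCAL));
            ignite.destroyCache(name);
        }
    }
}
{code}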


