Re: Why SQL_PUBLIC is appending to Cache name while using JDBC thin driver

2017-10-09 Thread Dmitriy Setrakyan
On Mon, Oct 9, 2017 at 12:05 PM, Vladimir Ozerov 
wrote:

> Because it should be possible to have two tables with the same name in
> different schemas.
>

Still confused. If we have multiple schemas with the same table/cache name,
the schema name should be enough to identify a table. Currently, using your
words, the solution looks like a "hack". Why not just remove the prefix and
check for uniqueness within a schema?

D.


>
> On Mon, Oct 9, 2017 at 9:21 PM, Dmitriy Setrakyan 
> wrote:
>
> > On Mon, Oct 9, 2017 at 1:27 AM, Vladimir Ozerov 
> > wrote:
> >
> > > Hi Dima,
> > >
> > > To maintain unique cache names across the cluster.
> > >
> >
> > Why not simply check for uniqueness at creation time? Why introduce some
> > automatic prefix?
> >
> >
> > >
> > > On Mon, Oct 9, 2017 at 7:34 AM, Dmitriy Setrakyan <
> dsetrak...@apache.org
> > >
> > > wrote:
> > >
> > > > Cross-sending to dev@
> > > >
> > > > Why do we need to append SQL_PUBLIC_ to all table names?
> > > >
> > > > D.
> > > >
> > > > -- Forwarded message --
> > > > From: Denis Magda 
> > > > Date: Sun, Oct 8, 2017 at 7:01 AM
> > > > Subject: Re: Why SQL_PUBLIC is appending to Cache name while using
> JDBC
> > > > thin driver
> > > > To: "u...@ignite.apache.org" 
> > > >
> > > >
> > > > Hi Austin,
> > > >
> > > > Yes, it will be possible to pass a cache name you like into CREATE
> > TABLE
> > > > command in 2.3. The release should be available in a couple of weeks.
> > > > Follow our announcements.
> > > >
> > > > Denis
> > > >
> > > >
> > > > On Saturday, October 7, 2017, austin solomon <
> > > austin.solomon...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I am using Ignite version 2.2.0, and I have created a table using
> > > > > IgniteJdbcThinDriver.
> > > > >
> > > > > When I checked the cache in Ignite Visor I'm seeing
> > > > SQL_PUBLIC_{TABLE-NAME}
> > > > > is appended.
> > > > > Is there a way to get rid of this?
> > > > >
> > > > > I want to remove the SQL_PUBLIC from the cache name.
> > > > >
> > > > > Thanks,
> > > > > Austin
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> > > > >
> > > >
> > >
> >
>
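For illustration, a minimal JDBC thin driver sketch of the 2.3 behavior Denis describes above. The CACHE_NAME parameter of the WITH clause is an assumption here, so treat this as a sketch rather than final syntax:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateTableCacheName {
    public static void main(String[] args) throws Exception {
        // Connect to a local node with the JDBC thin driver.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
             Statement stmt = conn.createStatement()) {
            // Without an explicit cache name the underlying cache is called SQL_PUBLIC_CITY.
            // With the assumed CACHE_NAME parameter (2.3) the cache is simply named "City".
            stmt.executeUpdate("CREATE TABLE City (id LONG PRIMARY KEY, name VARCHAR) " +
                "WITH \"CACHE_NAME=City\"");
        }
    }
}
{code}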


[GitHub] ignite pull request #2823: IGNITE-6569 Exception on DROP TABLE via cache API...

2017-10-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/2823


---


Re: Why SQL_PUBLIC is appending to Cache name while using JDBC thin driver

2017-10-09 Thread Vladimir Ozerov
Because it should be possible to have two tables with the same name in
different schemas.

On Mon, Oct 9, 2017 at 9:21 PM, Dmitriy Setrakyan 
wrote:

> On Mon, Oct 9, 2017 at 1:27 AM, Vladimir Ozerov 
> wrote:
>
> > Hi Dima,
> >
> > To maintain unique cache names across the cluster.
> >
>
> Why not simply check for uniqueness at creation time? Why introduce some
> automatic prefix?
>
>
> >
> > On Mon, Oct 9, 2017 at 7:34 AM, Dmitriy Setrakyan  >
> > wrote:
> >
> > > Cross-sending to dev@
> > >
> > > Why do we need to append SQL_PUBLIC_ to all table names?
> > >
> > > D.
> > >
> > > -- Forwarded message --
> > > From: Denis Magda 
> > > Date: Sun, Oct 8, 2017 at 7:01 AM
> > > Subject: Re: Why SQL_PUBLIC is appending to Cache name while using JDBC
> > > thin driver
> > > To: "u...@ignite.apache.org" 
> > >
> > >
> > > Hi Austin,
> > >
> > > Yes, it will be possible to pass a cache name you like into CREATE
> TABLE
> > > command in 2.3. The release should be available in a couple of weeks.
> > > Follow our announcements.
> > >
> > > Denis
> > >
> > >
> > > On Saturday, October 7, 2017, austin solomon <
> > austin.solomon...@gmail.com>
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > I am using Ignite version 2.2.0, and I have created a table using
> > > > IgniteJdbcThinDriver.
> > > >
> > > > When I checked the cache in Ignite Visor I'm seeing
> > > SQL_PUBLIC_{TABLE-NAME}
> > > > is appended.
> > > > Is there a way to get rid of this?
> > > >
> > > > I want to remove the SQL_PUBLIC from the cache name.
> > > >
> > > > Thanks,
> > > > Austin
> > > >
> > > >
> > > >
> > > > --
> > > > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> > > >
> > >
> >
>


Re: Integration of Spark and Ignite. Prototype.

2017-10-09 Thread Николай Ижиков
Hello, Valentin.

Did you have a chance to look at my changes?

Now I think I have implemented almost all of the required features.
I want to run some performance tests to ensure my implementation works
properly with a significant amount of data.
And I definitely need some feedback on my changes.


2017-10-09 18:45 GMT+03:00 Николай Ижиков :

> Hello, guys.
>
> Which version of Spark do we want to use?
>
> 1. Currently, Ignite depends on Spark 2.1.0.
>
> * Can be run on JDK 7.
> * Still supported: 2.1.2 will be released soon.
>
> 2. Latest Spark version is 2.2.0.
>
> * Can be run only on JDK 8+
> * Released Jul 11, 2017.
> * Already supported by major vendors (Amazon, for example).
>
> Note that in IGNITE-3084 I rely on some internal Spark API,
> so it will take some effort to switch between Spark 2.1 and 2.2.
>
>
> 2017-09-27 2:20 GMT+03:00 Valentin Kulichenko <
> valentin.kuliche...@gmail.com>:
>
>> I will review in the next few days.
>>
>> -Val
>>
>> On Tue, Sep 26, 2017 at 2:23 PM, Denis Magda  wrote:
>>
>> > Hello Nikolay,
>> >
>> > This is good news. Finally this capability is coming to Ignite.
>> >
>> > Val, Vladimir, could you do a preliminary review?
>> >
>> > Answering on your questions.
>> >
>> > 1. Yardstick should be enough for performance measurements. As a Spark
>> > user, I will be curious to know what’s the point of this integration.
>> > Probably we need to compare Spark + Ignite and Spark + Hive or Spark +
>> > RDBMS cases.
>> >
>> > 2. If Spark community is reluctant let’s include the module in
>> > ignite-spark integration.
>> >
>> > —
>> > Denis
>> >
>> > > On Sep 25, 2017, at 11:14 AM, Николай Ижиков 
>> > wrote:
>> > >
>> > > Hello, guys.
>> > >
>> > > Currently, I’m working on integration between Spark and Ignite [1].
>> > >
>> > > For now, I implement following:
>> > >* Ignite DataSource implementation(IgniteRelationProvider)
>> > >* DataFrame support for Ignite SQL table.
>> > >* IgniteCatalog implementation for transparent resolving of Ignite
>> > > SQL tables.
>> > >
>> > > Implementation of it can be found in PR [2]
>> > > It would be great if someone provides feedback for a prototype.
>> > >
> > > > I made some examples in the PR so you can see how the API is supposed to be used
>> [3].
>> > > [4].
>> > >
>> > > I need some advice. Can you help me?
>> > >
>> > > 1. How should this PR be tested?
>> > >
>> > > Of course, I need to provide some unit tests. But what about
>> scalability
>> > > tests, etc.
>> > > Maybe we need some Yardstick benchmark or similar?
>> > > What are your thoughts?
>> > > Which scenarios should I consider in the first place?
>> > >
>> > > 2. Should we provide Spark Catalog implementation inside Ignite
>> codebase?
>> > >
>> > > The current implementation of the Spark Catalog is based on *internal Spark
>> API*.
>> > > Spark community seems not interested in making Catalog API public or
>> > > including Ignite Catalog in Spark code base [5], [6].
>> > >
>> > > *Should we include Spark internal API implementation inside Ignite
>> code
>> > > base?*
>> > >
>> > > Or should we consider including the Catalog implementation in some
>> > > external module that will be created and released outside Ignite?
>> > > (We could still support and develop it inside the Ignite community.)
>> > >
>> > > [1] https://issues.apache.org/jira/browse/IGNITE-3084
>> > > [2] https://github.com/apache/ignite/pull/2742
>> > > [3] https://github.com/apache/ignite/pull/2742/files#diff-
>> > > f4ff509cef3018e221394474775e0905
>> > > [4] https://github.com/apache/ignite/pull/2742/files#diff-
>> > > f2b670497d81e780dfd5098c5dd8a89c
>> > > [5] http://apache-spark-developers-list.1001551.n3.
>> > > nabble.com/Spark-Core-Custom-Catalog-Integration-between-
>> > > Apache-Ignite-and-Apache-Spark-td22452.html
>> > > [6] https://issues.apache.org/jira/browse/SPARK-17767
>> > >
>> > > --
>> > > Nikolay Izhikov
>> > > nizhikov@gmail.com
>> >
>> >
>>
>
>
>
> --
> Nikolay Izhikov
> nizhikov@gmail.com
>



-- 
Nikolay Izhikov
nizhikov@gmail.com
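As a rough usage sketch of the DataSource being discussed above: reading an Ignite SQL table as a Spark DataFrame through IgniteRelationProvider. The format short name "ignite" and the option keys "config" and "table" are assumptions for illustration, not the final API:

{code:java}
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class IgniteDataFrameSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("ignite-dataframe")
            .master("local[*]")
            .getOrCreate();

        // Read an Ignite SQL table as a Spark DataFrame via the Ignite data source.
        Dataset<Row> persons = spark.read()
            .format("ignite")                      // assumed short name of the data source
            .option("config", "ignite-config.xml") // assumed option: path to Ignite configuration
            .option("table", "PERSON")             // assumed option: Ignite SQL table name
            .load();

        persons.printSchema();
        persons.show();
    }
}
{code}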


Re: Why SQL_PUBLIC is appending to Cache name while using JDBC thin driver

2017-10-09 Thread Dmitriy Setrakyan
On Mon, Oct 9, 2017 at 1:27 AM, Vladimir Ozerov 
wrote:

> Hi Dima,
>
> To maintain unique cache names across the cluster.
>

Why not simply check for uniqueness at creation time? Why introduce some
automatic prefix?


>
> On Mon, Oct 9, 2017 at 7:34 AM, Dmitriy Setrakyan 
> wrote:
>
> > Cross-sending to dev@
> >
> > Why do we need to append SQL_PUBLIC_ to all table names?
> >
> > D.
> >
> > -- Forwarded message --
> > From: Denis Magda 
> > Date: Sun, Oct 8, 2017 at 7:01 AM
> > Subject: Re: Why SQL_PUBLIC is appending to Cache name while using JDBC
> > thin driver
> > To: "u...@ignite.apache.org" 
> >
> >
> > Hi Austin,
> >
> > Yes, it will be possible to pass a cache name you like into CREATE TABLE
> > command in 2.3. The release should be available in a couple of weeks.
> > Follow our announcements.
> >
> > Denis
> >
> >
> > On Saturday, October 7, 2017, austin solomon <
> austin.solomon...@gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > I am using Ignite version 2.2.0, and I have created a table using
> > > IgniteJdbcThinDriver.
> > >
> > > When I checked the cache in Ignite Visor I'm seeing
> > SQL_PUBLIC_{TABLE-NAME}
> > > is appended.
> > > Is there a way to get rid of this?
> > >
> > > I want to remove the SQL_PUBLIC from the cache name.
> > >
> > > Thanks,
> > > Austin
> > >
> > >
> > >
> > > --
> > > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> > >
> >
>


Re: Download links on ignite.apache.org not always working

2017-10-09 Thread Dmitriy Setrakyan
Ilya,

I do not see this mirror in the list. Do you still see it? If yes, we can
try filing an INFRA issue in JIRA.

D.

On Mon, Oct 9, 2017 at 10:00 AM, Ilya Kasnacheev 
wrote:

> Hello Igniters,
>
> It came to my attention that I am offered to download
> http://mirror.linux-ia64.org/apache//ignite/2.2.0/apache-
> ignite-fabric-2.2.0-bin.zip
> This link looks fishy to me, and it doesn't work anyway; I am unable to connect.
>
> Can we please at least kick mirror.linux-ia64.org from the list of
> mirrors?
>
> This is from
> https://ignite.apache.org/download.cgi
>
> Regards,
>
> --
> Ilya Kasnacheev
>


Re: Adding sqlline tool to Apache Ignite project

2017-10-09 Thread Oleg Ostanin
New build with fixed argument parsing:
https://ci.ignite.apache.org/viewLog.html?buildId=882282=artifacts=IgniteRelease_XxxFromMirrorIgniteRelease3PrepareVote#!1rrb2,1esn4zrslm4po,-h8h0hn9vvvxp

On Mon, Oct 9, 2017 at 5:38 PM, Denis Magda  wrote:

> I think it’s a must have for the ticket resolution.
>
> Denis
>
> On Monday, October 9, 2017, Anton Vinogradov 
> wrote:
>
> > Any plans to have ignitesql.bat?
> >
> > On Mon, Oct 9, 2017 at 5:29 PM, Oleg Ostanin  > > wrote:
> >
> > > Another build with sqlline included:
> > > https://ci.ignite.apache.org/viewLog.html?buildId=881120;
> > > tab=artifacts=IgniteRelease_XxxFromMirrorIgniteRelease3Pre
> > > pareVote#!1rrb2,-wpvx2aopzexz,1esn4zrslm4po,-h8h0hn9vvvxp
> > >
> > > On Sun, Oct 8, 2017 at 5:11 PM, Denis Magda  > > wrote:
> > >
> > > > No more doubts on my side. +1 for Vladimir’s suggestion.
> > > >
> > > > Denis
> > > >
> > > > On Saturday, October 7, 2017, Dmitriy Setrakyan <
> dsetrak...@apache.org
> > >
> > > > wrote:
> > > >
> > > > > I now tend to agree with Vladimir. We should always require that
> some
> > > > > address is specified. The help menu should clearly state how to
> > connect
> > > > to
> > > > > a localhost.
> > > > >
> > > > > D.
> > > > >
> > > > > On Sat, Oct 7, 2017 at 12:44 AM, Vladimir Ozerov <
> > voze...@gridgain.com 
> > > > > >
> > > > > wrote:
> > > > >
> > > > > > Denis,
> > > > > >
> > > > > > Default Ignite configuration uses multicast, this is why you do
> not
> > > > need
> > > > > to
> > > > > > change anything. Ignite node is always both a server (listens)
> and
> > a
> > > > > client
> > > > > > (connects).
> > > > > >
> > > > > > This will not work for ignitesql, as this is a client. And in
> real
> > > > > > deployments it will connect to remote nodes, not local. So the
> > > earlier
> > > > we
> > > > > > explain user how to do this, the better. This is why it should
> not
> > > work
> > > > > out
> > > > > > of the box connecting to 127.0.0.1. No magic for users please.
> > > > > >
> > > > > > This is what user will see (draft):
> > > > > > > ./ignitesql.sh
> > > > > > > Please specify the host: ignitesql.sh [host]; type --help for
> > more
> > > > > > information.
> > > > > > > ./ignitesql.sh 192.168.12.55
> > > > > > > Connected successfully.
> > > > > >
> > > > > > Again, specifying parameters manually is not poor UX. This is
> > > excellent
> > > > > UX,
> > > > > > as user learns on his own how to connect to a node in 1 minute.
> > Most
> > > > > > command line tools work this way.
> > > > > >
> > > > > > Sat, Oct 7, 2017 at 7:12, Dmitriy Setrakyan <
> > dsetrak...@apache.org 
> > > > > >:
> > > > > >
> > > > > > > How does the binding happen? Can we bind to everything, like we
> > do
> > > in
> > > > > > > Ignite?
> > > > > > >
> > > > > > > On Fri, Oct 6, 2017 at 2:51 PM, Denis Magda  > 
> > > > > > wrote:
> > > > > > >
> > > > > > > > Thought over 127.0.0.1 as a default host once again. The bad
> > > thing
> > > > > > about
> > > > > > > > it is that the user gets a lengthy exception stack trace if
> > > Ignite
> > > > > is
> > > > > > > not
> > > > > > > > running locally and not a small error message.
> > > > > > > >
> > > > > > > > What are the other opinions on this? Do we want to follow
> > > > Vladimir’s
> > > > > > > > suggestion forcing to set the host name/IP (port is optional)
> > for
> > > > the
> > > > > > > sake
> > > > > > > > of usability or leave 127.0.0.1 as the default?
> > > > > > > >
> > > > > > > > —
> > > > > > > > Denis
> > > > > > > >
> > > > > > > > > On Oct 6, 2017, at 12:21 PM, Denis Magda <
> dma...@apache.org
> > 
> > > > > > wrote:
> > > > > > > > >
> > > > > > > > >> But, we need to support “help” (-h, -help) argument
> listing
> > > all
> > > > > the
> > > > > > > > parameters accepted by the tools.
> > > > > > > > >
> > > > > > > > > Meant accepted by the ignitesql script only such as host
> > name.
> > > > > > > > >
> > > > > > > > > —
> > > > > > > > > Denis
> > > > > > > > >
> > > > > > > > >> On Oct 6, 2017, at 12:20 PM, Denis Magda <
> dma...@apache.org
> > 
> > > > > > wrote:
> > > > > > > > >>
> > > > > > > > >> Really nice, could click through the getting started [1]
> in
> > a
> > > > > > minute!
> > > > > > > > >>
> > > > > > > > >> +1 to rename the script to “ignitesql”. Vladimir’s point
> > makes
> > > > > total
> > > > > > > > sense.
> > > > > > > > >>
> > > > > > > > >> However, tend to disagree that the host has to be
> requested
> > > all
> > > > > the
> > > > > > > > times. We never request a configuration or host name for
> > > ignite.sh,
> > > > > > visor
> > > > > > > > or web agent scripts. I would follow this approach that’s
> > > excellent
> > > > > for
> > > > > 
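For context, a minimal sketch of what such an ignitesql wrapper could do: refuse to start without a host, then hand the connection URL to sqlline. The class name IgniteSql is hypothetical; sqlline's -d/-u options and the Ignite JDBC thin driver class are the only real pieces assumed here.

{code:java}
public class IgniteSql {
    public static void main(String[] args) throws Exception {
        if (args.length == 0) {
            // Mirror the draft UX above: no magic default host.
            System.out.println("Please specify the host: ignitesql [host]; type --help for more information.");
            return;
        }

        String host = args[0];

        // Delegate to sqlline with the Ignite JDBC thin driver.
        sqlline.SqlLine.main(new String[] {
            "-d", "org.apache.ignite.IgniteJdbcThinDriver",
            "-u", "jdbc:ignite:thin://" + host
        });
    }
}
{code}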

Download links on ignite.apache.org not always working

2017-10-09 Thread Ilya Kasnacheev
Hello Igniters,

It came to my attention that I am offered to download
http://mirror.linux-ia64.org/apache//ignite/2.2.0/apache-ignite-fabric-2.2.0-bin.zip
This link looks fishy to me, and it doesn't work anyway; I am unable to connect.

Can we please at least kick mirror.linux-ia64.org from the list of mirrors?

This is from
https://ignite.apache.org/download.cgi

Regards,

-- 
Ilya Kasnacheev


[jira] [Created] (IGNITE-6585) SVM for Apache Ignite ML module

2017-10-09 Thread Yury Babak (JIRA)
Yury Babak created IGNITE-6585:
--

 Summary: SVM for Apache Ignite ML module
 Key: IGNITE-6585
 URL: https://issues.apache.org/jira/browse/IGNITE-6585
 Project: Ignite
  Issue Type: New Feature
  Components: ml
Reporter: Yury Babak


SVM (support vector machine) is a pretty common algorithm, and I think we need it in our ML module.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Integration of Spark and Ignite. Prototype.

2017-10-09 Thread Николай Ижиков
Hello, guys.

Which version of Spark do we want to use?

1. Currently, Ignite depends on Spark 2.1.0.

* Can be run on JDK 7.
* Still supported: 2.1.2 will be released soon.

2. Latest Spark version is 2.2.0.

* Can be run only on JDK 8+
* Released Jul 11, 2017.
* Already supported by major vendors (Amazon, for example).

Note that in IGNITE-3084 I rely on some internal Spark API,
so it will take some effort to switch between Spark 2.1 and 2.2.


2017-09-27 2:20 GMT+03:00 Valentin Kulichenko :

> I will review in the next few days.
>
> -Val
>
> On Tue, Sep 26, 2017 at 2:23 PM, Denis Magda  wrote:
>
> > Hello Nikolay,
> >
> > This is good news. Finally this capability is coming to Ignite.
> >
> > Val, Vladimir, could you do a preliminary review?
> >
> > Answering on your questions.
> >
> > 1. Yardstick should be enough for performance measurements. As a Spark
> > user, I will be curious to know what’s the point of this integration.
> > Probably we need to compare Spark + Ignite and Spark + Hive or Spark +
> > RDBMS cases.
> >
> > 2. If Spark community is reluctant let’s include the module in
> > ignite-spark integration.
> >
> > —
> > Denis
> >
> > > On Sep 25, 2017, at 11:14 AM, Николай Ижиков 
> > wrote:
> > >
> > > Hello, guys.
> > >
> > > Currently, I’m working on integration between Spark and Ignite [1].
> > >
> > > For now, I implement following:
> > >* Ignite DataSource implementation(IgniteRelationProvider)
> > >* DataFrame support for Ignite SQL table.
> > >* IgniteCatalog implementation for transparent resolving of Ignite
> > > SQL tables.
> > >
> > > Implementation of it can be found in PR [2]
> > > It would be great if someone provides feedback for a prototype.
> > >
> > > I made some examples in the PR so you can see how the API is supposed to be used
> [3].
> > > [4].
> > >
> > > I need some advice. Can you help me?
> > >
> > > 1. How should this PR be tested?
> > >
> > > Of course, I need to provide some unit tests. But what about
> scalability
> > > tests, etc.
> > > Maybe we need some Yardstick benchmark or similar?
> > > What are your thoughts?
> > > Which scenarios should I consider in the first place?
> > >
> > > 2. Should we provide Spark Catalog implementation inside Ignite
> codebase?
> > >
> > > The current implementation of the Spark Catalog is based on *internal Spark
> API*.
> > > Spark community seems not interested in making Catalog API public or
> > > including Ignite Catalog in Spark code base [5], [6].
> > >
> > > *Should we include Spark internal API implementation inside Ignite code
> > > base?*
> > >
> > > Or should we consider including the Catalog implementation in some
> > > external module that will be created and released outside Ignite?
> > > (We could still support and develop it inside the Ignite community.)
> > >
> > > [1] https://issues.apache.org/jira/browse/IGNITE-3084
> > > [2] https://github.com/apache/ignite/pull/2742
> > > [3] https://github.com/apache/ignite/pull/2742/files#diff-
> > > f4ff509cef3018e221394474775e0905
> > > [4] https://github.com/apache/ignite/pull/2742/files#diff-
> > > f2b670497d81e780dfd5098c5dd8a89c
> > > [5] http://apache-spark-developers-list.1001551.n3.
> > > nabble.com/Spark-Core-Custom-Catalog-Integration-between-
> > > Apache-Ignite-and-Apache-Spark-td22452.html
> > > [6] https://issues.apache.org/jira/browse/SPARK-17767
> > >
> > > --
> > > Nikolay Izhikov
> > > nizhikov@gmail.com
> >
> >
>



-- 
Nikolay Izhikov
nizhikov@gmail.com


[GitHub] ignite pull request #2823: IGNITE-6569 Exception on DROP TABLE via cache API...

2017-10-09 Thread alexpaschenko
GitHub user alexpaschenko opened a pull request:

https://github.com/apache/ignite/pull/2823

IGNITE-6569 Exception on DROP TABLE via cache API of cache being dropped



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-6569

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2823.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2823


commit c3c78b630aa556e95f0dd4ab3c6ed9067cc49ed9
Author: Alexander Paschenko 
Date:   2017-10-09T12:03:20Z

IGNITE-6569 Exception on DROP TABLE via cache API of cache being dropped




---


[GitHub] ignite pull request #2822: proper getters for rebalance metrics were added

2017-10-09 Thread sergey-chugunov-1985
GitHub user sergey-chugunov-1985 opened a pull request:

https://github.com/apache/ignite/pull/2822

proper getters for rebalance metrics were added

…tyle getters (without get) were deprecated

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-6583-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2822.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2822


commit f06ade39be127241b31754824f4a3c37f98c9747
Author: Sergey Chugunov 
Date:   2017-10-09T14:35:03Z

IGNITE-6583 proper getters for rebalance metrics were added; ignite-style 
getters (without get) were deprecated




---


Re: Adding sqlline tool to Apache Ignite project

2017-10-09 Thread Denis Magda
I think it’s a must have for the ticket resolution.

Denis

On Monday, October 9, 2017, Anton Vinogradov 
wrote:

> Any plans to have ignitesql.bat?
>
> On Mon, Oct 9, 2017 at 5:29 PM, Oleg Ostanin  > wrote:
>
> > Another build with sqlline included:
> > https://ci.ignite.apache.org/viewLog.html?buildId=881120;
> > tab=artifacts=IgniteRelease_XxxFromMirrorIgniteRelease3Pre
> > pareVote#!1rrb2,-wpvx2aopzexz,1esn4zrslm4po,-h8h0hn9vvvxp
> >
> > On Sun, Oct 8, 2017 at 5:11 PM, Denis Magda  > wrote:
> >
> > > No more doubts on my side. +1 for Vladimir’s suggestion.
> > >
> > > Denis
> > >
> > > On Saturday, October 7, 2017, Dmitriy Setrakyan  >
> > > wrote:
> > >
> > > > I now tend to agree with Vladimir. We should always require that some
> > > > address is specified. The help menu should clearly state how to
> connect
> > > to
> > > > a localhost.
> > > >
> > > > D.
> > > >
> > > > On Sat, Oct 7, 2017 at 12:44 AM, Vladimir Ozerov <
> voze...@gridgain.com 
> > > > >
> > > > wrote:
> > > >
> > > > > Denis,
> > > > >
> > > > > Default Ignite configuration uses multicast, this is why you do not
> > > need
> > > > to
> > > > > change anything. Ignite node is always both a server (listens) and
> a
> > > > client
> > > > > (connects).
> > > > >
> > > > > This will not work for ignitesql, as this is a client. And in real
> > > > > deployments it will connect to remote nodes, not local. So the
> > earlier
> > > we
> > > > > explain user how to do this, the better. This is why it should not
> > work
> > > > out
> > > > > of the box connecting to 127.0.0.1. No magic for users please.
> > > > >
> > > > > This is what user will see (draft):
> > > > > > ./ignitesql.sh
> > > > > > Please specify the host: ignitesql.sh [host]; type --help for
> more
> > > > > information.
> > > > > > ./ignitesql.sh 192.168.12.55
> > > > > > Connected successfully.
> > > > >
> > > > > Again, specifying parameters manually is not poor UX. This is
> > excellent
> > > > UX,
> > > > > as user learns on his own how to connect to a node in 1 minute.
> Most
> > > > > command line tools work this way.
> > > > >
> > > > > Sat, Oct 7, 2017 at 7:12, Dmitriy Setrakyan <
> dsetrak...@apache.org 
> > > > >:
> > > > >
> > > > > > How does the binding happen? Can we bind to everything, like we
> do
> > in
> > > > > > Ignite?
> > > > > >
> > > > > > On Fri, Oct 6, 2017 at 2:51 PM, Denis Magda  
> > > > > wrote:
> > > > > >
> > > > > > > Thought over 127.0.0.1 as a default host once again. The bad
> > thing
> > > > > about
> > > > > > > it is that the user gets a lengthy exception stack trace if
> > Ignite
> > > > is
> > > > > > not
> > > > > > > running locally and not a small error message.
> > > > > > >
> > > > > > > What are the other opinions on this? Do we want to follow
> > > Vladimir’s
> > > > > > > suggestion forcing to set the host name/IP (port is optional)
> for
> > > the
> > > > > > sake
> > > > > > > of usability or leave 127.0.0.1 as the default?
> > > > > > >
> > > > > > > —
> > > > > > > Denis
> > > > > > >
> > > > > > > > On Oct 6, 2017, at 12:21 PM, Denis Magda  
> > > > > wrote:
> > > > > > > >
> > > > > > > >> But, we need to support “help” (-h, -help) argument listing
> > all
> > > > the
> > > > > > > parameters accepted by the tools.
> > > > > > > >
> > > > > > > > Meant accepted by the ignitesql script only such as host
> name.
> > > > > > > >
> > > > > > > > —
> > > > > > > > Denis
> > > > > > > >
> > > > > > > >> On Oct 6, 2017, at 12:20 PM, Denis Magda  
> > > > > wrote:
> > > > > > > >>
> > > > > > > >> Really nice, could click through the getting started [1] in
> a
> > > > > minute!
> > > > > > > >>
> > > > > > > >> +1 to rename the script to “ignitesql”. Vladimir’s point
> makes
> > > > total
> > > > > > > sense.
> > > > > > > >>
> > > > > > > >> However, tend to disagree that the host has to be requested
> > all
> > > > the
> > > > > > > times. We never request a configuration or host name for
> > ignite.sh,
> > > > > visor
> > > > > > > or web agent scripts. I would follow this approach that’s
> > excellent
> > > > for
> > > > > > dev
> > > > > > > time.
> > > > > > > >>
> > > > > > > >> But, we need to support “help” (-h, -help) argument listing
> > all
> > > > the
> > > > > > > parameters accepted by the tools.
> > > > > > > >>
> > > > > > > >> Please consider our feedback and share the next build once
> > it’s
> > > > > ready.
> > > > > > > >>
> > > > > > > >>
> > > > > > > >> [1] https://apacheignite-sql.readme.io/v2.1/docs/getting-
> > > started
> > > > <
> > > > > > > https://apacheignite-sql.readme.io/v2.1/docs/getting-started>
> > > > > > > >>
> > > > > > > >> —
> > > > > > > >> Denis
> > > 

[GitHub] ignite pull request #2725: IGNITE-6397 .NET: Thin client: basic cache operat...

2017-10-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/2725


---


Re: Adding sqlline tool to Apache Ignite project

2017-10-09 Thread Anton Vinogradov
Any plans to have ignitesql.bat?

On Mon, Oct 9, 2017 at 5:29 PM, Oleg Ostanin  wrote:

> Another build with sqlline included:
> https://ci.ignite.apache.org/viewLog.html?buildId=881120;
> tab=artifacts=IgniteRelease_XxxFromMirrorIgniteRelease3Pre
> pareVote#!1rrb2,-wpvx2aopzexz,1esn4zrslm4po,-h8h0hn9vvvxp
>
> On Sun, Oct 8, 2017 at 5:11 PM, Denis Magda  wrote:
>
> > No more doubts on my side. +1 for Vladimir’s suggestion.
> >
> > Denis
> >
> > On Saturday, October 7, 2017, Dmitriy Setrakyan 
> > wrote:
> >
> > > I now tend to agree with Vladimir. We should always require that some
> > > address is specified. The help menu should clearly state how to connect
> > to
> > > a localhost.
> > >
> > > D.
> > >
> > > On Sat, Oct 7, 2017 at 12:44 AM, Vladimir Ozerov  > > >
> > > wrote:
> > >
> > > > Denis,
> > > >
> > > > Default Ignite configuration uses multicast, this is why you do not
> > need
> > > to
> > > > change anything. Ignite node is always both a server (listens) and a
> > > client
> > > > (connects).
> > > >
> > > > This will not work for ignitesql, as this is a client. And in real
> > > > deployments it will connect to remote nodes, not local. So the
> earlier
> > we
> > > > explain user how to do this, the better. This is why it should not
> work
> > > out
> > > > of the box connecting to 127.0.0.1. No magic for users please.
> > > >
> > > > This is what user will see (draft):
> > > > > ./ignitesql.sh
> > > > > Please specify the host: ignitesql.sh [host]; type --help for more
> > > > information.
> > > > > ./ignitesql.sh 192.168.12.55
> > > > > Connected successfully.
> > > >
> > > > Again, specifying parameters manually is not poor UX. This is
> excellent
> > > UX,
> > > > as user learns on his own how to connect to a node in 1 minute. Most
> > > > command line tools work this way.
> > > >
> > > > Sat, Oct 7, 2017 at 7:12, Dmitriy Setrakyan  > > >:
> > > >
> > > > > How does the binding happen? Can we bind to everything, like we do
> in
> > > > > Ignite?
> > > > >
> > > > > On Fri, Oct 6, 2017 at 2:51 PM, Denis Magda  > > > wrote:
> > > > >
> > > > > > Thought over 127.0.0.1 as a default host once again. The bad
> thing
> > > > about
> > > > > > it is that the user gets a lengthy exception stack trace if
> Ignite
> > > is
> > > > > not
> > > > > > running locally and not a small error message.
> > > > > >
> > > > > > What are the other opinions on this? Do we want to follow
> > Vladimir’s
> > > > > > suggestion forcing to set the host name/IP (port is optional) for
> > the
> > > > > sake
> > > > > > of usability or leave 127.0.0.1 as the default?
> > > > > >
> > > > > > —
> > > > > > Denis
> > > > > >
> > > > > > > On Oct 6, 2017, at 12:21 PM, Denis Magda  > > > wrote:
> > > > > > >
> > > > > > >> But, we need to support “help” (-h, -help) argument listing
> all
> > > the
> > > > > > parameters accepted by the tools.
> > > > > > >
> > > > > > > Meant accepted by the ignitesql script only such as host name.
> > > > > > >
> > > > > > > —
> > > > > > > Denis
> > > > > > >
> > > > > > >> On Oct 6, 2017, at 12:20 PM, Denis Magda  > > > wrote:
> > > > > > >>
> > > > > > >> Really nice, could click through the getting started [1] in a
> > > > minute!
> > > > > > >>
> > > > > > >> +1 to rename the script to “ignitesql”. Vladimir’s point makes
> > > total
> > > > > > sense.
> > > > > > >>
> > > > > > >> However, tend to disagree that the host has to be requested
> all
> > > the
> > > > > > times. We never request a configuration or host name for
> ignite.sh,
> > > > visor
> > > > > > or web agent scripts. I would follow this approach that’s
> excellent
> > > for
> > > > > dev
> > > > > > time.
> > > > > > >>
> > > > > > >> But, we need to support “help” (-h, -help) argument listing
> all
> > > the
> > > > > > parameters accepted by the tools.
> > > > > > >>
> > > > > > >> Please consider our feedback and share the next build once
> it’s
> > > > ready.
> > > > > > >>
> > > > > > >>
> > > > > > >> [1] https://apacheignite-sql.readme.io/v2.1/docs/getting-
> > started
> > > <
> > > > > > https://apacheignite-sql.readme.io/v2.1/docs/getting-started>
> > > > > > >>
> > > > > > >> —
> > > > > > >> Denis
> > > > > > >>
> > > > > > >>> On Oct 6, 2017, at 9:04 AM, Anton Vinogradov <
> > > > > avinogra...@gridgain.com >
> > > > > > wrote:
> > > > > > >>>
> > > > > > >>> How about sqlconsole.sh or sqlcmd.sh ?
> > > > > > >>>
> > > > > > >>> On Fri, Oct 6, 2017 at 6:04 PM,  > > > wrote:
> > > > > > >>>
> > > > > >  I like ignitesql.
> > > > > > 
> > > > > >  ⁣D.​
> > > > > > 
> > > > > >  On Oct 6, 2017, 4:49 PM, at 4:49 PM, Vladimir Ozerov <
> > > > > > voze...@gridgain.com >
> > > > > >  

[jira] [Created] (IGNITE-6584) .NET: Propagate new cache metrics

2017-10-09 Thread Pavel Tupitsyn (JIRA)
Pavel Tupitsyn created IGNITE-6584:
--

 Summary: .NET: Propagate new cache metrics
 Key: IGNITE-6584
 URL: https://issues.apache.org/jira/browse/IGNITE-6584
 Project: Ignite
  Issue Type: Improvement
  Components: platforms
Reporter: Pavel Tupitsyn
Priority: Trivial


Some properties that exist in {{CacheMetrics}} on the Java side are missing in 
{{ICacheMetrics}}, such as the rebalancing-related metrics (see IGNITE-6583). Add them.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Adding sqlline tool to Apache Ignite project

2017-10-09 Thread Oleg Ostanin
Another build with sqlline included:
https://ci.ignite.apache.org/viewLog.html?buildId=881120=artifacts=IgniteRelease_XxxFromMirrorIgniteRelease3PrepareVote#!1rrb2,-wpvx2aopzexz,1esn4zrslm4po,-h8h0hn9vvvxp

On Sun, Oct 8, 2017 at 5:11 PM, Denis Magda  wrote:

> No more doubts on my side. +1 for Vladimir’s suggestion.
>
> Denis
>
> On Saturday, October 7, 2017, Dmitriy Setrakyan 
> wrote:
>
> > I now tend to agree with Vladimir. We should always require that some
> > address is specified. The help menu should clearly state how to connect
> to
> > a localhost.
> >
> > D.
> >
> > On Sat, Oct 7, 2017 at 12:44 AM, Vladimir Ozerov  > >
> > wrote:
> >
> > > Denis,
> > >
> > > Default Ignite configuration uses multicast, this is why you do not
> need
> > to
> > > change anything. Ignite node is always both a server (listens) and a
> > client
> > > (connects).
> > >
> > > This will not work for ignitesql, as this is a client. And in real
> > > deployments it will connect to remote nodes, not local. So the earlier
> we
> > > explain user how to do this, the better. This is why it should not work
> > out
> > > of the box connecting to 127.0.0.1. No magic for users please.
> > >
> > > This is what user will see (draft):
> > > > ./ignitesql.sh
> > > > Please specify the host: ignitesql.sh [host]; type --help for more
> > > information.
> > > > ./ignitesql.sh 192.168.12.55
> > > > Connected successfully.
> > >
> > > Again, specifying parameters manually is not poor UX. This is excellent
> > UX,
> > > as user learns on his own how to connect to a node in 1 minute. Most
> > > command line tools work this way.
> > >
> > > Sat, Oct 7, 2017 at 7:12, Dmitriy Setrakyan  > >:
> > >
> > > > How does the binding happen? Can we bind to everything, like we do in
> > > > Ignite?
> > > >
> > > > On Fri, Oct 6, 2017 at 2:51 PM, Denis Magda  > > wrote:
> > > >
> > > > > Thought over 127.0.0.1 as a default host once again. The bad thing
> > > about
> > > > > it is that the user gets a lengthy exception stack trace if Ignite
> > is
> > > > not
> > > > > running locally and not a small error message.
> > > > >
> > > > > What are the other opinions on this? Do we want to follow
> Vladimir’s
> > > > > suggestion forcing to set the host name/IP (port is optional) for
> the
> > > > sake
> > > > > of usability or leave 127.0.0.1 as the default?
> > > > >
> > > > > —
> > > > > Denis
> > > > >
> > > > > > On Oct 6, 2017, at 12:21 PM, Denis Magda  > > wrote:
> > > > > >
> > > > > >> But, we need to support “help” (-h, -help) argument listing all
> > the
> > > > > parameters accepted by the tools.
> > > > > >
> > > > > > Meant accepted by the ignitesql script only such as host name.
> > > > > >
> > > > > > —
> > > > > > Denis
> > > > > >
> > > > > >> On Oct 6, 2017, at 12:20 PM, Denis Magda  > > wrote:
> > > > > >>
> > > > > >> Really nice, could click through the getting started [1] in a
> > > minute!
> > > > > >>
> > > > > >> +1 to rename the script to “ignitesql”. Vladimir’s point makes
> > total
> > > > > sense.
> > > > > >>
> > > > > >> However, tend to disagree that the host has to be requested all
> > the
> > > > > times. We never request a configuration or host name for ignite.sh,
> > > visor
> > > > > or web agent scripts. I would follow this approach that’s excellent
> > for
> > > > dev
> > > > > time.
> > > > > >>
> > > > > >> But, we need to support “help” (-h, -help) argument listing all
> > the
> > > > > parameters accepted by the tools.
> > > > > >>
> > > > > >> Please consider our feedback and share the next build once it’s
> > > ready.
> > > > > >>
> > > > > >>
> > > > > >> [1] https://apacheignite-sql.readme.io/v2.1/docs/getting-
> started
> > <
> > > > > https://apacheignite-sql.readme.io/v2.1/docs/getting-started>
> > > > > >>
> > > > > >> —
> > > > > >> Denis
> > > > > >>
> > > > > >>> On Oct 6, 2017, at 9:04 AM, Anton Vinogradov <
> > > > avinogra...@gridgain.com >
> > > > > wrote:
> > > > > >>>
> > > > > >>> How about sqlconsole.sh or sqlcmd.sh ?
> > > > > >>>
> > > > > >>> On Fri, Oct 6, 2017 at 6:04 PM,  > > wrote:
> > > > > >>>
> > > > >  I like ignitesql.
> > > > > 
> > > > >  ⁣D.​
> > > > > 
> > > > >  On Oct 6, 2017, 4:49 PM, at 4:49 PM, Vladimir Ozerov <
> > > > > voze...@gridgain.com >
> > > > >  wrote:
> > > > > > Denis,
> > > > > >
> > > > > > Setting the default host to 127.0.0.1 is a bad idea, because it
> > > > > > means that in practice users would always have to change the script.
> Instead,
> > > we
> > > > > > should
> > > > > > accept host name as argument. This is perfectly fine from
> > > usability
> > > > > > perspective, most tools work this way (i.e. throw 

[GitHub] ignite pull request #2806: IGNITE-6483 Tests for availability of metrics in ...

2017-10-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/2806


---


[jira] [Created] (IGNITE-6583) Methods of rebalancing progress metrics must obey JavaBean naming conventions

2017-10-09 Thread Sergey Chugunov (JIRA)
Sergey Chugunov created IGNITE-6583:
---

 Summary: Methods of rebalancing progress metrics must obey 
JavaBean naming conventions
 Key: IGNITE-6583
 URL: https://issues.apache.org/jira/browse/IGNITE-6583
 Project: Ignite
  Issue Type: Bug
Reporter: Sergey Chugunov
Assignee: Sergey Chugunov
 Fix For: 2.4


h2. Notes
Getters for the *rebalancingStartTime* and *estimateRebalancingFinishTime* metrics 
from the *CacheMetrics* interface don't follow JavaBean naming conventions; the method 
names don't start with the *get* prefix.

As a result, the metrics cannot be accessed via the MXBean interface.

h2. Acceptance Criteria
# Methods *getRebalancingStartTime* and *getEstimateRebalancingFinishTime* are 
introduced to the *CacheMetrics* interface.
# Existing methods *rebalancingStartTime* and *estimateRebalancingFinishTime* 
are declared deprecated and will be deleted in a future major release.
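
A sketch of the resulting interface shape (return types assumed to be {{long}}, matching the other time-based metrics):

{code:java}
public interface CacheMetrics {
    /** JavaBean-compliant getter, visible through the MXBean interface. */
    public long getRebalancingStartTime();

    /** JavaBean-compliant getter, visible through the MXBean interface. */
    public long getEstimateRebalancingFinishTime();

    /** @deprecated Use {@link #getRebalancingStartTime()} instead. */
    @Deprecated
    public long rebalancingStartTime();

    /** @deprecated Use {@link #getEstimateRebalancingFinishTime()} instead. */
    @Deprecated
    public long estimateRebalancingFinishTime();
}
{code}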



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6582) JDBC thin: supports 'updatedOnServer' connection flag

2017-10-09 Thread Taras Ledkov (JIRA)
Taras Ledkov created IGNITE-6582:


 Summary: JDBC thin: supports 'updatedOnServer' connection flag
 Key: IGNITE-6582
 URL: https://issues.apache.org/jira/browse/IGNITE-6582
 Project: Ignite
  Issue Type: Bug
  Components: jdbc
Affects Versions: 2.3
Reporter: Taras Ledkov
Assignee: Taras Ledkov
 Fix For: 2.3


The JDBC thin driver must support the 'updateOnServer' connection flag. See IGNITE-6024 for details.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6581) client deadlock in spiStart

2017-10-09 Thread Konstantin Dudkov (JIRA)
Konstantin Dudkov created IGNITE-6581:
-

 Summary: client deadlock in spiStart
 Key: IGNITE-6581
 URL: https://issues.apache.org/jira/browse/IGNITE-6581
 Project: Ignite
  Issue Type: Bug
Affects Versions: 1.9
Reporter: Konstantin Dudkov
Assignee: Konstantin Dudkov



{code:java}
"tcp-client-disco-msg-worker-#4%soloots-tg-ManagementFabric%" #50 prio=5 
os_prio=0 tid=0x7fafecd50800 nid=0x469e sleeping[0x7fafc3bfa000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at 
org.apache.ignite.internal.util.GridSpinReadWriteLock.tryWriteLock(GridSpinReadWriteLock.java:349)
at 
org.apache.ignite.internal.GridKernalGatewayImpl.writeLock(GridKernalGatewayImpl.java:121)
at 
org.apache.ignite.internal.IgniteKernal.onDisconnected(IgniteKernal.java:3427)
at 
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.onDiscovery(GridDiscoveryManager.java:601)
at 
org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.notifyDiscovery(ClientImpl.java:2400)
at 
org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.notifyDiscovery(ClientImpl.java:2379)
at 
org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.body(ClientImpl.java:1707)
at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)

"main" #1 prio=5 os_prio=0 tid=0x7fafec01 nid=0x4644 waiting on 
condition [0x7faff325]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x00068a331ad0> (a 
java.util.concurrent.CountDownLatch$Sync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
at 
org.apache.ignite.spi.discovery.tcp.ClientImpl.spiStart(ClientImpl.java:265)
at 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:1862)
at 
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:268)
at 
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:690)
at 
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1682)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:940)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1814)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1605)
- locked <0x0004107210e8> (a 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1042)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:569)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:516)
at org.apache.ignite.Ignition.start(Ignition.java:322)
at 
com.workday.fabric.ignite.IgniteFabric.lambda$start$1(IgniteFabric.java:143)
at 
com.workday.fabric.ignite.IgniteFabric$$Lambda$6/576020159.run(Unknown Source)
at 
com.workday.fabric.util.InvocationInterceptor.invokeRunnable(InvocationInterceptor.java:119)
at com.workday.fabric.ignite.IgniteFabric.start(IgniteFabric.java:138)
- locked <0x0004107212e0> (a 
com.workday.fabric.ignite.IgniteWorkdayFabric)
at com.workday.fabric.FabricManager.ensureFabric(FabricManager.java:146)
- locked <0x000410721368> (a java.util.concurrent.ConcurrentHashMap)
at 
com.workday.fabric.WorkdayFabricManager.ensureFabric(WorkdayFabricManager.java:76)
at 
com.workday.fabric.verifier.FabricVerifier.verify(FabricVerifier.java:347)
at 
com.workday.fabric.verifier.FabricVerifier.main(FabricVerifier.java:276)
{code}




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] ignite pull request #2819: GG-12914: JDBC thin fix handshake compatibility f...

2017-10-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/2819


---


[jira] [Created] (IGNITE-6580) Cluster can fail during concurrent re-balancing and cache destruction

2017-10-09 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-6580:
-

 Summary: Cluster can fail during concurrent re-balancing and cache 
destruction
 Key: IGNITE-6580
 URL: https://issues.apache.org/jira/browse/IGNITE-6580
 Project: Ignite
  Issue Type: Bug
  Components: cache
Reporter: Mikhail Cherkasov
Priority: Critical


The following exceptions can be observed during concurrent re-balancing and 
cache destruction:
1.
{noformat}

[00:01:27,135][ERROR][sys-#4375%null%][GridDhtPreloader] Partition eviction 
failed, this can cause grid hang.
org.apache.ignite.IgniteException: Runtime failure on search row: Row@6be51c3d[ 
**REMOVED SENSITIVE INFORMATION** ]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doRemove(BPlusTree.java:1787)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.remove(BPlusTree.java:1578)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.remove(H2TreeIndex.java:226)
 ~[ignite-indexing-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.doUpdate(GridH2Table.java:523)
 ~[ignite-indexing-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:416)
 ~[ignite-indexing-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.remove(IgniteH2Indexing.java:574)
 ~[ignite-indexing-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.remove(GridQueryProcessor.java:2172)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.remove(GridCacheQueryManager.java:451)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishRemove(IgniteCacheOffheapManagerImpl.java:1462)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.remove(IgniteCacheOffheapManagerImpl.java:1425)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.remove(IgniteCacheOffheapManagerImpl.java:383)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.removeValue(GridCacheMapEntry.java:3224)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry.clearInternal(GridDhtCacheEntry.java:588)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.clearAll(GridDhtLocalPartition.java:951)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.tryEvict(GridDhtLocalPartition.java:809)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:593)
 [ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:580)
 [ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6629)
 [ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)
 [ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
[ignite-core-2.1.4.jar:2.1.4]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_131]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: java.lang.IllegalStateException: Item not found: 1
at 
org.apache.ignite.internal.processors.cache.persistence.tree.io.DataPageIO.findIndirectItemIndex(DataPageIO.java:346)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.io.DataPageIO.getDataOffset(DataPageIO.java:446)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.io.DataPageIO.readPayload(DataPageIO.java:488)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:149)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 
org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:101)
 ~[ignite-core-2.1.4.jar:2.1.4]
at 

[GitHub] ignite pull request #2549: ignite-5097 BinaryMarshaller should write ints in...

2017-10-09 Thread daradurvs
Github user daradurvs closed the pull request at:

https://github.com/apache/ignite/pull/2549


---


[GitHub] ignite pull request #2821: ignite-5097 BinaryMarshaller should write ints in...

2017-10-09 Thread daradurvs
GitHub user daradurvs opened a pull request:

https://github.com/apache/ignite/pull/2821

ignite-5097 BinaryMarshaller should write ints in "varint" encoding where 
it makes sense



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/daradurvs/ignite ignite-5097-release-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2821.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2821


commit 7f2fe578e2ebaf1754a4a0699d06c41739850768
Author: Vyacheslav Daradur 
Date:   2017-10-09T10:50:59Z

IGNITE-5097 BinaryMarshaller should write ints in "varint" encoding where 
it makes sense

commit 857c54c708db7cf6757e38dac2c21b1bf4c6db0c
Author: Vyacheslav Daradur 
Date:   2017-10-09T10:51:54Z

IGNITE-5152 .NET: BinaryMarshaller should write ints in "varint" encoding 
where it makes sense

commit 38c56ec287f64185f2c9922fbc474b9f90e9bf69
Author: Igor Sapego 
Date:   2017-10-09T10:54:55Z

IGNITE-5153 CPP: Introduced varint encoding in C++
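
For readers unfamiliar with the term, a minimal illustration of the varint idea (7 payload bits per byte, high bit marks continuation); this is not the BinaryMarshaller code itself:

{code:java}
/** Writes {@code val} in varint form and returns the new offset. */
static int writeVarInt(byte[] buf, int off, int val) {
    while ((val & 0xFFFFFF80) != 0) {
        buf[off++] = (byte)((val & 0x7F) | 0x80); // 7 data bits, continuation bit set
        val >>>= 7;
    }
    buf[off++] = (byte)val; // last byte, continuation bit clear
    return off;
}

/** Reads a varint written by {@link #writeVarInt}. */
static int readVarInt(byte[] buf, int off) {
    int res = 0;
    int shift = 0;
    byte b;
    do {
        b = buf[off++];
        res |= (b & 0x7F) << shift;
        shift += 7;
    } while ((b & 0x80) != 0);
    return res;
}
{code}

Small values take a single byte instead of four; negative ints always have the top bit set and take five bytes in this scheme, which is presumably why the change applies it only "where it makes sense".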




---


[GitHub] ignite pull request #2820: IGNITE-6234 Initialize schemaIds to empty set if ...

2017-10-09 Thread mcherkasov
GitHub user mcherkasov opened a pull request:

https://github.com/apache/ignite/pull/2820

IGNITE-6234 Initialize schemaIds to empty set if schemas field is null 
during the deserialization



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-6234-v2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2820.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2820


commit 55fe403e9d2fb73cbbf0c89f5cedbe2eb5b8839b
Author: mcherkasov 
Date:   2017-10-09T10:44:06Z

IGNITE-6234 Initialize schemaIds to empty set if schemas field is null 
during the deserialization
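
A hedged sketch of the kind of defensive initialization the title describes; the field names schemas and schemaIds are taken from the title, everything else (the owning class, the readObject hook) is illustrative only:

{code:java}
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.Serializable;
import java.util.Collections;
import java.util.Set;

class CacheSchemaHolder implements Serializable {  // hypothetical holder class
    private Set<String> schemas;
    private Set<Integer> schemaIds;

    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        in.defaultReadObject();

        // A peer running older code may not have written the schemas field at all;
        // fall back to empty sets instead of leaving the fields null.
        if (schemas == null) {
            schemas = Collections.emptySet();
            schemaIds = Collections.emptySet();
        }
    }
}
{code}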




---


[jira] [Created] (IGNITE-6579) WAL history is not used when node returns to cluster again

2017-10-09 Thread Vladislav Pyatkov (JIRA)
Vladislav Pyatkov created IGNITE-6579:
-

 Summary: WAL history is not used when node returns to cluster again
 Key: IGNITE-6579
 URL: https://issues.apache.org/jira/browse/IGNITE-6579
 Project: Ignite
  Issue Type: Bug
  Components: persistence
Reporter: Vladislav Pyatkov


When I set a big enough value for "WAL history size" and stopped a node for 20 
minutes, I got the following messages from the coordinator (order=1):

{noformat}
2017-10-06 15:46:33.429 [WARN 
][sys-#10740%DPL_GRID%DplGridNodeName%][o.a.i.i.p.c.d.d.GridDhtPartitionTopologyImpl]
 Partition has been scheduled for rebalancing due to outdated update counter 
[nodeId=e51a1db2-f49b-44a9-b122-adde4016d9e7,
 cacheOrGroupName=CACHEGROUP_PARTICLE_DServiceZone, partId=2424, 
haveHistory=false]
2017-10-06 15:46:33.429 [WARN 
][sys-#10740%DPL_GRID%DplGridNodeName%][o.a.i.i.p.c.d.d.GridDhtPartitionTopologyImpl]
 Partition has been scheduled for rebalancing due to outdated update counter 
[nodeId=e51a1db2-f49b-44a9-b122-adde4016d9e7,
 cacheOrGroupName=CACHEGROUP_PARTICLE_DServiceZone, partId=2427, 
haveHistory=false]
2017-10-06 15:46:33.429 [WARN 
][sys-#10740%DPL_GRID%DplGridNodeName%][o.a.i.i.p.c.d.d.GridDhtPartitionTopologyImpl]
 Partition has been scheduled for rebalancing due to outdated update counter 
[nodeId=e51a1db2-f49b-44a9-b122-adde4016d9e7,
 cacheOrGroupName=CACHEGROUP_PARTICLE_DServiceZone, partId=2426, 
haveHistory=false]
{noformat}

after starting the node again.
I think the history size should be enough, but the logs show it is not 
(haveHistory=false).
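
For reference, a hedged sketch of where this knob lives in the 2.1/2.2 API; PersistentStoreConfiguration#setWalHistorySize is assumed to be the setting in question:

{code:java}
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.PersistentStoreConfiguration;

public class WalHistoryConfig {
    public static IgniteConfiguration config() {
        PersistentStoreConfiguration psCfg = new PersistentStoreConfiguration();

        // Number of checkpoints to keep WAL history for; a larger value lets a node
        // that was down for a while rebalance from WAL deltas (haveHistory=true)
        // instead of a full partition transfer.
        psCfg.setWalHistorySize(100_000);

        return new IgniteConfiguration().setPersistentStoreConfiguration(psCfg);
    }
}
{code}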



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] ignite pull request #2819: GG-12914: JDBC thin fix handshake compatibility f...

2017-10-09 Thread tledkov-gridgain
GitHub user tledkov-gridgain opened a pull request:

https://github.com/apache/ignite/pull/2819

GG-12914: JDBC thin fix handshake compatibility for version 2.3.0.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-gg-12914

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2819.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2819


commit a9aa19ae53564c5ce0528810836c34811b92505f
Author: tledkov-gridgain 
Date:   2017-10-09T09:38:19Z

GG-12914: JDBC thin fix handshake compatibility for version 2.3.0.




---


Re: Persistence per memory policy configuration

2017-10-09 Thread Pavel Tupitsyn
Sounds good to me.

On Mon, Oct 9, 2017 at 12:35 PM, Ivan Rakov  wrote:

> Pavel,
>
> Sounds reasonable.
> I suggest to include both "data" and "configuration" to make it even more
> obvious:
>
> set/getDefaultDataRegionConfiguration
> set/getDataRegionConfigurations
>
> Best Regards,
> Ivan Rakov
>
>
> On 09.10.2017 10:51, Pavel Tupitsyn wrote:
>
>> Sorry that I'm late to the party, but this looks inconsistent:
>>
>> DataStorageConfiguration defaultRegionConfiguration
>> DataRegionConfiguration[] getDataRegions
>>
>> defaultRegionConfiguration + getRegionConfigurations
>> - or -
>> defaultDataRegion + getDataRegions
>>
>> Thoughts?
>>
>> On Mon, Oct 2, 2017 at 9:10 PM, Ivan Rakov  wrote:
>>
>> Denis,
>>>
>>> Yes, you're right. All cache groups without specific data region
>>> configured will be persistent.
>>> And if you want to add another persistent data region, you should set
>>> *isPersistenceEnabled* flag in its *DataRegionConfiguration* explicitly.
>>>
>>> Best Regards,
>>> Ivan Rakov
>>>
>>>
>>> On 02.10.2017 21:01, Denis Magda wrote:
>>>
>>> Missed the point with defaults. Makes sense to me now. So to wrap this
 up, if I want to enable the persistence globally and don’t have any
 regions
 configured explicitly I need to take the default region and switch the
 persistence on for it. Is my understanding correct?

 —
 Denis

 On Oct 2, 2017, at 10:57 AM, Ivan Rakov  wrote:

> Denis, why do you need to access an instance of the default region
> bean?
> If you want to set any parameter, just instantiate a new bean with this
> parameter set (like in the XML snippet below). Other parameters will be
> automatically initialized with their default values.
>
> Best Regards,
> Ivan Rakov
>
> On 02.10.2017 19:28, Denis Magda wrote:
>
> (XML configuration snippet stripped by the mailing list archive)
>




 In other data regions persistence will be disabled by default.
>>>
>>> Ivan, how to get an instance to the default region bean and change a
>> parameter? Obviously, if the goal is to enable the persistence I
>> don’t want
>> to create the default region bean from scratch.
>>
>> —
>> Denis
>>
>> On Oct 2, 2017, at 9:11 AM, Ivan Rakov  wrote:
>>
>>> Agree with Alexey.
>>>
>>> Properties like *defaultDataRegionSize*,
>>> *isDefaultPersistenceEnabled*
>>> can confuse users who don't know that there's such a thing as a default
>>> data
>>> region. They may assume these settings are inherited by all data regions where
>>> size
>>> and persistence flag are not explicitly set.
>>>
>>> Let's get rid of these properties and add
>>> *defaultRegionConfiguration*
>>> property with explicit configuration of default data region.
>>>
>>> Regarding XML configuration, changing size or persistence flag of
>>> default data region will be just two lines longer (for bean
>>> description):
>>>
>>> (XML configuration snippet stripped by the mailing list archive)
>>>




 In other data regions persistence will be disabled by default.
>>> I've updated the draft in https://issues.apache.org/jira/browse/IGNITE-6030 with these changes.
>>>
>>> Best Regards,
>>> Ivan Rakov
>>>
>>> On 02.10.2017 18:35, Denis Magda wrote:
>>>
>>> To resolve this, I suggest to

> introduce just another field defaultRegionConfiguration and get rid
> of
> other defaults in DataStorageConfiguration.
>
> Won’t it complicate the configuration from a Spring XML file? I’m
 not
 an expert in Spring so how do I get defaultRegionConfiguration bean
 first
 to change any parameter?

 —
 Denis

 On Oct 2, 2017, at 8:30 AM, Alexey Goncharuk <

> alexey.goncha...@gmail.com> wrote:
>
> Agree with Vladimir. If we are to implement this, we would either
> need to
> have a Boolean (non-primitive) for persistenceEnabled on
> DataRegionConfiguration, or introduce an enum for this field which
> is also
> an overkill. On the other hand, one can assume that the defaults we
> are
> talking about are actually inherited. To resolve this, I suggest to
> introduce just another field defaultRegionConfiguration and get rid
> of
> other defaults in DataStorageConfiguration.
>
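
To make the proposal concrete, a sketch of how the resulting configuration could look from Java, assuming the names discussed above (setDefaultDataRegionConfiguration, setDataRegionConfigurations, setPersistenceEnabled) end up in the final API:

{code:java}
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class DataRegionConfigSketch {
    public static IgniteConfiguration config() {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();

        // Default data region: persistence enabled for every cache that does not
        // name a region explicitly.
        DataRegionConfiguration dfltRegion = new DataRegionConfiguration();
        dfltRegion.setName("default");
        dfltRegion.setPersistenceEnabled(true);
        storageCfg.setDefaultDataRegionConfiguration(dfltRegion);

        // Additional in-memory-only region; persistence stays disabled by default.
        DataRegionConfiguration inMemRegion = new DataRegionConfiguration();
        inMemRegion.setName("inMemory");
        storageCfg.setDataRegionConfigurations(inMemRegion);

        return new IgniteConfiguration().setDataStorageConfiguration(storageCfg);
    }
}
{code}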

Re: Persistence per memory policy configuration

2017-10-09 Thread Ivan Rakov

Pavel,

Sounds reasonable.
I suggest to include both "data" and "configuration" to make it even 
more obvious:


set/getDefaultDataRegionConfiguration
set/getDataRegionConfigurations
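
For illustration, here is a minimal programmatic sketch of how the proposed names
would read on the public API. It assumes the setters land on DataStorageConfiguration
and DataRegionConfiguration exactly as suggested above; the extra region's name and
size below are made up for the example.

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.DataRegionConfiguration;
    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class DataRegionConfigSketch {
        public static void main(String[] args) {
            // Default data region: persistence is switched on explicitly.
            DataRegionConfiguration dfltRegion = new DataRegionConfiguration();
            dfltRegion.setPersistenceEnabled(true);

            // Extra region: persistence stays disabled unless enabled explicitly.
            DataRegionConfiguration inMemRegion = new DataRegionConfiguration();
            inMemRegion.setName("inMemoryRegion");      // illustrative name
            inMemRegion.setMaxSize(256L * 1024 * 1024); // illustrative size, 256 MB

            DataStorageConfiguration storageCfg = new DataStorageConfiguration();
            storageCfg.setDefaultDataRegionConfiguration(dfltRegion);
            storageCfg.setDataRegionConfigurations(inMemRegion);

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setDataStorageConfiguration(storageCfg);

            try (Ignite ignite = Ignition.start(cfg)) {
                System.out.println("Default region persistent: "
                    + storageCfg.getDefaultDataRegionConfiguration().isPersistenceEnabled());
            }
        }
    }

Only the default region has its persistence flag flipped here; the extra region keeps
the disabled default, which matches the behaviour described earlier in this thread.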

Best Regards,
Ivan Rakov

On 09.10.2017 10:51, Pavel Tupitsyn wrote:

Sorry that I'm late to the party, but this looks inconsistent:

DataStorageConfiguration defaultRegionConfiguration
DataRegionConfiguration[] getDataRegions

defaultRegionConfiguration + getRegionConfigurations
- or -
defaultDataRegion + getDataRegions

Thoughts?

On Mon, Oct 2, 2017 at 9:10 PM, Ivan Rakov  wrote:


Denis,

Yes, you're right. All cache groups without a specific data region
configured will be persistent.
And if you want to add another persistent data region, you should set the
*isPersistenceEnabled* flag in its *DataRegionConfiguration* explicitly.

Best Regards,
Ivan Rakov


On 02.10.2017 21:01, Denis Magda wrote:


Missed the point with defaults. Makes sense to me now. So to wrap this
up, if I want to enable the persistence globally and don’t have any regions
configured explicitly I need to take the default region and switch the
persistence on for it. Is my understanding correct?

—
Denis

On Oct 2, 2017, at 10:57 AM, Ivan Rakov  wrote:

Denis, why do you need to access an instance of the default region bean?
If you want to set any parameter, just instantiate a new bean with this
parameter set (like in the XML snippet below). Other parameters will be
automatically initialized with their default values.

Best Regards,
Ivan Rakov

On 02.10.2017 19:28, Denis Magda wrote:

[XML configuration snippet not preserved in the archive]


In other data regions persistence will be disabled by default.


Ivan, how do I get an instance of the default region bean and change a
parameter? Obviously, if the goal is to enable persistence, I don’t want
to create the default region bean from scratch.

—
Denis

On Oct 2, 2017, at 9:11 AM, Ivan Rakov  wrote:

Agree with Alexey.

Properties like *defaultDataRegionSize*, *isDefaultPersistenceEnabled*
can confuse users who don't know that there's such a thing as a default data
region. They may assume these values are inherited by all data regions where size
and the persistence flag are not explicitly set.

Let's get rid of these properties and add *defaultRegionConfiguration*
property with explicit configuration of default data region.

Regarding XML configuration, changing size or persistence flag of
default data region will be just two lines longer (for bean description):

[XML configuration snippet not preserved in the archive]


In other data regions persistence will be disabled by default.
I've updated draft in https://issues.apache.org/jira
/browse/IGNITE-6030 with these changes.

Best Regards,
Ivan Rakov

On 02.10.2017 18:35, Denis Magda wrote:


To resolve this, I suggest to

introduce just another field defaultRegionConfiguration and get rid
of
other defaults in DataStorageConfiguration.


Won’t it complicate the configuration from a Spring XML file? I’m not
an expert in Spring, so how do I get the defaultRegionConfiguration bean first
to change any parameter?

—
Denis

On Oct 2, 2017, at 8:30 AM, Alexey Goncharuk <

alexey.goncha...@gmail.com> wrote:

Agree with Vladimir. If we are to implement this, we would either
need to
have a Boolean (non-primitive) for persistenceEnabled on
DataRegionConfiguration, or introduce an enum for this field, which is also
overkill. On the other hand, one can assume that the defaults we
are
talking about are actually inherited. To resolve this, I suggest to
introduce just another field defaultRegionConfiguration and get rid
of
other defaults in DataStorageConfiguration.

Thoughts?

2017-10-02 15:19 GMT+03:00 Ivan Rakov :

Vladimir,

I like your approach because it's easier to implement.

However, a user may be confused by setting the
*isDefaultPersistenceEnabled*
flag and seeing that persistence is not enabled by default in a
custom memory
region. I'll add a clarifying Javadoc note there.

Best Regards,
Ivan Rakov


On 02.10.2017 11:28, Vladimir Ozerov wrote:

Ivan,

I do not think this is the correct approach, because it will be hard to
explain, and you will have to use "Boolean" instead of "boolean" for
DataRegionConfiguration. I do not think we need a default "persistence
enabled" flag for all regions. Instead, we should have a "persistence enabled"
flag for the default region only. It should not be propagated to custom
regions.

On Mon, Oct 2, 2017 at 11:14 AM, Ivan Rakov 

[GitHub] ignite pull request #2818: IGNITE-6380 Reject compute task execution if dead...

2017-10-09 Thread xtern
GitHub user xtern opened a pull request:

https://github.com/apache/ignite/pull/2818

IGNITE-6380 Reject compute task execution if deadlock possible.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xtern/ignite IGNITE-6380

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2818.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2818


commit 403d2e7b071d56d67f9a960176ef94f634da1b59
Author: Pereslegin Pavel 
Date:   2017-10-08T16:30:01Z

IGNITE-6380 Reject compute task execution if deadlock possible.

commit 8abb43916e27f7b651cd63511ef759bf8cd343f5
Author: Pavel Pereslegin <30606288+xt...@users.noreply.github.com>
Date:   2017-10-09T08:46:51Z

IGNITE-6380 Exception message correction.

commit ff44e7d8056c5f53ec51e63a65ea9a81119ecfa5
Author: Pavel Pereslegin <30606288+xt...@users.noreply.github.com>
Date:   2017-10-09T08:51:23Z

IGNITE-6380 Test was renamed.

commit b617d4c41ebb3948d5c770f5369c283f60fe5c4c
Author: Pavel Pereslegin <30606288+xt...@users.noreply.github.com>
Date:   2017-10-09T08:54:29Z

IGNITE-6380 Test was included in the test suite.




---


Re: Why SQL_PUBLIC is appending to Cache name while using JDBC thin driver

2017-10-09 Thread Vladimir Ozerov
Hi Dima,

To maintain unique cache names across the cluster.
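
For illustration, a minimal JDBC thin driver sketch of what is being discussed. The
first statement shows today's behaviour (the backing cache gets the SQL_PUBLIC_ prefix,
e.g. SQL_PUBLIC_CITY); the second shows the kind of cache-name override Denis mentions
below for 2.3. The WITH parameter name and the host are assumptions for the example,
not confirmed syntax.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CreateTableCacheNameSketch {
        public static void main(String[] args) throws Exception {
            // Register the thin driver explicitly.
            Class.forName("org.apache.ignite.IgniteJdbcThinDriver");

            try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
                 Statement stmt = conn.createStatement()) {

                // Backing cache is named with the schema prefix: SQL_PUBLIC_CITY.
                stmt.executeUpdate(
                    "CREATE TABLE City (id INT PRIMARY KEY, name VARCHAR)");

                // Hypothetical 2.3-style override of the backing cache name.
                stmt.executeUpdate(
                    "CREATE TABLE Person (id INT PRIMARY KEY, name VARCHAR) " +
                    "WITH \"CACHE_NAME=PersonCache\"");
            }
        }
    }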

On Mon, Oct 9, 2017 at 7:34 AM, Dmitriy Setrakyan 
wrote:

> Cross-sending to dev@
>
> Why do we need to append SQL_PUBLIC_ to all table names?
>
> D.
>
> -- Forwarded message --
> From: Denis Magda 
> Date: Sun, Oct 8, 2017 at 7:01 AM
> Subject: Re: Why SQL_PUBLIC is appending to Cache name while using JDBC
> thin driver
> To: "u...@ignite.apache.org" 
>
>
> Hi Austin,
>
> Yes, it will be possible to pass a cache name you like into CREATE TABLE
> command in 2.3. The release should be available in a couple of weeks.
> Follow our announcements.
>
> Denis
>
>
> On Saturday, October 7, 2017, austin solomon 
> wrote:
>
> > Hi,
> >
> > I am using Ignite version 2.2.0, and I have created a table using
> > IgniteJdbcThinDriver.
> >
> > When I checked the cache in Ignite Visor I'm seeing
> SQL_PUBLIC_{TABLE-NAME}
> > is appended.
> > Is there a way to get rid of this?
> >
> > I want to remove the SQL_PUBLIC from the cache name.
> >
> > Thanks,
> > Austin
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> >
>


Re: Persistence per memory policy configuration

2017-10-09 Thread Pavel Tupitsyn
Sorry that I'm late to the party, but this looks inconsistent:

DataStorageConfiguration defaultRegionConfiguration
DataRegionConfiguration[] getDataRegions

defaultRegionConfiguration + getRegionConfigurations
- or -
defaultDataRegion + getDataRegions

Thoughts?

On Mon, Oct 2, 2017 at 9:10 PM, Ivan Rakov  wrote:

> Denis,
>
> Yes, you're right. All cache groups without a specific data region
> configured will be persistent.
> And if you want to add another persistent data region, you should set the
> *isPersistenceEnabled* flag in its *DataRegionConfiguration* explicitly.
>
> Best Regards,
> Ivan Rakov
>
>
> On 02.10.2017 21:01, Denis Magda wrote:
>
>> Missed the point with defaults. Makes sense to me now. So to wrap this
>> up, if I want to enable the persistence globally and don’t have any regions
>> configured explicitly I need to take the default region and switch the
>> persistence on for it. Is my understanding correct?
>>
>> —
>> Denis
>>
>> On Oct 2, 2017, at 10:57 AM, Ivan Rakov  wrote:
>>>
>>> Denis, why do you need to access an instance of the default region bean?
>>> If you want to set any parameter, just instantiate a new bean with this
>>> parameter set (like in the XML snippet below). Other parameters will be
>>> automatically initialized with their default values.
>>>
>>> Best Regards,
>>> Ivan Rakov
>>>
>>> On 02.10.2017 19:28, Denis Magda wrote:
>>>
>> [XML configuration snippet not preserved in the archive]
> In other data regions persistence will be disabled by default.
>
 Ivan, how do I get an instance of the default region bean and change a
 parameter? Obviously, if the goal is to enable persistence, I don’t want
 to create the default region bean from scratch.

 —
 Denis

 On Oct 2, 2017, at 9:11 AM, Ivan Rakov  wrote:
>
> Agree with Alexey.
>
> Properties like *defaultDataRegionSize*, *isDefaultPersistenceEnabled*
> can confuse users who don't know that there's such a thing as a default data
> region. They may assume these values are inherited by all data regions where size
> and the persistence flag are not explicitly set.
>
> Let's get rid of these properties and add *defaultRegionConfiguration*
> property with explicit configuration of default data region.
>
> Regarding XML configuration, changing size or persistence flag of
> default data region will be just two lines longer (for bean description):
>
> [XML configuration snippet not preserved in the archive]
> In other data regions persistence will be disabled by default.
> I've updated draft in https://issues.apache.org/jira
> /browse/IGNITE-6030 with these changes.
>
> Best Regards,
> Ivan Rakov
>
> On 02.10.2017 18:35, Denis Magda wrote:
>
>> To resolve this, I suggest to
>>> introduce just another field defaultRegionConfiguration and get rid
>>> of
>>> other defaults in DataStorageConfiguration.
>>>
>> Won’t it complicate the configuration from a Spring XML file? I’m not
>> an expert in Spring, so how do I get the defaultRegionConfiguration bean first
>> to change any parameter?
>>
>> —
>> Denis
>>
>> On Oct 2, 2017, at 8:30 AM, Alexey Goncharuk <
>>> alexey.goncha...@gmail.com> wrote:
>>>
>>> Agree with Vladimir. If we are to implement this, we would either
>>> need to
>>> have a Boolean (non-primitive) for persistenceEnabled on
>>> DataRegionConfiguration, or introduce an enum for this field, which is also
>>> overkill. On the other hand, one can assume that the defaults we
>>> are
>>> talking about are actually inherited. To resolve this, I suggest to
>>> introduce just another field defaultRegionConfiguration and get rid
>>> of
>>> other defaults in DataStorageConfiguration.
>>>
>>> Thoughts?
>>>
>>> 2017-10-02 15:19 GMT+03:00 Ivan Rakov :
>>>
>>> Vladimir,

 I like your approach because it's easier to implement.

 However, a user may be confused by setting the
 *isDefaultPersistenceEnabled*
 flag and seeing that persistence is not enabled by default in a
 custom memory
 region. I'll add a clarifying Javadoc note there.

 Best Regards,
 Ivan Rakov


 On 02.10.2017 11:28, Vladimir Ozerov wrote:

 Ivan,
>
> I do not think this is the correct approach, because it will be hard to
> explain, and you will have to use "Boolean" instead of "boolean"
> 

[jira] [Created] (IGNITE-6578) Too many diagnostic: Found long running cache future

2017-10-09 Thread Alexander Belyak (JIRA)
Alexander Belyak created IGNITE-6578:


 Summary: Too many diagnostic: Found long running cache future
 Key: IGNITE-6578
 URL: https://issues.apache.org/jira/browse/IGNITE-6578
 Project: Ignite
  Issue Type: Bug
  Components: general
Affects Versions: 2.1
Reporter: Alexander Belyak
Priority: Critical


Get about 100 MB of messages like:
 [WARN][grid-timeout-worker-...][o.apache.ignite.internal.diagnostic] 
Found long running cache future 
There are several identical messages per millisecond! Logs can be lost to rotation, and they can't be read without pre-filtering!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)