Re: proposed realization KILL QUERY command

2018-11-26 Thread Павлухин Иван
I believe that the meaning was:

> I propose to start with running queries VIEW first.
Tue, 27 Nov 2018 at 10:47, Vladimir Ozerov :
>
> I propose to start with running queries мшуц first. Once we have it, it
> will be easier to agree on final command syntax.
>
> On Fri, Nov 23, 2018 at 9:32 AM Павлухин Иван  wrote:
>
> > Hi,
> >
> > Maybe I am a little bit late with my thoughts about the command syntax.
> > Here is how I see it being used:
> > 1. A user is able to kill a query by a unique id belonging only to this
> > query.
> > 2. A user is able to kill all queries started by a specific node.
> > For killing a single query we must identify it by a unique id which is
> > received directly from Ignite (e.g. from the running queries view) and not
> > calculated by the user. Internally the id is compound, but why can't we
> > convert it to an opaque integer or string which hides the real structure?
> > E.g. base16String(concat(nodeOrder.toString(), ".",
> > queryIdOnNode.toString())). The syntax could be KILL QUERY '123' or
> > KILL QUERY WHERE queryId = '123'.
> > For killing all queries started by some node we only need the node
> > order (or id). It could be like KILL QUERY WHERE nodeOrder = 34.
> > Thu, 22 Nov 2018 at 12:56, Denis Mekhanikov :
> > >
> > > Actually, option with separate parameters was mentioned in another thread
> > >
> > http://apache-ignite-developers.2346864.n4.nabble.com/proposed-design-for-thin-client-SQL-management-and-monitoring-view-running-queries-and-kill-it-tp37713p38056.html
> > >
> > > Denis
> > >
> > > Thu, 22 Nov 2018 at 08:51, Vladimir Ozerov :
> > >
> > > > Denis,
> > > >
> > > > Problems with separate parameters are explained above.
> > > >
> > > > Thu, 22 Nov 2018 at 3:23, Denis Magda :
> > > >
> > > > > Vladimir,
> > > > >
> > > > > All of the alternatives are reminiscent of mathematical operations.
> > Don't
> > > > > look like a SQL command. What if we use a SQL approach introducing
> > named
> > > > > parameters:
> > > > >
> > > > > KILL QUERY query_id=10 [AND node_id=5]
> > > > >
> > > > > --
> > > > > Denis
> > > > >
> > > > > On Wed, Nov 21, 2018 at 4:11 AM Vladimir Ozerov <
> > voze...@gridgain.com>
> > > > > wrote:
> > > > >
> > > > > > Denis,
> > > > > >
> > > > > > Space is bad candidate because it is a whitespace. Without
> > whitespaces
> > > > we
> > > > > > can have syntax without quotes at all. Any non-whitespace delimiter
> > > > will
> > > > > > work, though:
> > > > > >
> > > > > > KILL QUERY 45.1
> > > > > > KILL QUERY 45-1
> > > > > > KILL QUERY 45:1
> > > > > >
> > > > > > On Wed, Nov 21, 2018 at 3:06 PM Юрий 
> > > > > wrote:
> > > > > >
> > > > > > > Denis,
> > > > > > >
> > > > > > > Let's consider the parameter of KILL QUERY as just a string with
> > > > > > > some query id, without any meaning for the user. The user just needs
> > > > > > > to get the id and pass it as a parameter to the KILL QUERY command.
> > > > > > >
> > > > > > > Even if a query is distributed, it has a single query id from the
> > > > > > > user perspective and will be killed on all nodes. The user just needs
> > > > > > > to know one global query id.
> > > > > > >
> > > > > > > How it can work:
> > > > > > > 1) SELECT * from running_queries
> > > > > > > result is
> > > > > > > query_id | node_id                              | sql        | schema_name | connection_id | duration
> > > > > > > 123.33   | e0a69cb8-a1a8-45f6-b84d-ead367a0     | SELECT ... | ...         | 22            | 23456
> > > > > > > 333.31   | aaa6acb8-a4a5-42f6-f842-ead111b00020 | UPDATE ... | ...         | 321           | 346
> > > > > > > 2) KILL QUERY '123.33'
> > > > > > >
> > > > > > > So, the user needs to select the query_id from the running_queries
> > > > > > > view and use it for the KILL QUERY command.
> > > > > > >
> > > > > > > I hope it became clearer.
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Wed, 21 Nov 2018 at 02:11, Denis Magda :
> > > > > > >
> > > > > > > > Folks,
> > > > > > > >
> > > > > > > > The decimal syntax is really odd - KILL QUERY
> > > > > > > > '[node_order].[query_counter]'
> > > > > > > >
> > > > > > > > Confusing, let's use a space to separate parameters.
> > > > > > > >
> > > > > > > > Also, what if I want to halt a specific query with certain ID?
> > > > Don't
> > > > > > know
> > > > > > > > the node number, just know that the query is distributed and
> > runs
> > > > > > across
> > > > > > > > several machines. Sounds like the syntax still should consider
> > > > > > > > [node_order/id] as an optional parameter.
> > > > > > > >
> > > > > > > > Probably, if you explain to me how an end user will use this
> > > > command
> > > > > > from
> > > > > > > > the very beginning (how do I look for a query id and node id,
> > etc)
> > > > > then
> > > > > > > the
> > > > > > > > things get clearer.
> > > > > > > >
> > > > > > > > --
> > > > > > > >
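
A minimal sketch of the opaque query id encoding proposed above (base16 of
"nodeOrder.queryIdOnNode"). The class and method names are made up for
illustration and are not part of Ignite:

    import java.nio.charset.StandardCharsets;

    // Hypothetical helper: the user only ever sees the opaque hex string.
    final class QueryIdCodec {
        static String encode(long nodeOrder, long queryIdOnNode) {
            byte[] plain = (nodeOrder + "." + queryIdOnNode).getBytes(StandardCharsets.US_ASCII);
            StringBuilder sb = new StringBuilder();
            for (byte b : plain)
                sb.append(String.format("%02x", b)); // base16, as in the proposal
            return sb.toString();
        }

        static long[] decode(String opaque) {
            byte[] plain = new byte[opaque.length() / 2];
            for (int i = 0; i < plain.length; i++)
                plain[i] = (byte)Integer.parseInt(opaque.substring(2 * i, 2 * i + 2), 16);
            String[] parts = new String(plain, StandardCharsets.US_ASCII).split("\\.");
            return new long[] {Long.parseLong(parts[0]), Long.parseLong(parts[1])};
        }
    }

With such a codec, KILL QUERY '312e3333' would be resolved internally to node
order 1 and per-node query id 33, while the user never deals with the compound
structure.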

Re: h2.IgniteH2Indexing.start

2018-11-26 Thread Vladimir Ozerov
Hi Srikanth,

Apache Ignite 2.6.0 should use H2 1.4.195. But you use H2 1.4.197. This is
the most likely cause of the error.
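
A quick way to confirm which H2 build is actually being loaded is to ask the
JVM where the driver class came from; this is plain JDK API (org.h2.Driver is
simply a class that lives in every H2 jar), not an Ignite utility:

    import java.net.URL;

    public class H2VersionCheck {
        public static void main(String[] args) {
            // The code source points at the concrete h2-x.y.z.jar that was picked up.
            URL loc = org.h2.Driver.class.getProtectionDomain().getCodeSource().getLocation();
            System.out.println("H2 loaded from: " + loc);
        }
    }

If the printed location is h2-1.4.197.jar (or h2-1.0.60.jar), removing those
jars and keeping only H2 1.4.195 should resolve the NoSuchFieldError.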

On Mon, Nov 26, 2018 at 5:47 PM srikanth.mer...@tcs.com <
srikanth.mer...@tcs.com> wrote:

> Hi
>
> I am trying to read data from an Ignite cache from Spark and am facing the below error
> while creating IgniteContext
>
>  val ic = new IgniteContext(sc, () =>new
> IgniteConfiguration()).fromCache("PUBLIC")
>
> Error:
> java.lang.NoSuchFieldError: serializer
>   at
>
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.start(IgniteH2Indexing.java:2539)
>   at
>
> org.apache.ignite.internal.processors.query.GridQueryProcessor.start(GridQueryProcessor.java:242)
>   at
>
> org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1739)
>   at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:981)
>   at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2014)
>   at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1723)
>   at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1151)
>   at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:671)
>   at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:611)
>   at org.apache.ignite.Ignition.getOrStart(Ignition.java:419)
>   at org.apache.ignite.spark.IgniteContext.ignite(IgniteContext.scala:150)
>   at org.apache.ignite.spark.IgniteContext.<init>(IgniteContext.scala:63)
>
> Using below jars to read and connect to ignite from Spark:
> 
>
> ignite-core-2.6.0.jar,ignite-spring-2.5.0.jar,ignite-spark-2.6.0.jar,cache-api-1.1.0.jar,
>
> ignite-log4j-2.5.0.jar,log4j-1.2.15.jar,ignite-indexing-2.6.0.jar,h2-1.0.60.jar,h2-1.4.197.jar
>
> I need your help.
> Thanks ,
> Srikanth Merugu
>
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>


Re: proposed realization KILL QUERY command

2018-11-26 Thread Vladimir Ozerov
I propose to start with the running queries view first. Once we have it, it
will be easier to agree on the final command syntax.

On Fri, Nov 23, 2018 at 9:32 AM Павлухин Иван  wrote:

> Hi,
>
> Maybe I am a little bit late with my thoughts about the command syntax.
> Here is how I see it being used:
> 1. A user is able to kill a query by a unique id belonging only to this
> query.
> 2. A user is able to kill all queries started by a specific node.
> For killing a single query we must identify it by a unique id which is
> received directly from Ignite (e.g. from the running queries view) and not
> calculated by the user. Internally the id is compound, but why can't we
> convert it to an opaque integer or string which hides the real structure?
> E.g. base16String(concat(nodeOrder.toString(), ".",
> queryIdOnNode.toString())). The syntax could be KILL QUERY '123' or
> KILL QUERY WHERE queryId = '123'.
> For killing all queries started by some node we only need the node
> order (or id). It could be like KILL QUERY WHERE nodeOrder = 34.
> Thu, 22 Nov 2018 at 12:56, Denis Mekhanikov :
> >
> > Actually, option with separate parameters was mentioned in another thread
> >
> http://apache-ignite-developers.2346864.n4.nabble.com/proposed-design-for-thin-client-SQL-management-and-monitoring-view-running-queries-and-kill-it-tp37713p38056.html
> >
> > Denis
> >
> > Thu, 22 Nov 2018 at 08:51, Vladimir Ozerov :
> >
> > > Denis,
> > >
> > > Problems with separate parameters are explained above.
> > >
> > > Thu, 22 Nov 2018 at 3:23, Denis Magda :
> > >
> > > > Vladimir,
> > > >
> > > > All of the alternatives are reminiscent of mathematical operations.
> Don't
> > > > look like a SQL command. What if we use a SQL approach introducing
> named
> > > > parameters:
> > > >
> > > > KILL QUERY query_id=10 [AND node_id=5]
> > > >
> > > > --
> > > > Denis
> > > >
> > > > On Wed, Nov 21, 2018 at 4:11 AM Vladimir Ozerov <
> voze...@gridgain.com>
> > > > wrote:
> > > >
> > > > > Denis,
> > > > >
> > > > > Space is bad candidate because it is a whitespace. Without
> whitespaces
> > > we
> > > > > can have syntax without quotes at all. Any non-whitespace delimiter
> > > will
> > > > > work, though:
> > > > >
> > > > > KILL QUERY 45.1
> > > > > KILL QUERY 45-1
> > > > > KILL QUERY 45:1
> > > > >
> > > > > On Wed, Nov 21, 2018 at 3:06 PM Юрий 
> > > > wrote:
> > > > >
> > > > > > Denis,
> > > > > >
> > > > > > Let's consider the parameter of KILL QUERY as just a string with
> > > > > > some query id, without any meaning for the user. The user just needs
> > > > > > to get the id and pass it as a parameter to the KILL QUERY command.
> > > > > >
> > > > > > Even if a query is distributed, it has a single query id from the
> > > > > > user perspective and will be killed on all nodes. The user just needs
> > > > > > to know one global query id.
> > > > > >
> > > > > > How it can work:
> > > > > > 1) SELECT * from running_queries
> > > > > > result is
> > > > > > query_id | node_id                              | sql        | schema_name | connection_id | duration
> > > > > > 123.33   | e0a69cb8-a1a8-45f6-b84d-ead367a0     | SELECT ... | ...         | 22            | 23456
> > > > > > 333.31   | aaa6acb8-a4a5-42f6-f842-ead111b00020 | UPDATE ... | ...         | 321           | 346
> > > > > > 2) KILL QUERY '123.33'
> > > > > >
> > > > > > So, the user needs to select the query_id from the running_queries
> > > > > > view and use it for the KILL QUERY command.
> > > > > >
> > > > > > I hope it became clearer.
> > > > > >
> > > > > >
> > > > > >
> > > > > > Wed, 21 Nov 2018 at 02:11, Denis Magda :
> > > > > >
> > > > > > > Folks,
> > > > > > >
> > > > > > > The decimal syntax is really odd - KILL QUERY
> > > > > > > '[node_order].[query_counter]'
> > > > > > >
> > > > > > > Confusing, let's use a space to separate parameters.
> > > > > > >
> > > > > > > Also, what if I want to halt a specific query with certain ID?
> > > Don't
> > > > > know
> > > > > > > the node number, just know that the query is distributed and
> runs
> > > > > across
> > > > > > > several machines. Sounds like the syntax still should consider
> > > > > > > [node_order/id] as an optional parameter.
> > > > > > >
> > > > > > > Probably, if you explain to me how an end user will use this
> > > command
> > > > > from
> > > > > > > the very beginning (how do I look for a query id and node id,
> etc)
> > > > then
> > > > > > the
> > > > > > > things get clearer.
> > > > > > >
> > > > > > > --
> > > > > > > Denis
> > > > > > >
> > > > > > > On Tue, Nov 20, 2018 at 1:03 AM Юрий <
> jury.gerzhedow...@gmail.com>
> > > > > > wrote:
> > > > > > >
> > > > > > > > Hi Vladimir,
> > > > > > > >
> > > > > > > > Thanks for your suggestion to use MANAGEMENT_POOL for
> processing
> > > > > > > > cancellation requests.
> > > > > > > >
> > > > > > > > About your questions.
> > > > > > > > 1) I'm going to implements SQL view to p

Re: Apache Ignite 2.7. Last Mile

2018-11-26 Thread Vladimir Ozerov
Hi Nikolay,

I merged IGNITE-10393 into the AI 2.7 branch. There are no more open tickets for now.

On Sat, Nov 24, 2018 at 9:44 AM Nikolay Izhikov  wrote:

> Hello, Igniters.
>
> Changes related to GridToStringBuilder were reverted in the ignite-2.7 branch:
>
> * IGNITE-8493
> * IGNITE-9209
> * IGNITE-602
>
> We have 1 ticket for 2.7:
>
> IGNITE-10393: DataStreamer failed with NPE for MVCC caches
>
> which is unassigned.
>
> Who can fix it?
>
> On Tue, 20/11/2018 at 12:38 +0300, Dmitrii Ryabov wrote:
> > Yes, revert both.
> >
> > Tue, 20 Nov 2018, 11:52 Vladimir Ozerov voze...@gridgain.com:
> >
> > > +1 for reverting both.
> > >
> > > On Tue, Nov 20, 2018 at 9:43 AM Nikolay Izhikov 
> > > wrote:
> > >
> > > > Hello, Dmitrii.
> > > >
> > > > I see 2 tickets for this improvement:
> > > >
> > > > IGNITE-602 - [Test] GridToStringBuilder is vulnerable for
> > > > StackOverflowError caused by infinite recursion [1]
> > > > IGNITE-9209 - GridDistributedTxMapping.toString() returns broken
> string
> > >
> > > [2]
> > > >
> > > > Should we revert both commits?
> > > >
> > > > [1] https://github.com/apache/ignite/commit/d67c5bf
> > > > [2] https://github.com/apache/ignite/commit/9bb9c04
> > > >
> > > >
> > > > On Mon, 19/11/2018 at 13:36 +0300, Dmitrii Ryabov wrote:
> > > > > I agree to revert and make the fix for 2.8. So, we will have more
> > > > > time to test it.
> > > > >
> > > > > Mon, 19 Nov 2018, 10:53 Vladimir Ozerov voze...@gridgain.com:
> > > > >
> > > > > > +1 for revert.
> > > > > >
> > > > > > On Sun, Nov 18, 2018 at 11:31 PM Dmitriy Pavlov <
> dpav...@apache.org>
> > > > > > wrote:
> > > > > >
> > > > > > > I personally don't mind.
> > > > > > >
> > > > > > > But I would like Dmitry Ryabov and Alexey Goncharuck to share
> > > > > > > their opinions.
> > > > > > >
> > > > > > > Sun, 18 Nov 2018, 20:43 Nikolay Izhikov <
> nizhi...@apache.org>:
> > > > > > >
> > > > > > > > Yes, I think so.
> > > > > > > >
> > > > > > > > Sun, 18 Nov 2018, 20:34 Denis Magda dma...@apache.org:
> > > > > > > >
> > > > > > > > > Sounds good to me. Are we starting the vote then?
> > > > > > > > >
> > > > > > > > > Denis
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > On Sun, Nov 18, 2018 at 8:25 AM Nikolay Izhikov <
> > > >
> > > > nizhi...@apache.org
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Hello, Igniters.
> > > > > > > > > >
> > > > > > > > > > This issue is the only ticket that blocks 2.7 release.
> > > > > > > > > >
> > > > > > > > > > I looked at IGNITE-602 PR and GridToStringBuilder.
> > > > > > > > > > The code looks complicated for me.
> > > > > > > > > > And it's not obvious for me how to fix this issue in a short
> > > > > > > > > > period of time.
> > > > > > > > > > Especially, the code deals with recursion and other things
> > > > > > > > > > that can lead to very dangerous errors.
> > > > > > > > > >
> > > > > > > > > > Let's revert this patch and fix it calmly.
> > > > > > > > > > Also, we need additional tests for it.
> > > > > > > > > >
> > > > > > > > > > On Fri, 16/11/2018 at 17:57 +0300, Dmitrii Ryabov wrote:
> > > > > > > > > > > Ok, I'll check the issue.
> > > > > > > > > > > Fri, 16 Nov 2018 at 17:52, Alexey Goncharuk <
> > > > > > > > > >
> > > > > > > > > > alexey.goncha...@gmail.com>:
> > > > > > > > > > > >
> > > > > > > > > > > > Igniters,
> > > > > > > > > > > >
> > > > > > > > > > > > I've just found that S.toString() implementation is broken
> > > > > > > > > > > > in ignite-2.7 and master [1]. It leads to a message
> > > > > > > > > > > > Wrapper [p=Parent [a=0]Child [b=0, super=]]
> > > > > > > > > > > > being formed instead of
> > > > > > > > > > > > Wrapper [p=Child [b=0, super=Parent [a=0]]]
> > > > > > > > > > > > for classes with inheritance that use
> > > > > > > > > > > > S.toString(SomeClass.class, this, super.toString())
> > > > > > > > > > > > embedded to some other object.
> > > > > > > > > > > >
> > > > > > > > > > > > Dmitrii Ryabov, I've reverted two commits related to
> > > > > > > > > > > > IGNITE-602 and IGNITE-9209 tickets locally and it fixes
> > > > > > > > > > > > the issue. Can you take a look at the issue?
> > > > > > > > > > > >
> > > > > > > > > > > > I think this regression essentially makes our logs
> > > > > > > > > > > > unreadable in some cases and I would like to get it fixed
> > > > > > > > > > > > in ignite-2.7 or revert both commits from the release.
> > > > > > > > > > > >
> > > > > > > > > > > > [1] https://issues.apache.org/jira/browse/IGNITE-10301
> > > > > > > > > > > >
> > > > > > > > > > > > Fri, 9 Nov 2018 at 09:22, Nikolay Izhikov <
> > > > > > >
> > > > > >
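
For reference, a minimal sketch of the inheritance pattern discussed above; the
Parent/Child/Wrapper names come from the quoted example, and S is the usual
alias for GridToStringBuilder in Ignite code. Before the regression the wrapper
prints "Wrapper [p=Child [b=0, super=Parent [a=0]]]", while the broken builder
yields "Wrapper [p=Parent [a=0]Child [b=0, super=]]":

    import org.apache.ignite.internal.util.tostring.GridToStringBuilder;

    class Parent {
        int a;

        @Override public String toString() {
            return GridToStringBuilder.toString(Parent.class, this); // "Parent [a=0]"
        }
    }

    class Child extends Parent {
        int b;

        @Override public String toString() {
            // Embeds the parent part: expected "Child [b=0, super=Parent [a=0]]".
            return GridToStringBuilder.toString(Child.class, this, super.toString());
        }
    }

    class Wrapper {
        private final Parent p = new Child();

        @Override public String toString() {
            return "Wrapper [p=" + p + "]"; // plain concatenation, to keep the sketch minimal
        }
    }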

[GitHub] ignite pull request #5497: IGNITE-10393

2018-11-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5497


---


Re: [VOTE] Creation of a dedicated list for GitHub notifications

2018-11-26 Thread Sergey Chugunov
+1

Plus this dedicated list should be properly documented in wiki, mentioning
it in How to Contribute [1] or in Make Teamcity Green Again [2] would be a
good idea.

[1] https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute
[2]
https://cwiki.apache.org/confluence/display/IGNITE/Make+Teamcity+Green+Again

On Tue, Nov 27, 2018 at 9:51 AM Павлухин Иван  wrote:

> +1
> Tue, 27 Nov 2018 at 09:22, Dmitrii Ryabov :
> >
> > 0
> > Tue, 27 Nov 2018 at 02:33, Alexey Kuznetsov :
> > >
> > > +1
> > > Do not forget notification from GitBox too!
> > >
> > > On Tue, Nov 27, 2018 at 2:20 AM Zhenya 
> wrote:
> > >
> > > > +1, already did it with filters.
> > > >
> > > > > This was discussed already [1].
> > > > >
> > > > > So, I want to complete this discussion with moving outside dev-list
> > > > > GitHub-notification to dedicated list.
> > > > >
> > > > > Please start voting.
> > > > >
> > > > > +1 - to accept this change.
> > > > > 0 - you don't care.
> > > > > -1 - to decline this change.
> > > > >
> > > > > This vote will go for 72 hours.
> > > > >
> > > > > [1]
> > > > >
> > > >
> http://apache-ignite-developers.2346864.n4.nabble.com/Time-to-remove-automated-messages-from-the-devlist-td37484i20.html
> > > >
> > >
> > >
> > > --
> > > Alexey Kuznetsov
>
>
>
> --
> Best regards,
> Ivan Pavlukhin
>


Re: [VOTE] Creation of a dedicated list for GitHub notifications

2018-11-26 Thread Павлухин Иван
+1
Tue, 27 Nov 2018 at 09:22, Dmitrii Ryabov :
>
> 0
> Tue, 27 Nov 2018 at 02:33, Alexey Kuznetsov :
> >
> > +1
> > Do not forget notification from GitBox too!
> >
> > On Tue, Nov 27, 2018 at 2:20 AM Zhenya  wrote:
> >
> > > +1, already did it with filters.
> > >
> > > > This was discussed already [1].
> > > >
> > > > So, I want to complete this discussion with moving outside dev-list
> > > > GitHub-notification to dedicated list.
> > > >
> > > > Please start voting.
> > > >
> > > > +1 - to accept this change.
> > > > 0 - you don't care.
> > > > -1 - to decline this change.
> > > >
> > > > This vote will go for 72 hours.
> > > >
> > > > [1]
> > > >
> > > http://apache-ignite-developers.2346864.n4.nabble.com/Time-to-remove-automated-messages-from-the-devlist-td37484i20.html
> > >
> >
> >
> > --
> > Alexey Kuznetsov



-- 
Best regards,
Ivan Pavlukhin


Re: [VOTE] Creation of a dedicated list for GitHub notifications

2018-11-26 Thread Dmitrii Ryabov
0
Tue, 27 Nov 2018 at 02:33, Alexey Kuznetsov :
>
> +1
> Do not forget notification from GitBox too!
>
> On Tue, Nov 27, 2018 at 2:20 AM Zhenya  wrote:
>
> > +1, already did it with filters.
> >
> > > This was discussed already [1].
> > >
> > > So, I want to complete this discussion with moving outside dev-list
> > > GitHub-notification to dedicated list.
> > >
> > > Please start voting.
> > >
> > > +1 - to accept this change.
> > > 0 - you don't care.
> > > -1 - to decline this change.
> > >
> > > This vote will go for 72 hours.
> > >
> > > [1]
> > >
> > http://apache-ignite-developers.2346864.n4.nabble.com/Time-to-remove-automated-messages-from-the-devlist-td37484i20.html
> >
>
>
> --
> Alexey Kuznetsov


[jira] [Created] (IGNITE-10414) IF NOT EXISTS in CREATE TABLE doesn't work

2018-11-26 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-10414:
--

 Summary: IF NOT EXISTS in CREATE TABLE doesn't work
 Key: IGNITE-10414
 URL: https://issues.apache.org/jira/browse/IGNITE-10414
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.4
Reporter: Evgenii Zhuravlev


Reproducer:
 
{code:java}
Ignite ignite = Ignition.start();

ignite.getOrCreateCache("test").query(new SqlFieldsQuery(
    "CREATE TABLE IF NOT EXISTS City(id LONG PRIMARY KEY, name VARCHAR) WITH \"template=replicated\""));
ignite.getOrCreateCache("test").query(new SqlFieldsQuery(
    "CREATE TABLE IF NOT EXISTS City(id LONG PRIMARY KEY, name VARCHAR) WITH \"template=replicated\""));
{code}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] Creation of a dedicated list for GitHub notifications

2018-11-26 Thread Alexey Kuznetsov
+1
Do not forget notifications from GitBox too!

On Tue, Nov 27, 2018 at 2:20 AM Zhenya  wrote:

> +1, already did it with filters.
>
> > This was discussed already [1].
> >
> > So, I want to complete this discussion with moving outside dev-list
> > GitHub-notification to dedicated list.
> >
> > Please start voting.
> >
> > +1 - to accept this change.
> > 0 - you don't care.
> > -1 - to decline this change.
> >
> > This vote will go for 72 hours.
> >
> > [1]
> >
> http://apache-ignite-developers.2346864.n4.nabble.com/Time-to-remove-automated-messages-from-the-devlist-td37484i20.html
>


-- 
Alexey Kuznetsov


[GitHub] ignite pull request #5493: IGNITE-10399 Fix inspection error.

2018-11-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5493


---


Re: [VOTE] Creation of a dedicated list for GitHub notifications

2018-11-26 Thread Zhenya

+1, already did it with filters.


This was discussed already [1].

So, I want to complete this discussion with moving outside dev-list
GitHub-notification to dedicated list.

Please start voting.

+1 - to accept this change.
0 - you don't care.
-1 - to decline this change.

This vote will go for 72 hours.

[1]
http://apache-ignite-developers.2346864.n4.nabble.com/Time-to-remove-automated-messages-from-the-devlist-td37484i20.html


Re: [VOTE] Creation of a dedicated list for GitHub notifications

2018-11-26 Thread Denis Magda
0 - don't care

--
Denis

On Mon, Nov 26, 2018 at 10:54 AM Eduard Shangareev <
eduard.shangar...@gmail.com> wrote:

> This was discussed already [1].
>
> So, I want to complete this discussion with moving outside dev-list
> GitHub-notification to dedicated list.
>
> Please start voting.
>
> +1 - to accept this change.
> 0 - you don't care.
> -1 - to decline this change.
>
> This vote will go for 72 hours.
>
> [1]
>
> http://apache-ignite-developers.2346864.n4.nabble.com/Time-to-remove-automated-messages-from-the-devlist-td37484i20.html
>


Re: [VOTE] Creation of a dedicated list for GitHub notifications

2018-11-26 Thread Vladimir Ozerov
+1, definitely

On Mon, Nov 26, 2018 at 10:00 PM Ivan Rakov  wrote:

> +1.
>
> I've already solved this issue for myself by creating an e-mail filter,
> but I see no reason why every new contributor should do the same.
>
> Best Regards,
> Ivan Rakov
>
> On 26.11.2018 21:56, Dmitriy Pavlov wrote:
> > +0.42 because 42 is an Answer to Life, the Universe, and Everything
> >
> > Mon, 26 Nov 2018 at 21:54, Eduard Shangareev <
> eduard.shangar...@gmail.com
> >> :
> >> This was discussed already [1].
> >>
> >> So, I want to complete this discussion with moving outside dev-list
> >> GitHub-notification to dedicated list.
> >>
> >> Please start voting.
> >>
> >> +1 - to accept this change.
> >> 0 - you don't care.
> >> -1 - to decline this change.
> >>
> >> This vote will go for 72 hours.
> >>
> >> [1]
> >>
> >>
> http://apache-ignite-developers.2346864.n4.nabble.com/Time-to-remove-automated-messages-from-the-devlist-td37484i20.html
> >>
>


Re: [VOTE] Creation of a dedicated list for GitHub notifications

2018-11-26 Thread Ivan Rakov

+1.

I've already solved this issue for myself by creating an e-mail filter, 
but I see no reason why every new contributor should do the same.


Best Regards,
Ivan Rakov

On 26.11.2018 21:56, Dmitriy Pavlov wrote:

+0.42 because 42 is an Answer to Life, the Universe, and Everything

Mon, 26 Nov 2018 at 21:54, Eduard Shangareev 
:
This was discussed already [1].

So, I want to complete this discussion with moving outside dev-list
GitHub-notification to dedicated list.

Please start voting.

+1 - to accept this change.
0 - you don't care.
-1 - to decline this change.

This vote will go for 72 hours.

[1]

http://apache-ignite-developers.2346864.n4.nabble.com/Time-to-remove-automated-messages-from-the-devlist-td37484i20.html



Re: [VOTE] Creation of a dedicated list for GitHub notifications

2018-11-26 Thread Dmitriy Pavlov
+0.42 because 42 is an Answer to Life, the Universe, and Everything

Mon, 26 Nov 2018 at 21:54, Eduard Shangareev :

> This was discussed already [1].
>
> So, I want to complete this discussion with moving outside dev-list
> GitHub-notification to dedicated list.
>
> Please start voting.
>
> +1 - to accept this change.
> 0 - you don't care.
> -1 - to decline this change.
>
> This vote will go for 72 hours.
>
> [1]
>
> http://apache-ignite-developers.2346864.n4.nabble.com/Time-to-remove-automated-messages-from-the-devlist-td37484i20.html
>


[VOTE] Creation of a dedicated list for GitHub notifications

2018-11-26 Thread Eduard Shangareev
This was discussed already [1].

So, I want to complete this discussion by moving GitHub notifications out of
the dev list to a dedicated list.

Please start voting.

+1 - to accept this change.
0 - you don't care.
-1 - to decline this change.

This vote will go for 72 hours.

[1]
http://apache-ignite-developers.2346864.n4.nabble.com/Time-to-remove-automated-messages-from-the-devlist-td37484i20.html


[GitHub] ignite pull request #5502: Ignite 10413

2018-11-26 Thread glukos
GitHub user glukos opened a pull request:

https://github.com/apache/ignite/pull/5502

Ignite 10413



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10413

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5502.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5502


commit c22934b36188f86cb360b8c34ba67acee42c98a3
Author: Andrey Kuznetsov 
Date:   2018-11-23T17:48:13Z

IGNITE-10079 FileWriteAheadLogManager may return invalid 
lastCompactedSegment - Fixes #5219.

Signed-off-by: Ivan Rakov 

commit 21579035eb3ec70dfc3661f2b20299340109ee0a
Author: Ivan Rakov 
Date:   2018-11-26T18:10:02Z

IGNITE-10413 Perform cache validation logic on primary node instead of near 
node

commit c563a0818142445254616c4742d09f18d416866b
Author: Ivan Rakov 
Date:   2018-11-26T18:11:10Z

IGNITE-10079 rollback




---


Re: New API for changing configuration of persistent caches

2018-11-26 Thread Eduard Shangareev
Ok,

We need two approaches to change cache configuration:
1. Ignite.restartCaches(CacheConfiguration ... cfgs);
2. Ignite.restartCaches(CacheConfigurationDiff ... cfgDiffs);

Also, we need some versioning of cache configurations, which could be done when
we move the cache configuration from the serialized file to the metastore.

It is necessary for several failover scenarios (actually, all of them include a
node joining with an outdated configuration), and for a CAS-like API for
restarting caches.

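A rough sketch of the two proposed entry points; this is not an existing Ignite
API, and CacheConfigurationDiff is only the shape implied by the proposal above:

    import org.apache.ignite.configuration.CacheConfiguration;

    // Sketch of the proposed API shapes only.
    interface CacheRestartSupport {
        /** Variant 1: restart caches with fully specified new configurations. */
        void restartCaches(CacheConfiguration<?, ?>... cfgs);

        /** Variant 2: restart caches by applying only the changed properties. */
        void restartCaches(CacheConfigurationDiff... cfgDiffs);
    }

    /** Hypothetical container of changed properties, versioned against the stored configuration. */
    final class CacheConfigurationDiff {
        // e.g. property name -> new value pairs plus the expected configuration
        // version, to support the CAS-like restart mentioned above
    }
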

On Thu, Nov 22, 2018 at 12:19 PM Vladimir Ozerov 
wrote:

> Ed,
>
> We have ~70 properties in CacheConfiguration. ~50 of them are plain, ~20
> are custom classes. My variant allows to change plain properties from any
> platform, and the rest 20 from any platform when user has relevant
> BinaryType.
>
> On Thu, Nov 22, 2018 at 11:30 AM Eduard Shangareev <
> eduard.shangar...@gmail.com> wrote:
>
> > I don't see how you variant handles user-defined objects (factories,
> > affinity-functions, interceptors, etc.). Could you describe?
> >
> > On Thu, Nov 22, 2018 at 10:47 AM Vladimir Ozerov 
> > wrote:
> >
> > > My variant of API avoids cache configuration.
> > >
> > > One more thing to note - as we found out control.sh cannot dump XML
> > > configuration. Currently it returns only subset of properties. And in
> > > general case it is impossible to convert CacheConfiguration to Spring
> > XML,
> > > because Spring XMLis not serialization protocol. So API with
> > > CacheConfiguration doesn’t seem to work for control.sh as well.
> > >
> > > Thu, 22 Nov 2018 at 10:05, Eduard Shangareev <
> > > eduard.shangar...@gmail.com
> > > >:
> > >
> > > > Vovan,
> > > >
> > > > We couldn't avoid API with cache configuration.
> > > > Almost all of ~70 properties could be changed, some of them are
> > instances
> > > > of objects or could be user-defined class.
> > > > Could you come up with alternatives for user-defined affinity
> function?
> > > >
> > > > Also, the race would have a place in other scenarios.
> > > >
> > > >
> > > >
> > > > On Thu, Nov 22, 2018 at 8:50 AM Vladimir Ozerov <
> voze...@gridgain.com>
> > > > wrote:
> > > >
> > > > > Ed,
> > > > >
> > > > > We may have API similar to “cache” and “getOrCreateCache”, or may
> > not.
> > > It
> > > > > is up to us to decide. Similarity on it’s own is weak argument.
> > > > > Functionality and simplicity - this is what matters.
> > > > >
> > > > > Approach with cache configuration has three major issues
> > > > > 1) It exposes properties which user will not be able to change, so
> > > > typical
> > > > > user actions would be: try to change property, fail as it is
> > > unsupported,
> > > > > go reading documentation. Approach with separate POJO is intuitive
> > and
> > > > > self-documenting.
> > > > > 2) It has race condition between config read and config apply, so
> > user
> > > do
> > > > > not know what exactly he changes, unless you change API to
> something
> > > like
> > > > > “restartCaches(Tuple...)”,
> > > which
> > > > > user will need to call in a loop.
> > > > > 3) And it is not suitable for non-Java platform, which is a
> > > showstopper -
> > > > > all API should be available from all platforms unless it is proven
> to
> > > be
> > > > > impossible to implement.
> > > > >
> > > > > Vladimir.
> > > > >
> > > > > Thu, 22 Nov 2018 at 1:06, Eduard Shangareev <
> > > > > eduard.shangar...@gmail.com
> > > > > >:
> > > > >
> > > > > > Vovan,
> > > > > >
> > > > > > Would you argue that we should have the similar API in Java as
> > > > > > Ignite.cache(CacheConfiguration) or
> > > > > > Ignite.getOrCreateCache(CacheConfiguration)?
> > > > > >
> > > > > > With a proposed solution, every other API call would rely on it
> > > > finally.
> > > > > >
> > > > > > I am interested in having such feature not arguing about API
> > > > > alternatives.
> > > > > >
> > > > > > We definitely should have the ability to change it via control.sh
> > and
> > > > > Java
> > > > > > API. Everything else is optional from my point of view (at least
> on
> > > the
> > > > > > current stage).
> > > > > >
> > > > > > Moreover, your arguments are more about our format of
> > > > CacheConfiguration
> > > > > > which couldn't be defined in other languages and clients. So,
> maybe
> > > we
> > > > > > should start a discussion about how we should change it in 3.0?
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Wed, Nov 21, 2018 at 7:45 PM Vladimir Ozerov <
> > > voze...@gridgain.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Ed,
> > > > > > >
> > > > > > > Why do we want to operate on CacheConfiguration so desperately?
> > > Your
> > > > > > > example raises even more questions:
> > > > > > > 1) What to do with thin clients?
> > > > > > > 2) What to do with aforementioned race conditions, when cache
> > could
> > > > be
> > > > > > > changed concurrently?
> > > > > > > 3) Why such trivial operation from user perspective is only
> > > supported
> > > > > > from
> > > > > >

[jira] [Created] (IGNITE-10413) Perform cache validation logic on primary node instead of near node

2018-11-26 Thread Ivan Rakov (JIRA)
Ivan Rakov created IGNITE-10413:
---

 Summary: Perform cache validation logic on primary node instead of 
near node
 Key: IGNITE-10413
 URL: https://issues.apache.org/jira/browse/IGNITE-10413
 Project: Ignite
  Issue Type: Improvement
Reporter: Ivan Rakov
Assignee: Ivan Rakov
 Fix For: 2.8


Exchange is completed on clients asynchronously, that's why we can perform 
outdated validation when near node is client.
We have to execute validation on dht node instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #5501: IGNITE-10412 Reproducer

2018-11-26 Thread agura
GitHub user agura opened a pull request:

https://github.com/apache/ignite/pull/5501

IGNITE-10412 Reproducer



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/agura/incubator-ignite ignite-10412

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5501.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5501


commit f4335f12fd9547b713f32ea05b27fbece2c4244d
Author: Andrey Gura 
Date:   2018-11-26T18:03:57Z

IGNITE-10412 Reproducer




---


[jira] [Created] (IGNITE-10412) Transaction tries prepare after commit

2018-11-26 Thread Andrey Gura (JIRA)
Andrey Gura created IGNITE-10412:


 Summary: Transaction tries prepare after commit
 Key: IGNITE-10412
 URL: https://issues.apache.org/jira/browse/IGNITE-10412
 Project: Ignite
  Issue Type: Improvement
Affects Versions: 2.6
Reporter: Andrey Gura


Transaction could be rolled back due to a changing baseline topology. But it 
still tries to prepare. This transaction state transition is invalid.

See reproducer:

See also https://issues.apache.org/jira/browse/IGNITE-10277



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSSION] Design document. Rebalance caches by transferring partition files

2018-11-26 Thread Eduard Shangareev
Maxim,

I have looked through your algorithm for reading a partition consistently,
and I have some questions/comments.

1. The algorithm requires heavy synchronization between the checkpoint thread
and the new-approach rebalance threads, because you need strong guarantees not
to start writing or reading a chunk which has been updated or is being read by
the counterpart.

2. Also, once we have started transferring a chunk, it couldn't be updated in
the original partition by checkpoint threads. They would have to wait for the
transfer to finish.

3. If sending is slow and the partition is updated, then in the worst case the
checkpoint threads would create a whole copy of the partition.

So, what we have:
- on every page write the checkpoint thread has to synchronize with the
new-approach rebalance threads;
- the checkpoint thread has to do extra work, which in some cases could be as
large as copying the whole partition.


On Fri, Nov 23, 2018 at 2:55 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> This proposal will also happily break my compression-with-dictionary patch
> since it relies currently on only having local dictionaries.
>
> However, when you have compressed data, maybe speed boost is even greater
> with your approach.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Fri, 23 Nov 2018 at 13:08, Maxim Muzafarov :
>
> > Igniters,
> >
> >
> > I'd like to take the next step of increasing the Apache Ignite with
> > enabled persistence rebalance speed. Currently, the rebalancing
> > procedure doesn't utilize the network and storage device throughput to
> > its full extent even with enough meaningful values of
> > rebalanceThreadPoolSize property. As part of the previous discussion
> > `How to make rebalance faster` [1] and IEP-16 [2] Ilya proposed an
> > idea [3] of transferring cache partition files over the network.
> > From my point, the case to which this type of rebalancing procedure
> > can bring the most benefit – is adding a completely new node or set of
> > new nodes to the cluster. Such a scenario implies full relocation of
> > cache partition files to the new node. To roughly estimate the
> > superiority of partition file transmitting over the network the native
> > Linux scp\rsync commands can be used. My test environment showed the
> > result of the new approach as 270 MB/s vs the current 40 MB/s
> > single-threaded rebalance speed.
> >
> >
> > I've prepared the design document IEP-28 [4] and accumulated all the
> > process details of a new rebalance approach on that page. Below you
> > can find the most significant details of the new rebalance procedure
> > and components of the Apache Ignite which are proposed to change.
> >
> > Any feedback is very appreciated.
> >
> >
> > *PROCESS OVERVIEW*
> >
> > The whole process is described in terms of rebalancing single cache
> > group and partition files would be rebalanced one-by-one:
> >
> > 1. The demander node sends the GridDhtPartitionDemandMessage to the
> > supplier node;
> > 2. When the supplier node receives GridDhtPartitionDemandMessage and
> > starts the new checkpoint process;
> > 3. The supplier node creates empty the temporary cache partition file
> > with .tmp postfix in the same cache persistence directory;
> > 4. The supplier node splits the whole cache partition file into
> > virtual chunks of predefined size (multiply to the PageMemory size);
> > 4.1. If the concurrent checkpoint thread determines the appropriate
> > cache partition file chunk and tries to flush dirty page to the cache
> > partition file
> > 4.1.1. If rebalance chunk already transferred
> > 4.1.1.1. Flush the dirty page to the file;
> > 4.1.2. If rebalance chunk not transferred
> > 4.1.2.1. Write this chunk to the temporary cache partition file;
> > 4.1.2.2. Flush the dirty page to the file;
> > 4.2. The node starts sending to the demander node each cache partition
> > file chunk one by one using FileChannel#transferTo
> > 4.2.1. If the current chunk was modified by checkpoint thread – read
> > it from the temporary cache partition file;
> > 4.2.2. If the current chunk is not touched – read it from the original
> > cache partition file;
> > 5. The demander node starts to listen to new pipe incoming connections
> > from the supplier node on TcpCommunicationSpi;
> > 6. The demander node creates the temporary cache partition file with
> > .tmp postfix in the same cache persistence directory;
> > 7. The demander node receives each cache partition file chunk one by one
> > 7.1. The node checks CRC for each PageMemory in the downloaded chunk;
> > 7.2. The node flushes the downloaded chunk at the appropriate cache
> > partition file position;
> > 8. When the demander node receives the whole cache partition file
> > 8.1. The node initializes received .tmp file as its appropriate cache
> > partition file;
> > 8.2. Thread-per-partition begins to apply for data entries from the
> > beginning of WAL-temporary storage;
> > 8.3. All async operations corresponding to this partition file still
> > write to the end of temporary WAL;
> > 8.4. At the
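
A minimal sketch of the chunked sending in step 4.2 above, using
FileChannel#transferTo; the chunk size, class and method names are illustrative
only and not the IEP-28 implementation:

    import java.io.IOException;
    import java.nio.channels.FileChannel;
    import java.nio.channels.WritableByteChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    class ChunkedPartitionSender {
        /** Assumed to be a multiple of the page size, as the proposal requires. */
        private static final long CHUNK_SIZE = 16L * 1024 * 1024;

        void send(Path partFile, WritableByteChannel out) throws IOException {
            try (FileChannel src = FileChannel.open(partFile, StandardOpenOption.READ)) {
                long size = src.size();
                for (long pos = 0; pos < size; ) {
                    // transferTo may send fewer bytes than requested, so advance by the actual count.
                    pos += src.transferTo(pos, Math.min(CHUNK_SIZE, size - pos), out);
                }
            }
        }
    }

In the real flow each chunk would additionally be checked against the temporary
copy maintained by the checkpoint thread (steps 4.1.x and 4.2.1), which this
sketch omits.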

[jira] [Created] (IGNITE-10411) .NET: Allow conditional serialization override

2018-11-26 Thread Pavel Tupitsyn (JIRA)
Pavel Tupitsyn created IGNITE-10411:
---

 Summary: .NET: Allow conditional serialization override
 Key: IGNITE-10411
 URL: https://issues.apache.org/jira/browse/IGNITE-10411
 Project: Ignite
  Issue Type: Improvement
  Components: platforms
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn


IBinarySerializer provides a way to override serialization per-type or globally 
(when set as BinaryConfiguration.Serializer).

But often we want a global override only for *some* types, and the default Ignite 
mechanism for everything else. This is not trivial, because Ignite passes an 
uninitialized object to ReadBinary.

Provide a way to fall back to the default mode from IBinarySerializer (another 
interface, or extend the current one?).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #5500: IGNITE-10366: MVCC: Create "Cache 1" test suite f...

2018-11-26 Thread AMashenkov
GitHub user AMashenkov opened a pull request:

https://github.com/apache/ignite/pull/5500

IGNITE-10366: MVCC: Create "Cache 1" test suite for MVCC mode.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10366

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5500.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5500


commit 50b9e9dcf5875727b42abcf707fdb50f03f898ce
Author: Andrey V. Mashenkov 
Date:   2018-10-25T14:55:01Z

IGNITE-10002: Add MvccCacheTestSuite2. Add flag to force mvcc mode.

commit b307bfb1a73955ff416d8c5b9dd74c4e2990f91d
Author: Andrey V. Mashenkov 
Date:   2018-10-25T16:21:24Z

IGNITE-10002: Ignore AtomicCache tests.
Mute NearCache tests.

commit 3bb55091b240fce4888fc1e1c44b9132532dba1a
Author: Andrey V. Mashenkov 
Date:   2018-10-26T18:02:55Z

IGNITE-10004: Mute NearCache and CacheStore tests.

commit 3c0071d2a6494dae2881a3c19849e26390bfd5d9
Author: Andrey V. Mashenkov 
Date:   2018-10-26T18:17:39Z

IGNITE-10004: WIP.

commit 915fc162783dfa05f14ebd44918eec9452cf5ae9
Author: Andrey V. Mashenkov 
Date:   2018-10-26T18:32:33Z

IGNITE-10004: Mute unsupported cases.

commit 6257df961693b15433568d1223976151ac37458e
Author: Andrey V. Mashenkov 
Date:   2018-10-29T12:26:39Z

IGNITE-10004: WIP.

commit 3732a8b049200022e0af85e2441e6c1921461b4d
Author: Andrey V. Mashenkov 
Date:   2018-10-29T13:40:37Z

IGNITE-10004: WIP.

commit 185baf7ce1ccc414067640fdf2f13b00e052ed23
Author: Andrey V. Mashenkov 
Date:   2018-10-29T13:41:27Z

IGNITE-10004: WIP.

commit 4c65f29b7b11556a0d9c38f107f1e3f5eb0cebe9
Author: Andrey V. Mashenkov 
Date:   2018-10-29T13:53:40Z

IGNITE-10004: WIP.

commit 3d48430a957459cd0819e1b1f676b251b7b86438
Author: Andrey V. Mashenkov 
Date:   2018-10-29T14:20:37Z

IGNITE-10004: WIP.

commit a44f9be8bac14c31ebd1b84351b5a659156b4466
Author: Andrey V. Mashenkov 
Date:   2018-10-29T16:13:22Z

IGNITE-10004: WIP.

commit 7d242f4b6b6b13a2876fc8068e2d8f37aa9b2a57
Author: Andrey V. Mashenkov 
Date:   2018-10-29T16:15:26Z

IGNITE-10004: WIP.

commit 0ac066f7a10938d2309b56fefb8735fdc1675100
Author: Andrey V. Mashenkov 
Date:   2018-10-30T08:54:58Z

IGNITE-10002: WIP.

commit 28f105989b8270e96c5bf59f24639a92c74aca97
Author: Andrey V. Mashenkov 
Date:   2018-10-30T09:04:44Z

IGNITE-10002: Minor.

commit 2a61ec244c9a02e200beb211a3c7b0c73be1ec2a
Author: Andrey V. Mashenkov 
Date:   2018-10-30T10:49:35Z

IGNITE-10002: Disable non-supported Tx modes.

commit 160b39753c41de3d0e3041b07db893fe327c891b
Author: Andrey V. Mashenkov 
Date:   2018-10-30T12:19:42Z

IGNITE-10002: Disable non-supported Tx modes.

commit 58f00edd49e2a45c650a9c0089ffc9b13217ca74
Author: Andrey V. Mashenkov 
Date:   2018-10-30T12:49:42Z

IGNITE-10002: Minor.

commit e5f5838fd1e1655283ba5c56bc8a6cc4ae69412c
Author: Andrey V. Mashenkov 
Date:   2018-11-13T08:15:53Z

IGNITE-10002: Fix test.

commit 982b4574d201c055ce52005e61308536cb532547
Author: Andrey V. Mashenkov 
Date:   2018-11-14T09:34:10Z

IGNITE-10002: Fix FORCE_MVCC flag.

commit 4c3ccee618180a7282bedc49752c9c75f0711e68
Author: Andrey V. Mashenkov 
Date:   2018-11-14T09:34:20Z

IGNITE-10002: Fix tests.

commit 9d65e2614c6182c4fcaf6fe74bf299c73af2bc07
Author: Andrey V. Mashenkov 
Date:   2018-11-14T11:03:26Z

IGNITE-10002: Fix tx tests.

commit ddb59fcea609a62aa77532db5bcde83c96e0f382
Author: Andrey V. Mashenkov 
Date:   2018-11-14T12:10:07Z

IGNITE-10002: Fix hanged test.

commit 712e6d8ddcdaa5aecd7762d25cb062f94ef06ce5
Author: Andrey V. Mashenkov 
Date:   2018-11-14T13:34:41Z

IGNITE-10002: Fix node stop.

commit db9aca4792753e6b0ca2eddb8c8795b41498e682
Author: Andrey V. Mashenkov 
Date:   2018-11-14T15:08:51Z

IGNITE-10002: Mute hanged test.

commit 30d24b1c0a961d3cb9e64231867cc87982f84332
Author: user 
Date:   2018-11-14T19:38:52Z

IGNITE-10052: Fix and mute mvcc tests.

commit 191500222e87256f3417019bd2df3dc886918806
Author: user 
Date:   2018-11-14T19:48:10Z

IGNITE-10002: Mute mvcc tests.

commit 0293f36c7621843a8cf84d848df37f6996a21d71
Author: user 
Date:   2018-11-14T20:10:59Z

IGNITE-10002: Mute mvcc tests.

commit 760972570935ad59a118267ce3589ccad77f7ad0
Author: Andrey V. Mashenkov 
Date:   2018-11-15T09:05:46Z

IGNITE-10002: Fix FORCE_MVCC flag.

commit 219c402f6c95ca12015bf8ba314b9c20cc1b6683
Author: Andrey V. Mashenkov 
Date:   2018-11-15T09:07:42Z

IGNITE-10002: Unmute mvcc localPeek tests.

commit c0fc6c9c113c23be058d56b63fbe336ffd165491
Author: Andrey V. Mashenkov 
Date:   2018-11-15T09:35:00Z

IGNITE-10002: Mute tests.




---


[GitHub] ignite pull request #5499: IGNITE-10049: Cache 4 test suite for MVCC mode.

2018-11-26 Thread rkondakov
GitHub user rkondakov opened a pull request:

https://github.com/apache/ignite/pull/5499

IGNITE-10049: Cache 4 test suite for MVCC mode.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10049

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5499.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5499


commit 50b9e9dcf5875727b42abcf707fdb50f03f898ce
Author: Andrey V. Mashenkov 
Date:   2018-10-25T14:55:01Z

IGNITE-10002: Add MvccCacheTestSuite2. Add flag to force mvcc mode.

commit b307bfb1a73955ff416d8c5b9dd74c4e2990f91d
Author: Andrey V. Mashenkov 
Date:   2018-10-25T16:21:24Z

IGNITE-10002: Ignore AtomicCache tests.
Mute NearCache tests.

commit 3bb55091b240fce4888fc1e1c44b9132532dba1a
Author: Andrey V. Mashenkov 
Date:   2018-10-26T18:02:55Z

IGNITE-10004: Mute NearCache and CacheStore tests.

commit 3c0071d2a6494dae2881a3c19849e26390bfd5d9
Author: Andrey V. Mashenkov 
Date:   2018-10-26T18:17:39Z

IGNITE-10004: WIP.

commit 915fc162783dfa05f14ebd44918eec9452cf5ae9
Author: Andrey V. Mashenkov 
Date:   2018-10-26T18:32:33Z

IGNITE-10004: Mute unsupported cases.

commit 6257df961693b15433568d1223976151ac37458e
Author: Andrey V. Mashenkov 
Date:   2018-10-29T12:26:39Z

IGNITE-10004: WIP.

commit 3732a8b049200022e0af85e2441e6c1921461b4d
Author: Andrey V. Mashenkov 
Date:   2018-10-29T13:40:37Z

IGNITE-10004: WIP.

commit 185baf7ce1ccc414067640fdf2f13b00e052ed23
Author: Andrey V. Mashenkov 
Date:   2018-10-29T13:41:27Z

IGNITE-10004: WIP.

commit 4c65f29b7b11556a0d9c38f107f1e3f5eb0cebe9
Author: Andrey V. Mashenkov 
Date:   2018-10-29T13:53:40Z

IGNITE-10004: WIP.

commit 3d48430a957459cd0819e1b1f676b251b7b86438
Author: Andrey V. Mashenkov 
Date:   2018-10-29T14:20:37Z

IGNITE-10004: WIP.

commit a44f9be8bac14c31ebd1b84351b5a659156b4466
Author: Andrey V. Mashenkov 
Date:   2018-10-29T16:13:22Z

IGNITE-10004: WIP.

commit 7d242f4b6b6b13a2876fc8068e2d8f37aa9b2a57
Author: Andrey V. Mashenkov 
Date:   2018-10-29T16:15:26Z

IGNITE-10004: WIP.

commit 0ac066f7a10938d2309b56fefb8735fdc1675100
Author: Andrey V. Mashenkov 
Date:   2018-10-30T08:54:58Z

IGNITE-10002: WIP.

commit 28f105989b8270e96c5bf59f24639a92c74aca97
Author: Andrey V. Mashenkov 
Date:   2018-10-30T09:04:44Z

IGNITE-10002: Minor.

commit 2a61ec244c9a02e200beb211a3c7b0c73be1ec2a
Author: Andrey V. Mashenkov 
Date:   2018-10-30T10:49:35Z

IGNITE-10002: Disable non-supported Tx modes.

commit 160b39753c41de3d0e3041b07db893fe327c891b
Author: Andrey V. Mashenkov 
Date:   2018-10-30T12:19:42Z

IGNITE-10002: Disable non-supported Tx modes.

commit 58f00edd49e2a45c650a9c0089ffc9b13217ca74
Author: Andrey V. Mashenkov 
Date:   2018-10-30T12:49:42Z

IGNITE-10002: Minor.

commit e5f5838fd1e1655283ba5c56bc8a6cc4ae69412c
Author: Andrey V. Mashenkov 
Date:   2018-11-13T08:15:53Z

IGNITE-10002: Fix test.

commit 982b4574d201c055ce52005e61308536cb532547
Author: Andrey V. Mashenkov 
Date:   2018-11-14T09:34:10Z

IGNITE-10002: Fix FORCE_MVCC flag.

commit 4c3ccee618180a7282bedc49752c9c75f0711e68
Author: Andrey V. Mashenkov 
Date:   2018-11-14T09:34:20Z

IGNITE-10002: Fix tests.

commit 9d65e2614c6182c4fcaf6fe74bf299c73af2bc07
Author: Andrey V. Mashenkov 
Date:   2018-11-14T11:03:26Z

IGNITE-10002: Fix tx tests.

commit ddb59fcea609a62aa77532db5bcde83c96e0f382
Author: Andrey V. Mashenkov 
Date:   2018-11-14T12:10:07Z

IGNITE-10002: Fix hanged test.

commit 712e6d8ddcdaa5aecd7762d25cb062f94ef06ce5
Author: Andrey V. Mashenkov 
Date:   2018-11-14T13:34:41Z

IGNITE-10002: Fix node stop.

commit db9aca4792753e6b0ca2eddb8c8795b41498e682
Author: Andrey V. Mashenkov 
Date:   2018-11-14T15:08:51Z

IGNITE-10002: Mute hanged test.

commit 30d24b1c0a961d3cb9e64231867cc87982f84332
Author: user 
Date:   2018-11-14T19:38:52Z

IGNITE-10052: Fix and mute mvcc tests.

commit 191500222e87256f3417019bd2df3dc886918806
Author: user 
Date:   2018-11-14T19:48:10Z

IGNITE-10002: Mute mvcc tests.

commit 0293f36c7621843a8cf84d848df37f6996a21d71
Author: user 
Date:   2018-11-14T20:10:59Z

IGNITE-10002: Mute mvcc tests.

commit 760972570935ad59a118267ce3589ccad77f7ad0
Author: Andrey V. Mashenkov 
Date:   2018-11-15T09:05:46Z

IGNITE-10002: Fix FORCE_MVCC flag.

commit 219c402f6c95ca12015bf8ba314b9c20cc1b6683
Author: Andrey V. Mashenkov 
Date:   2018-11-15T09:07:42Z

IGNITE-10002: Unmute mvcc localPeek tests.

commit c0fc6c9c113c23be058d56b63fbe336ffd165491
Author: Andrey V. Mashenkov 
Date:   2018-11-15T09:35:00Z

IGNITE-10002: Mute tests.




---


[jira] [Created] (IGNITE-10410) MVCC: Create "Cache 7" test suite for MVCC mode.

2018-11-26 Thread Roman Kondakov (JIRA)
Roman Kondakov created IGNITE-10410:
---

 Summary: MVCC: Create "Cache 7" test suite for MVCC mode.
 Key: IGNITE-10410
 URL: https://issues.apache.org/jira/browse/IGNITE-10410
 Project: Ignite
  Issue Type: Sub-task
  Components: mvcc
Reporter: Roman Kondakov
Assignee: Andrew Mashenkov


Create MVCC version of IgniteCacheTestSuite6 and add it to TC.

All non-relevant tests should be marked as ignored.
 Failed tests should be muted and tickets should be created for unknown failure 
reasons.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #4927: IGNITE-9818 Fix javadoc for annotation AffinityKe...

2018-11-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4927


---


Re: Brainstorm: Make TC Run All faster

2018-11-26 Thread aplatonov
It should be noted that an additional parameter, TEST_SCALE_FACTOR, was added.
This parameter, together with the ScaleFactorUtil methods, can be used for test
size scaling in different runs (such as ordinary and nightly Run Alls). If someone
wants to distinguish these builds, they can apply the scaling methods from
ScaleFactorUtil in their own tests. For nightly tests TEST_SCALE_FACTOR=1.0, for
non-nightly builds TEST_SCALE_FACTOR<1.0. For example, in the
GridAbstractCacheInterceptorRebalanceTest test, ScaleFactorUtil was used for
scaling the number of iterations. I guess that TEST_SCALE_FACTOR support will be
added to runs at the same time as the Run All (nightly) runs.

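A tiny illustration of the idea; this is not the real ScaleFactorUtil API, and
reading TEST_SCALE_FACTOR as a system property is an assumption made only for
the example:

    class ScaleFactorExample {
        /** Returns the nightly iteration count scaled down for ordinary Run All builds. */
        static int scaledIterations(int nightlyIterations) {
            double factor = Double.parseDouble(System.getProperty("TEST_SCALE_FACTOR", "1.0"));
            return Math.max(1, (int)(nightlyIterations * factor));
        }
    }
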


--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


h2.IgniteH2Indexing.start

2018-11-26 Thread srikanth.mer...@tcs.com
Hi 

I am trying to read data from an Ignite cache from Spark and am facing the below error
while creating IgniteContext
 
 val ic = new IgniteContext(sc, () =>new
IgniteConfiguration()).fromCache("PUBLIC")

Error:
java.lang.NoSuchFieldError: serializer
  at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.start(IgniteH2Indexing.java:2539)
  at
org.apache.ignite.internal.processors.query.GridQueryProcessor.start(GridQueryProcessor.java:242)
  at
org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1739)
  at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:981)
  at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2014)
  at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1723)
  at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1151)
  at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:671)
  at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:611)
  at org.apache.ignite.Ignition.getOrStart(Ignition.java:419)
  at org.apache.ignite.spark.IgniteContext.ignite(IgniteContext.scala:150)
  at org.apache.ignite.spark.IgniteContext.<init>(IgniteContext.scala:63)

Using below jars to read and connect to ignite from Spark:

ignite-core-2.6.0.jar,ignite-spring-2.5.0.jar,ignite-spark-2.6.0.jar,cache-api-1.1.0.jar,
ignite-log4j-2.5.0.jar,log4j-1.2.15.jar,ignite-indexing-2.6.0.jar,h2-1.0.60.jar,h2-1.4.197.jar

I need your help.
Thanks ,
Srikanth Merugu




--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


[GitHub] ignite pull request #5470: IGNITE-10358

2018-11-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5470


---


[GitHub] ignite pull request #5397: IGNITE-9517: Replace uses of ConcurrentHashSet wi...

2018-11-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5397


---


[GitHub] ignite pull request #5390: IGNITE-10184: Fixed type conflict check test in p...

2018-11-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5390


---


IEP-24: SQL Partition Pruning

2018-11-26 Thread Vladimir Ozerov
Igniters,

I prepared IEP-24 [1] for the so-called "partition pruning" optimization
for our SQL engine, which will allow us to determine target nodes
containing query data prior to query execution. We already use this
optimization for very simple scenarios - only one expression, no JOINs.

The goals of this IEP:
1) Extract partitions from complex expressions
2) Support common JOIN scenarios
3) Allow calculation of target partitions on thin client to allow more
efficient request routing
4) Introduce monitoring capabilities to let user know whether optimization
is applicable to specific query or not

IEP covers several complex architecture questions, which will be finalized
during actual implementation:
1) Rules for partition extraction from complex AND and OR expressions, as
well as from "IN (...)", "BETWEEN ... AND ...", and range expressions
2) Rules for partition extraction from JOINs
3) Several subquery rewrite rules which will allow to apply optimization to
certain subqueries.

Also this optimization will introduce some basic building blocks
("co-location tree") for further improvements of our distributed joins.

Will appreciate your review and comments.

Vladimir.

[1]
https://cwiki.apache.org/confluence/display/IGNITE/IEP-24%3A+SQL+Partition+Pruning
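
A small illustration of the idea behind the IEP (not its implementation): when
the WHERE clause fixes the affinity key, the owning partition and node can be
computed before the query runs, so only that node needs to be touched. The
cache name below is made up:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.cache.affinity.Affinity;
    import org.apache.ignite.cluster.ClusterNode;

    class PartitionPruningIdea {
        /** E.g. for "SELECT * FROM Person WHERE affinityKey = ?" the single bound argument is enough. */
        static ClusterNode targetNode(Ignite ignite, Object affinityKeyValue) {
            Affinity<Object> aff = ignite.affinity("PERSON_CACHE");
            int part = aff.partition(affinityKeyValue); // partition derived from the key
            return aff.mapPartitionToNode(part);        // node currently owning that partition
        }
    }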


[jira] [Created] (IGNITE-10409) ExchangeFuture should be in charge of cancelling rebalancing process

2018-11-26 Thread Sergey Chugunov (JIRA)
Sergey Chugunov created IGNITE-10409:


 Summary: ExchangeFuture should be in charge of cancelling 
rebalancing process
 Key: IGNITE-10409
 URL: https://issues.apache.org/jira/browse/IGNITE-10409
 Project: Ignite
  Issue Type: Improvement
Reporter: Sergey Chugunov
 Fix For: 2.8


Ticket IGNITE-7165 introduced improvement of not cancelling any on-going 
partition rebalancing process when client node joins topology. Client join 
event doesn't change affinity distribution so on-going rebalance remains valid, 
no need to cancel it and restart again.
Implementation was based on introducing new method *rebalanceRequired* in 
*GridCachePreloader* interface.

At the same time, PME optimization efforts enhanced ExchangeFuture 
functionality, so now the future itself contains all information about whether 
affinity changed or not.

We need to rework code changes from IGNITE-7165 and base it on ExchangeFuture 
functionality instead of new method in Preloader interface.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #5078: IGNITE-9937 Primary response error can be lost du...

2018-11-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5078


---


[jira] [Created] (IGNITE-10408) Clean unused legacy code in tests marked with GG-11148

2018-11-26 Thread Roman Kondakov (JIRA)
Roman Kondakov created IGNITE-10408:
---

 Summary: Clean unused legacy code in tests marked with GG-11148
 Key: IGNITE-10408
 URL: https://issues.apache.org/jira/browse/IGNITE-10408
 Project: Ignite
  Issue Type: Bug
  Components: cache
Reporter: Roman Kondakov


There is a bunch of TODOs in cache tests marked with GG-11148. We need to 
verify the relevance of these tests and clean them up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10407) [ML] Add Multi-label multi-class classification trainer and model

2018-11-26 Thread Aleksey Zinoviev (JIRA)
Aleksey Zinoviev created IGNITE-10407:
-

 Summary: [ML] Add Multi-label multi-class classification trainer 
and model
 Key: IGNITE-10407
 URL: https://issues.apache.org/jira/browse/IGNITE-10407
 Project: Ignite
  Issue Type: New Feature
  Components: ml
Reporter: Aleksey Zinoviev
Assignee: Aleksey Zinoviev


Improve the Ignite ML ability to work with multi-label multi-class 
classification tasks.

This requires:
 * extension of the current API, which has models for Double prediction only
 * addition of a common OneVsRest multi-label multi-class classification Model and 
Trainer
 * preparation of appropriate datasets for examples and testing
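
For illustration only, a library-agnostic sketch of the one-vs-rest idea for multi-label prediction (the BinaryModel interface and all other names below are hypothetical and do not reflect the actual Ignite ML API):

{code:java}
// Library-agnostic sketch of the one-vs-rest idea for multi-label prediction:
// one binary scorer per class; every class whose score exceeds a threshold is
// reported as a label. This is NOT the Ignite ML API; all names are made up.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class OneVsRestSketch {
    /** Binary scorer: confidence that the sample belongs to "its" class. */
    interface BinaryModel extends Function<double[], Double> {}

    static List<Integer> predictLabels(List<BinaryModel> perClassModels, double[] features, double threshold) {
        List<Integer> labels = new ArrayList<>();

        for (int cls = 0; cls < perClassModels.size(); cls++) {
            if (perClassModels.get(cls).apply(features) > threshold)
                labels.add(cls); // The sample carries this label too.
        }

        return labels;
    }

    public static void main(String[] args) {
        // Three toy "models", each simply looks at one feature.
        List<BinaryModel> models = new ArrayList<>();
        models.add(f -> f[0]);
        models.add(f -> f[1]);
        models.add(f -> f[2]);

        // Prints [0, 2]: two labels exceed the 0.5 threshold.
        System.out.println(predictLabels(models, new double[] {0.9, 0.1, 0.7}, 0.5));
    }
}
{code}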



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #5200: IGNITE-10330 Page compression for Ignite persiste...

2018-11-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5200


---


Re: Suggestion to improve deadlock detection

2018-11-26 Thread Павлухин Иван
Vladimir,

I think it might work. So, if nobody minds, I can start prototyping the
edge-chasing approach.
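
For reference, a rough single-process sketch of the probe ("edge-chasing") idea (hypothetical names; the wait-for model is deliberately simplified to one edge per transaction):

{code:java}
// Rough single-JVM sketch of edge-chasing (Chandy-Misra-Haas style) detection:
// a probe carries the initiator id and is forwarded along wait-for edges;
// if it comes back to the initiator, a deadlock cycle exists.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class EdgeChasingSketch {
    /** txId -> txId it is waiting for (simplified: one edge per tx). */
    private final Map<Integer, Integer> waitsFor = new HashMap<>();

    void addWaitEdge(int waiter, int holder) {
        waitsFor.put(waiter, holder);
    }

    /** "Sends" a probe from the initiator and follows wait-for edges. */
    boolean detectDeadlock(int initiatorTx) {
        Set<Integer> visited = new HashSet<>();
        Integer cur = waitsFor.get(initiatorTx);

        while (cur != null && visited.add(cur)) {
            if (cur == initiatorTx)
                return true; // Probe returned to the initiator: cycle found.

            cur = waitsFor.get(cur); // Forward the probe along the next edge.
        }

        return false;
    }

    public static void main(String[] args) {
        EdgeChasingSketch g = new EdgeChasingSketch();
        g.addWaitEdge(1, 2);
        g.addWaitEdge(2, 3);
        g.addWaitEdge(3, 1);

        System.out.println(g.detectDeadlock(1)); // true: 1 -> 2 -> 3 -> 1
    }
}
{code}
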
Mon, 26 Nov 2018 at 14:32, Vladimir Ozerov :
>
> Ivan,
>
> The problem is that in our system a transaction may wait for N locks
> simultaneously. This may form complex graphs which are spread across many
> nodes. Now consider that I have a deadlock between 4 nodes: A -> B -> *C*
> -> D -> A. I've sent a message from A and it never reached D because C failed.
> Well, maybe that is good, because some transactions will be rolled back.
> But when they are rolled back, other cycles from pending multi-way
> deadlocks may form again. E.g. A -> B -> *E* -> D -> A. So the question is
> whether we will be able to detect them reliably. I think that we may use
> the following assumptions:
> 1) If a data node fails, the relevant transactions will be rolled back
> 2) It means that some other transactions will make at least partial
> progress with lock acquisition
>
> So, we can introduce a kind of counter which will be advanced on every
> acquired lock on a given node, and define a rule that a deadlock check
> request for the given pair (NODE_ID, COUNTER) should be sent at most
> once. This way, even if a specific deadlock check request is lost due to a
> node failure and another deadlock has formed, some other node will
> re-trigger the deadlock check request sooner or later, as its counter
> advances.
>
> Makes sense?
>
> On Sat, Nov 24, 2018 at 5:40 PM Павлухин Иван  wrote:
>
> > Hi Vladimir,
> >
> > Regarding fault tolerance. It seems that it is not a problem for
> > edge-chasing approaches. A found deadlock is identified by a message
> > returned to a detection initiator with initiator's identifier. If
> > there is no deadlock then such message will not come. If some node
> > containing a deadlocked transaction fails then it will break the
> > deadlock. Am I missing something?
> >
> > About messaging overhead. Indeed, it looks like edge-chasing
> > approaches can bring redundant messages. Perhaps, we can borrow some
> > ideas about optimization from [1]. And I also think about a timeout
> > before starting a deadlock detection.
> >
> > Thoughts about adaptive timeouts lead me to some thoughts about
> > monitoring. I assume that long waiting for locks signals suboptimal
> > performance of the system. I think it would be great to have
> > means of monitoring transaction contention. It could help users to
> > improve their workloads. It could also help us to build some
> > reasoning about how contention correlates with deadlock detection overhead.
> >
> > [1] http://mentallandscape.com/Papers_podc84.pdf
> > Sun, 18 Nov 2018 at 10:52, Vladimir Ozerov :
> > >
> > > Hi Ivan,
> > >
> > > Great analysis. Agree that edge-chasing looks like a better candidate.
> > > First, it will be applicable to both normal and MVCC transactions.
> > > Second, in MVCC we probably will also need to release some locks when
> > > doing rollbacks. What we should think about is failover - what if a node
> > > which was in the middle of a graph fails? We need to craft
> > > fault-tolerance carefully.
> > >
> > > Another critically important point is how to trigger deadlock detection.
> > > My concerns about edge-chasing were not about latency, but about the
> > > number of messages which travel between nodes. A good algorithm must
> > > produce few to no messages in case of normal contention while still
> > > providing protection in case of real deadlocks. So how would we trigger
> > > graph traversal for the given transaction? Maybe we can start with a
> > > hard timeout and then employ a kind of adaptive increment in case a high
> > > rate of false positives is observed. Maybe something else.
> > >
> > > On Sun, Nov 18, 2018 at 10:21 AM Павлухин Иван 
> > wrote:
> > >
> > > > Vladimir,
> > > >
> > > > Thanks for the articles! I studied them and a couple of others. And I
> > > > would like to share the knowledge I found.
> > > >
> > > > BACKGROUND
> > > > First of all our algorithm implemented in
> > > >
> > > >
> > org.apache.ignite.internal.processors.cache.transactions.TxDeadlockDetection
> > > > is not an edge-chasing algorithm. In essence a lock-waiting
> > > > transaction site polls nodes responsible for keys of interest one by
> > > > one and reconstructs global wait-for graph (WFG) locally.
> > > > A centralized algorithm discussed in this thread looks similar to
> > > > algorithms proposed by Ho [1]. The simplest of them (two-phased)
> > > > reports false deadlocks when unlock before transaction finish is
> > > > permitted. So, it seems that it only works when strict two-phase
> > > > locking (2PL) is obeyed. Another one (called one-phased) requires
> > > > tracking all locked keys by each transaction which is not desirable
> > > > for MVCC transactions.
> > > > Aforementioned edge-chasing algorithm by Chandy, Misra and Haas [2] is
> > > > proven to work even when 2PL is not obeyed.
> > > > Also performance t

[jira] [Created] (IGNITE-10406) .NET Failed to run ScanQuery with custom filter after server node restart

2018-11-26 Thread Ivan Daschinskiy (JIRA)
Ivan Daschinskiy created IGNITE-10406:
-

 Summary: .NET Failed to run ScanQuery with custom filter after 
server node restart
 Key: IGNITE-10406
 URL: https://issues.apache.org/jira/browse/IGNITE-10406
 Project: Ignite
  Issue Type: Bug
Reporter: Ivan Daschinskiy


Scenario:
1. Start the server.
2. Start the client.
3. Restart the server and wait for the client to reconnect.
4. Put some data into the cache and run a ScanQuery with a custom filter.
 
StackTrace:

{code:java}
class org.apache.ignite.IgniteCheckedException: Failed to inject resource 
[method=setIgniteInstance, 
target=org.apache.ignite.internal.processors.platform.cache.PlatformCacheEntryFilterImpl@6225c21c,
 rsrc=IgniteKernal [cfg=IgniteConfiguration 
[igniteInstanceName=CashflowCluster, pubPoolSize=8, svcPoolSize=8, 
callbackPoolSize=8, stripedPoolSize=8, sysPoolSize=8, mgmtPoolSize=4, 
igfsPoolSize=4, dataStreamerPoolSize=8,
 utilityCachePoolSize=8, utilityCacheKeepAliveTime=6, p2pPoolSize=2, 
qryPoolSize=8, 
igniteHome=C:\Job\fd-tasks\7404\IgniteTests2\packages\Apache.Ignite.2.6.0, 
igniteWorkDir=C:\Job\fd-tasks\7404\IgniteTests2\packages\Apache.Ignite.2.6.0\work,
 mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@49993335, 
nodeId=3f4aadd9-01b3-4ffe-b629-895fb6ac886f, 
marsh=org.apache.ignite.internal.binary.BinaryMarshaller@77a57272, mar
shLocJobs=false, daemon=false, p2pEnabled=false, netTimeout=5000, 
sndRetryDelay=1000, sndRetryCnt=3, metricsHistSize=1, 
metricsUpdateFreq=2000, metricsExpTime=9223372036854775807, 
discoSpi=TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000, ackTimeout=5000, 
marsh=JdkMarshaller 
[clsFilter=org.apache.ignite.marshaller.MarshallerUtils$1@65b1c1e3], 
reconCnt=10, reconDelay=2000, maxAckTimeout=60, forceSrvMode=fals
e, clientReconnectDisabled=false, internalLsnr=null], segPlc=STOP, 
segResolveAttempts=2, waitForSegOnStart=true, allResolversPassReq=true, 
segChkFreq=1, commSpi=TcpCommunicationSpi 
[connectGate=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$ConnectGateway@4737110c,
 connPlc=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$6@bce0ed4, 
enableForcibleNodeKill=false, enableTroubleshootingLog=fa
lse, 
srvLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2@11c20519, 
locAddr=null, locHost=0.0.0.0/0.0.0.0, locPort=47100, locPortRange=100, 
shmemPort=-1, directBuf=true, directSndBuf=false, idleConnTimeout=60, 
connTimeout=5000, maxConnTimeout=60, reconCnt=10, sockSndBuf=32768, 
sockRcvBuf=32768, msgQueueLimit=0, slowClientQueueLimit=0, 
nioSrvr=GridNioServer [selectorSpins=0, filterChain=Filte
rChain[filters=[GridNioCodecFilter 
[parser=org.apache.ignite.internal.util.nio.GridDirectParser@6839fd4e, 
directMode=true], GridConnectionBytesVerifyFilter], 
lsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2@11c20519, 
closed=false, directBuf=true, tcpNoDelay=true, sockSndBuf=32768, 
sockRcvBuf=32768, writeTimeout=2000, idleTimeout=60, skipWrite=false, 
skipRead=false, locAddr=0.0.0.0/0.0.0.0:47100
, order=LITTLE_ENDIAN, sndQueueLimit=0, directMode=true, 
metricsLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationMetricsListener@4e41089d,
 sslFilter=null, msgQueueLsnr=null, readerMoveCnt=0, writerMoveCnt=0, 
readWriteSelectorsAssign=false], shmemSrv=null, usePairedConnections=false, 
connectionsPerNode=1, tcpNoDelay=true, filterReachableAddresses=false, 
ackSndThreshold=32, unackedMsgsBufSize=0, sockWriteT
imeout=2000, 
lsnr=org.apache.ignite.internal.managers.communication.GridIoManager$2@432d2e4e,
 boundTcpPort=47100, boundTcpShmemPort=-1, selectorsCnt=4, selectorSpins=0, 
addrRslvr=null, ctxInitLatch=java.util.concurrent.CountDownLatch@70beb599[Count 
= 0], stopping=false, 
metricsLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationMetricsListener@4e41089d],
 evtSpi=org.apache.ignite.spi.eventstorage.NoopEventSt
orageSpi@32a068d1, colSpi=NoopCollisionSpi [], deploySpi=LocalDeploymentSpi 
[lsnr=org.apache.ignite.internal.managers.deployment.GridDeploymentLocalStore$LocalDeploymentListener@3c6df856],
 indexingSpi=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi@282003e1, 
addrRslvr=null, clientMode=false, rebalanceThreadPoolSize=1, 
txCfg=org.apache.ignite.configuration.TransactionConfiguration@7fad8c79, 
cacheSanityCheckEnable
d=true, discoStartupDelay=6, deployMode=SHARED, p2pMissedCacheSize=100, 
locHost=null, timeSrvPortBase=31100, timeSrvPortRange=100, 
failureDetectionTimeout=1, clientFailureDetectionTimeout=3, 
metricsLogFreq=6, hadoopCfg=null, 
connectorCfg=org.apache.ignite.configuration.ConnectorConfiguration@71a794e5, 
odbcCfg=null, warmupClos=null, atomicCfg=AtomicConfiguration 
[seqReserveSize=1000, cacheMode=PARTITI
ONED, backups=1, aff=null, grpName=null], classLdr=null, sslCtxFactory=null, 
platformCfg=PlatformDotNetConfiguration [binaryCfg=null], binaryCfg=null, 
memCfg=null,

[jira] [Created] (IGNITE-10405) [ML] Refactor GaussianNaiveBayesTrainerExample to read data sample from file

2018-11-26 Thread Aleksey Zinoviev (JIRA)
Aleksey Zinoviev created IGNITE-10405:
-

 Summary: [ML] Refactor GaussianNaiveBayesTrainerExample to read 
data sample from file
 Key: IGNITE-10405
 URL: https://issues.apache.org/jira/browse/IGNITE-10405
 Project: Ignite
  Issue Type: Improvement
  Components: ml
Affects Versions: 2.8
Reporter: Aleksey Zinoviev
Assignee: Aleksey Zinoviev
 Fix For: 2.8


Remove the IrisDataset class in Utils and use two_classed_iris.csv to load the dataset 
from a csv file.

Also, delete the method of filling SandboxMLDatasets with a double[][] array.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Suggestion to improve deadlock detection

2018-11-26 Thread Vladimir Ozerov
Ivan,

The problem is that in our system a transaction may wait for N locks
simultaneously. This may form complex graphs which are spread across many
nodes. Now consider that I have a deadlock between 4 nodes: A -> B -> *C*
-> D -> A. I've sent a message from A and it never reached D because C failed.
Well, maybe that is good, because some transactions will be rolled back.
But when they are rolled back, other cycles from pending multi-way
deadlocks may form again. E.g. A -> B -> *E* -> D -> A. So the question is
whether we will be able to detect them reliably. I think that we may use
the following assumptions:
1) If a data node fails, the relevant transactions will be rolled back
2) It means that some other transactions will make at least partial
progress with lock acquisition

So, we can introduce a kind of counter which will be advanced on every
acquired lock on a given node, and define a rule that a deadlock check
request for the given pair (NODE_ID, COUNTER) should be sent at most
once. This way, even if a specific deadlock check request is lost due to a
node failure and another deadlock has formed, some other node will
re-trigger the deadlock check request sooner or later, as its counter
advances.

Makes sense?
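
A tiny sketch of the "(NODE_ID, COUNTER) at most once" rule described above (hypothetical names, not the actual transaction code):

{code:java}
// Hypothetical sketch of the "(NODE_ID, COUNTER) at most once" rule: a deadlock
// check is re-triggered only when the node's lock counter has advanced since
// the last request, so a lost request is eventually retried as locks progress.
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.atomic.AtomicLong;

public class DeadlockCheckDedup {
    /** Advanced on every lock acquired on this node. */
    private final AtomicLong lockCounter = new AtomicLong();

    /** Last counter value for which a check was requested, per remote node. */
    private final Map<UUID, Long> lastRequested = new HashMap<>();

    void onLockAcquired() {
        lockCounter.incrementAndGet();
    }

    /** Returns true if a deadlock check request should be sent to the node now. */
    synchronized boolean shouldRequestCheck(UUID nodeId) {
        long cur = lockCounter.get();
        Long prev = lastRequested.get(nodeId);

        if (prev != null && prev == cur)
            return false; // Already requested for this (NODE_ID, COUNTER) pair.

        lastRequested.put(nodeId, cur);
        return true;
    }

    public static void main(String[] args) {
        DeadlockCheckDedup dedup = new DeadlockCheckDedup();
        UUID node = UUID.randomUUID();

        System.out.println(dedup.shouldRequestCheck(node)); // true: first request
        System.out.println(dedup.shouldRequestCheck(node)); // false: counter unchanged
        dedup.onLockAcquired();
        System.out.println(dedup.shouldRequestCheck(node)); // true: counter advanced
    }
}
{code}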

On Sat, Nov 24, 2018 at 5:40 PM Павлухин Иван  wrote:

> Hi Vladimir,
>
> Regarding fault tolerance. It seems that it is not a problem for
> edge-chasing approaches. A found deadlock is identified by a message
> returned to a detection initiator with initiator's identifier. If
> there is no deadlock then such message will not come. If some node
> containing a deadlocked transaction fails then it will break the
> deadlock. Am I missing something?
>
> About messaging overhead. Indeed, it looks like edge-chasing
> approaches can bring redundant messages. Perhaps, we can borrow some
> ideas about optimization from [1]. And I also think about a timeout
> before starting a deadlock detection.
>
> Thoughts about adaptive timeouts lead me to some thoughts about
> monitoring. I assume that long waiting for locks signals suboptimal
> performance of the system. I think it would be great to have
> means of monitoring transaction contention. It could help users to
> improve their workloads. It could also help us to build some
> reasoning about how contention correlates with deadlock detection overhead.
>
> [1] http://mentallandscape.com/Papers_podc84.pdf
> Sun, 18 Nov 2018 at 10:52, Vladimir Ozerov :
> >
> > Hi Ivan,
> >
> > Great analysis. Agree that edge-chasing looks like a better candidate.
> > First, it will be applicable to both normal and MVCC transactions.
> > Second, in MVCC we probably will also need to release some locks when
> > doing rollbacks. What we should think about is failover - what if a node
> > which was in the middle of a graph fails? We need to craft
> > fault-tolerance carefully.
> >
> > Another critically important point is how to trigger deadlock detection.
> > My concerns about edge-chasing were not about latency, but about the
> > number of messages which travel between nodes. A good algorithm must
> > produce few to no messages in case of normal contention while still
> > providing protection in case of real deadlocks. So how would we trigger
> > graph traversal for the given transaction? Maybe we can start with a
> > hard timeout and then employ a kind of adaptive increment in case a high
> > rate of false positives is observed. Maybe something else.
> >
> > On Sun, Nov 18, 2018 at 10:21 AM Павлухин Иван 
> wrote:
> >
> > > Vladimir,
> > >
> > > Thanks for the articles! I studied them and a couple of others. And I
> > > would like to share the knowledge I found.
> > >
> > > BACKGROUND
> > > First of all our algorithm implemented in
> > >
> > >
> org.apache.ignite.internal.processors.cache.transactions.TxDeadlockDetection
> > > is not an edge-chasing algorithm. In essence a lock-waiting
> > > transaction site polls nodes responsible for keys of interest one by
> > > one and reconstructs global wait-for graph (WFG) locally.
> > > A centralized algorithm discussed in this thread looks similar to
> > > algorithms proposed by Ho [1]. The simplest of them (two-phased)
> > > reports false deadlocks when unlock before transaction finish is
> > > permitted. So, it seems that it only works when strict two-phase
> > > locking (2PL) is obeyed. Another one (called one-phased) requires
> > > tracking all locked keys by each transaction which is not desirable
> > > for MVCC transactions.
> > > Aforementioned edge-chasing algorithm by Chandy, Misra and Haas [2] is
> > > proven to work even when 2PL is not obeyed.
> > > Also the performance target is not clear for me. It looks like centralized
> > > approaches can use fewer messages and provide lower latency than
> > > distributed ones. But I would like to understand what the orders of
> > > latency and messaging overhead are. Many of the algorithms are described in
> > > [3] and some performance details are mentioned. It is said "As per
> >

Re: [MTCGA] Disabled tests.

2018-11-26 Thread Ilya Kasnacheev
Hello!

I think we should un-ignore these tests. You can even create a sub-task
under https://issues.apache.org/jira/browse/IGNITE-9210

Regards,
-- 
Ilya Kasnacheev


Mon, 26 Nov 2018 at 14:20, Andrey Mashenkov :

> Hi Igniters,
>
>
> I've found that the "Cache 1" TC suite actually
> starts the IgniteBinaryCacheTestSuite.class suite.
> This suite ignores several tests that have copies to be run with the binary
> marshaller:
> * DataStreamProcessorSelfTest
> * GridCacheAffinityRoutingSelfTest
> * IgniteCacheAtomicLocalExpiryPolicyTest
> * GridCacheEntryMemorySizeSelfTest
> * GridCacheMvccSelfTest
>
> Looks like these tests were excluded from the run as duplicates, as they were
> part of another TC suite before BinaryMarshaller became the default
> marshaller.
>
> Quick investigation shows that
> 1. DataStreamProcessorSelfTest is a DataStreamer test with keepBinary=false
> mode and we never check this case
> 2. DataStreamProcessorBinarySelfTest (its binary version) checks
> the keepBinary=true case within IgniteBinaryCacheTestSuite.
>
>
> Should we stop ignoring the mentioned tests or remove them?
> Thoughts?
>
> --
> Best regards,
> Andrey V. Mashenkov
>


[MTCGA] Disabled tests.

2018-11-26 Thread Andrey Mashenkov
Hi Igniters,


I've found that the "Cache 1" TC suite actually
starts the IgniteBinaryCacheTestSuite.class suite.
This suite ignores several tests that have copies to be run with the binary
marshaller:
* DataStreamProcessorSelfTest
* GridCacheAffinityRoutingSelfTest
* IgniteCacheAtomicLocalExpiryPolicyTest
* GridCacheEntryMemorySizeSelfTest
* GridCacheMvccSelfTest

Looks like these tests were excluded from the run as duplicates, as they were
part of another TC suite before BinaryMarshaller became the default
marshaller.

Quick investigation shows that
1. DataStreamProcessorSelfTest is a DataStreamer test with keepBinary=false
mode and we never check this case
2. DataStreamProcessorBinarySelfTest (its binary version) checks
the keepBinary=true case within IgniteBinaryCacheTestSuite.


Should we stop ignoring the mentioned tests or remove them?
Thoughts?

-- 
Best regards,
Andrey V. Mashenkov


[GitHub] ignite pull request #5481: IGNITE-9145: Added EncodingSortingStrategy

2018-11-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5481


---


Re: Brainstorm: Make TC Run All faster

2018-11-26 Thread Petr Ivanov
I see, thanks!

> On 26 Nov 2018, at 11:46, Roman Kondakov  wrote:
> 
> Hi, Petr!
> 
> Actually these tests do not behave exactly as one test, but they behave
> mostly as one test. Often this is expressed in the following: we have a
> feature to test (e.g. transaction isolation) and different sets of parameters
> to be tested with this feature: cache mode, backups number, cluster size,
> persistence enabled/disabled, etc. And we have a list of tests with different
> combinations of these parameters:
> 
> testTxIsolation(Replicated, 0 backups, 1 servers 0 clients, persistence disabled)
> testTxIsolation(Replicated, 0 backups, 2 servers 1 clients, persistence disabled)
> testTxIsolation(Replicated, 0 backups, 4 servers 2 clients, persistence disabled)
> testTxIsolation(Replicated, 0 backups, 1 servers 0 clients, persistence enabled)
> testTxIsolation(Replicated, 0 backups, 2 servers 1 clients, persistence enabled)
> testTxIsolation(Replicated, 0 backups, 4 servers 2 clients, persistence enabled)
> testTxIsolation(Partitioned, 0 backups, 1 servers 0 clients, persistence disabled)
> testTxIsolation(Partitioned, 1 backups, 2 servers 1 clients, persistence disabled)
> testTxIsolation(Partitioned, 2 backups, 4 servers 2 clients, persistence disabled)
> testTxIsolation(Partitioned, 0 backups, 1 servers 0 clients, persistence enabled)
> testTxIsolation(Partitioned, 1 backups, 2 servers 1 clients, persistence enabled)
> testTxIsolation(Partitioned, 2 backups, 4 servers 2 clients, persistence enabled)
> 
> Each test in this list represents a special case which should be tested. If
> the key functionality of Tx isolation is broken by a developer, all tests
> will fail. But there are very rare cases when only a subset of these tests
> fails while others are green. In my opinion, for "fast" runs we should
> trigger only one or two tests from this group - it should be enough to
> detect most of the bugs. But for night runs, of course, all cases should be
> checked.
> 
> -- 
> Kind Regards
> Roman Kondakov
> 
> On 26.11.2018 11:11, Petr Ivanov wrote:
>> Hi, Roman.
>> 
>>> On 25 Nov 2018, at 21:26, Roman Kondakov  wrote:
>>> 
>>> Hi Dmitriy!
>>> 
>>> We have over 50 000 test in our tests base. And this number will be 
>>> noticeably increased soon by MVCC tests coverage activity. This means that 
>>> it is very difficult to rework and rewrite these test manually to make it 
>>> run faster. But we can choose another way. Do we have an ability to perform 
>>> a statistical analysis over a considerable number of last tests runs? If we 
>>> do, let's consider two points:
>>> 
>>> 1. After careful consideration in terms of statistics it may turnout that 
>>> significant number of these test are "evergreen" tests. It means that these 
>>> tests check cases which are very difficult to break. If so, why should we 
>>> run these tests each time? They are great candidates for night runs.
>>> 
>>> 2. After dropping "evergreen" tests there are may be a number of tests with 
>>> correlated results. There could be a lot of test groups with a some number 
>>> of tests in each group, where either all tests are red or all tests are 
>>> green. In this case in "fast" runs we can launch only one test from each 
>>> group instead of entire group. Other tests in group can be launched at 
>>> night build.
>> What’s the point of having all this tests if they behave as one and will 
>> never fail exclusively?
>> Maybe such tests require optimisation and shrinking into one?
>> 
>> 
>>> 
>>> Having a list of "good" tests (good tests = all tests - evergreen tests - 
>>> groups (except chosen represenative from each group)), we can mark these 
>>> tests with annotation @Category (or @Tag in junit 5). For fast tests runs 
>>> we can run only annotated tests, for night runs - all tests as usual.
>>> 
>>> When new test is added developers could decide to add or not to add this 
>>> annotation.
>>> 
>>> Annotated tests list should be reviewed monthly or weekly. Or, if possible, 
>>> automate this procedure.
>>> 
>>> 
>>> -- 
>>> Kind Regards
>>> Roman Kondakov
>>> 
>>> On 15.11.2018 13:34, Dmitriy Pavlov wrote:
 Hi Igniters,
 
 
 
 Some of us started to use the Bot to get an approval of PRs. It helps to
 protect master from new failures, but this requires to run RunAll tests set
 for each commit and this makes markable pressure to TC infra.
 
 
 
 I would like to ask you to share your ideas on how to make runAll faster,
 maybe you can share any of your measurements and any other info about
 (possible) bottlenecks.
 
 
 
 Sincerely,
 
 Dmitriy Pavlov
 



Re: Brainstorm: Make TC Run All faster

2018-11-26 Thread Roman Kondakov

Hi, Petr!

Actually these tests do not behave exactly as one test, but they behave 
mostly as one test. Often this is expressed in the following: we have a 
feature to test (e.g. transaction isolation) and different sets of 
parameters to be tested with this feature: cache mode, backups number, 
cluster size, persistence enabled/disabled, etc. And we have a list of 
tests with different combinations of these parameters:


testTxIsolation(Replicated, 0 backups, 1 servers 0 clients, persistence disabled)
testTxIsolation(Replicated, 0 backups, 2 servers 1 clients, persistence disabled)
testTxIsolation(Replicated, 0 backups, 4 servers 2 clients, persistence disabled)
testTxIsolation(Replicated, 0 backups, 1 servers 0 clients, persistence enabled)
testTxIsolation(Replicated, 0 backups, 2 servers 1 clients, persistence enabled)
testTxIsolation(Replicated, 0 backups, 4 servers 2 clients, persistence enabled)
testTxIsolation(Partitioned, 0 backups, 1 servers 0 clients, persistence disabled)
testTxIsolation(Partitioned, 1 backups, 2 servers 1 clients, persistence disabled)
testTxIsolation(Partitioned, 2 backups, 4 servers 2 clients, persistence disabled)
testTxIsolation(Partitioned, 0 backups, 1 servers 0 clients, persistence enabled)
testTxIsolation(Partitioned, 1 backups, 2 servers 1 clients, persistence enabled)
testTxIsolation(Partitioned, 2 backups, 4 servers 2 clients, persistence enabled)


Each test in this list represents a special case which should be tested. 
If the key functionality of Tx isolation is broken by a developer, all 
tests will fail. But there are very rare cases when only a subset of 
these tests fails while others are green. In my opinion, for "fast" runs 
we should trigger only one or two tests from this group - it should be 
enough to detect most of the bugs. But for night runs, of course, all 
cases should be checked.
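
As an illustration of the @Category-based split mentioned in the quoted message below, a minimal JUnit 4 sketch (the FastTest marker and the test names are made up):

{code:java}
// Minimal JUnit 4 sketch of the @Category-based split discussed in the thread.
// "FastTest" is a made-up marker; only tests carrying it go into the fast suite,
// everything else stays in the nightly run.
import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

public class FastRunSketch {
    /** Marker interface for the representative tests chosen for "fast" runs. */
    public interface FastTest {}

    public static class TxIsolationTest {
        @Test
        @Category(FastTest.class)
        public void replicatedNoBackups() {              // representative case
            assertEquals(2, 1 + 1);
        }

        @Test
        public void partitionedTwoBackupsPersistence() { // nightly-only case
            assertEquals(2, 1 + 1);
        }
    }

    /** "Fast" suite: runs only tests annotated with FastTest. */
    @RunWith(Categories.class)
    @Categories.IncludeCategory(FastTest.class)
    @Suite.SuiteClasses(TxIsolationTest.class)
    public static class FastSuite {}
}
{code}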


--
Kind Regards
Roman Kondakov

On 26.11.2018 11:11, Petr Ivanov wrote:

Hi, Roman.


On 25 Nov 2018, at 21:26, Roman Kondakov  wrote:

Hi Dmitriy!

We have over 50 000 tests in our test base. And this number will be noticeably 
increased soon by the MVCC test coverage activity. This means that it is very 
difficult to rework and rewrite these tests manually to make them run faster. But 
we can choose another way. Do we have an ability to perform a statistical 
analysis over a considerable number of last test runs? If we do, let's 
consider two points:

1. After careful consideration in terms of statistics it may turn out that a significant 
number of these tests are "evergreen" tests. It means that these tests check 
cases which are very difficult to break. If so, why should we run these tests each time? 
They are great candidates for night runs.

2. After dropping "evergreen" tests there may be a number of tests with correlated 
results. There could be a lot of test groups with some number of tests in each group, where 
either all tests are red or all tests are green. In this case in "fast" runs we can 
launch only one test from each group instead of the entire group. Other tests in a group can be 
launched in the night build.

What’s the point of having all these tests if they behave as one and will never 
fail exclusively?
Maybe such tests require optimisation and shrinking into one?




Having a list of "good" tests (good tests = all tests - evergreen tests - 
groups (except a chosen representative from each group)), we can mark these tests with 
the annotation @Category (or @Tag in JUnit 5). For fast test runs we can run only annotated 
tests, for night runs - all tests as usual.

When a new test is added, developers could decide to add or not to add this 
annotation.

The annotated tests list should be reviewed monthly or weekly. Or, if possible, 
automate this procedure.


--
Kind Regards
Roman Kondakov

On 15.11.2018 13:34, Dmitriy Pavlov wrote:

Hi Igniters,



Some of us started to use the Bot to get an approval of PRs. It helps to
protect master from new failures, but this requires running the RunAll tests set
for each commit, and this puts noticeable pressure on the TC infra.



I would like to ask you to share your ideas on how to make RunAll faster;
maybe you can share any of your measurements and any other info about
(possible) bottlenecks.



Sincerely,

Dmitriy Pavlov



Re: Brainstorm: Make TC Run All faster

2018-11-26 Thread Petr Ivanov
Hi, Roman.

> On 25 Nov 2018, at 21:26, Roman Kondakov  wrote:
> 
> Hi Dmitriy!
> 
> We have over 50 000 tests in our test base. And this number will be 
> noticeably increased soon by the MVCC test coverage activity. This means that it 
> is very difficult to rework and rewrite these tests manually to make them run 
> faster. But we can choose another way. Do we have an ability to perform a 
> statistical analysis over a considerable number of last test runs? If we do, 
> let's consider two points:
> 
> 1. After careful consideration in terms of statistics it may turn out that a 
> significant number of these tests are "evergreen" tests. It means that these 
> tests check cases which are very difficult to break. If so, why should we run 
> these tests each time? They are great candidates for night runs.
> 
> 2. After dropping "evergreen" tests there may be a number of tests with 
> correlated results. There could be a lot of test groups with some number of 
> tests in each group, where either all tests are red or all tests are green. 
> In this case in "fast" runs we can launch only one test from each group 
> instead of the entire group. Other tests in a group can be launched in the night build.

What’s the point of having all these tests if they behave as one and will never 
fail exclusively?
Maybe such tests require optimisation and shrinking into one?


> 
> 
> Having a list of "good" tests (good tests = all tests - evergreen tests - 
> groups (except a chosen representative from each group)), we can mark these 
> tests with the annotation @Category (or @Tag in JUnit 5). For fast test runs we 
> can run only annotated tests, for night runs - all tests as usual.
> 
> When a new test is added, developers could decide to add or not to add this 
> annotation.
> 
> The annotated tests list should be reviewed monthly or weekly. Or, if possible, 
> automate this procedure.
> 
> 
> -- 
> Kind Regards
> Roman Kondakov
> 
> On 15.11.2018 13:34, Dmitriy Pavlov wrote:
>> Hi Igniters,
>> 
>> 
>> 
>> Some of us started to use the Bot to get an approval of PRs. It helps to
>> protect master from new failures, but this requires running the RunAll tests set
>> for each commit, and this puts noticeable pressure on the TC infra.
>> 
>> 
>> 
>> I would like to ask you to share your ideas on how to make RunAll faster;
>> maybe you can share any of your measurements and any other info about
>> (possible) bottlenecks.
>> 
>> 
>> 
>> Sincerely,
>> 
>> Dmitriy Pavlov
>> 



Re: Historical rebalance

2018-11-26 Thread Павлухин Иван
Igor,

Could you please clarify some points?

> 1) preserve a list of active txs, sorted by the time of their first update 
> (using the WAL ptr of the first WAL record in the tx)

Is this list maintained per transaction or per checkpoint (or per
something else)? Why can't we track only the oldest active transaction
instead of the whole active list?

> 4) find a checkpoint where the earliest tx exists in the persisted txs and use 
> the saved WAL ptr as a start point, or apply the current approach in case the active 
> tx list (sent on the previous step) is empty

What is the base storage state on a demanding node to which we apply
WAL records? I mean the state before applying WAL records.
Do we apply all records simply one by one or filter out some of them?
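
For reference, a toy sketch of the bookkeeping described in the quoted proposal below (all names are hypothetical, only to make the questions above concrete):

{code:java}
// Toy sketch (hypothetical names) of the proposed bookkeeping: active
// transactions are remembered with the WAL pointer of their first update, so a
// checkpoint can later tell from which pointer historical iteration must start.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ActiveTxTrackerSketch {
    /** txId -> WAL pointer (offset) of the first record written by the tx. */
    private final Map<Long, Long> firstUpdatePtr = new ConcurrentHashMap<>();

    /** Called when a tx writes its first WAL record. */
    void onFirstUpdate(long txId, long walPtr) {
        firstUpdatePtr.putIfAbsent(txId, walPtr);
    }

    /** Called on tx commit/rollback. */
    void onFinish(long txId) {
        firstUpdatePtr.remove(txId);
    }

    /**
     * Persisted with every checkpoint: the earliest first-update pointer among
     * transactions still active. Historical rebalance has to start from it
     * (or from the checkpoint pointer itself if no tx was active).
     */
    long rebalanceStartPointer(long checkpointPtr) {
        return firstUpdatePtr.values().stream()
            .mapToLong(Long::longValue)
            .min()
            .orElse(checkpointPtr);
    }

    public static void main(String[] args) {
        ActiveTxTrackerSketch tracker = new ActiveTxTrackerSketch();
        tracker.onFirstUpdate(1L, 100L);
        tracker.onFirstUpdate(2L, 250L);
        tracker.onFinish(1L);

        // Tx 2 is still active and first wrote at WAL offset 250, so historical
        // iteration has to start there rather than at the checkpoint pointer.
        System.out.println(tracker.rebalanceStartPointer(400L)); // prints 250
    }
}
{code}
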
Fri, 23 Nov 2018 at 11:22, Seliverstov Igor :
>
> Hi Igniters,
>
> Currently I’m working on possible approaches to implementing historical 
> rebalance (delta rebalance using a WAL iterator) over MVCC caches.
> 
> The main difficulty is that MVCC writes changes during the tx active phase while 
> the partition update version, aka update counter, is applied on tx finish. 
> This means we cannot start iteration over WAL right from the pointer where 
> the update counter was updated, but should include the updates made by the 
> transaction that updated the counter.
> 
> These updates may be much earlier than the point where the update counter was 
> updated, so we have to be able to identify the point where the first update 
> happened.
> 
> The proposed approach includes:
> 
> 1) preserve a list of active txs, sorted by the time of their first update 
> (using the WAL ptr of the first WAL record in the tx)
> 
> 2) persist this list on each checkpoint (together with the TxLog, for example)
> 
> 3) send the whole active tx list (transactions which were in active state at the 
> time the node crashed; an empty list in case of a graceful node stop) as a 
> part of the partition demand message.
> 
> 4) find a checkpoint where the earliest tx exists in the persisted txs and use 
> the saved WAL ptr as a start point, or apply the current approach in case the active 
> tx list (sent on the previous step) is empty
> 
> 5) start iteration.
>
> Your thoughts?
>
> Regards,
> Igor



-- 
Best regards,
Ivan Pavlukhin