[GitHub] ignite pull request #4838: IGNITE-9700: Remove configurable values from meso...

2018-11-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4838


---


[jira] [Created] (IGNITE-10433) Web Console: "Import models" dialog doesn't unsubscribe from watching agent after closing

2018-11-27 Thread Alexey Kuznetsov (JIRA)
Alexey Kuznetsov created IGNITE-10433:
-

 Summary: Web Console: "Import models" dialog doesn't unsubscribe 
from watching agent after closing
 Key: IGNITE-10433
 URL: https://issues.apache.org/jira/browse/IGNITE-10433
 Project: Ignite
  Issue Type: Task
  Components: wizards
Reporter: Alexey Kuznetsov
Assignee: Alexey Kuznetsov
 Fix For: 2.8


Noticed the following behaviour:
 # Open the configuration page or the cluster edit page.
 # Click import while the agent is not running.
 # Click the *Back* button to close the *Connection to Ignite Web Agent is not 
established* dialog.
 # Start and stop the web agent.
The *Connection to Ignite Web Agent is not established* dialog is shown again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Apache Ignite 2.7. Last Mile

2018-11-27 Thread Alexey Kuznetsov
Hi,

We found a regression https://issues.apache.org/jira/browse/IGNITE-10432

Please take a look.

-- 
Alexey Kuznetsov


[jira] [Created] (IGNITE-10432) regression on sqlline

2018-11-27 Thread Pavel Konstantinov (JIRA)
Pavel Konstantinov created IGNITE-10432:
---

 Summary: regression on sqlline 
 Key: IGNITE-10432
 URL: https://issues.apache.org/jira/browse/IGNITE-10432
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.7
Reporter: Pavel Konstantinov
Assignee: Vladimir Ozerov
 Fix For: 2.7


{code}
2787/5325INSERT INTO City(ID, Name, CountryCode, District, Population) 
VALUES (2769,'Sokoto','NGA','Sokoto & Kebbi & Zam',204900);
Error: Value for a column 'DISTRICT' is too long. Maximum length: 20, actual 
length: 21 (state=23000,code=4008)
Aborting command set because "force" is false and command failed: "INSERT INTO 
City(ID, Name, CountryCode, District, Population) VALUES 
(2769,'Sokoto','NGA','Sokoto & Kebbi & Zam',204900);"
0: jdbc:ignite:thin://127.0.0.1:10800>
{code}





Re: proposed design for thin client SQL management and monitoring (view running queries and kill it)

2018-11-27 Thread Denis Magda
Vladimir,

Please see inline

On Mon, Nov 19, 2018 at 8:23 AM Vladimir Ozerov 
wrote:

> Denis,
>
> I partially agree with you. But there are several problems with the syntax
> you proposed:
> 1) It is harder to implement technically - more parsing logic to
> implement. Ok, this is our internal problem, users do not care about it
> 2) Users will have to consult the docs in any case
>

These two are not a big deal. We just need to invest more time in development
and in the design phase so that people rarely need to consult the docs.


> 3) "nodeId" is not really a node ID. For Ignite users the node ID is a UUID.
> In our case this is the node order, and we intentionally avoided any naming here.
>

Let's use a looser name such as "node".


> A query is just identified by a string, no more than that.
> 4) The proposed syntax is more verbose and opens ways for misuse. E.g. what is
> "KILL QUERY WHERE queryId=1234"?
>
> I am not 100% satisfied with either variant, but the first one looks simpler
> to me. Remember that the user will not guess the query ID. Instead, he will
> get the list of running queries with some other syntax. What we need to
> understand for now is how this syntax will look. I think that we should
> implement getting the list of running queries, and only then start working
> on cancellation.
>

That's a good point. The syntax of the running-queries and kill-query commands
should be tightly coupled. We're going to name the column holding running
query IDs somehow anyway, and that name might be reused in the WHERE clause
of KILL.

Should we discuss the syntax in a separate thread?

--
Denis
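For illustration, the ID scheme described above (node order plus a per-node
query counter, rendered as "[node_order].[query_counter]") could be composed
and parsed like this. The class and method names are assumptions made for this
sketch, not actual Ignite code:

```java
// Hypothetical helper for the "[node_order].[query_counter]" global query ID
// scheme. Names and API are assumptions for illustration only.
class QueryIdFormat {
    /** Composes a global query ID from a node order and a per-node counter. */
    static String format(int nodeOrder, long queryCounter) {
        return nodeOrder + "." + queryCounter;
    }

    /** Splits a global ID such as "25.1234" into (nodeOrder, queryCounter). */
    static long[] parse(String globalId) {
        int dot = globalId.indexOf('.');
        if (dot < 0)
            throw new IllegalArgumentException(
                "Expected '<nodeOrder>.<queryCounter>': " + globalId);
        return new long[] {
            Long.parseLong(globalId.substring(0, dot)),
            Long.parseLong(globalId.substring(dot + 1))
        };
    }
}
```

With such a scheme, KILL QUERY '25.1234' addresses query 1234 started on the
node with order 25, and no cross-node coordination is needed to generate the ID.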

>
> Vladimir.
>
>
> On Mon, Nov 19, 2018 at 7:02 PM Denis Mekhanikov 
> wrote:
>
> > Guys,
> >
> > Syntax like *KILL QUERY '25.1234'* looks a bit cryptic to me.
> > I'd have to look up in the documentation which parameter goes first in this
> > query every time I use it.
> > I like the syntax that Igor suggested more.
> > Would it be better if we made *nodeId* and *queryId* named properties?
> >
> > Something like this:
> > KILL QUERY WHERE nodeId=25 and queryId=1234
> >
> > Denis
> >
> > > Fri, 16 Nov 2018 at 14:12, Юрий :
> >
> > > I fully agree with the last sentences and can start to implement this part.
> > >
> > > Guys, thanks for your productive participation in the discussion.
> > >
> > > Fri, 16 Nov 2018 at 2:53, Denis Magda :
> > >
> > > > Vladimir,
> > > >
> > > > Thanks, makes perfect sense to me.
> > > >
> > > >
> > > > On Thu, Nov 15, 2018 at 12:18 AM Vladimir Ozerov <
> voze...@gridgain.com
> > >
> > > > wrote:
> > > >
> > > > > Denis,
> > > > >
> > > > > The idea is that QueryDetailMetrics will be exposed through
> separate
> > > > > "historical" SQL view in addition to current API. So we are on the
> > same
> > > > > page here.
> > > > >
> > > > > As far as the query ID goes, I do not see any easy way to operate on
> > > > > a single integer value (even 64-bit). This is a distributed system -
> > > > > we do not want to have coordination between nodes to get a query ID,
> > > > > and coordination is the only possible way to get a neat "long".
> > > > > Instead, I would propose to form the ID from the node order and a
> > > > > query counter within the node. This will be an (int, long) pair. For
> > > > > user convenience we may convert it to a single string, e.g.
> > > > > "[node_order].[query_counter]". Then the syntax would be:
> > > > >
> > > > > KILL QUERY '25.1234'; // Kill query 1234 on node 25
> > > > > KILL QUERY '25.*'; // Kill all queries on node 25
> > > > >
> > > > > Makes sense?
> > > > >
> > > > > Vladimir.
> > > > >
> > > > > On Wed, Nov 14, 2018 at 1:25 PM Denis Magda 
> > wrote:
> > > > >
> > > > > > Yury,
> > > > > >
> > > > > > > As I understand, you mean that the view should contain both
> > > > > > > running and finished queries. To be honest, for the view I was
> > > > > > > going to use just the queries running right now. For finished
> > > > > > > queries I thought about another view with another set of fields,
> > > > > > > which should include I/O related ones. Does that work?
> > > > > >
> > > > > >
> > > > > > Got you, so if only running queries are there then your initial
> > > > proposal
> > > > > > makes total sense. Not sure we need a view of the finished
> queries.
> > > It
> > > > > will
> > > > > > be possible to analyze them through the updated DetailedMetrics
> > > > approach,
> > > > > > won't it?
> > > > > >
> > > > > > > For "KILL QUERY node_id query_id", node_id is required as part
> > > > > > > of the unique key of a query and helps Ignite understand which
> > > > > > > node started the distributed query. Using both parameters allows
> > > > > > > cheaply generating a unique key across all nodes. The node which
> > > > > > > started a query can cancel it on all participating nodes. So, to
> > > > > > > stop any query we just need to send the cancel request to the
> > > > > > > node which started the query. This 

[GitHub] ignite pull request #5512: IGNITE-8542: [ML] Add OneVsRest Trainer to handle...

2018-11-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5512


---


[GitHub] ignite pull request #5519: IGNITE-10352 Final version

2018-11-27 Thread dgovorukhin
GitHub user dgovorukhin opened a pull request:

https://github.com/apache/ignite/pull/5519

IGNITE-10352 Final version



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10352-final

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5519.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5519


commit 6fd4f0121092eeadccb885e3f46ec872f88718ee
Author: Dmitriy Govorukhin 
Date:   2018-11-23T21:44:28Z

IGNITE-10352 add test

Signed-off-by: Dmitriy Govorukhin 

commit dceb2ad5d951e69f58f265e28f8ce5af4aed6009
Author: Dmitriy Govorukhin 
Date:   2018-11-27T21:41:01Z

IGNITE-10352 refactoring GridPartitionedGetFuture

Signed-off-by: Dmitriy Govorukhin 

commit 64eaeb955998df2aca21fab0e2c871a41e321894
Author: Dmitriy Govorukhin 
Date:   2018-11-27T22:00:52Z

IGNITE-10352 refactoring GridPartitionedGetFuture + minor changes

Signed-off-by: Dmitriy Govorukhin 

commit 3ab78b360bf58dbb11ea3ba803cdfefc2977c7ca
Author: Dmitriy Govorukhin 
Date:   2018-11-27T22:07:09Z

IGNITE-10352 refactoring GridPartitionedGetFuture + remove serUuid

Signed-off-by: Dmitriy Govorukhin 

commit fe2c0c888fe65e2eb4b0d491c4655d2fd429ad9d
Author: Dmitriy Govorukhin 
Date:   2018-11-27T22:44:24Z

IGNITE-10352 refactoring GridPartitionedSingleGetFuture

Signed-off-by: Dmitriy Govorukhin 




---


[GitHub] ignite pull request #5518: Ignite 10413 2

2018-11-27 Thread glukos
GitHub user glukos opened a pull request:

https://github.com/apache/ignite/pull/5518

Ignite 10413 2



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10413-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5518.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5518


commit c22934b36188f86cb360b8c34ba67acee42c98a3
Author: Andrey Kuznetsov 
Date:   2018-11-23T17:48:13Z

IGNITE-10079 FileWriteAheadLogManager may return invalid 
lastCompactedSegment - Fixes #5219.

Signed-off-by: Ivan Rakov 

commit 21579035eb3ec70dfc3661f2b20299340109ee0a
Author: Ivan Rakov 
Date:   2018-11-26T18:10:02Z

IGNITE-10413 Perform cache validation logic on primary node instead of near 
node

commit c563a0818142445254616c4742d09f18d416866b
Author: Ivan Rakov 
Date:   2018-11-26T18:11:10Z

IGNITE-10079 rollback

commit 8e9527f0566267ca509506462977dd933d07f106
Author: Ivan Rakov 
Date:   2018-11-27T21:50:49Z

ignite-10413 rollback

commit d3101ce5a0522329b14862cea47902d11e60096c
Author: Andrey Gura 
Date:   2018-11-12T15:34:06Z

TDR-127 Incorrect read only cluster detection fixed

(cherry picked from commit 069389d)




---


[GitHub] ignite pull request #5514: IGNITE-10429: Wrap Scanner in DirectorySerializer...

2018-11-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5514


---


[GitHub] ignite pull request #5517: IGNITE-10298 Cover possible deadlock in case of c...

2018-11-27 Thread Jokser
GitHub user Jokser opened a pull request:

https://github.com/apache/ignite/pull/5517

IGNITE-10298 Cover possible deadlock in case of caches start and frequent 
checkpoints



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10298

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5517.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5517


commit efe84b1356bbb89ca059691fbbb112716a6aa85c
Author: Pavel Kovalenko 
Date:   2018-11-20T17:26:52Z

IGNITE-10298 Cover possible deadlock in case of caches start and frequent 
checkpoints.




---


[GitHub] dspavlov commented on issue #78: IGNITE-10203 Support for alternative configurations for PR testing

2018-11-27 Thread GitBox
dspavlov commented on issue #78: IGNITE-10203 Support for alternative 
configurations for PR testing
URL: 
https://github.com/apache/ignite-teamcity-bot/pull/78#issuecomment-442177816
 
 
   I suggest to
   - move the logic from IgnitedTCImpl to a standalone class, e.g. BuildTypeSync,
in the build type package;
   - fix the case when the service responds with [] to a request for contributions;
   - fix the current page (and probably the build page) to display the correct
SuiteId, or omit it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (IGNITE-10431) Make tests independent of memory size

2018-11-27 Thread Sergi Vladykin (JIRA)
Sergi Vladykin created IGNITE-10431:
---

 Summary: Make tests independent of memory size
 Key: IGNITE-10431
 URL: https://issues.apache.org/jira/browse/IGNITE-10431
 Project: Ignite
  Issue Type: Bug
Reporter: Sergi Vladykin


The following tests were added to the Page Compression suite, and they fail 
because the page size there is increased to 8k.
{code:java}
org.apache.ignite.testsuites.IgnitePdsCompressionTestSuite2…g.apache.ignite.internal.processors.cache.persistence.db.wal
 (4)
 IgniteWALTailIsReachedDuringIterationOverArchiveTest.testStandAloneIterator    
 IgniteWalFormatFileFailoverTest.testFailureHandlerTriggered    
 IgniteWalFormatFileFailoverTest.testFailureHandlerTriggeredFsync   
 IgniteWalIteratorExceptionDuringReadTest.test  
org.apache.ignite.testsuites.IgnitePdsCompressionTestSuite2…e.ignite.internal.processors.cache.persistence.db.wal.reader
 (9)
 IgniteWalReaderTest.testCheckBoundsIterator    
 IgniteWalReaderTest.testFillWalAndReadRecords  
 IgniteWalReaderTest.testFillWalForExactSegmentsCount   
 IgniteWalReaderTest.testFillWalWithDifferentTypes  
 IgniteWalReaderTest.testPutAllTxIntoTwoNodes   
 IgniteWalReaderTest.testRemoveOperationPresentedForDataEntry   
 IgniteWalReaderTest.testRemoveOperationPresentedForDataEntryForAtomic  
 IgniteWalReaderTest.testTxFillWalAndExtractDataRecords 
 IgniteWalReaderTest.testTxRecordsReadWoBinaryMeta  
org.apache.ignite.testsuites.IgnitePdsCompressionTestSuite2…ache.ignite.internal.processors.cache.persistence.wal.reader
 (2)
 StandaloneWalRecordsIteratorTest.testCorrectClosingFileDescriptors 
 StandaloneWalRecordsIteratorTest.testStrictBounds 
{code}
 





[GitHub] ignite pull request #5516: IGNITE-9298 SSL support in control.sh

2018-11-27 Thread alamar
GitHub user alamar opened a pull request:

https://github.com/apache/ignite/pull/5516

IGNITE-9298 SSL support in control.sh

I have merged two commits to get the best of both worlds.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-9298

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5516.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5516


commit 8de5e6fbb9e75deaff80337ae5e72675186653d8
Author: Ilya Kasnacheev 
Date:   2018-11-27T17:39:19Z

IGNITE-9298 SSL support in control.sh




---


[GitHub] ignite pull request #5446: IGNITE-10193: IgniteBaselineAffinityTopologyActiv...

2018-11-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5446


---


Re: [DISCUSSION] Design document. Rebalance caches by transferring partition files

2018-11-27 Thread Eduard Shangareev
Sergey,

>If I understand correctly, when there is a continuous flow of updates to a
page already transferred to the receiver, the checkpointer will write this
page to the log file over and over again. Do you see any risk of exhausting
disk space on the sender's side?

We could track the set of pages which are in the log file to avoid this issue
(any concurrent hash set would work fine).

> What if some updates come after the checkpointer has stopped updating the
log file? How will these updates be transferred to the receiver and applied
there?

The temporary WAL file on the receiver from the original approach should cover
this case.
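The concurrent hash set mentioned above could be as simple as the following
sketch. All names here are assumptions, and integration with the actual
checkpointer is omitted:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: tracks which page IDs have already had their previous state copied
// to the log file, so each page is logged at most once per transfer.
class LoggedPageTracker {
    private final Set<Long> loggedPageIds = ConcurrentHashMap.newKeySet();

    /**
     * Returns true exactly once per page ID. The first caller should copy the
     * page's previous state to the log file; later callers skip the copy.
     */
    boolean markLogged(long pageId) {
        return loggedPageIds.add(pageId);
    }

    /** Number of distinct pages copied to the log so far. */
    int loggedCount() {
        return loggedPageIds.size();
    }
}
```

The first markLogged(pageId) call returning true tells the checkpointer to
copy the page's previous state to the log file; subsequent calls for the same
page return false, which bounds the log's growth by the partition size.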

On Tue, Nov 27, 2018 at 7:19 PM Sergey Chugunov 
wrote:

> Eduard,
>
> This algorithm looks much simpler, but could you clarify some edge cases,
> please?
>
> If I understand correctly, when there is a continuous flow of updates to a
> page already transferred to the receiver, the checkpointer will write this
> page to the log file over and over again. Do you see any risk of exhausting
> disk space on the sender's side?
>
> What if some updates come after the checkpointer has stopped updating the
> log file? How will these updates be transferred to the receiver and applied there?
>
> On Tue, Nov 27, 2018 at 7:52 PM Eduard Shangareev <
> eduard.shangar...@gmail.com> wrote:
>
> > So, after some discussion, I can describe another approach to building a
> > consistent partition on the fly.
> >
> > 1. We make a checkpoint and fix the size of the partition in OffheapManager.
> > 2. After the checkpoint finishes, we start sending the partition file
> > (without any lock) to the receiver, from 0 to the fixed size.
> > 3. If subsequent checkpoints detect that they would overwrite some pages
> > of the file being transferred, they write the previous state of each such
> > page to a dedicated file.
> > So, we get a list of pages written one by one; the page ID is stored in
> > the page itself, so we can determine the page index. Let's name this file
> > the log.
> > 4. When the transfer finishes, the checkpointer stops updating the log
> > file. Now we are ready to send it to the receiver.
> > 5. On the receiver side we merge the dirty partition file with the log
> > (updating it with pages from the log file).
> >
> > So, the advantages of this method are:
> > - the checkpoint-thread work cannot more than double;
> > - checkpoint threads shouldn't wait for anything;
> > - in the best case, we receive the partition without any extra effort.
> >
> >
> > On Mon, Nov 26, 2018 at 8:54 PM Eduard Shangareev <
> > eduard.shangar...@gmail.com> wrote:
> >
> > > Maxim,
> > >
> > > I have looked through your algorithm for reading a partition
> > > consistently, and I have some questions/comments.
> > >
> > > 1. The algorithm requires heavy synchronization between the checkpoint
> > > thread and the new-approach rebalance threads, because you need strong
> > > guarantees not to start writing or reading a chunk which was updated or
> > > is being read by the counterpart.
> > >
> > > 2. Also, once we have started transferring a chunk, it couldn't be
> > > updated in the original partition by checkpoint threads. They would
> > > have to wait for the transfer to finish.
> > >
> > > 3. If sending is slow and the partition is being updated, then in the
> > > worst case checkpoint threads would create a whole copy of the
> > > partition.
> > >
> > > So, what we have:
> > > - on every page write, the checkpoint thread must synchronize with the
> > > new-approach rebalance threads;
> > > - the checkpoint thread must do extra work; sometimes this could be as
> > > much as copying the whole partition.
> > >
> > >
> > > On Fri, Nov 23, 2018 at 2:55 PM Ilya Kasnacheev <
> > ilya.kasnach...@gmail.com>
> > > wrote:
> > >
> > >> Hello!
> > >>
> > >> This proposal will also happily break my compression-with-dictionary
> > patch
> > >> since it relies currently on only having local dictionaries.
> > >>
> > >> However, when you have compressed data, maybe speed boost is even
> > greater
> > >> with your approach.
> > >>
> > >> Regards,
> > >> --
> > >> Ilya Kasnacheev
> > >>
> > >>
> > >> Fri, 23 Nov 2018 at 13:08, Maxim Muzafarov :
> > >>
> > >> > Igniters,
> > >> >
> > >> >
> > >> > I'd like to take the next step of increasing the Apache Ignite with
> > >> > enabled persistence rebalance speed. Currently, the rebalancing
> > >> > procedure doesn't utilize the network and storage device throughout
> to
> > >> > its full extent even with enough meaningful values of
> > >> > rebalanceThreadPoolSize property. As part of the previous discussion
> > >> > `How to make rebalance faster` [1] and IEP-16 [2] Ilya proposed an
> > >> > idea [3] of transferring cache partition files over the network.
> > >> > From my point, the case to which this type of rebalancing procedure
> > >> > can bring the most benefit – is adding a completely new node or set
> of
> > >> > new nodes to the cluster. Such a scenario implies fully relocation
> of
> > >> > cache partition files to the new node. To roughly estimate the
> > >> > superiority of partition file transmitting over the network the
> 

[GitHub] ignite pull request #5344: IGNITE-10158 Some tests in IgniteCacheAbstractQue...

2018-11-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5344


---


[GitHub] ignite pull request #2882: ignite-1793

2018-11-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/2882


---


[jira] [Created] (IGNITE-10430) Ignite.NET Data Region Metrics: PagesRead, PagesWritten, PagesReplaced, OffHeapSize, OffheapUsedSize

2018-11-27 Thread Alexey Kukushkin (JIRA)
Alexey Kukushkin created IGNITE-10430:
-

 Summary: Ignite.NET Data Region Metrics: PagesRead, PagesWritten, 
PagesReplaced, OffHeapSize, OffheapUsedSize
 Key: IGNITE-10430
 URL: https://issues.apache.org/jira/browse/IGNITE-10430
 Project: Ignite
  Issue Type: Improvement
Affects Versions: 2.6
Reporter: Alexey Kukushkin
Assignee: Alexey Kukushkin


Add Ignite.NET Data Region Metrics presently existing in Java but missing in 
.NET: PagesRead, PagesWritten, PagesReplaced, OffHeapSize, OffheapUsedSize





[GitHub] ignite pull request #5252: IGNITE-10106: Cache 5 test suite optimization

2018-11-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5252


---


Re: [VOTE] Creation dedicated list for github notifiacations

2018-11-27 Thread Denis Mekhanikov
+1
I'm for making the dev list readable without filters of any kind.

On Tue, Nov 27, 2018, 15:14 Maxim Muzafarov wrote:
> +1
>
> Let's have a look at how it will be.
>
> On Tue, 27 Nov 2018 at 14:48 Seliverstov Igor 
> wrote:
>
> > +1
> >
> > Tue, 27 Nov 2018 at 14:45, Юрий :
> >
> > > +1
> > >
> > > Tue, 27 Nov 2018 at 11:22, Andrey Mashenkov <
> > andrey.mashen...@gmail.com
> > > >:
> > >
> > > > +1
> > > >
> > > > On Tue, Nov 27, 2018 at 10:12 AM Sergey Chugunov <
> > > > sergey.chugu...@gmail.com>
> > > > wrote:
> > > >
> > > > > +1
> > > > >
> > > > > Plus this dedicated list should be properly documented in wiki,
> > > > mentioning
> > > > > it in How to Contribute [1] or in Make Teamcity Green Again [2]
> would
> > > be
> > > > a
> > > > > good idea.
> > > > >
> > > > > [1]
> > > https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute
> > > > > [2]
> > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/IGNITE/Make+Teamcity+Green+Again
> > > > >
> > > > > On Tue, Nov 27, 2018 at 9:51 AM Павлухин Иван  >
> > > > wrote:
> > > > >
> > > > > > +1
> > > > > > Tue, 27 Nov 2018 at 09:22, Dmitrii Ryabov <
> > somefire...@gmail.com
> > > >:
> > > > > > >
> > > > > > > 0
> > > > > > > Tue, 27 Nov 2018 at 02:33, Alexey Kuznetsov <
> > > > akuznet...@apache.org
> > > > > >:
> > > > > > > >
> > > > > > > > +1
> > > > > > > > Do not forget notification from GitBox too!
> > > > > > > >
> > > > > > > > On Tue, Nov 27, 2018 at 2:20 AM Zhenya
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > +1, already made it with filters.
> > > > > > > > >
> > > > > > > > > > This was discussed already [1].
> > > > > > > > > >
> > > > > > > > > > So, I want to complete this discussion with moving
> outside
> > > > > dev-list
> > > > > > > > > > GitHub-notification to dedicated list.
> > > > > > > > > >
> > > > > > > > > > Please start voting.
> > > > > > > > > >
> > > > > > > > > > +1 - to accept this change.
> > > > > > > > > > 0 - you don't care.
> > > > > > > > > > -1 - to decline this change.
> > > > > > > > > >
> > > > > > > > > > This vote will go for 72 hours.
> > > > > > > > > >
> > > > > > > > > > [1]
> > > > > > > > > >
> > > > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> http://apache-ignite-developers.2346864.n4.nabble.com/Time-to-remove-automated-messages-from-the-devlist-td37484i20.html
> > > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > > Alexey Kuznetsov
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Best regards,
> > > > > > Ivan Pavlukhin
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > Best regards,
> > > > Andrey V. Mashenkov
> > > >
> > >
> > >
> > > --
> > > Live with a smile! :D
> > >
> >
> --
> --
> Maxim Muzafarov
>


[GitHub] ignite pull request #5515: IGNITE-10348 Safely recreate metastore to mitigat...

2018-11-27 Thread SpiderRus
GitHub user SpiderRus opened a pull request:

https://github.com/apache/ignite/pull/5515

IGNITE-10348 Safely recreate metastore to mitigate IGNITE-8735



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10348

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5515.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5515


commit efda38b24ddfc7854b7fc15b43a0863de3420091
Author: Alexey Stelmak 
Date:   2018-11-27T16:38:53Z

Temporary storage




---


[GitHub] ignite pull request #5486: IGNITE-10390 Fixed BPlusTree#isEmpty

2018-11-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5486


---


[GitHub] ignite pull request #5064: IGNITE-7072: IgniteCache.replace(k, v, nv) fix fo...

2018-11-27 Thread isapego
Github user isapego closed the pull request at:

https://github.com/apache/ignite/pull/5064


---


[GitHub] ignite pull request #5490: IGNITE-8718: Updated C++ doxygen comments about B...

2018-11-27 Thread isapego
Github user isapego closed the pull request at:

https://github.com/apache/ignite/pull/5490


---


[GitHub] ignite pull request #5038: IGNITE-9948

2018-11-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5038


---


[GitHub] ignite pull request #5482: IGNITE-7441 Drop IGNITE_SERVICES_COMPATIBILITY_MO...

2018-11-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5482


---


Re: [DISCUSSION] Design document. Rebalance caches by transferring partition files

2018-11-27 Thread Sergey Chugunov
Eduard,

This algorithm looks much simpler, but could you clarify some edge cases,
please?

If I understand correctly, when there is a continuous flow of updates to a
page already transferred to the receiver, the checkpointer will write this
page to the log file over and over again. Do you see any risk of exhausting
disk space on the sender's side?

What if some updates come after the checkpointer has stopped updating the log
file? How will these updates be transferred to the receiver and applied there?
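Step 5 of the algorithm quoted below, merging logged pages back into the
transferred partition copy, can be sketched in memory as follows. The page
layout (an 8-byte page ID at offset 0, used directly as the page index) and
all names are illustrative assumptions, not Ignite's actual page format:

```java
import java.nio.ByteBuffer;

// In-memory sketch of the receiver-side merge: the log is a sequence of full
// pages, each carrying its own ID; every logged page (its pre-transfer state)
// overwrites the corresponding slot in the possibly dirty partition copy.
class PartitionLogMerger {
    static void merge(byte[] partition, byte[] log, int pageSize) {
        for (int off = 0; off + pageSize <= log.length; off += pageSize) {
            // Assumed layout: the page ID is the first 8 bytes of each page,
            // and it doubles as the page index within the partition.
            long pageId = ByteBuffer.wrap(log, off, 8).getLong();
            int pageIdx = (int) pageId;
            System.arraycopy(log, off, partition, pageIdx * pageSize, pageSize);
        }
    }
}
```

A real implementation would stream pages from the log file rather than holding
both files in memory, but the merge itself is the same page-by-page overwrite.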

On Tue, Nov 27, 2018 at 7:52 PM Eduard Shangareev <
eduard.shangar...@gmail.com> wrote:

> So, after some discussion, I can describe another approach to building a
> consistent partition on the fly.
>
> 1. We make a checkpoint and fix the size of the partition in OffheapManager.
> 2. After the checkpoint finishes, we start sending the partition file
> (without any lock) to the receiver, from 0 to the fixed size.
> 3. If subsequent checkpoints detect that they would overwrite some pages of
> the file being transferred, they write the previous state of each such page
> to a dedicated file.
> So, we get a list of pages written one by one; the page ID is stored in the
> page itself, so we can determine the page index. Let's name this file the log.
> 4. When the transfer finishes, the checkpointer stops updating the log file.
> Now we are ready to send it to the receiver.
> 5. On the receiver side we merge the dirty partition file with the log
> (updating it with pages from the log file).
>
> So, the advantages of this method are:
> - the checkpoint-thread work cannot more than double;
> - checkpoint threads shouldn't wait for anything;
> - in the best case, we receive the partition without any extra effort.
>
>
> On Mon, Nov 26, 2018 at 8:54 PM Eduard Shangareev <
> eduard.shangar...@gmail.com> wrote:
>
> > Maxim,
> >
> > I have looked through your algorithm for reading a partition
> > consistently, and I have some questions/comments.
> >
> > 1. The algorithm requires heavy synchronization between the checkpoint
> > thread and the new-approach rebalance threads, because you need strong
> > guarantees not to start writing or reading a chunk which was updated or
> > is being read by the counterpart.
> >
> > 2. Also, once we have started transferring a chunk, it couldn't be
> > updated in the original partition by checkpoint threads. They would have
> > to wait for the transfer to finish.
> >
> > 3. If sending is slow and the partition is being updated, then in the
> > worst case checkpoint threads would create a whole copy of the partition.
> >
> > So, what we have:
> > - on every page write, the checkpoint thread must synchronize with the
> > new-approach rebalance threads;
> > - the checkpoint thread must do extra work; sometimes this could be as
> > much as copying the whole partition.
> >
> >
> > On Fri, Nov 23, 2018 at 2:55 PM Ilya Kasnacheev <
> ilya.kasnach...@gmail.com>
> > wrote:
> >
> >> Hello!
> >>
> >> This proposal will also happily break my compression-with-dictionary
> patch
> >> since it relies currently on only having local dictionaries.
> >>
> >> However, when you have compressed data, maybe speed boost is even
> greater
> >> with your approach.
> >>
> >> Regards,
> >> --
> >> Ilya Kasnacheev
> >>
> >>
> >> Fri, 23 Nov 2018 at 13:08, Maxim Muzafarov :
> >>
> >> > Igniters,
> >> >
> >> >
> >> > I'd like to take the next step in increasing the rebalance speed of
> >> > Apache Ignite with persistence enabled. Currently, the rebalancing
> >> > procedure doesn't utilize the network and storage device throughput to
> >> > its full extent even with enough meaningful values of
> >> > rebalanceThreadPoolSize property. As part of the previous discussion
> >> > `How to make rebalance faster` [1] and IEP-16 [2] Ilya proposed an
> >> > idea [3] of transferring cache partition files over the network.
> >> > From my point of view, the case where this type of rebalancing
> >> > procedure brings the most benefit is adding a completely new node or a
> >> > set of new nodes to the cluster. Such a scenario implies full relocation of
> >> > cache partition files to the new node. To roughly estimate the
> >> > superiority of partition file transmission over the network, the native
> >> > Linux scp/rsync commands can be used. My test environment showed the
> >> > result of the new approach as 270 MB/s vs the current 40 MB/s
> >> > single-threaded rebalance speed.
> >> >
> >> >
> >> > I've prepared the design document IEP-28 [4] and accumulated all the
> >> > process details of a new rebalance approach on that page. Below you
> >> > can find the most significant details of the new rebalance procedure
> >> > and components of the Apache Ignite which are proposed to change.
> >> >
> >> > Any feedback is very appreciated.
> >> >
> >> >
> >> > *PROCESS OVERVIEW*
> >> >
> >> > The whole process is described in terms of rebalancing single cache
> >> > group and partition files would be rebalanced one-by-one:
> >> >
> >> > 1. The demander node sends the GridDhtPartitionDemandMessage to the
> >> > supplier node;
> >> > 2. When the supplier node receives 

[GitHub] ignite pull request #5514: IGNITE-10429: Wrap Scanner in DirectorySerializer...

2018-11-27 Thread dmitrievanthony
GitHub user dmitrievanthony opened a pull request:

https://github.com/apache/ignite/pull/5514

IGNITE-10429: Wrap Scanner in DirectorySerializerTest into 
try-with-resources.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10429

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5514.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5514


commit 82e5e487744cc5b56232b304eb90f040619272ea
Author: dmitrievanthony 
Date:   2018-11-27T16:01:27Z

IGNITE-10429: Wrap Scanner in DirectorySerializerTest into
try-with-resources.




---


[jira] [Created] (IGNITE-10429) ML: TensorFlowLocalInferenceExample fails on Windows

2018-11-27 Thread Anton Dmitriev (JIRA)
Anton Dmitriev created IGNITE-10429:
---

 Summary: ML: TensorFlowLocalInferenceExample fails on Windows
 Key: IGNITE-10429
 URL: https://issues.apache.org/jira/browse/IGNITE-10429
 Project: Ignite
  Issue Type: Bug
  Components: ml
Affects Versions: 2.8
Reporter: Anton Dmitriev
Assignee: Anton Dmitriev
 Fix For: 2.8






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10428) [ML] Add example for OneVsRest trainer/model usage

2018-11-27 Thread Aleksey Zinoviev (JIRA)
Aleksey Zinoviev created IGNITE-10428:
-

 Summary: [ML] Add example for OneVsRest trainer/model usage
 Key: IGNITE-10428
 URL: https://issues.apache.org/jira/browse/IGNITE-10428
 Project: Ignite
  Issue Type: Wish
Affects Versions: 2.8
Reporter: Aleksey Zinoviev
Assignee: Aleksey Zinoviev
 Fix For: 2.8


This example should use LogReg, SVM, or DT to train a multiclass model to
distinguish classes on a prepared dataset (generated, or a well-known dataset).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSSION] Design document. Rebalance caches by transferring partition files

2018-11-27 Thread Eduard Shangareev
So, after some discussion, I can describe another approach to building a
consistent partition on the fly.

1. We make a checkpoint and fix the size of the partition in OffheapManager.
2. After the checkpoint finishes, we start sending the partition file (without
any lock) to the receiver, from offset 0 to the fixed size.
3. Subsequent checkpoints, if they detect that they would override some pages
of the file being transferred, should write the previous state of each such
page to a dedicated file.
So, we would have a list of pages written one by one; the page id is written
in the page itself, so we can determine the page index. Let's name this file
the log.
4. When the transfer is finished, the checkpointer stops updating the log
file. Now we are ready to send it to the receiver.
5. On the receiver side we merge the dirty partition file with the log
(updating it with pages from the log file).

So, the advantages of this method are:
- checkpoint-thread work can't more than double;
- checkpoint threads don't have to wait for anything;
- in the best case, we receive the partition without any extra effort.
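For illustration, step 5 (merging the dirty partition file with the log) could
be sketched as below. This is a minimal sketch, not Ignite code: it assumes a
hypothetical page layout in which the first 8 bytes of a page hold the page id
and the low 32 bits of that id give the page index within the partition file
(the real page-id encoding may differ):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import static java.nio.file.StandardOpenOption.READ;
import static java.nio.file.StandardOpenOption.WRITE;

/** Merges the checkpoint "log" of superseded pages back into a transferred partition file. */
public class PartitionLogMerge {
    /**
     * @param partition Partition file received while checkpoints kept running (the "dirty" copy).
     * @param log       Pages (their pre-checkpoint state) appended one by one by checkpoints.
     * @param pageSize  Page size in bytes.
     */
    public static void merge(Path partition, Path log, int pageSize) throws IOException {
        try (FileChannel part = FileChannel.open(partition, READ, WRITE);
             FileChannel logCh = FileChannel.open(log, READ)) {
            ByteBuffer page = ByteBuffer.allocate(pageSize);

            // Assumes each read returns a whole page (fine for a local file of page-aligned size).
            while (logCh.read(page) == pageSize) {
                page.flip();

                long pageId = page.getLong(0);        // Page id is written in the page itself.
                long pageIdx = pageId & 0xFFFFFFFFL;  // Hypothetical: low 32 bits = index in file.

                // Restore the pre-checkpoint state of the page, making the partition
                // consistent with the checkpoint fixed in step 1.
                part.write(page, pageIdx * pageSize);

                page.clear();
            }
        }
    }
}
```

The restored pages overwrite any later updates that leaked into the
transferred file, so the merged result corresponds to the checkpoint fixed in
step 1.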


On Mon, Nov 26, 2018 at 8:54 PM Eduard Shangareev <
eduard.shangar...@gmail.com> wrote:

> Maxim,
>
> I have looked through your algorithm for reading a partition consistently.
> And I have some questions/comments.
>
> 1. The algorithm requires heavy synchronization between the checkpoint thread
> and the new-approach rebalance threads,
> because you need strong guarantees not to start writing or reading a chunk
> that has already been updated, or is being read, by the counterpart.
>
> 2. Also, once we have started transferring a chunk, it can't be updated in
> the original partition by checkpoint threads. They would have to wait for
> the transfer to finish.
>
> 3. If sending is slow and the partition is being updated, then in the worst
> case checkpoint threads would create a whole copy of the partition.
>
> So, what we have:
> - on every page write, the checkpoint thread must synchronize with the
> new-approach rebalance threads;
> - the checkpoint thread must do extra work, which in some cases could be as
> large as copying the whole partition.
>
>
> On Fri, Nov 23, 2018 at 2:55 PM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> This proposal will also happily break my compression-with-dictionary patch
>> since it relies currently on only having local dictionaries.
>>
>> However, when you have compressed data, maybe speed boost is even greater
>> with your approach.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> пт, 23 нояб. 2018 г. в 13:08, Maxim Muzafarov :
>>
>> > Igniters,
>> >
>> >
>> > I'd like to take the next step of increasing the Apache Ignite with
>> > enabled persistence rebalance speed. Currently, the rebalancing
>> > procedure doesn't utilize the network and storage device throughput to
>> > its full extent, even with meaningful values of the
>> > rebalanceThreadPoolSize property. As part of the previous discussion
>> > `How to make rebalance faster` [1] and IEP-16 [2], Ilya proposed an
>> > idea [3] of transferring cache partition files over the network.
>> > From my point of view, the case to which this type of rebalancing
>> > procedure can bring the most benefit is adding a completely new node or
>> > a set of new nodes to the cluster. Such a scenario implies full relocation of
>> > cache partition files to the new node. To roughly estimate the
>> > superiority of partition file transmitting over the network the native
>> > Linux scp\rsync commands can be used. My test environment showed the
>> > result of the new approach as 270 MB/s vs the current 40 MB/s
>> > single-threaded rebalance speed.
>> >
>> >
>> > I've prepared the design document IEP-28 [4] and accumulated all the
>> > process details of a new rebalance approach on that page. Below you
>> > can find the most significant details of the new rebalance procedure
>> > and components of the Apache Ignite which are proposed to change.
>> >
>> > Any feedback is very appreciated.
>> >
>> >
>> > *PROCESS OVERVIEW*
>> >
>> > The whole process is described in terms of rebalancing single cache
>> > group and partition files would be rebalanced one-by-one:
>> >
>> > 1. The demander node sends the GridDhtPartitionDemandMessage to the
>> > supplier node;
>> > 2. When the supplier node receives GridDhtPartitionDemandMessage and
>> > starts the new checkpoint process;
>> > 3. The supplier node creates empty the temporary cache partition file
>> > with .tmp postfix in the same cache persistence directory;
>> > 4. The supplier node splits the whole cache partition file into
>> > virtual chunks of predefined size (multiply to the PageMemory size);
>> > 4.1. If the concurrent checkpoint thread determines the appropriate
>> > cache partition file chunk and tries to flush dirty page to the cache
>> > partition file
>> > 4.1.1. If rebalance chunk already transferred
>> > 4.1.1.1. Flush the dirty page to the file;
>> > 4.1.2. If rebalance chunk not transferred
>> > 4.1.2.1. Write this chunk to the temporary cache partition file;
>> > 4.1.2.2. Flush the dirty page to the file;
>> > 4.2. The node starts 
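The branch in step 4.1 boils down to consulting a per-chunk "transferred" flag
on every dirty-page flush. A minimal sketch of that bookkeeping (names and
layout are illustrative, not actual Ignite code):

```java
import java.util.BitSet;

/** Supplier-side chunk tracker: decides where a checkpoint flushes a dirty page. */
public class ChunkTracker {
    private final BitSet transferred;  // One bit per chunk: set once the chunk has been sent.
    private final int chunkSize;       // Chunk size in bytes (a multiple of the page size).

    public ChunkTracker(long partitionSize, int chunkSize) {
        this.chunkSize = chunkSize;
        this.transferred = new BitSet((int) ((partitionSize + chunkSize - 1) / chunkSize));
    }

    /** Called by the rebalance thread after a chunk has been sent over the network. */
    public synchronized void markTransferred(int chunkIdx) {
        transferred.set(chunkIdx);
    }

    /**
     * Step 4.1: a checkpoint thread asks where to flush a dirty page.
     *
     * @return {@code true} if the page may go straight to the partition file
     *         (its chunk was already transferred), {@code false} if the chunk
     *         must first be written to the temporary .tmp partition file.
     */
    public synchronized boolean canFlushDirectly(long pageOffset) {
        return transferred.get((int) (pageOffset / chunkSize));
    }
}
```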

[GitHub] ignite pull request #5513: IGNITE-10427 Wrap state change future before send...

2018-11-27 Thread antonovsergey93
GitHub user antonovsergey93 opened a pull request:

https://github.com/apache/ignite/pull/5513

IGNITE-10427 Wrap state change future before sending activation request.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10427

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5513.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5513


commit 2653488c9bc575e5bf1c6212bc796bfe8d7a8098
Author: Sergey Antonov 
Date:   2018-11-27T15:32:57Z

IGNITE-10427 Wrap state change future before sending activation request.




---


[GitHub] ignite pull request #5512: IGNITE-8542: [ML] Add OneVsRest Trainer to handle...

2018-11-27 Thread zaleslaw
GitHub user zaleslaw opened a pull request:

https://github.com/apache/ignite/pull/5512

IGNITE-8542: [ML] Add OneVsRest Trainer to handle cases with multiple class 
labels in dataset



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8542

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5512.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5512


commit 5122879095068a4ebaa0def4ffcee2ba29b545f4
Author: zaleslaw 
Date:   2018-11-22T16:59:42Z

IGNITE-8542: Added MultiClassModel

commit 171f9c310a51585f224e01dc956a663fdbc0b0b5
Author: Zinoviev Alexey 
Date:   2018-11-25T14:40:17Z

Merge branch 'master' into ignite-8542

commit 018e29e92d867a2f64f938c829ff8179a8644b4e
Author: Zinoviev Alexey 
Date:   2018-11-26T11:52:16Z

Merge branch 'master' into ignite-8542

commit cc60887e449cf0f4ecf1f11211945606eae1b39b
Author: Zinoviev Alexey 
Date:   2018-11-27T15:25:41Z

IGNITE-8542: Added OneVsRest Trainer/MultiClass model




---


[jira] [Created] (IGNITE-10427) GridClusterStateProcessor#changeGlobalState0() should wrap future before sending ChangeGlobalStateMessage

2018-11-27 Thread Sergey Antonov (JIRA)
Sergey Antonov created IGNITE-10427:
---

 Summary: GridClusterStateProcessor#changeGlobalState0() should 
wrap future before sending ChangeGlobalStateMessage
 Key: IGNITE-10427
 URL: https://issues.apache.org/jira/browse/IGNITE-10427
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.6
Reporter: Sergey Antonov
Assignee: Sergey Antonov
 Fix For: 2.8






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [ANNOUNCE] Welcome Ilya Kasnacheev as a new committer

2018-11-27 Thread Павлухин Иван
Ilya, my congratulations!
вт, 27 нояб. 2018 г. в 13:56, Dmitriy Pavlov :
>
> Dear Igniters,
>
>
>
> It is last but not least announce. The Apache Ignite Project Management
> Committee (PMC) has invited Ilya Kasnacheev to become a new committer and
> are happy to announce that he has accepted.
>
>
>
> Being a committer enables you to more easily make changes without needing
> to go through the patch submission process.
>
>
>
> Ilya has both code contributions and valuable contribution into project and
> community,  we appreciate his effort in helping users on user-list,
> proofing-of-concept for compression, contributions into stability (Lost &
> found tests).
>
>
>
> Igniters,
>
>
> Please join me in welcoming Ilya and congratulating him on his new role in
> the Apache Ignite Community.
>
>
>
> Thanks
>
> Dmitriy Pavlov



-- 
Best regards,
Ivan Pavlukhin


Re: Lightweight profiling of messages processing

2018-11-27 Thread Vladimir Ozerov
Alexey,

I would say that to implement this feature successfully, we first need to
clearly understand the specific scenarios we want to target, and only then plan
the implementation in small iterations. Please take into account the following
points:
1) Splitting by message type might be too fine-grained for the first
iteration. For example, we have N different message types processed in a
system pool. Add several messages causing long-running tasks (e.g. invoke),
and dequeue time will grow for all other message types in the same queue. I
think we can start with simpler metrics - per-pool queueing time, pool
throughput, pool task latencies. This would be enough for a wide range of
use cases. We may expand it in the future if needed.
2) Some IO statistics were already implemented as a part of IGNITE-6868
[1]. See TcpCommunicationSpiMBeanImpl. You may find it useful for your task.
3) Public API should be designed upfront. Normally it should include JMX
and system SQL views (e.g. like [2]). JMX is useful for external tools, SQL
views allow access to performance data from all platforms all at once. This
is critical.
4) It should be possible to enable/disable such metrics in runtime.
Probably infrastructure from IGNITE-369 [3] might be reused for this
purpose (see GridCacheProcessor.EnableStatisticsFuture)
5) IO access statistics are already under development [4]. First part will
be merged very soon.
6) Performance effects must be measured extremely carefully in all modes
(in-memory, persistence, background/log_only/fsync), because you are likely
to change very performance sensitive code pieces. We had a lot of
performance issues when implementing IGNITE-6868 [1].

Vladimir.

[1] https://issues.apache.org/jira/browse/IGNITE-6868
[2] https://issues.apache.org/jira/browse/IGNITE-7700
[3] https://issues.apache.org/jira/browse/IGNITE-369
[4]
https://cwiki.apache.org/confluence/display/IGNITE/IEP-27%3A+Page+IO+statistics
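For point 1, per-pool queueing time, throughput and task latencies can be
captured by wrapping the pool's submit path. A rough sketch (a hypothetical
wrapper, not an existing Ignite API):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.LongAdder;

/** Wraps an executor to record per-pool queueing time, task latency and throughput. */
public class ProfiledExecutor {
    private final ExecutorService delegate;

    final LongAdder queuedNanos = new LongAdder(); // Total time tasks waited in the queue.
    final LongAdder execNanos = new LongAdder();   // Total synchronous execution time.
    final LongAdder completed = new LongAdder();   // Pool throughput counter.

    public ProfiledExecutor(ExecutorService delegate) {
        this.delegate = delegate;
    }

    /** Submits a task, measuring the enqueue-to-start delay and the run time. */
    public Future<?> submit(Runnable task) {
        long enqueuedAt = System.nanoTime();

        return delegate.submit(() -> {
            long startedAt = System.nanoTime();
            queuedNanos.add(startedAt - enqueuedAt); // Time spent waiting in the queue.

            try {
                task.run();
            }
            finally {
                execNanos.add(System.nanoTime() - startedAt); // Synchronous execution time.
                completed.increment();
            }
        });
    }

    public void shutdown() {
        delegate.shutdown();
    }
}
```

LongAdder keeps the per-task overhead to two clock reads and a few striped
counter increments, which matters in performance-sensitive executor code.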

On Tue, Nov 27, 2018 at 5:04 PM Alexey Kukushkin 
wrote:

> Hi Alexei,
>
> Did you consider general-purpose APM
>  tools
> like free InspectIT  or commercial
> DynaTrace  or AppDynamics
> ?
>
> Java APM tools do not require writing code or even instrumenting Ignite
> binaries: you just attach them as javaagent
> 
> to
> Ignite JVM and then you can configure "sensors" to track whatever API you
> want. APM tools collect metrics from the sensors and provide sophisticated
> analysis tools.
>
> DynaTrace claims they have 2%-7% overhead (depending on application) but
> you can always detach the tool if you do not always need it in production.
>
> I did not try APM for Ignite myself but it might work well.
>
> On Tue, Nov 27, 2018 at 4:37 PM Alexei Scherbakov <
> alexey.scherbak...@gmail.com> wrote:
>
> > Igniters,
> >
> > At work I often have to solve performance issues with Ignite cluster
> > without having access to source code of running user application.
> >
> > Looks like Ignite has limited capabilities to identify bottlenecks without
> > extensive profiling on the server and client side (JFR recordings, sampling
> > profilers, regular thread dumps, etc.), which is not always possible.
> >
> > Even having profiling data is not always helpful for determining several
> > types of bottlenecks, for example, when there is contention on a single
> > key/partition.
> >
> > I propose to implement a new feature: lightweight profiling of message
> > processing.
> >
> > The feature will provide a view of message-processing statistics for
> > each node and for all grid nodes.
> >
> > In short, it's necessary to track each message execution in executors and
> > record execution statistics like synchronous execution time in executor
> > thread and waiting time in queue.
> >
> > Full description:
> >
> > 1. Implement detailed tracking of message waiting in queue and actual
> > processing by executors with splitting to several time bins. Example of
> > detailed statistics for each processed message:
> >
> >
> > Processing time(%)
> >  Message   Total Average(ms)
> >  < 1 ms   < 10 ms< 30ms< 50ms   < 100ms   < 250ms   <
> > 500ms   < 750ms  < 1000ms  > 1000ms
> >
> >
> 
> > GridNearSingleGetRequest   9043113720.023000
> >   904240521 57394  7242  3961  1932   229
> > 6124 4 4
> >GridNearSingleGetResponse   3401344160.041000
> >   340118791 11660  1167   729   901  1001
> > 158 8 1 0
> >  GridNearLockRequest
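The per-type time bins shown in the table could be accumulated with something
like the following sketch (the class, bin bounds and method names are
illustrative, not Ignite APIs):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

/** Per-message-type processing statistics split into fixed time bins. */
public class MessageProfiler {
    /** Upper bounds of the time bins, in milliseconds; the last bin catches everything else. */
    private static final long[] BOUNDS = {1, 10, 30, 50, 100, 250, 500, 750, 1000, Long.MAX_VALUE};

    private final Map<String, LongAdder[]> bins = new ConcurrentHashMap<>();
    private final Map<String, LongAdder> totalMs = new ConcurrentHashMap<>();

    /** Records one processed message: its type name and synchronous execution time. */
    public void record(String msgType, long durationMs) {
        LongAdder[] b = bins.computeIfAbsent(msgType, k -> {
            LongAdder[] arr = new LongAdder[BOUNDS.length];
            for (int i = 0; i < arr.length; i++)
                arr[i] = new LongAdder();
            return arr;
        });

        int i = 0;
        while (i < BOUNDS.length - 1 && durationMs >= BOUNDS[i]) // First bin whose bound fits.
            i++;

        b[i].increment();
        totalMs.computeIfAbsent(msgType, k -> new LongAdder()).add(durationMs);
    }

    /** @return Number of messages of the given type that fell into the given bin. */
    public long count(String msgType, int bin) {
        return bins.get(msgType)[bin].sum();
    }

    /** @return Average processing time for the given message type, in milliseconds. */
    public double avgMs(String msgType) {
        long cnt = 0;
        for (LongAdder a : bins.get(msgType))
            cnt += a.sum();
        return cnt == 0 ? 0 : (double) totalMs.get(msgType).sum() / cnt;
    }
}
```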

Re: Lightweight profiling of messages processing

2018-11-27 Thread Vladimir Ozerov
Clarification: p.5 is about buffer pool and disk access statistics, not
network.

On Tue, Nov 27, 2018 at 6:02 PM Vladimir Ozerov 
wrote:

> Alexey,
>
> I would say that to implement this feature successfully, we first need to
> clearly understand specific scenarios we want to target, and only then plan
> implementation in small iterations. Please take in count the following
> points:
> 1) Splitting by message type might be too fine-grained thing for the first
> iteration. For example, we have N different message types processed in a
> system pool. Add several messages causing long-running tasks (e.g. invoke),
> and deque time will grow for all other message types for the same queue. I
> think we can start with a simpler metrics - per-pool queueing time, pool
> throughput, pool task latencies. This would be enough for a wide number of
> use cases. We may expand it in future if needed.
> 2) Some IO statistics were already implemented as a part of IGNITE-6868
> [1]. See TcpCommunicationSpiMBeanImpl. You may find it useful for your task.
> 3) Public API should be designed upfront. Normally it should include JMX
> and system SQL views (e.g. like [2]). JMX is useful for external tools, SQL
> views allow access to performance data from all platforms all at once. This
> is critical.
> 4) It should be possible to enable/disable such metrics in runtime.
> Probably infrastructure from IGNITE-369 [3] might be reused for this
> purpose (see GridCacheProcessor.EnableStatisticsFuture)
> 5) IO access statistics are already under development [4]. First part will
> be merged very soon.
> 6) Performance effects must be measured extremely carefully in all modes
> (in-memory, persistence, background/log_only/fsync), because you are likely
> to change very performance sensitive code pieces. We had a lot of
> performance issues when implementing IGNITE-6868 [1].
>
> Vladimir.
>
> [1] https://issues.apache.org/jira/browse/IGNITE-6868
> [2] https://issues.apache.org/jira/browse/IGNITE-7700
> [3] https://issues.apache.org/jira/browse/IGNITE-369
> [4]
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-27%3A+Page+IO+statistics
>
> On Tue, Nov 27, 2018 at 5:04 PM Alexey Kukushkin <
> kukushkinale...@gmail.com> wrote:
>
>> Hi Alexei,
>>
>> Did you consider general-purpose APM
>>  tools
>> like free InspectIT  or
>> commercial
>> DynaTrace  or AppDynamics
>> ?
>>
>> Java APM tools do not require writing code or even instrumenting Ignite
>> binaries: you just attach them as javaagent
>> 
>> to
>> Ignite JVM and then you can configure "sensors" to track whatever API you
>> want. APM tools collect metrics from the sensors and provide sophisticated
>> analysis tools.
>>
>> DynaTrace claims they have 2%-7% overhead (depending on application) but
>> you can always detach the tool if you do not always need it in production.
>>
>> I did not try APM for Ignite myself but it might work well.
>>
>> On Tue, Nov 27, 2018 at 4:37 PM Alexei Scherbakov <
>> alexey.scherbak...@gmail.com> wrote:
>>
>> > Igniters,
>> >
>> > At work I often have to solve performance issues with Ignite cluster
>> > without having access to source code of running user application.
>> >
>> > Looks like Ignite have limited capabilities to identify bottlenecks
>> without
>> > extensive profiling on server and client side (JFR recording , sampling
>> > profilers, regular thread dumps,  etc), which is not always possible.
>> >
>> > Even having profiling data not always helpful for determining several
>> types
>> > of bottlenecks, on example, if where is a contention on single
>> > key/partition.
>> >
>> > I propose to implement new feature, like lightweight profiling of
>> message
>> > processing.
>> >
>> > The feature will allow to have view on message processing statistics for
>> > each node and for all grid nodes.
>> >
>> > In short, it's necessary to track each message execution in executors
>> and
>> > record execution statistics like synchronous execution time in executor
>> > thread and waiting time in queue.
>> >
>> > Full description:
>> >
>> > 1. Implement detailed tracking of message waiting in queue and actual
>> > processing by executors with splitting to several time bins. Example of
>> > detailed statistics for each processed message:
>> >
>> >
>> > Processing time(%)
>> >  Message   Total Average(ms)
>> >  < 1 ms   < 10 ms< 30ms< 50ms   < 100ms   < 250ms   <
>> > 500ms   < 750ms  < 1000ms  > 1000ms
>> >
>> >
>> 
>> > GridNearSingleGetRequest   9043113720.023000
>> >   904240521   

Re: Code inspection

2018-11-27 Thread Dmitriy Pavlov
I'm totally with you in this decision, let's move the file.

вт, 27 нояб. 2018 г. в 16:24, Maxim Muzafarov :

> Igniters,
>
> I propose to make the inspection configuration the default at the project
> level. I've created a new issue [1] for it. It can be easily done and is
> recommended by the IntelliJ documentation [2].
> Thoughts?
>
>
> Vyacheslav,
>
> Can you share an example of your warnings?
> Currently, we have different inspection configurations:
> - ignite_inspections.xml - to import inspections as default and use it
> daily.
> - ignite_inspections_teamcity.xml - config to run it on TC. Only fixed
> rules in the project code are enabled. Each of these rules is marked
> with ERROR level.
>
> [1] https://issues.apache.org/jira/browse/IGNITE-10422
> [2] https://www.jetbrains.com/help/idea/code-inspection.html
> On Tue, 20 Nov 2018 at 13:58, Nikolay Izhikov  wrote:
> >
> > Hello, Vyacheslav.
> >
> > Yes, we have.
> >
> > Maxim Muzafarov, can you fix it, please?
> >
> > вт, 20 нояб. 2018 г., 13:10 Vyacheslav Daradur daradu...@gmail.com:
> >
> > > Guys, why we have 2 different inspection files in the repo?
> > > idea\ignite_inspections.xml
> > > idea\ignite_inspections_teamcity.xml
> > >
> > > AFAIK TeamCity is able to use the same inspection file with IDE.
> > >
> > > I've imported 'idea\ignite_inspections.xml' in the IDE, but now see
> > > inspection warnings for my PR on TC because of different rules.
> > >
> > >
> > > On Sun, Nov 11, 2018 at 6:06 PM Maxim Muzafarov 
> > > wrote:
> > > >
> > > > Yakov, Dmitry,
> > > >
> > > > Which example of unsuccessful suite execution do we need?
> > > > Does the current fail [1] in the master branch enough to configure
> > > > notifications by TC.Bot?
> > > >
> > > > > Please consider adding more checks
> > > > > - line endings. I think we should only have \n
> > > > > - ensure blank line at the end of file
> > > >
> > > > It seems to me that `line endings` is easy to add, but for the `blank
> > > > line at the end` we need as special regexp. Can we focus on built-in
> > > > IntelliJ inspections at first and fix others special further?
> > > >
> > > > [1]
> > >
> https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_InspectionsCore_IgniteTests24Java8=%3Cdefault%3E=buildTypeStatusDiv
> > > > On Sun, 11 Nov 2018 at 17:55, Maxim Muzafarov 
> > > wrote:
> > > > >
> > > > > Igniters,
> > > > >
> > > > > Since the inspection rules are included in RunAll a few members of
> the
> > > > > community mentioned a wide distributed execution time on TC agents:
> > > > >  - 1h:27m:38s publicagent17_9094
> > > > >  - 38m:04s publicagent17_9094
> > > > >  - 33m:29s publicagent17_9094
> > > > >  - 17m:13s publicagent17_9094
> > > > > It seems that we should configure the resources distribution
> across TC
> > > > > containers. Can anyone take a look at it?
> > > > >
> > > > >
> > > > > I've also prepared the short list of rules to work on:
> > > > > + Inconsistent line separators (6 matches)
> > > > > + Problematic whitespace (4 matches)
> > > > > + expression.equals("literal")' rather than
> > > > > '"literal".equals(expression) (53 matches)
> > > > > + Unnecessary 'null' check before 'instanceof' expression or call
> (42
> > > matches)
> > > > > + Redundant 'if' statement (69 matches)
> > > > > + Redundant interface declaration (28 matches)
> > > > > + Double negation (0 matches)
> > > > > + Unnecessary code block (472 matches)
> > > > > + Line is longer than allowed by code style (2614 matches) (Is it
> > > > > possible to implement?)
> > > > >
> > > > > WDYT?
> > > > >
> > > > > On Fri, 26 Oct 2018 at 23:43, Dmitriy Pavlov <
> dpavlov@gmail.com>
> > > wrote:
> > > > > >
> > > > > > Hi Maxim,
> > > > > >
> > > > > >  thank you for your efforts to make this happen. Keep the pace!
> > > > > >
> > > > > > Could you please provide an example of how Inspections can fail,
> so
> > > I or
> > > > > > another contributor could implement support of these failures
> > > validation in
> > > > > > the Tc Bot.
> > > > > >
> > > > > > Sincerely,
> > > > > > Dmitriy Pavlov
> > > > > >
> > > > > > пт, 26 окт. 2018 г. в 18:27, Yakov Zhdanov  >:
> > > > > >
> > > > > > > Maxim,
> > > > > > >
> > > > > > > Thanks for response, let's do it the way you suggested.
> > > > > > >
> > > > > > > Please consider adding more checks
> > > > > > > - line endings. I think we should only have \n
> > > > > > > - ensure blank line in the end of file
> > > > > > >
> > > > > > > All these are code reviews issues I pointed out many times when
> > > reviewing
> > > > > > > conributions. It would be cool if we have TC build failing if
> > > there is any.
> > > > > > >
> > > > > > > Thanks!
> > > > > > >
> > > > > > > --Yakov
> > > > > > >
> > >
> > >
> > >
> > > --
> > > Best Regards, Vyacheslav D.
> > >
>


[jira] [Created] (IGNITE-10426) [ML] Spread parameter isKeepRawLabels across all models

2018-11-27 Thread Aleksey Zinoviev (JIRA)
Aleksey Zinoviev created IGNITE-10426:
-

 Summary: [ML] Spread parameter isKeepRawLabels across all models
 Key: IGNITE-10426
 URL: https://issues.apache.org/jira/browse/IGNITE-10426
 Project: Ignite
  Issue Type: Improvement
  Components: ml
Affects Versions: 2.8
Reporter: Aleksey Zinoviev
Assignee: Aleksey Zinoviev
 Fix For: 2.8


Currently, a few models have the parameters isKeepRawLabels and threshold, used
to change the predicted value to one of the class labels 1 or 0.

Discuss this on the dev list and think about how to solve this task to optimize
MultiClassModel.

Possible solution:
 * add these methods to common model
 * add this method to MultiClassModel and use reflection to check this 
parameter in apply method for example



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10425) Node failed to add to topology due to problem with detecting local Address

2018-11-27 Thread Max Shonichev (JIRA)
Max Shonichev created IGNITE-10425:
--

 Summary: Node failed to add to topology due to problem with 
detecting local Address
 Key: IGNITE-10425
 URL: https://issues.apache.org/jira/browse/IGNITE-10425
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.5
Reporter: Max Shonichev
 Fix For: 2.8


When localhost has a DNS name resolvable to 127.0.0.1, running two nodes with the
"localHost" property set to 127.0.0.1 might result in the following exception:
{noformat}

Caused by: class org.apache.ignite.spi.IgniteSpiException: Failed to add node 
to topology because remote node is configured to use loopback address, but 
local node is not (consider changing 'localAddress' configuration parameter) 
[locNodeAddrs=[prtagent07.gridgain.local/0:0:0:0:0:0:0:1, 
prtagent07.gridgain.local/127.0.0.1, /172.25.2.7, 
/2001:0:9d38:6abd:24c2:3fcd:53e6:fdf8], rmtNodeAddrs=[127.0.0.1], 
creatorNodeId=28c5fc84-30aa-4d24-a576-ac9a866a4a8b]
at 
org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:970)
at 
org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:377)
at 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:1955)
at 
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
... 15 more
{noformat}

This looks extremely silly: both nodes are started locally and still can't
connect to each other.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Service grid redesign

2018-11-27 Thread Vyacheslav Daradur
We had a private talk with Nikolay Izhikov, Vladimir Ozerov, Alexey
Goncharuk, Yakov Zhdanov, Denis Mekhanikov and Dmitriy Pavlov, and I would
like to share the summary with the community:

The architecture of the implemented deployment process looks good in
general, but the following points should be improved:
* The new service processor implementation should be moved to a new class;
* A new system property should be introduced to allow users to switch
to the old implementation in case of errors;
* The introduced service deployment failure policy should be removed from
the current PR and implemented as a separate task, with a detailed
discussion on the dev list to avoid unexpected behavior;
* The word "exchange" should be removed from class names to avoid
confusion with PME classes;
* Single/full messages should contain only deployment-process-related
information (instead of all services) to reduce message size.

Thanks all! I'll let you know once I fix the notes.

On Wed, Nov 21, 2018 at 4:28 PM Dmitriy Pavlov  wrote:
>
> Hi Vyacheslav, Vladimir,
>
> Could you please invite me, if you will set up a call.
>
> ср, 21 нояб. 2018 г. в 13:08, Vladimir Ozerov :
>
> > Hi Vyacheslav,
> >
> > Still not clear enough for me. I do not see a reason to send another over a
> > ring in case of successful execution. The only reason is an error on a node
> > which require correction (re-deploy to other node, full service undeploy,
> > etc).
> > I think it makes sense to organize another call to discuss current
> > architecture. Otherwise we may spend too much time on emails.
> >
> > Vladimir.
> >
> > On Wed, Nov 21, 2018 at 12:57 PM Vyacheslav Daradur 
> > wrote:
> >
> > > The full map is needed:
> > > 1) to propagate deployment results which could be different from
> > > locally calculated in case of any errors;
> > > 2) to transfer deployment errors across the cluster;
> > > 3) to undeploy excess service instances if needed;
> > > 4) to let other nodes know that the deployment process has finished; this
> > > is needed to avoid calling services which have not been deployed yet (or
> > > can't be deployed). We can't just store pending requests, because the
> > > time to deploy one service instance may be significant.
> > > On Wed, Nov 21, 2018 at 12:45 PM Vladimir Ozerov 
> > > wrote:
> > > >
> > > > Vyacheslav,
> > > >
> > > > I looked at the document and failed to find explanation why full maps
> > are
> > > > needed. Could you point me to a place where it is explained?
> > > > I ask this because my impression from last discussion was that it is
> > > never
> > > > needed. Service status change is initiated by user action, then all
> > nodes
> > > > perform respective action locally, then they reply to coordinator, then
> > > > coordinator reply to the client, no need a kind of "full" map over
> > > > discovery again. The only situation when another message over ring is
> > > > required, is when some node failed to execute local operation (for
> > > whatever
> > > > reason) and corrective action is required.
> > > >
> > > > Am I missing something?
> > > >
> > > > On Wed, Nov 21, 2018 at 11:50 AM Vyacheslav Daradur <
> > daradu...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > > > Denis, I suggested new names above in the thread.
> > > > >
> > > > > Please, look at PME document [1] is should be quiet actual to show
> > the
> > > > > same flow.
> > > > >
> > > > > [1]
> > > > >
> > >
> > https://cwiki.apache.org/confluence/display/IGNITE/%28Partition+Map%29+Exchange+-+under+the+hood
> > > > >
> > > > > On Wed, Nov 21, 2018 at 11:43 AM Denis Mekhanikov <
> > > dmekhani...@gmail.com>
> > > > > wrote:
> > > > > >
> > > > > > Vyacheslav,
> > > > > >
> > > > > > Actually, the service assignment is implemented in a way,
> > > > > > that allows every node calculate the assignment itself, so no
> > > information
> > > > > > needs to be shared.
> > > > > > The only data, that is sent between nodes is deployment results,
> > > > > > and I don't see an analogy with exchange here.
> > > > > >
> > > > > > Denis
> > > > > >
> > > > > > ср, 21 нояб. 2018 г. в 11:16, Vladimir Ozerov <
> > voze...@gridgain.com
> > > >:
> > > > > >
> > > > > > > Hi Vyacheslav,
> > > > > > >
> > > > > > > Could you please explain in what situation coordinator needs to
> > > collect
> > > > > > > service deployments info from all nodes and share it with the
> > > cluster?
> > > > > I
> > > > > > > cannot remember from our design discussion when it is needed.
> > > Global
> > > > > state
> > > > > > > normally shared through discovery and only on node join, In this
> > > case
> > > > > we
> > > > > > > use "DiscoveryDataBags", not separate messages.
> > > > > > >
> > > > > > > On Wed, Nov 21, 2018 at 11:11 AM Vyacheslav Daradur <
> > > > > daradu...@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > I think request-response is not suitable terms.
> > > > > > > >
> > > > > > > > Nodes send to coordinator maps 

Re: Lightweight profiling of messages processing

2018-11-27 Thread Alexey Kukushkin
Hi Alexei,

Did you consider general-purpose APM tools like the free InspectIT or the
commercial DynaTrace or AppDynamics?

Java APM tools do not require writing code or even instrumenting Ignite
binaries: you just attach them as a javaagent to the Ignite JVM and then you
can configure "sensors" to track whatever API you want. APM tools collect
metrics from the sensors and provide sophisticated analysis tools.

DynaTrace claims 2%-7% overhead (depending on the application), but you can
always detach the tool if you do not need it in production all the time.

I did not try APM for Ignite myself but it might work well.
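Conceptually, an APM "sensor" is just a timing wrapper around method calls. Below is a minimal sketch of that idea using a plain JDK dynamic proxy; real APM tools achieve the same effect through `java.lang.instrument` bytecode instrumentation, and the `TimingSensor`/`Greeter` names here are invented for the example, not part of any tool's API:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

/** Sample interface to instrument (stands in for whatever API a sensor tracks). */
interface Greeter {
    String greet(String name);
}

/** Records per-method call counts and total elapsed time for a wrapped target. */
public final class TimingSensor implements InvocationHandler {
    private final Object target;
    private final Map<String, LongAdder> calls = new ConcurrentHashMap<>();
    private final Map<String, LongAdder> nanos = new ConcurrentHashMap<>();

    public TimingSensor(Object target) {
        this.target = target;
    }

    /** Returns a proxy of {@code iface} whose calls are timed by this sensor. */
    @SuppressWarnings("unchecked")
    public <T> T proxy(Class<T> iface) {
        return (T)Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[] {iface}, this);
    }

    /** Delegates to the target, recording call count and elapsed nanoseconds. */
    @Override public Object invoke(Object proxy, Method mtd, Object[] args) throws Throwable {
        long start = System.nanoTime();
        try {
            return mtd.invoke(target, args);
        }
        finally {
            calls.computeIfAbsent(mtd.getName(), k -> new LongAdder()).increment();
            nanos.computeIfAbsent(mtd.getName(), k -> new LongAdder()).add(System.nanoTime() - start);
        }
    }

    /** Number of recorded calls to the given method (0 if never called). */
    public long calls(String mtd) {
        LongAdder adder = calls.get(mtd);
        return adder == null ? 0 : adder.sum();
    }
}
```

A real agent additionally aggregates such measurements across threads and exports them for analysis, but the measurement primitive is the same.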

On Tue, Nov 27, 2018 at 4:37 PM Alexei Scherbakov <
alexey.scherbak...@gmail.com> wrote:

> Igniters,
>
> At work I often have to solve performance issues with Ignite cluster
> without having access to source code of running user application.
>
> Looks like Ignite have limited capabilities to identify bottlenecks without
> extensive profiling on server and client side (JFR recording , sampling
> profilers, regular thread dumps,  etc), which is not always possible.
>
> Even having profiling data not always helpful for determining several types
> of bottlenecks, on example, if where is a contention on single
> key/partition.
>
> I propose to implement new feature, like lightweight profiling of message
> processing.
>
> The feature will allow to have view on message processing statistics for
> each node and for all grid nodes.
>
> In short, it's necessary to track each message execution in executors and
> record execution statistics like synchronous execution time in executor
> thread and waiting time in queue.
>
> Full description:
>
> 1. Implement detailed tracking of message waiting in queue and actual
> processing by executors with splitting to several time bins. Example of
> detailed statistics for each processed message:
>
>
> Processing time(%)
>  Message   Total Average(ms)
>  < 1 ms   < 10 ms< 30ms< 50ms   < 100ms   < 250ms   <
> 500ms   < 750ms  < 1000ms  > 1000ms
>
> 
> GridNearSingleGetRequest   9043113720.023000
>   904240521 57394  7242  3961  1932   229
> 6124 4 4
>GridNearSingleGetResponse   3401344160.041000
>   340118791 11660  1167   729   901  1001
> 158 8 1 0
>  GridNearLockRequest770886890.079000
>77073458 11945  2299   643   31131
> 2 0 0 0
>  GridNearAtomicSingleUpdateInvokeRequest396457520.298000
>39580914 28222  6469  4638  9870 13414
> 2087   137 1 0
> GridDhtAtomicSingleUpdateRequest376368290.277000
>37579375 23247  5915  4210  8954 12917
> 2048   162 1 0
>  GridDhtAtomicDeferredUpdateResponse335801980.002000
>33579805   33751 3 1 1
> 0 0 0 0
> GridNearTxPrepareRequest216677900.238000
>21078069580126  1622  1261  2531  3631
> 49640 014
> GridDhtTxPrepareResponse209498730.316000
>17130161   3803105  4615  3162  4489  3721
> 57734 1 8
>  GridDhtTxPrepareRequest209498380.501000
>16158732   4750217 16183  5735  8472  8994
> 1353881153
>  GridDhtTxFinishResponse138350650.007000
>13832519  2476272814 1
> 0 0 0 0
>   GridDhtTxFinishRequest138350280.547000
>12084106   1736789  8971  2340  1792   807
> 11841 460
>  GridNearTxFinishRequest137621970.725000
>11811828   1942499  4441  1400  1201   524
> 893419   162
>GridDhtAtomicNearResponse 27844220.122000
> 2783393  1022 5 2 0 0
> 0 0 0 0
>   GridNearGetRequest 23604830.484000
> 2345937 14129   244   10164 8
> 0 0 0 0
>  GridNearGetResponse 19842430.054000
> 1981905  2327 8 1 1 1
> 

[GitHub] ignite pull request #5511: Ignite 10417

2018-11-27 Thread voropava
GitHub user voropava opened a pull request:

https://github.com/apache/ignite/pull/5511

Ignite 10417



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10417

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5511.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5511


commit bcdf0c0ee8fc14c50dfbf704dc1c0638cea3eb9d
Author: Pavel Voronkin 
Date:   2018-11-09T12:00:03Z

IGNITE-10417 Fixed spiState in notifyDisoveryListener().

commit 22c1f9c3846f71993e856e30acea930f31fb5efe
Author: Pavel Voronkin 
Date:   2018-11-09T12:00:03Z

IGNITE-10417 Added assert on ring.node() existence.

commit a5502af7b2181c275bf17bf22c29a7dbaa5d4763
Author: Pavel Voronkin 
Date:   2018-11-09T12:00:03Z

IGNITE-10417 Fixed spiState in notifyDisoveryListener().

commit 326e137ee83c4355e5195e53ab1cac9052579ac0
Author: Pavel Voronkin 
Date:   2018-11-09T12:00:03Z

IGNITE-10417 Logging on custom discovery events.




---


Lightweight profiling of messages processing

2018-11-27 Thread Alexei Scherbakov
Igniters,

At work I often have to solve performance issues with an Ignite cluster
without having access to the source code of the running user application.

Ignite seems to have limited capabilities for identifying bottlenecks without
extensive profiling on the server and client side (JFR recordings, sampling
profilers, regular thread dumps, etc.), which is not always possible.

Even when profiling data is available, it is not always helpful for
determining several types of bottlenecks, for example, contention on a
single key/partition.

I propose to implement a new feature: lightweight profiling of message
processing.

The feature will provide a view of message processing statistics for each
node and for all grid nodes.

In short, it's necessary to track each message execution in the executors and
record execution statistics such as synchronous execution time in the executor
thread and waiting time in the queue.

Full description:

1. Implement detailed tracking of message waiting in the queue and of actual
processing by executors, split into several time bins. Example of the
detailed statistics for each processed message:


                                 Message        Total   Average(ms)        Processing time(%) bins:
                                                                           <1ms   <10ms   <30ms   <50ms   <100ms   <250ms   <500ms   <750ms   <1000ms   >1000ms

GridNearSingleGetRequest   9043113720.023000
  904240521 57394  7242  3961  1932   229
6124 4 4
   GridNearSingleGetResponse   3401344160.041000
  340118791 11660  1167   729   901  1001
158 8 1 0
 GridNearLockRequest770886890.079000
   77073458 11945  2299   643   31131
2 0 0 0
 GridNearAtomicSingleUpdateInvokeRequest396457520.298000
   39580914 28222  6469  4638  9870 13414
2087   137 1 0
GridDhtAtomicSingleUpdateRequest376368290.277000
   37579375 23247  5915  4210  8954 12917
2048   162 1 0
 GridDhtAtomicDeferredUpdateResponse335801980.002000
   33579805   33751 3 1 1
0 0 0 0
GridNearTxPrepareRequest216677900.238000
   21078069580126  1622  1261  2531  3631
49640 014
GridDhtTxPrepareResponse209498730.316000
   17130161   3803105  4615  3162  4489  3721
57734 1 8
 GridDhtTxPrepareRequest209498380.501000
   16158732   4750217 16183  5735  8472  8994
1353881153
 GridDhtTxFinishResponse138350650.007000
   13832519  2476272814 1
0 0 0 0
  GridDhtTxFinishRequest138350280.547000
   12084106   1736789  8971  2340  1792   807
11841 460
 GridNearTxFinishRequest137621970.725000
   11811828   1942499  4441  1400  1201   524
893419   162
   GridDhtAtomicNearResponse 27844220.122000
2783393  1022 5 2 0 0
0 0 0 0
  GridNearGetRequest 23604830.484000
2345937 14129   244   10164 8
0 0 0 0
 GridNearGetResponse 19842430.054000
1981905  2327 8 1 1 1
0 0 0 0
   GridNearTxPrepareResponse  1928560.153000
 192660   188 1 5 1 1
0 0 0 0
GridNearLockResponse  1927800.091000
 192667   107 3 0 3 0
0 0 0 0
GridNearTxFinishResponse 1770.822000
12947 1 0 0 0
0 0 0 0
   GridNearAtomicSingleUpdateRequest 1244.803000
 525319 0 0 0
0 0 0 0
GridNearAtomicUpdateResponse 1200.448000
11010 0 0 0 0
0 0 0 0

  1544912252
 1531765132  12965900  
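A rough sketch of how such per-message statistics could be collected (all names and the exact bin boundaries are illustrative assumptions, not existing Ignite APIs): one lock-free counter array per message type, plus a wrapper that measures both queue-wait and execution time:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicLongArray;

/** Per-message-type statistics: processing counts bucketed into time bins. */
public final class MsgStats {
    /** Upper bounds of the time bins, in milliseconds; the extra last bin is "> 1000ms". */
    private static final long[] BOUNDS_MS = {1, 10, 30, 50, 100, 250, 500, 750, 1000};

    private final AtomicLongArray bins = new AtomicLongArray(BOUNDS_MS.length + 1);
    private final AtomicLong totalNanos = new AtomicLong();
    private final AtomicLong cnt = new AtomicLong();

    /** Records one measured duration. */
    public void record(long durationNanos) {
        long ms = durationNanos / 1_000_000;
        int bin = 0;
        while (bin < BOUNDS_MS.length && ms >= BOUNDS_MS[bin])
            bin++;
        bins.incrementAndGet(bin);
        totalNanos.addAndGet(durationNanos);
        cnt.incrementAndGet();
    }

    /** Wraps a message handler so that queue-wait time and execution time are both recorded. */
    public Runnable wrap(Runnable handler, MsgStats queueWait) {
        long enqueued = System.nanoTime();
        return () -> {
            long started = System.nanoTime();
            queueWait.record(started - enqueued); // time spent waiting in the executor queue

            try {
                handler.run();
            }
            finally {
                record(System.nanoTime() - started); // synchronous execution time
            }
        };
    }

    public long binCount(int bin) { return bins.get(bin); }
    public long count() { return cnt.get(); }
    public double avgMs() { long c = cnt.get(); return c == 0 ? 0 : totalNanos.get() / 1e6 / c; }
}
```

Printing one such object per message class would produce rows like those in the table above.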

[GitHub] ignite pull request #5510: IGNITE-10423 print dump thread on workersRegistry

2018-11-27 Thread akalash
GitHub user akalash opened a pull request:

https://github.com/apache/ignite/pull/5510

IGNITE-10423 print dump thread on workersRegistry



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10423

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5510.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5510


commit 147d82b22dee3b79d6ecb19198c802fffaf1cf05
Author: Anton Kalashnikov 
Date:   2018-11-27T13:31:36Z

IGNITE-10423 print dump thread on workersRegistry




---


[jira] [Created] (IGNITE-10424) expected SqlException not thrown

2018-11-27 Thread Max Shonichev (JIRA)
Max Shonichev created IGNITE-10424:
--

 Summary: expected SqlException not thrown
 Key: IGNITE-10424
 URL: https://issues.apache.org/jira/browse/IGNITE-10424
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.5
Reporter: Max Shonichev
 Fix For: 2.8


When running query 
{noformat}
SELECT SUM(field2*10) FROM tmp_table_age_name_wage;
{noformat}

Apache Ignite 2.4 threw SqlException Numeric value out of range: 
"100";

Apache Ignite 2.5 does not wrap underlying exception and throws 
javax.cache.CacheException instead


{noformat}
SELECT SUM(field2*10) FROM tmp_table_age_name_wage;
Expected error:
Numeric value out of range
Actual error:
Error: javax.cache.CacheException: Failed to execute map query on remote node 
[nodeId=76cea51c-87a3-4054-b39e-1ad6d01c0df6, errMsg=Failed to execute SQL 
query. Numeric value out of range: "100"; SQL statement:
 SELECT
 SUM(__Z0.FIELD2 * 10) __C0_0
 FROM PUBLIC.TMP_TABLE_AGE_NAME_WAGE __Z0 [22003-195]] (state=5,code=0)
 java.sql.SQLException: javax.cache.CacheException: Failed to execute map query 
on remote node [nodeId=76cea51c-87a3-4054-b39e-1ad6d01c0df6, errMsg=Failed to 
execute SQL query. Numeric value out of range: "100"; SQL statement:
 SELECT
 SUM(__Z0.FIELD2 * 10) __C0_0
 FROM PUBLIC.TMP_TABLE_AGE_NAME_WAGE __Z0 [22003-195]]
at 
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:779)
at 
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:210)
at 
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:473)
at sqlline.Commands.execute(Commands.java:823)
at sqlline.Commands.sql(Commands.java:733)
at sqlline.SqlLine.dispatch(SqlLine.java:795)
at sqlline.SqlLine.runCommands(SqlLine.java:1706)
at sqlline.Commands.run(Commands.java:1317)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
at sqlline.SqlLine.dispatch(SqlLine.java:791)
at sqlline.SqlLine.initArgs(SqlLine.java:595)
at sqlline.SqlLine.begin(SqlLine.java:643)
at sqlline.SqlLine.start(SqlLine.java:373)
at sqlline.SqlLine.main(SqlLine.java:265)
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Code inspection

2018-11-27 Thread Maxim Muzafarov
Igniters,

I propose to make the inspection configuration the default at the project
level. I've created a new issue [1] for it. This can be easily done and is
recommended by the IntelliJ documentation [2].
Thoughts?


Vyacheslav,

Can you share an example of your warnings?
Currently, we have different inspection configurations:
- ignite_inspections.xml - to import inspections as the default and use daily.
- ignite_inspections_teamcity.xml - config to run on TC. Only rules already
fixed in the project code are enabled; each of these rules is marked
with the ERROR level.

[1] https://issues.apache.org/jira/browse/IGNITE-10422
[2] https://www.jetbrains.com/help/idea/code-inspection.html
On Tue, 20 Nov 2018 at 13:58, Nikolay Izhikov  wrote:
>
> Hello, Vyacheslav.
>
> Yes, we have.
>
> Maxim Muzafarov, can you fix it, please?
>
> вт, 20 нояб. 2018 г., 13:10 Vyacheslav Daradur daradu...@gmail.com:
>
> > Guys, why we have 2 different inspection files in the repo?
> > idea\ignite_inspections.xml
> > idea\ignite_inspections_teamcity.xml
> >
> > AFAIK TeamCity is able to use the same inspection file with IDE.
> >
> > I've imported 'idea\ignite_inspections.xml' in the IDE, but now see
> > inspection warnings for my PR on TC because of different rules.
> >
> >
> > On Sun, Nov 11, 2018 at 6:06 PM Maxim Muzafarov 
> > wrote:
> > >
> > > Yakov, Dmitry,
> > >
> > > Which example of unsuccessful suite execution do we need?
> > > Does the current fail [1] in the master branch enough to configure
> > > notifications by TC.Bot?
> > >
> > > > Please consider adding more checks
> > > > - line endings. I think we should only have \n
> > > > - ensure blank line at the end of file
> > >
> > > It seems to me that `line endings` is easy to add, but for the `blank
> > > line at the end` we need as special regexp. Can we focus on built-in
> > > IntelliJ inspections at first and fix others special further?
> > >
> > > [1]
> > https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_InspectionsCore_IgniteTests24Java8=%3Cdefault%3E=buildTypeStatusDiv
> > > On Sun, 11 Nov 2018 at 17:55, Maxim Muzafarov 
> > wrote:
> > > >
> > > > Igniters,
> > > >
> > > > Since the inspection rules are included in RunAll a few members of the
> > > > community mentioned a wide distributed execution time on TC agents:
> > > >  - 1h:27m:38s publicagent17_9094
> > > >  - 38m:04s publicagent17_9094
> > > >  - 33m:29s publicagent17_9094
> > > >  - 17m:13s publicagent17_9094
> > > > It seems that we should configure the resources distribution across TC
> > > > containers. Can anyone take a look at it?
> > > >
> > > >
> > > > I've also prepared the short list of rules to work on:
> > > > + Inconsistent line separators (6 matches)
> > > > + Problematic whitespace (4 matches)
> > > > + expression.equals("literal")' rather than
> > > > '"literal".equals(expression) (53 matches)
> > > > + Unnecessary 'null' check before 'instanceof' expression or call (42
> > matches)
> > > > + Redundant 'if' statement (69 matches)
> > > > + Redundant interface declaration (28 matches)
> > > > + Double negation (0 matches)
> > > > + Unnecessary code block (472 matches)
> > > > + Line is longer than allowed by code style (2614 matches) (Is it
> > > > possible to implement?)
> > > >
> > > > WDYT?
> > > >
> > > > On Fri, 26 Oct 2018 at 23:43, Dmitriy Pavlov 
> > wrote:
> > > > >
> > > > > Hi Maxim,
> > > > >
> > > > >  thank you for your efforts to make this happen. Keep the pace!
> > > > >
> > > > > Could you please provide an example of how Inspections can fail, so
> > I or
> > > > > another contributor could implement support of these failures
> > validation in
> > > > > the Tc Bot.
> > > > >
> > > > > Sincerely,
> > > > > Dmitriy Pavlov
> > > > >
> > > > > пт, 26 окт. 2018 г. в 18:27, Yakov Zhdanov :
> > > > >
> > > > > > Maxim,
> > > > > >
> > > > > > Thanks for response, let's do it the way you suggested.
> > > > > >
> > > > > > Please consider adding more checks
> > > > > > - line endings. I think we should only have \n
> > > > > > - ensure blank line in the end of file
> > > > > >
> > > > > > All these are code reviews issues I pointed out many times when
> > reviewing
> > > > > > conributions. It would be cool if we have TC build failing if
> > there is any.
> > > > > >
> > > > > > Thanks!
> > > > > >
> > > > > > --Yakov
> > > > > >
> >
> >
> >
> > --
> > Best Regards, Vyacheslav D.
> >


[jira] [Created] (IGNITE-10423) Hangs grid-nio-worker-tcp-comm

2018-11-27 Thread Anton Kalashnikov (JIRA)
Anton Kalashnikov created IGNITE-10423:
--

 Summary: Hangs grid-nio-worker-tcp-comm
 Key: IGNITE-10423
 URL: https://issues.apache.org/jira/browse/IGNITE-10423
 Project: Ignite
  Issue Type: Bug
Reporter: Anton Kalashnikov
Assignee: Anton Kalashnikov


{noformat}
[org.apache.ignite:ignite-core] [2018-11-24 
04:49:34,736][ERROR][tcp-disco-msg-worker-#89615%replicated.GridCacheReplicatedNodeRestartSelfTest2%][G]
 Blocked system-critical thread has be
en detected. This can lead to cluster-wide undefined behaviour 
[threadName=grid-nio-worker-tcp-comm-1, blockedFor=11s]
[org.apache.ignite:ignite-core] [2018-11-24 04:49:44,894][WARN 
][tcp-disco-msg-worker-#89615%replicated.GridCacheReplicatedNodeRestartSelfTest2%][G]
 Thread [name="grid-nio-worker-tcp-com
m-1-#454082%replicated.GridCacheReplicatedNodeRestartSelfTest2%", id=562184, 
state=RUNNABLE, blockCnt=1, waitCnt=0]

[org.apache.ignite:ignite-core] [2018-11-24 
04:49:44,897][ERROR][tcp-disco-msg-worker-#89615%replicated.GridCacheReplicatedNodeRestartSelfTest2%][IgniteTestResources]
 Critical system err
or detected. Will be handled accordingly to configured handler 
[hnd=NoOpFailureHandler [super=AbstractFailureHandler 
[ignoredFailureTypes=SingletonSet [SYSTEM_WORKER_BLOCKED]]], 
failureCtx=FailureContext [type=S
YSTEM_WORKER_BLOCKED, err=class o.a.i.IgniteException: GridWorker 
[name=grid-nio-worker-tcp-comm-1, 
igniteInstanceName=replicated.GridCacheReplicatedNodeRestartSelfTest2, 
finished=false, heartbeatTs=154303498488
9]]]
{noformat}





Re: Brainstorm: Make TC Run All faster

2018-11-27 Thread Павлухин Иван
Roman,

Do you have an estimate of how much faster eliminating "correlated" tests
will make Run All? Also, do you have a vision of how we can determine such
"correlated" tests, and can we do it relatively fast?

But all in all, I am not sure that reducing a group of correlated
tests to only one test can show good stability.
Mon, 26 Nov 2018 at 17:48, aplatonov :
>
> It should be noticed that additional parameter TEST_SCALE_FACTOR was added.
> This parameter with ScaleFactorUtil methods can be used for test size
> scaling for different runs (like ordinary and nightly RunALLs). If someone
> want to distinguish these builds he/she can apply scaling methods from
> ScaleFactorUtil in own tests. For nightly test TEST_SCALE_FACTOR=1.0, for
> non-nightly builds TEST_SCALE_FACTOR<1.0. For example in
> GridAbstractCacheInterceptorRebalanceTest test ScaleFactorUtil was used for
> scaling count of iterations. I guess that TEST_SCALE_FACTOR support will be
> added to runs at the same time with RunALL (nightly) runs.
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


[jira] [Created] (IGNITE-10422) Make {{ignite_inspection.xml}} configuration default on the project level

2018-11-27 Thread Maxim Muzafarov (JIRA)
Maxim Muzafarov created IGNITE-10422:


 Summary: Make {{ignite_inspection.xml}} configuration default on 
the project level
 Key: IGNITE-10422
 URL: https://issues.apache.org/jira/browse/IGNITE-10422
 Project: Ignite
  Issue Type: Task
Reporter: Maxim Muzafarov


IntelliJ IDEA can perform static code analysis by applying _inspections_ to the 
project code. The inspection analysis process can be easily run from both the 
IDE and the command line. The command line usage of IntelliJ IDEA inspections 
already configured as daily [Inspections(Core) 
TeamCity|https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_InspectionsCore_IgniteTests24Java8=%3Cdefault%3E=buildTypeStatusDiv]
 suite. This approach has proven its convenience and efficiency as part of the 
_TC.Bot_.

As for the next step, I propose to improve personal productivity of writing 
code by making *{{ignite_inspection.xml}}* configuration default on the project 
level.

According to [IntelliJ IDEA 
documentation|https://www.jetbrains.com/help/idea/code-inspection.html] the 
inspection profile should be placed to *{{/.idea/inspectionProfiles}}* 
with name *{{Project_Default.xml}}*. This project profile will be shared and 
accessible for the team members via VCS by default.

Note.
The build and test procedure of Apache Ignite project will remain IDE 
independent.





[jira] [Created] (IGNITE-10421) MVCC: Assertion in checkpointing tread after disabling WAL.

2018-11-27 Thread Roman Kondakov (JIRA)
Roman Kondakov created IGNITE-10421:
---

 Summary: MVCC: Assertion in checkpointing tread after disabling 
WAL.
 Key: IGNITE-10421
 URL: https://issues.apache.org/jira/browse/IGNITE-10421
 Project: Ignite
  Issue Type: Bug
  Components: mvcc, persistence
Reporter: Roman Kondakov


Reproducer: {{WalModeChangeAdvancedSelfTest#testJoin}} with enabled MVCC.


{noformat}
[2018-11-27 
14:56:47,548][ERROR][db-checkpoint-thread-#358%srv_3%][IgniteTestResources] 
Critical system error detected. Will be handled accordingly to configured 
handler [hnd=NoOpFailureHandler [super=AbstractFailureHandler 
[ignoredFailureTypes=SingletonSet [SYSTEM_WORKER_BLOCKED]]], 
failureCtx=FailureContext [type=CRITICAL_ERROR, err=class 
o.a.i.IgniteCheckedException: Compound exception for CountDownFuture.]]
class org.apache.ignite.IgniteCheckedException: Compound exception for 
CountDownFuture.
at 
org.apache.ignite.internal.util.future.CountDownFuture.addError(CountDownFuture.java:72)
at 
org.apache.ignite.internal.util.future.CountDownFuture.onDone(CountDownFuture.java:46)
at 
org.apache.ignite.internal.util.future.CountDownFuture.onDone(CountDownFuture.java:28)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:474)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$WriteCheckpointPages.run(GridCacheDatabaseSharedManager.java:3957)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.lang.AssertionError: off=3000, 
allocated=1000, pageId=00020002, 
file=/home/gridgain/Documents/work/incubator-ignite/work/db/node02-20092321-f30d-498f-8609-21ff87e4d104/TxLog/index.bin
at 
org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.write(FilePageStore.java:550)
at 
org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.writeInternal(FilePageStoreManager.java:520)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$WriteCheckpointPages.writePages(GridCacheDatabaseSharedManager.java:4022)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$WriteCheckpointPages.run(GridCacheDatabaseSharedManager.java:3930)
... 3 more
Suppressed: java.lang.AssertionError: off=4000, 
allocated=1000, pageId=00020003, 
file=/home/gridgain/Documents/work/incubator-ignite/work/db/node02-20092321-f30d-498f-8609-21ff87e4d104/TxLog/index.bin
at 
org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.write(FilePageStore.java:550)
at 
org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.writeInternal(FilePageStoreManager.java:520)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$WriteCheckpointPages.writePages(GridCacheDatabaseSharedManager.java:4022)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$WriteCheckpointPages.run(GridCacheDatabaseSharedManager.java:3930)
... 3 more
Suppressed: java.lang.AssertionError: off=2000, 
allocated=1000, pageId=00020001, 
file=/home/gridgain/Documents/work/incubator-ignite/work/db/node02-20092321-f30d-498f-8609-21ff87e4d104/TxLog/index.bin
at 
org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.write(FilePageStore.java:550)
at 
org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.writeInternal(FilePageStoreManager.java:520)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$WriteCheckpointPages.writePages(GridCacheDatabaseSharedManager.java:4022)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$WriteCheckpointPages.run(GridCacheDatabaseSharedManager.java:3930)
... 3 more

{noformat}






Re: [VOTE] Creation dedicated list for github notifiacations

2018-11-27 Thread Maxim Muzafarov
+1

Let's have a look at how it will be.

On Tue, 27 Nov 2018 at 14:48 Seliverstov Igor  wrote:

> +1
>
> вт, 27 нояб. 2018 г. в 14:45, Юрий :
>
> > +1
> >
> > вт, 27 нояб. 2018 г. в 11:22, Andrey Mashenkov <
> andrey.mashen...@gmail.com
> > >:
> >
> > > +1
> > >
> > > On Tue, Nov 27, 2018 at 10:12 AM Sergey Chugunov <
> > > sergey.chugu...@gmail.com>
> > > wrote:
> > >
> > > > +1
> > > >
> > > > Plus this dedicated list should be properly documented in wiki,
> > > mentioning
> > > > it in How to Contribute [1] or in Make Teamcity Green Again [2] would
> > be
> > > a
> > > > good idea.
> > > >
> > > > [1]
> > https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute
> > > > [2]
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/IGNITE/Make+Teamcity+Green+Again
> > > >
> > > > On Tue, Nov 27, 2018 at 9:51 AM Павлухин Иван 
> > > wrote:
> > > >
> > > > > +1
> > > > > вт, 27 нояб. 2018 г. в 09:22, Dmitrii Ryabov <
> somefire...@gmail.com
> > >:
> > > > > >
> > > > > > 0
> > > > > > вт, 27 нояб. 2018 г. в 02:33, Alexey Kuznetsov <
> > > akuznet...@apache.org
> > > > >:
> > > > > > >
> > > > > > > +1
> > > > > > > Do not forget notification from GitBox too!
> > > > > > >
> > > > > > > On Tue, Nov 27, 2018 at 2:20 AM Zhenya
> >  > > >
> > > > > wrote:
> > > > > > >
> > > > > > > > +1, already make it by filers.
> > > > > > > >
> > > > > > > > > This was discussed already [1].
> > > > > > > > >
> > > > > > > > > So, I want to complete this discussion with moving outside
> > > > dev-list
> > > > > > > > > GitHub-notification to dedicated list.
> > > > > > > > >
> > > > > > > > > Please start voting.
> > > > > > > > >
> > > > > > > > > +1 - to accept this change.
> > > > > > > > > 0 - you don't care.
> > > > > > > > > -1 - to decline this change.
> > > > > > > > >
> > > > > > > > > This vote will go for 72 hours.
> > > > > > > > >
> > > > > > > > > [1]
> > > > > > > > >
> > > > > > > >
> > > > >
> > > >
> > >
> >
> http://apache-ignite-developers.2346864.n4.nabble.com/Time-to-remove-automated-messages-from-the-devlist-td37484i20.html
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > Alexey Kuznetsov
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Best regards,
> > > > > Ivan Pavlukhin
> > > > >
> > > >
> > >
> > >
> > > --
> > > Best regards,
> > > Andrey V. Mashenkov
> > >
> >
> >
> > --
> > Живи с улыбкой! :D
> >
>
-- 
--
Maxim Muzafarov


Re: [MTCGA] Disabled tests.

2018-11-27 Thread Andrey Mashenkov
Thanks Ilya.

I've created a sub-ticket for IGNITE-9210:
https://issues.apache.org/jira/browse/IGNITE-10420

On Mon, Nov 26, 2018 at 2:22 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> I think we should un-ignore these tests. You can even create a sub-task
> under https://issues.apache.org/jira/browse/IGNITE-9210
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> пн, 26 нояб. 2018 г. в 14:20, Andrey Mashenkov  >:
>
> > Hi Igniters,
> >
> >
> > I've found  "Cache 1" TC suite actually
> > starts IgniteBinaryCacheTestSuite.class suite.
> > This suite ignores several tests that has copies to be run with binary
> > marshaller:
> > * DataStreamProcessorSelfTest
> > * GridCacheAffinityRoutingSelfTest
> > * IgniteCacheAtomicLocalExpiryPolicyTest
> > * GridCacheEntryMemorySizeSelfTest
> > * GridCacheMvccSelfTest
> >
> > Looks like these test were excluded from run as duplicates as they were a
> > part of another TC suite before BinaryMarshaller becomes a default
> > marshaller.
> >
> > Quick investigation shows that
> > 1. DataStreamProcessorSelfTest is DataStreamer test with keepBinary=false
> > mode and we never check this case
> > 2. DataStreamProcessorBinarySelfTest (it's binary version) checks
> > keepBinary=true case within IgniteBinaryCacheTestSuite.
> >
> >
> > Should we stop ignoring mentioned tests or remove ones?
> > Thoughts?
> >
> > --
> > Best regards,
> > Andrey V. Mashenkov
> >
>


-- 
Best regards,
Andrey V. Mashenkov


[jira] [Created] (IGNITE-10420) Enable ignored test in "Cache 1" test suite.

2018-11-27 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10420:
-

 Summary: Enable ignored test in "Cache 1" test suite.
 Key: IGNITE-10420
 URL: https://issues.apache.org/jira/browse/IGNITE-10420
 Project: Ignite
  Issue Type: Sub-task
  Components: general
Reporter: Andrew Mashenkov


"Cache 1" TC suite actually runs IgniteBinaryCacheTestSuite.

It looks like we can merge IgniteBinaryCacheTestSuite and IgniteCacheTestSuite,
as the latter is never run separately.

IgniteBinaryCacheTestSuite has ignored tests that are never run.
Let's enable these tests and, if there is any issue, mute them with a
corresponding JIRA ticket.





Re: [VOTE] Creation dedicated list for github notifiacations

2018-11-27 Thread Seliverstov Igor
+1

Tue, 27 Nov 2018 at 14:45, Юрий :

> +1
>
> вт, 27 нояб. 2018 г. в 11:22, Andrey Mashenkov  >:
>
> > +1
> >
> > On Tue, Nov 27, 2018 at 10:12 AM Sergey Chugunov <
> > sergey.chugu...@gmail.com>
> > wrote:
> >
> > > +1
> > >
> > > Plus this dedicated list should be properly documented in wiki,
> > mentioning
> > > it in How to Contribute [1] or in Make Teamcity Green Again [2] would
> be
> > a
> > > good idea.
> > >
> > > [1]
> https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute
> > > [2]
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/IGNITE/Make+Teamcity+Green+Again
> > >
> > > On Tue, Nov 27, 2018 at 9:51 AM Павлухин Иван 
> > wrote:
> > >
> > > > +1
> > > > вт, 27 нояб. 2018 г. в 09:22, Dmitrii Ryabov  >:
> > > > >
> > > > > 0
> > > > > вт, 27 нояб. 2018 г. в 02:33, Alexey Kuznetsov <
> > akuznet...@apache.org
> > > >:
> > > > > >
> > > > > > +1
> > > > > > Do not forget notification from GitBox too!
> > > > > >
> > > > > > On Tue, Nov 27, 2018 at 2:20 AM Zhenya
>  > >
> > > > wrote:
> > > > > >
> > > > > > > +1, already make it by filers.
> > > > > > >
> > > > > > > > This was discussed already [1].
> > > > > > > >
> > > > > > > > So, I want to complete this discussion with moving outside
> > > dev-list
> > > > > > > > GitHub-notification to dedicated list.
> > > > > > > >
> > > > > > > > Please start voting.
> > > > > > > >
> > > > > > > > +1 - to accept this change.
> > > > > > > > 0 - you don't care.
> > > > > > > > -1 - to decline this change.
> > > > > > > >
> > > > > > > > This vote will go for 72 hours.
> > > > > > > >
> > > > > > > > [1]
> > > > > > > >
> > > > > > >
> > > >
> > >
> >
> http://apache-ignite-developers.2346864.n4.nabble.com/Time-to-remove-automated-messages-from-the-devlist-td37484i20.html
> > > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Alexey Kuznetsov
> > > >
> > > >
> > > >
> > > > --
> > > > Best regards,
> > > > Ivan Pavlukhin
> > > >
> > >
> >
> >
> > --
> > Best regards,
> > Andrey V. Mashenkov
> >
>
>
> --
> Живи с улыбкой! :D
>


Re: [VOTE] Creation dedicated list for github notifiacations

2018-11-27 Thread Юрий
+1

Tue, 27 Nov 2018 at 11:22, Andrey Mashenkov :

> +1
>
> On Tue, Nov 27, 2018 at 10:12 AM Sergey Chugunov <
> sergey.chugu...@gmail.com>
> wrote:
>
> > +1
> >
> > Plus this dedicated list should be properly documented in wiki,
> mentioning
> > it in How to Contribute [1] or in Make Teamcity Green Again [2] would be
> a
> > good idea.
> >
> > [1] https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute
> > [2]
> >
> >
> https://cwiki.apache.org/confluence/display/IGNITE/Make+Teamcity+Green+Again
> >
> > On Tue, Nov 27, 2018 at 9:51 AM Павлухин Иван 
> wrote:
> >
> > > +1
> > > вт, 27 нояб. 2018 г. в 09:22, Dmitrii Ryabov :
> > > >
> > > > 0
> > > > вт, 27 нояб. 2018 г. в 02:33, Alexey Kuznetsov <
> akuznet...@apache.org
> > >:
> > > > >
> > > > > +1
> > > > > Do not forget notification from GitBox too!
> > > > >
> > > > > On Tue, Nov 27, 2018 at 2:20 AM Zhenya  >
> > > wrote:
> > > > >
> > > > > > +1, already make it by filers.
> > > > > >
> > > > > > > This was discussed already [1].
> > > > > > >
> > > > > > > So, I want to complete this discussion with moving outside
> > dev-list
> > > > > > > GitHub-notification to dedicated list.
> > > > > > >
> > > > > > > Please start voting.
> > > > > > >
> > > > > > > +1 - to accept this change.
> > > > > > > 0 - you don't care.
> > > > > > > -1 - to decline this change.
> > > > > > >
> > > > > > > This vote will go for 72 hours.
> > > > > > >
> > > > > > > [1]
> > > > > > >
> > > > > >
> > >
> >
> http://apache-ignite-developers.2346864.n4.nabble.com/Time-to-remove-automated-messages-from-the-devlist-td37484i20.html
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Alexey Kuznetsov
> > >
> > >
> > >
> > > --
> > > Best regards,
> > > Ivan Pavlukhin
> > >
> >
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>


-- 
Живи с улыбкой! :D


Re: Historical rebalance

2018-11-27 Thread Seliverstov Igor
Vladimir,

I think I got your point,

It should work if we do the following:
introduce two structures: an active list (txs) and a candidate list (updCntr ->
txn pairs).

Track active txs, mapping them to the actual update counter at update time.
On each next update, put the update counter associated with the previous update
into the candidate list, possibly overwriting an existing value (checking the txn).
On tx finish, remove the tx from the active list only if the appropriate update
counter (associated with the finished tx) is applied.
On each update counter advance, set the minimal update counter from the candidate
list as a back-counter, clear the candidate list and remove the associated
tx from the active list if present.
Use the back-counter instead of the actual update counter in the demand message.
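The two proposed structures can be sketched as follows. This is only an illustration of the described bookkeeping, assuming hypothetical names (BackCounterTracker, onTxUpdate, onCntrUpdate) that are not Ignite internals:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Sketch of the proposed tracking; class and method names are hypothetical.
class BackCounterTracker {
    private final Map<String, Long> activeTxs = new HashMap<>();      // txId -> counter of latest update
    private final TreeMap<Long, String> candidates = new TreeMap<>(); // updCntr -> txId
    private long backCntr = -1;

    /** Track an update: the counter of the tx's previous update becomes a candidate. */
    void onTxUpdate(String txId, long updCntr) {
        Long prev = activeTxs.put(txId, updCntr);
        if (prev != null)
            candidates.put(prev, txId); // possibly overwrites an older candidate for this counter
    }

    /** On update counter advance: the minimal candidate becomes the back-counter. */
    void onCntrUpdate() {
        if (!candidates.isEmpty()) {
            Map.Entry<Long, String> min = candidates.firstEntry();
            backCntr = min.getKey();
            activeTxs.remove(min.getValue()); // associated tx, removed if present
            candidates.clear();
        }
    }

    /** Counter to put into the demand message instead of the actual one. */
    long backCounter() { return backCntr; }
}
```

The point of the sketch is that the demand message always starts from a counter no later than the first unapplied update of any active transaction.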

Tue, Nov 27, 2018, 12:56, Seliverstov Igor :

> Ivan,
>
> 1) The list is saved on each checkpoint, wholly (all transactions in
> active state at checkpoint begin).
> We need whole the list to get oldest transaction because after
> the previous oldest tx finishes, we need to get the following one.
>
> 2) I guess there is a description of how persistent storage works and how
> it restores [1]
>
> Vladimir,
>
> the whole list of what we going to store on checkpoint (updated):
> 1) Partition counter low watermark (LWM)
> 2) WAL pointer of earliest active transaction write to partition at the
> time the checkpoint have started
> 3) List of prepared txs with acquired partition counters (which were
> acquired but not applied yet)
>
> This way we don't need any additional info in demand message. Start point
> can be easily determined using stored WAL "back-pointer".
>
> [1]
> https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-LocalRecoveryProcess
>
>
> вт, 27 нояб. 2018 г. в 11:19, Vladimir Ozerov :
>
>> Igor,
>>
>> Could you please elaborate - what is the whole set of information we are
>> going to save at checkpoint time? From what I understand this should be:
>> 1) List of active transactions with WAL pointers of their first writes
>> 2) List of prepared transactions with their update counters
>> 3) Partition counter low watermark (LWM) - the smallest partition counter
>> before which there are no prepared transactions.
>>
>> And the we send to supplier node a message: "Give me all updates starting
>> from that LWM plus data for that transactions which were active when I
>> failed".
>>
>> Am I right?
>>
>> On Fri, Nov 23, 2018 at 11:22 AM Seliverstov Igor 
>> wrote:
>>
>> > Hi Igniters,
>> >
>> > Currently I’m working on possible approaches how to implement historical
>> > rebalance (delta rebalance using WAL iterator) over MVCC caches.
>> >
>> > The main difficulty is that MVCC writes changes on tx active phase while
>> > partition update version, aka update counter, is being applied on tx
>> > finish. This means we cannot start iteration over WAL right from the
>> > pointer where the update counter updated, but should include updates,
>> which
>> > the transaction that updated the counter did.
>> >
>> > These updates may be much earlier than the point where the update
>> counter
>> > was updated, so we have to be able to identify the point where the first
>> > update happened.
>> >
>> > The proposed approach includes:
>> >
>> > 1) preserve list of active txs, sorted by the time of their first update
>> > (using WAL ptr of first WAL record in tx)
>> >
>> > 2) persist this list on each checkpoint (together with TxLog for
>> example)
>> >
>> > 4) send whole active tx list (transactions which were in active state at
>> > the time the node was crushed, empty list in case of graceful node
>> stop) as
>> > a part of partition demand message.
>> >
>> > 4) find a checkpoint where the earliest tx exists in persisted txs and
>> use
>> > saved WAL ptr as a start point or apply current approach in case the
>> active
>> > tx list (sent on previous step) is empty
>> >
>> > 5) start iteration.
>> >
>> > Your thoughts?
>> >
>> > Regards,
>> > Igor
>>
>


[GitHub] ignite pull request #5508: 10183

2018-11-27 Thread SomeFire
GitHub user SomeFire opened a pull request:

https://github.com/apache/ignite/pull/5508

10183



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SomeFire/ignite IGNITE-10183

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5508.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5508


commit decc8c5ea172ebe47524107a726151d9e44be930
Author: Dmitrii Ryabov 
Date:   2018-11-27T11:36:49Z

IGNITE-10183 
IgniteClientReconnectCacheTest.testReconnectOperationInProgress flaky fails on 
TC.




---


[GitHub] ignite pull request #4964: IGNITE-9284 Add standard scaler

2018-11-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4964


---


Re: MVCC test coverage.

2018-11-27 Thread Roman Kondakov

Vladimir, Andrey,

as you mentioned this approach has several disadvantages. I can name a 
few of them:


1. These new MVCC suites will be triggered in "long" runs at night - this 
means developers will not receive feedback about MVCC problems 
immediately - they will have to wait until their commit is merged 
to master and then triggered by the nightly build. This may lead to permanent 
problems in the master branch.


2. Developers should always keep in mind that all tests they add will be 
run twice: in MVCC mode and in non-MVCC mode. And if they don't want their 
tests run twice, such tests should be added to the exclude map in the 
corresponding MVCC suite. Since this is not an obvious rule and 
we have no way to control this process, I expect this 
rule will be violated very often. This leads us to double runs of 
non-MVCC-relevant tests.


3. MVCC has become a full-fledged feature of Apache Ignite. Each 
developer should take it into account when contributing to the project. The MVCC 
case should be considered in each feature, as well as the other atomicity 
modes: transactional and atomic. The proposed approach removes the need for 
the developer to think of MVCC at all. Everybody will assume that if 
they've added atomic and transactional tests, their job is done, because 
the MVCC tests should run automatically. IMO this is not good.



Of course, the proposed approach has an obvious advantage: it is very fast. 
We can adapt the old tests to the MVCC case in a couple of weeks. So, it is a good 
temporary solution.


As a possible long-term solution I would propose the following:

1. Do not inherit MVCC suites from non-MVCC suites; instead refactor 
them - i.e. extract the common logic to a basic abstract class and run these 
tests with different atomicity modes - MVCC and non-MVCC.


2. Notify developers that the TRANSACTIONAL_SNAPSHOT atomicity mode has the 
same importance as the other modes and should be considered in the same 
way.


3. To deal with the dramatically increased number of tests, the RunAll suite 
could be split into two variants: RunAll (full) and RunAll (fast), as 
discussed on the dev list several times. The full suite runs all tests; the fast 
suite runs only a subset of tests (or all tests but with smaller 
timeouts - it is under discussion). One of the proposed ways [1] is to 
extract only significant, representative tests from the entire suite 
and run this small subset in the "fast" RunAlls. In this case, if we have a 
significant test in the MVCC suite, we do not have to wait for the nightly build 
until this test is checked - because if the test is significant, it is in 
the "fast" run by default.



[1] 
http://apache-ignite-developers.2346864.n4.nabble.com/Brainstorm-Make-TC-Run-All-faster-tt37845.html#a38445


--

Kind Regards
Roman Kondakov

On 21.11.2018 22:37, Vladimir Ozerov wrote:

Hi Andrey,

Thank you for bringing this question to the list. I already reviewed this
PR and it looks good to me. But I would like to hear more opinions from
other community members regarding the whole approach.

One important detail - we are going to create the new suites as child classes
of existing suites with irrelevant tests excluded manually. This way, if a
new test is added to an existing cache suite, it will be automatically added
to the TC suite as well, and we will see potential MVCC issues in the nightly
build. This is critical to keep MVCC mode on par with “classical”
transactions.

I am not 100% happy with the fact that we will know about new failures only
after a problematic commit is pushed. But I do not see how to improve this
without extending the Run All time by another 30 hours, which would do more harm
than good. So the proposed solution looks like a good pros/cons balance at the
moment.

Vladimir.



Wed, Nov 21, 2018, 17:59, Andrey Mashenkov :


Hi Igniters,

As you may already know, the MVCC transaction feature will be available in
the upcoming Ignite 2.7.
However, the MVCC Tx feature is released for beta testing and has many
limitations, and we are going
to improve stability and integration with other features in future
releases.

We can reuse the existing transactional cache tests and run them in MVCC mode to
get much better test coverage with small losses.
Here is a ticket for this: IGNITE-10001 [1].

This means we will have twice as many "Cache Tests" and somewhat longer TC
runs.
To reduce the impact of the new MVCC cache suites and save TC time we are going to:
1. Include the new tests in the nightly suite only; this will allow us to keep our
ears to the ground.
2. Exclude non-tx tests and non-relevant tx cases and aggressively mute
tests for unsupported feature integrations.

I've implemented a PR for one of the tasks [2] as an example of how it can be done.

Technical details:
1. Introduced a new FORCE_MVCC flag and created a child "Mvcc Cache 2"
suite for the "Cache 2" test suite with the FORCE_MVCC flag on.
2. Implemented a hook that changes the TRANSACTIONAL cache atomicity mode to
TRANSACTIONAL_SNAPSHOT if the FORCE_MVCC flag is turned on.
This allows us to check MVCC mode without creating 
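The hook in point 2 can be sketched in a few lines. This is a hedged illustration of the described behavior, not the actual Ignite implementation; the enum and class names below stand in for the real CacheAtomicityMode and configuration machinery:

```java
// Illustrative stand-in for org.apache.ignite.cache.CacheAtomicityMode.
enum AtomicityMode {
    ATOMIC, TRANSACTIONAL, TRANSACTIONAL_SNAPSHOT
}

// Sketch of the configuration hook: with FORCE_MVCC on, transactional
// caches are transparently switched to the MVCC (snapshot) mode.
class MvccForceHook {
    static AtomicityMode resolve(AtomicityMode configured, boolean forceMvcc) {
        if (forceMvcc && configured == AtomicityMode.TRANSACTIONAL)
            return AtomicityMode.TRANSACTIONAL_SNAPSHOT;

        return configured; // ATOMIC caches and explicit snapshot mode are left as is
    }
}
```

This is why the same test suite can run unchanged in both modes: only the atomicity mode of TRANSACTIONAL caches is rewritten at configuration time.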

[ANNOUNCE] Welcome Ilya Kasnacheev as a new committer

2018-11-27 Thread Dmitriy Pavlov
Dear Igniters,



It is the last but not least announcement. The Apache Ignite Project Management
Committee (PMC) has invited Ilya Kasnacheev to become a new committer and
is happy to announce that he has accepted.



Being a committer enables you to more easily make changes without needing
to go through the patch submission process.



Ilya has made both code contributions and valuable contributions to the project and
community; we appreciate his effort in helping users on the user list, his
proof-of-concept for compression, and his contributions to stability (Lost &
Found tests).



Igniters,


Please join me in welcoming Ilya and congratulating him on his new role in
the Apache Ignite Community.



Thanks

Dmitriy Pavlov


[GitHub] ignite pull request #5507: IGNITE-10287: Add ML inference model storage.

2018-11-27 Thread dmitrievanthony
GitHub user dmitrievanthony opened a pull request:

https://github.com/apache/ignite/pull/5507

IGNITE-10287: Add ML inference model storage.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10287

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5507.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5507


commit 45bd87c33972d943afcba6ecdfb9735a3b44567e
Author: dmitrievanthony 
Date:   2018-11-26T15:37:56Z

IGNITE-10287: First version of Model Storage.

commit 9d13c19d2d45ad9983f92177ffb34e38d75e53b5
Author: dmitrievanthony 
Date:   2018-11-27T10:44:16Z

IGNITE-10287: Add javadoc and extend example.




---


[jira] [Created] (IGNITE-10419) [ML] Move person dataset to SandboxMLCache class

2018-11-27 Thread Yury Babak (JIRA)
Yury Babak created IGNITE-10419:
---

 Summary: [ML] Move person dataset to SandboxMLCache class
 Key: IGNITE-10419
 URL: https://issues.apache.org/jira/browse/IGNITE-10419
 Project: Ignite
  Issue Type: Improvement
  Components: ml
Reporter: Yury Babak
 Fix For: 2.8


Now we have duplicated code in examples: a simple cache with several Person 
records. We should move this cache creation code into the SandboxMLCache class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Historical rebalance

2018-11-27 Thread Seliverstov Igor
Ivan,

1) The list is saved on each checkpoint, in full (all transactions in the active
state when the checkpoint begins).
We need the whole list to get the oldest transaction because, after
the previous oldest tx finishes, we need to get the following one.

2) I guess there is a description of how the persistent storage works and how
it recovers [1]

Vladimir,

the whole list of what we are going to store on checkpoint (updated):
1) Partition counter low watermark (LWM)
2) WAL pointer of the earliest active transaction's write to the partition at the
time the checkpoint started
3) List of prepared txs with acquired partition counters (which were
acquired but not applied yet)

This way we don't need any additional info in demand message. Start point
can be easily determined using stored WAL "back-pointer".

[1]
https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-LocalRecoveryProcess
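The three items above could be grouped into a single checkpoint payload. A sketch with hypothetical names (this class does not exist in Ignite; WAL pointers are simplified to plain longs):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Sketch of the data persisted per partition at checkpoint time; all names
// are illustrative, not actual Ignite structures.
class HistoricalRebalanceCheckpointRecord {
    final long partCntrLwm;                  // 1) partition counter low watermark
    final long earliestActiveTxWalPtr;       // 2) WAL "back-pointer" of the earliest active tx write
    final Map<String, Long> preparedTxCntrs; // 3) prepared tx id -> acquired (not yet applied) counter

    HistoricalRebalanceCheckpointRecord(long lwm, long walPtr, Map<String, Long> prepared) {
        partCntrLwm = lwm;
        earliestActiveTxWalPtr = walPtr;
        preparedTxCntrs = Collections.unmodifiableMap(new HashMap<>(prepared));
    }

    /** Start point for WAL iteration on the supplier: the stored back-pointer. */
    long rebalanceStartPointer() {
        return earliestActiveTxWalPtr;
    }
}
```

With this record saved, the demand message needs no extra info: the start point is derived from the stored back-pointer.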


Tue, Nov 27, 2018, 11:19, Vladimir Ozerov :

> Igor,
>
> Could you please elaborate - what is the whole set of information we are
> going to save at checkpoint time? From what I understand this should be:
> 1) List of active transactions with WAL pointers of their first writes
> 2) List of prepared transactions with their update counters
> 3) Partition counter low watermark (LWM) - the smallest partition counter
> before which there are no prepared transactions.
>
> And the we send to supplier node a message: "Give me all updates starting
> from that LWM plus data for that transactions which were active when I
> failed".
>
> Am I right?
>
> On Fri, Nov 23, 2018 at 11:22 AM Seliverstov Igor 
> wrote:
>
> > Hi Igniters,
> >
> > Currently I’m working on possible approaches how to implement historical
> > rebalance (delta rebalance using WAL iterator) over MVCC caches.
> >
> > The main difficulty is that MVCC writes changes on tx active phase while
> > partition update version, aka update counter, is being applied on tx
> > finish. This means we cannot start iteration over WAL right from the
> > pointer where the update counter updated, but should include updates,
> which
> > the transaction that updated the counter did.
> >
> > These updates may be much earlier than the point where the update counter
> > was updated, so we have to be able to identify the point where the first
> > update happened.
> >
> > The proposed approach includes:
> >
> > 1) preserve list of active txs, sorted by the time of their first update
> > (using WAL ptr of first WAL record in tx)
> >
> > 2) persist this list on each checkpoint (together with TxLog for example)
> >
> > 4) send whole active tx list (transactions which were in active state at
> > the time the node was crushed, empty list in case of graceful node stop)
> as
> > a part of partition demand message.
> >
> > 4) find a checkpoint where the earliest tx exists in persisted txs and
> use
> > saved WAL ptr as a start point or apply current approach in case the
> active
> > tx list (sent on previous step) is empty
> >
> > 5) start iteration.
> >
> > Your thoughts?
> >
> > Regards,
> > Igor
>


[jira] [Created] (IGNITE-10418) Implement lightweight profiling for message processing

2018-11-27 Thread Alexei Scherbakov (JIRA)
Alexei Scherbakov created IGNITE-10418:
--

 Summary: Implement lightweight profiling for message processing
 Key: IGNITE-10418
 URL: https://issues.apache.org/jira/browse/IGNITE-10418
 Project: Ignite
  Issue Type: New Feature
Reporter: Alexei Scherbakov
Assignee: Alexei Scherbakov
 Fix For: 2.8






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Historical rebalance

2018-11-27 Thread Vladimir Ozerov
Just thought about this a bit more. If we look for the start of a long-running
transaction in the WAL, we may go back too far into the past only to get a few
entries.

What if we consider a slightly different approach? We assume that a transaction
can be represented as a set of independent operations, which are applied in
the same order on both primary and backup nodes. Then we can do the
following:
1) When the next operation is finished, we assign the transaction the LWM of the
last checkpoint, i.e. we maintain a map [Txn -> last_op_LWM].
2) If the "last_op_LWM" of a transaction has not changed between two subsequent
checkpoints, we assign it the special value "UP_TO_DATE".

Now, at checkpoint time, we take the minimal value among the current partition
LWM and the active transaction LWMs, ignoring "UP_TO_DATE" values. The resulting
value is the final partition counter which we will request from the supplier
node. We save it to the checkpoint record. When the WAL on the demander is unwound
from this value, it is guaranteed to contain all missing data of the
demander's active transactions.

I.e. instead of tracking the whole active transaction, we track the part of the
transaction which is possibly missing on a node.

Will that work?
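The minimum computed at checkpoint time can be expressed as a small pure function. A sketch under stated assumptions: UP_TO_DATE is modeled as a sentinel value, and the class/method names are illustrative, not Ignite API:

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the demand start point: min of the partition LWM and the
// per-transaction last-operation LWMs, skipping up-to-date transactions.
class DemandPoint {
    static final long UP_TO_DATE = Long.MAX_VALUE; // sentinel: tx made no writes since last checkpoint

    static long compute(long partLwm, List<Long> txLastOpLwms) {
        long min = partLwm;

        for (long lwm : txLastOpLwms)
            if (lwm != UP_TO_DATE)
                min = Math.min(min, lwm);

        return min;
    }
}
```

For example, with a partition LWM of 100 and active transactions whose last operations were covered at counters 40 and 70, the demander would request updates starting from 40.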


On Tue, Nov 27, 2018 at 11:19 AM Vladimir Ozerov 
wrote:

> Igor,
>
> Could you please elaborate - what is the whole set of information we are
> going to save at checkpoint time? From what I understand this should be:
> 1) List of active transactions with WAL pointers of their first writes
> 2) List of prepared transactions with their update counters
> 3) Partition counter low watermark (LWM) - the smallest partition counter
> before which there are no prepared transactions.
>
> And the we send to supplier node a message: "Give me all updates starting
> from that LWM plus data for that transactions which were active when I
> failed".
>
> Am I right?
>
> On Fri, Nov 23, 2018 at 11:22 AM Seliverstov Igor 
> wrote:
>
>> Hi Igniters,
>>
>> Currently I’m working on possible approaches how to implement historical
>> rebalance (delta rebalance using WAL iterator) over MVCC caches.
>>
>> The main difficulty is that MVCC writes changes on tx active phase while
>> partition update version, aka update counter, is being applied on tx
>> finish. This means we cannot start iteration over WAL right from the
>> pointer where the update counter updated, but should include updates, which
>> the transaction that updated the counter did.
>>
>> These updates may be much earlier than the point where the update counter
>> was updated, so we have to be able to identify the point where the first
>> update happened.
>>
>> The proposed approach includes:
>>
>> 1) preserve list of active txs, sorted by the time of their first update
>> (using WAL ptr of first WAL record in tx)
>>
>> 2) persist this list on each checkpoint (together with TxLog for example)
>>
>> 4) send whole active tx list (transactions which were in active state at
>> the time the node was crushed, empty list in case of graceful node stop) as
>> a part of partition demand message.
>>
>> 4) find a checkpoint where the earliest tx exists in persisted txs and
>> use saved WAL ptr as a start point or apply current approach in case the
>> active tx list (sent on previous step) is empty
>>
>> 5) start iteration.
>>
>> Your thoughts?
>>
>> Regards,
>> Igor
>
>


[jira] [Created] (IGNITE-10417) notifyDiscoveryListener can be lost

2018-11-27 Thread Pavel Voronkin (JIRA)
Pavel Voronkin created IGNITE-10417:
---

 Summary: notifyDiscoveryListener can be lost
 Key: IGNITE-10417
 URL: https://issues.apache.org/jira/browse/IGNITE-10417
 Project: Ignite
  Issue Type: Bug
Reporter: Pavel Voronkin






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10416) Revisit GridCacheAbstractTest childs.

2018-11-27 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10416:
-

 Summary: Revisit GridCacheAbstractTest childs.
 Key: IGNITE-10416
 URL: https://issues.apache.org/jira/browse/IGNITE-10416
 Project: Ignite
  Issue Type: Bug
  Components: general
Reporter: Andrew Mashenkov


We have a lot of tests inherited from GridCacheAbstractSelfTest; that code is 
complicated and a bit outdated given that JUnit 4/5 is coming. 
Moreover, starting the grid from the class constructor makes test muting tricky, 
and disabling the cacheStore (which is always enabled by default) doesn't look 
straightforward in child classes.

 

We should revisit all its children and check whether such inheritance is correct.

 

First candidates are:
 # Test has own "start grid routine", but implements gridCount() method just as 
a dummy.
 # Before/After methods do not call parent methods. This should be handled 
carefully.
 # Test that doesn't expect a feature to be used, but enabled in 
GridCacheAbstractSelfTest.
E.g. usually, rebalance test doesn't expect cacheStore to be enabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #5198: IGNITE-10002: MVCC: Create "Cache 2" test suite f...

2018-11-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5198


---


Re: New API for changing configuration of persistent caches

2018-11-27 Thread Vladimir Ozerov
Ed,

I think most questions around the API will go away once we understand how users
are going to use it. Can you provide several examples of properties which
we think are going to be changed often, along with a concrete example of API
usage (native API, control.sh, whatever we think is important)?

Personally, with my SQL-oriented bias, I do not understand why we call the
operation "restart" and why we need an API with CacheConfiguration at all. In
the SQL world we have the ALTER TABLE command, which may include one or several
changes for the given table ("cache" in our terms). And in the vast
majority of cases the user changes only one property in a single command. Rarely -
several properties. Extremely rarely - redefine the whole table.
Next, not all changes require a "restart" at all. In SQL terms we call
operations "ONLINE" when there is no need to stop cache operations for a
long time, and "OFFLINE" otherwise. For Ignite, more than half of the properties
can be changed "ONLINE" - the events flag, write-behind parameters, partition
loss policy, rebalance properties, etc. I would say that this is a "change",
not a restart.

Thoughts?
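To make the discussion concrete, here is a sketch of what a per-property "change" API (as opposed to a full CacheConfiguration restart) might look like. Everything below is hypothetical - the class, the builder methods, and the "ALTER CACHE" syntax are illustrations, not existing Ignite API or SQL:

```java
// Hypothetical builder of an online cache change: only the properties the
// user touches are sent, mirroring a SQL "ALTER TABLE ... SET ..." command.
class CacheChange {
    private final String cacheName;
    private Long rebalanceDelay;        // null means "leave unchanged"
    private String partitionLossPolicy; // null means "leave unchanged"

    CacheChange(String cacheName) { this.cacheName = cacheName; }

    CacheChange rebalanceDelay(long millis) { rebalanceDelay = millis; return this; }

    CacheChange partitionLossPolicy(String plc) { partitionLossPolicy = plc; return this; }

    /** Render the change as an equivalent SQL-like statement, for illustration only. */
    String toSql() {
        StringBuilder sb = new StringBuilder("ALTER CACHE " + cacheName + " SET");

        if (rebalanceDelay != null)
            sb.append(" rebalanceDelay=").append(rebalanceDelay);

        if (partitionLossPolicy != null)
            sb.append(" partitionLossPolicy=").append(partitionLossPolicy);

        return sb.toString();
    }
}
```

Such a POJO would be self-documenting (only mutable properties are exposed) and serializable from any platform, which is the core of the argument above.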

On Mon, Nov 26, 2018 at 9:25 PM Eduard Shangareev <
eduard.shangar...@gmail.com> wrote:

> Ok,
>
> We need two approaches to change cache configuration:
> 1. Ignite.restartCaches(CacheConfiguration ... cfgs);
> 2. Ignite.restartCaches(CacheConfigurationDiff ... cfgDiffs);
>
> Also, we need some versioning of cache configurations for caches. Which
> could be done when we move the cache configuration from serialized file to
> metastore.
>
> It is necessary for several failover scenarios (actually, all of them
> include joining node with outdated configuration).
> And for CAS-like API for restarting caches.
>
>
> On Thu, Nov 22, 2018 at 12:19 PM Vladimir Ozerov 
> wrote:
>
> > Ed,
> >
> > We have ~70 properties in CacheConfiguration. ~50 of them are plain, ~20
> > are custom classes. My variant allows to change plain properties from any
> > platform, and the rest 20 from any platform when user has relevant
> > BinaryType.
> >
> > On Thu, Nov 22, 2018 at 11:30 AM Eduard Shangareev <
> > eduard.shangar...@gmail.com> wrote:
> >
> > > I don't see how you variant handles user-defined objects (factories,
> > > affinity-functions, interceptors, etc.). Could you describe?
> > >
> > > On Thu, Nov 22, 2018 at 10:47 AM Vladimir Ozerov  >
> > > wrote:
> > >
> > > > My variant of API avoids cache configuration.
> > > >
> > > > One more thing to note - as we found out control.sh cannot dump XML
> > > > configuration. Currently it returns only subset of properties. And in
> > > > general case it is impossible to convert CacheConfiguration to Spring
> > > XML,
> > > > because Spring XMLis not serialization protocol. So API with
> > > > CacheConfiguration doesn’t seem to work for control.sh as well.
> > > >
> > > > чт, 22 нояб. 2018 г. в 10:05, Eduard Shangareev <
> > > > eduard.shangar...@gmail.com
> > > > >:
> > > >
> > > > > Vovan,
> > > > >
> > > > > We couldn't avoid API with cache configuration.
> > > > > Almost all of ~70 properties could be changed, some of them are
> > > instances
> > > > > of objects or could be user-defined class.
> > > > > Could you come up with alternatives for user-defined affinity
> > function?
> > > > >
> > > > > Also, the race would have a place in other scenarios.
> > > > >
> > > > >
> > > > >
> > > > > On Thu, Nov 22, 2018 at 8:50 AM Vladimir Ozerov <
> > voze...@gridgain.com>
> > > > > wrote:
> > > > >
> > > > > > Ed,
> > > > > >
> > > > > > We may have API similar to “cache” and “getOrCreateCache”, or may
> > > not.
> > > > It
> > > > > > is up to us to decide. Similarity on it’s own is weak argument.
> > > > > > Functionality and simplicity - this is what matters.
> > > > > >
> > > > > > Approach with cache configuration has three major issues
> > > > > > 1) It exposes properties which user will not be able to change,
> so
> > > > > typical
> > > > > > user actions would be: try to change property, fail as it is
> > > > unsupported,
> > > > > > go reading documentation. Approach with separate POJO is
> intuitive
> > > and
> > > > > > self-documenting.
> > > > > > 2) It has race condition between config read and config apply, so
> > > user
> > > > do
> > > > > > not know what exactly he changes, unless you change API to
> > something
> > > > like
> > > > > > “restartCaches(Tuple CacheConfiguration>...)”,
> > > > which
> > > > > > user will need to call in a loop.
> > > > > > 3) And it is not suitable for non-Java platform, which is a
> > > > showstopper -
> > > > > > all API should be available from all platforms unless it is
> proven
> > to
> > > > be
> > > > > > impossible to implement.
> > > > > >
> > > > > > Vladimir.
> > > > > >
> > > > > > чт, 22 нояб. 2018 г. в 1:06, Eduard Shangareev <
> > > > > > eduard.shangar...@gmail.com
> > > > > > >:
> > > > > >
> > > > > > > Vovan,
> > > > > > >
> > > > > > > Would you argue that we should have the similar API in Java as
> > > > > > > 

[jira] [Created] (IGNITE-10415) MVCC: TxRollbackOnIncorrectParamsTest fails if Mvcc enabled.

2018-11-27 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10415:
-

 Summary: MVCC: TxRollbackOnIncorrectParamsTest fails if Mvcc 
enabled.
 Key: IGNITE-10415
 URL: https://issues.apache.org/jira/browse/IGNITE-10415
 Project: Ignite
  Issue Type: Bug
  Components: mvcc
Reporter: Andrew Mashenkov


{noformat}
class org.apache.ignite.internal.processors.query.IgniteSQLException: 
Transaction is already completed.
at 
org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.checkActive(MvccUtils.java:660)
at 
org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.requestSnapshot(MvccUtils.java:816)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.mvccPutAllAsync0(GridNearTxLocal.java:740)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.putAsync0(GridNearTxLocal.java:580)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.putAsync(GridNearTxLocal.java:446)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$22.op(GridCacheAdapter.java:2504)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$22.op(GridCacheAdapter.java:2502)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4323)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put0(GridCacheAdapter.java:2502)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2483)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2460)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1105)
at 
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:820)
at 
org.apache.ignite.internal.processors.cache.transactions.TxRollbackOnIncorrectParamsTest.testLabelFilledLocalGuarantee(TxRollbackOnIncorrectParamsTest.java:122)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at junit.framework.TestCase.runTest(TestCase.java:176)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2166)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:144)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:2082)
at java.lang.Thread.run(Thread.java:748){noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #5506: Ignite 10223

2018-11-27 Thread Salatich
GitHub user Salatich opened a pull request:

https://github.com/apache/ignite/pull/5506

Ignite 10223



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Salatich/ignite ignite-10223

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5506.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5506


commit fee281e8acb4438061bc911cdfb687039f7e3430
Author: Aleksandr Salatich 
Date:   2018-11-13T14:42:01Z

ignite-10223
Add two new methods in Affinity class which returns List instead of 
Collection (and should replace them in future)

commit f1ae4ac5e5a9f9ebedc2a9d45e4b6bef12b7ebd6
Author: Aleksandr Salatich 
Date:   2018-11-13T14:42:01Z

ignite-10223
Add two new methods in Affinity class which returns List instead of 
Collection (and should replace them in future)

commit 17b6a7c9dc4ccca6ef0bc21c88fbb59f13c876a0
Author: Aleksandr Salatich 
Date:   2018-11-27T08:29:29Z

ignite-10223
* add corresponding methods for .NET API

commit 96f155ab432f77719b8c965ad3060a9f226d29ce
Author: Aleksandr Salatich 
Date:   2018-11-27T08:33:35Z

Merge branch 'ignite-10223' of https://github.com/Salatich/ignite into 
ignite-10223




---


[GitHub] ignite pull request #5380: ignite-10223

2018-11-27 Thread Salatich
Github user Salatich closed the pull request at:

https://github.com/apache/ignite/pull/5380


---


[MTCGA]: new failures in builds [2402883, 2402882] needs to be handled

2018-11-27 Thread dpavlov . tasks
Hi Igniters,

 I've detected some new issue on TeamCity to be handled. You are more than 
welcomed to help.

 If your changes can lead to this failure(s): We're grateful that you were a 
volunteer to make the contribution to this project, but things change and you 
may no longer be able to finalize your contribution.
 Could you respond to this email and indicate if you wish to continue and fix 
test failures or step down and some committer may revert you commit. 

 *New test failure in master 
CacheConfigurationParityTest.TestCacheConfiguration 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-7481412179890583225=%3Cdefault%3E=testDetails

 *New test failure in master 
DataStorageMetricsParityTest.TestDataStorageMetrics 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=5450053498549662287=%3Cdefault%3E=testDetails
 Changes may lead to failure were done by 
 - aliskhakov 
https://ci.ignite.apache.org/viewModification.html?modId=840588
 - sergi.vladykin 
https://ci.ignite.apache.org/viewModification.html?modId=840556
 - rshtykh 
https://ci.ignite.apache.org/viewModification.html?modId=840590
 - xxtern 
https://ci.ignite.apache.org/viewModification.html?modId=840616
 - gromcase 
https://ci.ignite.apache.org/viewModification.html?modId=840584
 - pudov.max 
https://ci.ignite.apache.org/viewModification.html?modId=840599
 - zaleslaw.sin 
https://ci.ignite.apache.org/viewModification.html?modId=840530
 - dmitry.melnichuk 
https://ci.ignite.apache.org/viewModification.html?modId=840595

 *New test failure in master 
DataStorageMetricsParityTest.TestDataStorageMetrics 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=2700342014550792987=%3Cdefault%3E=testDetails

 *New test failure in master 
CacheConfigurationParityTest.TestCacheConfiguration 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-650946841614557165=%3Cdefault%3E=testDetails
 Changes may lead to failure were done by 
 - aliskhakov 
https://ci.ignite.apache.org/viewModification.html?modId=840588
 - sergi.vladykin 
https://ci.ignite.apache.org/viewModification.html?modId=840556
 - rshtykh 
https://ci.ignite.apache.org/viewModification.html?modId=840590
 - xxtern 
https://ci.ignite.apache.org/viewModification.html?modId=840616
 - gromcase 
https://ci.ignite.apache.org/viewModification.html?modId=840584
 - pudov.max 
https://ci.ignite.apache.org/viewModification.html?modId=840599
 - zaleslaw.sin 
https://ci.ignite.apache.org/viewModification.html?modId=840530
 - dmitry.melnichuk 
https://ci.ignite.apache.org/viewModification.html?modId=840595

 - Here's a reminder of what contributors were agreed to do 
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute 
 - Should you have any questions please contact dev@ignite.apache.org 

Best Regards,
Apache Ignite TeamCity Bot 
https://github.com/apache/ignite-teamcity-bot
Notification generated at 11:23:17 27-11-2018 


[GitHub] ignite pull request #5505: IGNITE-10352

2018-11-27 Thread dgovorukhin
GitHub user dgovorukhin opened a pull request:

https://github.com/apache/ignite/pull/5505

IGNITE-10352



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10352-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5505.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5505


commit 4d151fae64b8c59176bf5b619c10b5ca757542c0
Author: Dmitriy Govorukhin 
Date:   2018-11-27T08:04:13Z

IGNITE-10352 wip

Signed-off-by: Dmitriy Govorukhin 

commit 083488e0225a5d8dc8852ab727396f320d319167
Author: Dmitriy Govorukhin 
Date:   2018-11-27T08:23:56Z

IGNITE-10352 remove owner check

Signed-off-by: Dmitriy Govorukhin 




---


[GitHub] ignite pull request #5504: Ignite gc 371 2

2018-11-27 Thread AlexDel
GitHub user AlexDel opened a pull request:

https://github.com/apache/ignite/pull/5504

Ignite gc 371 2



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-gc-371-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5504.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5504


commit 36957660fc3a16237984a7d316ae88c36b7a1e9c
Author: alexdel 
Date:   2018-11-27T06:45:16Z

GC-371. Web Console: Fixed resizing of right panel.

commit f633677143e465c50f89d8e09c4b78523c867e2e
Author: alexdel 
Date:   2018-11-27T07:39:43Z

GC-371. Web Console: Fixed resizing of ignite-grid-table.




---


Re: [VOTE] Creation of dedicated list for github notifications

2018-11-27 Thread Andrey Mashenkov
+1

On Tue, Nov 27, 2018 at 10:12 AM Sergey Chugunov 
wrote:

> +1
>
> Plus this dedicated list should be properly documented in wiki, mentioning
> it in How to Contribute [1] or in Make Teamcity Green Again [2] would be a
> good idea.
>
> [1] https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute
> [2]
>
> https://cwiki.apache.org/confluence/display/IGNITE/Make+Teamcity+Green+Again
>
> > On Tue, Nov 27, 2018 at 9:51 AM Ivan Pavlukhin  wrote:
>
> > +1
> > > Tue, Nov 27, 2018 at 09:22, Dmitrii Ryabov :
> > >
> > > 0
> > > > Tue, Nov 27, 2018 at 02:33, Alexey Kuznetsov  >:
> > > >
> > > > +1
> > > > Do not forget notifications from GitBox too!
> > > >
> > > > On Tue, Nov 27, 2018 at 2:20 AM Zhenya 
> > wrote:
> > > >
> > > > > +1, already made it by filters.
> > > > >
> > > > > > This was discussed already [1].
> > > > > >
> > > > > > So, I want to complete this discussion with moving outside
> dev-list
> > > > > > GitHub-notification to dedicated list.
> > > > > >
> > > > > > Please start voting.
> > > > > >
> > > > > > +1 - to accept this change.
> > > > > > 0 - you don't care.
> > > > > > -1 - to decline this change.
> > > > > >
> > > > > > This vote will go for 72 hours.
> > > > > >
> > > > > > [1]
> > > > > >
> > > > >
> >
> http://apache-ignite-developers.2346864.n4.nabble.com/Time-to-remove-automated-messages-from-the-devlist-td37484i20.html
> > > > >
> > > >
> > > >
> > > > --
> > > > Alexey Kuznetsov
> >
> >
> >
> > --
> > Best regards,
> > Ivan Pavlukhin
> >
>


-- 
Best regards,
Andrey V. Mashenkov


Re: Historical rebalance

2018-11-27 Thread Vladimir Ozerov
Igor,

Could you please elaborate - what is the whole set of information we are
going to save at checkpoint time? From what I understand this should be:
1) List of active transactions with WAL pointers of their first writes
2) List of prepared transactions with their update counters
3) Partition counter low watermark (LWM) - the smallest partition counter
before which there are no prepared transactions.

And then we send to the supplier node a message: "Give me all updates starting
from that LWM plus data for that transactions which were active when I
failed".

Am I right?

On Fri, Nov 23, 2018 at 11:22 AM Seliverstov Igor 
wrote:

> Hi Igniters,
>
> Currently I’m working on possible approaches how to implement historical
> rebalance (delta rebalance using WAL iterator) over MVCC caches.
>
> The main difficulty is that MVCC writes changes on tx active phase while
> partition update version, aka update counter, is being applied on tx
> finish. This means we cannot start iteration over WAL right from the
> pointer where the update counter updated, but should include updates, which
> the transaction that updated the counter did.
>
> These updates may be much earlier than the point where the update counter
> was updated, so we have to be able to identify the point where the first
> update happened.
>
> The proposed approach includes:
>
> 1) preserve list of active txs, sorted by the time of their first update
> (using WAL ptr of first WAL record in tx)
>
> 2) persist this list on each checkpoint (together with TxLog for example)
>
> 3) send the whole active tx list (transactions which were in active state at
> the time the node crashed, or an empty list in case of graceful node stop) as
> a part of the partition demand message.
>
> 4) find a checkpoint where the earliest tx exists in persisted txs and use
> saved WAL ptr as a start point or apply current approach in case the active
> tx list (sent on previous step) is empty
>
> 5) start iteration.
>
> Your thoughts?
>
> Regards,
> Igor
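
A minimal editorial sketch of the checkpoint-time bookkeeping discussed in this
thread. All names here (WalPtr, CheckpointTxState, rebalanceStart) are
illustrative assumptions, not Ignite's actual API:

```java
import java.util.*;

// Illustrative sketch only: WalPtr stands in for a WAL file pointer.
class WalPtr implements Comparable<WalPtr> {
    final long segIdx;  // WAL segment index
    final int off;      // offset within the segment
    WalPtr(long segIdx, int off) { this.segIdx = segIdx; this.off = off; }
    @Override public int compareTo(WalPtr o) {
        int c = Long.compare(segIdx, o.segIdx);
        return c != 0 ? c : Integer.compare(off, o.off);
    }
    @Override public String toString() { return segIdx + ":" + off; }
}

// Hypothetical state persisted on each checkpoint, per the proposal above.
class CheckpointTxState {
    // 1) Active txs keyed (and thus sorted) by the WAL pointer of their first write.
    final NavigableMap<WalPtr, String> activeTxs = new TreeMap<>();
    // 2) Prepared txs with the update counters they are going to apply.
    final Map<String, Long> preparedTxCntrs = new HashMap<>();
    // 3) Partition counter low watermark: no prepared tx exists before it.
    long lwm;

    // The supplier must start WAL iteration not at the LWM record itself,
    // but at the earlier of the LWM pointer and the first write of the
    // oldest transaction that was still active.
    WalPtr rebalanceStart(WalPtr lwmPtr) {
        return activeTxs.isEmpty() || lwmPtr.compareTo(activeTxs.firstKey()) < 0
            ? lwmPtr
            : activeTxs.firstKey();
    }
}
```

Under these assumptions, a demand message carries the active tx list, and the
supplier picks the iteration start via rebalanceStart(...).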


[GitHub] ignite pull request #5503: IGNITE-10352

2018-11-27 Thread dgovorukhin
GitHub user dgovorukhin opened a pull request:

https://github.com/apache/ignite/pull/5503

IGNITE-10352



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dgovorukhin/ignite ignite-10352-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5503.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5503


commit 4d151fae64b8c59176bf5b619c10b5ca757542c0
Author: Dmitriy Govorukhin 
Date:   2018-11-27T08:04:13Z

IGNITE-10352 wip

Signed-off-by: Dmitriy Govorukhin 




---


Re: proposed realization KILL QUERY command

2018-11-27 Thread Vladimir Ozerov
Yes ("нуы")

On Tue, Nov 27, 2018 at 10:56 AM Ivan Pavlukhin  wrote:

> I believe that the meaning was:
>
> > I propose to start with running queries VIEW first.
> Tue, Nov 27, 2018 at 10:47, Vladimir Ozerov :
> >
> > I propose to start with running queries мшуц first. Once we have it, it
> > will be easier to agree on final command syntax.
> >
> > On Fri, Nov 23, 2018 at 9:32 AM Ivan Pavlukhin 
> wrote:
> >
> > > Hi,
> > >
> > > May be I am a little bit late with my thoughts about a command syntax.
> > > How do I see it is going to be used:
> > > 1. A user is able to kill a query by unique id belonging only to this
> > > query.
> > > 2. A user is able to kill all queries started by a specific node.
> > > For killing a single query we must identify it by unique id which is
> > > going to be received directly from Ignite (e.g. running queries view)
> > > and not calculated by user. Internally the id is compound but why
> > > cannot we convert it to opaque integer or string which hides real
> > > structure? E.g. base16String(concat(nodeOrder.toString(), ".",
> > > queryIdOnNode.toString())) The syntax could be KILL QUERY '123' or
> > > KILL QUERY WHERE queryId = '123'
> > > For killing all queries started by some node we need to use only node
> > > order (or id). It could be like KILL QUERY WHERE nodeOrder = 34.
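
The opaque compound id proposed above can be sketched as follows (editorial
illustration; GlobalQueryId and its methods are assumptions, not an Ignite API):

```java
// Hypothetical helper packing nodeOrder and a per-node query counter
// into one opaque string the user passes back unchanged.
final class GlobalQueryId {
    final long nodeOrder;
    final long queryIdOnNode;

    GlobalQueryId(long nodeOrder, long queryIdOnNode) {
        this.nodeOrder = nodeOrder;
        this.queryIdOnNode = queryIdOnNode;
    }

    // e.g. "123.33" -> KILL QUERY '123.33'
    String encode() { return nodeOrder + "." + queryIdOnNode; }

    static GlobalQueryId decode(String s) {
        int dot = s.indexOf('.');
        if (dot < 0)
            throw new IllegalArgumentException("Bad query id: " + s);
        return new GlobalQueryId(
            Long.parseLong(s.substring(0, dot)),
            Long.parseLong(s.substring(dot + 1)));
    }
}
```

A client would copy the id from the running queries view and pass the encoded
string verbatim, e.g. KILL QUERY '123.33'; the server side decodes it back into
node order and per-node counter.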
> > > Thu, Nov 22, 2018 at 12:56, Denis Mekhanikov  >:
> > > >
> > > > Actually, option with separate parameters was mentioned in another
> thread
> > > >
> > >
> http://apache-ignite-developers.2346864.n4.nabble.com/proposed-design-for-thin-client-SQL-management-and-monitoring-view-running-queries-and-kill-it-tp37713p38056.html
> > > >
> > > > Denis
> > > >
> > > > Thu, Nov 22, 2018 at 08:51, Vladimir Ozerov  >:
> > > >
> > > > > Denis,
> > > > >
> > > > > Problems with separate parameters are explained above.
> > > > >
> > > > > Thu, Nov 22, 2018 at 3:23, Denis Magda :
> > > > >
> > > > > > Vladimir,
> > > > > >
> > > > > > All of the alternatives are reminiscent of mathematical
> operations.
> > > Don't
> > > > > > look like a SQL command. What if we use a SQL approach
> introducing
> > > named
> > > > > > parameters:
> > > > > >
> > > > > > KILL QUERY query_id=10 [AND node_id=5]
> > > > > >
> > > > > > --
> > > > > > Denis
> > > > > >
> > > > > > On Wed, Nov 21, 2018 at 4:11 AM Vladimir Ozerov <
> > > voze...@gridgain.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Denis,
> > > > > > >
> > > > > > > Space is bad candidate because it is a whitespace. Without
> > > whitespaces
> > > > > we
> > > > > > > can have syntax without quotes at all. Any non-whitespace
> delimiter
> > > > > will
> > > > > > > work, though:
> > > > > > >
> > > > > > > KILL QUERY 45.1
> > > > > > > KILL QUERY 45-1
> > > > > > > KILL QUERY 45:1
> > > > > > >
> > > > > > > On Wed, Nov 21, 2018 at 3:06 PM Yuriy <
> jury.gerzhedow...@gmail.com>
> > > > > > wrote:
> > > > > > >
> > > > > > > > Denis,
> > > > > > > >
> > > > > > > > Let's consider parameter of KILL QUERY just a string with
> some
> > > query
> > > > > > id,
> > > > > > > > without any meaning for user. User just need to get the id
> and
> > > pass
> > > > > as
> > > > > > > > parameter to KILL QUERY command.
> > > > > > > >
> > > > > > > > Even if query is distributed it have single query id from
> user
> > > > > > > perspective
> > > > > > > > and will killed on all nodes. User just need to known one
> global
> > > > > query
> > > > > > > id.
> > > > > > > >
> > > > > > > > How it can works.
> > > > > > > > 1) SELECT * from running_queries
> > > > > > > > result is
> > > > > > > > query_id | node_id                              | sql        | schema_name | connection_id | duration
> > > > > > > > 123.33   | e0a69cb8-a1a8-45f6-b84d-ead367a0     | SELECT ... | ...         | 22            | 23456
> > > > > > > > 333.31   | aaa6acb8-a4a5-42f6-f842-ead111b00020 | UPDATE ... | ...         | 321           | 346
> > > > > > > > 2) KILL QUERY '123.33'
> > > > > > > >
> > > > > > > > So, user need select query_id from running_queries view and
> use
> > > it
> > > > > for
> > > > > > > KILL
> > > > > > > > QUERY command.
> > > > > > > >
> > > > > > > > I hope it became clearer.
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Wed, Nov 21, 2018 at 02:11, Denis Magda  >:
> > > > > > > >
> > > > > > > > > Folks,
> > > > > > > > >
> > > > > > > > > The decimal syntax is really odd - KILL QUERY
> > > > > > > > > '[node_order].[query_counter]'
> > > > > > > > >
> > > > > > > > > Confusing, let's use a space to separate parameters.
> > > > > > > > >
> > > > > > > > > Also, what if I want to halt a specific query with certain
> ID?
> > > > > Don't
> > > > > > > know
> > > > > > > > > the node number, just know that the query is distributed
> and
> > > runs
> > > > > > > across
> > > > > > > > > several machines. Sounds like the syntax still should