[MTCGA]: new failures in builds [1677843] need to be handled

2018-08-17 Thread dpavlov . tasks
Hi Ignite Developer,

I am MTCGA.Bot, and I've detected some issues on TeamCity that need to be 
addressed. I hope you can help.

 *New Critical Failure in master Cache 4 
https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_Cache4=%3Cdefault%3E=buildTypeStatusDiv
 No changes in build

- If your changes could have led to this failure(s), please create an issue with 
the MakeTeamCityGreenAgain label and assign it to yourself.
-- If you have a fix, please set the ticket to PA state and write to the dev list 
that the fix is ready.
-- If the fix will require some time, please mute the test and add the 
Muted_Test label to the issue.
- If you know which change caused the failure, please contact the change author 
directly.
- If you don't know which change caused the failure, please send a message to the 
dev list to find out.
Should you have any questions, please contact dpav...@apache.org or write to the 
dev list.
Best Regards,
MTCGA.Bot 
Notification generated at Sat Aug 18 08:52:35 MSK 2018 


[MTCGA]: new failures in builds [1677738] need to be handled

2018-08-17 Thread dpavlov . tasks
Hi Ignite Developer,

I am MTCGA.Bot, and I've detected some issues on TeamCity that need to be 
addressed. I hope you can help.

 *New Critical Failure in master Cache 4 
https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_Cache4=%3Cdefault%3E=buildTypeStatusDiv
 No changes in build

- If your changes could have led to this failure(s), please create an issue with 
the MakeTeamCityGreenAgain label and assign it to yourself.
-- If you have a fix, please set the ticket to PA state and write to the dev list 
that the fix is ready.
-- If the fix will require some time, please mute the test and add the 
Muted_Test label to the issue.
- If you know which change caused the failure, please contact the change author 
directly.
- If you don't know which change caused the failure, please send a message to the 
dev list to find out.
Should you have any questions, please contact dpav...@apache.org or write to the 
dev list.
Best Regards,
MTCGA.Bot 
Notification generated at Sat Aug 18 07:37:39 MSK 2018 


[MTCGA]: new failures in builds [1677097] need to be handled

2018-08-17 Thread dpavlov . tasks
Hi Ignite Developer,

I am MTCGA.Bot, and I've detected some issues on TeamCity that need to be 
addressed. I hope you can help.

 *New Critical Failure in master Cache 4 
https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_Cache4=%3Cdefault%3E=buildTypeStatusDiv
 Changes that may have led to the failure were made by 
 - dmitrievanthony 
http://ci.ignite.apache.org/viewModification.html?modId=828652=false

- If your changes could have led to this failure(s), please create an issue with 
the MakeTeamCityGreenAgain label and assign it to yourself.
-- If you have a fix, please set the ticket to PA state and write to the dev list 
that the fix is ready.
-- If the fix will require some time, please mute the test and add the 
Muted_Test label to the issue.
- If you know which change caused the failure, please contact the change author 
directly.
- If you don't know which change caused the failure, please send a message to the 
dev list to find out.
Should you have any questions, please contact dpav...@apache.org or write to the 
dev list.
Best Regards,
MTCGA.Bot 
Notification generated at Sat Aug 18 06:53:10 MSK 2018 


[MTCGA]: new failures in builds [1676757] need to be handled

2018-08-17 Thread dpavlov . tasks
Hi Ignite Developer,

I am MTCGA.Bot, and I've detected some issues on TeamCity that need to be 
addressed. I hope you can help.

 *New Critical Failure in master Cache 4 
https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_Cache4=%3Cdefault%3E=buildTypeStatusDiv
 Changes that may have led to the failure were made by 
 - biryukovvitaliy92 
http://ci.ignite.apache.org/viewModification.html?modId=828649=false
 - xxtern 
http://ci.ignite.apache.org/viewModification.html?modId=828647=false
 - dkarachentsev 
http://ci.ignite.apache.org/viewModification.html?modId=828644=false
 - stkuzma 
http://ci.ignite.apache.org/viewModification.html?modId=828636=false
 - kaa.dev 
http://ci.ignite.apache.org/viewModification.html?modId=828630=false
 - plehanov.alex 
http://ci.ignite.apache.org/viewModification.html?modId=828627=false
 - kaa.dev 
http://ci.ignite.apache.org/viewModification.html?modId=828621=false
 - ilantukh 
http://ci.ignite.apache.org/viewModification.html?modId=828582=false
 - av 
http://ci.ignite.apache.org/viewModification.html?modId=828574=false
 - vinokurov.pasha 
http://ci.ignite.apache.org/viewModification.html?modId=828554=false

- If your changes could have led to this failure(s), please create an issue with 
the MakeTeamCityGreenAgain label and assign it to yourself.
-- If you have a fix, please set the ticket to PA state and write to the dev list 
that the fix is ready.
-- If the fix will require some time, please mute the test and add the 
Muted_Test label to the issue.
- If you know which change caused the failure, please contact the change author 
directly.
- If you don't know which change caused the failure, please send a message to the 
dev list to find out.
Should you have any questions, please contact dpav...@apache.org or write to the 
dev list.
Best Regards,
MTCGA.Bot 
Notification generated at Sat Aug 18 06:22:29 MSK 2018 


Release policy updates

2018-08-17 Thread Denis Magda
Peter, Anton V, Igniters,

The board communicated the following release policy changes:
  -- for new releases :
 -- you MUST supply a SHA-256 and/or SHA-512 file
 -- you SHOULD NOT supply MD5 or SHA-1 files

Are we good? More details are below.




*2 Release Dist Policy Changes  (Q? us...@infra.apache.org)
---

The Release Distribution Policy[1] changed regarding checksum files.
See under "Cryptographic Signatures and Checksums Requirements" [2].

Note that "MUST", "SHOULD", "SHOULD NOT" are technical terms ;
not just emphasized words ; for an explanation see RFC-2119 [3].

Old policy :

  -- SHOULD supply a SHA checksum file
  -- SHOULD NOT supply a MD5 checksum file

New policy :

  -- SHOULD supply a SHA-256 and/or SHA-512 checksum file
  -- SHOULD NOT supply MD5 or SHA-1 checksum files

Why this change ?

  -- Like MD5, SHA-1 is too broken ; we should move away from it.

Impact for PMCs :

  -- for new releases :
 -- you MUST supply a SHA-256 and/or SHA-512 file
 -- you SHOULD NOT supply MD5 or SHA-1 files

  -- for past releases :
 -- you are not required to change anything ;
 -- it would be nice if you fixed your dist area ;
start with : cleanup ; rename .sha's ; remove .md5's
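
For reference, the required checksum files can be produced with standard tooling
(e.g. sha512sum or shasum -a 512). A minimal Java sketch of the same thing,
assuming the artifact fits in memory and using a hypothetical file name:

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class Sha512Checksum {
    public static void main(String[] args) throws Exception {
        // Hypothetical artifact name; substitute the real release file.
        Path artifact = Paths.get("apache-ignite-x.y.z-bin.zip");

        MessageDigest md = MessageDigest.getInstance("SHA-512");
        byte[] digest = md.digest(Files.readAllBytes(artifact));

        StringBuilder hex = new StringBuilder();
        for (byte b : digest)
            hex.append(String.format("%02x", b));

        // Write the checksum next to the artifact, sha512sum-style.
        Files.write(Paths.get(artifact + ".sha512"),
            (hex + "  " + artifact.getFileName() + "\n").getBytes());
    }
}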


Re: Wrong off-heap size is reported for a node

2018-08-17 Thread Denis Magda
Vova, things are even simpler - we already have
ignite.dataRegionMetrics().getPhysicalMemorySize(), which returns a number
equal/comparable to pageNumber X pageSize.

Igniters, if you believe we need to do more work here, then let's do it
iteratively. Let's fix the reported off-heap occupied size as described above
(just print out getPhysicalMemorySize() for every data region), then do the
rest. This needs to be fixed in 2.7.
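
A minimal sketch of that printout, assuming an Ignite instance named ignite
(DataRegionMetrics is the public API referred to above):

for (DataRegionMetrics regionMetrics : ignite.dataRegionMetrics()) {
    // Physical (resident) off-heap size of the region, in bytes.
    long physSize = regionMetrics.getPhysicalMemorySize();

    System.out.printf("Data region '%s': physical memory = %d MB%n",
        regionMetrics.getName(), physSize / (1024 * 1024));
}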


--

Denis


On Fri, Aug 17, 2018 at 10:20 AM Vladimir Ozerov 
wrote:

> Folks,
>
> We already have this:
> >>> PageMemory [pages=6997377]
>
> Then we can multiply it by page size and get occupied memory. Am I wrong?
>
> On Fri, Aug 17, 2018 at 12:56 PM Dmitriy Pavlov 
> wrote:
>
> > Hi Maxim,
> >
> > thank you for stepping in and for finding these issues. Yes, these
> tickets
> > are correct.
> >
> > I can move https://issues.apache.org/jira/browse/IGNITE-5583 to
> unassigned
> > if someone would like to implement this change. I will not have enough
> time
> > to complete it in 1 month (before 2.7 release).
> >
> > Sincerely,
> > Dmitriy Pavlov
> >
> > пт, 17 авг. 2018 г. в 11:04, Maxim Muzafarov :
> >
> > > Igniters,
> > >
> > > Suppose, Dmitry is talking about IGNITE-5583 [1] - `Switch non-heap
> > memory
> > > metrics
> > > to new page memory semantics` and related previous disscustions to it
> > [4].
> > >
> > > Also we have some additional improvements to CacheMetrics:
> > > IGNITE-5490 [2] - `Implement replacement for obsolete
> > > CacheMetrics#getOffHeapAllocatedSize`
> > > IGNITE-5765 [3] - `CacheMetrics interface cleanup, documentation and
> > fixes`
> > >
> > >
> > > [1] https://issues.apache.org/jira/browse/IGNITE-5583
> > > [2] https://issues.apache.org/jira/browse/IGNITE-5490
> > > [3] https://issues.apache.org/jira/browse/IGNITE-5765
> > > [4]
> > >
> > >
> >
> http://apache-ignite-developers.2346864.n4.nabble.com/Negative-non-heap-memory-maximum-td17990.html
> > >
> > > On Fri, 17 Aug 2018 at 10:14 Dmitriy Pavlov 
> > wrote:
> > >
> > > > Hi Igniters,
> > > >
> > > > It is not an easy fix, so I'm not sure it is possible to do in 2.7.
> > > >
> > > > Offheap size is not reported by VM (it returns -1). To implement it
> we
> > > need
> > > > totally migrate off-heap memory metrics to durable memory data.
> > > >
> > > > I think this issue was reported and I'll find the duplicate.
> > > >
> > > > Sincerely,
> > > > Dmitriy Pavlov
> > > >
> > > > пт, 17 авг. 2018 г. в 6:10, Denis Magda :
> > > >
> > > > > Yes, it was at the end of my wordy email :)
> > > > > https://issues.apache.org/jira/browse/IGNITE-9305
> > > > >
> > > > > --
> > > > > Denis
> > > > >
> > > > > On Thu, Aug 16, 2018 at 11:03 PM Dmitriy Setrakyan <
> > > > dsetrak...@apache.org>
> > > > > wrote:
> > > > >
> > > > > > Is there a blocker ticket for 2.7?
> > > > > >
> > > > > > On Thu, Aug 16, 2018, 19:59 Denis Magda 
> wrote:
> > > > > >
> > > > > > > Igniters,
> > > > > > >
> > > > > > > Was troubleshooting an Ignite deployment today and couldn't
> find
> > > out
> > > > > from
> > > > > > > the logs what was the actual off-heap space used.
> > > > > > >
> > > > > > > Those were the given memory resoures (Ignite 2.6):
> > > > > > >
> > > > > > > [2018-08-16 15:07:49,961][INFO ][main][GridDiscoveryManager]
> > > Topology
> > > > > > > snapshot [ver=1, servers=1, clients=0, CPUs=64,
> *offheap=30.0GB*,
> > > > > > > heap=24.0GB]
> > > > > > >
> > > > > > > And that weird stuff was reported by the node (pay attention to
> > the
> > > > > last
> > > > > > > line):
> > > > > > >
> > > > > > > [2018-08-16 15:45:50,211][INFO
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> ][grid-timeout-worker-#135%cluster_31-Dec-2017%][IgniteKernal%cluster_31-Dec-2017]
> > > > > > > Metrics for local node (to disable set 'metricsLogFrequency' to
> > 0)
> > > > > > > ^-- Node [id=c033026e, name=cluster_31-Dec-2017,
> > > > > uptime=00:38:00.257]
> > > > > > > ^-- H/N/C [hosts=1, nodes=1, CPUs=64]
> > > > > > > ^-- CPU [cur=0.03%, avg=5.54%, GC=0%]
> > > > > > > ^-- PageMemory [pages=6997377]
> > > > > > > ^-- Heap [used=9706MB, free=61.18%, comm=22384MB]
> > > > > > >* ^-- Non heap [used=144MB, free=-1%, comm=148MB] - this
> line
> > is
> > > > > > always
> > > > > > > the same!*
> > > > > > >
> > > > > > > Had to change the code by using
> > dataRegion.getPhysicalMemoryPages()
> > > > to
> > > > > > find
> > > > > > > out that actual off-heap usage size was
> > > > > > > >>> Physical Memory Size: 28651614208 => 27324 MB, 26 GB
> > > > > > >
> > > > > > > Let's fix this issue in 2.7, I proposed a new format. Please
> > review
> > > > and
> > > > > > > share your thoughts:
> > > > > > > https://issues.apache.org/jira/browse/IGNITE-9305
> > > > > > >
> > > > > > > --
> > > > > > > Denis
> > > > > > >
> > > > > >
> > > > >
> > > >
> > > --
> > > --
> > > Maxim Muzafarov
> > >
> >
>


Re: Wrong off-heap size is reported for a node

2018-08-17 Thread Dmitriy Pavlov
Yes, I agree.

To calculate free (currently -1) we need to know both total and used.
Used = sysPageSize * usedPages, but the total should be obtained from all
segments and chunks. So it would be a sizeable part of the overall memory
metrics migration (required by IGNITE-5583).
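
A rough sketch of that formula against the public API, assuming an Ignite
instance named ignite. Note that the configured page size does not include the
per-page system overhead Alex mentions, so this slightly underestimates the
real value:

// If getPageSize() returns 0, the default page size (4 KB) is in effect.
int pageSize = ignite.configuration().getDataStorageConfiguration().getPageSize();

for (DataRegionMetrics m : ignite.dataRegionMetrics()) {
    // used ~= usedPages * pageSize; the system page size is not exposed publicly.
    long usedBytes = m.getPhysicalMemoryPages() * (long)pageSize;

    System.out.printf("Region '%s': ~%d MB off-heap in use%n",
        m.getName(), usedBytes / (1024 * 1024));
}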

пт, 17 авг. 2018 г. в 23:55, Alex Plehanov :

> To be more precise we need to multiply it by page size with system overhead
> (systemPageSize). If we want to print only used offheap memory, this will
> solve the problem. If we want to print, for example, currently allocated
> (commited) offheap memory (if persistence is disabled for data region then
> offheap is allocated by chunks) we need to do more complex calculations and
> there is no public API for this now.
>
> 2018-08-17 17:19 GMT+03:00 Vladimir Ozerov :
>
> > Folks,
> >
> > We already have this:
> > >>> PageMemory [pages=6997377]
> >
> > Then we can multiply it by page size and get occupied memory. Am I wrong?
> >
> > On Fri, Aug 17, 2018 at 12:56 PM Dmitriy Pavlov 
> > wrote:
> >
> > > Hi Maxim,
> > >
> > > thank you for stepping in and for finding these issues. Yes, these
> > tickets
> > > are correct.
> > >
> > > I can move https://issues.apache.org/jira/browse/IGNITE-5583 to
> > unassigned
> > > if someone would like to implement this change. I will not have enough
> > time
> > > to complete it in 1 month (before 2.7 release).
> > >
> > > Sincerely,
> > > Dmitriy Pavlov
> > >
> > > пт, 17 авг. 2018 г. в 11:04, Maxim Muzafarov :
> > >
> > > > Igniters,
> > > >
> > > > Suppose, Dmitry is talking about IGNITE-5583 [1] - `Switch non-heap
> > > memory
> > > > metrics
> > > > to new page memory semantics` and related previous disscustions to it
> > > [4].
> > > >
> > > > Also we have some additional improvements to CacheMetrics:
> > > > IGNITE-5490 [2] - `Implement replacement for obsolete
> > > > CacheMetrics#getOffHeapAllocatedSize`
> > > > IGNITE-5765 [3] - `CacheMetrics interface cleanup, documentation and
> > > fixes`
> > > >
> > > >
> > > > [1] https://issues.apache.org/jira/browse/IGNITE-5583
> > > > [2] https://issues.apache.org/jira/browse/IGNITE-5490
> > > > [3] https://issues.apache.org/jira/browse/IGNITE-5765
> > > > [4]
> > > >
> > > >
> > > http://apache-ignite-developers.2346864.n4.nabble.
> > com/Negative-non-heap-memory-maximum-td17990.html
> > > >
> > > > On Fri, 17 Aug 2018 at 10:14 Dmitriy Pavlov 
> > > wrote:
> > > >
> > > > > Hi Igniters,
> > > > >
> > > > > It is not an easy fix, so I'm not sure it is possible to do in 2.7.
> > > > >
> > > > > Offheap size is not reported by VM (it returns -1). To implement it
> > we
> > > > need
> > > > > totally migrate off-heap memory metrics to durable memory data.
> > > > >
> > > > > I think this issue was reported and I'll find the duplicate.
> > > > >
> > > > > Sincerely,
> > > > > Dmitriy Pavlov
> > > > >
> > > > > пт, 17 авг. 2018 г. в 6:10, Denis Magda :
> > > > >
> > > > > > Yes, it was at the end of my wordy email :)
> > > > > > https://issues.apache.org/jira/browse/IGNITE-9305
> > > > > >
> > > > > > --
> > > > > > Denis
> > > > > >
> > > > > > On Thu, Aug 16, 2018 at 11:03 PM Dmitriy Setrakyan <
> > > > > dsetrak...@apache.org>
> > > > > > wrote:
> > > > > >
> > > > > > > Is there a blocker ticket for 2.7?
> > > > > > >
> > > > > > > On Thu, Aug 16, 2018, 19:59 Denis Magda 
> > wrote:
> > > > > > >
> > > > > > > > Igniters,
> > > > > > > >
> > > > > > > > Was troubleshooting an Ignite deployment today and couldn't
> > find
> > > > out
> > > > > > from
> > > > > > > > the logs what was the actual off-heap space used.
> > > > > > > >
> > > > > > > > Those were the given memory resoures (Ignite 2.6):
> > > > > > > >
> > > > > > > > [2018-08-16 15:07:49,961][INFO ][main][GridDiscoveryManager]
> > > > Topology
> > > > > > > > snapshot [ver=1, servers=1, clients=0, CPUs=64,
> > *offheap=30.0GB*,
> > > > > > > > heap=24.0GB]
> > > > > > > >
> > > > > > > > And that weird stuff was reported by the node (pay attention
> to
> > > the
> > > > > > last
> > > > > > > > line):
> > > > > > > >
> > > > > > > > [2018-08-16 15:45:50,211][INFO
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > > ][grid-timeout-worker-#135%cluster_31-Dec-2017%][
> > IgniteKernal%cluster_31-Dec-2017]
> > > > > > > > Metrics for local node (to disable set 'metricsLogFrequency'
> to
> > > 0)
> > > > > > > > ^-- Node [id=c033026e, name=cluster_31-Dec-2017,
> > > > > > uptime=00:38:00.257]
> > > > > > > > ^-- H/N/C [hosts=1, nodes=1, CPUs=64]
> > > > > > > > ^-- CPU [cur=0.03%, avg=5.54%, GC=0%]
> > > > > > > > ^-- PageMemory [pages=6997377]
> > > > > > > > ^-- Heap [used=9706MB, free=61.18%, comm=22384MB]
> > > > > > > >* ^-- Non heap [used=144MB, free=-1%, comm=148MB] - this
> > line
> > > is
> > > > > > > always
> > > > > > > > the same!*
> > > > > > > >
> > > > > > > > Had to change the code by using
> > > dataRegion.getPhysicalMemoryPages()
> > > > > to
> > > > > > > find
> > > > > > > > out that actual 

Re: Wrong off-heap size is reported for a node

2018-08-17 Thread Alex Plehanov
To be more precise, we need to multiply it by the page size including system
overhead (systemPageSize). If we want to print only the used offheap memory,
this will solve the problem. If we want to print, for example, the currently
allocated (committed) offheap memory (if persistence is disabled for a data
region, then offheap is allocated in chunks), we need to do more complex
calculations, and there is no public API for this now.

2018-08-17 17:19 GMT+03:00 Vladimir Ozerov :

> Folks,
>
> We already have this:
> >>> PageMemory [pages=6997377]
>
> Then we can multiply it by page size and get occupied memory. Am I wrong?
>
> On Fri, Aug 17, 2018 at 12:56 PM Dmitriy Pavlov 
> wrote:
>
> > Hi Maxim,
> >
> > thank you for stepping in and for finding these issues. Yes, these
> tickets
> > are correct.
> >
> > I can move https://issues.apache.org/jira/browse/IGNITE-5583 to
> unassigned
> > if someone would like to implement this change. I will not have enough
> time
> > to complete it in 1 month (before 2.7 release).
> >
> > Sincerely,
> > Dmitriy Pavlov
> >
> > пт, 17 авг. 2018 г. в 11:04, Maxim Muzafarov :
> >
> > > Igniters,
> > >
> > > Suppose, Dmitry is talking about IGNITE-5583 [1] - `Switch non-heap
> > memory
> > > metrics
> > > to new page memory semantics` and related previous disscustions to it
> > [4].
> > >
> > > Also we have some additional improvements to CacheMetrics:
> > > IGNITE-5490 [2] - `Implement replacement for obsolete
> > > CacheMetrics#getOffHeapAllocatedSize`
> > > IGNITE-5765 [3] - `CacheMetrics interface cleanup, documentation and
> > fixes`
> > >
> > >
> > > [1] https://issues.apache.org/jira/browse/IGNITE-5583
> > > [2] https://issues.apache.org/jira/browse/IGNITE-5490
> > > [3] https://issues.apache.org/jira/browse/IGNITE-5765
> > > [4]
> > >
> > >
> > http://apache-ignite-developers.2346864.n4.nabble.
> com/Negative-non-heap-memory-maximum-td17990.html
> > >
> > > On Fri, 17 Aug 2018 at 10:14 Dmitriy Pavlov 
> > wrote:
> > >
> > > > Hi Igniters,
> > > >
> > > > It is not an easy fix, so I'm not sure it is possible to do in 2.7.
> > > >
> > > > Offheap size is not reported by VM (it returns -1). To implement it
> we
> > > need
> > > > totally migrate off-heap memory metrics to durable memory data.
> > > >
> > > > I think this issue was reported and I'll find the duplicate.
> > > >
> > > > Sincerely,
> > > > Dmitriy Pavlov
> > > >
> > > > пт, 17 авг. 2018 г. в 6:10, Denis Magda :
> > > >
> > > > > Yes, it was at the end of my wordy email :)
> > > > > https://issues.apache.org/jira/browse/IGNITE-9305
> > > > >
> > > > > --
> > > > > Denis
> > > > >
> > > > > On Thu, Aug 16, 2018 at 11:03 PM Dmitriy Setrakyan <
> > > > dsetrak...@apache.org>
> > > > > wrote:
> > > > >
> > > > > > Is there a blocker ticket for 2.7?
> > > > > >
> > > > > > On Thu, Aug 16, 2018, 19:59 Denis Magda 
> wrote:
> > > > > >
> > > > > > > Igniters,
> > > > > > >
> > > > > > > Was troubleshooting an Ignite deployment today and couldn't
> find
> > > out
> > > > > from
> > > > > > > the logs what was the actual off-heap space used.
> > > > > > >
> > > > > > > Those were the given memory resoures (Ignite 2.6):
> > > > > > >
> > > > > > > [2018-08-16 15:07:49,961][INFO ][main][GridDiscoveryManager]
> > > Topology
> > > > > > > snapshot [ver=1, servers=1, clients=0, CPUs=64,
> *offheap=30.0GB*,
> > > > > > > heap=24.0GB]
> > > > > > >
> > > > > > > And that weird stuff was reported by the node (pay attention to
> > the
> > > > > last
> > > > > > > line):
> > > > > > >
> > > > > > > [2018-08-16 15:45:50,211][INFO
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> > ][grid-timeout-worker-#135%cluster_31-Dec-2017%][
> IgniteKernal%cluster_31-Dec-2017]
> > > > > > > Metrics for local node (to disable set 'metricsLogFrequency' to
> > 0)
> > > > > > > ^-- Node [id=c033026e, name=cluster_31-Dec-2017,
> > > > > uptime=00:38:00.257]
> > > > > > > ^-- H/N/C [hosts=1, nodes=1, CPUs=64]
> > > > > > > ^-- CPU [cur=0.03%, avg=5.54%, GC=0%]
> > > > > > > ^-- PageMemory [pages=6997377]
> > > > > > > ^-- Heap [used=9706MB, free=61.18%, comm=22384MB]
> > > > > > >* ^-- Non heap [used=144MB, free=-1%, comm=148MB] - this
> line
> > is
> > > > > > always
> > > > > > > the same!*
> > > > > > >
> > > > > > > Had to change the code by using
> > dataRegion.getPhysicalMemoryPages()
> > > > to
> > > > > > find
> > > > > > > out that actual off-heap usage size was
> > > > > > > >>> Physical Memory Size: 28651614208 => 27324 MB, 26 GB
> > > > > > >
> > > > > > > Let's fix this issue in 2.7, I proposed a new format. Please
> > review
> > > > and
> > > > > > > share your thoughts:
> > > > > > > https://issues.apache.org/jira/browse/IGNITE-9305
> > > > > > >
> > > > > > > --
> > > > > > > Denis
> > > > > > >
> > > > > >
> > > > >
> > > >
> > > --
> > > --
> > > Maxim Muzafarov
> > >
> >
>


Re: Continuous queries and MVCC

2018-08-17 Thread Dmitriy Setrakyan
On Fri, Aug 17, 2018 at 3:18 AM, Vladimir Ozerov 
wrote:

> Folks,
>
> As you know we are developing multi-version concurrency control for Ignite
> caches, which is essentially new mode for transactional cache with snapshot
> semantics. One of the most important applications of this mode would be
> fully transactional SQL The question is how to implement continuous query
> semantics with MVCC. Several interesting things here.
>
> 1) *Event ordering*. Currently we guarantee order of updates for specific
> key. But for fair transactional mode user may want to get transactional
> ordering instead of key ordering. If there are several dependent
> transactions, I may want to receive their updates in order. E.g.:
> TX1: transfer(A -> B)
> TX2: transfer(B -> C)
> TX3: transfer(C -> D)
>
> If user receives update on key D (TX3), should we guarantee that he already
> received updates for all keys in TX1 and TX2? My opinion is that we'd
> better to leave things as is and guarantee only per-key ordering. Only one
> reason - implementation complexity.
>

I would preserve at least the current guarantees. However, many users have
expressed interest in receiving events in the proper order. If it is possible
to do, I would definitely do it, or at least provide a configuration option
for such behavior.


>
> 2) *Initial query*. We implemented it so that user can get some initial
> data snapshot and then start receiving events. Without MVCC we have no
> guarantees of visibility. E.g. if key is updated from V1 to V2, it is
> possible to see V2 in initial query and in event. With MVCC it is now
> technically possible to query data on certain snapshot and then receive
> only events happened after this snapshot. So that we never see V2 twice. Do
> you think we this feature will be interesting for our users?
>

Absolutely! It will make CQ usage much cleaner and less error-prone.


Re: Continuous queries and MVCC

2018-08-17 Thread Valentin Kulichenko
Vladimir,

1. Continuous queries are asynchronous in the general case, so I don't think
it's even possible to provide transactional ordering, especially for
distributed transactions. I would leave the current guarantees as-is.

2. This one might be pretty useful. If it's not very hard to do, let's do
this improvement.

-Val

On Fri, Aug 17, 2018 at 7:10 AM Nikolay Izhikov  wrote:

> Hello, Vladimir.
>
> > 1) *Event ordering*.
>
> I vote to keep things as is.
>
> > 2) *Initial query*
> > With MVCC it is now
> > technically possible to query data on certain snapshot and then receive
> > only events happened after this snapshot
>
> I think it a usefull extension for current implementation.
>
> В Пт, 17/08/2018 в 13:18 +0300, Vladimir Ozerov пишет:
> > Folks,
> >
> > As you know we are developing multi-version concurrency control for
> Ignite
> > caches, which is essentially new mode for transactional cache with
> snapshot
> > semantics. One of the most important applications of this mode would be
> > fully transactional SQL The question is how to implement continuous query
> > semantics with MVCC. Several interesting things here.
> >
> > 1) *Event ordering*. Currently we guarantee order of updates for specific
> > key. But for fair transactional mode user may want to get transactional
> > ordering instead of key ordering. If there are several dependent
> > transactions, I may want to receive their updates in order. E.g.:
> > TX1: transfer(A -> B)
> > TX2: transfer(B -> C)
> > TX3: transfer(C -> D)
> >
> > If user receives update on key D (TX3), should we guarantee that he
> already
> > received updates for all keys in TX1 and TX2? My opinion is that we'd
> > better to leave things as is and guarantee only per-key ordering. Only
> one
> > reason - implementation complexity.
> >
> > 2) *Initial query*. We implemented it so that user can get some initial
> > data snapshot and then start receiving events. Without MVCC we have no
> > guarantees of visibility. E.g. if key is updated from V1 to V2, it is
> > possible to see V2 in initial query and in event. With MVCC it is now
> > technically possible to query data on certain snapshot and then receive
> > only events happened after this snapshot. So that we never see V2 twice.
> Do
> > you think we this feature will be interesting for our users?
> >
> > Please share your thoughts.
> >
> > Vladimir.


[GitHub] ignite pull request #4557: IGNITE-9278 Fix TensorFlow integration: Can't fin...

2018-08-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4557


---


[GitHub] ignite pull request #4568: IGNITE-9315: Eviction meta from near cache may ap...

2018-08-17 Thread slukyano
GitHub user slukyano opened a pull request:

https://github.com/apache/ignite/pull/4568

IGNITE-9315: Eviction meta from near cache may appear in DHT entries.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-9315

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4568.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4568






---


[GitHub] ignite pull request #4477: IGNITE-9169: Cache (Deadlock Detection) suite han...

2018-08-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4477


---


[GitHub] ignite pull request #4567: IGNITE-9220 Uncomment tests from internal test su...

2018-08-17 Thread alamar
GitHub user alamar opened a pull request:

https://github.com/apache/ignite/pull/4567

IGNITE-9220 Uncomment tests from internal test suites.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-9220

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4567.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4567


commit 5dda8f1578ebbdffec60b6a71668923063648fc8
Author: Ilya Kasnacheev 
Date:   2018-08-17T16:59:18Z

IGNITE-9220 Uncomment tests from internal test suites.




---


[jira] [Created] (IGNITE-9315) Eviction meta from near cache may appear in DHT entries

2018-08-17 Thread Stanislav Lukyanov (JIRA)
Stanislav Lukyanov created IGNITE-9315:
--

 Summary: Eviction meta from near cache may appear in DHT entries
 Key: IGNITE-9315
 URL: https://issues.apache.org/jira/browse/IGNITE-9315
 Project: Ignite
  Issue Type: Bug
Reporter: Stanislav Lukyanov
Assignee: Stanislav Lukyanov


There are cases when eviction metadata from near cache's eviction policy may be 
added to a DHT cache entry, leading to unpredictable behavior.

The bug is caused by coding mistakes where a DHT entry is being updated, but 
the entry is touched via the near cache's context.
E.g. in GridDhtPartitionDemander::preloadEntry:
{code}
GridCacheContext cctx = grp.sharedGroup() ? ctx.cacheContext(entry.cacheId()) : 
grp.singleCacheContext();

cached = cctx.dhtCache().entryEx(entry.key());
//cctx = cached.context();

if (log.isDebugEnabled())
log.debug("Rebalancing key [key=" + entry.key() + ", part=" + p + ", node=" 
+ from.id() + ']');

if (preloadPred == null || preloadPred.apply(entry)) {
if (cached.initialValue(
entry.value(),
entry.version(),
entry.ttl(),
entry.expireTime(),
true,
topVer,
cctx.isDrEnabled() ? DR_PRELOAD : DR_NONE,
false
)) {
cctx.evicts().touch(cached, topVer); // Start tracking.
{code}

Here `cached` is always a DHT entry, but `cctx` can be the near cache's context.
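
A possible direction for a fix (a sketch only, based on the commented-out line 
above; not necessarily the final change):

{code}
// Use the DHT entry's own context for eviction tracking instead of the
// (possibly near) cctx resolved from the rebalanced entry.
GridCacheContext dhtCtx = cached.context();

dhtCtx.evicts().touch(cached, topVer); // Start tracking.
{code}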



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #4281: IGNITE-8859 - Open upper Java verison bounds

2018-08-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4281


---


Re: Metrics for MVCC caches

2018-08-17 Thread Andrey Mashenkov
Hi.

There is another similar case: what if the update actually changes nothing?

BEGIN;
INSERT INTO person(id, name) VALUES(1, 'ivan');
UPDATE person SET name = 'ivan' WHERE id = 1;
COMMIT;

Should we take both operations into account or not?

I'd say the current implementation is the correct one, as
the TX will make only one update in both cases.


On Fri, Aug 17, 2018 at 4:54 PM Павлухин Иван  wrote:

> Hi team,
>
> I need you opinion again. I wrote some tests for metrics and felt a
> confusion with metrics count for the same key update during transaction.
> Imagine following case:
>
> BEGIN;
> INSERT INTO person(id, name) VALUES(1, 'ivan');
> UPDATE person SET name = 'vanya' WHERE id = 1;
> COMMIT;
>
> My intuition said me that puts count should be increased by 2 for such
> case. But current (non-mvcc) implementation increments by 1. I am not sure
> that it is proper semantics. On the other hand, I think that it is better
> to preserve existing behavior instead of making exception for mvcc. It will
> cause even more confusion.
>
> So, if nobody has objections I will go with existing semantics.
>


-- 
Best regards,
Andrey V. Mashenkov


[GitHub] asfgit closed pull request #1: MTCGA-002 Build trigger timeout.

2018-08-17 Thread GitBox
asfgit closed pull request #1: MTCGA-002 Build trigger timeout.
URL: https://github.com/apache/ignite-teamcity-bot/pull/1
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/conf/ChainAtServer.java b/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/conf/ChainAtServer.java
index 4ceedec..7b75223 100644
--- a/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/conf/ChainAtServer.java
+++ b/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/conf/ChainAtServer.java
@@ -17,6 +17,7 @@
 
 package org.apache.ignite.ci.conf;
 
+import java.util.Objects;
 import javax.annotation.Nonnull;
 import javax.annotation.Nullable;
 
@@ -24,39 +25,54 @@
  * Created by Дмитрий on 09.11.2017.
  */
 public class ChainAtServer {
-/** Server ID to access config files within helper */
+/** Server ID to access config files within helper. */
 @Nullable public String serverId;
 
-/** Suite identifier by teamcity identification for root chain */
+/** Suite identifier by teamcity identification for root chain. */
 @Nonnull public String suiteId;
 
 /** Automatic build triggering. */
 @Nullable private Boolean triggerBuild;
 
+/** Automatic build triggering quiet period in minutes. */
+@Nullable private Integer triggerBuildQuietPeriod;
+
+/** {@inheritDoc} */
 @Override public boolean equals(Object o) {
 if (this == o)
 return true;
 if (o == null || getClass() != o.getClass())
 return false;
-
 ChainAtServer server = (ChainAtServer)o;
-
-if (serverId != null ? !serverId.equals(server.serverId) : 
server.serverId != null)
-return false;
-return suiteId.equals(server.suiteId);
+return Objects.equals(serverId, server.serverId) &&
+Objects.equals(suiteId, server.suiteId) &&
+Objects.equals(triggerBuild, server.triggerBuild) &&
+Objects.equals(triggerBuildQuietPeriod, 
server.triggerBuildQuietPeriod);
 }
 
+/** {@inheritDoc} */
 @Override public int hashCode() {
-int result = serverId != null ? serverId.hashCode() : 0;
-result = 31 * result + suiteId.hashCode();
-return result;
+return Objects.hash(serverId, suiteId, triggerBuild, 
triggerBuildQuietPeriod);
 }
 
+/**
+ * @return Server ID to access config files within helper.
+ */
 @Nullable public String getServerId() {
 return serverId;
 }
 
+/**
+ * @return {@code True} If automatic build triggering enabled.
+ */
 @Nonnull public boolean isTriggerBuild() {
 return triggerBuild == null ? false : triggerBuild;
 }
+
+/**
+ * @return Quiet period in minutes between triggering builds or zero if 
period is not set and should be ignored.
+ */
+@Nonnull public int getTriggerBuildQuietPeriod() {
+return triggerBuildQuietPeriod == null ? 0 : triggerBuildQuietPeriod;
+}
 }
diff --git a/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/conf/ChainAtServerTracked.java b/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/conf/ChainAtServerTracked.java
index c08c588..d0dbea6 100644
--- a/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/conf/ChainAtServerTracked.java
+++ b/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/conf/ChainAtServerTracked.java
@@ -17,6 +17,7 @@
 
 package org.apache.ignite.ci.conf;
 
+import java.util.Objects;
 import javax.annotation.Nonnull;
 
 import static com.google.common.base.Preconditions.checkState;
@@ -44,4 +45,19 @@
 
 return branchForRest;
 }
+
+@Override public boolean equals(Object o) {
+if (this == o)
+return true;
+if (o == null || getClass() != o.getClass())
+return false;
+if (!super.equals(o))
+return false;
+ChainAtServerTracked tracked = (ChainAtServerTracked)o;
+return Objects.equals(branchForRest, tracked.branchForRest);
+}
+
+@Override public int hashCode() {
+return Objects.hash(super.hashCode(), branchForRest);
+}
 }
diff --git a/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/jobs/CheckQueueJob.java b/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/jobs/CheckQueueJob.java
index 3065e2d..7eb7e7a 100644
--- a/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/jobs/CheckQueueJob.java
+++ b/ignite-tc-helper-web/src/main/java/org/apache/ignite/ci/jobs/CheckQueueJob.java
@@ -22,6 +22,7 @@
 import java.util.List;
 import java.util.Map;
 import java.util.concurrent.ExecutionException;
+import java.util.concurrent.TimeUnit;
 import org.apache.ignite.ci.HelperConfig;
 import 

Re: StandaloneWalRecordsIterator: support iteration from custom pointer

2018-08-17 Thread Ivan Rakov

Dmitriy,

Thanks a lot for stepping in! Your help is appreciated.
I've looked through your changes and left a comment in JIRA issue.

Best Regards,
Ivan Rakov

On 16.08.2018 23:56, Dmitriy Govorukhin wrote:

Ivan,

I implemented this issue, please review my changes.
https://reviews.ignite.apache.org/ignite/review/IGNT-CR-729

On Thu, Aug 16, 2018 at 3:09 PM Ivan Rakov  wrote:


Thanks for your comments!
I've created a ticket: https://issues.apache.org/jira/browse/IGNITE-9294

Best Regards,
Ivan Rakov

On 15.08.2018 21:31, Dmitriy Setrakyan wrote:

Agree, this should be a great performance boost.

On Wed, Aug 15, 2018 at 10:17 AM, Dmitriy Pavlov 
wrote:


Hi Ivan,

I agree that providing WAL pointer is the better option. Initially,
Standalone WAL iterator was developed for debugging utility, so a set of
files was perfectly OK.

Sincerely,
Dmitriy Pavlov

ср, 15 авг. 2018 г. в 20:06, Ivan Rakov :


Igniters,

Right now we are developing a WAL shipping process for our internal
purposes, and we use StandaloneWalRecordsIterator to read WAL files from a
custom destination. We have bumped into a problem - the iterator can be
constructed from a set of files and dirs, but there's no option to pass a WAL
pointer to the iterator factory class to start the iteration with. It can be
worked around (by filtering out all records prior to the needed pointer), but
I think it would be handy to add such an option to the
IgniteWalIteratorFactory API.

What do you think?

--
Best Regards,
Ivan Rakov








[GitHub] ignite pull request #4566: IGNITE-3653 Fix P2P class loading for remote filt...

2018-08-17 Thread dmekhanikov
GitHub user dmekhanikov opened a pull request:

https://github.com/apache/ignite/pull/4566

IGNITE-3653 Fix P2P class loading for remote filter and filter factory in 
CQs.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-3653

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4566.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4566


commit 48ce5e66e199c1a3c2593d40e0cb7a8e61d279c3
Author: Denis Mekhanikov 
Date:   2018-08-17T12:18:01Z

IGNITE-3653 Add tests.

commit 31a81d8d3d3cdccb58b8d98f11d3a55fdfe281d6
Author: Denis Mekhanikov 
Date:   2018-08-17T14:32:04Z

IGNITE-3653 Call p2pUnmarshal() before registering CQ handler on data 
exchange.




---


[GitHub] ignite pull request #4543: IGNITE-9268 Check interrupted flag during locking...

2018-08-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4543


---


Re: Wrong off-heap size is reported for a node

2018-08-17 Thread Vladimir Ozerov
Folks,

We already have this:
>>> PageMemory [pages=6997377]

Then we can multiply it by page size and get occupied memory. Am I wrong?

On Fri, Aug 17, 2018 at 12:56 PM Dmitriy Pavlov 
wrote:

> Hi Maxim,
>
> thank you for stepping in and for finding these issues. Yes, these tickets
> are correct.
>
> I can move https://issues.apache.org/jira/browse/IGNITE-5583 to unassigned
> if someone would like to implement this change. I will not have enough time
> to complete it in 1 month (before 2.7 release).
>
> Sincerely,
> Dmitriy Pavlov
>
> пт, 17 авг. 2018 г. в 11:04, Maxim Muzafarov :
>
> > Igniters,
> >
> > Suppose, Dmitry is talking about IGNITE-5583 [1] - `Switch non-heap
> memory
> > metrics
> > to new page memory semantics` and related previous disscustions to it
> [4].
> >
> > Also we have some additional improvements to CacheMetrics:
> > IGNITE-5490 [2] - `Implement replacement for obsolete
> > CacheMetrics#getOffHeapAllocatedSize`
> > IGNITE-5765 [3] - `CacheMetrics interface cleanup, documentation and
> fixes`
> >
> >
> > [1] https://issues.apache.org/jira/browse/IGNITE-5583
> > [2] https://issues.apache.org/jira/browse/IGNITE-5490
> > [3] https://issues.apache.org/jira/browse/IGNITE-5765
> > [4]
> >
> >
> http://apache-ignite-developers.2346864.n4.nabble.com/Negative-non-heap-memory-maximum-td17990.html
> >
> > On Fri, 17 Aug 2018 at 10:14 Dmitriy Pavlov 
> wrote:
> >
> > > Hi Igniters,
> > >
> > > It is not an easy fix, so I'm not sure it is possible to do in 2.7.
> > >
> > > Offheap size is not reported by VM (it returns -1). To implement it we
> > need
> > > totally migrate off-heap memory metrics to durable memory data.
> > >
> > > I think this issue was reported and I'll find the duplicate.
> > >
> > > Sincerely,
> > > Dmitriy Pavlov
> > >
> > > пт, 17 авг. 2018 г. в 6:10, Denis Magda :
> > >
> > > > Yes, it was at the end of my wordy email :)
> > > > https://issues.apache.org/jira/browse/IGNITE-9305
> > > >
> > > > --
> > > > Denis
> > > >
> > > > On Thu, Aug 16, 2018 at 11:03 PM Dmitriy Setrakyan <
> > > dsetrak...@apache.org>
> > > > wrote:
> > > >
> > > > > Is there a blocker ticket for 2.7?
> > > > >
> > > > > On Thu, Aug 16, 2018, 19:59 Denis Magda  wrote:
> > > > >
> > > > > > Igniters,
> > > > > >
> > > > > > Was troubleshooting an Ignite deployment today and couldn't find
> > out
> > > > from
> > > > > > the logs what was the actual off-heap space used.
> > > > > >
> > > > > > Those were the given memory resoures (Ignite 2.6):
> > > > > >
> > > > > > [2018-08-16 15:07:49,961][INFO ][main][GridDiscoveryManager]
> > Topology
> > > > > > snapshot [ver=1, servers=1, clients=0, CPUs=64, *offheap=30.0GB*,
> > > > > > heap=24.0GB]
> > > > > >
> > > > > > And that weird stuff was reported by the node (pay attention to
> the
> > > > last
> > > > > > line):
> > > > > >
> > > > > > [2018-08-16 15:45:50,211][INFO
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> ][grid-timeout-worker-#135%cluster_31-Dec-2017%][IgniteKernal%cluster_31-Dec-2017]
> > > > > > Metrics for local node (to disable set 'metricsLogFrequency' to
> 0)
> > > > > > ^-- Node [id=c033026e, name=cluster_31-Dec-2017,
> > > > uptime=00:38:00.257]
> > > > > > ^-- H/N/C [hosts=1, nodes=1, CPUs=64]
> > > > > > ^-- CPU [cur=0.03%, avg=5.54%, GC=0%]
> > > > > > ^-- PageMemory [pages=6997377]
> > > > > > ^-- Heap [used=9706MB, free=61.18%, comm=22384MB]
> > > > > >* ^-- Non heap [used=144MB, free=-1%, comm=148MB] - this line
> is
> > > > > always
> > > > > > the same!*
> > > > > >
> > > > > > Had to change the code by using
> dataRegion.getPhysicalMemoryPages()
> > > to
> > > > > find
> > > > > > out that actual off-heap usage size was
> > > > > > >>> Physical Memory Size: 28651614208 => 27324 MB, 26 GB
> > > > > >
> > > > > > Let's fix this issue in 2.7, I proposed a new format. Please
> review
> > > and
> > > > > > share your thoughts:
> > > > > > https://issues.apache.org/jira/browse/IGNITE-9305
> > > > > >
> > > > > > --
> > > > > > Denis
> > > > > >
> > > > >
> > > >
> > >
> > --
> > --
> > Maxim Muzafarov
> >
>


[GitHub] ignite pull request #4445: IGNITE-7701 SQL system view for node attributes

2018-08-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4445


---


Re: Continuous queries and MVCC

2018-08-17 Thread Nikolay Izhikov
Hello, Vladimir.

> 1) *Event ordering*. 

I vote to keep things as is.

> 2) *Initial query*
> With MVCC it is now
> technically possible to query data on certain snapshot and then receive
> only events happened after this snapshot

I think it's a useful extension of the current implementation.

В Пт, 17/08/2018 в 13:18 +0300, Vladimir Ozerov пишет:
> Folks,
> 
> As you know we are developing multi-version concurrency control for Ignite
> caches, which is essentially new mode for transactional cache with snapshot
> semantics. One of the most important applications of this mode would be
> fully transactional SQL The question is how to implement continuous query
> semantics with MVCC. Several interesting things here.
> 
> 1) *Event ordering*. Currently we guarantee order of updates for specific
> key. But for fair transactional mode user may want to get transactional
> ordering instead of key ordering. If there are several dependent
> transactions, I may want to receive their updates in order. E.g.:
> TX1: transfer(A -> B)
> TX2: transfer(B -> C)
> TX3: transfer(C -> D)
> 
> If user receives update on key D (TX3), should we guarantee that he already
> received updates for all keys in TX1 and TX2? My opinion is that we'd
> better to leave things as is and guarantee only per-key ordering. Only one
> reason - implementation complexity.
> 
> 2) *Initial query*. We implemented it so that user can get some initial
> data snapshot and then start receiving events. Without MVCC we have no
> guarantees of visibility. E.g. if key is updated from V1 to V2, it is
> possible to see V2 in initial query and in event. With MVCC it is now
> technically possible to query data on certain snapshot and then receive
> only events happened after this snapshot. So that we never see V2 twice. Do
> you think we this feature will be interesting for our users?
> 
> Please share your thoughts.
> 
> Vladimir.

signature.asc
Description: This is a digitally signed message part


Re: Metrics for MVCC caches

2018-08-17 Thread Павлухин Иван
Hi team,

I need your opinion again. I wrote some tests for metrics and got confused
about the metrics count for an update of the same key during a transaction.
Imagine the following case:

BEGIN;
INSERT INTO person(id, name) VALUES(1, 'ivan');
UPDATE person SET name = 'vanya' WHERE id = 1;
COMMIT;

My intuition told me that the puts count should be increased by 2 in such a
case. But the current (non-MVCC) implementation increments it by 1. I am not
sure that is the proper semantics. On the other hand, I think it is better to
preserve the existing behavior instead of making an exception for MVCC. It
would cause even more confusion.

So, if nobody has objections, I will go with the existing semantics.
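
For context, a rough key-value analogue of the SQL case above (hypothetical
handles; a TRANSACTIONAL cache with statistics enabled is assumed):

IgniteCache<Integer, String> person = ignite.cache("person");

long putsBefore = person.metrics().getCachePuts();

try (Transaction tx = ignite.transactions().txStart()) {
    person.put(1, "ivan");   // INSERT analogue.
    person.put(1, "vanya");  // UPDATE of the same key inside the same tx.

    tx.commit();
}

// The question in this thread: should this delta be 1 (current behavior) or 2?
long delta = person.metrics().getCachePuts() - putsBefore;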


Re: QueryDetailMetrics for cache-less SQL queries

2018-08-17 Thread Vladimir Ozerov
A query is not executed on a specific cache. It is executed on many caches.

On Fri, Aug 17, 2018 at 6:10 AM Dmitriy Setrakyan 
wrote:

> But internally the SQL query still runs on some cache, no? What happens to
> the metrics accumulated on that cache?
>
> D.
>
> On Thu, Aug 16, 2018, 18:51 Alexey Kuznetsov 
> wrote:
>
> > Dima,
> >
> > "cache-less" means that SQL executed directly on SQL engine.
> >
> > In previous version of Ignite we execute queries via cache:
> >
> > ignite.cache("Some cache").sqlFieldsQuery("select ... from ..")
> >
> > In current Ignite we can execute query directly without using cache as
> > "gateway".
> >
> > And if we execute query directly, metrics not update.
> >
> >
> >
> >
> > On Fri, Aug 17, 2018 at 4:21 AM Dmitriy Setrakyan  >
> > wrote:
> >
> > > Evgeny, what is a "cache-less" SQL query?
> > >
> > > D.
> > >
> > > On Thu, Aug 16, 2018 at 6:36 AM, Evgenii Zhuravlev <
> > > e.zhuravlev...@gmail.com
> > > > wrote:
> > >
> > > > Hi Igniters,
> > > >
> > > > I've started to work on adding QueryDetailMetrics for cache-less SQL
> > > > queries(issue https://issues.apache.org/jira/browse/IGNITE-6677) and
> > > found
> > > > that it's required to change API. I don't think that adding methods
> > like
> > > > queryDetailMetrics, resetQueryDetailMetrics, as in IgniteCache to
> > Ignite
> > > > class is a good idea. So, I see 2 possible solutions here:
> > > >
> > > > 1. Create IgniteMetrics(ignite.metrics()) and move metrics from
> > > > Ignite(like dataRegionMetrics and dataStorageMetrics) and add a new
> > > > metric "queryDetailMetrics" to it. Of course, old methods will be
> > > > deprecated.
> > > >
> > > > 2. Finally create Ignite.sql() API, which was already discussed here:
> > > > http://apache-ignite-developers.2346864.n4.nabble.
> > > > com/Rethink-native-SQL-API-in-Apache-Ignite-2-0-td14335.html
> > > > and place "queryDetailMetrics" metric there. Here is the ticket for
> > this
> > > > change: https://issues.apache.org/jira/browse/IGNITE-4701
> > > >
> > > > Personally, I think that the second solution looks better in this
> case,
> > > > however, moving dataRegionMetrics and dataStorageMetrics to
> > > > ignite.matrics() is still a good idea - IMO, Ignite class is not the
> > > right
> > > > place for them - we shouldn't change our main API class so often.
> > > >
> > > > What do you think?
> > > >
> > > > Thank you,
> > > > Evgenii
> > > >
> > >
> > > --
> > > Alexey Kuznetsov
> > >
> > >
> >
>
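
For reference, a sketch of the cache-gateway path where detail metrics are
collected today (hypothetical cache and table names; CacheConfiguration#
setQueryDetailMetricsSize must be positive for these metrics to be recorded):

IgniteCache<?, ?> cache = ignite.cache("Person");

cache.query(new SqlFieldsQuery("select name from Person")).getAll();

for (QueryDetailMetrics m : cache.queryDetailMetrics())
    System.out.println(m.getQuery() + " -> executions: " + m.getExecutions());

A query executed without such a cache gateway (the "cache-less" case discussed
above) currently updates no detail metrics, which is the gap IGNITE-6677 targets.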


[GitHub] ignite pull request #4564: IGNITE-9307 Added completing eviction future if n...

2018-08-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4564


---


[jira] [Created] (IGNITE-9314) MVCC TX: Datastreamer operations

2018-08-17 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-9314:


 Summary: MVCC TX: Datastreamer operations
 Key: IGNITE-9314
 URL: https://issues.apache.org/jira/browse/IGNITE-9314
 Project: Ignite
  Issue Type: Task
Reporter: Igor Seliverstov


Need to change the DataStreamer semantics (make it transactional).

Currently clients can see the DataStreamer's partial writes, so two subsequent 
selects run in the scope of one transaction at load time may return different 
results.

Related thread:
http://apache-ignite-developers.2346864.n4.nabble.com/MVCC-and-IgniteDataStreamer-td32340.html
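
For illustration, a sketch of today's loading path (hypothetical cache name)
where the described visibility issue can be observed:

try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("person")) {
    for (int i = 0; i < 1_000_000; i++)
        streamer.addData(i, "name-" + i);

    // close()/flush() guarantee delivery, but a concurrent transaction may
    // already see part of the streamed data before the whole load completes.
}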



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9313) ML TF integration: killed user script or chief processes didn't restart workers

2018-08-17 Thread Stepan Pilschikov (JIRA)
Stepan Pilschikov created IGNITE-9313:
-

 Summary:  ML TF integration: killed user script or chief processes 
didn't restart workers
 Key: IGNITE-9313
 URL: https://issues.apache.org/jira/browse/IGNITE-9313
 Project: Ignite
  Issue Type: Bug
  Components: ml
Affects Versions: 2.7
Reporter: Stepan Pilschikov


 * Run cluster
 * Filling caches with data
 * Running python script
 * Killing user script or chief

Expected: 
- chief and user script processes shutdown and run again on same node (-)
- rerun user script (+)

Actual:
- chief or user script shutting down and run again
- all workers still running and didn't restart



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #4565: IGNITE-9296 should not wait if walWriter is cance...

2018-08-17 Thread macrergate
GitHub user macrergate opened a pull request:

https://github.com/apache/ignite/pull/4565

IGNITE-9296 should not wait if walWriter is cancelled



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-9296

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4565.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4565


commit 652a656d83c583dc76fcf2d51c0ffd9789ecd721
Author: Sergey Kosarev 
Date:   2018-08-17T11:35:14Z

IGNITE-9296 should not wait if walWriter is cancelled




---


[jira] [Created] (IGNITE-9312) Remove unnecessary @SuppressWarnings annotation

2018-08-17 Thread Maxim Muzafarov (JIRA)
Maxim Muzafarov created IGNITE-9312:
---

 Summary: Remove unnecessary @SuppressWarnings annotation
 Key: IGNITE-9312
 URL: https://issues.apache.org/jira/browse/IGNITE-9312
 Project: Ignite
  Issue Type: Bug
Reporter: Maxim Muzafarov
Assignee: Maxim Muzafarov


The new `Code Inspections` profile can be found at 
\idea\ignite_inspections.xml.

We will need to fix all methods with an unnecessary {{@SuppressWarnings}} 
annotation according to this inspection profile.





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9311) Add missing @Override annotation

2018-08-17 Thread Maxim Muzafarov (JIRA)
Maxim Muzafarov created IGNITE-9311:
---

 Summary: Add missing @Override annotation
 Key: IGNITE-9311
 URL: https://issues.apache.org/jira/browse/IGNITE-9311
 Project: Ignite
  Issue Type: Bug
Reporter: Maxim Muzafarov
Assignee: Maxim Muzafarov


The new `Code Inspections` profile can be found at 
{{\idea\ignite_inspections.xml}}.

We will need to fix all methods with a missing {{@Override}} annotation according 
to this inspection profile.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9310) SQL: throw exception when missing cache is attempted to be created inside a transaction

2018-08-17 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-9310:
---

 Summary: SQL: throw exception when missing cache is attempted to 
be created inside a transaction
 Key: IGNITE-9310
 URL: https://issues.apache.org/jira/browse/IGNITE-9310
 Project: Ignite
  Issue Type: Task
  Components: mvcc, sql
Affects Versions: 2.6
Reporter: Vladimir Ozerov
 Fix For: 2.7


See 
{{org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing#prepareStatementAndCaches}}.
This method might be called inside a transaction (in both MVCC and non-MVCC 
modes). If we do not have any protective mechanics (this needs to be checked), then this 
call may lead to cache creation on a client, which in turn will wait for all 
TXes to finish, including the current one, leading to a deadlock.
 # Create tests confirming the problem
 # If the hang is reproduced, add a check for an ongoing transaction and throw an 
exception



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: CacheStore and ignite.close

2018-08-17 Thread Ilya Kasnacheev
Hello!

It turns out there's some problem with interference between checkpointing and WAL.

close(true) cancels current checkpoint, and upon restart some data is
missing from the cache.

We are investigating this.

Regards,

-- 
Ilya Kasnacheev

2018-08-16 10:34 GMT+03:00 Alexey Goncharuk :

> Ilya,
>
> Can you please clarify what you mean by 'abandons cache store operations'?
> Does it mean that a read-through/write-through op is omitted, but the
> public API method returns without an error? If it is so, then this is a
> bug. If a public API method finishes with an exception when read-through is
> omitted, then I think it is a correct behavior.
>
> ср, 15 авг. 2018 г. в 16:38, Ilya Kasnacheev :
>
> > Hello!
> >
> > I'm working on https://issues.apache.org/jira/browse/IGNITE-9093 test
> fix
> > Turns out, IgniteDbPutGetWithCacheStoreTest.testReadThrough fails
> > sporadically because the default ignite.close() is close(cancel=true),
> and
> > it seems that it abandons cache store operations. So not all data is
> > read/written from cache store.
> >
> > Is it really so? Is it considered safe?
> >
> > I have a pull request on this topic:
> > https://github.com/apache/ignite/pull/4545
> >
> > Regards,
> >
> > --
> > Ilya Kasnacheev
> >
>
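
A small sketch of the two shutdown paths being compared; whether close() may
legitimately abandon store operations is exactly the open question here:

// Default close: per this thread, equivalent to stopping with cancel == true,
// so in-flight cache store operations may be abandoned.
ignite.close();

// Graceful alternative: wait for ongoing operations to finish first.
Ignition.stop(ignite.name(), false);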


Continuous queries and MVCC

2018-08-17 Thread Vladimir Ozerov
Folks,

As you know, we are developing multi-version concurrency control for Ignite
caches, which is essentially a new mode for transactional caches with snapshot
semantics. One of the most important applications of this mode would be fully
transactional SQL. The question is how to implement continuous query semantics
with MVCC. There are several interesting things here.

1) *Event ordering*. Currently we guarantee order of updates for specific
key. But for fair transactional mode user may want to get transactional
ordering instead of key ordering. If there are several dependent
transactions, I may want to receive their updates in order. E.g.:
TX1: transfer(A -> B)
TX2: transfer(B -> C)
TX3: transfer(C -> D)

If a user receives an update on key D (TX3), should we guarantee that they have
already received updates for all keys in TX1 and TX2? My opinion is that we'd
better leave things as is and guarantee only per-key ordering, for one reason
only - implementation complexity.

2) *Initial query*. We implemented it so that a user can get an initial data
snapshot and then start receiving events. Without MVCC we have no visibility
guarantees. E.g. if a key is updated from V1 to V2, it is possible to see V2
both in the initial query and in an event. With MVCC it is now technically
possible to query data on a certain snapshot and then receive only events that
happened after this snapshot, so that we never see V2 twice. Do you think this
feature will be interesting for our users?

Please share your thoughts.

Vladimir.
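
For reference, a sketch of how the initial query plus listener combination
looks today (hypothetical cache handle; there is no snapshot guarantee between
the two parts, which is what item 2 would add):

ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

qry.setInitialQuery(new ScanQuery<Integer, String>());

qry.setLocalListener(evts -> {
    for (CacheEntryEvent<? extends Integer, ? extends String> e : evts)
        System.out.println("update: " + e.getKey() + " -> " + e.getValue());
});

try (QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry)) {
    for (Cache.Entry<Integer, String> e : cur)  // Initial snapshot.
        System.out.println("initial: " + e.getKey() + " -> " + e.getValue());

    // Keep the cursor open to continue receiving update events.
}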


Re: Wrong off-heap size is reported for a node

2018-08-17 Thread Dmitriy Pavlov
Hi Maxim,

thank you for stepping in and for finding these issues. Yes, these tickets
are correct.

I can move https://issues.apache.org/jira/browse/IGNITE-5583 to unassigned
if someone would like to implement this change. I will not have enough time
to complete it in 1 month (before 2.7 release).

Sincerely,
Dmitriy Pavlov

пт, 17 авг. 2018 г. в 11:04, Maxim Muzafarov :

> Igniters,
>
> Suppose, Dmitry is talking about IGNITE-5583 [1] - `Switch non-heap memory
> metrics
> to new page memory semantics` and related previous disscustions to it [4].
>
> Also we have some additional improvements to CacheMetrics:
> IGNITE-5490 [2] - `Implement replacement for obsolete
> CacheMetrics#getOffHeapAllocatedSize`
> IGNITE-5765 [3] - `CacheMetrics interface cleanup, documentation and fixes`
>
>
> [1] https://issues.apache.org/jira/browse/IGNITE-5583
> [2] https://issues.apache.org/jira/browse/IGNITE-5490
> [3] https://issues.apache.org/jira/browse/IGNITE-5765
> [4]
>
> http://apache-ignite-developers.2346864.n4.nabble.com/Negative-non-heap-memory-maximum-td17990.html
>
> On Fri, 17 Aug 2018 at 10:14 Dmitriy Pavlov  wrote:
>
> > Hi Igniters,
> >
> > It is not an easy fix, so I'm not sure it is possible to do in 2.7.
> >
> > Offheap size is not reported by VM (it returns -1). To implement it we
> need
> > totally migrate off-heap memory metrics to durable memory data.
> >
> > I think this issue was reported and I'll find the duplicate.
> >
> > Sincerely,
> > Dmitriy Pavlov
> >
> > Fri, Aug 17, 2018 at 6:10, Denis Magda :
> >
> > > Yes, it was at the end of my wordy email :)
> > > https://issues.apache.org/jira/browse/IGNITE-9305
> > >
> > > --
> > > Denis
> > >
> > > On Thu, Aug 16, 2018 at 11:03 PM Dmitriy Setrakyan <
> > dsetrak...@apache.org>
> > > wrote:
> > >
> > > > Is there a blocker ticket for 2.7?
> > > >
> > > > On Thu, Aug 16, 2018, 19:59 Denis Magda  wrote:
> > > >
> > > > > Igniters,
> > > > >
> > > > > Was troubleshooting an Ignite deployment today and couldn't find out
> > > > > from the logs what the actual off-heap space used was.
> > > > >
> > > > > Those were the given memory resources (Ignite 2.6):
> > > > >
> > > > > [2018-08-16 15:07:49,961][INFO ][main][GridDiscoveryManager]
> Topology
> > > > > snapshot [ver=1, servers=1, clients=0, CPUs=64, *offheap=30.0GB*,
> > > > > heap=24.0GB]
> > > > >
> > > > > And that weird stuff was reported by the node (pay attention to the
> > > last
> > > > > line):
> > > > >
> > > > > [2018-08-16 15:45:50,211][INFO
> > > > >
> > > > >
> > > >
> > >
> >
> ][grid-timeout-worker-#135%cluster_31-Dec-2017%][IgniteKernal%cluster_31-Dec-2017]
> > > > > Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> > > > > ^-- Node [id=c033026e, name=cluster_31-Dec-2017,
> > > uptime=00:38:00.257]
> > > > > ^-- H/N/C [hosts=1, nodes=1, CPUs=64]
> > > > > ^-- CPU [cur=0.03%, avg=5.54%, GC=0%]
> > > > > ^-- PageMemory [pages=6997377]
> > > > > ^-- Heap [used=9706MB, free=61.18%, comm=22384MB]
> > > > >* ^-- Non heap [used=144MB, free=-1%, comm=148MB] - this line is
> > > > always
> > > > > the same!*
> > > > >
> > > > > Had to change the code by using dataRegion.getPhysicalMemoryPages()
> > to
> > > > find
> > > > > out that actual off-heap usage size was
> > > > > >>> Physical Memory Size: 28651614208 => 27324 MB, 26 GB
> > > > >
> > > > > Let's fix this issue in 2.7, I proposed a new format. Please review
> > and
> > > > > share your thoughts:
> > > > > https://issues.apache.org/jira/browse/IGNITE-9305
> > > > >
> > > > > --
> > > > > Denis
> > > > >
> > > >
> > >
> >
> --
> --
> Maxim Muzafarov
>
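For reference, a hedged sketch of the per-region workaround Denis describes above, assuming data region metrics are enabled via DataRegionConfiguration#setMetricsEnabled(true); the MB conversion and the 4096-byte page-size fallback are illustrative only:

{code}
import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class OffHeapUsageSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        int pageSize = ignite.configuration().getDataStorageConfiguration().getPageSize();

        if (pageSize == 0)
            pageSize = 4096; // Fall back to the usual default when not set explicitly.

        for (DataRegionMetrics m : ignite.dataRegionMetrics()) {
            long bytes = m.getPhysicalMemoryPages() * (long)pageSize;

            System.out.println("Region " + m.getName() + ": physicalPages=" +
                m.getPhysicalMemoryPages() + ", ~" + bytes / (1024 * 1024) + " MB");
        }
    }
}
{code}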


[GitHub] ignite pull request #4507: Ignite 8926 9200

2018-08-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4507


---


[jira] [Created] (IGNITE-9309) LocalNodeMovingPartitionsCount metric may be calculated incorrectly due to processFullPartitionUpdate

2018-08-17 Thread Maxim Muzafarov (JIRA)
Maxim Muzafarov created IGNITE-9309:
---

 Summary: LocalNodeMovingPartitionsCount metric may be calculated 
incorrectly due to processFullPartitionUpdate
 Key: IGNITE-9309
 URL: https://issues.apache.org/jira/browse/IGNITE-9309
 Project: Ignite
  Issue Type: Bug
Reporter: Maxim Muzafarov


[~qvad] has found an incorrect {{LocalNodeMovingPartitionsCount}} metric 
calculation on client node {{JOIN\LEFT}}. A full issue reproducer is absent.

Probable scenario:
{code}
Repeat 10 times:
1. stop node
2. clean lfs
3. add stopped node (trigger rebalance)
4. 3 times: start 2 clients, wait for topology snapshot, close clients
5. for each cache group check JMX metrics LocalNodeMovingPartitionsCount (like 
waitForFinishRebalance())
{code}
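A hedged sketch of the check from step 5 (a waitForFinishRebalance-style helper) using plain JMX; the attribute name comes from the description above, while everything else, including scanning all registered MBeans instead of a specific ObjectName pattern, is an assumption:

{code}
import java.lang.management.ManagementFactory;

import javax.management.MBeanServer;
import javax.management.ObjectName;

public class WaitForRebalance {
    /** Polls LocalNodeMovingPartitionsCount on all MBeans until it drops to zero. */
    public static void waitForFinishRebalance(long timeoutMs) throws InterruptedException {
        MBeanServer srv = ManagementFactory.getPlatformMBeanServer();
        long deadline = System.currentTimeMillis() + timeoutMs;

        while (System.currentTimeMillis() < deadline) {
            long moving = 0;

            for (ObjectName name : srv.queryNames(null, null)) {
                try {
                    moving += ((Number)srv.getAttribute(name, "LocalNodeMovingPartitionsCount")).longValue();
                }
                catch (Exception ignored) {
                    // MBean does not expose the attribute - skip it.
                }
            }

            if (moving == 0)
                return; // No partitions are moving - rebalance is considered finished.

            Thread.sleep(500);
        }

        throw new IllegalStateException("Rebalance did not finish in " + timeoutMs + " ms");
    }
}
{code}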

Whole discussion and all configuration details can be found in comments of 
[IGNITE-7165|https://issues.apache.org/jira/browse/IGNITE-7165].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #4539: IGNITE-9264 Lost partitions raised twice if node ...

2018-08-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4539


---


Re: Wrong off-heap size is reported for a node

2018-08-17 Thread Maxim Muzafarov
Igniters,

I suppose Dmitry is talking about IGNITE-5583 [1] - `Switch non-heap memory
metrics to new page memory semantics` and the related previous discussions [4].

Also we have some additional improvements to CacheMetrics:
IGNITE-5490 [2] - `Implement replacement for obsolete
CacheMetrics#getOffHeapAllocatedSize`
IGNITE-5765 [3] - `CacheMetrics interface cleanup, documentation and fixes`


[1] https://issues.apache.org/jira/browse/IGNITE-5583
[2] https://issues.apache.org/jira/browse/IGNITE-5490
[3] https://issues.apache.org/jira/browse/IGNITE-5765
[4]
http://apache-ignite-developers.2346864.n4.nabble.com/Negative-non-heap-memory-maximum-td17990.html

On Fri, 17 Aug 2018 at 10:14 Dmitriy Pavlov  wrote:

> Hi Igniters,
>
> It is not an easy fix, so I'm not sure it is possible to do in 2.7.
>
> Offheap size is not reported by the VM (it returns -1). To implement it we
> need to fully migrate off-heap memory metrics to durable memory data.
>
> I think this issue was reported and I'll find the duplicate.
>
> Sincerely,
> Dmitriy Pavlov
>
> Fri, Aug 17, 2018 at 6:10, Denis Magda :
>
> > Yes, it was at the end of my wordy email :)
> > https://issues.apache.org/jira/browse/IGNITE-9305
> >
> > --
> > Denis
> >
> > On Thu, Aug 16, 2018 at 11:03 PM Dmitriy Setrakyan <
> dsetrak...@apache.org>
> > wrote:
> >
> > > Is there a blocker ticket for 2.7?
> > >
> > > On Thu, Aug 16, 2018, 19:59 Denis Magda  wrote:
> > >
> > > > Igniters,
> > > >
> > > > Was troubleshooting an Ignite deployment today and couldn't find out
> > > > from the logs what the actual off-heap space used was.
> > > >
> > > > Those were the given memory resources (Ignite 2.6):
> > > >
> > > > [2018-08-16 15:07:49,961][INFO ][main][GridDiscoveryManager] Topology
> > > > snapshot [ver=1, servers=1, clients=0, CPUs=64, *offheap=30.0GB*,
> > > > heap=24.0GB]
> > > >
> > > > And that weird stuff was reported by the node (pay attention to the
> > last
> > > > line):
> > > >
> > > > [2018-08-16 15:45:50,211][INFO
> > > >
> > > >
> > >
> >
> ][grid-timeout-worker-#135%cluster_31-Dec-2017%][IgniteKernal%cluster_31-Dec-2017]
> > > > Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> > > > ^-- Node [id=c033026e, name=cluster_31-Dec-2017,
> > uptime=00:38:00.257]
> > > > ^-- H/N/C [hosts=1, nodes=1, CPUs=64]
> > > > ^-- CPU [cur=0.03%, avg=5.54%, GC=0%]
> > > > ^-- PageMemory [pages=6997377]
> > > > ^-- Heap [used=9706MB, free=61.18%, comm=22384MB]
> > > >* ^-- Non heap [used=144MB, free=-1%, comm=148MB] - this line is
> > > always
> > > > the same!*
> > > >
> > > > Had to change the code by using dataRegion.getPhysicalMemoryPages()
> to
> > > find
> > > > out that actual off-heap usage size was
> > > > >>> Physical Memory Size: 28651614208 => 27324 MB, 26 GB
> > > >
> > > > Let's fix this issue in 2.7, I proposed a new format. Please review
> and
> > > > share your thoughts:
> > > > https://issues.apache.org/jira/browse/IGNITE-9305
> > > >
> > > > --
> > > > Denis
> > > >
> > >
> >
>
-- 
--
Maxim Muzafarov


[GitHub] ignite pull request #1479: IGNITE-4533 GridDhtPartitionsExchangeFuture store...

2018-08-17 Thread ezhuravl
Github user ezhuravl closed the pull request at:

https://github.com/apache/ignite/pull/1479


---


[GitHub] ignite pull request #4564: IGNITE-9307 Added completing eviction future if n...

2018-08-17 Thread akalash
GitHub user akalash opened a pull request:

https://github.com/apache/ignite/pull/4564

IGNITE-9307 Added completing eviction future if node was stopped



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-9307

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4564.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4564


commit ecdf551d9b08b0d5eeb71a0a343eb500f2010983
Author: Anton Kalashnikov 
Date:   2018-08-17T07:57:00Z

IGNITE-9307 Added completing eviction future if node was stopped




---


Re: Spark SQL Table Name Resolution

2018-08-17 Thread Stuart Macdonald
Hi Dmitriy, thanks - that’s done now,

Stuart.

On 16 Aug 2018, at 22:23, Dmitriy Setrakyan  wrote:

Stuart, can you please move the ticket into PATCH_AVAILABLE state? You need
to click "Submit Patch" button in Jira.

D.

On Wed, Aug 15, 2018 at 10:22 AM, Stuart Macdonald 
wrote:

> Here's the initial pull request for this issue, please review and let me
> know your feedback. I had to combine the two approaches to enable this to
> work for both standard .read() where we can add the schema option, and
> catalog-based selects where we use schemaName.tableName. Happy to discuss
> on a call if this isn't clear.
>
> https://github.com/apache/ignite/pull/4551
>
> On Thu, Aug 9, 2018 at 2:32 PM, Stuart Macdonald 
> wrote:
>
> > Hi Nikolay, yes would be happy to - will likely be early next week. I’ll
> > go with the approach of adding a new optional field to the Spark data
> > source provider unless there are any objections.
> >
> > Stuart.
> >
> > > On 9 Aug 2018, at 14:20, Nikolay Izhikov  wrote:
> > >
> > > Stuart, do you want to work on this ticket?
> > >
> > >> On Tue, 07/08/2018 at 11:13 -0700, Stuart Macdonald wrote:
> > >> Thanks Val, here’s the ticket:
> > >>
> > >> https://issues.apache.org/jira/projects/IGNITE/issues/IGNITE-9228
> > >>
> > >> (Thanks for correcting my terminology - I work mostly with the
> > traditional
> > >> CacheConfiguration interface where I believe each cache occupies its
> own
> > >> schema.)
> > >>
> > >> Stuart.
> > >>
> > >> On 7 Aug 2018, at 18:34, Valentin Kulichenko <
> > valentin.kuliche...@gmail.com>
> > >> wrote:
> > >>
> > >> Stuart,
> > >>
> > >> Two tables can have the same name only if they are located in different
> > >> schemas. That said, adding schema name support makes sense to me for sure.
> > >> We can implement this using either a separate SCHEMA_NAME parameter, or
> > >> something similar to what you suggested in option 3 but with the schema
> > >> name instead of the cache name.
> > >>
> > >> Please feel free to create a ticket.
> > >>
> > >> -Val
> > >>
> > >> On Tue, Aug 7, 2018 at 9:32 AM Stuart Macdonald 
> > wrote:
> > >>
> > >> Hello Igniters,
> > >>
> > >> The Ignite Spark SQL interface currently takes just “table name” as a
> > >> parameter which it uses to supply a Spark dataset with data from the
> > >> underlying Ignite SQL table with that name.
> > >>
> > >> To do this it loops through each cache and finds the first one with the
> > >> given table name [1]. This causes issues if there are multiple tables
> > >> registered in different caches with the same table name, as you can only
> > >> access one of those caches from Spark. Is the right thing to do here:
> > >>
> > >> 1. Simply not support such a scenario and note in the Spark documentation
> > >> that table names must be unique?
> > >> 2. Pass an extra parameter through the Ignite Spark data source which
> > >> optionally specifies the cache name?
> > >> 3. Support namespacing in the existing table name parameter, i.e.
> > >> “cacheName.tableName”?
> > >>
> > >> Thanks,
> > >>
> > >> Stuart.
> > >>
> > >> [1]
> > >> https://github.com/apache/ignite/blob/ca973ad99c6112160a305df05be9458e29f88307/modules/spark/src/main/scala/org/apache/ignite/spark/impl/package.scala#L119
> >
>
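A hedged sketch, in the Spark Java API, of how the combined approach discussed above might look from the caller's side; the "schema" option name reflects the proposal in this thread rather than released API, and the option keys, config path and table/schema names are illustrative assumptions:

{code}
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class IgniteSparkSchemaReadSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("ignite-schema-read")
            .master("local[*]")
            .getOrCreate();

        // Option 2 from the thread: pass the schema explicitly next to the table name.
        Dataset<Row> persons = spark.read()
            .format("ignite")
            .option("config", "ignite-config.xml") // illustrative config path
            .option("table", "PERSON")             // table name, as today
            .option("schema", "PUBLIC")            // proposed optional schema name
            .load();

        persons.show();

        spark.stop();
    }
}
{code}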


Re: Wrong off-heap size is reported for a node

2018-08-17 Thread Dmitriy Pavlov
Hi Igniters,

It is not an easy fix, so I'm not sure it is possible to do in 2.7.

Offheap size is not reported by the VM (it returns -1). To implement it we need
to fully migrate off-heap memory metrics to durable memory data.

I think this issue was reported and I'll find the duplicate.

Sincerely,
Dmitriy Pavlov

Fri, Aug 17, 2018 at 6:10, Denis Magda :

> Yes, it was at the end of my wordy email :)
> https://issues.apache.org/jira/browse/IGNITE-9305
>
> --
> Denis
>
> On Thu, Aug 16, 2018 at 11:03 PM Dmitriy Setrakyan 
> wrote:
>
> > Is there a blocker ticket for 2.7?
> >
> > On Thu, Aug 16, 2018, 19:59 Denis Magda  wrote:
> >
> > > Igniters,
> > >
> > > Was troubleshooting an Ignite deployment today and couldn't find out
> > > from the logs what the actual off-heap space used was.
> > >
> > > Those were the given memory resources (Ignite 2.6):
> > >
> > > [2018-08-16 15:07:49,961][INFO ][main][GridDiscoveryManager] Topology
> > > snapshot [ver=1, servers=1, clients=0, CPUs=64, *offheap=30.0GB*,
> > > heap=24.0GB]
> > >
> > > And that weird stuff was reported by the node (pay attention to the
> last
> > > line):
> > >
> > > [2018-08-16 15:45:50,211][INFO
> > >
> > >
> >
> ][grid-timeout-worker-#135%cluster_31-Dec-2017%][IgniteKernal%cluster_31-Dec-2017]
> > > Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> > > ^-- Node [id=c033026e, name=cluster_31-Dec-2017,
> uptime=00:38:00.257]
> > > ^-- H/N/C [hosts=1, nodes=1, CPUs=64]
> > > ^-- CPU [cur=0.03%, avg=5.54%, GC=0%]
> > > ^-- PageMemory [pages=6997377]
> > > ^-- Heap [used=9706MB, free=61.18%, comm=22384MB]
> > >* ^-- Non heap [used=144MB, free=-1%, comm=148MB] - this line is
> > always
> > > the same!*
> > >
> > > Had to change the code by using dataRegion.getPhysicalMemoryPages() to
> > find
> > > out that actual off-heap usage size was
> > > >>> Physical Memory Size: 28651614208 => 27324 MB, 26 GB
> > >
> > > Let's fix this issue in 2.7, I proposed a new format. Please review and
> > > share your thoughts:
> > > https://issues.apache.org/jira/browse/IGNITE-9305
> > >
> > > --
> > > Denis
> > >
> >
>


[jira] [Created] (IGNITE-9307) Node hangs when it is stopped during eviction

2018-08-17 Thread Anton Kalashnikov (JIRA)
Anton Kalashnikov created IGNITE-9307:
-

 Summary: Node hangs when it is stopped during eviction
 Key: IGNITE-9307
 URL: https://issues.apache.org/jira/browse/IGNITE-9307
 Project: Ignite
  Issue Type: Bug
Reporter: Anton Kalashnikov
Assignee: Anton Kalashnikov


{noformat}
"main" #1 prio=5 os_prio=0 tid=0x7f0ae800e000 nid=0x2e26 waiting on 
condition [0x7f0aef33]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.PartitionsEvictManager$GroupEvictionContext.awaitFinish(PartitionsEvictManager.java:362)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.PartitionsEvictManager$GroupEvictionContext$$Lambda$203/1143143890.accept(Unknown
 Source)
at 
java.util.concurrent.ConcurrentHashMap.forEach(ConcurrentHashMap.java:1597)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.PartitionsEvictManager$GroupEvictionContext.awaitFinishAll(PartitionsEvictManager.java:348)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.PartitionsEvictManager$GroupEvictionContext.access$100(PartitionsEvictManager.java:265)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.PartitionsEvictManager.onCacheGroupStopped(PartitionsEvictManager.java:103)
at 
org.apache.ignite.internal.processors.cache.CacheGroupContext.stopGroup(CacheGroupContext.java:725)
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.stopCacheGroup(GridCacheProcessor.java:2366)
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.stopCacheGroup(GridCacheProcessor.java:2359)
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.stopCaches(GridCacheProcessor.java:959)
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.stop(GridCacheProcessor.java:924)
at org.apache.ignite.internal.IgniteKernal.stop0(IgniteKernal.java:2206)
at org.apache.ignite.internal.IgniteKernal.stop(IgniteKernal.java:2081)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.stop0(IgnitionEx.java:2594)
- locked <0xf39b8770> (a 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.stop(IgnitionEx.java:2557)
at org.apache.ignite.internal.IgnitionEx.stop(IgnitionEx.java:374)
at org.apache.ignite.Ignition.stop(Ignition.java:225)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.stopGrid(GridAbstractTest.java:1153)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.stopAllGrids(GridAbstractTest.java:1196)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.stopAllGrids(GridAbstractTest.java:1174)
at 
org.apache.ignite.internal.processors.query.h2.IgniteSqlQueryMinMaxTest.afterTest(IgniteSqlQueryMinMaxTest.java:55)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.tearDown(GridAbstractTest.java:1766)
at 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest.tearDown(GridCommonAbstractTest.java:503)
at junit.framework.TestCase.runBare(TestCase.java:146)
at junit.framework.TestResult$1.protect(TestResult.java:122)
at junit.framework.TestResult.runProtected(TestResult.java:142)
at junit.framework.TestResult.run(TestResult.java:125)
at junit.framework.TestCase.run(TestCase.java:129)
at junit.framework.TestSuite.runTest(TestSuite.java:255)
at junit.framework.TestSuite.run(TestSuite.java:250)
at junit.framework.TestSuite.runTest(TestSuite.java:255)
at junit.framework.TestSuite.run(TestSuite.java:250)
at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 

[MTCGA]: new failures in builds [1470304, 1263230] needs to be handled

2018-08-17 Thread dpavlov . tasks
Hi Ignite Developer,

I am MTCGA.Bot, and I've detected some issue on TeamCity to be addressed. I 
hope you can help.

 *Recently contributed test failed in master Cache.CacheTest 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=7475554670073100554=%3Cdefault%3E=testDetails

 *Recently contributed test failed in master Cache.CacheTestAsync 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=6942811312536089681=%3Cdefault%3E=testDetails
 No changes in build

 *Recently contributed test failed in master Cache.CacheTestSsl 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=7983529646724841532=%3Cdefault%3E=testDetails
 No changes in build

- If your changes can led to this failure(s), please create issue with 
label MakeTeamCityGreenAgain and assign it to you.
-- If you have fix, please set ticket to PA state and write to dev list 
fix is ready 
-- For case fix will require some time please mute test and set label 
Muted_Test to issue 
- If you know which change caused failure please contact change author 
directly
- If you don't know which change caused failure please send message to 
dev list to find out
Should you have any questions please contact dpav...@apache.org or write to 
dev.list 
Best Regards,
MTCGA.Bot 
Notification generated at Fri Aug 17 09:49:25 MSK 2018 


[jira] [Created] (IGNITE-9306) Web console: validation message change causes height jumps

2018-08-17 Thread Ilya Borisov (JIRA)
Ilya Borisov created IGNITE-9306:


 Summary: Web console: validation message change causes height jumps
 Key: IGNITE-9306
 URL: https://issues.apache.org/jira/browse/IGNITE-9306
 Project: Ignite
  Issue Type: Bug
  Components: wizards
Reporter: Ilya Borisov
Assignee: Ilya Borisov


When one field validation error changes to another, the field height briefly 
jumps up and down.

I think this is another case of ngAnimate causing trouble, so I'd first try to 
put an animation-disabling class on the error message element in an attempt to 
solve the issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)