Re: Reconsider default WAL mode: we need something between LOG_ONLY and FSYNC

2018-03-16 Thread Vladimir Ozerov
Same question. It would be very difficult to explain these two modes to
users. We should do our best to fix LOG_ONLY first. Without these
guarantees there is no reason to keep LOG_ONLY at all; a user could simply
use BACKGROUND with a high flush frequency. This is precisely how Cassandra
works.

p.1 - sounds like a bug
p.2 - sounds like a bug as well; hopefully it should not introduce a serious
performance hit unless we write too much data to WAL, which would mean that
we should work on its optimization (e.g. free-list update overhead, no
delta updates, etc.)
p.3 - sounds like a bug as well
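For reference, the WAL mode under discussion is selected via DataStorageConfiguration in Ignite 2.4. A minimal Spring XML sketch; note that LOG_ONLY_SAFE is only the name proposed in this thread, not an existing enum value:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <!-- Existing values in 2.4: NONE, BACKGROUND, LOG_ONLY, FSYNC
                 (plus the deprecated DEFAULT). LOG_ONLY_SAFE is the mode
                 proposed in this thread and does not exist yet. -->
            <property name="walMode" value="LOG_ONLY"/>
        </bean>
    </property>
</bean>
```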

On Fri, Mar 16, 2018 at 8:17 AM, Dmitriy Setrakyan 
wrote:

> Ivan,
>
> Is there a performance difference between LOG_ONLY and LOG_ONLY_SAFE?
>
> D.
>
> On Thu, Mar 15, 2018 at 4:23 PM, Ivan Rakov  wrote:
>
> > Igniters and especially Native Persistence experts,
> >
> > We decided to change the default WAL mode from DEFAULT (FSYNC) to LOG_ONLY
> > in the 2.4 release. That was a difficult decision: we sacrificed power
> > loss / OS crash tolerance, but gained a significant performance boost.
> > From my perspective, LOG_ONLY is the right choice, but it still misses
> > some critical features that a default mode should have.
> >
> > Let's focus on the exact guarantees each mode provides. The documentation
> > explains it in a pretty simple manner: LOG_ONLY - writes survive process
> > crash; FSYNC - writes survive power loss scenarios. I have to note that
> > the documentation doesn't describe what exactly can happen to a node in
> > LOG_ONLY mode in a power loss / OS crash scenario. Basically, there are
> > two possible negative outcomes: loss of the several last updates (exactly
> > what can happen in BACKGROUND mode in case of process crash) and total
> > storage corruption (not only the last updates, but all data will be
> > lost). I've done some quick research on this and came to the conclusion
> > that power loss in LOG_ONLY can lead to storage corruption. There are
> > several explanations for this:
> > 1) IgniteWriteAheadLogManager#fsync is kind of broken - it doesn't
> > perform an actual fsync unless the current WAL mode is FSYNC. We call
> > this method when we write a checkpoint marker to WAL. As long as the part
> > of WAL before the checkpoint marker may be unsynced, "physical" records
> > that are necessary for crash recovery in the "node stopped in the middle
> > of a checkpoint" scenario may be corrupted after power loss. If that
> > happens, we won't be able to recover internal data structures, which
> > means loss of all data.
> > 2) We don't fsync WAL archive files unless the current WAL mode is FSYNC.
> > The WAL archive can contain necessary "physical" records as well, which
> > leads us to the case described above.
> > 3) We do perform fsync on rollover (switch of the current WAL segment) in
> > all modes, but only when there's enough space to write the switch-segment
> > record - see FileWriteHandle#close. So there's a small chance that we'll
> > skip the fsync and bump into the same case.
> >
> > Enforcing fsync in these three situations will give us a guarantee that
> > LOG_ONLY will survive power loss scenarios with the possibility of losing
> > the several last updates. There can still be a total binary mess in the
> > last part of WAL, but as long as we perform a CRC check during WAL
> > replay, we'll detect the start of that mess. Extra fsyncs may cause
> > slight performance degradation - all writes will have to wait for one
> > fsync on every rollover and checkpoint. It's still much faster than an
> > fsync on every write to WAL - I expect a few percent (0-5%) drop compared
> > to the current LOG_ONLY. But degradation is degradation, and LOG_ONLY
> > mode without extra fsyncs makes sense as well - that's why we need to
> > introduce "LOG_ONLY + extra fsyncs" as a separate WAL mode. I think we
> > should make it the default - it provides a significant durability bonus
> > for the cost of one extra fsync per WAL segment written.
> >
> > To sum it up, I propose a new set of possible WAL modes:
> > NONE - both process crash and power loss can lead to corruption
> > BACKGROUND - process crash can lead to loss of the last updates, power
> > loss can lead to corruption
> > LOG_ONLY - writes survive process crash, power loss can lead to corruption
> > LOG_ONLY_SAFE (default) - writes survive process crash, power loss can
> > lead to loss of the last updates
> > FSYNC - writes survive both process crash and power loss
> >
> > Thoughts?
> >
> >
> > Best Regards,
> > Ivan Rakov
> >
> >
>
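The CRC-on-replay argument above can be illustrated with a self-contained sketch. This is a deliberate simplification using plain java.util.zip.CRC32, not Ignite's actual WAL record format: each record carries a checksum, so a torn write at the tail of the log is detected and replay can stop there instead of trusting garbage.

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

public class WalCrcSketch {
    // Frame a payload as [length][payload][crc32], loosely mimicking a WAL record.
    static byte[] frame(byte[] payload) {
        CRC32 crc = new CRC32();
        crc.update(payload);
        ByteBuffer buf = ByteBuffer.allocate(4 + payload.length + 8);
        buf.putInt(payload.length).put(payload).putLong(crc.getValue());
        return buf.array();
    }

    // Returns true if the framed record's stored checksum matches its payload.
    static boolean isValid(byte[] record) {
        ByteBuffer buf = ByteBuffer.wrap(record);
        byte[] payload = new byte[buf.getInt()];
        buf.get(payload);
        long stored = buf.getLong();
        CRC32 crc = new CRC32();
        crc.update(payload);
        return crc.getValue() == stored;
    }

    public static void main(String[] args) {
        byte[] rec = frame("update-42".getBytes());
        System.out.println("intact record valid: " + isValid(rec)); // true

        // Simulate a torn write: power loss garbles the tail of the record.
        rec[rec.length - 1] ^= 0xFF;
        System.out.println("torn record valid:   " + isValid(rec)); // false
    }
}
```

During replay, the first record that fails this check marks the "start of the mess"; everything before it is usable.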


Re: Reconsider default WAL mode: we need something between LOG_ONLY and FSYNC

2018-03-16 Thread Dmitry Pavlov
Folks, I do not expect any performance degradation here under high load
because we already do fsync on rollover. So the extra fsyncs will be almost
free. We should do this fsync without holding the CP lock, of course.

(see also point 3:
3) We do perform fsync on rollover (switch of the current WAL segment) in all
modes, but only when there's enough space to write the switch-segment record -
see FileWriteHandle#close. So there's a little chance that we'll skip
fsync and bump into the same case)

+1 from me for making LOG_ONLY safe in all cases
+1 for creating a new mode 'LOG_ONLY_SAFE'
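Dmitry's "fsync without holding the CP lock" point can be sketched with plain NIO. This is a simplification, not Ignite's actual FileWriteHandle; the ReentrantLock here is a hypothetical stand-in for the checkpoint lock:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.locks.ReentrantLock;

public class RolloverFsyncSketch {
    // Stand-in for the checkpoint lock mentioned in the thread.
    private static final ReentrantLock cpLock = new ReentrantLock();

    static void rollover(FileChannel ch, byte[] switchRecord) throws IOException {
        cpLock.lock();
        try {
            // Only the (cheap) buffered write happens under the lock...
            ch.write(ByteBuffer.wrap(switchRecord));
        } finally {
            cpLock.unlock();
        }
        // ...while the expensive fsync runs after the lock is released,
        // so concurrent checkpoint work is not blocked on disk latency.
        ch.force(true);
    }

    public static void main(String[] args) throws IOException {
        Path wal = Files.createTempFile("wal-segment", ".bin");
        try (FileChannel ch = FileChannel.open(wal, StandardOpenOption.WRITE)) {
            rollover(ch, new byte[]{1, 2, 3});
        }
        System.out.println("synced " + Files.size(wal) + " bytes");
        Files.delete(wal);
    }
}
```

The design point is simply that fsync latency is paid outside the critical section, which is why the extra fsync per segment is expected to be "almost free" for throughput.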

Fri, 16 Mar 2018 at 10:31, Vladimir Ozerov :


Re: Reconsider default WAL mode: we need something between LOG_ONLY and FSYNC

2018-03-16 Thread Ivan Rakov

Vladimir,

Unlike BACKGROUND, LOG_ONLY provides strict write guarantees unless a
power loss has happened.
It seems we need to measure the performance difference to decide whether
we need a separate WAL mode. If the difference turns out to be invisible,
we'll just fix these bugs without introducing a new mode; if it is
perceptible, we'll continue the discussion about introducing LOG_ONLY_SAFE.

Makes sense?

Best Regards,
Ivan Rakov

On 16.03.2018 10:45, Dmitry Pavlov wrote:


Re: Reconsider default WAL mode: we need something between LOG_ONLY and FSYNC

2018-03-16 Thread Ivan Rakov
It really depends on the hardware and workload pattern. I expect that
LOG_ONLY_SAFE will be either equal to LOG_ONLY or a few percent slower.
We'll answer this question for sure after implementing the three fixes
and benchmarking.
First of all, let's get an understanding of whether the extra durability
guarantees make sense. I think they do: power loss itself is a really
unlikely scenario, but LOG_ONLY_SAFE will make it much less risky. It will
guarantee the presence of all partitions after a power loss across the whole
data center, and it will also make rebalancing after a power loss on one
node much faster.


Best Regards,
Ivan Rakov

On 16.03.2018 8:17, Dmitriy Setrakyan wrote:

Ivan,

Is there a performance difference between LOG_ONLY and LOG_ONLY_SAFE?

D.







Re: Reconsider default WAL mode: we need something between LOG_ONLY and FSYNC

2018-03-16 Thread Dmitriy Setrakyan
On Fri, Mar 16, 2018 at 12:55 AM, Ivan Rakov  wrote:

> Vladimir,
>
> Unlike BACKGROUND, LOG_ONLY provides strict write guarantees unless power
> loss has happened.
> Seems like we need to measure the performance difference to decide whether
> we need a separate WAL mode. If it is invisible, we'll just fix these
> bugs without introducing a new mode; if it is perceptible, we'll
> continue the discussion about introducing LOG_ONLY_SAFE.
> Makes sense?
>

Yes, this sounds like the right approach.


Re: Reconsider default WAL mode: we need something between LOG_ONLY and FSYNC

2018-03-16 Thread Ivan Rakov

Ticket to track changes: https://issues.apache.org/jira/browse/IGNITE-7754

Best Regards,
Ivan Rakov

On 16.03.2018 10:58, Dmitriy Setrakyan wrote:


Yes, this sounds like the right approach.





Re: Timeline for support of compute functions by thin clients

2018-03-16 Thread Pavel Tupitsyn
> for what
Literally no one wants to have a JVM in their process and additional
dependencies :)
As many APIs as possible should be available in thin mode; that is the
point.

This thread was started by one of our users, after all :)

On Thu, Mar 15, 2018 at 10:25 PM, Denis Magda  wrote:

> Pavel,
>
> I just don't see a substantial reason why we need to support the
> compute APIs.
>
> As you properly mentioned, it's not easy to copy all the APIs and, again,
> for what. It's right that the thin client allows decoupling .NET from JVM,
> but its implementation won't be more performant than the regular client's
> one.
>
> So, personally, a thin client (.NET, Node.JS, Java, Python, etc.) is a
> lightweight connection to the cluster that supports classic client-server
> request-response operations. If someone needs more (compute, services,
> streaming, ML), then go for the regular client which is battle-tested and
> available for usage.
>
> --
> Denis
>
>
>
> On Wed, Mar 14, 2018 at 1:33 PM, Pavel Tupitsyn 
> wrote:
>
> > Hi Denis,
> >
> > > There are no any plans for that level of support
> > Why do you think so?
> > We already have ScanQuery with filter in .NET Thin Client, which involves
> > remote code execution on server nodes.
> > It is quite similar to Compute.Broadcast and such.
> >
> > Thanks,
> > Pavel
> >
> >
> > On Wed, Mar 14, 2018 at 11:32 PM, Denis Magda  wrote:
> >
> > > Raymond,
> > >
> > > Then I would suggest you keep using the regular .NET client, which
> > > supports and is optimized for computations. Is there any reason why you
> > > can't use the regular one?
> > >
> > > --
> > > Denis
> > >
> > > On Wed, Mar 14, 2018 at 12:53 PM, Raymond Wilson <
> > > raymond_wil...@trimble.com
> > > > wrote:
> > >
> > > > Hi Denis,
> > > >
> > > > We are using Ignite.Net and are planning to use 2.4 + .Net Core + thin
> > > > client support to enable lightweight containerisable services that
> > > > interact with the main Ignite compute grid.
> > > >
> > > > These work flows are less about Get/Put style semantics, and more
> about
> > > > using grid compute.
> > > >
> > > > Eg: Here's an example where a client context asks a remote context to
> > > > render
> > > > a bitmap tile in an ICompute:
> > > >
> > > > public Bitmap Execute(TileRenderRequestArgument arg)
> > > > {
> > > >     IComputeFunc func = new TileRenderRequestComputeFunc();
> > > >
> > > >     return _ignite.GetCluster().ForRemotes().GetCompute().Apply(func, arg);
> > > > }
> > > >
> > > > In this example, the calling context could be a lightweight Kestrel
> > > > web service endpoint delegating rendering to a remote service.
> > > >
> > > > Thanks,
> > > > Raymond.
> > > >
> > > > -Original Message-
> > > > From: Denis Magda [mailto:dma...@apache.org]
> > > > Sent: Thursday, March 15, 2018 8:31 AM
> > > > To: dev@ignite.apache.org
> > > > Subject: Re: Timeline for support of compute functions by thin
> clients
> > > >
> > > > Hi Raymond,
> > > >
> > > > There are no plans for that level of support. The thin clients are
> > > > targeted at classic client-server processing use cases, where a client
> > > > requests data from a server, does something with it locally, and
> > > > potentially writes changes back to the server. ICache and SQL fall
> > > > under this category.
> > > >
> > > > Are you intending to use the .NET thin client or a different one?
> > > >
> > > > --
> > > > Denis
> > > >
> > > > On Wed, Mar 14, 2018 at 12:25 PM, Raymond Wilson <
> > > > raymond_wil...@trimble.com
> > > > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > >
> > > > >
> > > > > The thin client implementation in Ignite 2.4 only covers a subset
> of
> > > > > the ICache interface.
> > > > >
> > > > >
> > > > >
> > > > > When will we see thin client support for compute, messaging etc?
> > > > >
> > > > >
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Raymond.
> > > > >
> > > >
> > >
> >
>


Re: Timeline for support of compute functions by thin clients

2018-03-16 Thread Vladimir Ozerov
Denis,

From the client's perspective, any compute task is also request-response. This
doesn't distinguish compute from any other API in any way. There is no problem
adding closures, tasks, services, etc. What is really difficult is
components requiring non-trivial thread interaction and complex request
workflows, e.g. streaming, the COPY command, continuous queries, and events.

On Thu, Mar 15, 2018 at 10:25 PM, Denis Magda  wrote:

> Pavel,
>
> I just don't see a substantial reason why we need to support the
> compute APIs.
>
> As you properly mentioned, it's not easy to copy all the APIs and, again,
> for what. It's right that the thin client allows decoupling .NET from JVM,
> but its implementation won't be more performant than the regular client's
> one.
>
> So, personally, a thin client (.NET, Node.JS, Java, Python, etc.) is a
> lightweight connection to the cluster that supports classic client-server
> request-response operations. If someone needs more (compute, services,
> streaming, ML), then go for the regular client which is battle-tested and
> available for usage.
>
> --
> Denis
>
>
>
> On Wed, Mar 14, 2018 at 1:33 PM, Pavel Tupitsyn 
> wrote:
>
> > Hi Denis,
> >
> > > There are no any plans for that level of support
> > Why do you think so?
> > We already have ScanQuery with filter in .NET Thin Client, which involves
> > remote code execution on server nodes.
> > It is quite similar to Compute.Broadcast and such.
> >
> > Thanks,
> > Pavel
> >
> >
> > On Wed, Mar 14, 2018 at 11:32 PM, Denis Magda  wrote:
> >
> > > Raymond,
> > >
> > > Then I would suggest you keep using the regular .NET client that
> supports
> > > and optimized for computations. Is there any reason why you can't use
> the
> > > regular one?
> > >
> > > --
> > > Denis
> > >
> > > On Wed, Mar 14, 2018 at 12:53 PM, Raymond Wilson <
> > > raymond_wil...@trimble.com
> > > > wrote:
> > >
> > > > Hi Denis,
> > > >
> > > > We are using Ignite.Net and are planning to use 2.4 + .Net Core +
> thin
> > > > client support to enable lightweight containerisable services that
> > > interact
> > > > with the main Ignite compute grid.
> > > >
> > > > These work flows are less about Get/Put style semantics, and more
> about
> > > > using grid compute.
> > > >
> > > > Eg: Here's an example where a client context asks a remote context to
> > > > render
> > > > a bitmap tile in an ICompute:
> > > >
> > > > public Bitmap Execute(TileRenderRequestArgument arg)
> > > > {
> > > > IComputeFunc func =
> new
> > > > TileRenderRequestComputeFunc();
> > > >
> > > > return
> > > > _ignite.GetCluster().ForRemotes().GetCompute().Apply(func, arg);
> > > > }
> > > >
> > > > In this example, the calling context here could be a lightweight
> > Kestrel
> > > > web
> > > > service end point delegating rendering to a remote service.
> > > >
> > > > Thanks,
> > > > Raymond.
> > > >
> > > > -Original Message-
> > > > From: Denis Magda [mailto:dma...@apache.org]
> > > > Sent: Thursday, March 15, 2018 8:31 AM
> > > > To: dev@ignite.apache.org
> > > > Subject: Re: Timeline for support of compute functions by thin
> clients
> > > >
> > > > Hi Raymond,
> > > >
> > > > There are no any plans for that level of support. The thin clients
> are
> > > > targeted for classic client-server processing use cases when a client
> > > > request data from a server, does something with it locally and
> > > potentially
> > > > writes changes back to the server. ICache, SQL fall under this
> > category.
> > > >
> > > > Are you intended to use .NET thin client or anyone else?
> > > >
> > > > --
> > > > Denis
> > > >
> > > > On Wed, Mar 14, 2018 at 12:25 PM, Raymond Wilson <
> > > > raymond_wil...@trimble.com
> > > > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > >
> > > > >
> > > > > The thin client implementation in Ignite 2.4 only covers a subset
> of
> > > > > the ICache interface.
> > > > >
> > > > >
> > > > >
> > > > > When will we see thin client support for compute, messaging etc?
> > > > >
> > > > >
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Raymond.
> > > > >
> > > >
> > >
> >
>


[jira] [Created] (IGNITE-7971) SQL: make sure CREATE INDEX doesn't corrupt the cluster

2018-03-16 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-7971:
---

 Summary: SQL: make sure CREATE INDEX doesn't corrupt the cluster
 Key: IGNITE-7971
 URL: https://issues.apache.org/jira/browse/IGNITE-7971
 Project: Ignite
  Issue Type: Task
  Components: sql
Affects Versions: 2.4
Reporter: Vladimir Ozerov
 Fix For: 2.5


Consider the following scenario:
1) Define query entity with field {{X}} of type {{int}}
2) Start two nodes
3) Put an object with field {{X}} of type {{int}} to node 1
4) Put an object with field {{X}} of type {{*String*}} to node 2
5) Invoke {{CREATE INDEX}} on that field

We need to investigate what would happen. In the worst case, index creation will
fail on node 2. But it seems that we do not perform any rollback on server
nodes in that case; only the client node is informed about the problem. If that
is so, we need to answer the following questions:
1) What would happen to subsequent SQL queries initiated from different nodes?
Will they work?
2) What would happen to the data in case of restarts with persistence?
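Step 1 of the scenario might be declared like this in Spring XML (a sketch for illustration only; the cache name, key type, and value type are made up):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="personCache"/>
    <property name="queryEntities">
        <list>
            <bean class="org.apache.ignite.cache.QueryEntity">
                <property name="keyType" value="java.lang.Long"/>
                <property name="valueType" value="Person"/>
                <property name="fields">
                    <map>
                        <!-- Field X declared as int; step 4 of the scenario
                             puts a String into it on node 2. -->
                        <entry key="X" value="java.lang.Integer"/>
                    </map>
                </property>
            </bean>
        </list>
    </property>
</bean>
```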



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-7972) NPE in TTL manager.

2018-03-16 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-7972:


 Summary: NPE in TTL manager.
 Key: IGNITE-7972
 URL: https://issues.apache.org/jira/browse/IGNITE-7972
 Project: Ignite
  Issue Type: Bug
  Components: cache
Reporter: Andrew Mashenkov
 Attachments: npe.log

The TTL manager can try to evict expired entries on a cache that hasn't been
initialized yet, due to a race.
This leads to an NPE in the unwindEvicts method.

PFA stacktrace.
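A race of this shape is typically closed by checking for the not-yet-initialized cache before dereferencing it. A generic sketch, not Ignite's actual GridCacheTtlManager code (the map and method names here are made up):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class TtlGuardSketch {
    // A null/absent value models a cache context that hasn't finished starting.
    private final ConcurrentMap<String, Object> caches = new ConcurrentHashMap<>();

    int unwindEvicts(String cacheName) {
        Object cctx = caches.get(cacheName);
        if (cctx == null) {
            // Cache not started yet: skip this eviction round
            // instead of dereferencing null and throwing an NPE.
            return 0;
        }
        return 1; // pretend one expired entry was evicted
    }

    public static void main(String[] args) {
        TtlGuardSketch ttl = new TtlGuardSketch();
        System.out.println(ttl.unwindEvicts("c1")); // cache absent -> 0, no NPE
    }
}
```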



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #3645: IGNITE-7964 rmvId is stored to MetaStorage metapa...

2018-03-16 Thread sergey-chugunov-1985
GitHub user sergey-chugunov-1985 opened a pull request:

https://github.com/apache/ignite/pull/3645

IGNITE-7964 rmvId is stored to MetaStorage metapage during operations



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-7964

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3645.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3645


commit f63635eece62be3ce5f581d7a981da36985775a2
Author: Sergey Chugunov 
Date:   2018-03-16T08:57:18Z

IGNITE-7964 rmvId is stored to MetaStorage metapage during operations




---


Re: Test testJobIdCollision to use multiple JVMs [IGNITE-4706]

2018-03-16 Thread Sergey Chugunov
Dmitry, Maxim,

Thanks for bringing this up.

I reviewed all the context around [1]; it looks like the test is still valid
but is of low priority. I reflected this in the JIRA ticket itself.

Also, rewriting the test in a multi-JVM fashion isn't an easy task; to me it
is much better to spend this time working on more important stuff.
Dmitry, could you suggest a better ticket for Maxim to look into?

[1] https://issues.apache.org/jira/browse/IGNITE-4706

--
Thanks,
Sergey.

On Fri, Mar 16, 2018 at 4:43 PM, Dmitry Pavlov 
wrote:

> Hi Sergey,
>
> Is this issue still actual for you?
>
> Sincerely,
> Dmitriy Pavlov
>
> Mon, 26 Feb 2018 at 13:40, Maxim Muzafarov :
>
>> Hi all,
>>
>> I'm trying to clarify for myself issue [1] about rewriting this test case to
>> use multiple JVMs. I'm trying to reproduce it using the steps described here
>> [2]:
>> As I understand the issue description, I'm running testJobIdCollision
>> and expecting to get the exception:
>> "Received computation request with duplicate job ID"
>> , but I've got:
>> "Job has already been hold [ctx=GridJobContextImpl
>> [jobId=f7e74a1d161-08edbe47-9b65-4ed2-8d0c-a8a1a673, timeoutObj=null,
>> attrs={}]]"
>>
>> So, is this test case still relevant, or can it be removed? Or should we use
>> another IgniteCallable
>> like this one: IgniteWalRecoveryTest.LoadRunnable [4]?
>>
>> Also, IgniteClusterProcessProxy#forNodeId [3] isn't implemented yet.
>> A brief search for JIRA tickets about implementing this method doesn't return
>> anything.
>> What should we do with this?
>>
>>
>>
>> [1] https://issues.apache.org/jira/browse/IGNITE-4706
>> [2] https://issues.apache.org/jira/browse/IGNITE-1384
>> [3]
>> https://github.com/apache/ignite/blob/master/modules/
>> core/src/test/java/org/apache/ignite/testframework/junits/multijvm/
>> IgniteClusterProcessProxy.java#L204
>> [4]
>> https://github.com/apache/ignite/blob/master/modules/
>> core/src/test/java/org/apache/ignite/internal/processors/
>> cache/persistence/db/wal/IgniteWalRecoveryTest.java#L1552
>>
>


[GitHub] ignite pull request #3651: IGNITE-7963 Opportunistically flush DataStreamer ...

2018-03-16 Thread alamar
GitHub user alamar opened a pull request:

https://github.com/apache/ignite/pull/3651

IGNITE-7963 Opportunistically flush DataStreamer instead of endless w…

…ait.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-7963

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3651.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3651


commit 060b0973ae6d93b46a590b9958c14134c098d9f6
Author: Ilya Kasnacheev 
Date:   2018-03-16T15:43:52Z

IGNITE-7963 Opportunistically flush DataStreamer instead of endless wait.




---


Re: Partition eviction failed, this can cause grid hang. (Caused by: java.lang.IllegalStateException: Failed to get page IO instance (page content is corrupted))

2018-03-16 Thread Arseny Kovalchuk
Hi Dmitry.

Thanks for your attention to this issue.

I changed repository to jcenter and set Ignite version to 2.4.
Unfortunately the reproducer starts with the same error message in the log
(see attached).

I cannot say whether the behavior of the whole cluster will change on 2.4, i.e.
whether the cluster can start on corrupted data on 2.4, because we
wiped the data and restarted the cluster where the problem arrived.
We'll move to 2.4 next week and continue testing our software. We are
moving to production in April/May, and it would be good to get
some clue how to deal with such data situations in the future.



Arseny Kovalchuk

Senior Software Engineer at Synesis
skype: arseny.kovalchuk
mobile: +375 (29) 666-16-16
LinkedIn Profile

On 16 March 2018 at 17:03, Dmitry Pavlov  wrote:

> Hi Arseny,
>
> I've observed in reproducer
> ignite_version=2.3.0
>
> Could you check if it is reproducible in our latest release, 2.4.0?
>
> I'm not sure about ticket number, but it is quite possible issue is
> already fixed.
>
> Sincerely,
> Dmitriy Pavlov
>
> Thu, 15 Mar 2018 at 19:34, Dmitry Pavlov :
>
>> Hi Alexey,
>>
>> It may be a serious issue. Could you recommend an expert here who can pick
>> this up?
>>
>> Sincerely,
>> Dmitriy Pavlov
>>
>> чт, 15 мар. 2018 г. в 19:25, Arseny Kovalchuk <
>> arseny.kovalc...@synesis.ru>:
>>
>>> Hi, guys.
>>>
>>> I've got a reproducer for a problem which is generally reported as
>>> "Caused by: java.lang.IllegalStateException: Failed to get page IO
>>> instance (page content is corrupted)". Actually, it reproduces the result: I
>>> don't know how the data was corrupted, but the cluster node
>>> doesn't start with this data.
>>>
>>> We got the issue again when some of the server nodes were restarted several
>>> times by Kubernetes. I suspect that the data got corrupted during such
>>> restarts. But the main functionality that we really desire to have is that
>>> the cluster DOESN'T HANG during the next restart even if the data is corrupted!
>>> Anyway, there is no tool that can help to correct such data, and as a
>>> result we wipe all data manually to start the cluster. So, having warnings
>>> about corrupted data in the logs and a working cluster is the expected
>>> behavior.
>>>
>>> How to reproduce:
>>> 1. Download the data from https://storage.googleapis.com/pub-data-0/data5.tar.gz (~200Mb)
>>> 2. Download and import the Gradle project https://storage.googleapis.com/pub-data-0/project.tar.gz (~100Kb)
>>> 3. Unpack the data to the home folder, say /home/user1. You should get
>>> the path like */home/user1/data5*. Inside data5 you should have
>>> binary_meta, db, marshaller.
>>> 4. Open *src/main/resources/data-test.xml* and put the absolute path of
>>> unpacked data into *workDirectory* property of *igniteCfg5* bean. In
>>> this example it should be */home/user1/data5.* Do not
>>> edit consistentId! The consistentId is ignite-instance-5, so the real data
>>> is in the data5/db/ignite_instance_5 folder
>>> 5. Start application from ru.synesis.kipod.DataTestBootApp
>>> 6. Enjoy
>>>
>>> Hope it will help.
>>>
>>>
>>> Arseny Kovalchuk
>>>
>>> Senior Software Engineer at Synesis
>>> skype: arseny.kovalchuk
>>> mobile: +375 (29) 666-16-16
>>> LinkedIn Profile
>>>
>>> On 26 December 2017 at 21:15, Denis Magda  wrote:
>>>
 Cross-posting to the dev list.

 Ignite persistence maintainers please chime in.

 —
 Denis

>>> On Dec 26, 2017, at 2:17 AM, Arseny Kovalchuk <
 arseny.kovalc...@synesis.ru> wrote:

 Hi guys.

 Another issue when using Ignite 2.3 with native persistence enabled.
 See details below.

 We deploy Ignite along with our services in Kubernetes (v 1.8) on
 premises. Ignite cluster is a StatefulSet of 5 Pods (5 instances) of Ignite
 version 2.3. Each Pod mounts PersistentVolume backed by CEPH RBD.

 We put about 230 events/second into Ignite, 70% of events are ~200KB in
 size and 30% are 5000KB. Smaller events have indexed fields and we query
 them via SQL.

 The cluster is activated from a client node which also streams events
 into Ignite from Kafka. We use a custom implementation of a streamer which uses
 the cache.putAll() API.

 We started the cluster from scratch without any persistent data. After a
 while we got corrupted data with the error message below.

 [2017-12-26 07:44:14,251] ERROR [sys-#127%ignite-instance-2%]
 org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader:
 - Partition eviction failed, this can cause grid hang.
 class org.apache.ignite.IgniteException: Runtime failure on search
 row: Row@5b1479d6[ key: 171:1513946618964:3008806055072854, val:
 

Re: (Partition Map) Exchange at wiki

2018-03-16 Thread Denis Magda
Hi Dmitriy,

That's a great article. Future and current contributors/committers who will
be dealing with the part of the system you described will be thankful
for that page.

Please clarify one thing for me. In your rebalancing example, you are
saying that the full-map exchange won't happen until node 4 rebalances all
the data it owns. To me, it sounds like the topology won't be changed to
the next version until all the nodes rebalance the data they own. I guess I
confused something.

--
Denis

On Fri, Mar 16, 2018 at 8:05 AM, Dmitry Pavlov 
wrote:

> Hi Igniters,
>
> Following the questions coming here to the dev list, I decided to arrange my
> notes about (Partition Map) Exchange as a wiki article. The result is the third
> page in the 'under the hood' series:
> https://cwiki.apache.org/confluence/display/IGNITE/%28Partition+Map%29+Exchange+-+under+the+hood
>
> I would like to thank Alexey Goncharuk and Dmitriy Govorukhin for
> sharing their knowledge about the exchange and assistance in the study.
>
> Please share and recommend the article to new members of the community,
> suggest and make changes. Criticism is appreciated.
>
> Sincerely,
> Dmitriy Pavlov
>


[jira] [Created] (IGNITE-7978) Recovery from WAL may result in JVM crash

2018-03-16 Thread Denis Mekhanikov (JIRA)
Denis Mekhanikov created IGNITE-7978:


 Summary: Recovery from WAL may result in JVM crash
 Key: IGNITE-7978
 URL: https://issues.apache.org/jira/browse/IGNITE-7978
 Project: Ignite
  Issue Type: Bug
Reporter: Denis Mekhanikov


{{GridCacheDatabaseSharedManager}} checks page tags when acquiring page write 
locks. If the actual tag doesn't match the expected value, then the 
{{PageSupport#writeLock}} method returns a 0 pointer, which results in a JVM crash.

A proposed solution here is to make {{GridCacheDatabaseSharedManager}} ignore 
the page tag, i.e. use the {{restore=true}} option of the {{PageSupport#writeLock}} 
method.
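The crash mode is easy to see in miniature: a tag-checked lock call that can return 0 must either be invoked with the restore flag or have its result checked before the pointer is used. A self-contained Java sketch with a mock page lock (the real PageSupport API is more involved; this only mirrors the return-0-on-tag-mismatch contract described above):

```java
// Mock of a page whose write lock is tag-checked: 0 means "tag didn't match, no lock taken".
class MockPage {
    private final int tag;
    private final long addr = 0xCAFEL; // fake non-zero page address

    MockPage(int tag) { this.tag = tag; }

    /** Returns the page address on success, or 0 on tag mismatch unless restore is true. */
    long writeLock(int expTag, boolean restore) {
        if (!restore && expTag != tag)
            return 0L;

        return addr;
    }
}

class RecoveryExample {
    /** Recovery path: pass restore=true so a stale tag can't yield a 0 pointer. */
    static long lockForRestore(MockPage page, int expTag) {
        long ptr = page.writeLock(expTag, true);

        if (ptr == 0L)
            throw new IllegalStateException("Restore lock must not fail on a tag mismatch");

        return ptr;
    }
}
```

Dereferencing the 0 "pointer" is what crashes the JVM; with the restore flag the lock always succeeds during recovery, matching the proposed solution.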





[GitHub] ignite pull request #3592: IGNITE-7862: flatten plugin updated to version 1....

2018-03-16 Thread nizhikov
Github user nizhikov closed the pull request at:

https://github.com/apache/ignite/pull/3592


---


Re: Maven. Issues with flatten plugin

2018-03-16 Thread Nikolay Izhikov
Hello, guys.

We finally updated flatten plugin in master.

Petr Ivanov, Alex Volkov - thank you very much!

On Fri, 02/03/2018 at 16:45 +0300, Petr Ivanov wrote:
> Updated all Maven definitions I’ve found in the templates of the test project.
> Please, try once more.
> 
> 
> 
> > On 2 Mar 2018, at 16:36, Nikolay Izhikov  wrote:
> > 
> > Petr, thank you.
> > 
> > But seems it doesn't help 
> > 
> > "Failed to execute goal 
> > org.codehaus.mojo:flatten-maven-plugin:1.0.1:flatten (flatten) on project 
> > ignite-tools: 
> > The plugin org.codehaus.mojo:flatten-maven-plugin:1.0.1 requires Maven 
> > version 3.2.5"
> > 
> > https://ci.ignite.apache.org/viewLog.html?buildId=1118290=IgniteTests24Java8_IgniteActivateDeactivateCluster=buildLog&_focus=189
> > 
> > On Fri, 02/03/2018 at 13:38 +0300, Petr Ivanov wrote:
> > > Made some changes — 3.3.9 is now default maven.
> > > Please, rerun failed tests.
> > > 
> > > 
> > > 
> > > > On 2 Mar 2018, at 13:21, Nikolay Izhikov  wrote:
> > > > 
> > > > Hello, Petr.
> > > > 
> > > > I run TC for my PR [1] and have some issues on Team City:
> > > > 
> > > > "Failed to execute goal 
> > > > org.codehaus.mojo:flatten-maven-plugin:1.0.1:flatten (flatten) on 
> > > > project ignite-tools: 
> > > > The plugin org.codehaus.mojo:flatten-maven-plugin:1.0.1 requires Maven 
> > > > version 3.2.5"
> > > > 
> > > > Can we update Maven to version 3.2.5 or higher on all TeamCity agents?
> > > > 
> > > > [1] https://github.com/apache/ignite/pull/3592
> > > > [2] 
> > > > https://ci.ignite.apache.org/viewLog.html?buildId=1117882=IgniteTests24Java8_IgniteVisorConsoleScala=buildLog&_focus=10143
> > > > 
> > > > On Fri, 02/03/2018 at 12:15 +0300, Nikolay Izhikov wrote:
> > > > > Sorry - IGNITE-7862 is the ticket.
> > > > > 
> > > > > On 2 Mar 2018, 12:14 PM, "Nikolay Izhikov" 
> > > > >  wrote:
> > > > > > Dmitry.
> > > > > > I'm already done it.
> > > > > > Will return with PR and TC results soon
> > > > > > 
> > > > > > On 2 Mar 2018, 11:53 AM, "Dmitry Pavlov" 
> > > > > >  wrote:
> > > > > > > Hi Petr,
> > > > > > > 
> > > > > > > Thank you, it is great that you found the solution with low 
> > > > > > > impact.
> > > > > > > 
> > > > > > > Lets create ticket and merge PR.
> > > > > > > 
> > > > > > > Fri, 2 Mar 2018 at 10:06, Petr Ivanov :
> > > > > > > 
> > > > > > > > The problem is solved by updating flatten-maven-plugin version 
> > > > > > > > to 1.0.1.
> > > > > > > > 
> > > > > > > > Nikolay, please, double check it.
> > > > > > > > If it really solves the problem, please, fill the ticket (or 
> > > > > > > > point to
> > > > > > > > existing one), so I can update it and check impact on release 
> > > > > > > > procedure.
> > > > > > > > 
> > > > > > > > 
> > > > > > > > 
> > > > > > > > > On 1 Mar 2018, at 17:04, Nikolay Izhikov 
> > > > > > > > >  wrote:
> > > > > > > > > 
> > > > > > > > > Petr.
> > > > > > > > > 
> > > > > > > > > Thank you for trying!
> > > > > > > > > 
> > > > > > > > > Did you remove 'test' dependencies before running commands?
> > > > > > > > > 
> > > > > > > > > Because I commit in master only correct pom.xml for current build
> > > > > > > > > process, of course.
> > > > > > > > > 
> > > > > > > > > But, to make things work I have to copy-paste transitive dependencies
> > > > > > > > > from spark-core.
> > > > > > > > > 
> > > > > > > > > On 1 Mar 2018, 4:55 PM, "Petr Ivanov" 
> > > > > > > > > 
> > > > > > > > > wrote:
> > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > I don't get what is the point.
> > > > > > > > > > > Did you try to reproduce issue?
> > > > > > > > > > > Or should I provide full traces to you?
> > > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > My point is the inability to fully understand and describe the
> > > > > > > > > > problem in terms of the mechanism which causes it.
> > > > > > > > > > For now I can see only indirect guessing.
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > And yes, I’ve run your commands and both times (in the spark module
> > > > > > > > > > directory and in the root) it produces the same result:
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > 1.
> > > > > > > > > > ignite (master) $ JAVA_HOME="$(/usr/libexec/java_home -v 
> > > > > > > > > > 1.8)" mvn
> > > > > > > > > > install -U -Plgpl,examples,scala,-clean-libs,-release,ml
> > > > > > > > > > -Dtest=org.apache.ignite.testsuites.IgniteRDDTestSuite
> > > > > > > > > > -Dmaven.javadoc.skip=true -DfailIfNoTests=false
> > > > > > > > > > [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, 
> > > > > > > > > > Time elapsed:
> > > > > > > > > > 47.071 s - in 
> 

Re: New test failures in .NET

2018-03-16 Thread Dmitry Pavlov
I did this TC verification before the merge. I have no idea how I missed it.

Fri, 16 Mar 2018 at 16:06, Pavel Tupitsyn :

> Hi,
>
> The problem is introduced by our freshman committer Nikolay Izhikov in
> "IGNITE-7756: IgniteUuid added to predefined types" [1]
>
> IgniteUuid type id has changed on Java side, but not in .NET.
> I have pushed the fix to master branch.
>
> I can only ask everyone again to have some respect for your fellow Igniters
> and verify changes on TeamCity before merging. It is not that hard.
>
> Thank you,
> Pavel
>
>
>
> [1] https://github.com/apache/ignite/commit/70ca86a30a7589f9ff46
> 6b93a958362135347d02
>
>
>
> On Fri, Mar 16, 2018 at 2:51 PM, Dmitry Pavlov 
> wrote:
>
> > Hi,
> >
> > There are 31 test failures in .NET tests https://ci.ignite.apache.org/
> > viewLog.html?buildId=1137460=buildResultsDiv=
> > IgniteTests24Java8_IgnitePlatformNet
> >  Unfortunately it continues to reproduce.
> >
> > Igniters, who can advise how to fix it? Were there any changes in .NET
> > tests/new tests contributions?
> >
> > It seems there is one issue here, because 1st failure
> > "SetUp method failed. SetUp : System.NullReferenceException : Object
> > reference not set to an instance of an object."
> > and consequent failures are about Ignite instances.
> >
> > Sincerely,
> > Dmitriy Pavlov
> >
>


Re: New test failures in .NET

2018-03-16 Thread Dmitry Pavlov
The suite passes now:
https://ci.ignite.apache.org/viewLog.html?buildId=1140071=buildResultsDiv=IgniteTests24Java8_IgnitePlatformNet

Pavel, thank you!

Fri, 16 Mar 2018 at 16:13, Nikolay Izhikov :

> Hello, Pavel.
>
> Sorry, my bad.
> Thanks for fixing.
>
> Will double check tests result before next commit.
>
> On Fri, 16/03/2018 at 16:06 +0300, Pavel Tupitsyn wrote:
> > Hi,
> >
> > The problem is introduced by our freshman committer Nikolay Izhikov in
> > "IGNITE-7756: IgniteUuid added to predefined types" [1]
> >
> > IgniteUuid type id has changed on Java side, but not in .NET.
> > I have pushed the fix to master branch.
> >
> > I can only ask everyone again to have some respect for your fellow
> Igniters
> > and verify changes on TeamCity before merging. It is not that hard.
> >
> > Thank you,
> > Pavel
> >
> >
> >
> > [1] https://github.com/apache/ignite/commit/70ca86a30a7589f9ff46
> > 6b93a958362135347d02
> >
> >
> >
> > On Fri, Mar 16, 2018 at 2:51 PM, Dmitry Pavlov 
> > wrote:
> >
> > > Hi,
> > >
> > > There are 31 test failures in .NET tests https://ci.ignite.apache.org/
> > > viewLog.html?buildId=1137460=buildResultsDiv=
> > > IgniteTests24Java8_IgnitePlatformNet
> > >  Unfortunately it continues to reproduce.
> > >
> > > Igniters, who can advise how to fix it? Were there any changes in .NET
> > > tests/new tests contributions?
> > >
> > > It seems there is one issue here, because 1st failure
> > > "SetUp method failed. SetUp : System.NullReferenceException : Object
> > > reference not set to an instance of an object."
> > > and consequent failures are about Ignite instances.
> > >
> > > Sincerely,
> > > Dmitriy Pavlov
> > >


Re: Test testJobIdCollision to use multiple JVMs [IGNITE-4706]

2018-03-16 Thread Dmitry Pavlov
Hi Sergey,

Thank you for stepping in.

There is a fresh test failures scope reflected in JIRA:
https://issues.apache.org/jira/issues/?jql=project%20%3D%20IGNITE%20AND%20labels%20%3D%20MakeTeamcityGreenAgain%20AND%20assignee%20is%20EMPTY%20AND%20resolution%20is%20EMPTY%20order%20by%20createdDate%20DESC%20%20%20

Any test failure from CI without an investigation may also be picked up.

Sincerely,
Dmitriy Pavlov



Fri, 16 Mar 2018 at 18:47, Sergey Chugunov :

> Dmitry, Maxim,
>
> Thanks for bringing this up.
>
> I reviewed all the context about [1]; it looks like the test is still valid
> but is of low priority. I reflected this in the JIRA ticket itself.
>
> Also, rewriting the test in multi-JVM fashion isn't an easy task; to me it
> is much better to spend this time working on more important stuff.
> Dmitry, could you suggest a better ticket for Maxim to look into?
>
> [1] https://issues.apache.org/jira/browse/IGNITE-4706
>
> --
> Thanks,
> Sergey.
>
> On Fri, Mar 16, 2018 at 4:43 PM, Dmitry Pavlov 
> wrote:
>
>> Hi Sergey,
>>
>> Is this issue still relevant for you?
>>
>> Sincerely,
>> Dmitriy Pavlov
>>
>> Mon, 26 Feb 2018 at 13:40, Maxim Muzafarov :
>>
>>> Hi all,
>>>
>>> I'm trying to clarify for myself issue [1] about rewriting this test case to
>>> use multiple JVMs. I'm trying to reproduce it using the steps described here
>>> [2]:
>>> As I understand the issue description, I'm running testJobIdCollision
>>> and expecting to get the exception:
>>> "Received computation request with duplicate job ID"
>>> , but I've got:
>>> "Job has already been hold [ctx=GridJobContextImpl
>>> [jobId=f7e74a1d161-08edbe47-9b65-4ed2-8d0c-a8a1a673, timeoutObj=null,
>>> attrs={}]]"
>>>
>>> So, is this test case still relevant, or can it be removed? Or should we use
>>> another IgniteCallable
>>> like this one: IgniteWalRecoveryTest.LoadRunnable [4]?
>>>
>>> Also, IgniteClusterProcessProxy#forNodeId [3] isn't implemented yet.
>>> A brief search for JIRA tickets about implementing this method doesn't return
>>> anything.
>>> What should we do with this?
>>>
>>>
>>>
>>> [1] https://issues.apache.org/jira/browse/IGNITE-4706
>>> [2] https://issues.apache.org/jira/browse/IGNITE-1384
>>> [3]
>>>
>>> https://github.com/apache/ignite/blob/master/modules/core/src/test/java/org/apache/ignite/testframework/junits/multijvm/IgniteClusterProcessProxy.java#L204
>>> [4]
>>>
>>> https://github.com/apache/ignite/blob/master/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalRecoveryTest.java#L1552
>>>
>>
>


[GitHub] ignite pull request #3645: IGNITE-7964 rmvId is stored to MetaStorage metapa...

2018-03-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3645


---


Re: [ANNOUNCE] Apache Ignite 2.4.0 Released: Machine Learning GA and Spark DataFrames

2018-03-16 Thread sebb
What is the project about? Why should I be interested in it?
[rhetorical questions]

The Announce emails are sent to people not on the developer or user lists.
Most will have no idea what the project is about.

So the e-mails should contain at least brief details of what the
product does, and some info on why the new release might be of
interest to them.

Readers should not have to click the link to find out the basic information
(although of course it is useful to have such links for further detail).

Please can you add that information to future announce mails?

Thanks.


On 16 March 2018 at 00:09, Denis Magda  wrote:
> Usually, the Ignite community rolls out a new version once every 3 months, but we
> had to make an exception for Apache Ignite 2.4, which consumed five months in
> total.
>
> We could easily blame Thanksgiving, Christmas and New Year holidays for the
> delay and would be forgiven, but, in fact, we were forging the release you
> can't just pass by.
>
> Let's dive in and look for a big fish:
> https://blogs.apache.org/ignite/entry/apache-ignite-2-4-brings
>
> The full list of the changes can be found here:
> https://ignite.apache.org/releases/2.4.0/release_notes.html
>
> Ready to try? Then navigate to our downloads page:
> https://ignite.apache.org/download.cgi
>
> --
> Denis


Re: [ANNOUNCE] Apache Ignite 2.4.0 Released: Machine Learning GA and Spark DataFrames

2018-03-16 Thread Denis Magda
All our previous announcements were formatted precisely the way you
suggest. However, I haven't found that template effective. Personally, I
archive an email immediately if I see it's written the standard way and I
know nothing about the product.

That's why I decided to experiment, targeting those who already know Ignite
and are interested in the solutions it provides in 2.4. I really appreciate your
feedback and will see how to incorporate your suggestions into future
announcements. I just don't like to see emails written by "robots".

--
Denis



On Fri, Mar 16, 2018 at 8:34 AM, sebb  wrote:

> What is the project about? Why should I be interested in it?
> [rhetorical questions]
>
> The Announce emails are sent to people not on the developer or user lists.
> Most will have no idea what the project is about.
>
> So the e-mails should contain at least brief details of what the
> product does, and some info on why the new release might be of
> interest to them.
>
> Readers should not have to click the link to find out the basic information
> (although of course it is useful to have such links for further detail).
>
> Please can you add that information to future announce mails?
>
> Thanks.
>
>
> On 16 March 2018 at 00:09, Denis Magda  wrote:
> > Usually, Ignite community rolls out a new version once in 3 months, but
> we
> > had to make an exception for Apache Ignite 2.4 that consumed five months
> in
> > total.
> >
> > We could easily blame Thanksgiving, Christmas and New Year holidays for
> the
> > delay and would be forgiven, but, in fact, we were forging the release
> you
> > can't just pass by.
> >
> > Let's dive in and look for a big fish:
> > https://blogs.apache.org/ignite/entry/apache-ignite-2-4-brings
> >
> > The full list of the changes can be found here:
> > https://ignite.apache.org/releases/2.4.0/release_notes.html
> >
> > Ready to try? Then navigate to our downloads page:
> > https://ignite.apache.org/download.cgi
> >
> > --
> > Denis
>


Re: New test failures in .NET

2018-03-16 Thread Nikolay Izhikov
Hello, Pavel.

Sorry, my bad.
Thanks for fixing.

Will double check tests result before next commit.

On Fri, 16/03/2018 at 16:06 +0300, Pavel Tupitsyn wrote:
> Hi,
> 
> The problem is introduced by our freshman committer Nikolay Izhikov in
> "IGNITE-7756: IgniteUuid added to predefined types" [1]
> 
> IgniteUuid type id has changed on Java side, but not in .NET.
> I have pushed the fix to master branch.
> 
> I can only ask everyone again to have some respect for your fellow Igniters
> and verify changes on TeamCity before merging. It is not that hard.
> 
> Thank you,
> Pavel
> 
> 
> 
> [1] https://github.com/apache/ignite/commit/70ca86a30a7589f9ff46
> 6b93a958362135347d02
> 
> 
> 
> On Fri, Mar 16, 2018 at 2:51 PM, Dmitry Pavlov 
> wrote:
> 
> > Hi,
> > 
> > There are 31 test failures in .NET tests https://ci.ignite.apache.org/
> > viewLog.html?buildId=1137460=buildResultsDiv=
> > IgniteTests24Java8_IgnitePlatformNet
> >  Unfortunately it continues to reproduce.
> > 
> > Igniters, who can advise how to fix it? Were there any changes in .NET
> > tests/new tests contributions?
> > 
> > It seems there is one issue here, because 1st failure
> > "SetUp method failed. SetUp : System.NullReferenceException : Object
> > reference not set to an instance of an object."
> > and consequent failures are about Ignite instances.
> > 
> > Sincerely,
> > Dmitriy Pavlov
> > 



[GitHub] ignite pull request #3627: IGNITE-7932: Add example for Linear SVM with Iris...

2018-03-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3627


---


Re: IEP-14: Ignite failures handling (Discussion)

2018-03-16 Thread Andrey Gura
Hi!

Thank you all for your opinions and ideas!

While reading the thread I made two important conclusions:

1. The proposed API should be changed, because enumerating possible actions
is a bad idea. A cleaner and simpler design should allow the user to provide
a failure handler implementation with custom failure-handling logic
if needed.

2. Several failure handler implementations should be provided out of the
box in order to offer a simple way of changing the default behaviour
through configuration. The following implementations should be
provided:

 - NoOpFailureHandler - useful for tests and debugging.
 - RestartProcessFailureHandler - a specific implementation that
can be used only with ignite.(sh|bat).
 - StopNodeFailureHandler - this implementation will stop the Ignite
node in case of a critical error.
 - StopNodeOrHaltFailureHandler(boolean tryStop, long timeout) -
the default failure handler; it will try to stop the node if the tryStop value
is true. If the node can't be stopped or the tryStop value is false, the JVM
process will be terminated forcibly (Runtime.halt()). The default value of
the tryStop parameter is false. Of course, we should limit the node
shutdown time in order to prevent hangs.
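A rough Java sketch of what the handler set above could look like (hypothetical shapes for illustration, not the final Ignite API; RestartProcessFailureHandler and the halt/timeout logic are omitted since they depend on process management):

```java
// Sketch of the proposed failure-handling API; the failure context is simplified to a String.
interface FailureHandler {
    /** Returns true if the node should be stopped as a result of the failure. */
    boolean onFailure(String failureCtx);
}

// Useful for tests and debugging: acknowledge the failure but take no action.
class NoOpFailureHandler implements FailureHandler {
    @Override public boolean onFailure(String failureCtx) {
        return false;
    }
}

// Stops the Ignite node on a critical error; a real implementation
// would invoke Ignition.stop(...) here.
class StopNodeFailureHandler implements FailureHandler {
    @Override public boolean onFailure(String failureCtx) {
        return true;
    }
}
```

A custom handler is then just another implementation of the same interface, which is the "custom logic" flexibility point 1 above asks for.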

As for the default behavior, I agree with those who believe that the most
suitable default option is process termination (although I had a
different opinion before), and the strongest argument for this choice is
the impossibility of reasoning about the system state in case of a critical
error.
Also, I believe that we can't choose a solution that will suit every
community member, and the best we can do is provide a simple way
of changing this behavior.

So I think the default behavior discussion should be finished. I'll
update IEP-14 [1] according to my conclusions above. If you have any
ideas or thoughts about these conclusions, please feel free to share.

Thanks!

[1] 
https://cwiki.apache.org/confluence/display/IGNITE/IEP-14+Ignite+failures+handling

On Fri, Mar 16, 2018 at 1:07 AM, Dmitriy Setrakyan
 wrote:
> On Thu, Mar 15, 2018 at 5:21 AM, Dmitry Pavlov 
> wrote:
>
>> Hi Dmitriy,
>>
>> It seems everyone here agrees that killing the process will give a more
>> guaranteed result. The question is that the majority in the community does
>> not consider this acceptable in case Ignite is started as an embedded
>> lib (e.g. from Java, using Ignition.start()).
>>
>> What can help to accept the community's opinion? Let's remember Apache
>> principle: "community first".
>>
>
> I am still confused about the problem the majority of the community is
> trying to solve. If our priority is to keep the cluster in frozen state,
> then what is the reason for this task altogether?
>
> The priority should be to keep the cluster operational, not frozen. The
> only solution here is "kill" or "stop+kill". If the community does not
> accept this option as a default, then I propose to drop this task
> altogether, because we do not have to do anything to keep the cluster
> frozen.
>
>
>> If release 2.5 shows us it was impractical, we will change the default to
>> kill even for the library. What do you think?
>>
>
> See above. I do not see a reason to continue with this task if the end
> result is identical to what we have today.
>
> I want to give the community another chance to speak up and voice their
> opinions again, having fully understood the context and the problem being
> solved here.
>
> D.


[jira] [Created] (IGNITE-7975) SQL TX: allow batch inserts

2018-03-16 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-7975:
---

 Summary: SQL TX: allow batch inserts
 Key: IGNITE-7975
 URL: https://issues.apache.org/jira/browse/IGNITE-7975
 Project: Ignite
  Issue Type: Task
  Components: sql
Reporter: Vladimir Ozerov
Assignee: Alexander Paschenko
 Fix For: 2.5


Need to implement proper handling for batch inserts. It is disabled currently, 
see {{DmlStatementsProcessor#updateSqlFieldsBatched}}.





[GitHub] ignite pull request #3505: Ignite 2.3.3

2018-03-16 Thread alamar
Github user alamar closed the pull request at:

https://github.com/apache/ignite/pull/3505


---


[GitHub] ignite pull request #3406: Ignite 2.1.11

2018-03-16 Thread alamar
Github user alamar closed the pull request at:

https://github.com/apache/ignite/pull/3406


---


[GitHub] ignite pull request #3452: Ignite 2.3.2.b1

2018-03-16 Thread alamar
Github user alamar closed the pull request at:

https://github.com/apache/ignite/pull/3452


---


Re: MTCGA: IGNITE-7791 and GridDhtPartitionsSingleMessage

2018-03-16 Thread Dmitry Pavlov
Hi Maxim,

I didn't know the answer, so I decided to provide at least some general intro
information. I hope it will be useful for you and for newcomers.

https://cwiki.apache.org/confluence/display/IGNITE/%28Partition+Map%29+Exchange+-+under+the+hood

Sincerely,
Dmitriy Pavlov

Tue, 13 Mar 2018 at 21:28, Dmitry Pavlov :

> Hi Alexey,
>
> Could you help with this question?
>
> I've observed such messages; they were probably sent by timeout, but I'm not
> sure of their purpose.
>
> Sincerely,
> Dmitriy Pavlov
>
> Tue, 13 Mar 2018 at 20:58, Maxim Muzafarov :
>
>> Hi all,
>>
>> I'm working on [1], the IgniteClientReconnectCacheTest class with the flaky
>> test case testReconnectCacheDestroyedAndCreated (success rate 32.4%).
>>
>> I've left a comment in JIRA [2] and a new test case reproducing this
>> issue.
>> Basically, when we receive a GridDhtPartitionsSingleMessage with
>> exchId=null at the wrong time, we get this assertion error. The Ignite client
>> instance erases all its caches after reconnect, so it has no information about
>> the cache named 'static-cache' that persists on the server nodes, and when it
>> receives this SingleMessage after reconnection it will fail with 'Failed to
>> reinitialize local partitions (preloading will be stopped)'.
>>
>> Should we perform a clean-up [3] of client caches when the client Ignite
>> instance reconnects?
>> Why should we clean client caches after the node reconnects? I can't catch
>> the idea of it.
>>
>> [1] https://issues.apache.org/jira/browse/IGNITE-7791
>> [2]
>>
>> https://issues.apache.org/jira/browse/IGNITE-7791?focusedCommentId=16391409=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16391409
>> [3]
>>
>> https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheAffinitySharedManager.java#L190
>>
>


[GitHub] ignite pull request #3648: IGNITE-7863: Spark dependencies cleaned up.

2018-03-16 Thread nizhikov
GitHub user nizhikov opened a pull request:

https://github.com/apache/ignite/pull/3648

IGNITE-7863: Spark dependencies cleaned up.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/nizhikov/ignite IGNITE-7863

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3648.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3648


commit a22150f6125f8bba535443b3bccac8efb43b1060
Author: Nikolay Izhikov 
Date:   2018-03-16T12:57:37Z

IGNITE-7863: Spark dependencies cleaned up.




---


Re: New test failures in .NET

2018-03-16 Thread Pavel Tupitsyn
Hi,

The problem is introduced by our freshman committer Nikolay Izhikov in
"IGNITE-7756: IgniteUuid added to predefined types" [1]

IgniteUuid type id has changed on Java side, but not in .NET.
I have pushed the fix to master branch.

I can only ask everyone again to have some respect for your fellow Igniters
and verify changes on TeamCity before merging. It is not that hard.

Thank you,
Pavel



[1] https://github.com/apache/ignite/commit/70ca86a30a7589f9ff46
6b93a958362135347d02



On Fri, Mar 16, 2018 at 2:51 PM, Dmitry Pavlov 
wrote:

> Hi,
>
> There are 31 test failures in .NET tests https://ci.ignite.apache.org/
> viewLog.html?buildId=1137460=buildResultsDiv=
> IgniteTests24Java8_IgnitePlatformNet
>  Unfortunately it continues to reproduce.
>
> Igniters, who can advise how to fix it? Were there any changes in .NET
> tests / new test contributions?
>
> It seems there is one root issue here, because the 1st failure is
> "SetUp method failed. SetUp : System.NullReferenceException : Object
> reference not set to an instance of an object."
> and the consequent failures are about Ignite instances.
>
> Sincerely,
> Dmitriy Pavlov
>


[GitHub] ignite pull request #3602: IGNITE-7881 Tests for using TreeMap or TreeSet as...

2018-03-16 Thread alamar
Github user alamar closed the pull request at:

https://github.com/apache/ignite/pull/3602


---


Re: Ignite-7640 Refactor DiscoveryDataClusterState to be immutable (Done)

2018-03-16 Thread Dmitry Pavlov
Hi, it seems the Run All here was outdated; I triggered one more run:
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8_IgniteTests24Java8=pull%2F3515%2Fhead


пн, 26 февр. 2018 г. в 13:22, Александр Меньшиков :

> Hi to all.
>
> I have finished issue IGNITE-7640. Please review it.
>
>
> JIRA: https://issues.apache.org/jira/browse/IGNITE-7640
> PR: https://github.com/apache/ignite/pull/3515
> TC:
>
> https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8_IgniteTests24Java8=pull%2F3515%2Fhead
>  CR: https://reviews.ignite.apache.org/ignite/review/IGNT-CR-492
>


[GitHub] ignite pull request #3561: Ignite 1.7 master

2018-03-16 Thread alamar
Github user alamar closed the pull request at:

https://github.com/apache/ignite/pull/3561


---


[GitHub] ignite pull request #3564: Ignite gg 13518

2018-03-16 Thread alamar
Github user alamar closed the pull request at:

https://github.com/apache/ignite/pull/3564


---


Re: Partition eviction failed, this can cause grid hang. (Caused by: java.lang.IllegalStateException: Failed to get page IO instance (page content is corrupted))

2018-03-16 Thread Dmitry Pavlov
Hi Arseny,

I've observed in the reproducer:
ignite_version=2.3.0

Could you check if it is reproducible in our latest release, 2.4.0?

I'm not sure about the ticket number, but it is quite possible the issue is
already fixed.

Sincerely,
Dmitriy Pavlov

чт, 15 мар. 2018 г. в 19:34, Dmitry Pavlov :

> Hi Alexey,
>
> It may be serious issue. Could you recommend expert here who can pick up
> this?
>
> Sincerely,
> Dmitriy Pavlov
>
> чт, 15 мар. 2018 г. в 19:25, Arseny Kovalchuk  >:
>
>> Hi, guys.
>>
>> I've got a reproducer for a problem which is generally reported as
>> "Caused by: java.lang.IllegalStateException: Failed to get page IO instance
>> (page content is corrupted)". Strictly speaking, it reproduces the result: I
>> have no idea how the data got corrupted, but the cluster node doesn't
>> want to start with this data.
>>
>> We hit the issue again when some of the server nodes were restarted several
>> times by Kubernetes. I suspect that the data got corrupted during such
>> restarts. But the main behavior that we really want is that the cluster
>> DOESN'T HANG during the next restart even if the data is corrupted!
>> There is no tool that can help correct such data, so we wipe all the data
>> manually to start the cluster. Having warnings about corrupted data in the
>> logs and a cluster that keeps working is the expected behavior.
>>
>> How to reproduce:
>> 1. Download the data from here
>> https://storage.googleapis.com/pub-data-0/data5.tar.gz (~200Mb)
>> 2. Download and import Gradle project
>> https://storage.googleapis.com/pub-data-0/project.tar.gz (~100Kb)
>> 3. Unpack the data to the home folder, say /home/user1. You should get
>> the path like */home/user1/data5*. Inside data5 you should have
>> binary_meta, db, marshaller.
>> 4. Open *src/main/resources/data-test.xml* and put the absolute path of
>> unpacked data into *workDirectory* property of *igniteCfg5* bean. In
>> this example it should be */home/user1/data5.* Do not edit consistentId!
>> The consistentId is ignite-instance-5, so the real data is in
>> the data5/db/ignite_instance_5 folder
>> 5. Start application from ru.synesis.kipod.DataTestBootApp
>> 6. Enjoy
>>
>> Hope it will help.
>>
>>
>>
>> Arseny Kovalchuk
>>
>> Senior Software Engineer at Synesis
>> skype: arseny.kovalchuk
>> mobile: +375 (29) 666-16-16 <+375%2029%20666-16-16>
>> LinkedIn Profile
>>
>> On 26 December 2017 at 21:15, Denis Magda  wrote:
>>
>>> Cross-posting to the dev list.
>>>
>>> Ignite persistence maintainers please chime in.
>>>
>>> —
>>> Denis
>>>
>> On Dec 26, 2017, at 2:17 AM, Arseny Kovalchuk <
>>> arseny.kovalc...@synesis.ru> wrote:
>>>
>>> Hi guys.
>>>
>>> Another issue when using Ignite 2.3 with native persistence enabled. See
>>> details below.
>>>
>>> We deploy Ignite along with our services in Kubernetes (v 1.8) on
>>> premises. Ignite cluster is a StatefulSet of 5 Pods (5 instances) of Ignite
>>> version 2.3. Each Pod mounts PersistentVolume backed by CEPH RBD.
>>>
>>> We put about 230 events/second into Ignite, 70% of events are ~200KB in
>>> size and 30% are 5000KB. Smaller events have indexed fields and we query
>>> them via SQL.
>>>
>>> The cluster is activated from a client node which also streams events
>>> into Ignite from Kafka. We use custom implementation of streamer which uses
>>> cache.putAll() API.
>>>
>>> We started cluster from scratch without any persistent data. After a
>>> while we got corrupted data with the error message.
>>>
>>> [2017-12-26 07:44:14,251] ERROR [sys-#127%ignite-instance-2%]
>>> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader:
>>> - Partition eviction failed, this can cause grid hang.
>>> class org.apache.ignite.IgniteException: Runtime failure on search row:
>>> Row@5b1479d6[ key: 171:1513946618964:3008806055072854, val:
>>> ru.synesis.kipod.event.KipodEvent [idHash=510912646, hash=-387621419,
>>> face_last_name=null, face_list_id=null, channel=171, source=,
>>> face_similarity=null, license_plate_number=null, descriptors=null,
>>> cacheName=kipod_events, cacheKey=171:1513946618964:3008806055072854,
>>> stream=171, alarm=false, processed_at=0, face_id=null, id=3008806055072854,
>>> persistent=false, face_first_name=null, license_plate_first_name=null,
>>> face_full_name=null, level=0, module=Kpx.Synesis.Outdoor,
>>> end_time=1513946624379, params=null, commented_at=0, tags=[vehicle, 0,
>>> human, 0, truck, 0, start_time=1513946618964, processed=false,
>>> kafka_offset=111259, license_plate_last_name=null, armed=false,
>>> license_plate_country=null, topic=MovingObject, comment=,
>>> expiration=1514033024000, original_id=null, license_plate_lists=null], ver:
>>> GridCacheVersion [topVer=125430590, order=1513955001926, nodeOrder=3] ][
>>> 3008806055072854, MovingObject, Kpx.Synesis.Outdoor, 0, , 1513946618964,
>>> 1513946624379, 171, 171, 

[GitHub] ignite pull request #3649: IGNITE-7962 Avoid swallowing unexpected exception...

2018-03-16 Thread alamar
GitHub user alamar opened a pull request:

https://github.com/apache/ignite/pull/3649

IGNITE-7962 Avoid swallowing unexpected exceptions.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-7962

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3649.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3649


commit 2cb6d469e89758684de3391ee679da250173b6b1
Author: Ilya Kasnacheev 
Date:   2018-03-16T13:23:03Z

IGNITE-7962 Avoid swallowing unexpected exceptions.




---


Re: Test testJobIdCollision to use multiple JVMs [IGNITE-4706]

2018-03-16 Thread Dmitry Pavlov
Hi Sergey,

Is this issue still actual for you?

Sincerely,
Dmitriy Pavlov

пн, 26 февр. 2018 г. в 13:40, Maxim Muzafarov :

> Hi all,
>
> I'm trying to clarify for myself issue [1] about rewriting this test case to
> use multiple JVMs. I'm trying to reproduce it using the steps described here
> [2].
> As far as I understand the issue description, I run testJobIdCollision
> and expect to get the exception:
> "Received computation request with duplicate job ID"
> but instead I get:
> "Job has already been hold [ctx=GridJobContextImpl
> [jobId=f7e74a1d161-08edbe47-9b65-4ed2-8d0c-a8a1a673, timeoutObj=null,
> attrs={}]]"
>
> So, is this test case still actual, or can it be removed? Or should we use
> another IgniteCallable, like this one: IgniteWalRecoveryTest.LoadRunnable [4]?
>
> Also, IgniteClusterProcessProxy#forNodeId [3] isn't implemented yet.
> A brief search for JIRAs about implementing this method doesn't return
> anything.
> What should we do with this?
>
>
>
> [1] https://issues.apache.org/jira/browse/IGNITE-4706
> [2] https://issues.apache.org/jira/browse/IGNITE-1384
> [3]
>
> https://github.com/apache/ignite/blob/master/modules/core/src/test/java/org/apache/ignite/testframework/junits/multijvm/IgniteClusterProcessProxy.java#L204
> [4]
>
> https://github.com/apache/ignite/blob/master/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/persistence/db/wal/IgniteWalRecoveryTest.java#L1552
>


(Partition Map) Exchange at wiki

2018-03-16 Thread Dmitry Pavlov
Hi Igniters,

Following recurring questions here on the dev list, I decided to arrange my
notes about (Partition Map) Exchange as a wiki article. The result is the third
page in the 'under the hood' series:
https://cwiki.apache.org/confluence/display/IGNITE/%28Partition+Map%29+Exchange+-+under+the+hood

I would like to thank Alexey Goncharuk and Dmitriy Govorukhin for sharing
their knowledge about the exchange and for their assistance in the study.

Please share and recommend the article to new members of the community, and
suggest changes. Criticism is appreciated.

Sincerely,
Dmitriy Pavlov


[jira] [Created] (IGNITE-7976) [Test failed] IgnitePersistentStoreCacheGroupsTest.testClusterRestartCachesWithH2Indexes fails on TC

2018-03-16 Thread Aleksey Plekhanov (JIRA)
Aleksey Plekhanov created IGNITE-7976:
-

 Summary: [Test failed] 
IgnitePersistentStoreCacheGroupsTest.testClusterRestartCachesWithH2Indexes 
fails on TC
 Key: IGNITE-7976
 URL: https://issues.apache.org/jira/browse/IGNITE-7976
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.4
Reporter: Aleksey Plekhanov
Assignee: Aleksey Plekhanov


Test
{{IgnitePersistentStoreCacheGroupsTest.testClusterRestartCachesWithH2Indexes}}
always fails on TeamCity due to changes introduced by IGNITE-7869, with the
following error:

{noformat}
javax.cache.CacheException: Failed to parse query. Table "PERSON" not found; 
SQL statement:
SELECT p._KEY, p._VAL FROM Person p WHERE p.lname=? ORDER BY p.fname [42102-195]
...
{noformat}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #3650: IGNITE-7976 Normalize query entites when dynamic ...

2018-03-16 Thread alex-plekhanov
GitHub user alex-plekhanov opened a pull request:

https://github.com/apache/ignite/pull/3650

IGNITE-7976 Normalize query entites when dynamic start cache by stored cache data

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alex-plekhanov/ignite ignite-7976

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3650.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3650


commit 4bc90b0de290813e5ee64f7f4bab4944b24320b4
Author: Aleksey Plekhanov 
Date:   2018-03-16T15:29:41Z

IGNITE-7976 Normalize query entites when dynamic start cache by stored 
cache data




---


[GitHub] ignite pull request #3563: Ignite gg 13518

2018-03-16 Thread alamar
Github user alamar closed the pull request at:

https://github.com/apache/ignite/pull/3563


---


Re: IgniteSet implementation: changes required

2018-03-16 Thread Andrey Kuznetsov
Dmitry, your way allows us to reuse the existing {{Ignite.set()}} API to create
both set flavors. We can adopt it unless somebody in the community objects.
Personally, I like the {{IgniteCache.asSet()}} approach proposed by Vladimir O.
more, since it emphasizes the difference between the sets being created, but
it will require an API extension.

2018-03-16 8:30 GMT+03:00 Dmitriy Setrakyan :

> On Thu, Mar 15, 2018 at 12:24 AM, Andrey Kuznetsov 
> wrote:
>
> > Dmitriy,
> >
> > It's technically possible to produce both kinds of sets with an
> > {{Ignite.set()}} call, but this will require one more argument ('small'
> > vs 'large'). Doesn't it look less intuitive than a separate
> > {{IgniteCache.asSet()}}?
> >
> > And of course, we don't want to leave existing implementation broken.
> Pavel
> > P. has prepared the fix as part of [1].
> >
> > [1] https://issues.apache.org/jira/browse/IGNITE-5553
>
>
> Andrey, I am suggesting that we change all non-collocated sets to be based
> on IgniteCache. In this case you do not need any additional parameters.
>
> Makes sense?
>
> D.
>

-- 
Best regards,
  Andrey Kuznetsov.
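A rough sketch of what the {{IgniteCache.asSet()}} shape could look like, modeled here with a plain ConcurrentHashMap standing in for the cache (hypothetical; not Ignite API): the set is a live, write-through view over the cache's key space, so no extra 'small vs large' argument is needed; the backing cache itself defines the set's distribution properties.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class CacheSetDemo {
    public static void main(String[] args) {
        // Stand-in for an IgniteCache: keys are set elements, values are markers.
        ConcurrentHashMap<String, Boolean> cache = new ConcurrentHashMap<>();

        // asSet() analogue: a write-through Set view over the cache's keys;
        // add() stores the key in the cache with a marker value.
        Set<String> set = cache.keySet(Boolean.TRUE);
        set.add("a");
        set.add("b");

        boolean hasA = set.contains("a");
        boolean cacheSeesA = cache.containsKey("a"); // add() wrote through
        System.out.println(hasA + " " + cacheSeesA + " " + set.size());
        // prints: true true 2
    }
}
```

The same view works in reverse: removing the key from the cache makes it disappear from the set, which is exactly the coupling the cache-backed proposal relies on.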


[jira] [Created] (IGNITE-7977) Download page mirror choice is buried

2018-03-16 Thread Sebb (JIRA)
Sebb created IGNITE-7977:


 Summary: Download page mirror choice is buried
 Key: IGNITE-7977
 URL: https://issues.apache.org/jira/browse/IGNITE-7977
 Project: Ignite
  Issue Type: Improvement
 Environment: https://ignite.apache.org/download.cgi
Reporter: Sebb


The Ignite download page is generally good. However it's not immediately 
obvious how to change the mirror if the pre-selected one fails.

This is because the info is after the (long) table that lists all the releases.

Since the mirror only relates to the current release(s), which are at the 
start, IMO it would be better to place the mirror details/change button before 
the table.

The page also mentions the need to verify downloads, but does not describe how 
to do so.
Nor is there a link to the KEYS file.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Nodes which started in separate JVM couldn't stop properly (in tests)

2018-03-16 Thread Nikolay Izhikov
Hello, Guys.

I've reviewed the changes and they look good to me.
There is a simple reproducer for the bug in the test framework; see below.

It fails in master and passes in the branch.

I'm planning to merge the fix [1] if Run All is OK.

Please, write to me if you have any objections.

[1] https://github.com/apache/ignite/pull/2382

```
public class MultiJvmSelfTest extends GridCommonAbstractTest {
    @Override protected boolean isMultiJvm() {
        return true;
    }

    public void testGrid() throws Exception {
        final IgniteInternalFuture fut = GridTestUtils.runAsync(new RunnableX() {
            @Override public void runx() throws Exception {
                try {
                    startGrid(0);
                    startGrid(1);
                }
                finally {
                    stopGrid(1);
                    stopGrid(0);
                }
            }
        });

        try {
            fut.get(20_000L);
        }
        finally {
            stopAllGrids(true);
        }
    }
}
```
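The "deadlock" analyzed in the replies below (a stop task that synchronously waits for all tasks to finish, including itself) can be demonstrated with plain JDK executors. This illustrates the pattern only; it is not Ignite code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class SelfWaitDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // Analogue of G.stop(name, false) invoked from inside a compute task:
        // the task asks the pool to stop and then waits for ALL tasks to
        // finish -- including itself.
        Future<Boolean> stopTask = pool.submit(() -> {
            pool.shutdown();
            // The pool cannot terminate while this very task is still
            // running, so this wait always times out and returns false.
            return pool.awaitTermination(1, TimeUnit.SECONDS);
        });

        boolean terminated = stopTask.get(10, TimeUnit.SECONDS);
        System.out.println("terminated while waiting from inside a task: " + terminated);
    }
}
```

In Ignite the real wait is unbounded rather than 1 second, which is why the node appears stuck instead of merely timing out.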

В Чт, 15/03/2018 в 15:59 +, Dmitry Pavlov пишет:
> I see now. Thank you.
> 
> Nikolay, could you please merge this change?
> 
> чт, 15 мар. 2018 г. в 18:48, Vyacheslav Daradur :
> 
> > In brief:
> > Nodes in *separate* JVMs are shut down by the computing task
> > *StopGridTask*, which is sent from the *local* JVM *synchronously*; that
> > means the *local* node must wait for the task to finish.
> >
> > At the same time, the node in the *separate* JVM executes the received
> > *StopGridTask*, which *synchronously* calls *G.stop(igniteInstanceName,
> > FALSE)*, which in turn waits for all computing tasks to finish, including
> > the *StopGridTask* that invoked it.
> >
> > We have some kind of deadlock:
> > the *local* node is waiting for the computing task to finish, which is
> > waiting for *G.stop* to finish, which is waiting for all computing tasks
> > to finish, including *StopGridTask*.
> >
> > We have not noticed that before because we use only stopAllGrids() in
> > our tests, which stops the local JVM without waiting for nodes in other
> > JVMs.
> > 
> > 
> > 
> > On Thu, Mar 15, 2018 at 6:11 PM, Dmitry Pavlov 
> > wrote:
> > > Please address comments in PR.
> > > 
> > > I did not fully understand why the sync GridStopMessage was lost, but
> > > async will be successful. Probably we need to discuss it briefly.
> > > 
> > > чт, 1 мар. 2018 г. в 12:11, Vyacheslav Daradur :
> > > > 
> > > > Thank you, Dmitry!
> > > > 
> > > > I'll join this review soon.
> > > > 
> > > > On Thu, Mar 1, 2018 at 12:07 PM, Dmitry Pavlov 
> > > > wrote:
> > > > > Hi Vyacheslav,
> > > > > 
> > > > > I will take a look, but first of all I am going to review
> > > > > https://reviews.ignite.apache.org/ignite/review/IGNT-CR-502  - it is
> > > > > impact
> > > > > change in testing framework. Hope you also will join to this review .
> > > > > 
> > > > > Sincerely,
> > > > > Dmitiry Pavlov
> > > > > 
> > > > > 
> > > > > чт, 1 мар. 2018 г. в 11:13, Vyacheslav Daradur :
> > > > > > 
> > > > > > Hi, Dmitry, could you please review it, because you are one of the
> > > > > > most experienced people in the testing framework.
> > > > > > 
> > > > > > Please see comment in Jira, because it is in pretty-format there.
> > > > > > 
> > > > > > On Thu, Feb 22, 2018 at 11:56 AM, Vyacheslav Daradur
> > > > > >  wrote:
> > > > > > > Hi Igniters!
> > > > > > > 
> > > > > > > I have investigated the issue [1] and found that stopping a node in a
> > > > > > > separate JVM may stick a thread or leave a system process alive after
> > > > > > > the test has finished.
> > > > > > > The main reason is *StopGridTask*, which we send from the node in the
> > > > > > > local JVM to the node in the separate JVM via remote computing.
> > > > > > > We send the job synchronously to be sure that the node will be stopped,
> > > > > > > but the job synchronously calls *G.stop(igniteInstanceName, cancel)* with
> > > > > > > *cancel = false*, which means the node must wait for compute jobs before
> > > > > > > it goes down, and that leads to some kind of deadlock. Using *cancel =
> > > > > > > true* would solve the issue but may break some tests’ logic; for this
> > > > > > > reason, I've reworked the method’s synchronization logic [2].
> > > > > > >
> > > > > > > We have not noticed that before because we use only *stopAllGrids()*
> > > > > > > in our tests, which stops the local JVM without waiting for nodes in
> > > > > > > other JVMs.
> > > > > > > I believe this fix should reduce the number of flaky tests on
> > > > > > > TeamCity, especially those which fail because a cluster from the
> > > > > > > previous test has not been stopped properly.
> > > > > > >
> > > > > > > CI tests [3] look a bit better than in master.
> > > > > > > Please review the prepared PR [2] and share your thoughts.
> > > > > > >
> > > > > > > [1] 

[GitHub] ignite pull request #3647: Ignite 2.3.4

2018-03-16 Thread devozerov
GitHub user devozerov opened a pull request:

https://github.com/apache/ignite/pull/3647

Ignite 2.3.4



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-2.3.4

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3647.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3647


commit 7a0300ae35894c389b126e95615f720a99a3d360
Author: devozerov 
Date:   2017-10-18T11:18:08Z

Merge branch 'ignite-2.3.1' into ignite-2.3.2

commit 5df25fc8adf01a8e4999563f3f31b79c195801d4
Author: devozerov 
Date:   2017-10-18T12:03:28Z

IGNITE-6662: SQL: fixed affinity key field name resolution during both 
parsig and table creation. This closes #2875.

commit ad01f9b099d0bf92537378859ad6d5a52de57748
Author: Alexey Kuznetsov 
Date:   2017-10-19T02:43:20Z

IGNITE-6647 Web Console: Implemented support of schema migration scripts.
(cherry picked from commit c65399c)

commit 0c66344bc752dac98b256dd140fcab95d1662862
Author: Pavel Tupitsyn 
Date:   2017-10-19T09:36:39Z

IGNITE-6627 .NET: Fix repeated known metadata updates

This closes #2876

commit 9c4411af5f6b6bf7686e52d91daa6b82e089d57a
Author: Pavel Tupitsyn 
Date:   2017-10-19T15:42:12Z

IGNITE-6675 .NET: Fix ignored IgniteConfiguration.IgniteHome

This closes #2886

commit 5dfeb916036984d3d8e12ad6d2d43e17a19f25ba
Author: tledkov-gridgain 
Date:   2017-10-19T19:20:18Z

IGNITE-6529: JDBC: fixed not-null column metadata. This closes #2884.

commit 008d87057734953b4e30059841a14eb2fbc3ddb7
Author: devozerov 
Date:   2017-10-19T19:30:31Z

IGNITE-6684: Renamed "ignitesql.sh|bat" to "sqlline.sh|bat".

commit 1b8abd214ed2afcd3fd1f6a4c71a19d6fe1a4b01
Author: Alexey Kuznetsov 
Date:   2017-10-20T04:23:23Z

IGNITE-6647 Added missing Mongo injector.
(cherry picked from commit 173ecef)

commit 745677631d260cb51cb601ae38af8528aa5d5c66
Author: Ivan Rakov 
Date:   2017-10-20T07:29:57Z

IGNITE-6030 Allow enabling persistence per data region

commit 6c58b4ac7c4527d583de49c4d8b250436273294c
Author: Alexey Goncharuk 
Date:   2017-10-20T10:43:44Z

IGNITE-6030 Fixed misspelled metric

commit 8ee033fdc50b11c6913e1b6ddc100c28f6bf4341
Author: Pavel Tupitsyn 
Date:   2017-10-20T11:38:11Z

IGNITE-6515 .NET: Enable persistence on per-cache basis

This closes #2891

commit 347696d2426ef5be8294253141e299097a6564cc
Author: Anton Vinogradov 
Date:   2017-10-20T14:15:39Z

Removed redundant libs from libs/optional

commit a221066b3d029afc392be704a810c0e830fc0c49
Author: Alexey Kuznetsov 
Date:   2017-10-20T14:15:02Z

IGNITE-6647 Web Console: Added folder for modules migrations.
(cherry picked from commit 3700717)

commit e6cb5300d51c5184e876b988c4683bc605685874
Author: devozerov 
Date:   2017-10-20T15:11:52Z

AI release notes.

commit d196045bf8b719f65b4025409112140196aa206c
Author: devozerov 
Date:   2017-10-21T15:47:04Z

IGNITE-6689: SQL: Added DATA_REGION option for CREATE TABLE.

commit da8a9d5a968ba071697a28adb01bc59f80d1893c
Author: Pavel Tupitsyn 
Date:   2017-10-23T08:55:33Z

Merge branch 'ignite-2.3.1' into ignite-2.3.2

# Conflicts:
#   
modules/platforms/dotnet/Apache.Ignite.Core.Tests/Apache.Ignite.Core.Tests.csproj

commit 69fdac3acf768ecb9df80d4412c4de5ffd5bc4f5
Author: Dmitriy Shabalin 
Date:   2017-10-23T09:09:47Z

IGNITE-5909 Added list editable component.
(cherry picked from commit 01daee6)

commit ec1a8e7f698e584e94284220aa13ff15449f366e
Author: oleg-ostanin 
Date:   2017-10-24T06:48:04Z

IGNITE-6706: Removed ignite-sqlline module from "optional" build directory. 
This closes #2901.

commit 3e52aca47b0a6a0a47f7d063bd0d2bb51489e523
Author: oleg-ostanin 
Date:   2017-10-24T06:49:55Z

IGNITE-6708: Removed ignite-compatibility module from "optional" build 
directory. This closes #2902.

commit 103d5b00aa697acca1d41fe39ec27404ac6ac555
Author: oleg-ostanin 
Date:   2017-10-24T07:32:32Z

IGNITE-6718: Skipped upload of sqlline and compatibility modules into maven 
central during build. This closes #2911.

commit 4a2c38333c112d4956d6394667672c1470503435
Author: apopov 
Date:   2017-10-24T08:56:33Z

IGNITE-6362 NPE in Log4J2Logger

commit b92a9c6cbd7a48d6399da6b8cdaad014ee5770c4
Author: Alexey Goncharuk 
Date:   2017-10-24T11:02:18Z

IGNITE-6721 - Fixed page evictions in mixed mode

commit 5150e8b25340794dba11f73d53e890176c528fb1
Author: Oleg Ostanin 
Date:   

Re: Maven. Issues with flatten plugin

2018-03-16 Thread Dmitry Pavlov
Folks, thank you!

I hope we can now avoid enlisting transitive dependencies in each
module. It will remove extra work from test development.

пт, 16 мар. 2018 г. в 15:48, Nikolay Izhikov :

> Hello, guys.
>
> We finally updated flatten plugin in master.
>
> Petr Ivanov, Alex Volkov - thank you very much!
>
> В Пт, 02/03/2018 в 16:45 +0300, Petr Ivanov пишет:
> > Updated all maven definitions I’ve found in templates of test project.
> > Please, try once more.
> >
> >
> >
> > > On 2 Mar 2018, at 16:36, Nikolay Izhikov  wrote:
> > >
> > > Petr, thank you.
> > >
> > > But seems it doesn't help
> > >
> > > "Failed to execute goal
> org.codehaus.mojo:flatten-maven-plugin:1.0.1:flatten (flatten) on project
> ignite-tools:
> > > The plugin org.codehaus.mojo:flatten-maven-plugin:1.0.1 requires Maven
> version 3.2.5"
> > >
> > >
> https://ci.ignite.apache.org/viewLog.html?buildId=1118290=IgniteTests24Java8_IgniteActivateDeactivateCluster=buildLog&_focus=189
> > >
> > > В Пт, 02/03/2018 в 13:38 +0300, Petr Ivanov пишет:
> > > > Made some changes — 3.3.9 is now default maven.
> > > > Please, rerun failed tests.
> > > >
> > > >
> > > >
> > > > > On 2 Mar 2018, at 13:21, Nikolay Izhikov 
> wrote:
> > > > >
> > > > > Hello, Petr.
> > > > >
> > > > > I run TC for my PR [1] and have some issues on Team City:
> > > > >
> > > > > "Failed to execute goal
> org.codehaus.mojo:flatten-maven-plugin:1.0.1:flatten (flatten) on project
> ignite-tools:
> > > > > The plugin org.codehaus.mojo:flatten-maven-plugin:1.0.1 requires
> Maven version 3.2.5"
> > > > >
> > > > > Can we update maven to version 3.2.5 or higher on all Team city
> agents?
> > > > >
> > > > > [1] https://github.com/apache/ignite/pull/3592
> > > > > [2]
> https://ci.ignite.apache.org/viewLog.html?buildId=1117882=IgniteTests24Java8_IgniteVisorConsoleScala=buildLog&_focus=10143
> > > > >
> > > > > В Пт, 02/03/2018 в 12:15 +0300, Nikolay Izhikov пишет:
> > > > > > Sorry - IGNITE-7862 is the ticket.
> > > > > >
> > > > > > 2 марта 2018 г. 12:14 PM пользователь "Nikolay Izhikov" <
> nizhi...@apache.org> написал:
> > > > > > > Dmitry.
> > > > > > > I'm already done it.
> > > > > > > Will return with PR and TC results soon
> > > > > > >
> > > > > > > 2 марта 2018 г. 11:53 AM пользователь "Dmitry Pavlov" <
> dpavlov@gmail.com> написал:
> > > > > > > > Hi Petr,
> > > > > > > >
> > > > > > > > Thank you, it is great that you found the solution with low
> impact.
> > > > > > > >
> > > > > > > > Lets create ticket and merge PR.
> > > > > > > >
> > > > > > > > пт, 2 мар. 2018 г. в 10:06, Petr Ivanov  >:
> > > > > > > >
> > > > > > > > > The problem is solved by updating flatten-maven-plugin
> version to 1.0.1.
> > > > > > > > >
> > > > > > > > > Nikolay, please, double check it.
> > > > > > > > > If it really solves the problem, please, fill the ticket
> (or point to
> > > > > > > > > existing one), so I can update it and check impact on
> release procedure.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > On 1 Mar 2018, at 17:04, Nikolay Izhikov <
> nizhi...@apache.org> wrote:
> > > > > > > > > >
> > > > > > > > > > Petr.
> > > > > > > > > >
> > > > > > > > > > Thank you for trying!
> > > > > > > > > >
> > > > > > > > > > Did you remove 'test' dependencies before running
> commands?
> > > > > > > > > >
> > > > > > > > > > Because I commit in master only correct pom.xml for
> current build
> > > > > > > > >
> > > > > > > > > process,
> > > > > > > > > > of course.
> > > > > > > > > >
> > > > > > > > > > But, to make things work I have to copy-paste transitive
> > > > > > > > > > dependencies from spark-core.
> > > > > > > > > >
> > > > > > > > > > 1 марта 2018 г. 4:55 PM пользователь "Petr Ivanov" <
> mr.wei...@gmail.com>
> > > > > > > > > > написал:
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > I don't get what is the point.
> > > > > > > > > > > > Did you try to reproduce issue?
> > > > > > > > > > > > Or should I provide full traces to you?
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > My point is the inability to fully understand and describe
> > > > > > > > > > > the problem in terms of the mechanism which causes it.
> > > > > > > > > > > For now I can see only indirect guessing.
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > And yes, I’ve run your commands and both times (in
> spark module
> > > > > > > > >
> > > > > > > > > directory
> > > > > > > > > > > and in root) it produces the same result:
> > > > > > > > > > >
> > > > > > > > > > > 1.
> > > > > > > > > > > ignite (master) $ JAVA_HOME="$(/usr/libexec/java_home
> -v 1.8)" mvn
> > > > > > > > > > > install -U
> -Plgpl,examples,scala,-clean-libs,-release,ml
> > > > > > > > > > > -Dtest=org.apache.ignite.testsuites.IgniteRDDTestSuite
> > > > > > > > > > > -Dmaven.javadoc.skip=true 
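For reference, the plugin pin that resolved this thread would look roughly like the fragment below (a sketch; the exact declaration in the Ignite parent pom may differ). Note that, per the errors above, flatten-maven-plugin 1.0.1 itself requires Maven 3.2.5+, which is why the TeamCity agents were also switched to Maven 3.3.9:

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>flatten-maven-plugin</artifactId>
  <!-- Requires Maven 3.2.5 or newer on the build agent. -->
  <version>1.0.1</version>
</plugin>
```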

[jira] [Created] (IGNITE-7974) Authentication: SQL command to show users

2018-03-16 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-7974:
---

 Summary: Authentication: SQL command to show users
 Key: IGNITE-7974
 URL: https://issues.apache.org/jira/browse/IGNITE-7974
 Project: Ignite
  Issue Type: Task
  Components: general, sql
Reporter: Vladimir Ozerov
 Fix For: 2.5


We introduced SQL commands to add/remove users. It should be possible to list
them. In most databases this is done via special tables. We can do the same
thing and introduce a fake H2 table that delegates to the metastore.
Alternatively, we can implement our own command, e.g. {{SHOW USERS}}, which
will just output the list of users in some convenient form.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
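The two options from the ticket could look like this (purely illustrative syntax; neither exists yet, and the system-table and column names below are made up):

```sql
-- Option 1: a dedicated command.
SHOW USERS;

-- Option 2: a fake H2 system table delegating to the metastore.
SELECT user_name FROM INFORMATION_SCHEMA.USERS;
```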


Re: Abandoned Patch Available JIRA tickets

2018-03-16 Thread Dmitry Pavlov
Hi Igniters,

I want to raise this thread again. If your ticket seems to be stuck in the
review process, please write; I will try to help.

Can't promise it would be fast, but I hope together we can find a solution
for each particular case.

Sincerely,
Dmitriy Pavlov

чт, 8 февр. 2018 г. в 2:08, Denis Magda :

> Anyway, your suggestion requires to have self-disciplined committers that
> will be keeping track of the tickets they promised to review.
>
> I’m ready to propose a guideline here but the committers have to be
> committed to that. Otherwise, I’ll just waste my time.
>
> —
> Denis
>
> > On Feb 7, 2018, at 1:41 AM, Andrey Kuznetsov  wrote:
> >
> > Periodic pings can frustrate committers, and it's also unpleasant for
> > contributors. Most IP->PA transitions are commented with something like
> > "John Doe, this awesome feature is ready and needs your review." Maybe it's
> > better to adopt the following rule of thumb? If the change is clean and
> > straightforward it should be reviewed in a day; otherwise the committer puts
> > a comment in the Jira issue about the planned review time.
> >
> > Is this acceptable?
> >
> > 2018-02-07 0:14 GMT+03:00 Denis Magda >:
> >
> >> I guess it’s all about discipline.
> >>
> >> Committers need to walk-through a list of the pull-request regularly
> while
> >> contributors have to remind of a pending pull-request periodically. So
> both
> >> parts have to be proactive.
> >>
> >> Another approach is to find a volunteer from the community who will keep
> >> an eye on the contributions and spread them out among committers.
> >>
> >> Not sure I like the latter approach; I would rather go for the one where
> >> both the committers and contributors are proactive and disciplined. But
> >> guess what, if you want to make the contributors proactive, then the
> >> committers have to set an example.
> >>
> >> —
> >> Denis
> >>
> >>
> >>
> >
> >
> > --
> > Best regards,
> >  Andrey Kuznetsov.
>
>


Re: [ANNOUNCE] Apache Ignite 2.4.0 Released: Machine Learning GA and Spark DataFrames

2018-03-16 Thread sebb
Perhaps have a look at the announce mails sent by Httpd and Tomcat.

Even though these projects are better known than most, they still
provide a short summary of what they do.
I don't think they read as though written by robots...

On 16 March 2018 at 17:02, Denis Magda  wrote:
> All our previous announcements were formatted precisely the way you suggest.
> However, I haven't found that template effective. Personally, I archive an
> email immediately if I see it's written the standard way and I know nothing
> about the product.
>
> That's why I decided to experiment, targeting those who already know Ignite
> and are interested in the solutions it provides in 2.4. I really appreciate
> your feedback and will see how to incorporate your suggestions into future
> announcements. I just don't like to see emails written by "robots".
>
> --
> Denis
>
>
>
> On Fri, Mar 16, 2018 at 8:34 AM, sebb  wrote:
>>
>> What is the project about? Why should I be interested in it?
>> [rhetorical questions]
>>
>> The Announce emails are sent to people not on the developer or user lists.
>> Most will have no idea what the project is about.
>>
>> So the e-mails should contain at least brief details of what the
>> product does, and some info on why the new release might be of
>> interest to them.
>>
>> Readers should not have to click the link to find out the basic
>> information
>> (although of course it is useful to have such links for further detail).
>>
>> Please can you add that information to future announce mails?
>>
>> Thanks.
>>
>>
>> On 16 March 2018 at 00:09, Denis Magda  wrote:
>> > Usually, Ignite community rolls out a new version once in 3 months, but
>> > we
>> > had to make an exception for Apache Ignite 2.4 that consumed five months
>> > in
>> > total.
>> >
>> > We could easily blame Thanksgiving, Christmas and New Year holidays for
>> > the
>> > delay and would be forgiven, but, in fact, we were forging the release
>> > you
>> > can't just pass by.
>> >
>> > Let's dive in and look for a big fish:
>> > https://blogs.apache.org/ignite/entry/apache-ignite-2-4-brings
>> >
>> > The full list of the changes can be found here:
>> > https://ignite.apache.org/releases/2.4.0/release_notes.html
>> >
>> > Ready to try then navigate to our downloads page:
>> > https://ignite.apache.org/download.cgi
>> >
>> > --
>> > Denis
>
>


Re: Deploying 2.4 artifacts to maven repo

2018-03-16 Thread Denis Magda
Hmm,

MvnRepository has not picked up the changes yet. It's strange. Filed a
ticket:
https://issues.apache.org/jira/browse/INFRA-16198

--
Denis

On Wed, Mar 14, 2018 at 5:34 AM, aaksenov  wrote:

> it was uploaded to Maven Central on 05.03, and it seems it was most probably
> just a local Maven issue with cached metadata. Just not having it in
> mvnrepository became confusing, sorry
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>


Re: Partition eviction failed, this can cause grid hang. (Caused by: java.lang.IllegalStateException: Failed to get page IO instance (page content is corrupted))

2018-03-16 Thread Gaurav Bajaj
Hi,

We also got the exact same error. Ours is a setup without Kubernetes. We are
using the Ignite data streamer to put data into caches. After streaming around
500k records the streamer failed with the exception mentioned in the original email.

Thanks,
Gaurav

On 16-Mar-2018 4:44 PM, "Arseny Kovalchuk" 
wrote:

> Hi Dmitry.
>
> Thanks for your attention to this issue.
>
> I changed repository to jcenter and set Ignite version to 2.4.
> Unfortunately the reproducer starts with the same error message in the log
> (see attached).
>
> I cannot say whether the behavior of the whole cluster will change on 2.4
> (I mean, whether the cluster can start on corrupted data on 2.4), because we
> have wiped the data and restarted the cluster where the problem occurred.
> We'll move to 2.4 next week and continue testing our software. We are
> moving to production in April/May, and it would be good to get some clue
> on how to deal with such data situations in the future.
>
>
>
> Arseny Kovalchuk
>
> Senior Software Engineer at Synesis
> skype: arseny.kovalchuk
> mobile: +375 (29) 666-16-16
> LinkedIn Profile
>
> On 16 March 2018 at 17:03, Dmitry Pavlov  wrote:
>
>> Hi Arseny,
>>
>> I've observed in reproducer
>> ignite_version=2.3.0
>>
>> Could you check if it is reproducible in our freshest release 2.4.0.
>>
>> I'm not sure about ticket number, but it is quite possible issue is
>> already fixed.
>>
>> Sincerely,
>> Dmitriy Pavlov
>>
>> чт, 15 мар. 2018 г. в 19:34, Dmitry Pavlov :
>>
>>> Hi Alexey,
>>>
>>> It may be serious issue. Could you recommend expert here who can pick up
>>> this?
>>>
>>> Sincerely,
>>> Dmitriy Pavlov
>>>
>>> чт, 15 мар. 2018 г. в 19:25, Arseny Kovalchuk <
>>> arseny.kovalc...@synesis.ru>:
>>>
 Hi, guys.

 I've got a reproducer for a problem which is generally reported as
 "Caused by: java.lang.IllegalStateException: Failed to get page IO
 instance (page content is corrupted)". Strictly speaking, it reproduces the
 consequence: I have no idea how the data got corrupted, but the cluster node
 refuses to start with this data.

 We got the issue again when some of the server nodes were restarted several
 times by Kubernetes. I suspect that the data got corrupted during such
 restarts. But the main behavior that we really want is that the cluster
 DOESN'T HANG during the next restart even if the data is corrupted!
 As it is, there is no tool that can help to correct such data, and as a
 result we wipe all data manually to start the cluster. So, warnings
 about corrupted data in the logs plus a working cluster is the expected
 behavior.

 How to reproduce:
 1. Download the data from here: https://storage.googleapis.com/pub-data-0/data5.tar.gz (~200Mb)
 2. Download and import the Gradle project: https://storage.googleapis.com/pub-data-0/project.tar.gz (~100Kb)
 3. Unpack the data to the home folder, say /home/user1. You should get
 the path like */home/user1/data5*. Inside data5 you should have
 binary_meta, db, marshaller.
 4. Open *src/main/resources/data-test.xml* and put the absolute path
 of unpacked data into *workDirectory* property of *igniteCfg5* bean.
 In this example it should be */home/user1/data5.* Do not
 edit consistentId! The consistentId is ignite-instance-5, so the real data
 is in the data5/db/ignite_instance_5 folder
 5. Start application from ru.synesis.kipod.DataTestBootApp
 6. Enjoy

 Hope it will help.

 Arseny Kovalchuk

 Senior Software Engineer at Synesis
 skype: arseny.kovalchuk
 mobile: +375 (29) 666-16-16
 LinkedIn Profile

 On 26 December 2017 at 21:15, Denis Magda  wrote:

> Cross-posting to the dev list.
>
> Ignite persistence maintainers please chime in.
>
> —
> Denis
>
 On Dec 26, 2017, at 2:17 AM, Arseny Kovalchuk <
> arseny.kovalc...@synesis.ru> wrote:
>
> Hi guys.
>
> Another issue when using Ignite 2.3 with native persistence enabled.
> See details below.
>
> We deploy Ignite along with our services in Kubernetes (v 1.8) on
> premises. Ignite cluster is a StatefulSet of 5 Pods (5 instances) of 
> Ignite
> version 2.3. Each Pod mounts PersistentVolume backed by CEPH RBD.
>
> We put about 230 events/second into Ignite, 70% of events are ~200KB
> in size and 30% are 5000KB. Smaller events have indexed fields and we 
> query
> them via SQL.
>
> The cluster is activated from a client node which also streams events
> into Ignite from Kafka. We use custom implementation of streamer which 
> uses
> cache.putAll() API.
>
> We 

Re: [ANNOUNCE] Apache Ignite 2.4.0 Released: Machine Learning GA and Spark DataFrames

2018-03-16 Thread Denis Magda
Thanks for the pointers, will check them up.

Have a good weekend,
Denis

On Fri, Mar 16, 2018 at 11:16 AM, sebb  wrote:

> Perhaps have a look at the announce mails sent by Httpd and Tomcat.
>
> Even though these projects are better known than most, they still
> provide a short summary of what they do.
> I don't think they read as though written by robots...
>
> On 16 March 2018 at 17:02, Denis Magda  wrote:
> > All our previous announcements were formatted precisely the way you
> suggest.
> > However, I haven't fount that template effective. Personally, I archive
> an
> > email immediately if see it's written the standard way and I know nothing
> > about the product.
> >
> > That's why I decided to experiment targeting those who already know
> Ignite
> > and interested in solutions it provided in 2.4. Really appreciate your
> > feedback and will see how to incorporate your suggestions for future
> > announcements. Just don't like to see emails written by "robots".
> >
> > --
> > Denis
> >
> >
> >
> > On Fri, Mar 16, 2018 at 8:34 AM, sebb  wrote:
> >>
> >> What is the project about? Why should I be interested in it?
> >> [rhetorical questions]
> >>
> >> The Announce emails are sent to people not on the developer or user
> lists.
> >> Most will have no idea what the project is about.
> >>
> >> So the e-mails should contain at least brief details of what the
> >> product does, and some info on why the new release might be of
> >> interest to them.
> >>
> >> Readers should not have to click the link to find out the basic
> >> information
> >> (although of course it is useful to have such links for further detail).
> >>
> >> Please can you add that information to future announce mails?
> >>
> >> Thanks.
> >>
> >>
> >> On 16 March 2018 at 00:09, Denis Magda  wrote:
> >> > Usually, Ignite community rolls out a new version once in 3 months,
> but
> >> > we
> >> > had to make an exception for Apache Ignite 2.4 that consumed five
> months
> >> > in
> >> > total.
> >> >
> >> > We could easily blame Thanksgiving, Christmas and New Year holidays
> for
> >> > the
> >> > delay and would be forgiven, but, in fact, we were forging the release
> >> > you
> >> > can't just pass by.
> >> >
> >> > Let's dive in and look for a big fish:
> >> > https://blogs.apache.org/ignite/entry/apache-ignite-2-4-brings
> >> >
> >> > The full list of the changes can be found here:
> >> > https://ignite.apache.org/releases/2.4.0/release_notes.html
> >> >
> >> > Ready to try then navigate to our downloads page:
> >> > https://ignite.apache.org/download.cgi
> >> >
> >> > --
> >> > Denis
> >
> >
>


Re: [ANNOUNCE] Apache Ignite 2.4.0 Released: Machine Learning GA and Spark DataFrames

2018-03-16 Thread Dmitriy Setrakyan
 Denis,

The brief pitch we provide on the home page should be good enough, no?

Apache Ignite™ is a memory-centric distributed database, caching, and
> processing platform for
> transactional, analytical, and streaming workloads, delivering in-memory
> speeds at petabyte scale


D.



On Fri, Mar 16, 2018 at 11:19 AM, Denis Magda  wrote:

> Thanks for the pointers, will check them up.
>
> Have a good weekend,
> Denis
>
> On Fri, Mar 16, 2018 at 11:16 AM, sebb  wrote:
>
> > Perhaps have a look at the announce mails sent by Httpd and Tomcat.
> >
> > Even though these projects are better known than most, they still
> > provide a short summary of what they do.
> > I don't think they read as though written by robots...
> >
> > On 16 March 2018 at 17:02, Denis Magda  wrote:
> > > All our previous announcements were formatted precisely the way you
> > suggest.
> > > However, I haven't fount that template effective. Personally, I archive
> > an
> > > email immediately if see it's written the standard way and I know
> nothing
> > > about the product.
> > >
> > > That's why I decided to experiment targeting those who already know
> > Ignite
> > > and interested in solutions it provided in 2.4. Really appreciate your
> > > feedback and will see how to incorporate your suggestions for future
> > > announcements. Just don't like to see emails written by "robots".
> > >
> > > --
> > > Denis
> > >
> > >
> > >
> > > On Fri, Mar 16, 2018 at 8:34 AM, sebb  wrote:
> > >>
> > >> What is the project about? Why should I be interested in it?
> > >> [rhetorical questions]
> > >>
> > >> The Announce emails are sent to people not on the developer or user
> > lists.
> > >> Most will have no idea what the project is about.
> > >>
> > >> So the e-mails should contain at least brief details of what the
> > >> product does, and some info on why the new release might be of
> > >> interest to them.
> > >>
> > >> Readers should not have to click the link to find out the basic
> > >> information
> > >> (although of course it is useful to have such links for further
> detail).
> > >>
> > >> Please can you add that information to future announce mails?
> > >>
> > >> Thanks.
> > >>
> > >>
> > >> On 16 March 2018 at 00:09, Denis Magda  wrote:
> > >> > Usually, Ignite community rolls out a new version once in 3 months,
> > but
> > >> > we
> > >> > had to make an exception for Apache Ignite 2.4 that consumed five
> > months
> > >> > in
> > >> > total.
> > >> >
> > >> > We could easily blame Thanksgiving, Christmas and New Year holidays
> > for
> > >> > the
> > >> > delay and would be forgiven, but, in fact, we were forging the
> release
> > >> > you
> > >> > can't just pass by.
> > >> >
> > >> > Let's dive in and look for a big fish:
> > >> > https://blogs.apache.org/ignite/entry/apache-ignite-2-4-brings
> > >> >
> > >> > The full list of the changes can be found here:
> > >> > https://ignite.apache.org/releases/2.4.0/release_notes.html
> > >> >
> > >> > Ready to try then navigate to our downloads page:
> > >> > https://ignite.apache.org/download.cgi
> > >> >
> > >> > --
> > >> > Denis
> > >
> > >
> >
>


Re: [ANNOUNCE] Apache Ignite 2.4.0 Released: Machine Learning GA and Spark DataFrames

2018-03-16 Thread Denis Magda
Absolutely. The concern here was that I didn't provide the necessary
description in general.

--
Denis

On Fri, Mar 16, 2018 at 3:51 PM, Dmitriy Setrakyan 
wrote:

>  Denis,
>
> The brief pitch we provide on the home page should be good enough, no?
>
> Apache Ignite™ is a memory-centric distributed database, caching, and
> > processing platform for
> > transactional, analytical, and streaming workloads, delivering in-memory
> > speeds at petabyte scale
>
>
> D.
>
>
>
> On Fri, Mar 16, 2018 at 11:19 AM, Denis Magda  wrote:
>
> > Thanks for the pointers, will check them up.
> >
> > Have a good weekend,
> > Denis
> >
> > On Fri, Mar 16, 2018 at 11:16 AM, sebb  wrote:
> >
> > > Perhaps have a look at the announce mails sent by Httpd and Tomcat.
> > >
> > > Even though these projects are better known than most, they still
> > > provide a short summary of what they do.
> > > I don't think they read as though written by robots...
> > >
> > > On 16 March 2018 at 17:02, Denis Magda  wrote:
> > > > All our previous announcements were formatted precisely the way you
> > > suggest.
> > > > However, I haven't fount that template effective. Personally, I
> archive
> > > an
> > > > email immediately if see it's written the standard way and I know
> > nothing
> > > > about the product.
> > > >
> > > > That's why I decided to experiment targeting those who already know
> > > Ignite
> > > > and interested in solutions it provided in 2.4. Really appreciate
> your
> > > > feedback and will see how to incorporate your suggestions for future
> > > > announcements. Just don't like to see emails written by "robots".
> > > >
> > > > --
> > > > Denis
> > > >
> > > >
> > > >
> > > > On Fri, Mar 16, 2018 at 8:34 AM, sebb  wrote:
> > > >>
> > > >> What is the project about? Why should I be interested in it?
> > > >> [rhetorical questions]
> > > >>
> > > >> The Announce emails are sent to people not on the developer or user
> > > lists.
> > > >> Most will have no idea what the project is about.
> > > >>
> > > >> So the e-mails should contain at least brief details of what the
> > > >> product does, and some info on why the new release might be of
> > > >> interest to them.
> > > >>
> > > >> Readers should not have to click the link to find out the basic
> > > >> information
> > > >> (although of course it is useful to have such links for further
> > detail).
> > > >>
> > > >> Please can you add that information to future announce mails?
> > > >>
> > > >> Thanks.
> > > >>
> > > >>
> > > >> On 16 March 2018 at 00:09, Denis Magda  wrote:
> > > >> > Usually, Ignite community rolls out a new version once in 3
> months,
> > > but
> > > >> > we
> > > >> > had to make an exception for Apache Ignite 2.4 that consumed five
> > > months
> > > >> > in
> > > >> > total.
> > > >> >
> > > >> > We could easily blame Thanksgiving, Christmas and New Year
> holidays
> > > for
> > > >> > the
> > > >> > delay and would be forgiven, but, in fact, we were forging the
> > release
> > > >> > you
> > > >> > can't just pass by.
> > > >> >
> > > >> > Let's dive in and look for a big fish:
> > > >> > https://blogs.apache.org/ignite/entry/apache-ignite-2-4-brings
> > > >> >
> > > >> > The full list of the changes can be found here:
> > > >> > https://ignite.apache.org/releases/2.4.0/release_notes.html
> > > >> >
> > > >> > Ready to try then navigate to our downloads page:
> > > >> > https://ignite.apache.org/download.cgi
> > > >> >
> > > >> > --
> > > >> > Denis
> > > >
> > > >
> > >
> >
>


Re: IgniteSet implementation: changes required

2018-03-16 Thread Dmitriy Setrakyan
On Fri, Mar 16, 2018 at 7:39 AM, Andrey Kuznetsov  wrote:

> Dmitry, your way allows us to reuse the existing {{Ignite.set()}} API to create
> both set flavors. We can adopt it unless somebody in the community objects.
> Personally, I like {{IgniteCache.asSet()}} approach proposed by Vladimir O.
> more, since it emphasizes the difference between sets being created, but
> this will require API extension.
>

Andrey, I am suggesting that Ignite.set(...) in non-collocated mode behaves
exactly the same as the proposed IgniteCache.asSet() method. I do not like
the IgniteCache.asSet() API because it is inconsistent with Ignite data
structure design. All data structures are provided on Ignite API directly
and we should not change that.

D.
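The "set backed by a cache" flavor under discussion can be sketched with a plain Map standing in for the distributed cache. The class below is a toy illustration of the idea (set operations delegating to a key-value store), not the actual Ignite implementation:

```java
import java.util.*;

// Toy sketch of a cache-backed set, as proposed for the non-collocated
// IgniteSet flavor. The Map here is a stand-in for a distributed cache
// (think IgniteCache<E, Boolean>); names are illustrative, not Ignite API.
public class CacheBackedSet<E> extends AbstractSet<E> {
    private final Map<E, Boolean> cache; // presence of a key means membership

    public CacheBackedSet(Map<E, Boolean> cache) {
        this.cache = cache;
    }

    @Override public boolean add(E e)           { return cache.put(e, Boolean.TRUE) == null; }
    @Override public boolean remove(Object o)   { return cache.remove(o) != null; }
    @Override public boolean contains(Object o) { return cache.containsKey(o); }
    @Override public Iterator<E> iterator()     { return cache.keySet().iterator(); }
    @Override public int size()                 { return cache.size(); }
}
```

The point of this shape is that every set operation maps one-to-one onto a cache operation, which is why the thread treats "non-collocated set" and "set view over a cache" as the same thing, differing only in which API creates it.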


Re: IEP-14: Ignite failures handling (Discussion)

2018-03-16 Thread Dmitriy Setrakyan
Thanks Andrey! I have added a few comments to the IEP-14 page.

D.

On Fri, Mar 16, 2018 at 6:44 AM, Andrey Gura  wrote:

> Hi!
>
> Thank you all for your opinions and ideas!
>
> While reading the thread I made two important conclusions:
>
> 1. The proposed API should be changed, because enumerating the possible
> actions is a bad idea. A cleaner and simpler design should let the user
> provide a failure handler implementation with custom failure-handling
> logic if needed.
>
> 2. Several failure handler implementations should be provided out of the
> box in order to offer a simple way of changing the default behaviour
> through configuration. The following implementations should be
> provided:
>
>  - NoOpFailureHandler - useful for tests and debugging.
>  - RestartProcessFailureHandler - a specific implementation that
> can be used only with ignite.(sh|bat).
>  - StopNodeFailureHandler - this implementation will stop the Ignite
> node in case of a critical error.
>  - StopNodeOrHaltFailureHandler(boolean tryStop, long timeout) - the
> default failure handler; it will try to stop the node if the tryStop value
> is true. If the node can't be stopped or tryStop is false, then the JVM
> process will be terminated forcibly (Runtime.halt()). The default value for
> the tryStop parameter is false. Of course, we should limit the node
> shutdown time in order to prevent hangs.
>
> As for the default behavior, I agree with those who believe that the most
> suitable default option is process termination (although I had a
> different opinion before), and the strongest argument for this choice is
> the impossibility of reasoning about the system state in case of a
> critical error.
> Also, I believe that we can't choose a solution that will suit every
> community member, and the best we can do is provide a simple way
> of changing this behavior.
>
> So, I think, the default behavior discussion should be finished. I'll
> update IEP-14 [1] according to my conclusions above. If you have any
> ideas or thoughts about these conclusions, please feel free to share.
>
> Thanks!
>
> [1] https://cwiki.apache.org/confluence/display/IGNITE/IEP-
> 14+Ignite+failures+handling
>
> On Fri, Mar 16, 2018 at 1:07 AM, Dmitriy Setrakyan
>  wrote:
> > On Thu, Mar 15, 2018 at 5:21 AM, Dmitry Pavlov 
> > wrote:
> >
> >> Hi Dmitriy,
> >>
> >> It seems everyone here agrees that killing the process will give a more
> >> guaranteed result. The question is that the majority in the community does
> >> not consider this acceptable in case Ignite is started as an embedded
> >> lib (e.g. from Java, using Ignition.start()).
> >>
> >> What can help to accept the community's opinion? Let's remember Apache
> >> principle: "community first".
> >>
> >
> > I am still confused about the problem the majority of the community is
> > trying to solve. If our priority is to keep the cluster in frozen state,
> > then what is the reason for this task altogether?
> >
> > The priority should be to keep the cluster operational, not frozen. The
> > only solution here is "kill" or "stop+kill". If the community does not
> > accept this option as a default, then I propose to drop this task
> > altogether, because we do not have to do anything to keep the cluster
> > frozen.
> >
> >
> >> If release 2.5 shows us it was impractical, we will change the default
> >> to kill even for the library. What do you think?
> >>
> >
> > See above. I do not see a reason to continue with this task if the end
> > result is identical to what we have today.
> >
> > I want to give the community another chance to speak up and voice their
> > opinions again, having fully understood the context and the problem being
> > solved here.
> >
> > D.
>


Re: IgniteSet implementation: changes required

2018-03-16 Thread Andrey Kuznetsov
Thanks, Dmitry. I agree ultimately; DS API uniformity is a weighty reason.

2018-03-17 3:54 GMT+03:00 Dmitriy Setrakyan :

> On Fri, Mar 16, 2018 at 7:39 AM, Andrey Kuznetsov 
> wrote:
>
> > Dmitry, your way allows to reuse existing {{Ignite.set()}} API to create
> > both set flavors. We can adopt it unless somebody in the community
> objects.
> > Personally, I like {{IgniteCache.asSet()}} approach proposed by Vladimir
> O.
> > more, since it emphasizes the difference between sets being created, but
> > this will require API extension.
> >
>
> Andrey, I am suggesting that Ignite.set(...) in non-collocated mode behaves
> exactly the same as the proposed IgniteCache.asSet() method. I do not like
> the IgniteCache.asSet() API because it is inconsistent with Ignite data
> structure design. All data structures are provided on Ignite API directly
> and we should not change that.
>
> D.
>



-- 
Best regards,
  Andrey Kuznetsov.


[GitHub] ignite pull request #3634: IGNITE-7879: Don't push down expressions with agg...

2018-03-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3634


---


[jira] [Created] (IGNITE-7973) TX SQL: plain INSERT should not be broadcasted to all data nodes

2018-03-16 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-7973:
---

 Summary: TX SQL: plain INSERT should not be broadcasted to all 
data nodes
 Key: IGNITE-7973
 URL: https://issues.apache.org/jira/browse/IGNITE-7973
 Project: Ignite
  Issue Type: Task
  Components: sql
Reporter: Vladimir Ozerov
 Fix For: 2.5


At the moment all {{INSERT}} statements are broadcasted. This could be OK for 
{{INSERT ... SELECT}}, but is definitely not needed for {{INSERT ... VALUES}}. 
Instead we should construct final key-value pairs locally, and then send them 
to affected data nodes.
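The routing the ticket proposes (build the key-value pairs locally, then send each node only its own batch instead of broadcasting) can be sketched as follows. The modulo-hash partition function and the partition-to-node array are toy stand-ins for Ignite's real affinity function and topology:

```java
import java.util.*;

// Sketch of per-node batching for INSERT ... VALUES: rows are grouped by
// owning node locally, so only the affected nodes receive data.
public class InsertRouting {
    // Toy affinity: map a key to a partition by hash (not Ignite's real function).
    static int partition(Object key, int parts) {
        return Math.floorMod(key.hashCode(), parts);
    }

    // Group key->value pairs by owning node; nodeOf[p] maps partition p to a node id.
    static Map<Integer, Map<Object, Object>> batchByNode(Map<Object, Object> rows, int[] nodeOf) {
        Map<Integer, Map<Object, Object>> batches = new HashMap<>();
        for (Map.Entry<Object, Object> e : rows.entrySet()) {
            int node = nodeOf[partition(e.getKey(), nodeOf.length)];
            batches.computeIfAbsent(node, n -> new HashMap<>()).put(e.getKey(), e.getValue());
        }
        return batches; // each entry is one node's private batch
    }
}
```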





New test failures in .NET

2018-03-16 Thread Dmitry Pavlov
Hi,

There are 31 test failures in .NET tests:
https://ci.ignite.apache.org/viewLog.html?buildId=1137460&tab=buildResultsDiv&buildTypeId=IgniteTests24Java8_IgnitePlatformNet
Unfortunately, it continues to reproduce.

Igniters, who can advise how to fix it? Were there any changes in .NET
tests / new test contributions?

It seems there is one issue here, because the 1st failure is
"SetUp method failed. SetUp : System.NullReferenceException : Object
reference not set to an instance of an object."
and the subsequent failures are about Ignite instances.

Sincerely,
Dmitriy Pavlov


[GitHub] ignite pull request #3646: IGNITE-7931

2018-03-16 Thread 1vanan
GitHub user 1vanan opened a pull request:

https://github.com/apache/ignite/pull/3646

IGNITE-7931



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/1vanan/ignite ignite-7931

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3646.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3646


commit 93bd6973893c1e788ff1034980c0e0b8b1c4ef7c
Author: Fedotov 
Date:   2018-03-16T11:45:55Z

change arguments in keys variable




---