Re: Continuous query with Thin client

2021-04-19 Thread Pavel Tupitsyn
There was a mistake in the release notes, I've just fixed it.

Continuous queries were added to the .NET thin client in 2.10,
and the Java thin client gets them in 2.11.
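
For reference, a minimal Java thin client sketch, assuming Ignite 2.11+ where
ClientCache#query accepts a ContinuousQuery the same way the thick-client
IgniteCache#query does; the address, cache name, and types are placeholders:

import javax.cache.Cache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ThinClientCqExample {
    public static void main(String[] args) {
        try (IgniteClient client = Ignition.startClient(
                new ClientConfiguration().setAddresses("127.0.0.1:10800"))) {
            ClientCache<Integer, String> cache = client.getOrCreateCache("myCache");

            ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

            // Local listener: invoked on this client for every matching update.
            qry.setLocalListener(events -> events.forEach(e ->
                System.out.println(e.getKey() + " -> " + e.getValue())));

            // Keep the cursor open for as long as notifications are needed;
            // closing it (or the client) stops the query.
            QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry);

            cache.put(1, "value"); // triggers the listener above
        }
    }
}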

On Mon, Apr 19, 2021 at 12:00 PM Stephen Darlington <
stephen.darling...@gridgain.com> wrote:

> I suspect it’s the backend changes required to support continuous queries
> in thin clients.
>
> On 19 Apr 2021, at 07:59, sourav dihidar  wrote:
>
> Thanks for your reply. I see in the release notes of 2.10.0, under Java thin
> client, it is mentioned:
>
>
>- Added thin Client Continuous Query
>
> Is that something else?
>
> Thanks
>
> On Fri, Apr 16, 2021, 10:11 PM Stephen Darlington <
> stephen.darling...@gridgain.com> wrote:
>
>> Looks like that’s coming in 2.11:
>>
>> IGNITE-14402 
>>
>> It’s been present in the .NET thin client since 2.10.
>>
>> Regards,
>> Stephen
>>
>> On 16 Apr 2021, at 17:21, sourav dihidar  wrote:
>>
>> Hi Team,
>>
>> I understood that continuous queries for the Java thin client have been
>> introduced in the latest Ignite version (2.10.0). Do you have any
>> documentation around this, especially how to use them through the thin client?
>>
>> Thanks
>>
>>
>>
>>
>
>


Re: Continuous query with Thin client

2021-04-19 Thread Stephen Darlington
I suspect it’s the backend changes required to support continuous queries in
thin clients.

> On 19 Apr 2021, at 07:59, sourav dihidar  wrote:
> 
> Thanks for your reply. I see in the release notes of 2.10.0, under Java thin
> client, it is mentioned:
> 
> Added thin Client Continuous Query
> Is that something else?
> 
> Thanks 
> 
> On Fri, Apr 16, 2021, 10:11 PM Stephen Darlington <
> stephen.darling...@gridgain.com> wrote:
> Looks like that’s coming in 2.11:
> 
> IGNITE-14402 
> 
> It’s been present in the .NET thin client since 2.10.
> 
> Regards,
> Stephen
> 
>> On 16 Apr 2021, at 17:21, sourav dihidar wrote:
>> 
>> Hi Team,
>> 
>> I understood that continuous queries for the Java thin client have been introduced
>> in the latest Ignite version (2.10.0). Do you have any documentation around
>> this, especially how to use them through the thin client?
>> 
>> Thanks 
> 
> 




Re: Continuous query with Thin client

2021-04-19 Thread sourav dihidar
Thanks for your reply. I see in the release notes of 2.10.0, under Java thin
client, it is mentioned:


   - Added thin Client Continuous Query

Is that something else?

Thanks

On Fri, Apr 16, 2021, 10:11 PM Stephen Darlington <
stephen.darling...@gridgain.com> wrote:

> Looks like that’s coming in 2.11:
>
> IGNITE-14402 
>
> It’s been present in the .NET thin client since 2.10.
>
> Regards,
> Stephen
>
> On 16 Apr 2021, at 17:21, sourav dihidar  wrote:
>
> Hi Team,
>
> I understood that continuous queries for the Java thin client have been
> introduced in the latest Ignite version (2.10.0). Do you have any
> documentation around this, especially how to use them through the thin client?
>
> Thanks
>
>
>
>


Re: Continuous query with Thin client

2021-04-16 Thread Stephen Darlington
Looks like that’s coming in 2.11:

IGNITE-14402 

It’s been present in the .NET thin client since 2.10.

Regards,
Stephen

> On 16 Apr 2021, at 17:21, sourav dihidar  wrote:
> 
> Hi Team,
> 
> I understood that continuous queries for the Java thin client have been introduced
> in the latest Ignite version (2.10.0). Do you have any documentation around
> this, especially how to use them through the thin client?
> 
> Thanks 




Re: Continuous query not transactional ?

2020-10-16 Thread VeenaMithare
Hi Ilya,

That is what I assume too. Could someone from the developer community help
confirm/comment on this?

regards,
Veena.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Continuous query not transactional ?

2020-10-16 Thread Ilya Kasnacheev
Hello!

I'm not sure, but I would assume that changes are visible after commit(),
but you can see these changes in any order; for example, you could see the
cache a update without the cache b update. This is for committed transactions.

For rolled-back transactions, I don't know. I expect you won't be able to
see the change as you have described, but I won't bet on it.

Regards,
-- 
Ilya Kasnacheev
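
To make the scenario concrete, here is a sketch of the setup quoted below:
caches a and b updated in one transaction, with a continuous query listening
on cache a. It assumes TRANSACTIONAL caches and an embedded server node; the
comment on ordering restates the assumption above, not a documented guarantee:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;

public class TxCqScenario {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        IgniteCache<Integer, String> cacheA = ignite.getOrCreateCache(
            new CacheConfiguration<Integer, String>("a")
                .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL));
        IgniteCache<Integer, String> cacheB = ignite.getOrCreateCache(
            new CacheConfiguration<Integer, String>("b")
                .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL));

        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
        qry.setLocalListener(events -> events.forEach(e ->
            // Per the assumption above: fires after commit, per entry,
            // with no ordering guarantee relative to cache "b" updates.
            System.out.println("cache a updated: " + e.getKey())));
        cacheA.query(qry);

        try (Transaction tx = ignite.transactions().txStart()) {
            cacheA.put(1, "a1");
            cacheB.put(1, "b1"); // if this fails, the tx rolls back and nothing commits
            tx.commit();
        }
    }
}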


Thu, 15 Oct 2020 at 20:35, VeenaMithare :

> Hi ,
>
> This is in continuation of the below statement on this post :
>
> http://apache-ignite-users.70518.x6.nabble.com/Lag-before-records-are-visible-after-transaction-commit-tp33787p33861.html
>
> >>Continuous Query itself is not transactional and it looks like it can't be
> used for this at the moment. So, it gets notification before other entries
> were committed.
>
> Does this mean we could get dirty reads as updates in continuous query?
> I.e., for example, if the code is as below:
> 1. Start transaction
> 2. update records of cache a
> 3. update records of cache b
> 4. update records for cache c
> 5. commit
>
> if the update of cache a succeeds but the update of cache b fails, will the local
> listener for the continuous query on 'cache a' get an update?
>
> regards,
> Veena.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Continuous Query

2020-10-07 Thread narges saleh
Thanks Raymond for the pointer.

On Wed, Oct 7, 2020 at 4:39 PM Raymond Wilson 
wrote:

> It's possible to configure the continuous query to return only keys in the
> cache that are stored on the local node.
>
> On the C# client we do it like this:
>
> var _queryHandle = queueCache.QueryContinuous(
>     qry: new ContinuousQuery<[..KeyType..], [...ValueType...]>(listener) { Local = true },
>     initialQry: new ScanQuery<[..KeyType..], [...ValueType...]> { Local = true });
>
> On Thu, Oct 8, 2020 at 9:53 AM narges saleh  wrote:
>
>> Thanks for the detailed explanation.
>>
>> I don't know how efficient it would be if you have to filter each record
>> one by one and then update each record three times to keep track of the
>> status, if you're dealing with millions of records each hour, even if the
>> cache is partitioned. I guess I will need to benchmark this. Thanks again.
>>
>> On Wed, Oct 7, 2020 at 12:00 PM Denis Magda  wrote:
>>
>>> I recalled a complete solution. That's what you would need to do if you
>>> decide to process records in real-time with continuous queries in the
>>> *fault-tolerant fashion* (using pseudo-code rather than actual Ignite APIs).
>>>
>>> First. You need to add a flag field to the record's class that keeps the
>>> current processing status. Like that:
>>>
>>> MyRecord {
>>> int id;
>>> Date created;
>>> byte status; //0 - not processed, 1 - being processed within a
>>> continuous query filter, 2 - processed by the filter, all the logic
>>> successfully completed
>>> }
>>>
>>> Second. The continuous query filter (that will be executed on nodes that
>>> store a copy of a record) needs to have the following structure.
>>>
>>> @IgniteAsyncCallback
>>> filterMethod(MyRecords updatedRecord) {
>>>
>>> if (isThisNodePrimaryNodeForTheRecord(updatedRecord)) { // execute on a
>>> primary node only
>>> updatedRecord.status = 1 // setting the flag to signify the
>>> processing is started.
>>>
>>> //execute your business logic
>>>
>>> updatedRecord.status = 2 // finished processing
>>> }
>>>   return false; //you don't want a notification to be sent to the client
>>> application or another node that deployed the continuous query
>>> }
>>>
>>> Third. If any node leaves the cluster or the whole cluster is restarted,
>>> then you need to execute your custom logic for all the records with status=0 or
>>> status=1. To do that you can broadcast a compute task:
>>>
>>> // Application side
>>>
>>> int[] unprocessedRecords = "select id from MyRecord where status < 2;"
>>>
>>> IgniteCompute.affinityRun(idsOfUnprocessedRecords,
>>> taskWithMyCustomLogic); //the task will be executed only on the nodes that
>>> store the records
>>>
>>> // Server side
>>>
>>> taskWithMyCustomLogic() {
>>> updatedRecord.status = 1 // setting the flag to signify the
>>> processing is started.
>>>
>>> //execute your business logic
>>>
>>> updatedRecord.status = 2 // finished processing
>>> }
>>>
>>>
>>> That's it. So, the third step already requires you to have a compute
>>> task that would run the calculation in case of failures. Thus, if the
>>> real-time aspect of the processing is not crucial right now, then you can
>>> start with the batch-based approach by running a compute task once at a
>>> time and then introduce the continuous queries-based improvement whenever
>>> is needed. You decide. Hope it helps.
>>>
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Wed, Oct 7, 2020 at 9:19 AM Denis Magda  wrote:
>>>
 So, a lesson for the future, would the continuous query approach still
> be preferable if the calculation involves the cache with the continuous query
> and, say, a lookup table? For example, if I want to check whether the country
> in the employee cache exists in the list of countries that I am interested
> in.


 You can access other caches from within the filter but the logic has to
 be executed asynchronously to avoid deadlocks:
 https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/lang/IgniteAsyncCallback.html

 Also, what do I need to do if I want the filter for the continuous
> query to execute on the cache on the local node only? Say, I have the
> continuous query deployed as singleton service on each node, to capture
> certain changes to a cache on the local node.


 

 The filter will be deployed to every server node, but it is executed only on
 the server node that owns the record that is being modified and passed into
 the filter. Hmm, it's also said that the filter can
 be executed on a backup node. Check if it's true, and then you need to add
 a special check into the filter that would allow executing the logic only
 if it's the primary node:

 https://ignite.apache.org/docs/latest/key-value-api/continuous-queries#remote-filter


 -
 Denis

Re: Continuous Query

2020-10-07 Thread Raymond Wilson
It's possible to configure the continuous query to return only keys in the
cache that are stored on the local node.

On the C# client we do it like this:

var _queryHandle = queueCache.QueryContinuous(
    qry: new ContinuousQuery<[..KeyType..], [...ValueType...]>(listener) { Local = true },
    initialQry: new ScanQuery<[..KeyType..], [...ValueType...]> { Local = true });
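
For the Java API, the equivalent is a sketch along these lines (thick client
assumed; Query#setLocal(true) restricts both the notifications and the initial
scan to data held on the local node):

import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class LocalOnlyCq {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Integer, String> queueCache = ignite.getOrCreateCache("queue");

        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
        qry.setLocal(true); // notifications only for entries stored on this node
        qry.setLocalListener(events -> events.forEach(e ->
            System.out.println("local update: " + e.getKey())));
        qry.setInitialQuery(new ScanQuery<Integer, String>().setLocal(true));

        QueryCursor<Cache.Entry<Integer, String>> cur = queueCache.query(qry);
        for (Cache.Entry<Integer, String> e : cur)
            System.out.println("initial local entry: " + e.getKey());
        // Keep 'cur' open to continue receiving updates; cur.close() stops the query.
    }
}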

On Thu, Oct 8, 2020 at 9:53 AM narges saleh  wrote:

> Thanks for the detailed explanation.
>
> I don't know how efficient it would be if you have to filter each record
> one by one and then update each record three times to keep track of the
> status, if you're dealing with millions of records each hour, even if the
> cache is partitioned. I guess I will need to benchmark this. Thanks again.
>
> On Wed, Oct 7, 2020 at 12:00 PM Denis Magda  wrote:
>
>> I recalled a complete solution. That's what you would need to do if you
>> decide to process records in real-time with continuous queries in the
>> *fault-tolerant fashion* (using pseudo-code rather than actual Ignite APIs).
>>
>> First. You need to add a flag field to the record's class that keeps the
>> current processing status. Like that:
>>
>> MyRecord {
>> int id;
>> Date created;
>> byte status; //0 - not processed, 1 - being processed within a
>> continuous query filter, 2 - processed by the filter, all the logic
>> successfully completed
>> }
>>
>> Second. The continuous query filter (that will be executed on nodes that
>> store a copy of a record) needs to have the following structure.
>>
>> @IgniteAsyncCallback
>> filterMethod(MyRecords updatedRecord) {
>>
>> if (isThisNodePrimaryNodeForTheRecord(updatedRecord)) { // execute on a
>> primary node only
>> updatedRecord.status = 1 // setting the flag to signify the
>> processing is started.
>>
>> //execute your business logic
>>
>> updatedRecord.status = 2 // finished processing
>> }
>>   return false; //you don't want a notification to be sent to the client
>> application or another node that deployed the continuous query
>> }
>>
>> Third. If any node leaves the cluster or the whole cluster is restarted,
>> then you need to execute your custom logic for all the records with status=0 or
>> status=1. To do that you can broadcast a compute task:
>>
>> // Application side
>>
>> int[] unprocessedRecords = "select id from MyRecord where status < 2;"
>>
>> IgniteCompute.affinityRun(idsOfUnprocessedRecords,
>> taskWithMyCustomLogic); //the task will be executed only on the nodes that
>> store the records
>>
>> // Server side
>>
>> taskWithMyCustomLogic() {
>> updatedRecord.status = 1 // setting the flag to signify the
>> processing is started.
>>
>> //execute your business logic
>>
>> updatedRecord.status = 2 // finished processing
>> }
>>
>>
>> That's it. So, the third step already requires you to have a compute task
>> that would run the calculation in case of failures. Thus, if the real-time
>> aspect of the processing is not crucial right now, then you can start with
>> the batch-based approach by running a compute task once in a while and then
>> introduce the continuous queries-based improvement whenever is needed. You
>> decide. Hope it helps.
>>
>>
>> -
>> Denis
>>
>>
>> On Wed, Oct 7, 2020 at 9:19 AM Denis Magda  wrote:
>>
>>> So, a lesson for the future, would the continuous query approach still
 be preferable if the calculation involves the cache with the continuous query
 and, say, a lookup table? For example, if I want to check whether the country
 in the employee cache exists in the list of countries that I am interested in.
>>>
>>>
>>> You can access other caches from within the filter but the logic has to
>>> be executed asynchronously to avoid deadlocks:
>>> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/lang/IgniteAsyncCallback.html
>>>
>>> Also, what do I need to do if I want the filter for the continuous query
 to execute on the cache on the local node only? Say, I have the continuous
 query deployed as singleton service on each node, to capture certain
 changes to a cache on the local node.
>>>
>>>
>>> 
>>>
 The filter will be deployed to every server node, but it is executed only on
 the server node that owns the record that is being modified and passed into
 the filter. Hmm, it's also said that the filter can
>>> be executed on a backup node. Check if it's true, and then you need to add
>>> a special check into the filter that would allow executing the logic only
>>> if it's the primary node:
>>>
>>> https://ignite.apache.org/docs/latest/key-value-api/continuous-queries#remote-filter
>>>
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Wed, Oct 7, 2020 at 4:39 AM narges saleh 
>>> wrote:
>>>
 Also, what do I need to do if I want the filter for the continuous
 query to execute on the cache on the local node only? Say, I have the
 

Re: Continuous Query

2020-10-07 Thread narges saleh
Thanks for the detailed explanation.

I don't know how efficient it would be if you have to filter each record
one by one and then update each record three times to keep track of the
status, if you're dealing with millions of records each hour, even if the
cache is partitioned. I guess I will need to benchmark this. Thanks again.

On Wed, Oct 7, 2020 at 12:00 PM Denis Magda  wrote:

> I recalled a complete solution. That's what you would need to do if you decide
> to process records in real-time with continuous queries in the
> *fault-tolerant fashion* (using pseudo-code rather than actual Ignite APIs).
>
> First. You need to add a flag field to the record's class that keeps the
> current processing status. Like that:
>
> MyRecord {
> int id;
> Date created;
> byte status; //0 - not processed, 1 - being processed within a
> continuous query filter, 2 - processed by the filter, all the logic
> successfully completed
> }
>
> Second. The continuous query filter (that will be executed on nodes that
> store a copy of a record) needs to have the following structure.
>
> @IgniteAsyncCallback
> filterMethod(MyRecords updatedRecord) {
>
> if (isThisNodePrimaryNodeForTheRecord(updatedRecord)) { // execute on a
> primary node only
> updatedRecord.status = 1 // setting the flag to signify the processing
> is started.
>
> //execute your business logic
>
> updatedRecord.status = 2 // finished processing
> }
>   return false; //you don't want a notification to be sent to the client
> application or another node that deployed the continuous query
> }
>
> Third. If any node leaves the cluster or the whole cluster is restarted,
> then you need to execute your custom logic for all the records with status=0 or
> status=1. To do that you can broadcast a compute task:
>
> // Application side
>
> int[] unprocessedRecords = "select id from MyRecord where status < 2;"
>
> IgniteCompute.affinityRun(idsOfUnprocessedRecords, taskWithMyCustomLogic);
> //the task will be executed only on the nodes that store the records
>
> // Server side
>
> taskWithMyCustomLogic() {
> updatedRecord.status = 1 // setting the flag to signify the processing
> is started.
>
> //execute your business logic
>
> updatedRecord.status = 2 // finished processing
> }
>
>
> That's it. So, the third step already requires you to have a compute task
> that would run the calculation in case of failures. Thus, if the real-time
> aspect of the processing is not crucial right now, then you can start with
> the batch-based approach by running a compute task once in a while and then
> introduce the continuous queries-based improvement whenever is needed. You
> decide. Hope it helps.
>
>
> -
> Denis
>
>
> On Wed, Oct 7, 2020 at 9:19 AM Denis Magda  wrote:
>
>> So, a lesson for the future, would the continuous query approach still be
>>> preferable if the calculation involves the cache with the continuous query and,
>>> say, a lookup table? For example, if I want to check whether the country in the
>>> employee cache exists in the list of countries that I am interested in.
>>
>>
>> You can access other caches from within the filter but the logic has to
>> be executed asynchronously to avoid deadlocks:
>> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/lang/IgniteAsyncCallback.html
>>
>> Also, what do I need to do if I want the filter for the continuous query
>>> to execute on the cache on the local node only? Say, I have the continuous
>>> query deployed as singleton service on each node, to capture certain
>>> changes to a cache on the local node.
>>
>>
>> 
>>
>> The filter will be deployed to every server node, but it is executed only on
>> the server node that owns the record that is being modified and passed into
>> the filter. Hmm, it's also said that the filter can be
>> executed on a backup node. Check if it's true, and then you need to add a
>> special check into the filter that would allow executing the logic only if
>> it's the primary node:
>>
>> https://ignite.apache.org/docs/latest/key-value-api/continuous-queries#remote-filter
>>
>>
>> -
>> Denis
>>
>>
>> On Wed, Oct 7, 2020 at 4:39 AM narges saleh  wrote:
>>
>>> Also, what do I need to do if I want the filter for the continuous query
>>> to execute on the cache on the local node only? Say, I have the continuous
>>> query deployed as singleton service on each node, to capture certain
>>> changes to a cache on the local node.
>>>
>>> On Wed, Oct 7, 2020 at 5:54 AM narges saleh 
>>> wrote:
>>>
 Thank you, Denis.
 So, a lesson for the future, would the continuous query approach still
 be preferable if the calculation involves the cache with continuous query
 and say a look up table? For example, if I want to see the country in the
 cache employee exists in the list of the countries that I am interested in.

 On Tue, Oct 6, 2020 at 4:11 PM Denis Magda 

Re: Continuous Query

2020-10-07 Thread Denis Magda
I recalled a complete solution. That's what you would need to do if you decide
to process records in real-time with continuous queries in the
*fault-tolerant fashion* (using pseudo-code rather than actual Ignite APIs).

First. You need to add a flag field to the record's class that keeps the
current processing status. Like that:

MyRecord {
int id;
Date created;
byte status; //0 - not processed, 1 - being processed within a
continuous query filter, 2 - processed by the filter, all the logic
successfully completed
}

Second. The continuous query filter (that will be executed on nodes that
store a copy of a record) needs to have the following structure.

@IgniteAsyncCallback
filterMethod(MyRecords updatedRecord) {

if (isThisNodePrimaryNodeForTheRecord(updatedRecord)) { // execute on a
primary node only
updatedRecord.status = 1 // setting the flag to signify the processing
is started.

//execute your business logic

updatedRecord.status = 2 // finished processing
}
  return false; //you don't want a notification to be sent to the client
application or another node that deployed the continuous query
}

Third. If any node leaves the cluster or the whole cluster is restarted,
then you need to execute your custom logic for all the records with status=0 or
status=1. To do that you can broadcast a compute task:

// Application side

int[] unprocessedRecords = "select id from MyRecord where status < 2;"

IgniteCompute.affinityRun(idsOfUnprocessedRecords, taskWithMyCustomLogic);
//the task will be executed only on the nodes that store the records

// Server side

taskWithMyCustomLogic() {
updatedRecord.status = 1 // setting the flag to signify the processing
is started.

//execute your business logic

updatedRecord.status = 2 // finished processing
}


That's it. So, the third step already requires you to have a compute task
that would run the calculation in case of failures. Thus, if the real-time
aspect of the processing is not crucial right now, then you can start with
the batch-based approach by running a compute task once in a while and then
introduce the continuous queries-based improvement whenever is needed. You
decide. Hope it helps.


-
Denis
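
A hedged Java rendering of the filter sketched above, assuming the MyRecord
class from the pseudo-code, a cache named "myRecords" keyed by the record id,
and a hypothetical markStatus() helper; real status writes should go back
through the cache, which is why the filter is annotated with @IgniteAsyncCallback:

import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.Ignite;
import org.apache.ignite.cache.CacheEntryEventSerializableFilter;
import org.apache.ignite.cache.affinity.Affinity;
import org.apache.ignite.lang.IgniteAsyncCallback;
import org.apache.ignite.resources.IgniteInstanceResource;

@IgniteAsyncCallback // filter runs in the async pool, so cache access here can't deadlock
public class PrimaryOnlyFilter implements CacheEntryEventSerializableFilter<Integer, MyRecord> {
    @IgniteInstanceResource
    private Ignite ignite; // injected on the server node where the filter executes

    @Override
    public boolean evaluate(CacheEntryEvent<? extends Integer, ? extends MyRecord> evt) {
        Affinity<Integer> aff = ignite.affinity("myRecords");

        // Execute the logic only on the node that holds the record's primary copy.
        if (aff.isPrimary(ignite.cluster().localNode(), evt.getKey())) {
            markStatus(evt.getKey(), (byte) 1); // being processed
            // ... business logic ...
            markStatus(evt.getKey(), (byte) 2); // processed
        }

        return false; // never notify the side that deployed the continuous query
    }

    private void markStatus(Integer id, byte status) {
        // Hypothetical helper: write the status flag back to the "myRecords"
        // cache, e.g. with cache.invoke() and an EntryProcessor.
    }
}

The filter would then be attached to the query via setRemoteFilterFactory,
e.g. qry.setRemoteFilterFactory(FactoryBuilder.factoryOf(PrimaryOnlyFilter.class)).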


On Wed, Oct 7, 2020 at 9:19 AM Denis Magda  wrote:

> So, a lesson for the future, would the continuous query approach still be
>> preferable if the calculation involves the cache with the continuous query and,
>> say, a lookup table? For example, if I want to check whether the country in the
>> employee cache exists in the list of countries that I am interested in.
>
>
> You can access other caches from within the filter but the logic has to be
> executed asynchronously to avoid deadlocks:
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/lang/IgniteAsyncCallback.html
>
> Also, what do I need to do if I want the filter for the continuous query
>> to execute on the cache on the local node only? Say, I have the continuous
>> query deployed as singleton service on each node, to capture certain
>> changes to a cache on the local node.
>
>
> 
>
> The filter will be deployed to every server node, but it is executed only on
> the server node that owns the record that is being modified and passed into
> the filter. Hmm, it's also said that the filter can be
> executed on a backup node. Check if it's true, and then you need to add a
> special check into the filter that would allow executing the logic only if
> it's the primary node:
>
> https://ignite.apache.org/docs/latest/key-value-api/continuous-queries#remote-filter
>
>
> -
> Denis
>
>
> On Wed, Oct 7, 2020 at 4:39 AM narges saleh  wrote:
>
>> Also, what do I need to do if I want the filter for the continuous query
>> to execute on the cache on the local node only? Say, I have the continuous
>> query deployed as singleton service on each node, to capture certain
>> changes to a cache on the local node.
>>
>> On Wed, Oct 7, 2020 at 5:54 AM narges saleh  wrote:
>>
>>> Thank you, Denis.
>>> So, a lesson for the future, would the continuous query approach still
>>> be preferable if the calculation involves the cache with continuous query
>>> and say a look up table? For example, if I want to see the country in the
>>> cache employee exists in the list of the countries that I am interested in.
>>>
>>> On Tue, Oct 6, 2020 at 4:11 PM Denis Magda  wrote:
>>>
 Thanks

 Then, I would consider the continuous queries based solution as long as
 the records can be updated in real-time:

- You can process the records on the fly and don't need to come up
with any batch task.
- The continuous query filter will be executed once on a node that
stores the record's primary copy. If the primary node fails in the 
 middle
of the filter's calculation execution, then the filter will be executed 
 on
a backup node. So, you will not lose 

Re: Continuous Query

2020-10-07 Thread Denis Magda
>
> So, a lesson for the future, would the continuous query approach still be
> preferable if the calculation involves the cache with the continuous query and,
> say, a lookup table? For example, if I want to check whether the country in the
> employee cache exists in the list of countries that I am interested in.


You can access other caches from within the filter but the logic has to be
executed asynchronously to avoid deadlocks:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/lang/IgniteAsyncCallback.html

Also, what do I need to do if I want the filter for the continuous query to
> execute on the cache on the local node only? Say, I have the continuous
> query deployed as singleton service on each node, to capture certain
> changes to a cache on the local node.



The filter will be deployed to every server node, but it is executed only on
the server node that owns the record that is being modified and passed into
the filter. Hmm, it's also said that the filter can be
executed on a backup node. Check if it's true, and then you need to add a
special check into the filter that would allow executing the logic only if
it's the primary node:
https://ignite.apache.org/docs/latest/key-value-api/continuous-queries#remote-filter


-
Denis


On Wed, Oct 7, 2020 at 4:39 AM narges saleh  wrote:

> Also, what do I need to do if I want the filter for the continuous query
> to execute on the cache on the local node only? Say, I have the continuous
> query deployed as singleton service on each node, to capture certain
> changes to a cache on the local node.
>
> On Wed, Oct 7, 2020 at 5:54 AM narges saleh  wrote:
>
>> Thank you, Denis.
>> So, a lesson for the future, would the continuous query approach still be
>> preferable if the calculation involves the cache with the continuous query and,
>> say, a lookup table? For example, if I want to check whether the country in the
>> employee cache exists in the list of countries that I am interested in.
>>
>> On Tue, Oct 6, 2020 at 4:11 PM Denis Magda  wrote:
>>
>>> Thanks
>>>
>>> Then, I would consider the continuous queries based solution as long as
>>> the records can be updated in real-time:
>>>
>>>- You can process the records on the fly and don't need to come up
>>>with any batch task.
>>>- The continuous query filter will be executed once on a node that
>>>stores the record's primary copy. If the primary node fails in the middle
>>>of the filter's calculation execution, then the filter will be executed 
>>> on
>>>a backup node. So, you will not lose any updates but might need to
>>>introduce some logic/flag that confirms the calculation is not executed
>>>twice for a single record (this can happen if the primary node failed in
>>>the middle of the calculation execution and then the backup node picked 
>>> up
>>>and started executing the calculation from scratch).
>>>- Updates of other tables or records from within the continuous
>>>query filter must go through an async thread pool. You need to use
>>>IgniteAsyncCallback annotation for that:
>>>
>>> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/lang/IgniteAsyncCallback.html
>>>
>>> Alternatively, you can always run the calculation in batch fashion:
>>>
>>>- Run a compute task once in a while
>>>- Read all the latest records that satisfy the requests with SQL or
>>>any other APIs
>>>- Complete the calculation, mark already-processed records just in
>>>case everything fails in the middle and you need to run the
>>>calculation from scratch
>>>
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Mon, Oct 5, 2020 at 8:33 PM narges saleh 
>>> wrote:
>>>
 Denis
  The calculation itself doesn't involve an update or read of another
 record, but based on the outcome of the calculation, the process might make
 changes in some other tables.

 thanks.

 On Mon, Oct 5, 2020 at 7:04 PM Denis Magda  wrote:

> Good. Another clarification:
>
>- Does that calculation change the state of the record (updates
>any fields)?
>- Does the calculation read or update any other records?
>
> -
> Denis
>
>
> On Sat, Oct 3, 2020 at 1:34 PM narges saleh 
> wrote:
>
>> The latter; the server needs to perform some calculations on the data
>> without sending any notification to the app.
>>
>> On Fri, Oct 2, 2020 at 4:25 PM Denis Magda  wrote:
>>
>>> And after you detect a record that satisfies the condition, do you
>>> need to send any notification to the application? Or is it more like a
>>> server detects and does some calculation logically without updating the 
>>> app.
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Fri, Oct 2, 2020 at 11:22 AM narges saleh 
>>> wrote:
>>>
 The detection 

Re: Continuous Query

2020-10-07 Thread narges saleh
Also, what do I need to do if I want the filter for the continuous query to
execute on the cache on the local node only? Say, I have the continuous
query deployed as singleton service on each node, to capture certain
changes to a cache on the local node.

On Wed, Oct 7, 2020 at 5:54 AM narges saleh  wrote:

> Thank you, Denis.
> So, a lesson for the future, would the continuous query approach still be
> preferable if the calculation involves the cache with the continuous query and,
> say, a lookup table? For example, if I want to check whether the country in the
> employee cache exists in the list of countries that I am interested in.
>
> On Tue, Oct 6, 2020 at 4:11 PM Denis Magda  wrote:
>
>> Thanks
>>
>> Then, I would consider the continuous queries based solution as long as
>> the records can be updated in real-time:
>>
>>- You can process the records on the fly and don't need to come up
>>with any batch task.
>>- The continuous query filter will be executed once on a node that
>>stores the record's primary copy. If the primary node fails in the middle
>>of the filter's calculation execution, then the filter will be executed on
>>a backup node. So, you will not lose any updates but might need to
>>introduce some logic/flag that confirms the calculation is not executed
>>twice for a single record (this can happen if the primary node failed in
>>the middle of the calculation execution and then the backup node picked up
>>and started executing the calculation from scratch).
>>- Updates of other tables or records from within the continuous query
>>filter must go through an async thread pool. You need to use
>>IgniteAsyncCallback annotation for that:
>>
>> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/lang/IgniteAsyncCallback.html
>>
>> Alternatively, you can always run the calculation in batch fashion:
>>
>>- Run a compute task once in a while
>>- Read all the latest records that satisfy the requests with SQL or
>>any other APIs
>>- Complete the calculation, mark already-processed records just in
>>case everything fails in the middle and you need to run the
>>calculation from scratch
>>
>>
>> -
>> Denis
>>
>>
>> On Mon, Oct 5, 2020 at 8:33 PM narges saleh  wrote:
>>
>>> Denis
>>>  The calculation itself doesn't involve an update or read of another
>>> record, but based on the outcome of the calculation, the process might make
>>> changes in some other tables.
>>>
>>> thanks.
>>>
>>> On Mon, Oct 5, 2020 at 7:04 PM Denis Magda  wrote:
>>>
 Good. Another clarification:

- Does that calculation change the state of the record (updates any
fields)?
- Does the calculation read or update any other records?

 -
 Denis


 On Sat, Oct 3, 2020 at 1:34 PM narges saleh 
 wrote:

> The latter; the server needs to perform some calculations on the data
> without sending any notification to the app.
>
> On Fri, Oct 2, 2020 at 4:25 PM Denis Magda  wrote:
>
>> And after you detect a record that satisfies the condition, do you
>> need to send any notification to the application? Or is it more like a
>> server detects and does some calculation logically without updating the 
>> app.
>>
>> -
>> Denis
>>
>>
>> On Fri, Oct 2, 2020 at 11:22 AM narges saleh 
>> wrote:
>>
>>> The detection should happen at most a couple of minutes after a
>>> record is inserted in the cache but all the detections are local to the
>>> node. But some records with the current timestamp might show up in the
>>> system with big delays.
>>>
>>> On Fri, Oct 2, 2020 at 12:23 PM Denis Magda 
>>> wrote:
>>>
 What are your requirements? Do you need to process the records as
 soon as they are put into the cluster?



 On Friday, October 2, 2020, narges saleh 
 wrote:

> Thank you Dennis for the reply.
> From the perspective of performance/resource overhead and
> reliability, which approach is preferable? Does a continuous query 
> based
> approach impose a lot more overhead?
>
> On Fri, Oct 2, 2020 at 9:52 AM Denis Magda 
> wrote:
>
>> Hi Narges,
>>
>> Use continuous queries if you need to be notified in real-time,
>> i.e. 1) a record is inserted, 2) the continuous filter confirms the
>> record's time satisfies your condition, 3) the continuous queries 
>> notifies
>> your application that does require processing.
>>
>> The jobs are better for a batching use case when it's ok to
>> process records together with some delay.
>>
>>
>> -
>> Denis
>>
>>
>> On Fri, Oct 2, 2020 at 3:50 AM narges saleh 
>> wrote:
>>

Re: Continuous Query

2020-10-07 Thread narges saleh
Thank you, Denis.
So, a lesson for the future, would the continuous query approach still be
preferable if the calculation involves the cache with the continuous query and,
say, a lookup table? For example, if I want to check whether the country in the
employee cache exists in the list of countries that I am interested in.

On Tue, Oct 6, 2020 at 4:11 PM Denis Magda  wrote:

> Thanks
>
> Then, I would consider the continuous queries based solution as long as
> the records can be updated in real-time:
>
>- You can process the records on the fly and don't need to come up
>with any batch task.
>- The continuous query filter will be executed once on a node that
>stores the record's primary copy. If the primary node fails in the middle
>of the filter's calculation execution, then the filter will be executed on
>a backup node. So, you will not lose any updates but might need to
>introduce some logic/flag that confirms the calculation is not executed
>twice for a single record (this can happen if the primary node failed in
>the middle of the calculation execution and then the backup node picked up
>and started executing the calculation from scratch).
>- Updates of other tables or records from within the continuous query
>filter must go through an async thread pool. You need to use
>IgniteAsyncCallback annotation for that:
>
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/lang/IgniteAsyncCallback.html
>
> Alternatively, you can always run the calculation in batch fashion:
>
>- Run a compute task once in a while
>- Read all the latest records that satisfy the requests with SQL or
>any other APIs
>- Complete the calculation, mark already-processed records just in
>case everything fails in the middle and you need to run the
>calculation from scratch
>
>
> -
> Denis
>
>
> On Mon, Oct 5, 2020 at 8:33 PM narges saleh  wrote:
>
>> Denis
>>  The calculation itself doesn't involve an update or read of another
>> record, but based on the outcome of the calculation, the process might make
>> changes in some other tables.
>>
>> thanks.
>>
>> On Mon, Oct 5, 2020 at 7:04 PM Denis Magda  wrote:
>>
>>> Good. Another clarification:
>>>
>>>- Does that calculation change the state of the record (updates any
>>>fields)?
>>>- Does the calculation read or update any other records?
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Sat, Oct 3, 2020 at 1:34 PM narges saleh 
>>> wrote:
>>>
 The latter; the server needs to perform some calculations on the data
 without sending any notification to the app.

 On Fri, Oct 2, 2020 at 4:25 PM Denis Magda  wrote:

> And after you detect a record that satisfies the condition, do you
> need to send any notification to the application? Or is it more like a
> server detects and does some calculation logically without updating the 
> app.
>
> -
> Denis
>
>
> On Fri, Oct 2, 2020 at 11:22 AM narges saleh 
> wrote:
>
>> The detection should happen at most a couple of minutes after a
>> record is inserted in the cache but all the detections are local to the
>> node. But some records with the current timestamp might show up in the
>> system with big delays.
>>
>> On Fri, Oct 2, 2020 at 12:23 PM Denis Magda 
>> wrote:
>>
>>> What are your requirements? Do you need to process the records as
>>> soon as they are put into the cluster?
>>>
>>>
>>>
>>> On Friday, October 2, 2020, narges saleh 
>>> wrote:
>>>
 Thank you Dennis for the reply.
 From the perspective of performance/resource overhead and
 reliability, which approach is preferable? Does a continuous query 
 based
 approach impose a lot more overhead?

 On Fri, Oct 2, 2020 at 9:52 AM Denis Magda 
 wrote:

> Hi Narges,
>
> Use continuous queries if you need to be notified in real-time,
> i.e. 1) a record is inserted, 2) the continuous filter confirms the
> record's time satisfies your condition, 3) the continuous queries 
> notifies
> your application that does require processing.
>
> The jobs are better for a batching use case when it's ok to
> process records together with some delay.
>
>
> -
> Denis
>
>
> On Fri, Oct 2, 2020 at 3:50 AM narges saleh 
> wrote:
>
>> Hi All,
>>  If I want to watch for a rolling timestamp pattern in all the
>> records that get inserted to all my caches, is it more efficient to 
>> use
>> timer based jobs (that checks all the records in some interval) or
>> continuous queries that locally filter on the pattern? These records 
>> can
>> get inserted in any order  and some can arrive with delays.
>> An 

Re: Continuous Query

2020-10-06 Thread Denis Magda
Thanks

Then, I would consider the continuous queries based solution as long as the
records can be updated in real-time:

   - You can process the records on the fly and don't need to come up with
   any batch task.
   - The continuous query filter will be executed once on a node that
   stores the record's primary copy. If the primary node fails in the middle
   of the filter's calculation execution, then the filter will be executed on
   a backup node. So, you will not lose any updates but might need to
   introduce some logic/flag that confirms the calculation is not executed
   twice for a single record (this can happen if the primary node failed in
   the middle of the calculation execution and then the backup node picked up
   and started executing the calculation from scratch).
   - Updates of other tables or records from within the continuous query
   filter must go through an async thread pool. You need to use
   IgniteAsyncCallback annotation for that:
   
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/lang/IgniteAsyncCallback.html

Alternatively, you can always run the calculation in batch fashion:

   - Run a compute task once in a while
   - Read all the latest records that satisfy the requests with SQL or any
   other APIs
   - Complete the calculation, mark already-processed records just in case
   everything fails in the middle and you need to run the calculation
   from scratch


-
Denis
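
A sketch of that batch alternative: broadcast a compute job and let each node
scan only its own data. The cache name, the MyRecord class, and the status
convention are carried over from the pseudo-code earlier in the thread:

import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.lang.IgniteRunnable;
import org.apache.ignite.resources.IgniteInstanceResource;

public class BatchSweep {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        // Run once in a while, e.g. from a scheduler.
        ignite.compute().broadcast(new SweepJob());
    }

    static class SweepJob implements IgniteRunnable {
        @IgniteInstanceResource
        private transient Ignite ignite;

        @Override public void run() {
            IgniteCache<Integer, MyRecord> cache = ignite.cache("myRecords");

            // Local scan: each node sees only the records it stores.
            ScanQuery<Integer, MyRecord> qry = new ScanQuery<>((k, v) -> v.status < 2);
            qry.setLocal(true);

            for (Cache.Entry<Integer, MyRecord> e : cache.query(qry)) {
                // ... business logic ...
                // then mark the record processed (status = 2) so a rerun skips it.
            }
        }
    }
}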


On Mon, Oct 5, 2020 at 8:33 PM narges saleh  wrote:

> Denis
>  The calculation itself doesn't involve an update or read of another
> record, but based on the outcome of the calculation, the process might make
> changes in some other tables.
>
> thanks.
>
> On Mon, Oct 5, 2020 at 7:04 PM Denis Magda  wrote:
>
>> Good. Another clarification:
>>
>>- Does that calculation change the state of the record (updates any
>>fields)?
>>- Does the calculation read or update any other records?
>>
>> -
>> Denis
>>
>>
>> On Sat, Oct 3, 2020 at 1:34 PM narges saleh  wrote:
>>
>>> The latter; the server needs to perform some calculations on the data
>>> without sending any notification to the app.
>>>
>>> On Fri, Oct 2, 2020 at 4:25 PM Denis Magda  wrote:
>>>
 And after you detect a record that satisfies the condition, do you need
 to send any notification to the application? Or is it more like a server
 detects and does some calculation logically without updating the app.

 -
 Denis


 On Fri, Oct 2, 2020 at 11:22 AM narges saleh 
 wrote:

> The detection should happen at most a couple of minutes after a record
> is inserted in the cache but all the detections are local to the node. But
> some records with the current timestamp might show up in the system with
> big delays.
>
> On Fri, Oct 2, 2020 at 12:23 PM Denis Magda  wrote:
>
>> What are your requirements? Do you need to process the records as
>> soon as they are put into the cluster?
>>
>>
>>
>> On Friday, October 2, 2020, narges saleh 
>> wrote:
>>
>>> Thank you Dennis for the reply.
>>> From the perspective of performance/resource overhead and
>>> reliability, which approach is preferable? Does a continuous query based
>>> approach impose a lot more overhead?
>>>
>>> On Fri, Oct 2, 2020 at 9:52 AM Denis Magda 
>>> wrote:
>>>
 Hi Narges,

 Use continuous queries if you need to be notified in real-time,
 i.e. 1) a record is inserted, 2) the continuous filter confirms the
 record's time satisfies your condition, 3) the continuous queries 
 notifies
 your application that does require processing.

 The jobs are better for a batching use case when it's ok to process
 records together with some delay.


 -
 Denis


 On Fri, Oct 2, 2020 at 3:50 AM narges saleh 
 wrote:

> Hi All,
>  If I want to watch for a rolling timestamp pattern in all the
> records that get inserted to all my caches, is it more efficient to 
> use
> timer based jobs (that checks all the records in some interval) or
> continuous queries that locally filter on the pattern? These records 
> can
> get inserted in any order  and some can arrive with delays.
> An example is to watch for all the records whose timestamp ends in
> 50, if the timestamp is in the format yyyy-mm-dd hh:mi.
>
> thanks
>
>
>>
>> --
>> -
>> Denis
>>
>>


Re: Continuous Query

2020-10-05 Thread narges saleh
Denis
 The calculation itself doesn't involve an update or read of another
record, but based on the outcome of the calculation, the process might make
changes in some other tables.

thanks.

On Mon, Oct 5, 2020 at 7:04 PM Denis Magda  wrote:

> Good. Another clarification:
>
>- Does that calculation change the state of the record (updates any
>fields)?
>- Does the calculation read or update any other records?
>
> -
> Denis
>
>
> On Sat, Oct 3, 2020 at 1:34 PM narges saleh  wrote:
>
>> The latter; the server needs to perform some calculations on the data
>> without sending any notification to the app.
>>
>> On Fri, Oct 2, 2020 at 4:25 PM Denis Magda  wrote:
>>
>>> And after you detect a record that satisfies the condition, do you need
>>> to send any notification to the application? Or is it more like a server
>>> detects and does some calculation logically without updating the app.
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Fri, Oct 2, 2020 at 11:22 AM narges saleh 
>>> wrote:
>>>
 The detection should happen at most a couple of minutes after a record
 is inserted in the cache but all the detections are local to the node. But
 some records with the current timestamp might show up in the system with
 big delays.

 On Fri, Oct 2, 2020 at 12:23 PM Denis Magda  wrote:

> What are your requirements? Do you need to process the records as soon
> as they are put into the cluster?
>
>
>
> On Friday, October 2, 2020, narges saleh  wrote:
>
>> Thank you Dennis for the reply.
>> From the perspective of performance/resource overhead and
>> reliability, which approach is preferable? Does a continuous query based
>> approach impose a lot more overhead?
>>
>> On Fri, Oct 2, 2020 at 9:52 AM Denis Magda  wrote:
>>
>>> Hi Narges,
>>>
>>> Use continuous queries if you need to be notified in real-time, i.e.
>>> 1) a record is inserted, 2) the continuous filter confirms the record's
>>> time satisfies your condition, 3) the continuous queries notifies your
>>> application that does require processing.
>>>
>>> The jobs are better for a batching use case when it's ok to process
>>> records together with some delay.
>>>
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Fri, Oct 2, 2020 at 3:50 AM narges saleh 
>>> wrote:
>>>
 Hi All,
  If I want to watch for a rolling timestamp pattern in all the
 records that get inserted to all my caches, is it more efficient to use
 timer based jobs (that checks all the records in some interval) or
 continuous queries that locally filter on the pattern? These records 
 can
 get inserted in any order  and some can arrive with delays.
 An example is to watch for all the records whose timestamp ends in
 50, if the timestamp is in the format yyyy-mm-dd hh:mi.

 thanks


>
> --
> -
> Denis
>
>


Re: Continuous Query

2020-10-05 Thread Denis Magda
Good. Another clarification:

   - Does that calculation change the state of the record (updates any
   fields)?
   - Does the calculation read or update any other records?

-
Denis


On Sat, Oct 3, 2020 at 1:34 PM narges saleh  wrote:

> The latter; the server needs to perform some calculations on the data
> without sending any notification to the app.
>
> On Fri, Oct 2, 2020 at 4:25 PM Denis Magda  wrote:
>
>> And after you detect a record that satisfies the condition, do you need
>> to send any notification to the application? Or is it more like a server
>> detects and does some calculation logically without updating the app.
>>
>> -
>> Denis
>>
>>
>> On Fri, Oct 2, 2020 at 11:22 AM narges saleh 
>> wrote:
>>
>>> The detection should happen at most a couple of minutes after a record
>>> is inserted in the cache but all the detections are local to the node. But
>>> some records with the current timestamp might show up in the system with
>>> big delays.
>>>
>>> On Fri, Oct 2, 2020 at 12:23 PM Denis Magda  wrote:
>>>
 What are your requirements? Do you need to process the records as soon
 as they are put into the cluster?



 On Friday, October 2, 2020, narges saleh  wrote:

> Thank you Dennis for the reply.
> From the perspective of performance/resource overhead and reliability,
> which approach is preferable? Does a continuous query based approach 
> impose
> a lot more overhead?
>
> On Fri, Oct 2, 2020 at 9:52 AM Denis Magda  wrote:
>
>> Hi Narges,
>>
>> Use continuous queries if you need to be notified in real-time, i.e.
>> 1) a record is inserted, 2) the continuous filter confirms the record's
>> time satisfies your condition, 3) the continuous queries notifies your
>> application that does require processing.
>>
>> The jobs are better for a batching use case when it's ok to process
>> records together with some delay.
>>
>>
>> -
>> Denis
>>
>>
>> On Fri, Oct 2, 2020 at 3:50 AM narges saleh 
>> wrote:
>>
>>> Hi All,
>>>  If I want to watch for a rolling timestamp pattern in all the
>>> records that get inserted to all my caches, is it more efficient to use
>>> timer based jobs (that checks all the records in some interval) or
>>> continuous queries that locally filter on the pattern? These records can
>>> get inserted in any order  and some can arrive with delays.
>>> An example is to watch for all the records whose timestamp ends in
>>> 50, if the timestamp is in the format yyyy-mm-dd hh:mi.
>>>
>>> thanks
>>>
>>>

 --
 -
 Denis




Re: Continuous Query

2020-10-05 Thread Ilya Kasnacheev
Please send an empty message to: user-unsubscr...@ignite.apache.org to
unsubscribe yourself from the list.

Regards,
-- 
Ilya Kasnacheev


Mon, 5 Oct 2020 at 07:35, Priya Yadav :

> unsubscribe
> --
> *From:* narges saleh 
> *Sent:* Sunday, 4 October 2020 2:03 AM
> *To:* user@ignite.apache.org 
> *Subject:* Re: Continuous Query
>
> The latter; the server needs to perform some calculations on the data
> without sending any notification to the app.
>
> On Fri, Oct 2, 2020 at 4:25 PM Denis Magda  wrote:
>
> And after you detect a record that satisfies the condition, do you need to
> send any notification to the application? Or is it more like a server
> detects and does some calculation logically without updating the app.
>
> -
> Denis
>
>
> On Fri, Oct 2, 2020 at 11:22 AM narges saleh  wrote:
>
> The detection should happen at most a couple of minutes after a record is
> inserted in the cache but all the detections are local to the node. But
> some records with the current timestamp might show up in the system with
> big delays.
>
> On Fri, Oct 2, 2020 at 12:23 PM Denis Magda  wrote:
>
> What are your requirements? Do you need to process the records as soon as
> they are put into the cluster?
>
>
>
> On Friday, October 2, 2020, narges saleh  wrote:
>
> Thank you Dennis for the reply.
> From the perspective of performance/resource overhead and reliability,
> which approach is preferable? Does a continuous query based approach impose
> a lot more overhead?
>
> On Fri, Oct 2, 2020 at 9:52 AM Denis Magda  wrote:
>
> Hi Narges,
>
> Use continuous queries if you need to be notified in real-time, i.e. 1) a
> record is inserted, 2) the continuous filter confirms the record's time
> satisfies your condition, 3) the continuous queries notifies your
> application that does require processing.
>
> The jobs are better for a batching use case when it's ok to process
> records together with some delay.
>
>
> -
> Denis
>
>
> On Fri, Oct 2, 2020 at 3:50 AM narges saleh  wrote:
>
> Hi All,
>  If I want to watch for a rolling timestamp pattern in all the records
> that get inserted to all my caches, is it more efficient to use timer based
> jobs (that checks all the records in some interval) or  continuous queries
> that locally filter on the pattern? These records can get inserted in any
> order  and some can arrive with delays.
> An example is to watch for all the records whose timestamp ends in 50, if
> the timestamp is in the format yyyy-mm-dd hh:mi.
>
> thanks
>
>
>
> --
> -
> Denis
>
> This email and any files transmitted with it are confidential, proprietary
> and intended solely for the individual or entity to whom they are
> addressed. If you have received this email in error please delete it
> immediately.
>


Re: Continuous Query

2020-10-04 Thread Priya Yadav
unsubscribe

From: narges saleh 
Sent: Sunday, 4 October 2020 2:03 AM
To: user@ignite.apache.org 
Subject: Re: Continuous Query

The latter; the server needs to perform some calculations on the data without 
sending any notification to the app.

On Fri, Oct 2, 2020 at 4:25 PM Denis Magda <dma...@apache.org> wrote:
And after you detect a record that satisfies the condition, do you need to send 
any notification to the application? Or is it more like a server detects and 
does some calculation logically without updating the app.

-
Denis


On Fri, Oct 2, 2020 at 11:22 AM narges saleh <snarges...@gmail.com> wrote:
The detection should happen at most a couple of minutes after a record is 
inserted in the cache but all the detections are local to the node. But some 
records with the current timestamp might show up in the system with big delays.

On Fri, Oct 2, 2020 at 12:23 PM Denis Magda <dma...@apache.org> wrote:
What are your requirements? Do you need to process the records as soon as they 
are put into the cluster?



On Friday, October 2, 2020, narges saleh <snarges...@gmail.com> wrote:
Thank you Dennis for the reply.
From the perspective of performance/resource overhead and reliability, which
approach is preferable? Does a continuous query based approach impose a lot
more overhead?

On Fri, Oct 2, 2020 at 9:52 AM Denis Magda <dma...@apache.org> wrote:
Hi Narges,

Use continuous queries if you need to be notified in real-time, i.e. 1) a 
record is inserted, 2) the continuous filter confirms the record's time 
satisfies your condition, 3) the continuous queries notifies your application 
that does require processing.

The jobs are better for a batching use case when it's ok to process records 
together with some delay.


-
Denis


On Fri, Oct 2, 2020 at 3:50 AM narges saleh <snarges...@gmail.com> wrote:
Hi All,
 If I want to watch for a rolling timestamp pattern in all the records that get 
inserted to all my caches, is it more efficient to use timer based jobs (that 
checks all the records in some interval) or  continuous queries that locally 
filter on the pattern? These records can get inserted in any order  and some 
can arrive with delays.
An example is to watch for all the records whose timestamp ends in 50, if the 
timestamp is in the format yyyy-mm-dd hh:mi.

thanks



--
-
Denis

This email and any files transmitted with it are confidential, proprietary and 
intended solely for the individual or entity to whom they are addressed. If you 
have received this email in error please delete it immediately.


Re: Continuous Query

2020-10-03 Thread narges saleh
The latter; the server needs to perform some calculations on the data
without sending any notification to the app.

On Fri, Oct 2, 2020 at 4:25 PM Denis Magda  wrote:

> And after you detect a record that satisfies the condition, do you need to
> send any notification to the application? Or is it more like a server
> detects and does some calculation logically without updating the app.
>
> -
> Denis
>
>
> On Fri, Oct 2, 2020 at 11:22 AM narges saleh  wrote:
>
>> The detection should happen at most a couple of minutes after a record is
>> inserted in the cache but all the detections are local to the node. But
>> some records with the current timestamp might show up in the system with
>> big delays.
>>
>> On Fri, Oct 2, 2020 at 12:23 PM Denis Magda  wrote:
>>
>>> What are your requirements? Do you need to process the records as soon
>>> as they are put into the cluster?
>>>
>>>
>>>
>>> On Friday, October 2, 2020, narges saleh  wrote:
>>>
 Thank you Dennis for the reply.
 From the perspective of performance/resource overhead and reliability,
 which approach is preferable? Does a continuous query based approach impose
 a lot more overhead?

 On Fri, Oct 2, 2020 at 9:52 AM Denis Magda  wrote:

> Hi Narges,
>
> Use continuous queries if you need to be notified in real-time, i.e.
> 1) a record is inserted, 2) the continuous filter confirms the record's
> time satisfies your condition, 3) the continuous queries notifies your
> application that does require processing.
>
> The jobs are better for a batching use case when it's ok to process
> records together with some delay.
>
>
> -
> Denis
>
>
> On Fri, Oct 2, 2020 at 3:50 AM narges saleh 
> wrote:
>
>> Hi All,
>>  If I want to watch for a rolling timestamp pattern in all the
>> records that get inserted to all my caches, is it more efficient to use
>> timer based jobs (that checks all the records in some interval) or
>> continuous queries that locally filter on the pattern? These records can
>> get inserted in any order  and some can arrive with delays.
>> An example is to watch for all the records whose timestamp ends in
>> 50, if the timestamp is in the format yyyy-mm-dd hh:mi.
>>
>> thanks
>>
>>
>>>
>>> --
>>> -
>>> Denis
>>>
>>>


Re: Continuous Query

2020-10-02 Thread Denis Magda
And after you detect a record that satisfies the condition, do you need to
send any notification to the application? Or is it more like a server
detects and does some calculation logically without updating the app.

-
Denis


On Fri, Oct 2, 2020 at 11:22 AM narges saleh  wrote:

> The detection should happen at most a couple of minutes after a record is
> inserted in the cache but all the detections are local to the node. But
> some records with the current timestamp might show up in the system with
> big delays.
>
> On Fri, Oct 2, 2020 at 12:23 PM Denis Magda  wrote:
>
>> What are your requirements? Do you need to process the records as soon as
>> they are put into the cluster?
>>
>>
>>
>> On Friday, October 2, 2020, narges saleh  wrote:
>>
>>> Thank you Dennis for the reply.
>>> From the perspective of performance/resource overhead and reliability,
>>> which approach is preferable? Does a continuous query based approach impose
>>> a lot more overhead?
>>>
>>> On Fri, Oct 2, 2020 at 9:52 AM Denis Magda  wrote:
>>>
 Hi Narges,

 Use continuous queries if you need to be notified in real-time, i.e. 1)
 a record is inserted, 2) the continuous filter confirms the record's time
 satisfies your condition, 3) the continuous queries notifies your
 application that does require processing.

 The jobs are better for a batching use case when it's ok to process
 records together with some delay.


 -
 Denis


 On Fri, Oct 2, 2020 at 3:50 AM narges saleh 
 wrote:

> Hi All,
>  If I want to watch for a rolling timestamp pattern in all the records
> that get inserted to all my caches, is it more efficient to use timer 
> based
> jobs (that checks all the records in some interval) or  continuous queries
> that locally filter on the pattern? These records can get inserted in any
> order  and some can arrive with delays.
> An example is to watch for all the records whose timestamp ends in 50,
> if the timestamp is in the format -mm-dd hh:mi.
>
> thanks
>
>
>>
>> --
>> -
>> Denis
>>
>>


Re: Continuous Query

2020-10-02 Thread narges saleh
The detection should happen at most a couple of minutes after a record is
inserted in the cache but all the detections are local to the node. But
some records with the current timestamp might show up in the system with
big delays.

On Fri, Oct 2, 2020 at 12:23 PM Denis Magda  wrote:

> What are your requirements? Do you need to process the records as soon as
> they are put into the cluster?
>
>
>
> On Friday, October 2, 2020, narges saleh  wrote:
>
>> Thank you Dennis for the reply.
>> From the perspective of performance/resource overhead and reliability,
>> which approach is preferable? Does a continuous query based approach impose
>> a lot more overhead?
>>
>> On Fri, Oct 2, 2020 at 9:52 AM Denis Magda  wrote:
>>
>>> Hi Narges,
>>>
>>> Use continuous queries if you need to be notified in real-time, i.e. 1)
>>> a record is inserted, 2) the continuous filter confirms the record's time
>>> satisfies your condition, 3) the continuous queries notifies your
>>> application that does require processing.
>>>
>>> The jobs are better for a batching use case when it's ok to process
>>> records together with some delay.
>>>
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Fri, Oct 2, 2020 at 3:50 AM narges saleh 
>>> wrote:
>>>
 Hi All,
  If I want to watch for a rolling timestamp pattern in all the records
 that get inserted to all my caches, is it more efficient to use timer based
 jobs (that checks all the records in some interval) or  continuous queries
 that locally filter on the pattern? These records can get inserted in any
 order  and some can arrive with delays.
 An example is to watch for all the records whose timestamp ends in 50,
 if the timestamp is in the format -mm-dd hh:mi.

 thanks


>
> --
> -
> Denis
>
>


Re: Continuous Query

2020-10-02 Thread Denis Magda
What are your requirements? Do you need to process the records as soon as
they are put into the cluster?



On Friday, October 2, 2020, narges saleh  wrote:

> Thank you Dennis for the reply.
> From the perspective of performance/resource overhead and reliability,
> which approach is preferable? Does a continuous query based approach impose
> a lot more overhead?
>
> On Fri, Oct 2, 2020 at 9:52 AM Denis Magda  wrote:
>
>> Hi Narges,
>>
>> Use continuous queries if you need to be notified in real-time, i.e. 1) a
>> record is inserted, 2) the continuous filter confirms the record's time
>> satisfies your condition, 3) the continuous queries notifies your
>> application that does require processing.
>>
>> The jobs are better for a batching use case when it's ok to process
>> records together with some delay.
>>
>>
>> -
>> Denis
>>
>>
>> On Fri, Oct 2, 2020 at 3:50 AM narges saleh  wrote:
>>
>>> Hi All,
>>>  If I want to watch for a rolling timestamp pattern in all the records
>>> that get inserted to all my caches, is it more efficient to use timer based
>>> jobs (that checks all the records in some interval) or  continuous queries
>>> that locally filter on the pattern? These records can get inserted in any
>>> order  and some can arrive with delays.
>>> An example is to watch for all the records whose timestamp ends in 50,
>>> if the timestamp is in the format -mm-dd hh:mi.
>>>
>>> thanks
>>>
>>>

-- 
-
Denis


Re: Continuous Query

2020-10-02 Thread narges saleh
Thank you Dennis for the reply.
From the perspective of performance/resource overhead and reliability,
which approach is preferable? Does a continuous query based approach impose
a lot more overhead?

On Fri, Oct 2, 2020 at 9:52 AM Denis Magda  wrote:

> Hi Narges,
>
> Use continuous queries if you need to be notified in real-time, i.e. 1) a
> record is inserted, 2) the continuous filter confirms the record's time
> satisfies your condition, 3) the continuous queries notifies your
> application that does require processing.
>
> The jobs are better for a batching use case when it's ok to process
> records together with some delay.
>
>
> -
> Denis
>
>
> On Fri, Oct 2, 2020 at 3:50 AM narges saleh  wrote:
>
>> Hi All,
>>  If I want to watch for a rolling timestamp pattern in all the records
>> that get inserted to all my caches, is it more efficient to use timer based
>> jobs (that checks all the records in some interval) or  continuous queries
>> that locally filter on the pattern? These records can get inserted in any
>> order  and some can arrive with delays.
>> An example is to watch for all the records whose timestamp ends in 50, if
>> the timestamp is in the format -mm-dd hh:mi.
>>
>> thanks
>>
>>


Re: Continuous Query

2020-10-02 Thread Denis Magda
Hi Narges,

Use continuous queries if you need to be notified in real-time, i.e. 1) a
record is inserted, 2) the continuous filter confirms the record's time
satisfies your condition, 3) the continuous query notifies your
application, which does the required processing.

The jobs are better for a batching use case when it's ok to process records
together with some delay.
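
For the real-time option, a hedged sketch (the cache name and types are
assumptions, and for brevity the value is treated as the record's
timestamp string in yyyy-MM-dd HH:mm format):

import javax.cache.configuration.Factory;
import javax.cache.event.CacheEntryEventFilter;
import org.apache.ignite.Ignite;
import org.apache.ignite.cache.query.ContinuousQuery;

public class TimestampWatcher {
    static void start(Ignite ignite) {
        ContinuousQuery<Long, String> qry = new ContinuousQuery<>();

        // Evaluated next to the data; only matching updates cross the network.
        qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<Long, String>>() {
            @Override public CacheEntryEventFilter<Long, String> create() {
                return evt -> evt.getValue().endsWith("50");
            }
        });

        qry.setLocalListener(evts ->
            evts.forEach(e -> System.out.println("matched: " + e.getValue())));

        ignite.cache("records").query(qry); // hold the returned cursor to stop later
    }
}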


-
Denis


On Fri, Oct 2, 2020 at 3:50 AM narges saleh  wrote:

> Hi All,
>  If I want to watch for a rolling timestamp pattern in all the records
> that get inserted to all my caches, is it more efficient to use timer based
> jobs (that checks all the records in some interval) or  continuous queries
> that locally filter on the pattern? These records can get inserted in any
> order  and some can arrive with delays.
> An example is to watch for all the records whose timestamp ends in 50, if
> the timestamp is in the format -mm-dd hh:mi.
>
> thanks
>
>


Re: Continuous Query on a varying set of keys

2020-05-26 Thread Ilya Kasnacheev
Hello!

Yes, using another continuous query to watch changes to this set looks OK. Of
course there will be some yak shaving to do.
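
A hedged sketch of that combination (the "subscriptions" cache is an
assumption and is expected to be REPLICATED, so every server node sees
every subscribe/unsubscribe locally; seeding the set with an initial
ScanQuery is left out for brevity):

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import javax.cache.event.EventType;
import org.apache.ignite.Ignite;
import org.apache.ignite.cache.query.ContinuousQuery;

public class SubscriptionMirror {
    // Node-local view of the watched keys; the data-cache remote filter reads it.
    public static final Set<Integer> KEYS = ConcurrentHashMap.newKeySet();

    public static void start(Ignite ignite) {
        ContinuousQuery<Integer, Boolean> qry = new ContinuousQuery<>();
        qry.setLocal(true); // the replicated cache makes every change visible locally
        qry.setLocalListener(evts -> evts.forEach(e -> {
            if (e.getEventType() == EventType.REMOVED)
                KEYS.remove(e.getKey());
            else
                KEYS.add(e.getKey());
        }));
        ignite.cache("subscriptions").query(qry);
    }
}

The remote filter on the data cache then just checks
SubscriptionMirror.KEYS.contains(e.getKey()).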

Regards,
-- 
Ilya Kasnacheev


Tue, 26 May 2020 at 12:28, zork :

> Thanks.
> So I can think of two ways using which such a set could be maintained by
> the
> remote node:
> 1. The remote node listens to a new topic through which the local node
> sends
> it a message whenever the set changes.
> 2. Or, the local node puts the set values in a new table in the cache
> itself
> and remote node can maybe listen to it using another continuous query.
> Is that right?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Continuous Query on a varying set of keys

2020-05-26 Thread zork
Thanks. 
So I can think of two ways using which such a set could be maintained by the
remote node:
1. The remote node listens to a new topic through which the local node sends
it a message whenever the set changes.
2. Or, the local node puts the set values in a new table in the cache itself
and remote node can maybe listen to it using another continuous query.
Is that right?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Continuous Query on a varying set of keys

2020-05-26 Thread Ilya Kasnacheev
Hello!

No, updates to the local set will not be sent over to the remote node, but on the
remote node you can probably update it from your filter:

qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<Integer, String>>() {
    @Override public CacheEntryEventFilter<Integer, String> create() {
        return new CacheEntryEventFilter<Integer, String>() {
            @Override public boolean evaluate(CacheEntryEvent<? extends Integer, ? extends String> e) {
                // This method is implemented and called on the remote node
                set = updateSetIfNeeded(set);
                return set.contains(e.getKey());
            }
        };
    }
});

Please note that it is not advisable to do cache operations from a filter, so
you should probably do that in the background, e.g. by registering a service.

Regards,
-- 
Ilya Kasnacheev


Tue, 26 May 2020 at 11:31, zork :

> Hi,
> Sorry but I could not get it to work.
>
> The standard way of defining a remote filter as shown in sample repo is
> something like:
>
> qry.setRemoteFilterFactory(new Factory String>>() {
> @Override public CacheEntryEventFilter
> create() {
> return new CacheEntryEventFilter()
> {
> @Override public boolean
> evaluate(CacheEntryEvent e) {
> return e.getKey() > 10;
> }
> };
> }
> });
>
> In the above, instead of having *10* constant, I need to have a HashSet
> from
> which I can check if the updated key exists in it or not (see the snippet
> below) And I need the changes in the HashSet to be reflected in the filter.
> However it's not making sense to me because the HashSet which is modified
> is
> on one node while the filter is on another node so I expect the remote node
> would already have it serialized when the filter was first created and it
> would not change even if the set changes in the local node.
>
> HashSet = new HashSet<>();
> set.add(20);
> qry.setRemoteFilterFactory(new Factory String>>() {
> @Override public CacheEntryEventFilter
> create() {
> return new CacheEntryEventFilter()
> {
> @Override public boolean
> evaluate(CacheEntryEvent e) {
> return set.contains(e.getKey());
> }
> };
> }
> });
> set.add(10)  // would this actually change the filter on remote node?
>
> Perhaps I'm missing something very obvious here. Please help me identify
> it.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Continuous Query on a varying set of keys

2020-05-26 Thread zork
Hi,
Sorry, but I could not get it to work.

The standard way of defining a remote filter, as shown in the samples repo, is
something like:

qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<Integer, String>>() {
    @Override public CacheEntryEventFilter<Integer, String> create() {
        return new CacheEntryEventFilter<Integer, String>() {
            @Override public boolean evaluate(CacheEntryEvent<? extends Integer, ? extends String> e) {
                return e.getKey() > 10;
            }
        };
    }
});

In the above, instead of the constant *10*, I need a HashSet against which
I can check whether the updated key exists (see the snippet
below), and I need changes to the HashSet to be reflected in the filter.
However, it's not making sense to me, because the HashSet being modified is
on one node while the filter is on another node, so I expect the remote node
would already have it serialized when the filter was first created, and it
would not change even if the set changes on the local node.

HashSet<Integer> set = new HashSet<>();
set.add(20);
qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<Integer, String>>() {
    @Override public CacheEntryEventFilter<Integer, String> create() {
        return new CacheEntryEventFilter<Integer, String>() {
            @Override public boolean evaluate(CacheEntryEvent<? extends Integer, ? extends String> e) {
                return set.contains(e.getKey());
            }
        };
    }
});
set.add(10); // would this actually change the filter on the remote node?

Perhaps I'm missing something very obvious here. Please help me identify it.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Continuous Query on a varying set of keys

2020-05-18 Thread Ilya Kasnacheev
Hello!

A remote filter is code. It can execute arbitrary logic, adjust what it
needs to filter, and change its own behavior over time.

Regards,
-- 
Ilya Kasnacheev


Mon, 18 May 2020 at 15:40, zork :

> Hi Ilya,
> Thanks for your response.
> I'm aware of remote filters but can these filters be modified once the
> query
> is already attached?
> Because if not, then this would not solve my use case as the filter would
> always give me updates on a fixed subset of keys, however in my case this
> subset is varying (based on what keys a user subscribes from the GUI).
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Continuous Query on a varying set of keys

2020-05-18 Thread zork
Hi Ilya,
Thanks for your response.
I'm aware of remote filters, but can these filters be modified once the query
is already attached?
Because if not, then this would not solve my use case, as the filter would
always give me updates on a fixed subset of keys, whereas in my case this
subset varies (based on which keys a user subscribes to from the GUI).



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Continuous Query on a varying set of keys

2020-05-18 Thread Ilya Kasnacheev
Hello!

Continuous query has a notion of a 'remote filter'. This is a piece of code
which is executed near the data (on server nodes) to determine if the update
needs to be sent over the network.

https://apacheignite.readme.io/docs/continuous-queries#remote-filter

If you define a proper remote filter, updates will not flow over the
network unless this is actually needed.

Regards,
-- 
Ilya Kasnacheev


Sun, 17 May 2020 at 22:14, zork :

> Hi,
>
> We have a table in an Ignite cache which would have, say, around 1M entries at
> any time. Now we wish to listen to updates on a subset of these keys (say
> 5-10 thousand keys), and this subset keeps changing as the user
> subscribes/unsubscribes to these keys.
>
> The way it is currently working is one continuous query is attached for
> every key whenever it is subscribed and it is closed whenever that key is
> no
> longer of interest (or unsubscribed). The problem with this is that since
> there are so many continuous queries (a few thousands), the application
> goes
> out of memory. Also, it would mean all those queries would be evaluated on
> the remote node for every update.
>
> To overcome this, what we intend to do is to have just one continuous query
> which would listen to all the updates on this table (i.e. all the keys) and
> on receiving these updates we would have to filter those of our interest on
> our end. But this would mean unnecessary updates would flow over the
> network
> and it doesn't sound like a very good solution too.
>
> Can someone suggest a better way this problem could be addressed? Do we
> have
> something else in ignite to cater such requirement?
>
> Thanks in advance.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Continuous query setPageSize and setTimeInterval question

2020-04-30 Thread akorensh
It is likely that entries are being produced quickly and filling up the buffer,
giving the effect of an immediate update.
You can test it via specific delays and logging.
Make the server print out the counter of the object,
say:
  if the key is an integer and the value is too, then print out the key as
you put it in.
  do the same on the client when it receives the object.

Produce a number of entries less than the buffer size, put in a 10-second delay, and see
what the client shows; then produce more items and observe the client again.

You can also set pageSize to a very large number and watch it
update in intervals -- be aware that here memory effects might come into
play, especially if your objects are large.


Here is an example:
  the server will produce 100 entries, then sleep.
  the client has set the pageSize to 1000
  for every 10 "sleeping.." it will print out the 1000 entries.

*server:*
int i = 0;
int sleepCounter = 1;
while (true) {
    cache.put(i++, Integer.toString(i));
    System.out.println("added entry: " + i);
    if (i % 100 == 0) {
        System.out.println("sleeping: " + sleepCounter++);
        if (sleepCounter % 10 == 0) sleepCounter = 0;
        Thread.sleep(1000);
    }
}

*client:*
ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
qry.setTimeInterval(0);
qry.setPageSize(1000);
qry.setLocalListener(evts -> evts.forEach(e ->
    System.out.println("key=" + e.getKey() + ", val=" + e.getValue())));
cache.query(qry);






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Continuous query setPageSize and setTimeInterval question

2020-04-30 Thread crypto_ricklee
This was what I thought; however, no matter what number I set for the
pageSize, e.g. 5, 20, 100, my local listener got the updates immediately, 1
by 1, not batched by the page size...



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Continuous query setPageSize and setTimeInterval question

2020-04-29 Thread akorensh
You are correct. Your local receiver will get an update every 100 entries.

setPageSize sets the number of entries to batch together before sending.
When the server has accumulated a number of entries larger than the page size,
it sends a message to the receiver.

Like I mentioned before, setTimeInterval() allows you to send a batch every
set interval irrespective of whether pageSize has been reached or not

from the doc:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/query/ContinuousQuery.html

Continuous queries allow registering a remote filter and a local listener
for cache updates. 
If an update event passes the filter, it will be sent to the node that
executed the query, and local listener will be notified.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Continuous query setPageSize and setTimeInterval question

2020-04-28 Thread crypto_ricklee
Thanks Alex,

But I still don't quite understand the expected behaviour. If I set the page
size to 100 and the interval to 0, should the local listener be triggered for every
100 updates?

Regards,
Rick



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Continuous query setPageSize and setTimeInterval question

2020-04-28 Thread akorensh
Hi,
   setTimeInterval limits the time Ignite will wait for an internal buffer
to fill up before sending.

   It is not a time delay setting.

   From the doc:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/query/ContinuousQuery.html#setTimeInterval-long-
 When a cache update happens, entry is first put into a buffer. Entries
from buffer will be sent to the 
 master node only if the buffer is full (its size can be provided via
Query.setPageSize(int) method) or time 
 provided via this method is exceeded.

 Default time interval is 0 which means that time check is disabled and
entries will be sent only when 
buffer is full.
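
For example, to get batched but bounded delivery, the two settings can be
combined (the values here are only illustrative):

ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
qry.setPageSize(100);     // batch up to 100 entries per notification
qry.setTimeInterval(500); // ...but never hold a partial batch longer than 500 ms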
   
Thanks, Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Continuous Query Questions

2020-04-01 Thread narges saleh
So, if I define the CQ as a service and the node crashes, wouldn't Ignite
start a new service with the CQ already registered, say if the CQ registration
is in the service's init()?
I can do the initial query there as well.


On Wed, Apr 1, 2020 at 7:33 PM Evgenii Zhuravlev 
wrote:

> Well, with this use case, if one of the nodes goes down, there is always a
> chance to lost notifications. I don't think that it's possible to recover
> lost notifications with out of the box solution, but if you will be able to
> track the last processed notification and store update time in entries, you
> will be able to find not processed entries. Otherwise, you will need to
> register CQ again and process all the entries using initialQuery.
>
> Evgenii
>
Wed, 1 Apr 2020 at 13:16, narges saleh :
>
>> Thanks Evgenii for the recommendation and the heads up.
>>
>> Is there a way to recover the lost notifications or even know if a
>> notification is lost?
>>
>> On Wed, Apr 1, 2020 at 12:15 PM Evgenii Zhuravlev <
>> e.zhuravlev...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> 1) I would recommend checking ContinouousQuery.setLocal:
>>> https://www.gridgain.com/sdk/ce/latest/javadoc/org/apache/ignite/cache/query/Query.html#setLocal-boolean-.
>>> Please check if it fits your requirements.
>>> 2) You will need to do this in a separate thread, because cache
>>> operations shouldn't be used inside CQ listeners, as they are executed
>>> synchronously.
>>>
>>> In case of using local CQ, there is a chance to miss notifications in
>>> case of node failure, it's described in javadoc.
>>>
>>> Evgenii
>>>
>>>
Tue, 31 Mar 2020 at 03:00, narges saleh :
>>>
 Hi All,
 I'd like to get your feedback regarding the following pattern.

 1) CQ setup that listens to the changes to a cache on the local node
 only.
 2) Upon receiving notification on a change, the listener makes
 additions to two other caches, one being on the local node (partitioned)
 and the other cache being replicated across all the nodes in the cluster.

 Is this setup performant and reliable in terms of the data staying in
 sync across the cluster?

 thanks.





Re: Continuous Query Questions

2020-04-01 Thread Evgenii Zhuravlev
Well, with this use case, if one of the nodes goes down, there is always a
chance of losing notifications. I don't think that it's possible to recover
lost notifications with an out-of-the-box solution, but if you are able to
track the last processed notification and store the update time in entries, you
will be able to find unprocessed entries. Otherwise, you will need to
register the CQ again and process all the entries using initialQuery.
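
A hedged sketch of that recovery path (process() and the key/value types
are placeholders): re-register the CQ with an initial scan so entries
updated while the listener was away are handled too.

import javax.cache.Cache;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class CqRecovery {
    static void register(IgniteCache<Integer, String> cache) {
        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
        // Catch up on entries written while we were not listening.
        qry.setInitialQuery(new ScanQuery<Integer, String>());
        qry.setLocalListener(evts -> evts.forEach(e -> process(e.getKey(), e.getValue())));

        QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry);
        for (Cache.Entry<Integer, String> e : cur)
            process(e.getKey(), e.getValue()); // entries that existed before registration
        // keep 'cur' open for live notifications; close it to stop the query
    }

    static void process(Integer k, String v) { /* deduplicate and handle */ }
}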

Evgenii

Wed, 1 Apr 2020 at 13:16, narges saleh :

> Thanks Evgenii for the recommendation and the heads up.
>
> Is there a way to recover the lost notifications or even know if a
> notification is lost?
>
> On Wed, Apr 1, 2020 at 12:15 PM Evgenii Zhuravlev <
> e.zhuravlev...@gmail.com> wrote:
>
>> Hi,
>>
>> 1) I would recommend checking ContinouousQuery.setLocal:
>> https://www.gridgain.com/sdk/ce/latest/javadoc/org/apache/ignite/cache/query/Query.html#setLocal-boolean-.
>> Please check if it fits your requirements.
>> 2) You will need to do this in a separate thread, because cache
>> operations shouldn't be used inside CQ listeners, as they are executed
>> synchronously.
>>
>> In case of using local CQ, there is a chance to miss notifications in
>> case of node failure, it's described in javadoc.
>>
>> Evgenii
>>
>>
Tue, 31 Mar 2020 at 03:00, narges saleh :
>>
>>> Hi All,
>>> I'd like to get your feedback regarding the following pattern.
>>>
>>> 1) CQ setup that listens to the changes to a cache on the local node
>>> only.
>>> 2) Upon receiving notification on a change, the listener makes additions
>>> to two other caches, one being on the local node (partitioned) and the
>>> other cache being replicated across all the nodes in the cluster.
>>>
>>> Is this setup performant and reliable in terms of the data staying in
>>> sync across the cluster?
>>>
>>> thanks.
>>>
>>>
>>>


Re: Continuous Query Questions

2020-04-01 Thread narges saleh
Thanks Evgenii for the recommendation and the heads up.

Is there a way to recover the lost notifications or even know if a
notification is lost?

On Wed, Apr 1, 2020 at 12:15 PM Evgenii Zhuravlev 
wrote:

> Hi,
>
> 1) I would recommend checking ContinouousQuery.setLocal:
> https://www.gridgain.com/sdk/ce/latest/javadoc/org/apache/ignite/cache/query/Query.html#setLocal-boolean-.
> Please check if it fits your requirements.
> 2) You will need to do this in a separate thread, because cache operations
> shouldn't be used inside CQ listeners, as they are executed synchronously.
>
> In case of using local CQ, there is a chance to miss notifications in case
> of node failure, it's described in javadoc.
>
> Evgenii
>
>
Tue, 31 Mar 2020 at 03:00, narges saleh :
>
>> Hi All,
>> I'd like to get your feedback regarding the following pattern.
>>
>> 1) CQ setup that listens to the changes to a cache on the local node only.
>> 2) Upon receiving notification on a change, the listener makes additions
>> to two other caches, one being on the local node (partitioned) and the
>> other cache being replicated across all the nodes in the cluster.
>>
>> Is this setup performant and reliable in terms of the data staying in
>> sync across the cluster?
>>
>> thanks.
>>
>>
>>


Re: Continuous Query Questions

2020-04-01 Thread Evgenii Zhuravlev
Hi,

1) I would recommend checking ContinuousQuery.setLocal:
https://www.gridgain.com/sdk/ce/latest/javadoc/org/apache/ignite/cache/query/Query.html#setLocal-boolean-.
Please check if it fits your requirements.
2) You will need to do this in a separate thread, because cache operations
shouldn't be used inside CQ listeners, as they are executed synchronously.

In case of using local CQ, there is a chance to miss notifications in case
of node failure, it's described in javadoc.
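
A hedged sketch of point 2 (cache names and types are placeholders): the
listener only hands events off to an executor, which performs the cache
updates.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.ignite.Ignite;
import org.apache.ignite.cache.query.ContinuousQuery;

public class ForwardingListener {
    private static final ExecutorService EXEC = Executors.newSingleThreadExecutor();

    public static void start(Ignite ignite) {
        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
        qry.setLocal(true); // only updates for data owned by this node
        qry.setLocalListener(evts -> evts.forEach(e ->
            // never touch caches synchronously inside the listener; hand off instead
            EXEC.submit(() -> {
                ignite.cache("localTarget").put(e.getKey(), e.getValue());
                ignite.cache("replicatedTarget").put(e.getKey(), e.getValue());
            })));
        ignite.cache("source").query(qry);
    }
}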

Evgenii


Tue, 31 Mar 2020 at 03:00, narges saleh :

> Hi All,
> I'd like to get your feedback regarding the following pattern.
>
> 1) CQ setup that listens to the changes to a cache on the local node only.
> 2) Upon receiving notification on a change, the listener makes additions
> to two other caches, one being on the local node (partitioned) and the
> other cache being replicated across all the nodes in the cluster.
>
> Is this setup performant and reliable in terms of the data staying in sync
> across the cluster?
>
> thanks.
>
>
>


Re: Continuous Query Questions

2020-03-13 Thread Evgenii Zhuravlev
I'm not sure, because the final overhead depends on the object sizes. There
is a buffer for CQ, which stores 1000 entries by default, but you can
decrease it using the IGNITE_CONTINUOUS_QUERY_SERVER_BUFFER_SIZE property.

Evgenii

Tue, 18 Feb 2020 at 18:09, narges saleh :

> Hi Evgeni,
>
> There will be several thousands notifications/day if I have it send
> notification only when certain patterns are visited, in about 100+ caches,
> which brings up another question: wouldn't having 100+ CQs be creating too
> much overhead?
>
> thanks.
>
> On Tue, Feb 18, 2020 at 2:17 PM Evgenii Zhuravlev <
> e.zhuravlev...@gmail.com> wrote:
>
>> Hi,
>>
>> How many notifications do you want to get? If it's just a several
>> notifications, then you can even register separate CQ for each of the entry
>> with its own remote filters. At the same time, if you have a requirement to
>> send these notifications for a lot of entries, then this approach will
>> create a big overhead.
>>
>> Its possible to unregister a CQ after you get first notification - you
>> just need to return FALSE from a remote filter. Also, you can send not
>> exact entry, but only some fields using Transformer:
>> https://www.gridgain.com/docs/latest/developers-guide/key-value-api/continuous-queries#remote-transformer.
>> You can create some another object, which will contain only part of the
>> fields.
>>
>> Best Regards,
>> Evgeni
>>
>>
Mon, 17 Feb 2020 at 03:58, narges saleh :
>>
>>> Hi All,
>>> I am getting the following streams of the following records:
>>> name, org, year, month, day
>>> 1- john, acc, 2004, 2, 1
>>> 2- pete, rd, 2004, 3,1
>>> 3- jim,hr,2004, 5,2
>>> 4- jerry,math,2005,2,1
>>> 5- betty,park,2005,3,2
>>> 6- carry,acc,2006,1,1
>>>
>>> I want to get notification for the first occurrence of a particular
>>> value. So, I want to get notifications when I get records 1, 4 and 6, and
>>> in this case, I want to get the fields, org, and year back only.
>>>
>>> Questions:
>>> 1) Is CQ overkill in this case? If yes, what's a better alternative?
>>> 2) If not, how can I set up CQ to get only one record per occurrence?
>>> 3) How would I return only org and year back with the CQ transformer,
>>> considering that I am working with a flat object? Note that in reality this
>>> record has 25-30 fields (I am showing only 5 of them).
>>>
>>> thanks.
>>>
>>


Re: Continuous Query Questions

2020-02-18 Thread narges saleh
Hi Evgeni,

There will be several thousand notifications/day if I have it send
notifications only when certain patterns are visited, in about 100+ caches,
which brings up another question: wouldn't having 100+ CQs create too
much overhead?

thanks.

On Tue, Feb 18, 2020 at 2:17 PM Evgenii Zhuravlev 
wrote:

> Hi,
>
> How many notifications do you want to get? If it's just a several
> notifications, then you can even register separate CQ for each of the entry
> with its own remote filters. At the same time, if you have a requirement to
> send these notifications for a lot of entries, then this approach will
> create a big overhead.
>
> Its possible to unregister a CQ after you get first notification - you
> just need to return FALSE from a remote filter. Also, you can send not
> exact entry, but only some fields using Transformer:
> https://www.gridgain.com/docs/latest/developers-guide/key-value-api/continuous-queries#remote-transformer.
> You can create some another object, which will contain only part of the
> fields.
>
> Best Regards,
> Evgeni
>
>
Mon, 17 Feb 2020 at 03:58, narges saleh :
>
>> Hi All,
>> I am getting the following streams of the following records:
>> name, org, year, month, day
>> 1- john, acc, 2004, 2, 1
>> 2- pete, rd, 2004, 3,1
>> 3- jim,hr,2004, 5,2
>> 4- jerry,math,2005,2,1
>> 5- betty,park,2005,3,2
>> 6- carry,acc,2006,1,1
>>
>> I want to get notification for the first occurrence of a particular
>> value. So, I want to get notifications when I get records 1, 4 and 6, and
>> in this case, I want to get the fields, org, and year back only.
>>
>> Questions:
>> 1) Is CQ overkill in this case? If yes, what's a better alternative?
>> 2) If not, how can I set up CQ to get only one record per occurrence?
>> 3) How would I return only org and year back with the CQ transformer,
>> considering that I am working with a flat object? Note that in reality this
>> record has 25-30 fields (I am showing only 5 of them).
>>
>> thanks.
>>
>


Re: Continuous Query Questions

2020-02-18 Thread Evgenii Zhuravlev
Hi,

How many notifications do you want to get? If it's just a few
notifications, then you can even register a separate CQ for each of the entries
with its own remote filter. At the same time, if you have a requirement to
send these notifications for a lot of entries, then this approach will
create a big overhead.

It's possible to unregister a CQ after you get the first notification - you just
need to return FALSE from the remote filter. Also, you can send not the exact
entry, but only some fields, using a Transformer:
https://www.gridgain.com/docs/latest/developers-guide/key-value-api/continuous-queries#remote-transformer.
You can create another object which will contain only part of the
fields.
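
A hedged sketch of the transformer approach (assumes Ignite 2.5+ for
ContinuousQueryWithTransformer; the cache name and Person fields are
placeholders): only org and year travel to the listening node.

import javax.cache.configuration.Factory;
import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.Ignite;
import org.apache.ignite.cache.query.ContinuousQueryWithTransformer;
import org.apache.ignite.lang.IgniteClosure;

public class OrgYearWatcher {
    static void start(Ignite ignite) {
        ContinuousQueryWithTransformer<String, Person, String> qry =
            new ContinuousQueryWithTransformer<>();

        // Runs on the server: ship two fields instead of the whole 25-30 field record.
        qry.setRemoteTransformerFactory(
            new Factory<IgniteClosure<CacheEntryEvent<? extends String, ? extends Person>, String>>() {
                @Override
                public IgniteClosure<CacheEntryEvent<? extends String, ? extends Person>, String> create() {
                    return evt -> evt.getValue().org + "," + evt.getValue().year;
                }
            });

        qry.setLocalListener(events -> events.forEach(System.out::println));

        ignite.cache("records").query(qry);
    }

    static class Person {
        String org;
        int year;
    }
}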

Best Regards,
Evgeni


Mon, 17 Feb 2020 at 03:58, narges saleh :

> Hi All,
> I am getting the following streams of the following records:
> name, org, year, month, day
> 1- john, acc, 2004, 2, 1
> 2- pete, rd, 2004, 3,1
> 3- jim,hr,2004, 5,2
> 4- jerry,math,2005,2,1
> 5- betty,park,2005,3,2
> 6- carry,acc,2006,1,1
>
> I want to get notification for the first occurrence of a particular value.
> So, I want to get notifications when I get records 1, 4 and 6, and in this
> case, I want to get the fields, org, and year back only.
>
> Questions:
> 1) Is CQ overkill in this case? If yes, what's a better alternative?
> 2) If not, how can I set up CQ to get only one record per occurrence?
> 3) How would I return only org and year back with the CQ transformer,
> considering that I am working with a flat object? Note that in reality this
> record has 25-30 fields (I am showing only 5 of them).
>
> thanks.
>


Re: Continuous query order on transactional cache

2020-01-16 Thread Ilya Kasnacheev
Hello!

Why?

I don't think that transactions, semantically, have any guarantees about the
order of updates inside a transaction.

I'd go with A).

Regards,
-- 
Ilya Kasnacheev


Thu, 16 Jan 2020 at 17:39, Barney Pippin :

> Hi,
>
> If I have a continuous query running and it's listening to a transactional
> cache, what order will I receive the notifications if say 5 updates are
> committed in a single transaction?
>
> Is the order:
> A) Undefined
> B) The order the cache updates are written to the cache prior to the commit
> C) Another order?
>
> Thanks,
>
> James
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Continuous Query from client app

2018-06-21 Thread Pat Patterson
Fixed it! I needed to put my thread to sleep while the query was open, like
this:

try (QueryCursor<Cache.Entry<Long, BinaryObject>> cur = cityCache.query(qry)) {
  // Iterating over existing data stored in cache.
  for (Cache.Entry<Long, BinaryObject> e : cur)
    System.out.println("key=" + e.getKey() + ", val=" + e.getValue());

  boolean done = false;
  while (!done) {
    try {
      Thread.sleep(1000);
    } catch (InterruptedException e) {
      done = true;
    }
  }
}

It would be helpful if this, and the need to define the remote filter as a
nested static class, were mentioned at
https://apacheignite.readme.io/docs/continuous-queries#section-local-listener

Cheers,

Pat

--

Pat Patterson | Technical Director | http://about.me/patpatterson


On Wed, Jun 20, 2018 at 8:56 PM Pat Patterson  wrote:

> Hi,
>
> I'm wrestling with Continuous Queries. I'm successfully writing data into
> Ignite via JDBC; now I want to do a Continuous Query from a client app as
> I'm writing that data. I got past several issues by setting
> 'peerClassLoadingEnabled', using binary objects, and implementing my local
> listener and remote filter as static nested classes rather than lambdas.
> Now I have an app that executes with no errors, and loads some initial
> data, but it doesn't get any notifications via a Continuous Query.
>
> Here's my app:
>
> public class Main {
>   public static class LocalListener implements
> CacheEntryUpdatedListener {
> @Override
> public void onUpdated(Iterable evts) throws
> CacheEntryListenerException {
>   evts.forEach(e -> System.out.println("e=" + e));
> }
>   }
>
>   public static class RemoteFilter implements
> CacheEntryEventSerializableFilter {
> @Override
> public boolean evaluate(CacheEntryEvent evt) throws
> CacheEntryListenerException {
>   System.out.println("###");
>   return true;
> }
>   }
>
>   public static void main(String[] args) throws Exception {
> Ignition.setClientMode(true);
>
> System.out.println("Starting Ignite");
>
> // Connecting to the cluster.
> Ignite ignite =
> Ignition.start("/Users/pat/Downloads/apache-ignite-fabric-2.5.0-bin/config/default-config.xml");
>
> System.out.println("Started Ignite");
>
> // Getting a reference to an underlying cache created for City table
> above.
> IgniteCache cache =
> ignite.cache("SQL_PUBLIC_CITY").withKeepBinary();
>
> BinaryObject city = cache.get(1L);
>
> System.out.println(city);
>
> QueryCursor> query = cache.query(new SqlFieldsQuery("SELECT
> name FROM City"));
> System.out.println(query.getAll());
>
> ContinuousQuery qry = new ContinuousQuery<>();
>
> qry.setLocalListener(new LocalListener<>());
>
> qry.setRemoteFilter(new RemoteFilter<>());
>
> try (QueryCursor> cur =
> cache.query(qry)) {
>   // Iterating over existing data stored in cache.
>   for (Cache.Entry e : cur)
> System.out.println("key=" + e.getKey() + ", val=" + e.getValue());
> }
>   }
> }
>
> And here's default-config.xml, shared by both my server and client
>
> http://www.springframework.org/schema/beans;
>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
>xsi:schemaLocation="
>http://www.springframework.org/schema/beans
>http://www.springframework.org/schema/beans/spring-beans.xsd;>
> 
>  class="org.apache.ignite.configuration.IgniteConfiguration">
>   
> 
> 
>
> I'm doing this to test from sqlline:
>
> CREATE TABLE City (id LONG PRIMARY KEY, name VARCHAR) WITH
> "template=replicated";
> INSERT INTO City (id, name) VALUES (1, 'Forest Hill');
> INSERT INTO City (id, name) VALUES (2, 'Denver');
>
> And my app's output is:
>
> [usual startup stuff]
> Started Ignite
> SQL_PUBLIC_CITY_13ff453a_0162_4c9a_a224_699fbf252790 [idHash=1642017078,
> hash=1261261831, NAME=Forest Hill]
> [[Forest Hill], [Denver]]
>
> I add another city in sqlline, but I get no output in my app.
>
> Any ideas?
>
> Cheers,
>
> Pat
>
> --
>
> Pat Patterson | Technical Director | http://about.me/patpatterson
>


RE: Continuous query - Exactly once based event across multiple nodes..

2018-06-07 Thread Raymond Wilson
Another possibility is to create a continuous query per node in your node
affinity set for the cache and have each continuous query return local
values, like this:

using (IContinuousQueryHandle<ICacheEntry<Key, Value>> queryHandle =
    queueCache.QueryContinuous(
        qry: new ContinuousQuery<Key, Value>(new LocalListener()) { Local = true },
        initialQry: new ScanQuery<Key, Value> { Local = true }))
{
    // Perform the initial query to grab all existing elements
    foreach (var item in queryHandle.GetInitialQueryCursor())
    {
        if (NodeIsPrimaryForThisKey(item.Key)) // Don't let backups get involved
            handler.Add(item.Key);
    }

    // move into steady state management of arriving elements...
}

-Original Message-
From: Николай Ижиков [mailto:nizhikov@gmail.com] On Behalf Of Nikolay
Izhikov
Sent: Monday, May 7, 2018 6:40 PM
To: user@ignite.apache.org
Cc: JP 
Subject: Re: Continuous query - Exactly once based event across multiple
nodes..

Hello, JP.

You should use the target node in the remote filter.

You should check "Is the primary node for this record equal to the target node?" in
your filter.
Please see the code below.
You can find related discussion and full example here [1].

@IgniteAsyncCallback
public static class RemoteFactory implements Factory<CacheEntryEventFilter<Integer, Integer>> {
    private final ClusterNode node;

    public RemoteFactory(ClusterNode node) {
        this.node = node;
    }

    @Override
    public CacheEntryEventFilter<Integer, Integer> create() {
        return new CacheEntryEventFilter<Integer, Integer>() {
            @IgniteInstanceResource
            private Ignite ignite;

            @Override
            public boolean evaluate(CacheEntryEvent<? extends Integer, ? extends Integer> cacheEntryEvent) {
                Affinity<Integer> aff = ignite.affinity("myCache");

                ClusterNode primary = aff.mapKeyToNode(cacheEntryEvent.getKey());

                return primary.id().equals(node.id());
            }
        };
    }
}
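
A hedged usage sketch that plugs into the factory above (cache name and
types as in the filter; error handling omitted): register one such query
per server node, so every key is reported exactly once.

for (ClusterNode n : ignite.cluster().forServers().nodes()) {
    ContinuousQuery<Integer, Integer> qry = new ContinuousQuery<>();
    qry.setRemoteFilterFactory(new RemoteFactory(n));
    qry.setLocalListener(evts -> evts.forEach(e ->
        System.out.println("exactly-once: " + e.getKey())));
    ignite.cache("myCache").query(qry);
}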



[1] https://issues.apache.org/jira/browse/IGNITE-8035


В Вс, 06/05/2018 в 23:33 -0700, JP пишет:
> Using continuous query,
>
> How to achieve event trigger for cache exactly only once per key even
> if continuous query is listening in multiple nodes or multiple listener.
> example:
> 1. Scenario 1:
>  Node A: Start Continuous query
>  Node B: Start Continuous query
>  Node C: Insert or Update or Delete record ex: number from 1 to 100
>
> Expected Output should be as below
>  Node A - 1, 2, 3, 4, 5, ... 50
>  Node B - 51, 52, 53, 54, ... 100
>  Above output is the expected output. Here, event per key should be
> triggered exactly once across nodes.
>
> Actual Output should be as below
>  Node A - 1, 2, 3, 4, 5, ... 100
>  Node B - 1, 2, 3, 4,5 ... 100
>
> If this is not possible in Continuous query, then is there any way to
> achieve this.
>
> 2. Scenario 2:
> To achieve expected output,
>  I am using singleton service per Cluster.
> Ex: Cluster A
>   - Singleton service with Continuous query for cache
> Here problem is, service is running in only one instance.
> How to achieve above output with multiple instance of service?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Continuous query - Exactly once based event across multiple nodes..

2018-06-07 Thread vkulichenko
JP,

Do you have a solution for this? Do you need any more help?

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Continuous query - Exactly once based event across multiple nodes..

2018-05-07 Thread JP
Vkulichenko,
 I want to update multiple databases based on events triggered in the
Ignite cache.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Continuous query - Exactly once based event across multiple nodes..

2018-05-07 Thread vkulichenko
JP,

Can you please describe the business case behind this? What are you trying
to achieve on application level? What guarantees are needed and why?

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Continuous query - Exactly once based event across multiple nodes..

2018-05-07 Thread JP
Thanks... This solution worked, but the problem is that it creates multiple remote
filter instances.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: continuous query - changes from local server only

2018-02-08 Thread Vinokurov Pavel
Som,

You could create the continuous query on each client node with the filter
described above.

2018-02-08 19:55 GMT+03:00 Som Som <2av10...@gmail.com>:

> i've got both client and server nodes on each of 3 physical servers, that
> is my cluster. there is a partitioned cache, each server node stores only a
> part of keys. i start the application on my dev machine that app is also
> client of the cluster further i put new key into the cluster. i would like
> to see this change only in client which is located with server node which
> stores this new key.
>
> On 8 Feb 2018 at 11:41 AM, "dkarachentsev" <
> dkarachent...@gridgain.com> wrote:
>
> Hi,
>
> You may use a filter for that, for example:
>
> ContinuousQuery qry = new ContinuousQuery<>();
>
> final Set nodes = new
> HashSet<>(client.cluster().forDataNodes("cache")
> .forHost(client.cluster().localNode()).nodes());
>
> qry.setRemoteFilterFactory(new
> Factory>() {
> @Override public CacheEntryEventFilter
> create() {
> return new CacheEntryEventFilter() {
> @IgniteInstanceResource
> private Ignite ignite;
>
> @Override public boolean evaluate(
> CacheEntryEvent Integer> event) throws CacheEntryListenerException {
> // Server nodes on current host
> return nodes.contains(ignite.cluster(
> ).localNode());
> }
> };
> }
> });
>
> Thanks!
> -Dmitry
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>


-- 

Regards

Pavel Vinokurov


Re: continuous query - changes from local server only

2018-02-08 Thread Som Som
I've got both client and server nodes on each of 3 physical servers; that
is my cluster. There is a partitioned cache; each server node stores only a
part of the keys. I start the application on my dev machine (that app is also a
client of the cluster) and then I put a new key into the cluster. I would like
to see this change only in the client which is co-located with the server node that
stores this new key.

On 8 Feb 2018 at 11:41 AM, "dkarachentsev" <
dkarachent...@gridgain.com> wrote:

Hi,

You may use a filter for that, for example:

ContinuousQuery qry = new ContinuousQuery<>();

final Set nodes = new
HashSet<>(client.cluster().forDataNodes("cache")
.forHost(client.cluster().localNode()).nodes());

qry.setRemoteFilterFactory(new
Factory>() {
@Override public CacheEntryEventFilter
create() {
return new CacheEntryEventFilter() {
@IgniteInstanceResource
private Ignite ignite;

@Override public boolean evaluate(
CacheEntryEvent event) throws CacheEntryListenerException {
// Server nodes on current host
return nodes.contains(ignite.cluster().localNode());
}
};
}
});

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: continuous query - changes from local server only

2018-02-08 Thread dkarachentsev
Hi,

You may use a filter for that, for example:

ContinuousQuery<Integer, Integer> qry = new ContinuousQuery<>();

final Set<ClusterNode> nodes = new HashSet<>(client.cluster().forDataNodes("cache")
    .forHost(client.cluster().localNode()).nodes());

qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<Integer, Integer>>() {
    @Override
    public CacheEntryEventFilter<Integer, Integer> create() {
        return new CacheEntryEventFilter<Integer, Integer>() {
            @IgniteInstanceResource
            private Ignite ignite;

            @Override
            public boolean evaluate(
                CacheEntryEvent<? extends Integer, ? extends Integer> event) throws CacheEntryListenerException {
                // Server nodes on current host
                return nodes.contains(ignite.cluster().localNode());
            }
        };
    }
});

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Continuous Query remote listener misses some events or respond really late

2017-09-28 Thread begineer
Hi..
I know its quite late to reply, But I am seeing this issue intermittently
almost everyday. But can't reproduce it locally on dev machine. As suggested
I have moved logs before null check to see if null event is logged. However,
I didn't see it printed in logs. Also, it was suggested to check if events
(in question) reaches remote listener(log should print), no log is printed
in such scenario so I assume event does not reach remote listener
immediately.

Same event is processed after several hours later. like 4 hours some times
even after one day. 

I tried to add same event manually to cache object, it is processed
immediately 
(only if original event is stuck).

Also, host logs are clean, I couldn't find anything suspicious. 
Please let me know if you want any more information. I will try to fetch it.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Continuous Query on Multiple caches

2017-09-11 Thread slava.koptilin
Hi Rishikesh,

Is it possible to create another kafka stream based on Curr_stream1 &
Curr_stream2?
In this case, you will be able to stream (Curr_stream1.f0 - Curr_stream2.f0)
into a new Ignite cache and use a continuous query.

In any way, it would be great if you can share your solution with the
community.

Thanks.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Continuous Query event buffering OOME

2017-09-08 Thread Nikolai Tikhonov
Hi Michal,

I've looked at the code and your points look reasonable. At the moment, as you
correctly noted, you can decrease the size of the buffer via the
IGNITE_CONTINUOUS_QUERY_SERVER_BUFFER_SIZE property, down to 50 or 100.
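
For reference, a sketch of setting it (the value 100 and the config path
are placeholders; the property can equally be passed as a -D JVM argument):

System.setProperty("IGNITE_CONTINUOUS_QUERY_SERVER_BUFFER_SIZE", "100");
Ignite ignite = Ignition.start("config.xml");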

On Tue, Sep 5, 2017 at 9:14 PM, mcherkasov  wrote:

> Hi Michal,
>
> Those buffers are required to make sure that all messages are delivered to
> all subscribers and delivered in right order.
> However I agree, 1M is a relatively large number for this.
>
> I will check this question with Continuous Query experts and will update
> you
> tomorrow.
>
> Thanks,
> Mikhail.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Continuous Query event buffering OOME

2017-09-05 Thread mcherkasov
Hi Michal,

Those buffers are required to make sure that all messages are delivered to
all subscribers and delivered in the right order.
However, I agree, 1M is a relatively large number for this.

I will check this question with Continuous Query experts and will update you
tomorrow.

Thanks,
Mikhail.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Continuous Query on Multiple caches

2017-08-28 Thread rishi007bansod
Hi,
 In our case data is coming from 2 Kafka streams. We want to compare the current
data from the 2 streams and take some action (e.g. raise an alert). We want to make
this processing event-based, i.e. as soon as data comes from the 2 streams, we
should take the action associated with this event.
For example:
if ((Curr_stream1.f0 - Curr_stream2.f0) > T) then raise an alert.

Initially I thought of caching both streams' data and then comparing it, but it
would take more time to process.

Thanks,
Rishikesh



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Continuous-Query-on-Multiple-caches-tp16444p16473.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Continuous Query on Multiple caches

2017-08-28 Thread slava.koptilin
Hi Rishikesh,

ContinuousQuery is designed to work with a single cache only.
So, there is no way to use it with multiple caches.
Could you please share your use case in more detail?

Thanks.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Continuous-Query-on-Multiple-caches-tp16444p16450.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Continuous Query remote listener misses some events or respond really late

2017-06-09 Thread Sasha Belyak
Thanks for your reply. From the code I see that you log only entries with non-null
values. If you're absolutely sure that you never put null in the cache, I
will create a load test to reproduce it and create an issue for you. But it would
be great if you moved the logging before the event.getValue() != null check.

On Wednesday, 7 June 2017, begineer wrote:

> Hi.. Sorry it's quite late to reply. The CQ is set up in the execute method of the
> service, not in init(), but we do have an initialQuery in the CQ to scan existing
> events matching the filter. Below is a snapshot of one of the many Ignite services
> set up to process a trade when it moves to a particular status.
>
> As you can see, I have added logs to the remote filter predicate. But these logs
> don't get printed when a trade gets stuck at a particular status. So I assume the
> remote filter does not pick up the events it is supposed to track.
>
> public enum TradeStatus {
>     NEW, CHANGED, EXPIRED, FAILED, UNCHANGED, SUCCESS
> }
>
> /**
>  * Ignite Service which picks up CHANGED trade delivery items
>  */
> public class ChangedTradeService implements Service {
>
>     @IgniteInstanceResource
>     private transient Ignite ignite;
>     private transient IgniteCache<Integer, Trade> tradeCache;
>     private transient QueryCursor<Cache.Entry<Integer, Trade>> cursor;
>
>     @Override
>     public void init(ServiceContext serviceContext) throws Exception {
>         tradeCache = ignite.cache("tradeCache");
>     }
>
>     @Override
>     public void execute(ServiceContext serviceContext) throws Exception {
>         ContinuousQuery<Integer, Trade> query = new ContinuousQuery<>();
>         query.setLocalListener((CacheEntryUpdatedListenerAsync<Integer, Trade>) events ->
>             events.forEach(event -> process(event.getValue())));
>         query.setRemoteFilterFactory(factoryOf(checkStatus(status)));
>         query.setInitialQuery(new ScanQuery<>(checkStatusPredicate(status)));
>         QueryCursor<Cache.Entry<Integer, Trade>> cursor = tradeCache.query(query);
>         cursor.forEach(entry -> process(entry.getValue()));
>     }
>
>     private void process(Trade item) {
>         log.info("transition started for trade id :" + item.getPkey());
>         // move the trade to the next state (e.g. SUCCESS) and the next service
>         // (contains a CQ which is looking for SUCCESS status) will pick this up
>         // for further processing, and so on
>         log.info("transition finished for trade id :" + item.getPkey());
>     }
>
>     @Override
>     public void cancel(ServiceContext serviceContext) {
>         cursor.close();
>     }
>
>     static CacheEntryEventFilterAsync<Integer, Trade> checkStatus(TradeStatus status) {
>         return event -> event.getValue() != null &&
>             checkStatusPredicate(status).apply(event.getKey(), event.getValue());
>     }
>
>     static IgniteBiPredicate<Integer, Trade> checkStatusPredicate(TradeStatus status) {
>         return (k, v) -> {
>             LOG.debug("Status checking for: {} Event value: {} isStatus: {}", status,
>                 v, v.getStatus() == status);
>             return v.getStatus() == status;
>         };
>     }
> }
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Continuous-Query-remote-listener-misses-some-events-
> or-respond-really-late-tp12338p13476.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Continuous Query remote listener misses some events or respond really late

2017-05-05 Thread Sasha Belyak
As far as I understand, you create the CQ in Service.init, so the node with the running
service is the CQ node. All other nodes in the grid will send CQ events to this
node to be processed by your service, and if you don't configure a nodeFilter for the
service, any node can run it, so any node can be the CQ node.
But it shouldn't be a problem if you create the CQ in Service.init() and
don't have too heavy a load on your cluster (anyway, if a data owner node fails to
deliver messages to the node with the running service (the CQ node), you should see it
in the logs). If you give some code examples of how you use the CQ, I can say more.

2017-05-05 17:59 GMT+07:00 begineer :

> Thanks, In my application, all nodes are server nodes
> And how do we be sure that nodes removed/ reconnect to grid is CQ node, it
> can be any.
> Also, Is this issue possible in all below scenarios?
> 1. if node happens to be CQ node or any node?
> 2. node is removed from grid forcefully(manual shutdown)
> 3. node went down due to some reason and grid dropped it
>
> 3rd one looks like safe option since it is dropped by grid so grid should
> be
> ware where to shift the CQ? Please correct me if I am wrong.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Continuous-Query-remote-listener-misses-some-events-
> or-respond-really-late-tp12338p12454.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Continuous Query remote listener misses some events or respond really late

2017-05-05 Thread begineer
Thanks. In my application, all nodes are server nodes.
And how can we be sure that the node removed from / reconnecting to the grid is the CQ node? It
can be any node.
Also, is this issue possible in all the scenarios below?
1. if the node happens to be the CQ node or any node?
2. the node is removed from the grid forcefully (manual shutdown)
3. the node went down for some reason and the grid dropped it

The 3rd one looks like the safe option, since the node is dropped by the grid, so the grid should be
aware of where to shift the CQ? Please correct me if I am wrong.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Continuous-Query-remote-listener-misses-some-events-or-respond-really-late-tp12338p12454.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Continuous Query remote listener misses some events or respond really late

2017-05-05 Thread Sasha Belyak
If the node with the CQ leaves the grid (or just reconnects to the grid, if it is a client node),
you should recreate the CQ, because some cache updates can happen while the node
with the CQ listener can't receive them. What happens in this case:
1) The node with the changed cache entry processes the CQ, the entry passes the remote filter, and
the node tries to send a continuous query event message to the CQ node.
2) If the sender node can't push the message for any reason (the sender will retry a few
times), it can't wait for the receiver too long and drops it.
3) After the CQ node returns to the cluster, it must recreate the CQ and process the
initialQuery to get such events.
If you are sure that no CQ owner node leaves the grid, we need to continue,
because it can be a bug.
And yes, I think it is not obvious that you must recreate the CQ after a
client reconnect, but that is how Ignite works now.
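
A hedged sketch of that re-registration (recreateCq is a placeholder for
closing the old cursor and registering a fresh CQ with an initialQuery;
it assumes EVT_CLIENT_NODE_RECONNECTED is enabled via
IgniteConfiguration#setIncludeEventTypes):

import org.apache.ignite.Ignite;
import org.apache.ignite.events.EventType;

public class CqReregister {
    public static void install(Ignite ignite, Runnable recreateCq) {
        ignite.events().localListen(evt -> {
            recreateCq.run(); // close the old cursor, register a fresh CQ
            return true;      // keep this listener subscribed
        }, EventType.EVT_CLIENT_NODE_RECONNECTED);
    }
}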

2017-05-05 16:56 GMT+07:00 begineer :

> Umm. actually nothing get logged in such scenario. However, as you
> indicated
> earlier, I could see trades get stuck if a node leaves the grid(not
> always).
> Do you know why that happens? Is that a bug?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Continuous-Query-remote-listener-misses-some-events-
> or-respond-really-late-tp12338p12452.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Continuous Query remote listener misses some events or respond really late

2017-05-05 Thread begineer
Umm, actually nothing gets logged in such a scenario. However, as you indicated
earlier, I could see trades get stuck if a node leaves the grid (not always).
Do you know why that happens? Is that a bug?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Continuous-Query-remote-listener-misses-some-events-or-respond-really-late-tp12338p12452.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Continuous Query remote listener misses some events or respond really late

2017-05-04 Thread Sasha Belyak
Can you share you log files?

2017-05-03 19:05 GMT+07:00 begineer :

> 1) How you use ContinuousQuery: with initialQuery or without? : with
> initial
> query having same predicate
> 2) Did some nodes disconnect when you loose updates? no
> 3) Did you log entries in CQ.localListener? Just to be sure that error in
> CQ
> logic, not in your service logic. :
>  No log entries in remote filter, nor in locallistner
> 4) Can someone update old entries? Maybe they just get into CQ again after
> 4-5 hours by external update?
>--- I tried adding same events just to trigger event again, some time it
> moves ahead(event discovered), some times get stuck at same state.
> Also, CQ detects them at its won after long time mentioned, we dont add any
> event in this case.
> Regards,
> Surinder
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Continuous-Query-remote-listener-misses-some-events-
> or-respond-really-late-tp12338p12387.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Continuous Query remote listener misses some events or respond really late

2017-05-03 Thread begineer
1) How do you use ContinuousQuery: with initialQuery or without? : with an initial
query having the same predicate
2) Did some nodes disconnect when you lose updates? no
3) Did you log entries in CQ.localListener? Just to be sure that the error is in the CQ
logic, not in your service logic. :
 No log entries in the remote filter, nor in the local listener
4) Can someone update old entries? Maybe they just get into the CQ again after
4-5 hours by an external update?
   --- I tried adding the same events just to trigger the event again; sometimes it
moves ahead (event discovered), sometimes it gets stuck in the same state.
Also, the CQ detects them on its own after the long time mentioned; we don't add any
event in this case.
Regards,
Surinder



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Continuous-Query-remote-listener-misses-some-events-or-respond-really-late-tp12338p12387.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Continuous Query remote listener misses some events or respond really late

2017-05-03 Thread Sasha Belyak
1) How do you use ContinuousQuery: with initialQuery or without?
2) Did some nodes disconnect when you lose updates?
3) Did you log entries in CQ.localListener? Just to be sure that the error is in
the CQ logic, not in your service logic.
4) Can someone update old entries? Maybe they just get into the CQ again after
4-5 hours by an external update?

2017-05-03 17:13 GMT+07:00 begineer :

> Hi, thanks for looking into this. It's not easily reproducible; I only see
> it sometimes. Here is my cache and service configuration:
>
> Cache configuration:
>
> readThrough="true"
> writeThrough="true"
> writeBehindEnabled="true"
> writeBehindFlushThreadCount="5"
> backups="1"
> readFromBackup="true"
>
> service configuration:
>
> maxPerNodeCount="1"
> totalCount="1"
>
> Cache is distributed over 12 nodes.
>
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Continuous-Query-remote-listener-misses-some-events-
> or-respond-really-late-tp12338p12382.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Continuous Query remote listener misses some events or respond really late

2017-05-03 Thread begineer
Hi, thanks for looking into this. It's not easily reproducible; I only see it
sometimes. Here is my cache and service configuration:

Cache configuration:

readThrough="true"
writeThrough="true"
writeBehindEnabled="true"
writeBehindFlushThreadCount="5"
backups="1"
readFromBackup="true"

service configuration:

maxPerNodeCount="1" 
totalCount="1"

Cache is distributed over 12 nodes.
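
(As a rough sketch, those settings map onto the Java API as follows; the cache
name, Trade/TradeStore/StateService types and the service name are
placeholders, not taken from the original setup.)

import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.services.ServiceConfiguration;

// Inside some setup method:
CacheConfiguration<Integer, Trade> ccfg = new CacheConfiguration<>("trades");
ccfg.setReadThrough(true);
ccfg.setWriteThrough(true);
ccfg.setWriteBehindEnabled(true);
ccfg.setWriteBehindFlushThreadCount(5);
ccfg.setBackups(1);
ccfg.setReadFromBackup(true);
// Read/write-through and write-behind require a cache store.
ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(TradeStore.class));

// Cluster-singleton service, matching maxPerNodeCount=1 / totalCount=1.
ServiceConfiguration scfg = new ServiceConfiguration();
scfg.setName("stateService");
scfg.setService(new StateService());
scfg.setTotalCount(1);
scfg.setMaxPerNodeCount(1);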





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Continuous-Query-remote-listener-misses-some-events-or-respond-really-late-tp12338p12382.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Continuous Query remote listener misses some events or respond really late

2017-05-03 Thread Sasha Belyak
Hi,
I'm trying to reproduce it on one host (with 6 Ignite server nodes), but it
all works fine for me. Can you share your Ignite configuration, cache
configuration, logs, or a reproducer?

2017-05-02 15:48 GMT+07:00 begineer :

> Hi,
> I am currently facing an intermittent issue with continuous queries. I
> can't really reproduce it, but if anyone has faced this issue, please let
> me know.
> My application is deployed on 12 nodes, with 5-6 services used to detect
> their respective events using continuous queries.
> Let's say I have a cache of type Cache<Integer, Trade>, where Trade looks
> like this:
> class Trade {
>     int pkey;
>     String type;
>
>     TradeState state; // enum
> }
> The CQ detects the new entry in the cache (with the updated state) and
> checks whether the trade's state matches its remote filter criteria.
> A Trade moves from state1 to state5; each CQ listens for one state, does
> some processing, and moves the trade to the next state, where the next CQ
> detects it and acts accordingly.
> The problem is that sometimes a trade gets stuck in some state and does not
> move. I have put logging in the remote filter's predicate method (which
> checks the filter criteria), but these logs don't get printed to the
> console. Sometimes the CQ detects events after 4-5 hours.
> I am using Ignite 1.8.2.
> Has anyone seen this behavior? I would be grateful for any help.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Continuous-Query-remote-listener-misses-some-events-
> or-respond-really-late-tp12338.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
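
(For reference, a minimal sketch of the per-state remote filter described
above, with logging added; Trade and TradeState are illustrative. Note that
the filter runs on the node that owns the updated key, so its output lands in
that node's log, not on the console of the node that registered the query.)

import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.cache.CacheEntryEventSerializableFilter;

// One instance per stage's continuous query.
public class TradeStateFilter implements CacheEntryEventSerializableFilter<Integer, Trade> {
    private final TradeState state;

    public TradeStateFilter(TradeState state) {
        this.state = state;
    }

    @Override public boolean evaluate(CacheEntryEvent<? extends Integer, ? extends Trade> evt) {
        boolean match = evt.getValue().state == state;

        // Executed on the node owning evt.getKey(); check that node's log.
        System.out.println("Filter " + state + ": key=" + evt.getKey() + ", match=" + match);

        return match;
    }
}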


Re: Continuous Query

2016-06-30 Thread aosmakoff
In my case the remote filters are being called and return true appropriately;
however, it seems that the local listeners are not always being called. I am
trying to understand what could be going wrong, starting with ruling out a
possible misunderstanding of the framework's behavior regarding event
propagation.
I will keep looking for the possible cause and will give you an update
later.




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Continuous-Query-tp5981p6032.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Continuous Query

2016-06-29 Thread Denis Magda
Hi Alex,

All the local listeners whose remote filters' conditions are satisfied will be
notified. However, there is a one-to-one relationship between a remote filter
and its local listener, meaning that if you have 4 CQs then 4 remote filters
will be executed and 4 local listeners notified if needed.
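
In other words, a sketch like the following (assuming an IgniteCache<Integer,
Trade> named 'cache' and illustrative println listeners) prints one line per
query for a single put:

import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

// Two identical queries: each registers its own filter/listener pair.
ContinuousQuery<Integer, Trade> q1 = new ContinuousQuery<>();
q1.setLocalListener(evts -> evts.forEach(e -> System.out.println("q1 saw key " + e.getKey())));

ContinuousQuery<Integer, Trade> q2 = new ContinuousQuery<>();
q2.setLocalListener(evts -> evts.forEach(e -> System.out.println("q2 saw key " + e.getKey())));

QueryCursor<?> c1 = cache.query(q1); // keep both cursors open
QueryCursor<?> c2 = cache.query(q2);

// A single update notifies both listeners: "q1 saw key 1" and "q2 saw key 1".
cache.put(1, new Trade());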

What exactly happens on your side? Is it that a local listener is not
notified, or that a remote filter doesn't get called?

What version are you on?

—
Denis

> On Jun 29, 2016, at 3:58 AM, Alex Osmakoff  
> wrote:
> 
> Hi There,
>  
> I am using the Continuous Query mechanism, and most of the time it works
> just fine. However, in some cases, seemingly intermittently, the CQ does
> not pick up a cache update where it should.
>  
> Could you please clarify the behaviour of Continuous Query in the following 
> scenario:
>  
> My business logic might create multiple identical CQs in separate
> processing tasks. As I have no control over where a particular task gets
> executed within the grid, it is possible that two identical queries get
> created on the same node. Now, when the cache gets updated and the remote
> filter picks up the update to pass it to the query's local listener, would
> the local listeners in both queries be notified, or only one? I think the
> same applies to CACHE_PUT_EVENT propagation in general: if there are two
> (or more) listeners and only one event, would all the listeners be notified
> regardless of their location?
>  
> Many thanks,
>  
> Regards,
>  
> Alex
> 
>
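
(On the CACHE_PUT_EVENT part of the question, a minimal sketch, assuming cache
events have been enabled in the node configuration; the listener bodies are
illustrative. Each registered local listener is invoked independently for the
same event.)

import org.apache.ignite.Ignite;
import org.apache.ignite.events.EventType;

// Assumes EVT_CACHE_OBJECT_PUT was enabled via
// IgniteConfiguration#setIncludeEventTypes(EventType.EVT_CACHE_OBJECT_PUT),
// and that 'ignite' is a started Ignite instance.
ignite.events().localListen(evt -> {
    System.out.println("Listener A: " + evt.name());
    return true; // returning true keeps the listener registered
}, EventType.EVT_CACHE_OBJECT_PUT);

ignite.events().localListen(evt -> {
    System.out.println("Listener B: " + evt.name());
    return true;
}, EventType.EVT_CACHE_OBJECT_PUT);

// A put on a local partition fires both Listener A and Listener B.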