Thanks

In that case, I would consider the continuous-query-based solution, as long
as the records can be processed in real time:

   - You can process the records on the fly and don't need to come up with
   any batch task.
   - The continuous query filter is executed once on the node that stores
   the record's primary copy. If that primary node fails in the middle of
   the filter execution, the filter is re-executed on a backup node. So you
   will not lose any updates, but you might need to introduce some
   logic/flag confirming that the calculation is not executed twice for a
   single record (this can happen if the primary node fails mid-calculation
   and a backup node picks up and runs the calculation from scratch).
   - Updates of other tables or records from within the continuous query
   filter must go through an async thread pool. You need to use the
   IgniteAsyncCallback annotation for that (a rough sketch follows the
   link below):
   
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/lang/IgniteAsyncCallback.html
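
Here is a minimal sketch of that approach, just to illustrate the moving
parts; the cache names ("records", "matches"), the String value type and the
timestamp check are assumptions rather than your actual model:

import javax.cache.configuration.FactoryBuilder;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryEventFilter;
import javax.cache.event.CacheEntryUpdatedListener;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.lang.IgniteAsyncCallback;
import org.apache.ignite.resources.IgniteInstanceResource;

public class TimestampWatcher {

    // Remote filter: runs on the node that owns the record's primary copy.
    // @IgniteAsyncCallback moves it to the async callback pool, which is what
    // allows updating other caches/tables from inside the filter.
    @IgniteAsyncCallback
    public static class EndsIn50Filter implements CacheEntryEventFilter<Long, String> {
        @IgniteInstanceResource
        private transient Ignite ignite;

        @Override
        public boolean evaluate(CacheEntryEvent<? extends Long, ? extends String> evt) {
            String ts = evt.getValue();                 // e.g. "2020-10-05 14:50"
            boolean matches = ts != null && ts.endsWith("50");

            if (matches) {
                // Hypothetical side effect: write the outcome into another cache.
                // A marker like this also helps to detect a duplicate execution
                // after a primary-node failover.
                ignite.getOrCreateCache("matches").put(evt.getKey(), ts);
            }
            return matches;
        }
    }

    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Long, String> records = ignite.getOrCreateCache("records");

        ContinuousQuery<Long, String> qry = new ContinuousQuery<>();
        qry.setRemoteFilterFactory(FactoryBuilder.factoryOf(EndsIn50Filter.class));

        // Local listener is a no-op here since all the work happens server-side.
        qry.setLocalListener((CacheEntryUpdatedListener<Long, String>) evts -> {});

        records.query(qry); // keep the returned cursor open for as long as you listen
    }
}

Keep in mind that the filter class has to be available on the server nodes
(via peer class loading or explicit deployment).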

Alternatively, you can always run the calculation in a batch fashion:

   - Run a compute task once in a while
   - Read all the latest records that satisfy the condition with SQL or any
   other API
   - Complete the calculation and mark already-processed records, just in
   case the task fails in the middle and you need to run the calculation
   from scratch (see the sketch below)
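
A rough sketch of that batch variant is below; the RECORDS table, its columns
(id, ts, processed) and the 2-minute interval are assumptions you would adjust
to your own model:

import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.lang.IgniteRunnable;

public class BatchScan {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Every couple of minutes broadcast a closure; setLocal(true) makes each
        // node query only the data it stores.
        scheduler.scheduleAtFixedRate(() ->
            ignite.compute().broadcast((IgniteRunnable) () -> {
                IgniteCache<?, ?> cache = Ignition.localIgnite().cache("records");

                SqlFieldsQuery select = new SqlFieldsQuery(
                    "SELECT id, ts FROM RECORDS WHERE processed = FALSE AND ts LIKE '%50'")
                    .setLocal(true);

                for (List<?> row : cache.query(select).getAll()) {
                    Object id = row.get(0);

                    // ... run the calculation and update other tables here ...

                    // Mark the record so a rerun after a mid-batch failure skips it.
                    cache.query(new SqlFieldsQuery(
                        "UPDATE RECORDS SET processed = TRUE WHERE id = ?").setArgs(id)).getAll();
                }
            }), 0, 2, TimeUnit.MINUTES);
    }
}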


-
Denis


On Mon, Oct 5, 2020 at 8:33 PM narges saleh <[email protected]> wrote:

> Denis
>  The calculation itself doesn't involve an update or read of another
> record, but based on the outcome of the calculation, the process might make
> changes in some other tables.
>
> thanks.
>
> On Mon, Oct 5, 2020 at 7:04 PM Denis Magda <[email protected]> wrote:
>
>> Good. Another clarification:
>>
>>    - Does that calculation change the state of the record (updates any
>>    fields)?
>>    - Does the calculation read or update any other records?
>>
>> -
>> Denis
>>
>>
>> On Sat, Oct 3, 2020 at 1:34 PM narges saleh <[email protected]> wrote:
>>
>>> The latter; the server needs to perform some calculations on the data
>>> without sending any notification to the app.
>>>
>>> On Fri, Oct 2, 2020 at 4:25 PM Denis Magda <[email protected]> wrote:
>>>
>>>> And after you detect a record that satisfies the condition, do you need
>>>> to send any notification to the application? Or is it more like a server
>>>> detects and does some calculation locally, without updating the app?
>>>>
>>>> -
>>>> Denis
>>>>
>>>>
>>>> On Fri, Oct 2, 2020 at 11:22 AM narges saleh <[email protected]>
>>>> wrote:
>>>>
>>>>> The detection should happen at most a couple of minutes after a record
>>>>> is inserted in the cache but all the detections are local to the node. But
>>>>> some records with the current timestamp might show up in the system with
>>>>> big delays.
>>>>>
>>>>> On Fri, Oct 2, 2020 at 12:23 PM Denis Magda <[email protected]> wrote:
>>>>>
>>>>>> What are your requirements? Do you need to process the records as
>>>>>> soon as they are put into the cluster?
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Friday, October 2, 2020, narges saleh <[email protected]>
>>>>>> wrote:
>>>>>>
>>>>>>> Thank you Dennis for the reply.
>>>>>>> From the perspective of performance/resource overhead and
>>>>>>> reliability, which approach is preferable? Does a continuous query based
>>>>>>> approach impose a lot more overhead?
>>>>>>>
>>>>>>> On Fri, Oct 2, 2020 at 9:52 AM Denis Magda <[email protected]>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi Narges,
>>>>>>>>
>>>>>>>> Use continuous queries if you need to be notified in real-time,
>>>>>>>> i.e. 1) a record is inserted, 2) the continuous filter confirms the
>>>>>>>> record's timestamp satisfies your condition, 3) the continuous query
>>>>>>>> notifies your application, which does the required processing.
>>>>>>>>
>>>>>>>> The jobs are better for a batching use case when it's ok to process
>>>>>>>> records together with some delay.
>>>>>>>>
>>>>>>>>
>>>>>>>> -
>>>>>>>> Denis
>>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Oct 2, 2020 at 3:50 AM narges saleh <[email protected]>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Hi All,
>>>>>>>>>  If I want to watch for a rolling timestamp pattern in all the
>>>>>>>>> records that get inserted into all my caches, is it more efficient to
>>>>>>>>> use timer-based jobs (that check all the records at some interval) or
>>>>>>>>> continuous queries that locally filter on the pattern? These records
>>>>>>>>> can get inserted in any order and some can arrive with delays.
>>>>>>>>> An example is to watch for all the records whose timestamp ends in
>>>>>>>>> 50, if the timestamp is in the format yyyy-mm-dd hh:mi.
>>>>>>>>>
>>>>>>>>> thanks
>>>>>>>>>
>>>>>>>>>
>>>>>>
>>>>>> --
>>>>>> -
>>>>>> Denis
>>>>>>
>>>>>>
