Re: Continuous Query

2020-10-05 Thread narges saleh
Denis
 The calculation itself doesn't involve an update or read of another
record, but based on the outcome of the calculation, the process might make
changes in some other tables.

thanks.

On Mon, Oct 5, 2020 at 7:04 PM Denis Magda  wrote:

> Good. Another clarification:
>
>- Does that calculation change the state of the record (updates any
>fields)?
>- Does the calculation read or update any other records?
>
> -
> Denis
>
>
> On Sat, Oct 3, 2020 at 1:34 PM narges saleh  wrote:
>
>> The latter; the server needs to perform some calculations on the data
>> without sending any notification to the app.
>>
>> On Fri, Oct 2, 2020 at 4:25 PM Denis Magda  wrote:
>>
>>> And after you detect a record that satisfies the condition, do you need
>>> to send any notification to the application? Or is it more like a server
>>> detects and does some calculation locally, without notifying the app?
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Fri, Oct 2, 2020 at 11:22 AM narges saleh 
>>> wrote:
>>>
 The detection should happen at most a couple of minutes after a record
 is inserted into the cache, but all the detections are local to the node.
 However, some records with the current timestamp might show up in the
 system with big delays.

 On Fri, Oct 2, 2020 at 12:23 PM Denis Magda  wrote:

> What are your requirements? Do you need to process the records as soon
> as they are put into the cluster?
>
>
>
> On Friday, October 2, 2020, narges saleh  wrote:
>
>> Thank you, Denis, for the reply.
>> From the perspective of performance/resource overhead and
>> reliability, which approach is preferable? Does a continuous-query-based
>> approach impose a lot more overhead?
>>
>> On Fri, Oct 2, 2020 at 9:52 AM Denis Magda  wrote:
>>
>>> Hi Narges,
>>>
>>> Use continuous queries if you need to be notified in real time, i.e.
>>> 1) a record is inserted, 2) the continuous query's filter confirms the
>>> record's time satisfies your condition, 3) the continuous query notifies
>>> your application, which then does the required processing.
>>>
>>> The jobs are better for a batching use case when it's ok to process
>>> records together with some delay.
>>>
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Fri, Oct 2, 2020 at 3:50 AM narges saleh 
>>> wrote:
>>>
 Hi All,
  If I want to watch for a rolling timestamp pattern in all the
 records that get inserted into all my caches, is it more efficient to use
 timer-based jobs (that check all the records at some interval) or
 continuous queries that locally filter on the pattern? These records can
 get inserted in any order, and some can arrive with delays.
 An example is to watch for all the records whose timestamp ends in
 50, if the timestamp is in the format yyyy-mm-dd hh:mi.

 thanks


>
> --
> -
> Denis
>
>
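The flow the thread settles on (a record is inserted, a server-side filter detects the pattern, the node does a local calculation, and the application is never notified) maps onto Ignite's ContinuousQuery API. The sketch below is illustrative, not code from the thread: the cache name "records" and the string-valued timestamps are assumptions, and in production the remote filter factory should be a deployed Serializable class rather than a lambda.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

// Sketch: the remote filter runs on the node that owns the entry, so the
// detection and the follow-up calculation stay local to that node.
public class RollingPatternWatcher {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Long, String> cache = ignite.getOrCreateCache("records"); // name assumed

        ContinuousQuery<Long, String> qry = new ContinuousQuery<>();

        qry.setRemoteFilterFactory(() -> event -> {
            String ts = event.getValue(); // e.g. "2020-10-02 14:50" (format assumed)
            if (ts.endsWith("50")) {
                // Do the server-side calculation here, then return false so
                // nothing is shipped back to the application.
            }
            return false;
        });

        // The API requires a local listener even if no events ever reach it.
        qry.setLocalListener(events -> { /* no-op */ });

        // Keep the cursor open for as long as the query should stay active.
        QueryCursor<?> cursor = cache.query(qry);
    }
}
```

Because the filter returns false for every event, the application is never notified; compared with a timer job, the trade-off is a small per-update filter cost instead of periodic full scans.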


Re: Continuous Query

2020-10-05 Thread Denis Magda
Good. Another clarification:

   - Does that calculation change the state of the record (updates any
   fields)?
   - Does the calculation read or update any other records?

-
Denis


On Sat, Oct 3, 2020 at 1:34 PM narges saleh  wrote:

> The latter; the server needs to perform some calculations on the data
> without sending any notification to the app.
>
> On Fri, Oct 2, 2020 at 4:25 PM Denis Magda  wrote:
>
>> And after you detect a record that satisfies the condition, do you need
>> to send any notification to the application? Or is it more like a server
>> detects and does some calculation locally, without notifying the app?
>>
>> -
>> Denis
>>
>>
>> On Fri, Oct 2, 2020 at 11:22 AM narges saleh 
>> wrote:
>>
>>> The detection should happen at most a couple of minutes after a record
>>> is inserted into the cache, but all the detections are local to the node.
>>> However, some records with the current timestamp might show up in the
>>> system with big delays.
>>>
>>> On Fri, Oct 2, 2020 at 12:23 PM Denis Magda  wrote:
>>>
 What are your requirements? Do you need to process the records as soon
 as they are put into the cluster?



 On Friday, October 2, 2020, narges saleh  wrote:

> Thank you, Denis, for the reply.
> From the perspective of performance/resource overhead and reliability,
> which approach is preferable? Does a continuous-query-based approach
> impose a lot more overhead?
>
> On Fri, Oct 2, 2020 at 9:52 AM Denis Magda  wrote:
>
>> Hi Narges,
>>
>> Use continuous queries if you need to be notified in real time, i.e.
>> 1) a record is inserted, 2) the continuous query's filter confirms the
>> record's time satisfies your condition, 3) the continuous query notifies
>> your application, which then does the required processing.
>>
>> The jobs are better for a batching use case when it's ok to process
>> records together with some delay.
>>
>>
>> -
>> Denis
>>
>>
>> On Fri, Oct 2, 2020 at 3:50 AM narges saleh 
>> wrote:
>>
>>> Hi All,
>>>  If I want to watch for a rolling timestamp pattern in all the
>>> records that get inserted into all my caches, is it more efficient to use
>>> timer-based jobs (that check all the records at some interval) or
>>> continuous queries that locally filter on the pattern? These records can
>>> get inserted in any order, and some can arrive with delays.
>>> An example is to watch for all the records whose timestamp ends in
>>> 50, if the timestamp is in the format yyyy-mm-dd hh:mi.
>>>
>>> thanks
>>>
>>>

 --
 -
 Denis




Re: Ignite thread pool configuration

2020-10-05 Thread Denis Magda
Give it a try but do some load testing close to your production workload.
And then ramp the numbers up if needed.

-
Denis


On Mon, Oct 5, 2020 at 12:56 AM VeenaMithare 
wrote:

> Thanks Denis,
>
> I am thinking of setting the thread pools below as follows on both client
> and server, since we don't use the data streamer, IGFS, or peer class
> loading:
>
> 
> 
> 
>
> Also, our thick clients don't connect using REST, so I'm thinking of adding
> the below configuration to our thick client configuration.
>
> 
>  class="org.apache.ignite.configuration.ConnectorConfiguration">
> 
> 
> 
>
> Hope that is okay?
>
> regards,
> Veena.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
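A non-authoritative sketch of the same idea in programmatic form (Ignite 2.x setter names; the pool sizes shown are illustrative, not necessarily the values Veena's stripped XML snippets contained):

```java
import org.apache.ignite.configuration.ConnectorConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Sketch: shrink the pools for features that are not used.
public class PoolConfig {
    public static IgniteConfiguration configure() {
        IgniteConfiguration cfg = new IgniteConfiguration();

        cfg.setDataStreamerThreadPoolSize(1);      // data streamer unused
        cfg.setIgfsThreadPoolSize(1);              // IGFS unused
        cfg.setPeerClassLoadingThreadPoolSize(1);  // peer class loading unused

        // Thick clients that never serve REST can shrink the connector pool.
        ConnectorConfiguration connCfg = new ConnectorConfiguration();
        connCfg.setThreadPoolSize(1);
        cfg.setConnectorConfiguration(connCfg);

        return cfg;
    }
}
```

As Denis advises above, values like these should be validated under a production-like load before being rolled out.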


Re: Binary memory restoration

2020-10-05 Thread Raymond Wilson
Thanks for the thoughts Ilya and Vladimir.

We'll do a comparison with 2.9 when it releases to see if that makes any
difference.

One of the advantages of persistent storage is that it is effectively
'instant start'. Our WAL size is around 5 GB; perhaps this should be
decreased to reduce system start-up time?

Thanks,
Raymond.

On Wed, Sep 30, 2020 at 7:31 AM Vladimir Pligin 
wrote:

> It's possible that it happens because of
> https://issues.apache.org/jira/browse/IGNITE-13068.
> We need to scan the entire SQL primary index during startup in case you
> have at least one query entity configured.
> As far as I can see, it's going to be part of the Ignite 2.9 release.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
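For reference, these are the Ignite 2.x knobs that bound WAL disk usage, as a hedged sketch (the values are illustrative). Note that per Vladimir's reply, the slow start here is attributed to the SQL index scan (IGNITE-13068), so shrinking the WAL may not change start-up time much:

```java
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Sketch: what actually bounds the on-disk WAL footprint.
public class WalSizing {
    public static IgniteConfiguration configure() {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();

        // Active WAL working set = walSegments * walSegmentSize
        // (defaults: 10 segments of 64 MB = 640 MB).
        storageCfg.setWalSegments(10);
        storageCfg.setWalSegmentSize(64 * 1024 * 1024);

        // The archive is usually the bulk of a multi-GB WAL directory;
        // capping it is what shrinks the total size.
        storageCfg.setMaxWalArchiveSize(2L * 1024 * 1024 * 1024);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storageCfg);
        return cfg;
    }
}
```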


-- 

Raymond Wilson
Solution Architect, Civil Construction Software Systems (CCSS)
11 Birmingham Drive | Christchurch, New Zealand
+64-21-2013317 Mobile
raymond_wil...@trimble.com




Re: [Announcement] New Ignite docs merged to the master

2020-10-05 Thread Denis Magda
Just in case, we're still fighting an odd server-synchronization issue
with the INFRA team: https://issues.apache.org/jira/browse/INFRA-20925

Thus, empty your browser's cache to see the latest version of the docs.
Sometimes you might need to do this several times, depending on where your
request is served :)

-
Denis


On Mon, Oct 5, 2020 at 3:00 PM Denis Magda  wrote:

> Igniters,
>
> Just a shout-out that the new docs made it to the master (sources in
> AsciiDoc format) with the latest version published to the website (HTML
> format): https://ignite.apache.org/docs/latest/
>
> Thanks to everyone involved. Special thanks go to @Artem Budnikov
>  and @Mauricio Stekl  who
> were the main contributing force behind the effort.
>
> Effectively now, send all pull requests and make changes in the master
> branch. I'll prepare the how-to-contribute instructions in the following
> week or two.
>
> Also, I'll prepare a detailed status and share it in the other discussion
> below. Use that discussion to report any issues:
> http://apache-ignite-developers.2346864.n4.nabble.com/New-Ignite-Docs-Status-and-Incomplete-Items-td49268.html
>
> -
> Denis
>


[Announcement] New Ignite docs merged to the master

2020-10-05 Thread Denis Magda
Igniters,

Just a shout-out that the new docs made it to the master (sources in
AsciiDoc format) with the latest version published to the website (HTML
format): https://ignite.apache.org/docs/latest/

Thanks to everyone involved. Special thanks go to @Artem Budnikov
 and @Mauricio Stekl  who
were the main contributing force behind the effort.

Effectively now, send all pull requests and make changes in the master branch.
I'll prepare the how-to-contribute instructions in the following week or two.

Also, I'll prepare a detailed status and share it in the other discussion
below. Use that discussion to report any issues:
http://apache-ignite-developers.2346864.n4.nabble.com/New-Ignite-Docs-Status-and-Incomplete-Items-td49268.html

-
Denis


Re: Eviction policy enablement leads to ignite cluster blocking, it does not use pages from freelist

2020-10-05 Thread Evgenii Zhuravlev
Hi Prasad,

What operations do you run on the cluster? What is the size of objects? Is
it possible to share full logs from nodes? Do you have some kind of small
reproducer for this issue? It would be really helpful.

Thanks,
Evgenii

пн, 5 окт. 2020 г. в 07:53, Prasad Pillala :

> Hi,
>
>
>
> evictDataPage() always leads to the Ignite cluster being blocked, for some
> reason.
>
> This method does not seem to consider the freelist, which still has
> some/many pages available. Instead, evictDataPage() keeps trying to evict a
> few entries from filled pages, and after some time (a few minutes, after it
> reaches the evictionThreshold memory) it stops finding any pages/entries to
> evict and starts reporting "Too many failed attempts to evict page: 30".
>
>
>
> My Ignite configuration is as follows:
>
> DataRegionConfiguration
>
> dataRegionConfig.setMaxSize(8L * 1024 * 1024 * 1024)//8GB
>
>
>
>
> dataRegionConfig.setPageEvictionMode(DataPageEvictionMode.RANDOM_LRU) // tried LRU2 as well
>
> ...
>
> igniteDataCfg.setPageSize(pageSizeKB)//16KB
>
>
>
>Ignite version - 2.8.0
>
>
>
> Using only Off-Heap for caching. DataRegion persistence is disabled, as we
> have 3rd party persistence configured with read-through & write-through
> enabled.
>
>
>
> When I tried different evictionThreshold values, I still got the same
> result. Not sure what the problem is with my configuration.
>
>
>
> Many thanks in advance for your help.
>
>
>
>
>
> Stay ahead of today's supply chain complexities with Luminate Control
> Tower. Start a free 30-day trial here!
>
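A quick back-of-the-envelope for the configuration above shows where RANDOM_LRU starts working (assuming Ignite's default evictionThreshold of 0.9, since the message doesn't state the value tried):

```java
// Back-of-the-envelope for an 8 GB data region with 16 KB pages:
// when does page eviction kick in?
public class EvictionMath {
    public static void main(String[] args) {
        long regionBytes = 8L * 1024 * 1024 * 1024; // setMaxSize: 8 GB
        int pageSize = 16 * 1024;                   // setPageSize: 16 KB
        double evictionThreshold = 0.9;             // Ignite default

        long totalPages = regionBytes / pageSize;
        long evictionStartPages = (long) (totalPages * evictionThreshold);

        System.out.println(totalPages);         // pages in the region
        System.out.println(evictionStartPages); // eviction begins here (~7.2 GB)
    }
}
```

That is 524,288 pages total, with eviction starting around 471,859 filled pages; if entries are large relative to the page size, few pages ever become fully evictable, which is one plausible route to the "Too many failed attempts to evict page" error.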


Re: Occasional duplicate ID with IgniteAtomicSequence in a cluster

2020-10-05 Thread Vladimir Pligin
Do you have a reproducer of such behavior? It would be really helpful.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
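For the reproducer Vladimir asks about, a minimal single-JVM harness might look like the sketch below (the sequence name and iteration counts are illustrative; detecting duplicates across multiple nodes would additionally require collecting the drawn IDs centrally):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteAtomicSequence;
import org.apache.ignite.Ignition;

// Sketch of a reproducer harness: two threads draw IDs from one
// IgniteAtomicSequence and a concurrent set flags any duplicates.
public class SequenceDuplicateCheck {
    public static void main(String[] args) throws InterruptedException {
        Ignite ignite = Ignition.start();

        // create=true: the first caller creates the sequence, later callers attach.
        IgniteAtomicSequence seq = ignite.atomicSequence("id-seq", 0, true);

        Set<Long> seen = ConcurrentHashMap.newKeySet();

        Runnable worker = () -> {
            for (int i = 0; i < 100_000; i++) {
                long id = seq.incrementAndGet();
                if (!seen.add(id))
                    System.err.println("Duplicate ID observed: " + id);
            }
        };

        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();
    }
}
```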


Eviction policy enablement leads to ignite cluster blocking, it does not use pages from freelist

2020-10-05 Thread Prasad Pillala
Hi,



evictDataPage() always leads to the Ignite cluster being blocked, for some reason.

This method does not seem to consider the freelist, which still has
some/many pages available. Instead, evictDataPage() keeps trying to evict a
few entries from filled pages, and after some time (a few minutes, after it
reaches the evictionThreshold memory) it stops finding any pages/entries to
evict and starts reporting "Too many failed attempts to evict page: 30".



My Ignite configuration is as follows:

DataRegionConfiguration

dataRegionConfig.setMaxSize(8L * 1024 * 1024 * 1024)//8GB



dataRegionConfig.setPageEvictionMode(DataPageEvictionMode.RANDOM_LRU) // tried LRU2 as well

...

igniteDataCfg.setPageSize(pageSizeKB)//16KB



   Ignite version - 2.8.0



Using only Off-Heap for caching. DataRegion persistence is disabled, as we have 
3rd party persistence configured with read-through & write-through enabled.



When I tried different evictionThreshold values, I still got the same result.
Not sure what the problem is with my configuration.



Many thanks in advance for your help.






Re: Continuous Query

2020-10-05 Thread Ilya Kasnacheev
Please send an empty message to: user-unsubscr...@ignite.apache.org to
unsubscribe yourself from the list.

Regards,
-- 
Ilya Kasnacheev


пн, 5 окт. 2020 г. в 07:35, Priya Yadav :

> unsubscribe
> --
> *From:* narges saleh 
> *Sent:* Sunday, 4 October 2020 2:03 AM
> *To:* user@ignite.apache.org 
> *Subject:* Re: Continuous Query
>
> The latter; the server needs to perform some calculations on the data
> without sending any notification to the app.
>
> On Fri, Oct 2, 2020 at 4:25 PM Denis Magda  wrote:
>
> And after you detect a record that satisfies the condition, do you need to
> send any notification to the application? Or is it more like a server
> detects and does some calculation locally, without notifying the app?
>
> -
> Denis
>
>
> On Fri, Oct 2, 2020 at 11:22 AM narges saleh  wrote:
>
> The detection should happen at most a couple of minutes after a record is
> inserted into the cache, but all the detections are local to the node.
> However, some records with the current timestamp might show up in the
> system with big delays.
>
> On Fri, Oct 2, 2020 at 12:23 PM Denis Magda  wrote:
>
> What are your requirements? Do you need to process the records as soon as
> they are put into the cluster?
>
>
>
> On Friday, October 2, 2020, narges saleh  wrote:
>
> Thank you, Denis, for the reply.
> From the perspective of performance/resource overhead and reliability,
> which approach is preferable? Does a continuous-query-based approach impose
> a lot more overhead?
>
> On Fri, Oct 2, 2020 at 9:52 AM Denis Magda  wrote:
>
> Hi Narges,
>
> Use continuous queries if you need to be notified in real time, i.e. 1) a
> record is inserted, 2) the continuous query's filter confirms the record's
> time satisfies your condition, 3) the continuous query notifies your
> application, which then does the required processing.
>
> The jobs are better for a batching use case when it's ok to process
> records together with some delay.
>
>
> -
> Denis
>
>
> On Fri, Oct 2, 2020 at 3:50 AM narges saleh  wrote:
>
> Hi All,
>  If I want to watch for a rolling timestamp pattern in all the records
> that get inserted into all my caches, is it more efficient to use
> timer-based jobs (that check all the records at some interval) or
> continuous queries that locally filter on the pattern? These records can
> get inserted in any order, and some can arrive with delays.
> An example is to watch for all the records whose timestamp ends in 50, if
> the timestamp is in the format yyyy-mm-dd hh:mi.
>
> thanks
>
>
>
> --
> -
> Denis
>
> This email and any files transmitted with it are confidential, proprietary
> and intended solely for the individual or entity to whom they are
> addressed. If you have received this email in error please delete it
> immediately.
>


Re: unsubscribe

2020-10-05 Thread Ilya Kasnacheev
Hello!

Please send an empty message to: user-unsubscr...@ignite.apache.org to
unsubscribe yourself from the list.

Regards,
-- 
Ilya Kasnacheev


вс, 4 окт. 2020 г. в 22:36, Celes :

> unsubscribe
>


Re: Ignite thread pool configuration

2020-10-05 Thread VeenaMithare
Thanks Denis,

I am thinking of setting the thread pools below as follows on both client and
server, since we don't use the data streamer, IGFS, or peer class loading:





Also, our thick clients don't connect using REST, so I'm thinking of adding
the below configuration to our thick client configuration.







Hope that is okay?

regards,
Veena.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/