Re: Performance of Ignite integrating with PostgreSQL

2018-03-21 Thread Vinokurov Pavel
Also, it makes sense to use the new 2.4 version.


Re: Performance of Ignite integrating with PostgreSQL

2018-03-21 Thread Vinokurov Pavel
>> IgniteCache igniteCache = ignite.getOrCreateCache("testCache
");
Please change it to ignite.cache("testCache") to be sure that we use the
configuration from the file.


Re: Performance of Ignite integrating with PostgreSQL

2018-03-21 Thread Vinokurov Pavel
You already showed the cache configuration, but could you also show the
JDBC connection initialization?


Re: Performance of Ignite integrating with PostgreSQL

2018-03-21 Thread Vinokurov Pavel
Hi,

Could you please show the "PATH/example-cache.xml" file?

2018-03-21 9:40 GMT+03:00 :

> Hi Vinokurov,
>
>
>
> Thanks for your reply.
>
> I tried writing batches of 100 entries,
>
> and I got a worse result.
>
> The writing speed dropped to 12.09 KB per second.
>
> Below is my code, which I rewrote to use putAll and writeAll.
>
> Did I make some mistakes?
>
>
>
>
>
>
>
> Main function:
>
> Ignite ignite = Ignition.start("PATH/example-cache.xml");
>
> IgniteCache<String, String> igniteCache =
> ignite.getOrCreateCache("testCache ");
>
> Map<String, String> parameterMap = new HashMap<>();
>
> for(int i = 0; i < 100; i++)
>
> {
>
>  parameterMap.put(Integer.toString(i), "writeAll_val");
>
> }
>
>
>
> while(true)
>
> {
>
>  igniteCache.putAll(parameterMap);
>
> }
>
>
>
>
>
> Write all to PostgreSQL through JDBC:
>
> @Override
>
> public void writeAll(Collection<Cache.Entry<? extends String, ? extends
> String>> entries) throws CacheWriterException {
>
> Iterator<Cache.Entry<? extends String, ? extends String>> it =
> entries.iterator();
>
> Map<String, String> parameterMap = new HashMap<>();
>
> int count = 1;
>
> while (it.hasNext()) {
>
> Cache.Entry<? extends String, ? extends String> entry = it.next();
>
> String valCount = "val";
>
> valCount += Integer.toString(count);
>
> parameterMap.put(valCount, entry.getValue());
>
> count++;
>
> it.remove();
>
> }
>
>
>
> String sqlString = "INSERT INTO test_writeall(val) VALUES "
>
>+ "(:val1),(:val2),(:val3),(:val4),(:val5),(:val6),(:val7),(:val8),(:val9),(:val10),"
>+ "(:val11),(:val12),(:val13),(:val14),(:val15),(:val16),(:val17),(:val18),(:val19),(:val20),"
>+ "(:val21),(:val22),(:val23),(:val24),(:val25),(:val26),(:val27),(:val28),(:val29),(:val30),"
>+ "(:val31),(:val32),(:val33),(:val34),(:val35),(:val36),(:val37),(:val38),(:val39),(:val40),"
>+ "(:val41),(:val42),(:val43),(:val44),(:val45),(:val46),(:val47),(:val48),(:val49),(:val50),"
>+ "(:val51),(:val52),(:val53),(:val54),(:val55),(:val56),(:val57),(:val58),(:val59),(:val60),"
>+ "(:val61),(:val62),(:val63),(:val64),(:val65),(:val66),(:val67),(:val68),(:val69),(:val70),"
>+ "(:val71),(:val72),(:val73),(:val74),(:val75),(:val76),(:val77),(:val78),(:val79),(:val80),"
>+ "(:val81),(:val82),(:val83),(:val84),(:val85),(:val86),(:val87),(:val88),(:val89),(:val90),"
>+ "(:val91),(:val92),(:val93),(:val94),(:val95),(:val96),(:val97),(:val98),(:val99),(:val100);";
>
>
>
> jdbcTemplate.update(sqlString, parameterMap);
>
> }
>
>
>
>
>
>
>
> *From:* Vinokurov Pavel [mailto:vinokurov.pa...@gmail.com]
> *Sent:* Wednesday, March 14, 2018 5:42 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Performance of Ignite integrating with PostgreSQL
>
>
>
> Hi,
>
>
>
> You could try to use igniteCache.putAll to write batches of 1000
> entries.
>
> Use the following SQL in the PostgresDBStore#writeAll method to put data
> into the database:
>
> String sqlString = "INSERT INTO test(val) VALUES (:val1),(:val2),(:val3);";
>
>
>
>
>
> 2018-03-14 11:58 GMT+03:00 :
>
> Hi,
>
> I am trying to integrate Ignite with PostgreSQL.
>
> I use "atop" to monitor the data written to PostgreSQL,
>
> and observed that the writing speed is 1 MB per second.
>
> This performance is not really good. Below are my configuration and code.
> Please help me to improve it.
>
> Thanks.
>
>
>
> Here is my cache configuration:
>
>   
>
>  "testCache"/>
>
>  value="PARTITIONED"/>
>
>  value=" ATOMIC"/>
>
>  value="PRIMARY"/>
>
>  
>
>  value="true"/>
>
>   value="true"/>
>
>
>
>   value="64"/>
>
>   value="131072" />
>
>   value="131072" />
>
>
>
>  
>

Re: ContinuousQuery - SqlFieldsQuery as InitialQuery

2018-03-20 Thread Vinokurov Pavel
Hello!

At compilation time it is impossible to know the result type of a
SqlFieldsQuery.
So you can try the following:
((ContinuousQuery)q).setInitialQuery(new SqlFieldsQuery("select _val from
table"));

Thanks,
Pavel
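To see why the raw cast is needed, here is a minimal sketch using simplified
stand-in classes (these mirror, but are not, the real Ignite types): a fields
query is a Query over List<?> rows, while setInitialQuery expects a Query over
cache entries, so only a raw-typed call type-checks.

```java
import java.util.List;

// Simplified stand-ins for Ignite's query types (mock classes, not the real API).
class Query<R> { }

class FieldsQuery extends Query<List<?>> {
    final String sql;
    FieldsQuery(String sql) { this.sql = sql; }
}

class Entry<K, V> { }

class ContinuousQuery<K, V> {
    Query<Entry<K, V>> initQry;

    // The declared parameter type is Query<Entry<K, V>>, which a
    // Query<List<?>> does not satisfy at compile time.
    void setInitialQuery(Query<Entry<K, V>> qry) { this.initQry = qry; }
}

public class RawCastDemo {
    public static void main(String[] args) {
        ContinuousQuery<Integer, String> q = new ContinuousQuery<>();

        // Does not compile:
        // q.setInitialQuery(new FieldsQuery("select _val from t"));

        // The raw cast erases the generic parameter, so the call compiles
        // (with an unchecked warning), which is the workaround above:
        ((ContinuousQuery) q).setInitialQuery(new FieldsQuery("select _val from t"));

        System.out.println(q.initQry != null);
    }
}
```

The trade-off of the raw cast is that type safety is deferred to runtime, which
is why it produces an unchecked warning.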

2018-03-20 20:43 GMT+03:00 au.fp2018 :

> Hello All,
>
> I'm trying to set up a ContinuousQuery to retrieve _VAL objects from the
> cache. So I tried using SqlQuery, but unfortunately I get the following
> error:
>
> Only queries starting with 'SELECT *' and 'SELECT alias.*' are
> supported
> (rewrite your query or use SqlFieldsQuery instead): select _VAL from
> "test"."TEST" where test_id=?
>
> If I convert the query to SqlFieldsQuery then the return type does not
> match the type expected by the setInitialQuery() method:
>
> contQuery.setInitialQuery(initQueryUsingSqlFieldsQuery);// <--
> doesn't compile
>
> Is there an example of using SqlFieldsQuery as the InitialQuery?
> How else can I retrieve the initial snapshot using SQL, when I need the
> fully deserialized _VAL object?
>
> Thanks,
> AU
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Re: Performance of Ignite integrating with PostgreSQL

2018-03-14 Thread Vinokurov Pavel
Hi,

You could try to use igniteCache.putAll to write batches of 1000 entries.
Use the following SQL in the PostgresDBStore#writeAll method to put data into
the database:
String sqlString = "INSERT INTO test(val) VALUES (:val1),(:val2),(:val3);";
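Hardcoding one named parameter per row gets unwieldy at batch sizes of 100 or
1000. As a rough sketch (class, method, and table names here are illustrative,
not from this thread), both the VALUES clause and the parameter map can be
generated for whatever batch size writeAll receives:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BatchSqlBuilder {
    /** Builds "INSERT INTO <table>(val) VALUES (:val1),(:val2),..." for n rows. */
    static String buildInsertSql(String table, int n) {
        StringBuilder sb = new StringBuilder("INSERT INTO " + table + "(val) VALUES ");
        for (int i = 1; i <= n; i++) {
            if (i > 1)
                sb.append(',');
            sb.append("(:val").append(i).append(')');
        }
        return sb.append(';').toString();
    }

    /** Maps each value to its :valN parameter name, in insertion order. */
    static Map<String, String> buildParams(List<String> values) {
        Map<String, String> params = new HashMap<>();
        for (int i = 0; i < values.size(); i++)
            params.put("val" + (i + 1), values.get(i));
        return params;
    }

    public static void main(String[] args) {
        List<String> batch = new ArrayList<>();
        batch.add("a");
        batch.add("b");
        batch.add("c");

        // The generated SQL and params would be handed to
        // NamedParameterJdbcTemplate.update(sql, params) in writeAll.
        System.out.println(buildInsertSql("test", batch.size()));
        System.out.println(buildParams(batch).get("val2"));
    }
}
```

This way the statement always matches the actual number of entries in the
batch instead of assuming exactly 100.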


2018-03-14 11:58 GMT+03:00 :

> Hi,
>
> I am trying to integrate Ignite with PostgreSQL.
>
> I use "atop" to monitor the data written to PostgreSQL,
>
> and observed that the writing speed is 1 MB per second.
>
> This performance is not really good. Below are my configuration and code.
> Please help me to improve it.
>
> Thanks.
>
>
>
> Here is my cache configuration:
>
>   
>
>  "testCache"/>
>
>  value="PARTITIONED"/>
>
> 
>
>  value="PRIMARY"/>
>
>  
>
>  value="true"/>
>
>   value="true"/>
>
>
>
>   value="64"/>
>
>   value="131072" />
>
>   value="131072" />
>
>
>
>  
>
>  
>
> 
>
>  
>
>  
>
>  
>
>  
>
> 
>
> 
>
> 
>
> 
>
> 
>
>
> java.lang.String
>
>
> java.lang.String
>
>
>
> 
>
> 
>
>
>
>
>
> Main function:
>
> Ignite ignite = Ignition.start("PATH/example-cache.xml");
>
> IgniteCache<String, String> igniteCache =
> ignite.getOrCreateCache("testCache ");
>
> int seqint = 0;
>
> while(true)
>
> {
>
> igniteCache.put(Integer.toString(seqint),
> "valueString");
>
> seqint++;
>
> }
>
>
>
>
>
> Write behind to PostgreSQL through JDBC:
>
> @Override
>
> public void write(Cache.Entry<? extends String, ? extends String> entry)
> throws CacheWriterException {
>
> Map<String, String> parameterMap = new HashMap<>();
>
> parameterMap.put("val", entry.getValue());
>
> String sqlString = "INSERT INTO test(val) VALUES (:val);";
>
> jdbcTemplate.update(sqlString, parameterMap);
>
> }
>
>
>
>
> --
> This email may contain confidential information. Please do not use or
> disclose it in any way and delete it if you are not the intended recipient.
>



-- 

Regards

Pavel Vinokurov


Re: Ignite with Spring Cache on K8S, eviction problem

2018-02-19 Thread Vinokurov Pavel
Could you show the service methods with their cache annotations?

2018-02-19 17:19 GMT+03:00 lukaszbyjos :

> I still have a problem with evicting one cache. What's the recommended
> setting for a partitioned cache?
> From service B I want to evict the "ch-current" cache, but service A still
> sees the old values.
> At the same time, just after evicting this cache, the other one is evicted,
> and that one looks like it's working.
> Cache "ch-current" keys are created like this
> "@CacheEvict(value = "ch-current", key = "{#uid.concat('-').concat(0)}"),"
> "@Cacheable(cacheNames = "ch-current", key =
> "{#userId.name().concat('-').concat(#page)}")"
> I changed key creation after your suggestions.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Re: Timeout while running checkpoint

2018-02-12 Thread Vinokurov Pavel
It is normal behavior.
According to the documentation, the checkpointing process can be triggered by
the timeout (3 min by default) or by the size of the checkpointing buffer.
In your case, every 3 minutes Ignite starts the checkpointing process to sync
dirty pages from RAM to disk.
The log message indicates that there are no dirty pages in RAM.

https://apacheignite.readme.io/v2.3/docs/persistence-checkpointing#section-checkpointing-tuning
https://apacheignite.readme.io/docs/durable-memory-tuning#section-checkpointing-buffer-size


2018-02-12 12:16 GMT+03:00 Josephine Barboza :

> 3 mins on both nodes
>
>
>
> 2018-02-12 09:10:37  [db-checkpoint-thread-#33%nvIDNSGN7CR%] INFO
> GridCacheDatabaseSharedManager:463 - Skipping checkpoint (no pages were
> modified) [checkpointLockWait=0ms, checkpointLockHoldTime=3ms,
> reason='timeout']
>
> 2018-02-12 09:13:37  [db-checkpoint-thread-#33%nvIDNSGN7CR%] INFO
> GridCacheDatabaseSharedManager:463 - Skipping checkpoint (no pages were
> modified) [checkpointLockWait=0ms, checkpointLockHoldTime=2ms,
> reason='timeout']
>
>
>
>
>
> *From:* Vinokurov Pavel [mailto:vinokurov.pa...@gmail.com]
> *Sent:* Monday, February 12, 2018 2:23 PM
>
> *To:* user@ignite.apache.org
> *Subject:* Re: Timeout while running checkpoint
>
>
>
How often does the "Skipping checkpoint" message occur in the logs?
>
>
>
> 2018-02-12 10:47 GMT+03:00 Josephine Barboza :
>
> No I haven’t overridden checkpointFreq value.
>
>
>
> *From:* Vinokurov Pavel [mailto:vinokurov.pa...@gmail.com]
> *Sent:* Monday, February 12, 2018 1:03 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Timeout while running checkpoint
>
>
>
> Hi,
>
>
>
> The timeout could be caused by the value of the
> PersistentStoreConfiguration#checkpointingFrequency
> parameter.
>
> Have you overridden the *checkpointingFrequency* config parameter?
>
>
>
> 2018-02-12 10:05 GMT+03:00 Josephine Barboza :
>
> Hi,
>
> I’m constantly seeing a lot of information logs after setting up a cluster
> in ignite of two nodes
>
>
>
> Skipping checkpoint (no pages were modified) [checkpointLockWait=0ms,
> checkpointLockHoldTime=1ms, reason='timeout']
>
>
>
> Why could the process be timing out? I am using persistent store
> configuration with v2.1.
>
>
>
> Thanks,
>
> Josephine
>
> *IMPORTANT NOTICE: This email and any files transmitted with it are
> confidential and intended solely for the use of the individual or entity to
> whom they are addressed. If you have received this email in error, please
> notify the system manager and/or the sender immediately.*
>
>
>
>
>
> --
>
> Regards
>
> Pavel Vinokurov
>
>
>
>
>
> --
>
> Regards
>
> Pavel Vinokurov
>



-- 

Regards

Pavel Vinokurov


Re: Timeout while running checkpoint

2018-02-12 Thread Vinokurov Pavel
How often does the "Skipping checkpoint" message occur in the logs?

2018-02-12 10:47 GMT+03:00 Josephine Barboza :

> No I haven’t overridden checkpointFreq value.
>
>
>
> *From:* Vinokurov Pavel [mailto:vinokurov.pa...@gmail.com]
> *Sent:* Monday, February 12, 2018 1:03 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Timeout while running checkpoint
>
>
>
> Hi,
>
>
>
> The timeout could be caused by the value of the
> PersistentStoreConfiguration#checkpointingFrequency
> parameter.
>
> Have you overridden the *checkpointingFrequency* config parameter?
>
>
>
> 2018-02-12 10:05 GMT+03:00 Josephine Barboza :
>
> Hi,
>
> I’m constantly seeing a lot of information logs after setting up a cluster
> in ignite of two nodes
>
>
>
> Skipping checkpoint (no pages were modified) [checkpointLockWait=0ms,
> checkpointLockHoldTime=1ms, reason='timeout']
>
>
>
> Why could the process be timing out? I am using persistent store
> configuration with v2.1.
>
>
>
> Thanks,
>
> Josephine
>
> *IMPORTANT NOTICE: This email and any files transmitted with it are
> confidential and intended solely for the use of the individual or entity to
> whom they are addressed. If you have received this email in error, please
> notify the system manager and/or the sender immediately.*
>
>
>
>
>
> --
>
> Regards
>
> Pavel Vinokurov
>



-- 

Regards

Pavel Vinokurov


Re: Add ignite jars using maven

2018-02-11 Thread Vinokurov Pavel
I think the right way is to add the dependencies to the pom file.
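As a sketch (the version number is illustrative), a single ignite-core entry
is enough for the core APIs, since Maven resolves its transitive dependencies;
optional modules such as ignite-spring or ignite-indexing are added the same
way:

```xml
<dependencies>
    <!-- Core Ignite APIs; Maven pulls in the transitive dependencies. -->
    <dependency>
        <groupId>org.apache.ignite</groupId>
        <artifactId>ignite-core</artifactId>
        <version>2.4.0</version>
    </dependency>
    <!-- Optional: Spring-based XML configuration support. -->
    <dependency>
        <groupId>org.apache.ignite</groupId>
        <artifactId>ignite-spring</artifactId>
        <version>2.4.0</version>
    </dependency>
</dependencies>
```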

2018-02-12 10:37 GMT+03:00 Rajarshi Pain :

> Hi,
>
>
> Previously I was manually adding all the jars to my build path; now I am
> using Maven to do this. Is there any way to add all the Ignite jars with
> their dependencies to the project?
> Or do I have to add them one by one into the pom file?
>
>
> --
>
> Thanks
> Rajarshi
>



-- 

Regards

Pavel Vinokurov


Re: Timeout while running checkpoint

2018-02-11 Thread Vinokurov Pavel
Hi,

The timeout could be caused by the value of the
PersistentStoreConfiguration#checkpointingFrequency
parameter.
Have you overridden the *checkpointingFrequency* config parameter?

2018-02-12 10:05 GMT+03:00 Josephine Barboza :

> Hi,
>
> I’m constantly seeing a lot of information logs after setting up a cluster
> in ignite of two nodes
>
>
>
> Skipping checkpoint (no pages were modified) [checkpointLockWait=0ms,
> checkpointLockHoldTime=1ms, reason='timeout']
>
>
>
> Why could the process be timing out? I am using persistent store
> configuration with v2.1.
>
>
>
> Thanks,
>
> Josephine
> IMPORTANT NOTICE: This email and any files transmitted with it are
> confidential and intended solely for the use of the individual or entity to
> whom they are addressed. If you have received this email in error, please
> notify the system manager and/or the sender immediately.
>



-- 

Regards

Pavel Vinokurov


Re: Ignite with Spring Cache on K8S, eviction problem

2018-02-11 Thread Vinokurov Pavel
Spring creates a composite key that might not be suitable for the
distributed Ignite cache.
You could use a custom composite key built by concatenation: #user + "-" + #id.


2018-02-09 17:50 GMT+03:00 lukaszbyjos :

> Hi. I have a k8s cluster with one Ignite server and a few services as clients.
> I have a problem with evicting values using Spring annotations.
> Apps have the cache "example-user", and when one service evicts by key,
> another one still has the values.
>
> There you can find cache config and example repo for spring
> https://gist.github.com/Mistic92/8649515ff026e24ca0870ed61739a17c
>
> What should I change or do because currently I don't have any idea :(
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Re: Loading cache from Oracle Table

2018-02-09 Thread Vinokurov Pavel
Hi Prasad,

Within your implementation of CacheStore.loadCache you could use
multiple threads to retrieve rows in batches.
Note that each thread should use a different JDBC connection.
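A rough sketch of the threading pattern (class names, method names, and range
values here are illustrative, not from this thread; a real loadCache
implementation would run a ranged SELECT per batch, each thread on its own
JDBC connection, and pass the rows to the supplied closure):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;

public class ParallelRangeLoad {
    /**
     * Splits the key range [minKey, maxKey] into batches and processes each
     * batch on a fixed thread pool. In a CacheStore.loadCache implementation,
     * loadBatch would select rows with "WHERE id BETWEEN ? AND ?".
     */
    static void loadInParallel(long minKey, long maxKey, long batchSize,
                               int threads, BiConsumer<Long, Long> loadBatch)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (long lo = minKey; lo <= maxKey; lo += batchSize) {
            final long from = lo;
            final long to = Math.min(lo + batchSize - 1, maxKey);
            pool.submit(() -> loadBatch.accept(from, to));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }

    public static void main(String[] args) throws InterruptedException {
        // Thread-safe list because batches are recorded from pool threads.
        List<long[]> ranges = new CopyOnWriteArrayList<>();
        loadInParallel(1, 250, 100, 4,
            (from, to) -> ranges.add(new long[] {from, to}));
        // Batches: [1,100], [101,200], [201,250].
        System.out.println(ranges.size());
    }
}
```

Since the table has no partition column, splitting on min/max of the primary
key as above is one simple way to divide the work.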

2018-02-09 13:57 GMT+03:00 Prasad Bhalerao :

> Hi,
>
> I have multiple Oracle tables with more than 50 million rows. I want to
> load those tables into the cache. To load the cache I am using the
> CacheStore.loadCache method.
>
> Is there any way I can load a single table in a multithreaded way to
> improve the loading performance?
> What I actually want to do is,
> 1) Get all the distinct keys from the table.
> 2) Divide the list of key in batches.
> 3) Give each batch of keys to a separate thread which will fetch the data
> from the same table in parallel mode.
> e.g. Thread T1 will fetch the data for keys 1 to 100, thread T2 will
> fetch the data for keys 101 to 200, and so on.
>
> Does ignite provide any mechanism to do this?
>
> Note: I do not have partitionId in my table.
>
>
> Thanks,
> Prasad
>



-- 

Regards

Pavel Vinokurov


Re: continuous query - changes from local server only

2018-02-08 Thread Vinokurov Pavel
Som,

You could create the continuous query on each client node with the filter
described in the quoted message below.

2018-02-08 19:55 GMT+03:00 Som Som <2av10...@gmail.com>:

> I've got both client and server nodes on each of 3 physical servers; that
> is my cluster. There is a partitioned cache, and each server node stores only
> a part of the keys. I start the application on my dev machine; that app is
> also a client of the cluster. Then I put a new key into the cluster. I would
> like to see this change only in the client that is collocated with the server
> node that stores this new key.
>
> On Feb 8, 2018, 11:41 AM, "dkarachentsev" <
> dkarachent...@gridgain.com> wrote:
>
> Hi,
>
> You may use a filter for that, for example:
>
> ContinuousQuery<Integer, Integer> qry = new ContinuousQuery<>();
>
> final Set<ClusterNode> nodes = new
> HashSet<>(client.cluster().forDataNodes("cache")
> .forHost(client.cluster().localNode()).nodes());
>
> qry.setRemoteFilterFactory(new
> Factory<CacheEntryEventFilter<Integer, Integer>>() {
>     @Override public CacheEntryEventFilter<Integer, Integer>
>     create() {
>         return new CacheEntryEventFilter<Integer, Integer>() {
>             @IgniteInstanceResource
>             private Ignite ignite;
>
>             @Override public boolean evaluate(
>                 CacheEntryEvent<? extends Integer, ? extends
>                 Integer> event) throws CacheEntryListenerException {
>                 // Server nodes on current host
>                 return nodes.contains(ignite.cluster().localNode());
>             }
>         };
>     }
> });
>
> Thanks!
> -Dmitry
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>


-- 

Regards

Pavel Vinokurov


Re: code sample for cluster configuration

2018-02-07 Thread Vinokurov Pavel
The client mode allows pushing data to and retrieving data from the
cluster (one or several server nodes). A client node doesn't store data, but
the client is able to use the whole set of Ignite APIs.
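For the use case below, client mode can also be enabled declaratively in the
Spring XML configuration, equivalent to calling Ignition.setClientMode(true)
before Ignition.start (a minimal sketch):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Start this node as a client: it joins the cluster but stores no data. -->
    <property name="clientMode" value="true"/>
</bean>
```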

2018-02-08 8:01 GMT+03:00 Rajesh Kishore :

> Thanks Pavel, I am aware of this code.
> I am able to establish a cluster as well. I have the following requirement:
> from my application I want to retrieve/insert records on different
> cluster servers. My application code is a single instance, and it should be
> unaware of which Ignite cluster server it is retrieving/inserting the data
> from.
> Would just setting Ignition.setClientMode(true) be enough for this
> use case? Would my application push the data to one of the partitioned
> cluster servers?
>
> Thanks,
> Rajesh
>
> On Wed, Feb 7, 2018 at 8:43 PM, Vinokurov Pavel  > wrote:
>
>> Hi Rajesh,
>>
>> There is a good sample with persistence enabled:
>> https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/persistentstore/PersistentStoreExample.java
>> The documentation about Ignite persistence is presented at
>> https://apacheignite.readme.io/docs/distributed-persistent-store.
>>
>>
>> 2018-02-07 13:14 GMT+03:00 Rajesh Kishore :
>>
>>> Hi,
>>>
>>> I want to try a two node setup for Ignite cluster with native file based
>>> persistence enabled .
>>>
>>> Any samples, or pointer ?
>>>
>>>
>>> -Rajesh
>>>
>>
>>
>>
>> --
>>
>> Regards
>>
>> Pavel Vinokurov
>>
>
>


-- 

Regards

Pavel Vinokurov


Re: code sample for cluster configuration

2018-02-07 Thread Vinokurov Pavel
Hi Rajesh,

There is a good sample with persistence enabled:
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/persistentstore/PersistentStoreExample.java
The documentation about Ignite persistence is presented at
https://apacheignite.readme.io/docs/distributed-persistent-store.


2018-02-07 13:14 GMT+03:00 Rajesh Kishore :

> Hi,
>
> I want to try a two node setup for Ignite cluster with native file based
> persistence enabled .
>
> Any samples, or pointer ?
>
>
> -Rajesh
>



-- 

Regards

Pavel Vinokurov