Re: Ignite query performance with lots of joins

2019-09-20 Thread Павлухин Иван
Hi,

I checked the provided test data. I was able to speed up the query
execution in Ignite about 2 times on my machine by using an extra
configuration property: System.setProperty("IGNITE_MAX_INDEX_PAYLOAD_SIZE", "256");
See the documentation section about configuring the index inline size
[1]. You can try the same in your environment. In short, the inline
size is a knob for tuning indexed search speed. By default, Ignite
index pages can contain only a very limited piece of each indexed
value (the default inline size is 10 bytes). If an indexed value does
not fit into the inline size, the actual value has to be looked up in
another page (a data page), which can lead to performance degradation.
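Besides the global system property, the inline size can also be set per
index via SQL DDL; a sketch, where the table, column, and index names are
placeholders:

```sql
-- Reserve 256 bytes of each index row for the inlined value, so a lookup
-- rarely has to follow a link to the data page. Names are illustrative.
CREATE INDEX person_name_idx ON Person (name) INLINE_SIZE 256;
```

The same setting is available programmatically via QueryIndex#setInlineSize.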

> Not sure how to interpret the above statement. The support for SQL is an 
> attractive feature of Ignite/Gridgain, but if it doesn't perform on a single 
> node with little data I don't see how it will perform on a multi-node cluster.

Actually, data distribution is a tradeoff. It usually amounts to
"doing more work with more resources", and the gain is not linear.
But in the end you can reach higher throughput by adding more
computational resources. Of course, it depends on the particular
workload; complex joins might not be a good candidate here.

[1] 
https://apacheignite-sql.readme.io/docs/performance-and-debugging#section-increasing-index-inline-size

Wed, Sep 18, 2019 at 11:03, spoutnik_be:
>
> Unfortunately, I am nowhere near Silicon Valley these days ;-)
>
> Any update on possible optimizations that could bring us somewhat closer
> than H2 timings?
>
> Thanks, L.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: DataStreamer addData takes lot of time after 500 million writes

2019-09-19 Thread Павлухин Иван
Hi,

Is the problem specific to DataStreamer? Does it disappear if you
insert the same data using IgniteCache.put (or putAll batches)?
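For comparison, the batching itself can be sketched without a cluster. In
the snippet below a plain java.util.Map stands in for IgniteCache (which
offers the same putAll signature), so the sketch is self-contained; against
a real cluster the target would be ignite.cache("myCache") and the call
would be cache.putAll(batch).

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Sketch of inserting data in putAll batches instead of a DataStreamer.
 * A plain Map stands in for IgniteCache so the example is self-contained.
 */
public class BatchPutDemo {

    static final int BATCH_SIZE = 1_000;

    /** Copies all entries of {@code source} into {@code target} in batches; returns the batch count. */
    static int putAllInBatches(Map<Integer, String> source, Map<Integer, String> target) {
        Map<Integer, String> batch = new LinkedHashMap<>();
        int batches = 0;
        for (Map.Entry<Integer, String> e : source.entrySet()) {
            batch.put(e.getKey(), e.getValue());
            if (batch.size() == BATCH_SIZE) {
                target.putAll(batch); // cache.putAll(batch) against a real cache
                batch.clear();
                batches++;
            }
        }
        if (!batch.isEmpty()) { // flush the tail batch
            target.putAll(batch);
            batches++;
        }
        return batches;
    }

    public static void main(String[] args) {
        Map<Integer, String> source = new HashMap<>();
        for (int i = 0; i < 10_500; i++)
            source.put(i, "value-" + i);

        Map<Integer, String> cache = new HashMap<>();
        int batches = putAllInBatches(source, cache);

        System.out.println("entries=" + cache.size() + " batches=" + batches);
        // prints: entries=10500 batches=11
    }
}
```

With a batch size of a few hundred to a few thousand entries, putAll
amortizes network round trips in a way roughly comparable to the
streamer's per-node buffer.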

Sun, Sep 15, 2019 at 15:49, KR Kumar:
>
> Hi all - Why does the data streamer randomly take a lot of time after 500+
> million writes? It frequently and very consistently takes a long time to
> finish the writes, to the extent of 25 to 45 seconds per write. Maybe it is
> flushing the data, as I have a flush frequency set, but why not in the
> beginning and why only at the end? I also see the heap size going up after
> some time, and the trend is consistently upwards.
>
> Here is the streamer configuration:
>
> dataStreamer.autoFlushFrequency(1);
> dataStreamer.perNodeBufferSize(32 * 1024);
> dataStreamer.perNodeParallelOperations(32);
>
> Not sure if this is of any use but here is the dataStorageConfiguration
>
> <property name="dataStorageConfiguration">
>     <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
>         <property name="defaultDataRegionConfiguration">
>             <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
>                 <property name="initialSize" value="#{512L * 1024 * 1024}"/>
>                 <property name="maxSize" value="#{4L * 1024 * 1024 * 1024}"/>
>                 <property name="persistenceEnabled" value="true"/>
>                 <property name="checkpointPageBufferSize" value="#{2L * 1024 * 1024 * 1024}"/>
>             </bean>
>         </property>
>         <property name="pageSize" value="#{4 * 1024}"/>
>         <property name="storagePath" value="${grid.data}"/>
>         <property name="walPath" value="${grid.wal}"/>
>         <property name="walArchivePath" value="${grid.wal}/archive"/>
>         <property name="walMode" value="BACKGROUND"/>
>         <property name="walFlushFrequency" value="5000"/>
>     </bean>
> </property>



-- 
Best regards,
Ivan Pavlukhin


Re: Topology snapshot explanation

2019-09-15 Thread Павлухин Иван
Hi Rick,

1. Clients should not "reserve" offheap memory.
2. Near caches do not use offheap.

Could you please check whether the client process actually requests
offheap memory from the OS? I hope this is just a misleading "Topology
snapshot" message.

Sat, Sep 14, 2019 at 12:36, rick_tem:
>
> Hi,
>
> So you are saying clients too are using the data region settings and
> reserving gigs of data for offheap (if gigs of data are configured) even if
> I don't have near-cache configured?  I am using the same Spring
> configuration for clients and servers, can I override that behavior at
> runtime?
>
> Thanks,
> Rick
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Sudden node failure on Ignite v2.7.5

2019-07-19 Thread Павлухин Иван
Unfortunately I do not know. Currently there is no release activity on 2.8.

Fri, Jul 19, 2019 at 09:39, ihalilaltun:
>
> Hi Ivan
>
> Thanks for the reply. I've checked the jira issue and it says it will be
> released in v2.8, when do you think v2.8 will be released?
>
>
>
> -
> İbrahim Halil Altun
> Senior Software Engineer @ Segmentify
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Sudden node failure on Ignite v2.7.5

2019-07-18 Thread Павлухин Иван
Hi,

It seems that the issue [1] was already fixed.

[1] https://issues.apache.org/jira/browse/IGNITE-11953

Tue, Jul 16, 2019 at 09:30, ihalilaltun:
>
> Hi Pavel,
>
> Thanks for your reply. Since we use the whole system in the production
> environment, we cannot apply the second solution.
> Do you have any estimated time for the first solution/fix?
>
> Thanks.
>
>
>
> -
> İbrahim Halil Altun
> Senior Software Engineer @ Segmentify
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: onheapCacheEnabled enormous heap consumption

2019-07-14 Thread Павлухин Иван
Andrey,

Yes, it is a little bit complicated to understand.

CacheConfiguration.evictionPolicy has its roots in the days when there
was neither offheap nor persistence and cache data was stored in the
Java heap only. As far as I know, today CacheConfiguration.evictionPolicy
(as already mentioned) works only when onheap caching is enabled, and
an entry is still available in offheap after it has been evicted from
onheap. It is also worth noting (correct me if I am wrong) that data
is NOT buffered in onheap and then written to offheap on eviction, but
written to both places on each write operation. So, I expect that
onheap caching can speed up only read operations.

DataRegionConfiguration.pageEvictionMode is a different thing (the
similar naming brings confusion), and it appeared with PageMemory. It
cannot be configured on a per-cache level. It is applicable ONLY to
data regions WITHOUT persistence and controls offheap page eviction.
If pageEvictionMode is not DataPageEvictionMode.DISABLED, then when
memory runs low (see also evictionThreshold) some pages will be
evicted (actually cleaned and reused).

If persistence is enabled, DataRegionConfiguration.pageEvictionMode
has no effect (there should be a warning in the Ignite node startup
logs). For persistent data regions there is an internal page
replacement algorithm that allows reusing offheap memory pages when
some data needs to be pulled from disk.
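To make the distinction concrete, the two eviction knobs discussed above
might be configured side by side like this in Spring XML (a sketch, not a
complete configuration; the cache and region names are placeholders):

```xml
<!-- Onheap caching with an LRU policy: affects the Java-heap copy only. -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="exampleCache"/>
    <property name="onheapCacheEnabled" value="true"/>
    <!-- Evicts entries from the onheap layer; offheap still keeps the data. -->
    <property name="evictionPolicy">
        <bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicy">
            <property name="maxSize" value="100000"/>
        </bean>
    </property>
</bean>

<!-- Offheap page eviction: effective only for regions WITHOUT persistence. -->
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
    <property name="name" value="exampleRegion"/>
    <property name="persistenceEnabled" value="false"/>
    <property name="pageEvictionMode" value="RANDOM_2_LRU"/>
    <property name="evictionThreshold" value="0.9"/>
</bean>
```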

Mon, Jul 8, 2019 at 18:38, Ilya Kasnacheev:
>
> Hello!
>
> Data is always written to persistence immediately (via WAL). You can control 
> eviction of offheap with evictionThreshold and pageEvictionMode settings of 
> DataRegionConfiguration.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, Jul 8, 2019 at 17:50, Andrey Dolmatov:
>>
>> When data overfills the dataRegion max size, so no more offheap space is 
>> available, data goes to persistence. So, what option controls how data 
>> pages are evicted from offheap to persistence?
>>
>> On Mon, Jul 8, 2019, 5:33 PM Ilya Kasnacheev  
>> wrote:
>>>
>>> Hello!
>>>
>>> Data is always stored in offheap. Eviction strictly controls onheap cache. 
>>> Once data is evicted from onheap it is available in offheap.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Mon, Jul 8, 2019 at 17:31, Andrey Dolmatov:

 We plan to use persistence in production. I didn't understand whether 
 CacheConfiguration.EvictionPolicy specifies heap->offheap eviction, 
 offheap->persistence eviction, or both. It's not clear to me.

 On Mon, Jul 8, 2019, 5:19 PM Ilya Kasnacheev  
 wrote:
>
> Hello!
>
> Oops, I was wrong. This is indeed the wrong setting.
>
> Have you tried specifying evictionPolicy? I think it is the one that 
> controls eviction from onheap cache. You can put a LruEvictionPolicy of 
> 100 000 here, for example.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, Jul 8, 2019 at 17:09, Andrey Dolmatov:
>>
>> No, because we didn't specify QueryEntity.
>> Is onheapCacheEnabled used for SQL only?
>> What is the default value for sqlOnheapCacheMaxSize?
>>
>> Mon, Jul 8, 2019 at 17:05, Ilya Kasnacheev:
>>>
>>> Hello!
>>>
>>> Have you tried also specifying sqlOnheapCacheMaxSize? You can specify 
>>> 100 000 if you like.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Mon, Jul 8, 2019 at 17:01, Andrey Dolmatov:

 We use simple replicated KV cache.
 We try to upload 32 000 000 small records  to it (about 
 6Gb in data region, persistance disabled). We load data using 
 DataStreamer.

 If we set onheapCacheEnabled=false, server node consumes heap about 
 500 Mb.
 If we set onheapCacheEnabled=true, server node consumes heap about 6 
 Gb.

 Why does DataStreamer use heap memory to load data? Why is the on-heap 
 size unlimited (not just 100,000 records)? What is the default on-heap 
 eviction policy?

 
 
 
 

 Thanks!



-- 
Best regards,
Ivan Pavlukhin


Re: How to improve the performance of COPY commands?

2019-07-11 Thread Павлухин Иван
Hi,

Currently COPY is the mechanism designed for the fastest data load.
Yes, you can try to separate your data into chunks and execute COPY in
parallel. By the way, where is your input located and what is its size
in bytes (GB)? Is persistence enabled? Does the DataRegion have enough
memory to keep all the data?
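If the input is a CSV file, one way to parallelize is to split it into
chunks and run one COPY per chunk from separate SQL sessions (COPY goes
through the JDBC thin driver). The general shape of each command, with
the file path, table, and column names as placeholders:

```sql
-- One of N parallel sessions, each loading its own chunk of the input.
COPY FROM '/data/chunk_01.csv'
INTO ACCOUNT (ID, NAME, AMOUNT)
FORMAT CSV;
```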

Wed, Jul 10, 2019 at 05:02, 18624049226 <18624049...@163.com>:
>
> If the COPY command is used to import a large amount of data, the execution
> time is rather long.
> In the current test environment, the throughput is a bit over 10,000 rows/s,
> so 100 million rows will take several hours.
>
> Is there a faster way to import, or is COPY working in parallel?
>
> thanks!
>


-- 
Best regards,
Ivan Pavlukhin


Re: Spring Indexing example

2019-06-25 Thread Павлухин Иван
Hi Rick,

You can try the following:
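A mixed-order index can be declared by mapping each field to its
"ascending" flag; a Spring XML sketch based on the bean from the quoted
message (the index name here is made up):

```xml
<bean class="org.apache.ignite.cache.QueryIndex">
    <property name="name" value="EXAMPLE_MIXED_IDX"/> <!-- placeholder -->
    <property name="indexType" value="SORTED"/>
    <property name="fields">
        <map>
            <!-- the value is the "ascending" flag: false = descending -->
            <entry key="createdTime" value="false"/>
            <entry key="processedTime" value="true"/>
        </map>
    </property>
</bean>
```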
Tue, Jun 25, 2019 at 17:54, rick_tem:
>
> Hi,
> With the below index definition, is there a way to sort createdTime
> descending and processedTime ascending?  What would the definition look
> like?
>
> Thanks!
> Rick
>
>  class="org.apache.ignite.cache.QueryIndex">
> 
> 
> 
> recId
> 
> createdTime
> 
> processedTime
> 
> partitionId
> 
> 
>  value="SORTED"/>
> 
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: unable to query the cache after restart

2019-06-25 Thread Павлухин Иван
I have no clever idea yet; waiting for a reproducer.

Tue, Jun 25, 2019 at 05:59, goutham manchikatla:
>
> It didn't work as expected, tried disable/enable WAL too, still I see the 
> same behavior.
>
> On Mon, Jun 24, 2019 at 7:32 AM Павлухин Иван  wrote:
>>
>> Hi Goutham,
>>
>> I did not get from your last message whether it now works as expected. If
>> not, does it work without the disable/enable WAL trick?
>>
>> Wed, Jun 19, 2019 at 19:18, goutham manchikatla:
>> >
>> > Hi Denis,
>> >
>> > I tried removing Default_Region name property from config, still I see the 
>> > same behavior. But when I trigger load_cache process, and query the 
>> > cache(after a complete cluster restart) , I am getting the response.
>> >
>> > Below is the Load_cache process.
>> >  try (Ignite ignite = Ignition.start(configFile)) {
>> >
>> >
>> > // Start loading cache on all caching nodes.
>> >
>> > final IgniteCache cache = ignite.cache(cacheName);
>> >
>> > long ts = System.currentTimeMillis();
>> >
>> >
>> > IgniteCluster cluster = ignite.cluster();
>> >
>> > cluster.disableWal(cacheName);
>> >
>> >
>> > LOG.info("Disabling WAL for Initial Data Loading");
>> >
>> >
>> > cache.loadCache(null, ignite, cacheName, sqlquery);
>> >
>> >
>> > LOG.info("Loaded Cache in " + (System.currentTimeMillis() - ts) + " 
>> > millisecs");
>> >
>> >
>> > cluster.enableWal(cacheName);
>> >
>> >
>> > LOG.info("Enabling WAL after the pre-loading is complete");
>> >
>> > }
>> >
>> >
>> > Thanks,
>> >
>> > Goutham
>> >
>> >
>> > On Tue, Jun 18, 2019 at 8:45 PM Denis Magda  wrote:
>> >>
>> >> Try to remove Default_Region name property from your config. If to follow 
>> >> this example that’s how persistence is enabled for the default region:
>> >> https://apacheignite.readme.io/docs/distributed-persistent-store
>> >>
>> >> Denis
>> >>
>> >>
>> >> On Thursday, June 6, 2019, goutham manchikatla  wrote:
>> >>>
>> >>> Hi,
>> >>>
>> >>> I didn't change any code between restarts. Below is the configuration.
>> >>>
>> >>> 
>> >>> > >>> class="org.apache.ignite.configuration.DataStorageConfiguration">
>> >>> 
>> >>> 
>> >>>
>> >>> 
>> >>> 
>> >>> > >>> class="org.apache.ignite.configuration.DataRegionConfiguration">
>> >>> 
>> >>> 
>> >>> 
>> >>> 
>> >>> 
>> >>> 
>> >>> > >>> class="org.apache.ignite.configuration.DataRegionConfiguration">
>> >>> 
>> >>> 
>> >>> 
>> >>> 
>> >>> 
>> >>> 
>> >>> > >>> value="true"/>
>> >>> 
>> >>> > >>> value="RANDOM_LRU"/>
>> >>> 
>> >>> > >>> value="#{1024L * 1024 * 1024}"/>
>> >>> 
>> >>> 
>> >>> 
>> >>> 
>> >>> 
>> >>> 
>> >>> 
>> >>> 
>> >>> 
>> >>> 
>> >>> 
>> >>> > >>> class="org.apache.ignite.configuration.CacheConfiguration">
>> >>> > >>> value="500MB_Region"/>
>> >>> 
>> >>>  

Re: Canceling a running query on ignite

2019-06-24 Thread Павлухин Иван
Hi,

How do you execute your SQL queries? The Java API SqlFieldsQuery has a
setTimeout method. There is also a QueryCursor.cancel method.

In addition, a KILL query SQL command [1] is targeted for 2.8.

[1] https://issues.apache.org/jira/browse/IGNITE-11564
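The underlying timeout-and-cancel pattern can be sketched with plain
java.util.concurrent, since that needs no running cluster; in Ignite the
equivalent knobs are SqlFieldsQuery#setTimeout and QueryCursor#cancel:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

/**
 * Cancellation-with-timeout pattern: submit long-running work, wait a
 * bounded time for the result, cancel on expiry. This only illustrates the
 * semantics that setTimeout/cancel provide for an Ignite SQL query.
 */
public class QueryTimeoutDemo {

    static String runWithTimeout(Callable<String> query, long timeoutMs) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> f = pool.submit(query);
        try {
            return f.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            f.cancel(true);          // analogous to QueryCursor#cancel
            return "CANCELLED";
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        // A "query" that completes quickly vs. one that would run too long.
        System.out.println(runWithTimeout(() -> "42 rows", 500));
        System.out.println(runWithTimeout(() -> {
            Thread.sleep(10_000);    // simulated runaway query
            return "never";
        }, 100));
        // prints: 42 rows
        // prints: CANCELLED
    }
}
```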

Fri, Jun 21, 2019 at 18:41, Ph Tham:
>
> Hello All,
>
> When we run a complex SQL query on ignite cluster from a client, not clear on 
> what is expected? Does it run forever or can we specify a timeout? Is there a 
> way it can be canceled so that it does not bring the server down?
>
> Thanks



-- 
Best regards,
Ivan Pavlukhin


Re: Can we avoid the PME when restarts a node in cluster.

2019-06-24 Thread Павлухин Иван
It is worth noting that IGNITE-9420 seems to be relevant for clusters
with persistence enabled.

Fri, Jun 21, 2019 at 12:57, Ilya Kasnacheev:
>
> Hello!
>
> 2.7.5 definitely did not include it and there is no set date for 2.8.
>
> Regards,
>
> Fri, Jun 21, 2019 at 6:50, Justin Ji:
>>
>> 2.7.5 may not include the feature~
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: unable to query the cache after restart

2019-06-24 Thread Павлухин Иван
Hi Goutham,

I did not get from your last message whether it now works as expected.
If not, does it work without the disable/enable WAL trick?

Wed, Jun 19, 2019 at 19:18, goutham manchikatla:
>
> Hi Denis,
>
> I tried removing Default_Region name property from config, still I see the 
> same behavior. But when I trigger load_cache process, and query the 
> cache(after a complete cluster restart) , I am getting the response.
>
> Below is the Load_cache process.
>  try (Ignite ignite = Ignition.start(configFile)) {
>
>
> // Start loading cache on all caching nodes.
>
> final IgniteCache cache = ignite.cache(cacheName);
>
> long ts = System.currentTimeMillis();
>
>
> IgniteCluster cluster = ignite.cluster();
>
> cluster.disableWal(cacheName);
>
>
> LOG.info("Disabling WAL for Initial Data Loading");
>
>
> cache.loadCache(null, ignite, cacheName, sqlquery);
>
>
> LOG.info("Loaded Cache in " + (System.currentTimeMillis() - ts) + " 
> millisecs");
>
>
> cluster.enableWal(cacheName);
>
>
> LOG.info("Enabling WAL after the pre-loading is complete");
>
> }
>
>
> Thanks,
>
> Goutham
>
>
> On Tue, Jun 18, 2019 at 8:45 PM Denis Magda  wrote:
>>
>> Try to remove Default_Region name property from your config. If to follow 
>> this example that’s how persistence is enabled for the default region:
>> https://apacheignite.readme.io/docs/distributed-persistent-store
>>
>> Denis
>>
>>
>> On Thursday, June 6, 2019, goutham manchikatla  wrote:
>>>
>>> Hi,
>>>
>>> I didn't change any code between restarts. Below is the configuration.
>>>
>>> 
>>> >> class="org.apache.ignite.configuration.DataStorageConfiguration">
>>> 
>>> 
>>>
>>> 
>>> 
>>> >> class="org.apache.ignite.configuration.DataRegionConfiguration">
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> >> class="org.apache.ignite.configuration.DataRegionConfiguration">
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> >> value="true"/>
>>> 
>>> >> value="RANDOM_LRU"/>
>>> 
>>> >> value="#{1024L * 1024 * 1024}"/>
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> >> class="org.apache.ignite.configuration.CacheConfiguration">
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> >> class="javax.cache.configuration.FactoryBuilder" factory-method="factoryOf">
>>> >> value="com.cachestore.AccountCacheStore">
>>> 
>>> 
>>> 
>>> 
>>> >> class="org.apache.ignite.cache.QueryEntity">
>>> >> value="java.lang.String">
>>> >> value="com.domain.Account">
>>> 
>>> 
>>> >> value="java.lang.String">
>>> >> value="java.lang.String">
>>> >> value="java.lang.String">
>>> >> value="java.lang.String">
>>> >> value="java.lang.String">
>>> >> value="java.lang.String">
>>> >> value="java.lang.String">
>>> >> value="java.lang.String">
>>> >> value="java.lang.String">
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>>
>>>
>>> On Thu, Jun 6, 2019 at 8:32 AM Ilya Kasnacheev  
>>> wrote:

 Hello!

 This is strange. What's cache configuration? Is there a reproducer? Did 
 you change the code between restarts, including key/value types, if any?

 Regards,
 --
 Ilya Kasnacheev


 Thu, Jun 6, 2019 at 16:16, goutham manchikatla:
>
> Yes , the query worked before restart.
>
> On Thu, Jun 6, 

Re: Stop JVM on network Segmenation

2019-06-19 Thread Павлухин Иван
Hi Taruk,

There is no such thing out of the box. You can try to use
org.apache.ignite.Ignition#addListener and handle the
org.apache.ignite.IgniteState#STOPPED_ON_SEGMENTATION state change
according to your needs.

Perhaps, if you can describe why you need specific handling, the
Community might suggest other options.
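A minimal sketch of such a listener (a fragment, not runnable on its own:
it assumes ignite-core on the classpath and a node started in the same
JVM, and the halt call is just one example of custom handling):

```java
// Register a JVM-wide listener for grid state changes. When the local
// node stops because of segmentation, run whatever handling you need.
Ignition.addListener((name, state) -> {
    if (state == IgniteState.STOPPED_ON_SEGMENTATION) {
        // e.g. alert an operator, or terminate the JVM immediately
        Runtime.getRuntime().halt(2);
    }
});
```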

Mon, Jun 17, 2019 at 12:22, tarunk:
>
> Hi All,
>
> Can anyone please help with below original query ?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: unable to query the cache after restart

2019-06-08 Thread Павлухин Иван
Hi,

My first bet would be a problem with configuring persistence for the
cache, but I have not found any problems in the config. I can suggest
checking the directory with database files to verify that the same
directory is used across node restarts and that it contains files for
the cache in question.

Thu, Jun 6, 2019 at 19:39, goutham manchikatla:
>
> Hi,
> I will work on the reproducer project.
>
> I am using 2.7 version. Also I tried with Java SQL API
>
> SqlFieldsQuery sql = new SqlFieldsQuery(query);
>
> QueryCursor> cursor = cache.query(sql)
>
>
> On Thu, Jun 6, 2019 at 10:32 AM Ilya Kasnacheev  
> wrote:
>>
>> Hello!
>>
>> Can you make a reproducer project which will exhibit this behavior? One 
>> which will fill enough data in cache so that this behavior is observable 
>> after restart.
>>
>> BTW, what's the version you are on?
>>
>> Have you tried scan query (via Java code)?
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Thu, Jun 6, 2019 at 19:25, goutham manchikatla:
>>>
>>> http://localhost:8080/ignite?user=ignite=ignite=qryexe=Account=10=lincs_cache=select%20*%20from%20lincs.account%20LIMIT%2010
>>>
>>> Yes I tried using Debeaver JDBC, query -SELECT * FROM LINCS.ACCOUNT LIMIT 
>>> 10;
>>>
>>> Still the same behavior.
>>>
>>> On Thu, Jun 6, 2019 at 10:20 AM Ilya Kasnacheev  
>>> wrote:

 Hello!

 What's the query in question? Have you tried using e.g. sqlline to connect 
 via JDBC?

 Regards,
 --
 Ilya Kasnacheev


 Thu, Jun 6, 2019 at 19:15, goutham manchikatla:
>
> Hi,
>
> I reproduced the behavior. I stopped the cache nodes and started them 
> again. I see the metadata, cache count, but no query response:
>
> {"successStatus":0,"sessionToken":"94DAD112C4E848E98663AF5883BBDDE2","response":[{"cacheName":"lincs_cache","types":["Account"],"keyClasses":{"Account":"java.lang.String"},"valClasses":{"Account":"com.domain.Account"},"fields":{"Account":{"ACCOUNTNUMBER":"java.lang.String","FIRSTNAME":"java.lang.String","LASTNAME":"java.lang.String","SERVADDRLINE1":"java.lang.String","SERVADDRLINE2":"java.lang.String","SERVADDRCITY":"java.lang.String","SERVADDRSTATE":"java.lang.String","SERVADDRZIP":"java.lang.String","BILLADDRLINE1":"java.lang.String","BILLADDRLINE2":"java.lang.String","BILLADDRCITY":"java.lang.String","BILLADDRSTATE":"java.lang.String","BILLADDRZIP":"java.lang.String","BILLINGSYSTEM":"java.lang.String"}},"indexes":{"Account":[]}}],"error":null}
>
>  Record count:
>
> {"successStatus":0,"affinityNodeId":null,"sessionToken":"0BBB1DA51FA243298D378D1F2D2DFE80","response":121039244,"error":null}
>
> Query Output:
>
> {"successStatus":0,"sessionToken":"69E405FB1E93472FA3F06A1312E31597","error":null,"response":{"items":[],"last":true,"fieldsMetadata":[],"queryId":6}}
>
>
> I don't see any data in the response.
>
>
> On Thu, Jun 6, 2019 at 9:50 AM Ilya Kasnacheev 
>  wrote:
>>
>> Hello!
>>
>> Looks OK. Can you reproduce the behavior, or is it a one-time 
>> occurrence? What happens if you try to scan that cache? Anything 
>> suspicious in your logs?
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Thu, Jun 6, 2019 at 18:30, goutham manchikatla:
>>>
>>> Hi,
>>>
>>> I didn't change any code between restarts. Below is the configuration.
>>>
>>> 
>>> >> class="org.apache.ignite.configuration.DataStorageConfiguration">
>>> 
>>> 
>>>
>>> 
>>> 
>>> >> class="org.apache.ignite.configuration.DataRegionConfiguration">
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> >> class="org.apache.ignite.configuration.DataRegionConfiguration">
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> >> value="true"/>
>>> 
>>> >> value="RANDOM_LRU"/>
>>> 
>>> >> value="#{1024L * 1024 * 1024}"/>
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> >> class="org.apache.ignite.configuration.CacheConfiguration">
>>> >> value="500MB_Region"/>
>>> 
>>> 
>>> 

Re: FW: class loading, peer class loading, jars, fun times in ignite

2019-05-29 Thread Павлухин Иван
Hi Scott,

As far as I know, peer class loading does not work for data classes
(which are stored in a cache). It works for tasks sent for execution
using IgniteCompute.

This is only a partial answer. Could you describe your use case in
more detail?

Tue, May 28, 2019 at 23:35, Scott Cote:
>
> Whoops – sent to the wrong list …
>
>
>
> From: Scott Cote
> Sent: Tuesday, May 28, 2019 1:04 PM
> To: d...@ignite.apache.org
> Subject: class loading, peer class loading, jars, fun times in ignite
>
>
>
> I am fairly certain that I don’t know how to use peer class loading properly.
>
>
>
> Am using Apache Ignite 2.7.  If I have a node running on 192.168.1.2 with a 
> peer class loading enabled, and I start up a second node – 192.168.1.3, 
> client mode enabled and peer class loading enabled, then I expected the 
> following:
>
>
>
> Running the snippet (based on 
> https://apacheignite.readme.io/docs/getting-started#section-first-ignite-data-grid-application
>  ) on the client (192.168.1.3):
>
>
>
> try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
>
> IgniteCache cache = 
> ignite.getOrCreateCache("myCacheName");
>
>
>
> // Store keys in cache (values will end up on different cache nodes).
>
> for (int i = 0; i < 10; i++)
>
> cache.put(i,new MyWrapperOfString( Integer.toString(i)));
>
>
>
> for (int i = 0; i < 10; i++)
>
> System.out.println("Got [key=" + i + ", val=" + cache.get(i) + ']');
>
> }
>
>
>
>
>
> Would cause the cache of “MyWrapperOfString” instances to be available on 
> 192.168.1.2 and on 192.168.1.3 .   Also be able to observe the cache using 
> visor, etc ….
>
>
>
> However – I instead get an error that the class “MyWrapperOfString” is not 
> available on 192.168.1.2.   Now if I take the jar that the class is packed, 
> and place it in the lib folder, all is happy.
>
>
>
> Should I have to do this?
>
> If yes – how do I update the jar if I have a cluster of nodes doing this?   
> Do I have to shutdown the entire cluster in order to not have class loader 
> problems?
>
> I thought the peer class loading is supposed to solve this problem.
>
>
>
> I think it would be VERY INSTRUCTIVE for the snippet that I anchored to not 
> use a standard java library cache object, but to demonstrate the need to 
> package value object into a jar and stuff it into the lib folder (If this is 
> what is expected). Running lambdas that use basic java primitives is 
> cool, but is this the normal?
>
>
>
> Switching up …. Is there interest in me creating class loader that would load 
> java classes into the vm that could be incorporated into ignite?   So instead 
> of reading a jar, you load the class bytes into a cache .  You want to hot 
> load a new class?  Fine ! pump into the DISTRIBUTED_CLASS_PATH_CACHE .
>
>
>
> Cheers.
>
>
>
> SCott
>
>



-- 
Best regards,
Ivan Pavlukhin


Re: Behavior of Ignite during PME on new nodes added to the cluster

2019-05-03 Thread Павлухин Иван
Hi Evangelos and Matt,

As far as I know, there were issues with the join of a client node in
previous Ignite versions. In new versions a joining client should not
cause any spikes.

In fact, PME is (unfortunately) a widely known beast in the Ignite
world. Fundamentally, PME can (and should) perform smoothly as long as
new server nodes do not join the cluster very frequently. I will give
some details about what happens when a new server node joins the
cluster. I hope it helps to answer question 3 from the first message
in this thread.

As its name hints, PME is the process by which all nodes agree on the
data distribution in the cluster after an event that leads to a
redistribution (e.g. a node joining). The data distribution is the
knowledge that partition i is located on node j, and for correct
cluster operation each node must agree on the same distribution
(consensus). So, it is all about a consistent data distribution.

Consequently, some data has to be rebalanced after the nodes come to
an agreement on a distribution. Ignite uses a clever trick to allow
operations while data is being rebalanced. When a new node joins:
1. PME occurs and the nodes agree on the same data distribution. In
that distribution all primary partitions belong to the same nodes they
belonged to before PME. Also, temporary backup partitions are assigned
to the new node, which will become the primary node for those
partitions (keep reading).
2. Rebalance starts and delivers data to the temporary backup
partitions* mentioned before. The cluster is fully operational
meanwhile.
3. Once rebalance completes, one more PME happens. Now the temporary
backups become primary (and other redundant partitions are marked for
unload).
* It is worth noting here that a partition which was empty and got
loaded during rebalance is marked as MOVING. It is not readable
because it does not contain all the data yet, but all writes go to
this partition as well, in order to make it up to date when rebalance
completes.
(In Ignite the described trick is sometimes called "late affinity
assignment".)

So, PME should not be very heavy, because it is mainly about
establishing an agreement on the data distribution. The heavier data
rebalance happens while the cluster is fully operational. But PME
still requires a period of silence while the agreement is being
established. As you might know, PME and write operations use a
mechanism similar to a read-write lock. Write operations are guarded
by that lock in shared mode; PME acquires the lock in exclusive mode.
So, at any moment we can have either several running write operations
or a single running PME. It means that PME has to wait for all write
operations to complete before it can start, and it blocks all new
write operations from starting. Therefore, long-running transactions
blocking PME can lead to a prolonged "silence" period.
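The read-write-lock analogy can be sketched with a plain
java.util.concurrent lock; this only illustrates the semantics and is not
Ignite's actual implementation:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Illustration of the PME locking semantics: writes share the lock,
 * PME takes it exclusively, so PME waits for in-flight writes and
 * blocks new ones while it runs.
 */
public class PmeLockDemo {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private int writesApplied; // illustrative; a real shared counter would need atomics

    /** A cache write: many may run concurrently (shared mode). */
    public void write() {
        lock.readLock().lock();
        try {
            writesApplied++;
        } finally {
            lock.readLock().unlock();
        }
    }

    /** PME: exclusive mode; no write can overlap with it. */
    public boolean exchange() {
        lock.writeLock().lock();
        try {
            return lock.getReadLockCount() == 0; // no writes in flight
        } finally {
            lock.writeLock().unlock();
        }
    }

    public int writesApplied() {
        return writesApplied;
    }

    public static void main(String[] args) {
        PmeLockDemo node = new PmeLockDemo();
        node.write();
        node.write();
        System.out.println("exchange clean: " + node.exchange()
            + ", writes applied: " + node.writesApplied());
        // prints: exchange clean: true, writes applied: 2
    }
}
```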

Thu, Apr 25, 2019 at 00:58, Evangelos Morakis:
>
> Matt thank you for your reply,
> Indeed I saw your question too yesterday. In regards to points 3-4 of my 
> question I suppose that as you mention, if one shuts down gracefully the 
> client node and if  the number of threads responsible for rebalancing the 
> data gets tweaked, then I guess the amount of time the cluster blocks could 
> be managed. For point 2 I think it’s necessary for someone from the dev team 
> to provide a bit more insight as to what ignite’s behavior is in regards to 
> client nodes joining/leaving the cluster as I fail to understand why PEM is 
> triggered for such nodes given their natural exclusion  from computations and 
> the lack of storage of cache data in them. Indeed if the case is that PEM is 
> triggered for client nodes when joining/leaving, scenarios where remote 
> clients come and go on demand become  difficult to accommodate at best, and 
> this sounds very restrictive. I simply need to know more on this otherwise it 
> would not be possible to develop a working strategy for accommodating clients 
> that come, do a bit of work, and then they leave until next time.
>
> Kind regards
>
> Dr. Evangelos Morakis
> Software Architect
>
> > On 24 Apr 2019, at 21:21, MattNohelty  wrote:
> >
> > I have these same questions and posted about this yesterday
> > (http://apache-ignite-users.70518.x6.nabble.com/What-happens-when-a-client-gets-disconnected-td27959.html).
> > Based on my understanding:
> >
> > 1) Yes, PME will always happen when a server node joins
> >
> > 2) This is my biggest question.  I'm currently using 2.4 and it appears PME
> > is happening when a client connects or disconnects but I received one
> > response that seemed to indicate that PME should not happen in this case in
> > the newest versions of Ignite.  I agree with your reasoning that these
> > rebalancing processes do not seem necessary as all the data is on the server
> > nodes which is what prompted my initial question.
> >
> > 3) The responses I received do say that the cluster blocks while this
> > happens and I've seen evidence of this as well.  I've only seen 

Re: sending event on update of a specific cache

2019-04-07 Thread Павлухин Иван
Hi,

You can try the following combination:
1. Use IgniteEvents to set up event listeners.
2. Disable events for all other caches via CacheConfiguration.setEventsDisabled.
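In Spring XML the combination might look like this (a sketch: it assumes
the spring `util` namespace is declared, EVT_CACHE_OBJECT_PUT is just an
example event type, and the cache name is a placeholder):

```xml
<!-- On IgniteConfiguration: enable cache PUT events cluster-wide... -->
<property name="includeEventTypes">
    <list>
        <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT"/>
    </list>
</property>

<!-- ...and switch events off for every cache except the interesting one. -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="someOtherCache"/> <!-- placeholder -->
    <property name="eventsDisabled" value="true"/>
</bean>
```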

Sun, Apr 7, 2019 at 15:18, matanlevy:
>
> Hi,
>
> I am using ignite cache and I would like to know if there is any mechanishm
> that I can use to trigger update only for a *specific *cache.
>
> my use case is using REST API in order to perform simple actions on the
> cache(simple get and put), so I can't use Continous Query for that.
>
> I know that I can use the event mechanism for that, but I am affraid the
> overhead for eachupdate in each cache in the clusther is too high.
>
> I am looking for a way to filter it so only updates on my specific cache
> will trigger events.
>
> Thanks!
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Primary partitions return zero partitions before rebalance.

2019-04-04 Thread Павлухин Иван
Ah, sorry for that. PME = Partition Map Exchange. It is described,
along with late affinity assignment, in the article you referenced
earlier [1].

[1] 
https://cwiki.apache.org/confluence/display/IGNITE/%28Partition+Map%29+Exchange+-+under+the+hood#id-(PartitionMap)Exchange-under

Tue, Apr 2, 2019 at 20:22, Koitoer:
>
> Sorry but what is exactly the PME ?
>
> On Mon, Apr 1, 2019 at 1:55 AM Павлухин Иван  wrote:
>>
>> Hi,
>>
>> Sorry for the late answer. The observed result seems expected to me. I
>> suppose the following:
>> 1. EVT_CACHE_REBALANCE_STOPPED is fired when a particular node has loaded
>> all partitions which it will be responsible for.
>> 2. All nodes in the cluster must become aware that the partition
>> assignment has changed. So, a PME will happen to make all nodes aware of
>> the new assignment.
>> 3. Once the PME completes, all nodes will consistently treat the
>> just-entered node as primary for the corresponding set of partitions.
>>
>> Do not hesitate to write back if you feel that something is going wrong.
>>
>> вт, 19 мар. 2019 г. в 19:30, Koitoer :
>> >
>> > Hello Igniters
>> >
>> > The version of Ignite that we are using is 2.7.0. I'm adding the events 
>> > that I want to hear via the IgniteConfiguration using the 
>> > `setIncludeEventTypes`
>> > Then using ignite.event().localListen(listenerPredicate, eventTypes);
>> >
>> > EVT_CACHE_REBALANCE_STARTED,
>> > EVT_CACHE_REBALANCE_STOPPED,
>> > EVT_CACHE_REBALANCE_PART_LOADED,
>> > EVT_CACHE_REBALANCE_PART_UNLOADED,
>> > EVT_CACHE_REBALANCE_PART_DATA_LOST
>> >
>> > Once I listen any of the events above, I used 
>> > `ignite.affinity(cacheName.name())`  to retrieve the Affinity function in 
>> > which I'm calling the `primaryPartitions` method or `allPartitions` using 
>> > the ClusterNode instance that represents `this` node.
>> >
>> > Once I hear the rebalance process stop event I created a thread in charge 
>> > of checking the partition assignment as follows.
>> >
>> > new Thread(() -> {
>> > for (int attempt = 0; attempt <= attempts; attempt++) {
>> > log.info("event=partitionAssignmentRetryLogic attempt={}, 
>> > before={}, now={}", attempt, assignedPartitions,
>> > affinity.primaryPartitions(clusterNode));
>> >
>> > try {
>> > if (affinity.primaryPartitions(clusterNode).length != 0) {
>> > log.info("event=partitionAssignmentRetryLogicSuccess");
>> > }
>> > TimeUnit.SECONDS.sleep(delay);
>> > } catch (Exception e) {
>> > log.error("event=ErrorOnTimerWait message={}", e.getMessage(), 
>> > e);
>> > }
>> > }
>> > }).start();
>> >
>> >
>> > After a couple of attempts (some seconds), the `primaryPartitions` is 
>> > returning the correct set of partitions assigned to a node.  I will check 
>> > the AffinityAssignment for trying to do this in a cleaner way as you 
>> > suggest.
>> >
>> >
>> > On Fri, Mar 15, 2019 at 12:11 PM Павлухин Иван  wrote:
>> >>
>> >> Hi,
>> >>
>> >> What Ignite version do you use?
>> >> How do you register your listener?
>> >> On what object do you call primaryPartitions/allPartitions?
>> >>
>> It is true that Ignite uses late affinity assignment. This means
>> that for each topology change (node enter or node leave) the partition
>> assignment changes twice. The first time, temporary backups are created
>> which should be rebalanced from other nodes (EVT_CACHE_REBALANCE_STARTED
>> takes place here). The second time, redundant partition replicas are
>> marked as unusable (and unloaded after that)
>> (EVT_CACHE_REBALANCE_STOPPED). It is also useful to understand that
>> the Affinity interface calculates the partition distribution using the
>> affinity function, and such a distribution might differ from the real
>> partition assignment. In particular, it differs while rebalance is in
>> progress. See the AffinityAssignment interface.
>> >>
>> >> ср, 13 мар. 2019 г. в 21:59, Koitoer :
>> >> >
>> >> > Hi All.
>> >> >
>> >> > I'm trying to follow the rebalance events of my ignite cluster so I'm 
>> >> > able to track which partitions are assigned to 

Re: quick Q- date type in cache is converted to ticks. is this based on time since from 1970?

2019-04-03 Thread Павлухин Иван
Yes, internally it is stored as the number of milliseconds since
1970-01-01 00:00:00.000 UTC.
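The relationship between a Java `Date` and its epoch-millisecond representation (the "ticks" stored internally) can be seen with plain JDK code:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class EpochMillisDemo {
    public static void main(String[] args) {
        // A java.util.Date is just a wrapper around milliseconds since the Unix epoch.
        Date epoch = new Date(0L);

        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));

        System.out.println(fmt.format(epoch));   // 1970-01-01 00:00:00.000
        System.out.println(epoch.getTime());     // 0

        // One day later = 86,400,000 ms after the epoch.
        Date nextDay = new Date(86_400_000L);
        System.out.println(fmt.format(nextDay)); // 1970-01-02 00:00:00.000
    }
}
```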

пт, 29 мар. 2019 г. в 16:17, wt :
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Primary partitions return zero partitions before rebalance.

2019-04-01 Thread Павлухин Иван
Hi,

Sorry for the late answer. The observed result seems expected to me. I
suppose the following:
1. EVT_CACHE_REBALANCE_STOPPED is fired when a particular node has loaded
all partitions which it will be responsible for.
2. All nodes in the cluster must become aware that the partition
assignment has changed. So, a PME will happen to make all nodes aware of
the new assignment.
3. Once the PME completes, all nodes will consistently treat the
just-entered node as primary for the corresponding set of partitions.

Do not hesitate to write back if you feel that something is going wrong.

вт, 19 мар. 2019 г. в 19:30, Koitoer :
>
> Hello Igniters
>
> The version of Ignite that we are using is 2.7.0. I'm adding the events that 
> I want to hear via the IgniteConfiguration using the `setIncludeEventTypes`
> Then using ignite.event().localListen(listenerPredicate, eventTypes);
>
> EVT_CACHE_REBALANCE_STARTED,
> EVT_CACHE_REBALANCE_STOPPED,
> EVT_CACHE_REBALANCE_PART_LOADED,
> EVT_CACHE_REBALANCE_PART_UNLOADED,
> EVT_CACHE_REBALANCE_PART_DATA_LOST
>
> Once I listen any of the events above, I used 
> `ignite.affinity(cacheName.name())`  to retrieve the Affinity function in 
> which I'm calling the `primaryPartitions` method or `allPartitions` using the 
> ClusterNode instance that represents `this` node.
>
> Once I hear the rebalance process stop event I created a thread in charge of 
> checking the partition assignment as follows.
>
> new Thread(() -> {
> for (int attempt = 0; attempt <= attempts; attempt++) {
> log.info("event=partitionAssignmentRetryLogic attempt={}, before={}, 
> now={}", attempt, assignedPartitions,
> affinity.primaryPartitions(clusterNode));
>
> try {
> if (affinity.primaryPartitions(clusterNode).length != 0) {
> log.info("event=partitionAssignmentRetryLogicSuccess");
> }
> TimeUnit.SECONDS.sleep(delay);
> } catch (Exception e) {
> log.error("event=ErrorOnTimerWait message={}", e.getMessage(), e);
> }
> }
> }).start();
>
>
> After a couple of attempts (some seconds), the `primaryPartitions` is 
> returning the correct set of partitions assigned to a node.  I will check the 
> AffinityAssignment for trying to do this in a cleaner way as you suggest.
>
>
> On Fri, Mar 15, 2019 at 12:11 PM Павлухин Иван  wrote:
>>
>> Hi,
>>
>> What Ignite version do you use?
>> How do you register your listener?
>> On what object do you call primaryPartitions/allPartitions?
>>
>> It is true that Ignite uses late affinity assignment. This means
>> that for each topology change (node enter or node leave) the partition
>> assignment changes twice. The first time, temporary backups are created
>> which should be rebalanced from other nodes (EVT_CACHE_REBALANCE_STARTED
>> takes place here). The second time, redundant partition replicas are
>> marked as unusable (and unloaded after that)
>> (EVT_CACHE_REBALANCE_STOPPED). It is also useful to understand that
>> the Affinity interface calculates the partition distribution using the
>> affinity function, and such a distribution might differ from the real
>> partition assignment. In particular, it differs while rebalance is in
>> progress. See the AffinityAssignment interface.
>>
>> ср, 13 мар. 2019 г. в 21:59, Koitoer :
>> >
>> > Hi All.
>> >
>> > I'm trying to follow the rebalance events of my ignite cluster so I'm able 
>> > to track which partitions are assigned to each node at any point in time. 
>> > I am listening to the `EVT_CACHE_REBALANCE_STARTED` and 
>> > `EVT_CACHE_REBALANCE_STOPPED`
>> > events from Ignite and that is working well, except in the case one node 
>> > crash and another take its place.
>> >
>> > My cluster is 5 nodes.
>> > Ex. Node 1 has let's say 100 partitions, after I kill this node the 
>> > partitions that were assigned to it, got rebalance across the entire 
>> > cluster, I'm able to track that done with the STOPPED event and checking 
>> > the affinity function in each one of them using the `primaryPartitions` 
>> > method gives me that, if I add all those numbers I get 1024 partitions, 
>> > which is why I was expected.
>> >
>> > However when a new node replaces the previous one, I see a rebalance 
>> > process occurs and now I'm getting that some of the partitions `disappear` 
>> > from the already existing nodes (which is expected as well as new node 
>> > will take some partitions from them) but when the STOPPED event is 
>> > listened by this new node if I call the `primaryPartitions` that one 
>> > returns an empty list, but if I used the `allPartitions` method that one 
>> > gives me a list (I think at this point it is primary + backups).

Re: How to use atomic operations on C++ thin client?

2019-03-15 Thread Павлухин Иван
Hi Jack,

It should be included in the next version [1]. Stay tuned.

[1] https://issues.apache.org/jira/browse/IGNITE-9904

пт, 15 мар. 2019 г. в 01:32, jackluo923 :
>
> After digging deeper, it appears that thin-client atomic cache operations are
> not implemented. I have implemented and tested the atomic cache operations
> in C++ thin-client locally and they appear to work correctly but I haven't
> done any extensive testing. Is there any reason why atomic operations are
> not available in c++ thin-client, but available in other languages?
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Primary partitions return zero partitions before rebalance.

2019-03-15 Thread Павлухин Иван
Hi,

What Ignite version do you use?
How do you register your listener?
On what object do you call primaryPartitions/allPartitions?

It is true that Ignite uses late affinity assignment. This means
that for each topology change (node enter or node leave) the partition
assignment changes twice. The first time, temporary backups are created
which should be rebalanced from other nodes (EVT_CACHE_REBALANCE_STARTED
takes place here). The second time, redundant partition replicas are
marked as unusable (and unloaded after that)
(EVT_CACHE_REBALANCE_STOPPED). It is also useful to understand that
the Affinity interface calculates the partition distribution using the
affinity function, and such a distribution might differ from the real
partition assignment. In particular, it differs while rebalance is in
progress. See the AffinityAssignment interface.

ср, 13 мар. 2019 г. в 21:59, Koitoer :
>
> Hi All.
>
> I'm trying to follow the rebalance events of my ignite cluster so I'm able to 
> track which partitions are assigned to each node at any point in time. I am 
> listening to the `EVT_CACHE_REBALANCE_STARTED` and 
> `EVT_CACHE_REBALANCE_STOPPED`
> events from Ignite and that is working well, except in the case one node 
> crash and another take its place.
>
> My cluster is 5 nodes.
> Ex. Node 1 has let's say 100 partitions, after I kill this node the 
> partitions that were assigned to it, got rebalance across the entire cluster, 
> I'm able to track that done with the STOPPED event and checking the affinity 
> function in each one of them using the `primaryPartitions` method gives me 
> that, if I add all those numbers I get 1024 partitions, which is why I was 
> expected.
>
> However when a new node replaces the previous one, I see a rebalance process 
> occurs and now I'm getting that some of the partitions `disappear` from the 
> already existing nodes (which is expected as well as new node will take some 
> partitions from them) but when the STOPPED event is listened by this new node 
> if I call the `primaryPartitions` that one returns an empty list, but if I 
> used the  `allPartitions` method that one give me a list (I think at this 
> point is primary + backups).
>
> If I let pass some time and I execute the `primaryPartitions` method again I 
> am able to retrieve the partitions that I was expecting to see after the 
> STOPPED event comes. I read here 
> https://cwiki.apache.org/confluence/display/IGNITE/%28Partition+Map%29+Exchange+-+under+the+hood#id-(PartitionMap)Exchange-under
>  the hood-LateAffinityAssignment that it could be a late assignment, that 
> after the cache rebalance the new node needs to bring all the entries to 
> fill-out the cache and after that, the `primaryPartitions` will return 
> something.
> Will be great to know if this actually what is happening.
>
> My question is if there is any kind of event that I should listen so I can be 
> aware that this process (if this is what is happening) already finish. I 
> would like to said, "After you bring this node into the cluster the 
> partitions assigned to that node are the following: XXX, XXX".
>
> Also, I'm aware of the event `EVT_CACHE_REBALANCE_PART_LOADED` but I'm seeing 
> a ton of them and at this point, I would be able to know when the last one 
> arrives and say that are now my primary partitions.
>
> Thanks in advance.



-- 
Best regards,
Ivan Pavlukhin


Re: GridTimeoutProcessor - Timeout has occurred - Too frequent messages

2019-03-14 Thread Павлухин Иван
Hi,

Yes, perhaps there is no rich documentation for the mentioned classes.
On the other hand, they are internal classes which could be changed
at any time. I will try to outline the roles of 2 of the classes.
1. CancellableTask is used internally as a general way to cancel some
action scheduled in the future. GridTimeoutProcessor.schedule returns a
CancellableTask which allows cancelling the scheduled task.
2. GridCommunicationMessageSet is used while processing ordered
messages. GridIoManager supports ordered message delivery, and in this
case timeouts must be involved during processing in case some
previous messages were not received.

Unfortunately I cannot say anything meaningful about
CacheContinuousQueryManager$BackupCleaner.

Giving a good answer to your questions is almost equal to writing
documentation for the aforementioned classes. I think it is much easier
to receive an answer if you have a concrete problem in your use case,
e.g. something does not work or works improperly.

пн, 11 мар. 2019 г. в 12:04, userx :
>
> Hi Ivan,
>
> Thanks for the reply. I totally buy your point that these messages are not
> bad. What I wanted to understand  basically was the role of the following
> GridTimeOut objects
>
> 1) CancelableTask
> 2) GridCommunicationMessageSet
> 3) CacheContinuousQueryManager$BackupCleaner
>
> There is no documentation available in the class for the above three
> classes. So was just trying to understand the role of each of them.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: GridTimeoutProcessor - Timeout has occurred - Too frequent messages

2019-03-11 Thread Павлухин Иван
Hi,

There is nothing wrong with the message you see. GridTimeoutProcessor is
used internally by Ignite for scheduling a task execution after a
delay. The debug message "Timeout has occurred" is logged every time a
scheduling delay has ended. It is a debug message whose purpose is
verifying that an arbitrary task was executed by the timeout processor.

сб, 9 мар. 2019 г. в 08:33, userx :
>
> Hi,
>
> Any thoughts on  the same.
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Why does write behind require to enable write through?

2019-03-08 Thread Павлухин Иван
Hi,

Ignite's CacheConfiguration inherits the setWriteThrough method from the
JCache API [1]. Enabled writeThrough in JCache means that the cache is
backed by some other storage to which writes are propagated along with
writes to the cache. Unfortunately (or not?) JCache uses the
write-through term, while Ignite additionally supports a write-behind
write propagation policy. Perhaps the API is not clear enough, but I
think JCache conformance is the reason here.

[1] 
https://static.javadoc.io/javax.cache/cache-api/1.1.0/javax/cache/configuration/MutableConfiguration.html#setWriteThrough-boolean-
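A configuration sketch of the combination the warning asks for: since write-behind is a flavor of write-through, both flags are enabled together. The cache name, store class, and flush frequency below are illustrative assumptions, not values from the thread.

```java
import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class WriteBehindConfigSketch {
    // Trivial store stub; a real one would talk to a database.
    public static class MyStore extends CacheStoreAdapter<Long, String> {
        @Override public String load(Long key) { return null; }
        @Override public void write(Cache.Entry<? extends Long, ? extends String> e) { /* persist */ }
        @Override public void delete(Object key) { /* remove */ }
    }

    public static CacheConfiguration<Long, String> cacheConfig() {
        CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("docCache");

        // A store must be configured for write-through/write-behind to make sense.
        ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyStore.class));

        ccfg.setWriteThrough(true);               // required by JCache semantics
        ccfg.setWriteBehindEnabled(true);         // makes propagation asynchronous
        ccfg.setWriteBehindFlushFrequency(5_000); // flush every 5 seconds (ms)

        return ccfg;
    }
}
```

With this configuration, writes reach the store asynchronously in batches instead of synchronously on every put, and the warning from the original question goes away.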

вт, 5 мар. 2019 г. в 16:59, relax ken :
>
> Thanks Ilya. I guess conceptually there are many explanations and definitions 
> about those two on Internet which may agree, disagree, or consensus on some 
> point. My question is more about their impact when they are true or false in 
> Ignite.
>
> For example, if it's always the case, why doesn't Ignite just encapsulate 
> this kind assumption, take care it and auto set write through true while 
> write behind is set to true. Why does Ignite give this kind of option and 
> warning? Will there be any difference when write behind is true but write 
> through is not true? try to understand deeper about those options to avoid 
> any unexpected behaviour.
>
> On Tue, Mar 5, 2019 at 1:46 PM Ilya Kasnacheev  
> wrote:
>>
>> Hello!
>>
>> It is because write-behing is a kind of write-through. Like random access 
>> memory is a kind of computer memory.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> вт, 5 мар. 2019 г. в 14:43, relax ken :
>>>
>>> Hi,
>>>
>>> I am new to Ignite. When I enable write behind, I always get a warning 
>>> "Write-behind mode for the cache store also requires 
>>> CacheConfiguration.setWriteThrough(true) property." Why does write behind 
>>> require write through when I am using write behind only?
>>>
>>> Here is my configuration
>>>
>>> CacheConfiguration cconfig = new CacheConfiguration<>();
>>> cconfig.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>>> cconfig.setCacheMode(CacheMode.PARTITIONED);
>>> cconfig.setName(Constants.DocCacheName);
>>> cconfig.setExpiryPolicyFactory(TouchedExpiryPolicy.factoryOf(new 
>>> Duration(TimeUnit.SECONDS, cacheConfig.cacheExpirySecs)));
>>>
>>> cconfig.setWriteBehindEnabled(true);
>>> cconfig.setWriteBehindFlushFrequency(cacheConfig.writeBehindFlushIntervalSec);
>>>  // MS
>>>
>>>
>>> Thanks
>>>
>>> Ken



-- 
Best regards,
Ivan Pavlukhin


Re: Access a cache loaded by DataStreamer with SQL

2019-03-02 Thread Павлухин Иван
Hi Mike,

You can find a simple example of loading data with the data streamer and
querying it with SQL in the following gist [1].

It is possible, but somewhat tricky, to load a table created via DDL
using the data streamer. Perhaps SQL COPY could be handy for you [2]. It
uses the data streamer under the hood.

[1] https://gist.github.com/pavlukhin/0b30671b76abf01b2cc30230a11cc1f7
[2] https://apacheignite-sql.readme.io/docs/copy
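The approach from the gist can be sketched as below: create a cache with SQL-indexed types, bulk-load it with the data streamer, then query it with SQL. The cache and class names are illustrative, and the code assumes a running Ignite node.

```java
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class StreamThenQuery {
    public static class Person {
        @QuerySqlField(index = true)
        public String name;

        public Person(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Register Person as a SQL-queryable type for this cache.
            CacheConfiguration<Long, Person> ccfg =
                new CacheConfiguration<Long, Person>("people")
                    .setIndexedTypes(Long.class, Person.class);

            IgniteCache<Long, Person> cache = ignite.getOrCreateCache(ccfg);

            // Bulk load via the data streamer.
            try (IgniteDataStreamer<Long, Person> streamer = ignite.dataStreamer("people")) {
                for (long i = 0; i < 1_000; i++)
                    streamer.addData(i, new Person("name-" + i));
            } // close() flushes any remaining buffered data

            // Query the streamed data with SQL.
            List<List<?>> rows = cache
                .query(new SqlFieldsQuery("select count(*) from Person"))
                .getAll();

            System.out.println("Row count: " + rows.get(0).get(0));
        }
    }
}
```

Note that the streamer must be closed (or flushed) before querying, since addData buffers entries for batching.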

пт, 1 мар. 2019 г. в 22:48, Mike Needham :
>
> I have looked at the documentation and the code samples and nothing is doing 
> what I am trying to do.  I want to be able to use the datastreamer to load 3 
> or 4 TABLES in a cache for an application that we use.  If I create the 
> tables using a create table syntax how do attach a datastreamer to the 
> different caches if the cache name is PUBLIC for all of them?
>
> On Thu, Feb 28, 2019 at 8:13 AM Ilya Kasnacheev  
> wrote:
>>
>> Hello!
>>
>> I have linked the documentation page, there are also some code examples in 
>> distribution.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> чт, 28 февр. 2019 г. в 17:10, Mike Needham :
>>>
>>> Is there any examples that show the steps to do this correctly?  I stumbled 
>>> upon this but have no idea if it is the best way to do this
>>>
>>> On Thu, Feb 28, 2019 at 6:27 AM Ilya Kasnacheev  
>>> wrote:

 Hello!

 There's no restriction on cache name but setting it up for the first time 
 may be tricky indeed.

 Regards,
 --
 Ilya Kasnacheev


 ср, 27 февр. 2019 г. в 19:48, needbrew99 :
>
> OK, was able to get it working.  Apparently the cache name has to be 
> PUBLIC
> and it will create a table based on the object definition that I have.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>>
>>>
>>> --
>>> Some days it just not worth chewing through the restraints
>
>
>
> --
> Some days it just not worth chewing through the restraints



-- 
Best regards,
Ivan Pavlukhin


Re: same cache cannot update twice in one transaction

2019-02-28 Thread Павлухин Иван
Hi,

MVCC in Ignite is intended to provide transactional consistency
guarantees. I suppose that with an eventually consistent 3rd-party store
it would be impossible to give many guarantees in general. Do you think
that such an eventually consistent store would be widely used? What kind
of guarantees should it provide? Is it easy to use properly?
Currently we do not have answers to these questions. Feedback is
appreciated.

Also I must say that MVCC feature is currently in beta stage and
limitations are listed in documentation [1].

[1] 
https://apacheignite.readme.io/docs/multiversion-concurrency-control#section-other-limitations

чт, 28 февр. 2019 г. в 22:34, xmw45688 :
>
> Hi Ilya,
>
> It'd better if this was mentioned in Ignite Doc.
>
> It seems very limited if MVCC only supports  the Ignite native persistence.
> Yes, supporting MVCC in 3rd party persistence is challenging.  However, do
> we really need MVCC when the data from Cache (where MVCC already enabled) is
> ready to write to a 3rd party persistence store.   I think that an "eventual
> consistence" for writing cached data into a 3rd persistence layer seems
> sufficient when Ignite is used as cache stored, and the data in cache store
> is persistent.
>
> Does Ignite have a plan to support MVCC in cache layer and write the data
> from the cached store into a 3rd party persistence store with some limited
> feature like "eventual consistence".
>
> Can some gurus shed some lights on this subject?
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: What is the correct way of keeping ignite alive in a console app

2019-02-27 Thread Павлухин Иван
Hi,

Your application exits because the Ignite node is started in a
try-with-resources block, so the node is stopped upon leaving the try
block. If you simply write

public static void main(String[] args) {
  Ignition.start(igniteConfiguration);
}

the application will continue running after the main method completes.

ср, 27 февр. 2019 г. в 19:36, PBSLogitek :
>
> Hello
>
> What is the best way to keep my app running after i have initialized my
> ignite instance?
>
>
> public static void main(String[] args) {
> try (Ignite ignite = Ignition.start(igniteConfiguration)) {
>
>
> }
>
> // How to wait here in a correct way to make ignite not exit the
> application
>
> }
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Native memory tracking of an Ignite Java 1.8 Application

2019-02-18 Thread Павлухин Иван
Prasad,

Thank you for sharing results!

пн, 18 февр. 2019 г. в 20:06, Prasad Bhalerao :
>
> Hi,
>
> Thank you for the hint. I just wrote a small program to allocate 1 GB memory 
> and to free the same memory using UNSAFE api. I took the native memory 
> tracking report 3 times, before allocating 1 GB memory, after allocating 1 GB 
> memory and after freeing 1 GB memory.
>
> Here is the snippet of reports. From the result I can conclude that allocated 
> offheap memory using UNSAFE api is listed under "Internal" category of the 
> report.
>
> NOTE:  DirectByteBuffer internally uses unsafe apis.
>
> 1) Before allocating 1 GB memory (off heap).
>
> -  Internal (reserved=9782KB, committed=9782KB)
> (malloc=9718KB #3360)
> (mmap: reserved=64KB, committed=64KB)
>
> 2) After allocating 1 GB memory (off heap).
>
> -  Internal (reserved=1058386KB, committed=1058386KB)
> (malloc=1058322KB #3415)
> (mmap: reserved=64KB, committed=64KB)
>
> 3) After freeing up 1GB memory
>
> -  Internal (reserved=9798KB, committed=9798KB)
> (malloc=9734KB #3299)
> (mmap: reserved=64KB, committed=64KB)
>
> Sample program:
>
> public static void main(String[] args) throws NoSuchFieldException, 
> IllegalAccessException, InterruptedException {
>
>   Field f = Unsafe.class.getDeclaredField("theUnsafe");
>   f.setAccessible(true);
>   final Unsafe unsafe = (Unsafe) f.get(null);
>
>   System.out.println("1)Now going to sleep");
>   Thread.sleep(6);
>   System.out.println("Now allocating 1GB off-heap.");
>   long address = unsafe.allocateMemory(1024 * 1024 * 1024);
>   System.out.println("Allocated 1GB off-heap.");
>   System.out.println("2)Now going to sleep");
>   Thread.sleep(6);
>   unsafe.freeMemory(address);
>   System.out.println("3)Now going to sleep");
>   Thread.sleep(6);
>   System.out.println("Exited.");
> }
>
>
>
> Thanks.,
> Prasad
>
> On Mon, Feb 18, 2019 at 7:56 PM Павлухин Иван  wrote:
>>
>> Prasad,
>>
>> Someone has already posted a snippet [1].
>>
>> [1] https://gist.github.com/prasanthj/48e7063cac88eb396bc9961fb3149b58
>>
>> пн, 18 февр. 2019 г. в 17:23, Павлухин Иван :
>> >
>> > Hi Prasad,
>> >
>> > As far as I remember offheap memory allocated with use of Unsafe is
>> > not reflected in Native Memory Tracking report. You are right that
>> > documentation is not verbose about reported categories [1]. It might
>> > be the case that memory allocated by ByteBuffer.allocateDirect falls
>> > into "internal" category. You can check it out by writing an example
>> > application using ByteBuffer.allocateDirect.
>> >
>> > [1] 
>> > https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr022.html#BABHIFJC
>> >
>> > пн, 18 февр. 2019 г. в 16:04, Dmitriy Pavlov :
>> > >
>> > > Hi.
>> > >
>> > > Please don't use posting to both dev/user lists simultaneously.
>> > >
>> > > If your question is not related to any contribution you are planning to 
>> > > do, then the user list is a better place to ask, because a possible 
>> > > answer may be interesting to all Ignite users.
>> > >
>> > > If you are going to fix any issue and would like to discuss a proposal, 
>> > > please use dev list.
>> > >
>> > > Sincerely,
>> > > Dmitriy Pavlov
>> > >
>> > > пн, 18 февр. 2019 г. в 16:00, Prasad Bhalerao 
>> > > :
>> > >>
>> > >> Hi,
>> > >>
>> > >> I have set the off heap size to 500 MB and max heap size to 512 MB.
>> > >>
>> > >> My process is taking around 1.7 GB on Windows 10 as per the task 
>> > >> manager. So I decided to track the memory distribution using jcmd to 
>> > >> find out if there are any memory leaks in non-heap space.
>> > >>
>> > >> After pushing the data to cache I took the native memory summary using 
>> > >> jcmd tool.
>> > >>
>> > >> I am trying to understand in which of the following section, allocated 
>> > >> off heap memory goes?
>> > >>
>> > >> Does off heap come under &

Re: Native memory tracking of an Ignite Java 1.8 Application

2019-02-18 Thread Павлухин Иван
Prasad,

Someone has already posted a snippet [1].

[1] https://gist.github.com/prasanthj/48e7063cac88eb396bc9961fb3149b58

пн, 18 февр. 2019 г. в 17:23, Павлухин Иван :
>
> Hi Prasad,
>
> As far as I remember offheap memory allocated with use of Unsafe is
> not reflected in Native Memory Tracking report. You are right that
> documentation is not verbose about reported categories [1]. It might
> be the case that memory allocated by ByteBuffer.allocateDirect falls
> into "internal" category. You can check it out by writing an example
> application using ByteBuffer.allocateDirect.
>
> [1] 
> https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr022.html#BABHIFJC
>
> пн, 18 февр. 2019 г. в 16:04, Dmitriy Pavlov :
> >
> > Hi.
> >
> > Please don't use posting to both dev/user lists simultaneously.
> >
> > If your question is not related to any contribution you are planning to do, 
> > then the user list is a better place to ask, because a possible answer may 
> > be interesting to all Ignite users.
> >
> > If you are going to fix any issue and would like to discuss a proposal, 
> > please use dev list.
> >
> > Sincerely,
> > Dmitriy Pavlov
> >
> > пн, 18 февр. 2019 г. в 16:00, Prasad Bhalerao 
> > :
> >>
> >> Hi,
> >>
> >> I have set the off heap size to 500 MB and max heap size to 512 MB.
> >>
> >> My process is taking around 1.7 GB on Windows 10 as per the task manager. 
> >> So I decided to track the memory distribution using jcmd to find out if 
> >> there are any memory leaks in non-heap space.
> >>
> >> After pushing the data to cache I took the native memory summary using 
> >> jcmd tool.
> >>
> >> I am trying to understand in which of the following section, allocated off 
> >> heap memory goes?
> >>
> >> Does off heap come under "Internal" category?  Can any ignite memory 
> >> expert help me with this?
> >>
> >> Oracle documentation  does not clearly talks about it. I am also attaching 
> >> the native memory detail file.
> >>
> >> C:\Java64\jdk1.8.0_144\bin>jcmd.exe 16956 VM.native_memory summary
> >>
> >> 16956:
> >>
> >>  Total: reserved=3513712KB, committed=2249108KB
> >>
> >> - Java Heap (reserved=524288KB, committed=524288KB)
> >>
> >> (mmap: reserved=524288KB, committed=524288KB)
> >>
> >>
> >>
> >> - Class (reserved=1127107KB, committed=86507KB)
> >>
> >> (classes #13259)
> >>
> >> (malloc=10947KB #17120)
> >>
> >> (mmap: reserved=1116160KB, committed=75560KB)
> >>
> >>
> >>
> >> -Thread (reserved=89748KB, committed=89748KB)
> >>
> >> (thread #88)
> >>
> >> (stack: reserved=89088KB, committed=89088KB)
> >>
> >> (malloc=270KB #454)
> >>
> >> (arena=391KB #175)
> >>
> >>
> >>
> >> -  Code (reserved=254854KB, committed=30930KB)
> >>
> >> (malloc=5254KB #8013)
> >>
> >> (mmap: reserved=249600KB, committed=25676KB)
> >>
> >>
> >>
> >> -GC (reserved=29656KB, committed=29576KB)
> >>
> >> (malloc=10392KB #385)
> >>
> >> (mmap: reserved=19264KB, committed=19184KB)
> >>
> >>
> >>
> >> -  Compiler (reserved=188KB, committed=188KB)
> >>
> >> (malloc=57KB #243)
> >>
> >> (arena=131KB #3)
> >>
> >>
> >>
> >> -  Internal (reserved=1464736KB, committed=1464736KB)
> >>
> >> (malloc=1464672KB #40848)
> >>
> >> (mmap: reserved=64KB, committed=64KB)
> >>
> >>
> >>
> >> -Symbol (reserved=18973KB, committed=18973KB)
> >>
> >> (malloc=15353KB #152350)
> >>
> >> (arena=3620KB #1)
> >>
> >>
> >>
> >> -Native Memory Tracking (reserved=3450KB, committed=3450KB)
> >>
> >> (malloc=14KB #167)
> >>
> >> (tracking overhead=3436KB)
> >>
> >>
> >>
> >> -   Arena Chunk (reserved=712KB, committed=712KB)
> >>
> >> (malloc=712KB)
>
>
>
> --
> Best regards,
> Ivan Pavlukhin



-- 
Best regards,
Ivan Pavlukhin


Re: Native memory tracking of an Ignite Java 1.8 Application

2019-02-18 Thread Павлухин Иван
Hi Prasad,

As far as I remember, off-heap memory allocated with Unsafe is
not reflected in the Native Memory Tracking report. You are right that
the documentation is not verbose about the reported categories [1]. It
might be the case that memory allocated by ByteBuffer.allocateDirect
falls into the "internal" category. You can check it out by writing an
example application using ByteBuffer.allocateDirect.

[1] 
https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr022.html#BABHIFJC
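A minimal way to run the suggested check yourself: allocate direct memory from plain Java and compare jcmd NMT snapshots taken before and after. The class name is illustrative; run the JVM with -XX:NativeMemoryTracking=summary and add a Thread.sleep if you need time to attach jcmd.

```java
import java.nio.ByteBuffer;

public class DirectAllocationDemo {
    public static void main(String[] args) {
        // 64 MB of direct (off-heap) memory. With NMT enabled, this allocation
        // shows up in the jcmd report outside the "Java Heap" section.
        ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024);
        System.out.println("direct=" + buf.isDirect() + ", capacity=" + buf.capacity());
    }
}
```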

пн, 18 февр. 2019 г. в 16:04, Dmitriy Pavlov :
>
> Hi.
>
> Please don't use posting to both dev/user lists simultaneously.
>
> If your question is not related to any contribution you are planning to do, 
> then the user list is a better place to ask, because a possible answer may be 
> interesting to all Ignite users.
>
> If you are going to fix any issue and would like to discuss a proposal, 
> please use dev list.
>
> Sincerely,
> Dmitriy Pavlov
>
> пн, 18 февр. 2019 г. в 16:00, Prasad Bhalerao :
>>
>> Hi,
>>
>> I have set the off heap size to 500 MB and max heap size to 512 MB.
>>
>> My process is taking around 1.7 GB on Windows 10 as per the task manager. So 
>> I decided to track the memory distribution using jcmd to find out if there 
>> are any memory leaks in non-heap space.
>>
>> After pushing the data to cache I took the native memory summary using jcmd 
>> tool.
>>
>> I am trying to understand in which of the following section, allocated off 
>> heap memory goes?
>>
>> Does off heap come under "Internal" category?  Can any ignite memory expert 
>> help me with this?
>>
>> Oracle documentation  does not clearly talks about it. I am also attaching 
>> the native memory detail file.
>>
>> C:\Java64\jdk1.8.0_144\bin>jcmd.exe 16956 VM.native_memory summary
>>
>> 16956:
>>
>>  Total: reserved=3513712KB, committed=2249108KB
>>
>> - Java Heap (reserved=524288KB, committed=524288KB)
>>
>> (mmap: reserved=524288KB, committed=524288KB)
>>
>>
>>
>> - Class (reserved=1127107KB, committed=86507KB)
>>
>> (classes #13259)
>>
>> (malloc=10947KB #17120)
>>
>> (mmap: reserved=1116160KB, committed=75560KB)
>>
>>
>>
>> -Thread (reserved=89748KB, committed=89748KB)
>>
>> (thread #88)
>>
>> (stack: reserved=89088KB, committed=89088KB)
>>
>> (malloc=270KB #454)
>>
>> (arena=391KB #175)
>>
>>
>>
>> -  Code (reserved=254854KB, committed=30930KB)
>>
>> (malloc=5254KB #8013)
>>
>> (mmap: reserved=249600KB, committed=25676KB)
>>
>>
>>
>> -GC (reserved=29656KB, committed=29576KB)
>>
>> (malloc=10392KB #385)
>>
>> (mmap: reserved=19264KB, committed=19184KB)
>>
>>
>>
>> -  Compiler (reserved=188KB, committed=188KB)
>>
>> (malloc=57KB #243)
>>
>> (arena=131KB #3)
>>
>>
>>
>> -  Internal (reserved=1464736KB, committed=1464736KB)
>>
>> (malloc=1464672KB #40848)
>>
>> (mmap: reserved=64KB, committed=64KB)
>>
>>
>>
>> -Symbol (reserved=18973KB, committed=18973KB)
>>
>> (malloc=15353KB #152350)
>>
>> (arena=3620KB #1)
>>
>>
>>
>> -Native Memory Tracking (reserved=3450KB, committed=3450KB)
>>
>> (malloc=14KB #167)
>>
>> (tracking overhead=3436KB)
>>
>>
>>
>> -   Arena Chunk (reserved=712KB, committed=712KB)
>>
>> (malloc=712KB)



-- 
Best regards,
Ivan Pavlukhin


Re: I have a question about Java scan ignite cache

2019-02-14 Thread Павлухин Иван
Hi,

But what result do you observe in your experiment?

ср, 13 февр. 2019 г. в 05:16, chengpei :
>
> import org.apache.ignite.Ignite;
> import org.apache.ignite.IgniteCache;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.cache.CacheAtomicityMode;
> import org.apache.ignite.cache.CacheMode;
> import org.apache.ignite.cache.CacheWriteSynchronizationMode;
> import org.apache.ignite.cache.query.QueryCursor;
> import org.apache.ignite.cache.query.ScanQuery;
> import org.apache.ignite.configuration.CacheConfiguration;
> import org.apache.ignite.configuration.IgniteConfiguration;
> import org.apache.ignite.lang.IgniteBiPredicate;
>
> import javax.cache.Cache;
>
> public class QueryTest {
>
> public static Ignite ignite;
>
> public static IgniteCache<String, Person> cache;
>
> public static void main(String[] args) {
> try {
> init();
> //insertData();
> queryData();
> } catch (Exception e) {
> throw e;
> } finally {
> ignite.close();
> }
> }
>
> private static void queryData() {
> System.out.println("query data start");
> QueryCursor<Cache.Entry<String, Person>> query = cache.query(new
> ScanQuery<>(new IgniteBiPredicate<String, Person>() {
> @Override
> public boolean apply(String s, Person person) {
> System.out.println(s + " : " + person);
> return person.getAge() > 22;
> }
> }));
> for (Cache.Entry<String, Person> entry : query) {
> System.out.println("queryData() > key:" + entry.getKey() +
> ", value:" + entry.getValue());
> }
> System.out.println("query data end");
> }
>
> private static void insertData() {
> System.out.println("insert data start");
> Person p1 = new Person("Jack", 20);
> Person p2 = new Person("Tom", 21);
> Person p3 = new Person("Mike", 22);
> Person p4 = new Person("Luci", 23);
> Person p5 = new Person("Debug", 24);
> cache.put(p1.getName(), p1);
> cache.put(p2.getName(), p2);
> cache.put(p3.getName(), p3);
> cache.put(p4.getName(), p4);
> cache.put(p5.getName(), p5);
> System.out.println("insert data end");
> }
>
> private static void init() {
> System.out.println("init ignite cache start");
> IgniteConfiguration cfg = new IgniteConfiguration();
> cfg.setClientMode(true);
> cfg.setPeerClassLoadingEnabled(true);
>
> CacheConfiguration<String, Person> cardCacheCfg = new CacheConfiguration<>();
> cardCacheCfg.setName("Person_Cache");
> cardCacheCfg.setCacheMode(CacheMode.PARTITIONED);
>
> cardCacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
> cardCacheCfg.setBackups(2);
> cardCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>
> cfg.setCacheConfiguration(cardCacheCfg);
> ignite = Ignition.start(cfg);
> cache =  ignite.getOrCreateCache("Person_Cache");
> System.out.println("init ignite cache end");
> }
>
> }
> class Person {
>
> private String name;
>
> private int age;
>
> public String getName() {
> return name;
> }
>
> public void setName(String name) {
> this.name = name;
> }
>
> public int getAge() {
> return age;
> }
>
> public void setAge(int age) {
> this.age = age;
> }
>
> public Person(String name, int age) {
> this.name = name;
> this.age = age;
> }
>
> @Override
> public String toString() {
> return "Person{" +
> "name='" + name + '\'' +
> ", age=" + age +
> '}';
> }
> }
>
> 
>
> I want the result to be
> key:Luci, value:Person{name='Luci', age=23}
> key:Debug, value:Person{name='Debug', age=24}
>
> And do I have to put the lib in the Ignite lib folder?
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: DataStreamer not loading complete data.

2019-02-12 Thread Павлухин Иван
Hi,

Have you called "Close" after the last item of data was fed to your streamer?
If that is not the case, could you please provide your code to check?
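For reference, closing the streamer is what flushes the remaining buffered entries. A minimal Java sketch (the original question concerns the .NET client, whose API mirrors this; the cache name and data below are invented, not taken from the thread):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerCloseSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("myCache");
            // try-with-resources calls close(), which flushes any entries
            // still sitting in the streamer's internal buffers.
            try (IgniteDataStreamer<Integer, String> streamer =
                     ignite.dataStreamer("myCache")) {
                for (int i = 0; i < 1_000_000; i++)
                    streamer.addData(i, "value-" + i);
            } // without close()/flush() the tail of the data may never be sent
        }
    }
}
```

If the streamer is closed only when the JVM exits, the last buffered batch can be lost, which matches the "990,000 of 1,000,000" symptom described below.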

вт, 12 февр. 2019 г. в 16:10, Hemasundara Rao <
hemasundara@travelcentrictechnology.com>:

> Hi,
> While using the DataStreamer (.NET) to load data, it is not loading the
> complete data.
> For example, if I am loading 1,000,000 records into Ignite, only 990,000 or
> fewer get loaded.
> Am I missing any configuration here?
> It does not give any exception and loads less data.
> Please advise if I am missing something required on either the client or
> server side for the DataStreamer to work properly.
>
> Thanks and regards,
> Hemasundara Rao Pottangi  | Senior Project Leader
>
> [image: HotelHub-logo]
> HotelHub LLP
> Phone: +91 80 6741 8700
> Cell: +91 99 4807 7054
> Email: hemasundara@hotelhub.com
> Website: www.hotelhub.com 
> --
>
> HotelHub LLP is a service provider working on behalf of Travel Centric
> Technology Ltd, a company registered in the United Kingdom.
> DISCLAIMER: This email message and all attachments are confidential and
> may contain information that is Privileged, Confidential or exempt from
> disclosure under applicable law. If you are not the intended recipient, you
> are notified that any dissemination, distribution or copying of this email
> is strictly prohibited. If you have received this email in error, please
> notify us immediately by return email to
> noti...@travelcentrictechnology.com and destroy the original message.
> Opinions, conclusions and other information in this message that do not
> relate to the official business of Travel Centric Technology Ltd or
> HotelHub LLP, shall be understood to be neither given nor endorsed by
> either company.
>
>

-- 
Best regards,
Ivan Pavlukhin


Re: Unable to form connection between ignite(v 2.7) node inside kubernetes-1.11.3

2019-01-29 Thread Павлухин Иван
Hi Lalit,

Usually topics related to some sort of contribution are discussed on
dev list. I added user list to recipients list. You will get an answer
for usability questions on user list quicker.

вт, 29 янв. 2019 г. в 00:00, Lalit Jadhav :
>
> While starting one node, it comes up with a time delay of around 50-60 sec, but
> when we scale the deployment to 2-3, those nodes are unable to connect to
> the 1st node.
>
> Also getting below error on 2nd and 3rd node.
>
> ERROR TcpDiscoverySpi:586 - Failed to get registered addresses from IP
> > finder on start (retrying every 2000ms; change 'reconnectDelay' to
> > configure the frequency of retries). class
> > org.apache.ignite.spi.IgniteSpiException: Failed to retrieve Ignite pods IP
> > addresses. at
> > org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:172)
> > at
> > org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:1900)
> > at
> > org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.resolvedAddresses(TcpDiscoverySpi.java:1848)
> > at
> > org.apache.ignite.spi.discovery.tcp.ServerImpl.sendJoinRequestMessage(ServerImpl.java:1049)
> > at
> > org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:910)
> > at
> > org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:391)
> > at
> > org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2020)
> > at
> > org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
> > at
> > org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:939)
> > at
> > org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1682)
> > at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1066) at
> > org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038)
> > at
> > org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1730)
> > at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1158) at
> > org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:678) at
> > org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:618) at
> > org.apache.ignite.Ignition.getOrStart(Ignition.java:415) at
> > com.cloud.ignite.server.IgniteServer.startIgnite(IgniteServer.java:57) at
> > com.cloud.ignite.server.IgniteServer.(IgniteServer.java:39) at
> > com.cloud.ignite.server.IgniteServer.getInstance(IgniteServer.java:107) at
> > com.cloud.ignite.server.IgniteServer.main(IgniteServer.java:133) Caused by:
> > java.net.ConnectException: Connection refused (Connection refused) at
> > java.net.PlainSocketImpl.socketConnect(Native Method) at
> > java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
> > at
> > java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
> > at
> > java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
> > at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at
> > java.net.Socket.connect(Socket.java:589) at
> > sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:673) at
> > sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:173) at
> > sun.net.NetworkClient.doConnect(NetworkClient.java:180) at
> > sun.net.www.http.HttpClient.openServer(HttpClient.java:463) at
> > sun.net.www.http.HttpClient.openServer(HttpClient.java:558) at
> > sun.net.www.protocol.https.HttpsClient.(HttpsClient.java:264) at
> > sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:367) at
> > sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
> > at
> > sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156)
> > at
> > sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050)
> > at
> > sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
> > at
> > sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1564)
> > at
> > sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
> > at
> > sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:263)
> > at
> > org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:153)
>
>
>
>
> --
> Thanks and Regards,
> Lalit Jadhav.



-- 
Best regards,
Ivan Pavlukhin


Re: Cache updates slow on Linux Vs Windows

2019-01-18 Thread Павлухин Иван
I suppose that Windows is faster in this particular case. Am I wrong?

пн, 14 янв. 2019 г. в 18:00, ilya.kasnacheev :
>
> Hello!
>
> Did you figure out anything? I went through your log but did not have any
> exact ideas. Is it possible that Windows node is slowed down by very active
> I/O?
>
> Regards,
>
>
>



-- 
Best regards,
Ivan Pavlukhin


Re: How to dump thread stacks in Ignite docker container

2019-01-14 Thread Павлухин Иван
Hi Justin,

One way you can do it:
1. Attach to the standard output of the default process started in the Ignite
container.
2. Find the pid of the java process in the container (run ps in the container).
3. Run "kill -3 ${java_pid}" in the container.
4. Observe the thread dump in the container output (mentioned in 1).
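The steps above can be sketched as shell commands; this assumes the container is named "ignite" (the name and the ps/awk filter are illustrative, substitute your own):

```shell
docker logs -f ignite &                 # (1) watch the container's stdout
# (2) find the pid of the java process inside the container
JPID=$(docker exec ignite sh -c "ps | awk '/java/ {print \$1; exit}'")
docker exec ignite kill -3 "$JPID"      # (3) SIGQUIT triggers a thread dump
# (4) the thread dump shows up in the 'docker logs' output above
```

This avoids needing jstack inside the container, since the HotSpot JVM itself prints the dump on SIGQUIT.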

пт, 11 янв. 2019 г. в 11:52, Justin Ji :
>
> Hi Igniters -
>
> I tried to dump the thread stacks, but I don't know how to dump the
> thread stacks from a docker container since it only contains a simplified
> JRE and does not have the jstack tool. I also googled a lot of information but
> found that there is no suitable method.
>
>
>



-- 
Best regards,
Ivan Pavlukhin


Re: Getting javax.cache.CacheException after upgrading to Ignite 2.7

2019-01-09 Thread Павлухин Иван
Hi Prasad,

> javax.cache.CacheException: Only pessimistic repeatable read transactions are 
> supported at the moment.
The exception you mention should happen only for caches with the
TRANSACTIONAL_SNAPSHOT atomicity mode configured. Have you configured
TRANSACTIONAL_SNAPSHOT atomicity for any cache? As Denis mentioned,
there are a number of bugs related to TRANSACTIONAL_SNAPSHOT, e.g. [1].

[1] https://issues.apache.org/jira/browse/IGNITE-10520
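For comparison, a sketch of the only transaction mode a TRANSACTIONAL_SNAPSHOT cache currently accepts (the cache, key, and value names are invented for illustration):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class MvccTxSketch {
    static void update(Ignite ignite, IgniteCache<String, String> cache) {
        // TRANSACTIONAL_SNAPSHOT caches reject every combination except
        // PESSIMISTIC + REPEATABLE_READ, hence the explicit arguments.
        try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.PESSIMISTIC,
                TransactionIsolation.REPEATABLE_READ)) {
            cache.put("key", "value");
            tx.commit();
        }
    }
}
```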

вс, 6 янв. 2019 г. в 20:03, Denis Magda :
>
> Hello,
>
> Ignite versions prior to 2.7 never supported transactions for SQL queries. 
> You were enlisting SQL in transactions at your own risk. Ignite version 2.7 
> introduced true transactional support for SQL based on MVCC. Presently it's 
> in beta with GA to be available around Q2-Q3 this year. The community is 
> working on optimizations.
>
> Please refer to this docs for more details:
> https://apacheignite.readme.io/docs/multiversion-concurrency-control
> https://apacheignite-sql.readme.io/docs/transactions
> https://apacheignite-sql.readme.io/docs/multiversion-concurrency-control
>
> --
> Denis
>
> On Sat, Jan 5, 2019 at 7:48 PM Prasad Bhalerao  
> wrote:
>>
>> Can someone please explain if anything has changed in Ignite 2.7.
>>
>> Started getting this exception after upgrading to 2.7.
>>
>>
>> -- Forwarded message -
>> From: Prasad Bhalerao 
>> Date: Fri 4 Jan, 2019, 8:41 PM
>> Subject: Re: Getting javax.cache.CacheException after upgrading to Ignite
>> 2.7
>> To: 
>>
>>
>> Can someone please help me with this?
>>
>> On Thu 3 Jan, 2019, 7:15 PM Prasad Bhalerao > wrote:
>>
>> > Hi
>> >
>> > After upgrading to version 2.7 I am getting the following exception. I am
>> > executing a SELECT sql inside an optimistic transaction with serializable
>> > isolation level.
>> >
>> > 1) Has anything changed from version 2.6 to 2.7? This worked fine prior to
>> > version 2.7.
>> >
>> > After changing it to Pessimistic and isolation level to REPEATABLE_READ it
>> > works fine.
>> >
>> >
>> >
>> >
>> >
>> >
>> > *javax.cache.CacheException: Only pessimistic repeatable read transactions
>> > are supported at the moment.at
>> > org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:697)at
>> > org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:636)at
>> > org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:388)at
>> > com.qualys.agms.grid.dao.AbstractDataGridDAO.getFieldResultsByCriteria(AbstractDataGridDAO.java:85)*
>> >
>> > Thanks,
>> > Prasad
>> >



-- 
Best regards,
Ivan Pavlukhin


Re: How to define a cache template?

2018-12-29 Thread Павлухин Иван
Hi,

Perhaps following docs section can help you [1].

[1] https://apacheignite.readme.io/docs/cache-template
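A hedged sketch of such a template (the name "myTemplate" and the values are examples): a CacheConfiguration whose name ends with an asterisk is registered as a template rather than started as an actual cache, and it can then be referenced when creating caches:

```xml
<property name="cacheConfiguration">
    <bean class="org.apache.ignite.configuration.CacheConfiguration">
        <!-- The trailing '*' marks this configuration as a template. -->
        <property name="name" value="myTemplate*"/>
        <property name="cacheMode" value="PARTITIONED"/>
        <property name="backups" value="1"/>
    </bean>
</property>
```

Tables created via SQL with `CREATE TABLE ... WITH "template=myTemplate"` would then inherit these settings, which makes them effectively global.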

сб, 29 дек. 2018 г. в 04:56, yangjiajun <1371549...@qq.com>:
>
> Hello!
>
> I want to make some cache settings global.It means I need to define a cache
> template and then use it,right?But I did not find any docs related to this.
>
>
>



-- 
Best regards,
Ivan Pavlukhin


Re: Ignite.close method blocked indefinitely

2018-12-24 Thread Павлухин Иван
Hi userx,

How do you start your client? You use the term IgniteClient, but it looks
like you are using an Ignite instance running in client mode. By
IgniteClient the so-called "thin client" is meant (the java interface for
the thin client is named "IgniteClient").

From the thread dump I can see that DataStreamerImpl.closeEx waits for
a write lock while read lock is held by
DataStreamerImpl.acquireRemapSemaphore which is waiting for a permit
from "remap semaphore". Actually, looks like a bug. Could you provide
a reproducer?

As a workaround I can suggest configuring IgniteDataStreamer.timeout.
Another option is interrupting a thread which supplies data to a
streamer.
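A sketch of the first workaround, assuming an `ignite` instance and a cache named "myCache" (both names are invented):

```java
import java.util.concurrent.TimeUnit;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

public class StreamerTimeoutSketch {
    static IgniteDataStreamer<Integer, String> newStreamer(Ignite ignite) {
        IgniteDataStreamer<Integer, String> streamer =
            ignite.dataStreamer("myCache");
        // With a timeout set, operations such as addData()/flush()/close()
        // fail with a timeout exception instead of blocking indefinitely.
        streamer.timeout(TimeUnit.MINUTES.toMillis(5));
        return streamer;
    }
}
```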

пн, 24 дек. 2018 г. в 18:09, userx :
>
> Hi,
>
> I am trying to write some data to IgniteCache (PersistentMode) from a
> Igniteclient to an IgniteServer. My IgniteClient is a simple java program
> which is serving some requests. As a matter of fact, the IgniteClient is
> instantiated lazily from a java program ( a server serving requests),
> connects to the cluster and stays there.
> It starts a DataStreamer and, for argument's sake, I have given a timeout of
> 5 minutes to stream the data. If the Streamer is not able to do so (in case
> of huge data), I purposely call ignite.close which should kill all the
> threads initiated by IgniteClient and make the DataStreamer entries Garbage
> so that GC can take care of the same and clean my java program heap. But to
> my surprise, Ignite.close is in a TIMED_WAITING stage  and the Ignite
> related threads started by IgniteClient do not get killed. Here is the
> snapshot of thread dump.
>
> "TCP Worker Pool-thread-1" #190 prio=5 os_prio=0 tid=0x7f4d70001000
> nid=0x40d3 waiting on condition [0x7f4d9dda3000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at
> org.apache.ignite.internal.util.GridSpinReadWriteLock.writeLock(GridSpinReadWriteLock.java:206)
> at
> org.apache.ignite.internal.util.GridSpinBusyLock.block(GridSpinBusyLock.java:76)
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.closeEx(DataStreamerImpl.java:1222)
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.closeEx(DataStreamerImpl.java:1207)
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.onKernalStop(DataStreamProcessor.java:155)
> at 
> org.apache.ignite.internal.IgniteKernal.stop0(IgniteKernal.java:2101)
> at 
> org.apache.ignite.internal.IgniteKernal.stop(IgniteKernal.java:2049)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.stop0(IgnitionEx.java:2588)
> - locked <0x0004c3992740> (a
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.stop(IgnitionEx.java:2551)
> at org.apache.ignite.internal.IgnitionEx.stop(IgnitionEx.java:372)
> at org.apache.ignite.Ignition.stop(Ignition.java:229)
> at 
> org.apache.ignite.internal.IgniteKernal.close(IgniteKernal.java:3370)
> at
> com.XXX.datagrid.DataGridClient.stopIgniteClient(DataGridClient.java:114)
> at com.XXX.datagrid.DataGridClient.writeAll(DataGridClient.java:208)
> at com.XXX.datagrid.DataGridClient.writeAll(DataGridClient.java:215)
> at 
> com.XXX.datagrid.DataGridClient.writeToGrid(DataGridClient.java:146)
> at
> com.XXX.calculator.components.PropertyMeasures.performIgnitePersistence(PropertyMeasures.java:1198)
> at
> com.XXX.calculator.components.PropertyMeasures.execute(PropertyMeasures.java:251)
> at
> com.XXX.indexfactory.pipeline.BusinessPipeline.execute(BusinessPipeline.java:225)
> at
> com.XXX.indexfactory.pipeline.AlcyoneThread.runCalculation(AlcyoneThread.java:406)
> at
> com.XXX.indexfactory.pipeline.AlcyoneThread.process(AlcyoneThread.java:160)
> at
> com.XXX.calculator.XXXCalculator.executeWithResponseData(XXXCalculator.java:101)
> at
> com.XXX.calculator.XXXCalculator.executeWithResponse(XXXCalculator.java:66)
> at
> com.XXX.calculator.soa.server.CalculationRequestHandler.processCalculationRequest(CalculationRequestHandler.java:171)
> at
> com.XXX.calculator.soa.server.CalculationRequestHandler.processRequest(CalculationRequestHandler.java:89)
> at 
> com.XXX.alcyone.tcp.TCPSocketDispatcher.run(TCPSocketDispatcher.java:62)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>
>
> Is there a way I can force-kill the IgniteClient without killing the Java
> program from which the IgniteClient was initiated? Complete details of the
> thread dump follow.
>
>
>
> 2018-12-23 06:07:17
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.172-b11 mixed mode):
>

Re: CAP Theorem (CP? or AP?)

2018-12-24 Thread Павлухин Иван
Hi Jose,

First of all, you refer to a slide about Data Center Replication, a
commercial feature of GridGain. Ignite does not provide such a feature.
Also, SQL and the Cache API can behave differently.

You can check how the Cache API shows itself in your experiments.
CacheAtomicityMode and PartitionLossPolicy (cache configuration
options) can change the behavior with respect to consistency.
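A sketch of those two knobs in a cache configuration (the cache name and chosen values are illustrative, not a recommendation from this thread):

```java
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.PartitionLossPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class ConsistencyCacheConfigSketch {
    static CacheConfiguration<String, Object> cacheConfig() {
        CacheConfiguration<String, Object> cfg =
            new CacheConfiguration<>("testCache");
        cfg.setBackups(1);
        cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
        // Fail operations on lost partitions instead of silently serving
        // partial data, which makes CP-style behavior observable.
        cfg.setPartitionLossPolicy(PartitionLossPolicy.READ_WRITE_SAFE);
        return cfg;
    }
}
```

With a safe loss policy, the final SELECT COUNT(*) in the experiment below would fail rather than return 3,444 rows.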

пн, 24 дек. 2018 г. в 09:55, joseheitor :
>
> In this GridGain presentation:
>
> https://www.youtube.com/watch?v=u8BFLDfOdy8=1806s
> 
>
> Valentin Kulichenko explains the CAP theorem and states that Apache Ignite
> is designed to favour Strong-Consistency (CP) over High-Availability (AP).
>
> However, in my test case, my system appears to be behaving as an AP system.
> Here is my setup:
>
> 4 partitioned nodes in 2 availability-zones [AZa-1, AZa-2] [AZb-3, AZb-4],
> configured as described in this post:
>
> http://apache-ignite-users.70518.x6.nabble.com/RESOLVED-Cluster-High-Availability-tp25740.html
> 
>
> With 7,000 records loaded into a table in the cluster with JDBC Thin client:
>
> 1. [OK] I can connect to any node and verify that there are 7,000 records
> with a SELECT COUNT(*)
>
> 2. [OK] If I kill all nodes in AZ-a [AZa-1, AZa-2], and connect to one of
> the remaining online nodes in AZ-b, I can still verify that there are 7,000
> records with a SELECT COUNT(*)
>
> 3. [?] I then kill one of the remaining two nodes in AZ-b and connect to the
> single remaining node. Now a SELECT COUNT(*) returns a value of 3,444
> records.
>
> This seems to illustrate that the partitioning and backup configuration is
> working as intended. But if Ignite is strongly-consistent (CP), shouldn't
> the final query fail rather than return an inaccurate result (AP)?
>
> Or am I missing some crucial configuration element(s)?
>
>
>



-- 
Best regards,
Ivan Pavlukhin


Re: Do we require to set MaxDirectMemorySize JVM parameter?

2018-12-24 Thread Павлухин Иван
Hi summasumma,

> Means, i should have minimum 16 Gb of RAM (8 dataregion+ 8 directmem) for
> Ignite to run properly i guess.

Not quite. Actually, I am not aware that Ignite requires some special
tuning of MaxDirectMemorySize. If direct memory causes OOME then the
exception message usually points it out (e.g. "Direct buffer memory").
Check if it is your case.

Indeed you should be careful when configuring DataRegion.maxSize because
OOME is a real problem and you will get it if you put more data into
Ignite than the specified limit. One should carefully plan how much data
is going to be stored in Ignite when in-memory mode is used. Also,
it is possible to use Ignite native persistence or configure swap to
overcome OOME [1]. The page about capacity planning might be useful
here as well [2].

[1] https://apacheignite.readme.io/docs/durable-memory
[2] https://apacheignite.readme.io/docs/capacity-planning
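For reference, a data region limit is configured roughly like this in the Spring XML (the region name and the 8 GB size are examples matching the thread, not prescribed values):

```xml
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <property name="name" value="Default_Region"/>
                <!-- 8 GB cap for this off-heap data region. -->
                <property name="maxSize" value="#{8L * 1024 * 1024 * 1024}"/>
            </bean>
        </property>
    </bean>
</property>
```

This off-heap cap is independent of the JVM's -XX:MaxDirectMemorySize and of -Xmx, so all three budgets must fit into physical RAM together.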

пн, 24 дек. 2018 г. в 09:53, summasumma :
>
> Thanks Ivan.
>
> This means having both configurations as follows:
> xml --> DataRegion maxSize = 8g
> jvmoption --> "-XX:MaxDirectMemorySize=8g"
>
> Means, i should have minimum 16 Gb of RAM (8 dataregion+ 8 directmem) for
> Ignite to run properly i guess.
>
> In my current setup I have 16GB of total RAM and have given 12GB as the
> DataRegion maxSize and the jvmoption '-XX:MaxDirectMemorySize=8g'. And this is
> crashing Ignite with OOME after a while (though not immediately) when I try
> to do performance testing of the Update operation. So this means either I
> should increase the RAM or decrease XX:MaxDirectMemorySize to 4g?
>
> Please clarify
>
> Thanks
> ...summa
>
>
>



-- 
Best regards,
Ivan Pavlukhin


Re: Do we require to set MaxDirectMemorySize JVM parameter?

2018-12-23 Thread Павлухин Иван
Hi summasumma,

DataRegion maxSize and the jvm MaxDirectMemorySize are completely different.
An Ignite DataRegion uses offheap memory allocated with the help of Unsafe.
That memory is not related to the "direct memory" which the jvm allocates
when direct buffers are used (e.g. ByteBuffer.allocateDirect). To
constrain the maximum amount of memory for direct buffers one can use the
MaxDirectMemorySize jvm option. As far as I know, by default
MaxDirectMemorySize is equal to Xmx. Consult [1] for more details.

[1] https://docs.oracle.com/javase/8/docs/technotes/tools/windows/java.html

2018-12-23 11:17 GMT+03:00, Павлухин Иван :
> Hi collnc,
>
> Perhaps the documentation can answer your question [1].
>
> [1] https://apacheignite.readme.io/docs/durable-memory-tuning
>
> 2018-12-21 20:39 GMT+03:00, summasumma :
>> In the above example,
>>
>> is setting a DataRegion maxSize of 8g in the xml config file the same as
>> adding the jvmoption "-XX:MaxDirectMemorySize=8g"?
>> Or is it different?
>>
>> Can somone please clarify?
>>
>> Thanks
>> ...summa
>>
>>
>>
>>
>
>
> --
> Best regards,
> Ivan Pavlukhin
>


-- 
Best regards,
Ivan Pavlukhin


Re: Do we require to set MaxDirectMemorySize JVM parameter?

2018-12-23 Thread Павлухин Иван
Hi collnc,

Perhaps the documentation can answer your question [1].

[1] https://apacheignite.readme.io/docs/durable-memory-tuning

2018-12-21 20:39 GMT+03:00, summasumma :
> In the above example,
>
> is setting a DataRegion maxSize of 8g in the xml config file the same as
> adding the jvmoption "-XX:MaxDirectMemorySize=8g"?
> Or is it different?
>
> Can somone please clarify?
>
> Thanks
> ...summa
>
>
>
>


-- 
Best regards,
Ivan Pavlukhin


Re: How Ignite transfer cached data between nodes in the cluster

2018-12-20 Thread Павлухин Иван
Hi,

Actually, the code paths here are not trivial. If you would like to dig
into the data retrieval process for IgniteCache.get, you can explore
GridPartitionedSingleGetFuture and check where
GridNearSingleGetRequest is created and how it is handled.
ср, 19 дек. 2018 г. в 18:26, vyhc...@hotmail.com :
>
> I am trying to understand the logic and find the source code on how Ignite
> retrieves cached data if the data is cached on a different node in the
> cluster.
>
> As an example, say the cluster has 3 node which are A, B & C. Data retrieval
> request is sent from node A, and the data is cached on node C. I found that
> Ignite uses GridIoManager and TcpCommunicationSpi with NIO to transfer messages. But
> what I couldn't find and try to understand is how the cached data got
> transferred from node C to A. Can anyone point me the source code for
> storing/retrieving the cached data between nodes in the Ignite cluster?
> Thanks.
>
>
>



-- 
Best regards,
Ivan Pavlukhin


Re: Can i use SQL query and Cache Operations in same transaction (JTA)

2018-12-19 Thread Павлухин Иван
Hi Hyungbai,

Please be aware that MVCC is included in the Ignite release as a kind of
experimental feature, as stated in the release notes [1].
I do not know the plans for providing releases with MVCC fixes. But
perhaps you can try the nightly builds [2] once the related ticket [3] is
resolved. And I believe it can be resolved in the near future.

[1] https://ignite.apache.org/releases/2.7.0/release_notes.html
[2] https://ignite.apache.org/download.cgi#nightly-builds
[3] https://issues.apache.org/jira/browse/IGNITE-10685
пн, 17 дек. 2018 г. в 05:06, Hyungbai :
>
> Thank you for the reply.
>
> I think it is absolutely necessary when using JTA.
> I hope it will be patched soon.
>
>
>



-- 
Best regards,
Ivan Pavlukhin


Re: Ignite in docker (Native Persistence)

2018-12-18 Thread Павлухин Иван
Hi Rahul,

Could you please share your Ignite configuration and how you launch the
Docker container with Ignite?
Do you see anything in your ignitedata/persistence, ignitedata/wal,
ignitedata/wal/archive after the container stops?
I guess you can configure a consistentId by configuring the corresponding
IgniteConfiguration bean property:

  <property name="consistentId" value="..."/>
  ...

вт, 18 дек. 2018 г. в 12:57, RahulMetangale :
>
> Hi All,
>
> I followed the following steps for persistence in Docker, but I am observing
> that the cache is not retained after restart. From the documentation I see that
> consistentId needs to be set to retain the cache after restart, but I am not
> sure how it can be set in the configuration xml file. Any help is appreciated.
>
> Here are the steps i followed:
> 1. Created following folder on docker host in var directory
> mkdir -p ignitedata/persistence ignitedata/wal ignitedata/wal/archive
> 2. Updated default-config.xml
>
>
> 3. Ran following command to deploy ignite docker container. I updated
> default-config.xml inside container hence i did not pass the CONFIG_URI.
>
>
>
>
>



-- 
Best regards,
Ivan Pavlukhin


Re: Migrate from 2.6 to 2.7

2018-12-11 Thread Павлухин Иван
Hi Andrey,

It looks like your persisted data was read incorrectly by the upgraded
Ignite. It would be great if you could provide a runnable reproducer.

Regarding Optimistic Serializable transactions: they are still
supported by caches with the TRANSACTIONAL atomicity mode. From your error
it looks like your caches are treated as having the TRANSACTIONAL_SNAPSHOT
atomicity mode. You can read about that (experimental) mode in the
documentation [1]. Briefly, this mode allows SQL transactions and
supports only the PESSIMISTIC REPEATABLE_READ transaction configuration.

[1] 
https://apacheignite-sql.readme.io/v2.7/docs/multiversion-concurrency-control
пн, 10 дек. 2018 г. в 17:13, Андрей Григорьев :
>
> Hello, when I tried to migrate to the new version I got an error. Isn't
> Optimistic Serializable supported?
>
>
> ```
> Caused by: class 
> org.apache.ignite.internal.processors.query.IgniteSQLException: Only 
> pessimistic repeatable read transactions are supported at the moment.
> at 
> org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.tx(MvccUtils.java:690)
> at 
> org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.tx(MvccUtils.java:671)
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.runQueryTwoStep(IgniteH2Indexing.java:1793)
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.doRunDistributedQuery(IgniteH2Indexing.java:2610)
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.doRunPrepared(IgniteH2Indexing.java:2315)
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:2209)
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2135)
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2130)
> at 
> org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2707)
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2144)
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:685)
> ... 10 more
> ```
> Enviroment: JDK 1.8, Apache Ignite 2.7 (clear install, persistence mode, 3 
> nodes).  Apache Ignite Client 2.7 from maven:
>
> 2.7.0
>
> 
> org.apache.ignite
> ignite-core
> ${ignite.version}
> 
> 
> org.apache.ignite
> ignite-indexing
> ${ignite.version}
> 
>
> Cache configuration:
>
> CacheConfiguration cfg = new CacheConfiguration<>();
> cfg.setBackups(backupsCount);
> cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>
> Transaction configuration:
>
> TransactionConfiguration txCfg = new TransactionConfiguration();
> txCfg.setDefaultTxConcurrency(TransactionConcurrency.OPTIMISTIC);
> txCfg.setDefaultTxIsolation(TransactionIsolation.SERIALIZABLE);
> cfg.setTransactionConfiguration(txCfg);
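
For reference, a hedged sketch of the transaction defaults the first exception asks for. This assumes the standard Ignite 2.7 API shown above; PESSIMISTIC / REPEATABLE_READ is the combination the "Only pessimistic repeatable read transactions are supported" check accepts:

```java
// Sketch only: switch the defaults from the optimistic serializable
// settings above to the combination the MVCC code path supports.
TransactionConfiguration txCfg = new TransactionConfiguration();
txCfg.setDefaultTxConcurrency(TransactionConcurrency.PESSIMISTIC);
txCfg.setDefaultTxIsolation(TransactionIsolation.REPEATABLE_READ);
cfg.setTransactionConfiguration(txCfg);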
>
> And the second exception from the nodes, when I try to read a persisted value 
> from the cache (with the transaction mode set to pessimistic repeatable read).
>
> ```
> [15:33:46,990][SEVERE][query-#278][GridMapQueryExecutor] Failed to execute 
> local query.
> class org.apache.ignite.IgniteCheckedException: Failed to execute SQL query. 
> Internal error ("Внутренняя ошибка"): "class 
> org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
>  Runtime failure on bounds: [lower=RowSimple [vals=[null, null, null, null, 
> null, null, null, 'e3f070fa-7888-4fbb-baac-a02d5338e217', 1]], 
> upper=RowSimple [vals=[null, null, null, null, null, null, null, 
> 'e3f070fa-7888-4fbb-baac-a02d5338e217', 1]]]"
> General error: "class 
> org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
>  Runtime failure on bounds: [lower=RowSimple [vals=[null, null, null, null, 
> null, null, null, 'e3f070fa-7888-4fbb-baac-a02d5338e217', 1]], 
> upper=RowSimple [vals=[null, null, null, null, null, null, null, 
> 'e3f070fa-7888-4fbb-baac-a02d5338e217', 1]]]"; SQL statement:
> SELECT
> "HumanName".__Z0._KEY __C0_0,
> "HumanName".__Z0._VAL __C0_1
> FROM "HumanName".HUMANNAMEMODEL __Z0
> WHERE (__Z0.PARENTID = ?1) AND (__Z0.VERSION = ?2) [5-197]
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQuery(IgniteH2Indexing.java:1428)
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer(IgniteH2Indexing.java:1489)
> at 
> org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest0(GridMapQueryExecutor.java:930)
> at 
> org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest(GridMapQueryExecutor.java:705)
> at 
> org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onMessage(GridMapQueryExecutor.java:240)
> at 
> 

Re: Events question

2018-11-09 Thread Павлухин Иван
Hi Mikael,

In order to use event storage you should configure an EventStorageSpi.
By default no event storage is enabled, which fits your case: listeners
still fire, but events are not kept around after they have been processed.

You can find more details about event storage in [1].

[1]
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/eventstorage/memory/MemoryEventStorageSpi.html
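
For example, a minimal sketch of enabling the in-memory event storage (the property values here are illustrative, not recommendations):

```java
IgniteConfiguration cfg = new IgniteConfiguration();

MemoryEventStorageSpi evtSpi = new MemoryEventStorageSpi();
evtSpi.setExpireCount(1000);    // keep at most 1000 recorded events
evtSpi.setExpireAgeMs(60_000);  // drop events older than one minute

cfg.setEventStorageSpi(evtSpi);
```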

Fri, 9 Nov 2018 at 15:11, Mikael:

> Hi!
>
> The event documentation say you can query events, so they are stored
> locally, how long does it store them ? can I control this in any way ?
> say I just want to use event listeners and not interested in query them,
> so no use to keep them around once they have been caught by the
> listener, is that possible ?
>
> It sounds like keeping lots of events around after they have been
> triggered would be wasting memory, or maybe it would not make any
> difference ?
>
> Mikael
>
>
>

-- 
Best regards,
Ivan Pavlukhin


Re: Unable to load more than 5g data through sqlline

2018-11-07 Thread Павлухин Иван
Hi Debashis,

Sorry for the late answer. How much RAM does your server have?
You configured your data region with a 7 GB max size. This size
defines how much RAM may be allocated for the region. If the
server does not have enough RAM, the OS cannot satisfy Ignite's
allocations and kills the process. With persistence enabled you can
store as much data in Ignite as your disk capacity allows, so try
decreasing the data region max size. I hope this helps.
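
For illustration, a sketch of shrinking the default region (the 4 GB figure is only an example; pick a size that leaves room for the JVM heap and the OS):

```java
DataStorageConfiguration storageCfg = new DataStorageConfiguration();

DataRegionConfiguration regionCfg = new DataRegionConfiguration();
regionCfg.setName("Default_Region");
regionCfg.setMaxSize(4L * 1024 * 1024 * 1024); // 4 GB instead of 7 GB
regionCfg.setPersistenceEnabled(true);         // data beyond RAM stays on disk

storageCfg.setDefaultDataRegionConfiguration(regionCfg);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDataStorageConfiguration(storageCfg);
```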

Thu, 1 Nov 2018 at 16:41, debashissinha:

> Also at the same time the ignite node is restarting
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
Best regards,
Ivan Pavlukhin


Re: Unable to load more than 5g data through sqlline

2018-10-31 Thread Павлухин Иван
Hi Debashis,

Is sqlline started on the same machine? Perhaps sqlline consumed all
the available memory, but the OS decided to kill Ignite. Could you
split the incoming data into relatively small chunks and try it out?

Tue, 30 Oct 2018 at 23:07, debashissinha:

> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1918/20181031_005158.jpg>
>
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1918/20181031_005224.jpg>
>
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1918/20181031_004447.jpg>
>
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1918/20181031_004532.jpg>
>
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1918/20181031_004559.jpg>
>
>
> Hi All,
> I would request help from anyone who can assist me in a critical error.
> I am trying to benchmark ignite on single node without tcp discovery
> enabled
> with tpcds benchmark data.
> For this I am using customer table(image attached) and loading 5 gb of csv
> file through sql line.
>
> The config( image attached) is for 20 gb of default data region with wal
> mode none and eviction mode is only lru . I have also enabled native
> persistence . Also in my ignite.sh script I am adding G1GC option in jvm
> opts.
>
> After almost 1.63 gb of data getting inserted and which corresponds to
> roughly 1200 rows of data ignite is silently restarting giving an error
> kill ignite.sh line 181 with jvm opts. The error for this is attached.
>
> I am having the following config
> no of cpus 2
> heap memory 1 gb
> data region max size(off heap size) 20 gb.
> Cluster mode enabled.
>
> Can some one kindly advise me where I am going wrong.
>
> Thanks & Regards
> Debashis Sinha
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
Best regards,
Ivan Pavlukhin


Re: how to handle dataregion out of memmory gracefully

2018-10-31 Thread Павлухин Иван
Hi Wayne,

You can see a message written to a console during Ignite startup with
a calculated amount of memory required for a server. It looks as follows:

Nodes started on local machine require more than 80% of physical RAM
what can lead to significant slowdown due to swapping (please decrease
JVM heap size, data region size or checkpoint buffer size)
[required=694MB, available=996MB]

As already said, you should make sure that your server does not
require more memory than is available. I believe it is the responsibility
of the server administrator to prevent memory exhaustion. I could also
suggest configuring enough swap space and monitoring which will notify
the admin when the system begins swapping.
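
The arithmetic behind that warning can be sketched in plain Java. This is a rough estimate under stated assumptions (heap + data region max sizes + checkpoint buffer), not Ignite's exact internal formula:

```java
public class MemoryEstimate {
    // Rough required-memory estimate for one Ignite node, in megabytes:
    // JVM heap + every data region's max size + checkpoint buffer.
    static long requiredMb(long heapMb, long[] dataRegionMaxMb, long checkpointBufMb) {
        long total = heapMb + checkpointBufMb;
        for (long region : dataRegionMaxMb)
            total += region;
        return total;
    }

    public static void main(String[] args) {
        long required = requiredMb(1024, new long[] {512, 256}, 256);
        System.out.println("required=" + required + "MB"); // required=2048MB

        // The warning fires when the estimate exceeds 80% of physical RAM:
        long availableMb = 996;
        System.out.println(required > availableMb * 80 / 100); // true
    }
}
```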


Mon, 29 Oct 2018 at 9:50, Ilya Kasnacheev:

> Hello!
>
> I'm afraid that Ignite is not usable currently after suffering Out Of
> Memory error. You should be careful to prevent that from happening.
>
> Currently there is no graceful way of dealing with it.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Sun, 28 Oct 2018 at 11:58, wt:
>
>> in testing i managed to exceed a data regions space and this region is
>> memory
>> only. When this happens an unhandled exception is thrown from the
>> underlying
>> ignite dlls and the process crashes. How can i handle this gracefully
>> without losing the server?
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>

-- 
Best regards,
Ivan Pavlukhin


Re: Writing binary [] to ignite via memcache binary protocol

2018-10-17 Thread Павлухин Иван
Hi Michael,

The trouble could be related to the Python library. In Python 2.7 there
is no separate byte-array type, so the value passed to the client is a
string in this case.
I checked that Ignite recognizes the byte-array type and stores it as a
byte array internally. I did the following experiment with Spymemcached [1].
import java.io.IOException;
import java.util.Arrays;

import net.spy.memcached.AddrUtil;
import net.spy.memcached.BinaryConnectionFactory;
import net.spy.memcached.MemcachedClient;

public class Memcached {
    public static void main(String[] args) throws IOException {
        MemcachedClient client = new MemcachedClient(
            new BinaryConnectionFactory(),
            AddrUtil.getAddresses("127.0.0.1:11211"));

        client.add("a", Integer.MAX_VALUE, new byte[]{1, 2, 3});
        client.add("b", Integer.MAX_VALUE, "123");

        System.out.println(Arrays.toString((byte[]) client.get("a")));
        System.out.println(client.get("b"));

        System.exit(0);
    }
}

And I see expected output:
[1, 2, 3]
123

[1] https://mvnrepository.com/artifact/net.spy/spymemcached/2.12.3

Wed, 17 Oct 2018 at 10:25, Павлухин Иван:

> Hi Michael,
>
> Answering one of your questions.
> > Does ignite internally have a way to store the data type when cache
> entry is stored?
> Yes, internally Ignite maintains data types for stored keys and values.
>
> Could you confirm that for real memcached your example works as expected?
> I will try reproduce your Python example. It should not be hard to check
> what exactly is stored inside Ignite.
>
> Wed, 17 Oct 2018 at 5:25, Michael Fong:
>
>> bump :)
>>
>> Could anyone please help to answer a newbie question? Thanks in advance!
>>
>> On Mon, Oct 15, 2018 at 4:22 PM Michael Fong 
>> wrote:
>>
>>> Hi,
>>>
>>> I kind of able to reproduce it with a small python script
>>>
>>> import pylibmc
>>>
>>> client = pylibmc.Client (["127.0.0.1:11211"], binary=True)
>>>
>>>
>>> ##abc
>>> val = "abcd".decode("hex")
>>> client.set("pyBin1", val)
>>>
>>> print "val decode w/ iso-8859-1: %s" % val.encode("hex")
>>>
>>> get_val = client.get("pyBin1")
>>>
>>> print "Value for 'pyBin1': %s" % get_val.encode("hex")
>>>
>>>
>>> where the the program intends to insert a byte[] into ignite using
>>> memcache binary protocol.
>>> The output is
>>>
>>> val decode w/ iso-8859-1: abcd
>>> Value for 'pyBin1': *efbfbdefbfbd*
>>>
>>> where, 'ef bf bd' are the replacement character for UTF-8 String.
>>> Therefore, the value field seems to be treated as String in Ignite.
>>>
>>> Regards,
>>>
>>> Michael
>>>
>>>
>>>
>>> On Thu, Oct 4, 2018 at 9:38 PM Maxim.Pudov  wrote:
>>>
>>>> Hi, it looks strange to me. Do you have a reproducer?
>>>>
>>>>
>>>>
>>>> --
>>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>>
>>>
>
> --
> Best regards,
> Ivan Pavlukhin
>


-- 
Best regards,
Ivan Pavlukhin


Re: Writing binary [] to ignite via memcache binary protocol

2018-10-17 Thread Павлухин Иван
Hi Michael,

Answering one of your questions.
> Does ignite internally have a way to store the data type when cache entry
is stored?
Yes, internally Ignite maintains data types for stored keys and values.

Could you confirm that your example works as expected against real memcached?
I will try to reproduce your Python example. It should not be hard to check
what exactly is stored inside Ignite.

Wed, 17 Oct 2018 at 5:25, Michael Fong:

> bump :)
>
> Could anyone please help to answer a newbie question? Thanks in advance!
>
> On Mon, Oct 15, 2018 at 4:22 PM Michael Fong 
> wrote:
>
>> Hi,
>>
>> I kind of able to reproduce it with a small python script
>>
>> import pylibmc
>>
>> client = pylibmc.Client (["127.0.0.1:11211"], binary=True)
>>
>>
>> ##abc
>> val = "abcd".decode("hex")
>> client.set("pyBin1", val)
>>
>> print "val decode w/ iso-8859-1: %s" % val.encode("hex")
>>
>> get_val = client.get("pyBin1")
>>
>> print "Value for 'pyBin1': %s" % get_val.encode("hex")
>>
>>
>> where the the program intends to insert a byte[] into ignite using
>> memcache binary protocol.
>> The output is
>>
>> val decode w/ iso-8859-1: abcd
>> Value for 'pyBin1': *efbfbdefbfbd*
>>
>> where, 'ef bf bd' are the replacement character for UTF-8 String.
>> Therefore, the value field seems to be treated as String in Ignite.
>>
>> Regards,
>>
>> Michael
>>
>>
>>
>> On Thu, Oct 4, 2018 at 9:38 PM Maxim.Pudov  wrote:
>>
>>> Hi, it looks strange to me. Do you have a reproducer?
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>

-- 
Best regards,
Ivan Pavlukhin


Re: Ignite Events

2018-10-11 Thread Павлухин Иван
Hi drosso,

Luckily there is no mystery. By the way, what version of Ignite do you use?

The clue to the strange behavior here is a topology change during your test
execution. As I see, the Putter node is a server data node as well, so it
holds some data partitions and consequently receives some OBJECT_PUT events.

The second seemingly strange thing is observing events for the same keys on
different nodes. It is explained by so-called "late affinity assignment".
When Putter enters the cluster, some partitions are loaded to it from other
nodes. But Putter is usable before all that data is actually loaded: instead
of waiting for the data and freezing the cluster for a possibly long time,
Ignite creates temporary backup partitions on the Putter node while the
primary partitions are kept on one of the ServerNodes from your example
(when all data has been loaded by Putter, its partitions are considered
primary and the previous primary partitions on the other nodes are
destroyed). Events like OBJECT_PUT are fired on backup partitions as well,
and that explains why you observe events for the same keys on different
nodes. If you make Putter a non-data node for the target cache (e.g. by
starting it as a client node) then you will see events only on the
ServerNodes.
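
The Putter example further down in this thread already has such a line commented out; a hedged sketch of the client-mode variant (cache name and key/value types taken from that example):

```java
// Start Putter as a client node so it holds no data partitions:
Ignition.setClientMode(true);

try (Ignite ignite = Ignition.start("config/example-ignite.xml")) {
    IgniteCache<Integer, String> cache = ignite.getOrCreateCache("MyCache");

    for (int i = 1; i <= 9; i++)
        cache.put(i, Integer.toString(i)); // PUT events now fire only on server nodes
}
```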

Thu, 11 Oct 2018 at 11:20, drosso:

> Hi Ivan,
> thank you for your interest! here below you can find the code for the 2
> sample programs:
>
> *** ServerNode.java **
>
> package TestATServerMode;
>
> import javax.cache.Cache;
> import javax.cache.event.CacheEntryEvent;
> import javax.cache.event.CacheEntryUpdatedListener;
> import javax.cache.event.EventType;
>
> import org.apache.ignite.Ignite;
> import org.apache.ignite.IgniteCache;
> import org.apache.ignite.IgniteException;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.cache.CacheEntryEventSerializableFilter;
> import org.apache.ignite.cache.query.ContinuousQuery;
> import org.apache.ignite.cache.query.QueryCursor;
> import org.apache.ignite.cache.query.ScanQuery;
> import org.apache.ignite.events.*;
> import org.apache.ignite.lang.IgniteBiPredicate;
> import org.apache.ignite.lang.IgnitePredicate;
>
> import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT;
> import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_READ;
> import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_REMOVED;
>
> import java.util.UUID;
>
> /**
>  * Starts up an empty node with example compute configuration.
>  */
> public class ServerNode {
> /**
>  * Start up an empty node with example compute configuration.
>  *
>  * @param args
>  *Command line arguments, none required.
>  * @throws IgniteException
>  * If failed.
>  */
> private static final String CACHE_NAME = "MyCache";
>
> @SuppressWarnings("deprecation")
> public static void main(String[] args) throws IgniteException {
> Ignition.start("config/example-ignite.xml");
>
> Ignite ignite = Ignition.ignite();
>
> // Get an instance of named cache.
> final IgniteCache<Integer, String> cache = ignite.getOrCreateCache(CACHE_NAME);
>
> // Sample local event listener.
>
> IgnitePredicate<CacheEvent> locLsnr = new IgnitePredicate<CacheEvent>() {
> @Override
> public boolean apply(CacheEvent evt) {
> System.out.println("LOCAL cache event [evt=" + evt.name() + ", cacheName=" + evt.cacheName() + ", key=" + evt.key() + ']');
>
> return true; // Return true to continue listening.
> }
> };
>
> // Register the event listener for all local cache PUT events.
> ignite.events().localListen(locLsnr, EVT_CACHE_OBJECT_PUT);
>
>
> }
> }
>
>
>  Putter.java *
>
> package TestATServerMode;
>
> import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT;
>
> import java.sql.Time;
>
> import org.apache.ignite.Ignite;
> import org.apache.ignite.IgniteCache;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.configuration.CacheConfiguration;
> import org.apache.ignite.events.CacheEvent;
> import org.apache.ignite.lang.IgnitePredicate;
>
>
> @SuppressWarnings("TypeMayBeWeakened")
> public class Putter {
> /** Cache name. */
> private static final String CACHE_NAME = "MyCache";
>
> /**
>  * Executes example.
>  *
>  * @param args Command line arguments, none required.
>  * @throws InterruptedException
>  */
> public static void main(String[] args) {
>
> // Mark this cluster member as client.
> //Ignition.setClientMode(true);
>
> try (Ignite ignite = Ignition.start("config/example-ignite.xml")) {
> System.out.println();
>  

Re: Ignite Events

2018-10-10 Thread Павлухин Иван
Hi drosso,

Indeed looks strange. If you provide a reproducer I will take a look.

Wed, 10 Oct 2018 at 16:44, drosso:

> Hi,
> I've been playing with Ignite events for a couple of days but there's a
> behavior of my sample programs that I really can't understand.
> I've prepared 2 sample programs:
> 1. a Putter program that "gets or creates" a simple cache "MyCache" with an
> Integer Key and a String Value and puts 9 elements (from 1 to 9) into
> "MyCache"
> 2. a ServerNode program that defines a local listener on "MyCache" for PUT
> events and displays the newly added keys.
>
> N.B. All programs are launched as "Server" Ignite nodes and "MyCache" is
> defined as PARTITIONED with 0 backuop copies
>
> If I launch 2 instances of ServerNode and 1 instance of Putter, I obtain
> the
> following output:
>
> ServerNode 1 displays the keys: 2, 3, 5, 7, 9
> ServerNode 2 display the keys: 1, 4,6,8
>
> Now, this is already an output that puzzles me: if the cache is
> partitioned,
> the keys should be spread onto all Server instances, so I should not see
> all
> the keys reported by the 2 ServerNode instances. There should be some keys
> missing (i.e. those on the Putter server node).
>
> Moreover, if I add a local listener also on the Putter node, the output
> becomes still more puzzling:
>
> ServerNode 1 displays the keys: 2, 3, 5, 7, 9
> ServerNode 2 display the keys: 1, 4,6,8
> Putter displays the keys : 2,3,5
>
> What am I missing here ? There surely must be something that I
> misunderstood, but I can't figure out what it could be.
> Any help will be much appreciated!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
Best regards,
Ivan Pavlukhin


Re: lost partition recovery with native persistence

2018-10-05 Thread Павлухин Иван
Please ignore my message. I misunderstood the problem.

Fri, 5 Oct 2018 at 14:16, Павлухин Иван:

> Hi Roman,
>
> Actually, Ignite with enabled persistence supports crash recovery. It is
> mentioned in [1].
>
> [1]
> https://apacheignite.readme.io/v2.6/docs/distributed-persistent-store#section-transactional-guarantees
>
> Fri, 5 Oct 2018 at 13:30, Maxim.Pudov:
>
>> Great idea, I like it. However, it's better to discuss development plans
>> on
>> development list http://apache-ignite-developers.2346864.n4.nabble.com
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>
> --
> Best regards,
> Ivan Pavlukhin
>


-- 
Best regards,
Ivan Pavlukhin


Re: lost partition recovery with native persistence

2018-10-05 Thread Павлухин Иван
Hi Roman,

Actually, Ignite with persistence enabled supports crash recovery. It is
mentioned in [1].

[1]
https://apacheignite.readme.io/v2.6/docs/distributed-persistent-store#section-transactional-guarantees

Fri, 5 Oct 2018 at 13:30, Maxim.Pudov:

> Great idea, I like it. However, it's better to discuss development plans on
> development list http://apache-ignite-developers.2346864.n4.nabble.com
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
Best regards,
Ivan Pavlukhin


Re: java.lang.NullPointerException in GridDhtPartitionsExchangeFuture

2018-09-27 Thread Павлухин Иван
Hi Subash,

Correct me if I am wrong, but in the current Ignite version persistence is
not mandatory; it is even disabled by default. With persistence disabled
nothing is written to disk. The 2.x storage architecture was developed as
an improvement over the previous one: Ignite 2.x uses off-heap memory, but
it is plain RAM local to the JVM process (managed by Ignite instead of by
automatic JVM memory management). Several benefits of off-heap are
mentioned in [1].

[1]
https://apacheignite.readme.io/docs/durable-memory#section-in-memory-features

2018-09-27 14:45 GMT+03:00 Ilya Kasnacheev :

> Hello!
>
> It's hard to answer these questions without thorough review of logs and
> 1.9 code, and I doubt anyone will volunteer to do that since 1.x branch
> does not see any new development in Apache Ignite.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Wed, 26 Sep 2018 at 22:36, HEWA WIDANA GAMAGE, SUBASH <
> subash.hewawidanagam...@fmr.com>:
>
>> Hi Kasnacheev,
>>
>> Thank you very much for the response..
>>
>>
>>
>> We use v1.9, because 2.x uses mandatory ignite native persistence(local
>> disk) along with durable memory management(RAM), and no option for a **java
>> heap only** cache storage. We wanted to keep away from storing anything in
>> disk. Hence using 1.9 Ignite.
>>
>> Can you please clarify “I believe that cache in question is no longer
>> consistent between nodes on metadata level” ?
>>
>>
>>
>> 1.   Like what could have caused & under what conditions this
>> inconsistent state between nodes on cache metadata occurred ?
>>
>>
>>
>> 2.   And does 2.x fixes the problem?( Is there a way you can suggest
>> to reproduce this issue, so that we can know for sure 2.x fixes the problem)
>>
>>
>>
>> It appeared from our logs that the cache.put threads hangs forever after
>> this error. Can that be possible with this?
>>
>>
>>
>>
>>
>>
>>
>> *From:* Ilya Kasnacheev [mailto:ilya.kasnach...@gmail.com]
>> *Sent:* Wednesday, September 26, 2018 12:31 PM
>> *To:* user@ignite.apache.org
>> *Subject:* Re: java.lang.NullPointerException in
>> GridDhtPartitionsExchangeFuture
>>
>>
>> This email is from an external source - exercise caution regarding links
>> and attachments.
>>
>> Hello!
>>
>>
>>
>> This is a bad error message. I believe that cache in question is no
>> longer consistent between nodes on metadata level, and you can't fix that
>> without full restart or at least dropping and recreating the cache.
>>
>> I don't think that you will get much support from community on 1.x since
>> the focus has shifted to 2.x. Have you considered upgrading?
>>
>>
>>
>> Regards,
>>
>> --
>>
>> Ilya Kasnacheev
>>
>>
>>
>>
>>
>> Wed, 26 Sep 2018 at 18:50, HEWA WIDANA GAMAGE, SUBASH <
>> subash.hewawidanagam...@fmr.com>:
>>
>> This is the only single error from ignite happened after JVM startup.
>> Looks like I only posted the exception stack trace. Here’s the message and
>> from which thread it logged.
>>
>>
>>
>> level:ERROR
>>
>>
>>
>>  logger: 
>> org.apache.ignite.internal.processors.cache.GridCacheIoManager
>>
>>
>>
>>
>>  message:Failed processing message
>> [senderId=57ee6544-e0b3-45cc-bdb5-f3fd37d7db1e, 
>> msg=GridDhtPartitionsSingleMessage
>> [parts=null, partCntrs=null, client=false, compress=false, super=
>> GridDhtPartitionsAbstractMessage [exchId=GridDhtPartitionExchangeId
>> [topVer=AffinityTopologyVersion [topVer=13, minorTopVer=0],
>> nodeId=57ee6544, evt=NODE_JOINED], lastVer=GridCacheVersion [topVer=0,
>> time=0, order=1536996016410, nodeOrder=0], flags=0, super=GridCacheMessage
>> [msgId=1, depInfo=null, err=null, skipPrepare=false, cacheId=0,
>> cacheId=0 [IM_GROUP=fsy-pi-dt-ssam ]
>>
>>
>>
>>  thread:sys-stripe-2-#3%null%
>>
>>
>>
>> *From:* Ilya Kasnacheev [mailto:ilya.kasnach...@gmail.com]
>> *Sent:* Tuesday, September 25, 2018 11:11 AM
>> *To:* user@ignite.apache.org
>> *Subject:* Re: java.lang.NullPointerException in
>> GridDhtPartitionsExchangeFuture
>>
>>
>> This email is from an external source - exercise caution regarding links
>> and attachments.
>>
>> Hello!
>>
>>
>>
>> It's hard to say without reviewing logs, but it seems that there's some
>> inconsistency with regards to cache metadata on nodes.
>>
>>
>>
>> Regards,
>>
>> --
>>
>> Ilya Kasnacheev
>>
>>
>>
>>
>>
>> Tue, 25 Sep 2018 at 0:13, HEWA WIDANA GAMAGE, SUBASH <
>> subash.hewawidanagam...@fmr.com>:
>>
>> Hi all,
>>
>> We use Ignite 1.9.
>>
>>
>>
>> We could see this in our logs.  All we do is cache.get() , cache.put()
>> operations. With this log being seen, is it possible for  cache.put or
>> ignite.getOrCreateCache() method calling threads be blocked forever ?
>> (unfortunately we couldn’t get a thread dump to prove that, but from
>> application logs, it looks like it).
>>
>>
>>
>> java.lang.NullPointerException: null
>>
>> at org.apache.ignite.internal.
>> 

Re: SQL query and Indexes architecture

2018-09-21 Thread Павлухин Иван
Hi Eugene,

In the community wiki there are several "under the hood" documents like [1].
Unfortunately there is no such document about SQL. It seems that such a
document would be useful for you and many others. Perhaps, if you have a
habit of documenting your findings (in blogs or elsewhere), it could become
a valuable contribution to the Ignite project.

2018-09-17 17:57 GMT+03:00 Ilya Kasnacheev :

> Hello!
>
> I recommend starting with H2TreeIndex class. Maybe dropping mails on
> developer list with precise questions.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, 17 Sep 2018 at 16:25, eugene miretsky:
>
>> Thanks!
>>
>> I am curious about the process of loading data from Ignite to H2 on the
>> fly, as H2 creating indexes but storing them in Ignite. Can you point me to
>> some JIRAs that discuss it, or which part of the code is responsible for
>> that?
>>
>> On Mon, Sep 17, 2018 at 9:18 AM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> 1. 1. H2 executes the query, during which it has to load rows from
>>> tables, and Ignite does the row loading part. Then Ignite will collect
>>> query results on all nodes and aggregate them on a single node.
>>> 1. 2. Index is created by H2, but it is stored in Ignite pages (?).
>>> 2. Maybe you're right, I have to admit I'm unfamiliar with precise
>>> details here.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Mon, 17 Sep 2018 at 16:02, eugene miretsky:
>>>
 Thanks!


1.
   1. "Ignite feeds H2 rows that it asks for, and H2 creates indexes on them
      and executes queries on them." - what exactly do you mean by that? Do
      you mean that all parts of a query that use indexes are executed by H2,
      then the actual data is retrieved from Ignite pages, and the final
      (non-indexed) parts of the query executed by Ignite?
   2. What happens when I create an index on a new column? Is the index
      created in Ignite (and stored in Ignite pages?), or is it created in H2?
2. The reason I was asking about the AFFINITY_KEY, _key_PK and _key_PK_hash
   indexes is that in this code it looks like they are created in H2.



 On Mon, Sep 17, 2018 at 8:36 AM Ilya Kasnacheev <
 ilya.kasnach...@gmail.com> wrote:

> Hello!
>
> 1. H2 does not store data but, as far as my understanding goes, it
> created SQL indexes from data. Ignite feeds H2 rows that it asks for, and
> H2 creates indexes on them and executes queries on them.
> 2. Ignite always has special index on your key (since it's a key-value
> storage it can always find tuple by key). Ignite is also aware of key's
> hash code, and affinity key value always maps to one partition of data (of
> 1024 by default). Those are not H2 indexes and they're mostly used on
> planning stage. E.g. you can map query to one node if affinity key is
> present in the request.
> 3. Data is brought onto the heap to read any fields from row. GROUP BY
> will hold its tuples on heap. Ignite has configurable index inlining where
> you can avoid reading objects from heap just to access indexed fields.
> 4. With GROUP BY, lazy evaluation will not help you much. It will
> still have to hold all data on heap at some point. Lazy evaluation mostly
> helps with "SELECT * FROM table" type queries which provide very large and
> boring result set.
>
> Hope this helps.
> --
> Ilya Kasnacheev
>
>
> Fri, 14 Sep 2018 at 17:39, eugene miretsky <
> eugene.miret...@gmail.com>:
>
>> Hello,
>>
>> Trying to understand how exactly SQL queries are executed in Ignite.
>> A few questions
>>
>>
>>1. To what extent is H2 used? Does it store the data? Does it
>>create the indexes? Is it used only for generating execution plans? I
>>believe that all the data used to be stored in H2, but with the new 
>> durable
>>memory architecture, I believe that's no longer the case.
>>2. Which indexes are used? Ignite creates  B+ tree indexes and
>>stores them in Index pages, but I also see AFFINITY_KEY, _key_PK and
>>_key_PK_hash indexes created in H2.
>>3. When is data brought onto the heap? I am assuming that groupby
>>and aggregate require all the matching queries to first be copied from
>>off-heap to heap
>>4. How does lazy evaluation work? For example, for group_by, does
>>it bring batches of matching records with the same group_by key onto 
>> the
>>heap?
>>
>> I am not necessarily looking for the exact answers, but rather
>> pointer in the right direction 

Re: How to set query timeout or cancel query when submit SQL query from SQL tool like DBweaver and sqlline?

2018-09-16 Thread Павлухин Иван
Hi Ray,

As far as I know, a query timeout can be configured only through the
SqlQuery/SqlFieldsQuery API. The global timeout configuration through the
"jdbc:h2..." URL mentioned earlier was not implemented.
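
For reference, a sketch of the per-query API (the SQL text and row processing are placeholders; the timeout and cursor-close behavior are from the public Query API):

```java
SqlFieldsQuery qry = new SqlFieldsQuery("SELECT * FROM MyTable");
qry.setTimeout(5_000, TimeUnit.MILLISECONDS); // cancel after 5 seconds

try (QueryCursor<List<?>> cursor = cache.query(qry)) {
    for (List<?> row : cursor) {
        // process row...
    }
} // closing the cursor also cancels a still-running query
```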


Re: IgniteUtils NoClassDefFoundError

2018-09-11 Thread Павлухин Иван
Hi Jack,

Could you provide logs and the full console output? NoClassDefFoundError can
be thrown when the class in question is on the classpath but fails to
initialize (e.g. an exception was thrown from its static initializer).
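
The failed-initialization case is easy to demonstrate with plain Java (no Ignite involved; the class names below are made up for the demo):

```java
public class StaticInitDemo {
    static class Bad {
        // Runs in the class's static initializer (<clinit>) and fails.
        static final Object VALUE = init();

        static Object init() {
            throw new RuntimeException("boom in static initializer");
        }
    }

    // Touches Bad.VALUE and reports which error the JVM raised.
    static String tryAccess() {
        try {
            return String.valueOf(Bad.VALUE);
        } catch (Throwable t) {
            return t.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        // First access runs the failing initializer: ExceptionInInitializerError.
        System.out.println(tryAccess());
        // Every later access sees the class in a failed state: NoClassDefFoundError.
        System.out.println(tryAccess());
    }
}
```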

2018-09-11 6:05 GMT+03:00 Jack Lever :

> Hi All,
>
> I'm getting an error on application startup which has me stumped. I've
> imported ignite-core, indexing, slf4j and spring-data via maven, version
> 2.6.0. I'm using ignite to do some cache operations, basic stuff
> cross-node. However when I start it, it runs until the config of static ip
> discovery or Ignition.start(config) call depending on what I have in the
> setup and then stops with :
>
> Failed to instantiate [i.o.c.IgniteManager]: Constructor threw exception;
> nested exception is java.lang.NoClassDefFoundError: Could not initialize
> class org.apache.ignite.internal.util.IgniteUtils
>
> I can see the class inside intellij in the jar file in external libraries.
> I can use the class in the code but when I run it appears to be missing ...
>
> How do I go about fixing this or diagnosing it further?
>
> Thanks,
> Jack.
>



-- 
Best regards,
Ivan Pavlukhin


Re: Simulate Read Only Caches

2018-08-30 Thread Павлухин Иван
Hi Steve,

It is an interesting question. I am not aware of any development in this
direction; perhaps more experienced community members can tell more.
I think it would be great to experiment with disabling some of these checks
here and there, and then run some kind of benchmark comparing performance
with the checks disabled. If the difference is significant, it will most
likely attract the community's attention.
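
One way to approximate this today, assuming the data really is immutable after loading, is to avoid transactional caches altogether. A sketch, not a true read-only mode (Ignite will still permit writes):

```java
CacheConfiguration<String, Double> rates = new CacheConfiguration<>("exchangeRates");
rates.setCacheMode(CacheMode.REPLICATED);
// ATOMIC skips transactional locking and version bookkeeping:
rates.setAtomicityMode(CacheAtomicityMode.ATOMIC);
rates.setReadFromBackup(true); // serve reads from the local copy
```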

2018-08-28 17:12 GMT+03:00 steve.hostettler :

> Hello,
>
> I do have a bunch of caches that I would like to have replicated but to
> keep
> them "read only" after a certain point. It is a fairly standard use case.
> There are master (exchange rates) that are fixed once and for all for a
> given (set of processes). Once loaded there is no reason to bother with
> locking and transactionality.
>
>
> I looked at the implementation and there are quite a number of gates and
> checks that are in place. I wonder how to work around these.
>
> I there a way to simulate this? Maybe there are even more things that we
> can
> temporary disable to speed up common read only use cases.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Ivan Pavlukhin


Re: How to run a job every 5 seconds in Ignite

2018-08-28 Thread Павлухин Иван
Hi Lokesh,

You could try out the extended cron syntax implemented by Ignite [1].

[1]
https://apacheignite.readme.io/docs/cron-based-scheduling#section-syntax-extension
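
If the cron extension does not cover a strict 5-second period, plain JDK scheduling inside the cluster-singleton service mentioned below fills the gap. A sketch; the period is shortened here so the demo finishes quickly, in the service you would use 5000 ms:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class EveryFiveSeconds {
    // Runs a task at a fixed rate and returns how many times it fired.
    static int runFor(long periodMs, long durationMs) throws InterruptedException {
        ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger runs = new AtomicInteger();
        ScheduledFuture<?> task =
            exec.scheduleAtFixedRate(runs::incrementAndGet, 0, periodMs, TimeUnit.MILLISECONDS);
        Thread.sleep(durationMs);
        task.cancel(false);
        exec.shutdown();
        return runs.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // In the service's execute() the period would be 5000 ms.
        System.out.println("fired " + runFor(50, 300) + " times");
    }
}
```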


2018-08-28 10:51 GMT+03:00 Lokesh Sharma :

> Is it possible to run the job every few seconds? As far as I know, cron
> API doesn't support scheduling in seconds.
>
> On Tue, Aug 28, 2018 at 11:27 AM Lokesh Sharma 
> wrote:
>
>> This is what I was looking for. Many thanks!
>>
>> On Mon, Aug 27, 2018 at 3:01 PM Evgenii Zhuravlev <
>> e.zhuravlev...@gmail.com> wrote:
>>
>>> Hi Lokesh,
>>>
>>> I'd suggest starting an Ignite service, which will guarantee
>>> failover-safety for you: https://apacheignite.readme.io/docs/service-grid.
>>> Just choose cluster-singleton to make sure that you have exactly one
>>> instance of the service in the cluster. Inside this service you can use
>>> the Ignite scheduler, which has a cron API:
>>> https://apacheignite.readme.io/docs/cron-based-scheduling
>>>
>>> Evgenii
>>>
>>> пн, 27 авг. 2018 г. в 9:16, Lokesh Sharma :
>>>
 I'm using Ignite with Spring Boot. Is there a way to run a job every 5
 seconds on exactly one node of the cluster (which could be any node)?

 Thank You

>>>


-- 
Best regards,
Ivan Pavlukhin
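The core of Evgenii's suggestion, stripped of the Ignite-specific parts so it runs standalone, is a fixed-rate scheduler inside the single service instance. A sketch, assuming the surrounding cluster-singleton Service deployment is in place (the class below is illustrative, not an Ignite API):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// In a real deployment this would run inside an Ignite Service deployed as a
// cluster singleton, so that exactly one node in the cluster executes it.
public class FiveSecondScheduler {
    public static ScheduledExecutorService schedule(Runnable task, long periodMillis) {
        ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
        // fixed-rate: each run is scheduled relative to the previous run's start
        exec.scheduleAtFixedRate(task, 0, periodMillis, TimeUnit.MILLISECONDS);
        return exec;
    }

    public static void main(String[] args) throws InterruptedException {
        // 100 ms here only so the demo finishes quickly; use 5_000 for every 5 s
        ScheduledExecutorService exec = schedule(() -> System.out.println("tick"), 100);
        Thread.sleep(350);
        exec.shutdownNow(); // stop the periodic task
    }
}
```

The service's `init()`/`execute()` would create the executor and `cancel()` would shut it down, so failover to another node restarts the schedule automatically.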


Re: values retrieved from the cache are wrapped with JdkDynamicAopProxy while using springboot and JCache

2018-08-15 Thread Павлухин Иван
Hi daya,

Yes, it is a serialization issue. The field AdvisedSupport.methodCache is
marked transient, and the readObject method that should re-initialize this
field on deserialization is not called. So it remains null and an NPE is
thrown. I am not sure whether this behavior is expected or not.
Also, storing such a proxy in the cache does not look right to me. It is
better to store plain Java objects, like ReportsRepDetails in your example.

2018-08-14 19:21 GMT+03:00 ipavlukhin :

> Hi daya,
>
> Sorry for delay. I hope I will have a minute tomorrow to check this case.
>
>
>
> On 13.08.2018 15:04, daya airody wrote:
>
>> HI Ivan,
>>
>> I have uploaded a simple spring application reproducing the issue at below
>> link:
>>
>> https://github.com/daya-airody/ignite-caching
>>
>> When I use ConcurrentMapCache to cache results from a Spring JPA native
>> query, I am able to retrieve them correctly. However, once I enable Ignite
>> and JCache, I run into proxy issues. It looks like I am hitting some
>> serialization problem.
>>
>> Please review my code and help me troubleshoot this issue.
>>
>>
>> thanks in advance,
>>
>> --daya--
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>


-- 
Best regards,
Ivan Pavlukhin
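The mechanism behind the NPE can be reproduced with plain Java serialization, independent of Ignite or Spring: a transient field is simply skipped on write, and unless readObject re-initializes it, the deserialized copy sees null. A minimal illustration (class and field names are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

public class TransientDemo {
    static class Holder implements Serializable {
        final String name;                                   // serialized normally
        transient Map<String, String> cache = new HashMap<>(); // skipped by serialization

        Holder(String name) { this.name = name; }
    }

    static Holder roundTrip(Holder h) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(h);
        }
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            return (Holder) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Holder copy = roundTrip(new Holder("rates"));
        System.out.println(copy.name);  // "rates" -- regular field survives
        System.out.println(copy.cache); // null -- transient field is lost
    }
}
```

This is exactly what leaves AdvisedSupport.methodCache null after the proxy comes back from the cache, and another reason to cache plain DTOs rather than Spring proxies.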


Re: The Apache Ignite Book

2018-08-12 Thread Павлухин Иван
Hi Shamim,

In the 1st chapter, Introduction, the following is mentioned among possible
Ignite usages:
"From version 2.5 Apache Ignite will support transactions at SQL level"
This is not true: transactional SQL has not been released yet. There is a
plan to release it in the near future, but in my opinion it is better not
to tie unreleased features to concrete future versions.

2018-08-01 10:26 GMT+03:00 srecon :

> Dear Users,
>   Yesterday the first portion of our new title, The Apache Ignite Book, was
> published and is available at https://leanpub.com/ignitebook . The full
> table of contents and a sample chapter are also available through Leanpub.
>  The title is an agile-published book, and we will continue to update it to
> cover Apache Ignite version 2.x. Feel free to ask any questions and do not
> hesitate to make comments or suggestions.
>
> Best regards
>   Shamim Ahmed.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Ivan Pavlukhin


Re: values retrieved from the cache are wrapped with JdkDynamicAopProxy while using springboot and JCache

2018-08-07 Thread Павлухин Иван
Hi,

Looks like Spring itself wraps the result into a proxy. If you could provide
a reproducer, it would help to find the reason faster.

2018-08-07 21:09 GMT+03:00 daya airody :

> Values retrieved from the cache are wrapped with JdkDynamicAopProxy. This
> throws the NPE below:
>
> ---
> java.lang.NullPointerException: null
> at
> org.springframework.aop.framework.AdvisedSupport.
> getInterceptorsAndDynamicInterceptionAdvice(AdvisedSupport.java:481)
> at
> org.springframework.aop.framework.JdkDynamicAopProxy.
> invoke(JdkDynamicAopProxy.java:197)
> at com.sun.proxy.$Proxy255.getEmailAddress(Unknown Source)
> at
> com.partnertap.analytics.controller.AdminCannedController.getAllReps(
> AdminCannedController.java:51)
>
> ---
> I don't understand why cached values should be wrapped with proxies.
> JdkDynamicAopProxy uses methodCache, which is null when the value is
> retrieved from the cache.
>
> This is where I am caching the java method
> 
> @CacheResult(cacheName = "cannedReports")
> public List getAllReps(@CacheKey String managerId) {
> -
> In the object calling above method, I am trying to print, but getting NPE
> instead.
>
> 
> List allReps = reportsService.getAllReps(managerId);
> for (ReportsRepDetailsInterface repDetail : allReps) {
> logger.info("email->", repDetail.getEmailAddress());
> }
> -
>
> please help.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Ivan Pavlukhin


Re: OOM on connecting to Ignite via JDBC

2018-08-06 Thread Павлухин Иван
Hi Orel,

Are you sure the correct port is used? By default, port 10800 is used for
JDBC connections, but you have 8080 in your command line.

The error could be caused by reading unexpected input from the server and
interpreting it as a huge packet size. An attempt to allocate a buffer of
that size can simply end in an OOME.
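For reference, the same SQLLine invocation with the default thin-client port would look like this (the sqlline.sh path is taken from the earlier message; 10800 is the default, check your ClientConnectorConfiguration if it was changed):

```
/usr/share/apache-ignite/bin/sqlline.sh --verbose=true \
    -u jdbc:ignite:thin://127.0.0.1:10800
```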

2018-08-06 11:56 GMT+03:00 Orel Weinstock (ExposeBox) :

> I've followed the guide on setting up DBeaver to work with Ignite - I've
> set up a driver in DBeaver by selecting a class from the ignite-core jar,
> both version 2.6.0
>
> My cluster is up and running (e.g. write-through works) now that I've added
> the MySQL JDBC driver to the (web-console generated) pom.xml's dependencies,
> but I still can't connect to Ignite via DBeaver.
>
> On 6 August 2018 at 11:17, Denis Mekhanikov  wrote:
>
>> Orel,
>>
>> JDBC driver fails on handshake for some reason.
>> It fails with OOM when trying to allocate a byte array for the handshake
>> message.
>> But there is not much data transferred in it. Most probably, message size
>> is read improperly.
>>
>> Do you use matching versions of JDBC driver and Ignite nodes?
>>
>> Denis
>>
>>
>> вс, 5 авг. 2018 г. в 11:01, Orel Weinstock (ExposeBox) <
>> o...@exposebox.com>:
>>
>>> Hi all,
>>>
>>> Trying to get an Ignite cluster up and going for testing before taking
>>> it to production.
>>> I've set up Ignite 2.6 on a cluster with a single node on a Google Cloud
>>> Compute instance and I have the web console working as well.
>>>
>>> I've imported a table from MySQL and re-run the cluster with the
>>> resulting Docker image.
>>>
>>> Querying for the table via the web console proved fruitless, so I've
>>> switched to SQLLine (on the cluster itself). Still no cigar:
>>>
>>> moo@ignite:/home/moo$ /usr/share/apache-ignite/bin/sqlline.sh --verbose=true -u jdbc:ignite:thin://127.0.0.1:8080
>>> issuing: !connect jdbc:ignite:thin://127.0.0.1:8080 '' '' org.apache.ignite.IgniteJdbcThinDriver
>>> Connecting to jdbc:ignite:thin://127.0.0.1:8080
>>> java.lang.OutOfMemoryError: Java heap space
>>>   at org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.read(JdbcThinTcpIo.java:586)
>>>   at org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.read(JdbcThinTcpIo.java:575)
>>>   at org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.handshake(JdbcThinTcpIo.java:328)
>>>   at org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.start(JdbcThinTcpIo.java:223)
>>>   at org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.start(JdbcThinTcpIo.java:144)
>>>   at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.ensureConnected(JdbcThinConnection.java:148)
>>>   at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.<init>(JdbcThinConnection.java:137)
>>>   at org.apache.ignite.IgniteJdbcThinDriver.connect(IgniteJdbcThinDriver.java:157)
>>>   at sqlline.DatabaseConnection.connect(DatabaseConnection.java:156)
>>>   at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:204)
>>>   at sqlline.Commands.connect(Commands.java:1095)
>>>   at sqlline.Commands.connect(Commands.java:1001)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>   at java.lang.reflect.Method.invoke(Method.java:498)
>>>   at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
>>>   at sqlline.SqlLine.dispatch(SqlLine.java:791)
>>>   at sqlline.SqlLine.initArgs(SqlLine.java:566)
>>>   at sqlline.SqlLine.begin(SqlLine.java:643)
>>>   at sqlline.SqlLine.start(SqlLine.java:373)
>>>   at sqlline.SqlLine.main(SqlLine.java:265)
>>>
>>> Tried DBeaver - still OOM.
>>>
>>> Is there a way to get a list of all tables in the cache?
>>> Does anyone have any experience with this error? I can't tell if it's
>>> Ignite itself or just the JDBC client, though I'm leaning towards the
>>> client.
>>>
>>>
>>> --
>>>
>>> --
>>> *Orel Weinstock*
>>> Software Engineer
>>> Email:o...@exposebox.com 
>>> Website: www.exposebox.com
>>>
>>>
>
>
>
>


-- 
Best regards,
Ivan Pavlukhin


Re: Transaction return value problem

2018-08-03 Thread Павлухин Иван
Hi,

Denis, I wonder whether NESTED transaction propagation will work with Ignite?

2018-08-02 18:07 GMT+03:00 Denis Mekhanikov :

> Here you can find how to use Spring transaction management together with
> Ignite:
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/transactions/spring/SpringTransactionManager.html
>
> Transaction propagation is not a feature of the database itself; it is a
> Spring feature and does not depend on the underlying database. So, you can
> use it with Ignite as well.
>
> Denis
>
> чт, 2 авг. 2018 г. в 5:40, hulitao198758 :
>
>> With Ignite transactions enabled, how do we perform certain operations
>> only after a transaction has executed successfully? Is transaction
>> propagation currently supported, and how can Ignite inherit from Spring's
>> transactions?
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


-- 
Best regards,
Ivan Pavlukhin
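Following the javadoc link above, wiring Ignite into Spring's transaction management looks roughly like this (a sketch; bean and instance names are illustrative, and the tx namespace must be declared in the context as usual):

```xml
<bean id="transactionManager"
      class="org.apache.ignite.transactions.spring.SpringTransactionManager">
    <!-- name of the Ignite instance started elsewhere in the context -->
    <property name="igniteInstanceName" value="myGrid"/>
</bean>

<!-- enables @Transactional support backed by the manager above -->
<tx:annotation-driven transaction-manager="transactionManager"/>
```

With this in place, Spring's propagation settings on @Transactional apply to Ignite transactions like to any other transactional resource.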


Re: 3rd party persistence with Hive

2018-07-13 Thread Павлухин Иван
Hi engrdean,

I see 2 points here:
1. The log says that the load failed in multi-threaded mode, but then it
says that loading finished, which means it succeeded in single-threaded
mode.
2. Indeed, Ignite generates a query that Hive does not support. The
unsupported part is the 'mod' operator, which should be '%' in Hive.

If the data loads fine in single-threaded mode, that may be good enough.
If you would like to proceed with multi-threaded mode, you could try
SQLServerDialect, as it uses '%' in the problematic statement.
Unfortunately there is no dedicated Hive dialect in the core module;
perhaps you can introduce and even contribute one.

Do not hesitate to ask if something is not clear.
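A sketch of switching the dialect on the store factory (the data source bean name is illustrative, and the rest of the factory configuration from the web console stays as generated):

```xml
<bean class="org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory">
    <property name="dataSourceBean" value="hiveDataSource"/>
    <!-- SQLServerDialect emits '%' instead of 'mod', which Hive can parse -->
    <property name="dialect">
        <bean class="org.apache.ignite.cache.store.jdbc.dialect.SQLServerDialect"/>
    </property>
</bean>
```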

2018-07-13 0:31 GMT+03:00 engrdean :

> I've been attempting to setup 3rd party persistence with a Hive database
> but
> I'm running into some issues.  I used the Ignite web console to generate
> code but when I try to initiate a cache load I receive the error below.
> Has
> anyone else been successful in using a Hive database with 3rd party
> persistence?
>
> Just to be clear, I am not trying to implement the IgniteHadoopFileSystem
> implementation of IGFS.  The intent of using 3rd party persistence with
> Hive
> instead of IGFS is to avoid tightly coupling Ignite with our Hadoop
> environment.
>
> It appears to me that the SQL being generated by Ignite to communicate with
> Hive via the driver is probably not something that Hive can parse, but I
> would welcome any feedback and I'm happy to provide config files if that
> would be helpful.
>
> [15:02:46,049][INFO][mgmt-#568%test%][CacheJdbcPojoStore] Started load cache
> [cache=EmployeeCache, keyType=com.model.EmployeeKey]
> [15:02:46,088][WARNING][mgmt-#568%test%][CacheJdbcPojoStore] Failed to load
> entries from db in multithreaded mode, will try in single thread
> [cache=EmployeeCache, keyType=com.model.EmployeeKey]
> org.apache.hive.service.cli.HiveSQLException: Error while compiling
> statement: FAILED: SemanticException [Error 10011]: Line 1:129 Invalid
> function 'mod'
>         at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:262)
>         at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:248)
>         at org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:300)
>         at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:241)
>         at org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:437)
>         at org.apache.hive.jdbc.HivePreparedStatement.executeQuery(HivePreparedStatement.java:109)
>         at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery(NewProxyPreparedStatement.java:353)
>         at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.loadCache(CacheAbstractJdbcStore.java:763)
>         at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadCache(GridCacheStoreManagerAdapter.java:520)
>         at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.localLoadCache(GridDhtCacheAdapter.java:608)
>         at org.apache.ignite.internal.processors.cache.GridCacheProxyImpl.localLoadCache(GridCacheProxyImpl.java:217)
>         at org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheJob.localExecute(GridCacheAdapter.java:5520)
>         at org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheJobV2.localExecute(GridCacheAdapter.java:5569)
>         at org.apache.ignite.internal.processors.cache.GridCacheAdapter$TopologyVersionAwareJob.execute(GridCacheAdapter.java:6184)
>         at org.apache.ignite.compute.ComputeJobAdapter.call(ComputeJobAdapter.java:132)
>         at org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1855)
>         at org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:566)
>         at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6623)
>         at org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:560)
>         at org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:489)
>         at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>         at org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1123)
>         at org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1921)
>         at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
>         at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183)
>         at org.apache.ignite.internal.managers.communication.