Re: Local node is not added in baseline topology

2020-08-17 Thread Ali Bagewadi
Hello,
Yes, the local node (the joining node) is left out of the baseline topology
every time.
I have listed the steps to reproduce it below. I have also attached the config
files and the C++ code used to start Ignite.

1) I have two server nodes: one on the local machine and one on a remote host.
2) When I start the first node, the local node is added to the baseline
topology. Here adc1c71b-c21b-47aa-8be6-6fa5ebaaa035 is the local node.

Current topology version: 2 (Coordinator:
ConsistentId=adc1c71b-c21b-47aa-8be6-6fa5ebaaa035, Order=1)

Baseline nodes:
ConsistentId=adc1c71b-c21b-47aa-8be6-6fa5ebaaa035, State=ONLINE, Order=1

Number of baseline nodes: 1

Other nodes:
ConsistentId=5a9f4e0d-fba3-4786-aa61-936d2f207333, Order=2

3) However, when I start the second node on the remote host, the local node is
not included in the baseline topology. As far as I can observe, it is the
joining node that hits this problem. (Here the local node is
5a9f4e0d-fba3-4786-aa61-936d2f207333, which is not included in the baseline
topology.)
Current topology version: 2 (Coordinator:
ConsistentId=adc1c71b-c21b-47aa-8be6-6fa5ebaaa035, Order=1)

Baseline nodes:
ConsistentId=adc1c71b-c21b-47aa-8be6-6fa5ebaaa035, State=ONLINE, Order=1

Number of baseline nodes: 1

Other nodes:
ConsistentId=5a9f4e0d-fba3-4786-aa61-936d2f207333, Order=2
Number of other nodes: 1


4) Everything works fine when I add the local node using the control.sh
--baseline add command.

We are using Apache Ignite with C++. Below is the snippet that starts Ignite
from the C++ code:
IgniteConfiguration cfg;
cfg.springCfgPath = std::string("/home/dsudev/DataBaseConfig.xml");

/* Start a node to access DataBase */
Ignite node = Ignition::Start(cfg);
node.SetActive(true);

/* Get cache instance */
mCache = node.GetOrCreateCache(CACHE_NAME);

Below are the config files:

Config file for local node:

[The attached Spring XML was stripped by the mailing-list archive; only the
namespace declarations and the discovery address list survive. The discovery
addresses configured for the local node were:
127.0.0.1
192.168.111.112:7912]

Config file for remote node:


[The attached Spring XML was stripped by the mailing-list archive; only the
namespace declarations and the discovery address list survive. The discovery
addresses configured for the remote node were:
127.0.0.1
192.168.111.111:7912]
Thank you,
Ali.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite Queue (Documentation or Code defect)?

2020-08-17 Thread Humphrey
I see the code snippet went missing from my previous message; here it is:

Ignite ignite = Ignition.start();
IgniteQueue queue = ignite.queue("Queue", 0, null);
ignite.close();

For the call above I would expect an exception to be thrown if the queue
cannot be fetched; instead it gives me a queue that is null.

Check the JavaDoc:
Will get a named queue from cache and create one if it has not been created
yet and cfg is not null. If queue is present already, queue properties will
not be changed. Use collocation for CacheMode.PARTITIONED caches if you have
lots of relatively small queues as it will make fetching, querying, and
iteration a lot faster. If you have few very large queues, then you should
consider turning off collocation as they simply may not fit in a single
node's memory.

Params:
name – Name of queue.
cap – Capacity of queue, 0 for unbounded queue. Ignored if cfg is null.
cfg – Queue configuration if new queue should be created.

Returns:
Queue with given properties.

Throws:
org.apache.ignite.IgniteException – If queue could not be fetched or
created.






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite Queue

2020-08-17 Thread Humphrey
According to the documentation (java doc) of ignite.queue():
Will get a named queue from cache and create one if it has not been created
yet and cfg is not null. If queue is present already, queue properties will
not be changed. Use collocation for CacheMode.PARTITIONED caches if you have
lots of relatively small queues as it will make fetching, querying, and
iteration a lot faster. If you have few very large queues, then you should
consider turning off collocation as they simply may not fit in a single
node's memory.

Params:
name – Name of queue.
cap – Capacity of queue, 0 for unbounded queue. Ignored if cfg is null.
cfg – Queue configuration if new queue should be created.
Returns:
Queue with given properties.
Throws:
org.apache.ignite.IgniteException – If queue could not be fetched or
created.

But when getting a non-existing queue without a queue configuration, the
returned queue is null and no exception is thrown.



The documentation should say that it returns a queue or null, but it doesn't.
It says that it will throw an exception if the queue could not be fetched or
created, but it doesn't. It just returns null.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cache configuration

2020-08-17 Thread Evgenii Zhuravlev
Hi,

You can add a cache configuration to the XML file with a name ending in *
(for example, cache-*). After this, caches whose names match that template
(e.g. cache-1 for template cache-*) will use its cache configuration.
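For illustration, such a wildcard template could look like this inside the
IgniteConfiguration bean (a sketch; the cache mode and backup count are
example values, not required settings):

   <!-- Any cache whose name matches "cache-*" (e.g. "cache-1")
        picks up this configuration as its template. -->
   <property name="cacheConfiguration">
       <list>
           <bean class="org.apache.ignite.configuration.CacheConfiguration">
               <property name="name" value="cache-*"/>
               <property name="cacheMode" value="PARTITIONED"/>
               <property name="backups" value="1"/>
           </bean>
       </list>
   </property>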

Evgenii

Sun, Aug 16, 2020 at 07:03, C Ravikiran :

> As for the below, I would have to change both the XML and the Java code.
>
> Is there any other possibility: can we achieve the cache configuration by
> changing only the XML file, without changing the Java code?
>
> We don't have access to the Java code, only to the configuration
> XML file.
>
> Could you please help me with this?
>
> Regards,
> Ravikiran C
>
>
>
>
> On Sun, 16 Aug, 2020, 12:57 am John Smith,  wrote:
>
>> You can create templates in the XML, and programmatically, when you call
>> getOrCreate(), you can specify the template to use and pass in a random
>> name for the cache name ...
>>
>>
>> https://apacheignite.readme.io/docs/cache-template#:~:text=Cache%20templates%20are%20useful%20when,CREATE%20TABLE%20and%20REST%20commands
>> .
>>
>> On Sat., Aug. 15, 2020, 8:53 a.m. itsmeravikiran.c, <
>> itsmeravikira...@gmail.com> wrote:
>>
>>> My cache ids are dynamic.
>>> Is it possible to add the cache configuration in XML?
>>> I have checked that the name property is mandatory, but I cannot add the
>>> name as it is dynamic.
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>


Re: Operation block on Cluster recovery/rebalance.

2020-08-17 Thread Denis Magda
>
> But on client reconnect, doesn't it mean it will still block until the
> cluster is active even if I get new IgniteCache instance?


No, the client will be getting an exception on an attempt to get an
IgniteCache instance.

-
Denis


On Fri, Aug 14, 2020 at 4:14 PM John Smith  wrote:

> Yeah, I can maybe use the Vert.x event bus or something to do this... But
> now I have to tie the Ignite instance to the IgniteCache repository I wrote.
>
> But on client reconnect, doesn't it mean it will still block until the
> cluster is active even if I get new IgniteCache instance?
>
> On Fri, 14 Aug 2020 at 18:22, Denis Magda  wrote:
>
>> @Evgenii Zhuravlev , @Ilya Kasnacheev
>> , any thoughts on this?
>>
>> As a dirty workaround, you can update your cache references on client
>> reconnect events. You will get an exception from calling
>> ignite.cache(cacheName) while the cluster is not yet activated.
>> Does this work for you?
>>
>> -
>> Denis
>>
>>
>> On Fri, Aug 14, 2020 at 3:12 PM John Smith 
>> wrote:
>>
>>> Is there any work around? I can't have an HTTP server block on all
>>> requests.
>>>
>>> 1- I need to figure out why I lose a server node every few weeks; when the
>>> nodes are rebooted, the cluster stays inactive until they are back.
>>>
>>> 2- Implement some kind of logic on the client side not to block the HTTP
>>> part...
>>>
>>> Can the IgniteCache instance be notified of disconnect events, so I can
>>> maybe tell the repository class to set a flag to skip the operation?
>>>
>>>
>>> On Fri., Aug. 14, 2020, 5:17 p.m. Denis Magda, 
>>> wrote:
>>>
 My guess is that it's standard behavior for all operations (SQL,
 key-value, compute, etc.). But I'll let the maintainers of those modules
 clarify.

 -
 Denis


 On Fri, Aug 14, 2020 at 1:44 PM John Smith 
 wrote:

> Hi Denis, so to understand: is it all operations or just the query?
>
> On Fri., Aug. 14, 2020, 12:53 p.m. Denis Magda, 
> wrote:
>
>> John,
>>
>> Ok, we nailed it. That's the current expected behavior. Generally, I
>> agree with you that the platform should support an option when operations
>> fail if the cluster is deactivated. Could you propose the change by
>> starting a discussion on the dev list? You can refer to this user list
>> discussion for reference. Let me know if you need help with this.
>>
>> -
>> Denis
>>
>>
>> On Thu, Aug 13, 2020 at 5:55 PM John Smith 
>> wrote:
>>
>>> No, I reuse the instance. The cache instance is created once at
>>> startup of the application and I pass it to my "repository" class:
>>>
>>> public abstract class AbstractIgniteRepository implements
>>> CacheRepository {
>>>     public final long DEFAULT_OPERATION_TIMEOUT = 2000;
>>>
>>>     private Vertx vertx;
>>>     private IgniteCache cache;
>>>
>>>     AbstractIgniteRepository(Vertx vertx, IgniteCache cache) {
>>>         this.vertx = vertx;
>>>         this.cache = cache;
>>>     }
>>>
>>>     ...
>>>
>>>     Future<List<JsonArray>> query(final String sql, final long
>>> timeoutMs, final Object... args) {
>>>         final Promise<List<JsonArray>> promise = Promise.promise();
>>>
>>>         vertx.setTimer(timeoutMs, l -> {
>>>             // FIRES IF THE BLOCK BELOW DOESN'T COMPLETE IN TIME.
>>>             promise.tryFail(new TimeoutException("Cache operation did
>>> not complete within: " + timeoutMs + " Ms."));
>>>         });
>>>
>>>         vertx.<List<JsonArray>>executeBlocking(code -> {
>>>             SqlFieldsQuery query = new
>>> SqlFieldsQuery(sql).setArgs(args);
>>>             query.setTimeout((int) timeoutMs, TimeUnit.MILLISECONDS);
>>>
>>>             try (QueryCursor<List<?>> cursor = cache.query(query)) { //
>>> <--- BLOCKS HERE.
>>>                 List<JsonArray> rows = new ArrayList<>();
>>>                 Iterator<List<?>> iterator = cursor.iterator();
>>>
>>>                 while (iterator.hasNext()) {
>>>                     List<?> currentRow = iterator.next();
>>>                     JsonArray row = new JsonArray();
>>>
>>>                     currentRow.forEach(o -> row.add(o));
>>>
>>>                     rows.add(row);
>>>                 }
>>>
>>>                 code.complete(rows);
>>>             } catch (Exception ex) {
>>>                 code.fail(ex);
>>>             }
>>>         }, result -> {
>>>             if (result.succeeded()) {
>>>                 promise.tryComplete(result.result());
>>>             } else {
>>>                 promise.tryFail(result.cause());
>>>             }
>>>         });
>>>
>>>         return promise.future();
>>>     }
>>>
>>>     public <T> T cache() {
>>>         return (T) cache;
>>>     }
>>> }
>>>
>>>
>>>
>>> On Thu, 13 Aug 2020 at 16:29, Denis Magda  wrote:

Lost node again.

2020-08-17 Thread John Smith
Hi guys it seems every couple of weeks we lose a node... Here are the logs:
https://www.dropbox.com/sh/8cv2v8q5lcsju53/AAAU6ZSFkfiZPaMwHgIh5GAfa?dl=0

And some extra details. Maybe I need to do more tuning than what is already
mentioned below; maybe set a higher timeout?
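If it is the timeout, one relevant knob is IgniteConfiguration's
failureDetectionTimeout; a sketch of the XML (the 30-second value is only an
illustration, not a recommendation):

   <!-- Inside the IgniteConfiguration bean: raise the failure detection
        timeout so short GC pauses or network hiccups don't drop the node. -->
   <property name="failureDetectionTimeout" value="30000"/>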

3 server nodes and 9 clients (client = true)

Performance-wise the cluster is not doing any kind of high volume; on average
it does about 15-20 puts/gets/queries (in any combination) per
30-60 seconds.

The biggest cache we have holds 3 million records, distributed with 1 backup,
using the following template:

[The cache template XML was stripped by the mailing-list archive.]

Persistence is configured:

[The persistence configuration XML was stripped by the mailing-list archive.]
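For reference, a minimal native persistence configuration of the kind
described above might look like the following (a sketch; the size is
illustrative, not our actual value):

   <property name="dataStorageConfiguration">
       <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
           <property name="defaultDataRegionConfiguration">
               <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                   <!-- Persist the default region to disk. -->
                   <property name="persistenceEnabled" value="true"/>
                   <property name="maxSize" value="#{4L * 1024 * 1024 * 1024}"/>
               </bean>
           </property>
       </bean>
   </property>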

We also followed the tuning instructions for GC and I/O:
if [ -z "$JVM_OPTS" ] ; then
JVM_OPTS="-Xms6g -Xmx6g -server -XX:MaxMetaspaceSize=256m"
fi

#
# Uncomment the following GC settings if you see spikes in your throughput
due to Garbage Collection.
#
JVM_OPTS="$JVM_OPTS -XX:+UseG1GC -XX:+AlwaysPreTouch
-XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC"
sysctl -w vm.dirty_writeback_centisecs=500
sysctl -w vm.dirty_expire_centisecs=500


Re: Ignite 3rd party persistency DataSourceBean Config in Java

2020-08-17 Thread Ilya Kasnacheev
Hello!

In this case, you should be using setDataSourceFactory() instead of
setDataSourceBean().
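(For reference, setDataSourceBean() only resolves if a bean with that name is
defined in the Spring XML the node was started with. A sketch of such a
definition, with an illustrative driver class, URL, and credentials:)

   <bean id="dsMySQL_Test" class="com.mysql.cj.jdbc.MysqlDataSource">
       <!-- Connection details are placeholders, not the poster's values. -->
       <property name="URL" value="jdbc:mysql://localhost:3306/mydb"/>
       <property name="user" value="ignite"/>
       <property name="password" value="secret"/>
   </bean>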

Regards,
-- 
Ilya Kasnacheev


Mon, Aug 17, 2020 at 10:21, marble.zh...@coinflex.com <
marble.zh...@coinflex.com>:

> Thanks Srikanta,
>
> I'm using the code below, with CacheJdbcPojoStoreFactory. For the
> dsMySQL_Test dataSourceBean, I tried configuring it in default-config.xml,
> but it shows
> 'IgniteException: Failed to load bean in application context
> [beanName=dsMySQL_Test, ', so I am trying to find out which object can be
> used to configure the DataSourceBean.
>
> CacheJdbcPojoStoreFactory cacheJdbcPojoStoreFactory = new
> CacheJdbcPojoStoreFactory<>();
>
> cacheJdbcPojoStoreFactory.setDataSourceBean("dsMySQL_Test");
>
> cacheJdbcPojoStoreFactory.setDialect(new MySQLDialect());
> JdbcType jdbcType = new JdbcType();
> jdbcType.setCacheName(Student.class.getSimpleName());
>
> thanks.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Pure memory mode error in Azure Kubernetes

2020-08-17 Thread Ilya Kasnacheev
Hello!

The data region size is purely a configuration limit; it is not affected by
hardware or the OS. So you must be doing something different in the scenario
where you are getting this error.

I recommend enabling page eviction and seeing whether the error goes away
(at the cost of discarding data), then tuning your region size.
Regards,
-- 
Ilya Kasnacheev


Mon, Aug 17, 2020 at 12:10, xiaweidong <529566...@qq.com>:

> Thank you for your reply.
> I'm pretty sure I didn't use more than 2 GB of memory.
>
> Roughly 7,000 or so pieces of data would fill up 2 GB of memory, and when I
> first ran on Linux, memory consumption was far less than 2 GB. When I moved
> the program to AKS, it started reporting errors.
>
> I've tested this: when I run Ignite on the k8s node for the first time, it
> runs perfectly; when I restart Ignite and write data, it starts hitting OOM
> after a few writes. When I restart the k8s node and run Ignite again, it
> doesn't report an error.
> I think it's a compatibility issue between the two, but that's just my
> personal guess.
>
> --
> Sent from the Apache Ignite Users mailing list archive
>  at Nabble.com.
>


Re: Local node is not added in baseline topology

2020-08-17 Thread Ilya Kasnacheev
Hello!

Does it happen every time (adding local node not working)? If so, can you
please share exact steps to reproduce?

Regards,
-- 
Ilya Kasnacheev


Thu, Aug 13, 2020 at 15:17, Ali Bagewadi :

> Hello,
> Thanks for the response.
> However, my requirements are:
> 1) I don't want to add the nodes manually using the control script, as it
> is not feasible to get the node id at runtime on the hardware.
> 2) I have used the auto-adjust command, but it is unable to add the local
> node to the baseline topology.
>
> Currently I am using the commands below to add a node to the baseline
> topology and to enable auto-adjust, respectively:
>
> control.sh --baseline add consistentID
> control.sh --baseline auto_adjust enable timeout 5000
>
> Please suggest.
>
> Thank you,
> Ali
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: IpFinder with domain

2020-08-17 Thread Ilya Kasnacheev
Hello!

You cannot use load balancing with thick client nodes. They need to be able
to connect to the server nodes' discovery and communication ports via a
direct address.

You can try load balancing with a thin client.
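In other words, the thick client's discovery SPI must list the servers' real
addresses and discovery ports directly, along these lines (a sketch, using
the server address from your example):

   <property name="discoverySpi">
       <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
           <property name="ipFinder">
               <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                   <property name="addresses">
                       <list>
                           <!-- Direct server address and discovery port
                                range, not the load balancer. -->
                           <value>42.1.129.123:47500..47502</value>
                       </list>
                   </property>
               </bean>
           </property>
       </bean>
   </property>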

Regards,
-- 
Ilya Kasnacheev


Thu, Aug 13, 2020 at 17:26, kay :

> Hello,
>
> In my case, 'cache.ignite.com' is an L4 load balancer and the port is 80.
>
> cache.ignite.com (e.g. 41.1.166.123) will connect to the Ignite servers
> (e.g. 42.1.129.123:47500, 42.1.129.123:47501, ...).
>
> Is this possible, or should I define a port for connecting to the Ignite
> server?
>
> I will be waiting for a reply!
> Thank you so much!
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite 3rd party persistency DataSourceBean Config in Java

2020-08-17 Thread marble.zh...@coinflex.com
Thanks Srikanta,

I'm using the code below, with CacheJdbcPojoStoreFactory. For the dsMySQL_Test
dataSourceBean, I tried configuring it in default-config.xml, but it shows
'IgniteException: Failed to load bean in application context
[beanName=dsMySQL_Test, ', so I am trying to find out which object can be
used to configure the DataSourceBean.

CacheJdbcPojoStoreFactory cacheJdbcPojoStoreFactory = new
CacheJdbcPojoStoreFactory<>();

cacheJdbcPojoStoreFactory.setDataSourceBean("dsMySQL_Test");

cacheJdbcPojoStoreFactory.setDialect(new MySQLDialect());
JdbcType jdbcType = new JdbcType();
jdbcType.setCacheName(Student.class.getSimpleName());

thanks.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Pure memory mode error in Azure Kubernetes

2020-08-17 Thread xiaweidong
Thank you for your reply.
I'm pretty sure I didn't use more than 2 GB of memory.

Roughly 7,000 or so pieces of data would fill up 2 GB of memory, and when I
first ran on Linux, memory consumption was far less than 2 GB. When I moved
the program to AKS, it started reporting errors.

I've tested this: when I run Ignite on the k8s node for the first time, it
runs perfectly; when I restart Ignite and write data, it starts hitting OOM
after a few writes. When I restart the k8s node and run Ignite again, it
doesn't report an error.
I think it's a compatibility issue between the two, but that's just my
personal guess.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: Pure memory mode error in Azure Kubernetes

2020-08-17 Thread Stephen Darlington
It’s exactly as the error message says: you ran out of memory.

org.apache.ignite.internal.mem.IgniteOutOfMemoryException: Out of memory in 
data region [name=Default_Region, initSize=256.0 MiB, maxSize=2.0 GiB, 
persistenceEnabled=false]

You can:
- Allocate more memory (2 GB isn't very much)
- Use persistence
- Set an eviction policy, so that "old" data is automatically removed
Regards,
Stephen

> On 17 Aug 2020, at 09:02, xiaweidong <529566...@qq.com> wrote:
> 
> Hello everyone, I have a question that has been bothering me ; I can not 
> create a Ignite Kubernetes cluster in Azure Kubernetes from the file 
> 'quark-ignite.xml' quark-ignite.xml: I set persistenceEnabled false , When I 
> write the data , ignite will report an error ; From the error log, the memory 
> resource is requested until the set maximum memory is reached; Below is the 
> detailed log information: [2020-08-17 07:40:12] [INFO] [QUARK] 
> [202:client-connector-#116%quark%] 
> [org.apache.ignite.logger.log4j2.Log4J2Logger.info(Log4J2Logger.java:478)] [] 
> SERVER Allocated next memory segment [plcName=Default_Region, chunkSize=268.4 
> MB] [2020-08-17 07:40:13] [INFO] [QUARK] [205:client-connector-#119%quark%] 
> [org.apache.ignite.logger.log4j2.Log4J2Logger.info(Log4J2Logger.java:478)] [] 
> SERVER Allocated next memory segment [plcName=Default_Region, chunkSize=268.4 
> MB] [2020-08-17 07:40:13] [INFO] [QUARK] [202:client-connector-#116%quark%] 
> [org.apache.ignite.logger.log4j2.Log4J2Logger.info(Log4J2Logger.java:478)] [] 
> SERVER Allocated next memory segment [plcName=Default_Region, chunkSize=268.4 
> MB] [2020-08-17 07:40:14] [INFO] [QUARK] [204:client-connector-#118%quark%] 
> [org.apache.ignite.logger.log4j2.Log4J2Logger.info(Log4J2Logger.java:478)] [] 
> SERVER Allocated next memory segment [plcName=Default_Region, chunkSize=268.4 
> MB] [2020-08-17 07:40:15] [INFO] [QUARK] [202:client-connector-#116%quark%] 
> [org.apache.ignite.logger.log4j2.Log4J2Logger.info(Log4J2Logger.java:478)] [] 
> SERVER Allocated next memory segment [plcName=Default_Region, chunkSize=268.4 
> MB] [2020-08-17 07:40:15] [INFO] [QUARK] [201:client-connector-#115%quark%] 
> [org.apache.ignite.logger.log4j2.Log4J2Logger.info(Log4J2Logger.java:478)] [] 
> SERVER Allocated next memory segment [plcName=Default_Region, chunkSize=268.4 
> MB] [2020-08-17 07:40:16] [INFO] [QUARK] [201:client-connector-#115%quark%] 
> [org.apache.ignite.logger.log4j2.Log4J2Logger.info(Log4J2Logger.java:478)] [] 
> SERVER Allocated next memory segment [plcName=Default_Region, chunkSize=268.4 
> MB] ^-- Enable Ignite persistence 
> (DataRegionConfiguration.persistenceEnabled) at 
> org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager.ensureFreeSpaceForInsert(IgniteCacheDatabaseSharedManager.java:1063)
>  [ignite-core-2.8.1.jar:2.8.1] at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:6160)
>  [ignite-core-2.8.1.jar:2.8.1] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:3817)
>  [ignite-core-2.8.1.jar:2.8.1] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:1955)
>  [ignite-core-2.8.1.jar:2.8.1] at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke0(IgniteCacheOffheapManagerImpl.java:1705)
>  [ignite-core-2.8.1.jar:2.8.1] at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:445)
>  [ignite-core-2.8.1.jar:2.8.1] at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2557)
>  [ignite-core-2.8.1.jar:2.8.1] ^-- Enable eviction or expiration policies at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1839)
>  [ignite-core-2.8.1.jar:2.8.1] at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update(GridDhtAtomicCache.java:2102)
>  [ignite-core-2.8.1.jar:2.8.1] at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1719)
>  [ignite-core-2.8.1.jar:2.8.1] at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:300)
>  [ignite-core-2.8.1.jar:2.8.1] at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:249)
>  [ignite-core-2.8.1.jar:2.8.1] at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put0(GridDhtAtomicCache.java:624)
>  [ignite-core-2.8.1.jar:2.8.1] at 
> org.apache.ignite.internal.processors.platform.client.cache.ClientCachePutRequest.process(ClientCachePutRequest.java:40)
>  

Pure memory mode error in Azure Kubernetes

2020-08-17 Thread xiaweidong
Hello everyone, I have a question that has been bothering me: I cannot create
an Ignite Kubernetes cluster in Azure Kubernetes from the file
'quark-ignite.xml'.

[The quark-ignite.xml contents were stripped by the mailing-list archive.]
I set persistenceEnabled to false. When I write data, Ignite reports an
error; from the error log, memory keeps being requested until the configured
maximum is reached. Below is the detailed log information:
[2020-08-17
07:40:12] [INFO] [QUARK] [202:client-connector-#116%quark%]
[org.apache.ignite.logger.log4j2.Log4J2Logger.info(Log4J2Logger.java:478)]
[] SERVER Allocated next memory segment [plcName=Default_Region,
chunkSize=268.4 MB][2020-08-17 07:40:13] [INFO] [QUARK]
[205:client-connector-#119%quark%]
[org.apache.ignite.logger.log4j2.Log4J2Logger.info(Log4J2Logger.java:478)]
[] SERVER Allocated next memory segment [plcName=Default_Region,
chunkSize=268.4 MB][2020-08-17 07:40:13] [INFO] [QUARK]
[202:client-connector-#116%quark%]
[org.apache.ignite.logger.log4j2.Log4J2Logger.info(Log4J2Logger.java:478)]
[] SERVER Allocated next memory segment [plcName=Default_Region,
chunkSize=268.4 MB][2020-08-17 07:40:14] [INFO] [QUARK]
[204:client-connector-#118%quark%]
[org.apache.ignite.logger.log4j2.Log4J2Logger.info(Log4J2Logger.java:478)]
[] SERVER Allocated next memory segment [plcName=Default_Region,
chunkSize=268.4 MB][2020-08-17 07:40:15] [INFO] [QUARK]
[202:client-connector-#116%quark%]
[org.apache.ignite.logger.log4j2.Log4J2Logger.info(Log4J2Logger.java:478)]
[] SERVER Allocated next memory segment [plcName=Default_Region,
chunkSize=268.4 MB][2020-08-17 07:40:15] [INFO] [QUARK]
[201:client-connector-#115%quark%]
[org.apache.ignite.logger.log4j2.Log4J2Logger.info(Log4J2Logger.java:478)]
[] SERVER Allocated next memory segment [plcName=Default_Region,
chunkSize=268.4 MB][2020-08-17 07:40:16] [INFO] [QUARK]
[201:client-connector-#115%quark%]
[org.apache.ignite.logger.log4j2.Log4J2Logger.info(Log4J2Logger.java:478)]
[] SERVER Allocated next memory segment [plcName=Default_Region,
chunkSize=268.4 MB]  ^-- Enable Ignite persistence
(DataRegionConfiguration.persistenceEnabled)at
org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager.ensureFreeSpaceForInsert(IgniteCacheDatabaseSharedManager.java:1063)
[ignite-core-2.8.1.jar:2.8.1]   at
org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:6160)
[ignite-core-2.8.1.jar:2.8.1]   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:3817)
[ignite-core-2.8.1.jar:2.8.1]   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:1955)
[ignite-core-2.8.1.jar:2.8.1]   at
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke0(IgniteCacheOffheapManagerImpl.java:1705)
[ignite-core-2.8.1.jar:2.8.1]   at
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:445)
[ignite-core-2.8.1.jar:2.8.1]   at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2557)
[ignite-core-2.8.1.jar:2.8.1]  ^-- Enable eviction or expiration policies   
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1839)
[ignite-core-2.8.1.jar:2.8.1]   at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update(GridDhtAtomicCache.java:2102)
[ignite-core-2.8.1.jar:2.8.1]   at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1719)
[ignite-core-2.8.1.jar:2.8.1]   at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:300)
[ignite-core-2.8.1.jar:2.8.1]   at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:249)
[ignite-core-2.8.1.jar:2.8.1]   at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put0(GridDhtAtomicCache.java:624)
[ignite-core-2.8.1.jar:2.8.1]   at
org.apache.ignite.internal.processors.platform.client.cache.ClientCachePutRequest.process(ClientCachePutRequest.java:40)
[ignite-core-2.8.1.jar:2.8.1]   at
org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:49)

Re: Ignite 3rd party persistency DataSourceBean Config in Java

2020-08-17 Thread marble.zh...@coinflex.com
Need suggestions, thanks a lot.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/