Re: Load balancing ignite get requests
Hi, A get() operation from a client always goes to the primary node. If you run a compute task on other nodes, where each does a get() for that key, it will read the local value. REPLICATED has many other optimizations, for example for SQL queries. Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Grid state check before it's completely caught up
Hi, You can, for example, set the SYNC rebalance mode for your replicated cache [1]. In that case all cache operations will be blocked until rebalancing is finished, and when it's done you'll have a fully replicated cache. But this will block the cache on each topology change. [1] https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/CacheConfiguration.html#setRebalanceMode-org.apache.ignite.cache.CacheRebalanceMode- Thanks! -Dmitry
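A minimal Java sketch of such a configuration (the cache name is hypothetical):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheRebalanceMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// SYNC rebalancing: cache operations block until rebalancing
// completes after each topology change.
CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("myReplicatedCache");
cacheCfg.setCacheMode(CacheMode.REPLICATED);
cacheCfg.setRebalanceMode(CacheRebalanceMode.SYNC);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setCacheConfiguration(cacheCfg);

Ignition.start(cfg);
```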
Re: Connection between servers.
Hi, Usually it's enough to open the ports for communication and discovery; their default values are 47500 and 47100. If you run more than one node per machine, you'll need to open a port range: 47500..47509 and 47100..47109. You can always configure other values [1, 2] [1] https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpi.html#setLocalPort-int- [2] https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpi.html#setLocalPort-int- Thanks! -Dmitry
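A sketch of overriding the default ports (the port values below are examples):

```java
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

IgniteConfiguration cfg = new IgniteConfiguration();

TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
discoverySpi.setLocalPort(48500);   // custom discovery port (default is 47500)
discoverySpi.setLocalPortRange(10); // allows 48500..48509 for several nodes per machine

TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
commSpi.setLocalPort(48100);        // custom communication port (default is 47100)

cfg.setDiscoverySpi(discoverySpi);
cfg.setCommunicationSpi(commSpi);
```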
Re: What is the precise definition of classes eligible for P2P-classloading?
Hi, There are no such limitations on peer class loading, but it was designed for, and works with, compute jobs, remote filters and queries only. All unknown classes from tasks or queries will be deployed in the cluster with their dependencies, according to the deployment mode [1]. With each job Ignite actually sends the deployment that is used to deploy all required classes. [1] https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/DeploymentMode.html Thanks! -Dmitry
Re: distributed-ddl extended-parameters section showing 404 page not found
Hi, Where did you find it? It might be a broken link. Thanks! -Dmitry
Re: Some problems when using Ignite
Hi, Dynamic schema changes are available only via SQL/JDBC [1]. BTW, caches created via SQL can be accessed from the Java API if you add the SQL_PUBLIC_ prefix to the table name. For example: ignite.cache("SQL_PUBLIC_TABLENAME"). [1] https://apacheignite-sql.readme.io/docs/ddl Thanks! -Dmitry
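A short sketch of this (the table name Person is hypothetical):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

// A table created via SQL in the PUBLIC schema, e.g.
// CREATE TABLE Person (...), is backed by a cache named
// SQL_PUBLIC_<TABLE_NAME> (upper case) that the Java API can access.
Ignite ignite = Ignition.ignite();
IgniteCache<?, ?> cache = ignite.cache("SQL_PUBLIC_PERSON");
```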
Re: Eviction Policy on Dirty data
Hi, Could you please explain how you update the database? Do you use a CacheStore with writeThrough, or save manually? Anyway, you can update data with a custom expiry policy: cache.withExpiryPolicy(policy) [1] [1] https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html#withExpiryPolicy-javax.cache.expiry.ExpiryPolicy- Thanks! -Dmitry
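A sketch of withExpiryPolicy() (cache name, key/value types, and the 5-minute duration are hypothetical; an already started Ignite instance named ignite is assumed):

```java
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.IgniteCache;

// The entry below expires 5 minutes after creation; the cache-wide
// expiry settings are left untouched.
IgniteCache<Integer, String> cache = ignite.cache("myCache");

cache.withExpiryPolicy(new CreatedExpiryPolicy(new Duration(TimeUnit.MINUTES, 5)))
     .put(1, "value");
```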
Re: what are the alternative for IgniteQueue for FIFO transactional, reliable, low-latency messaging
Hi, I think the best way here would be to read items directly from Kafka, process and store them in a cache, and remember the Kafka stream offset in another cache. If the node crashes, your service can restart from the last point (offset). Thanks! -Dmitry
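The suggested pattern could be sketched like this (all cache names, types, and the transform() helper are hypothetical; an already started Ignite instance named ignite is assumed):

```java
import org.apache.ignite.IgniteCache;

// Process a Kafka record, store the result, and checkpoint the offset,
// so the service can resume from the last processed point after a crash.
IgniteCache<String, String> dataCache = ignite.cache("processedData");
IgniteCache<Integer, Long> offsetCache = ignite.cache("kafkaOffsets");

void process(int partition, long offset, String key, String value) {
    dataCache.put(key, transform(value)); // transform() is a placeholder
    offsetCache.put(partition, offset);   // checkpoint: last processed offset
}

long resumeFrom(int partition) {
    Long saved = offsetCache.get(partition);
    return saved == null ? 0L : saved + 1; // restart from the next offset
}
```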
Re: Transaction Throughput in Data Streamer
Hi, It looks like most of the time transactions in the receiver are waiting for locks. Any lock adds serialization to parallel code, and in your case I don't think it's possible to tune throughput with settings, because ten transactions may wait while one finishes. You need to change the algorithm. The most effective way would be to stream data with a DataStreamer with allowOverwrite disabled and without any transactions. Stream data independently if possible, and avoid serial code and non-local cache reads/writes. Thanks! -Dmitry
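A sketch of the fast path (cache name and data are hypothetical; an already started Ignite instance named ignite is assumed):

```java
import org.apache.ignite.IgniteDataStreamer;

// Stream entries without transactions; allowOverwrite(false) (the default)
// uses the fastest, batched update path.
try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("myCache")) {
    streamer.allowOverwrite(false);

    for (int i = 0; i < 1_000_000; i++)
        streamer.addData(i, "value-" + i);
}
```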
Re: System cache's DataRegion size is configured to 40 MB.
Hi, Yes, you're right, it was missed during refactoring. I've created a ticket [1], you may fix it and contribute to Apache Ignite :) [1] https://issues.apache.org/jira/browse/IGNITE-9259 Thanks! -Dmitry
Re: Spark to Ignite Data load, Ignite node crashashing
Hi, Looks like it was killed by the kernel. Check the logs for the OOM killer: grep -i 'killed process' /var/log/messages If the process was killed by Linux, correct your config: you might have set too much memory for Ignite page memory, so set lower values [1]. If not, try to find the PID in the logs; maybe it was killed for another reason. [1] https://apacheignite.readme.io/docs/memory-configuration Thanks! -Dmitry
Re: Free Network Segmentation (aka split brain) plugin available
Hi, Nice work, thank you! I'm sure it will be very useful. Looking forward to your contributions to the Apache Ignite project ;) Thanks! -Dmitry
Re: Partition distribution across nodes
Hi, Ignite by default uses the Rendezvous hashing algorithm [1], and RendezvousAffinityFunction is the implementation responsible for partition distribution [2]. This significantly reduces traffic during partition rebalancing. [1] https://en.wikipedia.org/wiki/Rendezvous_hashing [2] https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/affinity/rendezvous/RendezvousAffinityFunction.html Thanks! -Dmitry
Re: security question - custom plugin
Hi, 1) You need to add the JetBrains annotations dependency at compile time [1]. 2) Imports depend on what you are using :) It's hard to say whether your imports are enough. Add ignite-core to your plugin dependencies. I don't think there are other examples besides that blog post. [1] https://mvnrepository.com/artifact/org.jetbrains/annotations/13.0 Thanks! -Dmitry
Re: When using CacheMode.LOCAL, OOM
Hi, I've opened a ticket for this [1]. It seems the LOCAL cache keeps all entries on-heap. If you use only one node, switch to PARTITIONED; if more than one, use PARTITIONED plus a node filter [2]. [1] https://issues.apache.org/jira/browse/IGNITE-9257 [2] https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/CacheConfiguration.html#setNodeFilter-org.apache.ignite.lang.IgnitePredicate- Thanks! -Dmitry
Re: Partitions distribution across nodes
Hi Akash, 1) Actually exchange is a short process during which nodes remap partitions. But Ignite uses late affinity assignment, which means the affinity distribution is switched after rebalancing is completed. In other words, after rebalancing it atomically switches the partition distribution. But you don't have to wait until rebalancing finishes, because it works asynchronously. 2) I think it would be simpler to use IgniteCluster to determine the number of nodes [1]:

Ignite ignite = Ignition.start("examples/config/example-ignite.xml");

if (ignite.cluster().forServers().nodes().size() == 4) {
    // ... loadCache
}

3) No, you can use some custom value in a cache with putIfAbsent() to atomically find out whether some action was already performed. [1] https://apacheignite.readme.io/docs/cluster-groups Thanks! -Dmitry
Re: SYSTEM_WORKER_TERMINATION (Item Not found)
Hi, I'm not sure that nightly builds are updated regularly, but you should give it a try. The biggest risk is that a nightly build could have some bugs that will be fixed in the release. Thanks! -Dmitry
Re: Partitions distribution across nodes
Hi Akash, How do you measure partition distribution? Can you provide the code for that test? I assume you get the partitions before the exchange process is finished. Try a 5-second delay after all nodes are started and check again. Thanks! -Dmitry
Re: Question
Hi, It is defined by the AffinityFunction [1]. By default there are 1024 partitions; affinity automatically calculates the nodes that will keep the required partitions and minimizes rebalancing when the topology changes (nodes join or leave). [1] https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/affinity/rendezvous/RendezvousAffinityFunction.html Thanks! -Dmitry
Re: SYSTEM_WORKER_TERMINATION (Item Not found)
Hi, TTL fixes are not included in 2.6 as it was an emergency release. You'll need to wait for 2.7. https://issues.apache.org/jira/browse/IGNITE-5874 https://issues.apache.org/jira/browse/IGNITE-8503 https://issues.apache.org/jira/browse/IGNITE-8681 https://issues.apache.org/jira/browse/IGNITE-8659 https://issues.apache.org/jira/browse/IGNITE-7972 Thanks! -Dmitry
Re: Additional field problems occurred in ignite2.6
Hi, Field naming rules are defined by the BinaryIdMapper interface. By default the BinaryBasicIdMapper implementation is used, which converts all field names to lower case. So Ignite doesn't support the same field name in different cases; it treats them as the same field. But you can configure BinaryBasicIdMapper to be case-sensitive. Just set it on the BinaryConfiguration: config.setBinaryConfiguration(new BinaryConfiguration().setIdMapper(new BinaryBasicIdMapper(false))). Thanks! -Dmitry
Re: ALTER TABLE ... NOLOGGING
Hi, It might be an issue with deactivation. Try updating to 2.6 or wait for 2.7. For now, just skip cluster deactivation. Once you have formed a baseline topology and finished loading data, just enable WAL for all caches. Once WAL is enabled successfully, you can safely stop the nodes. Next time, when all baseline nodes have joined the cluster, it will be activated automatically. Thanks! -Dmitry
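The flow could be sketched like this (cache name is hypothetical; this assumes a version that has the IgniteCluster WAL-control methods, and an already started Ignite instance named ignite):

```java
// Load data with WAL disabled for the cache, then re-enable WAL
// before stopping any nodes.
ignite.cluster().disableWal("myCache");

// ... bulk data loading ...

ignite.cluster().enableWal("myCache");
```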
Re: Exception while running sql inside ignite transaction
Hi Akash, First of all, SQL is not transactional yet; this feature will only be available starting with 2.7 [1]. Your exception might be caused by the query being cancelled or a node stopping. [1] https://issues.apache.org/jira/browse/IGNITE-5934 Thanks! -Dmitry
RE: Best practice for class versioning: marshaller error
Hi Calvin, > Can I assume that BinaryMarshaller won't be used for any object embedded > inside GridCacheQueryResponse? Yes, because Binary can fall back to Optimized, but not vice versa. > If I am correct, do you have any suggestion on how I can avoid this type > of issue? You probably need to avoid using incorrectly serialized objects, or implement Externalizable yourself and do manual serialization/deserialization of such fields. Thanks! -Dmitry
RE: Best practice for class versioning: marshaller error
Hi Calvin, 1. By "enlist" I mean cases when you want, for example, to see what fields are present in a BinaryObject. In other words, when you want to work with BinaryObject directly. For POJO serialization/deserialization this should not be an issue at all. 2-3. In your case, you have java.time.Ser in one of the fields of your POJO (or maybe inside a dependent object), and it is Externalizable. In that case BinaryMarshaller falls back to OptimizedMarshaller with all its issues. Try to remove it from your POJOs or make it transient. Thanks! -Dmitry
Re: Scaling with SQL query
Hi, Reduce will be done on the node to which JDBC or the thin client is connected; it could be either a client or a server node. Thanks! -Dmitry
Re: Best practice for class versioning: marshaller error
Hi Calvin, BinaryMarshaller can solve that issue while introducing a few more. First of all, you will need to disable the compact footer to let each BinaryObject have its schema in the footer. If you just need to put/get POJOs, everything will be fine. But you need to enlist your POJOs in the BinaryConfiguration [1], because Ignite identifies a type by typeId, which is by default a hash of the full class name. So each BinaryObject keeps only the typeId. To find the proper class to which it should be deserialized, Ignite needs a class name -> typeId mapping. This mapping is kept in the marshaller cache, which is held in memory and in local files. So you can stop the grid, remove the Ignite files, and run into an issue where you cannot deserialize an object because Ignite cannot find the class name for some typeId. As I said, you need to enlist the class in the BinaryConfiguration, which Ignite reads on start to fill the marshaller cache. The second case is when you, for some reason, want to use BinaryObject directly. You should be aware that you can read and add fields, but you will not be able to enumerate them, because the same approach is used here: a field name is converted to a fieldId. The fieldName -> fieldId mapping is called meta info and is stored in the local metadata cache. Again, you can solve this with BinaryConfiguration. In summary, BinaryMarshaller is OK with different schemas, but you need to disable the compact footer and add your POJOs to the BinaryConfiguration. [1] https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/BinaryConfiguration.html Thanks! -Dmitry
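The two settings above could be combined like this (the POJO class name org.example.MyPojo is hypothetical):

```java
import java.util.Collections;
import org.apache.ignite.binary.BinaryTypeConfiguration;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Disable compact footers so every BinaryObject carries its full schema,
// and pre-register the POJO type so the typeId -> class name mapping is
// rebuilt on start even if the marshaller files were wiped.
BinaryConfiguration binaryCfg = new BinaryConfiguration();
binaryCfg.setCompactFooter(false);
binaryCfg.setTypeConfigurations(Collections.singletonList(
    new BinaryTypeConfiguration("org.example.MyPojo")));

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setBinaryConfiguration(binaryCfg);
```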
Re: Scaling with SQL query
Hi Jose, 1. Yep, I would say you'll get more benefit with persistence, because if you split between real machines, each may keep more hot data in memory and each has a separate hard drive. The more data you can fit into RAM and the more hard drives can work in parallel, the better performance you get. 2. The best way to communicate with the cluster is via client nodes; they can be embedded in any application and don't carry any data. So here a client is a client node, and each has some resource limits. For example, it has a limited number of workers that do serialization/deserialization and send/receive data. In case of a large number of messages, you may try to increase task distribution by setting TcpCommunicationSpi.setConnectionsPerNode() to more than 1. If you add more clients, meaning you start a new client node from another app that runs queries, you will use the cluster more effectively, because two clients can produce more work and serialize/deserialize more messages, as well as reduce more data, simply because the work is split across more workers. Thanks! -Dmitry
Re: Cache size in offheap mem in bytes
1) This is applicable to Ignite. As Ignite grew from GridGain, the name sometimes appears in the docs because it was missed during removal. 2) Yes, and I would say the overhead could be even bigger. But I cannot say definitively how much, because Ignite doesn't store data sequentially; there are a lot of nuances. 3) Ignite caches transaction entries on-heap, and this applies only to TRANSACTIONAL caches. Thanks! -Dmitry
Re: Scaling with SQL query
Hi, Slight degradation is expected in some cases. Let me explain how it works. 1) The client sends a request to each node (if you have query parallelism > 1, the number of requests is multiplied by that number). 2) Each node runs the query against its local dataset. 3) Each node responds with 100 entries. 4) The client collects all responses and performs the reduce. So what happens when you add a node? First of all, the dataset is split between a larger number of nodes, but if the dataset is too small, or the newly added node does not significantly reduce the amount of data on each other node, you will not see any difference in query processing. E.g. you have 9 nodes and add one more: each node loses no more than 10% of its data. With a small dataset this will not give you any performance boost. On the other hand, the client has to send more requests and reduce more data. For instance, with 9 nodes it receives 900 entries, with 10 nodes 1K entries. Again, if the dataset is relatively small, you get overhead on the client for the additional requests/responses and data. Queries by primary key scale best, because in that case the client can send the request directly to the affinity node without broadcasting to all nodes. So when can you get a scaling profit for SQL? 1) You have a very large dataset. Each node will process less data, and they will do it in parallel. Here the boost on each node will beat the additional overhead on the client. 2) You add more clients that run queries in parallel. Total throughput increases because the request/response overhead is divided between a larger number of clients. (Or you can set more connections per node to better utilize client machine resources.) 3) You query by primary key. Please note one more thing: overall latency depends on the slowest node, because the client waits for all responses. Thanks! -Dmitry
Re: Cache size in offheap mem in bytes
Hi, It will be incorrect, because entries are not stored sequentially; there is a lot of infrastructure that requires additional space. For example, memory is divided into pages and each page has a header; each entry has a key and value, a version, and other service information. For quick access to entries a B+ tree is created (which consists of pages too), plus synchronization primitives, links, and the binary object overhead that allows access to its fields. It's quite hard to say how much memory will be required, only approximately [1]. The best way is to load some amount of data and measure how much memory was consumed. You need to make many such measurements, because only with a large number of entries does memory consumption increase linearly. [1] https://apacheignite.readme.io/docs/capacity-planning Thanks! -Dmitry
RE: Ignite Node failure - Node out of topology (SEGMENTED)
Naresh, GC logs show not only GC pauses, but system pauses as well. Try these parameters: -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime Thanks! -Dmitry
Re: Different versions in ignite remote and local nodes.
Hi, Where did you get those images? Do you see version 2.5.0 in the logs of all your instances? Thanks! -Dmitry
Re: Ignite Cluster getting stuck when new node Join or release
Hi, The thread dumps look healthy. Please share the full logs from the time you took those thread dumps, or take new ones (thread dumps + logs). Thanks! -Dmitry
RE: Ignite Node failure - Node out of topology (SEGMENTED)
Hi Naresh, Actually any JVM process hang can lead to segmentation. If a node is not responsive for longer than failureDetectionTimeout, it will be kicked out of the cluster to prevent performance degradation across the whole grid. It works in the following scenario. Let's say we have 3 nodes in a ring: n1 -> n2 -> n3. Discovery messages travel around the ring, along with metrics and connection checks, at a predefined interval. Node 2 starts experiencing issues like a GC pause or OS failures that force the process to stall. During that time node 1 is unable to send a message to n2 (it doesn't receive an ack). n1 waits for failureDetectionTimeout and establishes a connection to n3: n1 -> n3, with n2 disconnected. The cluster treats n2 as failed. When n2 comes back, it tries to connect to n3 and send a message across the ring, whereupon it receives a message that it's out of the grid. For n2 that means it was segmented, and the best thing it can do is stop. To check whether there were large JVM or system pauses, you may enable GC logs. If the pauses are longer than failureDetectionTimeout, the node will be segmented. The best way would be to solve the pauses, but as a workaround you can increase the timeout. Thanks! -Dmitry
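The workaround could be sketched like this (the 30-second value is an example, not a recommendation):

```java
import org.apache.ignite.configuration.IgniteConfiguration;

// Raise the failure detection timeout as a workaround for long
// GC/system pauses; the value is in milliseconds.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setFailureDetectionTimeout(30_000); // 30 seconds
```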
Re: No writes to WAL files for the whole day?
I suppose that is an issue with file timestamps not updating, rather than with WAL writes. Try running a load test and compare the hash sums of the files before and after. Also check whether the WAL history grows. Thanks! -Dmitry
Re: Distributed Database as best choice for persistence
Hi, Just because: 1) not all users build their apps from scratch; they might have some legacy code built over Cassandra; 2) native persistence appeared much later than the Cassandra module, and there is no point in removing it now; 3) it's always better to offer more choices to the user. Anyway, Ignite's native persistence is more powerful, as Ignite has full control over it. With external storage Ignite acts more like a cache, while with native persistence it acts like a DB. For example, if you run a SQL query on an Ignite grid with third-party persistence, it's not possible to request data that is not loaded (cached) into memory. Thanks! -Dmitry
Re: "Connect timed out" errors during cluster restart
Hi Oleksandr, It's OK for discovery, and this message is printed only in debug mode: if (log.isDebugEnabled()) log.error("Exception on direct send: " + e.getMessage(), e); Just turn off debug logging for discovery package: org.apache.ignite.spi.discovery.tcp. Thanks! -Dmitry
Re: No writes to WAL files for the whole day?
Hi, What is your configuration? Check WAL mode and path to persistence. Thanks! -Dmitry
Re: Ignite docker container not able to join in cluster
Hi, You configured the external public EC2 interface address (34.241...), but it should be the internal one: 172... Thanks! -Dmitry
Re: monitoring function of web console
Hi, AFAIK, you cannot download the plugin separately; it's a commercial product. You can use it for free from here [1] or purchase a paid version for internal use. [1] http://console.gridgain.com/ Thanks! -Dmitry
Re: Deleting a ticket from https://issues.apache.org/jira/projects/IGNITE/issues
Hi, I'm not sure it's possible to remove the ticket. Just close it with a "Won't Fix" status; that would be enough. Thanks! -Dmitry
Re: Distributed Database as best choice for persistence
Hi, Probably the best choice would be Cassandra as Ignite has out of the box integration with it [1]. [1] https://apacheignite-mix.readme.io/v2.5/docs/ignite-with-apache-cassandra Thanks! -Dmitry
Re: Off-heap eviction configuration
Check your configuration. This code works perfectly well for me. If the page eviction mode is set to DISABLED, an IgniteOutOfMemoryException will be thrown:

IgniteConfiguration igniteConfiguration = new IgniteConfiguration();

DataStorageConfiguration dataStorageConfig = new DataStorageConfiguration();

long offHeapMemoryMax = 256 * 1024 * 1024;

DataRegionConfiguration dataRegionConfig = new DataRegionConfiguration();
dataRegionConfig.setInitialSize((long) Math.ceil(0.2 * offHeapMemoryMax)); // 20% of 256MB
dataRegionConfig.setMaxSize(offHeapMemoryMax); // 256MB, for testing purposes
dataRegionConfig.setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);
dataRegionConfig.setEvictionThreshold(0.9);
dataRegionConfig.setName("OffHeapRegion");

// tried both default data region, and setting a data region list, but neither
dataStorageConfig.setDataRegionConfigurations(dataRegionConfig);
igniteConfiguration.setDataStorageConfiguration(dataStorageConfig);

CacheConfiguration cacheConfiguration = new CacheConfiguration();
cacheConfiguration.setName("myCache");
cacheConfiguration.setCacheMode(CacheMode.PARTITIONED);
cacheConfiguration.setDataRegionName("OffHeapRegion");
cacheConfiguration.setBackups(1);
igniteConfiguration.setCacheConfiguration(cacheConfiguration);

Ignite ignite = Ignition.start(igniteConfiguration);
IgniteCache cache = ignite.getOrCreateCache("myCache");

for (long i = Long.MIN_VALUE; i < Long.MAX_VALUE; i++)
    cache.put(i, new byte[1800]);

Thanks! -Dmitry
Re: Text Query via SQL or REST API?
Jose, Unfortunately there are no other tools at the moment. But you can still contribute to Apache Ignite and implement the ticket that will persist Lucene indexes. It would be a great help! Thanks! -Dmitry
Re: Off-heap eviction configuration
Hi, I see you used your data region as the default one and set a name for it. Try setting it via DataStorageConfiguration.setDataRegionConfigurations() instead. Thanks! -Dmitry
Re: Text Query via SQL or REST API?
Hi, The REST API does not have such an option, but you can write your own compute task (that uses the Java API) and call it from REST [1]. It's not possible to use Lucene search from the SQL interfaces. To use full-text search, you need to annotate fields with @QueryTextField [2] and add them to the indexed types [3]. Ignite will automatically manage the Lucene indexes and you'll be able to search with a TextQuery. Also, please note that Lucene indexes work only in in-memory mode [4]. [1] https://apacheignite.readme.io/v2.5/docs/rest-api#execute [2] https://apacheignite.readme.io/v2.5/docs/cache-queries#text-queries [3] https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/CacheConfiguration.html#setIndexedTypes-java.lang.Class...- [4] https://issues.apache.org/jira/browse/IGNITE-5371 Thanks! -Dmitry
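A sketch of the full-text search setup (the Person type, its resume field, the cache name, and the search term are all hypothetical; an already started Ignite instance named ignite is assumed):

```java
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.TextQuery;
import org.apache.ignite.cache.query.annotations.QueryTextField;
import org.apache.ignite.configuration.CacheConfiguration;

class Person {
    @QueryTextField // field indexed by Lucene for full-text search
    String resume;

    Person(String resume) { this.resume = resume; }
}

// Register the indexed types so Ignite builds the Lucene index.
CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>("persons");
ccfg.setIndexedTypes(Long.class, Person.class);

IgniteCache<Long, Person> cache = ignite.getOrCreateCache(ccfg);
cache.put(1L, new Person("10 years of Java experience"));

// Full-text search over the annotated field.
cache.query(new TextQuery<Long, Person>(Person.class, "Java"))
     .getAll()
     .forEach(e -> System.out.println(e.getKey()));
```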
Re: "WAL segment tail is reached", Is it a problem ?
Hi Mikael! Don't worry about this message; you may just ignore it. It's absolutely OK and means that the WAL was read fully. The question is why it's a WARNING... In future releases it will be changed to INFO, with different message content, to avoid such confusion. Thanks! -Dmitry
Re: Node pause for no obvious reason
Hi, Check the system logs for that time; maybe there was some system freeze. Also add more information to the GC logs, for example safepoints: -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime. Thanks! -Dmitry
Re: Local peek from client on a server node in a cluster
Hi, localPeek must be called on the local node. If you want to do that from a client, you have to execute a task [1] targeting the server node. But ScanQuery is designed for listing all entries [2]. You may run it via a compute task from the client with the setLocal() flag set to true. [1] https://apacheignite.readme.io/docs/distributed-closures [2] https://apacheignite.readme.io/docs/cache-queries#scan-queries Thanks! -Dmitry
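This could look roughly like the following (cache name and types are hypothetical; an already started client instance named ignite is assumed):

```java
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.lang.IgniteRunnable;

// Broadcast a closure to all server nodes; each node scans only
// its own local entries because of setLocal(true).
ignite.compute(ignite.cluster().forServers()).broadcast((IgniteRunnable) () -> {
    IgniteCache<Integer, String> cache = Ignition.localIgnite().cache("myCache");

    cache.query(new ScanQuery<Integer, String>().setLocal(true))
         .forEach(e -> System.out.println(e.getKey() + " = " + e.getValue()));
});
```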
Re: And again... Failed to get page IO instance (page content is corrupted)
It would be better to upgrade to 2.5, where it is fixed. But if you want to overcome this issue in your version, you need to add the ignite-indexing dependency to your classpath and configure SQL indexes. For example [1], modified to work with Spring XML:

<property name="indexedTypes">
    <list>
        <value>org.your.KeyObject</value>
        <value>org.your.ValueObject</value>
    </list>
</property>

[1] https://apacheignite-sql.readme.io/docs/schema-and-indexes#section-registering-indexed-types Thanks! -Dmitry
RE: Ignite Node failure - Node out of topology (SEGMENTED)
Hi Naresh, The recommendation will be the same: increase failureDetectionTimeout until nodes stop segmenting, or use gdb (or remove the "live" option from the jmap command to skip the full GC). Thanks! -Dmitry
RE: Ignite opens/close 5000 sockets in every 5mins after NODE_FAILED event
Hi, This thread dump is absolutely fine; you confused socket state with Java thread state. These two things are completely unrelated. There should not be so many socket connections for three nodes (TIME_WAIT means the socket is already closed and is waiting for the last packets). Could you please share your configuration and netstat output? Thanks! -Dmitry
Re: How to use Affinity Function to Map a set of Keys to a particular node in a cluster?
Hi, I totally agree with Val that implementing your own AffinityFunction is quite a complex way. The requirement you described is called affinity co-location, as I wrote before. Let me explain in more detail what to do and what the drawbacks are. 1. Use @AffinityKeyMapped for all your keys. For example, on each cache save you set this field for a group of keys. Let's say CustomerKey contains an additional annotated field "int affinity". It will be equal to customerId / 1. In this case you can be sure that all keys grouped by "affinity" will fall into the same partition. You do not have to implement an AffinityFunction, and it works automagically. *BUT*, there is no guarantee that each node will hold only one such partition; there is a high risk that one node will keep two or more partitions while others could be empty. 2. Say you know for sure that you will not need more than 5 nodes. In this case everything becomes much easier. You implement an AffinityFunction such that it has 5 partitions and assigns only one partition to each node. The partition() method groups your keys by the rule I showed before. If you have fewer than 5 nodes, you may just put more than one partition on some nodes. If you have more than 5 customers, you need to enhance your partition() method: if (key instanceof Integer) return (Integer)key / 1 % parts; Everything is fine, *BUT* you cannot scale. If you add more nodes, they will be empty, just because you have only 5 partitions (actually you may write the affinity in a more complex way, but in the end you'll arrive at the regular RendezvousAffinity). So analyze your requirements and choose the right way. Thanks! -Dmitry
Re: Baseline topology issue when restarting server nodes one by one
Hi, What IgniteConfiguration do you use? Could you please share it? Thanks! -Dmitry
Re: Transactional cache
Hi, Ignite keeps Tx cached values on-heap. Thanks! -Dmitry
RE: Ignite opens/close 5000 sockets in every 5mins after NODE_FAILED event
There is no difference in how you start/stop your node. On start, a node will examine all connections specified in the address list: it takes one address and port and tries to connect. If not successful, it takes the next address and port. For instance, if you have the address 1.2.3.4:47500..47509, the node will check 10 addresses. Does this impact you somehow? Thanks! -Dmitry
Re: How to use Affinity Function to Map a set of Keys to a particular node in a cluster?
1. Affinity knows that because it does the assignments. The assignPartitions() method returns those assignments. Please read the javadoc [1]. 2. I just described how keys could be assigned to a partition. For example:

@Override public int partition(Object key) {
    if (key instanceof Integer)
        return (Integer)key / 1;

    return 0;
}

How it should be applied to your case, you need to think through. 3. Please check how assignPartitions() is implemented in RendezvousAffinityFunction. It doesn't matter how many and what kind of nodes you're loading data from; affinity works the same on every node. [1] https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/affinity/AffinityFunction.html#assignPartitions-org.apache.ignite.cache.affinity.AffinityFunctionContext- Thanks! -Dmitry
Re: How to use Affinity Function to Map a set of Keys to a particular node in a cluster?
There are various possible ways, but using one partition per node is definitely a bad idea, because you lose the ability to scale. If you have 5 partitions and 5 nodes, then a 6th node will be empty. It's much better if your AffinityFunction.partition() method calculates the partition according to your key: if you have key 1-1, it should go to a partition that belongs to a single node, and at the same time the assignPartitions() method should assign related partitions to the same node. Or (a worse solution, but easier): use 5 partitions, distribute them across the nodes, and put related keys into the proper partition. Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
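To make the grouping rule concrete, here is the partition math alone in plain Java, with no Ignite API; the key layout (customerId * 1000 + itemId) is an assumption made up for this sketch:

```java
public class PartitionMath {
    /**
     * Hypothetical composite int keys encode the customer in the high digits,
     * so every key belonging to one customer maps to the same partition.
     */
    static int partitionFor(int key, int parts) {
        int customerId = key / 1000;         // strip the per-customer item part
        return Math.abs(customerId % parts); // same customer, same partition
    }

    public static void main(String[] args) {
        // Keys 42001 and 42007 both belong to customer 42.
        System.out.println(partitionFor(42001, 5)); // prints "2"
        System.out.println(partitionFor(42007, 5)); // prints "2"
    }
}
```

An AffinityFunction.partition() override would delegate to exactly this kind of rule.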
Re: How to use Affinity Function to Map a set of Keys to a particular node in a cluster?
Normally (without @AffinityKeyMapped) Ignite will use the CustomerKey hash code (not the object's hashCode()) to find a partition. Ignite will consult the AffinityFunction: the partition() method tells which partition the key goes to, and assignPartitions() finds the concrete node that holds that partition. On the other hand, if you annotate some field with @AffinityKeyMapped, the value of that field will be used for mapping to a partition. In your case, I suppose, you need to map a field that is common to your keys, by which you can group them into one partition. For example, setting the annotation on the customer name means that keys with the same customer name will always hit the same partition. An AffinityFunction does two things: it maps partitions to nodes and keys to partitions. If you override RendezvousAffinityFunction so that partitions 1-4 go to node 1, you need to make sure that your keys fall into those partitions. You may start with the annotation first (this process is named affinity co-location: related keys are put into the same partition); I think that is what you need. The affinity implementation is set in CacheConfiguration.setAffinity(). Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: How to use Affinity Function to Map a set of Keys to a particular node in a cluster?
Hi, Make sure that your keys go to a specific partition. Only one node can keep that partition at a time (except backups, of course). To do that, you may use the @AffinityKeyMapped annotation [1]. Additionally, you can implement your own AffinityFunction that assigns the partitions you need to specific node(s). You may try to extend RendezvousAffinityFunction for that; in this case you can assign any number of partitions to the node you need. [1] https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/model/EmployeeKey.java#L33 Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Ignite opens/close 5000 sockets in every 5mins after NODE_FAILED event
Hi, TcpDiscoveryMulticastIpFinder produces such a big number of connections. I'd recommend switching to TcpDiscoveryVmIpFinder with a static set of addresses. Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
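A possible static discovery configuration sketch (the host addresses are placeholders you would replace with your own):

```java
import java.util.Arrays;

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class StaticDiscovery {
    /** Builds a configuration that discovers peers from a fixed address list. */
    public static IgniteConfiguration config() {
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList(
            "10.0.0.1:47500..47509",   // placeholder host
            "10.0.0.2:47500..47509")); // placeholder host

        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
        discoSpi.setIpFinder(ipFinder);

        return new IgniteConfiguration().setDiscoverySpi(discoSpi);
    }
}
```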
Re: "WAL segment tail is reached", Is it a problem ?
Hi Mikael, Please share your Ignite settings and logs. Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: ClusterTopologyServerNotFoundException
Hi, Could you please provide a reproducer? I couldn't reproduce this exception myself. Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Ignite Backup doubts
Hi,

1. By default, get() will read from a backup if the node on which it's invoked is an affinity node. In other words, if the current node holds a backup, Ignite prefers to read the local backup copy rather than request the primary node over the network. This can be changed by setting CacheConfiguration.setReadFromBackup(false) [1].

2. It depends on the operations you call. If you use get(), the request will go to the primary node only. If you run an SQL query by primary key or affinity key, it will go to the primary node too. In other cases, SQL will be invoked on all nodes, as Ignite doesn't know beforehand which nodes hold the data that satisfies your query.

3. The optimal configuration highly depends on your cluster size and hardware resources. In your case you have three nodes and 2 backups, which means each node keeps the full dataset, and if two of the three nodes fail you don't lose data. But if you have more data than the memory available on one node, then it's better to either reduce the number of backups or increase the number of nodes. IMO the best backup configuration is one that allows you to lose 20-30% of the nodes without losing data.

4. On node failure, the affinity function re-maps partitions between the live nodes, rebalances them, and restores the number of backups. The behavior is more sophisticated if you use persistence, because baseline topology will try to avoid rebalancing [2].

[1] https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/CacheConfiguration.html#setReadFromBackup-boolean- [2] https://apacheignite.readme.io/docs/baseline-topology Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
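Point 1 as a configuration fragment (the cache name and types are examples, not from your setup):

```java
import org.apache.ignite.configuration.CacheConfiguration;

// Sketch: a cache with 2 backups whose reads always go to the primary node.
CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
ccfg.setBackups(2);            // each entry gets 2 backup copies
ccfg.setReadFromBackup(false); // force get() to always hit the primary node
```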
Re: Ignite Cluster getting stuck when new node Join or release
Hi, It's hard to tell what's going wrong from your question alone. Please attach full logs and thread dumps from all server nodes. Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Messages and topics
Hi, Yes, Ignite will send messages to all nodes, but you may use a filter: ignite.message(ignite.cluster().forAttribute("topic1", Boolean.TRUE)); In this case messages will be sent only to the nodes from the cluster group, in this example only to nodes with the "topic1" attribute set [1]. [1] https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/IgniteConfiguration.html#setUserAttributes-java.util.Map- -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
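Putting both sides together in one sketch (the attribute name, topic name, and types are examples):

```java
import java.util.Collections;

import org.apache.ignite.Ignite;
import org.apache.ignite.cluster.ClusterGroup;
import org.apache.ignite.configuration.IgniteConfiguration;

public class TopicMessaging {
    /** Receiving side: mark the node with a user attribute at startup. */
    public static IgniteConfiguration receiverConfig() {
        return new IgniteConfiguration()
            .setUserAttributes(Collections.singletonMap("topic1", Boolean.TRUE));
    }

    /** Sending side: messages go only to nodes carrying the attribute. */
    public static void send(Ignite ignite, Object msg) {
        ClusterGroup grp = ignite.cluster().forAttribute("topic1", Boolean.TRUE);
        ignite.message(grp).send("topic1", msg);
    }
}
```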
Re: Issue IGNITE-3471
Hi, Yes, for a complex transaction this workaround will not work, so you need to either wait for the fix or avoid using EntryProcessor for now. Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Do we require to set MaxDirectMemorySize JVM parameter?
Hi Ankit, No, Ignite uses sun.misc.Unsafe for off-heap memory. Direct memory may be used in the DirectBuffers used for inter-node communication. Usually the defaults are quite enough. Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Ignite query statement reused/cached
Hi, Ignite does indeed cache queries; that's why the first request runs much longer than subsequent ones. Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: The thread which is inserting data into Ignite is hung
Hi Praveen, The stack traces only show that the thread is waiting for a response. To get the full picture, please attach full logs and thread dumps taken at the moment of the hang from all nodes. I need them from all nodes because the actual issue happened on a remote node. Also, according to the last exception, there might be a connectivity issue where the client cannot get a response from the cluster. Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Ignite Data Streamer Hung after a period
Hi, Blocked threads only show the fact that there are no tasks to process in the pool. Do you use persistence and/or indexing? Could you please attach your configs and logs from all nodes? Please also take a few sequential thread dumps while throughput is low. Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: NullPointerException in GridCacheTtlManager.expire
Hi Dome, Could you please attach full logs? Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: JMX-MBean Reports OffHeapAllocatedSize as zero
Hi Christoph, This metric is not implemented because of its complexity. But you may find out how much space your cache or caches consume with DataRegionMetrics: DataRegionMetrics drm = ignite.dataRegionMetrics("region_name"); long used = (long)(drm.getPhysicalMemorySize() * drm.getPagesFillFactor()); So if you don't use persistence and set a custom data region for a cache, you can get the size consumed by it. Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Issue IGNITE-3471
Hi Prasad, This issue could not be completed in 2.5 as it has a low priority. As a workaround, you can wrap your executeEntryProcessorTransaction() method into an affinity run [1], and no additional value transfer will happen. [1] https://apacheignite.readme.io/docs/collocate-compute-and-data Thanks! Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: How to set Expiry Policies when using Dataframe API to save data to Ignite?
Hi Ray, I think the only way to do it is to use IgniteDataFrameSettings.OPTION_CONFIG_FILE and set the path to an XML configuration with all the settings you need. Here is a nice article about this [1]. [1] https://medium.com/hashmapinc/apache-ignite-using-a-memory-grid-for-distributed-computation-frameworks-spark-and-flink-88de62417839 Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Ignite Eviction Policy
Hi, If you have read-through mode enabled for the cache, an evicted entry will be loaded again on the next IgniteCache.get() operation, or when IgniteCache.loadCache() is called. The next time, the entry will again be evicted according to your eviction policy. Please note that an evicted entry will not be counted in SQL queries; they only see entries currently loaded into Ignite. Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
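As a sketch, enabling read-through could look like this (MyCacheStore stands for a hypothetical CacheStore implementation of yours; cache name and types are examples):

```java
import javax.cache.configuration.FactoryBuilder;

import org.apache.ignite.configuration.CacheConfiguration;

// Read-through: an entry evicted earlier is reloaded from the store on get().
CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
ccfg.setReadThrough(true);
ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyCacheStore.class));
```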
Re: Delete SQL is failing with IN clause for a table which has composite key
Hi Naveen, Unfortunately I'm unable to reproduce that error. Could you please attach simple code/project that fails with specified exception? Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Strange node fail
Duplicates http://apache-ignite-users.70518.x6.nabble.com/Strange-node-fail-td21078.html. -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Strange node fail
Hi Ray, If your JVM process consumes more memory than is available, swapping may kick in and freeze the JVM, and as a consequence the node gets thrown out of the cluster. Check your free memory, disable swapping if possible, or increase IgniteConfiguration.failureDetectionTimeout. To verify this guess you may use dstat and add the -XX:+PrintGCApplicationStoppedTime JVM option; it logs additional process stop times. Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Exception while using select query
Hi Anshu, This looks like a bug that was fixed in 2.4, try to upgrade [1]. [1] https://ignite.apache.org/download.cgi Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Kubernetes discovery with readinessProbe
Hi Bryan, You need to use a StatefulSet [1]; Kubernetes will start the nodes one by one, as each comes into the ready state. [1] https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/ Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Distributed transaction (Executing task on client as well as on key owner node)
Hi Prasad, This approach will work with multiple keys if they are collocated on the same node and you start/stop the transaction in the same thread/task. There is no other workaround. Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Text Query question
Jet, Yep, this should work, but meanwhile this ticket remains unresolved [1]. [1] https://issues.apache.org/jira/browse/IGNITE-5371 Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Autowire in CacheStore implementation
Hi Prasad, If you started Ignite with IgniteSpringBean or IgniteSpring, try the @SpringApplicationContextResource annotation [1]. Ignite's resource injector will use the Spring context to set a dependency annotated with it. But I'm not sure this will work with CacheStore; it should be rechecked. [1] https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/resources/SpringApplicationContextResource.html Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
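A sketch of what that could look like (untested, as noted above; the bean type and store logic are assumptions):

```java
import javax.cache.Cache;
import javax.sql.DataSource;

import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.resources.SpringApplicationContextResource;
import org.springframework.context.ApplicationContext;

public class MyCacheStore extends CacheStoreAdapter<Integer, String> {
    /** Injected by Ignite's resource processor when the node is started via IgniteSpring. */
    @SpringApplicationContextResource
    private ApplicationContext appCtx;

    private DataSource dataSource() {
        // Resolve lazily, only after injection has happened.
        return appCtx.getBean(DataSource.class);
    }

    @Override public String load(Integer key) {
        // ... read the value for 'key' via dataSource() ...
        return null;
    }

    @Override public void write(Cache.Entry<? extends Integer, ? extends String> entry) {
        // ... write through to the database via dataSource() ...
    }

    @Override public void delete(Object key) {
        // ... delete from the database via dataSource() ...
    }
}
```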
Re: continuous query - changes from local server only
Hi, You may use a filter for that, for example (key/value types are placeholders):

ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

final Set<ClusterNode> nodes = new HashSet<>(client.cluster().forDataNodes("cache")
    .forHost(client.cluster().localNode()).nodes());

qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<Integer, String>>() {
    @Override public CacheEntryEventFilter<Integer, String> create() {
        return new CacheEntryEventFilter<Integer, String>() {
            @IgniteInstanceResource
            private Ignite ignite;

            @Override public boolean evaluate(
                CacheEntryEvent<? extends Integer, ? extends String> event)
                throws CacheEntryListenerException {
                // Server nodes on the current host
                return nodes.contains(ignite.cluster().localNode());
            }
        };
    }
});

Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Text Query question
Hi Jet, Full-text search creates Lucene in-memory indexes, and after a restart they are not available, so you cannot use it with persistence. @QuerySqlField enables database indexes that are able to work with persisted data, and there is probably no way to rebuild the Lucene indexes for now. Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Design help implementing custom counter on ignite
Hi, A transaction might not be the optimal solution here, as in OPTIMISTIC mode it may throw an optimistic transaction exception and require retry logic. I believe the best solution would be to use an EntryProcessor [1]: it atomically modifies the entry, on both TRANSACTIONAL and ATOMIC caches, on the affinity data node (the one that actually keeps the entry). [1] https://apacheignite.readme.io/docs/jcache#entryprocessor Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
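A counter via EntryProcessor could be sketched like this (class, key, and topic names are illustrative):

```java
import javax.cache.processor.EntryProcessor;
import javax.cache.processor.MutableEntry;

/** Increments a Long counter atomically on the node that owns the key. */
public class IncrementProcessor implements EntryProcessor<String, Long, Long> {
    @Override public Long process(MutableEntry<String, Long> entry, Object... args) {
        long next = (entry.exists() ? entry.getValue() : 0L) + 1;
        entry.setValue(next); // persisted back to the cache atomically
        return next;
    }
}

// Usage on an IgniteCache<String, Long>:
// long v = cache.invoke("myCounter", new IncrementProcessor());
```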
Re: How to identify stale ignite client in case of data grid restart and auto reconnect to cluster
Hi, This exception says that the client node was stopped, but by default a client should wait for the servers. In other words, while waiting for a reconnect it throws IgniteClientDisconnectedException, which contains a future you can wait on for the reconnect event. You may locally listen for EventType.EVT_CLIENT_NODE_DISCONNECTED to be notified that the client node was disconnected [1]. But the STOPPED state means that the node was actually stopped. To get the node status you may use the Ignition.state() method and/or register your own LifecycleBean implementation (IgniteConfiguration.setLifecycleBeans()). [1] https://apacheignite.readme.io/docs/events#section-local-events Thanks! -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: setNodeFilter throwing a CacheException
Hi Shravya, To understand what's going on in your cluster I need full logs from all nodes. Please, share all files, if it's possible. Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: setNodeFilter throwing a CacheException
Hi Shravya, This exception means that the client node is disconnected from the cluster and is trying to reconnect. You may get the reconnect future from it (IgniteClientDisconnectedException.reconnectFuture().get()) and wait until the client is reconnected. So it looks like you're trying to create a cache on a stopped cluster, and it has nothing to do with the node filter. Can you share logs from all nodes? Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
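The usual handling pattern looks roughly like this (a sketch; the cache name and types are examples):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteClientDisconnectedException;

public class ReconnectAware {
    /** Retries cache creation once after waiting for the client to reconnect. */
    public static IgniteCache<Integer, String> cache(Ignite ignite) {
        try {
            return ignite.getOrCreateCache("myCache");
        } catch (IgniteClientDisconnectedException e) {
            e.reconnectFuture().get(); // blocks until the client is back in the cluster
            return ignite.getOrCreateCache("myCache");
        }
    }
}
```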
Re: Memory usage by ignite nodes
Hi Ranjit, Those metrics should be correct; you may also check [1]. Ignite always keeps data off-heap, but if on-heap caching is enabled, it additionally caches entries in the Java heap. [1] https://apacheignite.readme.io/docs/memory-metrics Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Question about persisting stream processing results
Hi Svonn, I'm not sure that I properly understand your issue. Could you please provide a problematic code snippet? > is the policy also deleting the Map Yes, if it was stored as a value. Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Ignite service method cannot invoke for third time
Hi, Anonymous and inner classes hold a link to the outer class object and might drag it into the marshaller. When you make the class a static inner or a separate class, you're explicitly saying that you don't need such links. In thread dumps you need to look for waiting or blocked threads. In your case, on the service node you may find that the service thread is waiting in invoke(): "svc-#70" #102 prio=5 os_prio=0 tid=0x7fe820024800 nid=0x2c44 waiting on condition [0x7fe7d51f4000] java.lang.Thread.State: WAITING (parking) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304) at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177) at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.invoke(GridDhtAtomicCache.java:785) at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.invoke(IgniteCacheProxyImpl.java:1338) at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.invoke(GatewayProtectedCacheProxy.java:1320) at com.mediaiq.caps.platform.choreography.service.IgniteWorkflowServiceImpl.startWorkflow(IgniteWorkflowServiceImpl.java:165) ... 
Cache operations are invoked on data nodes, so you may go to data node and find: "sys-stripe-5-#6" #15 prio=5 os_prio=0 tid=0x7fd96459b800 nid=0x29a7 waiting on condition [0x7fd94cf9c000] java.lang.Thread.State: WAITING (parking) at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304) at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177) at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140) at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get0(GridCacheAdapter.java:4512) at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4493) at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1326) at org.apache.ignite.internal.processors.cache.GridCacheProxyImpl.get(GridCacheProxyImpl.java:329) at org.apache.ignite.internal.processors.datastructures.DataStructuresProcessor.getCollection(DataStructuresProcessor.java:1001) at org.apache.ignite.internal.processors.datastructures.DataStructuresProcessor.queue(DataStructuresProcessor.java:794) at org.apache.ignite.internal.processors.datastructures.GridCacheQueueProxy.readResolve(GridCacheQueueProxy.java:495) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readExternalizable(OptimizedObjectInputStream.java:549) at org.apache.ignite.internal.marshaller.optimized.OptimizedClassDescriptor.read(OptimizedClassDescriptor.java:917) at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObject0(OptimizedObjectInputStream.java:346) at 
org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObjectOverride(OptimizedObjectInputStream.java:199) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422) ... Here OptimizedMarshaller tries to deserialize the EntryProcessor, but hangs on deserializing GridCacheQueueProxy, aka IgniteQueue. Obviously you do not need to marshal/unmarshal it, and the best solution here is to avoid its serialization: remove it from the anonymous EntryProcessor's context. Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Reconnect after cluster shutdown fails
Hi, Discovery events are processed in a single thread, and cache creation uses discovery custom messages. Trying to create a cache from the discovery thread will lead to a deadlock, because the discovery thread will be waiting in your lambda instead of processing messages. To avoid it, just start another thread from your listener. Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
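For example (a sketch; the event type and cache name are placeholders, and the event type must also be enabled in the node configuration):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class SafeCacheCreation {
    /** Registers a local listener that creates the cache off the discovery thread. */
    public static void register(Ignite ignite) {
        IgnitePredicate<Event> lsnr = evt -> {
            // Never call getOrCreateCache() here directly: the discovery thread
            // would wait for a custom message it is itself supposed to process.
            new Thread(() -> ignite.getOrCreateCache("myCache")).start();
            return true; // keep the listener registered
        };
        ignite.events().localListen(lsnr, EventType.EVT_NODE_JOINED);
    }
}
```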
Re: “Failed to communicate with Ignite cluster" error when using JDBC Thin driver
Hi, It's hard to say why it happens. I'm not familiar with MyBatis and actually don't know if it shares a JDBC connection between threads. It would be great if you could provide a reproducible example that will help to debug the issue. Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Reconnect after cluster shutdown fails
Hi, Please attach thread dumps from all nodes taken at the moment of hang. Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: When do we get this error - Unknown pair [platformId=0, typeId=1078091073]]
Hi, It looks like your classes are not present on all nodes. Please check that all classes you're using in the cache are available on every node. Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Is it possible to import existing mysql database from file in console?
Hi, There are a few options:
1) You need backups to survive a node loss [1].
2) You may enable persistence to survive a grid restart and to store more data than fits in memory [2].
3) Check out the nohup command [3].
[1] https://apacheignite.readme.io/docs/primary-and-backup-copies [2] https://apacheignite.readme.io/docs/distributed-persistent-store [3] http://linux.101hacks.com/unix/nohup-command/ Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Ignite service method cannot invoke for third time
Glad to hear that it was helpful! I wrote the example right in the email, so I didn't have a compiler to check it :) Thanks! -Dmitry -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/