[2.13.0] - Log overflow & Threads hanged
me=(1687272172791, 2023-06-20 10:42:52.791)
[ignite-poc-datanode-131-x4fzx] L=1 - Read lock pageId=844420635164763, structureId=1560569965__key_PK##H2Tree [pageIdHex=0002005b, partId=65535, pageIdx=91, flags=0002]
[ignite-poc-datanode-131-x4fzx] L=2 - Read lock pageId=844420635197184, structureId=1560569965__key_PK##H2Tree [pageIdHex=00027f00, partId=65535, pageIdx=32512, flags=0002]
[ignite-poc-datanode-131-x4fzx] L=1 <- Read unlock pageId=844420635164763, structureId=1560569965__key_PK##H2Tree [pageIdHex=0002005b, partId=65535, pageIdx=91, flags=0002]
..
[ignite-poc-datanode-131-x4fzx] L=2 - Read lock pageId=844420635199249, structureId=1560569965__key_PK##H2Tree [pageIdHex=00028711, partId=65535, pageIdx=34577, flags=0002]
[ignite-poc-datanode-131-x4fzx] 23-06-20 10:44:05.201 [WARN ] page-lock-tracker-timeout o.a.i.i.p.c.CacheDiagnosticManager:127 - Failed to save locks dump file.
[ignite-poc-datanode-131-x4fzx] org.apache.ignite.IgniteCheckedException: Work directory does not exist and cannot be created: /ignite/work
[ignite-poc-datanode-131-x4fzx]     at org.apache.ignite.internal.util.IgniteUtils.workDirectory(IgniteUtils.java:9900) ~[ignite-core-2.13.0.jar:2.13.0]
[ignite-poc-datanode-131-x4fzx]     at org.apache.ignite.internal.util.IgniteUtils.defaultWorkDirectory(IgniteUtils.java:9840) ~[ignite-core-2.13.0.jar:2.13.0]
[ignite-poc-datanode-131-x4fzx]     at org.apache.ignite.internal.processors.cache.persistence.diagnostic.pagelocktracker.PageLockTrackerManager.onHangThreads(PageLockTrackerManager.java:153) ~[ignite-core-2.13.0.jar:2.13.0]
[ignite-poc-datanode-131-x4fzx]     at org.apache.ignite.internal.processors.cache.persistence.diagnostic.pagelocktracker.SharedPageLockTracker$TimeOutWorker.iteration(SharedPageLockTracker.java:340) ~[ignite-core-2.13.0.jar:2.13.0]
[ignite-poc-datanode-131-x4fzx]     at org.apache.ignite.internal.util.worker.CycleThread.run(CycleThread.java:49) ~[ignite-core-2.13.0.jar:2.13.0]
[ignite-poc-datanode-131-z5x4r] 23-06-20 10:44:18.623 [WARN ] page-lock-tracker-timeout o.a.i.i.p.c.CacheDiagnosticManager:127 - Threads hanged: [(query-#2331%poc%-2376, TIMED_WAITING)]
[ignite-poc-datanode-131-z5x4r] 23-06-20 10:44:18.632 [WARN ] page-lock-tracker-timeout o.a.i.i.p.c.CacheDiagnosticManager:127 - Page locks dump:
[ignite-poc-datanode-131-z5x4r] Log overflow, size:512, headIdx=512 [structureId=50, pageId=844420635194620 [pageIdHex=000274fc, partId=65535, pageIdx=29948, flags=0002]]
[ignite-poc-datanode-131-z5x4r] Thread=[name=query-#2331%poc%, id=2376], state=TIMED_WAITING
[ignite-poc-datanode-131-z5x4r] Locked pages = [844420635194620[000274fc](r=1|w=0),844420635201360[000200008f50](r=1|w=0)]

Thanks, MJ
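The "Failed to save locks dump file" warning above is a secondary failure: Ignite could not create its work directory (/ignite/work), so the page-lock dump had nowhere to go. A minimal sketch of pointing a node at a directory the process user can actually create and write; the path used here is hypothetical, not taken from the poster's deployment:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class WorkDirNode {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Diagnostic artifacts such as page-lock dumps are written under the
        // work directory, so it must be creatable/writable by the Ignite user.
        cfg.setWorkDirectory("/var/ignite/work"); // hypothetical path

        try (Ignite ignite = Ignition.start(cfg)) {
            // Node runs; lock dumps now have a writable destination.
        }
    }
}
```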
Re: Ignite Affinity - sql join Question
Tested SqlQueriesExample.java from Ignite 2.13.0 - inside the sqlQueryWithJoin method it still prints the warning below when executing the join SQL on COLLOCATED_PERSON_CACHE. So is that a false positive?

[ WARN] - For join two partitioned tables join condition should contain the equality operation of affinity keys. Left side: PERSON; right side: ORGANIZATION

Thanks

---Original---
From: "Jiang Jacky"
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/sql/SqlQueriesExample.java

From: MJ <6733...@qq.com>
Sent: Tuesday, June 13, 2023 1:24:48 AM
To: user
[snipped: quoted original message, same example code as the "Ignite Affinity - sql join Question" post below]
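For context, the warning is emitted when the SQL parser cannot prove that the join's ON clause equates the two tables' affinity keys. A sketch of the shape of join the check is meant to accept, under the assumption that PERSON's affinity key is ORGID and ORGANIZATION's is its primary key ID (column names here are illustrative, not taken from the example):

```java
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class CollocatedJoinSketch {
    // The ON clause equates PERSON's affinity key (ORGID) with
    // ORGANIZATION's affinity key (its primary key ID), which is the
    // condition the QueryParser check looks for.
    static SqlFieldsQuery collocatedJoin() {
        return new SqlFieldsQuery(
            "SELECT p.NAME, o.NAME " +
            "FROM PERSON p JOIN ORGANIZATION o " +
            "ON p.ORGID = o.ID");
    }
}
```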
Re: Ignite Affinity - sql join Question
No luck. With that change it throws the exception "BinaryObjectImpl cannot be cast to java.lang.String" when it tries to put a new entry into the "companies" cache. Besides, the @AffinityKeyMapped annotation is already on the companyId field of the PersonKey class. Does the affinity key need to be marked on both the left and right sides of the SQL join?

Thanks, MJ

---Original---
From: "Jiang Jacky"
https://ignite.apache.org/docs/latest/data-modeling/affinity-collocation
[snipped: quoted original message, same example code as the "Ignite Affinity - sql join Question" post below]
Ignite Affinity - sql join Question
hi Igniters,

Can you please advise on the below? It always prints the WARN message below on the server side when I try to use a SQL join. Is anything wrong with my configuration or test code? Is it possible to eliminate that WARN message?

[ WARN] org.apache.ignite.internal.processors.query.h2.QueryParser - For join two partitioned tables join condition should contain the equality operation of affinity keys. Left side: PERSON; right side: COMPANY

Ignite version: 2.13.0

See the test code below, which is extracted from https://ignite.apache.org/docs/latest/data-modeling/affinity-collocation with some additional annotation changes.

public class AffinityCollocationExample2 {

    static class Person {
        @QuerySqlField(index = true)
        private int id;

        @QuerySqlField(index = true)
        private String companyId;

        @QuerySqlField
        private String name;

        public Person(int id, String companyId, String name) {
            this.id = id;
            this.companyId = companyId;
            this.name = name;
        }

        public int getId() {
            return id;
        }
    }

    static class PersonKey {
        private int id;

        @AffinityKeyMapped
        private String companyId;

        public PersonKey(int id, String companyId) {
            this.id = id;
            this.companyId = companyId;
        }
    }

    static class Company {
        @QuerySqlField(index = true)
        private String id;

        @QuerySqlField
        private String name;

        public Company(String id, String name) {
            this.id = id;
            this.name = name;
        }

        public String getId() {
            return id;
        }
    }

    public void configureAffinityKeyWithAnnotation() {
        CacheConfiguration
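The post is cut off at the cache configuration. A minimal sketch of how such a configuration could look, assuming CacheKeyConfiguration is used to register the @AffinityKeyMapped field; this mirrors the example's class and field names but is not the poster's actual code:

```java
import org.apache.ignite.cache.CacheKeyConfiguration;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class AffinityConfigSketch {
    static CacheConfiguration<Object, Object> personCacheConfig() {
        CacheConfiguration<Object, Object> cfg = new CacheConfiguration<>("persons");
        cfg.setCacheMode(CacheMode.PARTITIONED);

        // Registers companyId as the affinity key field of the PersonKey type,
        // so persons land on the same partition as their company.
        cfg.setKeyConfiguration(new CacheKeyConfiguration("PersonKey", "companyId"));

        return cfg;
    }
}
```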
ignite-shmem ?
hi Igniters,

Can anyone please elaborate on what the artifact below is used for? The bundled libigniteshmem.so is flagged as a high security risk by our compliance tool, so I have to exclude it, but the fundamental functionality does not seem to be impacted. Is it safe to exclude it, or could any advanced Ignite usages be impacted by the removal of ignite-shmem?

org.gridgain:ignite-shmem

Server OS: RedHat 7

Thanks, MJ
Re client node connects to server nodes behind NAT
That would be peer class loading and the data streamer. Thanks

---- From: "user" ----
https://ignite-summit.org/sessions/293596

On Mon, Nov 15, 2021 at 3:50 AM MJ <6733...@qq.com> wrote:
[snipped: quoted original message, same text as the "client node connects to server nodes behind NAT" post below]
client node connects to server nodes behind NAT
Hi,

Is it possible for a non-Kubernetes client node to connect to server nodes within Kubernetes? I have read the docs below, and it seems impossible:
https://ignite.apache.org/docs/latest/installation/kubernetes/azure-deployment#connecting-client-nodes

I have tried with a thin client outside of Kubernetes - that works fine. A client node (thick client) always throws exceptions; most likely the internal IPs behind NAT cannot be reached from outside. Is there any workaround so that a non-Kubernetes client node can connect to server nodes within Kubernetes? I'd like to utilise the power features of the thick client, and they could be deployed anywhere if there were a way to make it work.

Thanks, Ma Jun
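For reference, the NAT scenario described above is what IgniteConfiguration's address resolver is for: it maps each node's internal address to an externally reachable one. A sketch using BasicAddressResolver with hypothetical addresses; it assumes every server's mapped address and ports really are reachable from outside, which a typical Kubernetes setup does not provide out of the box:

```java
import java.net.UnknownHostException;
import java.util.HashMap;
import java.util.Map;

import org.apache.ignite.configuration.BasicAddressResolver;
import org.apache.ignite.configuration.IgniteConfiguration;

public class NatConfigSketch {
    static IgniteConfiguration serverConfig() throws UnknownHostException {
        // internal (pod) address -> externally visible address; hypothetical values
        Map<String, String> addrMap = new HashMap<>();
        addrMap.put("10.0.0.5", "203.0.113.10");

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setAddressResolver(new BasicAddressResolver(addrMap));
        return cfg;
    }
}
```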
frequent "Failed to shutdown socket" warnings in 2.11.0
Hi,

I experienced frequent "Failed to shutdown socket" warnings (see below) while testing Ignite 2.11.0. Comparing org.apache.ignite.internal.util.IgniteUtils 2.11.0 (line 4227) against 2.10.0, the method public static void close(@Nullable Socket sock, @Nullable IgniteLogger log) is newly added. I am not sure whether these close/closeQuiet methods are related to the IGNITE_QUIET setting, but the warnings below are always thrown regardless of whether IGNITE_QUIET is true or false. Yes, the warnings can be suppressed by setting the logging level to ERROR. I am not sure whether this is an issue in my code or a flaw in Ignite, but is there any other way to clean them up gracefully in code? These exception stack traces are not good for production usage.

2021-11-03 08:33:05,745 [ WARN] [grid-nio-worker-client-listener-2-#34%ignitePoc_primary%] org.apache.ignite.internal.processors.odbc.ClientListenerProcessor - Failed to shutdown socket: null
java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:797)
    at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:407)
    at org.apache.ignite.internal.util.IgniteUtils.close(IgniteUtils.java:4231)
    at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.closeKey(GridNioServer.java:2784)
    at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.close(GridNioServer.java:2835)
    at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.close(GridNioServer.java:2794)
    at org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processRead(GridNioServer.java:1357)
    at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2508)
    at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2273)
    at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1910)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
    at java.lang.Thread.run(Thread.java:748)

Thanks, MJ
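As a code-level workaround, one option narrower than raising the whole root level to ERROR is to raise only the noisy logger category. A sketch using java.util.logging, which applies only if Ignite is running on the JUL backend; with log4j or logback the equivalent is a per-logger level entry in that framework's configuration:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class QuietSocketWarnings {
    public static void main(String[] args) {
        // Only the category that emits "Failed to shutdown socket" is raised
        // to SEVERE; WARN output from every other category stays visible.
        Logger noisy = Logger.getLogger(
            "org.apache.ignite.internal.processors.odbc.ClientListenerProcessor");
        noisy.setLevel(Level.SEVERE);

        System.out.println(noisy.isLoggable(Level.WARNING)); // prints "false"
    }
}
```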
Re: Problem with Cache KV Remote Query
Confirmed: the "compact footer" setting fixed the problem. Thanks a lot. -MJ

---Original---
From: "Alex Plehanov"
https://ignite.apache.org/docs/latest/data-modeling/affinity-collocation#configuring-affinity-key

When I run the code in a single JVM, it works perfectly and successfully retrieves the cached object (personCache.get(new PersonKey(1, "company1"))). But when I try to run the client code in another new JVM (leaving the server node running locally), something goes wrong (see below). Can anyone please elaborate on why the first test case succeeded but the second one failed?

Logger log = LoggerFactory.getLogger(getClass());

//success
@Test
public void test_iterate() throws ClientException, Exception {
    ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
    try (IgniteClient client = Ignition.startClient(cfg)) {
        ClientCache
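The "compact footer" fix mentioned above refers to the binary marshaller's compactFooter flag, which needs to match between the thin client and the cluster. A sketch of setting it explicitly on the client side; the value true here is an assumption, and it should be set to whatever the server's BinaryConfiguration uses:

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.ClientConfiguration;

public class CompactFooterClient {
    public static void main(String[] args) {
        ClientConfiguration cfg = new ClientConfiguration()
            .setAddresses("127.0.0.1:10800")
            // Must agree with the server's BinaryConfiguration; a mismatch can
            // make composite keys serialize differently, so KV lookups from a
            // separate JVM miss entries that exist.
            .setBinaryConfiguration(new BinaryConfiguration().setCompactFooter(true));

        try (IgniteClient client = Ignition.startClient(cfg)) {
            // client.cache(...).get(...) now resolves keys consistently
        }
    }
}
```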