Hello,

I see the following error in the attached files: "Caused by: java.lang.OutOfMemoryError: Java heap space". It means that a single node cannot hold the whole result set of this query in its heap. You need to increase the JVM heap size (via the -Xms/-Xmx JVM options) or add another node to the cluster. Have a look at the following doc pages:

1. https://apacheignite.readme.io/docs/preparing-for-production
2. https://apacheignite.readme.io/docs#garbage-collection-tuning
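For example, if the nodes are started with ignite.sh, the heap can be raised through the JVM_OPTS environment variable. This is only a sketch: the 4g figure is an illustration (size it to the result sets you expect), and the config path is a placeholder for wherever your harri-server.xml actually lives:

    # Illustrative heap settings only -- adjust to the size of your data / result sets.
    # ignite.sh picks up additional JVM flags from the JVM_OPTS environment variable.
    export JVM_OPTS="-Xms4g -Xmx4g"
    bin/ignite.sh path/to/harri-server.xml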
On Sun, Dec 10, 2017 at 9:21 PM, Ahmad Al-Masry <[email protected]> wrote:
> Dears;
> I also noticed that the problem happens even when I reduce to a single node,
> even though it has 2 GB of free RAM.
> Any suggestions?
> BR
>
> > On Dec 10, 2017, at 4:23 PM, Ahmad Al-Masry <[email protected]> wrote:
> >
> > Dears;
> > I am trying to execute a complex query on a cluster of two nodes.
> > When the caches are configured as PARTITIONED, the execution takes about
> > 12 seconds, but when I change them to REPLICATED, the attached error
> > appears. When I tried to increase the Java heap, the query timed out instead.
> > Also attached is the nodes' configuration.
> > BR
> > <harri-server.xml><error.log>
