Hi Naveen,

Ignite is a memory-centric database, and I doubt it should run queries directly against native persistence on disk. For now, Ignite can only use heap memory for heavy queries. This will be addressed in future versions by introducing a memory region for SQL queries, which will allow Ignite to spill to disk for very large queries as a trade-off to avoid OOM.
It looks like the Map queries fail due to OOM. This is a known issue for pre-2.3 versions [1], where Map queries fetch the full result dataset.

Is it possible that the ASSOCIATED_PARTIES objects are too large, or that your heap size is too low? Is it possible that ResultSets or connections are not closed properly and there is some leakage?

There should be enough heap memory on all nodes to hold at least one page of query results per query. The page size is 1024 rows by default, so you need enough free heap for 1024 ASSOCIATED_PARTIES objects per running query. See also the API sketch at the end of this message for lowering the page size.

[1] https://issues.apache.org/jira/browse/IGNITE-5991

On Fri, Apr 20, 2018 at 9:42 AM, Naveen <[email protected]> wrote:
> Hi Andrey,
>
> The reason I was trying to explore this feature is the following.
>
> I will give you an example. I have a cache with 20M records, and when I run
> this query:
>
>   SELECT * FROM "AssociatedPartiesCache".ASSOCIATED_PARTIES
>
> the query took more than 200 secs and ran out of memory. Here is the error
> thrown. I tried this with lazy=true, so I guess the syntax and usage are
> correct:
>
>   jdbc:ignite:thin://10.144.114.113?lazy=true
>
> SQL Error [50000]: javax.cache.CacheException: Failed to run map query
> remotely. Failed to execute map query on the node:
> ef5b4e7d-3423-4b84-8427-0491cd13f6c4, class
> org.apache.ignite.IgniteCheckedException: Failed to execute SQL query. Out
> of memory.; SQL statement:
> SELECT
>     __Z0.ASSOCIATED_PARTY_ID __C0_0,
>     __Z0.WALLETID __C0_1,
>     __Z0.UPDATEDDATETIME __C0_2,
>     __Z0.UPDATEDBY __C0_3,
>     __Z0.PARTY_ID __C0_4
> FROM "AssociatedPartiesCache".ASSOCIATED_PARTIES __Z0 [90108-195]
> javax.cache.CacheException: Failed to run map query remotely. Failed to
> execute map query on the node: ef5b4e7d-3423-4b84-8427-0491cd13f6c4, class
> org.apache.ignite.IgniteCheckedException: Failed to execute SQL query. Out
> of memory.; SQL statement:
> SELECT
>     __Z0.ASSOCIATED_PARTY_ID __C0_0,
>     __Z0.WALLETID __C0_1,
>     __Z0.UPDATEDDATETIME __C0_2,
>     __Z0.UPDATEDBY __C0_3,
>     __Z0.PARTY_ID __C0_4
> FROM "AssociatedPartiesCache".ASSOCIATED_PARTIES __Z0 [90108-195]
> javax.cache.CacheException: Failed to run map query remotely. Failed to
> execute map query on the node: ef5b4e7d-3423-4b84-8427-0491cd13f6c4, class
> org.apache.ignite.IgniteCheckedException: Failed to execute SQL query. Out
> of memory.; SQL statement:
> SELECT
>     __Z0.ASSOCIATED_PARTY_ID __C0_0,
>     __Z0.WALLETID __C0_1,
>     __Z0.UPDATEDDATETIME __C0_2,
>     __Z0.UPDATEDBY __C0_3,
>     __Z0.PARTY_ID __C0_4
> FROM "AssociatedPartiesCache".ASSOCIATED_PARTIES __Z0 [90108-195]
>
> In spite of lazy=true, I still ran into this issue.
>
> The way I was looking at a possible solution: long-running queries should
> not impact server RAM; Ignite should run the query against native
> persistence on disk. The query I am running should not bring the cluster
> down. I am fine if the query takes a couple of minutes to execute, but
> ultimately it should not disturb the cluster.
>
> After this error, the cluster stopped working, and we had to restart it to
> make it work again. I believe there are definitely ways to overcome this
> issue. We have a billion records in some of the tables, and if someone
> unknowingly runs a query on those tables, it should not bring the cluster
> down abruptly.
>
> Thanks
> Naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/

--
Best regards,
Andrey V. Mashenkov
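
P.S. The lazy flag and a smaller page size can also be set through the Java API instead of the JDBC URL. Below is a minimal sketch, assuming Ignite 2.x and reusing the cache and table names from the quoted query; the configuration file name and row handling are only illustrative, not taken from this thread:

    import java.util.List;

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.query.QueryCursor;
    import org.apache.ignite.cache.query.SqlFieldsQuery;

    public class LazyQueryExample {
        public static void main(String[] args) {
            // Start a node; the config file name here is hypothetical.
            try (Ignite ignite = Ignition.start("example-ignite.xml")) {
                IgniteCache<?, ?> cache = ignite.cache("AssociatedPartiesCache");

                SqlFieldsQuery qry = new SqlFieldsQuery(
                    "SELECT * FROM \"AssociatedPartiesCache\".ASSOCIATED_PARTIES");
                qry.setLazy(true);     // stream results instead of materializing the full result set
                qry.setPageSize(256);  // fewer rows held on-heap per page (default is 1024)

                // Iterate the cursor and release its resources when done.
                try (QueryCursor<List<?>> cursor = cache.query(qry)) {
                    for (List<?> row : cursor)
                        System.out.println(row);
                }
            }
        }
    }

With the default page size of 1024, each in-flight query needs free heap for roughly 1024 ASSOCIATED_PARTIES rows on every node serving it, so lowering the page size directly reduces that footprint.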
