Re: slow query performance against berkley db
Thanks Stan for looking into it. Unfortunately, it still takes 23 sec on a 240 GB RAM system. The corresponding EXPLAIN PLAN:

[[SELECT
    ST.ENTRYID,
    ST.ATTRNAME,
    ST.ATTRVALUE,
    ST.ATTRSTYPE
FROM "objectclass".IGNITE_OBJECTCLASS T
    /* "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE = ?1 */
    /* WHERE T.ATTRVALUE = ?1 */
INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE ST
    /* "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE_ENTRYID_IDX: ENTRYID = T.ENTRYID
        AND ENTRYID = T.ENTRYID */
    ON 1=1
    /* WHERE (ST.ATTRKIND IN('u', 'o'))
        AND (ST.ENTRYID = T.ENTRYID) */
INNER JOIN "dn".IGNITE_DN DNT
    /* "dn".EP_DN_IDX: ENTRYID = ST.ENTRYID
        AND PARENTDN >= 'dc=ignite,'
        AND PARENTDN < 'dc=ignite-'
        AND ENTRYID = ST.ENTRYID */
    ON 1=1
WHERE (((ST.ATTRKIND IN('u', 'o'))
    AND (T.ATTRVALUE = ?1))
    AND (DNT.PARENTDN LIKE ?2))
    AND ((ST.ENTRYID = DNT.ENTRYID)
    AND (ST.ENTRYID = T.ENTRYID))]]

Pls advise.

Thanks,
Rajesh

On Tue, Feb 13, 2018 at 8:48 PM, Stanislav Lukyanov wrote:
> Hi Rajesh,
>
> While I don't have - and, probably, no one has - any benchmarks comparing
> Ignite vs Berkeley in a single-node configuration (as others have said, this
> is not really a common use case for Ignite), I can say that the performance
> problems you see are likely to be caused by your query structure.
>
> Rule of thumb for Ignite's SQL - avoid nested SELECTs. Also make sure you
> have proper indexes for the fields you use in conditions. Usually you also
> need to make sure that your data is efficiently collocated, but that only
> applies to cases when you have multiple nodes.
>
> I've attempted to optimize the SELECT you've posted - here it is:
> SELECT st.entryID, st.attrName, st.attrValue, st.attrsType
> FROM "objectclass".Ignite_ObjectClass AS t
> JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE AS st
>     ON st.entryID = t.entryID
> JOIN "dn".Ignite_DN AS dnt
>     ON st.entryID = dnt.entryID
> WHERE t.attrValue = ?
>     AND (st.attrKind = 'u' OR st.attrKind = 'o')
>     AND dnt.parentDN LIKE ?
>
> I can't really verify its correctness, but I guess it can be a decent place
> to start.
>
> Thanks,
> Stan
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
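Stan's flattening advice can be checked mechanically: a single-level join contains no derived tables, i.e. no `FROM (` anywhere in the statement text. Below is a minimal, hedged sketch that holds the rewritten query as a Java string constant, mirroring how the client code in this thread builds its SQL. The class name `FlattenedQuery`, the helper `isFlat`, and the `dnt.entryID` join column (rather than `dnt.entry`) are assumptions, not anything confirmed by the thread.

```java
// Sketch only: the flattened query from Stan's reply, kept as a constant so the
// client code can pass it to SqlFieldsQuery. Schema/table names are taken from
// the thread; dnt.entryID is assumed to be the real join column.
public class FlattenedQuery {
    public static final String SQL =
        "SELECT st.entryID, st.attrName, st.attrValue, st.attrsType "
        + "FROM \"objectclass\".Ignite_ObjectClass AS t "
        + "JOIN \"Ignite_DSAttributeStore\".IGNITE_DSATTRIBUTESTORE AS st "
        + "  ON st.entryID = t.entryID "
        + "JOIN \"dn\".Ignite_DN AS dnt "
        + "  ON st.entryID = dnt.entryID "
        + "WHERE t.attrValue = ? "
        + "  AND st.attrKind IN ('u', 'o') "
        + "  AND dnt.parentDN LIKE ?";

    // True when the statement contains no derived table (no nested SELECT).
    public static boolean isFlat(String sql) {
        return !sql.toUpperCase().contains("FROM (");
    }

    public static void main(String[] args) {
        System.out.println("flat=" + isFlat(SQL)); // prints flat=true
    }
}
```

The `IN ('u', 'o')` form is equivalent to Stan's `OR` pair; either way the point is that the optimizer sees three plain tables joined on indexed columns instead of three derived tables joined with `ON 1=1`.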
Re: slow query performance against berkley db
Dear all,

Request you to kindly suggest if my approach is wrong. The idea of replacing Berkeley DB with Ignite would not work out if the query is slow. What's the best model to be used with Ignite for my use case?

Thanks,
Rajesh

On Fri, Feb 9, 2018 at 9:38 AM, Rajesh Kishore <rajesh10si...@gmail.com> wrote:
> Igniters any pointers pls.
>
> Regards,
> Rajesh
>
> On Wed, Feb 7, 2018 at 9:15 AM, Rajesh Kishore <rajesh10si...@gmail.com> wrote:
>
>> Hi Dmitry,
>>
>> Thanks a ton.
>>
>> What is not convincing to me is that with just *.1 M in main table and* *2 M
>> records in other table*, the SQL query is taking around 24 sec; that is
>> worrisome.
>> In local cache mode I tried both partitioned and non-partitioned mode;
>> the result is the same.
>> All I wanted to know is whether my approach is wrong somewhere. I am sure
>> igniters would correct me on the approach used.
>>
>> Regards,
>> -Rajesh
>>
>> On Wed, Feb 7, 2018 at 8:23 AM, Dmitriy Setrakyan <dsetrak...@apache.org>
>> wrote:
>>
>>> Hi Rajesh,
>>>
>>> Please allow the community some time to test your code.
>>>
>>> As far as testing single node vs. distributed, when you have more than
>>> one node, Ignite will split your data set evenly across multiple nodes.
>>> This means that when running the query, it will be executed on each node on
>>> smaller data sets in parallel, which should provide better performance. If
>>> your query does some level of scanning, then the more nodes you add, the
>>> faster it will get.
>>>
>>> D.
>>>
>>> On Tue, Feb 6, 2018 at 5:02 PM, Rajesh Kishore <rajesh10si...@gmail.com>
>>> wrote:
>>>
>>>> Hi All,
>>>> Please help me in getting the pointers; this is a deciding factor for us
>>>> to further evaluate Ignite. Somehow we are not convinced that with just .1 M
>>>> records it's not as responsive as Berkeley DB.
>>>> Let me know the strategy to be adopted, and pointers where I am doing
>>>> wrong.
>>>>
>>>> Thanks
>>>> Rajesh
>>>>
>>>> On 6 Feb 2018 6:11 p.m., "Rajesh Kishore" <rajesh10si...@gmail.com>
>>>> wrote:
>>>>
>>>>> Further to this,
>>>>>
>>>>> I am re-framing what I have; pls correct me if my approach is correct
>>>>> or not.
>>>>>
>>>>> As of now, using only one node as a local cache and using native persistence
>>>>> on the file system. The system has a small number of records, around *.1 M* in
>>>>> the main table and 2 M in the supporting table.
>>>>>
>>>>> Using SQL to retrieve the records using a join; the SQL used is
>>>>>
>>>>> ---
>>>>> final String query1 = "SELECT "
>>>>>     + "f.entryID,f.attrName,f.attrValue, "
>>>>>     + "f.attrsType "
>>>>>     + "FROM "
>>>>>     + "( select st.entryID,st.attrName,st.attrValue, st.attrsType from "
>>>>>     + "(SELECT at1.entryID FROM \"objectclass\".Ignite_ObjectClass"
>>>>>     + " at1 WHERE "
>>>>>     + " at1.attrValue= ? ) t"
>>>>>     + " INNER JOIN \"Ignite_DSAttributeStore\".IGNITE_DSATTRIBUTESTORE st ON st.entryID = t.entryID "
>>>>>     + " WHERE st.attrKind IN ('u','o') "
>>>>>     + " ) f "
>>>>>     + " INNER JOIN "
>>>>>     + " ( "
>>>>>     + " SELECT entryID from \"dn\".Ignite_DN where parentDN like ? "
>>>>>     + ") "
>>>>>     + " dnt"
>>>>>     + " ON f.entryID = dnt.entryID"
>>>>>     + " order by f.entryID";
>>>>>
>>>>> String queryWithType = query1;
>>>>> QueryCursor<List> cursor = cache.query(new SqlFieldsQuery(
>>>>>     queryWithType).setEnforceJoinOrder(true).setArgs("person", "dc=ignite,%"));
ignite support for multi data center replication
Hi,

Does Ignite replication work across multiple data centers / WAN?

Regards,
-Rajesh
Re: slow query performance against berkley db
Igniters any pointers pls. Regards, Rajesh On Wed, Feb 7, 2018 at 9:15 AM, Rajesh Kishore <rajesh10si...@gmail.com> wrote: > Hi Dmitry, > > Thanks a ton. > > What is not convincing to me is with just *.1 M in main table and* *2 M > records in other table * , sql query is taking around 24 sec, that is > worrisome. > In local cache mode , I tried both using partitioned and non partitioned > mode , the result is same. > All I wanted to know , is my approach is wrong somewhere? I am sure > igniters would correct me with my approach used. > > Regards, > -Rajesh > > On Wed, Feb 7, 2018 at 8:23 AM, Dmitriy Setrakyan <dsetrak...@apache.org> > wrote: > >> Hi Rajesh, >> >> Please allow the community some time to test your code. >> >> As far as testing single node vs. distributed, when you have more than >> one node, Ignite will split your data set evenly across multiple nodes. >> This means that when running the query, it will be executed on each node on >> smaller data sets in parallel, which should provide better performance. If >> your query does some level of scanning, then the more nodes you add, the >> faster it will get. >> >> D. >> >> On Tue, Feb 6, 2018 at 5:02 PM, Rajesh Kishore <rajesh10si...@gmail.com> >> wrote: >> >>> Hi All >>> Please help me in getting the pointers, this is deciding factor for us >>> to further evaluate ignite. Somehow we are not convinced with just . 1 m >>> records it's not responsive as that of Berkley db. >>> Let me know the strategy to be adopted, pointers where I am doing wrong. >>> >>> Thanks >>> Rajesh >>> >>> On 6 Feb 2018 6:11 p.m., "Rajesh Kishore" <rajesh10si...@gmail.com> >>> wrote: >>> >>>> Further to this, >>>> >>>> I am re-framing what I have , pls correct me if my approach is correct >>>> or not. >>>> >>>> As of now, using only node as local cache and using native persistence >>>> file system. The system has less number of records around *.1 M *in >>>> main table and 2 M in supporting table. 
>>>> >>>> Using sql to retrieve the records using join , the sql used is >>>> --- >>>> final String query1 = "SELECT " >>>> + "f.entryID,f.attrName,f.attrValue, " >>>> + "f.attrsType " >>>> + "FROM " >>>> +"( select st.entryID,st.attrName,st.attrValue, >>>> st.attrsType from " >>>> +"(SELECT at1.entryID FROM \"objectclass\".Ignite_ObjectC >>>> lass" >>>> + " at1 WHERE " >>>> + " at1.attrValue= ? ) t" >>>> +" INNER JOIN >>>> \"Ignite_DSAttributeStore\".IGNITE_DSATTRIBUTESTORE >>>> st ON st.entryID = t.entryID " >>>> + " WHERE st.attrKind IN ('u','o') " >>>> +" ) f " >>>> + " INNER JOIN " >>>> + " ( " >>>> +" SELECT entryID from \"dn\".Ignite_DN where parentDN like >>>> ? " >>>> +") " >>>> +" dnt" >>>> + " ON f.entryID = dnt.entryID" >>>> + " order by f.entryID"; >>>> >>>> String queryWithType = query1; >>>> QueryCursor<List> cursor = cache.query(new SqlFieldsQuery( >>>> queryWithType).setEnforceJoinOrder(true).setArgs("person", >>>> "dc=ignite,%")); >>>> System.out.println("SUBTREE "+cursor.getAll() ); >>>> >>>> >>>> --- >>>> >>>> The corresponding EXPLAIN plan is >>>> >>>> >>>> [[SELECT >>>> F.ENTRYID, >>>> F.ATTRNAME, >>>> F.ATTRVALUE, >>>> F.ATTRSTYPE >>>> FROM ( >>>> SELECT >>>> ST.ENTRYID, >>>> ST.ATTRNAME, >>>> ST.ATTRVALUE, >>>> ST.ATTRSTYPE >>>> FROM
Re: code sample for cluster configuration
Thanks Pavel, I am aware of this code. I am able to establish a cluster as well. I have the following requirement:

From my application I want to retrieve/insert records on different cluster servers. My application code is a single instance, and it should be unaware of which Ignite cluster server it is retrieving/inserting the data from. Would just setting Ignition.setClientMode(true) be enough for this use case? Would my application push the data to one of the partitioned cluster servers?

Thanks,
Rajesh

On Wed, Feb 7, 2018 at 8:43 PM, Vinokurov Pavel <vinokurov.pa...@gmail.com> wrote:
> Hi Rajesh,
>
> There is a good sample with persistence enabled -
> https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/persistentstore/PersistentStoreExample.java
> Also documentation about Ignite persistence is presented at
> https://apacheignite.readme.io/docs/distributed-persistent-store.
>
> 2018-02-07 13:14 GMT+03:00 Rajesh Kishore <rajesh10si...@gmail.com>:
>
>> Hi,
>>
>> I want to try a two-node setup for an Ignite cluster with native file-based
>> persistence enabled.
>>
>> Any samples or pointers?
>>
>> -Rajesh
>
> --
> Regards
> Pavel Vinokurov
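On the client-mode question above, a minimal, hedged sketch of the usual Ignite 2.x answer: a node started with clientMode=true joins the topology without storing any data, and cache operations issued from it are routed to whichever server node owns the key's partition, so the application does not need to know which server holds which data. Programmatically that is `Ignition.setClientMode(true)` before `Ignition.start()`; the Spring XML equivalent, assuming the standard configuration bean, is:

```xml
<!-- Hedged sketch: client-mode node configuration (Ignite 2.x Spring XML). -->
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- This JVM joins the cluster as a client: no data is stored locally;
         puts/gets/queries are routed to the server node owning the key's partition. -->
    <property name="clientMode" value="true"/>
</bean>
```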
code sample for cluster configuration
Hi,

I want to try a two-node setup for an Ignite cluster with native file-based persistence enabled.

Any samples or pointers?

-Rajesh
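For a question like this, a minimal, hedged configuration sketch (assuming Ignite 2.3+, where persistence is enabled per data region; the host addresses are placeholders):

```xml
<!-- Hedged sketch: the same config file can be used on both nodes. -->
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Enable native persistence for the default data region. -->
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <property name="persistenceEnabled" value="true"/>
                </bean>
            </property>
        </bean>
    </property>
    <!-- Static discovery: list both hosts so the nodes find each other. -->
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <property name="ipFinder">
                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                    <property name="addresses">
                        <list>
                            <value>10.0.0.1:47500..47509</value>
                            <value>10.0.0.2:47500..47509</value>
                        </list>
                    </property>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```

Note that with native persistence enabled the cluster starts inactive, so after both nodes are up it has to be activated once, for example via `ignite.cluster().active(true)`.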
Re: slow query performance against berkley db
Hi Dmitry,

Thanks a ton.

What is not convincing to me is that with just *.1 M in main table and* *2 M records in other table*, the SQL query is taking around 24 sec; that is worrisome.
In local cache mode I tried both partitioned and non-partitioned mode; the result is the same.
All I wanted to know is whether my approach is wrong somewhere. I am sure igniters would correct me on the approach used.

Regards,
-Rajesh

On Wed, Feb 7, 2018 at 8:23 AM, Dmitriy Setrakyan <dsetrak...@apache.org> wrote:
> Hi Rajesh,
>
> Please allow the community some time to test your code.
>
> As far as testing single node vs. distributed, when you have more than one
> node, Ignite will split your data set evenly across multiple nodes. This
> means that when running the query, it will be executed on each node on
> smaller data sets in parallel, which should provide better performance. If
> your query does some level of scanning, then the more nodes you add, the
> faster it will get.
>
> D.
>
> On Tue, Feb 6, 2018 at 5:02 PM, Rajesh Kishore <rajesh10si...@gmail.com>
> wrote:
>
>> Hi All,
>> Please help me in getting the pointers; this is a deciding factor for us to
>> further evaluate Ignite. Somehow we are not convinced that with just .1 M
>> records it's not as responsive as Berkeley DB.
>> Let me know the strategy to be adopted, and pointers where I am doing wrong.
>>
>> Thanks
>> Rajesh
>>
>> On 6 Feb 2018 6:11 p.m., "Rajesh Kishore" <rajesh10si...@gmail.com>
>> wrote:
>>
>>> Further to this,
>>>
>>> I am re-framing what I have; pls correct me if my approach is correct
>>> or not.
>>>
>>> As of now, using only one node as a local cache and using native persistence
>>> on the file system. The system has a small number of records, around *.1 M* in
>>> the main table and 2 M in the supporting table.
>>> >>> Using sql to retrieve the records using join , the sql used is >>> --- >>> final String query1 = "SELECT " >>> + "f.entryID,f.attrName,f.attrValue, " >>> + "f.attrsType " >>> + "FROM " >>> +"( select st.entryID,st.attrName,st.attrValue, >>> st.attrsType from " >>> +"(SELECT at1.entryID FROM \"objectclass\".Ignite_ObjectC >>> lass" >>> + " at1 WHERE " >>> + " at1.attrValue= ? ) t" >>> +" INNER JOIN >>> \"Ignite_DSAttributeStore\".IGNITE_DSATTRIBUTESTORE >>> st ON st.entryID = t.entryID " >>> + " WHERE st.attrKind IN ('u','o') " >>> +" ) f " >>> + " INNER JOIN " >>> + " ( " >>> +" SELECT entryID from \"dn\".Ignite_DN where parentDN like >>> ? " >>> +") " >>> +" dnt" >>> + " ON f.entryID = dnt.entryID" >>> + " order by f.entryID"; >>> >>> String queryWithType = query1; >>> QueryCursor<List> cursor = cache.query(new SqlFieldsQuery( >>> queryWithType).setEnforceJoinOrder(true).setArgs("person", >>> "dc=ignite,%")); >>> System.out.println("SUBTREE "+cursor.getAll() ); >>> >>> >>> --- >>> >>> The corresponding EXPLAIN plan is >>> >>> >>> [[SELECT >>> F.ENTRYID, >>> F.ATTRNAME, >>> F.ATTRVALUE, >>> F.ATTRSTYPE >>> FROM ( >>> SELECT >>> ST.ENTRYID, >>> ST.ATTRNAME, >>> ST.ATTRVALUE, >>> ST.ATTRSTYPE >>> FROM ( >>> SELECT >>> AT1.ENTRYID >>> FROM "objectclass".IGNITE_OBJECTCLASS AT1 >>> WHERE AT1.ATTRVALUE = ?1 >>> ) T >>> INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE ST >>> ON 1=1 >>> WHERE (ST.ATTRKIND IN('u', 'o')) >>> AND (ST.ENTRYID = T.ENTRYID) >>> ) F >>> /* SELECT >>> ST.ENTRYID, >>> ST.ATTRNAME, >>>
Re: slow query performance against berkley db
Hi All Please help me in getting the pointers, this is deciding factor for us to further evaluate ignite. Somehow we are not convinced with just . 1 m records it's not responsive as that of Berkley db. Let me know the strategy to be adopted, pointers where I am doing wrong. Thanks Rajesh On 6 Feb 2018 6:11 p.m., "Rajesh Kishore" <rajesh10si...@gmail.com> wrote: > Further to this, > > I am re-framing what I have , pls correct me if my approach is correct or > not. > > As of now, using only node as local cache and using native persistence > file system. The system has less number of records around *.1 M *in main > table and 2 M in supporting table. > > Using sql to retrieve the records using join , the sql used is > --- > final String query1 = "SELECT " > + "f.entryID,f.attrName,f.attrValue, " > + "f.attrsType " > + "FROM " > +"( select st.entryID,st.attrName,st.attrValue, st.attrsType > from " > +"(SELECT at1.entryID FROM \"objectclass\".Ignite_ObjectClass" > + " at1 WHERE " > + " at1.attrValue= ? ) t" > +" INNER JOIN \"Ignite_DSAttributeStore\".IGNITE_DSATTRIBUTESTORE > st ON st.entryID = t.entryID " > + " WHERE st.attrKind IN ('u','o') " > +" ) f " > + " INNER JOIN " > + " ( " > +" SELECT entryID from \"dn\".Ignite_DN where parentDN like ? 
" > +") " > +" dnt" > + " ON f.entryID = dnt.entryID" > + " order by f.entryID"; > > String queryWithType = query1; > QueryCursor<List> cursor = cache.query(new SqlFieldsQuery( > queryWithType).setEnforceJoinOrder(true).setArgs("person", > "dc=ignite,%")); > System.out.println("SUBTREE "+cursor.getAll() ); > > > --- > > The corresponding EXPLAIN plan is > > > [[SELECT > F.ENTRYID, > F.ATTRNAME, > F.ATTRVALUE, > F.ATTRSTYPE > FROM ( > SELECT > ST.ENTRYID, > ST.ATTRNAME, > ST.ATTRVALUE, > ST.ATTRSTYPE > FROM ( > SELECT > AT1.ENTRYID > FROM "objectclass".IGNITE_OBJECTCLASS AT1 > WHERE AT1.ATTRVALUE = ?1 > ) T > INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE ST > ON 1=1 > WHERE (ST.ATTRKIND IN('u', 'o')) > AND (ST.ENTRYID = T.ENTRYID) > ) F > /* SELECT > ST.ENTRYID, > ST.ATTRNAME, > ST.ATTRVALUE, > ST.ATTRSTYPE > FROM ( > SELECT > AT1.ENTRYID > FROM "objectclass".IGNITE_OBJECTCLASS AT1 > WHERE AT1.ATTRVALUE = ?1 > ) T > /++ SELECT > AT1.ENTRYID > FROM "objectclass".IGNITE_OBJECTCLASS AT1 > /++ "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE = > ?1 ++/ > WHERE AT1.ATTRVALUE = ?1 > ++/ > INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE ST > /++ "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE_ENTRYID_IDX: > ENTRYID = T.ENTRYID ++/ > ON 1=1 > WHERE (ST.ATTRKIND IN('u', 'o')) > AND (ST.ENTRYID = T.ENTRYID) > */ > INNER JOIN ( > SELECT > ENTRYID > FROM "dn".IGNITE_DN > WHERE PARENTDN LIKE ?2 > ) DNT > /* SELECT > ENTRYID > FROM "dn".IGNITE_DN > /++ "dn".EP_DN_IDX: ENTRYID IS ?3 ++/ > WHERE (ENTRYID IS ?3) > AND (PARENTDN LIKE ?2): ENTRYID = F.ENTRYID > AND ENTRYID = F.ENTRYID > */ > ON 1=1 > WHERE F.ENTRYID = DNT.ENTRYID > ORDER BY 1]] > ----- > > The above query takes *24 sec* to retrieve the records which we feel > defeats the purpose , our application existing berkley db can retrieve this > faster. 
>
> Question is -
> a) I have attached my application models & client code; am I doing
> something wrong in defining the models and cache configuration? Right now,
> not considering distributed as I have a small number of records. What is
> recommended?
> b) What is the best memory requirement of Ignite/H2; is a 16 GB machine
> not good enough for the records I have as of now?
> c) Does it create performance overhead when using SQL?
Re: slow query performance against berkley db
Further to this, I am re-framing what I have , pls correct me if my approach is correct or not. As of now, using only node as local cache and using native persistence file system. The system has less number of records around *.1 M *in main table and 2 M in supporting table. Using sql to retrieve the records using join , the sql used is --- final String query1 = "SELECT " + "f.entryID,f.attrName,f.attrValue, " + "f.attrsType " + "FROM " +"( select st.entryID,st.attrName,st.attrValue, st.attrsType from " +"(SELECT at1.entryID FROM \"objectclass\".Ignite_ObjectClass" + " at1 WHERE " + " at1.attrValue= ? ) t" +" INNER JOIN \"Ignite_DSAttributeStore\".IGNITE_DSATTRIBUTESTORE st ON st.entryID = t.entryID " + " WHERE st.attrKind IN ('u','o') " +" ) f " + " INNER JOIN " + " ( " +" SELECT entryID from \"dn\".Ignite_DN where parentDN like ? " +") " +" dnt" + " ON f.entryID = dnt.entryID" + " order by f.entryID"; String queryWithType = query1; QueryCursor<List> cursor = cache.query(new SqlFieldsQuery( queryWithType).setEnforceJoinOrder(true).setArgs("person", "dc=ignite,%")); System.out.println("SUBTREE "+cursor.getAll() ); --- The corresponding EXPLAIN plan is [[SELECT F.ENTRYID, F.ATTRNAME, F.ATTRVALUE, F.ATTRSTYPE FROM ( SELECT ST.ENTRYID, ST.ATTRNAME, ST.ATTRVALUE, ST.ATTRSTYPE FROM ( SELECT AT1.ENTRYID FROM "objectclass".IGNITE_OBJECTCLASS AT1 WHERE AT1.ATTRVALUE = ?1 ) T INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE ST ON 1=1 WHERE (ST.ATTRKIND IN('u', 'o')) AND (ST.ENTRYID = T.ENTRYID) ) F /* SELECT ST.ENTRYID, ST.ATTRNAME, ST.ATTRVALUE, ST.ATTRSTYPE FROM ( SELECT AT1.ENTRYID FROM "objectclass".IGNITE_OBJECTCLASS AT1 WHERE AT1.ATTRVALUE = ?1 ) T /++ SELECT AT1.ENTRYID FROM "objectclass".IGNITE_OBJECTCLASS AT1 /++ "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE = ?1 ++/ WHERE AT1.ATTRVALUE = ?1 ++/ INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE ST /++ "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE_ENTRYID_IDX: ENTRYID = T.ENTRYID 
++/ ON 1=1 WHERE (ST.ATTRKIND IN('u', 'o')) AND (ST.ENTRYID = T.ENTRYID) */ INNER JOIN ( SELECT ENTRYID FROM "dn".IGNITE_DN WHERE PARENTDN LIKE ?2 ) DNT /* SELECT ENTRYID FROM "dn".IGNITE_DN /++ "dn".EP_DN_IDX: ENTRYID IS ?3 ++/ WHERE (ENTRYID IS ?3) AND (PARENTDN LIKE ?2): ENTRYID = F.ENTRYID AND ENTRYID = F.ENTRYID */ ON 1=1 WHERE F.ENTRYID = DNT.ENTRYID ORDER BY 1]] - The above query takes *24 sec* to retrieve the records which we feel defeats the purpose , our application existing berkley db can retrieve this faster. Question is - a) I have attached my application models & client code , am I doing something wrong in defining the models and cache configuration. Right now, not considering distributed as I have less number of records.. What is recommended? b) What is the best memory requirement of Ignite/H2 , is 16g machine not good enough for the records I have as of now? c) does it create performance overhead when using sql Please guide. Thanks, Rajesh On Tue, Feb 6, 2018 at 9:30 AM, Rajesh Kishore <rajesh10si...@gmail.com> wrote: > Hi Michael > > Pls find my response > > > Does that mean Ignite cannot scale well against Berkley dB Incase of > single node? > Could you please clarify, what your question means? > > > (Rajesh) Our application currently uses Berkley dB and we are using it as > key value dB ie storing object as value as bytes, we are using our own > logic in application for replication. > > >The comparison is being done based on one node as of now. > > now as a poc I have considered my model to be fit in sql dB of ignite > > What I am realizing, I get the faster result in Berkley dB against ignite > in just > .1 m records. > I understand
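Regarding question (b) above on memory sizing: Ignite keeps the hot data set in configurable off-heap data regions, so a 16 GB machine can be workable for roughly 2 M records provided the region size and JVM heap are set deliberately rather than left at defaults. A hedged sketch, assuming Ignite 2.3+ property names; the 4 GB figure is an arbitrary example, not a recommendation:

```xml
<!-- Hedged sketch: cap the default off-heap region at 4 GB (example value). -->
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <!-- Off-heap memory for cached pages; with native persistence,
                     pages that do not fit are read from disk on demand. -->
                <property name="maxSize" value="#{4L * 1024 * 1024 * 1024}"/>
            </bean>
        </property>
    </bean>
</property>
```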
Re: slow query performance against berkley db
Hi Michael,

Pls find my response.

> Does that mean Ignite cannot scale well against Berkeley DB in case of a single node?
> Could you please clarify what your question means?

(Rajesh) Our application currently uses Berkeley DB and we are using it as a key-value DB, i.e. storing objects as byte values; we are using our own logic in the application for replication.

The comparison is being done based on one node as of now. Now, as a PoC, I have considered my model to fit in the SQL DB of Ignite.

What I am realizing is that I get faster results from Berkeley DB than from Ignite with just .1 M records. I understand that Ignite is a distributed system, but with just .1 M records its result is not comparable with Berkeley DB? Any pointers?

Regards
Rajesh

On 6 Feb 2018 8:35 a.m., "Michael Cherkasov" <michael.cherka...@gmail.com> wrote:

Rajesh,

> Does that mean Ignite cannot scale well against Berkeley DB in case of a single node?

Could you please clarify what your question means?

(Rajesh) Our application currently uses Berkeley DB and we are using it as a key-value DB, i.e. storing objects as byte values; we are using our own logic in the application for replication. The comparison is being done based on one node as of now. Now, as a PoC, I have considered my model to fit in the SQL DB of Ignite. What I am realizing is that I get faster results from Berkeley DB than from Ignite with just .1 M records. I understand that Ignite is a distributed system, but with just .1 M records its result is not comparable with Berkeley DB? Any pointers?

Ignite can scale from a single node to hundreds (or even thousands; I have only seen a cluster of 300 nodes, but this is definitely not a limit). It was designed to work as a distributed grid. So I think if you try to compare one node of Ignite with one node of SomeDB, Ignite will lose. But you can run 10 Ignite nodes and they will be faster than 10 nodes of SomeDB; furthermore, you can kill nodes and Ignite will continue to work. What will happen if the host with Berkeley DB crashes?
So in case of a crash, can you transparently switch to another Berkeley DB node and continue to work?

Ignite is not just a SQL DB; Ignite is a distributed data grid, and a strongly consistent, HA database. Please take this into account when comparing it with other solutions.

Thanks,
Mike.

2018-02-05 9:23 GMT-08:00 Rajesh Kishore <rajesh10si...@gmail.com>:

> Hi Christos
>
> Does that mean Ignite cannot scale well against Berkeley DB in case of a
> single node?
>
> Regards
> Rajesh
>
> On 5 Feb 2018 10:08 p.m., "Christos Erotocritou" <chris...@gridgain.com>
> wrote:
>
>> Hi Rajesh,
>>
>> Ignite is a distributed system; testing with one node is really not the way.
>>
>> You need to consider having multiple nodes and partition and collocate your
>> data beforehand.
>>
>> Thanks,
>> C
>>
>> On 5 Feb 2018, at 16:36, Rajesh Kishore <rajesh10si...@gmail.com> wrote:
>>
>> Hi,
>>
>> We are in the process of evaluating Ignite native persistence against
>> Berkeley DB. For some reason the Ignite query does not seem to be as
>> performant as the application code is against Berkeley DB.
>>
>> Background:
>> Berkeley DB - As of now, we have Berkeley DB for our application and the
>> data is stored as name-value pairs as a byte stream in Berkeley DB's native
>> file system.
>>
>> Ignite DB - We are using Ignite's native persistence file system.
>> Created appropriate indexes and retrieving data using SQL involving
>> multiple joins.
>>
>> Ignite configuration: native persistence enabled, only one node.
>>
>> Data: As of now in the main table we have only *.1 M records* and in
>> supporting tables we have around 2 million records.
>>
>> Ignite SQL query used:
>>
>> SELECT f.entryID,f.attrName,f.attrValue, f.attrsType FROM
>>     ( select st.entryID,st.attrName,st.attrValue, st.attrsType from
>>         (SELECT at1.entryID FROM "objectclass".Ignite_ObjectClass
>>             at1 WHERE at1.attrValue= ?
) t >> INNER JOIN >> "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE st ON >> st.entryID = t.entryID WHERE st.attrKind IN ('u','o') >> ) f >>INNER JOIN (SELECT entryID from "dn".Ignite_DN where parentDN like ? >> ) dnt ON f.entryID = dnt.entry >> >> The corresponding EXPLAIN PLAN >> >> >> >> [[SELECT >> F__Z3.ENTRYID AS __C0_0, >> F__Z3.ATTRNAME AS __C0_1, >> F__Z3.ATTRVALUE AS __C0_2, >> F__Z3.ATTRSTYPE AS __C0_3 >> FROM ( >> SELECT >> ST__Z2.ENTRYID, >> ST__Z2.ATTRNAME, >> ST__Z2.ATTRVALUE, >> S
Re: slow query performance against berkley db
Any pointers please Thanks Rajesh On 5 Feb 2018 10:53 p.m., "Rajesh Kishore" <rajesh10si...@gmail.com> wrote: > Hi Christos > > Does that mean Ignite cannot scale well against Berkley dB Incase of > single node? > > Regards > Rajesh > > On 5 Feb 2018 10:08 p.m., "Christos Erotocritou" <chris...@gridgain.com> > wrote: > >> Hi Rajesh, >> >> Ignite is a distributed system, testing with one node is really not the >> way. >> >> You need to consider having multiple nodes and portion and collocate your >> data before. >> >> Thanks, >> C >> >> On 5 Feb 2018, at 16:36, Rajesh Kishore <rajesh10si...@gmail.com> wrote: >> >> Hi, >> >> We are in the process of evaluating Ignite native persistence against >> berkely db. For some reason Ignite query does not seem to be performant the >> way application code behaves against berkley db >> >> Background: >> Berkley db - As of now, we have berkley db for our application and the >> data is stored as name value pair as byte stream in the berkley db's native >> file system. >> >> Ignite DB - We are using Ignite DB's native persistence file system. >> Created appropriate index and retrieving data using SQL involving multiple >> joins. >> >> Ignite configuration : with native persistence enabled , only one node >> >> Data: As of now in the main table we have only *.1 M records *and in >> supporting tables we have around 2 million records >> >> Ignite sql query used >> >> SELECT f.entryID,f.attrName,f.attrValue, f.attrsType FROM >> ( select st.entryID,st.attrName,st.attrValue, st.attrsType from >> (SELECT at1.entryID FROM "objectclass".Ignite_ObjectClass >> at1 WHERE at1.attrValue= ? ) t >> INNER JOIN >> "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE st ON >> st.entryID = t.entryID WHERE st.attrKind IN ('u','o') >> ) f >>INNER JOIN (SELECT entryID from "dn".Ignite_DN where parentDN like ? 
>> ) dnt ON f.entryID = dnt.entry >> >> The corresponding EXPLAIN PLAN >> >> >> >> [[SELECT >> F__Z3.ENTRYID AS __C0_0, >> F__Z3.ATTRNAME AS __C0_1, >> F__Z3.ATTRVALUE AS __C0_2, >> F__Z3.ATTRSTYPE AS __C0_3 >> FROM ( >> SELECT >> ST__Z2.ENTRYID, >> ST__Z2.ATTRNAME, >> ST__Z2.ATTRVALUE, >> ST__Z2.ATTRSTYPE >> FROM ( >> SELECT >> AT1__Z0.ENTRYID >> FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z0 >> WHERE AT1__Z0.ATTRVALUE = ?1 >> ) T__Z1 >> INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE ST__Z2 >> ON 1=1 >> WHERE (ST__Z2.ATTRKIND IN('u', 'o')) >> AND (ST__Z2.ENTRYID = T__Z1.ENTRYID) >> ) F__Z3 >> /* SELECT >> ST__Z2.ENTRYID, >> ST__Z2.ATTRNAME, >> ST__Z2.ATTRVALUE, >> ST__Z2.ATTRSTYPE >> FROM ( >> SELECT >> AT1__Z0.ENTRYID >> FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z0 >> WHERE AT1__Z0.ATTRVALUE = ?1 >> ) T__Z1 >> /++ SELECT >> AT1__Z0.ENTRYID >> FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z0 >> /++ "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE = >> ?1 ++/ >> WHERE AT1__Z0.ATTRVALUE = ?1 >> ++/ >> INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE ST__Z2 >> /++ "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE_ENTRYID_IDX: >> ENTRYID = T__Z1.ENTRYID ++/ >> ON 1=1 >> WHERE (ST__Z2.ATTRKIND IN('u', 'o')) >> AND (ST__Z2.ENTRYID = T__Z1.ENTRYID) >> */ >> INNER JOIN ( >> SELECT >> __Z4.ENTRYID >> FROM "dn".IGNITE_DN __Z4 >> WHERE __Z4.PARENTDN LIKE ?2 >> ) DNT__Z5 >> /* SELECT >> __Z4.ENTRYID >> FROM "dn".IGNITE_DN __Z4 >> /++ "dn".EP_DN_IDX: ENTRYID IS ?3 ++/ >> WHERE (__Z4.ENTRYID IS ?3) >> AND (__Z4.PARENTDN LIKE ?2): ENTRYID = F__Z3.ENTRYID >> */ >> ON 1=1 >> WHERE F__Z3.ENTRYID = DNT__Z5.ENTRYID >> ORDER BY 1], [SELECT >> __C0_0 AS ENTRYID, >> __C0_1 AS ATTRNAME, >> __C0_2 AS ATTRVALUE, >> __C0_3 AS ATTRSTYPE >> FROM PUBLIC.__T0 >> /* "Ignite_DSAttributeStore"."merge_sorted" */ >> ORDER BY 1 >> /* index sorted */]] >> >> >> Any pointers , how should I proceed , Following is the JFR report for the >> code used >> cursor = 
>> cache.query(new SqlFieldsQuery(query).setEnforceJoinOrder(true));
>> cursor.getAll();
>>
>> Thanks,
>> Rajesh
Re: slow query performance against berkley db
Hi Christos,

Does that mean Ignite cannot scale well against Berkeley DB in case of a single node?

Regards
Rajesh

On 5 Feb 2018 10:08 p.m., "Christos Erotocritou" <chris...@gridgain.com> wrote:
> Hi Rajesh,
>
> Ignite is a distributed system; testing with one node is really not the way.
>
> You need to consider having multiple nodes and partition and collocate your
> data beforehand.
>
> Thanks,
> C
>
> On 5 Feb 2018, at 16:36, Rajesh Kishore <rajesh10si...@gmail.com> wrote:
>
> Hi,
>
> We are in the process of evaluating Ignite native persistence against
> Berkeley DB. For some reason the Ignite query does not seem to be as
> performant as the application code is against Berkeley DB.
>
> Background:
> Berkeley DB - As of now, we have Berkeley DB for our application and the
> data is stored as name-value pairs as a byte stream in Berkeley DB's native
> file system.
>
> Ignite DB - We are using Ignite's native persistence file system.
> Created appropriate indexes and retrieving data using SQL involving
> multiple joins.
>
> Ignite configuration: native persistence enabled, only one node.
>
> Data: As of now in the main table we have only *.1 M records* and in
> supporting tables we have around 2 million records.
>
> Ignite SQL query used:
>
> SELECT f.entryID,f.attrName,f.attrValue, f.attrsType FROM
>     ( select st.entryID,st.attrName,st.attrValue, st.attrsType from
>         (SELECT at1.entryID FROM "objectclass".Ignite_ObjectClass at1
>             WHERE at1.attrValue= ? ) t
>         INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE st
>             ON st.entryID = t.entryID WHERE st.attrKind IN ('u','o')
>     ) f
>    INNER JOIN (SELECT entryID from "dn".Ignite_DN where parentDN like ?
) > dnt ON f.entryID = dnt.entry
>
> Thanks,
> Rajesh
slow query performance against berkley db
Hi,

We are in the process of evaluating Ignite native persistence against Berkeley DB. For some reason the Ignite query does not seem to be as performant as the application code is against Berkeley DB.

Background:
Berkeley DB - As of now, we have Berkeley DB for our application, and the data is stored as name-value pairs as a byte stream in Berkeley DB's native file system.
Ignite DB - We are using Ignite's native persistence file system. We created the appropriate indexes and retrieve data using SQL involving multiple joins.

Ignite configuration: native persistence enabled, only one node.

Data: As of now the main table has only 0.1 M records, and the supporting tables have around 2 million records.

Ignite SQL query used:

SELECT f.entryID, f.attrName, f.attrValue, f.attrsType
FROM (
    SELECT st.entryID, st.attrName, st.attrValue, st.attrsType
    FROM (
        SELECT at1.entryID
        FROM "objectclass".Ignite_ObjectClass at1
        WHERE at1.attrValue = ?
    ) t
    INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE st
        ON st.entryID = t.entryID
    WHERE st.attrKind IN ('u','o')
) f
INNER JOIN (
    SELECT entryID FROM "dn".Ignite_DN WHERE parentDN LIKE ?
) dnt ON f.entryID = dnt.entry

The corresponding EXPLAIN PLAN:

[[SELECT
    F__Z3.ENTRYID AS __C0_0,
    F__Z3.ATTRNAME AS __C0_1,
    F__Z3.ATTRVALUE AS __C0_2,
    F__Z3.ATTRSTYPE AS __C0_3
FROM (
    SELECT
        ST__Z2.ENTRYID,
        ST__Z2.ATTRNAME,
        ST__Z2.ATTRVALUE,
        ST__Z2.ATTRSTYPE
    FROM (
        SELECT AT1__Z0.ENTRYID
        FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z0
        WHERE AT1__Z0.ATTRVALUE = ?1
    ) T__Z1
    INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE ST__Z2
        ON 1=1
    WHERE (ST__Z2.ATTRKIND IN('u', 'o'))
        AND (ST__Z2.ENTRYID = T__Z1.ENTRYID)
) F__Z3
    /* SELECT
        ST__Z2.ENTRYID,
        ST__Z2.ATTRNAME,
        ST__Z2.ATTRVALUE,
        ST__Z2.ATTRSTYPE
    FROM (
        SELECT AT1__Z0.ENTRYID
        FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z0
        WHERE AT1__Z0.ATTRVALUE = ?1
    ) T__Z1
        /++ SELECT AT1__Z0.ENTRYID
        FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z0
            /++ "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE = ?1 ++/
        WHERE AT1__Z0.ATTRVALUE = ?1
        ++/
    INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE ST__Z2
        /++ "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE_ENTRYID_IDX:
            ENTRYID = T__Z1.ENTRYID ++/
        ON 1=1
    WHERE (ST__Z2.ATTRKIND IN('u', 'o'))
        AND (ST__Z2.ENTRYID = T__Z1.ENTRYID)
    */
INNER JOIN (
    SELECT __Z4.ENTRYID
    FROM "dn".IGNITE_DN __Z4
    WHERE __Z4.PARENTDN LIKE ?2
) DNT__Z5
    /* SELECT __Z4.ENTRYID
    FROM "dn".IGNITE_DN __Z4
        /++ "dn".EP_DN_IDX: ENTRYID IS ?3 ++/
    WHERE (__Z4.ENTRYID IS ?3)
        AND (__Z4.PARENTDN LIKE ?2): ENTRYID = F__Z3.ENTRYID
    */
    ON 1=1
WHERE F__Z3.ENTRYID = DNT__Z5.ENTRYID
ORDER BY 1], [SELECT
    __C0_0 AS ENTRYID,
    __C0_1 AS ATTRNAME,
    __C0_2 AS ATTRVALUE,
    __C0_3 AS ATTRSTYPE
FROM PUBLIC.__T0
    /* "Ignite_DSAttributeStore"."merge_sorted" */
ORDER BY 1
    /* index sorted */]]

Any pointers on how I should proceed? Following is the JFR report for the code used:

cursor = cache.query(new SqlFieldsQuery(query).setEnforceJoinOrder(true));
cursor.getAll();

Thanks,
Rajesh
[Resolved] Re: Issues with sub query IN clause
You rock Andrey, the trick worked. I tried the same via the API; including your response for everyone.

Original query:

SELECT st.entryID, st.attrName, st.attrValue, st.attrsType
FROM (
    SELECT entryID AS entryID
    FROM "objectclass".IGNITE_OBJECTCLASS
    WHERE attrValue = 'person'
) t
INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE st
    ON st.entryID = t.entryID

NOTE: "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE has an index defined on entryID. Following is the explain plan obtained; it scans the entire table:

SELECT
    ST.ENTRYID,
    ST.ATTRNAME,
    ST.ATTRVALUE,
    ST.ATTRSTYPE
FROM "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE ST
    /* "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE.__SCAN_ */
INNER JOIN (
    SELECT ENTRYID AS ENTRYID
    FROM "objectclass".IGNITE_OBJECTCLASS
    WHERE ATTRVALUE = 'person'
) T
    /* SELECT ENTRYID AS ENTRYID
    FROM "objectclass".IGNITE_OBJECTCLASS
        /++ "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE = 'person'
            AND ENTRYID IS ?1 ++/
    WHERE (ATTRVALUE = 'person') AND (ENTRYID IS ?1): ENTRYID = ST.ENTRYID
    */
    ON 1=1
WHERE ST.ENTRYID = T.ENTRYID
(1 row, 1 ms)

How should I approach this so that the "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE full scan can be avoided?

*Andrey suggested* -- Looks like the H2 optimizer changes the join order. Try to set enforceJoinOrder=true for the query if you use the Ignite API. If you use JDBC then the flag can be set on a per-connection basis.
--- *After suggestion:*

[[SELECT
    ST__Z2.ENTRYID AS __C0_0,
    ST__Z2.ATTRNAME AS __C0_1,
    ST__Z2.ATTRVALUE AS __C0_2,
    ST__Z2.ATTRSTYPE AS __C0_3
FROM (
    SELECT __Z0.ENTRYID AS ENTRYID
    FROM "objectclass".IGNITE_OBJECTCLASS __Z0
    WHERE __Z0.ATTRVALUE = 'person'
) T__Z1
    /* SELECT __Z0.ENTRYID AS ENTRYID
    FROM "objectclass".IGNITE_OBJECTCLASS __Z0
        /++ "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE = 'person' ++/
    WHERE __Z0.ATTRVALUE = 'person'
    */
INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE ST__Z2
    /* "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE_ENTRYID_IDX:
        ENTRYID = T__Z1.ENTRYID */
    ON 1=1
WHERE ST__Z2.ENTRYID = T__Z1.ENTRYID
ORDER BY 1], [SELECT
    __C0_0 AS ENTRYID,
    __C0_1 AS ATTRNAME,
    __C0_2 AS ATTRVALUE,
    __C0_3 AS ATTRSTYPE
FROM PUBLIC.__T0
    /* "dn"."merge_sorted" */
ORDER BY 1

Thanks,
Rajesh

On Sat, Feb 3, 2018 at 8:48 PM, Rajesh Kishore <rajesh10si...@gmail.com> wrote:

> Hey Andrey,
> Trying this with the H2 console as of now; any setting there?
>
> Thanks,
> Rajesh
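For reference, the per-connection JDBC flag mentioned above can be passed as a connection-string parameter. This is a sketch assuming Ignite's thin JDBC driver and a default local endpoint (host and port are illustrative):

```
jdbc:ignite:thin://127.0.0.1/?enforceJoinOrder=true
```

Every statement executed over such a connection is then planned with the join order fixed to the order written in the query, which is the same effect as `SqlFieldsQuery#setEnforceJoinOrder(true)` in the Java API.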
Re: Issues with sub query IN clause
Hi Andrey, everyone,

I am trying to figure out how my query can use an index instead of a full scan. I am trying to run the following query:

SELECT st.entryID, st.attrName, st.attrValue, st.attrsType
FROM (
    SELECT entryID AS entryID
    FROM "objectclass".IGNITE_OBJECTCLASS
    WHERE attrValue = 'person'
) t
INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE st
    ON st.entryID = t.entryID

NOTE: "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE has an index defined on entryID. Following is the explain plan obtained; it scans the entire table:

SELECT
    ST.ENTRYID,
    ST.ATTRNAME,
    ST.ATTRVALUE,
    ST.ATTRSTYPE
FROM "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE ST
    /* "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE.__SCAN_ */
INNER JOIN (
    SELECT ENTRYID AS ENTRYID
    FROM "objectclass".IGNITE_OBJECTCLASS
    WHERE ATTRVALUE = 'person'
) T
    /* SELECT ENTRYID AS ENTRYID
    FROM "objectclass".IGNITE_OBJECTCLASS
        /++ "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE = 'person'
            AND ENTRYID IS ?1 ++/
    WHERE (ATTRVALUE = 'person') AND (ENTRYID IS ?1): ENTRYID = ST.ENTRYID
    */
    ON 1=1
WHERE ST.ENTRYID = T.ENTRYID
(1 row, 1 ms)

How should I approach this so that the "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE full scan can be avoided?

Thanks in advance.
-Rajesh

On Fri, Feb 2, 2018 at 11:10 PM, Rajesh Kishore <rajesh10si...@gmail.com> wrote:

> My bad still I am using the IN clause with variable parameter.
>
> I am exploring other queries.
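Besides forcing the join order, the derived table can be avoided altogether by joining the two tables directly. This is only a sketch of the equivalent join, not the exact query used later in this thread; with the join order enforced it lets the `ATTRVALUE` index drive the outer table and the `ENTRYID` index serve the lookup side:

```sql
-- Equivalent to the subquery form above, written as a plain join.
SELECT st.entryID, st.attrName, st.attrValue, st.attrsType
FROM "objectclass".IGNITE_OBJECTCLASS t
INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE st
    ON st.entryID = t.entryID
WHERE t.attrValue = 'person';
```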
> Thanks a ton > Rajesh > > > On 2 Feb 2018 10:41 p.m., "Rajesh Kishore" <rajesh10si...@gmail.com> > wrote: > >> Hi Andrey, >> >> This query remains stuck >> "EXPLAIN SELECT store.entryID,store.attrName,store.attrValue, >> store.attrsType FROM \"dn\".Ignite_DN dn, >> \"Ignite_DSAttributeStore\".Ignite_DSAttributeStore >> store WHERE dn.entryID in (" >> +"(SELECT at1.entryID FROM \"objectclass\".Ignite_ObjectClass >> at1 WHERE at1.attrValue = 'subentry' )" >> +"UNION " >> +"(SELECT at1.entryID FROM \"objectclass\".Ignite_ObjectClass >> at1 WHERE at1.attrValue = 'ldapsubentry' )" >> +")" >> +" AND ( dn.parentDN like 'dc=ignite,%')" >> +"AND dn.entryID = store.entryID AND store.attrKind IN ('u','o') order >> by store.entryID"; >> >> >> The corresponding explain plan is >> >> [[SELECT >> STORE__Z1.ENTRYID AS __C0_0, >> STORE__Z1.ATTRNAME AS __C0_1, >> STORE__Z1.ATTRVALUE AS __C0_2, >> STORE__Z1.ATTRSTYPE AS __C0_3 >> FROM "dn".IGNITE_DN DN__Z0 >> /* "dn".RP_DN_IDX: PARENTDN >= 'dc=ignite,' >> AND PARENTDN < 'dc=ignite-' >> */ >> /* WHERE (DN__Z0.PARENTDN LIKE 'dc=ignite,%') >> AND (DN__Z0.ENTRYID IN( >> (SELECT >> AT1__Z2.ENTRYID >> FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z2 >> /++ "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE = >> 'subentry' ++/ >> WHERE AT1__Z2.ATTRVALUE = 'subentry') >> UNION >> (SELECT >> AT1__Z3.ENTRYID >> FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z3 >> /++ "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE = >> 'ldapsubentry' ++/ >> WHERE AT1__Z3.ATTRVALUE = 'ldapsubentry'))) >> */ >> INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE STORE__Z1 >> /* "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE_ENTRYID_IDX: >> ENTRYID = DN__Z0.ENTRYID */ >> ON 1=1 >> WHERE (STORE__Z1.ATTRKIND IN('u', 'o')) >> AND ((DN__Z0.ENTRYID = STORE__Z1.ENTRYID) >> AND ((DN__Z0.PARENTDN LIKE 'dc=ignite,%') >> AND (DN__Z0.ENTRYID IN( >> (SELECT >> AT1__Z2.ENTRYID >> FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z2 >> /* 
"objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE = >> 'subentry' */ >> WHERE AT1__Z2.ATTRVALUE = 'subentry') >> UNION >> (SELECT >> AT1__Z3.ENTRYID >> FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z3 >> /* "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE = >> 'ldapsubentry' */ >> WHERE AT1__Z3.ATTRVALUE = 'ldapsubentry') >> ORDER BY 1], [SELECT >> __C0_0 AS
Re: Issues with sub query IN clause
My bad still I am using the IN clause with variable parameter. I am exploring other queries. Thanks a ton Rajesh On 2 Feb 2018 10:41 p.m., "Rajesh Kishore" <rajesh10si...@gmail.com> wrote: > Hi Andrey, > > This query remains stuck > "EXPLAIN SELECT store.entryID,store.attrName,store.attrValue, > store.attrsType FROM \"dn\".Ignite_DN dn, > \"Ignite_DSAttributeStore\".Ignite_DSAttributeStore > store WHERE dn.entryID in (" > +"(SELECT at1.entryID FROM \"objectclass\".Ignite_ObjectClass at1 > WHERE at1.attrValue = 'subentry' )" > +"UNION " > +"(SELECT at1.entryID FROM \"objectclass\".Ignite_ObjectClass at1 > WHERE at1.attrValue = 'ldapsubentry' )" > +")" > +" AND ( dn.parentDN like 'dc=ignite,%')" > +"AND dn.entryID = store.entryID AND store.attrKind IN ('u','o') order > by store.entryID"; > > > The corresponding explain plan is > > [[SELECT > STORE__Z1.ENTRYID AS __C0_0, > STORE__Z1.ATTRNAME AS __C0_1, > STORE__Z1.ATTRVALUE AS __C0_2, > STORE__Z1.ATTRSTYPE AS __C0_3 > FROM "dn".IGNITE_DN DN__Z0 > /* "dn".RP_DN_IDX: PARENTDN >= 'dc=ignite,' > AND PARENTDN < 'dc=ignite-' > */ > /* WHERE (DN__Z0.PARENTDN LIKE 'dc=ignite,%') > AND (DN__Z0.ENTRYID IN( > (SELECT > AT1__Z2.ENTRYID > FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z2 > /++ "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE = > 'subentry' ++/ > WHERE AT1__Z2.ATTRVALUE = 'subentry') > UNION > (SELECT > AT1__Z3.ENTRYID > FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z3 > /++ "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE = > 'ldapsubentry' ++/ > WHERE AT1__Z3.ATTRVALUE = 'ldapsubentry'))) > */ > INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE STORE__Z1 > /* "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE_ENTRYID_IDX: > ENTRYID = DN__Z0.ENTRYID */ > ON 1=1 > WHERE (STORE__Z1.ATTRKIND IN('u', 'o')) > AND ((DN__Z0.ENTRYID = STORE__Z1.ENTRYID) > AND ((DN__Z0.PARENTDN LIKE 'dc=ignite,%') > AND (DN__Z0.ENTRYID IN( > (SELECT > AT1__Z2.ENTRYID > FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z2 > /* 
"objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE = > 'subentry' */ > WHERE AT1__Z2.ATTRVALUE = 'subentry') > UNION > (SELECT > AT1__Z3.ENTRYID > FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z3 > /* "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE = > 'ldapsubentry' */ > WHERE AT1__Z3.ATTRVALUE = 'ldapsubentry') > ORDER BY 1], [SELECT > __C0_0 AS ENTRYID, > __C0_1 AS ATTRNAME, > __C0_2 AS ATTRVALUE, > __C0_3 AS ATTRSTYPE > FROM PUBLIC.__T0 > /* "dn"."merge_sorted" */ > ORDER BY 1 > /* index sorted */]] > > > Note that the subquery has no record > +"(SELECT at1.entryID FROM \"objectclass\".Ignite_ObjectClass at1 > WHERE at1.attrValue = 'subentry' )" > +"UNION " > +"(SELECT at1.entryID FROM \"objectclass\".Ignite_ObjectClass at1 > WHERE at1.attrValue = 'ldapsubentry' )" > > > Any pointers on this ? > > -Rajesh > > > > > > On Fri, Feb 2, 2018 at 10:26 PM, Rajesh Kishore <rajesh10si...@gmail.com> > wrote: > >> Hey Andrey, >> >> Now , I am getting the result within 3 mins, need to analyze why its >> slower , probably I have to brushup my sql and indexing skills >> this is my explain plan for new query >> >> >> [[SELECT >> ST__Z0.ENTRYID AS __C0_0, >> ST__Z0.ATTRNAME AS __C0_1, >> ST__Z0.ATTRVALUE AS __C0_2, >> ST__Z0.ATTRSTYPE AS __C0_3 >> FROM "dn".IGNITE_DN DN__Z1 >> /* "dn".RP_DN_IDX: PARENTDN >= 'dc=ignite,' >> AND PARENTDN < 'dc=ignite-' >> */ >> /* WHERE DN__Z1.PARENTDN LIKE 'dc=ignite,%' >> */ >> INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE ST__Z0 >> /* "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE_ENTRYID_IDX: >> ENTRYID = DN__Z1.ENTRYID */ >> ON 1=1 >> /* WHERE (ST__Z0.ATTRKIND IN('u', 'o')) >> AND (DN__Z1.ENTRYID = ST__Z0.ENTRYID) >> */ >> INNER JOIN "objectclass".IGNITE_OBJECTCLASS A
Re: Issues with sub query IN clause
Thanks Dmitriy,

The EXPLAIN PLAN:

[[SELECT
    STORE__Z1.ENTRYID AS __C0_0,
    STORE__Z1.ATTRNAME AS __C0_1,
    STORE__Z1.ATTRVALUE AS __C0_2,
    STORE__Z1.ATTRSTYPE AS __C0_3
FROM "dn".IGNITE_DN DN__Z0
    /* "dn".IGNITE_DN.__SCAN_ */
    /* WHERE ((DN__Z0.PARENTDN LIKE 'dc=ignite,%')
        OR ((DN__Z0.RDN = 'dc=ignite') AND (DN__Z0.PARENTDN = ',')))
        AND (DN__Z0.ENTRYID IN(
            (SELECT AT1__Z2.ENTRYID
            FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z2
                /++ "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE = 'subentry' ++/
            WHERE AT1__Z2.ATTRVALUE = 'subentry')
            UNION
            (SELECT AT1__Z3.ENTRYID
            FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z3
                /++ "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE = 'ldapsubentry' ++/
            WHERE AT1__Z3.ATTRVALUE = 'ldapsubentry')))
    */
INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE STORE__Z1
    /* "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE_ENTRYID_IDX:
        ENTRYID = DN__Z0.ENTRYID */
    ON 1=1
WHERE (STORE__Z1.ATTRKIND IN('u', 'o'))
    AND ((DN__Z0.ENTRYID = STORE__Z1.ENTRYID)
    AND (((DN__Z0.PARENTDN LIKE 'dc=ignite,%')
        OR ((DN__Z0.RDN = 'dc=ignite') AND (DN__Z0.PARENTDN = ',')))
    AND (DN__Z0.ENTRYID IN(
        (SELECT AT1__Z2.ENTRYID
        FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z2
            /* "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE = 'subentry' */
        WHERE AT1__Z2.ATTRVALUE = 'subentry')
        UNION
        (SELECT AT1__Z3.ENTRYID
        FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z3
            /* "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE = 'ldapsubentry' */
        WHERE AT1__Z3.ATTRVALUE = 'ldapsubentry')
ORDER BY 1], [SELECT
    __C0_0 AS ENTRYID,
    __C0_1 AS ATTRNAME,
    __C0_2 AS ATTRVALUE,
    __C0_3 AS ATTRSTYPE
FROM PUBLIC.__T0
    /* "Ignite_DSAttributeStore"."merge_sorted" */
ORDER BY 1
    /* index sorted */]]

Thanks
-Rajesh

On Fri, Feb 2, 2018 at 5:32 AM, Dmitriy Setrakyan <dsetrak...@apache.org> wrote:

> Rajesh, can you please show your query here together with execution plan?
>
> D.
> > > On Thu, Feb 1, 2018 at 8:36 AM, Rajesh Kishore <rajesh10si...@gmail.com> > wrote: > >> Hi Andrey >> Thanks for your response. >> I am using native ignite persistence, saving data locally and as of now I >> don't have distributed cache, having only one node. >> >> By looking at the doc, it does not look like affinity key is applicable >> here. >> >> Pls suggest. >> >> Thanks Rajesh >> >> On 1 Feb 2018 6:27 p.m., "Andrey Mashenkov" <andrey.mashen...@gmail.com> >> wrote: >> >>> Hi Rajesh, >>> >>> >>> Possibly, you data is not collocated and subquery return less retults as >>> it executes locally. >>> Try to rewrite IN into JOIN and check if query with >>> query#setDistributedJoins(true) will return expected result. >>> >>> It is recommended >>> 1. replace IN with JOIN due to performance issues [1]. >>> 2. use data collocation [2] if possible rather than turning on >>> distributed joins. >>> >>> [1] https://apacheignite-sql.readme.io/docs/performance-and- >>> debugging#section-sql-performance-and-usability-considerations >>> [2] https://apacheignite.readme.io/docs/affinity-collocation >>> #section-collocate-data-with-data >>> >>> On Thu, Feb 1, 2018 at 3:44 PM, Rajesh Kishore <rajesh10si...@gmail.com> >>> wrote: >>> >>>> Hi All, >>>> >>>> As of now, we have less than 1 M records , and attribute split into >>>> few(3) tables >>>> with index created. >>>> We are using combination of join & IN clause(sub query) in the SQL >>>> query , for some reason this query does not return any response. >>>> But, the moment we remove the IN clause and use just the join, the >>>> query returns the result. >>>> Note that as per EXPLAIN PLAN , the sub query also seems to be using >>>> the defined >>>> indexes. >>>> >>>> What are the recommendations for using such queries , are there any >>>> guidelines, What we are doing wrong here? >>>> >>>> Thanks, >>>> Rajesh >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>> >>> >>> -- >>> Best regards, >>> Andrey V. Mashenkov >>> >> > package org.kishore.
Re: Issues with sub query IN clause
Hi Andrey,

Thanks for your response. I am using native Ignite persistence, saving data locally, and as of now I don't have a distributed cache, having only one node.

By looking at the doc, it does not look like an affinity key is applicable here.

Pls suggest.

Thanks Rajesh

On 1 Feb 2018 6:27 p.m., "Andrey Mashenkov" <andrey.mashen...@gmail.com> wrote:

> Hi Rajesh,
>
> Possibly, your data is not collocated and the subquery returns fewer results as
> it executes locally.
> Try to rewrite IN into JOIN and check if the query with
> query#setDistributedJoins(true) will return the expected result.
>
> It is recommended to
> 1. replace IN with JOIN due to performance issues [1].
> 2. use data collocation [2] if possible rather than turning on distributed
> joins.
>
> [1] https://apacheignite-sql.readme.io/docs/performance-
> and-debugging#section-sql-performance-and-usability-considerations
> [2] https://apacheignite.readme.io/docs/affinity-
> collocation#section-collocate-data-with-data
>
> On Thu, Feb 1, 2018 at 3:44 PM, Rajesh Kishore <rajesh10si...@gmail.com>
> wrote:
>
>> Hi All,
>>
>> As of now, we have less than 1 M records, and the attributes are split into
>> a few (3) tables with indexes created.
>> We are using a combination of JOIN & IN clause (sub query) in the SQL query;
>> for some reason this query does not return any response.
>> But the moment we remove the IN clause and use just the join, the query
>> returns the result.
>> Note that as per EXPLAIN PLAN, the sub query also seems to be using the
>> defined indexes.
>>
>> What are the recommendations for using such queries, are there any
>> guidelines? What are we doing wrong here?
>>
>> Thanks,
>> Rajesh
>
> --
> Best regards,
> Andrey V. Mashenkov
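Applied to the query discussed in this thread, Andrey's first recommendation (replacing IN with a JOIN) might look like the following sketch; the table and column names are taken from the queries posted earlier, and the literal filter values are illustrative:

```sql
-- IN (subquery) rewritten as a join against the UNION of the two
-- objectclass lookups, so no correlated IN predicate remains.
SELECT store.entryID, store.attrName, store.attrValue, store.attrsType
FROM "dn".Ignite_DN dn
JOIN (
    SELECT at1.entryID FROM "objectclass".Ignite_ObjectClass at1
    WHERE at1.attrValue = 'subentry'
    UNION
    SELECT at1.entryID FROM "objectclass".Ignite_ObjectClass at1
    WHERE at1.attrValue = 'ldapsubentry'
) oc ON oc.entryID = dn.entryID
JOIN "Ignite_DSAttributeStore".Ignite_DSAttributeStore store
    ON store.entryID = dn.entryID
WHERE dn.parentDN LIKE 'dc=ignite,%'
  AND store.attrKind IN ('u', 'o')
ORDER BY store.entryID;
```

Note that UNION deduplicates the entryIDs, so the join cannot multiply rows the way a raw UNION ALL could.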
Issues with sub query IN clause
Hi All, As of now, we have less than 1 M records , and attribute split into few(3) tables with index created. We are using combination of join & IN clause(sub query) in the SQL query , for some reason this query does not return any response. But, the moment we remove the IN clause and use just the join, the query returns the result. Note that as per EXPLAIN PLAN , the sub query also seems to be using the defined indexes. What are the recommendations for using such queries , are there any guidelines, What we are doing wrong here? Thanks, Rajesh
RE: default location for ignite file db
Perfect Stan On 29 Jan 2018 7:23 p.m., "Stanislav Lukyanov" <stanlukya...@gmail.com> wrote: > Make sure you have persistence enabled as described at > https://apacheignite.readme.io/docs/distributed-persistent-store. > > Also, make sure you don’t already have a custom setting for the > StoragePath, WalPath or WalArchivePath. > > > > Making an environment variable visible to your process is OS-specific. On > Windows you need to specify it in the System Properties dialog, > > and on Unix you need to use `export` command (you often put the `export` > in your .bashrc). You should consult your system docs > > on doing so. > > > > If you don’t set IGNITE_HOME Ignite will try to find the installation > based on your current directory and class path, > > and if it fails then a temporary directory directory will be created in > the standard system location (e.g. /tmp on Linux) > > to store Ignite’s DB and other files such as logs. > > > > Stan > > > > *From: *Rajesh Kishore <rajesh10si...@gmail.com> > *Sent: *29 января 2018 г. 15:18 > *To: *user@ignite.apache.org > *Subject: *Re: default location for ignite file db > > > > Thanks Stan, > > I just extracted the zip , and I am starting the ignite server in embedded > mode. I dont see DB files getting created at the ${IGNITE_HOME}/work/db. > More specifically how embedded mode server would get to know about the env > variable > r ${IGNITE_HOME} , as in my classpath of application I am using only > required jars of ignite > > -Rajesh > > > > On Mon, Jan 29, 2018 at 5:41 PM, Stanislav Lukyanov < > stanlukya...@gmail.com> wrote: > > It will be under ${IGNITE_HOME}/work/db. > > Please refer to https://apacheignite.readme.io/docs/distributed- > persistent-store which covers this topic in details. > > > > Stan > > > > *From: *Rajesh Kishore <rajesh10si...@gmail.com> > *Sent: *29 января 2018 г. 15:04 > *To: *user@ignite.apache.org > *Subject: *default location for ignite file db > > > > Hi All. 
> > How do we determine the default location for ignite file based DB > > Thanks in advance > > -Rajesh > > > > > > >
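Stan's points above (enable persistence, optionally override StoragePath/WalPath) translate to configuration roughly like this sketch. This assumes the Ignite 2.x `DataStorageConfiguration` API; the storage path value is purely illustrative:

```java
IgniteConfiguration cfg = new IgniteConfiguration();

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
// Enable native persistence for the default data region.
storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
// Optional: pin the DB files to an explicit directory instead of the
// ${IGNITE_HOME}/work/db default (example path).
storageCfg.setStoragePath("/opt/ignite/db");

cfg.setDataStorageConfiguration(storageCfg);
Ignite ignite = Ignition.start(cfg);
// A persistence-enabled cluster starts inactive; activate it before use.
ignite.active(true);
```

With no IGNITE_HOME and no explicit storage path, the files end up in the temporary directory Stan describes, so setting one of the two is the easiest way to make the DB location predictable.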
Re: cache configuration without Key
Thanks Ilya, it answers my question.

On 29 Jan 2018 3:34 p.m., "ilya.kasnacheev" wrote:

> Hello Rajesh!
>
> In Ignite, a cache (and SQL table) maps not to one object but to two - key
> and value.
>
> Keys are unique per cache per cluster. They act as the primary key in SQL and
> as, well, keys in the cache API.
>
> So yes, you need to specify the key type (or a compound object) too.
>
> Regards,
> Ilya.
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: default location for ignite file db
Thanks Stan,

I just extracted the zip, and I am starting the Ignite server in embedded mode. I don't see DB files getting created at ${IGNITE_HOME}/work/db. More specifically, how would an embedded-mode server get to know about the env variable ${IGNITE_HOME}, as in my application's classpath I am using only the required jars of Ignite?

-Rajesh

On Mon, Jan 29, 2018 at 5:41 PM, Stanislav Lukyanov <stanlukya...@gmail.com> wrote:

> It will be under ${IGNITE_HOME}/work/db.
>
> Please refer to https://apacheignite.readme.io/docs/distributed-
> persistent-store which covers this topic in details.
>
> Stan
>
> *From: *Rajesh Kishore <rajesh10si...@gmail.com>
> *Sent: *29 January 2018, 15:04
> *To: *user@ignite.apache.org
> *Subject: *default location for ignite file db
>
> Hi All.
>
> How do we determine the default location for ignite file based DB
>
> Thanks in advance
>
> -Rajesh
Re: starting Ignite server in embedded mode
Thanks it perfectly answers my query Thanks, Rajesh On Mon, Jan 29, 2018 at 3:31 PM, Stanislav Lukyanov <stanlukya...@gmail.com> wrote: > Hi, > > > > You can start Ignite in the same JVM as your application by using > Ignition.start() method. You can start multiple Ignite instances, > > just create different IgniteConfiguration for them with different instance > names. > > > > IgniteConfiguration serverConfig = new IgniteConfiguration() > > .setIgniteInstanceName("my-server") > > .setDiscoverySpi(new TcpDiscoverySpi() > > .setIpFinder( > > new TcpDiscoveryVmIpFinder() > > .setAddresses(Collections.singleton(" > 127.0.0.1:47500..47502")) > > )); > > Ignite server = Ignition.start(serverConfig); > > > > IgniteConfiguration clientConfig = new IgniteConfiguration() > > .setIgniteInstanceName("my-client") > > .setDiscoverySpi(new TcpDiscoverySpi() > > .setIpFinder( > > new TcpDiscoveryVmIpFinder() > > .setAddresses(Collections.singleton(" > 127.0.0.1:47500..47502")) > > )) > > .setClientMode(true); > > Ignite client = Ignition.start(clientConfig); > > > > // ... client and server run together here ... > > > > // don't forget to close ignite instances when your application > finishes, or JVM won't be able to exit > > client.close(); > > server.close(); > > > > However, you generally shouldn't need two instances of ignite in the same > application. If you want your application to natively use Ignite and > > have server running alongside it, you can just make API calls on the > server instance, no need to create an additional client for that. > > > > Stan > > > > *From: *Rajesh Kishore <rajesh10si...@gmail.com> > *Sent: *29 января 2018 г. 6:26 > *To: *user@ignite.apache.org > *Subject: *starting Ignite server in embedded mode > > > > Hi All, > > We have requirement to manage Ignite server and client by the same > application in embedded mode. Whats the way forward? > > Thanks, > > Rajesh > > >
default location for ignite file db
Hi All, How do we determine the default location for the Ignite file-based DB? Thanks in advance -Rajesh
starting Ignite server in embedded mode
Hi All, We have a requirement to manage the Ignite server and client from the same application in embedded mode. What's the way forward? Thanks, Rajesh
cache configuration without Key
Hi All,

I have a requirement to define a cache class whose instances qualify only as values.

public class EntryIndexedAttributes {
    @QuerySqlField(orderedGroups = {
        @QuerySqlField.Group(name = "generic_attr_idx", order = 0),
        @QuerySqlField.Group(name = "generic_attrVal_idx", order = 0)})
    private Long entryID;

    @QuerySqlField(orderedGroups = {
        @QuerySqlField.Group(name = "generic_attr_idx", order = 1)})
    private String attrType;

    @QuerySqlField(orderedGroups = {
        @QuerySqlField.Group(name = "generic_attr_idx", order = 2),
        @QuerySqlField.Group(name = "generic_attrVal_idx", order = 1)})
    private String attrValue;
}

While defining the cache, it looks like I always need to specify the key type?

CacheConfiguration cacheCfg = new CacheConfiguration<>("EntryIndexedAttributes");

Thanks,
-Rajesh
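As Ilya's reply in this thread explains, the key type must always be given alongside the value type. A minimal sketch of what that looks like for the class above, assuming a `Long` key (the entryID, by analogy with the other threads here, is only an assumption):

```java
// Key and value types are both part of the cache's identity; the
// annotations on EntryIndexedAttributes (including its group indexes)
// are registered for SQL via setIndexedTypes.
CacheConfiguration<Long, EntryIndexedAttributes> cacheCfg =
    new CacheConfiguration<>("EntryIndexedAttributes");
cacheCfg.setIndexedTypes(Long.class, EntryIndexedAttributes.class);

IgniteCache<Long, EntryIndexedAttributes> cache =
    ignite.getOrCreateCache(cacheCfg);
```

If no field of the value naturally serves as a key, a synthetic key (e.g. a sequence-generated `Long`) is the usual workaround.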
Re: Ignite Index - unique or non unique
Any pointers please?

Thanks,
Rajesh

On Thu, Jan 25, 2018 at 10:07 AM, Rajesh Kishore <rajesh10si...@gmail.com> wrote:

> Hi All,
>
> Wanted to know: does Ignite support unique or non-unique indexes?
> I have a requirement to create a non-unique index on a field / group of fields.
> What's the way?
>
> Also, with the EXPLAIN plan we can get to know the index used for a
> query. Sometimes my log is not getting generated properly; any settings I
> need to do? I have enabled the finest level in java.util.logging.properties
> though.
>
> Appreciate the response.
>
> Thanks,
> Rajesh
Ignite Index - unique or non unique
Hi All,

Wanted to know: does Ignite support unique or non-unique indexes? I have a requirement to create a non-unique index on a field / group of fields. What's the way?

Also, with the EXPLAIN plan we can get to know the index used for a query. Sometimes my log is not getting generated properly; are there any settings I need to do? I have enabled the finest level in java.util.logging.properties though.

Appreciate the response.

Thanks,
Rajesh
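For what it's worth on the first question: Ignite's secondary SQL indexes are non-unique; only the cache key acts as a unique constraint. A composite (group) index like the one asked about can be declared either with the `@QuerySqlField.Group` annotations used elsewhere in this archive or with DDL. A sketch, with illustrative names:

```sql
-- Non-unique composite index over two queryable fields.
CREATE INDEX generic_attr_idx ON EntryIndexedAttributes (attrType, attrValue);
```

There is no `CREATE UNIQUE INDEX` equivalent for secondary fields; uniqueness beyond the key has to be enforced by the application.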
Re: query on BinaryObject index and table
Hi Denis,

This is my code:

CacheConfiguration<Long, BinaryObject> cacheCfg = new CacheConfiguration<>(ORG_CACHE);
cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
cacheCfg.setBackups(1);
cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
cacheCfg.setIndexedTypes(Long.class, BinaryObject.class);
IgniteCache<Long, BinaryObject> cache = ignite.getOrCreateCache(cacheCfg);

if (UPDATE) {
    System.out.println("Populating the cache...");
    try (IgniteDataStreamer<Long, BinaryObject> streamer = ignite.dataStreamer(ORG_CACHE)) {
        streamer.allowOverwrite(true);
        IgniteBinary binary = ignite.binary();
        BinaryObjectBuilder objBuilder = binary.builder(ORG_CACHE);
        for (long i = 0; i < 100; i++) {
            streamer.addData(i, objBuilder.setField("id", i)
                .setField("name", "organization-" + i).build());
            if (i > 0 && i % 100 == 0)
                System.out.println("Done: " + i);
        }
    }
}

IgniteCache<Long, BinaryObject> binaryCache = ignite.cache(ORG_CACHE).withKeepBinary();
BinaryObject binaryPerson = binaryCache.get(54L);
System.out.println("name " + binaryPerson.field("name"));

Not sure if I am missing some context here. If I have to use a SQL query, what table name should I specify? I did not create a table explicitly; do I need to do that? How would I create the index?

Thanks,
Rajesh

On Sun, Jan 21, 2018 at 12:25 PM, Denis Magda <dma...@apache.org> wrote:

> > On Jan 20, 2018, at 7:20 PM, Rajesh Kishore <rajesh10si...@gmail.com> wrote:
> >
> > Hi,
> >
> > I have requirement that my schema is not fixed , so I have to use the
> > BinaryObject approach instead of fixed POJO
> >
> > I am relying on OOTB file system persistence mechanism
> >
> > My questions are:
> > - How can I specify the indexes on BinaryObject?
> > https://apacheignite-sql.readme.io/docs/create-index > https://apacheignite-sql.readme.io/docs/schema-and-indexes > > > - If I have to use sql query for retrieving objects , what table name > should I specify, the one which is used for cache name does not work > > > > Was the table and its queryable fields/indexes created with CREATE TABLE > or Java annotations/QueryEntity? > > If the latter approach was taken then the table name corresponds to the > Java type name as shown in this doc: > https://apacheignite-sql.readme.io/docs/schema-and-indexes > > — > Denis > > > -Rajesh > >
query on BinaryObject index and table
Hi,

I have a requirement that my schema is not fixed, so I have to use the BinaryObject approach instead of a fixed POJO. I am relying on the OOTB file system persistence mechanism.

My questions are:
- How can I specify the indexes on a BinaryObject?
- If I have to use a SQL query for retrieving objects, what table name should I specify? The one which is used for the cache name does not work.

-Rajesh
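Following the schema-and-indexes doc Denis links in this thread, the usual way to declare queryable fields and indexes for a dynamic (BinaryObject) schema is a `QueryEntity` on the cache configuration instead of annotations. A sketch; the type name and fields are illustrative:

```java
CacheConfiguration<Long, BinaryObject> cacheCfg = new CacheConfiguration<>("orgCache");

QueryEntity qe = new QueryEntity();
qe.setKeyType(Long.class.getName());
// The value type name is a free-form name for binary objects; the SQL
// table name is derived from it, and the BinaryObjectBuilder must be
// created with the same name (ignite.binary().builder("Organization")).
qe.setValueType("Organization");

LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("id", Long.class.getName());
fields.put("name", String.class.getName());
qe.setFields(fields);

// Non-unique index on the "name" field.
qe.setIndexes(Collections.singletonList(new QueryIndex("name")));

cacheCfg.setQueryEntities(Collections.singletonList(qe));
IgniteCache<Long, BinaryObject> cache =
    ignite.getOrCreateCache(cacheCfg).withKeepBinary();
```

Queries then go against the `Organization` table, e.g. `SELECT id, name FROM Organization WHERE name = ?`, regardless of the cache name.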
index for like criteria
Hi All,

My query is about the substring index working for Ignite. I am using the data grid with default file system persistence enabled. Let us say I have an Organization POJO and an index defined on its field as below:

public class Organization {
    /** */
    private static final AtomicLong ID_GEN = new AtomicLong();

    /** Organization ID (indexed). */
    @QuerySqlField(index = true)
    private Long id;

    /** Organization name (indexed). */
    @QuerySqlField(index = true)
    private String name;
    ...
}

Let us say I have millions of records for this. If I execute the following query, will the index be applied on the name field? How does Ignite handle a "like"-based criterion like this? What are the aspects I need to take care of for scenarios like this?

QueryCursor<List<?>> cur = cache.query(
    new SqlFieldsQuery("select id, name from Organization where name like ?")
        .setArgs("organization-54%"));
System.out.println("SQL Result: " + cur.getAll());

Thanks in advance,
Rajesh
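A partial answer is visible in the EXPLAIN plans elsewhere in this archive: H2 (which backs Ignite SQL) rewrites a constant-prefix LIKE into an index range scan, e.g. `PARENTDN LIKE 'dc=ignite,%'` becomes `PARENTDN >= 'dc=ignite,' AND PARENTDN < 'dc=ignite-'`. A pattern with a leading wildcard cannot use the index and falls back to a scan. A sketch of how to verify this for the query above (the table and column come from the POJO in this message):

```sql
-- Constant prefix: the plan should show a range condition on the name index.
EXPLAIN SELECT id, name FROM Organization WHERE name LIKE 'organization-54%';

-- Leading wildcard: the plan should show a full scan instead.
EXPLAIN SELECT id, name FROM Organization WHERE name LIKE '%-54';
```

Note the prefix rewrite only applies when the pattern is a literal or becomes known at planning time; with a bound parameter the optimizer may not perform it, so testing with EXPLAIN on the actual form of the query is worthwhile.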
Re: how to create Index for binary or byte type value
Hi Val,

Would indices be possible on a binary data value? I don't think so. I think I will stick with the POJO for now.

I have queries on indices, and I am not relying on an external persistence system.
a) What are the different index types Ignite supports apart from equality? I.e., does it support substring queries like name=raj% or equivalent?
b) How are indices actually stored in Ignite? What is the performance overhead when it is a very large data set, say 20 million entries, each of size 5 KB?

-Rajesh

On Fri, Jan 19, 2018 at 9:21 AM, vkulichenko wrote:

> Rajesh,
>
> This actually sounds exactly like the binary format Ignite uses to store the
> data: https://apacheignite.readme.io/docs/binary-marshaller
>
> Doing this manually (i.e. explicitly saving some byte array and creating
> indexes over this array) would not be possible, but I don't think you really
> need it. Just work with POJOs or BinaryObject-s on application level and
> Ignite will do the rest for you.
>
> -Val
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: how to create Index for binary or byte type value
Yes, correct: the object is stored as a byte array. As my attribute names are not fixed, I want to have indexes on different attributes that are not directly visible as POJO instance variables.

Rajesh

On 19 Jan 2018 3:56 a.m., "vkulichenko" wrote:
> Hi Rajesh,
>
> It's not clear what you're trying to achieve. So are you going to store the
> attributes as a map or in serialized form as a byte array? And what exactly
> should be indexed?
>
> -Val
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
how to create Index for binary or byte type value
Hi All,

The problem statement is in the context of the In-Memory Data Grid with persistence enabled. I have a requirement/limitation to store the value of a class's object as a byte[], understood by the application in a certain format:

// key is the entry ID, value is the class's object serialized as byte[]
IgniteCache<Long, byte[]> cache = ignite.getOrCreateCache(cacheCfg);

Some of the instance variables of the object (in the form of a Map) hold multiple key-value pairs, e.g. one instance variable is Map<String, List<String>> userAttributes, which for one object instance has the values:

a - v1          // userAttributes.put("a", "v1")
b - v2,v3,v4    // userAttributes.put("b", "v2", "v3", "v4")
...

The question is: if I want to define an index on these values of userAttributes so that entries can be retrieved, the equivalent index would hold entry IDs such as:

(a = v1) - 100, 101, 201
(b = v4) - 201

How do I achieve this functionality? Any pointers or other suggestions?

Thanks,
Rajesh
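[Editor's note: the mapping described above, (attribute = value) -> set of entry IDs, is essentially an inverted index. Since the value is an opaque byte[] that Ignite cannot index directly, one option is to maintain such an index yourself, e.g. in a second cache keyed by "attr=value" strings. The lookup logic can be sketched in plain Java; the class and method names here are illustrative, not an Ignite API:]

```java
import java.util.*;

public class AttrIndexSketch {
    // Inverted index: "attr=value" -> sorted set of entry IDs.
    // In Ignite this map would live in its own cache, updated on every put.
    static final Map<String, Set<Long>> index = new HashMap<>();

    static void indexEntry(long entryId, Map<String, List<String>> attrs) {
        for (Map.Entry<String, List<String>> e : attrs.entrySet())
            for (String v : e.getValue())
                index.computeIfAbsent(e.getKey() + "=" + v, k -> new TreeSet<>())
                     .add(entryId);
    }

    static Set<Long> lookup(String attr, String value) {
        return index.getOrDefault(attr + "=" + value, Collections.emptySet());
    }

    public static void main(String[] args) {
        indexEntry(100L, Map.of("a", List.of("v1")));
        indexEntry(101L, Map.of("a", List.of("v1")));
        indexEntry(201L, Map.of("a", List.of("v1"), "b", List.of("v2", "v3", "v4")));

        System.out.println(lookup("a", "v1")); // [100, 101, 201]
        System.out.println(lookup("b", "v4")); // [201]
    }
}
```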
Re: usage of Apache Ignite as key-value DB
Hi Mikael,

Thanks a lot for your response; that gave me a fair understanding of this. I have some queries on indexes, so I am starting a new thread for that.

Thanks,
Rajesh

On Wed, Jan 17, 2018 at 6:33 PM, Mikael <mikael-arons...@telia.com> wrote:
> There are a number of ways to do persistence for the cache. You can enable
> native persistence for a memory region and assign a cache to that region:
> all cache entries will be stored on disk and the ones you use most will be
> cached in RAM. Another alternative is to add a 3rd party class to enable
> persistence; the cache will call your implementation to do the persistence
> (JDBC and Cassandra are built in), and you can do read-only or both read and
> write persistence.
>
> Your key/value objects can be created with SQL DDL queries or you can use
> POJOs.
>
> Native persistence information:
>
> https://apacheignite.readme.io/docs/distributed-persistent-store
>
> Do your own storage:
>
> https://apacheignite.readme.io/docs/data-loading
>
> https://apacheignite.readme.io/docs/3rd-party-stor
>
> You can do all CRUD operations using the cache API and/or SQL. If you use
> native persistence, SQL queries work on the entire cache even if items are
> not in RAM; if you use 3rd party persistence, SQL queries will only work on
> items in RAM.
>
> Hope that helps a little.
> Mikael
>
> On 2018-01-17 at 13:30, Rajesh Kishore wrote:
>
> Hello Mikael,
>
> Thanks a ton for your response. I got a decent understanding that for any
> operation I need to define a cache and the cache items can be persisted.
> - Does it mean all CRUD operations would be performed via cache operations?
> - Consider the case of Berkeley DB, where entities are stored locally in the
> file system and the entry containers are defined by the Berkeley DB APIs.
> How are entity containers created in Ignite? Is it driven by
> cacheName? Where are the entities stored?
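[Editor's note: enabling native persistence as Mikael describes looks roughly like the following against the Ignite 2.x Java API. This is a hedged sketch; region settings and activation details should be checked against the persistence docs linked above:]

```java
IgniteConfiguration cfg = new IgniteConfiguration();

// Enable disk persistence for the default data region: entries are
// written to disk, and hot entries stay cached in RAM.
DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
cfg.setDataStorageConfiguration(storageCfg);

Ignite ignite = Ignition.start(cfg);
ignite.cluster().active(true); // with persistence, the cluster starts inactive
```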
> Simply put, where would the records
> for "Person" & "Department" be stored, and how can that be configured?
>
> Thanks,
> Rajesh
>
> On Wed, Jan 17, 2018 at 5:37 PM, Mikael <mikael-arons...@telia.com> wrote:
>
>> There are lots of examples not using SQL; have a look at:
>>
>> https://apacheignite.readme.io/docs/jcache
>>
>> Ignite implements the JCache API, so just use get/put and so on.
>>
>>
>> On 2018-01-17 at 12:44, Rajesh Kishore wrote:
>>
>> This is very informative. Additionally, I want to use the key-value APIs
>> instead of the SQL APIs, which are the only ones shown in the example.
>> The idea is to ease my migration from Berkeley DB based code, where I
>> rely on key-value APIs to work with records in different DB containers.
>> What is the equivalent in Ignite, i.e. how do we represent different
>> entities, say Employee, in the local file system, and how do we insert
>> and retrieve records?
>>
>> Thanks
>> Rajesh
>>
>> On 17 Jan 2018 3:59 p.m., "Mikael" <mikael-arons...@telia.com> wrote:
>>
>>> You have to run an Ignite instance to use it (you can embed it in your
>>> application); you can't just use the key-value store on its own. A LOCAL
>>> cache would be the closest to a Berkeley DB store.
>>>
>>> Docs at: https://apacheignite.readme.io/docs/data-grid
>>>
>>>
>>> On 2018-01-17 at 11:05, Rajesh Kishore wrote:
>>>
>>>> Hi,
>>>>
>>>> I am a newbie to Apache Ignite. We are trying to explore Ignite as a
>>>> key-value DB to replace the existing Berkeley DB in our application.
>>>>
>>>> Currently, Berkeley DB is embedded in the application and DB container
>>>> operations are performed using Berkeley DB APIs; we would need similar
>>>> functionality from Ignite.
>>>>
>>>> The idea is to replace the Berkeley DB APIs with Ignite APIs to use
>>>> Ignite as a key-value DB.
>>>> I could not find any docs on using the Ignite libraries in an
>>>> application.
>>>>
>>>> Any pointers, please?
>>>>
>>>> Thanks & Regards,
>>>> Rajesh Kishore
Re: usage of Apache Ignite as key-value DB
Hello Mikael,

Thanks a ton for your response. I got a decent understanding that for any operation I need to define a cache and the cache items can be persisted.
- Does it mean all CRUD operations would be performed via cache operations?
- Consider the case of Berkeley DB, where entities are stored locally in the file system and the entry containers are defined by the Berkeley DB APIs. How are entity containers created in Ignite? Is it driven by cacheName? Where are the entities stored? Simply put, where would the records for "Person" & "Department" be stored, and how can that be configured?

Thanks,
Rajesh

On Wed, Jan 17, 2018 at 5:37 PM, Mikael <mikael-arons...@telia.com> wrote:
> There are lots of examples not using SQL; have a look at:
>
> https://apacheignite.readme.io/docs/jcache
>
> Ignite implements the JCache API, so just use get/put and so on.
>
>
> On 2018-01-17 at 12:44, Rajesh Kishore wrote:
>
> This is very informative. Additionally, I want to use the key-value APIs
> instead of the SQL APIs, which are the only ones shown in the example.
> The idea is to ease my migration from Berkeley DB based code, where I
> rely on key-value APIs to work with records in different DB containers.
> What is the equivalent in Ignite, i.e. how do we represent different
> entities, say Employee, in the local file system, and how do we insert
> and retrieve records?
>
> Thanks
> Rajesh
>
> On 17 Jan 2018 3:59 p.m., "Mikael" <mikael-arons...@telia.com> wrote:
>
>> You have to run an Ignite instance to use it (you can embed it in your
>> application); you can't just use the key-value store on its own. A LOCAL
>> cache would be the closest to a Berkeley DB store.
>>
>> Docs at: https://apacheignite.readme.io/docs/data-grid
>>
>>
>> On 2018-01-17 at 11:05, Rajesh Kishore wrote:
>>
>>> Hi,
>>>
>>> I am a newbie to Apache Ignite. We are trying to explore Ignite as a
>>> key-value DB to replace the existing Berkeley DB in our application.
>>>
>>> Currently, Berkeley DB is embedded in the application and DB container
>>> operations are performed using Berkeley DB APIs; we would need similar
>>> functionality from Ignite.
>>>
>>> The idea is to replace the Berkeley DB APIs with Ignite APIs to use
>>> Ignite as a key-value DB.
>>> I could not find any docs on using the Ignite libraries in an
>>> application.
>>>
>>> Any pointers, please?
>>>
>>> Thanks & Regards,
>>> Rajesh Kishore
Re: usage of Apache Ignite as key-value DB
This is very informative. Additionally, I want to use the key-value APIs instead of the SQL APIs, which are the only ones shown in the example. The idea is to ease my migration from Berkeley DB based code, where I rely on key-value APIs to work with records in different DB containers. What is the equivalent in Ignite, i.e. how do we represent different entities, say Employee, in the local file system, and how do we insert and retrieve records?

Thanks
Rajesh

On 17 Jan 2018 3:59 p.m., "Mikael" <mikael-arons...@telia.com> wrote:
> You have to run an Ignite instance to use it (you can embed it in your
> application); you can't just use the key-value store on its own. A LOCAL
> cache would be the closest to a Berkeley DB store.
>
> Docs at: https://apacheignite.readme.io/docs/data-grid
>
>
> On 2018-01-17 at 11:05, Rajesh Kishore wrote:
>
>> Hi,
>>
>> I am a newbie to Apache Ignite. We are trying to explore Ignite as a
>> key-value DB to replace the existing Berkeley DB in our application.
>>
>> Currently, Berkeley DB is embedded in the application and DB container
>> operations are performed using Berkeley DB APIs; we would need similar
>> functionality from Ignite.
>>
>> The idea is to replace the Berkeley DB APIs with Ignite APIs to use
>> Ignite as a key-value DB.
>> I could not find any docs on using the Ignite libraries in an application.
>>
>> Any pointers, please?
>>
>> Thanks & Regards,
>> Rajesh Kishore
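[Editor's note: a rough sketch of the key-value usage Mikael points to. The Employee class and the cache name are illustrative placeholders, not from the docs:]

```java
// Start an embedded Ignite node; a named cache plays the role of a
// Berkeley DB container for one entity type.
Ignite ignite = Ignition.start();
IgniteCache<Long, Employee> employees = ignite.getOrCreateCache("employee");

// Plain JCache-style key-value operations:
employees.put(1L, new Employee(1L, "Alice"));
Employee e = employees.get(1L);
employees.remove(1L);
```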
usage of Apache Ignite as key-value DB
Hi,

I am a newbie to Apache Ignite. We are trying to explore Ignite as a key-value DB to replace the existing Berkeley DB in our application.

Currently, Berkeley DB is embedded in the application and DB container operations are performed using Berkeley DB APIs; we would need similar functionality from Ignite.

The idea is to replace the Berkeley DB APIs with Ignite APIs to use Ignite as a key-value DB. I could not find any docs on using the Ignite libraries in an application.

Any pointers, please?

Thanks & Regards,
Rajesh Kishore