Re: Using ASTParser and TreeWalker for parsing SQL query
Hi Venu,

Glad to hear that this technique is working for you. A couple of general comments:

A) If you need code added to the Derby engine, the best approach is to file a JIRA issue and attach a patch file. That makes it easier for committers to evaluate your improvements and track changes to the codebase.

B) I see that you want to introduce a new NodeFilterSingletone class, whose purpose seems to be to mark the boundaries of SQL clauses. Keep an eye on https://issues.apache.org/jira/browse/DERBY-6434. I expect to propose a different solution to that problem in an upcoming patch; you may be able to use it.

C) It's probably best to move this discussion to derby-dev, which is a better forum for discussing changes to the Derby codebase.

One other comment inline...

On 1/9/14 6:47 PM, venu wrote:

Hi Rick, I tried your solution (regarding the AST parser) and it's working perfectly. I still have a few problems, and I would appreciate your advice on these issues.

1. We added a TreeWalker.java file in the java\engine\org\apache\derby\impl\sql\compile folder. While generating derby.jar the class file was generated, but it was not inserted into the jar file. How can I add TreeWalker.class to derby.jar? I added the class to the DBMSnodes.properties file, but the class file is still not found in the jar. Could you please tell me how to add it? Or, if possible, add this file to the source so that I can modify it.

Try adding the class to tools/jar/extraDBMSclasses.properties.

Hope this helps,
-Rick

2. I made some changes to the files below to solve the ORDER BY and GROUP BY issue. If you have some time, could you please look at this code? If you feel it is correct, could you please add it to the official version, or even the development version? If not, please advise me on where to change it, or on what strategy I should follow.
\java\engine\org\apache\derby\impl\sql\compile\ValueNodeList.java -- added the method below:

    public ArrayList getValueNodes() {
        ArrayList valueNode = new ArrayList();
        for (int index = 0; index < size(); index++) {
            valueNode.add((QueryTreeNode) elementAt(index));
        }
        return valueNode;
    }

\java\engine\org\apache\derby\impl\sql\compile\CastNode.java -- added the method below:

    public ValueNode getCastOperand() {
        return castOperand;
    }

\java\engine\org\apache\derby\impl\sql\compile\SelectNode.java -- modified the method below ("Added this line" marks the new lines):

    void acceptChildren(Visitor v) throws StandardException {
        super.acceptChildren(v);
        if (fromList != null) {
            NodeFilterSingletone.getInstance().setFromListState(true);   // Added this line
            fromList = (FromList) fromList.accept(v);
            NodeFilterSingletone.getInstance().setFromListState(false);  // Added this line
        }
        if (whereClause != null) {
            NodeFilterSingletone.getInstance().setWhereClauseState(true);   // Added this line
            whereClause = (ValueNode) whereClause.accept(v);
            NodeFilterSingletone.getInstance().setWhereClauseState(false);  // Added this line
        }
        if (wherePredicates != null) {
            wherePredicates = (PredicateList) wherePredicates.accept(v);
        }
        if (havingClause != null) {
            NodeFilterSingletone.getInstance().setHavingClauseState(true);   // Added this line
            havingClause = (ValueNode) havingClause.accept(v);
            NodeFilterSingletone.getInstance().setHavingClauseState(false);  // Added this line
        }

        // Visiting these clauses was added as part of DERBY-6263. A better fix might be
        // to fix the visitor rather than skip it.
        if (!(v instanceof HasCorrelatedCRsVisitor)) {
            if (selectSubquerys != null) {
                selectSubquerys = (SubqueryList) selectSubquerys.accept(v);
            }
            if (whereSubquerys != null) {
                whereSubquerys = (SubqueryList) whereSubquerys.accept(v);
            }
            if (groupByList != null) {
                NodeFilterSingletone.getInstance().setGroupbyState(true);   // Added this line
                groupByList = (GroupByList) groupByList.accept(v);
                NodeFilterSingletone.getInstance().setGroupbyState(false);  // Added this line
            }
            if (orderByLists[0] != null) {
                for (int i = 0; i < orderByLists.length; i++) {
                    NodeFilterSingletone.getInstance().setOrderbyState(true);   // Added this line
                    orderByLists[i] = (OrderByList) orderByLists[i].accept(v);
                    NodeFilterSingletone.getInstance().setOrderbyState(false);  // Added this line
                }
            }
            if (offset != null) {
                offset = (ValueNode) offset.accept(v);
            }
            if (fetchFirst != null) {
                fetchFirst = (ValueNode) fetchFirst.accept(v);
            }
        }
    }
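As an aside on the singleton approach: one thread-safe alternative is to keep the clause context inside the visitor itself, e.g. as a stack that is pushed before visiting a clause and popped afterwards, so concurrent statement compilations cannot interfere with each other. The sketch below is purely illustrative and does not use Derby's real Visitor interface; the Clause enum, ClauseTrackingVisitor class, and string "nodes" are all made up for the example:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class ClauseTrackingVisitor {
    // Hypothetical clause markers; Derby's real tree has no such enum.
    enum Clause { FROM, WHERE, GROUP_BY, ORDER_BY }

    private final Deque<Clause> context = new ArrayDeque<>();
    private final List<String> log = new ArrayList<>();

    // Push the clause, visit its subtree, then pop. Because the state
    // lives in the visitor instance (not a global singleton), two
    // compilations running in parallel each see only their own context.
    void visitClause(Clause clause, String node) {
        context.push(clause);
        try {
            visit(node);
        } finally {
            context.pop();
        }
    }

    // Stand-in for the real per-node visit() callback: records which
    // clause the current node was reached under.
    void visit(String node) {
        log.add(context.peek() + ":" + node);
    }

    List<String> getLog() { return log; }

    public static void main(String[] args) {
        ClauseTrackingVisitor v = new ClauseTrackingVisitor();
        v.visitClause(Clause.FROM, "T1");
        v.visitClause(Clause.WHERE, "T1.A = 1");
        System.out.println(v.getLog());
    }
}
```

Using a stack rather than boolean flags also handles nesting naturally, e.g. a subquery's FROM list encountered while walking an outer WHERE clause.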
Setting derby.storage.pageSize
Version: 10.10.1.1 (Embedded)

I have been trying to set derby.storage.pageSize on an index according to http://db.apache.org/derby/docs/10.10/ref/rrefproper40688.html, in the hope that increasing the page size will help with a large insert/update. The value doesn't seem to be changing. The relevant part of my script is:

    call SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY('derby.storage.pageSize', '16384');
    -- CREATE TABLE DataSource...
    CREATE INDEX dataSource_path_idx ON DataSource(connectorId, path);
    select p, syscs_util.syscs_get_database_property(p)
    from (values 'derby.storage.pageCacheSize', 'derby.storage.pageSize') props(p);
    call SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY('derby.storage.pageSize', '4096');
    select p, syscs_util.syscs_get_database_property(p)
    from (values 'derby.storage.pageCacheSize', 'derby.storage.pageSize') props(p);

Both selects give the same output:

    P                          |2
    ---------------------------------
    derby.storage.pageCacheSize|4000
    derby.storage.pageSize     |NULL

So I have (earlier in the script) changed the pageCacheSize property using SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY, but the pageSize does not change. Running

    select pagesize from table (syscs_diag.space_table('APP', 'DataSource')) s;

returns no rows.

1. Is there a way to check what the pageSize of an index is?
2. Is there some other way I should be setting the pageSize property?

Thanks,
Melissa
Can I run derby for a long time?
Can I run Derby for a long time with many inserts, deletes, and updates? Are memory and disk space reclaimed after deletions and updates? Any information would be appreciated. Thanks in advance.
Re: Setting derby.storage.pageSize
Check out the following to see the page size of tables and indexes: http://db.apache.org/derby/docs/10.10/ref/rrefsyscsdiagspacetable.html

On 1/9/2014 8:40 PM, Melissa Mifsud wrote:

1. Is there a way to check what the pageSize of an index is?
2. Is there some other way I should be setting the pageSize property?
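One detail worth checking with SYSCS_DIAG.SPACE_TABLE: Derby folds unquoted identifiers to upper case in its catalogs, and SPACE_TABLE matches its string arguments literally against the stored names. That would explain why space_table('APP', 'DataSource') returns no rows even though the table exists. Assuming the table was created with an unquoted name, a query along these lines should show one row per conglomerate (the table plus each index) with its page size:

```sql
-- Table name passed in upper case to match the catalog entry;
-- ISINDEX distinguishes the index conglomerates from the base table.
select conglomeratename, isindex, pagesize
from table (syscs_diag.space_table('APP', 'DATASOURCE')) s;
```

Also note that derby.storage.pageSize is a creation-time property: setting it affects only tables and indexes created after the call, not existing conglomerates.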
Re: Can I run derby for a long time?
Can I run Derby for a long time with many inserts, deletes, and updates? Are memory and disk space reclaimed after deletions and updates?

I have run Derby as the primary database for a mid-size build-automation application. The application ran 24 hours a day, 7 days a week, querying and updating the database from up to 100 concurrent clients. Derby was extremely reliable, and we ran it for many months at a time between restarts, with no resource-leak problems. As long as my application was careful to close all of its ResultSet, Statement, and Connection objects appropriately, we never detected any memory leaks. I did not monitor space reclamation *within* database tables, because that was not a critical issue for my particular usage pattern.

thanks,
bryan
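The "close everything appropriately" discipline Bryan describes is easiest to guarantee with Java 7's try-with-resources, which closes the ResultSet, Statement, and Connection in reverse order even if an exception is thrown mid-query. A minimal sketch using only the standard JDBC interfaces (the URL and SQL are placeholders supplied by the caller, not anything Derby-specific):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class QueryExample {
    // Counts the rows returned by an arbitrary query. All three JDBC
    // resources are declared in the try header, so they are closed
    // automatically (ResultSet first, Connection last) on every exit path.
    static int countRows(String url, String sql) throws SQLException {
        try (Connection c = DriverManager.getConnection(url);
             Statement s = c.createStatement();
             ResultSet rs = s.executeQuery(sql)) {
            int n = 0;
            while (rs.next()) {
                n++;
            }
            return n;
        }
    }
}
```

For an embedded Derby database the URL would look like "jdbc:derby:myDB", but the closing behavior shown here is the same for any JDBC driver.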