Re: missing file on Calcite?
SqlParserImpl.java is JavaCC-generated code, produced from parser.jj. You need to run "mvn install" to compile the project and generate the file. It ends up under calcite/core/target/generated-sources/javacc/org/apache/calcite/sql/parser/impl/SqlParserImpl.java. Hope it helps.

On Thu, Feb 15, 2018 at 2:39 PM, joy wrote:
> Hi,
>
> I am trying to use Calcite for my project, but found that in
> SqlParser.java one of the imported classes is missing in the Calcite
> source (apache-calcite-1.15.0-src, core module), in
> org.apache.calcite.sql.parser.SqlParser.java.
>
> The missing file is:
> org.apache.calcite.sql.parser.impl.SqlParserImpl
>
> Eclipse complains that "the import
> org.apache.calcite.sql.parser.impl.SqlParserImpl cannot be resolved".
>
> Should I comment this import line out? Or do I need this file to parse SQL?
>
> Thanks,
> Joy

--
"So you have to trust that the dots will somehow connect in your future."
missing file on Calcite?
Hi,

I am trying to use Calcite for my project, but found that in SqlParser.java one of the imported classes is missing in the Calcite source (apache-calcite-1.15.0-src, core module), in org.apache.calcite.sql.parser.SqlParser.java.

The missing file is: org.apache.calcite.sql.parser.impl.SqlParserImpl

Eclipse complains that "the import org.apache.calcite.sql.parser.impl.SqlParserImpl cannot be resolved".

Should I comment this import line out? Or do I need this file to parse SQL?

Thanks,
Joy
[jira] [Created] (CALCITE-2180) Invalid code generated for negative of byte and short values
Julian Hyde created CALCITE-2180:

Summary: Invalid code generated for negative of byte and short values
Key: CALCITE-2180
URL: https://issues.apache.org/jira/browse/CALCITE-2180
Project: Calcite
Issue Type: Bug
Reporter: Julian Hyde
Assignee: Julian Hyde

Invalid code is generated for the negative of byte and short values. The reason is that, in Java, if {{b}} is a value of type {{byte}}, then {{-b}} has type {{int}}; similarly if {{b}} has type {{short}}. The code generator needs to account for this.

The query {code}select -deptno from dept{code} demonstrates the problem, since {{deptno}} has SQL type {{TINYINT}}.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
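The promotion rule behind this bug can be seen in plain Java, independent of Calcite (a standalone illustration, not the Calcite code generator itself):

```java
public class UnaryPromotionDemo {
    public static void main(String[] args) {
        byte b = 5;
        short s = 7;
        // Unary numeric promotion (JLS 5.6.1): operands of type byte or
        // short are promoted to int before unary minus is applied, so the
        // result of -b and -s has type int, and boxes to Integer.
        Object negByte = -b;
        Object negShort = -s;
        System.out.println(negByte.getClass().getSimpleName());   // Integer
        System.out.println(negShort.getClass().getSimpleName());  // Integer
        // byte c = -b;  // would not compile: lossy conversion from int to byte
    }
}
```

This is why generated code that assumes `-b` is still a `byte` (as for a `TINYINT` column) is invalid without an explicit narrowing cast.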
Re: copy method in RelSubset class
Yes, log a JIRA case. Making RelSubset.copy return this, rather than throw, seems pragmatic. Long-term I would like to get rid of copy, so we might reverse this change at some point. But by that time, these tests will be enabled.

Julian

> On Feb 14, 2018, at 4:04 PM, Alessandro Solimando wrote:
>
> Hi,
> while preparing some additional unit tests for the Spark adapter
> (https://github.com/asolimando/calcite/tree/SPARK-TESTS) I have stumbled
> upon the issue several times.
>
> This is the list of the tests that in my opinion should be succeeding but
> are failing because of an invocation of the *copy* method of the *RelSubset* class:
> - testFilterBetween
> - testFilterIsIn
> - testFilterTrue
> - testFilterFalse
> - testFilterOr
> - testFilterIsNotNull
> - testFilterIsNull
>
> As you can infer from the names, the common trait among them is the presence
> of a "complex" filtering condition in the where clause.
>
> Just as a side note (not implying this is a solution), by replacing the
> exception throwing with "return this;" inside *RelSubset.copy*, the
> aforementioned tests pass.
>
> Can you please acknowledge the issue (if any) so I can open a ticket, and
> reference it in the "@Ignore" of those tests, so I can advance with the PR?
>
> Best regards,
> Alessandro
>
> On 12 February 2018 at 09:56, Alessandro Solimando <
> alessandro.solima...@gmail.com> wrote:
>
>> Hello Julian,
>> If I got it right, trimming the query plan for unused fields is a top-down
>> procedure removing any reference to unused fields in the subplan rooted at
>> the considered tree node.
>>
>> This, in principle, can also affect elements of
>> *RelSubset*, independently of the fact that they are in an equivalence
>> class and that their result is "immutable".
>> The only source of problems I
>> see is that the very concept of *RelSubset* suggests a "global scope",
>> and updating it according to the contextual information of a specific
>> subplan could break its correctness (the relational expressions composing
>> the *RelSubset* would fit only some of the original contexts in which they
>> were equivalent).
>>
>> However, *trimUnusedFields*, in my example, tries to update the traits of
>> the RelSubset's elements.
>>
>> So, if *RelSubset* is meant to be immutable (traits included), then the
>> *trimUnusedFields* method should never call *copy* on it, but it does,
>> and the exception is thrown.
>>
>> The fact that implementing copy for *RelSubset* as the identity (that is,
>> simply returning "this", ignoring any modification to the traits) did not
>> introduce any problem reinforces the immutability hypothesis.
>>
>> Is my understanding correct?
>> Given that the query looks legal, the problem looks "real".
>> If this is confirmed, how do you suggest addressing it?
>>
>> On 12 February 2018 at 00:04, Julian Hyde wrote:
>>
>>> Can you tell me why you want to copy a RelSubset?
>>>
>>> A RelSubset is an equivalence class - a set of relational expressions
>>> that always return the same results. So if you made a copy you'd be
>>> creating another equivalent relational expression - that by definition
>>> should be in the original RelSubset.
>>>
>>> On Feb 11, 2018, at 1:18 PM, Alessandro Solimando <
>>> alessandro.solima...@gmail.com> wrote:
>>>
>>> Hello community,
>>> I am adding a SparkAdapter test with the following query:
>>>
>>>   select *
>>>   from (values (1, 'a'), (2, 'b'), (3, 'b'), (4, 'c'), (2, 'c')) as t(x, y)
>>>   where x between 3 and 4
>>>
>>> When executed, an exception is thrown (the full stack trace is at the
>>> end of the email) in the *copy* method of the *RelSubset* class, while
>>> Calcite is trying to get rid of unused terms (specifically, the
>>> *trimUnusedFields* method of the *SqlToRelConverter* class).
The signature of copy is as follows:

  public RelNode copy(RelTraitSet traitSet, List<RelNode> inputs)

First of all, I don't understand the reason for the *UnsupportedOperationException* in the first place. Why shouldn't a RelSubset be copied?

Assuming that the functionality is simply missing, I have considered two alternatives for implementing it:
1) copy as the identity function -> all Calcite tests pass, but I am ignoring the *traitSet* parameter this way, which looks odd
2) I have tried to build a new *RelSubset* by reusing the cluster and set information from the object, and the trait argument of copy -> "assert traits.allSimple();" fails in the constructor

In my example, the trait "[1]" (an ordering detected at tuple level on the 2nd component) is transformed into a composite trait "[[1]]", and this makes the assertion fail.

While I know what a trait is, I don't understand what a composite one is. Do you have a concrete example?

So the problem here is the introduction of the
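The pragmatic stop-gap discussed at the top of the thread — having copy return the receiver instead of throwing — can be sketched with a self-contained analogue. The class below is hypothetical plain Java, not Calcite's actual RelSubset; it only illustrates the "identity copy on an immutable node" idea:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical stand-in for an immutable plan node such as RelSubset.
class SubsetLikeNode {
    private final List<String> traits;

    SubsetLikeNode(List<String> traits) {
        this.traits = Collections.unmodifiableList(traits);
    }

    // Instead of throwing UnsupportedOperationException, copy ignores the
    // requested trait changes and returns the receiver, treating the node
    // as immutable -- the behavior proposed on the thread.
    SubsetLikeNode copy(List<String> requestedTraits) {
        return this;
    }

    List<String> traits() {
        return traits;
    }
}

public class IdentityCopyDemo {
    public static void main(String[] args) {
        SubsetLikeNode node = new SubsetLikeNode(Arrays.asList("[1]"));
        SubsetLikeNode copy = node.copy(Arrays.asList("[[1]]"));
        System.out.println(copy == node); // true: identity copy, traits unchanged
    }
}
```

The trade-off is exactly the one noted above: the traitSet argument is silently ignored, which is safe only as long as the node really is treated as immutable everywhere.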
Re: subquery leading to a "java.lang.ClassCastException: RexSubQuery cannot be cast to RexLocalRef"
If this still occurs in the latest master then yes, definitely log a JIRA case.

On Wed, Feb 14, 2018 at 3:41 PM, Alessandro Solimando wrote:
> Hello,
> while executing this query, Calcite tries to cast the subquery (RexSubQuery)
> to a local reference (RexLocalRef), resulting in a ClassCastException.
>
> Here is the query:
>
>   select *
>   from (values (1, 'a'), (2, 'b'), (3, 'b'), (4, 'c'), (2, 'c')) as t(x, y)
>   where exists (
>     select *
>     from (values (1, 'a'), (2, 'b')) as v(w, z)
>     where w < x)
>
> But the same happens with other (similar) queries:
>
>   select x
>   from (values (1, 'a'), (2, 'b')) as t(x, y)
>   where x <= all (
>     select x
>     from (values (1, 'a'), (2, 'b'), (1, 'b'), (2, 'c'), (2, 'c')) as t(x, y))
>
> Can you confirm the issue?
> If so I will open a ticket, so I can mark the related tests as ignored with
> a reference to such ticket (until the problem gets fixed).
>
> If needed, they can be inspected here:
> https://github.com/asolimando/calcite/tree/SPARK-TESTS
>
> The tests reproducing the issue are:
> - testFilterExists
> - testFilterNotExists
> - testSubqueryAny
> - testSubqueryAll
>
> Below is the full stack trace (for the first query mentioned above):
>
> java.lang.RuntimeException: With materializationsEnabled=false, limit=0
>   at org.apache.calcite.test.CalciteAssert.assertQuery(CalciteAssert.java:600)
>   at org.apache.calcite.test.CalciteAssert$AssertQuery.returns(CalciteAssert.java:1346)
>   at org.apache.calcite.test.CalciteAssert$AssertQuery.returns(CalciteAssert.java:1329)
>   at org.apache.calcite.test.CalciteAssert$AssertQuery.returnsUnordered(CalciteAssert.java:1357)
>   at org.apache.calcite.test.SparkAdapterTest.commonTester(SparkAdapterTest.java:93)
>   at org.apache.calcite.test.SparkAdapterTest.testFilterExists(SparkAdapterTest.java:720)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
>   at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
>   at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> Caused by: java.sql.SQLException: Error while executing SQL "select *
> from (values (1, 'a'), (2, 'b'), (3, 'b'), (4, 'c'), (2, 'c')) as t(x, y)
> where exists (
> select *
> from (values (1, 'a'), (2, 'b')) as v(w, z)
> where w < x
> )": org.apache.calcite.rex.RexSubQuery cannot be cast to org.apache.calcite.rex.RexLocalRef
>   at org.apache.calcite.avatica.Helper.createException(Helper.java:56)
>   at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
>   at org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
>   at org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:218)
>   at org.apache.calcite.test.CalciteAssert.assertQuery(CalciteAssert.java:568)
>   ... 27 more
> Caused by: java.lang.ClassCastException:
> org.apache.calcite.rex.RexSubQuery cannot be cast to
> org.apache.calcite.rex.RexLocalRef
>   at org.apache.calcite.rex.RexProgramBuilder.registerInput(RexProgramBuilder.java:298)
>   at org.apache.calcite.rex.RexProgramBuilder.addCondition(RexProgramBuilder.java:272)
>   at org.apache.calcite.adapter.enumerable.EnumerableFilterToCalcRule.onMatch(EnumerableFilterToCalcRule.java:50)
>   at
Re: Using Calcite for query planning
Ok, thank you very much Julian, we'll try it out.
Best regards,
Davide and Guohui.

On 14/02/18 19:42, Julian Hyde wrote:
> It sounds like a good fit. Parse the SQL, translate to relational algebra, apply
> some query transformation rules on the algebra. If you have a few simple
> transformations in mind, you may be able to achieve it without a cost model. Or,
> as you propose, a simple model based on cardinality.
>
> To convert a union of conjunctive queries to a conjunction of possibly-union
> queries, you would probably need a rule called UnionJoinTransposeRule that
> converts (Union (Join X Y) (Join X Z)) into (Join X (Union Y Z)) or something
> like that, and combines it with some existing rules to push unions. There is
> currently no such rule but it would not be hard to write.
>
> Julian
>
>> On Feb 14, 2018, at 5:24 AM, Guohui Xiao wrote:
>>
>> Hi,
>> We are considering using Calcite to perform cost-based query optimization in
>> our project. Specifically, we can already generate some SQL queries expressed
>> as relational algebra expressions through our API, and we want to optimize the
>> generated expressions using Calcite. We have a cost model based on cardinality
>> estimation. We want to use it to convert, e.g., a union of conjunctive queries
>> (UCQ) into a join of UCQs. We would like to understand how much effort is
>> needed to realize our idea using Calcite. Do you have suggestions about this?
>> Thanks in advance.
>> Best regards,
>> Guohui & Davide
>> --
>> Guohui Xiao, PhD
>> Assistant Professor with a fixed-term contract
>> KRDB - Faculty of Computer Science
>> Free University of Bozen-Bolzano
>> Piazza Domenicani, 3 I-39100 Bolzano, Italy
>> http://www.ghxiao.org
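The algebraic identity behind such a rule — an inner join distributes over union — can be checked with a toy, self-contained example. Plain Java sets stand in for relations here; none of this is Calcite API, and the single-column equi-join is only the simplest case of the transformation:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class UnionJoinTransposeDemo {
    // Toy equi-join: pair up equal values from the two "relations".
    static Set<List<Integer>> join(Set<Integer> left, Set<Integer> right) {
        Set<List<Integer>> out = new HashSet<>();
        for (int l : left) {
            for (int r : right) {
                if (l == r) {
                    out.add(Arrays.asList(l, r));
                }
            }
        }
        return out;
    }

    // Set union, matching SQL UNION (duplicates eliminated).
    static Set<Integer> union(Set<Integer> a, Set<Integer> b) {
        Set<Integer> u = new HashSet<>(a);
        u.addAll(b);
        return u;
    }

    public static void main(String[] args) {
        Set<Integer> x = new HashSet<>(Arrays.asList(1, 2, 3));
        Set<Integer> y = new HashSet<>(Arrays.asList(2, 3));
        Set<Integer> z = new HashSet<>(Arrays.asList(3, 4));

        // (Union (Join X Y) (Join X Z))
        Set<List<Integer>> unionOfJoins = join(x, y);
        unionOfJoins.addAll(join(x, z));

        // (Join X (Union Y Z))
        Set<List<Integer>> joinOfUnion = join(x, union(y, z));

        System.out.println(unionOfJoins.equals(joinOfUnion)); // true
    }
}
```

The equality holds because the join predicate only relates X to Y (or Z) rows individually, which is what makes a UnionJoinTransposeRule semantics-preserving.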
Re: Multi Product support in a single RelNode tree
These are great inputs. Thank you!

On Thursday 15 February 2018, 1:04:18 AM IST, Julian Hyde wrote:

Schema (and SchemaPlus) is a namespace used to look up object names when validating a SQL query. It is not strictly required if you are building the query manually, or using RelBuilder.

The key is the TableScan objects (in this case JdbcTableScan) representing accesses to tables. Those tables could be foreign tables in the same schema, or in different schemas, or be free-floating objects not in any schema at all. The important thing is the instance of JdbcConvention in their TableScan.getTraitSet(). That JdbcConvention contains the URL of the database, its dialect, etc.

With different JdbcConvention instances you could join a table in an Oracle database to a table in a SqlServer database, or even to a table in a different Oracle database. But if two tables are in the same Oracle database they must have the same JdbcConvention instance. Otherwise Calcite will not consider creating a JdbcJoin (i.e. a join inside the target database).

Julian

> On Feb 13, 2018, at 10:01 PM, Abbas Gadhia wrote:
>
> Hi,
> I want to build a RelNode tree with different conventions on different
> RelNodes (for example: in the following select query "select * from t1, t2",
> t1 is a table from Oracle and t2 is a table from SqlServer).
>
> I'm confused whether I should be using a single SchemaPlus to hold table
> references from both Oracle and SqlServer, or whether I should be creating a
> different SchemaPlus for each product. A different SchemaPlus would force me
> to use a different RelBuilder, so my guess is that a single SchemaPlus with
> the following hierarchy may suffice ("oracle" -> "database1" -> "schema1" ->
> "t1"). However, I suspect this single hierarchy (with the product name
> inside) may not play well with other parts of Calcite.
> Any thoughts, however small, would be appreciated.
> Thanks
> Abbas
Re: Use Calcite adapter with Elastic search and Java
Did you take a look at the documentation?
https://calcite.apache.org/docs/elasticsearch_adapter.html

If you use ES5+ you have to use an empty userConfig, as the ES client doesn't support the properties that are listed in the example. You can also take a look at the tests
(https://github.com/apache/calcite/blob/master/elasticsearch5/src/test/java/org/apache/calcite/test/Elasticsearch5AdapterIT.java)
in the Calcite repository to find usage examples. Currently you can't use the ES5 adapter together with sqlline, as there are classpath problems, but nobody has stepped up to fix that yet.

Best regards,

*Christian Beikov*

On 14.02.2018 at 10:15, Saurabh Pathak wrote:
> Hello,
> I want to use the Calcite adapter with Elasticsearch and Java, but I have no
> idea how to use the Calcite Elasticsearch adapter. Could you please let us
> know how to use the Calcite adapter with Elasticsearch and Java?
> Thanks in advance for your cooperation.
> Thanks & Regards
> Saurabh Pathak
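For orientation, the adapter is configured through a model file passed to the Calcite JDBC driver. The sketch below is adapted from the adapter documentation linked above and is only an assumption-laden example: the factory class name is the one shipped in the elasticsearch5 module, the coordinates and the index name "usa" are placeholders, and — per the note above about ES5 — userConfig is left empty:

```json
{
  "version": "1.0",
  "defaultSchema": "elasticsearch",
  "schemas": [
    {
      "type": "custom",
      "name": "elasticsearch",
      "factory": "org.apache.calcite.adapter.elasticsearch5.Elasticsearch5SchemaFactory",
      "operand": {
        "coordinates": "{'127.0.0.1': 9300}",
        "userConfig": "{}",
        "index": "usa"
      }
    }
  ]
}
```

From Java one would then connect with a URL such as "jdbc:calcite:model=path/to/model.json" via DriverManager and issue ordinary SQL against the mapped index.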