[jira] [Resolved] (CALCITE-1276) In Druid adapter, deduce columns by running a "segmentMetadata" query
[ https://issues.apache.org/jira/browse/CALCITE-1276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Julian Hyde resolved CALCITE-1276.
----------------------------------
    Resolution: Fixed
 Fix Version/s: 1.8.0

Fixed in http://git-wip-us.apache.org/repos/asf/calcite/commit/435e2030. Documentation has not yet been updated, but I will do it before 1.8 is announced.

> In Druid adapter, deduce columns by running a "segmentMetadata" query
> ---------------------------------------------------------------------
>
>         Key: CALCITE-1276
>         URL: https://issues.apache.org/jira/browse/CALCITE-1276
>     Project: Calcite
>  Issue Type: Bug
>  Components: druid
>    Reporter: Julian Hyde
>    Assignee: Julian Hyde
>     Fix For: 1.8.0
>
> In Druid adapter, deduce columns by running a "segmentMetadata" query.
> Currently a Druid model must contain "dimensions" and "metrics" fields. If
> either of these is absent, Calcite should run a [segment metadata
> query|http://druid.io/docs/latest/querying/segmentmetadataquery.html] and
> turn the resulting "columns" and "aggregators" fields into columns.
> The effect will be that the Druid adapter is easier to configure. You
> will be able to connect without a model, per CALCITE-1259:
> {code}
> jdbc:calcite:schema=wiki; schemaFactory=
>   org.apache.calcite.adapter.druid.DruidSchemaFactory;
>   schema.url=http://localhost:8082/druid/v2/?pretty
> {code}
> It will also adapt to schema changes. If there are multiple segments and the
> schema evolves over time, the segments might have different columns and
> aggregators. Calcite should use {{"merge": true,
> "lenientAggregatorMerge": false}} to combine them.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
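[Editorial note] As a rough sketch of what the issue proposes, the following builds the JSON body of a Druid "segmentMetadata" query with the {{"merge": true, "lenientAggregatorMerge": false}} flags mentioned above. The data source name "wiki", the helper class, and the choice of analysis types are illustrative assumptions, not Calcite's actual implementation (which would use a JSON library such as Jackson).

```java
// Hypothetical helper: builds a segmentMetadata query body by hand.
// "analysisTypes": ["aggregators"] is an assumption about which analyses
// the adapter would request to discover metric columns.
public class SegmentMetadataQuery {
  /** Returns the JSON body for a segmentMetadata query on a data source. */
  public static String body(String dataSource) {
    return "{\n"
        + "  \"queryType\": \"segmentMetadata\",\n"
        + "  \"dataSource\": \"" + dataSource + "\",\n"
        + "  \"merge\": true,\n"                       // combine per-segment results
        + "  \"analysisTypes\": [\"aggregators\"],\n"
        + "  \"lenientAggregatorMerge\": false\n"      // fail on conflicting aggregators
        + "}";
  }

  public static void main(String[] args) {
    System.out.println(body("wiki"));
  }
}
```

The adapter would POST this body to the broker URL from the connect string and read the "columns" and "aggregators" fields of the response.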
[jira] [Resolved] (CALCITE-1274) Upgrade Spark adapter to spark-1.6.1
[ https://issues.apache.org/jira/browse/CALCITE-1274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Julian Hyde resolved CALCITE-1274.
----------------------------------
    Resolution: Fixed
 Fix Version/s: 1.8.0

Fixed in http://git-wip-us.apache.org/repos/asf/calcite/commit/d18da01e. This upgrades to the most recent version of Apache Spark and fixes any API changes, but making fuller use of Spark remains an outstanding task.

> Upgrade Spark adapter to spark-1.6.1
> ------------------------------------
>
>         Key: CALCITE-1274
>         URL: https://issues.apache.org/jira/browse/CALCITE-1274
>     Project: Calcite
>  Issue Type: Bug
>    Reporter: Julian Hyde
>    Assignee: Julian Hyde
>     Fix For: 1.8.0
>
> Calcite's Spark adapter uses an old version of Apache Spark, does not
> push very many relational operators down to Spark, and its test suite does
> not pass.
[jira] [Resolved] (CALCITE-1281) Druid adapter wrongly returns all numeric values as int or float
[ https://issues.apache.org/jira/browse/CALCITE-1281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Julian Hyde resolved CALCITE-1281.
----------------------------------
    Resolution: Fixed
 Fix Version/s: 1.8.0

Fixed in http://git-wip-us.apache.org/repos/asf/calcite/commit/ec49a0fa.

> Druid adapter wrongly returns all numeric values as int or float
> ----------------------------------------------------------------
>
>         Key: CALCITE-1281
>         URL: https://issues.apache.org/jira/browse/CALCITE-1281
>     Project: Calcite
>  Issue Type: Bug
>    Reporter: Julian Hyde
>    Assignee: Julian Hyde
>     Fix For: 1.8.0
>
> Druid adapter wrongly returns all numeric values as int or float. If the JDBC
> driver is expecting long or double, it gets a ClassCastException:
> {noformat}
> java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Long
>   at org.apache.calcite.avatica.util.AbstractCursor$LongAccessor.getLong(AbstractCursor.java:539)
>   at org.apache.calcite.avatica.util.AbstractCursor$AccessorImpl.getInt(AbstractCursor.java:304)
>   at org.apache.calcite.avatica.AvaticaResultSet.getInt(AvaticaResultSet.java:252)
>   at org.apache.calcite.test.DruidAdapterIT$2.apply(DruidAdapterIT.java:210)
> {noformat}
[jira] [Resolved] (CALCITE-1279) Druid "select" query gives ClassCastException
[ https://issues.apache.org/jira/browse/CALCITE-1279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Julian Hyde resolved CALCITE-1279.
----------------------------------
    Resolution: Fixed
 Fix Version/s: 1.8.0

Fixed in http://git-wip-us.apache.org/repos/asf/calcite/commit/23c8e458.

> Druid "select" query gives ClassCastException
> ---------------------------------------------
>
>         Key: CALCITE-1279
>         URL: https://issues.apache.org/jira/browse/CALCITE-1279
>     Project: Calcite
>  Issue Type: Bug
>    Reporter: Julian Hyde
>    Assignee: Julian Hyde
>     Fix For: 1.8.0
>
> In the Druid adapter, a query of the "select" query type gives
> {{ClassCastException}}. This does not apply to other query types, such as
> "groupBy", so only SQL queries that have no aggregation (GROUP BY, HAVING)
> are affected.
> The cause is that the {{DRUID_FETCH}} property recently changed from STRING
> to NUMBER but we are still accessing it using {{getString()}}.
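[Editorial note] The class of bug described in CALCITE-1279 can be sketched as follows: a property whose JSON value changed from STRING to NUMBER is still read with a string accessor, which throws ClassCastException. The class and method names below are hypothetical stand-ins, not Calcite's actual API.

```java
import java.util.HashMap;
import java.util.Map;

public class FetchProperty {
  /** Old code path: assumes the value is a String.
   *  Throws ClassCastException once the server starts sending a Number. */
  static String getString(Map<String, Object> node, String key) {
    return (String) node.get(key);
  }

  /** Fixed code path: accepts any Number (or a numeric String). */
  static int getInt(Map<String, Object> node, String key) {
    Object v = node.get(key);
    if (v instanceof Number) {
      return ((Number) v).intValue();
    }
    return Integer.parseInt((String) v);
  }

  public static void main(String[] args) {
    Map<String, Object> node = new HashMap<>();
    node.put("fetch", 100);        // NUMBER, as described in the issue
    try {
      getString(node, "fetch");    // reproduces the failure mode
      throw new AssertionError("expected ClassCastException");
    } catch (ClassCastException e) {
      // expected: Integer cannot be cast to String
    }
    System.out.println(getInt(node, "fetch"));
  }
}
```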
[jira] [Resolved] (CALCITE-1277) Rat fails on source distribution due to git.properties
[ https://issues.apache.org/jira/browse/CALCITE-1277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Julian Hyde resolved CALCITE-1277.
----------------------------------
    Resolution: Fixed
 Fix Version/s: 1.8.0  (was: 1.9.0)

Fixed in http://git-wip-us.apache.org/repos/asf/calcite/commit/a02da271.

> Rat fails on source distribution due to git.properties
> ------------------------------------------------------
>
>         Key: CALCITE-1277
>         URL: https://issues.apache.org/jira/browse/CALCITE-1277
>     Project: Calcite
>  Issue Type: Bug
>    Reporter: Julian Hyde
>    Assignee: Julian Hyde
>     Fix For: 1.8.0
>
> During the 1.8 release vote, [~alangates] reported that {{mvn apache-rat:check}}
> fails on the source distribution due to {{git.properties}}. I confirmed this.
> {{git.properties}} is not a source file -- it is generated during the
> release. We should add it to the rat exclusions.
[jira] [Commented] (CALCITE-1278) CalciteSignature's ColumnMetaData info is wrong for DML (except INSERT)
[ https://issues.apache.org/jira/browse/CALCITE-1278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15321727#comment-15321727 ]

Maryann Xue commented on CALCITE-1278:
--------------------------------------
The fix is straightforward, but I had a problem when trying to add a test to JdbcFrontLinqBackTest. It hangs until it ends with an OOM error.
{code}
  @Test public void testDelete() {
    final List employees = new ArrayList<>();
    CalciteAssert.AssertThat with = mutable(employees);
    with.query("select * from \"foo\".\"bar\"")
        .returns("empid=0; deptno=0; name=first; salary=0.0; commission=null\n");
    with.query("insert into \"foo\".\"bar\" select * from \"hr\".\"emps\"")
        .updates(4);
    with.query("select count(*) as c from \"foo\".\"bar\"")
        .returns("C=5\n");
    with.query("delete from \"foo\".\"bar\" where \"deptno\" = 10")
        .typeIs("");
    with.query("select \"name\", count(*) as c from \"foo\".\"bar\" "
        + "group by \"name\"")
        .returnsUnordered("name=Eric; C=1",
            "name=first; C=1");
  }
{code}

> CalciteSignature's ColumnMetaData info is wrong for DML (except INSERT)
> -----------------------------------------------------------------------
>
>              Key: CALCITE-1278
>              URL: https://issues.apache.org/jira/browse/CALCITE-1278
>          Project: Calcite
>       Issue Type: Bug
>       Components: core
> Affects Versions: 1.7.0
>         Reporter: Maryann Xue
>         Assignee: Maryann Xue
>
> DELETE, as one type of TableModify operation, has the same RelDataType as
> INSERT, which is RelRecordType(ROWCOUNT INTEGER). But during the "prepare"
> stage, the corresponding ColumnMetaData info becomes inconsistent, due to:
> {code}
> preparedResult = preparingStmt.prepareSql(
>     sqlNode, Object.class, validator, true);
> switch (sqlNode.getKind()) {
> case INSERT:
> case EXPLAIN:
>   // FIXME: getValidatedNodeType is wrong for DML
>   x = RelOptUtil.createDmlRowType(sqlNode.getKind(), typeFactory);
>   break;
> default:
>   x = validator.getValidatedNodeType(sqlNode);
> }
> {code}
> I've noticed that there is a "FIXME: getValidatedNodeType is wrong for DML".
> I guess that's the root cause, and RelOptUtil.createDmlRowType() is probably a
> workaround. For now, we can simply include DELETE and the other TableModify
> operations in the first switch case.
[jira] [Updated] (CALCITE-1274) Upgrade Spark adapter to spark-1.6.1
[ https://issues.apache.org/jira/browse/CALCITE-1274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Julian Hyde updated CALCITE-1274:
---------------------------------
    Summary: Upgrade Spark adapter to spark-1.6.1  (was: Update Spark adapter)

> Upgrade Spark adapter to spark-1.6.1
> ------------------------------------
>
>         Key: CALCITE-1274
>         URL: https://issues.apache.org/jira/browse/CALCITE-1274
>     Project: Calcite
>  Issue Type: Bug
>    Reporter: Julian Hyde
>    Assignee: Julian Hyde
>
> Calcite's Spark adapter uses an old version of Apache Spark, does not
> push very many relational operators down to Spark, and its test suite does
> not pass.
[jira] [Created] (CALCITE-1281) Druid adapter wrongly returns all numeric values as int or float
Julian Hyde created CALCITE-1281:
---------------------------------

          Summary: Druid adapter wrongly returns all numeric values as int or float
              Key: CALCITE-1281
              URL: https://issues.apache.org/jira/browse/CALCITE-1281
          Project: Calcite
       Issue Type: Bug
         Reporter: Julian Hyde
         Assignee: Julian Hyde

Druid adapter wrongly returns all numeric values as int or float. If the JDBC driver is expecting long or double, it gets a ClassCastException:
{noformat}
java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Long
  at org.apache.calcite.avatica.util.AbstractCursor$LongAccessor.getLong(AbstractCursor.java:539)
  at org.apache.calcite.avatica.util.AbstractCursor$AccessorImpl.getInt(AbstractCursor.java:304)
  at org.apache.calcite.avatica.AvaticaResultSet.getInt(AvaticaResultSet.java:252)
  at org.apache.calcite.test.DruidAdapterIT$2.apply(DruidAdapterIT.java:210)
{noformat}
[jira] [Commented] (CALCITE-1263) Case-insensitive match and null default value for enum properties
[ https://issues.apache.org/jira/browse/CALCITE-1263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15321337#comment-15321337 ]

Julian Hyde commented on CALCITE-1263:
--------------------------------------
It's not urgent. There is no need for a 1.8.1 if 1.9 arrives at the usual release cadence.

> Case-insensitive match and null default value for enum properties
> -----------------------------------------------------------------
>
>         Key: CALCITE-1263
>         URL: https://issues.apache.org/jira/browse/CALCITE-1263
>     Project: Calcite
>  Issue Type: Bug
>  Components: avatica
>    Reporter: Julian Hyde
>    Assignee: Julian Hyde
>     Fix For: next
>
> In ConnectionConfigImpl, we allow properties based on enum classes. The
> getEnum method throws "Required property 'name' not specified" if the default
> value is null, but it should not; we should allow enum properties whose
> default value is null.
> Also, when resolving an enum property we should check the exact string first,
> then look for case-insensitive matches. This will help if people write
> 'p=foo' when 'p' is a property of type enum { FOO, BAZ }.
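[Editorial note] The lookup order CALCITE-1263 proposes (exact name first, then a case-insensitive scan, with a null default permitted) can be sketched as below. This mirrors the behavior described in the issue; the class and method names are illustrative, not Calcite's actual ConnectionConfigImpl code.

```java
public class EnumProps {
  enum P { FOO, BAZ }

  /** Resolves a string to an enum constant, or returns defaultValue
   *  (which may be null) when no constant matches. */
  static <E extends Enum<E>> E resolve(Class<E> clazz, String s, E defaultValue) {
    if (s == null) {
      return defaultValue;  // a null default is allowed, not an error
    }
    // 1. Check the exact string first.
    for (E e : clazz.getEnumConstants()) {
      if (e.name().equals(s)) {
        return e;
      }
    }
    // 2. Then look for a case-insensitive match, so "p=foo" resolves to FOO.
    for (E e : clazz.getEnumConstants()) {
      if (e.name().equalsIgnoreCase(s)) {
        return e;
      }
    }
    return defaultValue;
  }
}
```

Checking the exact name before the case-insensitive scan keeps behavior deterministic if two constants differ only in case.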
[jira] [Created] (CALCITE-1279) Druid "select" query gives ClassCastException
Julian Hyde created CALCITE-1279:
---------------------------------

          Summary: Druid "select" query gives ClassCastException
              Key: CALCITE-1279
              URL: https://issues.apache.org/jira/browse/CALCITE-1279
          Project: Calcite
       Issue Type: Bug
         Reporter: Julian Hyde
         Assignee: Julian Hyde

In the Druid adapter, a query of the "select" query type gives {{ClassCastException}}. This does not apply to other query types, such as "groupBy", so only SQL queries that have no aggregation (GROUP BY, HAVING) are affected.

The cause is that the {{DRUID_FETCH}} property recently changed from STRING to NUMBER but we are still accessing it using {{getString()}}.
[jira] [Commented] (CALCITE-1263) Case-insensitive match and null default value for enum properties
[ https://issues.apache.org/jira/browse/CALCITE-1263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15321220#comment-15321220 ]

Josh Elser commented on CALCITE-1263:
-------------------------------------
https://github.com/apache/calcite/pull/240 looks good to me. If you'd like it to go into a 1.8.1, feel free to land it in branch-avatica-1.8 as well as master; otherwise, it'll hit avatica 1.9.0.

> Case-insensitive match and null default value for enum properties
> -----------------------------------------------------------------
>
>         Key: CALCITE-1263
>         URL: https://issues.apache.org/jira/browse/CALCITE-1263
>     Project: Calcite
>  Issue Type: Bug
>  Components: avatica
>    Reporter: Julian Hyde
>    Assignee: Julian Hyde
>     Fix For: next
>
> In ConnectionConfigImpl, we allow properties based on enum classes. The
> getEnum method throws "Required property 'name' not specified" if the default
> value is null, but it should not; we should allow enum properties whose
> default value is null.
> Also, when resolving an enum property we should check the exact string first,
> then look for case-insensitive matches. This will help if people write
> 'p=foo' when 'p' is a property of type enum { FOO, BAZ }.
[jira] [Issue Comment Deleted] (CALCITE-1263) Case-insensitive match and null default value for enum properties
[ https://issues.apache.org/jira/browse/CALCITE-1263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Josh Elser updated CALCITE-1263:
--------------------------------
    Comment: was deleted

(was: Wow, sorry, I totally missed your ping on this. So sorry. We can look into a 1.8.1 if you'd like with this one.)

> Case-insensitive match and null default value for enum properties
> -----------------------------------------------------------------
>
>         Key: CALCITE-1263
>         URL: https://issues.apache.org/jira/browse/CALCITE-1263
>     Project: Calcite
>  Issue Type: Bug
>  Components: avatica
>    Reporter: Julian Hyde
>    Assignee: Julian Hyde
>     Fix For: next
>
> In ConnectionConfigImpl, we allow properties based on enum classes. The
> getEnum method throws "Required property 'name' not specified" if the default
> value is null, but it should not; we should allow enum properties whose
> default value is null.
> Also, when resolving an enum property we should check the exact string first,
> then look for case-insensitive matches. This will help if people write
> 'p=foo' when 'p' is a property of type enum { FOO, BAZ }.
[jira] [Commented] (CALCITE-1263) Case-insensitive match and null default value for enum properties
[ https://issues.apache.org/jira/browse/CALCITE-1263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15321207#comment-15321207 ]

Josh Elser commented on CALCITE-1263:
-------------------------------------
Wow, sorry, I totally missed your ping on this. So sorry. We can look into a 1.8.1 if you'd like with this one.

> Case-insensitive match and null default value for enum properties
> -----------------------------------------------------------------
>
>         Key: CALCITE-1263
>         URL: https://issues.apache.org/jira/browse/CALCITE-1263
>     Project: Calcite
>  Issue Type: Bug
>  Components: avatica
>    Reporter: Julian Hyde
>    Assignee: Julian Hyde
>     Fix For: next
>
> In ConnectionConfigImpl, we allow properties based on enum classes. The
> getEnum method throws "Required property 'name' not specified" if the default
> value is null, but it should not; we should allow enum properties whose
> default value is null.
> Also, when resolving an enum property we should check the exact string first,
> then look for case-insensitive matches. This will help if people write
> 'p=foo' when 'p' is a property of type enum { FOO, BAZ }.
[jira] [Commented] (CALCITE-1263) Case-insensitive match and null default value for enum properties
[ https://issues.apache.org/jira/browse/CALCITE-1263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15321206#comment-15321206 ]

Josh Elser commented on CALCITE-1263:
-------------------------------------
Wow, sorry, I totally missed your ping on this. So sorry. We can look into a 1.8.1 if you'd like with this one.

> Case-insensitive match and null default value for enum properties
> -----------------------------------------------------------------
>
>         Key: CALCITE-1263
>         URL: https://issues.apache.org/jira/browse/CALCITE-1263
>     Project: Calcite
>  Issue Type: Bug
>  Components: avatica
>    Reporter: Julian Hyde
>    Assignee: Julian Hyde
>     Fix For: next
>
> In ConnectionConfigImpl, we allow properties based on enum classes. The
> getEnum method throws "Required property 'name' not specified" if the default
> value is null, but it should not; we should allow enum properties whose
> default value is null.
> Also, when resolving an enum property we should check the exact string first,
> then look for case-insensitive matches. This will help if people write
> 'p=foo' when 'p' is a property of type enum { FOO, BAZ }.
[jira] [Updated] (CALCITE-1278) CalciteSignature's ColumnMetaData info is wrong for DML (except INSERT)
[ https://issues.apache.org/jira/browse/CALCITE-1278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Maryann Xue updated CALCITE-1278:
---------------------------------
    Description: 
DELETE, as one type of TableModify operation, has the same RelDataType as INSERT, which is RelRecordType(ROWCOUNT INTEGER). But during the "prepare" stage, the corresponding ColumnMetaData info becomes inconsistent, due to:
{code}
preparedResult = preparingStmt.prepareSql(
    sqlNode, Object.class, validator, true);
switch (sqlNode.getKind()) {
case INSERT:
case EXPLAIN:
  // FIXME: getValidatedNodeType is wrong for DML
  x = RelOptUtil.createDmlRowType(sqlNode.getKind(), typeFactory);
  break;
default:
  x = validator.getValidatedNodeType(sqlNode);
}
{code}
I've noticed that there is a "FIXME: getValidatedNodeType is wrong for DML". I guess that's the root cause, and RelOptUtil.createDmlRowType() is probably a workaround. For now, we can simply include DELETE and the other TableModify operations in the first switch case.

  was:
DELETE, as one type of TableModify operation, has the same RelDataType as INSERT, which is RelRecordType(ROWCOUNT INTEGER). But during the "prepare" stage, the corresponding ColumnMetaData info becomes inconsistent, due to:
{code}
preparedResult = preparingStmt.prepareSql(
    sqlNode, Object.class, validator, true);
switch (sqlNode.getKind()) {
case INSERT:
case EXPLAIN:
  // FIXME: getValidatedNodeType is wrong for DML
  x = RelOptUtil.createDmlRowType(sqlNode.getKind(), typeFactory);
  break;
default:
  x = validator.getValidatedNodeType(sqlNode);
}
{code}
I've noticed that there is a "FIXME: getValidatedNodeType is wrong for DML". I guess that's the root cause, and RelOptUtil.createDmlRowType() is probably a workaround. For now, we can simply include DELETE in the first switch case.

> CalciteSignature's ColumnMetaData info is wrong for DML (except INSERT)
> -----------------------------------------------------------------------
>
>              Key: CALCITE-1278
>              URL: https://issues.apache.org/jira/browse/CALCITE-1278
>          Project: Calcite
>       Issue Type: Bug
>       Components: core
> Affects Versions: 1.7.0
>         Reporter: Maryann Xue
>         Assignee: Maryann Xue
>
> DELETE, as one type of TableModify operation, has the same RelDataType as
> INSERT, which is RelRecordType(ROWCOUNT INTEGER). But during the "prepare"
> stage, the corresponding ColumnMetaData info becomes inconsistent, due to:
> {code}
> preparedResult = preparingStmt.prepareSql(
>     sqlNode, Object.class, validator, true);
> switch (sqlNode.getKind()) {
> case INSERT:
> case EXPLAIN:
>   // FIXME: getValidatedNodeType is wrong for DML
>   x = RelOptUtil.createDmlRowType(sqlNode.getKind(), typeFactory);
>   break;
> default:
>   x = validator.getValidatedNodeType(sqlNode);
> }
> {code}
> I've noticed that there is a "FIXME: getValidatedNodeType is wrong for DML".
> I guess that's the root cause, and RelOptUtil.createDmlRowType() is probably a
> workaround. For now, we can simply include DELETE and the other TableModify
> operations in the first switch case.
[jira] [Updated] (CALCITE-1278) CalciteSignature's ColumnMetaData info is wrong for DML (except INSERT)
[ https://issues.apache.org/jira/browse/CALCITE-1278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Maryann Xue updated CALCITE-1278:
---------------------------------
    Summary: CalciteSignature's ColumnMetaData info is wrong for DML (except INSERT)  (was: CalciteSignature's ColumnMetaData info is wrong for DELETE)

> CalciteSignature's ColumnMetaData info is wrong for DML (except INSERT)
> -----------------------------------------------------------------------
>
>              Key: CALCITE-1278
>              URL: https://issues.apache.org/jira/browse/CALCITE-1278
>          Project: Calcite
>       Issue Type: Bug
>       Components: core
> Affects Versions: 1.7.0
>         Reporter: Maryann Xue
>         Assignee: Maryann Xue
>
> DELETE, as one type of TableModify operation, has the same RelDataType as
> INSERT, which is RelRecordType(ROWCOUNT INTEGER). But during the "prepare"
> stage, the corresponding ColumnMetaData info becomes inconsistent, due to:
> {code}
> preparedResult = preparingStmt.prepareSql(
>     sqlNode, Object.class, validator, true);
> switch (sqlNode.getKind()) {
> case INSERT:
> case EXPLAIN:
>   // FIXME: getValidatedNodeType is wrong for DML
>   x = RelOptUtil.createDmlRowType(sqlNode.getKind(), typeFactory);
>   break;
> default:
>   x = validator.getValidatedNodeType(sqlNode);
> }
> {code}
> I've noticed that there is a "FIXME: getValidatedNodeType is wrong for DML".
> I guess that's the root cause, and RelOptUtil.createDmlRowType() is probably a
> workaround. For now, we can simply include DELETE in the first switch case.
[jira] [Created] (CALCITE-1278) CalciteSignature's ColumnMetaData info is wrong for DELETE
Maryann Xue created CALCITE-1278:
---------------------------------

          Summary: CalciteSignature's ColumnMetaData info is wrong for DELETE
              Key: CALCITE-1278
              URL: https://issues.apache.org/jira/browse/CALCITE-1278
          Project: Calcite
       Issue Type: Bug
       Components: core
 Affects Versions: 1.7.0
         Reporter: Maryann Xue
         Assignee: Maryann Xue

DELETE, as one type of TableModify operation, has the same RelDataType as INSERT, which is RelRecordType(ROWCOUNT INTEGER). But during the "prepare" stage, the corresponding ColumnMetaData info becomes inconsistent, due to:
{code}
preparedResult = preparingStmt.prepareSql(
    sqlNode, Object.class, validator, true);
switch (sqlNode.getKind()) {
case INSERT:
case EXPLAIN:
  // FIXME: getValidatedNodeType is wrong for DML
  x = RelOptUtil.createDmlRowType(sqlNode.getKind(), typeFactory);
  break;
default:
  x = validator.getValidatedNodeType(sqlNode);
}
{code}
I've noticed that there is a "FIXME: getValidatedNodeType is wrong for DML". I guess that's the root cause, and RelOptUtil.createDmlRowType() is probably a workaround. For now, we can simply include DELETE in the first switch case.
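[Editorial note] The fix proposed in CALCITE-1278 amounts to routing every TableModify kind, not just INSERT, through the DML row-count branch of the switch above. A minimal, self-contained sketch of that decision (using a simplified stand-in for Calcite's SqlKind enum, not its actual class):

```java
public class DmlRowType {
  /** Simplified stand-in for org.apache.calcite.sql.SqlKind. */
  enum SqlKind { SELECT, INSERT, DELETE, UPDATE, MERGE, EXPLAIN }

  /** Returns true if the statement should use the DML row-count type
   *  (RelRecordType(ROWCOUNT INTEGER)) rather than the validated node type. */
  static boolean usesDmlRowType(SqlKind kind) {
    switch (kind) {
    case INSERT:
    case DELETE:   // the proposed fix: include DELETE ...
    case UPDATE:   // ... and the other TableModify operations
    case MERGE:
    case EXPLAIN:
      return true;
    default:
      return false;
    }
  }
}
```

With this predicate, DELETE gets the same single-column ROWCOUNT metadata as INSERT, which is what CalciteSignature's ColumnMetaData should describe.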