The example from Luis works like a charm.
I have some questions; I will start separate threads.

Thank you
Enrico

2017-11-08 21:51 GMT+01:00 Enrico Olivelli <[email protected]>:

> Luis thank you,
> my case is the second one. I want to use Calcite planner internally on a
> database system. I will try with your suggestion
>
> Enrico
>
> On Wed, Nov 8, 2017, 20:14 Luis Fernando Kauer <[email protected]>
> wrote:
>
>>  If you intend to run a query then you should follow the tutorial and try
>> to change the csv adapter.  You can add the table to the schema at runtime
>> using something like:
>> ---------------------------------------------------------------------
>>
>> Class.forName("org.apache.calcite.jdbc.Driver");
>> Properties info = new Properties();
>> info.setProperty("lex", "MYSQL_ANSI");
>>
>> final Connection connection =
>>     DriverManager.getConnection("jdbc:calcite:", info);
>> CalciteConnection conn = connection.unwrap(CalciteConnection.class);
>> SchemaPlus root = conn.getRootSchema();
>> root.add("MYTABLE", new TableImpl());
>> Statement statement = conn.createStatement();
>> ResultSet rs = statement.executeQuery("SELECT * FROM MYTABLE");
>> ---------------------------------------------------------------------
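[Editor's note: for the executeQuery route above to actually return rows, the table registered in the schema must be one Calcite knows how to scan. Below is a minimal sketch of such a table; the class name, sample rows, and the choice of ScannableTable are illustrative assumptions, not from the thread.]

```java
// Hypothetical scannable table for the runtime-query route.
// AbstractTable supplies defaults; ScannableTable.scan() feeds the rows
// to Calcite's enumerable convention.
import org.apache.calcite.DataContext;
import org.apache.calcite.linq4j.Enumerable;
import org.apache.calcite.linq4j.Linq4j;
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.schema.ScannableTable;
import org.apache.calcite.schema.impl.AbstractTable;
import org.apache.calcite.sql.type.SqlTypeName;

public class MyScannableTable extends AbstractTable implements ScannableTable {
  // Illustrative in-memory data matching the (id, name) row type.
  private final Object[][] rows = {
      {1, "first"},
      {2, "second"}
  };

  @Override public RelDataType getRowType(RelDataTypeFactory typeFactory) {
    return typeFactory.builder()
        .add("id", typeFactory.createSqlType(SqlTypeName.INTEGER))
        .add("name", typeFactory.createSqlType(SqlTypeName.VARCHAR))
        .build();
  }

  // Called by the enumerable convention to read the table contents.
  @Override public Enumerable<Object[]> scan(DataContext root) {
    return Linq4j.asEnumerable(rows);
  }
}
```

Registering an instance of this class under "MYTABLE" would let the SELECT above return the two sample rows.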
>>
>> But if you only want to parse, validate and optimize the query plan, you
>> can use something like:
>> ---------------------------------------------------------------------
>>     Table table = new TableImpl();
>>     final SchemaPlus rootSchema = Frameworks.createRootSchema(true);
>>     SchemaPlus schema = rootSchema.add("x", new AbstractSchema());
>>     schema.add("MYTABLE", table);
>>     List<RelTraitDef> traitDefs = new ArrayList<>();
>>     traitDefs.add(ConventionTraitDef.INSTANCE);
>>     traitDefs.add(RelCollationTraitDef.INSTANCE);
>>     SqlParser.Config parserConfig =
>>         SqlParser.configBuilder(SqlParser.Config.DEFAULT)
>>             .setCaseSensitive(false)
>>             .build();
>>
>>     final FrameworkConfig config = Frameworks.newConfigBuilder()
>>         .parserConfig(parserConfig)
>>         .defaultSchema(schema)
>>         .traitDefs(traitDefs)
>>         // define the rules you want to apply
>>         .programs(Programs.ofRules(Programs.RULE_SET))
>>         .build();
>>     Planner planner = Frameworks.getPlanner(config);
>>     SqlNode n = planner.parse("SELECT * FROM MYTABLE WHERE ID < 10");
>>     n = planner.validate(n);
>>     RelNode root = planner.rel(n).project();
>>     System.out.println(RelOptUtil.dumpPlan("-- Logical Plan", root,
>>         SqlExplainFormat.TEXT, SqlExplainLevel.DIGEST_ATTRIBUTES));
>>     RelOptCluster cluster = root.getCluster();
>>     final RelOptPlanner optPlanner = cluster.getPlanner();
>>     RelTraitSet desiredTraits =
>>         cluster.traitSet().replace(EnumerableConvention.INSTANCE);
>>     final RelNode newRoot = optPlanner.changeTraits(root, desiredTraits);
>>     optPlanner.setRoot(newRoot);
>>     RelNode bestExp = optPlanner.findBestExp();
>>     System.out.println(RelOptUtil.dumpPlan("-- Best Plan", bestExp,
>>         SqlExplainFormat.TEXT, SqlExplainLevel.DIGEST_ATTRIBUTES));
>>  ---------------------------------------------------------------------
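[Editor's note: a variant of the snippet above is to drive the optimization through Planner.transform, which runs the program registered in the FrameworkConfig (index 0) with the requested output traits, instead of talking to the cluster's RelOptPlanner by hand. This is an untested sketch; it reuses the hypothetical TableImpl defined later in this thread.]

```java
// Sketch: parse, validate, convert, then optimize via Planner.transform.
// Assumes the TableImpl class from this thread is on the classpath.
import java.util.ArrayList;
import java.util.List;

import org.apache.calcite.adapter.enumerable.EnumerableConvention;
import org.apache.calcite.plan.ConventionTraitDef;
import org.apache.calcite.plan.RelOptUtil;
import org.apache.calcite.plan.RelTraitDef;
import org.apache.calcite.plan.RelTraitSet;
import org.apache.calcite.rel.RelCollationTraitDef;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.schema.SchemaPlus;
import org.apache.calcite.schema.impl.AbstractSchema;
import org.apache.calcite.sql.SqlExplainFormat;
import org.apache.calcite.sql.SqlExplainLevel;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.parser.SqlParser;
import org.apache.calcite.tools.FrameworkConfig;
import org.apache.calcite.tools.Frameworks;
import org.apache.calcite.tools.Planner;
import org.apache.calcite.tools.Programs;

public class TransformSketch {
  public static void main(String[] args) throws Exception {
    SchemaPlus rootSchema = Frameworks.createRootSchema(true);
    SchemaPlus schema = rootSchema.add("x", new AbstractSchema());
    schema.add("MYTABLE", new TableImpl());  // TableImpl as in the thread

    List<RelTraitDef> traitDefs = new ArrayList<>();
    traitDefs.add(ConventionTraitDef.INSTANCE);
    traitDefs.add(RelCollationTraitDef.INSTANCE);

    FrameworkConfig config = Frameworks.newConfigBuilder()
        .parserConfig(SqlParser.configBuilder()
            .setCaseSensitive(false).build())
        .defaultSchema(schema)
        .traitDefs(traitDefs)
        .programs(Programs.ofRules(Programs.RULE_SET))
        .build();
    Planner planner = Frameworks.getPlanner(config);

    SqlNode parsed = planner.parse("SELECT * FROM MYTABLE WHERE ID < 10");
    SqlNode validated = planner.validate(parsed);
    RelNode rel = planner.rel(validated).project();

    // transform(0, ...) runs the first registered program, asking for an
    // enumerable plan.
    RelTraitSet desired = planner.getEmptyTraitSet()
        .replace(EnumerableConvention.INSTANCE);
    RelNode best = planner.transform(0, desired, rel);
    System.out.println(RelOptUtil.dumpPlan("-- Best Plan", best,
        SqlExplainFormat.TEXT, SqlExplainLevel.DIGEST_ATTRIBUTES));
  }
}
```

This is the pattern PlannerTest (mentioned later in the thread) uses, so it may be the easier starting point.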
>>
>> The main problem was that you were not setting the desired trait to
>> EnumerableConvention.
>> Note that instead of implementing all the interfaces yourself, you can use
>> the available builders and helper classes.
>> Also, to implement Table you should extend AbstractTable instead of
>> implementing the Table interface, and you can use Statistics.of instead of
>> implementing the Statistic interface if it is simple:
>> ---------------------------------------------------------------------
>>
>>   private static class TableImpl extends AbstractTable {
>>     public TableImpl() {}
>>     @Override
>>     public RelDataType getRowType(RelDataTypeFactory typeFactory) {
>>       Builder builder = new RelDataTypeFactory.Builder(typeFactory);
>>       return builder
>>           .add("id", typeFactory.createSqlType(SqlTypeName.INTEGER))
>>           .add("name", typeFactory.createSqlType(SqlTypeName.VARCHAR))
>>           .build();
>>     }
>>     @Override
>>     public Statistic getStatistic() {
>>        return Statistics.of(15D, ImmutableList.<ImmutableBitSet>of(),
>>           ImmutableList.of(RelCollations.of(0), RelCollations.of(1)));
>>     }
>>
>>   }
>> ---------------------------------------------------------------------
>>
>>
>>     On Wednesday, November 8, 2017, 12:15:34 BRST, Enrico Olivelli
>> <[email protected]> wrote:
>>
>>  Hi,
>> I am playing with the planner but I can't get it to work for a very simple
>> query.
>> The table is MYTABLE(id integer, name varchar); its definition is given in
>> the code snippet below.
>> The query is "SELECT * FROM MYTABLE".
>>
>> The error is:
>> org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node
>> [rel#7:Subset#0.NONE.[0, 1].any] could not be implemented; planner state:
>>
>> Root: rel#7:Subset#0.NONE.[0, 1].any
>> Original rel:
>> LogicalProject(subset=[rel#6:Subset#1.NONE.[0, 1].any], id=[$0], name=[$1]): rowcount = 15.0, cumulative cost = {15.0 rows, 30.0 cpu, 0.0 io}, id = 5
>>   EnumerableTableScan(subset=[rel#4:Subset#0.ENUMERABLE.[0, 1].any], table=[[default, MYTABLE]]): rowcount = 15.0, cumulative cost = {15.0 rows, 16.0 cpu, 0.0 io}, id = 2
>>
>> Sets:
>> Set#0, type: RecordType(INTEGER id, VARCHAR name)
>>     rel#4:Subset#0.ENUMERABLE.[0, 1].any, best=rel#2, importance=0.9
>>         rel#2:EnumerableTableScan.ENUMERABLE.[[0, 1]].any(table=[default, MYTABLE]), rowcount=15.0, cumulative cost={15.0 rows, 16.0 cpu, 0.0 io}
>>         rel#9:EnumerableProject.ENUMERABLE.[[0, 1]].any(input=rel#4:Subset#0.ENUMERABLE.[0, 1].any,id=$0,name=$1), rowcount=15.0, cumulative cost={30.0 rows, 46.0 cpu, 0.0 io}
>>     rel#7:Subset#0.NONE.[0, 1].any, best=null, importance=1.0
>>         rel#5:LogicalProject.NONE.[[0, 1]].any(input=rel#4:Subset#0.ENUMERABLE.[0, 1].any,id=$0,name=$1), rowcount=15.0, cumulative cost={inf}
>>         rel#8:AbstractConverter.NONE.[0, 1].any(input=rel#4:Subset#0.ENUMERABLE.[0, 1].any,convention=NONE,sort=[0, 1],dist=any), rowcount=15.0, cumulative cost={inf}
>>
>> Does anybody have a hint for me?
>> I am using the current master of Calcite (1.15-SNAPSHOT).
>>
>> Thank you
>>
>> Enrico
>>
>>
>> My code is:
>>   @Test
>>     public void test() throws Exception {
>>         Table table = new TableImpl();
>>         CalciteSchema schema =
>>                 CalciteSchema.createRootSchema(true, true, "default");
>>         schema.add("MYTABLE", table);
>>         SchemaPlus rootSchema = schema.plus();
>>         SqlRexConvertletTable convertletTable =
>>                 StandardConvertletTable.INSTANCE;
>>         SqlToRelConverter.Config config = SqlToRelConverter.Config.DEFAULT;
>>         FrameworkConfig frameworkConfig =
>>                 new FrameworkConfigImpl(config, rootSchema, convertletTable);
>>         Planner imp = Frameworks.getPlanner(frameworkConfig);
>>         SqlNode sqlNode = imp.parse("SELECT * FROM MYTABLE");
>>         sqlNode = imp.validate(sqlNode);
>>         RelRoot relRoot = imp.rel(sqlNode);
>>         RelNode project = relRoot.project();
>>         RelOptPlanner planner = project.getCluster().getPlanner();
>>         planner.setRoot(project);
>>         RelNode findBestExp = planner.findBestExp();
>>         System.out.println("best:" + findBestExp);
>>     }
>>
>>     private class FrameworkConfigImpl implements FrameworkConfig {
>>
>>         private final SqlToRelConverter.Config config;
>>         private final SchemaPlus rootSchema;
>>         private final SqlRexConvertletTable convertletTable;
>>
>>         public FrameworkConfigImpl(SqlToRelConverter.Config config,
>>                 SchemaPlus rootSchema, SqlRexConvertletTable convertletTable) {
>>             this.config = config;
>>             this.rootSchema = rootSchema;
>>             this.convertletTable = convertletTable;
>>         }
>>
>>         @Override
>>         public SqlParser.Config getParserConfig() {
>>             return SqlParser.Config.DEFAULT;
>>         }
>>
>>         @Override
>>         public SqlToRelConverter.Config getSqlToRelConverterConfig() {
>>             return config;
>>         }
>>
>>         @Override
>>         public SchemaPlus getDefaultSchema() {
>>             return rootSchema;
>>         }
>>
>>         @Override
>>         public RexExecutor getExecutor() {
>>             return new RexExecutorImpl(new DataContextImpl());
>>         }
>>
>>         @Override
>>         public ImmutableList<Program> getPrograms() {
>>             return ImmutableList.of(Programs.standard());
>>         }
>>
>>         @Override
>>         public SqlOperatorTable getOperatorTable() {
>>             return new SqlStdOperatorTable();
>>         }
>>
>>         @Override
>>         public RelOptCostFactory getCostFactory() {
>>             return null;
>>         }
>>
>>         @Override
>>         public ImmutableList<RelTraitDef> getTraitDefs() {
>>
>>             return ImmutableList.of(ConventionTraitDef.INSTANCE,
>>                     RelCollationTraitDef.INSTANCE,
>>                     RelDistributionTraitDef.INSTANCE
>>             );
>>         }
>>
>>         @Override
>>         public SqlRexConvertletTable getConvertletTable() {
>>             return convertletTable;
>>         }
>>
>>         @Override
>>         public Context getContext() {
>>             return new ContextImpl();
>>         }
>>
>>         @Override
>>         public RelDataTypeSystem getTypeSystem() {
>>             return RelDataTypeSystem.DEFAULT;
>>         }
>>
>>         class DataContextImpl implements DataContext {
>>
>>             public DataContextImpl() {
>>             }
>>
>>             @Override
>>             public SchemaPlus getRootSchema() {
>>                 return rootSchema;
>>             }
>>
>>             @Override
>>             public JavaTypeFactory getTypeFactory() {
>>                 throw new UnsupportedOperationException("Not supported yet.");
>>             }
>>
>>             @Override
>>             public QueryProvider getQueryProvider() {
>>                 throw new UnsupportedOperationException("Not supported yet.");
>>             }
>>
>>             @Override
>>             public Object get(String name) {
>>                 throw new UnsupportedOperationException("Not supported yet.");
>>             }
>>
>>         }
>>
>>         private class ContextImpl implements Context {
>>
>>             public ContextImpl() {
>>             }
>>
>>             @Override
>>             public <C> C unwrap(Class<C> aClass) {
>>                 return null;
>>             }
>>         }
>>     }
>>
>>     private static class TableImpl implements Table {
>>
>>         public TableImpl() {
>>         }
>>
>>         @Override
>>         public RelDataType getRowType(RelDataTypeFactory typeFactory) {
>>             return typeFactory
>>                     .builder()
>>                     .add("id", typeFactory.createSqlType(SqlTypeName.INTEGER))
>>                     .add("name", typeFactory.createSqlType(SqlTypeName.VARCHAR))
>>                     .build();
>>         }
>>
>>         @Override
>>         public Statistic getStatistic() {
>>             return new StatisticImpl();
>>         }
>>
>>         @Override
>>         public Schema.TableType getJdbcTableType() {
>>             throw new UnsupportedOperationException("Not supported yet.");
>>         }
>>
>>         @Override
>>         public boolean isRolledUp(String column) {
>>             return true;
>>         }
>>
>>         @Override
>>         public boolean rolledUpColumnValidInsideAgg(String column,
>>                 SqlCall call, SqlNode parent, CalciteConnectionConfig config) {
>>             return false;
>>         }
>>
>>         class StatisticImpl implements Statistic {
>>
>>             public StatisticImpl() {
>>             }
>>
>>             @Override
>>             public Double getRowCount() {
>>                 return 15d;
>>             }
>>
>>             @Override
>>             public boolean isKey(ImmutableBitSet columns) {
>>                 return false;
>>             }
>>
>>             @Override
>>             public List<RelReferentialConstraint> getReferentialConstraints() {
>>                 return Collections.emptyList();
>>             }
>>
>>             @Override
>>             public List<RelCollation> getCollations() {
>>                 RelCollation c = new RelCollationImpl(
>>                         ImmutableList.of(
>>                                 new RelFieldCollation(0, RelFieldCollation.Direction.ASCENDING),
>>                                 new RelFieldCollation(1, RelFieldCollation.Direction.ASCENDING))) {
>>                 };
>>                 return Arrays.asList(c);
>>             }
>>
>>             @Override
>>             public RelDistribution getDistribution() {
>>                 return RelDistributions.ANY;
>>             }
>>         }
>>     }
>>
>>
>>
>>
>> 2017-11-06 19:48 GMT+01:00 Julian Hyde <[email protected]>:
>>
>> > Yes, that is definitely possible. I am too busy to write a code snippet,
>> > but you should take a look at PlannerTest.
>> >
>> > > On Nov 6, 2017, at 3:05 AM, Stéphane Campinas
>> > > <[email protected]> wrote:
>> > >
>> > > Hi,
>> > >
>> > > I am trying to use the Volcano planner in order to optimise queries
>> > > based on statistics, but I am having some issues understanding how to
>> > > achieve this, even after looking at the Github repository for tests.
>> > > A first goal I would like to achieve would be to choose a join
>> > > implementation based on its cost.
>> > >
>> > > For example, a query tree can have several joins, and depending on the
>> > > position of the join in the tree, a certain implementation would be
>> > > more efficient than another.
>> > > Would that be possible? If so, could you share a code snippet?
>> > >
>> > > Thanks
>> > >
>> > > --
>> > > Campinas Stéphane
>> >
>> >
>
> -- Enrico Olivelli
