asnowfox commented on issue #2468:
URL: https://github.com/apache/iceberg/issues/2468#issuecomment-900375672


I also hit the same error.
I am running a Flink 1.12.2 cluster, so I recompiled Iceberg against Flink 1.12.2 and the build succeeded.
I then tested Iceberg through the Flink SQL Client.
The following commands work fine:
```sql
CREATE CATALOG flow_catalog WITH (
  'type'='iceberg',
  'catalog-type'='hadoop',
  'warehouse'='hdfs://namenode-206-10:8020/warehouse/flows',
  'property-version'='1'
);
USE CATALOG flow_catalog;
CREATE DATABASE iceberg_db;
USE iceberg_db;
CREATE TABLE test (
    id BIGINT COMMENT 'unique id',
    data STRING
);
INSERT INTO test VALUES (1, 'a');
SELECT * FROM test;
```
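
For context, the streaming/batch switch in the Flink 1.12 SQL Client is `SET execution.type=batch;`. A minimal Table API equivalent of the same session, sketched on the assumption that the recompiled iceberg-flink runtime jar is on the classpath (the class name `IcebergBatchRead` is made up for illustration):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class IcebergBatchRead {
    public static void main(String[] args) {
        // Blink planner in batch mode: the Table API counterpart of
        // `SET execution.type=batch;` in the Flink 1.12 SQL Client.
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance()
                        .useBlinkPlanner()
                        .inBatchMode()
                        .build());

        // Same catalog definition as in the SQL above.
        tEnv.executeSql(
                "CREATE CATALOG flow_catalog WITH ("
                        + " 'type'='iceberg',"
                        + " 'catalog-type'='hadoop',"
                        + " 'warehouse'='hdfs://namenode-206-10:8020/warehouse/flows',"
                        + " 'property-version'='1')");

        // Reading the table in batch mode is the step that fails below.
        tEnv.executeSql("SELECT * FROM flow_catalog.iceberg_db.test").print();
    }
}
```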
But when I switch **execution.type** to **batch**, I hit the same exception:
```
java.lang.ClassCastException: org.apache.iceberg.hadoop.HadoopCatalog cannot be cast to org.apache.iceberg.catalog.Catalog
    at org.apache.iceberg.CatalogUtil.loadCatalog(CatalogUtil.java:183)
    at org.apache.iceberg.flink.CatalogLoader$HadoopCatalogLoader.loadCatalog(CatalogLoader.java:79)
    at org.apache.iceberg.flink.TableLoader$CatalogTableLoader.open(TableLoader.java:108)
    at org.apache.iceberg.flink.source.FlinkSource$Builder.buildFormat(FlinkSource.java:178)
    at org.apache.iceberg.flink.source.FlinkSource$Builder.build(FlinkSource.java:204)
    at org.apache.iceberg.flink.IcebergTableSource.createDataStream(IcebergTableSource.java:110)
    at org.apache.iceberg.flink.IcebergTableSource.access$000(IcebergTableSource.java:49)
    at org.apache.iceberg.flink.IcebergTableSource$1.produceDataStream(IcebergTableSource.java:163)
    at org.apache.flink.table.planner.plan.nodes.common.CommonPhysicalTableSourceScan.createSourceTransformation(CommonPhysicalTableSourceScan.scala:88)
    at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlanInternal(BatchExecTableSourceScan.scala:94)
    at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlanInternal(BatchExecTableSourceScan.scala:44)
    at org.apache.flink.table.planner.plan.nodes.exec.ExecNode.translateToPlan(ExecNode.scala:59)
    at org.apache.flink.table.planner.plan.nodes.exec.ExecNode.translateToPlan$(ExecNode.scala:57)
    at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlan(BatchExecTableSourceScan.scala:44)
    at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLegacySink.translateToTransformation(BatchExecLegacySink.scala:129)
    at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLegacySink.translateToPlanInternal(BatchExecLegacySink.scala:95)
    at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLegacySink.translateToPlanInternal(BatchExecLegacySink.scala:48)
    at org.apache.flink.table.planner.plan.nodes.exec.ExecNode.translateToPlan(ExecNode.scala:59)
    at org.apache.flink.table.planner.plan.nodes.exec.ExecNode.translateToPlan$(ExecNode.scala:57)
    at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLegacySink.translateToPlan(BatchExecLegacySink.scala:48)
    at org.apache.flink.table.planner.delegation.BatchPlanner.$anonfun$translateToPlan$1(BatchPlanner.scala:86)
    at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
    at scala.collection.Iterator.foreach(Iterator.scala:937)
    at scala.collection.Iterator.foreach$(Iterator.scala:937)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
    at scala.collection.IterableLike.foreach(IterableLike.scala:70)
    at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike.map(TraversableLike.scala:233)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at org.apache.flink.table.planner.delegation.BatchPlanner.translateToPlan(BatchPlanner.scala:85)
    at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:162)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1329)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1321)
    at org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.getPipeline(StreamTableEnvironmentImpl.java:328)
    at org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$createPipeline$1(ExecutionContext.java:287)
    at org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:256)
    at org.apache.flink.table.client.gateway.local.ExecutionContext.createPipeline(ExecutionContext.java:282)
    at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:542)
    at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQuery(LocalExecutor.java:374)
    at org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:648)
    at org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:323)
    at java.util.Optional.ifPresent(Optional.java:159)
    at org.apache.flink.table.client.cli.CliClient.open(CliClient.java:214)
    at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:144)
    at org.apache.flink.table.client.SqlClient.start(SqlClient.java:115)
    at org.apache.flink.table.client.SqlClient.main(SqlClient.java:201)
```
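
A `ClassCastException` where a class cannot be cast to an interface it actually implements usually means the two classes were loaded by different classloaders, for example when the Iceberg runtime jar ends up on the classpath twice (once in the cluster's `lib/` and once in the user jar). A minimal diagnostic sketch for checking this, assuming the hypothetical class name `ClassLoaderCheck`:

```java
public class ClassLoaderCheck {
    public static void main(String[] args) {
        // If these two print different classloaders, the Iceberg classes are
        // on the classpath twice, and the cast in CatalogUtil.loadCatalog
        // fails exactly as in the trace above.
        System.out.println("HadoopCatalog loaded by: "
                + org.apache.iceberg.hadoop.HadoopCatalog.class.getClassLoader());
        System.out.println("Catalog interface loaded by: "
                + org.apache.iceberg.catalog.Catalog.class.getClassLoader());
    }
}
```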
   
   
   
   

