[ https://issues.apache.org/jira/browse/FLINK-22907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17359566#comment-17359566 ]

Ryan Darling commented on FLINK-22907:
--------------------------------------

Jark, thank you for the quick reply. Here is the full error:
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassCastException: org.codehaus.janino.CompilerFactory cannot be cast to org.codehaus.commons.compiler.ICompilerFactory
        at org.codehaus.commons.compiler.CompilerFactoryFactory.getCompilerFactory(CompilerFactoryFactory.java:129)
        at org.codehaus.commons.compiler.CompilerFactoryFactory.getDefaultCompilerFactory(CompilerFactoryFactory.java:79)
        at org.apache.calcite.rel.metadata.JaninoRelMetadataProvider.compile(JaninoRelMetadataProvider.java:426)
        at org.apache.calcite.rel.metadata.JaninoRelMetadataProvider.load3(JaninoRelMetadataProvider.java:374)
        at org.apache.calcite.rel.metadata.JaninoRelMetadataProvider.lambda$static$0(JaninoRelMetadataProvider.java:109)
        at org.apache.flink.calcite.shaded.com.google.common.cache.CacheLoader$FunctionToCacheLoader.load(CacheLoader.java:165)
        at org.apache.flink.calcite.shaded.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3529)
        at org.apache.flink.calcite.shaded.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2278)
        at org.apache.flink.calcite.shaded.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2155)
        at org.apache.flink.calcite.shaded.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2045)
        at org.apache.flink.calcite.shaded.com.google.common.cache.LocalCache.get(LocalCache.java:3951)
        at org.apache.flink.calcite.shaded.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974)
        at org.apache.flink.calcite.shaded.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4958)
        at org.apache.calcite.rel.metadata.JaninoRelMetadataProvider.create(JaninoRelMetadataProvider.java:469)
        at org.apache.calcite.rel.metadata.JaninoRelMetadataProvider.revise(JaninoRelMetadataProvider.java:481)
        at org.apache.calcite.rel.metadata.RelMetadataQueryBase.revise(RelMetadataQueryBase.java:95)
        at org.apache.calcite.rel.metadata.RelMetadataQuery.getPulledUpPredicates(RelMetadataQuery.java:784)
        at org.apache.calcite.rel.rules.ReduceExpressionsRule$ProjectReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:303)
        at org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
        at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
        at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
        at org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
        at org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
        at org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
        at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
        at org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
        at org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
        at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
        at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
        at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
        at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
        at scala.collection.Iterator$class.foreach(Iterator.scala:891)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
        at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
        at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
        at org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeTree(StreamCommonSubGraphBasedOptimizer.scala:163)
        at org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.doOptimize(StreamCommonSubGraphBasedOptimizer.scala:79)
        at org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
        at org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:284)
        at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:168)
        at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1516)
        at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeQueryOperation(TableEnvironmentImpl.java:789)
        at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1223)
        at org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeOperation$3(LocalExecutor.java:213)
        at org.apache.flink.table.client.gateway.context.ExecutionContext.wrapClassLoader(ExecutionContext.java:90)
        at org.apache.flink.table.client.gateway.local.LocalExecutor.executeOperation(LocalExecutor.java:213)
        at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQuery(LocalExecutor.java:235)
        at org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:479)
        at org.apache.flink.table.client.cli.CliClient.callOperation(CliClient.java:412)
        at org.apache.flink.table.client.cli.CliClient.lambda$executeStatement$0(CliClient.java:327)
        at java.util.Optional.ifPresent(Optional.java:159)
        at org.apache.flink.table.client.cli.CliClient.executeStatement(CliClient.java:327)
        at org.apache.flink.table.client.cli.CliClient.executeInteractive(CliClient.java:297)
        at org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:221)
        at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:151)
        at org.apache.flink.table.client.SqlClient.start(SqlClient.java:95)
        at org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:187)
        at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161)
{code}
 

Also here is the lib directory:
{code:bash}
pyflink/lib$ ls -l
total 188841
-rw-r--r-- 1 jovyan users     96144 May 14 19:33 flink-csv-1.13.0.jar
-rw-r--r-- 1 jovyan users 115641566 May 14 19:33 flink-dist_2.11-1.13.0.jar
-rw-r--r-- 1 jovyan users    151996 May 14 19:33 flink-json-1.13.0.jar
-rw-r--r-- 1 jovyan users  36468695 May 14 19:33 flink-table_2.11-1.13.0.jar
-rw-r--r-- 1 jovyan users  41013390 May 14 19:33 flink-table-blink_2.11-1.13.0.jar
{code}
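This kind of ClassCastException usually means two different copies of the Janino classes are visible to different classloaders, e.g. because more than one jar on the classpath bundles them. A quick way to check which jars contain the two classes from the stack trace is sketched below (an illustration, not a fix; it assumes a local Python 3, and the `pyflink/lib` path is taken from the listing above, so adjust it to wherever the jars live, including any extra jars passed via `-j`):

```python
import zipfile
from pathlib import Path

# Class files to look for, taken from the ClassCastException above.
CLASSES = [
    "org/codehaus/janino/CompilerFactory.class",
    "org/codehaus/commons/compiler/ICompilerFactory.class",
]

def find_bundled_classes(lib_dir):
    """Return {class file -> [jar names that bundle it]}.

    More than one jar per class suggests duplicate copies of Janino
    on the classpath, which can produce exactly this cast failure.
    """
    hits = {cls: [] for cls in CLASSES}
    for jar in sorted(Path(lib_dir).glob("*.jar")):
        with zipfile.ZipFile(jar) as zf:
            names = set(zf.namelist())
        for cls in CLASSES:
            if cls in names:
                hits[cls].append(jar.name)
    return hits

if __name__ == "__main__":
    for cls, jars in find_bundled_classes("pyflink/lib").items():
        print(cls, "->", jars or "not found")
```

If the same class shows up in more than one jar (for instance in both planner jars, or in a jar added with `-j`), dropping the redundant jar is worth trying before anything else.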

> SQL Client queries fail on select statement
> --------------------------------------------
>
>                 Key: FLINK-22907
>                 URL: https://issues.apache.org/jira/browse/FLINK-22907
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Client
>    Affects Versions: 1.13.0
>         Environment: python 3.7.6
> JupyterLab
> apache-flink==1.13.0
>            Reporter: Ryan Darling
>            Priority: Major
>         Attachments: flink_sql_issue1.JPG
>
>
> I have configured a Jupyter notebook to test Flink jobs with the SQL client. 
> All of my source/sink table creation statements succeed, but we are unable 
> to query the created tables.
> In this scenario we are attempting to pull data from a Kafka topic into a 
> source table and, if successful, insert it into a sink table and on to 
> another Kafka topic. 
> We start sql_client.sh, passing the needed jar file locations 
> (flink-sql-connector-kafka_2.11-1.13.0.jar, 
> flink-table-planner_2.12-1.13.0.jar, flink-table-common-1.13.0.jar, 
> flink-sql-avro-confluent-registry-1.13.0.jar, 
> flink-table-planner-blink_2.12-1.13.0.jar).
> Next we create the source table, pointing to a Kafka topic that we know 
> contains Avro data and has registered schemas in the schema registry. 
> CREATE TABLE avro_sources ( 
>  prop_id INT,
>  check_in_dt STRING,
>  check_out_dt STRING,
>  los INT,
>  guests INT,
>  rate_amt INT
>  ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'avro_rate',
>  'properties.bootstrap.servers' = '<removed>',
>  'key.format' = 'avro-confluent',
>  'key.avro-confluent.schema-registry.url' = '<removed>',
>  'key.fields' = 'prop_id',
>  'value.format' = 'avro-confluent',
>  'value.avro-confluent.schema-registry.url' = '<removed>',
>  'value.fields-include' = 'ALL',
>  'key.avro-confluent.schema-registry.subject' = 'avro_rate',
>  'value.avro-confluent.schema-registry.subject' = 'avro_rate'
>  )
>  
> At this point I want to see the data that has been pulled into the source 
> table, but I get the following error and am struggling to find a solution. I 
> feel this could be a bug. 
> Flink SQL> select * from avro_sources;
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassCastException: org.codehaus.janino.CompilerFactory cannot be cast to org.codehaus.commons.compiler.ICompilerFactory
>  
> Any guidance on how I can resolve the bug or the problem would be 
> appreciated. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
