What is the exception stack trace for the second problem?

kcz <573693...@qq.com> wrote on Fri, Jul 17, 2020 at 2:17 PM:

> The first bug only needed
> classloader.resolve-order: parent-first
> The second bug, with parquet, is still unresolved.
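>
> (For reference, a minimal sketch of where that setting lives on a
> standalone cluster; the file path assumes a default install:
>
> # conf/flink-conf.yaml: resolve classes against Flink's lib/ before the user jar
> classloader.resolve-order: parent-first
>
> The JobManager and TaskManagers need a restart to pick it up.)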
>
>
> ------------------ Original Message ------------------
> From: "kcz" <573693...@qq.com>;
> Sent: Friday, July 17, 2020 1:32 PM
> To: "user-zh" <user-zh@flink.apache.org>;
>
> Subject: flink-1.11 DDL problem writing to HDFS: Cannot instantiate user function
>
>
>
> standalone
> The jars under lib are as follows:
> flink-connector-hive_2.11-1.11.0.jar
> flink-json-1.11.0.jar
> flink-sql-connector-kafka_2.12-1.11.0.jar
> log4j-api-2.12.1.jar
> flink-csv-1.11.0.jar
> flink-parquet_2.11-1.11.0.jar
> flink-table_2.11-1.11.0.jar
> log4j-core-2.12.1.jar
> flink-dist_2.11-1.11.0.jar
> flink-shaded-hadoop-2-uber-2.7.2.11-9.0.jar
> flink-table-blink_2.11-1.11.0.jar
> log4j-slf4j-impl-2.12.1.jar
> flink-hadoop-compatibility_2.11-1.11.0.jar
> flink-shaded-zookeeper-3.4.14.jar
> log4j-1.2-api-2.12.1.jar
>
> The code is as follows (it runs without errors in IDEA):
> StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
> StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
> env.setParallelism(1);
> env.enableCheckpointing(1000, CheckpointingMode.EXACTLY_ONCE);
> // allow only one checkpoint at a time
> env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
> env.setStateBackend(new FsStateBackend(path));
>
> tableEnv.executeSql("CREATE TABLE source_table (\n" +
>         "\thost STRING,\n" +
>         "\turl STRING,\n" +
>         "\tpublic_date STRING\n" +
>         ") WITH (\n" +
>         "\t'connector.type' = 'kafka',\n" +
>         "\t'connector.version' = 'universal',\n" +
>         "\t'connector.startup-mode' = 'latest-offset',\n" +
>         "\t'connector.topic' = 'test_flink_1.11',\n" +
>         "\t'connector.properties.group.id' = 'domain_testGroup',\n" +
>         "\t'connector.properties.zookeeper.connect' = '127.0.0.1:2181',\n"
> +
>         "\t'connector.properties.bootstrap.servers' = '127.0.0.1:9092',\n"
> +
>         "\t'update-mode' = 'append',\n" +
>         "\t'format.type' = 'json',\n" +
>         "\t'format.derive-schema' = 'true'\n" +
>         ")");
>
> tableEnv.executeSql("CREATE TABLE fs_table (\n" +
>         "  host STRING,\n" +
>         "  url STRING,\n" +
>         "  public_date STRING\n" +
>         ") PARTITIONED BY (public_date) WITH (\n" +
>         "  'connector'='filesystem',\n" +
>         "  'path'='path',\n" +
>         "  'format'='json',\n" +
>         "  'sink.partition-commit.delay'='0s',\n" +
>         "  'sink.partition-commit.policy.kind'='success-file'\n" +
>         ")");
>
> tableEnv.executeSql("INSERT INTO  fs_table SELECT host, url,
> DATE_FORMAT(public_date, 'yyyy-MM-dd') FROM source_table");
> TableResult result = tableEnv.executeSql("SELECT * FROM fs_table ");
> result.print();
> The error is as follows:
> org.apache.flink.streaming.runtime.tasks.StreamTaskException: Cannot instantiate user function.
>     at org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:291)
>     at org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:126)
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:453)
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:522)
>     at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:721)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:546)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ClassCastException: cannot assign instance of
> org.apache.commons.collections.map.LinkedMap to field
>
>
> The second bug:
> When sinking to HDFS with parquet, the parquet jar is under lib and the
> pom marks it as provided, but this error still shows up. I also tried it
> without provided in the pom, and it still doesn't work.
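>
> For reference, a minimal sketch of the parquet variant of the sink table
> that this refers to, assuming the same schema ('path' stays a
> placeholder); with a bulk format like parquet, files are rolled and the
> partition committed on each checkpoint:
>
> tableEnv.executeSql("CREATE TABLE fs_table (\n" +
>         "  host STRING,\n" +
>         "  url STRING,\n" +
>         "  public_date STRING\n" +
>         ") PARTITIONED BY (public_date) WITH (\n" +
>         "  'connector'='filesystem',\n" +
>         "  'path'='path',\n" +
>         "  'format'='parquet',\n" +
>         "  'sink.partition-commit.delay'='0s',\n" +
>         "  'sink.partition-commit.policy.kind'='success-file'\n" +
>         ")");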
