Re: flink 1.12.1 sinking data to ES7 hits an "Invalid lambda deserialization" problem

2021-04-19 Post by william
The error log is below. My Flink SQL job already uses flink-sql-connector-elasticsearch7, and the code uses flink-connector-elasticsearch7; when both run in the same Flink instance, this error is thrown:

Caused by: org.apache.flink.streaming.runtime.tasks.StreamTaskException:
Cannot instantiate user function.
at
org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:339)
at
org.apache.flink.streaming.runtime.tasks.OperatorChain.createOperator(OperatorChain.java:636)
at
org.apache.flink.streaming.runtime.tasks.OperatorChain.createOperatorChain(OperatorChain.java:609)
at
org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:549)
at
org.apache.flink.streaming.runtime.tasks.OperatorChain.(OperatorChain.java:170)
at
org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:509)
at
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:565)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:755)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:570)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: unexpected exception type
at
java.io.ObjectStreamClass.throwMiscException(ObjectStreamClass.java:1682)
at 
java.io.ObjectStreamClass.invokeReadResolve(ObjectStreamClass.java:1254)
at
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2076)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
at 
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2285)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2209)
at
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
at 
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2285)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2209)
at
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
at 
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2285)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2209)
at
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
at 
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2285)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2209)
at
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
at
org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:615)
at
org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:600)
at
org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:587)
at
org.apache.flink.util.InstantiationUtil.readObjectFromConfig(InstantiationUtil.java:541)
at
org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:323)
... 9 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
java.lang.invoke.SerializedLambda.readResolve(SerializedLambda.java:230)
at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
java.io.ObjectStreamClass.invokeReadResolve(ObjectStreamClass.java:1248)
... 33 more
Caused by: java.lang.IllegalArgumentException: Invalid lambda
deserialization
at
org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSink$Builder.$deserializeLambda$(ElasticsearchSink.java:86)
... 42 more
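
A likely cause: flink-sql-connector-elasticsearch7 (the SQL uber jar) and flink-connector-elasticsearch7 both contain the ElasticsearchSink classes, so when the serialized lambda inside ElasticsearchSink$Builder is deserialized on the TaskManager it resolves against a different copy of the class and fails. The usual remedy is to keep only one of the two connector jars on the job's classpath. For reference, a minimal DataStream-side sketch, assuming only flink-connector-elasticsearch7 is shipped; the host, index name, and field below are made-up placeholders:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.flink.api.common.functions.RuntimeContext;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSink;
import org.apache.http.HttpHost;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.Requests;

public class Es7SinkSketch {

    public static void attachSink(DataStream<String> stream) {
        List<HttpHost> hosts = new ArrayList<>();
        hosts.add(new HttpHost("localhost", 9200, "http"));   // placeholder host

        // Anonymous class instead of a lambda, so nothing depends on resolving a SerializedLambda.
        ElasticsearchSink.Builder<String> builder = new ElasticsearchSink.Builder<>(
                hosts,
                new ElasticsearchSinkFunction<String>() {
                    @Override
                    public void process(String element, RuntimeContext ctx, RequestIndexer indexer) {
                        Map<String, Object> json = new HashMap<>();
                        json.put("data", element);
                        IndexRequest request = Requests.indexRequest()
                                .index("my-index")             // placeholder index name
                                .source(json);
                        indexer.add(request);
                    }
                });
        builder.setBulkFlushMaxActions(1);
        stream.addSink(builder.build());
    }
}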





Re: With a retract stream as input, a group by time window hits GroupWindowAggregate doesn't support consuming update and delete changes

2021-04-19 Post by HunterXHunter
Give toAppendDataStream a try.





Unsubscribe

2021-04-19 Post by 纪军伟
Unsubscribe

Re: With a retract stream as input, a group by time window hits GroupWindowAggregate doesn't support consuming update and delete changes

2021-04-19 Post by casel.chen
I am running into the same problem.


GroupWindowAggregate doesn't support consuming update and delete changes which 
is produced by node TableSourceScan(table=[[default_catalog, default_database, 
mcsp_pay_log, ...

Does time-window aggregation not support the case where the upstream is a CDC table in canal-json format? My business table is actually just a log table; how can I convert the retract table into an append table with Flink SQL?

On 2021-02-04 09:25:10, "HongHuangNeu" wrote:
>With a retract stream as input, a group by time window hits GroupWindowAggregate doesn't support consuming
>update and delete changes. Is there any alternative? The input comes from streaming deduplication, i.e.
>
>SELECT [column_list]
>FROM (
>   SELECT [column_list],
> ROW_NUMBER() OVER ([PARTITION BY col1[, col2...]]
>   ORDER BY time_attr [asc|desc]) AS rownum
>   FROM table_name)
>WHERE rownum = 1
>
>a statement like this one
>
>
>
>--
>Sent from: http://apache-flink.147419.n8.nabble.com/


Re:Re: With a retract stream as input, a group by time window hits GroupWindowAggregate doesn't support consuming update and delete changes

2021-04-19 Post by casel.chen
If flink window doesn't support retract streams, is there any workaround? A common scenario is: business table CDC -> kafka -> time-window aggregation in Flink.
If the business table is an insert-only log table, how do I convert the retract table into a plain append table?
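
For what it is worth, one common pattern (a rough sketch, not taken from this thread) is to drop down to the DataStream API, keep only the accumulate messages of the retract stream, and register the result as an append-only table again. tEnv and retractTable below are assumed names, whether silently dropping the retraction messages is acceptable depends on your data, and a time attribute has to be re-declared before a window aggregate can be applied downstream:

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.typeutils.TupleTypeInfo;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class RetractToAppend {

    @SuppressWarnings("unchecked")
    public static Table toAppendOnly(StreamTableEnvironment tEnv, Table retractTable) {
        // Retract stream: Tuple2<Boolean, Row>, where true = accumulate, false = retract.
        DataStream<Tuple2<Boolean, Row>> retractStream =
                tEnv.toRetractStream(retractTable, Row.class);

        // Recover the Row type information from the tuple type so the mapped stream keeps it.
        TypeInformation<Row> rowType =
                ((TupleTypeInfo<Tuple2<Boolean, Row>>) retractStream.getType()).getTypeAt(1);

        DataStream<Row> appendOnly = retractStream
                .filter(t -> t.f0)        // keep only accumulate messages
                .map(t -> t.f1)
                .returns(rowType);

        // Register as an append-only table; a time-window aggregate can consume this,
        // provided a new time attribute is declared downstream.
        return tEnv.fromDataStream(appendOnly);
    }
}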




GroupWindowAggregate doesn't support consuming update and delete changes which 
is produced by node TableSourceScan(table=[[default_catalog, default_database, 
mcsp_pay_log, ...

Does time-window aggregation not support the case where the upstream is a CDC table in canal-json format? My business table is actually just a log table; how can I convert the retract table into an append table with Flink SQL?

On 2021-02-04 09:26:30, "yang nick" wrote:
>flink window doesn't support update stream.
>
>HongHuangNeu wrote on Thu, Feb 4, 2021 at 9:24 AM:
>
>> With a retract stream as input, a group by time window hits GroupWindowAggregate doesn't support consuming
>> update and delete changes. Is there any alternative? The input comes from streaming deduplication, i.e.
>>
>> SELECT [column_list]
>> FROM (
>>SELECT [column_list],
>>  ROW_NUMBER() OVER ([PARTITION BY col1[, col2...]]
>>ORDER BY time_attr [asc|desc]) AS rownum
>>FROM table_name)
>> WHERE rownum = 1
>>
>> a statement like this one
>>
>>
>>
>> --
>> Sent from: http://apache-flink.147419.n8.nabble.com/
>>


Flink 1.11: the -C option does not upload the UDF jar

2021-04-19 Post by todd
Command executed: flink run \
-m yarn-cluster \
-C file:////flink-demo-1.0.jar \
x

The jobgraph builds successfully on the client side, but on YARN the job fails because the UDF class cannot be found. Looking at the classpath, that JAR was not uploaded.





Re: flink cdc reading Maxwell-format binlog: how to get the table name and other metadata

2021-04-19 Post by chen310
OK, thanks.





Re: pyflink kafka connector error

2021-04-19 Post by qianhuan
Thanks a lot, it's solved; the SQL was written wrong.





Re: pyflink kafka connector error

2021-04-19 Post by Dian Fu
Delete flink-connector-kafka_2.11-1.12.2.jar.

> On Apr 19, 2021, at 6:08 PM, qianhuan <819687...@qq.com> wrote:
> 
> Thanks for the reply.
> I imported flink-sql-connector-kafka_2.11-1.12.2.jar and flink-connector-kafka_2.11-1.12.2.jar and put them under site-packages/pyflink/lib, but I still get the same error.
> The kafka table test_source_table_1 seems to have been created successfully; is the problem with the query?
> 
> 
> 
> --
> Sent from: http://apache-flink.147419.n8.nabble.com/



Re:Re:Re: Using a mysql jdbc source in a streaming application, the Execution is in FINISHED state and no checkpoint can be generated

2021-04-19 Post by anonnius
Let me reformat this, sorry.
hi: I tried again today; the problem on my side was caused by the join syntax I used.
This is the syntax that should be used:
-- temporal join the JDBC table as a dimension table
SELECT * FROM myTopic
LEFT JOIN MyUserTable FOR SYSTEM_TIME AS OF myTopic.proctime
ON myTopic.key = MyUserTable.id;


rather than
SELECT * FROM myTopic a
LEFT JOIN MyUserTable b
ON a.id = b.id
The documentation link is here:
https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#lookup-cache

Hope this helps.
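
For reference, a minimal sketch of the two declarations behind such a temporal join: the probe side (the Kafka table) must expose a processing-time attribute that FOR SYSTEM_TIME AS OF can reference, and the build side is the JDBC lookup table. All table names, columns, and connector options below are made-up placeholders, not taken from this thread:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class LookupJoinSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // Probe side: the Kafka table declares a processing-time attribute
        // so that FOR SYSTEM_TIME AS OF can reference it.
        tEnv.executeSql(
            "CREATE TABLE myTopic (" +
            "  `key` BIGINT," +
            "  `payload` STRING," +
            "  proctime AS PROCTIME()" +
            ") WITH (" +
            "  'connector' = 'kafka'," +
            "  'topic' = 'my_topic'," +
            "  'properties.bootstrap.servers' = 'localhost:9092'," +
            "  'properties.group.id' = 'demo'," +
            "  'scan.startup.mode' = 'earliest-offset'," +
            "  'format' = 'json'" +
            ")");

        // Build side: the JDBC table used as a lookup (dimension) table.
        tEnv.executeSql(
            "CREATE TABLE MyUserTable (" +
            "  id BIGINT," +
            "  name STRING," +
            "  PRIMARY KEY (id) NOT ENFORCED" +
            ") WITH (" +
            "  'connector' = 'jdbc'," +
            "  'url' = 'jdbc:mysql://localhost:3306/mydb'," +
            "  'table-name' = 'users'" +
            ")");

        // Lookup join on processing time: each Kafka record probes the JDBC table.
        tEnv.executeSql(
            "SELECT * FROM myTopic " +
            "LEFT JOIN MyUserTable FOR SYSTEM_TIME AS OF myTopic.proctime " +
            "ON myTopic.`key` = MyUserTable.id").print();
    }
}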

On 2021-04-19 18:54:38, "anonnius" wrote:
>hi: I tried again today; the problem on my side was caused by how I wrote the join.
>
>This is the syntax that should be used:
>-- temporal join the JDBC table as a dimension table
>SELECT * FROM myTopic LEFT JOIN MyUserTable FOR SYSTEM_TIME AS OF myTopic.proctime ON myTopic.key = MyUserTable.id;
>rather than
>SELECT * FROM myTopic a LEFT JOIN MyUserTable b ON a.id = b.id
>The documentation link is here:
>https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#lookup-cache
>
>Hope this helps.
>
>On 2021-04-14 16:47:04, "anonnius" wrote:
>>+1, I'm hitting this too at the moment.
>>On 2021-01-21 17:52:06, "刘海" wrote:
>>>HI!
>>>I ran into a problem while testing:
>>>In a streaming application I use a mysql jdbc source as a dimension table, and enabled the Lookup Cache to improve processing efficiency. Here is the registered table:
>>>bsTableEnv.executeSql("CREATE TABLE tm_dealers (dealer_code STRING,is_valid 
>>>DECIMAL(10,0),proctime AS PROCTIME(),PRIMARY KEY (dealer_code) NOT 
>>>ENFORCED\n" +
>>>") WITH (" +
>>>"'connector' = 'jdbc'," +
>>>"'url' = 'jdbc:mysql://10.0.15.83:3306/flink-test?useSSL=false'," +
>>>"'table-name' = 'tm_dealers'," +
>>>"'driver' = 'com.mysql.cj.jdbc.Driver'," +
>>>"'username' = 'root'," +
>>>"'password' = 'Cdh2020:1'," +
>>>"'lookup.cache.max-rows' = '500',"+
>>>"'lookup.cache.ttl' = '1800s',"+
>>>"'sink.buffer-flush.interval' = '60s'"+
>>>")");
>>>
>>>
>>>I found that with this setup the checkpoint configuration no longer takes effect and no checkpoint can be triggered; the log reports the following error:
>>>job bad9f419433f78d24e703e659b169917 is not in state RUNNING but FINISHED
>>>instead. Aborting checkpoint.
>>>
>>>
>>>Looking at the Web UI, I found that the Execution is in FINISHED state, and a FINISHED execution cannot be checkpointed. Is there any other way around this?
>>>
>>>
>>>Any guidance would be greatly appreciated, thanks!
>>>刘海
>>>liuha...@163.com
>>>Signature customized by NetEase Mail Master


Re: pyflink kafka connector error

2021-04-19 Post by qianhuan
Thanks for the reply.
I imported flink-sql-connector-kafka_2.11-1.12.2.jar and flink-connector-kafka_2.11-1.12.2.jar and put them under site-packages/pyflink/lib, but I still get the same error.
The kafka table test_source_table_1 seems to have been created successfully; is the problem with the query?





Re:Re: Using a mysql jdbc source in a streaming application, the Execution is in FINISHED state and no checkpoint can be generated

2021-04-19 Post by anonnius
hi: I tried again today; the problem on my side was caused by how I wrote the join.

This is the syntax that should be used:
-- temporal join the JDBC table as a dimension table
SELECT * FROM myTopic LEFT JOIN MyUserTable FOR SYSTEM_TIME AS OF myTopic.proctime ON myTopic.key = MyUserTable.id;
rather than
SELECT * FROM myTopic a LEFT JOIN MyUserTable b ON a.id = b.id
The documentation link is here:
https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#lookup-cache

Hope this helps.

On 2021-04-14 16:47:04, "anonnius" wrote:
>+1, I'm hitting this too at the moment.
>On 2021-01-21 17:52:06, "刘海" wrote:
>>HI!
>>I ran into a problem while testing:
>>In a streaming application I use a mysql jdbc source as a dimension table, and enabled the Lookup Cache to improve processing efficiency. Here is the registered table:
>>bsTableEnv.executeSql("CREATE TABLE tm_dealers (dealer_code STRING,is_valid 
>>DECIMAL(10,0),proctime AS PROCTIME(),PRIMARY KEY (dealer_code) NOT 
>>ENFORCED\n" +
>>") WITH (" +
>>"'connector' = 'jdbc'," +
>>"'url' = 'jdbc:mysql://10.0.15.83:3306/flink-test?useSSL=false'," +
>>"'table-name' = 'tm_dealers'," +
>>"'driver' = 'com.mysql.cj.jdbc.Driver'," +
>>"'username' = 'root'," +
>>"'password' = 'Cdh2020:1'," +
>>"'lookup.cache.max-rows' = '500',"+
>>"'lookup.cache.ttl' = '1800s',"+
>>"'sink.buffer-flush.interval' = '60s'"+
>>")");
>>
>>
>>I found that with this setup the checkpoint configuration no longer takes effect and no checkpoint can be triggered; the log reports the following error:
>>job bad9f419433f78d24e703e659b169917 is not in state RUNNING but FINISHED
>>instead. Aborting checkpoint.
>>
>>
>>Looking at the Web UI, I found that the Execution is in FINISHED state, and a FINISHED execution cannot be checkpointed. Is there any other way around this?
>>
>>
>>Any guidance would be greatly appreciated, thanks!
>>刘海
>>liuha...@163.com
>>Signature customized by NetEase Mail Master


GroupWindowAggregate doesn't support consuming update and delete changes

2021-04-19 Post by casel.chen
I ran into three problems with flink sql 1.12.1:


1. GroupWindowAggregate doesn't support consuming update and delete changes 
which is produced by node TableSourceScan(table=[[default_catalog, 
default_database, mcsp_pay_log, ...

Does time-window aggregation not support the case where the upstream is a CDC table in canal-json format? My business table is actually just a log table; how can I convert the retract table into an append table with Flink SQL?

2. When writing to hbase with flink sql, how do I set the TTL expiration time?




3. Writing to influxdb with flink sql (https://www.alibabacloud.com/help/zh/faq-detail/155018.htm):
with tag1, tag1value1, tag1value2, tag2, tag2value1, tag2value2, can't it be written like this?

create table stream_test_influxdb(
`metric` varchar,
`timestamp` BIGINT,
`tag1` varchar,
`tag1value1` Double,
`tag1value2` Double,
`tag2` varchar,
`tag2value1` Double,
`tag2value2` Double
) with ( ... )

Re: pyflink kafka connector error

2021-04-19 Post by Dian Fu
Hi,

You need to use the fat jar [1]; see the documentation of the Kafka Table & SQL connector [2].

[1] 
https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-kafka_2.11/1.12.2/flink-sql-connector-kafka_2.11-1.12.2.jar
 

[2] 
https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/connectors/kafka.html
 


Regards,
Dian

> On Apr 19, 2021, at 5:34 PM, qianhuan <819687...@qq.com> wrote:
> 
> It looks like a really simple problem, but it keeps failing with errors.
> Code:
>   source_ddl = """
>CREATE TABLE test_source_table_1(
>a VARCHAR,
>b INT
>) WITH (
>  'connector' = 'kafka',
>  'topic' = 'test1',
>  'properties.bootstrap.servers' = 'localhost:9092',
>  'scan.startup.mode' = 'latest-offset',
>  'format' = 'json'
>)
>"""
> pyflink version:
> apache-flink   1.12.2
> 
> Imported jar: flink-connector-kafka_2.11-1.12.2.jar
> 
> Error from running the python code:
> py4j.protocol.Py4JJavaError: An error occurred while calling o4.sqlQuery.
> : java.lang.NoClassDefFoundError:
> org/apache/kafka/common/serialization/ByteArrayDeserializer
>   at
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer.setDeserializer(FlinkKafkaConsumer.java:322)
>   at
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer.(FlinkKafkaConsumer.java:223)
>   at
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer.(FlinkKafkaConsumer.java:154)
>   at
> org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource.createKafkaConsumer(KafkaDynamicSource.java:383)
>   at
> org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource.getScanRuntimeProvider(KafkaDynamicSource.java:205)
>   at
> org.apache.flink.table.planner.sources.DynamicSourceUtils.validateScanSource(DynamicSourceUtils.java:254)
>   at
> org.apache.flink.table.planner.sources.DynamicSourceUtils.prepareDynamicSource(DynamicSourceUtils.java:71)
>   at
> org.apache.flink.table.planner.plan.schema.CatalogSourceTable.toRel(CatalogSourceTable.java:101)
>   at
> org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:3585)
>   at
> org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2507)
>   at
> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2144)
>   at
> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2093)
>   at
> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2050)
>   at
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:663)
>   at
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:644)
>   at
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3438)
>   at
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:570)
>   at
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:165)
>   at
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:157)
>   at
> org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:902)
>   at
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:871)
>   at
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:250)
>   at
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:77)
>   at
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:640)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>   at
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at
> org.apache.flink.api.python.shaded.py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
>   at
> org.apache.flink.api.python.shaded.py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
>   at 
> org.apache.flink.api.python.shaded.py4j.Gateway.invoke(Gateway.java:282)
>   at
> org.apache.flink.api.python.shaded.py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
>   at
> org.apache.flink.api.python.shaded.py4j.commands.CallCommand.execute(CallCommand.java:79)
>   at
> 

pyflink kafka connector error

2021-04-19 Post by qianhuan
It looks like a really simple problem, but it keeps failing with errors.
Code:
   source_ddl = """
CREATE TABLE test_source_table_1(
a VARCHAR,
b INT
) WITH (
  'connector' = 'kafka',
  'topic' = 'test1',
  'properties.bootstrap.servers' = 'localhost:9092',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json'
)
"""
pyflink version:
apache-flink   1.12.2

Imported jar: flink-connector-kafka_2.11-1.12.2.jar

Error from running the python code:
py4j.protocol.Py4JJavaError: An error occurred while calling o4.sqlQuery.
: java.lang.NoClassDefFoundError:
org/apache/kafka/common/serialization/ByteArrayDeserializer
at
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer.setDeserializer(FlinkKafkaConsumer.java:322)
at
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer.(FlinkKafkaConsumer.java:223)
at
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer.(FlinkKafkaConsumer.java:154)
at
org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource.createKafkaConsumer(KafkaDynamicSource.java:383)
at
org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource.getScanRuntimeProvider(KafkaDynamicSource.java:205)
at
org.apache.flink.table.planner.sources.DynamicSourceUtils.validateScanSource(DynamicSourceUtils.java:254)
at
org.apache.flink.table.planner.sources.DynamicSourceUtils.prepareDynamicSource(DynamicSourceUtils.java:71)
at
org.apache.flink.table.planner.plan.schema.CatalogSourceTable.toRel(CatalogSourceTable.java:101)
at
org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:3585)
at
org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2507)
at
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2144)
at
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2093)
at
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2050)
at
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:663)
at
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:644)
at
org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3438)
at
org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:570)
at
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:165)
at
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:157)
at
org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:902)
at
org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:871)
at
org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:250)
at
org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:77)
at
org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:640)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
Method)
at
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at
org.apache.flink.api.python.shaded.py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at
org.apache.flink.api.python.shaded.py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at 
org.apache.flink.api.python.shaded.py4j.Gateway.invoke(Gateway.java:282)
at
org.apache.flink.api.python.shaded.py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at
org.apache.flink.api.python.shaded.py4j.commands.CallCommand.execute(CallCommand.java:79)
at
org.apache.flink.api.python.shaded.py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.ClassNotFoundException:
org.apache.kafka.common.serialization.ByteArrayDeserializer
at
java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at
java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
... 35 more





Reply: Flink-kafka-connector Consumer configuration warning

2021-04-19 Post by 飞翔
You can take a look at the source code:

This props object is just the configuration used to initialize the FlinkKafkaConsumer; it is not only used to configure kafka, the whole props simply gets handed to the kafka consumer client's initialization in the end, so it has no effect at all. It is just like initializing a kafka consumer yourself: if you stuff other parameters into props, you will also get the warning, but it has no impact whatsoever.
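
For illustration, a minimal sketch of what is described above (topic, group id, and interval value are made-up placeholders): the whole Properties object is handed to FlinkKafkaConsumer, Flink itself reads the partition-discovery key, and the Kafka client merely logs a warning for the one key it does not recognize:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase;

public class PartitionDiscoverySketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "demo");
        // Read by Flink to enable partition discovery; unknown to the Kafka client,
        // hence the "was supplied but isn't a known config" warning, which is harmless.
        props.setProperty(FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS, "10000");

        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("my_topic", new SimpleStringSchema(), props);

        env.addSource(consumer).print();
        env.execute("partition-discovery-demo");
    }
}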


-- Original message --
From: "user-zh"

According to the official docs at https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/kafka.html#topic-discovery:
 By default, partition discovery is disabled. To enable it, set a
 non-negative value for flink.partition-discovery.interval-millis in the
 provided properties config, representing the discovery interval in
 milliseconds.

 The configuration above should be legal, so why does it produce this warning?

 --
 Sent from: http://apache-flink.147419.n8.nabble.com/

Re: Flink-kafka-connector Consumer configuration warning

2021-04-19 Post by Paul Lam
This warning comes from the Kafka client. The configuration key is added by Flink, and Kafka does not recognize it.

Best,
Paul Lam

> On Apr 18, 2021, at 19:45, lp <973182...@qq.com> wrote:
> 
> In an otherwise normal flink 1.12 program I see the following warning:
> 
> 19:38:37,557 WARN  org.apache.kafka.clients.consumer.ConsumerConfig   
>  
> [] - The configuration 'flink.partition-discovery.interval-millis' was
> supplied but isn't a known config.
> 
> I have a line of configuration like this:
> properties.setProperty(FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS, "10");
> 
> 
> According to the official docs at https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/kafka.html#topic-discovery:
> By default, partition discovery is disabled. To enable it, set a
> non-negative value for flink.partition-discovery.interval-millis in the
> provided properties config, representing the discovery interval in
> milliseconds.
> 
> 
> The configuration above should be legal, so why does it produce this warning?
> 
> 
> 
> --
> Sent from: http://apache-flink.147419.n8.nabble.com/