Re: flink 1.11: the -C option does not upload the UDF jar

2021-04-20 Thread
-C,--classpath  Adds a URL to each user code
  classloader  on all nodes in the
  cluster. The paths must specify a
  protocol (e.g. file://) and be
  accessible on all nodes (e.g. by means
  of a NFS share). You can use this
  option multiple times for specifying
  more than one URL. The protocol must
  be supported by the {@link
  java.net.URLClassLoader}.

The dependency jars specified with -C must be placed somewhere the URLClassLoader can reach from every node; -C only adds the URL to each classloader, it does not ship the jar to the cluster.
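A minimal sketch of an invocation that satisfies this requirement, assuming the jar has been copied to a shared mount beforehand (the /nfs/share path is hypothetical) so the same file:// URL resolves on the client and on every YARN node:

    flink run \
      -m yarn-cluster \
      -C file:///nfs/share/flink-demo-1.0.jar \
      ...

If no shared storage is available, bundling the UDF classes into the job's fat jar is the simplest alternative, since -C by itself does not ship any files.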



On 2021/4/19, 10:22 PM, "todd" wrote:

Command executed: flink run \
-m yarn-cluster \
-C file:////flink-demo-1.0.jar \
x

The JobGraph builds successfully on the client side, but on YARN the job fails because the UDF class cannot be found. Looking at the classpath, the jar was not uploaded.







Re: Flink-kafka-connector Consumer configuration warning

2021-04-20 Thread
The flink.partition-discovery.interval-millis setting does take effect in Flink: the Flink Kafka connector re-fetches the partition metadata of the Kafka topic at the configured interval. For the implementation, see the createAndStartDiscoveryLoop method in FlinkKafkaConsumerBase.
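A minimal sketch of enabling partition discovery with the Flink 1.12 Kafka connector; the broker address, group id, topic name, and 30 s interval below are placeholder values:

    import java.util.Properties;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase;

    Properties props = new Properties();
    props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
    props.setProperty("group.id", "demo-group");              // placeholder
    // Flink-only key: read by FlinkKafkaConsumerBase to drive the discovery
    // loop, then passed through to the KafkaConsumer, which does not know it.
    // Note the value must be a String, not an int.
    props.setProperty(
            FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS,
            "30000");

    FlinkKafkaConsumer<String> consumer =
            new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props);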

19:38:37,557 WARN  org.apache.kafka.clients.consumer.ConsumerConfig
[] - The configuration 'flink.partition-discovery.interval-millis' was
supplied but isn't a known config.

This WARN is emitted by the Kafka client itself; it means Kafka received the property but does not recognize it.
The property is not meant for Kafka at all. It is just that when Flink fetches the Kafka partitions it creates a KafkaConsumer instance and passes all the configured properties along to Kafka, including this one.
The warning comes from the config.logUnused() call inside the KafkaConsumer constructor.
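Since the warning is harmless, it can simply be ignored. If it clutters the logs, one option (a sketch, assuming the log4j2 properties setup that Flink 1.12 ships by default) is to raise the level of the Kafka ConsumerConfig logger in conf/log4j.properties; the logger id "kafkaconfig" is an arbitrary name:

    # Suppress the harmless "unknown config" WARN from the Kafka client.
    logger.kafkaconfig.name = org.apache.kafka.clients.consumer.ConsumerConfig
    logger.kafkaconfig.level = ERROR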


On 2021/4/18, 7:45 PM, "lp" <973182...@qq.com> wrote:

In a Flink 1.12 program that otherwise runs normally, I get the following warning:

19:38:37,557 WARN  org.apache.kafka.clients.consumer.ConsumerConfig
[] - The configuration 'flink.partition-discovery.interval-millis' was
supplied but isn't a known config.

I have the following line of configuration:

properties.setProperty(FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS, "10");



According to the official docs (https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/kafka.html#topic-discovery):
By default, partition discovery is disabled. To enable it, set a
non-negative value for flink.partition-discovery.interval-millis in the
provided properties config, representing the discovery interval in
milliseconds.


The configuration above should be valid, so why is this warning reported?


