Deploying Flink on Kubernetes: how to change the default lib directory

2021-04-18 Post by cxydeve...@163.com
The default lib path is /opt/flink/lib.
I currently cannot modify /opt/flink/lib directly, but I still need to add jars to it. Is it possible to specify a different lib path in flink-conf.yaml?
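The thread has no reply, but for completeness: as far as I know, flink-conf.yaml has no option to relocate the lib directory; it is resolved by the launch scripts (which honor the FLINK_LIB_DIR environment variable). On Kubernetes, one common workaround is to overlay /opt/flink/lib with a writable volume seeded by an init container. A minimal pod-spec sketch under that assumption — the image tag, extra-jar path, and volume name below are all hypothetical placeholders:

```yaml
# Hypothetical sketch: an init container copies the stock jars plus an
# extra jar into a writable emptyDir, which is then mounted over
# /opt/flink/lib in the main container, shadowing the read-only image path.
spec:
  initContainers:
    - name: seed-lib
      image: flink:1.12.2            # placeholder image tag
      command: ["sh", "-c"]
      args:
        - cp /opt/flink/lib/*.jar /flink-lib/ &&
          cp /extra-jars/my-connector.jar /flink-lib/   # hypothetical jar, from another mount
      volumeMounts:
        - name: flink-lib
          mountPath: /flink-lib
  containers:
    - name: flink-main-container
      image: flink:1.12.2            # placeholder image tag
      volumeMounts:
        - name: flink-lib
          mountPath: /opt/flink/lib  # shadows the image's lib directory
  volumes:
    - name: flink-lib
      emptyDir: {}
```

Alternatively, since the bin scripts fall back to $FLINK_HOME/lib only when FLINK_LIB_DIR is unset, exporting FLINK_LIB_DIR to a writable directory pre-populated with the stock jars achieves the same without shadowing the image path.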



--
Sent from: http://apache-flink.147419.n8.nabble.com/

flink on yarn startup error

2021-04-18 Post by Bruce Zhang
Submission fails in flink on yarn per-job mode. The command is: bin/flink run -m yarn-cluster -d -yjm 1024 
-ytm 4096 /home/XX.jar

 

YARN resources are sufficient, and other programs submit fine; only this program fails on submission. But if the command is changed to bin/flink run -m yarn-cluster -yjm 1024 
-ytm 4096 /home/testjar/XX.jar it succeeds, i.e. with the -d flag removed — however that runs in session mode, and it also affects other running programs.

 

Error message:

2021-04-19 10:08:13,116 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli 
- No path for the flink jar passed. Using the location of class 
org.apache.flink.yarn.YarnClusterDescriptor to locate the jar

2021-04-19 10:08:13,116 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli 
- No path for the flink jar passed. Using the location of class 
org.apache.flink.yarn.YarnClusterDescriptor to locate the jar

2021-04-19 10:08:13,541 INFO  
org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Cluster 
specification: ClusterSpecification{masterMemoryMB=1024, 
taskManagerMemoryMB=4096, numberTaskManagers=1, slotsPerTaskManager=1}

2021-04-19 10:08:13,843 WARN  
org.apache.flink.yarn.AbstractYarnClusterDescriptor   - The 
configuration directory ('/home/software/flink-1.7.0/conf') contains both LOG4J 
and Logback configuration files. Please delete or rename one of them.

2021-04-19 10:08:14,769 INFO  
org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Submitting 
application master application_1618796268543_0019

2021-04-19 10:08:14,789 INFO  
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl - Submitted 
application application_1618796268543_0019

2021-04-19 10:08:14,789 INFO  
org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Waiting for the 
cluster to be allocated

2021-04-19 10:08:14,791 INFO  
org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Deploying 
cluster, current state ACCEPTED






 The program finished with the following exception:




org.apache.flink.client.deployment.ClusterDeploymentException: Could not deploy 
Yarn job cluster.

at 
org.apache.flink.yarn.YarnClusterDescriptor.deployJobCluster(YarnClusterDescriptor.java:82)

at 
org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:238)

at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)

at 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1050)

at 
org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1126)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:422)

at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)

at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)

at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1126)

Caused by: 
org.apache.flink.yarn.AbstractYarnClusterDescriptor$YarnDeploymentException: 
The YARN application unexpectedly switched to state FAILED during deployment.

Diagnostics from YARN: Application application_1618796268543_0019 failed 1 
times due to AM Container for appattempt_1618796268543_0019_01 exited with  
exitCode: 1

For more detailed output, check application tracking page:
http://siact-11:8088/cluster/app/application_1618796268543_0019
Then, click on links to logs of each attempt.

Diagnostics: Exception from container-launch.

Container id: container_e24_1618796268543_0019_01_01

Exit code: 1

Stack trace: ExitCodeException exitCode=1:

at org.apache.hadoop.util.Shell.runCommand(Shell.java:585)

at org.apache.hadoop.util.Shell.run(Shell.java:482)

at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:776)

at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)

at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)

at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)

at java.util.concurrent.FutureTask.run(FutureTask.java:266)

at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

at java.lang.Thread.run(Thread.java:748)







Container exited with a non-zero exit code 1

Failing this attempt. Failing the application.

If log aggregation is enabled on your cluster, use this command to further 
investigate the issue:

yarn logs -applicationId application_1618796268543_0019

at 
org.apache.flink.yarn.AbstractYarnClusterDescriptor.startAppMaster(AbstractYarnClusterDescriptor.java:1065)

at 
org.apache.flink.yarn.AbstractYarnClusterDescriptor.deployInternal(AbstractYarnClusterDescriptor.java:545)

at 

Flink-kafka-connector Consumer configuration warning

2021-04-18 Post by lp
In a normally running Flink 1.12 program, I get the following warning:

19:38:37,557 WARN  org.apache.kafka.clients.consumer.ConsumerConfig
[] - The configuration 'flink.partition-discovery.interval-millis' was
supplied but isn't a known config.

I have the following line of configuration (note: Properties.setProperty takes two Strings, so the value must be quoted):
properties.setProperty(FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS, "10");


According to the official documentation at https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/kafka.html#topic-discovery:
By default, partition discovery is disabled. To enable it, set a
non-negative value for flink.partition-discovery.interval-millis in the
provided properties config, representing the discovery interval in
milliseconds.


The configuration above should be valid, so why is this warning logged?
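The usual explanation (an assumption based on how the connector behaves, not stated in this thread) is that Flink hands the entire Properties object through to the underlying Kafka client; Kafka's ConsumerConfig then logs every key it does not recognize, including Flink-specific ones such as flink.partition-discovery.interval-millis, so the warning is benign — Flink itself still reads the key. A self-contained sketch of the property handling, using only java.util.Properties (the literal key string and the placeholder broker address are assumptions for illustration):

```java
import java.util.Properties;

public class KafkaPropsDemo {
    // The literal key that FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS
    // resolves to; Flink reads it, but the same Properties object is also handed
    // to the Kafka client, which logs it as an unknown config.
    static final String DISCOVERY_KEY = "flink.partition-discovery.interval-millis";

    static Properties buildConsumerProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker
        // setProperty takes two Strings, so the interval must be "10", not the int 10
        props.setProperty(DISCOVERY_KEY, "10");
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildConsumerProps();
        System.out.println(props.getProperty(DISCOVERY_KEY)); // prints "10"
    }
}
```

In short: the discovery interval does take effect despite the warning; the Kafka client simply has no schema entry for Flink-prefixed keys.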





Re: flink1.12.2 StreamingFileSink issue

2021-04-18 Post by JasonLee
hi

You can refer to this article: https://mp.weixin.qq.com/s/HqXaREr_NZbZ8lgu_yi7yA



-
Best Wishes
JasonLee