Starting a flink job from crontab fails: flink-sql-parquet_2.11-1.12.0.jar does not exist

2021-01-01 Post by ??????
bin/flink run -m yarn-cluster -yjm 660 -ytm 1024 -ys 1 \
  -yqu xjia_queue -ynm Test_Demo02 -c com.dwd.test_main.Test_Demo02 \
  -Drest.port="8067" /opt/module/flink1.12/xjia_lib/xjia_shuyun-6.0.jar


WARN  org.apache.flink.yarn.configuration.YarnLogConfigUtil [] - The configuration directory ('/home/xjia/opt/module/flink1.12/conf') already contains a LOG4J config file. If you want to use logback, then please delete or rename the log configuration file.
2021-01-02 14:55:11,888 INFO  org.apache.hadoop.yarn.client.RMProxy [] - Connecting to ResourceManager at /0.0.0.0:8032
2021-01-02 14:55:12,056 INFO  org.apache.flink.yarn.YarnClusterDescriptor [] - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
2021-01-02 14:55:12,206 WARN  org.apache.flink.yarn.YarnClusterDescriptor [] - Neither the HADOOP_CONF_DIR nor the YARN_CONF_DIR environment variable is set. The Flink YARN Client needs one of these to be set to properly load the Hadoop configuration for accessing YARN.
2021-01-02 14:55:12,237 INFO  org.apache.flink.yarn.YarnClusterDescriptor [] - The configured JobManager memory is 660 MB. YARN will allocate 1024 MB to make up an integer multiple of its minimum allocation memory (1024 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The extra 364 MB may not be used by Flink.
2021-01-02 14:55:12,238 INFO  org.apache.flink.yarn.YarnClusterDescriptor [] - Cluster specification: ClusterSpecification{masterMemoryMB=1024, taskManagerMemoryMB=1024, slotsPerTaskManager=1}
2021-01-02 14:55:12,458 WARN  org.apache.flink.yarn.YarnClusterDescriptor [] - The file system scheme is 'file'. This indicates that the specified Hadoop configuration path is wrong and the system is using the default Hadoop configuration values. The Flink YARN client needs to store its files in a distributed file system.
2021-01-02 14:55:13,775 INFO  org.apache.flink.yarn.YarnClusterDescriptor [] - Submitting application master application_1609403978979_0043
2021-01-02 14:55:14,025 INFO  org.apache.hadoop.yarn.client.api.impl.YarnClientImpl [] - Submitted application application_1609403978979_0043
2021-01-02 14:55:14,026 INFO  org.apache.flink.yarn.YarnClusterDescriptor [] - Waiting for the cluster to be allocated
2021-01-02 14:55:14,029 INFO  org.apache.flink.yarn.YarnClusterDescriptor [] - Deploying cluster, current state ACCEPTED



The program finished with the following exception:


org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Failed to execute sql
    at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:330)
    at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
    at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114)
    at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:743)
    at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:242)
    at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:971)
    at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1047)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
    at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
    at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1047)
Caused by: org.apache.flink.table.api.TableException: Failed to execute sql
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:696)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeOperation(TableEnvironmentImpl.java:759)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:665)
    at com.dwd.test_main.Test_Demo02$.main(Test_Demo02.scala:50)
    at com.dwd.test_main.Test_Demo02.main(Test_Demo02.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:316)
    ... 11 more
Caused by: org.apache.flink.client.deployment.ClusterDeploymentException: Could not deploy Yarn job cluster.
    at
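
Since the same command presumably works from an interactive shell, a likely culprit is cron's minimal environment: HADOOP_CONF_DIR / YARN_CONF_DIR are unset (exactly what the WARN lines above complain about), and jars resolved through the login environment, such as flink-sql-parquet_2.11-1.12.0.jar, are no longer found. A minimal sketch of a wrapper script for the cron entry; every path below is an assumption to be adjusted to the real installation:

#!/usr/bin/env bash
# run_test_demo02.sh - called from crontab; cron does not source the login
# profile, so export everything the Flink YARN client needs explicitly.
export JAVA_HOME=/opt/module/jdk1.8.0_201                  # assumed path
export HADOOP_HOME=/opt/module/hadoop                      # assumed path
export HADOOP_CONF_DIR="$HADOOP_HOME/etc/hadoop"
export PATH="$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH"        # cron's PATH is minimal
export HADOOP_CLASSPATH="$(hadoop classpath)"
FLINK_HOME=/opt/module/flink1.12

cd "$FLINK_HOME" || exit 1    # keep relative paths (conf/, lib/) resolvable
exec bin/flink run -m yarn-cluster -yjm 660 -ytm 1024 -ys 1 \
  -yqu xjia_queue -ynm Test_Demo02 -c com.dwd.test_main.Test_Demo02 \
  -Drest.port="8067" "$FLINK_HOME/xjia_lib/xjia_shuyun-6.0.jar"

# Example crontab entry:
# 0 * * * * /home/xjia/bin/run_test_demo02.sh >> /home/xjia/logs/test_demo02.log 2>&1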

Re: FlinkSQL pushed-down value type does not match the column type

2021-01-01 Post by automths
Thanks for your answer.
But my col1 and col2 are already SMALLINT; it is the literals pushed down from the WHERE clause that arrive as Integer, and I would like those values to be SMALLINT as well.



Happy New Year!


automths
autom...@163.com

On 2020-12-31 18:17, whirly wrote:
Hi.

In the query you can use the built-in CAST function to force a value to the desired type, e.g. select CAST(A.`col1` AS SMALLINT) as col1 from table


References:
https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/functions/systemFunctions.html#type-conversion-functions
https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/types.html#data-types




best 2021.


On 2020-12-31 17:13:20, "automths" wrote:
>Hi:
>I implemented a custom connector with the FilterableTableSource interface, but found that in the filter pushed down by the flink planner, the Literal type does not match the column type.
>For example, take the following SQL:
> select * from shortRow1 where key in (1, 2, 3) and col1 > 10 and col1 <= 15
>In the DDL, key, col1 and col2 are all declared as SMALLINT.
>In the pushed-down filter, the Literal inside GreaterThan is an Integer. Is that expected, or is there something I should do in the query?
>
>
>Best wishes!
>automths
>autom...@163.com


Re:Re: flink 1.12.0 kubernetes-session deployment problem

2021-01-01 Post by 陈帅
Environment: a MacBook Pro with a single-node minikube v1.15.1 and kubernetes v1.19.4.
With the flink v1.11.3 distribution I ran the following commands:
kubectl create namespace flink-session-cluster

kubectl create serviceaccount flink -n flink-session-cluster

kubectl create clusterrolebinding flink-role-binding-flink \
  --clusterrole=edit \
  --serviceaccount=flink-session-cluster:flink

./bin/kubernetes-session.sh \
  -Dkubernetes.namespace=flink-session-cluster \
  -Dkubernetes.jobmanager.service-account=flink \
  -Dkubernetes.cluster-id=session001 \
  -Dtaskmanager.memory.process.size=8192m \
  -Dkubernetes.taskmanager.cpu=1 \
  -Dtaskmanager.numberOfTaskSlots=4 \
  -Dresourcemanager.taskmanager-timeout=360


The console output shows the flink web UI at http://192.168.64.2:8081 rather than an address with a five-digit NodePort like http://192.168.50.135:31753. What went wrong here? Should the host IP be the minikube IP? My local browser cannot reach http://192.168.64.2:8081.
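
A likely explanation: in this Flink version the REST service exposure is controlled by kubernetes.rest-service.exposed.type, which defaults to LoadBalancer, and minikube cannot provision a LoadBalancer, so the printed address is not reachable from the host. A sketch of a workaround, reusing the session parameters above and assuming the <cluster-id>-rest service name that native Kubernetes mode creates:

./bin/kubernetes-session.sh \
  -Dkubernetes.namespace=flink-session-cluster \
  -Dkubernetes.jobmanager.service-account=flink \
  -Dkubernetes.cluster-id=session001 \
  -Dkubernetes.rest-service.exposed.type=NodePort    # expose the web UI via a node port

# Look up the assigned five-digit NodePort and combine it with the minikube IP:
kubectl get svc session001-rest -n flink-session-cluster
minikube ip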



2021-01-02 10:28:04,177 INFO  org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils [] - The derived from fraction jvm overhead memory (160.000mb (167772162 bytes)) is less than its min value 192.000mb (201326592 bytes), min value will be used instead

2021-01-02 10:28:04,907 INFO  org.apache.flink.kubernetes.KubernetesClusterDescriptor [] - Create flink session cluster session001 successfully, JobManager Web Interface: http://192.168.64.2:8081




I checked the pods, service and deployment: everything started normally and shows all green.


Next I submitted a job:

./bin/flink run -d \
  -e kubernetes-session \
  -Dkubernetes.namespace=flink-session-cluster \
  -Dkubernetes.cluster-id=session001 \
  examples/streaming/WindowJoin.jar



Using windowSize=2000, data rate=3

To customize example, use: WindowJoin [--windowSize ] [--rate ]

2021-01-02 10:21:48,658 INFO  org.apache.flink.kubernetes.KubernetesClusterDescriptor [] - Retrieve flink cluster session001 successfully, JobManager Web Interface: http://10.106.136.236:8081




The http://10.106.136.236:8081 shown here I can reach from my browser; the page shows the job as running, but "available slots" shows 0, and the JM log contains the following error:




Caused by: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Could not allocate the required slot within slot request timeout. Please make sure that the cluster has enough resources.
    at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeWrapWithNoResourceAvailableException(DefaultScheduler.java:441) ~[flink-dist_2.12-1.11.3.jar:1.11.3]
    ... 47 more
Caused by: java.util.concurrent.CompletionException: java.util.concurrent.TimeoutException
    at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292) ~[?:1.8.0_275]
    at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308) ~[?:1.8.0_275]
    at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:607) ~[?:1.8.0_275]
    at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) ~[?:1.8.0_275]
    ... 27 more
Caused by: java.util.concurrent.TimeoutException
    ... 25 more


Why does it report that the cluster does not have enough resources? Thanks for any help!
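
One pattern that would produce exactly this (offered as a guess): with -Dtaskmanager.memory.process.size=8192m each TaskManager pod requests roughly 8 GiB, which a single-node minikube often cannot schedule, so no TaskManager ever registers, "available slots" stays 0, and the slot request times out. A sketch of how to verify, with placeholder pod names:

# Check whether any TaskManager pod was created and why it is not running.
kubectl get pods -n flink-session-cluster
kubectl describe pod <taskmanager-pod-name> -n flink-session-cluster   # look for FailedScheduling / Insufficient memory
kubectl logs <taskmanager-pod-name> -n flink-session-cluster

# If memory is the bottleneck, retry the session with a smaller footprint, e.g.:
#   -Dtaskmanager.memory.process.size=2048m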








On 2020-12-29 09:53:48, "Yang Wang" wrote:
>The ConfigMap does not need to be created in advance; that Warning can be ignored, it is normal. It appears because the deployment is created first and the ConfigMap afterwards.
>You can follow the community docs [1] to print the JM log to the console and take a look.
>
>I suspect the cause is that you did not create the service account [2].
>
>[1].
>https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/native_kubernetes.html#log-files
>[2].
>https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/native_kubernetes.html#rbac
>
>Best,
>Yang
>
On Monday, 2020-12-28 at 17:54, 陈帅 wrote:

>> Today I switched to the latest officially released flink image, version 1.11.3, and it still fails to start.
>> This is my command:
>> ./bin/kubernetes-session.sh \
>>   -Dkubernetes.cluster-id=rtdp \
>>   -Dtaskmanager.memory.process.size=4096m \
>>   -Dkubernetes.taskmanager.cpu=2 \
>>   -Dtaskmanager.numberOfTaskSlots=4 \
>>   -Dresourcemanager.taskmanager-timeout=360 \
>>   -Dkubernetes.container.image=flink:1.11.3-scala_2.12-java8 \
>>   -Dkubernetes.namespace=rtdp
>>
>>
>>
>> Events:
>>   Type     Reason          Age                From               Message
>>   ----     ------          ----               ----               -------
>>   Normal   Scheduled       88s                default-scheduler  Successfully assigned rtdp/rtdp-6d7794d65d-g6mb5 to cn-shanghai.192.168.16.130
>>   Warning  FailedMount     88s                kubelet            MountVolume.SetUp failed for volume "flink-config-volume" : configmap "flink-config-rtdp" not found
>>   Warning  FailedMount     88s                kubelet            MountVolume.SetUp failed for volume "hadoop-config-volume" : configmap "hadoop-config-rtdp" not found
>>   Normal   AllocIPSucceed  87s                terway-daemon      Alloc IP 192.168.32.25/22 for Pod
>>   Normal   Pulling         87s                kubelet            Pulling image "flink:1.11.3-scala_2.12-java8"
>>   Normal   Pulled          31s                kubelet            Successfully pulled image "flink:1.11.3-scala_2.12-java8"
>>   Normal   Created         18s (x2 over 26s)  kubelet            Created container flink-job-manager
>>   Normal   Started         18s (x2 over 26s)  kubelet            Started container flink-job-manager

[ANNOUNCE] Apache Flink Stateful Functions 2.2.2 released

2021-01-01 Post by Tzu-Li (Gordon) Tai
The Apache Flink community released the second bugfix release of the
Stateful Functions (StateFun) 2.2 series, version 2.2.2.

*We strongly recommend that all users upgrade to this version.*

*Please check out the release announcement:*
https://flink.apache.org/news/2021/01/02/release-statefun-2.2.2.html

The release is available for download at:
https://flink.apache.org/downloads.html

Maven artifacts for Stateful Functions can be found at:
https://search.maven.org/search?q=g:org.apache.flink%20statefun

Python SDK for Stateful Functions published to the PyPI index can be found
at:
https://pypi.org/project/apache-flink-statefun/

Official Dockerfiles for building Stateful Functions Docker images can be
found at:
https://github.com/apache/flink-statefun-docker

Alternatively, Ververica has volunteered to make Stateful Function's images
available for the community via their public Docker Hub registry:
https://hub.docker.com/r/ververica/flink-statefun

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12349366

We would like to thank all contributors of the Apache Flink community who
made this release possible!

Cheers,
Gordon


flink 1.11.2 job fails in non-native k8s session mode

2021-01-01 Post by 陈帅
When trying to run flink on a non-native k8s session cluster, the example job fails. I would appreciate an explanation, thanks!


Following the k8s yaml files given in the official docs [1], the k8s session cluster comes up fine; flink:latest pulls flink v1.11.2.
As configured there, I started 2 TMs with one slot each, two slots in total.
I then submitted the bundled TopSpeedWindowing example through the web UI. The job stayed in scheduling for a while and then failed outright. The JM log from kubectl logs follows; the TMs produced no log output.


Executing TopSpeedWindowing example with default input data set.
Use --input to specify file input.
Printing result to stdout. Use --output to specify output path.
15:18:06.092 [flink-rest-server-netty-worker-thread-1] ERROR org.apache.flink.runtime.rest.handler.job.metrics.JobVertexWatermarksHandler - Unhandled exception.
java.util.concurrent.CancellationException: null
    at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2276) ~[?:1.8.0_275]
    at org.apache.flink.runtime.rest.handler.legacy.DefaultExecutionGraphCache.getExecutionGraphInternal(DefaultExecutionGraphCache.java:99) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.runtime.rest.handler.legacy.DefaultExecutionGraphCache.getExecutionGraph(DefaultExecutionGraphCache.java:71) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.runtime.rest.handler.job.AbstractExecutionGraphHandler.handleRequest(AbstractExecutionGraphHandler.java:75) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.runtime.rest.handler.AbstractRestHandler.respondToRequest(AbstractRestHandler.java:73) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.runtime.rest.handler.AbstractHandler.respondAsLeader(AbstractHandler.java:178) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.lambda$channelRead0$0(LeaderRetrievalHandler.java:81) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
    at java.util.Optional.ifPresent(Optional.java:159) [?:1.8.0_275]
    at org.apache.flink.util.OptionalConsumer.ifPresent(OptionalConsumer.java:46) [flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.channelRead0(LeaderRetrievalHandler.java:78) [flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.channelRead0(LeaderRetrievalHandler.java:49) [flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.runtime.rest.handler.router.RouterHandler.routed(RouterHandler.java:110) [flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:89) [flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:54) [flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.runtime.rest.FileUploadHandler.channelRead0(FileUploadHandler.java:174) [flink-dist_2.12-1.11.2.jar:1.11.2]
    at
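
The trace above is only a REST-handler symptom of the underlying scheduling failure; the more telling clue is that the TMs logged nothing. A sketch of the first checks, assuming the deployment and label names used by the official session YAML (flink-jobmanager / flink-taskmanager with label app: flink; adjust to the actual manifests):

# Are the TaskManager pods running, and did they register with the JobManager?
kubectl get pods -l app=flink
kubectl logs deployment/flink-taskmanager                        # look for a registration message
kubectl logs deployment/flink-jobmanager | grep -i taskmanager   # registration / slot offers on the JM side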

test

2021-01-01 Post by mq sun