Re: How to auto scale yarn session job

2023-03-20 Thread Si-li Liu
Sorry, it turned out to be a bug on our side; after fixing the defect it can scale automatically. Thanks for your reply. Weihua Hu wrote on Tue, Mar 21, 2023 at 12:02: > Hi, > > Yarn session clusters should auto allocate new task managers if slots are > not enough. > And new submissions should not affect running

Re: How to auto scale yarn session job

2023-03-20 Thread Weihua Hu
Hi, Yarn session clusters should auto allocate new task managers if slots are not enough. And new submissions should not affect running jobs. Could you provide the failure log? Best, Weihua On Tue, Mar 21, 2023 at 11:57 AM Si-li Liu wrote: > I use this command to launch a flink yarn

How to auto scale yarn session job

2023-03-20 Thread Si-li Liu
I use this command to launch a Flink yarn session: yarn-session.sh -s 6 -jm 2048 -tm 4096 -nm sql-client-session -m yarn-cluster -d All my Flink SQL jobs have a parallelism of 2, and I found that my yarn session can only run 3 pipelines. If my session doesn't have a free slot, submitting to this session
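The launch command above can be annotated as a sketch. The slot arithmetic in the comments is my own assumption, not stated in the thread: with `-s 6` and parallelism-2 jobs, a single TaskManager would cap out at 3 pipelines.

```shell
# Sketch of the launch command from the thread, echoed rather than executed
# so it does not require a YARN cluster to run.
CMD="yarn-session.sh -s 6 -jm 2048 -tm 4096 -nm sql-client-session -m yarn-cluster -d"
# -s 6  : task slots per TaskManager; with parallelism-2 jobs, one TM fits 3 pipelines
# -jm   : JobManager memory (MB);  -tm : TaskManager memory (MB)
# -nm   : YARN application display name;  -d : detach after startup
# -m yarn-cluster appears in the original command but is normally a `flink run`
# flag rather than a yarn-session.sh option; it may simply be ignored here.
echo "$CMD"
```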

The directory of state files saved by a Flink job on Aliyun OSS cannot be opened

2023-03-20 Thread casel.chen
I have a Flink CDC job that joins multiple tables into a wide table. Its state has reached about 20GB, and the remote state store is Aliyun OSS. Today the job failed and, when I tried to restore it manually from a checkpoint, I found that the checkpoint directory holding the job state (the share directory) could not be opened in a browser; listing it from the command line showed tens of thousands of files in it. The job uses the RocksDB state backend with incremental checkpoints enabled. Is there any way to solve this problem? Are all these files in the share directory left over from incremental checkpoints?
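For context (my reading of the mechanics, not confirmed in the thread): with incremental checkpoints the shared/ directory holds the RocksDB SST files that live checkpoints still reference, so tens of thousands of files are plausible for ~20GB of state and are not merely leftovers. A minimal sketch of the settings involved, with illustrative values:

```yaml
# flink-conf.yaml sketch (illustrative values, not a tuned recommendation)
state.backend: rocksdb
state.backend.incremental: true      # SST files are uploaded into the shared/ directory
state.checkpoints.num-retained: 1    # fewer retained checkpoints -> fewer referenced files
# Taking a self-contained savepoint periodically gives one consolidated
# snapshot to restore from, avoiding manual browsing of shared/.
```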

Re: Out-of-memory errors after upgrading Flink from version 1.14 to 1.15 and Java 8 to Java 11

2023-03-20 Thread Shammon FY
Hi Ajinkya, I think you can try decreasing the batch shuffle size with the config `taskmanager.memory.framework.off-heap.batch-shuffle.size` if the data volume of your job is small; the default value is `64M`. You can find more information in doc [1] [1]
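As a sketch, the suggested knob goes in flink-conf.yaml; the 32m value below is illustrative, not a recommendation from the thread:

```yaml
# flink-conf.yaml sketch (32m is an illustrative value; the default is 64m)
taskmanager.memory.framework.off-heap.batch-shuffle.size: 32m
# This memory is carved out of framework off-heap memory, so it must stay
# below taskmanager.memory.framework.off-heap.size.
```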

Prometheus monitoring of Flink jobs frequently runs out of memory

2023-03-20 Thread casel.chen
In production we use Prometheus to monitor several hundred Flink jobs via the pushgateway approach, with a 30-second metrics sampling interval per job. The Prometheus server itself has been given nearly 50GB of memory but still OOMs frequently. Is there any way to tune this?
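For reference, a hedged sketch of a PushGateway reporter setup in flink-conf.yaml (key names as in Flink 1.15-era docs; the host name is hypothetical — verify both against your Flink version). Lengthening the reporting interval and deleting metrics on shutdown are common levers for reducing Prometheus memory pressure:

```yaml
metrics.reporter.promgateway.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter
metrics.reporter.promgateway.host: pushgateway.example.com   # hypothetical host
metrics.reporter.promgateway.port: 9091
metrics.reporter.promgateway.interval: 60 SECONDS            # longer interval -> fewer samples
metrics.reporter.promgateway.deleteOnShutdown: true          # drop stale series when jobs exit
metrics.reporter.promgateway.randomJobNameSuffix: true
```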

Re: Handling JSON Serialization without Kryo

2023-03-20 Thread Rion Williams
Hi Shammon, Unfortunately it's a data stream job. I've been exploring a few options but haven't decided on anything yet. I'm currently looking at whether I can leverage some type of partial serialization to bind the properties that I know the job will use and retain the rest as a

Re: Way to add columns with defaults to the existing table and recover from the savepoint

2023-03-20 Thread Shammon FY
Hi Ashish, State compatibility is a complex issue, and you can review the state schema evolution [1] and state processor [2] docs to see if there's a solution for your problem. [1] https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/serialization/schema_evolution/

Re: Handling JSON Serialization without Kryo

2023-03-20 Thread Shammon FY
Hi Rion, Is your job DataStream or Table/SQL? If it is a Table/SQL job and you can define all the JSON fields you need, then you can directly use the json format [1] to parse the data. You can also write custom UDFs to parse JSON data into structured data, such as map, row and other types
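A sketch of the first suggestion (the built-in json format with only the needed fields declared); the table, topic, and connector settings are hypothetical, and it needs a Flink SQL runtime to execute:

```sql
-- Hypothetical table: only the JSON fields the job needs are declared.
CREATE TABLE events (
  user_id STRING,
  payload ROW<action STRING, ts BIGINT>,  -- nested JSON object mapped to a ROW
  tags MAP<STRING, STRING>                -- open-ended keys kept as a MAP
) WITH (
  'connector' = 'kafka',
  'topic' = 'events',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json',
  'json.ignore-parse-errors' = 'true'     -- skip malformed records instead of failing
);
```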

Out-of-memory errors after upgrading Flink from version 1.14 to 1.15 and Java 8 to Java 11

2023-03-20 Thread Ajinkya Pathrudkar
I hope this email finds you well. I am writing to inform you of a recent update we made to our Flink version, upgrading from 1.14 to 1.15, along with a shift from Java 8 to Java 11. Since the update, we have encountered out-of-memory (direct

Re: LEFT and FULL interval joins in Flink SQL leads to very out of order outputs

2023-03-20 Thread Charles Tan
Hi Flink users, Last week I sent an email about some very delayed outputs resulting from LEFT or FULL interval joins in Flink SQL. I noticed that in a left join, when a record arrives from the left source but there is no matching record from the right source, the watermark for both sides
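For readers following the thread, a sketch of the join shape being discussed (stream and column names hypothetical). In a LEFT interval join, an unmatched left row can only be emitted once the watermark closes its interval, which is one source of the delays described:

```sql
-- Hypothetical LEFT interval join between two streams.
SELECT l.id, l.ts AS left_ts, r.ts AS right_ts
FROM left_stream AS l
LEFT JOIN right_stream AS r
  ON l.id = r.id
 AND r.ts BETWEEN l.ts - INTERVAL '5' MINUTE AND l.ts + INTERVAL '5' MINUTE;
```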

Re: Help to understand sql DAG

2023-03-20 Thread Lasse Nedergaard
I have found a ticket, 20637, and as I understand it this is the problem I have. There has been no activity on this major issue since September 2021. Best regards, Lasse Nedergaard > On 8 Mar 2023 at 09:11, Shammon FY wrote: > >  >