Hi Peter,
Can you provide the relevant JobManager logs? And can you write down what steps
you took before the failure happened? Did this failure occur during the Flink
upgrade, or after the upgrade, etc.?
Best,
Piotrek
On Wed, Sep 8, 2021 at 16:11 Peter Westermann
wrote:
> We recently upgraded
Hi!
The option execution.checkpoint.path does not exist; there is only
execution.savepoint.path. Also, how did you set the parameter? Did you write it in flink-conf.yaml, or use a SET statement?
outlook_3e5704ab57282...@outlook.com
wrote on Thu, Sep 9, 2021 at 9:47 AM:
>
> Hello,
>
> Version: Flink 1.13
> Running: Flink SQL
> Source table: created on Kafka
> Processing logic: simple filtering, output to Hive
>
>
Hello,
Version: Flink 1.13
Running: Flink SQL
Source table: created on Kafka
Processing logic: simple filtering, output to Hive
Problem:
A checkpoint is taken every 1 minute. The job had to be cancelled for some reason; when running the job again, how can it continue processing from the latest checkpoint? I set this parameter on the client:
execution.savepoint.path=hdfs:///user/flink/checkpoints/68e103c6ec9d8bfe459cb6c329ec696c
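As a minimal sketch of resuming from a retained checkpoint in the SQL client (using the checkpoint path from the question above; this assumes the checkpoint was retained and the statement is issued before the job is submitted):

```sql
-- Flink SQL client: set the restore path before submitting the INSERT job
SET 'execution.savepoint.path' = 'hdfs:///user/flink/checkpoints/68e103c6ec9d8bfe459cb6c329ec696c';
```

Setting the option in flink-conf.yaml should have the same effect, but the SET statement applies it only to the current session.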
We also have this configuration set, in case it makes any difference when
allocating tasks: cluster.evenly-spread-out-slots.
On 2021/09/08 18:09:52, Xiang Zhang wrote:
> Hello,
>
> We have an app running on Flink 1.10.2 deployed in standalone mode. We
> enabled task-local recovery by setting
I copied:
FROM flink:1.11.3-scala_2.12-java11
RUN mkdir ./plugins/flink-s3-fs-presto
RUN cp ./opt/flink-s3-fs-presto-1.11.3.jar ./plugins/flink-s3-fs-presto/
then started getting this error while trying to run on AWS EKS and access an
S3 bucket: 2021-09-08
Hello,
We have an app running on Flink 1.10.2 deployed in standalone mode. We
enabled task-local recovery by setting both *state.backend.local-recovery* and
*state.backend.rocksdb.localdir*. The app has over 100 task managers and 2
job managers (active and passive).
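As a sketch, the flink-conf.yaml entries for such a setup might look like this (the localdir value is illustrative, not from the original message):

```yaml
# flink-conf.yaml (sketch; the localdir path is an assumed example)
state.backend.local-recovery: true
state.backend.rocksdb.localdir: /data/flink/local-recovery
cluster.evenly-spread-out-slots: true
```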
This is what we have
you need to put the flink-s3-fs-hadoop/presto jar into a directory
within the plugins directory, for example the final path should look
like this:
/opt/flink/plugins/flink-s3-fs-hadoop/flink-s3-fs-hadoop-1.13.1.jar
Furthermore, you only need either the hadoop or presto jar, _not_ both of
them.
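The layout described above can be sketched as a couple of Dockerfile lines (assuming the official flink:1.13.1 image, where the bundled filesystem jars ship under ./opt):

```dockerfile
FROM flink:1.13.1-scala_2.12
# each plugin must live in its own subdirectory under plugins/
RUN mkdir -p ./plugins/flink-s3-fs-hadoop && \
    cp ./opt/flink-s3-fs-hadoop-1.13.1.jar ./plugins/flink-s3-fs-hadoop/
```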
yes, I copied it to the plugins folder, but I'm not sure why I see the same jar
in /opt as well by default
root@d852f125da1f:/opt/flink/plugins# ls
README.txt  flink-s3-fs-hadoop-1.13.1.jar  metrics-datadog  metrics-influx
metrics-prometheus  metrics-statsd  external-resource-gpu
We recently upgraded from Flink 1.12.4 to 1.12.5 and are seeing some weird
behavior after a change in jobmanager leadership: We’re seeing two copies of
the same job, one of those is in SUSPENDED state and has a start time of zero.
Here’s the output from the /jobs/overview endpoint:
{
"jobs":
Hi,
So for the past 2-3 days I have been looking for documentation that elaborates how
Flink takes care of restarting a data streaming job. I know all the restart
and failover strategies, but I wanted to know how the different components (Job
Manager, Task Manager, etc.) play a role while restarting the
Hi,
I'm investigating why a job we use to inspect a Flink state is a lot slower
than the bootstrap job used to generate it.
I use RocksDB with a simple keyed value state mapping a string key to a
long value. Generating the bootstrap state from a CSV file with 100M
entries takes a couple
Hi
My submit command is as follows:
/Users/xxx/local/flink-1.12.2/bin/flink run \
-m yarn-cluster \
-yd \
-ynm purchase_hist \
-c com.flink.streaming.core.JobApplication \
flink-streaming-core-2.1.0-1.12.2.jar \
-sql /Users/xxx/Desktop/flinkBigtable/xxx.sql \
-checkpointInterval 6 \
-checkpointingMode
Hi,
did you try to use a different order? Core module first and then Hive
module?
The compatibility layer should work sufficiently for regular Hive UDFs
that don't aggregate data. Hive aggregation functions should work well
in batch scenarios. However, for streaming pipelines the aggregate
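The suggested reordering can be expressed in the SQL client roughly like this (a sketch; it assumes a Flink version with module DDL, e.g. 1.13+, and that the hive module has already been loaded):

```sql
-- check the current module resolution order, then put core before hive
SHOW MODULES;
USE MODULES core, hive;
```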