Hi Arvid,
Thank you for the suggestion.
Indeed, the specified setting was commented out in the Flink configuration
(flink-conf.yaml).
# io.tmp.dirs: /tmp
Is there a fallback (e.g. /tmp) if io.tmp.dirs and
System.getProperty("java.io.tmpdir") are both not set?
Will configure this setting to
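For reference, a minimal flink-conf.yaml sketch of that setting (the path is an assumption; when io.tmp.dirs is unset Flink falls back to java.io.tmpdir):

```yaml
# Hypothetical example: point Flink's spill/temp files at an existing,
# writable directory (multiple directories may be given, comma-separated).
io.tmp.dirs: /data/flink-tmp
```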
There is already a ticket tracking this: https://issues.apache.org/jira/browse/FLINK-17707
It should be supported in 1.13.
Best,
Yang
casel.chen wrote on Fri, Mar 26, 2021 at 8:23 AM:
> In Flink on K8s Standalone mode you can start multiple JMs via yaml, but how do you do that in Native K8s mode? Is there any documentation on this? Thanks!
Unsubscribe
aegean0933
Email: aegean0...@163.com
Thanks Guowei for the comments and Lukáš Drbal for sharing the feedback.
I think this affects not only Kubernetes application mode, but also Yarn
application and standalone application modes:
the job id will be set to ZERO if not configured explicitly in HA mode.
For standalone application, we could use
Hi Kevin, Xinbin,
Hi Shuiqiang,
>
> Thanks for the quick response on creating the ticket for Kinesis
> Connector. Do you mind giving me the chance to try to implement the
> connector over the weekend?
>
> I am interested in contributing to Flink, and I think this can be a good
> starting point
Jark wrote
> I see that your job uses a window agg; currently window agg does not support
> automatic key splitting. The window-TVF based window agg in 1.13 supports this parameter, so stay tuned.
>
> Best,
> Jark
>
> On Wed, 24 Mar 2021 at 19:29, Robin Zhang
> vincent2015qdlg@
>
> wrote:
>
>> Hi guomuhua,
>> With local aggregation enabled you don't need to split keys and do two-phase aggregation yourself. Please take a look at the official documentation.
>>
>> Best,
>> Robin
>>
>>
>>
I see that your job uses a window agg; currently window agg does not support
automatic key splitting. The window-TVF based window agg in 1.13 supports this parameter, so stay tuned.
Best,
Jark
On Wed, 24 Mar 2021 at 19:29, Robin Zhang
wrote:
> Hi guomuhua,
> With local aggregation enabled you don't need to split keys and do two-phase aggregation yourself. Please take a look at the official documentation.
>
> Best,
> Robin
>
>
> guomuhua wrote
> > In SQL, if the local-global parameter is enabled: set
> >
If my requirement is to write out the detail data, then aggregate over that detail data, and finally store the aggregated results,
how should such a data-processing pipeline best be designed?
Thanks for the clarification, Dawid. That resolves my confusion.
Sent from Yahoo Mail on Android
On Fri, 19 Mar 2021 at 2:41 pm, Dawid Wysakowicz
wrote:
Hi Chirag,
I agree it might be a little bit confusing.
Let me try to explain the reasoning. To do that I'll first try to rephrase the
hi all
onyarn31??flink-confhadoop yarn
https://issues.apache.org/jira/browse/FLINK-21981
2??hadoop#configuration??yarnyarn??configuration??yarn??configurationconfiguration??
Unsubscribe
In Flink on K8s Standalone mode you can start multiple JMs via yaml, but how do you do that in Native K8s mode? Is there any documentation on this? Thanks!
Unsubscribe
Sent from my iPhone
Hi Timo,
Apologies for the late response. I somehow seem to have missed your reply.
I do want the join to be "time-based" since I need to perform a tumble
grouping operation on top of the join.
I tried setting the watermark strategy to `R` - INTERVAL '0.001' SECONDS,
that didn't help either.
This is a Beam issue indeed, though it is an issue with the FlinkRunner. So
I think I will BCC the Flink list.
You may be in one of the following situations:
- These timers should not be viewed as distinct by the runner, but
deduped, per
Hi Team,
My streaming pipeline is based on beam & running using flink runner with
rocksdb as state backend.
Over time I am seeing memory spikes, and after looking at a heap dump I see
records in ‘__StatefulParDoGcTimerId’ that never seem to be cleaned up.
Found this jira
Hi,
I have a master/reference data that needs to come in through a
FlinkKafkaConsumer to be broadcast to all nodes and subsequently joined with
the actual stream for enriching content.
The Kafka consumer gets CDC-type records from database changes. All this works
well.
My question is how do
Hi Arvid,
Thanks, will set the scope to Provided and try.
Are there public examples in GitHub that demonstrate a sample app in Minikube?
Sandeep
> On 23-Mar-2021, at 3:17 PM, Arvid Heise wrote:
>
> Hi Sandeep,
>
> please have a look at [1], you should add most Flink dependencies as
When I run a job on my Kubernetes session cluster only the checkpoint
directories are created but not the savepoints. (Filesystem configured to
S3 Minio) Any ideas?
--
Robert Cullen
240-475-4490
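For what it's worth, savepoints are only written when explicitly triggered (e.g. via `flink savepoint <jobId>` or `flink stop`), and they need a target directory configured; a flink-conf.yaml sketch, with the bucket name as an assumption:

```yaml
# Hypothetical defaults; checkpoints are written automatically, but
# savepoints still have to be triggered explicitly via the CLI or REST API.
state.checkpoints.dir: s3://flink-bucket/checkpoints
state.savepoints.dir: s3://flink-bucket/savepoints
```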
I downloaded the lib (latest version) from here:
https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.8.3-7.0/
and put it in the flink_home/lib directory.
It helped.
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
I have the same problem ...
Hi Vinaya,
SpillingAdaptiveSpanningRecordDeserializer tries to create a directory in
the temp directory, which you can configure by setting io.tmp.dirs. By
default, it's set to System.getProperty("java.io.tmpdir"), which seems to
be invalid in your case. (Note that the directory has to exist on
Hi,
I'm quite new to Flink and I'm trying to create an application which reads
IDs from a Kinesis stream and then uses them to read from a MySQL database. I
expect that I would just be doing a join of the IDs onto the table.
I'm struggling to understand from the documentation how to
Hi all,
Thanks to everyone who has already left feedback on the community
experience in the Community Survey!
The survey is open until *Tuesday, March 30th*, so if you haven't done so
yet, please take 2 minutes (maybe less!) to fill it out below. Your opinion
is very helpful for us to better
Same as with Standalone: you can create a taskmanager-query-state-service
yourself and just change the selector.
Native Kubernetes automatically adds the following labels, which can be used to
filter out the TaskManagers belonging to one Flink cluster:
app:
component: taskmanager
type: flink-native-kubernetes
Best,
Yang
tian lin wrote on Thu, Mar 25, 2021 at 4:43 PM:
> Hi all:
> A question about Flink 1.12.1 in Flink Native
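Following Yang's hint, a minimal Service manifest sketch; the cluster name `my-first-flink-cluster` is an assumption, and 6125 is the default queryable state proxy port:

```yaml
# Hypothetical Service exposing the queryable state proxy of the
# TaskManagers of one native-Kubernetes Flink cluster, selected via
# the labels that native mode adds automatically.
apiVersion: v1
kind: Service
metadata:
  name: taskmanager-query-state-service
spec:
  type: NodePort
  ports:
    - name: query-state
      port: 6125
      targetPort: 6125
  selector:
    app: my-first-flink-cluster
    component: taskmanager
    type: flink-native-kubernetes
```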
Hi Guowei,
yeah, I think so too. There is no way to trigger a checkpoint and watch for the
checkpoint to finish now, so I will do the benchmark with the lower-level API.
Guowei Ma wrote on Thu, Mar 25, 2021 at 4:59 PM:
> Hi,
> I am not an expert on JMH, but it seems that this is not an error. From the
> log it looks like
Hi Matthias,
Thank you for following up on this. +1 to officially deprecate Mesos in the
code and documentation, too. It will be confusing for users if this
diverges from the roadmap.
Cheers,
Konstantin
On Thu, Mar 25, 2021 at 12:23 PM Matthias Pohl
wrote:
> Hi everyone,
> considering the
Hi all,
In case it is useful to some of you:
I have a big batch job that needs to use globs (e.g. *.parquet) to select its
input files. It seems that globs do not work out of the box (see
https://issues.apache.org/jira/browse/FLINK-6417)
But there is a workaround:
final FileInputFormat
Hi everyone,
considering the upcoming release of Flink 1.13, I wanted to revive the
discussion about Mesos support once more. Mesos is already listed
as deprecated in Flink's overall roadmap [1]. Maybe it's time to align the
documentation accordingly to make this more explicit?
What do
Hello Guowei,
I just checked it and it works!
Thanks a lot!
Here is a workaround which uses a UUID as the jobId:
-D\$internal.pipeline.job-id=$(cat /proc/sys/kernel/random/uuid|tr -d "-")
L.
On Thu, Mar 25, 2021 at 11:01 AM Guowei Ma wrote:
> Hi,
> Thanks for providing the logs. From the logs this
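Expanded slightly, the workaround above amounts to the following (the run-application invocation in the comment is illustrative, not verbatim):

```shell
# A Flink job id is 32 hex characters; a dash-stripped UUID fits that shape.
JOB_ID=$(cat /proc/sys/kernel/random/uuid | tr -d '-')
echo "$JOB_ID"
# Then pass it via the (internal, subject-to-change) option, e.g.:
#   flink run-application ... -D\$internal.pipeline.job-id="$JOB_ID" ...
```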
Hello everybody,
I set up a Flink (1.12.1) and Hadoop (3.2.1) cluster on two machines.
The job should store the checkpoints on HDFS like so:
```java
StreamExecutionEnvironment env =
    StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(15000,
```
Hi,
Thanks for providing the logs. From the logs this is a known bug.[1]
Maybe you could use `$internal.pipeline.job-id` to set your own
job-id.(Thanks to Wang Yang)
But keep in mind this is only for internal use and may be changed in
some release. So you should keep an eye on [1] for the correct
A question: for MySQL CDC data that needs to be aggregated downstream, should it go into upsert-kafka, or is there another way?
Hello,
sure. Here is the log from the first run, which succeeded -
https://pastebin.com/tV75ZS5S
and here is one from the second run (it's the same for all subsequent runs) -
https://pastebin.com/pwTFyGvE
My Dockerfile is pretty simple, just wordcount + S3
FROM flink:1.12.2
RUN mkdir -p $FLINK_HOME/usrlib
COPY
Hi,
I am not an expert on JMH, but it seems that this is not an error. From the
log it looks like the job has not finished.
The data source continues to read data when JMH finishes.
Thread[Legacy Source Thread - Source:
TableSourceScan(table=[[default_catalog, default_database,
Hi all:
A question about Flink 1.12.1: in Flink Native Kubernetes deployment mode, how
do I enable Queryable State? The website explains how to enable it for Standalone
K8s (https://ci.apache.org/projects/flink/flink-docs-stable/deployment/resource-providers/standalone/kubernetes.html#enabling-queryable-state), but in Native
K8s deployment mode, whether Session or Application mode, the Flink-related k8s
Hi,
After some discussion with Wang Yang offline, it seems that there might be
a jobmanager failover. So would you like to share full jobmanager log?
Best,
Guowei
On Wed, Mar 24, 2021 at 10:04 PM Lukáš Drbal wrote:
> Hi,
>
> I would like to use native kubernetes execution [1] for one batch job
退订
Dear all,
One of our Flink jobs threw the exception below and failed. Several attempts to
restart the job resulted in the same exception, and the job failed each time.
The job started successfully only after changing the file name.
Flink Version: 1.11.2
Exception
2021-03-24 20:13:09,288 INFO