Hi,
Do you mean that the checkpoint failure stops your job from running normally?
What's your sink type? If it relies on completed checkpoints to commit,
this is expected.
On Tue, Oct 31, 2023 at 12:03 AM Evgeniy Lyutikov
wrote:
> Hi team!
> I came across strange behavior in Flink 1.17.1.
Hi, Arjun.
Do you mean clearing all states stored in a user-defined state?
IIUC, it could be done for Operator state.
But it cannot be done for Keyed state by users, because every operation on
it is currently bound to a specific key.
BTW, could you also share your business scenario? It could
Hi, Yu
Thanks for the suggestion.
Ideally the data needs to come from the sink being audited. Adding another sink
serves part of the purpose, but if anything goes wrong in the original sink, I
presume it won't be reflected in the additional sink (correct me if I'm
mistaken).
I may have to
Hi Steven,
As stated in the `StandaloneResourceManager` comments, the manager does not
acquire new resources, and the user needs to start the TaskManagers
manually.
`ActiveResourceManager`, in contrast, requests and releases resources on
demand (that's what "active" means) based on some
3 November 2023, 9:13:51
> *To:* user@flink.apache.org
> *Cc:* Nathan Moderwell
> *Subject:* Re: flink-kubernetes-operator cannot handle SPECCHANGE for 100+
> FlinkDeployments concurrently
>
> One of the operator pods logged the following exception before the
> conta
of
reconcile threads, but nothing helped
From: Tony Chen
Sent: 3 November 2023, 9:13:51
To: user@flink.apache.org
Cc: Nathan Moderwell
Subject: Re: flink-kubernetes-operator cannot handle SPECCHANGE for 100+
FlinkDeployments concurrently
One of the operator pods
Hi,
I verified this: the problem seems to be in the reduce function, which reuses the wordCount1 object. I tried creating a new WordCount as the output, and that appears to fix it.
My guess is that this is related to the heap-based state backend: the heap state of multiple windows may hold references to the same object.
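The failure mode in this thread — a mutable record reused across reduce() calls while the heap state backend still holds a reference to the stored object — can be reproduced in plain Java, independent of Flink. The class below is a made-up stand-in for illustration:

```java
// Stand-in for the POJO from this thread; not Flink code.
class ObjectReuseDemo {

    static class WordCount {
        final String word;
        long count;
        WordCount(String word, long count) { this.word = word; this.count = count; }
    }

    // Buggy variant: mutates and returns the left argument. If the state
    // backend stores live objects (as the heap backend does) rather than
    // serialized bytes, the stored state is silently modified too.
    static WordCount reduceInPlace(WordCount stored, WordCount incoming) {
        stored.count += incoming.count;
        return stored;
    }

    // Safe variant: returns a fresh object, leaving the stored one untouched.
    static WordCount reduceFresh(WordCount stored, WordCount incoming) {
        return new WordCount(stored.word, stored.count + incoming.count);
    }
}
```

With a backend that serializes state on write (e.g. RocksDB), the in-place variant happens to be harmless, which is why this kind of bug tends to surface only with the heap backend.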
```
.reduce(
    (wordCount1, wordCount2) -> {
        WordCount newWC =
            new WordCount(
                wordCount1.word, wordCount1.count + wordCount2.count);
        return newWC;
    })
```
ated jobmanager pod
> gets deleted and then recreated. rh-flinkdeployment-01 basically becomes
> stuck in this loop where it becomes stable and then gets re-deployed by the
> operator.
>
> This doesn't happen to all 110 FlinkDeployments, but it happens to around
> 30 of them con
work around, or
> is there any pre-existing work that we could potentially re-use?
>
> On Thu, Nov 2, 2023 at 3:30 AM Martijn Visser
> wrote:
>
>> Hi,
>>
>> That's by design: you can't dynamically add and remove topics from an
>> existing Flink job that is bein
for
this dynamic adding/removing topics feature (probably by forking the
flink-connector-kafka and adding some custom logic there), just wondering if
there's any direction you can point us if we are to do the work around, or
is there any pre-existing work that we could potentially re-use?
On Thu, Nov 2, 2023
Hi Arjun,
Flink saves the currently processed file and its corresponding offset
in Flink state [1]. You may need to use the State Processor API [1] to
access it.
However, I don't think this is a good approach. I suggest adding relevant
metrics to the FileSystem connector to report the
Hi,
That's by design: you can't dynamically add and remove topics from an
existing Flink job that is being restarted from a snapshot. The
feature you're looking for is being planned as part of FLIP-246 [1]
Best regards,
Martijn
[1]
From: Samrat Deb
Sent: Wednesday, November 1, 2023 15:31
To: d...@flink.apache.org
Cc: user@flink.apache.org
Subject: Re: [DISCUSS][FLINK-33240] Document deprecated options as well
Thanks for the proposal,
+1 for adding the deprecated identifier
[Thought] Can we have a separate
,
Zhanghao Chen
From: Alexander Fedulov
Sent: Tuesday, October 31, 2023 18:12
To: d...@flink.apache.org
Cc: user@flink.apache.org
Subject: Re: [DISCUSS][FLINK-33240] Document deprecated options as well
Hi Zhanghao,
Thanks for the proposition.
In general +1, this sounds
Hi Xuyang,
Thanks again for giving me some insights on how to use the Datastream API
for my use case, I will explore it and experiment with it.
I wanted to use the value inside the row datatype as a primary key because
I might get multiple records for the same id, and when I try to make a join
Thanks for the proposal.
+1 from my side and +1 for putting them to a separate section.
Best,
Hang
Samrat Deb wrote on Wed, Nov 1, 2023 at 15:32:
> Thanks for the proposal,
> +1 for adding the deprecated identifier
>
> [Thought] Can we have a separate section / page for deprecated configs? Wdyt?
>
>
>
Hi Xuyang,
Thank you for your response. Since I have no access to create a ticket in
the ASF Jira, I have requested access, and once I get it I will
raise a ticket for the same.
Also, you have asked me to use Datastream API to extract the id and then
use the TableAPI feature, since
it is feasible in this case?
From: Alexander Fedulov
Sent: 01 November 2023 01:54 AM
To: Kamal Mittal
Cc: user@flink.apache.org
Subject: Re: Flink custom parallel data source
Flink natively supports a pull-based model for sources, where the source
operators request data from the external system
in separate threads = new
Runnable () -> serversocket.accept();
So the client socket will be accepted and handed to a separate thread to read data
from the TCP stream.
Rgds,
Kamal
From: Alexander Fedulov
Sent: 31 October 2023 04:03 PM
To: Kamal Mittal
Cc: user@flink.apache.org
Subject: Re: Flink custom paral
Please note that SourceFunction API is deprecated and is due to be removed,
possibly in the next major version of Flink.
Ideally you should not be manually spawning threads in your Flink
applications. Typically you would only perform data fetching in the sources
and do processing in the subsequent
Hi Zhanghao,
Thanks for the proposition.
In general +1, this sounds like a good idea as long as it is clear that the
usage of these settings is discouraged.
Just one minor concern - the configuration page is already very long, do
you have a rough estimate of how many more options would be added with
,
java.lang.String, org.apache.flink.runtime.state.StateBackend)
From: Alexis Sarda-Espinosa
Sent: Friday, October 27, 2023 4:29 PM
To: Schwalbe Matthias
Cc: user
Subject: Re: Updating existing state with state processor API
⚠EXTERNAL MESSAGE – CAUTION: Think Before You Click ⚠
Hi Matthias
Thanks for your proposal, Zhanghao Chen. I think it adds more transparency
to the configuration documentation.
+1 from my side on the proposal
On Wed, Oct 11, 2023 at 2:09 PM Zhanghao Chen
wrote:
> Hi Flink users and developers,
>
> Currently, Flink won't generate doc for the deprecated
Registering the counter is fine, e.g. in `open()`:
lazy val responseCounter: Counter = getRuntimeContext
  .getMetricGroup
  .addGroup("response_code")
  .counter("myResponseCounter")
then, in `asyncInvoke()`, I can still only do `responseCounter.inc()`, but
what I want is
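A common way to get per-response-code metrics is to create one counter per code lazily, keyed by the code. Independent of Flink's MetricGroup API, the bookkeeping is just a concurrent map; the names below are invented for this sketch, and in Flink each value would be a Counter obtained from the metric group rather than an AtomicLong:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Lazily creates one counter per response code; thread-safe, so it can be
// incremented from asynchronous callbacks such as asyncInvoke's futures.
class PerCodeCounters {
    private final Map<String, AtomicLong> counters = new ConcurrentHashMap<>();

    public void inc(String responseCode) {
        counters.computeIfAbsent(responseCode, c -> new AtomicLong()).incrementAndGet();
    }

    public long get(String responseCode) {
        AtomicLong c = counters.get(responseCode);
        return c == null ? 0L : c.get();
    }
}
```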
Hi team,
I'm also interested in finding out if there is Java code available to
determine the extent to which a Flink job has processed files within a
directory. Additionally, I'm curious about where the details of the
processed files are stored within Flink.
Thanks and regards,
Arjun S
On Mon,
The hiveserver2 endpoint turns the flink gateway into a hive server2 itself: to the outside world it simply is hive server2, and it can be used directly with existing tools that work with hive server2.
But what you are actually using is the flink jdbc driver, which does not talk to hive server2; it talks to the flink gateway. So when you start the gateway in hive server2 mode, the driver does not recognize it.
casel.chen wrote on Mon, Oct 30, 2023 at 14:36:
>
> Indeed, after not specifying the endpoint as the hiveserver2 type, using hive
Hi team,
I appreciate the information provided. I'm inquiring whether there exists a
method to automatically relocate processed files from a directory once a
Flink job has completed processing them. I'm particularly keen on
understanding how this particular use case is currently managed in
Hi Kean,
I would like to share with you our analysis of the pros and cons of
enabling the BloomFilter in production.
Pros:
With the BloomFilter enabled, RocksDB.get() can filter out data files that
definitely do not contain the key, and hence avoid some random disk reads. This
performance improvement is
Another approach is to use DataStream: DataStream supports side output, but Flink
SQL does not. A roundabout way is flinksql -> datastream -> flink
sql; check the official docs, Flink SQL and DataStream can be converted into each other.
Xuyang wrote on Mon, Oct 30, 2023 at 10:17:
> Flink SQL currently has no side-output-like mechanism for dirty data; this
> requirement should be achievable with a custom connector.
> --
>
> Best!
> Xuyang
Hi casel,
Flink JDBC currently connects to the gateway through Flink's own gateway API, so when starting the gateway
you don't need to specify the endpoint as the hiveserver2 type; just use Flink's default gateway endpoint type.
casel.chen wrote on Sun, Oct 29, 2023 at 17:24:
>
> 1. Start the flink cluster
> bin/start-cluster.sh
>
>
> 2. Start the sql gateway
> bin/sql-gateway.sh start
I believe bloom filters are off by default because they add overhead and
aren't always helpful. I.e., in workloads that are write heavy and have few
reads, bloom filters aren't worth the overhead.
David
On Fri, Oct 20, 2023 at 11:31 AM Mate Czagany wrote:
> Hi,
>
> There have been no reports
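To make the trade-off David describes concrete: a Bloom filter answers "definitely absent" or "maybe present", never a false negative, at the cost of extra bits and hashing on every write. A toy sketch in plain Java (not RocksDB's actual implementation):

```java
import java.util.BitSet;

// Minimal Bloom filter: k bit positions per key, derived from two base
// hashes (the Kirsch-Mitzenmacher trick). False positives are possible;
// false negatives are not.
class TinyBloom {
    private final BitSet bits;
    private final int size;
    private final int hashes;

    public TinyBloom(int sizeBits, int numHashes) {
        this.bits = new BitSet(sizeBits);
        this.size = sizeBits;
        this.hashes = numHashes;
    }

    private int position(String key, int i) {
        int h1 = key.hashCode();
        int h2 = Integer.rotateLeft(h1, 16) ^ 0x9E3779B9;
        return Math.floorMod(h1 + i * h2, size);
    }

    public void add(String key) {
        for (int i = 0; i < hashes; i++) bits.set(position(key, i));
    }

    // false => key is definitely not present (the disk read can be skipped);
    // true  => key may be present (a read is still required).
    public boolean mightContain(String key) {
        for (int i = 0; i < hashes; i++) {
            if (!bits.get(position(key, i))) return false;
        }
        return true;
    }
}
```

In a write-heavy, read-light workload, every add() pays the hashing and memory cost while mightContain() is rarely consulted, which is exactly the scenario where the filter is not worth its overhead.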
alog; managing them is troublesome, and this feature would help a lot.
> Original message
> | From | Feng Jin |
> | Date | 2023-10-20 13:18 |
> | To | |
> | Subject | Re: Doesn't flink sql support show create catalog? |
> hi casel
>
> Starting from 1.18, CatalogStore was introduced to persist catalog configuration, so show create catalog can indeed be supported.
>
> Best,
> Feng
>
Congratulations! Thanks to the release managers and everyone who has
contributed!
Best
Tamir
From: Jark Wu
Sent: Friday, October 27, 2023 7:39 AM
To: d...@flink.apache.org
Cc: Qingsheng Ren ; User ;
user...@flink.apache.org
Subject: Re: [ANNOUNCE] Apache
> Or was it the querying of the checkpoints you were advising against?
Yes, I meant the approach, not file removal itself. Mainly because how
exactly FileSource stores its state is an implementation detail, and there
are no external guarantees about its consistency even between minor
versions.
> This is not a robust solution, I would advise against it.
Oh no? I am curious as to why not. It seems not dissimilar to how Kafka
topic retention works: the messages are removed after some time period
(hopefully after they are processed), so why would it be bad to remove
files that are already
Hi Gordon,
Thanks for your information. That is what I need.
And I have responded to the Kafka connector RC vote mail.
Best regards,
Xianxun
> On Oct 28, 2023, at 04:13, Tzu-Li (Gordon) Tai wrote:
>
> Hi Xianxun,
>
> You can find the list of supported Flink versions for each connector here:
>
Hi Xianxun,
You can find the list of supported Flink versions for each connector here:
https://flink.apache.org/downloads/#apache-flink-connectors
Specifically for the Kafka connector, we're in the process of releasing a
new version for the connector that works with Flink 1.18.
The release
> I wonder if you could use this fact to query the committed checkpoints
and move them away after the job is done.
This is not a robust solution, I would advise against it.
Best,
Alexander
On Fri, 27 Oct 2023 at 16:41, Andrew Otto wrote:
> For moving the files:
> > It will keep the files as
* with regards to empty string. The null check is still a bit defensive and
one could return false in test(), but it does not matter really since
String.substring in getName() can never return null.
On Fri, 27 Oct 2023 at 16:32, Alexander Fedulov
wrote:
> Actually, this is not even "defensive
Actually, this is not even "defensive programming", but is the required
logic for processing directories.
See here:
Hi Matthias,
Thanks for the response. I guess the specific question would be, if I work
with an existing savepoint and pass an empty DataStream to
OperatorTransformation#bootstrapWith, will the new savepoint end up with an
empty state for the modified operator, or will it maintain the existing
For moving the files:
> It will keep the files as is and remember the name of the file read in
checkpointed state to ensure it doesn't read the same file twice.
I wonder if you could use this fact to query the committed checkpoints and
move them away after the job is done. I think it should even
Good morning Alexis,
Something like this we do all the time:
read an existing savepoint, copy some of the not-to-be-changed operator states
(keyed/non-keyed) over, and process/patch the remaining ones by transforming
and bootstrapping to new state.
I could share more details for a more specific
Hi team, Thanks for your quick response.
I have an inquiry regarding file processing in the event of a job restart.
When the job is restarted, we encounter challenges in tracking which files
have been processed and which remain pending. Is there a method to
seamlessly resume processing files from
Congratulations and thanks release managers and everyone who has
contributed!
Best,
Jark
On Fri, 27 Oct 2023 at 12:25, Hang Ruan wrote:
> Congratulations!
>
> Best,
> Hang
>
> Samrat Deb wrote on Fri, Oct 27, 2023 at 11:50:
>
> > Congratulations on the great release
> >
> > Bests,
> > Samrat
> >
> > On
Congratulations!
Best,
Hang
Samrat Deb wrote on Fri, Oct 27, 2023 at 11:50:
> Congratulations on the great release
>
> Bests,
> Samrat
>
> On Fri, 27 Oct 2023 at 7:59 AM, Yangze Guo wrote:
>
> > Great work! Congratulations to everyone involved!
> >
> > Best,
> > Yangze Guo
> >
> > On Fri, Oct 27, 2023 at
Hi,
Please send an email with any content to user-zh-unsubscr...@flink.apache.org to unsubscribe.
Best,
Junrui
13430298988 <13430298...@163.com> wrote on Fri, Oct 27, 2023 at 11:00:
> Unsubscribe
Hi,
Please send an email with any content to user-zh-unsubscr...@flink.apache.org to unsubscribe.
Best,
Junrui
chenyu_opensource wrote on Fri, Oct 27, 2023 at 10:20:
> Unsubscribe
Great work! Congratulations to everyone involved!
Best,
Yangze Guo
On Fri, Oct 27, 2023 at 10:23 AM Qingsheng Ren wrote:
>
> Congratulations and big THANK YOU to everyone helping with this release!
>
> Best,
> Qingsheng
>
> On Fri, Oct 27, 2023 at 10:18 AM Benchao Li wrote:
>>
>> Great work,
Congratulations and big THANK YOU to everyone helping with this release!
Best,
Qingsheng
On Fri, Oct 27, 2023 at 10:18 AM Benchao Li wrote:
> Great work, thanks everyone involved!
>
> Rui Fan <1996fan...@gmail.com> wrote on Fri, Oct 27, 2023 at 10:16:
> >
> > Thanks for the great work!
> >
> > Best,
> >
Great work, thanks everyone involved!
Rui Fan <1996fan...@gmail.com> wrote on Fri, Oct 27, 2023 at 10:16:
>
> Thanks for the great work!
>
> Best,
> Rui
>
> On Fri, Oct 27, 2023 at 10:03 AM Paul Lam wrote:
>
> > Finally! Thanks to all!
> >
> > Best,
> > Paul Lam
> >
> > > On Oct 27, 2023, at 03:58, Alexander Fedulov
Thanks for the great work!
Best,
Rui
On Fri, Oct 27, 2023 at 10:03 AM Paul Lam wrote:
> Finally! Thanks to all!
>
> Best,
> Paul Lam
>
> > On Oct 27, 2023, at 03:58, Alexander Fedulov wrote:
> >
> > Great work, thanks everyone!
> >
> > Best,
> > Alexander
> >
> > On Thu, 26 Oct 2023 at 21:15, Martijn
Yeah, agreed, not a problem in general. But it just seems odd. Returning true if
a fileName can be null will blow up a lot more in the reader, as far as my
understanding goes.
I just want to understand whether this is an erroneous condition or an actual
use case. Let's say, is it possible to get a
Finally! Thanks to all!
Best,
Paul Lam
> On Oct 27, 2023, at 03:58, Alexander Fedulov wrote:
>
> Great work, thanks everyone!
>
> Best,
> Alexander
>
> On Thu, 26 Oct 2023 at 21:15, Martijn Visser
> wrote:
>
>> Thank you all who have contributed!
>>
>> On Thu, Oct 26, 2023 at 18:41, Feng Jin wrote:
>>
Great work, thanks everyone!
Best,
Ron
Alexander Fedulov wrote on Fri, Oct 27, 2023 at 04:00:
> Great work, thanks everyone!
>
> Best,
> Alexander
>
> On Thu, 26 Oct 2023 at 21:15, Martijn Visser
> wrote:
>
> > Thank you all who have contributed!
> >
> > On Thu, Oct 26, 2023 at 18:41, Feng Jin wrote:
> >
>
Hi Arjun,
Flink's FileSource doesn't move or delete the files as of now. It will keep the
files as is and remember the name of the file read in checkpointed state to
ensure it doesn't read the same file twice.
Flink's source API works in such a way that a single Enumerator operates on the
JobManager.
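The bookkeeping described above can be modeled in a few lines. This is a simplified sketch of the idea, not the actual SplitEnumerator code:

```java
import java.util.HashSet;
import java.util.Set;

// Simplified model of how a file source avoids re-reading input: the set of
// already-assigned file names is part of checkpointed state; after a restore,
// only unseen files are handed out again. The files themselves are never
// moved or deleted.
class ProcessedFileTracker {
    private final Set<String> processed = new HashSet<>();

    // true => the file has not been seen yet and should be read now.
    public boolean tryAssign(String fileName) {
        return processed.add(fileName);
    }

    public Set<String> snapshotState() {
        return new HashSet<>(processed);
    }

    public void restoreState(Set<String> state) {
        processed.clear();
        processed.addAll(state);
    }
}
```

One consequence visible in the sketch: a file assigned after the last snapshot is read again after a restore, which is why restarts are discussed together with checkpointing in this thread.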
Great work, thanks everyone!
Best,
Alexander
On Thu, 26 Oct 2023 at 21:15, Martijn Visser
wrote:
> Thank you all who have contributed!
>
> On Thu, Oct 26, 2023 at 18:41, Feng Jin wrote:
>
> > Thanks for the great work! Congratulations
> >
> >
> > Best,
> > Feng Jin
> >
> > On Fri, Oct 27, 2023
* to clarify: by different output I mean that for the same input message
the output message could be slightly smaller due to the abovementioned
factors and fall into the allowed size range without causing any failures
On Thu, 26 Oct 2023 at 21:52, Alexander Fedulov
wrote:
> Your expectations
Your expectations are correct. In case of AT_LEAST_ONCE Flink will wait
for all outstanding records in the Kafka buffers to be acknowledged before
marking the checkpoint successful (=also recording the offsets of the
sources). That said, there might be other factors involved that could lead
to a
Is there an actual issue behind this question?
In general: this is a form of defensive programming for a public interface
and the decision here is to be more lenient when facing potentially
erroneous user input rather than blow up the whole application with a
NullPointerException.
Best,
Flink's FileSource will enumerate the files and keep track of the progress
in parallel for the individual files. Depending on the format you use, the
progress is tracked at a different level of granularity (TextLine being
the simplest one that tracks the progress based on the number of lines
Thank you all who have contributed!
On Thu, Oct 26, 2023 at 18:41, Feng Jin wrote:
> Thanks for the great work! Congratulations
>
>
> Best,
> Feng Jin
>
> On Fri, Oct 27, 2023 at 12:36 AM Leonard Xu wrote:
>
> > Congratulations, Well done!
> >
> > Best,
> > Leonard
> >
> > On Fri, Oct 27, 2023 at
file = env.fromSource(source,
> WatermarkStrategy.forMonotonousTimestamps()
> .withTimestampAssigner(new WatermarkAssigner((Object input)
> -> System.currentTimeMillis())), "FileSource");
> file.print();
> }
> Regards,
>
>
Thanks for the great work! Congratulations
Best,
Feng Jin
On Fri, Oct 27, 2023 at 12:36 AM Leonard Xu wrote:
> Congratulations, Well done!
>
> Best,
> Leonard
>
> On Fri, Oct 27, 2023 at 12:23 AM Lincoln Lee
> wrote:
>
> > Thanks for the great work! Congrats all!
> >
> > Best,
> > Lincoln
Congratulations, Well done!
Best,
Leonard
On Fri, Oct 27, 2023 at 12:23 AM Lincoln Lee wrote:
> Thanks for the great work! Congrats all!
>
> Best,
> Lincoln Lee
>
>
> Jing Ge wrote on Fri, Oct 27, 2023 at 00:16:
>
> > The Apache Flink community is very happy to announce the release of
> Apache
> > Flink
Thanks for the great work! Congrats all!
Best,
Lincoln Lee
Jing Ge wrote on Fri, Oct 27, 2023 at 00:16:
> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.18.0, which is the first release for the Apache Flink 1.18 series.
>
> Apache Flink® is an open-source unified
Hi Kirti,
What do you mean exactly by "Flink CSV Decoder"? Please provide a snippet
of the code that you are trying to execute.
To be honest, combining CSV with AVRO-generated classes sounds rather
strange and you might want to reconsider your approach.
As for a quick fix, using aliases in your
Hello,
Has this problem been solved? I ran into the same issue and haven't located the cause yet.
Best,
Paul Lam
> On Jul 20, 2023, at 12:04, 王刚 wrote:
>
> Exception stack trace
> ```
>
> 2023-07-20 11:43:01,627 ERROR
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - Terminating
> TaskManagerRunner with exit code 1.
> org.apache.flink.util.FlinkException: Failed
Hi Ralph,
can you explain a bit more? When you say "barriers" you should be referring
to the checkpoints, but from your description seems more like watermarks.
What functionality is supported in Flink and not Flink SQL? In terms of
watermarks, there were a few shortcomings between the two APIs
Hi,
Please send an email to user-unsubscr...@flink.apache.org if you want to
unsubscribe from user@flink.apache.org, and you can refer to [1][2]
for more details.
Best,
Hang
[1]
https://flink.apache.org/zh/community/#%e9%82%ae%e4%bb%b6%e5%88%97%e8%a1%a8
[2]
cords with funder 12345 in the table_a and a single
record with funder 12345 in the table_b. When I run this Flink job, I can see
an INSERT with two UPDATEs as my results (corresponding to the records from
table_a), but their order is not deterministic. If I re-run the application
several time
Hi,
Take a look at the class implementing your DynamicTableSource. If you are using the legacy InputFormat-based source (something like InputFormatProvider.of), you can use the close method in InputFormat;
if you are using the FLIP-27 source (something like SourceProvider.of), SplitReader also has a close method.
--
Best!
Xuyang
On 2023-10-24 11:54:36, "jinzhuguang" wrote:
> Version: Flink 1.16.0
>
Hi Jing,
My team and I have been blocked by the need for a PyFlink release including
https://github.com/apache/flink/pull/23141, and I saw that you mentioned
that anybody can be the release manager of a bug fix release. Could we
explore what it would take for me to do this (assuming nobody is
>>
>> In my example, I have the following query:
>>
>> SELECT a.funder, a.amounts_added, r.amounts_removed FROM table_a AS a
>> JOIN table_b AS r ON a.funder = r.funder
>>
>> Let's say I have three records with funder 12345 in the table_a and a
>> single
Hi Zakelly,
Thank you for your answer.
Best,
rui
Zakelly Lan wrote on Fri, Oct 13, 2023 at 19:12:
> Hi rui,
>
> The 'state.backend.fs.memory-threshold' configures the threshold below
> which state is stored as part of the metadata, rather than in separate
> files. So as a result the JM will use its memory
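For context, the option being discussed is set in flink-conf.yaml; the value below is only illustrative:

```
# State snapshots smaller than this threshold are inlined into the checkpoint
# metadata file (passing through JobManager memory) instead of being written
# as separate files on the checkpoint storage.
state.backend.fs.memory-threshold: 20kb
```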
The additional exceptions show the same error but on different files.
PyFlink lib error:
java.lang.RuntimeException: An error occurred while copying the file.
    at org.apache.flink.api.common.cache.DistributedCache.getFile(DistributedCache.java:158)
    at
larity and allows us to identify
>>> bottleneck tasks.
>>> 3. Autoscaler feature currently only works for K8s operator + native
>>> K8s mode.
>>>
>>>
>>> Best,
>>> Zhanghao Chen
>>> --
Hi Hemi,
You cannot just filter out the delete records.
You must use the following syntax to generate a delete record.
```
CREATE TABLE test_source (f1 xxx, f2, xxx, f3 xxx, deleted boolean) with
(.);
INSERT INTO es_sink
SELECT f1, f2, f3
FROM (
SELECT *,
ROW_NUMBER() OVER (PARTITION BY
Hi Tony,
It doesn't seem like the operator had much to do with this error; I
wonder if this would still happen in newer Flink versions with the
JobResultStore already available.
It would be great to try. In any case, I highly recommend upgrading to newer
Flink versions for better operator
Hi Gyula,
After upgrading our operator version to the HEAD commit of the release-1.6
branch (
https://github.com/apache/flink-kubernetes-operator/pkgs/container/flink-kubernetes-operator/127962962?tag=3f0dc2e),
we are still seeing this same issue.
Here's the log message on the last savepoint
Hi Hemi,
One possible way, but it may generate many useless states.
As shown below:
```
CREATE TABLE test_source (f1 xxx, f2, xxx, f3 xxx, deleted boolean) with
(.);
INSERT INTO es_sink
SELECT f1, f2, f3
FROM (
SELECT *,
ROW_NUMBER() OVER (PARTITION BY f1, f2 ORDER BY proctime()) as
Team,
Please unsubscribe my email id.
On Thu, Oct 19, 2023 at 6:25 AM jihe18717838093 <18717838...@126.com> wrote:
> Hi team,
>
>
>
> Could you please remove this email from the subscription list?
>
>
>
> Thank you!
>
>
>
> Best,
>
> Minglei
>
Hi,
By naming the container flink-main-container, Flink will know which
container spec it should use for the Flink containers.
If you change the name, Flink won't know which container spec to use for the
Flink container, and will probably think it's just a sidecar container, and
there will still
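For reference, the special container name appears in a FlinkDeployment's pod template roughly like this (a sketch; the resource values are arbitrary):

```
spec:
  podTemplate:
    spec:
      containers:
        # Flink merges its generated spec into the container with this exact
        # name; containers with any other name are treated as sidecars.
        - name: flink-main-container
          resources:
            limits:
              memory: "2Gi"
```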
Hi,
There have been no reports about setting this configuration causing any
issues. I would guess it's off by default because it can increase the
memory usage by an unpredictable amount.
I would say feel free to enable it, from what you've said I also think that
this would improve the