As the title says: this pattern keeps showing up in Flink and other open-source libraries. I have always ignored it, but it really hurts readability.
For example, Netty's ChannelPipeline interface extends the ChannelInboundInvoker interface, yet it re-declares every one of that interface's methods in its own body. Isn't that redundant?
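For what it's worth, a common reason for this pattern is return-type narrowing for fluent chaining: ChannelPipeline re-declares the inherited methods with ChannelPipeline as the return type, so chained calls stay typed as the sub-interface. A minimal sketch of the pattern (the names here are illustrative, not Netty's actual declarations):

```java
// Sketch of why a sub-interface may re-declare inherited methods:
// overriding with a covariant (narrower) return type keeps fluent
// chains typed as the sub-interface.
interface Invoker {
    Invoker fire(String event);
}

interface Pipeline extends Invoker {
    @Override
    Pipeline fire(String event); // re-declared only to narrow the return type

    Pipeline addLast(String name);
}

class DefaultPipeline implements Pipeline {
    private final StringBuilder log = new StringBuilder();

    @Override
    public DefaultPipeline fire(String event) {
        log.append("fire:").append(event).append(';');
        return this;
    }

    @Override
    public DefaultPipeline addLast(String name) {
        log.append("add:").append(name).append(';');
        return this;
    }

    public String trace() {
        return log.toString();
    }
}
```

Without the re-declaration, `pipeline.fire("x")` would be typed as the super-interface, and a chained `.addLast(...)` call would not compile.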
Hi Dian,
Thanks for the reply.
I don't think a filter function makes sense here. I have 2,000 tenants in
the source database, and I want all records for a single tenant in a
tenant-specific topic. So, with a filter function, if I understand it
correctly, I would need 2,000 different filters, which
The log of the problematic JobManager is below; it seems the node was quarantined? After restarting that JobManager, everything was fine again.
2021-06-24 11:30:18,756 WARN akka.remote.Remoting
[] - Tried to associate with unreachable remote
address [akka.tcp://fl...@bjhw-aisecurity-flink03.bjhw:13141]. Address
is now gated for 50 ms, all messages to this address will
You are right that split is still not supported. Does it make sense for you to
split the stream using a filter function? There is some overhead compared to the
built-in stream.split, as you need to provide a filter function for each
sub-stream, and so a record will be evaluated multiple times.
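The workaround above can be sketched in plain Java, with collections standing in for streams (in the real DataStream API you would call `filter` on the stream once per sub-stream):

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Plain-Java sketch of the filter-based split described above:
// one predicate per sub-stream, so every record is evaluated
// once per filter -- the overhead mentioned in the thread.
class FilterSplit {
    static List<Integer> subStream(List<Integer> records, Predicate<Integer> keep) {
        return records.stream().filter(keep).collect(Collectors.toList());
    }
}
```

In Flink 1.13 the usual replacement for the removed split is side outputs (a ProcessFunction with OutputTag), which evaluates each record only once.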
From the implementation of DefaultCompletedCheckpointStore, Flink will only
retain the configured number of checkpoints.
Maybe you could also check the content of jobmanager-leader ConfigMap. It
should have the same number of pointers for the completedCheckpoint.
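The retained count mentioned above is controlled by a setting in flink-conf.yaml (key as documented for Flink 1.13; the value 3 is just an example):

```yaml
# Maximum number of completed checkpoints to retain (default: 1).
# The jobmanager-leader ConfigMap should hold the same number of
# completedCheckpoint pointers.
state.checkpoints.num-retained: 3
```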
Best,
Yang
Ivan Yang
Robert is right. We can only support single job submission in application
mode when HA mode is enabled.
This is a known limitation of current application mode implementation.
Best,
Yang
Robert Metzger wrote on Thu, Jun 24, 2021 at 3:54 AM:
> Thanks a lot for checking again. I just started Flink in
Thanks a lot for checking again. I just started Flink in Application mode
with a jar that contains two "executeAsync" submissions, and indeed two
jobs are running.
I think the problem in your case is that you are using High Availability (I
guess, because there are log statements from the
Sounds good. Thanks.
Thomas
On Wed, Jun 23, 2021 at 11:59 AM Seth Wiesman wrote:
> It will just work as long as you enable partition discovery.
>
>
> https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/datastream/kafka/#partition-discovery
>
> On Tue, Jun 22, 2021 at
Hi,
New PyFlink user here. Loving it so far. The first major problem I've run
into is that I cannot create a Kafka Producer with dynamic topics. I see
that this has been available for quite some time in Java with Keyed
Serialization using the getTargetTopic method. Another way to do this in
Java
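The getTargetTopic mechanism mentioned above can be sketched as follows. This is a self-contained illustration of the idea, not Flink's actual serialization-schema interface; TenantEvent and the "tenant-&lt;id&gt;" topic naming are made up for the example:

```java
// Self-contained sketch of per-record topic routing in the spirit
// of KeyedSerializationSchema#getTargetTopic: the schema computes
// the destination topic from each record, so 2,000 tenants need
// one producer rather than 2,000 filtered sub-streams.
class TenantEvent {
    final String tenantId;
    final String payload;

    TenantEvent(String tenantId, String payload) {
        this.tenantId = tenantId;
        this.payload = payload;
    }
}

class TenantTopicRouter {
    // Mirrors getTargetTopic(T element): pick the topic per record.
    static String getTargetTopic(TenantEvent e) {
        return "tenant-" + e.tenantId;
    }
}
```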
It will just work as long as you enable partition discovery.
https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/datastream/kafka/#partition-discovery
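For reference, with the FlinkKafkaConsumer the discovery interval is enabled through a consumer property; the key below is the one documented on the linked page, while the broker address and group id are placeholders:

```java
import java.util.Properties;

// Enable Kafka partition discovery for FlinkKafkaConsumer by setting
// the documented property key to a refresh interval in milliseconds.
class PartitionDiscoveryConfig {
    static Properties kafkaProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "my-group");                // placeholder
        // Check for newly created partitions every 10 seconds.
        props.setProperty("flink.partition-discovery.interval-millis", "10000");
        return props;
    }
}
```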
On Tue, Jun 22, 2021 at 1:32 PM Thomas Wang wrote:
> Hi,
>
> I'm wondering if anyone has changed the number of
Thanks for the reply. Yes, we are seeing all the completedCheckpoint files and
they keep growing. We will revisit our k8s setup, configmaps, etc.
> On Jun 23, 2021, at 2:09 AM, Yang Wang wrote:
>
> Hi Ivan,
>
> As for the completedCheckpoint files that keep growing, do you mean too many
> files
Hi Arvid,
I've created the following issue:
https://issues.apache.org/jira/browse/FLINK-23127
On Wed, Jun 23, 2021 at 2:09 AM Arvid Heise wrote:
> This looks like a bug at first glance. Could you please open a ticket for
> that?
>
> If not, I'd do that tomorrow.
>
> On Wed, Jun 23, 2021 at
Thanks Yun. Let me try options provided below.
Thanks,
Vijay
> On Jun 23, 2021, at 4:51 AM, Yun Tang wrote:
>
>
> Hi Vijay,
>
> To be honest, an 18MB checkpoint size in total might not be something
> serious. If you really want to dig into what's inside, you could use
>
Hi Debraj,
By "Legacy Source Thread" we refer to all sources which do not implement the new
Source interface yet [1]. Currently only the Hive, Kafka and FileSource
are already migrated. In general, there is no severe downside to using the older
sources, but in the future we plan to implement only ones
Hi
I am seeing the logs below with Flink 1.13.0 running on YARN:
2021-06-23T13:41:45.761Z WARN grid.task.MetricSdmStalenessUtils Legacy
Source Thread - Source: MetricSource -> Filter -> MetricStoreMapper ->
(Filter -> Timestamps/Watermarks -> Map -> Flat Map, Sink:
FlinkKafkaProducer11, Sink:
Hi Vijay,
To be honest, an 18MB checkpoint size in total might not be something serious.
If you really want to dig into what's inside, you could use
Checkpoints#loadCheckpointMetadata [1] to load the _metadata and see anything
unexpected.
And you could refer to
Yes, the dimension table's state is kept; expired entries are cleaned up based on the watermark.
Best,
Leonard
> On Jun 23, 2021, at 19:20, op <520075...@qq.com.INVALID> wrote:
>
> Thanks. Does an event-time temporal join keep the latest state for each key
> of the temporal table? The docs say it depends on the watermarks of both
> sides, but I haven't quite understood that.
>
>
>
>
> ------ Original message ------
> From:
Hi,
Flink SQL currently supports event-time temporal joins against arbitrary
tables/views; processing-time temporal joins against arbitrary tables/views are
not supported yet (processing-time joins are supported against tables that
implement LookupTableSource).
The main reason processing-time temporal joins against arbitrary tables are not
supported is semantics. Specifically, for processing-time joins, the Flink SQL
layer has no good mechanism to guarantee that the dimension table is fully
loaded before the join starts. For example, if the Kafka topic backing the
dimension table holds 10 million records, there is currently no way to
This looks like a bug at first glance. Could you please open a ticket for
that?
If not, I'd do that tomorrow.
On Wed, Jun 23, 2021 at 6:36 AM Yaroslav Tkachenko <
yaroslav.tkache...@shopify.com> wrote:
> Hi everyone,
>
> I need to add support for the GCS filesystem. I have a working example
>
Hi, when doing a temporal join with a Kafka table, I get the following exception:
org.apache.flink.table.api.TableException: Processing-time temporal join is not
supported yet.
The SQL is:
create view visioned_table as
select
user_id,
event
from
(select
user_id,
event,
Thanks Yingjie for pinging me.
Hi vtygoss,
Leonard is right, maybe you are using the wrong statistics information.
This caused the optimizer to select the **BROADCAST JOIN**
incorrectly. Unfortunately, Flink needs to broadcast a huge amount of data,
even gigabytes. This is really the
Hi Robert,
But I saw the Flink docs say that application mode can run multiple jobs. Or
did I misunderstand?
https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/
*Compared to the Per-Job mode, the Application Mode allows the
submission of applications consisting of multiple
Hi Rommel,
I wonder why an Avro type would use Kryo as its serializer. Could you check
what kind of type information you get via TypeInformation.of(class) [1]?
[1]
Hi Kevin,
Welcome to the Apache Flink open-source community! As Yun said, the community
welcomes and values every contribution.
However, maintaining the Chinese documentation is a huge effort that touches
every module and therefore needs collaboration from the committers of many
modules, so it is sometimes inevitably out of date.
If you find an untranslated page with no corresponding JIRA issue, feel free to
create the issue and submit a PR directly.
If an issue and PR already exist, you can help review them; the community is
currently short of high-quality reviewers, and reviewing is the best way to
speed up the translation work.
Best,
Jark
On Wed, 23 Jun 2021 at 11:04, Yun
Hi Qihua,
Application Mode is meant for executing one job at a time, not multiple
jobs on the same JobManager.
If you want to do that, you need to use session mode, which allows managing
multiple jobs on the same JobManager.
On Tue, Jun 22, 2021 at 10:43 PM Qihua Yang wrote:
> Hi Arvid,
>
> Do