Hi
Unsubscribe requests should be sent to user-zh-unsubscr...@flink.apache.org
Best
JasonLee
On 2021-08-21 09:43, 牛成 wrote:
Unsubscribe
Unsubscribe
Unsubscribe
But I do not know why this leads to the job's failure and recovery, since I
have set the tolerable checkpoint failure number to Integer.MAX_VALUE.
After the failure, my task managers failed because of task-cancellation
timeouts; about 80% of them went down that way.
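For reference, this is the knob in question; a minimal sketch of how it is set, assuming a Flink release recent enough to have this setter (the checkpoint interval below is a placeholder):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(60_000L); // placeholder interval
// Tolerate any number of failed checkpoints without triggering failover.
env.getCheckpointConfig().setTolerableCheckpointFailureNumber(Integer.MAX_VALUE);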
yidan zhao
Thank you, Caizhi, for looking into this and identifying the source of the
bug. Is there a way to work around this at the API level until this bug is
resolved? Can I somehow "inject" the type?
Thanks a lot for your help,
Matthias
On Thu, Aug 19, 2021 at 10:15 PM Caizhi Weng wrote:
> Hi!
>
>
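Until the bug is fixed, and assuming the DataStream API is in play here (the mapper and the String type below are purely illustrative), one way to "inject" a type by hand is the returns() hint, which overrides Flink's reflective type extraction:

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.DataStream;

DataStream<String> out = input
        .map(value -> transform(value)) // transform(...) is a placeholder
        // Explicitly provide the produced type instead of relying on
        // type extraction.
        .returns(Types.STRING);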
OK, thanks. I have some results, and you can confirm them. Here is the
problematic code: the async function's implementation. It does an async
Redis query and fills some data back.
In the code [ currentBatch.get(i).getD().put("ipLabel",
objects.getResponses().get(i)); ] the getD() returns a map attribute in
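For context, the pattern being described is Flink's async I/O; a rough sketch of it follows, where Event, getD(), and the Redis lookup are simplified stand-ins rather than code from this thread:

import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

// Event is assumed to expose a Map<String, Object> via getD().
public class IpLabelEnricher extends RichAsyncFunction<Event, Event> {
    @Override
    public void asyncInvoke(Event event, ResultFuture<Event> resultFuture) {
        // Run the (placeholder) Redis lookup off the main task thread.
        CompletableFuture.supplyAsync(() -> lookupIpLabel(event))
                .thenAccept(label -> {
                    // Fill the queried data back into the event's map attribute.
                    event.getD().put("ipLabel", label);
                    resultFuture.complete(Collections.singleton(event));
                });
    }

    private String lookupIpLabel(Event event) {
        return "unknown"; // placeholder for the real Redis query
    }
}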
Thanks, will try that.
From: Chesnay Schepler
Sent: Friday, August 20, 2021 8:06 AM
To: Colletta, Edward; user@flink.apache.org
Subject: Re: failures during job start
Hi David, I was able to get this working using your suggestion:
1) Deploy a Flink YARN session cluster, noting the host and port of the
session's JobManager.
2) Submit a Flink job using the session's details, i.e. submitting the job
with the ‘-m host:port’ option.
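For anyone finding this thread later, those two steps map roughly onto the following commands (the jar path is a placeholder, assuming a standard Flink distribution):

./bin/yarn-session.sh -d
./bin/flink run -m <jobmanager-host>:<port> path/to/my-job.jar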
Thanks for clearing
I use event time, with Kafka as my source.
The system that I am developing requires data to be aggregated every 15
minutes, so I am using a Tumbling Event Time window. However, my system is
also required to take action every 15 minutes even if there is no
activity. I need the elements collected in
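For context, the windowing described above looks roughly like the sketch below (the key selector and the combine function are placeholders). One caveat relevant to acting without activity: event-time windows only fire when the watermark advances, which requires incoming data, so emitting during idle periods usually needs processing-time timers, e.g. in a KeyedProcessFunction:

import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

input
        .keyBy(record -> record.getKey()) // getKey() is a placeholder
        .window(TumblingEventTimeWindows.of(Time.minutes(15)))
        .reduce((a, b) -> b);             // placeholder combine function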
I don't think there are any metrics; logging-wise you will need to do
some detective work.
We do know which tasks have started deployment by this message from the
JobManager:
ExecutionGraph [] - <task name> (<subtask>/<parallelism>) (<attempt id>) switched from SCHEDULED to DEPLOYING.
We also know which have completed deployment by
The machines in the cluster are configured to reach each other via internal network addresses; you will have to enable internal network access...
On 2021-08-20 18:56:58, 杨帅统 wrote:
>test.gl.cdh.node1 corresponds to the remote server's external address 139.9.132.*
>192.168.0.32:9866 is an internal address of another machine on the same internal network as the 139.9.132.* machine. Why is the internal address being returned?...
>On 2021-08-20 18:28:34, 东东 wrote:
>>Isn't it obvious? The connection to 192.168.0.32:9866 timed out.
Hi Suman,
> But I am always seeing the following code of `AbstractMapBundleOperator.java`:
`numOfElements` is always 0.
It is weird; please set a breakpoint at the line
`bundleTrigger.onElement(input);` in the `processElement` method to see
what happens when a record is processed by
test.gl.cdh.node1 corresponds to the remote server's external address 139.9.132.*
192.168.0.32:9866 is an internal address of another machine on the same internal network as the 139.9.132.* machine. Why is the internal address being returned?...
On 2021-08-20 18:28:34, 东东 wrote:
>Isn't it obvious? The connection to 192.168.0.32:9866 timed out.
>On 2021-08-20 18:13:10, 杨帅统 wrote:
>>// enable checkpointing
>>env.enableCheckpointing(5000L,
Isn't it obvious? The connection to 192.168.0.32:9866 timed out.
On 2021-08-20 18:13:10, 杨帅统 wrote:
>// enable checkpointing
>env.enableCheckpointing(5000L, CheckpointingMode.EXACTLY_ONCE);
>
>env.getCheckpointConfig().setCheckpointStorage("hdfs://test.gl.cdh.node1:8020/flink/flink-cdc-demo");
>System.setProperty("HADOOP_USER_NAME",
// enable checkpointing
env.enableCheckpointing(5000L, CheckpointingMode.EXACTLY_ONCE);
env.getCheckpointConfig().setCheckpointStorage("hdfs://test.gl.cdh.node1:8020/flink/flink-cdc-demo");
System.setProperty("HADOOP_USER_NAME", "root");
The error message is as follows:
org.apache.hadoop.net.ConnectTimeoutException: 6
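Not from this thread, but a commonly cited client-side workaround when HDFS hands back internal DataNode addresses is to make the client connect by hostname; whether it applies depends on the network setup, and it is usually configured in hdfs-site.xml rather than in code:

import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Connect to DataNodes via their hostnames instead of the internal IPs
// the NameNode reports; the hostnames must resolve to reachable addresses.
conf.setBoolean("dfs.client.use.datanode.hostname", true);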
Hello:
My scenario requires real-time statistics over a full day of data, with immediate output at each small window, and the latency cannot be too high; the cumulate window fits this requirement, while the tumble window's latency is too high.
On 2021-08-20 16:01:57, Caizhi Weng wrote:
>Hi!
>
>What you probably want is a tumble window rather than a cumulate window.
>
>On Fri, Aug 20, 2021 at 3:26 PM, 李航飞 wrote:
>
>> Is it possible, through configuration, to output every minute only the result computed for the current small window, and not output previously computed results again?
>>
>>
>> Current testing shows that the more data is input, the more data is output at the next emission,
>>
Hi!
What you probably want is a tumble window rather than a cumulate window.
On Fri, Aug 20, 2021 at 3:26 PM, 李航飞 wrote:
> Is it possible, through configuration, to output every minute only the result computed for the current small window, and not output previously computed results again?
>
>
> Current testing shows that the more data is input, the more data is output at the next emission,
> and the results of different windows are all output again in the next window,
>
>
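For readers of the archive: re-emitting earlier results at each step is how cumulate windows are defined to behave, which is why a tumble window was suggested. A minimal sketch of both, assuming Flink 1.13+ windowing TVFs and a hypothetical table events with timestamp column ts:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
// CUMULATE: one result per 1-minute step, accumulating over the day so far.
tEnv.executeSql(
        "SELECT window_start, window_end, COUNT(*) AS cnt "
                + "FROM TABLE(CUMULATE(TABLE events, DESCRIPTOR(ts), "
                + "INTERVAL '1' MINUTE, INTERVAL '1' DAY)) "
                + "GROUP BY window_start, window_end");
// TUMBLE: one result per window, covering only that window's data.
tEnv.executeSql(
        "SELECT window_start, window_end, COUNT(*) AS cnt "
                + "FROM TABLE(TUMBLE(TABLE events, DESCRIPTOR(ts), INTERVAL '1' MINUTE)) "
                + "GROUP BY window_start, window_end");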
Hi Jingsong,
I have created a JIRA ticket
https://issues.apache.org/jira/browse/FLINK-23891.
Best,
Yik San
On Fri, Aug 20, 2021 at 3:32 PM Yik San Chan
wrote:
> Hi Caizhi,
>
> Thanks for the workaround! It should work fine.
>
> Hi Jingsong,
>
> Thanks for the suggestion. Before creating a
To remove your address from the list, send a message to:
user-zh-unsubscr...@flink.apache.org
> On Aug 20, 2021 at 3:36 PM, 18221112048 <18221112...@163.com> wrote:
>
>
Hi Caizhi,
Thanks for the workaround! It should work fine.
Hi Jingsong,
Thanks for the suggestion. Before creating a JIRA ticket, I wonder if this
is considered a valid ask at first glance? If so, I will create a JIRA
ticket.
Best,
Yik San
On Fri, Aug 20, 2021 at 11:28 AM Jingsong Li
Is it possible, through configuration, to output every minute only the result computed for the current small window, and not output previously computed results again?
Current testing shows that the more data is input, the more data is output at the next emission,
and the results of different windows are all output again in the next window,
Thanks @Chesnay.
It's the latter. On Friday, August 20, 2021, 03:05:26 AM AST, Chesnay Schepler
wrote:
Is the problem that previously uploaded jars are no longer available (which
would be expected behavior), or that you cannot upload new jars? If it is the
latter, could you use the
Is the problem that previously uploaded jars are no longer available
(which would be expected behavior), or that you cannot upload new jars?
If it is the latter, could you use the developer tools of your browser to
check what response the UI receives when attempting to upload the jar?
On
Hi Chenyu,
This is indeed a problem that has not been solved yet; see the related JIRA issue at [1].
The discussion under the JIRA issue mentions a workaround: use -D\$internal.pipeline.job-id=$(cat
/proc/sys/kernel/random/uuid|tr -d "-") to proactively generate a random job id for jobs in application mode.
However, since this configuration parameter is a Flink internal parameter, backward compatibility across future changes may not be guaranteed, so please weigh this carefully before using it.
[1]
Hi Jing,
I tried using `MapBundleOperator` also (I am yet to test with
LinkedHashMap). But I am always seeing that in the following code of
`AbstractMapBundleOperator.java`, `numOfElements` is always 0. It is
never getting incremented. I replaced `TaxiFareStream` with
`MapBundleOperator`
FYI, I'm referring to the legacy offsets metric gauges.
On Thu, Aug 19, 2021 at 4:53 PM Mason Chen wrote:
> Hi all,
>
> We have found that the per-partition Kafka metrics contribute to a lot of
> metrics being indexed by our metrics system.
>
> We would still like to have the proxied kafka
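In case it helps anyone searching the archive: the legacy FlinkKafkaConsumer has a coarse switch that stops forwarding Kafka's metrics through Flink's metric system, per-partition gauges included. It is all-or-nothing, so it may drop more than wanted; a sketch:

import java.util.Properties;

Properties props = new Properties();
props.setProperty("bootstrap.servers", "broker:9092"); // placeholder
// Disable forwarding of Kafka metrics (including per-partition gauges)
// through Flink's metric system for this consumer.
props.setProperty("flink.disable-metrics", "true");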