Hi Jeff,
I think the purpose of this tool is to allow users to play with the memory
configurations without needing to actually deploy a Flink cluster or even
have a job. As for sanity checks, we currently have them in the start-up
scripts (for standalone clusters) and resource managers (on K8s/Yarn/Me
Thanks for your feedback, @Xintong and @Jeff.
@Jeff
I think it would always be good to leverage existing logic in Flink, such
as JobListener. However, this calculator does not only aim to check for
conflicts; it also aims to provide the calculation result to the
user before the job is actually depl
Thanks for the reply, @Zhijiang, @Congxian!
@Congxian
$current_processing - $event_time works for event time. How about
processing time? Is there a good way to measure the latency?
Best
Lu
On Sun, Mar 29, 2020 at 6:21 AM Zhijiang wrote:
> Hi Lu,
>
> Besides Congxian's replies, you can also get som
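The event-time latency formula in the question ($current_processing - $event_time) can be sketched as a small self-contained snippet. This is only an illustration of the arithmetic; the method name is hypothetical, and in a real Flink job the event timestamp would come from the record or its watermark, not from a local variable:

```java
import java.time.Instant;

public class LatencySketch {
    // Event-time latency = wall-clock time at processing minus the event's own timestamp.
    static long eventTimeLatencyMillis(long eventTimestampMillis, long processingTimeMillis) {
        return processingTimeMillis - eventTimestampMillis;
    }

    public static void main(String[] args) {
        // Hypothetical event stamped 1.5 seconds before we process it.
        long eventTs = Instant.now().toEpochMilli() - 1_500;
        long latencyMs = eventTimeLatencyMillis(eventTs, Instant.now().toEpochMilli());
        System.out.println("event-time latency ms ~ " + latencyMs);
    }
}
```

For processing time there is no per-record timestamp to subtract, which is why the question is harder; this sketch only covers the event-time case.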
I was able to get generic types to work when I used GenericTypeInfo and
made sure to wrap the generic in some concrete type. In my case I used
scala.Some as the wrapper. It looks something like this (in Scala):
import org.apache.flink.api.java.typeutils.GenericTypeInfo
val descriptor = new ListSta
Hi Yangze,
Does this tool just parse the configuration in flink-conf.yaml? Maybe it
could be done in JobListener [1] (we should enhance it by adding a hook
before job submission), so that it could cover all the cases (e.g. parameters
coming from the command line).
[1]
https://github.com/apache/flink/blob/m
Thanks Yangze, I've tried the tool and I think it's very helpful.
Thank you~
Xintong Song
On Mon, Mar 30, 2020 at 9:40 AM Yangze Guo wrote:
> Hi, Yun,
>
> I'm sorry that it currently cannot handle that. But I think it is a
> really good idea and that feature would be added to the next versi
Hi, Yun,
I'm sorry that it currently cannot handle that. But I think it is a
really good idea and that feature would be added to the next version.
Best,
Yangze Guo
On Mon, Mar 30, 2020 at 12:21 AM Yun Tang wrote:
>
> Very interesting and convenient tool, just a quick question: could this tool
Hello Yun,
I see this error reported by:
*org.apache.flink.runtime.webmonitor.WebMonitorUtils* - *JobManager log
files are unavailable in the web dashboard. Log file location not found in
environment variable 'log.file' or configuration key 'Key: 'web.log.path' ,
default: null (fallback keys: [{k
Hi Vitaliy
The 'log.file' property would be configured if you have uploaded 'logback.xml'
or 'log4j.properties' [1].
The file would contain the logs of the job manager or task manager, which is
decided by the component itself. And as you can see, this is only a local file
path; I am afraid this cannot un
Very interesting and convenient tool, just a quick question: could this tool
also handle cluster deployment commands like "-tm" mixed with configuration in
`flink-conf.yaml`?
Best
Yun Tang
From: Yangze Guo
Sent: Friday, March 27, 2020 18:00
To: user ; user...@f
Hi :) I have finally figured it out :)
On top of changes from last email,
in my test method, I had to wrap "testHarness.processElement" in a
synchronized block, like this:
@Test
public void foo() throws Exception {
    synchronized (this.testHarness.getCheckpointLock()) {
        testHarness.proce
Hi Lu,
Besides Congxian's replies, you can also get some further explanations from
"https://flink.apache.org/2019/07/23/flink-network-stack-2.html#latency-tracking".
Best,
Zhijiang
--
From: Congxian Qiu
Send Time: 2020 Mar. 28 (Sa
Dear community,
happy to share this week's Apache Flink community digest with a couple of
threads around the upcoming release of Apache Flink Stateful Functions 2.0,
an update on Flink 1.10.1, two FLIPs to improve Apache
Flink's distributed runtime and the schedule for Flink Forward Virtual
Confer
Hi,
another update on this one.
I managed to make the workaround a little bit cleaner.
The test setup I have now is like this:
ByteArrayOutputStream streamEdgesBytes = new ByteArrayOutputStream();
ObjectOutputStream oosStreamEdges = new ObjectOutputStream(streamEdgesBytes);
oosStreamEd
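The workaround above uses the standard Java serialization round-trip (ObjectOutputStream over a ByteArrayOutputStream). A self-contained sketch of that pattern, with a plain String standing in for the actual stream-edge objects being serialized in the test:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;

public class SerializationRoundTrip {
    // Serialize a Serializable object to a byte array, mirroring the
    // ByteArrayOutputStream + ObjectOutputStream workaround described above.
    static byte[] toBytes(Serializable obj) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bytes)) {
            oos.writeObject(obj);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bytes.toByteArray();
    }

    // Deserialize the bytes back into an object.
    static Object fromBytes(byte[] data) {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return ois.readObject();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // A plain String stands in for the actual stream-edge objects.
        Object restored = fromBytes(toBytes("stream-edges-placeholder"));
        System.out.println(restored);
    }
}
```

The same pattern works for any `Serializable` payload; the try-with-resources blocks ensure the streams are flushed and closed before the byte array is read.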
cc user@f.a.o
Hi Siva,
I am not aware of a Flink MongoDB Connector in either Apache Flink, Apache
Bahir or flink-packages.org. I assume that you are doing idempotent
upserts, and hence do not require a transactional sink to achieve
end-to-end exactly-once results.
To build one yourself, you impl
Thanks!
What am I supposed to put in the apply/process function for the sink to be
invoked on a List of items?
Sidney Feiner / Data Platform Developer
M: +972.528197720 / Skype: sidney.feiner.startapp
From: tison
Sent: Sunday, March 22, 2020