Hi,
If you don’t care about losing some metrics, you can edit log4j.properties to
ignore it.
log4j.logger.org.apache.flink.runtime.metrics=ERROR
BTW, can all machines telnet to the Datadog port? And does the number of
requests exceed Datadog's processing capacity?
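For reference, a minimal sketch of how that line might sit in log4j.properties. The rootLogger line follows Flink's default layout, and the extra datadog logger is an assumption, added in case the warnings come from the reporter package itself:

```properties
# Sketch: suppress noisy metrics warnings below ERROR level.
# rootLogger/appender names follow Flink's default log4j.properties;
# adjust to your distribution.
log4j.rootLogger=INFO, file
log4j.logger.org.apache.flink.runtime.metrics=ERROR
# Assumption: silence the Datadog reporter's own logger too, since the
# warning originates from org.apache.flink.metrics.datadog.DatadogHttpClient.
log4j.logger.org.apache.flink.metrics.datadog=ERROR
```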
Original Message
From: Fanbin bufanbin
Hi Georg,
Have checked that already. With this I will be able to achieve dynamic
partitioning and a dynamic rules flow, but I cannot understand how I can
handle these rules at a particular instant, as the rule says A is down if 2
Bs are not down, so I need to wait for some time until I confirm 2 B
Hi,
Does anyone have any idea about the following error msg? (it flooded my task
manager log)
I do have Datadog metrics present, so this probably only happens for some
metrics.
2020-06-24 03:27:15,362 WARN  org.apache.flink.metrics.datadog.DatadogHttpClient - Failed sending request to Datad
It seems that I'm clearing the timers in the right way, but there is a new
timer created from the WindowOperator::registerCleanupTimer method. This one
is called from WindowOperator::processElement at the end of both if/else
branches.
How can I mitigate this? I don't want to have any "late firings" for m
I am trying to add some custom metrics to my window (because the window is
causing a lot of backpressure). However I can't seem to use a
RichAggregationFunction instead of an AggregationFunction. I am trying to
see how long things get held in our EventTimeSessionWindows.withGap window.
Is there ano
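Since the aggregate function's callbacks can't reach a metric group, one workaround is to carry timing inside the accumulator itself. A stdlib-only sketch of that idea, with the Flink wiring omitted and all class/field names made up:

```java
import java.util.concurrent.TimeUnit;

public class WindowTiming {
    // Hypothetical accumulator: remembers when its first element arrived.
    public static final class Acc {
        public long firstSeenNanos = -1L;
        public long count = 0L;
    }

    // Analogue of AggregateFunction.add(): stamp the first element's arrival.
    public static Acc add(Acc acc) {
        if (acc.firstSeenNanos < 0) {
            acc.firstSeenNanos = System.nanoTime();
        }
        acc.count++;
        return acc;
    }

    // Analogue of getResult(): how long elements have been held so far.
    public static long heldNanos(Acc acc) {
        return acc.firstSeenNanos < 0 ? 0L : System.nanoTime() - acc.firstSeenNanos;
    }

    public static void main(String[] args) throws InterruptedException {
        Acc acc = new Acc();
        add(acc);
        TimeUnit.MILLISECONDS.sleep(50);
        add(acc);
        System.out.println(acc.count + " elements held for >= "
                + TimeUnit.NANOSECONDS.toMillis(heldNanos(acc)) + " ms");
    }
}
```

The duration could then be emitted from the window result itself rather than through a metric.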
Thanks that makes sense.
On Tue, Jun 23, 2020 at 2:13 PM Laurent Exsteens <
laurent.exste...@euranova.eu> wrote:
> Hi Nick,
>
> On a project I worked on, we simply made the file accessible on a shared
> NFS drive.
> Our source was custom, and we forced it to parallelism 1 inside the job,
> so the
Why not use flink CEP?
https://flink.apache.org/news/2020/03/24/demo-fraud-detection-2.html
has a nice interactive example
Best,
Georg
Jaswin Shah wrote on Tue, 23 Jun 2020 at 21:03:
> Hi, I am thinking of using a rule engine like Drools with Flink to solve
> a problem described below:
>
Hi Nick,
On a project I worked on, we simply made the file accessible on a shared
NFS drive.
Our source was custom, and we forced it to parallelism 1 inside the job, so
the file wouldn't be read multiple times. The rest of the job was
distributed.
This was also on a standalone cluster. On a resour
Hi, I am thinking of using a rule engine like Drools with Flink to solve a
problem described below:
I have stream of events coming from kafka topic and I want to analyze those
events based on some rules and give the results in results streams when rules
are satisfied.
Now, I am able to solve
Hi guys,
What is the best way to process a file from a Unix file system, given there
is no guarantee as to which task manager will be assigned to process the
file? We run Flink in standalone mode. We currently follow the brute-force
way in which we copy the file to every task manager; is there a bet
Hi Marco Villalobos,
unfortunately I don't think a Tumbling window will work in my case.
The reasons:
1. A window must start only when there is a new event, and the previous
window is closed. A new Tumbling window is created just after the previous
one is purged. In my case I have to use a SessionWindow wher
Internally, we have our own ConfigurableCredentialsProvider. Based on the
config in core-site.xml, it does assume-role with the proper IAM
credentials using STSAssumeRoleSessionCredentialsProvider. We just need to
grant permission for the instance credentials to be able to assume the IAM
role for b
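As context, a sketch of how such a provider is typically wired into core-site.xml. The property keys are the standard Hadoop S3A ones; the provider class name and role ARN below are illustrative placeholders, not the actual internal values:

```xml
<configuration>
  <!-- S3A consults this provider chain for credentials; the class name is
       a placeholder for the internal ConfigurableCredentialsProvider. -->
  <property>
    <name>fs.s3a.aws.credentials.provider</name>
    <value>com.example.ConfigurableCredentialsProvider</value>
  </property>
  <!-- Role assumed via STS when using an assume-role provider (key per the
       Hadoop S3A documentation; ARN is illustrative). -->
  <property>
    <name>fs.s3a.assumed.role.arn</name>
    <value>arn:aws:iam::123456789012:role/example-role</value>
  </property>
</configuration>
```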
Hi Kristoff,
> On Jun 23, 2020, at 6:52 AM, KristoffSC
> wrote:
>
> Hi all,
> I'm using Flink 1.9.2 and I would like to ask about my use case and the
> approach I've taken to meet it.
>
> The use case:
> I have a keyed stream, where I have to buffer messages with logic:
> 1. Buffering should sta
One addition:
in clear method of my custom trigger I do call
ctx.deleteProcessingTimeTimer(window.maxTimestamp());
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
Hi,
When working with an ever growing key-space (let's say session ID), and a
SessionWindow with a ProcessFunction - should we worry about the state
growing indefinitely? Or does the window make sure to clean state after
triggers?
Thanks
Hi all,
I'm using Flink 1.9.2 and I would like to ask about my use case and the
approach I've taken to meet it.
The use case:
I have a keyed stream, where I have to buffer messages with logic:
1. Buffering should start only when message arrives.
2. The max buffer time should not be longer than 3 seco
Hey all,
we are currently migrating our Flink jobs from v1.9.1 to v1.10.1. The code
has been migrated as well as our Docker images (deploying on K8s using
standalone mode). Now an issue occurs if we use log4j2 and the Kafka
Appender which was working before. There are a lot of errors regarding
"Fai
I am afraid that you can be much more precise if you use System.nanoTime()
instead of System.currentTimeMillis() together with Thread.sleep(delay).
First because Thread.sleep is less precise [1], and second because you can
do fewer operations with System.nanoTime() in an empty loop. Like this:
whil
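A self-contained sketch of the busy-wait idea described above (class and method names are illustrative):

```java
public class PreciseDelay {
    // Busy-wait on System.nanoTime() until the deadline. This is more precise
    // than Thread.sleep, at the cost of keeping a core busy while spinning.
    public static void delayNanos(long nanos) {
        final long deadline = System.nanoTime() + nanos;
        while (System.nanoTime() < deadline) {
            Thread.onSpinWait(); // Java 9+: hints the CPU that we are spinning
        }
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        delayNanos(2_000_000L); // wait ~2 ms
        long elapsedMs = (System.nanoTime() - start) / 1_000_000L;
        System.out.println("waited about " + elapsedMs + " ms");
    }
}
```

The trade-off is CPU usage: spinning burns a core, so it only makes sense for short, precision-sensitive delays.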
Thanks for the suggestion.
After digging a bit, we've found it most convenient to just add labels to
all our Prometheus queries, like this:
flink_taskmanager_job_task_operator_currentOutputWatermark{job_name=""}
The job_name label will be exposed if you run your job with a job name like
this:
s
JobManager restart failed, throwing an exception:
org.apache.flink.runtime.client.JobExecutionException: could not set up
JobManager.
Cannot set up the user code libraries: file does not exist
/flink/recovery/appid/blob/job***. I cannot
find this file in HDFS, but the directory exists in HDFS.
Hi,
Maybe a bit crazy idea, but you could also try extending the S3
filesystem and add the metadata there. You could write a thin wrapper
for the existing filesystem. If you'd like to go that route you might
want to check this page[1]. You could use that filesystem with your
custom scheme.
Best,