Hi, new to Apache Flink. Trying to find some solid input on how best to
handle exceptions in streams -- specifically those that should not
interrupt the stream.
For example, if an error occurs during deserialization from bytes/Strings
to your data-type, in my use-case I would rather queue the
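(The message above is cut off. The pattern it asks about — tolerating deserialization failures without failing the stream — can be sketched without any Flink dependency. In a Flink job this logic would typically live inside a FlatMapFunction so that a corrupt record emits nothing, or is queued to a dead-letter sink, instead of throwing and restarting the job. All names below are illustrative, not Flink API.)

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// Hypothetical sketch: wrap deserialization so a bad record is queued to a
// dead-letter list instead of failing the stream. Assumes the payload is an
// integer encoded as a UTF-8 string.
public class SafeDeserialize {
    public static final List<String> deadLetters = new ArrayList<>();

    public static Optional<Integer> deserialize(byte[] bytes) {
        String s = new String(bytes, StandardCharsets.UTF_8);
        try {
            return Optional.of(Integer.parseInt(s.trim()));
        } catch (NumberFormatException e) {
            deadLetters.add(s); // queue the bad record instead of throwing
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        byte[][] input = { "42".getBytes(), "oops".getBytes(), "7".getBytes() };
        List<Integer> good = new ArrayList<>();
        for (byte[] b : input) {
            deserialize(b).ifPresent(good::add); // bad records are skipped
        }
        System.out.println("good=" + good + " deadLetters=" + deadLetters);
    }
}
```

The same shape works inside a `flatMap`: emit only on success, and the stream keeps running past corrupt input.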
Hi Stephan,
I just wrote an answer to your SO question.
Best, Fabian
2016-11-10 11:01 GMT+01:00 Stephan Epping :
> Hello,
>
> I found this question in the Nabble archive (http://apache-flink-user-
> mailing-list-archive.2336050.n4.nabble.com/Maintaining-
>
Hi Miguel,
I tried to reproduce the problem in a similar setup using Flink in Docker
with 2 workers:
CONTAINER ID   IMAGE                   COMMAND   CREATED   STATUS   PORTS   NAMES
3fbaf5876e31   melentye/flink:latest
Hi Till. I just checked and my test finished after 19 hours with the config
below. The expected Linear Road test time is 3.5 hours. I have achieved this for
1/2 the data I sent yesterday. But for 105 G worth of tuples I get 19 hours. No
exceptions, no errors. Clean. But almost 5 times slower than
Thanks Till. I did all of that, with one difference: I have only 1 topic with 64
partitions, corresponding to the total number of slots across all Flink servers.
Can you elaborate on "As long as you have more Kafka topics than Flink Kafka
consumers (subtasks)", please? Perhaps that's the bottleneck in my
Hi Aljoscha,
alright, for the time being I have modified the WindowOperator and built
flink-streaming-java for our team. When you only change the
WindowOperator class, is it safe to just bundle it with the job? I.e.
does this class have precedence over the class in the binary bundle of
Flink?
Hi,
We let watermark proceed at the earliest timestamp among all event types.
Our test result looks correct.
/*
 * Watermark proceeds at the earliest timestamp among all the event types
 */
public class EventsWatermark implements
AssignerWithPeriodicWatermarks<...> {
private
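(The class above is truncated, so here is a self-contained sketch of the idea it describes: the real thing would implement Flink's AssignerWithPeriodicWatermarks, but this stand-in shows the logic — track the latest timestamp seen per event type and emit a watermark at the minimum across types, so the watermark only advances once every event type has passed that point in time. Method names are analogies, not Flink's interface.)

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: the watermark is held back by the slowest event type.
public class EventsWatermarkSketch {
    private final Map<String, Long> latestPerType = new HashMap<>();

    // Called per element (analogous to extractTimestamp).
    public void onEvent(String eventType, long timestamp) {
        latestPerType.merge(eventType, timestamp, Math::max);
    }

    // Called periodically (analogous to getCurrentWatermark):
    // the minimum of the per-type maxima.
    public long currentWatermark() {
        return latestPerType.values().stream()
                .mapToLong(Long::longValue)
                .min()
                .orElse(Long.MIN_VALUE);
    }

    public static void main(String[] args) {
        EventsWatermarkSketch wm = new EventsWatermarkSketch();
        wm.onEvent("click", 100L);
        wm.onEvent("purchase", 50L);
        wm.onEvent("click", 120L);
        // "purchase" lags behind, so the watermark stays at 50.
        System.out.println(wm.currentWatermark());
    }
}
```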
Hey there,
I don't really understand what Broadcast does. Does it, in a way, export the
elements from a DataSet into a Collection? Because then it might be what I'm
looking for.
When implementing algorithms in Flink Gelly I keep getting stuck on what I
cannot do with DataSets. For example, if I want
Hi,
there were some discussions on the ML and it seems that the consensus is to
aim for a release this year.
Let me think a bit more and get back to you on the other issues.
Cheers,
Aljoscha
On Thu, 10 Nov 2016 at 11:47 Konstantin Knauf
wrote:
> Hi Aljoscha,
>
>
Hi Aljoscha,
unfortunately, I think, FLINK-4994 would not solve our issue. What does
"on the very end" mean in case of a GlobalWindow?
FLINK-4369 would fix my workaround though. Is there already a timeline
for Flink 1.2?
Cheers,
Konst
On 10.11.2016 10:19, Aljoscha Krettek wrote:
> Hi
The amount of data should be fine. Try to set the number of slots to the
number of cores you have available.
As long as you have more Kafka topics than Flink Kafka consumers (subtasks)
you should be fine. But I think you can also decrease the number of Kafka
partitions a little bit. I guess that
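(The scheduling arithmetic behind this advice can be sketched independently of Flink: Kafka partitions are distributed across consumer subtasks, so with fewer partitions than subtasks some subtasks sit idle, and with more partitions than subtasks each subtask reads several. The round-robin assignment below is illustrative, not Flink's actual assignment code.)

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical round-robin assignment of partitions to consumer subtasks.
public class PartitionAssignment {
    public static List<List<Integer>> assign(int numPartitions, int numSubtasks) {
        List<List<Integer>> perSubtask = new ArrayList<>();
        for (int i = 0; i < numSubtasks; i++) {
            perSubtask.add(new ArrayList<>());
        }
        for (int p = 0; p < numPartitions; p++) {
            perSubtask.get(p % numSubtasks).add(p);
        }
        return perSubtask;
    }

    public static void main(String[] args) {
        // 64 partitions over 64 subtasks: exactly one partition each.
        System.out.println(assign(64, 64).get(0));
        // 4 partitions over 8 subtasks: subtasks 4..7 get nothing and idle.
        System.out.println(assign(4, 8));
    }
}
```

This is why matching 64 partitions to 64 total slots, as described above, keeps every slot busy with exactly one partition.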
Hi,
this is not possible in the current implementation. Lateness is determined
at the WindowOperator and within a Trigger you can actually check the
current watermark and therefore determine if a window is late or not.
Cheers,
Aljoscha
On Tue, 8 Nov 2016 at 17:19 Seth Wiesman
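(The check Aljoscha describes — comparing the current watermark against the window's end inside a Trigger — can be shown as a minimal stand-in. In Flink this would use `TriggerContext.getCurrentWatermark()` and `TimeWindow.maxTimestamp()`; the names below are a self-contained sketch under that assumption, not Flink's API surface.)

```java
// Hypothetical sketch: decide inside a Trigger whether a window is late.
public class LatenessCheck {
    // For a time window [start, end), the max timestamp is end - 1,
    // matching the convention Flink's TimeWindow uses.
    public static long maxTimestamp(long windowStart, long windowEnd) {
        return windowEnd - 1;
    }

    // A window counts as late once the watermark has passed its max timestamp.
    public static boolean isWindowLate(long currentWatermark, long windowStart, long windowEnd) {
        return currentWatermark > maxTimestamp(windowStart, windowEnd);
    }

    public static void main(String[] args) {
        // Watermark 999 equals maxTimestamp of [0, 1000): not late yet.
        System.out.println(isWindowLate(999L, 0L, 1000L));
        // Watermark 1500 has passed the window: late.
        System.out.println(isWindowLate(1500L, 0L, 1000L));
    }
}
```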
Hi Konstantin,
elements not being evicted is a bug that should be fixed for Flink
1.2: https://issues.apache.org/jira/browse/FLINK-4369.
The check for non-existing window state when triggering was introduced
because otherwise a Trigger could return FIRE and then there would be
nothing