Hi,
I want to override the method "openNewPartFile" in the BucketingSink class
so that it includes a timestamp in the part file name whenever it
rolls a new part file.
Can someone share some thoughts on how I can do this?
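In case it helps while you wait for a Flink-specific answer: independent of which BucketingSink method you end up overriding, the name-building part could be a small JDK-only helper like this (a sketch; the prefix/subtask/counter naming scheme is my assumption, not what BucketingSink does internally):

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class TimestampedPartFiles {

    // Builds a part-file name like "part-20171005-223642-0-3":
    // prefix, roll timestamp (UTC), subtask index, rolling part counter.
    static String partFileName(String prefix, long rollTimeMillis,
                               int subtaskIndex, int partCounter) {
        String ts = DateTimeFormatter.ofPattern("yyyyMMdd-HHmmss")
                .withZone(ZoneOffset.UTC)
                .format(Instant.ofEpochMilli(rollTimeMillis));
        return prefix + "-" + ts + "-" + subtaskIndex + "-" + partCounter;
    }

    public static void main(String[] args) {
        // 1507243002000L is 2017-10-05 22:36:42 UTC
        System.out.println(partFileName("part", 1507243002000L, 0, 3));
    }
}
```

The subtask index and counter are kept in the name so that parallel sink instances never collide, which is the invariant any custom naming scheme would need to preserve.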
Thanks a ton, in advance.
Regards,
Raja.
Fabian,
Just to follow up on this: I took the patch, compiled that class, and stuck
it into the existing 1.3.2 jar, and all is well. (I couldn't get all of
Flink to build correctly.)
Thank you!
On Wed, Sep 20, 2017 at 3:53 PM, Garrett Barton
wrote:
> Fabian,
>
Hi, I am running Flink 1.3.2 on Kubernetes, and I am not sure why one of my
TaskManagers is sometimes killed. Is there a way to debug this? Thanks
= Logs
2017-10-05 22:36:42,631 INFO
org.apache.flink.runtime.instance.InstanceManager - Registered
TaskManager at
Hi Ken,
I don't have much experience with streaming iterations.
Maybe Aljoscha (in CC) has an idea what is happening and if it can be
prevented.
Best, Fabian
2017-10-05 1:33 GMT+02:00 Ken Krugler :
> Hi all,
>
> I’ve got a streaming topology with an iteration, and
Hi,
Is there a way to catch the timeouts thrown from the async I/O operator?
We use the async I/O API to make some high-latency HTTP API calls. Currently,
when the underlying HTTP connection hangs and fails to time out within the
configured time, the async timeout kicks in and throws an exception, which
causes
Hi,
I'm noticing a weird issue with our Flink streaming job. We use the async
I/O operator, which makes an HTTP call, and in certain cases when the async
task times out, it throws an exception, causing the job to restart.
java.lang.Exception: An async function call terminated with an
exception.
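A general JDK-level pattern for this (not Flink's async I/O API itself; the helper below is purely illustrative) is to complete the result with a fallback value instead of an exception when the call does not finish in time, so a hung request never surfaces as a failure:

```java
import java.util.concurrent.*;

public class TimeoutFallback {

    static final ScheduledExecutorService TIMER =
            Executors.newSingleThreadScheduledExecutor();

    // Completes the returned future with `fallback` (instead of failing)
    // if `call` has not finished within `timeoutMillis`.
    static <T> CompletableFuture<T> withFallback(
            CompletableFuture<T> call, T fallback, long timeoutMillis) {
        CompletableFuture<T> result = new CompletableFuture<>();
        call.whenComplete((value, error) ->
                result.complete(error == null ? value : fallback));
        // First completion wins; a later complete() call is a no-op.
        TIMER.schedule(() -> result.complete(fallback),
                timeoutMillis, TimeUnit.MILLISECONDS);
        return result;
    }

    public static void main(String[] args) throws Exception {
        CompletableFuture<String> hung = new CompletableFuture<>(); // never completes
        System.out.println(withFallback(hung, "TIMED_OUT", 100).get());
        TIMER.shutdown();
    }
}
```

Inside a job, the fallback element could then be filtered out or routed elsewhere downstream rather than restarting the job.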
Hi,
As you noticed, Flink currently does not put Source-X and Throttler-X (for some
X) in the same task slot (TaskManager). In the low-level execution system,
there are two connection patterns: ALL_TO_ALL and POINTWISE. Flink will only
schedule Source-X and Throttler-X on the same slot when
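For illustration, the difference between the two connection patterns can be sketched as a channel-assignment function (this is not Flink's scheduler code; the helper names and the equal-parallelism assumption for POINTWISE are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class ConnectionPatterns {

    // ALL_TO_ALL: every producer subtask is wired to every consumer subtask,
    // so no producer/consumer pair is special and co-location is not implied.
    static List<Integer> allToAll(int producerIndex, int consumerParallelism) {
        List<Integer> targets = new ArrayList<>();
        for (int j = 0; j < consumerParallelism; j++) {
            targets.add(j);
        }
        return targets;
    }

    // POINTWISE with equal parallelism: producer subtask i feeds only
    // consumer subtask i, which is what makes it reasonable to place
    // Source-i and Throttler-i in the same slot.
    static List<Integer> pointwise(int producerIndex, int consumerParallelism) {
        return List.of(producerIndex % consumerParallelism);
    }

    public static void main(String[] args) {
        System.out.println(allToAll(1, 4));  // every consumer
        System.out.println(pointwise(1, 4)); // only consumer 1
    }
}
```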
Hi,
I am running a simple streaming Flink job (Flink versions 1.3.2 and 1.3.1)
whose source and sink are a Kafka 0.10.0.1 cluster.
I am testing savepoints by stopping/resuming the job and when I checked the
validity of the data sunk during the stop time I observed that some of the
events have been
Hello
I have set up a cluster and added TaskManagers manually with bin/taskmanager.sh
start.
I noticed that if I have 5 TaskManagers with one slot each and start a job
with -p5, then if I stop a TaskManager the job will fail even if there are 4
more TaskManagers.
Is this expected (I turned
Hi,
Yes, the AvroSerializer currently still partially uses Kryo for object copying.
Also, right now, I think the AvroSerializer is only used when the type is
recognized as a POJO, and that `isForceAvroEnabled` is set on the job
configuration. I’m not sure if that is always possible.
As
Hi,
I think you can implement that by writing a custom Trigger that combines the
functionality of CountTrigger and EventTimeTrigger. You should keep in mind,
though, that having windows of size 1 hour with a slide of 5 seconds will lead
to a lot of duplication, because in Flink every sliding window is
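Independent of Flink's Trigger API, the combined firing condition (fire early on a count threshold OR when the watermark passes the window end) can be sketched like this; the class and method names are made up, and a real implementation would extend Trigger and keep the count in partitioned state:

```java
public class CountOrEventTimeTrigger {

    private final long maxCount;
    private long count = 0;

    CountOrEventTimeTrigger(long maxCount) {
        this.maxCount = maxCount;
    }

    // Called per element; fires early once the count threshold is reached.
    boolean onElement() {
        count++;
        if (count >= maxCount) {
            count = 0;   // reset so the next batch starts fresh
            return true; // FIRE
        }
        return false;    // CONTINUE
    }

    // Called when the watermark advances; fires once it passes the window end.
    boolean onWatermark(long watermark, long windowEnd) {
        return watermark >= windowEnd;
    }
}
```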
Hi Bo,
I'm not familiar with Mesos deployments, but I'll forward this to Till or Eron
(in CC) who perhaps could provide some help here.
Cheers,
Gordon
On 2 October 2017 at 8:49:32 PM, Bo Yu (yubo1...@gmail.com) wrote:
Hello all,
This is Bo. I ran into some problems when I tried to use Flink in my
Hi Garrett,
thanks for reporting back!
Glad you could resolve the issue :-)
Best, Fabian
2017-10-05 23:21 GMT+02:00 Garrett Barton :
> Fabian,
>
> Turns out I was wrong. My flow was in fact running in two separate jobs
> due to me trying to use a local variable
Hi Vishal,
I think you're right! And thanks for looking into this so deeply.
With your last mail you're basically saying that the checkpoint could not be
restored because your HDFS was temporarily down. If Flink had not deleted that
checkpoint, it might have been possible to restore it at a