Hi Curtis,
we implemented this today, but without a REST interface. We transfer our
artifacts and a script with an scp call from our Bamboo server and execute the
script. This script kills the YARN application, starts a new Flink application
in YARN and submits all routes to the cluster.
I am using a rich window function in my streaming project. I want the "close"
method to get triggered after each window interval.
In my case, open gets executed only once in the lifetime, and the close method
doesn't get executed.
Can anybody help sort this out? I want a tear-down method after each
window interval.
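For context, a minimal sketch of the kind of rich window function being described, assuming the Java DataStream API (types and class name here are illustrative, not from the original mail):

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.windowing.RichWindowFunction;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

public class SummingWindowFunction
        extends RichWindowFunction<Long, Long, String, TimeWindow> {

    @Override
    public void open(Configuration parameters) {
        // Tied to the operator's lifecycle: runs once when the task
        // starts, not once per window.
    }

    @Override
    public void apply(String key, TimeWindow window,
                      Iterable<Long> input, Collector<Long> out) {
        long sum = 0;
        for (Long value : input) {
            sum += value;
        }
        out.collect(sum);
        // apply() is the one hook invoked once per window firing, so
        // per-window "tear down" logic would have to live here.
    }

    @Override
    public void close() {
        // Also lifecycle-scoped: runs once when the task shuts down.
    }
}
```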
Hi again, the following is from the dashboard while everything is supposedly
running. There is no real-time change in the sent/received/# of records... but
one node is definitely producing a *.out file, and all TMs are reporting in
their *.log files. And the process will eventually end, but very slowly. Thanks
I would like to be able to use Jenkins to deploy jobs to Flink.
I’ve seen talk of a REST interface that might allow me to do this:
https://issues.apache.org/jira/browse/FLINK-1228
Is there any documentation around this feature?
Thanks Aljoscha, that's why I am wondering about this. I don't see the
send/receive columns change at all, just 0's all the time. The only thing that
changes is the timestamp. Is this an indication that the nodes in the cluster
are not participating in the execution of the data? Thanks again. Amir-
Hi Aljoscha & Fabian,
Finally I got this working. Thanks for your help. In terms of persisting
the state (for S2), I tried to checkpoint every 10 seconds using a
FsStateBackend... What I notice is that the checkpoint duration is almost
2 minutes in many cases, while for the other cases it
/cc Robert, who is looking into extending the Kafka Connectors to support
more of Kafka's direct utilities
On Thu, Sep 22, 2016 at 3:17 PM, Swapnil Chougule
wrote:
> It will be good to have RawSchema as one of the deserialization schema in
> streaming framework (like
I have just noticed that this is exactly what it currently does. Reading the
docs I assumed all windows would be of the same size.
> On 22.09.2016 at 13:35, Maximilian Bode wrote:
>
> Hi everyone,
>
> is there an easy way to implement a tumbling event time window
Hi everyone,
is there an easy way to implement a tumbling event time window that tumbles at
a certain time? Examples could be daily or hourly windows (tumbling at exactly
00:00, 01:00, 02:00, etc.).
So in particular, for a daily window, the first window would be shorter than
the rest, tumble
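Flink's tumbling windows are aligned to the epoch by default, which is exactly what makes them fire at 00:00, 01:00, 02:00, and so on. A self-contained sketch of that alignment arithmetic, mirroring what Flink's TimeWindow does internally (the class and method names here are my own, for illustration):

```java
public class WindowAlignment {

    // Start of the tumbling window containing `timestamp`, aligned to
    // the epoch plus `offset` (all values in milliseconds).
    static long windowStart(long timestamp, long offset, long windowSize) {
        return timestamp - (timestamp - offset + windowSize) % windowSize;
    }

    public static void main(String[] args) {
        long hour = 60 * 60 * 1000L;
        long ts = 1474539443000L; // 2016-09-22T10:17:23Z
        // The hourly window containing ts starts exactly on the hour:
        System.out.println(windowStart(ts, 0L, hour));
        // prints 1474538400000 (2016-09-22T10:00:00Z)
    }
}
```

With offset 0 and a one-day window size, every window therefore starts at 00:00 UTC; a non-zero offset shifts the boundaries, e.g. to account for a different time zone.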
Actually I was wrong on the UDF point. By variables I meant the
information that is encoded in the scope, like the subtask index, task
name, taskmanager ID, etc.; however, all of these can be accessed from the
MetricGroup that is returned by RuntimeContext#getMetricGroup(), which
you can of course
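For example, registering a metric through that group could look like the following minimal sketch (the counter name and surrounding function are illustrative):

```java
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;

public class CountingMapper extends RichMapFunction<String, String> {

    private transient Counter counter;

    @Override
    public void open(Configuration config) {
        // The group returned here already carries the scope variables
        // (subtask index, task name, taskmanager ID, ...).
        this.counter = getRuntimeContext()
                .getMetricGroup()
                .counter("myCounter");
    }

    @Override
    public String map(String value) {
        counter.inc();
        return value;
    }
}
```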
Hi Fabian/ Chesnay
Can anybody give me permission to assign the JIRA issue (created for the same)?
Thanks,
Swapnil
On Tue, Sep 20, 2016 at 6:18 PM, Swapnil Chougule
wrote:
> Thanks Chesnay & Fabian for the update.
> I will create JIRA issue & open a pull request to fix it.
>
> Thanks,
Can you try running with DEBUG logging level?
Then you should see if input splits are assigned.
Also, you could try to use a debugger to see what's going on.
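For a standard Flink distribution, raising the level usually means editing conf/log4j.properties on the machines running the job (a sketch; the appender name may differ in your setup):

```properties
# Raise the global log level from INFO to DEBUG
log4j.rootLogger=DEBUG, file
```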
On Mon, Sep 19, 2016 at 2:04 PM, Yassine MARZOUGUI <
y.marzou...@mindlytix.com> wrote:
> Hi Chesnay,
>
> I am running Flink 1.1.2, and
Hi Luis,
using event-time windows, you should be able to generate some test data and
get predictable results.
Flink internally uses similar tests to ensure the correctness of the
windowing implementation (for example,
the EventTimeWindowCheckpointingITCase).
Regards,
Robert
On Mon, Sep 12, 2016
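A minimal sketch of such a deterministic event-time test, assuming the Java DataStream API (the element values and timestamps are illustrative):

```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class EventTimeWindowTest {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        env.fromElements(
                Tuple2.of("a", 1000L), Tuple2.of("a", 2000L), Tuple2.of("a", 6000L))
            // Timestamps come from the elements themselves, so the
            // window results do not depend on wall-clock time.
            .assignTimestampsAndWatermarks(
                new AscendingTimestampExtractor<Tuple2<String, Long>>() {
                    @Override
                    public long extractAscendingTimestamp(Tuple2<String, Long> e) {
                        return e.f1;
                    }
                })
            .keyBy(0)
            .window(TumblingEventTimeWindows.of(Time.seconds(5)))
            .sum(1)
            .print();

        env.execute("deterministic event-time test");
    }
}
```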
Hi,
Is there some way to emit a watermark in the trigger?
I see that in the evictor there is the option to check whether the
StreamRecord is a watermark... so I would hope that there is some option to
create them as well.
Hi,
to me, this looks like you are running into the problem described in
[FLINK-4603]: the KeyedStateBackend cannot restore user code classes. I have
opened a pull request (PR 2533) this morning that should fix this behavior as
soon as it is merged into master.
Best,
Stefan
> On 21.09.2016