Hi Tzu-Li,
Huge thanks for the input. I'll try to implement a prototype of your idea and
see if it meets my requirements.
On 9 January 2017 at 08:02, Tzu-Li (Gordon) Tai wrote:
> Hi Igor!
>
> What you can actually do is let a single FlinkKafkaConsumer consume from
> both…
> …the conclusions section)
> http://data-artisans.com/why-apache-beam/
>
> Thanks,
> Kostas
>
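The suggestion above (a single FlinkKafkaConsumer reading from both topics) can be sketched roughly as below. This is only an illustration under my own assumptions: the topic names, `group.id`, broker address, and the Kafka 0.9 connector version are not from the thread.

```java
// Sketch: one FlinkKafkaConsumer subscribed to several topics at once.
// Topic names, group id, and connector version are illustrative assumptions.
import java.util.Arrays;
import java.util.Properties;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class MultiTopicConsumer {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "my-group");

        // The consumer constructor also accepts a List of topics, so one
        // source instance reads from both.
        DataStream<String> stream = env.addSource(
                new FlinkKafkaConsumer09<>(
                        Arrays.asList("topicA", "topicB"),
                        new SimpleStringSchema(),
                        props));

        stream.print();
        env.execute("multi-topic consumer");
    }
}
```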
> On Jun 3, 2016, at 2:55 PM, Igor Berman <igor.ber...@gmail.com> wrote:
>
> Hi,
> according to this presentation by Tyler Akidau
> https://docs.google.com/presentation/d/13YZy2trPugC8Zr9M8_TfSApSCZBGUDZdzi-WUz95JJw/present
> Flink supports late arrivals for window processing, while I've seen several
> questions on the user list regarding late arrivals, and the answer was, sort
> of, "not for…
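For context on the late-arrivals question: to my knowledge, later Flink releases (1.1 and up; the version claim is my assumption, not from the thread) added `allowedLateness()` on windows. A minimal sketch, with an illustrative tuple type and key:

```java
// Sketch: a keyed time window that still accepts elements arriving up to
// 5 minutes after the watermark passes the window end. Event shape and
// durations are illustrative assumptions.
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;

public class LateArrivals {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        DataStream<Tuple2<String, Long>> events = env.fromElements(
                Tuple2.of("key", 1L), Tuple2.of("key", 2L));

        events.keyBy(0)
              .timeWindow(Time.minutes(1))
              // late elements (up to 5 min) re-trigger the window result
              .allowedLateness(Time.minutes(5))
              .sum(1)
              .print();

        env.execute("late arrivals");
    }
}
```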
thanks Alexander, I'll take a look
On 10 May 2016 at 13:07, lofifnc wrote:
> Hi,
>
> Some shameless self promotion:
>
> You can also checkout:
> https://github.com/ottogroup/flink-spector
> which has the goal of removing such hurdles when testing Flink programs.
>
>
…don't understand something?
On 9 May 2016 at 19:37, Igor Berman <igor.ber...@gmail.com> wrote:
> Any idea how to handle the following? (The message is clear, but I'm not
> sure what I need to do.)
> I'm opening a "generic" environment in my code
> (StreamExecutionEnvironment.getExecutionEnvironment())
> and JavaProgramTestBase configures a TestEnvironment...
> so what should I do to support custom tests?
1. Suppose I have a stream of different events (A, B, C). Each event will need
its own processing pipeline.
What is the recommended approach for splitting pipelines per event type? I can
apply a filter operator at the beginning. I can set up different jobs per
event type. I can hold every such event in…
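The filter-operator option mentioned above can be sketched as below: one source, one filter per event type, and an independent pipeline hanging off each filtered stream, all inside a single job. The event encoding and pipeline bodies are my own illustrative assumptions.

```java
// Sketch: split one stream into per-type sub-streams with filters, then
// attach a separate pipeline to each. Event format is an assumption.
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SplitPerEventType {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<String> events = env.fromElements("A:1", "B:2", "C:3");

        DataStream<String> aEvents = events.filter(e -> e.startsWith("A:"));
        DataStream<String> bEvents = events.filter(e -> e.startsWith("B:"));
        DataStream<String> cEvents = events.filter(e -> e.startsWith("C:"));

        // each filtered stream gets its own downstream pipeline
        aEvents.map(e -> "pipeline-A handled " + e).print();
        bEvents.map(e -> "pipeline-B handled " + e).print();
        cEvents.map(e -> "pipeline-C handled " + e).print();

        env.execute("one job, one pipeline per event type");
    }
}
```

The trade-off versus separate jobs per type: filters keep everything in one deployment but couple the pipelines' lifecycles; separate jobs isolate failures at the cost of consuming the source once per job.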
I think I've had this issue too and fixed it as Ufuk suggested
in core-site.xml
something like
fs.s3a.buffer.dir
/tmp
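Spelled out as a full core-site.xml property block (my reconstruction of the two lines above; only the key and value come from the thread):

```xml
<!-- core-site.xml: local buffer directory used by the s3a filesystem -->
<property>
  <name>fs.s3a.buffer.dir</name>
  <value>/tmp</value>
</property>
```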
On 4 May 2016 at 11:10, Ufuk Celebi wrote:
> Hey Chen Qin,
>
> this seems to be an issue with the S3 file system. The root cause is:
>
> Caused by:
1. Why are you doing a join instead of something like
System.currentTimeMillis()? At the end you have a tuple of your data with a
timestamp anyway... so why not just wrap your data in a Tuple2 with the
additional info of the creation ts?
2. Are you sure the consumer/producer machines' clocks are in sync?
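The wrapping idea in point 1 can be shown with plain Java: attach the creation timestamp to the payload at the producer, so no join with a separate timestamp stream is needed downstream. The pair type and method names here are my own illustration.

```java
// Sketch: pair each record with its creation timestamp instead of joining
// a separate timestamp stream. Names are illustrative assumptions.
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;

public class TimestampedWrap {

    // Wrap the payload with System.currentTimeMillis() at creation time.
    static Map.Entry<String, Long> wrap(String payload) {
        return new SimpleEntry<>(payload, System.currentTimeMillis());
    }

    public static void main(String[] args) {
        Map.Entry<String, Long> e = wrap("event-A");
        // Downstream latency is simply now - e.getValue(); no join required.
        System.out.println(e.getKey() + " created at " + e.getValue());
    }
}
```

Note that this measures end-to-end latency only if producer and consumer clocks are in sync, which is exactly the caveat in point 2.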
[FLINK-3854] Support Avro key-value rolling sink writer #1953
On 27 April 2016 at 19:29, Igor Berman <igor.ber...@gmail.com> wrote:
> Hi Aljoscha,
>
> avro-mapred jar contains different M/R output formats for Avro, and their
> writers;
> it's primarily used in M/R jobs that…
>
> Cheers,
> Aljoscha
>
> On Mon, 25 Apr 2016 at 21:24 Igor Berman <igor.ber...@gmail.com> wrote:
>
>> Hi,
>> it's not a problem, I'll find time to change it (I understand the
>> refactoring is in master and not released yet).
>> Wanted to ask if it's acceptable…
>
> It basically requires the writer to know the write position, so that we
> can truncate to a valid position in case of failure.
>
> Cheers,
> Aljoscha
>
> On Thu, 21 Apr 2016 at 18:40 Igor Berman <igor.ber...@gmail.com> wrote:
>
>> ok,
>> I have working…
…similar.
>
> Cheers,
> Aljoscha
>
> On Thu, 21 Apr 2016 at 09:45 Igor Berman <igor.ber...@gmail.com> wrote:
>
>> Hi All,
>> Is there such an implementation somewhere? (Before I start to implement it
>> myself; it seems not too difficult based on SequenceFileW…
for the sake of history (at the task manager level):
in conf/flink-conf.yaml:
env.java.opts: -Dmy-prop=bla -Dmy-prop2=bla2
On 17 April 2016 at 16:25, Igor Berman <igor.ber...@gmail.com> wrote:
> How do I provide Java arguments while submitting a job? Suppose I have some
> legacy component that depends on Java argument configuration.
> I suppose Flink reuses the same JVM for all jobs, so in general I can start
> the task manager with the desired arguments, but then all my jobs can't have
> different…
thanks a lot for the info, it seems not too complex.
I'll try to write a simple tool to read this state.
Aljoscha, does the key reflect a unique id of the operator in some way? Or is
the key just a "name" that is passed to ValueStateDescriptor?
thanks in advance
On 15 April 2016 at 15:10, Stephan Ewen
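On the key/name question above: as far as I know, the string given to ValueStateDescriptor is just the name the state is registered under within that operator (and scoped per key), not a globally unique operator id. A hedged sketch, with names that are my own illustration:

```java
// Sketch: "seen-count" below is only a registration name for this piece of
// state inside this operator; it does not identify the operator globally.
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

public class CountPerKey extends RichFlatMapFunction<String, Long> {
    private transient ValueState<Long> count;

    @Override
    public void open(Configuration parameters) {
        // name + type + default value; the name is local to this operator
        ValueStateDescriptor<Long> desc =
                new ValueStateDescriptor<>("seen-count", Long.class, 0L);
        count = getRuntimeContext().getState(desc);
    }

    @Override
    public void flatMap(String value, Collector<Long> out) throws Exception {
        long next = count.value() + 1;
        count.update(next);
        out.collect(next);
    }
}
```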
Yes, indeed this is the direction we are currently trying.
thanks
On 14 April 2016 at 18:31, Aljoscha Krettek <aljos...@apache.org> wrote:
> Could still be, as I described it by using a message queue to do the
> communication between Flink and the front end.
>
> On Thu, 14 Apr
Hi Aljoscha,
thanks for the response.
Synchronous in our case means that a request by the end-client to the frontend
(say some REST call) needs to wait until processing in the backend (Flink) is
done, and should return a response (e.g. an alert) back to the end-client
(i.e. end-client -> frontend -> Kafka -> Flink).
those…
Congratulations!
Very nice work, very interesting features.
One question regarding CEP: do you think it's feasible to define a pattern
over a window of 1 month or even more?
Is there some deep explanation of how these partial states are saved?
I mean, the events that create the "funnel" might be…