>>>>>>> On Tue, Apr 30, 2019 at 2:48 AM Maximilian Michels wrote:
>>>>>>>> > I wouldn't say one is, or will always be, in front of or behind
>>>>>>>> > another.
>>>>>>>> That's a great way to phrase it. I think it is very common to jump to
>>>>>>>> the conclusion that one system is better than the other. In reality
>>>>>>>> it's often much more complicated.
>>>>>>>>
>>>>>>>> For example, one of the things Beam has focused on was a language
>>>>>>>> portability framework. Do I get this with Flink? No. Does that mean
>>>>>>>> Beam
>>>>>>>> is better than Flink? No. Maybe a better question would be, do I
>>>>>>>> want to
>>>>>>>> be able to run Python pipelines?
>>>>>>>>
>>>>>>>> This is just an example, there are many more factors to consider.
>>>>>>>>
>>>>>>>> Cheers,
>>>>>>>> Max
>>>>>>>>
>>>>>>>> On 30.04.19 10:59, Robert Bradshaw wrote:
>>>>>>>> > Though we all certainly have our biases, I think it's fair to say
>>>>>>>> that
>>>>>>>> > all of these systems are constantly innovating, borrowing ideas
>>>>>>>> from
>>>>>>>> > one another, and have their strengths and weaknesses. I wouldn't
>>>>>>>> say
>>>>>>>> > one is, or will always be, in front of or behind another.
>>>>>>>> >
>>>>>>>> > Take, as the given example, Spark Structured Streaming. Of course
>>>>>>>> the
>>>>>>>> > API itself is Spark-specific, but it borrows heavily (among other
>>>>>>>> > things) on ideas that Beam itself pioneered long before Spark 2.0,
>>>>>>>> > specifically the unification of batch and streaming processing
>>>>>>>> into a
>>>>>>>> > single API, and the event-time based windowing (triggering) model
>>>>>>>> for
>>>>>>>> > consistently and correctly handling distributed, out-of-order data
>>>>>>>> > streams.
>>>>>>>> >
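A toy illustration of the event-time model described above, in plain Python (this is not Beam's API; it just shows why grouping by event time, rather than arrival time, makes out-of-order streams tractable):

```python
# Elements are grouped by when they *happened* (event time), not when they
# *arrived*, so an out-of-order stream still yields deterministic windows.
from collections import defaultdict

WINDOW = 10  # fixed window size in seconds (illustrative)

def window_counts(events):
    """events: iterable of (event_time_seconds, key), in any arrival order."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - ts % WINDOW  # window assigned by event time
        counts[(window_start, key)] += 1
    return dict(counts)

in_order = [(1, "a"), (4, "a"), (12, "b")]
shuffled = [(12, "b"), (1, "a"), (4, "a")]  # same data, out of order
assert window_counts(in_order) == window_counts(shuffled) == {
    (0, "a"): 2, (10, "b"): 1,
}
```

Beam layers watermarks and triggers on top of this idea to decide *when* a window's result may be emitted, even though late data may still arrive.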
>>>>>>>> > Of course there are also operational differences. Spark, for
>>>>>>>> example,
>>>>>>>> > is very tied to the micro-batch style of execution whereas Flink
>>>>>>>> is
>>>>>>>> > fundamentally very continuous, and Beam delegates to the
>>>>>>>> underlying
>>>>>>>> > runner.
>>>>>>>> >
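The micro-batch vs. continuous distinction can be sketched in a few lines of plain Python. This is illustrative only (real engines involve schedulers, checkpointing, and backpressure); it just shows the difference in result cadence:

```python
# Micro-batch engines (Spark-style) emit one result per batch interval;
# continuous engines (Flink-style) can emit per element. Same data,
# different cadence.
def micro_batch(events, batch_size):
    out = []
    for i in range(0, len(events), batch_size):
        out.append(sum(events[i:i + batch_size]))  # one result per batch
    return out

def continuous(events):
    total, out = 0, []
    for e in events:  # one result per element
        total += e
        out.append(total)
    return out

events = [1, 2, 3, 4]
assert micro_batch(events, 2) == [3, 7]     # two batch results
assert continuous(events) == [1, 3, 6, 10]  # four incremental results
```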
>>>>>>>> > It is certainly Beam's goal to keep overhead minimal, and one of
>>>>>>>> the
>>>>>>>> > primary selling points is the flexibility of portability (of both
>>>>>>>> the
>>>>>>>> > execution runtime and the SDK) as your needs change.
>>>>>>>> >
>>>>>>>> > - Robert
>>>>>>>> >
>>>>>>>> >
>>>>>>>> > On Tue, Apr 30, 2019 at 5:29 AM wrote:
>>>>>>>> >>
>>>>>>>> >> Of course! I suspect Beam will always be one or two steps
>>>>>>>> behind the new functionality that is available or yet to come.
>>>>>>>> >>
>>>>>>>> >> For example: Spark Structured Streaming is still not available,
>>>>>>>> no CEP APIs yet, and much more.
>>>>>>>> >>
>>>>>>>> >> Sent from my iPhone
>>>>>>>> >>
>>>>>>>> >> On Apr 30, 2019, at 12:11 AM, Pankaj Chand <
>>>>>>>> pankajchanda...@gmail.com> wrote:
>>>>>>>> >>
>>>>>>>> >> Will Beam add any overhead or lack certain API/functions
>>>>>>>> available in Spark/Flink?
>>>>>>>>
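The language-portability point in the thread above (write the pipeline once, choose the execution engine later) can be sketched in plain Python. This is a hypothetical toy, not Beam's actual portability framework; Beam represents pipelines in a runner-agnostic form and hands them to whichever runner is chosen at launch:

```python
# The pipeline is described as runner-agnostic data; a "runner" chosen at
# launch time interprets it. Any engine implementing the same contract can
# execute the same pipeline description unchanged.
pipeline = [("map", lambda x: x * 2), ("filter", lambda x: x > 2)]

def run(pipeline, data):
    """A toy runner: walks the pipeline description and applies each step."""
    for kind, fn in pipeline:
        data = map(fn, data) if kind == "map" else filter(fn, data)
    return list(data)

assert run(pipeline, [1, 2, 3]) == [4, 6]
```

In Beam, this same separation of pipeline description from execution is what lets a pipeline defined in Python run on, for example, a Flink cluster.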
>> >>>>>>>>> kant kodali wrote:
>> >>>>>>>>>> Staying behind doesn't imply one is better than the other and I
>> >>>>>>>>>> didn't mean that in any way but I fail to see how an [...]ks
>> >>>>>>>>>> then I would think the interface would need to be changed. Another
>> >>>>>>>>>> example would say the underlying execution engines take [...]
>>> >>>>>>> "Of course the API itself is Spark-specific, but it borrows
>>> >>>>>>> heavily (among other things) on ideas that Beam itself pioneered
>>> >>>>>>> long before Spark 2.0" Good to know.
>>> >>>>>>>
>>> >>>>>>> "one of the things Beam has focused on was a language portability
>>> >>>>>>> framework" Sure, but how important is this for a typical user? Do
>>> >>>>>>> people stop using a particular tool because it is in an X
>>> >>>>>>> language? I personally would put features first over language
>>> >>>>>>> portability, and it's completely fine that that may not be in line
>>> >>>>>>> with Beam's priorities. All said, I can agree that Beam's focus on
>>> >>>>>>> language portability is great.