Hi Jaromir,

You can make use of a custom trigger and set the allowed lateness to the
maximum value.

I have kept the custom trigger code (EventTimeTrigger) the same as in Flink
1.0.3. Doing this, late elements are not discarded; they are still assigned
to their windows, and you can then decide what to do with them in the
window evaluation function.

This is what I have done, since I cannot afford to lose any financial data.
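
Roughly, the wiring looks like the sketch below. The names Trade, trades,
getAccountId and MyWindowFunction are just placeholders for your own types,
and the trigger is only modelled on the old 1.0.x EventTimeTrigger, so
please treat this as a sketch rather than my exact production code:

// imports from org.apache.flink.streaming.api.* omitted for brevity
DataStream<Trade> trades = ...; // your Kafka source (placeholder)

trades
    .keyBy(new KeySelector<Trade, String>() {
        @Override
        public String getKey(Trade trade) {
            return trade.getAccountId(); // placeholder key
        }
    })
    .window(TumblingEventTimeWindows.of(Time.minutes(1)))
    // effectively "infinite" lateness, so late elements are never dropped
    .allowedLateness(Time.milliseconds(Long.MAX_VALUE))
    .trigger(new LegacyEventTimeTrigger())
    // decide here what to do with the late elements
    .apply(new MyWindowFunction());

The trigger itself keeps the pre-1.1 behaviour: every element just registers
a timer for the end of its window, and the window fires without purging its
contents, so a late arrival simply causes its window to fire again instead
of being discarded:

import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

public class LegacyEventTimeTrigger extends Trigger<Object, TimeWindow> {

    @Override
    public TriggerResult onElement(Object element, long timestamp,
                                   TimeWindow window, TriggerContext ctx) throws Exception {
        // unlike the 1.1 trigger, there is no immediate FIRE for late
        // elements; for late data the timer below is already past the
        // watermark, so the window re-fires on the next watermark
        ctx.registerEventTimeTimer(window.maxTimestamp());
        return TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onEventTime(long time, TimeWindow window,
                                     TriggerContext ctx) throws Exception {
        // FIRE (not FIRE_AND_PURGE) keeps the window contents around
        return time == window.maxTimestamp()
                ? TriggerResult.FIRE
                : TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onProcessingTime(long time, TimeWindow window,
                                          TriggerContext ctx) throws Exception {
        return TriggerResult.CONTINUE;
    }

    @Override
    public void clear(TimeWindow window, TriggerContext ctx) throws Exception {
        ctx.deleteEventTimeTimer(window.maxTimestamp());
    }
}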


Regards,
Vinay Patil

On Mon, Nov 7, 2016 at 10:58 PM, Till Rohrmann [via Apache Flink Mailing
List archive.] <ml-node+s1008284n14424...@n3.nabble.com> wrote:

> You're right if you want to guarantee a deterministic computation for an
> arbitrary allowed lateness. In the general case, you would never be able
> to calculate the final result of a window in finite time, because there
> might always be another element which arrives later. However, for most
> practical use cases you can define an upper bound for the allowed
> lateness which you can use to calculate your final result. If not, then
> you will simply run out of storage capacity at some point in time,
> because you have to keep some state around for this late element (in the
> general case).
>
> Cheers,
> Till
>
> On Mon, Nov 7, 2016 at 5:55 PM, Jaromir Vanek <[hidden email]> wrote:
>
> > Hi Till, thank you for your answer.
> >
> > I am afraid defining an allowed lateness won't help. It will just shift
> > the problem by a constant amount of time. If we agree that an element
> > can arrive an arbitrary time after the watermark (depending on network
> > latency), it may or may not be assigned to the window, depending on
> > whether it comes before or after the allowed lateness period expires.
> > The element may then be counted in or discarded.
> >
> > It still seems the results are not deterministic. In other words, if I
> > run the job reading from Kafka multiple times, I may get different
> > results depending on external conditions like network and cluster
> > stability.
> >
> > Please correct me if I'm wrong.
> >
> > J.V.
> >
>
>




