[ https://issues.apache.org/jira/browse/FLINK-1967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14611837#comment-14611837 ]

ASF GitHub Bot commented on FLINK-1967:
---------------------------------------

Github user gyfora commented on the pull request:

    https://github.com/apache/flink/pull/879#issuecomment-118008845
  
    Also, it may be completely unnecessary to automatically attach a source
    timestamp if there are no windowing operators in the job.

    One other thing that came to mind: to keep "deterministic" results after a
    failure, we should persist the data with the timestamps that were attached
    at the sources. Are we planning to do this? I guess this question goes hand
    in hand with automatic source-level backup even without Kafka. I just
    wanted to bring it up.
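
A minimal sketch of that point, using a hypothetical TimestampedRecord wrapper (not an existing Flink class): if the timestamp is persisted together with the element at the source, window assignment after a replay is a pure function of the stored timestamp and therefore deterministic, whereas re-stamping replayed elements from the wall clock could move them into different windows.

    // Hypothetical sketch, not Flink's actual API: why the source-assigned
    // timestamp should be persisted with the record for deterministic replay.
    import java.io.Serializable;

    public class TimestampedRecord<T> implements Serializable {
        private final T value;
        private final long timestamp;   // assigned once, at the source

        public TimestampedRecord(T value, long timestamp) {
            this.value = value;
            this.timestamp = timestamp;
        }

        public T getValue() { return value; }
        public long getTimestamp() { return timestamp; }

        /** Window assignment is a pure function of the persisted timestamp,
         *  so replaying the persisted record yields the same window. */
        public long windowStart(long windowSizeMillis) {
            return timestamp - (timestamp % windowSizeMillis);
        }
    }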


> Introduce (Event)time in Streaming
> ----------------------------------
>
>                 Key: FLINK-1967
>                 URL: https://issues.apache.org/jira/browse/FLINK-1967
>             Project: Flink
>          Issue Type: Improvement
>            Reporter: Aljoscha Krettek
>            Assignee: Aljoscha Krettek
>
> This requires introducing a timestamp in the streaming record and a change in 
> the sources to add timestamps to records. This will also introduce punctuations 
> (or low watermarks) to allow windows to work correctly on unordered, 
> timestamped input data. In the process, the windowing subsystem also 
> needs to be adapted to use the punctuations. Furthermore, all operators need 
> to be made aware of punctuations and correctly forward them. Finally, a new 
> operator must be introduced to allow modification of timestamps.
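
A minimal sketch of the punctuation forwarding described above, using a hypothetical WatermarkMerger helper rather than actual Flink internals: a punctuation with time t on one input promises that no element with a smaller timestamp will arrive on that input, so an operator with several inputs may only forward the minimum punctuation it has seen across all of them, otherwise downstream windows could fire too early.

    // Hypothetical sketch, not the actual Flink implementation: tracking the
    // last punctuation (low watermark) per input channel and forwarding only
    // the minimum once it advances.
    import java.util.Arrays;

    public class WatermarkMerger {
        private final long[] inputWatermarks;   // last punctuation seen per input
        private long emittedWatermark = Long.MIN_VALUE;

        public WatermarkMerger(int numInputs) {
            inputWatermarks = new long[numInputs];
            Arrays.fill(inputWatermarks, Long.MIN_VALUE);
        }

        /** Called when a punctuation arrives on one input; returns the new
         *  watermark to forward downstream, or null if it did not advance. */
        public Long onWatermark(int input, long watermark) {
            inputWatermarks[input] = Math.max(inputWatermarks[input], watermark);
            long min = Arrays.stream(inputWatermarks).min().getAsLong();
            if (min > emittedWatermark) {
                emittedWatermark = min;
                return min;      // event time has advanced to 'min' on all inputs
            }
            return null;         // no progress, forward nothing
        }
    }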



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
