GitHub user StephanEwen commented on the pull request:

    https://github.com/apache/flink/pull/983#issuecomment-127949704
  
    Quoting from above:
    
    In many cases, the problem that Tuple0 consumes no bytes will be masked by
the fact that other data stored or shipped with the tuple advances the stream.
However, relying on that is very dangerous: as soon as you hit a case where
that is not given, it no longer works.
    
    Not being able to find a counterexample does not mean it is correct. 
    
    The network stack works because it ships metadata bytes per element. With 
the TypeSerializerInputFormat/TypeSerializerOutputFormat example, the output 
format writes an empty file for 10 elements, and the input format reads back 
either none or infinitely many. So there is at least one counterexample.
    
    What is so bad about creating a `Tuple0Serializer` that writes a dummy byte 
per tuple?
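    
    For illustration, a minimal sketch of that idea against Flink's
`TypeSerializer` base class (the class name and the byte value are just
placeholders, and `Tuple0` is assumed to live in
`org.apache.flink.api.java.tuple`). It is left abstract because the remaining
`TypeSerializer` methods, e.g. `duplicate()` or equality/snapshot hooks,
differ between Flink versions and are omitted here:
    
    ```java
    import java.io.IOException;
    
    import org.apache.flink.api.common.typeutils.TypeSerializer;
    import org.apache.flink.api.java.tuple.Tuple0;
    import org.apache.flink.core.memory.DataInputView;
    import org.apache.flink.core.memory.DataOutputView;
    
    // Sketch: one dummy byte per Tuple0, so every record advances the stream
    // and the record count can be recovered on read.
    public abstract class Tuple0SerializerSketch extends TypeSerializer<Tuple0> {
    
        private static final long serialVersionUID = 1L;
    
        @Override
        public boolean isImmutableType() {
            return true; // Tuple0 carries no state
        }
    
        @Override
        public Tuple0 createInstance() {
            return new Tuple0();
        }
    
        @Override
        public Tuple0 copy(Tuple0 from) {
            return from;
        }
    
        @Override
        public Tuple0 copy(Tuple0 from, Tuple0 reuse) {
            return reuse;
        }
    
        @Override
        public int getLength() {
            return 1; // fixed length: exactly one dummy byte per record
        }
    
        @Override
        public void serialize(Tuple0 record, DataOutputView target) throws IOException {
            target.writeByte(42); // the dummy byte; its value does not matter
        }
    
        @Override
        public Tuple0 deserialize(DataInputView source) throws IOException {
            source.readByte(); // consume the dummy byte
            return new Tuple0();
        }
    
        @Override
        public Tuple0 deserialize(Tuple0 reuse, DataInputView source) throws IOException {
            source.readByte(); // consume the dummy byte
            return reuse;
        }
    
        @Override
        public void copy(DataInputView source, DataOutputView target) throws IOException {
            target.writeByte(source.readByte()); // forward the dummy byte
        }
    }
    ```
    
    Because the length is a fixed one byte per record, the number of records
written can always be recovered from the stream length, which is exactly what
the zero-byte variant cannot provide.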

