Github user rangadi commented on the issue:

    https://github.com/apache/flink/pull/4239
  
    > Hmmm, are you sure about this thing? That would mean that Kafka doesn't 
support transactional parallel writes from two different process, which would 
be very strange. Could you point to a source of this information?
    
    It does not prohibit parallel transactions. It only restricts what a 
read_committed (EOS) consumer, which reads only committed messages, can see.
    
    See the 'Reading Transactional Messages' section in the KafkaConsumer JavaDoc: 
https://github.com/apache/kafka/blob/0.11.0/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L421
    
    > In read_committed mode, the consumer will read only those transactional 
messages which have been successfully committed. It will continue to read 
non-transactional messages as before. There is no client-side buffering in 
read_committed mode. Instead, the end offset of a partition for a 
read_committed consumer would be the offset of the first message in the 
partition belonging to an open transaction. This offset is known as the 'Last 
Stable Offset'(LSO).
    
    If there is an open transaction, read_committed (EOS) consumers do not read 
past its first message, i.e. the LSO.
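    
    For reference, the consumer-side switch is the `isolation.level` config. A 
minimal sketch of a read_committed consumer configuration (broker address and 
group id are placeholders, not from this PR):
    
    ```java
    import java.util.Properties;
    
    public class ReadCommittedConfig {
        public static Properties consumerProps() {
            Properties props = new Properties();
            // Placeholder broker address and group id.
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "eos-reader");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            // The relevant setting; the default is "read_uncommitted".
            // With read_committed, poll() only returns messages up to the LSO,
            // so nothing from a still-open transaction is visible.
            props.put("isolation.level", "read_committed");
            return props;
        }
    
        public static void main(String[] args) {
            System.out.println(consumerProps().getProperty("isolation.level"));
        }
    }
    ```
    
    Passing these properties to `new KafkaConsumer<>(props)` gives the behavior 
described in the JavaDoc quote above.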


