Github user zentol commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1640#discussion_r52995692
  
    --- Diff: docs/apis/streaming/fault_tolerance.md ---
    @@ -176,6 +176,11 @@ state updates) of Flink coupled with bundled sinks:
             <td></td>
         </tr>
         <tr>
    +        <td>Cassandra sink</td>
    +        <td>exactly once</td>
    --- End diff ---
    
    which is also not true. A Flink failure while data is being written to 
Cassandra will cause duplicates. You could only claim exactly-once if writing 
the data into the final table were handled entirely by Cassandra itself (for 
example, by writing into a temporary table, exporting it to CSV, and importing 
that into the target table; then the only way to get duplicates would be if 
Cassandra failed during the import).

