The connector issues CQL requests through the Java driver under the hood, which means it sees the database changing just as a normal application would. As a result, a retry may return a different set of data than the original request if the underlying database changed in between.
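To make that concrete, here is a minimal sketch in plain Python (no real Cassandra driver; `table` and `fetch` are hypothetical stand-ins) showing why a retried read can return different rows than the original attempt when each attempt re-executes the query against live data:

```python
# Pretend Cassandra partition; a plain dict stands in for the table.
table = {1: "a", 2: "b"}

def fetch():
    # Each attempt issues a fresh query, like the connector's CQL reads,
    # so it observes whatever state the "database" is in right now.
    return dict(table)

first = fetch()   # original request sees {1: "a", 2: "b"}
table[2] = "b2"   # concurrent update between attempts
del table[1]      # concurrent delete between attempts
retry = fetch()   # the retry re-executes the query: {2: "b2"}
```

The retry is not a replay of the original result set; it is a new read against whatever the database holds at retry time.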
On Fri, Jun 26, 2020, 9:42 PM Jungtaek
I'm not sure how it is implemented, but in general I wouldn't expect such behavior from connectors that read from non-streaming storage. The query result may depend on "when" the records are fetched. If you need the changes reflected in your query, you'll probably want to find a way to do that.
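One way to picture the point above (a sketch in plain Python, not connector code; `fetch_snapshot` is a hypothetical helper): a batch-style source returns a snapshot as of fetch time, so rows read in an earlier round do not change retroactively, and only a fresh read in a later round observes the update:

```python
table = {1: "a"}  # stand-in for the upstream store

def fetch_snapshot():
    # A batch-style read: returns the data as of "now".
    return dict(table)

batches = []
for _ in range(2):                    # two processing rounds
    batches.append(fetch_snapshot())  # fresh read each round
    table[1] = "a2"                   # the store changes between rounds

# batches[0] holds the old value; batches[1] reflects the update,
# but nothing rewrites the rows already captured in batches[0].
```

Rows already pulled into memory are a snapshot; reflecting later updates requires re-fetching.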
Hello everyone,
I was wondering how the Cassandra Spark connector deals with deleted/updated records during a readStream operation. If a record was already fetched into Spark memory and it then gets updated or deleted in the database, is that reflected in a streaming join?
Thanks,
Rahul