Re: [Structured Spark streaming] How does the Cassandra connector readStream deal with deleted records

2020-06-26 Thread Russell Spitzer
The connector issues Java driver CQL requests under the hood, which means it responds to a changing database the way a normal application would: retries may return a different set of data than the original request if the underlying database has changed in between.

On Fri, Jun 26, 2020, 9:42 PM Jungtaek
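Russell's point can be illustrated with a toy model in plain Python (this is not connector code; `table` and `fetch` are hypothetical names): each request reads whatever is in the store at that moment, so a retry issued after a delete observes a different result set than the original attempt.

```python
# Toy model of a live table: every fetch reads the current state,
# so a retry after a delete sees different data (hypothetical names;
# not Spark Cassandra Connector code).

table = {1: "alice", 2: "bob", 3: "carol"}

def fetch(keys):
    # Like a CQL read: returns whatever is in the table *now*.
    return {k: table[k] for k in keys if k in table}

first = fetch([1, 2, 3])   # original request
del table[2]               # row deleted between attempts
retry = fetch([1, 2, 3])   # retried request

print(first)   # {1: 'alice', 2: 'bob', 3: 'carol'}
print(retry)   # {1: 'alice', 3: 'carol'}
```

The same applies to any retry policy layered on top of the driver: idempotence of the request does not imply stability of the result against a mutating table.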

Re: [Structured Spark streaming] How does the Cassandra connector readStream deal with deleted records

2020-06-26 Thread Jungtaek Lim
I'm not sure how it is implemented, but in general I wouldn't expect such behavior from connectors that read from non-streaming storage. The query result may depend on "when" the records are fetched. If you need to reflect the changes in your query, you'll probably want to find a way

[Structured Spark streaming] How does the Cassandra connector readStream deal with deleted records

2020-06-24 Thread Rahul Kumar
Hello everyone, I was wondering how the Spark Cassandra Connector deals with deleted/updated records during a readStream operation. If a record has already been fetched into Spark memory and is then updated or deleted in the database, is the change reflected in a streaming join? Thanks, Rahul -- Sent from:
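The distinction the question hinges on can be sketched as a plain-Python toy (all names hypothetical; this is not Spark Cassandra Connector behavior): rows already materialized in memory form a snapshot, so a later delete in the database is invisible to a join over that snapshot, whereas a fresh read per batch would pick the delete up.

```python
# Snapshot vs. re-read: a delete in the mutable "database" is invisible
# to a join over rows already held in memory (hypothetical toy model,
# not Spark Cassandra Connector code).

db = {"k1": "v1", "k2": "v2"}   # the mutable store
snapshot = dict(db)             # rows fetched into memory earlier

del db["k2"]                    # record deleted after the fetch

stream_batch = ["k1", "k2"]     # keys arriving on the stream side

# Join against the in-memory snapshot: still matches k2.
join_snapshot = {k: snapshot[k] for k in stream_batch if k in snapshot}

# Join against a fresh read of the store: k2 is gone.
join_fresh = {k: db[k] for k in stream_batch if k in db}

print(join_snapshot)  # {'k1': 'v1', 'k2': 'v2'}
print(join_fresh)     # {'k1': 'v1'}
```

Whether a real streaming join behaves like `join_snapshot` or `join_fresh` depends on whether the non-streaming side is cached or re-fetched per micro-batch, which is exactly what the thread is asking about.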