We do CDC from Cassandra using Oracle GoldenGate for Big Data and publish the
changes to Kafka. But one of the downstream systems needs batch files with the
complete dataset.
I am evaluating some options based on previous responses.

Thanks
Bibin John

From: Peter Corless <pe...@scylladb.com>
Sent: Friday, February 21, 2020 2:15 PM
To: user@cassandra.apache.org
Subject: Re: Mechanism to Bulk Export from Cassandra on daily Basis

Question: would daily deltas be a good use of CDC? (Rather than exporting
entire tables.)

(I can understand that this might make analytics hard if you need to span 
multiple resultant daily files.)

Perhaps, along with CDC, set up the tables for export via a Kafka topic?

(https://docs.lenses.io/connectors/source/cassandra.html)
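From memory, that source connector is driven by KCQL, roughly like the sketch
below. Treat the property names, the timestamp column, and the
keyspace/table/topic names as illustrative and verify against the docs above:

    # Kafka Connect properties for the Lenses/Stream Reactor Cassandra
    # source (illustrative; check names against the current docs).
    name=cassandra-source
    connector.class=com.datamountaineer.streamreactor.connect.cassandra.source.CassandraSourceConnector
    connect.cassandra.contact.points=127.0.0.1
    connect.cassandra.port=9042
    connect.cassandra.key.space=my_keyspace
    # Publish new/changed rows to a topic, tracked via a timestamp column.
    connect.cassandra.kcql=INSERT INTO my_topic SELECT * FROM my_table PK event_time INCREMENTALMODE=TIMESTAMP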

Or maybe some sort of exporter using Apache Spark?

https://github.com/scylladb/scylla-migrator
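The basic shape of a full-table export with Spark could look something like
this minimal sketch, assuming the DataStax spark-cassandra-connector is on the
classpath (keyspace, table, contact point, and output path are placeholders):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("cassandra-daily-export")
      .config("spark.cassandra.connection.host", "10.0.0.1") // any contact point
      .getOrCreate()

    // The connector splits the scan by token range, so the read is
    // spread across the nodes rather than hammering a single coordinator.
    spark.read
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "my_keyspace", "table" -> "my_table"))
      .load()
      .write
      .mode("overwrite")
      .parquet("/exports/my_table/dt=2020-02-21") // dated output dir (placeholder)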

I'm just trying to throw out a few other ideas on how to solve the export
problem.

On Fri, Feb 21, 2020, 8:45 AM Durity, Sean R <sean_r_dur...@homedepot.com> wrote:
I would also push for something besides a full refresh, if at all possible. It 
feels like a waste of resources to me – and not predictably scalable. 
Suggestions: use a queue to send writes to both systems. If the downstream
system doesn't handle TTL, perhaps set an expiration date and run a scheduled
purge query on the downstream target.
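For example, if the downstream target is a relational store, the purge can be
a daily scheduled query against whatever expiration column you add at load
time (names here are invented for illustration):

    -- run daily; expires_at is the expiration column set when rows are loaded
    DELETE FROM downstream_table WHERE expires_at < CURRENT_DATE;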

If you have to do the full refresh, perhaps a Spark job would be a decent 
solution. I would probably create a separate DC (with a lower replication 
factor and smaller number of nodes) just to handle the analytical/unload kind 
of workload (if the other functions of the cluster might be impacted by the 
unload).

DSBulk from DataStax is very fast and scriptable, too.
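A daily unload can be close to a one-liner in cron, along these lines
(placeholder host/keyspace/table/path; see the dsbulk docs for exact options
and tuning):

    dsbulk unload -h 10.0.0.1 -k my_keyspace -t my_table -url /exports/my_table -c csv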

Sean Durity – Staff Systems Engineer, Cassandra

From: JOHN, BIBIN <bj9...@att.com>
Sent: Wednesday, February 19, 2020 5:25 PM
To: user@cassandra.apache.org
Subject: [EXTERNAL] RE: Mechanism to Bulk Export from Cassandra on daily Basis

Thank you for the suggestion. The full refresh is the current design because
with deltas we cannot identify what was deleted, so the downstream systems
prefer the full dataset every day.


Thanks
Bibin John

From: Reid Pinchback <rpinchb...@tripadvisor.com>
Sent: Wednesday, February 19, 2020 3:14 PM
To: user@cassandra.apache.org
Subject: Re: Mechanism to Bulk Export from Cassandra on daily Basis

To the question of ‘best approach’, so far the comments have been about
alternative tools.

Another axis you might want to consider is from the data model viewpoint.  So, 
for example, let’s say you have 600M rows.  You want to do a daily transfer of 
data for some reason.  First question that comes to mind is, do you need all 
the data every day?  Usually that would only be the case if all of the data is 
at risk of changing.

Generally the way I’d cut down the pain on something like this is to figure out 
if the data model currently does, or could be made to, only mutate in a limited 
subset.  Then maybe all you are transferring are the daily changes.  Systems 
based on catching up to daily changes will usually be pulling single-digit 
percentages of data volume compared to the entire storage footprint.  That’s 
not only a lot less data to pull, it’s also a lot less impact on the ongoing 
operations of the cluster while you are pulling that data.
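To make that concrete, one illustrative pattern (table and column names are
made up here) is to write a small change-log table in the same batch as each
mutation to the main table, then export only the keys recorded for the day:

    -- Keys of rows that changed (or were deleted) on a given day.
    -- In practice you'd add a bucket to the partition key so one day's
    -- changes don't pile into a single oversized partition.
    CREATE TABLE my_keyspace.changed_rows (
        day date,
        id  text,    -- primary key of the mutated row in the main table
        PRIMARY KEY ((day), id)
    );

The daily job then reads a handful of partitions instead of scanning the full
table, and deletes show up explicitly as logged keys with no matching row.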

R

From: "JOHN, BIBIN" <bj9...@att.com<mailto:bj9...@att.com>>
Reply-To: "user@cassandra.apache.org<mailto:user@cassandra.apache.org>" 
<user@cassandra.apache.org<mailto:user@cassandra.apache.org>>
Date: Wednesday, February 19, 2020 at 1:13 PM
To: "user@cassandra.apache.org<mailto:user@cassandra.apache.org>" 
<user@cassandra.apache.org<mailto:user@cassandra.apache.org>>
Subject: Mechanism to Bulk Export from Cassandra on daily Basis

Team,
We have a requirement to bulk export data from Cassandra on a daily basis. The
table contains close to 600M records and the cluster has 12 nodes. What is the
best approach to do this?


Thanks
Bibin John

