I wrote an ETL tool for Cassandra that works by scanning the binary commit
log of each node, extracting which keys have received inserts, and filtering
them by column timestamp to select only the mutations from the last X
minutes. It then issues a multiget to Cassandra to fetch the freshest
version of those rows (if you can/want to partially update rows in your
target DB, you can skip this step).
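
To give an idea of the filtering step, here is a minimal sketch (not the
actual tool's code): it assumes you have already parsed the commit log into
a map of row key to latest column timestamp, using the usual Cassandra
convention of microseconds since the epoch:

import java.util.HashMap;
import java.util.Map;

public class DeltaFilter {
    /** Keep only keys whose newest mutation falls inside the window. */
    public static Map<String, Long> filterRecent(
            Map<String, Long> keyToTimestampMicros, long windowMinutes) {
        // Cutoff in microseconds, matching column timestamp resolution.
        long cutoffMicros = (System.currentTimeMillis()
                - windowMinutes * 60L * 1000L) * 1000L;
        Map<String, Long> recent = new HashMap<String, Long>();
        for (Map.Entry<String, Long> e : keyToTimestampMicros.entrySet()) {
            if (e.getValue() >= cutoffMicros) {
                recent.put(e.getKey(), e.getValue());
            }
        }
        return recent;
    }
}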

With this data it can then connect to various targets (PostgreSQL hstore,
Oracle and plain CSV files in our case) and issue the appropriate "upsert"
calls. It's also parallel and quite fast (150,000 filtered rows in under a
minute, with insert speed depending on your target DB). We use it in
production to provide real-time join/search capabilities in PostgreSQL with
a delay of only 1 to 5 minutes.
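
The "upsert" itself is just plain JDBC. Since PostgreSQL has no native
upsert, it boils down to the common update-then-insert pattern, roughly
like this (table and column names are made up for the example):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class Upserter {
    public static void upsert(Connection conn, String key, String value)
            throws SQLException {
        // Try the update first; most deltas touch existing rows.
        PreparedStatement update = conn.prepareStatement(
                "UPDATE target_rows SET value = ? WHERE key = ?");
        update.setString(1, value);
        update.setString(2, key);
        int updated = update.executeUpdate();
        update.close();
        if (updated == 0) {
            // No existing row: insert. A concurrent insert can still race;
            // you would retry or serialize writers per key to handle that.
            PreparedStatement insert = conn.prepareStatement(
                    "INSERT INTO target_rows (key, value) VALUES (?, ?)");
            insert.setString(1, key);
            insert.setString(2, value);
            insert.executeUpdate();
            insert.close();
        }
    }
}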

I was able to open source an early, rough version on my GitHub:

https://github.com/carloscm/cassandra-commitlog-extract

It needs some work to become a proper Java project; feel free to fork and
play with it.

On Friday, December 14, 2012, cko2...@gmail.com wrote:

> We will use Cassandra as logging storage in one of our web applications.
> The application only inserts rows into Cassandra and never updates or
> deletes any rows. The CF is expected to grow by about 0.5 million rows per
> day.
>
> We need to transfer the data in Cassandra to another relational database
> daily. Due to the large size of the CF, instead of truncating the
> relational table and reloading all rows into it each time, we plan to run a
> job to select the "delta" rows since the last run and insert them into the
> relational database.
>
> We know we can use Java, Pig or Hive to extract the delta rows to a flat
> file and load the data into the target relational table. We are
> particularly interested in a process that can extract delta rows without
> scanning the entire CF.
>
> Has anyone used any other ETL tools to do this kind of delta extraction
> from Cassandra? We appreciate any comments and experience.
>
> Thanks,
> Chin
>
