This does look like a very viable solution. Thanks.
Could you give us some pointers/documentation on:
- how can we build such SSTables using Spark jobs? maybe with
https://github.com/Netflix/sstable-adaptor (a rough sketch of what we have
in mind is below)
- how do we send these tables to Cassandra? does a simple SCP work?
- what is the
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java )
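
Something along these lines is what we have in mind for the Spark side (a
minimal, untested sketch against CQLSSTableWriter's builder API; the
ks.track schema and the output path are made-up placeholders):

    import java.io.File;
    import org.apache.cassandra.io.sstable.CQLSSTableWriter;

    public class BuildSSTables {
        public static void main(String[] args) throws Exception {
            // Placeholder schema and insert for a hypothetical ks.track table.
            String schema = "CREATE TABLE ks.track (id text PRIMARY KEY, plays int)";
            String insert = "INSERT INTO ks.track (id, plays) VALUES (?, ?)";

            // Each Spark task would write its slice of the data into its
            // own output directory.
            CQLSSTableWriter writer = CQLSSTableWriter.builder()
                    .inDirectory(new File("/tmp/sstables/ks/track"))
                    .forTable(schema)
                    .using(insert)
                    .build();
            writer.addRow("a", 1);
            writer.addRow("b", 2);
            writer.close();  // finalizes the SSTable files on disk
        }
    }

For shipping the files, it looks like sstableloader (or copying them into
the table's data directory and running nodetool refresh) is the intended
route rather than a plain SCP, but confirmation would be appreciated.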
>
> - Jeff
>
> --
> Jeff Jirsa
>
> On Jan 30, 2018, at 12:12 AM, Julien Moumne <jmou...@deezer.com> wrote:
Hello, I am looking for best practices for the following use case:
Once a day, we insert 10 full tables (several hundred GiB each) at the
same time, using the Spark C* driver, without batching, with CL set to ALL.
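
(For context, our current write path looks roughly like this; a minimal
sketch using the connector's Java API, where the host, the ks.track
keyspace/table and the Track POJO are placeholders:)

    import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;
    import static com.datastax.spark.connector.japi.CassandraJavaUtil.mapToRow;

    import java.util.Arrays;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class DailyFullLoad {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                    .setAppName("daily-full-table-load")
                    .set("spark.cassandra.connection.host", "cassandra-host") // placeholder
                    .set("spark.cassandra.output.consistency.level", "ALL")   // CL ALL
                    .set("spark.cassandra.output.batch.size.rows", "1");      // no batching
            JavaSparkContext sc = new JavaSparkContext(conf);

            // Stand-in for the real data set (several hundred GiB per table).
            JavaRDD<Track> rows = sc.parallelize(Arrays.asList(
                    new Track("a", 1), new Track("b", 2)));

            javaFunctions(rows)
                    .writerBuilder("ks", "track", mapToRow(Track.class))
                    .saveToCassandra();
            sc.stop();
        }

        // Minimal POJO; accessors must match the CQL column names.
        public static class Track implements java.io.Serializable {
            private String id;
            private int plays;
            public Track() {}
            public Track(String id, int plays) { this.id = id; this.plays = plays; }
            public String getId() { return id; }
            public void setId(String id) { this.id = id; }
            public int getPlays() { return plays; }
            public void setPlays(int plays) { this.plays = plays; }
        }
    }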
Whether skinny rows or wide rows, data for a partition key is always
completely updated /