YAN Bo created CASSANDRA-15354:

             Summary: Cassandra CQLSSTableWriter and sstableloader support HDFS
                 Key: CASSANDRA-15354
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-15354
             Project: Cassandra
          Issue Type: New Feature
          Components: Legacy/Local Write-Read Paths, Local/SSTable, Tool/sstable
            Reporter: YAN Bo

import java.util
import org.apache.cassandra.dht.Murmur3Partitioner
import org.apache.cassandra.io.sstable.CQLSSTableWriter

rdd.foreachPartition( msgIterator => {
  val writer = CQLSSTableWriter.builder()
    .inDirectory("/path/to/local/output")      // target directory: local filesystem only
    .forTable(schema)                          // target table schema (CREATE TABLE statement)
    .using(insertStatement)                    // CQL INSERT statement used to add rows
    // default partitioner is Murmur3Partitioner, so set this only if you use a different one
    .withPartitioner(new Murmur3Partitioner())
    .build()
  msgIterator.foreach(msg => {
    val items = msg.toString().split(",")
    val javaList = new util.ArrayList[Object]()
    items.foreach(t => javaList.add(t))
    writer.addRow(javaList)
  })
  writer.close()
})
Cassandra provides bulk data export/import via SSTables, which is very 
convenient for users. In some cases we have TB-scale data to move from HDFS to 
Cassandra, and we can use Spark to generate the SSTable files by distributed 
computation with code like the above. Unfortunately, CQLSSTableWriter can only 
write data to a local path, and sstableloader can only load from a local path. 
So when we use CQLSSTableWriter in a Spark or Hadoop MR program, we need extra 
code to copy the SSTables, which are scattered across the distributed worker 
nodes, up to HDFS, and then download all of the SSTables from HDFS to the 
single machine running sstableloader. Storing and transferring this much data 
between physical machines introduces many reliability problems.
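The extra glue code mentioned above might look roughly like the following sketch, using Hadoop's standard FileSystem API. This is only an illustration of the workaround, not part of Cassandra; the helper name `uploadSSTables` and the paths are hypothetical, and it assumes the Hadoop client libraries are on the classpath with `fs.defaultFS` configured:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Hypothetical helper: after CQLSSTableWriter.close(), push the locally
// written sstable directory from each Spark executor up to HDFS.
def uploadSSTables(localDir: String, hdfsDir: String): Unit = {
  val conf = new Configuration()           // picks up fs.defaultFS from core-site.xml
  val fs = FileSystem.get(conf)
  // copyFromLocalFile(delSrc = false, overwrite = true, src, dst)
  fs.copyFromLocalFile(false, true, new Path(localDir), new Path(hdfsDir))
}
```

The reverse step (`fs.copyToLocalFile` on the machine running sstableloader, followed by `sstableloader -d <host> <dir>`) is needed before loading, which is exactly the round-trip this ticket wants to avoid.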


So CQLSSTableWriter should be able to write data to HDFS directly (or there 
should be another writer that supports HDFS), and sstableloader should be able 
to load from an HDFS path.

This message was sent by Atlassian Jira
