Hi guys

I have a question about Kudu with Spark.

For example, there is a table in Kudu with a field record_id and the following 
partitioning:
HASH (record_id) PARTITIONS N

Is it possible to load records from such a table in a key-value fashion, with 
the correct partitioner information in the RDD? For example, RDD[(record_id, row)].
When I try to use kuduRDD in Spark, the RDD's partitioner is None, so I'm 
losing the information about the original (Kudu) partitioning.
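To make the question concrete, here is a minimal sketch of what I mean. The table name, master address, and column names are placeholders. As a workaround I can key the RDD by record_id and repartition it with Spark's HashPartitioner, but that triggers a full shuffle and uses Spark's hash function rather than Kudu's, so it does not recover the original Kudu layout:

```scala
import org.apache.spark.HashPartitioner
import org.apache.spark.sql.SparkSession
import org.apache.kudu.spark.kudu.KuduContext

// Placeholder master address and table/column names.
val spark = SparkSession.builder.appName("kudu-partitioner").getOrCreate()
val kuduContext = new KuduContext("kudu-master:7051", spark.sparkContext)

// kuduRDD returns RDD[Row]; its partitioner is None, as described above.
val rows = kuduContext.kuduRDD(spark.sparkContext, "my_table",
  Seq("record_id", "value"))

// Workaround sketch: key by record_id and repartition with Spark's
// HashPartitioner. Downstream operations (joins, reduceByKey) can then
// rely on a known partitioner, but this reshuffles all the data instead
// of reusing the HASH (record_id) layout Kudu already maintains.
val keyed = rows
  .map(row => (row.getAs[Long]("record_id"), row))
  .partitionBy(new HashPartitioner(rows.getNumPartitions))

// keyed.partitioner is now Some(HashPartitioner), but the shuffle cost
// is what I would like to avoid by exposing Kudu's partitioning directly.
```

Is there a way to get the partitioner populated without this extra shuffle?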

Thanks
