JulianJaffePinterest commented on pull request #12159:
URL: https://github.com/apache/druid/pull/12159#issuecomment-1041400963


@wangxiaobaidu11 sorry Xiao Wang, I think I misinterpreted your question in my earlier comment. You're right to partition your dataframe by timestamp. I thought you were already doing that before the code snippet you shared and were then applying the SingleDimensionPartitioner on top of the previous partitioning. If you were instead using the SingleDimensionPartitioner to partition your dataframe by timestamp, you will still need to do that partitioning, but it will likely be faster to partition directly. You could start with
```java
// Static imports for the Spark SQL column functions used below:
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.lit;

Dataset<Row> dataset = sparkSession.sql(querySql);
// Add a bucketing column derived from the timestamp column.
Dataset<Row> bucketedDataset = dataset.withColumn(<column_name>,
    SparkUdfs.bucketRow(col(tsCol), lit(tsFormat), lit(segmentGranularity)));
// Repartition on the bucket column, then drop it once it's no longer needed.
Dataset<Row> partitionedDataset = bucketedDataset
    .repartition(col(<column_name>))
    .drop(<column_name>);
```
if you want a single partition per segment, or use the NumberedPartitioner as described above if you want to set the number of rows per partition. If that's not fast enough, or if you want to guarantee that each bucket corresponds to exactly one partition, you can use a custom partitioner (see the sketch below).
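
To illustrate the custom partitioner route, here is a minimal sketch. It drops to the RDD API (Datasets don't accept a custom `Partitioner` directly), keys each row by its bucket value, and routes every bucket to exactly one partition. The names `bucketCol` and `numBuckets`, and the assumption that bucket ids are dense longs in `[0, numBuckets)`, are illustrative choices for this example, not part of any existing API here.

```java
import org.apache.spark.Partitioner;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import scala.Tuple2;

// Sends each bucket id to exactly one partition. Assumes bucket ids are
// dense longs in [0, numBuckets); adapt getPartition if yours are not.
class BucketPartitioner extends Partitioner {
  private final int numBuckets;

  BucketPartitioner(int numBuckets) {
    this.numBuckets = numBuckets;
  }

  @Override
  public int numPartitions() {
    return numBuckets;
  }

  @Override
  public int getPartition(Object key) {
    return (int) ((Long) key % numBuckets);
  }
}

// Hypothetical usage: "bucketCol" is the bucketing column added above.
JavaPairRDD<Long, Row> keyed = bucketedDataset.javaRDD()
    .mapToPair(row -> new Tuple2<>(row.getLong(row.fieldIndex("bucketCol")), row));
JavaRDD<Row> partitionedRows = keyed
    .partitionBy(new BucketPartitioner(numBuckets))
    .values();
// Rebuild a Dataset with the same schema (drop "bucketCol" afterwards if desired).
Dataset<Row> partitioned = sparkSession.createDataFrame(partitionedRows, bucketedDataset.schema());
```

This guarantees a one-to-one bucket-to-partition mapping at the cost of a Dataset → RDD → Dataset round trip, so it's only worth doing if the repartition approach above isn't fast enough.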

