Hi there,

I have a question about writing Parquet with SparkSQL. Since Spark 1.4, 
DataFrames can be written as partitioned Parquet files via 
"partitionBy(colNames: String*)", which was added in SPARK-6561.
Is there any method, or a plan, to write Parquet with dynamically derived 
partitions? For example, instead of partitioning directly on the Year column 
(range: 1900-2016), partition on the *decade* of the Year (range: 190-201).
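For concreteness, the only workaround I know of today is to materialize the 
derived key as a real column before writing. A minimal sketch, assuming a 
hypothetical DataFrame "df" with an integer Year column and a made-up output 
path:

  import org.apache.spark.sql.functions.col

  // Derive the decade (1900-2016 -> 190-201) as an explicit column,
  // since partitionBy only accepts existing column names.
  val withDecade = df.withColumn("Decade", (col("Year") / 10).cast("int"))

  withDecade.write
    .partitionBy("Decade")
    .parquet("/path/to/output")

What I am asking is whether it is possible, or planned, to partition on such 
an expression directly, without adding the column first.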
Thanks.

Best,
Ran