Hi,

I'm using PySpark to write a DataFrame to S3 as Parquet.
I would like the partition columns to be written into the data files as well,
not only into the directory names. What is the best way to do this?

e.g. df.write.partitionBy('day', 'hour')....
desired file contents -> day, hour, time, name, ....
and not just time, name, ....


Thanks!
Tzahi
