Hi, I'm using Spark SQL to process my data and store the result as Parquet,
partitioned by several columns:

ds.write
  .partitionBy("year", "month", "day", "hour", "workflowId")
  .parquet("/here/is/my/dir")


I want to run more jobs that will produce new partitions or add more files
to existing partitions.
What is the right way to do it?
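
For concreteness, here is a minimal sketch of what I had in mind, assuming
SaveMode.Append is the right mechanism for this (I'm not sure whether it
handles existing partition directories safely):

import org.apache.spark.sql.SaveMode

// Subsequent jobs append new files under the same base directory:
// new partition directories are created as needed, and files are added
// to partition directories that already exist.
ds.write
  .mode(SaveMode.Append)
  .partitionBy("year", "month", "day", "hour", "workflowId")
  .parquet("/here/is/my/dir")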
