Hi Denis, great to see you here :)
It works, thanks!

Do you know how Spark generates data file names? They look like part-00000
with a UUID appended, e.g.

part-00000-124a8c43-83b9-44e1-a9c4-dcc8676cdb99.c000.snappy.parquet
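
As far as I can tell (inferred from names like this one, not from an official spec, so treat the field meanings as an assumption): the part number is the task/partition index within the write, the UUID is generated once per write job and shared by all of that job's files, c000 is a per-task file counter, then come the compression codec and the format. A rough sketch of that layout:

```scala
// Sketch only: the layout below is an assumption inferred from the
// example name, not taken from the Spark source:
//   part-<taskPartitionId>-<writeJobUUID>.c<fileCount>.<codec>.<format>
object PartFileName {
  private val Pattern =
    raw"part-(\d+)-([0-9a-f\-]+)\.c(\d+)\.(\w+)\.(\w+)".r

  def parse(name: String): Option[(Int, String, Int, String, String)] =
    name match {
      case Pattern(task, uuid, count, codec, fmt) =>
        Some((task.toInt, uuid, count.toInt, codec, fmt))
      case _ => None
    }
}
```

Parsing the example name above with this would give task index 0, the UUID, file count 0, "snappy", and "parquet".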

2018-03-17 14:15 GMT+01:00 Denis Bolshakov <bolshakov.de...@gmail.com>:

> Hello Serega,
>
> https://spark.apache.org/docs/latest/sql-programming-guide.html
>
> Please try the SaveMode.Append option. Does it work for you?
>
>
> Sat, 17 Mar 2018, 15:19, Serega Sheypak <serega.shey...@gmail.com>:
>
>> Hi, I'm using spark-sql to process my data and store the result as parquet,
>> partitioned by several columns:
>>
>> ds.write
>>   .partitionBy("year", "month", "day", "hour", "workflowId")
>>   .parquet("/here/is/my/dir")
>>
>>
>> I want to run more jobs that will produce new partitions or add more
>> files to existing partitions.
>> What is the right way to do it?
>>
>
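
To make the SaveMode.Append suggestion above concrete, here is a sketch combining it with the write from the quoted question (assuming the same `ds` Dataset and output directory; not tested here):

```scala
import org.apache.spark.sql.SaveMode

// Append adds files to the existing directory tree instead of failing;
// the default mode, ErrorIfExists, aborts when the path already exists.
ds.write
  .mode(SaveMode.Append)
  .partitionBy("year", "month", "day", "hour", "workflowId")
  .parquet("/here/is/my/dir")
```

With Append, existing partition directories simply receive additional part files, and new partition values create new directories.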
