Based on the size of the output data, you can do the math to work out how
many files you will need to produce 100MB files. Once you have that number,
call coalesce or repartition depending on whether the target is fewer or
more partitions than your job currently writes.
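
For illustration, a minimal Scala sketch of that calculation; the
DataFrame, the 10GB output-size estimate, and the output path are
hypothetical placeholders (the estimate would come from a prior run or
from inspecting the input):

    import org.apache.spark.sql.{DataFrame, SparkSession}

    val spark = SparkSession.builder().appName("SizedOutput").getOrCreate()

    // Stand-in for your actual Spark SQL query.
    val df: DataFrame = spark.range(1000000L).toDF("id")

    // Target roughly 100MB per output file.
    val targetFileSizeBytes = 100L * 1024 * 1024

    // Hypothetical estimate of the total output size, e.g. 10GB
    // measured from a previous run of the same job.
    val estimatedOutputBytes = 10L * 1024 * 1024 * 1024

    val numFiles = math.max(1L, estimatedOutputBytes / targetFileSizeBytes).toInt

    // coalesce avoids a full shuffle when reducing the partition count;
    // repartition shuffles and is required when increasing it.
    val sized =
      if (numFiles < df.rdd.getNumPartitions) df.coalesce(numFiles)
      else df.repartition(numFiles)

    sized.write.parquet("/tmp/output")  // placeholder path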

On Sun, 5 May 2019 at 2:21 PM, rajat kumar <kumar.rajat20...@gmail.com>
wrote:

> Hi All,
> My Spark SQL job produces output with the default partitioning and
> creates N files.
> I want each file in the final result to be 100MB in size.
>
> How can I do it?
>
> thanks
> rajat
>
>
