Hi, it's supported. Try using coalesce(1) (watch the spelling of coalesce), and after that write out the partitions.
Regards,
Gourav

On Mon, May 9, 2016 at 7:12 PM, Mail.com <pradeep.mi...@mail.com> wrote:
> Hi,
>
> I have to write a tab-delimited file and need one directory for each
> unique value of a column.
>
> I tried using spark-csv with partitionBy and it seems it is not
> supported. Is there any other option available for doing this?
>
> Regards,
> Pradeep