Initially there is no directory; the directory is created by the Spark job itself, and it should be empty while the job is executing. It seems df.write creates the first file and then tries to overwrite it.
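If that is what is happening, one way to rule it out is to clear the target with the Hadoop FileSystem API before the write. A rough sketch; the outputPath below is assumed from the error message, not taken from the actual job:

import java.net.URI
import org.apache.hadoop.fs.{FileSystem, Path}

// Clear the target up front so df.write never collides with a
// half-written directory. outputPath is an assumption here.
val outputPath = "s3n://buccketName/cip/daily_date"
val fs = FileSystem.get(new URI(outputPath), spark.sparkContext.hadoopConfiguration)
fs.delete(new Path(outputPath), true) // recursive; returns false if the path did not exist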
On Fri, Jan 13, 2017 at 11:42 AM, Amrit Jangid wrote:
Hi Rajendra,
It says your directory is not empty: s3n://buccketName/cip/daily_date
Try to use a save mode, e.g.:

import org.apache.spark.sql.SaveMode

df.write.mode(SaveMode.Overwrite).partitionBy("date")
  .format("com.databricks.spark.csv").option("delimiter", "#")
  .option("codec", "org.apache.hadoop.io.compress.GzipCodec")
  .save("s3n://buccketName/cip/daily_date")
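For what it's worth, the default save mode is ErrorIfExists, which is why the write fails as soon as the directory has anything in it. If you'd rather skip the SaveMode import, the string form of mode should behave the same; a minimal sketch:

// "overwrite" is equivalent to SaveMode.Overwrite, no import needed
df.write.mode("overwrite").partitionBy("date")
  .format("com.databricks.spark.csv").option("delimiter", "#")
  .option("codec", "org.apache.hadoop.io.compress.GzipCodec")
  .save("s3n://buccketName/cip/daily_date")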
Hi team,
I am reading N CSV files and writing them out partitioned by date. date is one of the columns and holds an integer value (e.g. 20170101).
val df = spark.read
  .format("com.databricks.spark.csv")
  .schema(schema)
  .option("delimiter", "#")
  .option("nullValue", "")