I also tried:

jsc.sparkContext().sc().hadoopConfiguration().set("dfs.replication", "2")

But it is still not working.

Any ideas why it is not working?
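One thing worth checking: Spark copies any `spark.hadoop.*` property into the Hadoop Configuration it hands to the driver and executors, but that property has to be on the SparkConf before the contexts are created; mutating `hadoopConfiguration()` after the streaming job has started may be too late for the checkpoint writer. A minimal sketch of that ordering (untested; the app name, batch interval, and checkpoint path below are made-up placeholders):

```java
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class CheckpointReplicationSketch {
    public static void main(String[] args) throws Exception {
        // Set the property on the SparkConf *before* any context exists,
        // so it is propagated as "dfs.replication" to the Hadoop conf
        // used for checkpoint writes on both driver and executors.
        SparkConf sparkConf = new SparkConf()
            .setMaster("mymaster")
            .setAppName("checkpoint-replication")          // hypothetical
            .set("spark.hadoop.dfs.replication", "2");

        JavaStreamingContext jssc =
            new JavaStreamingContext(sparkConf, Durations.seconds(10));

        // Sanity check: should print "2" if the property was picked up.
        System.out.println(
            jssc.sparkContext().hadoopConfiguration().get("dfs.replication"));

        jssc.checkpoint("hdfs://namenode:8020/user/abhi/checkpoints"); // hypothetical
    }
}
```

Also note that `dfs.replication` only affects files written after the setting takes effect; files already sitting in the checkpoint directory keep their old factor. Those can be changed separately with `hdfs dfs -setrep -w 2 <checkpoint-dir>`.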


Abhi

On Tue, May 31, 2016 at 4:03 PM, Abhishek Anand <abhis.anan...@gmail.com>
wrote:

> My spark streaming checkpoint directory is being written to HDFS with
> default replication factor of 3.
>
> In my streaming application, where I am listening to Kafka and setting
> dfs.replication = 2 as below, the files are still being written with
> replication factor = 3:
>
> SparkConf sparkConfig = new
> SparkConf().setMaster("mymaster").set("spark.hadoop.dfs.replication", "2");
>
> Is there anything else that I need to do ??
>
>
> Thanks !!!
> Abhi
>
