In general it would be nice to be able to configure replication on a
per-job basis.  Is there a way to do that without changing the config
values in the Hadoop conf/ directory between jobs?  Maybe by modifying
OutputFormats or the JobConf?
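
For instance, assuming an existing SparkContext named sc, would
something along these lines work? (An untested sketch: copy the shared
Hadoop configuration and override dfs.replication for a single save;
the output path and replication factor here are made up.)

// Untested sketch: pass a job-specific Hadoop Configuration to
// saveAsNewAPIHadoopFile so that only this one save uses the lower
// replication, leaving sc.hadoopConfiguration untouched.
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.{NullWritable, Text}
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
import org.apache.spark.SparkContext._  // pair-RDD implicits (pre-1.3)

val perJobConf = new Configuration(sc.hadoopConfiguration)
perJobConf.set("dfs.replication", "1")  // applies only to this save

sc.parallelize(Seq("a", "b", "c"))
  .map(s => (NullWritable.get(), new Text(s)))
  .saveAsNewAPIHadoopFile(
    "hdfs:///tmp/low-replication-output",
    classOf[NullWritable],
    classOf[Text],
    classOf[TextOutputFormat[NullWritable, Text]],
    perJobConf)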


On Mon, Jul 14, 2014 at 11:12 PM, Matei Zaharia <matei.zaha...@gmail.com>
wrote:

> You can change this setting through SparkContext.hadoopConfiguration, or
> put the conf/ directory of your Hadoop installation on the CLASSPATH when
> you launch your app so that it reads the config values from there.
>
> Matei
>
> On Jul 14, 2014, at 8:06 PM, valgrind_girl <124411...@qq.com> wrote:
>
> > Eager to know about this issue too. Does anyone know how?
> >
>
>
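
Matei's first suggestion above, rendered as code as I understand it (a
minimal sketch, untested; the app name, replication factor, and output
path are all illustrative):

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("replication-example"))

// As I understand it, this affects every Hadoop output written through
// this SparkContext from here on, not just a single job.
sc.hadoopConfiguration.set("dfs.replication", "2")

sc.parallelize(1 to 100).saveAsTextFile("hdfs:///tmp/replicated-output")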
