By default, all built-in erasure coding policies are disabled, except the
one defined in dfs.namenode.ec.system.default.policy (RS-6-3-1024k), which
is enabled by default. With this configuration, the default EC policy is
used when no policy name is passed as an argument to the '-setPolicy'
command.

You can enable a set of policies with the 'hdfs ec [-enablePolicy -policy
<policyName>]' command, based on the size of the cluster and the desired
fault-tolerance properties.

For instance, for a cluster with 9 racks, a policy like RS-10-4-1024k will
not preserve rack-level fault-tolerance, and RS-6-3-1024k or RS-3-2-1024k
might be more appropriate.
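As a rough sketch of that rule of thumb (my own illustration, not taken
verbatim from the Hadoop docs): rack-level fault tolerance requires each
block of a stripe to land on a distinct rack, so the stripe width (data
blocks + parity blocks) must not exceed the number of racks:

```python
def preserves_rack_fault_tolerance(data_blocks: int,
                                   parity_blocks: int,
                                   racks: int) -> bool:
    """Rack-level fault tolerance needs every block of a stripe on its
    own rack, so the stripe width must not exceed the rack count."""
    return data_blocks + parity_blocks <= racks

# RS-10-4 has a stripe width of 14, so 9 racks is not enough...
print(preserves_rack_fault_tolerance(10, 4, 9))  # False
# ...while RS-6-3 (stripe width 9) fits exactly, and RS-3-2 comfortably.
print(preserves_rack_fault_tolerance(6, 3, 9))   # True
print(preserves_rack_fault_tolerance(3, 2, 9))   # True
```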

Reference:

https://hadoop.apache.org/docs/r3.2.0/hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html

On Sun, Feb 17, 2019 at 8:53 PM Shuubham Ojha <shuubhamo...@gmail.com>
wrote:

> Hello, I am trying to use Hadoop 3.1.1 on my cluster. I wish to experiment
> with the Hitchhiker Code which I believe was introduced in Hadoop 3 itself.
> I don't understand how do I activate the hitchhiker feature for the blocks
> of files I put on the datanode. I also don't know which erasure coding
> policy is being used by default on the uploaded blocks of files when I
> don't do anything. Any help regarding setting the erasure coding policy
> (and hitchhiker feature) would be appreciated.
>
> It's a bit urgent.
>
> Warm regards,
> Shuubham Ojha
>


--
Brahma Reddy Battula