Hi Takanobu,

Thanks for the quick reply. I had missed that class.

But does it really do what I need?
If I have these racks:
/dc1/rack1
/dc1/rack2
/dc1/rack3
/dc2/rack1
/dc2/rack2
/dc2/rack3

And I place a single block in HDFS, couldn't this policy choose /dc1/rack1,
/dc1/rack2, /dc1/rack3 at random?
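To illustrate: here's a toy simulation (plain Java, not actual Hadoop code) of a policy that only guarantees three *distinct racks*. With the six racks above, a uniform pick of three distinct racks falls entirely inside one datacenter in 2 of the C(6,3) = 20 possible choices, i.e. about 10% of the time, so over many blocks it's bound to happen.

```java
import java.util.*;

public class RackChoiceSketch {
    public static void main(String[] args) {
        // The six racks from the example topology above.
        List<String> racks = Arrays.asList(
            "/dc1/rack1", "/dc1/rack2", "/dc1/rack3",
            "/dc2/rack1", "/dc2/rack2", "/dc2/rack3");

        Random rnd = new Random(42);
        boolean sawSingleDc = false;

        // Simulate placing many blocks: each pick is 3 distinct racks,
        // chosen uniformly, with no datacenter awareness.
        for (int trial = 0; trial < 1000; trial++) {
            List<String> shuffled = new ArrayList<>(racks);
            Collections.shuffle(shuffled, rnd);
            List<String> chosen = shuffled.subList(0, 3);

            // Extract the datacenter component ("dc1" or "dc2")
            // from each chosen rack path.
            Set<String> dcs = new HashSet<>();
            for (String rack : chosen) {
                dcs.add(rack.split("/")[1]);
            }
            if (dcs.size() == 1) {
                sawSingleDc = true; // all 3 replicas in one datacenter
            }
        }
        System.out.println(sawSingleDc);
    }
}
```

Running this prints `true`: three distinct racks alone do not imply two datacenters.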

Cheers,
Lars

On Thu, Jul 4, 2019 at 12:46 PM Takanobu Asanuma <tasan...@yahoo-corp.jp>
wrote:

> Hi Lars,
>
> I think BlockPlacementPolicyRackFaultTolerant can do it.
> This policy tries to place the 3 replicas on separate racks.
>
> <property>
>   <name>dfs.block.replicator.classname</name>
>
> <value>org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant</value>
> </property>
>
> See also:
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyRackFaultTolerant.java
>
> Thanks,
> - Takanobu
> ________________________________________
> From: Lars Francke <lars.fran...@gmail.com>
> Sent: Thursday, July 4, 2019 18:15
> To: hdfs-user@hadoop.apache.org
> Subject: BlockPlacementPolicy question with hierarchical topology
>
> Hi,
>
> I have a customer who wants to make sure that copies of their data are
> distributed across datacenters, so they are using rack names like
> /dc1/rack1, /dc1/rack2, /dc2/rack1 etc.
>
> Unfortunately, the BlockPlacementPolicyDefault sometimes seems to place
> all replicas of a block under /dc1/*.
>
> Is there a way to guarantee that /dc1/* and /dc2/* will be used in this
> scenario?
>
> Looking at chooseRandomWithStorageTypeTwoTrial, it seems to consider the
> full "scope" rather than its components. I couldn't find anything in the
> code, but I hope I'm missing something: Is there a way to configure HDFS
> for the behaviour I'd like?
>
> Thanks!
>
> Lars
>
