Hi Marcos,
If these discs are not shared across nodes, I would not worry. Hadoop takes
care of making sure that replicas of a block are never placed on a single node.

But if all 20 nodes are sharing these 10 HDDs,

then you may have to assign a specific disc to a specific node and make
your cluster rack-aware, so that the replica within the same rack goes to a
different node and the replica on the second rack goes to a different disc.
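For the normal case (each node owning its own discs), a minimal sketch of how the mount points could be listed in hdfs-site.xml, assuming the /volN/hadoop/data paths from the original question:

```xml
<!-- hdfs-site.xml: hypothetical sketch using the mount points from the
     question. Each comma-separated path should sit on a separate physical
     disc; the datanode spreads new blocks across the listed directories,
     and HDFS block placement ensures that two replicas of the same block
     never land on the same datanode. -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/vol1/hadoop/data,/vol2/hadoop/data,/vol3/hadoop/data</value>
</property>
```

For the rack-aware setup, you would additionally point net.topology.script.file.name at a script that maps each datanode's address to its rack, so the placement policy can put the second replica on a different rack.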





On Tue, May 13, 2014 at 1:38 PM, kishore alajangi <[email protected]> wrote:

> replication factor=1
>
>
> On Tue, May 13, 2014 at 11:04 AM, SF Hadoop <[email protected]> wrote:
>
>> Your question is unclear. Please restate and describe what you are
>> attempting to do.
>>
>> Thanks.
>>
>>
>> On Monday, May 12, 2014, Marcos Sousa <[email protected]> wrote:
>>
>>> Hi,
>>>
>>> I have 20 servers with 10 HD with 400GB SATA. I'd like to use them to be
>>> my datanode:
>>>
>>> /vol1/hadoop/data
>>> /vol2/hadoop/data
>>> /vol3/hadoop/data
>>> /volN/hadoop/data
>>>
>>> How do I use those distinct discs so that data is not replicated across them?
>>>
>>> Best regards,
>>>
>>> --
>>> Marcos Sousa
>>>
>>
>
>
> --
> Thanks,
> Kishore.
>



-- 
Nitin Pawar
