Just set your replication factor to 1 and you will be fine.
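A minimal sketch of what that looks like in hdfs-site.xml (a value of 1 means HDFS keeps only one copy of each block; the surrounding file layout is the standard Hadoop conf directory):

```xml
<!-- hdfs-site.xml: keep a single copy of each block, no replication -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```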
On Tue, May 13, 2014 at 8:12 AM, Marcos Sousa falecom...@marcossousa.com wrote:
Yes,
I don't want to replicate, just use them as one disk. Isn't it possible to
make this work?
Best regards,
Marcos
On Tue, May 13, 2014 at 11:04 AM, SF Hadoop sfhad...@gmail.com wrote:
Your question is unclear. Please restate and describe what you are
attempting to do.
Thanks.
Hi Marcos,
If these discs are not shared across nodes, I would not worry. Hadoop takes
care of making sure all replicas of a block do not end up on a single node.
But if all these 20 nodes are sharing these 10 HDDs, then you may have to
assign specific discs to specific nodes and make your cluster rack aware.
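For the rack-aware case, Hadoop resolves racks through a topology script you point to with the net.topology.script.file.name property: it receives node addresses as arguments and must print one rack path per argument. A minimal sketch of the mapping such a script performs, with made-up subnets and rack names:

```shell
# rack_of: hypothetical IP-to-rack mapping for a Hadoop topology script.
# The subnets and rack paths below are illustrative assumptions.
rack_of() {
  case "$1" in
    10.0.1.*) echo "/rack1" ;;
    10.0.2.*) echo "/rack2" ;;
    *)        echo "/default-rack" ;;
  esac
}

# An actual topology script would loop over all arguments:
#   for node in "$@"; do rack_of "$node"; done
```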
Your question is unclear. Please restate and describe what you are
attempting to do.
Thanks.
On Monday, May 12, 2014, Marcos Sousa falecom...@marcossousa.com wrote:
Hi,
I have 20 servers with 10 × 400 GB SATA HDs each. I'd like to use them as my
datanode storage:
/vol1/hadoop/data
If you specify a list in the property dfs.datanode.data.dir, Hadoop will
distribute the data blocks among all those disks; it will not replicate
data between them. If you want to use the disks as a single one, you have
to make an LVM array or use any other solution that presents them as a
single volume to Hadoop.
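As a sketch, using the mount points from the original question, the comma-separated list in hdfs-site.xml would look like this (blocks are spread across the directories, not mirrored between them):

```xml
<!-- hdfs-site.xml: one directory per physical disk -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/vol1/hadoop/data,/vol2/hadoop/data,/vol3/hadoop/data</value>
</property>
```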
Yes,
I don't want to replicate, just use them as one disk. Isn't it possible to
make this work?
Best regards,
Marcos
On Tue, May 13, 2014 at 6:55 AM, Rahul Chaudhari rahulchaudhari0...@gmail.com wrote:
Marcos,
While configuring Hadoop, the dfs.datanode.data.dir property in
hdfs-default.xml lets you list the directories, one per disk, that the
datanode should use.
Hi,
I have 20 servers with 10 × 400 GB SATA HDs each. I'd like to use them as my
datanode storage:
/vol1/hadoop/data
/vol2/hadoop/data
/vol3/hadoop/data
/volN/hadoop/data
How do I use those distinct discs without replicating?
Best regards,
--
Marcos Sousa