Hi Brian,
Did you try setting dfs.datanode.fsdataset.volume.choosing.policy to
org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy?

There are a couple of related options that tune this policy; see the
sketch below, or look them up in the hdfs-default.xml documentation.
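
A minimal hdfs-site.xml sketch (the two tuning values shown are the
shipped defaults, included only for illustration):

<!-- pick volumes by available space instead of round-robin -->
<property>
  <name>dfs.datanode.fsdataset.volume.choosing.policy</name>
  <value>org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy</value>
</property>

<!-- volumes whose free space is within this many bytes of each other
     are considered balanced (10 GB here) -->
<property>
  <name>dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold</name>
  <value>10737418240</value>
</property>

<!-- fraction of new block allocations steered toward the volumes with
     more free space when the volumes are unbalanced -->
<property>
  <name>dfs.datanode.available-space-volume-choosing-policy.balanced-space-preference-fraction</name>
  <value>0.75</value>
</property>

Note this only affects where new blocks are placed; it does not
rebalance data already on disk.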

On Wed, Oct 8, 2014 at 4:44 PM, Aitor Cedres <aced...@pivotal.io> wrote:

>
> Hi Brian,
>
> Hadoop does not balance the disks within a DataNode. If you ran out of
> space and then added more disks, you should shut down the DataNode and
> move a few blocks to the new disk by hand (rough sketch below).
>
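> A rough sketch of that move, assuming an old data dir /data1/dfs/dn and
> a new one /data3/dfs/dn (the paths and on-disk layout here are
> illustrative and vary by Hadoop version):
>
> # stop the DataNode before touching any block files
> $HADOOP_HOME/sbin/hadoop-daemon.sh stop datanode
>
> # move a subtree of finalized blocks to the same relative path on the
> # new disk; each blk_ file must stay next to its .meta file, and the
> # BP-* block-pool directory must already exist on the new disk
> mv /data1/dfs/dn/current/BP-*/current/finalized/subdir0 \
>    /data3/dfs/dn/current/BP-*/current/finalized/
>
> $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
>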
> Regards,
>
> Aitor Cedrés
>
>
> On 6 October 2014 14:46, Brian C. Huffman <bhuff...@etinternational.com>
> wrote:
>
>> All,
>>
>> I have a small hadoop cluster (2.5.0) with 4 datanodes and 3 data disks
>> per node.  Lately some of the volumes have been filling, but instead of
>> moving to other configured volumes that *have* free space, it's giving
>> errors in the datanode logs:
>> 2014-10-03 11:52:44,989 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: thor2.xmen.eti:50010:DataXceiver error processing WRITE_BLOCK operation  src: /172.17.1.3:35412 dst: /172.17.1.2:50010
>> java.io.IOException: No space left on device
>>     at java.io.FileOutputStream.writeBytes(Native Method)
>>     at java.io.FileOutputStream.write(FileOutputStream.java:345)
>>     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:592)
>>     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:734)
>>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:741)
>>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
>>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
>>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:234)
>>     at java.lang.Thread.run(Thread.java:745)
>>
>> Unfortunately it keeps retrying the write, and when that fails the
>> exception is passed on to the client.
>>
>> After a restart, the DataNode seemed to figure out that it should move
>> on to the next volume.
>>
>> Any suggestions to keep this from happening in the future?
>>
>> Also - could it be an issue that I have a small amount of non-HDFS data
>> on those volumes?
>>
>> Thanks,
>> Brian
>>
>>
>
