Do you have 5*BLOCK_SIZE of free space on at least one of the volumes on
the DN? If these are small VMs, or your dfs.data.dir is on /tmp, then
even at 80% capacity the remaining absolute free space may already be
too small to allocate any more blocks.
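
To make the arithmetic concrete, here's a minimal sketch of that kind of
eligibility check. This is not the actual Hadoop source: the constant name
MIN_BLOCKS_FOR_WRITE, the 5-block floor, and the 64 MB default block size
are assumptions for illustration, based on the 5*BLOCK_SIZE figure above.

// Hypothetical sketch, not Hadoop code: shows why a small volume that
// looks only "80% full" can still be rejected for block allocation.
public class BlockAllocationCheck {
    static final long BLOCK_SIZE = 64L * 1024 * 1024;  // assumed 64 MB default
    static final int MIN_BLOCKS_FOR_WRITE = 5;         // assumed 5-block floor

    // A volume is eligible only if it has room for several full blocks.
    static boolean canAllocate(long capacityBytes, long usedBytes) {
        long remaining = capacityBytes - usedBytes;
        return remaining >= MIN_BLOCKS_FOR_WRITE * BLOCK_SIZE;  // ~320 MB
    }

    public static void main(String[] args) {
        long capacity = 2L * 1024 * 1024 * 1024;  // a small 2 GB VM volume
        System.out.println(canAllocate(capacity, (long) (capacity * 0.80))); // true  (~410 MB free)
        System.out.println(canAllocate(capacity, (long) (capacity * 0.85))); // false (~307 MB free)
    }
}

On a 2 GB volume, 80% utilization leaves roughly 410 MB free, just above
the ~320 MB floor, while 85% leaves ~307 MB and allocation fails -- which
would be consistent with the "close to capacity" behavior described below.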

On Wed, Nov 9, 2011 at 11:44 AM, Roman Shaposhnik <[email protected]> wrote:
> On Tue, Nov 8, 2011 at 4:27 PM, Roman Shaposhnik <[email protected]> wrote:
>> My next try is going to be a run where all DNs never go above 50% of
>> storage utilization. If that cures it -- fine, but it still makes for
>> a pretty scary failure scenario.
>
> That was successful:
>  http://bigtop01.cloudera.org:8080/view/Hadoop%200.22/job/Bigtop-trunk-smoketest-22/lastCompletedBuild/testReport/
>
> At this point, the only question remaining is why this behavior
> shows up when nodes run close to capacity.
>
> Thanks,
> Roman.
>



-- 
Todd Lipcon
Software Engineer, Cloudera
