That's because the files were still open. "Block pool used" charges the
full block size for each block being written, and the charge only drops to
the actual data size once the file is closed and the block is finalized.
As an experiment, try reducing "dfs.blocksize" by half.
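For example, something like this from a plain HDFS client (just a sketch;
the 64 MB value and the test path are placeholders, and it assumes
fs.defaultFS already points at your cluster):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class SmallBlockWrite {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // Client-side override of the block size used for new files
      // (placeholder: 64 MB instead of the 128 MB default in 2.7.x).
      conf.setLong("dfs.blocksize", 64L * 1024 * 1024);
      FileSystem fs = FileSystem.get(conf);
      // While this stream is open, the whole block is charged against
      // "block pool used"; the charge shrinks to the real data size only
      // after close(), when the block is finalized.
      try (FSDataOutputStream out = fs.create(new Path("/tmp/blocksize-test"))) {
        out.writeBytes("a few MB written here still reserves a whole block while open\n");
      }
      fs.close();
    }
  }

With a smaller dfs.blocksize you should see "block pool used" grow by
proportionally less while the same number of files are held open.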

Kihwal

On Fri, Jun 1, 2018 at 12:56 AM, tao tony <tonytao0...@outlook.com> wrote:

> hi,
>
>
> I used Apache HAWQ to write data to HDFS 2.7.3 and ran into a strange
> problem.
>
> I wrote 300 MB of data in total, in 100 commits of 3 MB each. But "block
> pool used" on each datanode increased by more than 30 GB, and "block pool
> used" reported by the namenode increased by 100 GB. Yet "hadoop fs -du -h
> /" shows the space grew by only 300 MB, and the block count did not
> change. If I keep committing small amounts of data, "block pool used"
> climbs past 100% and writes fail with "no space left".
>
> After several minutes, "block pool used" gradually drops back to normal.
>
> I didn't see any logs on the namenode or datanodes about reclaiming the
> "block pool used" space.
>
> Could anyone explain why this happens and how I could solve this
> problem? Many thanks!
>
>
>
> Tao Jin
>
>
