[
https://issues.apache.org/jira/browse/HADOOP-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12577273#action_12577273
]
Pete Wyckoff commented on HADOOP-2991:
--------------------------------------
If reserved were something like DU.everythingOtherThanDFS() +
DF.getUnusableSpace() [the metadata space for the FS itself],
then everything would be OK in the new code.
But since DF.getCapacity() is off, this is a bug.
Also, the above formula is basically uncomputable.
Am I to calculate it for every disk on every machine? What happens if
someone installs a new version of python on one of those disks? Do I have to
re-calculate everything? And how would I even know to do that?
Raghu, what is the real-life motivation, and an example of how this change of
semantics is useful?
thanks, pete
P.S. For us, this means we would have to set reserved to 20 GB today; I have
no idea whether tomorrow it would need to be 21 or 19. And it means that on
our non-'/' partitions we waste about 19 GB * 3 drives * 100 machines ≈ 6 TB.
> dfs.du.reserved not honored in 0.15/16 (regression from 0.14+patch for 2549)
> ----------------------------------------------------------------------------
>
> Key: HADOOP-2991
> URL: https://issues.apache.org/jira/browse/HADOOP-2991
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.15.0, 0.15.1, 0.15.2, 0.15.3, 0.16.0
> Reporter: Joydeep Sen Sarma
> Priority: Critical
>
> changes for https://issues.apache.org/jira/browse/HADOOP-1463
> have caused a regression. Earlier:
> - we could set dfs.du.reserved to 1G and be *sure* that 1G would not be used.
> Now this is no longer true. I am quoting Pete Wyckoff's example:
> <example>
> Let's look at an example: a 100 GB disk, with /usr using 45 GB and dfs
> using 50 GB.
> df -kh shows:
> Capacity = 100 GB
> Available = 1 GB (remember ~4 GB chopped out for metadata and stuff)
> Used = 95 GB
> remaining = 100 GB - 50 GB - 1 GB = 49 GB
> Min(remaining, available) = 1 GB
> 98% of which is apparently usable for DFS -
> so we're at the limit, but are free to use 98% of the remaining 1 GB.
> </example>
> this is broken. Based on the discussion on 1463, it seems the notion of
> taking 'capacity' to be the first field of 'df' is problematic. For example,
> here's what our df output looks like:
> Filesystem Size Used Avail Use% Mounted on
> /dev/sda3 130G 123G 49M 100% /
> As you can see, 'Size' is a misnomer - that much space is not available.
> Rather, the actual usable space is 123G + 49M ≈ 123G. (Not entirely sure
> what the discrepancy is due to, but I have heard this may be space reserved
> for file system metadata.) Because of this discrepancy, we end up in a
> situation where the file system is out of space.
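The Size vs Used + Avail gap above can be reproduced from the statvfs(2) fields that df reads; on ext2/ext3 the free-but-unavailable space is typically the ~5% of blocks reserved for root (tunable with tune2fs -m). A hypothetical Python sketch, with made-up numbers shaped like the /dev/sda3 output above:

```python
# How df derives Size/Used/Avail from statvfs(2), and why
# Size != Used + Avail when the FS keeps some free blocks root-only.
# The numbers below are invented to resemble the df output above.

def df_fields(f_blocks, f_bfree, f_bavail, f_frsize):
    size = f_blocks * f_frsize              # df "Size": all blocks
    used = (f_blocks - f_bfree) * f_frsize  # df "Used": allocated blocks
    avail = f_bavail * f_frsize             # df "Avail": free to non-root
    # Free blocks that only root may use (f_bfree - f_bavail):
    reserved = (f_bfree - f_bavail) * f_frsize
    return size, used, avail, reserved

size, used, avail, reserved = df_fields(
    f_blocks=34_078_720,  # ~130 GB of 4 KB blocks
    f_bfree=1_835_008,    # ~7 GB free in total...
    f_bavail=12_544,      # ...but only ~49 MB usable by non-root
    f_frsize=4096)
print(size - (used + avail) == reserved)  # prints True
```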