[ https://issues.apache.org/jira/browse/HADOOP-4680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12648894#action_12648894 ]

Pete Wyckoff commented on HADOOP-4680:
--------------------------------------

This is the fix, but I still haven't looked at why the unit test didn't catch it.
{code}
-  st->f_bavail  =  cap/bsize;
+  st->f_bavail  =  (cap-used)/bsize;
{code}
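
For context, here is a rough sketch of how the statvfs fields get filled in -- this is not the actual fuse_impls_statfs.c code, and the helper name and error handling are my own, but cap, used, and bsize above correspond to the libhdfs calls hdfsGetCapacity(), hdfsGetUsed(), and hdfsGetDefaultBlockSize(). The old line reported the full DFS capacity as available; the fix subtracts what is already used.
{code}
#include <sys/statvfs.h>
#include <string.h>
#include "hdfs.h"

/* Sketch only -- the function name and surrounding setup are assumptions,
   not the actual fuse-dfs source. */
static int fill_statvfs(hdfsFS fs, struct statvfs *st)
{
  tOffset cap   = hdfsGetCapacity(fs);          /* total bytes in the DFS  */
  tOffset used  = hdfsGetUsed(fs);              /* bytes already used      */
  tOffset bsize = hdfsGetDefaultBlockSize(fs);  /* DFS block size in bytes */

  if (cap < 0 || used < 0 || bsize <= 0)
    return -1;

  memset(st, 0, sizeof(*st));
  st->f_bsize  = bsize;
  st->f_frsize = bsize;
  st->f_blocks = cap / bsize;            /* total blocks            */
  st->f_bfree  = (cap - used) / bsize;   /* free blocks             */
  st->f_bavail = (cap - used) / bsize;   /* the fix: was cap/bsize  */
  return 0;
}
{code}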

I used this to debug:
{code}
#include <sys/statvfs.h>
#include <stdio.h>

int main(int ac, char **av) {
  struct statvfs b;

  if (ac != 2) {
    fprintf(stderr, "usage: %s <path>\n", av[0]);
    return 1;
  }
  if (statvfs(av[1], &b) != 0) {
    perror("statvfs");
    return 1;
  }

  /* statvfs fields are unsigned, so print them with %lu */
  printf("f_bsize   %lu\n", (unsigned long) b.f_bsize);
  printf("f_frsize  %lu\n", (unsigned long) b.f_frsize);
  printf("f_blocks  %lu\n", (unsigned long) b.f_blocks);
  printf("f_bfree   %lu\n", (unsigned long) b.f_bfree);
  printf("f_bavail  %lu\n", (unsigned long) b.f_bavail);
  printf("f_files   %lu\n", (unsigned long) b.f_files);
  printf("f_ffree   %lu\n", (unsigned long) b.f_ffree);
  printf("f_favail  %lu\n", (unsigned long) b.f_favail);
  printf("f_fsid    %lu\n", (unsigned long) b.f_fsid);
  printf("f_flag    %lu\n", (unsigned long) b.f_flag);
  printf("f_namemax %lu\n", (unsigned long) b.f_namemax);

  return 0;
}
{code}
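
To use it, compile with {{gcc -o statvfs_debug statvfs_debug.c}} and run {{./statvfs_debug /export/hdfs}} (the mount point path is just an example). Since df derives its Avail and %used columns from f_blocks, f_bfree, and f_bavail, an f_bavail equal to the full capacity makes %used look far smaller than what the DFS UI reports.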


> fuse-dfs - df -kh on hdfs mount shows much less %used than the dfs UI
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-4680
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4680
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: contrib/fuse-dfs
>            Reporter: Pete Wyckoff
>
> i.e., statfs is broken.
> This used to show the correct amount but is now showing wrong numbers in a
> production environment.
> Yet, the unit test for this passes??
> This is the trunk version of fuse-dfs against 0.17 libhdfs and Hadoop.
> Unknown if this affects 0.18.2, 0.19.1, or trunk, but I suspect it does.
> Admittedly, fuse-dfs isn't supported against Hadoop 0.17, but I am confident
> that it is likely broken in all versions.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
