James Youngman wrote on 20-09-07 12:02:
On 9/19/07, Roberto Spadim <[EMAIL PROTECTED]> wrote:
hello guys, I'm using Arch Linux current under XFS.
See what's happening with my /dev/sda2 (the / root partition).
df output (88% usage):

Filesystem           1K-blocks      Used   Avail.   Use% Mounted on
/dev/sda2              9765312   8567884   1197428  88% /

So, 1197428 KB is free.

So you have about 1.2GB (1.14GiB) free.  About 8.6GB has been used.

[EMAIL PROTECTED] /]# du -s -x /
2031605 /

A different figure.   2GB; a significant disparity.

I think that df is counting /mnt/smb (an smbfs mount point) as part of the / disk,
but if I umount it and run df and du -s /* again, df still shows 88%

OK, so you had a hypothesis, tested it and found it was wrong.  But
that's a good way to proceed.

what could I do? I think that du is right, but df is the main problem

They are both right but they do different things.   df asks the
filesystem how much space is free.   du examines all the files you
told it to look at and adds up the space occupied by them.  These
figures are always different.   I'll comment on this again, below.
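A small sketch of the two tools answering different questions, using a throwaway directory so nothing on the real filesystem is disturbed:

```shell
# du walks the files you name and sums the space they occupy;
# df asks the filesystem that holds them how much of the whole
# disk is used and free.  The two measure different things.
d=$(mktemp -d)
dd if=/dev/zero of="$d/f" bs=1024 count=100 2>/dev/null
du -sk "$d"              # ~100 KB: just the files under $d
df -k "$d" | tail -n 1   # usage of the entire filesystem holding $d
rm -rf "$d"
```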

since, if I do:
dd if=/dev/zero of=/test bs=1024 count=very big count
i get:

[EMAIL PROTECTED] /]# dd if=/dev/zero of=/test bs=1024 count=5000000000
dd: writing `/test': No space left on device
1194693+0 records in
1194692+0 records out
1223364608 bytes (1.2 GB) copied, 37.6201 s, 32.5 MB/s
rm /test

So you tried to create a really large file, and created one that was
only 1.2GB because you ran out of space.   This confirms that the
output from df is correct.

so, df is my problem since I have "only" 1.2 GB,

Well, dd is telling you the same thing as df.   I'd assume that df is
correct, then.

There are many things that will normally result in the output of df
being different to the output of du.   Here are some examples:

  * filesystem overhead (e.g. partly-full blocks)
  * journal files
  * unlinked files which are still open (they are not actually deleted
until they are closed, but du cannot see them because they have no
directory entry)
  * inaccessible files

You can eliminate unlinked files as a problem by stopping (and if
needed, restarting) all the processes on your system which have large
open files.
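The effect is easy to demonstrate (Linux-only sketch): a file deleted while a descriptor on it is still open keeps its space until that descriptor goes away, which is exactly what stopping the process accomplishes.

```shell
# Open a file on fd 9, delete it, and show the kernel still tracks it;
# closing the fd (as stopping the owning process would) releases it.
tmp=$(mktemp)
exec 9>"$tmp"
rm "$tmp"                # du can no longer see the file...
readlink /proc/$$/fd/9   # ...but /proc shows "<path> (deleted)"
exec 9>&-                # close it: now the space is really freed
```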

To get a list of all the deleted files which are still open and the
processes which have them open, you can use this command (which
probably works only on GNU/Linux systems):

 find /proc -mindepth 3 -path '/proc/*/fd/*' -type l -lname '*deleted*' -printf '%l %p\n'

That's a lower case L after the % sign there.   The output of this
command is a list of deleted files (the left hand column) and the
processes that have them open (for each /proc/NNN/fd/XX item in the
right hand column, the NNN is the PID of the process which has the
file open).
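Building on that command, here is a sketch that totals the space those deleted-but-open files still hold (stat -L follows the /proc symlink to the live inode; %s gives the apparent size in bytes):

```shell
# Sum the sizes of all deleted files that are still open somewhere.
find /proc -mindepth 3 -path '/proc/*/fd/*' -type l -lname '*deleted*' \
     2>/dev/null |
    xargs -r stat -L -c %s 2>/dev/null |
    awk '{ total += $1 } END { print total + 0 }'
```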

As for inaccessible files, one way for that to happen is if you create
a whole bunch of stuff (say under /home) and then add a new filesystem
and mount it over the existing stuff.   The existing stuff is still
there (you didn't delete it) but it is not accessible to du because a
filesystem got mounted over it.
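One way to check for that (a sketch; needs root, and /mnt/inspect is a hypothetical scratch path): a bind mount of / re-exposes the underlying directory tree, because the filesystems mounted on top of it do not come along.

```shell
# Bind-mount the root filesystem elsewhere; du can then reach
# files hidden underneath mount points such as /home.
mkdir -p /mnt/inspect              # hypothetical scratch mount point
mount --bind / /mnt/inspect
du -skx /mnt/inspect/home          # what sits *under* /home's mount
umount /mnt/inspect
```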

Hope this helps,


This explanation (well done!) should make it into the FAQ, if only for
its clarity.


Bug-coreutils mailing list
