Alvaro Zuniga wrote:

You could do this.  This is all GNU df, by the way, so I see the same behavior on Gentoo, Debian, and CentOS.  The example below is CentOS 5; I also verified the same on CentOS 4.5.  If I were scripting this, I would probably grep -v out the header anyway (see the sketch after the output below).
[EMAIL PROTECTED] ~]# df -P
Filesystem         1024-blocks      Used Available Capacity Mounted on
/dev/mapper/VolGroup00-LogVol01  17838512   8709068   8208664      52% /
/dev/md0                108765     29614     73535      29% /boot
tmpfs                  1946036         0   1946036       0% /dev/shm
/dev/mapper/VolGroup00-LogVol02  49517260   5551712  41409644      12% /xenguests
[EMAIL PROTECTED] ~]# df -P --block-size=512
Filesystem          512-blocks      Used Available Capacity Mounted on
/dev/mapper/VolGroup00-LogVol01  35677024  17418136  16417328      52% /
/dev/md0                217530     59228    147070      29% /boot
tmpfs                  3892072         0   3892072       0% /dev/shm
/dev/mapper/VolGroup00-LogVol02  99034520  11103424  82819288      12% /xenguests
[EMAIL PROTECTED] ~]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol01
                      17838512   8709068   8208664  52% /
/dev/md0                108765     29614     73535  29% /boot
tmpfs                  1946036         0   1946036   0% /dev/shm
/dev/mapper/VolGroup00-LogVol02
                      49517260   5551712  41409644  12% /xenguests
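
By "scripting this" I mean something like the following rough sketch; it just strips the header so every remaining line is one filesystem with six whitespace-separated fields:

  # POSIX output with the header removed: one parseable line per filesystem
  df -P | grep -v '^Filesystem'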

In particular, notice the 512-byte blocks.

FreeBSD 6.1:
Man page entry:
-P
Use POSIX compliant output of 512-byte blocks rather than the
default.  Note that this overrides the BLOCKSIZE specification
from the environment.

# df
Filesystem              1K-blocks     Used    Avail Capacity  
/dev/aacd0s1a              128990   108038    10634    91%   
/dev/aacd0s1f              257998    29934   207426    13%  
/dev/aacd0s1g            32294488 27185442  2525488    91% 
/dev/aacd0s1e              257998   212512    24848    90%

# df -P
Filesystem              512-blocks      Used    Avail Capacity
/dev/aacd0s1a               257980    216076    21268    91% 
/dev/aacd0s1f               515996     59868   414852    13%
/dev/aacd0s1g             64588976  54370884  5050976    91%
/dev/aacd0s1e               515996    425024    49696    90% 


Now let's do Gentoo Linux, using udev, on kernel 2.6.23:
Man page entry:
-P, --portability     use the POSIX output format    <-- helpful, isn't it?

# df
Filesystem           1K-blocks      Used Available Use% 
/dev/hda2             38448304  12638396  23856808  35%
/dev/hda3             30755864  28282744    910800  97%
/dev/hda5                31077     20264      9209  69% 
/dev/hda6              3903620   1698744   2204876  44%
/dev/hda7              4795176    841360   3953816  18%
shm                     258240         0    258240   0% 

# df -P
Filesystem         1024-blocks      Used Available Capacity 
/dev/hda2             38448304  12638396  23856808      35% 
udev                     10240       108     10132       2% 
/dev/hda3             30755864  28282744    910800      97% 
/dev/hda5                31077     20264      9209      69% 
/dev/hda6              3903620   1698744   2204876      44% 
/dev/hda7              4795176    841360   3953816      18% 
shm                     258240         0    258240       0% 

Also, notice that GNU df changes the header between Use% and Capacity without
any change to the values, most likely because it is just a percentage either
way, but it does make me wonder about the modularity of the implementation.

The important thing is: what would the application report for the size of
the disk? Nobody in their right mind should ever display this raw output.  As
we speak I have no idea what the sizes of those partitions are; if I cared, I
would use df -h like someone else mentioned.  If I were doing some kind of
admin task, df -hi would probably be more appropriate, since it can uncover
the often-ignored case of running out of inodes rather than disk space.  I
think I once replaced a drive because of that, so now I have df aliased to
"df -hi", which I never questioned until today.
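
For what it's worth, the inode check scripts just as easily.  A rough sketch
(the 90% threshold is arbitrary, the column numbers are an assumption, and
whether your df lets you combine -P with -i is worth verifying; if not, drop
the -P and live with the wrapped lines):

  # flag filesystems that are low on inodes even when 'df -h' looks fine
  df -Pi | awk 'NR > 1 && $5 != "-" && $5+0 >= 90 {print $6 ": " $5 " of inodes used"}'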

Anyway, if formatting is all we are aiming at here, a regular expression
suppressing the "\n" is what is needed; then we extract the values from the
capture variables ($1, $2, ...) in Perl, or from the matches array in PHP or
C (PCRE), instead of processing one line at a time.
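
Something along these lines, as a rough sketch; it assumes the wrapped output
always puts the long device name alone on its own line, which is what GNU df
does above:

  # glue the wrapped device-name lines back onto their data lines,
  # then the usual whitespace splitting works again
  df | perl -pe 'chomp if /^\S+$/'

After that, splitting on whitespace (or letting awk do it) gives you the same
fields everywhere; of course df -P gets you there without any regex at all.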

If the actual size of the disk is also important :-), I would add a function
that takes in each of the disk values, determines the OS or kernel version
and the df version, and adjusts the output accordingly.
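
As a rough sketch of that idea (the function name is made up, and it only
normalizes the block size, keying off the -P header rather than the OS or
kernel version):

  # hypothetical helper: report used/total in KiB no matter whether this
  # df prints 512- or 1024-byte blocks in POSIX mode
  disk_kb() {
      df -P | awk 'NR == 1 { bs = ($2 ~ /^512/) ? 512 : 1024; next }
                   { printf "%s: %d of %d KiB used\n", $6, $3 * bs / 1024, $2 * bs / 1024 }'
  }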


  
I would be willing to wager that 99% of those scripts use 'df' in the
way that I and others that have responded to this thread so far use it:

$ df | awk ...
      
True, but what columns is your awk going after?  The problem is that 'df'
output isn't standard.  You get different output on AIX than on Linux and
*BSD; not sure about Solaris or HP-UX.  So that's what POSIX is for. :)

Yes, I've been bitten by this before... my solution was to add -P to all
the broken scripts, haha.  I do agree that the multi-line df output caused
by the default Red Hat LVM setup is quite aggravating.
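
In other words, the one-liners end up looking something like this (a minimal
sketch; the 90% threshold is just an example):

  # -P keeps one line per filesystem and the same six columns on GNU df
  # and the BSDs, so the field numbers stop being a guessing game
  df -P | awk 'NR > 1 && $5+0 >= 90 {print $6 " is at " $5}'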

I've run into similar problems with vmstat/iostat output being different 
between Linux kernels and other Unices.  Unfortunately I haven't found a 
-P flag for vmstat.

ray
-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Ray DeJean  				       	 http://www.r-a-y.org
Systems Engineer                    Southeastern Louisiana University
IBM Certified Specialist  	      AIX Administration, AIX Support
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

_______________________________________________
General mailing list
[email protected]
http://mail.brlug.net/mailman/listinfo/general_brlug.net
