Curt,
> Huh.
>
> The copy of du I got, via
>
> http://gnuwin32.sourceforge.net/packages.html
I am sorry. You did not say you were using the gnuwin32 port. I
therefore assumed you were using Cygwin since Cygwin is probably the
best known of the GNU ports. Pardon me and
They are the experts there.
http://cygwin.com
Thanks
Bob
> If du starts in the root directory, it does not
> appear to see any sub-directories under either
> FAT16 or FAT32.
>
> e.g. du e:\
> where e: is a 32 gig HD, single FAT32 partition
>
> gives a single line:
>
OS: win95b (OSR 2.0)
File partitions: FAT16 and FAT32
If du starts in the root directory, it does not
appear to see any sub-directories under either
FAT16 or FAT32.
e.g. du e:\
where e: is a 32 gig HD, single FAT32 partition
gives a single line:
0 e:\
or du c:\
where c: is the first
I see the following errors using the df and du commands on a redhat
advanced server 2.1 system:
[EMAIL PROTECTED] temp]# df -k .
/dev/sdd1  35006192  7956  33220012  1%  /databases/oradata/aplcprd/temp
[EMAIL PROTECTED] temp]# df -h .
/dev/sdd1  33G  7.8M  31G  1
Hello.
Tim Newsham wrote:
Would be nice if DU could print out the cost of the storage rather
than the number of blocks. The following code shows an example
of this (option -$ reads a cost from /usr/share/du-cost and applies
it before printing out the result).
Haha. Nice idea. I can think of a
Would be nice if DU could print out the cost of the storage rather
than the number of blocks. The following code shows an example
of this (option -$ reads a cost from /usr/share/du-cost and applies
it before printing out the result).
Tim N.
--- du.c.orig Fri Jun 18 15:51:04 2004
+++ du.c
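The same effect can be sketched without patching du at all: a one-line awk filter that multiplies du's counts by a per-unit cost. The cost figure and the filter are illustrative assumptions, not part of Tim's proposed -$ patch.

```shell
# Hypothetical alternative to patching du: apply an assumed cost of
# $0.0001 per KiB to du's output in a post-processing step.
du -k -s /tmp | awk -v cost=0.0001 '{ printf "$%.2f\t%s\n", $1 * cost, $2 }'
```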
Hello,
Please have a look at this example (executed by a
regular user from the home directory)
$ du --separate-dirs --summarize
180 .
$ du --summarize
1268    .
The first command seems to function as expected: it
shows the summary of the current directory, size of
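The difference between the two commands can be reproduced with a small fixture (directory names and sizes made up for illustration): with --separate-dirs, a directory's count excludes its subdirectories' totals.

```shell
# -S / --separate-dirs: each directory is charged only for the files
# directly inside it, not for its subdirectories.
mkdir -p demo/sub
dd if=/dev/zero of=demo/top.bin bs=1024 count=100 2>/dev/null
dd if=/dev/zero of=demo/sub/deep.bin bs=1024 count=900 2>/dev/null
du -s -k demo      # roughly 1000 KiB: top.bin plus sub/deep.bin
du -S -s -k demo   # roughly 100 KiB: sub/deep.bin is excluded
rm -r demo
```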
Anthony Thyssen <[EMAIL PROTECTED]> wrote:
> I have a series of backup home directories that use hard links on files
> that have not changed. If I run "du" on these directories I get
> a disk usage summary as if the directories are not hard linked together!
>
>
I have a series of backup home directories that use hard links on files
that have not changed. If I run "du" on these directories I get
a disk usage summary as if the directories are not hard linked together!
How can I get a real disk usage summary, i.e., the cost in disk space of
the
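A quick sketch of du's hard-link handling (directory names made up): within a single invocation du charges each inode once, so naming all the snapshots together reports the true combined usage, while summing separate per-directory runs double-counts.

```shell
# du remembers inodes within one run: a hard-linked file is charged to
# whichever argument reaches it first.
mkdir -p snap.0 snap.1
dd if=/dev/zero of=snap.0/big bs=1024 count=500 2>/dev/null
ln snap.0/big snap.1/big        # hard link: no additional disk space
du -s -k snap.0 snap.1          # snap.1 shows (almost) nothing extra
du -s -k snap.1                 # run alone, snap.1 is charged the full 500K
rm -r snap.0 snap.1
```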
Richard Dawe wrote:
> Hello.
>
> Dave Gotwisner wrote:
> [snip]
> > Rather than assume it just takes a list of files, I would suggest strongly
> > that whoever chooses to implement this (if anyone does), they also allow
> > it to take other options as part of the file.
> > Literally, they should r
see no benefit
in taking option strings from an input file.
And there'd be a drawback in that one would have to handle
quoting differences since there'd no longer be a shell in the loop.
E.g. specifying --exclude=\*.bak on the command line would work fine,
but putting --exclude=\*.bak in
Hello.
Dave Gotwisner wrote:
[snip]
> Rather than assume it just takes a list of files, I would suggest strongly
> that whoever chooses to implement this (if anyone does), they also allow
> it to take other options as part of the file.
> Literally, they should replace the "--process-file=foo" with
d a newline then converting
> that newline to a null would no longer match the original filename.
Yes, and such filenames wouldn't be understood anyway, even if the next
program in the pipeline (ls, wc, du) assumed \n-separated lines. All
I'm saying is that if you have to choose, ch
Bernd Jendrissek wrote:
> Jim Meyering wrote:
> > If the format is simply one file name per line, then what about
> > files with names containing a newline?
> >
> > One solution is to require that newlines and backslashes be
> > backslash-escaped. Another is simply to require that file names
> >
Jim Meyering wrote:
> "Dan Heller" <[EMAIL PROTECTED]> wrote:
> > Anyway, the other method is to support the "take the input from a file"
> > approach that Dave pointed out:
>
> Thanks for bringing this up.
>
> It would be useful to giv
On Thu, Sep 18, 2003 at 09:23:24AM +0200, Jim Meyering wrote:
> If the format is simply one file name per line, then what about
> files with names containing a newline?
>
> One solution is to require that newlines and backslashes be backslash-escaped.
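The NUL-separator idea is the one that eventually became standard practice. As a hedged sketch (later GNU coreutils grew a --files0-from option; it may not exist in the fileutils versions discussed in this thread):

```shell
# NUL-terminated names sidestep the newline problem entirely, since
# NUL is the one byte that cannot appear in a pathname.
find . -name '*.jpg' -print0 | du -c -b --files0-from=-
# The xargs equivalent with the same framing:
find . -name '*.jpg' -print0 | xargs -0 du -c -b
```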
"Dan Heller" <[EMAIL PROTECTED]> wrote:
> While [using xargs] technically "works" in that commands don't "fail", per se,
> it doesn't solve the real problem at hand; the command is reading all the
> info coming in (du, in this case), and tally
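The tallying problem described here is a consequence of how xargs batches arguments: once the command line fills up, xargs starts a second du, and -c then prints one "total" line per batch. Forcing tiny batches with -n makes this visible (echo stands in for du):

```shell
# With -n 2, xargs runs the command once per pair of arguments,
# so a real "du -cb" in its place would print two separate totals.
printf 'a\nb\nc\n' | xargs -n 2 echo du -cb
# du -cb a b
# du -cb c
```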
Hello.
Dan Heller wrote:
>
> Is this an oversight or omission?
> I want to do:
> $ locate .jpg | sed [...] | du -c -b
Why should du read a list of files on stdin?
You can use xargs to convert stdin to a list of parameters:
locate .jpg | sed [...] | xargs du -c -b
xargs co
Is this an oversight or omission?
I want to do:
$ locate .jpg | sed [...] | du -c -b
--
--dan
http://www.danheller.com/
___
Bug-fileutils mailing list
[EMAIL PROTECTED]
http://mail.gnu.org/mailman/listinfo/bug-fileutils
André Somers <[EMAIL PROTECTED]> wrote:
> When I run du on a folder with *lots* of files (I estimate anywhere between
> 200,000 to 400,000) over many subdirectories, du manages to crash my linux
> system. I run du as a normal user. If I look at the system log, there is no
> infor
Hi,
I am seeing an odd problem with my du, but I'm not sure where exactly the
problem is: with the kernel or with du itself.
When I run du on a folder with *lots* of files (I estimate anywhere between
200,000 to 400,000) over many subdirectories, du manages to crash my linux
system. I r
Hi. I think there's a bug in the 'du' utility in fileutils-4.1.
du -b is supposed to report the size of the file in bytes.
The observed behavior is that du -b reports the size rounded
up to the nearest multiple of 1024, however:
venice[ /venice/cradle ]% ls -l file.dat
RobBlond <[EMAIL PROTECTED]> wrote:
> i say:
> bash:> du /
>
> he says:
> [...]
> Total Size: -2047307569 bytes
> in 200179 Files
> and 20086 Directories
Thanks for the report.
Please include the version next time (run du --version).
That is almost certainly fi
i say:
bash:> du /
he says:
[...]
Total Size: -2047307569 bytes
in 200179 Files
and 20086 Directories
---
looks like a too-small variable...
sorry, but my coding skills are not good enough for a patch...
not yet :o)
greets, robblond
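The "too small variable" guess checks out arithmetically: -2047307569 is what a signed 32-bit byte counter displays after wrapping once, and adding 2^32 recovers a plausible real total.

```shell
# Undo one 32-bit wraparound: the reported negative total plus 2^32.
echo $(( -2047307569 + 4294967296 ))    # 2247659727 bytes, about 2.1 GiB
```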
Hi
> You wanted it included, not me. I'm just telling you what you should
> do to make Jim happy. =)
I don't WANT it to be included. It was a suggestion, nothing more, nothing
less.
> Some more nitpicks: I don't think this option deserves a short
> option.
EOT.
> And maybe you could make
Here we go again.
You wanted it included, not me. I'm just telling you what you should
do to make Jim happy. =)
Some more nitpicks: I don't think this option deserves a short
option. And maybe you could make the long option take an argument?
I.e. something like --broader-size=12, this way
h structure in the global variable `htab' to
@@ -668,7 +683,7 @@
human_block_size (getenv ("DU_BLOCK_SIZE"), 0, &output_block_size);
- while ((c = getopt_long (argc, argv, "abchHklmsxDLSX:", long_options, NULL))
+ while ((c = getopt_long (argc, argv, "
+ sprintf (p, "%11s ",
human_readable ((uintmax_t) f->stat.st_size, hbuf, 1,
output_block_size < 0 ? output_block_size : 1));
}
- snip -
du is a bit more "complicated". I stripped the Tab-char and made the left
OK. Here we go.
Make it an option, and it might just be useful. Changing the default
is silly since most files tend to be far smaller than 1GB, and this
just wastes precious space on the screen. Would you like to do this?
You could look at some of the output formatting switches to see ho
>File-sizes get bigger. I have regularly files that are >=
>1.000.000.000 bytes. This makes ls output a bit "difficult" to read
>and the format is "jumpy" if they are mixed with files <=
>999.999.999 in size.
>
> Why not just use --human-readable? I would have a h
Hi
>File-sizes get bigger. I have regularly files that are >=
>1.000.000.000 bytes. This makes ls output a bit "difficult" to read
>and the format is "jumpy" if they are mixed with files <=
>999.999.999 in size.
>
> Why not just use --human-readable? I would have a hard time read
File-sizes get bigger. I have regularly files that are >=
1.000.000.000 bytes. This makes ls output a bit "difficult" to read
and the format is "jumpy" if they are mixed with files <=
999.999.999 in size.
Why not just use --human-readable? I would have a hard time reading
several line
Hi
File-sizes get bigger. I have regularly files that are >= 1.000.000.000
bytes. This makes ls output a bit "difficult" to read and the format is
"jumpy" if they are mixed with files <= 999.999.999 in size.
Because of this I have patched the (s)printf of my loc
On Sun, Dec 08, 2002 at 09:18:43PM +0100, you [Jim Meyering] wrote:
> Ville Herva <[EMAIL PROTECTED]> wrote:
> > Long (non-embedded) softlinks allocate disk blocks to hold the referred path
> > on linux/ext[23] (possibly on other fs's as well). This space is not
&g
Long (non-embedded) softlinks allocate disk blocks to hold the referred path
on linux/ext[23] (possibly on other fs's as well). This space is not
reported by du(1) at all:
mkdir empty; cd empty
ln -fs $(perl -e "print ('a' x 100)") a
du -k a
0 a
perl -e
'($d
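The ext2/ext3 detail behind this report: a target path short enough (roughly under 60 bytes) is stored inline in the inode as a "fast symlink", while a longer one needs a real data block. A hedged reproduction of the setup above:

```shell
# A 100-character target is too long to embed in an ext2/ext3 inode,
# so the link allocates a block that du (without dereferencing) showed
# as 0 in the original report.
mkdir -p empty
ln -fs "$(printf 'a%.0s' $(seq 100))" empty/a
du -k empty/a            # the report shows 0 here
readlink empty/a | wc -c # the 100-byte target the block must hold
rm -r empty
```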
Thomas Preissler <[EMAIL PROTECTED]>:
> Thank you very much, for your help.
>
> But I already fixed the problem. A MySQL log file had been deleted,
> but MySQL was not restarted. After restarting it, all was OK.
Glad to hear your problem is resolved.
Yes this will cause d
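The deleted-but-open pattern behind this is easy to reproduce (a sketch; sizes are arbitrary): as long as some process holds a descriptor, df still counts the blocks while du no longer sees the name.

```shell
# A file removed while still open keeps its blocks allocated until the
# last descriptor closes, so df and du disagree in the meantime.
tmp=$(mktemp)
exec 3>"$tmp"                            # keep the file open on fd 3
dd if=/dev/zero bs=1024 count=10240 >&3 2>/dev/null
rm "$tmp"                                # the name is gone: du can't see it
df -k .                                  # ...but the space is still in use
exec 3>&-                                # closing the fd frees the blocks
```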
rts that I'm using 2882M.
> tar --exclude-from says that the backup is 2722M.
> du with my tar-friendly exclude from reports that I'm using 2942M.
> du without any exclude file reports that I'm using exactly: 2942M.
I am not sure of the accuracy you wish. To my mind those numbe
On Wed, Oct 30, 2002 at 05:05:02PM -0700, Bob Proulx wrote:
>
> I am not convinced by this data that the exclude list is really the
> issue here. It might be. But the other confusion seems a much more
> likely explanation.
>
> Bob
>
So what is the idiomatic way to guess the size of an archive
[EMAIL PROTECTED] (Bob Proulx) wrote:
> df is reporting disk blocks free.
>
> du is reporting disk blocks used.
df reports both free and used blocks.
paul
Jacob Elder <[EMAIL PROTECTED]> [2002-09-28 18:13:11 -0400]:
> It appears that du and tar use a different pattern language for their
> --exclude-from option. I was trying to predict the size of a backup that
> would be performed with tar, and came across a discrepancy.
>
>
Thomas Preissler <[EMAIL PROTECTED]> [2002-10-21 14:24:43 +0200]:
> I have encountered a strange problem with df 4.0.
Yes, a strange problem. I can't recreate it here.
> df shows me that a partition is nearly full. But when I use other
> programs like du or calculate it b
Hello,
I have encountered a strange problem with df 4.0.
df shows me that a partition is nearly full. But when I use other
programs like du or calculate it by hand, other values are shown.
I have checked this problem with the latest version: fileutils-4.1
and it is the same problem.
Showing
It appears that du and tar use a different pattern language for their
--exclude-from option. I was trying to predict the size of a backup that
would be performed with tar, and came across a discrepancy.
df reports that I'm using 2882M.
tar --exclude-from says that the backup is 2722M.
du wi
r in malloc:ing path fixed.
Best Regards
Kim Ahlstrom
[EMAIL PROTECTED]
http://kim.animanga.nu/ (Swedish only)
du.4111.c.diff
Description: Binary data
Hi!
I have made two additions to the du utility that I thought you
might want to take a look at.
First. The addition of a -p flag. When this is used the file's size in
percent relative to the total number of shown files will be printed. A
downside to this is that du has to go th
ize.
> There are different outputs for these utilities while I expect the same behavior
> $ls -al
> $du . -a -b --max-depth=1
>
> The difference is in the sizes they show; there is no match between them. For
> example, I have a file whose size is 201 bytes using ls and 4096 u
Hi all,
I don't know whether this is a bug or not, but I
will describe for you what I have noticed
I am using RedHat 7.2 with kernel
2.4 (using ext3
filesystem)
There are different outputs for these utilities
while I expect the same behavior
$ls -al
and
$du . -a -b --max-depth=
Subject: Re: the files listed with du -a do not match ls -R, possible bug?
Date: Mon, 26 Aug 2002 15:54:45 -0500
From: Aaron Wegner <[EMAIL PROTECTED]>
To: Jim Meyering <[EMAIL PROTECTED]>
> I'm pretty sure that's not caused by a bug in du.
>
> Don't depend on
Aaron Wegner <[EMAIL PROTECTED]> wrote:
> I think I found what may be a bug in the du program. So far I have only been
> able to observe this anomaly in my build directory for glibc. The bug is
> uncovered with the following commands, issued from the base of the bu
xplicit. Perhaps because it can't overrun ARG_MAX? Don't know.
> #du -h -d0 $HOME
> 1.0G    /home/mwlucas
> #
> I have a GB in my home directory? Let's look a layer deeper and see where
> the heck it is.
>
> #du -h -d 1
> 52M     ./bin
> 1.4M    ./.kde
&
g.
-d takes one argument, the number of directories deep you want to show. A
-d0 will just give you a simple subtotal of the files in a directory.
#du -h -d0 $HOME
1.0G    /home/mwlucas
#
I have a GB in my home directory? Let's look a layer deeper and see where
the heck it is.
#du -h -d 1
52M     ./bi
-- Forwarded Message --
Subject: Re: the files listed with du -a do not match ls -R, possible bug?
Date: Mon, 19 Aug 2002 09:33:26 -0500
From: Aaron Wegner <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED] (Bob Proulx)
On Saturday 17 August 2002 12:51 pm, you wrote:
> Aar
Aaron Wegner <[EMAIL PROTECTED]> [2002-08-15 13:59:06 -0500]:
> I think I found what may be a bug in the du program. So far I have
> only been able to observe this anomaly in my build directory for
> glibc.
I as well was unable to recreate your problem in any of the
directories t
I think I found what may be a bug in the du program. So far I have only been
able to observe this anomaly in my build directory for glibc. The bug is
uncovered with the following commands, issued from the base of the build
directory:
--
# ls -lR | grep
I did du -sb and got:
-828863916
--
Karel 'Clock' Kulhavy
On Sat, Jun 15, 2002 at 07:43:35PM +0200, Daniel Holbach wrote:
> root@chef:/var/spool/oops/storages# du -sh
> 24k .
> root@chef:/var/spool/oops/storages# ls -l
> -rw-r--r--  1 proxy  proxy  20971520 Jun 15 19:35 oops_storage
Probably a sparse file. For example:
(42)osgili
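A sparse file like the one suspected here can be created directly with dd's seek (a sketch): ls -l reports the 20 MB apparent length, while du reports only the blocks actually allocated.

```shell
# Seek past 20 MiB without writing data: the hole occupies no blocks.
dd if=/dev/zero of=sparse.bin bs=1 count=1 seek=20971519 2>/dev/null
ls -l sparse.bin      # apparent size: 20971520 bytes
du -k sparse.bin      # allocated size: a few KiB at most
rm sparse.bin
```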
Hi,
I think it's a bug, but maybe I'm wrong, here's the output
--snip--
root@chef:/var/spool/oops/storages# du -sh
24k .
root@chef:/var/spool/oops/storages# ls -l
-rw-r--r--  1 proxy  proxy  20971520 Jun 15 19:35 oops_storage
root@chef:/var/spool/oops/storages#
--sn
At 03:32 +0300 2002-05-21, A. Wik wrote:
>On Mon, 20 May 2002, Dag Øien wrote:
>
>> >This page describes du as found in the fileutils-3.16
>> >package; other versions may differ slightly. Mail correc-
>> >tions and addition
From [EMAIL PROTECTED] Mon May 20 14:31:06 2002
Subject: du, kilobytes
From: =?ISO-8859-1?Q?Dag_=D8ien?= <[EMAIL PROTECTED]>
Content-Transfer-Encoding: 7bit
>This page describes du as found in the fileutils-3.16
>pa
On Mon, 20 May 2002, Dag Øien wrote:
> >This page describes du as found in the fileutils-3.16
> >package; other versions may differ slightly. Mail correc-
> >tions and additions to [EMAIL PROTECTED] and [EMAIL PROTECTED]
> >and [EM
>This page describes du as found in the fileutils-3.16
>package; other versions may differ slightly. Mail correc-
>tions and additions to [EMAIL PROTECTED] and [EMAIL PROTECTED]
>and [EMAIL PROTECTED] . Report bugs in the pro-
>
Hi,
I've been working on some enhancements to du, and I'd be happy to
contribute them back, if you're interested.
They add a second display mode to du in which it stores directory
entries in a tree structure, and prints them later. This allows entries
to be displayed as perce
hi.
I was playing around with Linux and tmpfs; I made a big bunch
of dirs. I really mean BIG. Names were "A" x 2000.
$ rm -rf A*
Segmentation fault
$ du -sh
Segmentation fault
$
I have x86 Linux 2.4.18-rc2, glibc-2.2.4-19.3, fileutils-4.1 & 4.1.7.
Here gdb debug stuff from v4
> I've got a file in my home directory that begins with a "-". See
> the output of du -sh * below. Maybe someone could place some nasty
> files in temp, and when root does a du, then...
Please check out the faq on filenames that start with a dash.
http://www.gnu.org/sof
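The two standard workarounds from that FAQ, sketched:

```shell
# A leading "-" in a filename is parsed as an option unless you either
# stop option parsing with "--" or keep names from starting with "-".
touch ./-p           # a nasty file, as in the report
du -sh -- *          # "--" marks the end of options
du -sh ./*           # or prefix every glob result with "./"
rm ./-p
```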
Hi,
I ve got a file in my home directory, that begins with a "-".
See the output of du -sh * below. Maybe tomeone could place some nasty files in temp,
and whet root does a du, then...
io-ii:/data/sort/johnny # du -sh *
du: invalid option -- p
Try `du --help' for more informati
Hi,
I forgot to mention the Version of fileutils:
fileutils 4.1
Johnny
> I ask because it doesn't appear to do what logic and the man page
> imply it will do when executed as such: du -sHx /* it reports size
> for /home, /usr, /data and /proc which are all separate
> filesystems.
du -x will avoid crossing any filesystem below the ones that you ar
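Concretely (a sketch): -x prevents descending across mount points below each starting directory, but every path named on the command line is its own starting point, so du -sHx /* still reports each of /home, /usr, /proc, and so on.

```shell
# -x stops du from crossing mount points *below* a starting directory;
# it does not filter the starting directories themselves.
du -sx /usr 2>/dev/null    # stays on /usr's filesystem
du -sx /* 2>/dev/null      # one starting point per top-level dir,
                           # so other filesystems still show up
```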
Is the -x|--one-file-system option to du still supported?
Does anyone know if it still works? I ask because it doesn't
appear to do what logic and the man page imply it will do
when executed as such: du -sHx /* it reports size for /home,
/usr, /data and /proc which are all separate filesy
Thanks for the report.
> I'm trying to use gnu du, which is installed as gdu on our systems, to
> avoid hiding the system du command. The system version deals happily
> with large files (> 4GB) but the gnu version does something strange.
Because of the difference that occurs
Paco Brufal wrote:
> There are about 600 MB of difference in the /var partition... I am
> using ReiserFS on a Debian Potato, package fileutils is version 4.0l-8.
I can't speak to the exact error, Paco, but I can
tell you that a large number of bugs have been fixed
in the latest fileuti
Hi-
I'm trying to use gnu du, which is installed as gdu on our systems, to
avoid hiding the system du command. The system version deals happily
with large files (> 4GB) but the gnu version does something strange.
For example, on small files:
hpux> ls -l fileutils-4.1.5.tar.gz
Hello,
Please, look at this:
olympus2:/var# du -sch /var
262M    /var
262M    total
olympus2:/var# df
Filesystem 1k-blocks Used Available Use% Mounted on
/dev/sda2 3418000 1101424 2316576 32% /
/dev/sda1         47579      9444     35679  21
> du -h and du -H return different numbers; it's bizarre.
Could you be more specific? Why is it bizarre? Different numbers
from what? At first glance nothing jumped off of the page at me as
being unusual.
Bob
Hi.
du -h and du -H return different numbers; it's bizarre.
My version is: du (GNU fileutils) 4.0.35.
yeti@morgan:~ > du -h lord.avi
681M    lord.avi
yeti@morgan:~ > du -H lord.avi
714M    lord.avi
yeti@morgan:~ > du lord.avi
697220 lord.avi
yeti@yeti:/pub/kino > du -h Ar
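The numbers above are self-consistent if -H in that fileutils version is the --si alias (powers of 1000) while -h uses powers of 1024; that reading is an inference from the output, not from the changelog. Converting the raw 697220 KiB both ways:

```shell
# 697220 KiB rendered in binary (2^20) and SI (10^6) megabytes.
kib=697220
echo "binary: $(( kib / 1024 ))M"            # ~681M, matching du -h
echo "SI:     $(( kib * 1024 / 1000000 ))M"  # ~714M, matching du -H
```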
`du --one-file-system` runs into several (proc ext2) filesystems:
=
% sls1cp root /root 2 } uname -a
Linux sls1cp 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown
% sls1cp root /root 3 } du --version
du (GNU fileutils) 4.0.36
Written by Torbjorn Granlund, David MacKenzie, Larry McVoy, and Paul Eggert.
Copyright (C) 2000 Free Software Foundation, Inc.
This
Hello !
I have found a small bug in the "du" tool on my Cygwin (Win 2000):
~> du --version
du (GNU fileutils) 3.16
When I launch du on a huge directory (3.25 GB), I get the
following result:
~> du -h
-805648523.0.
The result for all the subdirectories
> In du v4.1 I get file sizes that differ substantially from those calculated by ls -l.
> Is that a bug or a feature ... ?
Neither. One is apples and the other is oranges. You can't compare
apples to oranges.
In one du man page:
du - summarize disk usage
[...]
Summ
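A minimal demonstration of the apples/oranges point (file name and size made up): ls -l reports byte length, du reports allocated blocks, and GNU du offers -b for apparent size.

```shell
# A 201-byte file still occupies a whole filesystem block.
printf '%0201d' 0 > small.txt
ls -l small.txt       # 201 bytes
du -k small.txt       # typically 4 on a 4 KiB-block filesystem
du -b small.txt       # 201: GNU apparent-size mode, comparable to ls
rm small.txt
```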
> I'm working on a packaging system that produces statistics of the size of different
> types of files. It uses fileutils/du for this and passes the files as arguments.
> There is, however, a limit to how many arguments can be passed to a program, and for
> packages with a lot o
I'm working on a packaging system that produces statistics of the size of different
types of files. It uses fileutils/du for this and passes the files as arguments. There
is, however, a limit to how many arguments can be passed to a program, and for
packages with a lot of files this
Jim,
> Thanks for the detailed bug report!
> If someone can debug it, that'd be great.
unfortunately I cannot give you access to our system, but I could try
to help you debug the du program.
I installed the sources of the fileutils and had a look at the program
flow. I create
In du v4.1 I get file sizes that differ substantially from those calculated by ls -l.
Is that a bug or a feature ... ?
- Ingo Bormuth
Thanks for the detailed bug report!
If someone can debug it, that'd be great.
Otherwise, if someone gives me temporary access
to such a system, I'll do it.
Jens Liebehenschel <[EMAIL PROTECTED]> wrote:
| Dear developers,
|
| I want to report an error in the du (dis
Dear developers,
I want to report an error in the du (disk usage) program.
Configuration:
--
SuSE Linux 7.1, Kernel 2.2.18
Mounted filesystem from HP-UX 10.20
du Version: ?, but in the manpage I found:
--- cut ---
GNU fileutils 4.0.35, December 2000
--- cut ---
Error description
> Maybe I'm crazy or doing something wrong, but I swear that the -S
> option in the 'du' utility doesn't do what it is supposed to. It is
> supposed to not show the subdirectories but it still does. The
> output of 'du' is the same as 'du -S' I
"Daniel A. Palm" wrote:
>
> Hi.
> Maybe I'm crazy or doing something wrong, but I swear that the -S option in the
>'du' utility
> doesn't do what it is supposed to. It is supposed to not show the subdirectories
> but it still does. The output of
Hi.
Maybe I'm crazy or doing something wrong, but I swear that the -S option in the 'du'
utility
doesn't do what it is supposed to. It is supposed to not show the subdirectories
but it still does. The output of 'du' is the same as 'du -S' I think.
I'm
Using the 'du -h' command on directories larger than 1 GB, the output is
absurd (e.g., negative numbers) even though the exit status is '0'.
Using the 'du' command without the '-h' option on the same directories,
the output is correct (i.e., the exact number of KB).
Environment Info
LUTELY NO WARRANTY.
> For details type `warranty'.
> 5261 + 784 + 670 + 3978 + 1464 + 2100 + 3083 + 1700
> 19040
Very good. But you forgot the directory size. Please add in the size
of 'ls -ld RCS' too.
> prometheus{/home/systems/syk/src}14 % du -b RCS
> 12288 RCS
Hi,
I found this email address as reference in the du man page on SuSE7.0,
so
here we go ..
The du command on SuSE7.0 appears to be a little less than accurate:
prometheus{/home/systems/syk/src}9 % uname -a
Linux prometheus 2.2.16 #1 Wed Aug 2 20:03:33 GMT 2000 i686 unknown
prometheus{/home
I don't know whether this is a known problem or I'm just failing to
interpret the results from du correctly. This is running on Win98:
sholden@THINKER ~/.ncftp
$ du --version
du (GNU fileutils) 4.0
sholden@THINKER ~/.ncftp
$ ls -l
total 31
-rw-r--r-- 1 sholden unknown 167 Fe
Randy
> While df will show the disks getting full, I use du to determine
> which files need truncating. Once located I will either truncate
> them or just remove them via the rm command. However, after removing
> 150 - 200 megs of log files a df command does not show the correct
>
Hi
I have a Linux RH5.2 server running and I am having a slight problem
understanding the results of du and df commands. Disk space is at a premium
for us at the moment so I check periodically via df to determine whether I
need to go in and trim a few web accounts of stray "carts&qu
Hi,
It recently struck me that a useful option to `du' would be the
ability to add up the lengths of files instead of the disc space they
occupy.
If you agree, please consider the diff below as a way of adding this
feature. It patches du.c and fileutils.texi (and nothing else).
ttf
he --count-links (-l) option, but with
the caveats that `du -l -s --total ./ ./*/' reports double the space
used in `.', and counts the size of every file, even if it encounters
the same hard-linked file several times.
Since the existing behavior is so counter-intuitive, I'll probab
th a total.
[fileutils-4.0s]$ du -s -c ./ ./*/
15341 .
15341 total
[fileutils-4.0s]$ du -s ./ ./*/
15341   .
528     ./doc
3213    ./inst
160     ./intl
2494    ./lib
181     ./m4
126     ./man
2618    ./po
4285    ./src
362     ./tests
[fileutils-
45063a1a9db2130ddbf6c2119bccc048 -
sudo ssh lm004 "dd if=/dev/hdb1 bs=1024 count=1024 | md5sum"
1024+0 records in
1024+0 records out
45063a1a9db2130ddbf6c2119bccc048
Nevertheless, the outputs from du are distinct. I think the fileutils
4.0 one is incorrect, and the file
I recently upgraded (ahem) some machines, and noticed a behavior
change in 'du'. I was wondering if this was intentional or a bug.
For example, on a RedHat 6.2/i386 based gnu/linux distribution,
'du --block-size=1 file' reports the size of 'file' in 1024-mul
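For reference, in later GNU coreutils the two notions are explicit: --block-size only changes the display unit of allocated space, and apparent size needs its own flag. A sketch (the behavior of the specific "upgraded" versions in this report may differ):

```shell
printf 'hello' > f
du --block-size=1 f                    # allocated blocks shown in bytes,
                                       # e.g. 4096 on a 4 KiB-block fs
du --apparent-size --block-size=1 f    # 5, the byte length, like ls -l
rm f
```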
Hello.
I'm using the du 4.0q program and I'm convinced that its function of
excluding files or subdirectories does not work properly.
I have Debian linux and in man/info pages is following description :
(according my problematic usage)
--exclude=PAT ... --exclude='/usr/src
Hello
I have found a bug in du
on platform RH 6.1 2.2.12.
Using du -s on a ZIP
which has a file named
/mnt/zip/foo/%bar/foo.gif
du crashes;
the process is dead (D entry on ps wux).
I have tried to kill the du process as owner
and as root, and it is still alive,
so I'm unable to umount th