Newbie question:
How can I get the total size, in K, of all files in a directory that
match a pattern?
For example, I have a dir with ~5000 files, I would like to know the
total size of the ~1000 files matching *.txt.
On RHEL and bash, if it matters...
Thanks,
Kent
On Mon, 2007-10-22 at 09:11 -0400, Kent Johnson wrote:
du -c *.txt
On Monday 22 October 2007 09:11, Kent Johnson wrote:
Ah! Perhaps I
More than you asked for, but here's a command that reports
total space occupied by all files with names ending in .jpg,
recursively from the current directory (but not crossing mount
points) and which is also a gratuitous example of the Process
Substitution facility mentioned in a previous message.
Jim Kuzdrall wrote:
Ah!
Ooops - that --files0-from= option is apparently
new enough (my du version is 5.97) that it's probably
not widely available. My home system has it, but my
work systems don't... -/
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
Kent Johnson [EMAIL PROTECTED] writes:
Stephen Ryan [EMAIL PROTECTED] writes:
du -c *.txt | tail -1
du prints out the sizes of each of the matching files; '-c' means you
want a total,
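A minimal sketch of that suggestion, assuming GNU du (the scratch directory and file names here are invented purely for the demonstration):

```shell
#!/bin/sh
# Sketch: total size, in K, of all files matching a pattern (assumes GNU du).
# Build a throwaway directory with a few sample files for illustration.
dir=$(mktemp -d)
printf 'hello\n' > "$dir/a.txt"
printf 'world\n' > "$dir/b.txt"
printf 'other\n' > "$dir/c.log"   # not matched by *.txt

# -c appends a grand-total line, -k forces kilobyte units, and
# tail -1 keeps only that total line.
( cd "$dir" && du -ck *.txt | tail -1 )

rm -r "$dir"
```

The last field of the output is the word "total", preceded by the combined size in kilobytes.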
On 10/22/07, Stephen Ryan [EMAIL PROTECTED] wrote:
Hmm, again, certainly not my first instinct :)
Paul, we embrace diversity here but that is *definitely* OT...
On Monday 22 October 2007 09:36, Kent Johnson wrote:
to be aware of things like
allocation overhead (a 3-byte file might use 4096 bytes on disk, or
whatever) and sparse files (files with holes in the middle, thus
using *less* space on disk than the file size).
The GNU variant, at least, has an option to report actual file sizes
instead of disk usage
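A quick way to see the two numbers side by side, assuming GNU coreutils; the sparse file is created only for the demonstration:

```shell
#!/bin/sh
# Sketch: du's disk usage vs. its --apparent-size option (GNU coreutils).
# Write one byte at offset 10485759, giving a sparse file whose logical
# size is 10485760 bytes (10240 K) while almost no blocks are allocated.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1 count=1 seek=10485759 2>/dev/null

du -k "$f"                    # blocks actually allocated: small
du -k --apparent-size "$f"    # logical file size: 10240 K

rm -f "$f"
```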
Shawn K. O'Shea wrote:
du -c *.txt | tail -1
Since I know Kent has a Mac and this might be on his laptop, I'd like
to add that this should really be:
du -ck *.txt | tail -1
No, this is a bona fide Linux question :-) it's a Webfaction account.
But thanks for the note!
Kent
On 10/22/07, Michael ODonnell [EMAIL PROTECTED] wrote:
Ooops - that --files0-from= option is apparently
new enough ... that it's probably not widely available.
find . -xdev -type f -name '*.jpg' -print0 2>/dev/null | xargs -0 du -ch | tail -1
(untested)
-- Ben
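One subtlety worth hedging about that find | xargs du pipeline: with enough files, xargs can split the list into several du invocations, each printing its own "total" line, so tail -1 reports only the last batch. A sketch that sums with awk instead, so the result is one number regardless of batching (assumes GNU findutils and coreutils; the directory layout is invented for the demo):

```shell
#!/bin/sh
# Sketch: recursive per-pattern total that stays correct even if xargs
# splits the file list into several du batches (assumes GNU tools).
dir=$(mktemp -d)
printf '12345' > "$dir/a.jpg"
mkdir "$dir/sub"
printf '678' > "$dir/sub/b.jpg"

# Quote the pattern so the shell doesn't expand it first, and sum du's
# kilobyte column with awk instead of trusting a single "total" line.
find "$dir" -xdev -type f -name '*.jpg' -print0 2>/dev/null \
  | xargs -0 du -k --apparent-size \
  | awk '{ s += $1 } END { print s "K" }'

rm -r "$dir"
```

With the two tiny files above, each rounds up to 1 K of apparent size, so this prints 2K.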
has an option to report actual file sizes instead of disk usage.
Which one you want depends on what you're looking for.
I'd just like to kibitz one more subtlety: du reports disk usage as
discussed above, but another way that you can get seemingly conflicting
numbers is from sparse files.
Hi All,
Can the 2GB file size limit be changed? I need to store about 10GB worth
of data in a single file, but it dies at 2GB.
TIA,
Kenny
--
Tact is just *not* saying true stuff -- Cordelia Chase
Kenneth E. Lussier
- file I/O routines in your kernel
- your C library
- Samba
Samba and NFS(v2) don't like 2GB file sizes.
http://www.suse.de/~aj/linux_lfs.html
-Mark
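A quick sanity check, assuming a large-file-capable kernel, C library, and filesystem as the page above describes; the file is sparse, so the 3 GB are logical only and almost no real disk space is consumed:

```shell
#!/bin/sh
# Sketch: verify the system can address a file past the 2 GB mark by
# writing a single byte at a 3 GiB offset (sparse, so cheap on disk).
f=$(mktemp)
if dd if=/dev/zero of="$f" bs=1 count=1 seek=3221225471 2>/dev/null; then
    # Logical size in bytes should be 3221225472 (3 GiB).
    wc -c < "$f"
else
    echo "large files not supported here"
fi
rm -f "$f"
```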
At some point hitherto, Mark Komarinski hath spake thusly:
Samba and NFS(v2) don't like 2GB file sizes.
http://www.suse.de/~aj/linux_lfs.html
That page is a bit outdated. It talks about RH 6.2 as being current,
and doesn't mention ext3 at all. I
In a message dated: 20 Aug 2002 07:34:27 EDT
Kenneth E. Lussier said:
I don't know if ext2 supports big files. I think you need to turn
something on in the kernel