> Message: 28
> Date: Tue, 2 Jan 2007 10:20:08 -0800
> From: "Kurt Buff" <[EMAIL PROTECTED]>
> Subject: Batch file question - average size of file in directory
> To: [EMAIL PROTECTED]
> Message-ID:
>       <[EMAIL PROTECTED]>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
> 
> All,
> 
> I don't even have a clue how to start this one, so am looking for a little 
> help.
> 
> I've got a directory with a large number of gzipped files in it (over
> 110k) along with a few thousand uncompressed files.
> 
> I'd like to find the average uncompressed size of the gzipped files,
> and ignore the uncompressed files.
> 
> How on earth would I go about doing that with the default shell (no
> bash or other shells installed), or in perl, or something like that.
> I'm no scripter of any great expertise, and am just stumbling over
> this trying to find an approach.
> 
> Many thanks for any help,
> 
> Kurt

Hi, Kurt.

Can I make some assumptions that simplify things?  No kinky filenames, 
just [a-zA-Z0-9.].  My approach specifically doesn't like colons or 
spaces, I bet.  Also, you say gzipped, so I'm assuming it's ONLY gzip, 
no bzip2, etc.

Here's a first draft that might give you some ideas.  It will output:

foo.gz : 3456
bar.gz : 1048576
(etc.)

find . -type f | while read -r fname; do
  file "$fname" | grep -q "compressed" && echo "$fname : $(zcat "$fname" | wc -c)"
done


If you really need a script that will do the math for you, then
pipe the output of this one into bc:

#!/bin/sh

find . -type f | {

n=0
echo scale=2
echo -n "("
while read -r fname; do
  if file "$fname" | grep -q "compressed"
  then
    echo -n "$(zcat "$fname" | wc -c)+"
    n=$(($n+1))
  fi
done
echo "0) / $n"

}

That should give you the average decompressed size of the gzipped
files under the current directory (note that find recurses into
subdirectories, too).
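One more thought, with an assumption on my part: if the gzipped files all
have a .gz suffix, you can skip decompressing them entirely. gzip records
the uncompressed length in each file's trailer, and gzip -l reads it from
there, which should be far cheaper than zcat | wc -c across 110k+ files.
A sketch:

```shell
# Sketch, assuming the gzipped files all end in .gz (adjust -name if not).
# `gzip -l` reports the uncompressed size from the gzip trailer instead of
# decompressing; field 2 of its second output line is the uncompressed size.
find . -type f -name '*.gz' | while read -r fname; do
  gzip -l "$fname" | awk 'NR == 2 { print $2 }'
done | awk '{ sum += $1; n++ } END { if (n) printf "%.2f\n", sum / n }'
```

Caveat: the trailer stores the size modulo 2^32, so anything that
decompresses to more than 4 GB will be under-reported.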

_______________________________________________
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"
