Re: script help

2011-02-16 Thread Peter Andreev
2011/2/15 RW rwmailli...@googlemail.com:
 On Tue, 15 Feb 2011 12:57:12 +0300
 Peter Andreev andreev.pe...@gmail.com wrote:

 Use of xargs on many files will be much faster than find...exec
 construction

 This is a surprisingly common myth. exec can pass single or multiple
 arguments according to whether you use ';' or '+'.

You are right, use of + makes -exec much faster. Thank you, I
didn't know about this feature.

 find / -type f -name copyright.htm | xargs sed -i .bak -e
 's/2010/2011/g'

 This is much less safe on FreeBSD than it is with the GNU versions
 because -print0 is required for paths with spaces.

 find  ... -print0 | xargs -0 ...







-- 
--
AP


Re: script help

2011-02-16 Thread Bernt Hansson

2011-02-14 23:34, Jack L. Stone wrote:

Hello folks:


Hello!


No doubt this will be easy for those with scripting abilities.



# find all of the same filenames (copyright.htm) and then replace the year
2010 with 2011 in each file. Once I have a working script, I should be able
to add it as a cron job to run on the first day of each new year.


cd /your/www/directory rm -rf copyright.htm


Any help appreciated.

Thanks!
Jack



Re: script help

2011-02-16 Thread Mike Jeays
On Wed, 16 Feb 2011 16:47:57 +0100
Bernt Hansson be...@bah.homeip.net wrote:

 2011-02-14 23:34, Jack L. Stone wrote:
  Hello folks:
 
 Hello!
 
  No doubt this will be easy for those with scripting abilities.
 
  # find all of the same filenames (copyright.htm) and then replace the year
  2010 with 2011 in each file. Once I have a working script, I should be able
  to add it as a cron job to run on the first day of each new year.
 
 cd /your/www/directory rm -rf copyright.htm
 
  Any help appreciated.
 
  Thanks!
  Jack

I doubt anyone will be dumb enough to fall for this, but it is not exactly 
constructive. Anyhow, it won't work without a semicolon.


Re: script help

2011-02-16 Thread Robert Bonomi

 Date: Wed, 16 Feb 2011 16:47:57 +0100
 From: Bernt Hansson be...@bah.homeip.net
 Subject: Re: script help

 2011-02-14 23:34, Jack L. Stone wrote:
  Hello folks:

 Hello!

  No doubt this will be easy for those with scripting abilities.

  # find all of the same filenames (copyright.htm) and then replace the 
  year 2010 with 2011 in each file. Once I have a working script, I 
  should be able to add it as a cron job to run on the first day of each 
  new year.

 cd /your/www/directory 
 rm -rf copyright.htm

(A) doesn't do what the OP asked.
(B) doesn't do what you -think- it does. (i.e., it will delete at most one file)




Re: script help

2011-02-15 Thread perryh
Jack L. Stone ja...@sage-american.com wrote:

 # find all of the same filenames (copyright.htm) and then replace
 the year 2010 with 2011 in each file. Once I have a working
 script, I should be able to add it as a cron job to run on the
 first day of each new year.

Before actually doing this, you might want to consult a copyright
lawyer.  Seems to me that merely claiming a more recent copyright
date, having made no substantive change to the work for which the
copyright is claimed, could be construed as a fraudulent claim.


Re: script help

2011-02-15 Thread erikmccaskey64
My little opinion: first run the changes on a backup, or a copy of the files.

This one works under Linux bash (Fedora):
how to create a shadow of a folder [same filenames in another dir, but with 0
Byte size]


in the original, A directory:
find . -type f > a.txt


B directory:
cat ../a.txt | while read file; do if [[ "$file" = */* ]]; then mkdir -p
"${file%/*}"; fi; touch "$file"; done




so if something goes wrong, there would be no trouble

On Mon, 14 Feb 2011 15:11:19 -0800 Adam Vande More
<amvandem...@gmail.com> wrote:

On Mon, Feb 14, 2011 at 4:34 PM, Jack L. Stone
<ja...@sage-american.com> wrote:

> Hello folks:
>
> No doubt this will be easy for those with scripting abilities.
>
> I have a gazillion files by the same name and each contains the same line
> requiring the same change. But the problem is that they are in many
> different directories on a server with numerous domains. While I could
> handle the change using a single directory within my abilities, I'm unsure
> how to do a search and replace throughout the many domains and their
> directories. Don't want to mess up. Here's what I'm trying to do:
>
> # find all of the same filenames (copyright.htm) and then replace the year
> 2010 with 2011 in each file. Once I have a working script, I should be able
> to add it as a cron job to run on the first day of each new year.
>
> Any help appreciated.
>

/usr/ports/misc/rpl

--
Adam Vande More
 







Re: script help

2011-02-15 Thread Peter Andreev
Use of xargs on many files will be much faster than find...exec construction

find / -type f -name copyright.htm | xargs sed -i .bak -e 's/2010/2011/g'

2011/2/15 erikmccaskey64 erikmccaske...@zoho.com:
 my little opinion: first run the changes on a backup, or a copy of the files:

 this one works under linux bash fedora:
 how to create a shadow of a folder [same filenames in another dir, but with 
 0 Byte size]


 in the original, A directory:
 find . -type f > a.txt


 B directory:
 cat ../a.txt | while read file; do if [[ "$file" = */* ]]; then mkdir -p
 "${file%/*}"; fi; touch "$file"; done




 so if something goes wrong, there would be no trouble

 On Mon, 14 Feb 2011 15:11:19 -0800 Adam Vande More
 <amvandem...@gmail.com> wrote:

 On Mon, Feb 14, 2011 at 4:34 PM, Jack L. Stone
 <ja...@sage-american.com> wrote:

 > Hello folks:
 >
 > No doubt this will be easy for those with scripting abilities.
 >
 > I have a gazillion files by the same name and each contains the same line
 > requiring the same change. But the problem is that they are in many
 > different directories on a server with numerous domains. While I could
 > handle the change using a single directory within my abilities, I'm unsure
 > how to do a search and replace throughout the many domains and their
 > directories. Don't want to mess up. Here's what I'm trying to do:
 >
 > # find all of the same filenames (copyright.htm) and then replace the year
 > 2010 with 2011 in each file. Once I have a working script, I should be able
 > to add it as a cron job to run on the first day of each new year.
 >
 > Any help appreciated.
 >

 /usr/ports/misc/rpl

 --
 Adam Vande More










-- 
--
AP


Re: script help

2011-02-15 Thread Jack L. Stone
At 12:41 AM 2/15/2011 -0800, per...@pluto.rain.com wrote:
Jack L. Stone ja...@sage-american.com wrote:

 # find all of the same filenames (copyright.htm) and then replace
 the year 2010 with 2011 in each file. Once I have a working
 script, I should be able to add it as a cron job to run on the
 first day of each new year.

Before actually doing this, you might want to consult a copyright
lawyer.  Seems to me that merely claiming a more recent copyright
date, having made no substantive change to the work for which the
copyright is claimed, could be construed as a fraudulent claim.

Wow! You wandered way off the trail. I own the tech magazine I founded 23
years ago and we publish monthly to 214 countries. I have also practiced law
for my companies for nearly 40 years, so quit worrying about that stuff.

I just need script help, nothing more than that.

Jack

(^_^)
Happy trails,
Jack L. Stone

System Admin
Sage-american


Re: script help

2011-02-15 Thread Julian H. Stacey
Hi,
per...@pluto.rain.com wrote:
 Jack L. Stone ja...@sage-american.com wrote:
 
  # find all of the same filenames (copyright.htm) and then replace
  the year 2010 with 2011 in each file. Once I have a working
  script, I should be able to add it as a cron job to run on the
  first day of each new year.
 
 Before actually doing this, you might want to consult a copyright
 lawyer.  Seems to me that merely claiming a more recent copyright
 date, having made no substantive change to the work for which the
 copyright is claimed, could be construed as a fraudulent claim.

One might also want to not delete the earliest copyright date.

Numerous commercial firms list several copyright years in the same
file or product start-up.

When I was editing some of my stuff recently, I decided to leave the
first & last year in; don't know if that's correct though.

I suppose if one really wanted to know what's correct, one could search
& read the Berne (Switzerland) International Copyright Convention.

Cheers,
Julian
-- 
Julian Stacey, BSD Unix Linux C Sys Eng Consultants Munich http://berklix.com
 Mail plain text;  Not quoted-printable, Not HTML, Not base 64.
 Reply below text sections not at top, to avoid breaking cumulative context.


Re: script help

2011-02-15 Thread RW
On Tue, 15 Feb 2011 12:57:12 +0300
Peter Andreev andreev.pe...@gmail.com wrote:

 Use of xargs on many files will be much faster than find...exec
 construction

This is a surprisingly common myth. exec can pass single or multiple
arguments according to whether you use ';' or '+'.
 
 find / -type f -name copyright.htm | xargs sed -i .bak -e
 's/2010/2011/g'

This is much less safe on FreeBSD than it is with the GNU versions
because -print0 is required for paths with spaces.

find  ... -print0 | xargs -0 ...





Re: script help

2011-02-15 Thread Jack L. Stone
At 02:53 PM 2/15/2011 +, RW wrote:
On Tue, 15 Feb 2011 12:57:12 +0300
Peter Andreev andreev.pe...@gmail.com wrote:

 Use of xargs on many files will be much faster than find...exec
 construction

This is a surprisingly common myth. exec can pass single or multiple
arguments according to whether you use ';' or '+'.
 
 find / -type f -name copyright.htm | xargs sed -i .bak -e
 's/2010/2011/g'

This is much less safe on FreeBSD than it is with the GNU versions
because -print0 is required for paths with spaces.

find  ... -print0 | xargs -0 ...



Forgot to mention: if the string to replace on the text line of the files
includes a connecting dash, like 1988-2010, then rather than just
's/2010/2011/' perhaps it should be 's/1988-2010/1988-2011/'.
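
For that case, a hedged sketch (untested; /usr/local/www is just a stand-in
for the real web root) that keeps whatever starting year is present and only
bumps the final one, using FreeBSD sed's -E extended-regex flag:

# bump only the second year of a "YYYY-2010" range, keeping backups
find /usr/local/www -type f -name copyright.htm \
    -exec sed -i .bak -E -e 's/([0-9]{4})-2010/\1-2011/g' {} +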

Jack

(^_^)
Happy trails,
Jack L. Stone

System Admin
Sage-american


Re: script help

2011-02-15 Thread Paul Schmehl
--On February 15, 2011 12:57:12 PM +0300 Peter Andreev 
andreev.pe...@gmail.com wrote:



Use of xargs on many files will be much faster than find...exec
construction

find / -type f -name copyright.htm | xargs sed -i .bak -e 's/2010/2011/g'



I believe you, but can you explain why this is true?  What makes xargs 
faster than exec?


--
Paul Schmehl, Senior Infosec Analyst
As if it wasn't already obvious, my opinions
are my own and not those of my employer.
***
It is as useless to argue with those who have
renounced the use of reason as to administer
medication to the dead. Thomas Jefferson
There are some ideas so wrong that only a very
intelligent person could believe in them. George Orwell



Re: script help

2011-02-15 Thread Lowell Gilbert
Paul Schmehl pschmehl_li...@tx.rr.com writes:

 --On February 15, 2011 12:57:12 PM +0300 Peter Andreev
 andreev.pe...@gmail.com wrote:

 Use of xargs on many files will be much faster than find...exec
 construction

 find / -type f -name copyright.htm | xargs sed -i .bak -e 's/2010/2011/g'


 I believe you, but can you explain why this is true?  What makes xargs
 faster than exec?

Classically, exec always spun off a new process for each exec (i.e.,
every single file).

For years now, find(1) has had a POSIX-standard syntax (ending the
-exec command with a '+' instead of ';') which does pretty much the
same thing in a single command.

Sometimes, the command being used only handles one filename at a time,
and -exec is necessary.
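
To make the difference concrete, here is a small illustration (untested; the
path is made up and the sed usage follows the FreeBSD examples elsewhere in
this thread):

# one sed process per file
find /usr/local/www -name copyright.htm -exec sed -i .bak 's/2010/2011/g' {} \;

# POSIX '+' form: find batches many files into few sed invocations
find /usr/local/www -name copyright.htm -exec sed -i .bak 's/2010/2011/g' {} +

# xargs form; -print0/-0 keeps paths containing spaces intact
find /usr/local/www -name copyright.htm -print0 | xargs -0 sed -i .bak 's/2010/2011/g'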


Re: script help

2011-02-14 Thread Roland Smith
On Mon, Feb 14, 2011 at 04:34:37PM -0600, Jack L. Stone wrote:
 # find all of the same filenames (copyright.htm) and then replace the year
 2010 with 2011 in each file. Once I have a working script, I should be able
 to add it as a cron job to run on the first day of each new year.

The following command should do the trick, I think.

find / -type f -name copyright.htm -exec sed -i .bak -e 's/Copyright © 
20../Copyright © 2011/g' {} \;

Basically the find(1) command locates the files you want to change, and then
for every file it calls sed(1) with the -i flag to do the in-place
editing. The originals are saved as copyright.htm.bak. If all goes well, you
can delete those.

Depending on the contents of the files, you might want to just replace 2010 by
2011, or use a little more context as I did in the example, to make sure only
the right numbers are replaced.
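
As a cautious sketch of that workflow (again untested, with /usr/local/www
standing in for the real web root): run the edit with .bak backups,
spot-check a diff, then drop the backups.

find /usr/local/www -type f -name copyright.htm \
    -exec sed -i .bak -e 's/Copyright © 2010/Copyright © 2011/g' {} +

# spot-check one changed file against its backup
f=$(find /usr/local/www -name copyright.htm | head -n 1)
diff -u "$f.bak" "$f"

# once satisfied, remove the backups
find /usr/local/www -type f -name copyright.htm.bak -delete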

Roland
-- 
R.F.Smith   http://www.xs4all.nl/~rsmith/
[plain text _non-HTML_ PGP/GnuPG encrypted/signed email much appreciated]
pgp: 1A2B 477F 9970 BA3C 2914  B7CE 1277 EFB0 C321 A725 (KeyID: C321A725)




Re: script help

2011-02-14 Thread Chip Camden
Quoth Jack L. Stone on Monday, 14 February 2011:
 Hello folks:
 
 No doubt this will be easy for those with scripting abilities.
 
 I have a gazillion files by the same name and each contains the same line
 requiring the same change. But the problem is that they are in many
 different directories on a server with numerous domains. While I could
 handle the change using a single directory within my abilities, I'm unsure
 how to do a search and replace throughout the many domains and their
 directories. Don't want to mess up. Here's what I'm trying to do:
 
 # find all of the same filenames (copyright.htm) and then replace the year
 2010 with 2011 in each file. Once I have a working script, I should be able
 to add it as a cron job to run on the first day of each new year.
 
 Any help appreciated.
 
 Thanks!
 Jack
 
 (^_^)
 Happy trails,
 Jack L. Stone
 
 System Admin
 Sage-american

find /upper-dir -name copyright.htm -exec sed -i '' -e s/2010/2011/g {} \;

-- 
Sterling (Chip) Camden | sterl...@camdensoftware.com | 2048D/3A978E4F
http://chipsquips.com  | http://camdensoftware.com   | http://chipstips.com




Re: script help

2011-02-14 Thread Jarrod Slick

On 2/14/11 3:34 PM, Jack L. Stone wrote:

Hello folks:

No doubt this will be easy for those with scripting abilities.

I have a gazillion files by the same name and each contains the same line
requiring the same change. But the problem is that they are in many
different directories on a server with numerous domains. While I could
handle the change using a single directory within my abilities, I'm unsure
how to do a search and replace throughout the many domains and their
directories. Don't want to mess up. Here's what I'm trying to do:

# find all of the same filenames (copyright.htm) and then replace the year
2010 with 2011 in each file. Once I have a working script, I should be able
to add it as a cron job to run on the first day of each new year.

Any help appreciated.

Thanks!
Jack

(^_^)
Happy trails,
Jack L. Stone

System Admin
Sage-american

Something like this should work (*UNTESTED*):

#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

# if you don't know all the directories and are okay with the script
# running for a while you could just specify /
my @directories = qw(/var/www/html /var/www/html2 /etc);
my $line = quotemeta("Copyright 2010"); # Or whatever your line actually is . . .

my $copyright_file = quotemeta("copyright.htm");
find(\&wanted, @directories);

sub wanted {
    if ($_ =~ $copyright_file) {
        my $path = $File::Find::dir . '/' . $_;
        open(my $fh, '<', $path) or die
            "Couldn't create read handle: $!\n";

        my $new_file = '';
        while (my $l = <$fh>) {
            if ($l =~ /^$line$/) {
                $l =~ s/2010/2011/;
            }
            $new_file .= $l;   # lines read from <$fh> already end in \n
        }
        close($fh);
        open($fh, '>', $path) or die
            "Couldn't create write handle: $!\n";

        print $fh $new_file;
        close($fh);
    }
}


Re: script help

2011-02-14 Thread Adam Vande More
On Mon, Feb 14, 2011 at 4:34 PM, Jack L. Stone ja...@sage-american.comwrote:

 Hello folks:

 No doubt this will be easy for those with scripting abilities.

 I have a gazillion files by the same name and each contains the same line
 requiring the same change. But the problem is that they are in many
 different directories on a server with numerous domains. While I could
 handle the change using a single directory within my abilities, I'm unsure
 how to do a search and replace throughout the many domains and their
 directories. Don't want to mess up. Here's what I'm trying to do:

 # find all of the same filenames (copyright.htm) and then replace the year
 2010 with 2011 in each file. Once I have a working script, I should be able
 to add it as a cron job to run on the first day of each new year.

 Any help appreciated.


/usr/ports/misc/rpl

-- 
Adam Vande More


Re: script help

2011-02-14 Thread Chuck Swiger
On Feb 14, 2011, at 2:34 PM, Jack L. Stone wrote:
 # find all of the same filenames (copyright.htm) and then replace the year
 2010 with 2011 in each file. Once I have a working script, I should be able
 to add it as a cron job to run on the first day of each new year.

  find . -name copyright.htm -exec sed -i .BAK s/2010/2011/ {} \;

Of course, a purely automated replacement of the year without making any other
change is likely considered de minimis from a copyright perspective.  You'd
need to make a more substantial change involving some original content for this
to be genuinely meaningful.

Regards,
-- 
-Chuck



Re: script help

2006-09-05 Thread Paul Schmehl



--On Tuesday, September 05, 2006 11:16:26 -0700 ann kok 
[EMAIL PROTECTED] wrote:



Hi all

I would like to ask script question

1/ if i use the script to run within 1 minute, I can't
run it in the cronjob. how can I run this script
automatically?

I don't understand what you mean here.  Are you asking how to run a cronjob 
more than once a minute?



2/ I have file file.txt as below, there are two
fields.

4 999
10 200
15 400
60 900

I write an awk script to extract field 2 if field 1 is
less than 10 and put it in the file result.txt, but
I am not successful!

awk '{
if ($1 < 10)
$2="this is result $2 when the feild 1 less than 10"
}' < file.txt > result.txt

result.txt

this is result $2 when the feild 1 less than 10
this is result $2 when the feild 1 less than 10


but i would like the result as

this is result 999 when the feild 1 less than 10
this is result 200 when the feild 1 less than 10


awk '{
if ($1 < 10)
$2="this is result $2 when the feild 1 less than 10"
}' < file.txt > result.txt

If you want awk to return the value of $2, don't quote it.
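
For instance, a minimal sketch of the quoting point (not tested against the
real data): concatenate the literal text with an unquoted $2 so awk
substitutes the field's value.

awk '$1 < 10 { print "this is result " $2 " when field 1 is less than 10" }' \
    file.txt > result.txt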

Paul Schmehl ([EMAIL PROTECTED])
Adjunct Information Security Officer
The University of Texas at Dallas
http://www.utdallas.edu/ir/security/


Re: script help

2006-09-05 Thread ann kok
Dear Paul

Thank you for your mail

I want to run a script every xx seconds automatically

but cron jobs are limited to minutes

Thank you

--- Paul Schmehl [EMAIL PROTECTED] wrote:

 
 
 --On Tuesday, September 05, 2006 11:16:26 -0700 ann
 kok 
 [EMAIL PROTECTED] wrote:
 
  Hi all
 
  I would like to ask script question
 
  1/ if i use the script to run within 1 minute, I
 can't
  run it in the cronjob. how can I run this script
  automatically?
 
 I don't understand what you mean here.  Are you
 asking how to run a cronjob 
 more than once a minute?
 
  2/ I have file file.txt as below, there are two
  fields.
 
  4 999
  10 200
  15 400
  60 900
 
  I write awk script to exact field 2 if the field
 1
  less than 10 and put it in the file result.txt.
 but
  i am not successful!
 
  awk '{
  if ($1 < 10)
  $2="this is result $2 when the feild 1 less than 10"
  }' < file.txt > result.txt
 
  result.txt
 
  this is result $2 when the feild 1 less than 10
  this is result $2 when the feild 1 less than 10
 
 
  but i would like the result as
 
  this is result 999 when the feild 1 less than 10
  this is result 200 when the feild 1 less than 10
 
 awk '{
  if ($1 < 10)
 $2="this is result $2 when the feild 1 less than 10"
  }' < file.txt > result.txt
 
 If you want awk to return the value of $2, don't
 quote it.
 
 Paul Schmehl ([EMAIL PROTECTED])
 Adjunct Information Security Officer
 The University of Texas at Dallas
 http://www.utdallas.edu/ir/security/
 




Re: script help

2006-09-05 Thread Paul Schmehl
--On Tuesday, September 05, 2006 15:36:09 -0700 ann kok 
[EMAIL PROTECTED] wrote:



Dear Paul

Thank you for your mail

I want to run a script xx seconds automatically

but cronjob is limited minutes

That's correct.  The fastest you can run a cron job is every minute because 
that's how often cron checks for jobs.


Paul Schmehl ([EMAIL PROTECTED])
Adjunct Information Security Officer
The University of Texas at Dallas
http://www.utdallas.edu/ir/security/


Re: script help

2006-09-05 Thread Frank Shute
On Tue, Sep 05, 2006 at 03:36:09PM -0700, ann kok wrote:

 Dear Paul
 
 Thank you for your mail
 
 I want to run a script xx seconds automatically
 
 but cronjob is limited minutes
 
 Thank you

I think what you're after is sleep(1)
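
For example, a minimal sketch along those lines (the interval and the script
path are placeholders): start it once, e.g. from an rc script, and let it loop.

#!/bin/sh
interval=10                  # seconds between runs
while :; do
    /path/to/your-script
    sleep "$interval"
done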

-- 

 Frank 


echo f r a n k @ e s p e r a n c e - l i n u x . c o . u k | sed 's/ //g'

  ---PGP keyID: 0x10BD6F4B---  


Re: Script help for updating routine

2005-11-03 Thread Denny White




On 11/2/05, Denny White [EMAIL PROTECTED] wrote:



I have a script, pasted in below, which does various
things on a daily basis, like cvsup src, docs, ports,
portsdb, portversion, portupgrade, and so on. I finally
figured out how to do the if/then/else thing with the
portversion-portupgrade part of the script, but I can't
figure out what to do to bypass the docs install part
if there are no new docs. Thanks for any help I can
get on it. Script follows:

#!/bin/sh
#
echo "Cvsup latest src and doc"
cvsup -g -L 2 /root/srcdoc-supfile
#
# THIS THE PART IN QUESTION, THAT DOES
# DOES THE DOCS. CUSTOM MAKEFILE IS FOR
# ENGLISH ONLY.
#G
#send copious output to the bit bucket
echo "Updating docs"
echo ""
cd /usr/doc
cp Makefile.custom Makefile
make install
#make install > /dev/null
#
cd /root
echo "Portsnap fetching and updating ports"
echo ""
portsnap fetch
portsnap update
#
echo "Updating INDEX in /usr/ports"
echo ""
cd /usr/ports
#make fetchindex
portsdb -uUF
#
echo "Portaudit checking for vulnerabilities in installed ports"
echo "Results in file /root/vulnerable"
echo ""
portaudit -Fda > /root/vulnerable
#
echo "Portversion checking if any ports need upgrading"
echo "Results in file /root/need2upgrade"
echo ""
portversion -l "<" > /root/need2upgrade
if grep '<' /root/need2upgrade; then
echo "Portupgrade upgrading out-of-date ports"
portupgrade -arR; else
echo "Ports already up to date" 1>&2
exit 1
fi
echo "Finished at `/bin/date`."
exit




Today Andrew P. contributed the following:

1. You can limit docs to custom languages in
make.conf, that's a better way


Yup, did it already. I had just copied it word for
word to see how well it worked. Found it in Dru
Lavigne's column at O'Reilly.



2. You can afford to copy an extra 60Mb once a day,
can't you?


Don't quite follow on that. It's all downloaded.
Other langs aren't #'d out in the supfile, just
aren't installed. Time consuming, not about h/d
space.



3. You can grep cvsup output against something
like doc/


That's what I thought. Can't see grepping doc,
maybe update? Don't know quite how, though. Don't
know enough about scripting yet, as I said. I
don't want to interrupt the cvsup process. I
thought about using tee & grep 'update' or
something to that effect on that secondary
output.
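
Something along those lines, as an untested sketch (the log path is
arbitrary, and the assumption that cvsup's -L 2 output names updated files
under doc/ should be checked against a real run):

cvsup -g -L 2 /root/srcdoc-supfile | tee /root/cvsup.log
if grep -q ' doc/' /root/cvsup.log; then
    echo "Updating docs"
    cd /usr/doc && cp Makefile.custom Makefile && make install
else
    echo "No doc updates, skipping make install"
fi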



4. Never run portsnap fetch from cron, even if
you chose a very odd time, use portsnap cron



Yup, know about that, but thanks for the warning.
I have it set up like you said, in cron, for times
when I'm too lazy to run the entire script and instead
just do it piecemeal.

GnuPG key  : 0x1644E79A  |  http://wwwkeys.nl.pgp.net
Fingerprint: D0A9 AD44 1F10 E09E 0E67  EC25 CB44 F2E5 1644 E79A




Re: Script help for updating routine

2005-11-02 Thread Andrew P.
On 11/2/05, Denny White [EMAIL PROTECTED] wrote:


 I have a script, pasted in below, which does various
 things on a daily basis, like cvsup src, docs, ports,
 portsdb, portversion, portupgrade, and so on. I finally
 figured out how to do the if/then/else thing with the
 portversion-portupgrade part of the script, but I can't
 figure out what to do to bypass the docs install part
 if there are no new docs. Thanks for any help I can
 get on it. Script follows:

 #!/bin/sh
 #
 echo "Cvsup latest src and doc"
 cvsup -g -L 2 /root/srcdoc-supfile
 #
 # THIS THE PART IN QUESTION, THAT DOES
 # DOES THE DOCS. CUSTOM MAKEFILE IS FOR
 # ENGLISH ONLY.
 #G
 #send copious output to the bit bucket
 echo "Updating docs"
 echo ""
 cd /usr/doc
 cp Makefile.custom Makefile
 make install
 #make install > /dev/null
 #
 cd /root
 echo "Portsnap fetching and updating ports"
 echo ""
 portsnap fetch
 portsnap update
 #
 echo "Updating INDEX in /usr/ports"
 echo ""
 cd /usr/ports
 #make fetchindex
 portsdb -uUF
 #
 echo "Portaudit checking for vulnerabilities in installed ports"
 echo "Results in file /root/vulnerable"
 echo ""
 portaudit -Fda > /root/vulnerable
 #
 echo "Portversion checking if any ports need upgrading"
 echo "Results in file /root/need2upgrade"
 echo ""
 portversion -l "<" > /root/need2upgrade
 if grep '<' /root/need2upgrade; then
 echo "Portupgrade upgrading out-of-date ports"
 portupgrade -arR; else
 echo "Ports already up to date" 1>&2
 exit 1
 fi
 echo "Finished at `/bin/date`."
 exit

 GnuPG key  : 0x1644E79A  |  http://wwwkeys.nl.pgp.net
 Fingerprint: D0A9 AD44 1F10 E09E 0E67  EC25 CB44 F2E5 1644 E79A




1. You can limit docs to custom languages in
make.conf, that's a better way

2. You can afford to copy an extra 60Mb once a day,
can't you?

3. You can grep cvsup output against something
like doc/

4. Never run portsnap fetch from cron, even if
you chose a very odd time, use portsnap cron

5. etc :)


Re: Script help using cut

2005-08-24 Thread antenneX
- Original Message - 
From: antenneX [EMAIL PROTECTED]
To: Giorgos Keramidas [EMAIL PROTECTED]
Cc: freebsd-questions@freebsd.org
Sent: Tuesday, August 23, 2005 8:35 PM
Subject: Re: Script help using cut


 - Original Message - 
 From: Giorgos Keramidas [EMAIL PROTECTED]
 To: antenneX [EMAIL PROTECTED]
 Cc: freebsd-questions@freebsd.org
 Sent: Tuesday, August 23, 2005 8:16 PM
 Subject: Re: Script help using cut


  On 2005-08-23 20:02, antenneX [EMAIL PROTECTED] wrote:
   Been trying to complete a script that I can use to grep spam
 emails
   from the maillog, then trim it to just the plain email address.
 Trying
   to use cut in the script but it's not doing what I want yet.
  
   Here is what the earlier lines have the lines down to so far:
   (envelope-from [EMAIL PROTECTED])  -- no quotes
   ...and I want this clean trimmed result after trim using cut
 or
   anything else that works to trim/cut:
  
   [EMAIL PROTECTED]  --- no underlines of course
  
   That's a TAB space at beginning of the line.
  
   The envelope lines are in a tmp file in colum format (one line
 below
   the other).
   (envelope-from [EMAIL PROTECTED])
   (envelope-from [EMAIL PROTECTED])
   (envelope-from [EMAIL PROTECTED])
  
   All ideas appreciated
 
  Does it have to be cut(1)?
 
  $ awk '{print $2}' tmpfile | sed -e 's/)[[:space:]]*$//' | sort |
 uniq
 


Just woke up this morning and realized I needed to chop off more -- 
everything except the domain.

So, instead of [EMAIL PROTECTED] I need the result badguy.com

How could the above awk line be expanded to chop off the username@
portion as well?

Sorry, must have been really tired.



Re: Script help using cut

2005-08-24 Thread Giorgos Keramidas
On 2005-08-24 07:58, antenneX [EMAIL PROTECTED] wrote:
antenneX [EMAIL PROTECTED] wrote:
Giorgos Keramidas [EMAIL PROTECTED] wrote:
 (envelope-from [EMAIL PROTECTED])
 (envelope-from [EMAIL PROTECTED])
 (envelope-from [EMAIL PROTECTED])

 All ideas appreciated

 $ awk '{print $2}' tmpfile | sed -e 's/)[[:space:]]*$//' | sort | uniq

 Just woke up this morning and realized I needed to chop off more --
 everything except the domain.

 So, instead of [EMAIL PROTECTED] I need the result badguy.com

 How could the above awk line be expanded to chop off the username@
 portion as well?

sed(1) can do more than one substitution in one line:

sed -e 's/)[[:space:]]*$//' -e 's/^.*@//'

or you can use as complex regular expressions as necessary to cut
specific parts of the line:

sed -e 's/[EMAIL PROTECTED]([^)]*\))[[:space:]]*$/\1/'
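
Putting those pieces together as a rough sketch, run against the tmpfile of
"(envelope-from ...)" lines built earlier in the thread (sort -u is just
shorthand for the sort | uniq used above):

awk '{print $2}' tmpfile | sed -e 's/)[[:space:]]*$//' -e 's/^.*@//' | sort -u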



Re: Script help using cut

2005-08-24 Thread antenneX

- Original Message - 
From: Giorgos Keramidas [EMAIL PROTECTED]
To: antenneX [EMAIL PROTECTED]
Cc: freebsd-questions@freebsd.org
Sent: Wednesday, August 24, 2005 8:04 AM
Subject: Re: Script help using cut


 On 2005-08-24 07:58, antenneX [EMAIL PROTECTED] wrote:
 antenneX [EMAIL PROTECTED] wrote:
 Giorgos Keramidas [EMAIL PROTECTED] wrote:
  (envelope-from [EMAIL PROTECTED])
  (envelope-from [EMAIL PROTECTED])
  (envelope-from [EMAIL PROTECTED])
 
  All ideas appreciated
 
  $ awk '{print $2}' tmpfile | sed -e 's/)[[:space:]]*$//' | sort
| uniq
 
  Just woke up this morning and realized I needed to chop off
more --
  everything except the domain.
 
  So, instead of [EMAIL PROTECTED] I need the result badguy.com
 
  How could the above awk line be expanded to chop off the username@
  portion as well?

 sed(1) can do more than one substitutions in one line:

 sed -e 's/)[[:space:]]*$//' -e 's/^.*@//'

 or you can use as complex regular expressions as necessary to cut
 specific parts of the line:

 sed -e 's/[EMAIL PROTECTED]([^)]*\))[[:space:]]*$/\1/'


In fact, my very next script line uses sed(1) to add the TAB and the
RHS to the sendmail access file:
sed 's/$/   REJECT/g' tmpfile >> /etc/mail/access

I'll bet my line could be incorporated with yours.




Re: Script help using cut

2005-08-24 Thread Giorgos Keramidas
On 2005-08-24 11:41, antenneX [EMAIL PROTECTED] wrote:
Giorgos Keramidas [EMAIL PROTECTED] wrote:
 sed -e 's/)[[:space:]]*$//' -e 's/^.*@//'

 or you can use as complex regular expressions as necessary to cut
 specific parts of the line:

 sed -e 's/[EMAIL PROTECTED]([^)]*\))[[:space:]]*$/\1/'

 In fact, my very next script line uses sed(1) to add the TAB and the
 RHS to the sendmail access file:
 sed 's/$/   REJECT/g' tmpfile  /etc/mail/access

 I'll bet my line could be incorporated with yours.

Sure.  It's probably also a good idea to use mv(1) with a temporary file
residing under /etc/mail too, to make sure the update to the access map
is as close to being an ``atomic operation'' as possible:

% accesstmp=`mktemp /etc/mail/access.tmp.XX`
% if [ -z "${accesstmp}" ]; then
%   exit 1
% fi
%
% ( cat /etc/mail/access ;
%   awk '{whatever else here}' tmpfile | \
%   sed -e 's/[EMAIL PROTECTED]([^)]*\))[[:space:]]*$/\1REJECT/' ) > ${accesstmp}
% if [ $? -ne 0 ]; then
%   exit 1
% fi
% mv ${accesstmp} /etc/mail/access
% cd /etc/mail && make access.db



Re: Script help using cut

2005-08-24 Thread antenneX
- Original Message - 
From: Giorgos Keramidas [EMAIL PROTECTED]
To: antenneX [EMAIL PROTECTED]
Cc: freebsd-questions@freebsd.org
Sent: Wednesday, August 24, 2005 11:52 AM
Subject: Re: Script help using cut


 On 2005-08-24 11:41, antenneX [EMAIL PROTECTED] wrote:
 Giorgos Keramidas [EMAIL PROTECTED] wrote:
  sed -e 's/)[[:space:]]*$//' -e 's/^.*@//'
 
  or you can use as complex regular expressions as necessary to cut
  specific parts of the line:
 
  sed -e 's/[EMAIL PROTECTED]([^)]*\))[[:space:]]*$/\1/'
 
  In fact, my very next script line uses sed(1) to add the TAB and
the
  RHS to the sendmail access file:
 sed 's/$/   REJECT/g' tmpfile >> /etc/mail/access
 
  I'll bet my line could be incorporated with yours.

 Sure.  It's probably also a good idea to use mv(1) with a temporary
file
 residing under /etc/mail too, to make sure the update to the access
map
 is as close to being an ``atomic operation'' as possible:

  % accesstmp=`mktemp /etc/mail/access.tmp.XX`
  % if [ -z "${accesstmp}" ]; then
  % exit 1
  % fi
  %
  % ( cat /etc/mail/access ;
  %   awk '{whatever else here}' tmpfile | \
  %   sed -e 's/[EMAIL PROTECTED]([^)]*\))[[:space:]]*$/\1 REJECT/' ) > ${accesstmp}
  % if [ $? -ne 0 ]; then
  % exit 1
  % fi
  % mv ${accesstmp} /etc/mail/access
  % cd /etc/mail && make access.db


Giorgos, that's pretty snazzy compared to my crude script. Will now
work on weaving it all together. Eliminates a bit more manual effort.

I like it and appreciate the extra help!

Best regards,
Jack L. Stone



Re: Script help using cut

2005-08-23 Thread Giorgos Keramidas
On 2005-08-23 20:02, antenneX [EMAIL PROTECTED] wrote:
 Been trying to complete a script that I can use to grep spam emails
 from the maillog, then trim it to just the plain email address. Trying
 to use cut in the script but it's not doing what I want yet.

 Here is what the earlier lines have the lines down to so far:
 (envelope-from [EMAIL PROTECTED])  -- no quotes
 ...and I want this clean trimmed result after trim using cut or
 anything else that works to trim/cut:

 [EMAIL PROTECTED]  --- no underlines of course

 That's a TAB space at beginning of the line.

 The envelope lines are in a tmp file in colum format (one line below
 the other).
 (envelope-from [EMAIL PROTECTED])
 (envelope-from [EMAIL PROTECTED])
 (envelope-from [EMAIL PROTECTED])

 All ideas appreciated

Does it have to be cut(1)?

$ awk '{print $2}' tmpfile | sed -e 's/)[[:space:]]*$//' | sort | uniq



Re: Script help using cut

2005-08-23 Thread antenneX
- Original Message - 
From: Giorgos Keramidas [EMAIL PROTECTED]
To: antenneX [EMAIL PROTECTED]
Cc: freebsd-questions@freebsd.org
Sent: Tuesday, August 23, 2005 8:16 PM
Subject: Re: Script help using cut


 On 2005-08-23 20:02, antenneX [EMAIL PROTECTED] wrote:
  Been trying to complete a script that I can use to grep spam
emails
  from the maillog, then trim it to just the plain email address.
Trying
  to use cut in the script but it's not doing what I want yet.
 
  Here is what the earlier lines have the lines down to so far:
  (envelope-from [EMAIL PROTECTED])  -- no quotes
  ...and I want this clean trimmed result after trim using cut
or
  anything else that works to trim/cut:
 
  [EMAIL PROTECTED]  --- no underlines of course
 
  That's a TAB space at beginning of the line.
 
  The envelope lines are in a tmp file in colum format (one line
below
  the other).
  (envelope-from [EMAIL PROTECTED])
  (envelope-from [EMAIL PROTECTED])
  (envelope-from [EMAIL PROTECTED])
 
  All ideas appreciated

 Does it have to be cut(1)?

 $ awk '{print $2}' tmpfile | sed -e 's/)[[:space:]]*$//' | sort |
uniq


No, it doesn't have to be cut.
I'll give this a try...
Thanks and,
Best regards,

Jack L. Stone



Re: Script help using cut

2005-08-23 Thread antenneX
- Original Message - 
From: Giorgos Keramidas [EMAIL PROTECTED]
To: antenneX [EMAIL PROTECTED]
Cc: freebsd-questions@freebsd.org
Sent: Tuesday, August 23, 2005 8:16 PM
Subject: Re: Script help using cut


 On 2005-08-23 20:02, antenneX [EMAIL PROTECTED] wrote:
  Been trying to complete a script that I can use to grep spam
emails
  from the maillog, then trim it to just the plain email address.
Trying
  to use cut in the script but it's not doing what I want yet.
 
  Here is what the earlier lines have the lines down to so far:
  (envelope-from [EMAIL PROTECTED])  -- no quotes
  ...and I want this clean trimmed result after trim using cut
or
  anything else that works to trim/cut:
 
  [EMAIL PROTECTED]  --- no underlines of course
 
  That's a TAB space at beginning of the line.
 
  The envelope lines are in a tmp file in colum format (one line
below
  the other).
  (envelope-from [EMAIL PROTECTED])
  (envelope-from [EMAIL PROTECTED])
  (envelope-from [EMAIL PROTECTED])
 
  All ideas appreciated

 Does it have to be cut(1)?

 $ awk '{print $2}' tmpfile | sed -e 's/)[[:space:]]*$//' | sort |
uniq


Yep! That looks good!

Many thanks again for the tip.

Best regards,
Jack L. Stone



Re: Script help needed please

2003-08-14 Thread Alexander Haderer
At 08:49 14.08.2003 -0500, Jack L. Stone wrote:
...
When we started providing the articles 6-7 years ago, folks used browsers
to read the articles. Now, the trend has become a more lazy approach and
there is an increasing use of those download utilities which can be left
unattended to download entire web sites taking several hours to do so.
Multiply this by a number of similar downloads and there goes the
bandwidth, denying those other normal online readers the speed needed for
loading and browsing in the manner intended. Several hundred will be
reading at a time and several 1000 daily.

A possible solution?
What comes to my mind:

- Offer zip/tar.gz archives via an ftp server to your customers.
- Allow customers' servers to mirror your ftp server.
- Probably: set up a mailing list to inform your customers about changes/updates.
Of course you can additionally install some bandwidth limitation stuff. (But
I don't know of one, sorry).

Alexander



Re: Script help needed please

2003-08-14 Thread Jack L. Stone
At 03:44 PM 8.14.2003 +0100, Jez Hancock wrote:
On Thu, Aug 14, 2003 at 08:49:49AM -0500, Jack L. Stone wrote:
 Server Version: Apache/1.3.27 (Unix) FrontPage/5.0.2.2510 PHP/4.3.1
 The above is typical of the servers in use, and with csh shells employed,
 plus IPFW.
 
 My apologies for the length of this question, but the background seems
 necessary as brief as I can make it so the question makes sense.
 
 The problem:
 We have several servers that provide online reading of Technical articles
 and each have several hundred MB to a GB of content.
 
 When we started providing the articles 6-7 years ago, folks used browsers
 to read the articles. Now, the trend has become a more lazy approach and
 there is an increasing use of those download utilities which can be left
 unattended to download entire web sites taking several hours to do so.
 Multiply this by a number of similar downloads and there goes the
 bandwidth, denying those other normal online readers the speed needed for
 loading and browsing in the manner intended. Several hundred will be
 reading at a time and several 1000 daily.
snip
There is no easy solution to this, but one avenue might be to look at
bandwidth throttling in an apache module.

One that I've used before is mod_throttle which is in the ports:

/usr/ports/www/mod_throttle

which allows you to throttle users by ip address to a certain number of
documents and/or up to a certain transfer limit.  IIRC it's fairly
limited though in that you can only apply per IP limits to _every_
virtual host - ie in the global httpd.conf context.

A more finegrained solution (from what I've read, haven't tried it) is
mod_bwshare - this one isn't in the ports but can be found here:

http://www.topology.org/src/bwshare/

this module overcomes some of the shortfalls of mod_throttle and allows
you to specify finer granularity over who consumes how much bandwidth
over what time period.

 Now, my question: Is it possible to write a script that can constantly scan
 the Apache logs to look for certain footprints of those downloaders,
 perhaps the names, like HTTRACK, being one I see a lot. Whenever I see
 one of those sessions, I have been able to abort them by adding a rule to
 the firewall to deny the IP address access to the server. This aborts the
 downloading, but have seen the attempts constantly continue for a day or
 two, confirming unattended downloads.
 
 Thus, if the script could spot an offender and then perhaps make use of
 the firewall to add a rule containing the offender's IP address and then
 flush to reset the firewall, this would at least abort the download and
 free up the bandwidth (I already have a script that restarts the firewall).
 
 Is this possible and how would I go about it???
If you really wanted to go down this route then I found a script someone
wrote a while back to find 'rude robots' from a httpd logfile which you
could perhaps adapt to do dynamic filtering in conjunction with your
firewall:

http://stein.cshl.org/~lstein/talks/perl_conference/cute_tricks/log9.html

If you have any success let me know.

-- 
Jez


Interesting. Looks like a step in the right direction. Will weigh this one
among the possibilities.

Many thanks...!

Best regards,
Jack L. Stone,
Administrator

SageOne Net
http://www.sage-one.net
[EMAIL PROTECTED]


Re: Script help needed please

2003-08-14 Thread Michael Conlen
Jack,

You can set up Apache to deny access to people using that browser. The
catch is that it's easy to work around it by changing the browser
string. If they are that desperate, then after you deny access to
people using HTTRACK or other clients you can place a link that no human
would access, which runs a CGI that adds the firewall rule to deny them
access. You probably want it to return some data and wait a bit so the
user can't easily figure out what URL is killing their access.
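
A hypothetical sketch of such a trap CGI in plain sh (the log path is an
assumption and must be writable by the web server user; a privileged cron
job, not shown, would read it and add the ipfw deny rules):

#!/bin/sh
# record the client address of anyone who follows a link no human would see
echo "`date` ${REMOTE_ADDR}" >> /var/log/robot-trap.log
sleep 30                      # make the client wait a bit, as suggested above
echo "Content-Type: text/html"
echo ""
echo "<html><body>Loading article index ...</body></html>"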

You can also put on your website that users are not allowed to use the
site with non-interactive browsers. Then when you find them you send a
nastygram to their ISP and notify them that continued abuse could be a
crime under the Computer Fraud and Abuse Act (if you and they are in the
US) and let their ISP take care of it.

--
Michael Conlen
Jack L. Stone wrote:

Server Version: Apache/1.3.27 (Unix) FrontPage/5.0.2.2510 PHP/4.3.1
The above is typical of the servers in use, and with csh shells employed,
plus IPFW.
My apologies for the length of this question, but the background seems
necessary as brief as I can make it so the question makes sense.
The problem:
We have several servers that provide online reading of Technical articles
and each have several hundred MB to a GB of content.
When we started providing the articles 6-7 years ago, folks used browsers
to read the articles. Now, the trend has become a more lazy approach and
there is an increasing use of those download utilities which can be left
unattended to download entire web sites taking several hours to do so.
Multiply this by a number of similar downloads and there goes the
bandwidth, denying those other normal online readers the speed needed for
loading and browsing in the manner intended. Several hundred will be
reading at a time and several 1000 daily.
Further, those download utilities do not discriminate on the files
downloaded unless the user sets them to exclude certain types of files they
don't need for the articles. All or most don't bother to set the
parameters. They just turn them loose and go about their day. Essentially a
DoS for normal readers who notice the slowdown, but not with malice.
This method downloads a tremendous amount of unnecessary content. Some
downloaders have been contacted to stop (if we spot an email address from a
login) and in response they simply weren't aware of the problems they were
making and agreed to at least spread downloads over longer periods of time.
I can live with that.
A possible solution?
Now, my question: Is it possible to write a script that can constantly scan
the Apache logs to look for certain footprints of those downloaders,
perhaps the names, like HTTRACK, being one I see a lot. Whenever I see
one of those sessions, I have been able to abort them by adding a rule to
the firewall to deny the IP address access to the server. This aborts the
downloading, but have seen the attempts constantly continue for a day or
two, confirming unattended downloads.
Thus, if the script could spot an offender and then perhaps make use of
the firewall to add a rule containing the offender's IP address and then
flush to reset the firewall, this would at least abort the download and
free up the bandwidth (I already have a script that restarts the firewall).
Is this possible and how would I go about it???

Many thanks for any ideas on this!

Best regards,
Jack L. Stone,
Administrator
SageOne Net
http://www.sage-one.net
[EMAIL PROTECTED]


Re: Script help needed please

2003-08-14 Thread Jez Hancock
On Thu, Aug 14, 2003 at 08:49:49AM -0500, Jack L. Stone wrote:
 Server Version: Apache/1.3.27 (Unix) FrontPage/5.0.2.2510 PHP/4.3.1
 The above is typical of the servers in use, and with csh shells employed,
 plus IPFW.
 
 My apologies for the length of this question, but the background seems
 necessary as brief as I can make it so the question makes sense.
 
 The problem:
 We have several servers that provide online reading of Technical articles
 and each have several hundred MB to a GB of content.
 
 When we started providing the articles 6-7 years ago, folks used browsers
 to read the articles. Now, the trend has become a more lazy approach and
 there is an increasing use of those download utilities which can be left
 unattended to download entire web sites taking several hours to do so.
 Multiply this by a number of similar downloads and there goes the
 bandwidth, denying those other normal online readers the speed needed for
 loading and browsing in the manner intended. Several hundred will be
 reading at a time and several 1000 daily.
snip
There is no easy solution to this, but one avenue might be to look at
bandwidth throttling in an apache module.

One that I've used before is mod_throttle which is in the ports:

/usr/ports/www/mod_throttle

which allows you to throttle users by ip address to a certain number of
documents and/or up to a certain transfer limit.  IIRC it's fairly
limited though in that you can only apply per IP limits to _every_
virtual host - ie in the global httpd.conf context.

A more finegrained solution (from what I've read, haven't tried it) is
mod_bwshare - this one isn't in the ports but can be found here:

http://www.topology.org/src/bwshare/

this module overcomes some of the shortfalls of mod_throttle and allows
you to specify finer granularity over who consumes how much bandwidth
over what time period.

 Now, my question: Is it possible to write a script that can constantly scan
 the Apache logs to look for certain footprints of those downloaders,
 perhaps the names, like HTTRACK, being one I see a lot. Whenever I see
 one of those sessions, I have been able to abort them by adding a rule to
 the firewall to deny the IP address access to the server. This aborts the
 downloading, but have seen the attempts constantly continue for a day or
 two, confirming unattended downloads.
 
 Thus, if the script could spot an offender and then perhaps make use of
 the firewall to add a rule containing the offender's IP address and then
 flush to reset the firewall, this would at least abort the download and
 free up the bandwidth (I already have a script that restarts the firewall).
 
 Is this possible and how would I go about it???
If you really wanted to go down this route then I found a script someone
wrote a while back to find 'rude robots' from a httpd logfile which you
could perhaps adapt to do dynamic filtering in conjunction with your
firewall:

http://stein.cshl.org/~lstein/talks/perl_conference/cute_tricks/log9.html

If you have any success let me know.
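
As a starting point, an untested sketch of that kind of adaptation in plain
sh (the log path, the agent list and the decision to block outright are all
assumptions to tune; whitelist your own addresses before running anything
like it as root):

#!/bin/sh
LOG=/var/log/httpd-access.log
AGENTS='HTTrack|WebZIP|Teleport|Wget'

# combined log format: the user agent is the 6th double-quoted field and
# the client address is the first word of the line
awk -F'"' -v pat="$AGENTS" '$6 ~ pat { split($1, a, " "); print a[1] }' "$LOG" |
sort -u |
while read ip; do
    # skip addresses that already have a rule
    ipfw list | grep -qw "$ip" && continue
    echo "blocking $ip"
    ipfw add deny ip from "$ip" to any
done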

-- 
Jez

http://www.munk.nu/


Re: Script Help

2002-10-08 Thread Fernando Gleiser

On Tue, 8 Oct 2002, Brendan McAlpine wrote:

 Hey all,

 I am poring over mail logs and trying to pull out all the email
 addresses contained in the log.  Does anyone have any idea how I could
 do this with a shell script?

Untested, and assuming sendmail log format:

#!/usr/bin/perl -w

while (<>) {
if (m/=<([^@<]+@[^>]+)>/) {
print "$1 \n";
}
}

Translation: for every line, if the line matches a '=', followed by a '<',
followed by (one or more of anything but a '@' or a '<', followed by a '@'
and then one or more of anything but a '>')

print the part that matches between the parens
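
Hypothetical usage, assuming the script above is saved as getaddrs.pl:

perl getaddrs.pl /var/log/maillog | sort -u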



Fer

 Thanks

 Brendan

