Re: Scripting question

2007-09-14 Thread Jonathan McKeown
On Friday 14 September 2007 09:42, Steve Bertrand wrote:
> >>> I don't have the perl skills, though that would be ideal.
>
> -- snip --
>
> > Another approach in Perl would be:
> >
> > #!/usr/bin/perl
> > my (%names, %dups);
> > while (<>) {
> > my ($key) = split;
> > $dups{$key} = 1 if $names{$key};
> > $names{$key} = 1;
> > }
> > delete @names{keys %dups};

> I don't know if this is completely relevant, but it appears as though it
> may help.
>
> Bob Showalter once advised me on the Perl Beginners list as such,
> quoted, but snipped for clarity:
>
> see "perldoc -q duplicate". If the array elements can
> be compared with string semantics (as you are doing here), the following
> will work:
>
>my @array = do { my %seen; grep !$seen{$_}++, @clean };

The problem with this is that it leaves you with one copy of each duplicated 
item: the requirement was to remove all copies of duplicated items and return 
only the non-repeated items.

Jonathan
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: Scripting question

2007-09-14 Thread Steve Bertrand

>>> I don't have the perl skills, though that would be ideal.

-- snip --

> Another approach in Perl would be:
> 
> #!/usr/bin/perl
> my (%names, %dups);
> while (<>) {
> my ($key) = split;
> $dups{$key} = 1 if $names{$key};
> $names{$key} = 1;
> }
> delete @names{keys %dups};
> #
> # keys %names is now an unordered list of only non-repeated elements
> # keys %dups is an unordered list of only repeated elements
> 
> split splits on whitespace, returning a list of fields which can be assigned 
> to a list of variables. Here we only want to capture the first field: split 
> is more efficient for this than using a regex. The first occurrence of $key 
> is in parens because it's actually a list of one variable name.
> 
> We build two hashes: one, %names, keyed by the original names (this is the 
> classic way to reduce duplicates to single occurrences, since the duplicated 
> keys overwrite the originals), and one, %dups, whose keys are names already 
> appearing in %names - the duplicated entries. Having done that, we use a hash 
> slice to delete from %names all the keys of %dups, which leaves the keys of 
> %names holding all the entries which only appear once (and the keys of %dups 
> all the duplicated entries, if that's useful).

I don't know if this is completely relevant, but it appears as though it
may help.

Bob Showalter once advised me on the Perl Beginners list as such,
quoted, but snipped for clarity:

see "perldoc -q duplicate". If the array elements can
be compared with string semantics (as you are doing here), the following
will work:

   my @array = do { my %seen; grep !$seen{$_}++, @clean };

Steve


Re: Scripting question

2007-09-14 Thread Jonathan McKeown
On Thursday 13 September 2007 20:35, Roland Smith wrote:
> On Thu, Sep 13, 2007 at 10:16:40AM -0700, Kurt Buff wrote:
> > I'm trying to do some text file manipulation, and it's driving me nuts.
[snip]
> > I've looked at sort and uniq, and I've googled a fair bit but can't
> > seem to find anything that would do this.
> >
> > I don't have the perl skills, though that would be ideal.
> >
> > Any help out there?
>
> #!/usr/bin/perl
> while (<>) {
> # Assuming no whitespace in addresses; kill everything after the first
> # space 
> s/ .*$//; 
> # Store the name & count in a hash
> $names{$_}++;
> }
> # Go over the hash
> while (($name,$count) = each(%names)) {
>   if ($count == 1) {
>   # print unique names.
>   print $name, "\n";
>   }
> }

Another approach in Perl would be:

#!/usr/bin/perl
my (%names, %dups);
while (<>) {
    my ($key) = split;
    $dups{$key} = 1 if $names{$key};
    $names{$key} = 1;
}
delete @names{keys %dups};
#
# keys %names is now an unordered list of only non-repeated elements
# keys %dups is an unordered list of only repeated elements

split splits on whitespace, returning a list of fields which can be assigned 
to a list of variables. Here we only want to capture the first field: split 
is more efficient for this than using a regex. The first occurrence of $key 
is in parens because it's actually a list of one variable name.

We build two hashes: one, %names, keyed by the original names (this is the 
classic way to reduce duplicates to single occurrences, since the duplicated 
keys overwrite the originals), and one, %dups, whose keys are names already 
appearing in %names - the duplicated entries. Having done that, we use a hash 
slice to delete from %names all the keys of %dups, which leaves the keys of 
%names holding all the entries which only appear once (and the keys of %dups 
all the duplicated entries, if that's useful).

Jonathan


RE: Scripting question

2007-09-13 Thread David Christensen
Kurt Buff wrote:

> I'm trying to do some text file manipulation, and it's driving me nuts.
...
> I don't have the perl skills, though that would be ideal.
> Any help out there?

Buy "Learning Perl, Fourth Edition", read it, and do the exercises:

http://www.oreilly.com/catalog/learnperl4/index.html


Investing the effort to become proficient with Perl will serve you well in the
long run.


HTH,

David



Re: Scripting question

2007-09-13 Thread Jonathan McKeown
On Thursday 13 September 2007 20:19, Kurt Buff wrote:
> On 9/13/07, Jerry McAllister <[EMAIL PROTECTED]> wrote:
> > > The only space is the one separating the SMTP address from the OK or
> > > NO.
> >
> > Then you should be able to tell it to sort on the first token in
> > the string with white space as a separator and to eliminate
> > duplicates.   It has been a long time since I had need of sort. I
> > don't remember the arguments/flags but am sure that type of thing can be
> > done.

You can use uniq if the file is already sorted (if not, put a sort at the 
start of the pipe) - after using awk to pick the first field:

awk '{print $1}' inputfile | uniq -u
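On a made-up three-line input, this drops the address that occurs twice and keeps only the singleton:

```shell
printf 'a@x NO\na@x OK\nb@x OK\n' |
awk '{print $1}' | uniq -u
```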

> Ya know, it's really easy to get wrapped around the axle on this stuff.
>
> I think I may have a better solution. The file I'm trying to massage
> has a predecessor - the non-unique lines are the result of a
> concatenation of two files.
>
> Silly me, it's better to 'grep -v' with the one file vs. the second
> rather than trying to merge, sort and further massage the result. The
> fix will be to use sed against the first file to remove the ' NO',
> thus providing a clean argument for grepping the other file.

If it's two files and you want to select or reject common lines, look at 
comm(1) as another technique.

Jonathan


Re: Scripting question

2007-09-13 Thread Jeffrey Goldberg

On Sep 13, 2007, at 2:38 PM, Kurt Buff wrote:


Instead of grep -v take a look at comm.



Interesting! I just looked at the man page, and while I don't think
it's going to be directly useful (or I'm just not reading the page
correctly), it's a new utility to me - I'll keep it in mind for other
things.


Maybe I haven't understood what you are after.

If you want to get lines that exist in either file1 or file2 but not  
both (and if the files are already sorted) then


  comm -3 file1 file2

will do that.
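A small illustration with throwaway files (note that lines unique to the second file come out indented by a tab, since comm prints them in its second column):

```shell
f1=$(mktemp); f2=$(mktemp)
printf 'a\nb\nc\n' > "$f1"   # sorted
printf 'b\nd\n'    > "$f2"   # sorted
comm -3 "$f1" "$f2"          # 'b' is common to both, so it is suppressed
rm -f "$f1" "$f2"
```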

-j




--
Jeffrey Goldberg        http://www.goldmark.org/jeff/



Re: Scripting question

2007-09-13 Thread Kurt Buff
On 9/13/07, Jeffrey Goldberg <[EMAIL PROTECTED]> wrote:
> On Sep 13, 2007, at 1:19 PM, Kurt Buff wrote:
>
> > I think I may have a better solution. The file I'm trying to massage
> > has a predecessor - the non-unique lines are the result of a
> > concatenation of two files.
> >
> > Silly me, it's better to 'grep -v' with the one file vs. the second
> > rather than trying to merge, sort and further massage the result. The
> > fix will be to use sed against the first file to remove the ' NO',
> > thus providing a clean argument for grepping the other file.
>
> Instead of grep -v take a look at comm.
>
> -j


Interesting! I just looked at the man page, and while I don't think
it's going to be directly useful (or I'm just not reading the page
correctly), it's a new utility to me - I'll keep it in mind for other
things.

Thanks!


Re: Scripting question

2007-09-13 Thread Kurt Buff
On 9/13/07, Craig Whipp <[EMAIL PROTECTED]> wrote:
> > On 9/13/07, Jerry McAllister <[EMAIL PROTECTED]> wrote:
> >> > The only space is the one separating the SMTP address from the OK or
> >> NO.
> >>
> >> Then you should be able to tell it to sort on the first token in
> >> the string with white space as a separator and to eliminate
> >> duplicates.   It has been a long time since I had need of sort. I
> >> don't remember the arguments/flags but am sure that type of thing can be
> >> done.
> >>
> >> jerry
> >
> > Ya know, it's really easy to get wrapped around the axle on this stuff.
> >
> > I think I may have a better solution. The file I'm trying to massage
> > has a predecessor - the non-unique lines are the result of a
> > concatenation of two files.
> >
> > Silly me, it's better to 'grep -v' with the one file vs. the second
> > rather than trying to merge, sort and further massage the result. The
> > fix will be to use sed against the first file to remove the ' NO',
> > thus providing a clean argument for grepping the other file.
> >
> > Sigh.
> >
> > Kurt
>
>
> It sounds like you've found your solution, but how about the below shell
> script?  Probably woefully inefficient, but should work.
>
> - Craig
>
> ### begin script ##
> #!/bin/sh
> # Read in an input list of 2 column data pairs and output the pairs where
> # the first columns are unique.
>
> INPUT_FILE="list.txt"
> OUTPUT_FILE="new_list.txt"
> NON_UNIQ_LIST=""
>
> for NON_UNIQ in `cat $INPUT_FILE | awk '{print $1}' | sort | uniq -c |
> grep -vE '^ *1' | awk '{print $2}'`
> do
> NON_UNIQ_LIST=$NON_UNIQ_LIST"|"$NON_UNIQ
> done
>
> NON_UNIQ_LIST=`echo $NON_UNIQ_LIST | sed 's/^.//'`
>
> cat $INPUT_FILE | grep -vE $NON_UNIQ_LIST > $OUTPUT_FILE
> ### end script ##


I'll fiddle with this too, but I like the perl better.


Re: Scripting question

2007-09-13 Thread Kurt Buff
On 9/13/07, Roland Smith <[EMAIL PROTECTED]> wrote:
> On Thu, Sep 13, 2007 at 10:16:40AM -0700, Kurt Buff wrote:
> > I'm trying to do some text file manipulation, and it's driving me nuts.
> >
> > I've got a sorted file of SMTP addresses, and want to eliminate the
> > lines that are the same up to a space character within the line.
> >
> > Example:
> >
> > [EMAIL PROTECTED] NO
> > [EMAIL PROTECTED] OK
> >
> > The above lines *both* need to be eliminated from output - I don't
> > want the first or second of them, I want them both gone.
> >
> > I've looked at sort and uniq, and I've googled a fair bit but can't
> > seem to find anything that would do this.
> >
> > I don't have the perl skills, though that would be ideal.
> >
> > Any help out there?
>
> #!/usr/bin/perl
> while (<>) {
> # Assuming no whitespace in addresses; kill everything after the first space
> s/ .*$//;
> # Store the name & count in a hash
> $names{$_}++;
> }
> # Go over the hash
> while (($name,$count) = each(%names)) {
>   if ($count == 1) {
>   # print unique names.
>   print $name, "\n";
>   }
> }
>
>
> Roland
> --
> R.F.Smith   http://www.xs4all.nl/~rsmith/
> [plain text _non-HTML_ PGP/GnuPG encrypted/signed email much appreciated]
> pgp: 1A2B 477F 9970 BA3C 2914  B7CE 1277 EFB0 C321 A725 (KeyID: C321A725)

I can follow the logic in that.

I'll definitely try incorporating that.

Thanks!


Re: Scripting question

2007-09-13 Thread Craig Whipp
> On 9/13/07, Jerry McAllister <[EMAIL PROTECTED]> wrote:
>> > The only space is the one separating the SMTP address from the OK or
>> NO.
>>
>> Then you should be able to tell it to sort on the first token in
>> the string with white space as a separator and to eliminate
>> duplicates.   It has been a long time since I had need of sort. I
>> don't remember the arguments/flags but am sure that type of thing can be
>> done.
>>
>> jerry
>
> Ya know, it's really easy to get wrapped around the axle on this stuff.
>
> I think I may have a better solution. The file I'm trying to massage
> has a predecessor - the non-unique lines are the result of a
> concatenation of two files.
>
> Silly me, it's better to 'grep -v' with the one file vs. the second
> rather than trying to merge, sort and further massage the result. The
> fix will be to use sed against the first file to remove the ' NO',
> thus providing a clean argument for grepping the other file.
>
> Sigh.
>
> Kurt


It sounds like you've found your solution, but how about the below shell
script?  Probably woefully inefficient, but should work.

- Craig

### begin script ##
#!/bin/sh
# Read in an input list of 2 column data pairs and output the pairs where
# the first columns are unique.

INPUT_FILE="list.txt"
OUTPUT_FILE="new_list.txt"
NON_UNIQ_LIST=""

# Collect first-column values that occur more than once.  The trailing space
# in '^ *1 ' matters: without it, counts of 10, 11, ... would also be skipped.
for NON_UNIQ in `awk '{print $1}' "$INPUT_FILE" | sort | uniq -c |
    grep -vE '^ *1 ' | awk '{print $2}'`
do
    NON_UNIQ_LIST=$NON_UNIQ_LIST"|"$NON_UNIQ
done

# Strip the leading "|"
NON_UNIQ_LIST=`echo "$NON_UNIQ_LIST" | sed 's/^.//'`

# An empty pattern would make grep -v discard every line, so only filter
# when duplicates were actually found.
if [ -n "$NON_UNIQ_LIST" ]; then
    grep -vE "$NON_UNIQ_LIST" "$INPUT_FILE" > "$OUTPUT_FILE"
else
    cp "$INPUT_FILE" "$OUTPUT_FILE"
fi
### end script ##



Re: Scripting question

2007-09-13 Thread Jeffrey Goldberg

On Sep 13, 2007, at 1:19 PM, Kurt Buff wrote:


I think I may have a better solution. The file I'm trying to massage
has a predecessor - the non-unique lines are the result of a
concatenation of two files.

Silly me, it's better to 'grep -v' with the one file vs. the second
rather than trying to merge, sort and further massage the result. The
fix will be to use sed against the first file to remove the ' NO',
thus providing a clean argument for grepping the other file.


Instead of grep -v take a look at comm.

-j


Re: Scripting question

2007-09-13 Thread Roland Smith
On Thu, Sep 13, 2007 at 10:16:40AM -0700, Kurt Buff wrote:
> I'm trying to do some text file manipulation, and it's driving me nuts.
> 
> I've got a sorted file of SMTP addresses, and want to eliminate the
> lines that are the same up to a space character within the line.
> 
> Example:
> 
> [EMAIL PROTECTED] NO
> [EMAIL PROTECTED] OK
> 
> The above lines *both* need to be eliminated from output - I don't
> want the first or second of them, I want them both gone.
> 
> I've looked at sort and uniq, and I've googled a fair bit but can't
> seem to find anything that would do this.
> 
> I don't have the perl skills, though that would be ideal.
> 
> Any help out there?

#!/usr/bin/perl
while (<>) {
    # Assuming no whitespace in addresses; kill everything after the first space
    s/ .*$//;
    # Store the name & count in a hash
    $names{$_}++;
}
# Go over the hash
while (($name, $count) = each(%names)) {
    if ($count == 1) {
        # print unique names.
        print $name, "\n";
    }
}


Roland
-- 
R.F.Smith   http://www.xs4all.nl/~rsmith/
[plain text _non-HTML_ PGP/GnuPG encrypted/signed email much appreciated]
pgp: 1A2B 477F 9970 BA3C 2914  B7CE 1277 EFB0 C321 A725 (KeyID: C321A725)




Re: Scripting question

2007-09-13 Thread Kurt Buff
On 9/13/07, Jerry McAllister <[EMAIL PROTECTED]> wrote:
>
> First, please always make sure your responses go to the list.
> It is both list etiquette and of practical value.  Follow-ups to
> only an individual may not reach the person who can provide real help.
>
> Most Email clients have a group reply which will do the trick.

Yup - that's my fault, and contrary to my intent - I was using the web
interface, and it's too easy to just hit the reply button instead of
"reply to all"  - mea culpa.



> On Thu, Sep 13, 2007 at 10:32:34AM -0700, Kurt Buff wrote:
>
> > On 9/13/07, Jerry McAllister <[EMAIL PROTECTED]> wrote:
> > > On Thu, Sep 13, 2007 at 10:16:40AM -0700, Kurt Buff wrote:
> > >
> > > > I'm trying to do some text file manipulation, and it's driving me nuts.
> > > >
> > > > I've got a sorted file of SMTP addresses, and want to eliminate the
> > > > lines that are the same up to a space character within the line.
> > > >
> > > > Example:
> > > >
> > > > [EMAIL PROTECTED] NO
> > > > [EMAIL PROTECTED] OK
> > > >
> > > > The above lines *both* need to be eliminated from output - I don't
> > > > want the first or second of them, I want them both gone.
> > > >
> > > > I've looked at sort and uniq, and I've googled a fair bit but can't
> > > > seem to find anything that would do this.
> > >
> > > Seems like this is right up sort's alley.
> > > Is the first string always separated from the rest by white space
> > > or does your first string sometimes include white space.
> > >
> > > jerry
> >
> > The only space is the one separating the SMTP address from the OK or NO.
>
> Then you should be able to tell it to sort on the first token in
> the string with white space as a separator and to eliminate
> duplicates.   It has been a long time since I had need of sort. I
> don't remember the arguments/flags but am sure that type of thing can be done.

Tried that, and it doesn't work the way I expect, or else I'm doing it
wrong, which is definitely possible.

My first difficulty is that I can't figure out how to specify the
space as the field delimiter, assuming that -t is the correct
parameter for that. I've tried specifying '@' for -t, but that doesn't
work either.

Next, my suspicion is that the -u parameter will simply output the
first line of a set of non-unique lines, which is what it does
normally - it doesn't seem to eliminate all non-unique lines, it just
makes the first line the unique one.

Am I making sense?

Kurt
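Kurt's suspicion is easy to confirm at the prompt with dummy data: sort -u keeps one line per run of duplicates, while uniq -u (as used elsewhere in this thread) is what discards repeated lines outright.

```shell
printf 'a@x\na@x\nb@x\n' | sort -u    # keeps one copy of a@x
printf 'a@x\na@x\nb@x\n' | uniq -u    # drops a@x entirely
```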


Re: Scripting question

2007-09-13 Thread Kurt Buff
On 9/13/07, Jerry McAllister <[EMAIL PROTECTED]> wrote:
> > The only space is the one separating the SMTP address from the OK or NO.
>
> Then you should be able to tell it to sort on the first token in
> the string with white space as a separator and to eliminate
> duplicates.   It has been a long time since I had need of sort. I
> don't remember the arguments/flags but am sure that type of thing can be done.
>
> jerry

Ya know, it's really easy to get wrapped around the axle on this stuff.

I think I may have a better solution. The file I'm trying to massage
has a predecessor - the non-unique lines are the result of a
concatenation of two files.

Silly me, it's better to 'grep -v' with the one file vs. the second
rather than trying to merge, sort and further massage the result. The
fix will be to use sed against the first file to remove the ' NO',
thus providing a clean argument for grepping the other file.

Sigh.

Kurt


Re: Scripting question

2007-09-13 Thread Jerry McAllister

First, please always make sure your responses go to the list.
It is both list etiquette and of practical value.  Follow-ups to
only an individual may not reach the person who can provide real help.

Most Email clients have a group reply which will do the trick.

On Thu, Sep 13, 2007 at 10:32:34AM -0700, Kurt Buff wrote:

> On 9/13/07, Jerry McAllister <[EMAIL PROTECTED]> wrote:
> > On Thu, Sep 13, 2007 at 10:16:40AM -0700, Kurt Buff wrote:
> >
> > > I'm trying to do some text file manipulation, and it's driving me nuts.
> > >
> > > I've got a sorted file of SMTP addresses, and want to eliminate the
> > > lines that are the same up to a space character within the line.
> > >
> > > Example:
> > >
> > > [EMAIL PROTECTED] NO
> > > [EMAIL PROTECTED] OK
> > >
> > > The above lines *both* need to be eliminated from output - I don't
> > > want the first or second of them, I want them both gone.
> > >
> > > I've looked at sort and uniq, and I've googled a fair bit but can't
> > > seem to find anything that would do this.
> >
> > Seems like this is right up sort's alley.
> > Is the first string always separated from the rest by white space
> > or does your first string sometimes include white space.
> >
> > jerry
> 
> The only space is the one separating the SMTP address from the OK or NO.

Then you should be able to tell it to sort on the first token in
the string with white space as a separator and to eliminate
duplicates.   It has been a long time since I had need of sort. I
don't remember the arguments/flags but am sure that type of thing can be done.

jerry



Re: Scripting question

2007-09-13 Thread Jerry McAllister
On Thu, Sep 13, 2007 at 10:16:40AM -0700, Kurt Buff wrote:

> I'm trying to do some text file manipulation, and it's driving me nuts.
> 
> I've got a sorted file of SMTP addresses, and want to eliminate the
> lines that are the same up to a space character within the line.
> 
> Example:
> 
> [EMAIL PROTECTED] NO
> [EMAIL PROTECTED] OK
> 
> The above lines *both* need to be eliminated from output - I don't
> want the first or second of them, I want them both gone.
> 
> I've looked at sort and uniq, and I've googled a fair bit but can't
> seem to find anything that would do this.

Seems like this is right up sort's alley.
Is the first string always separated from the rest by white space
or does your first string sometimes include white space.

jerry

> 
> I don't have the perl skills, though that would be ideal.
> 
> Any help out there?
> 
> Kurt


Re: scripting question

2006-10-03 Thread jan gestre

On 10/3/06, Ivan Levchenko <[EMAIL PROTECTED]> wrote:


Remove the word root from the crontab entry. The user should be
specified only in the system crontab.



thanks ivan, but the solution i went with was to put the script in the
/usr/local/etc/periodic/daily directory; it is now working :D


On 10/3/06, jan gestre <[EMAIL PROTECTED]> wrote:

> i made a script and put on root's crontab, however it's not doing or
showing
> the output that is forwarded to my email address correctly therefore i'm
not
> sure if it is working or not. below is what the script look like:
>
> #
> # cvsrun - Weekly CVSup Run
>
> echo "Subject: `hostname` weekly cvsup run"
> /usr/local/bin/cvsup -g -L 2 /root/ports-supfile
> echo ""
> /usr/local/bin/portmanager -s | grep OLD
> echo ""
> echo "cvsrun done."
>
> #
>
> i would like the output of this command
>
> /usr/local/bin/portmanager -s | grep OLD
>
> to show in my mail where i forwarded it. below is the cronjob.
>
> 30 8 * * * root /usr/local/bin/cvsrun | mail -s "Daily cvsup run and
> portmanager" user1
>
> can someone help me to correct this script, to show the output that i
want.
>
> TIA


--
Best Regards,

Ivan Levchenko
[EMAIL PROTECTED]




Re: scripting question

2006-10-03 Thread Ivan Levchenko

(forgot to cc the list =))
Remove the word root from the crontab entry. The user should be
specified only in the system crontab.
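The distinction is between the system crontab, whose entries carry a user field, and a per-user crontab (edited with crontab -e), whose entries do not. A sketch, with the schedule and path taken from the original cronjob:

```
# /etc/crontab (system crontab): the sixth field is the user to run as
30 8 * * * root /usr/local/bin/cvsrun

# root's own crontab (crontab -e): no user field; here the word "root"
# would be treated as the start of the command and the job would fail
30 8 * * * /usr/local/bin/cvsrun
```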

On 10/3/06, jan gestre <[EMAIL PROTECTED]> wrote:

i made a script and put on root's crontab, however it's not doing or showing
the output that is forwarded to my email address correctly therefore i'm not
sure if it is working or not. below is what the script look like:

#
# cvsrun - Weekly CVSup Run

echo "Subject: `hostname` weekly cvsup run"
/usr/local/bin/cvsup -g -L 2 /root/ports-supfile
echo ""
/usr/local/bin/portmanager -s | grep OLD
echo ""
echo "cvsrun done."

#

i would like the output of this command

/usr/local/bin/portmanager -s | grep OLD

to show in my mail where i forwarded it. below is the cronjob.

30 8 * * * root /usr/local/bin/cvsrun | mail -s "Daily cvsup run and
portmanager" user1

can someone help me to correct this script, to show the output that i want.

TIA




--
Best Regards,

Ivan Levchenko
[EMAIL PROTECTED]




Re: scripting question

2006-10-02 Thread Atom Powers

On 10/2/06, jan gestre <[EMAIL PROTECTED]> wrote:

i made a script and put on root's crontab, however it's not doing or showing
the output that is forwarded to my email address correctly therefore i'm not
sure if it is working or not. below is what the script look like:


...


30 8 * * * root /usr/local/bin/cvsrun | mail -s "Daily cvsup run and
portmanager" user1



Why don't you make this a periodic script? Put it in
'/usr/local/etc/periodic/weekly/299.cvsrun' and it will be run every
week with the rest of your periodic jobs; with periodic you get much
better control over where output is sent.
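A sketch of what such a periodic script might look like. The command paths are copied from the script earlier in the thread and may differ on your system; the file is written to /tmp here only so the sketch is self-contained - on a real machine it would live at the periodic path above.

```shell
# hypothetical contents of /usr/local/etc/periodic/weekly/299.cvsrun
cat > /tmp/299.cvsrun <<'EOF'
#!/bin/sh
echo
echo 'Weekly cvsup run:'
/usr/local/bin/cvsup -g -L 2 /root/ports-supfile
echo
echo 'Out-of-date ports:'
/usr/local/bin/portmanager -s | grep OLD
EOF
chmod +x /tmp/299.cvsrun
sh -n /tmp/299.cvsrun && echo 'syntax OK'
```

periodic(8) then collects the script's stdout into the weekly report, which is mailed according to the periodic.conf settings rather than a hand-rolled cron pipeline.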

--
--
Perfection is just a word I use occasionally with mustard.
--Atom Powers--