Re: Logging under CGI

2002-06-10 Thread Tom Brown

On Tue, 11 Jun 2002, Sam Tregar wrote:

> On Mon, 10 Jun 2002, Tom Brown wrote:
> 
> > ?? AFAIK, Files opened in append mode, and written to without buffering,
> > should _not_ get corrupted in any manner that flock would prevent.
> > (basically small writes should be atomic.)
> 
> Right, and does Perl write with buffering when you call print()?  Yes, it
> does!

huh? That's what $| is all about, and $|++ is a pretty common line of
code.

> 
> > that should be pretty universal for most UNIXs
> 
> I've actually never heard this before.  I've been taught that if you have
> multiple processes writing to one file you must use flock() or another
> equivalent mechanism to prevent overwrites.  Do you have a source where I
> could learn about guaranteed atomic file writes without locking under
> UNIX?

See open(2); note the O_APPEND option... the only footnote is that it
doesn't work properly over NFS...

This doesn't cover why small writes are atomic, though, and write(2)
doesn't seem to either, but the open man page on my system says

   O_APPEND
   The  file is opened in append mode. Initially, and
   before each write, the file pointer is  positioned
   at  the  end  of  the  file,  as  if  with  lseek.
   O_APPEND may lead to corrupted files on  NFS  file
   systems if more than one process appends data to a
   file at once.  This is because NFS does not support
   appending to a file, so the client kernel  has  to
   simulate it, which can't be done without a race
   condition.

which certainly implies that you can expect local files _not_ to get
corrupted.
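For what it's worth, the two ingredients discussed above (append mode plus unbuffered output) look like this in plain Perl; the log path is made up, and this is a sketch of the pattern, not anyone's production code:

```perl
#!/usr/bin/perl
# Open the log in append mode ('>>' implies O_APPEND) and turn off
# buffering, so each print is one small write(2) that the kernel
# appends atomically. The log path is illustrative.
use strict;
use warnings;

my $log = "/tmp/atomic-append-demo.log";
unlink $log;                          # start fresh for the demo

open(my $fh, '>>', $log) or die "open $log: $!";
my $old = select($fh);
$| = 1;                               # the $|++ trick: unbuffer $fh
select($old);

# Safe without flock() on a local filesystem, as long as each record
# is a single small print; concurrent appenders won't interleave.
print $fh "[$$] request handled\n";
close($fh) or die "close: $!";
```

On NFS, per the man page quoted above, this guarantee disappears and flock() (or a local spool) is back on the table.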

p.s. I'm not the only one who considers it impolite to have off-list
messages taken back onto the list... I generally don't post AFAIK comments
to lists, preferring to keep the signal-to-noise ratio higher.


> 
> -sam
> 
> 
> 

--
[EMAIL PROTECTED]   | Courage is doing what you're afraid to do.
http://BareMetal.com/  | There can be no courage unless you're scared.
   | - Eddie Rickenbacker 






Re: newbie - installation problems

2002-04-18 Thread Tom Brown

On Thu, Apr 18, 2002 at 12:57:52PM +0200, Carlo Giomini - tesista Federico wrote:
> > > 2. If this risk exists, I should procede with the old installation BUT
> > > what would I get at the end? Mod_perl statically built inside Apache?
> > Build what you want and install it. I wouldn't worry about the
> > previous installs unless you start changing the install targets (where
> > the files go in the filesystem). Then you may start to confuse yourself with
> > copies of apache, httpd.conf, etc scattered around the fs.
> Thanks for the encouragement, but please tell me what would I get going on
> with the previous install: mod_perl statically linked? Also, I found
> already two different httpd.conf in the fs:
> /etc/httpd/conf/httpd.conf
> /usr/apache/conf/httpd.conf
> no idea what the /etc/... is worth for.

I'd recommend you use the command line args to work out what an apache
or httpd binary supports.
E.g.:
apache -V will tell you where it gets its conf files by default
apache -L | grep LoadModule will output 'LoadModule (mod_so.c)' if it
supports a dynamic mod_perl
apache -l | grep -i perl should output something if mod_perl is
statically linked
apache -h or man apache will explain these and more options

As you may have noticed the apache binary is sometimes named httpd.
For example you may want to run:
/usr/sbin/httpd -l | grep -i perl
or something similar.

The extra .conf and .so files are harmless but may be a nuisance.


-- 
28 70 20 71 2C 65 29 61 9C B1 36 3D D4 69 CE 62 4A 22 8B 0E DC 3E
mailto:[EMAIL PROTECTED]
http://thecap.org/



Re: newbie - installation problems

2002-04-18 Thread Tom Brown

On Thu, Apr 18, 2002 at 12:20:41PM +0200, Carlo Giomini - tesista Federico wrote:
> I can't manage very well with Apache and mod_perl. I have made an
> installation of mod_perl WITHOUT building a new httpd daemon (NO_HTTPD=1),
> going through all the steps to the end (perl Makefile.PL, make, make
> install). After that, I read in Stas Bekman's guide at
> http://perl.apache.org/guide that mod_perl can also be installed as a DSO, via
> apxs. The question is:
Sounds like you are on the right path.

> 1. Is it possible to do a new installation of mod_perl as DSO that
> 'overrides' (so to say) the (incomplete) preceding one or there is the
> risk to mess up things worse?
It is possible and very common to build it one way, install, then
build with different options and install over the top.

> 2. If this risk exists, I should procede with the old installation BUT
> what would I get at the end? Mod_perl statically built inside Apache?
Build what you want and install it. I wouldn't worry about the
previous installs unless you start changing the install targets (where
the files go in the filesystem). Then you may start to confuse yourself with
copies of apache, httpd.conf, etc scattered around the fs.

> 3. Is there a procedure to 'uninstall' the (incomplete) installation made
> so far?
The only procedure I know is to manually remove the files. There may
be an automated method, but I've never felt the need to find out about
it.

-- 
28 70 20 71 2C 65 29 61 9C B1 36 3D D4 69 CE 62 4A 22 8B 0E DC 3E
mailto:[EMAIL PROTECTED]
http://thecap.org/



Re: Sharing Variable Across Apache Children

2002-04-17 Thread Tom Brown

Is the webserver useful if you have an error that warrants sending
mail? If sending an email means the server is broken, having a flood of
mails may be a feature: it's an incentive to fix whatever is
breaking your server/db.
Also, I would strongly recommend keeping your warning system as simple
as possible. Why not have the servers write an error message to a file
on a single NFS filesystem and then set up a cron job to watch the
file(s)? One machine could run the cron job every minute, and one mail
a minute isn't too much unless you are forwarding it to a beeper, in
which case you could write a quick script to only mail once per unit time.
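That once-per-unit-time throttle could be sketched like this in Perl (all paths are invented, and the actual mail delivery is stubbed out with a print):

```perl
#!/usr/bin/perl
# Cron-driven throttle: alert at most once per interval no matter
# how many errors pile up. Paths are assumptions; a real version
# would pipe the error file to sendmail/qmail-inject.
use strict;
use warnings;

my $errfile  = '/tmp/demo-errors.log';   # servers append errors here
my $stamp    = '/tmp/demo-last-alert';   # its mtime = last alert time
my $interval = 60;                       # at most one alert a minute

# --- demo setup, so the sketch does something when run directly ---
open(my $e, '>>', $errfile) or die $!;
print $e "db connect failed\n";
close($e);
unlink $stamp;
# ------------------------------------------------------------------

exit 0 unless -s $errfile;                    # nothing to report
my $last = (stat($stamp))[9] || 0;            # 0 if no stamp yet
exit 0 if time() - $last < $interval;         # alerted too recently

print "ALERT: errors pending in $errfile\n";  # stand-in for the mail
open(my $s, '>', $stamp) or die $!;           # touch the stamp file
close($s);
unlink $errfile;                              # clear what we reported
```

Run from cron every minute, the stamp file's mtime is the only state it needs.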

On Wed, Apr 17, 2002 at 11:56:36AM -0400, Benjamin Elbirt wrote:
> Wow,
> 
> I never expected the response I got!  Well, lets assume that I were to go with
> the shared memory option anyway... what would the pitfalls be / concerns?  The
> truth is, I don't want a separate system (as per the e-mail about having an
> error handling server), and I don't want to have to manage the e-mail on the
> receiving end because I'm not the only person who receives it (didn't mention
> it, but I guess that's important).  Further, I have no control over the mail
> server that handles the incoming mail so I'd have to handle it on the mail
> client (Outlook / Netscape Mail) resulting in the same problem I have now.
> 
> Thanks,
> 
> Ben
> 
> Perrin Harkins wrote:
> 
> > Andrew Ho wrote:
> > > Your error handlers on your five load-balanced boxes send an HTTP request
> > > to this error handling box.
> >
> > That sounds kind of slow, since it requires a network connection to be
> > created every time and some synchronous processing on the other end.  It
> > also depends on that box always staying up.  I think e-mail is actually
> > a good approach, since it's a robust message queuing system and if you
> > use something like qmail-inject to send the e-mail it takes almost no
> > time at all for mod_perl to finish with it and move on.  You just need
> > to process those messages on the other end instead of looking at the raw
> > output, i.e. use Mail::Audit to keep track of the current state and
> > remove duplicate messages.
> >
> > Matt posted something about PPerl yesterday, which could make a
> > Mail::Audit script more efficient by keeping it persistent.
> >
> > - Perrin
> 
> 
-- 
28 70 20 71 2C 65 29 61 9C B1 36 3D D4 69 CE 62 4A 22 8B 0E DC 3E
mailto:[EMAIL PROTECTED]
http://thecap.org/



RE: loss of shared memory in parent httpd

2002-03-14 Thread Tom Brown

On Thu, 14 Mar 2002, Bill Marrs wrote:

> 
> >It's copy-on-write.  The swap is a write-to-disk.
> >There's no such thing as sharing memory between one process on disk(/swap)
> >and another in memory.
> 
> agreed.   What's interesting is that if I turn swap off and back on again, 

What? It doesn't seem to me like you are agreeing, and the original quote
doesn't make sense either (a shared page is a shared page; it can
only be in one spot until/unless it gets copied).

A shared page is swapped to disk. It then gets swapped back in, but for
some reason the kernel seems to treat swapping a page back in as copying
the page, which doesn't seem logical ... anyone here got a more
direct line to someone like Alan Cox?

That is, _unless_ you copy all the swap space back in (e.g.
swapoff)..., but that is probably a very different operation
than demand paging.

> the sharing is restored!  So, now I'm tempted to run a crontab every 30 
> minutes that  turns the swap off and on again, just to keep the httpds 
> shared.  No Apache restart required!
> 
> Seems like a crazy thing to do, though.
> 
> >You'll also want to look into tuning your paging algorithm.
> 
> Yeah... I'll look into it.  If I had a way to tell the kernel to never swap 
> out any httpd process, that would be a great solution.  The kernel is 
> making a bad choice here.  By swapping, it triggers more memory usage 
> because sharing removed on the httpd process group (thus multiplied)...

the kernel doesn't want to swap out data in any case... if it
does, it means memory pressure is reasonably high. AFAIK the kernel
would far rather drop executable code pages which it can just go
re-read ...

> 
> I've got MaxClients down to 8 now and it's still happening.  I think my 
> best course of action may be a crontab swap flusher.

or reduce MaxRequestsPerChild ? Stas also has some tools for
causing children to exit early if their memory usage goes above
some limit. I'm sure it's in the guide.
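The tools Stas documents in the guide are, if I remember right, Apache::SizeLimit (and Apache::GTopLimit). The snippet below is a from-memory sketch of the mod_perl 1.x style configuration, so treat the directives and numbers as assumptions and check the guide for the real thing:

```apache
# httpd.conf -- hypothetical limits, mod_perl 1.x style
MaxRequestsPerChild 500            # recycle children periodically anyway

PerlModule Apache::SizeLimit
<Perl>
    # have a child exit after the current request once it grows past ~12MB
    $Apache::SizeLimit::MAX_PROCESS_SIZE = 12000;   # kilobytes
</Perl>
PerlFixupHandler Apache::SizeLimit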

> 
> -bill
> 

--
[EMAIL PROTECTED]   | Courage is doing what you're afraid to do.
http://BareMetal.com/  | There can be no courage unless you're scared.
   | - Eddie Rickenbacker 




Re: loss of shared memory in parent httpd

2002-03-12 Thread Tom Brown

> No, I can't explain the nitty gritty either. :-)
> 
> Someone should write up a summary of this thread and ask in a
> technical linux place, or maybe ask Dean Gaudet.

I believe this is a linux/perl issue... standalone daemons exhibit the
same behaviour... e.g. if you've got a parent Perl daemon that
fork()s, swapping in data from a child does _not_ have any
effect on other copies of that memory. I'm sure swapping in the
memory of the parent before fork()ing would be fine.

Admittedly, my experience is from old linux kernels (2.0), but I
would not be surprised if current ones are similar.

I'm sure it is the same on some other platforms, but I haven't used much
else for a long time.

--
[EMAIL PROTECTED]   | Put all your eggs in one basket and 
http://BareMetal.com/  |  WATCH THAT BASKET!
web hosting since '95  | - Mark Twain




Re: Urgent: Can we get compiled codes(class files in java) in perl like in java

2002-03-07 Thread Tom Brown

By 'compiled code ... just like that in Java' do you mean byte code?
You may want to look at
http://perlmonks.org/index.pl?lastnode_id=864&node_id=76685
which I found by searching for 'compiled' at perlmonks.org.

Your client is making a strange request. Most people put a higher
value on source code than object code, and mod_perl makes source
execute
as quickly as object code, on average.

Side note: please wrap your lines at something like 70 characters.


On Thu, Mar 07, 2002 at 11:48:51PM +0530, A.C.Sekhar wrote:
> Hi all ,
>I need a help. My requirement is like this, we are developing one portal site 
>in perl5(mod_perl)-apache-linux. our client don't want the perl source code. He want 
>only the compiled code. Is it possible to give the compiled code in perl just like 
>that in Java? How can we do that, plz help us in this regard and tell me what to do 
>and how to do? This is a bit urgent...
> 
> Thanks and Regards
> A C Sekhar
> 
-- 
mailto:[EMAIL PROTECTED]
http://www.ece.utexas.edu/~thecap/
28 70 20 71 2C 65 29 61 9C B1 36 3D D4 69 CE 62 4A 22 8B 0E DC 3E



Re: Post processing Perl output through PHP

2001-07-15 Thread Tom Brown


Better: someone has written a makerpm.pl script which will build a .spec
file for an RPM, from which you can build .src.rpm or .i386.rpm files...
there is a version out there that works with rpm4. I won't post it
here in the hope that someone who is maintaining a version _will_ speak
up... basically it comes down to:

/usr/src/redhat/SOURCES> ./makerpm.pl --spec --source tarball.1.1.tgz
/usr/src/redhat/SOURCES> cd ../SPECS
/usr/src/redhat/SPECS> rpm -ba tarball-1.1.spec
/usr/src/redhat/SPECS> rpm -i ../RPMS/i386/perl-tarball-1.1.rpm

I've typed the above from memory and may have botched filenames/syntaxes
etc... search the list for similar and probably better examples..

On Sun, 15 Jul 2001, raptor wrote:

> 
> > Ironically, having tried the suggestion from Darren, I discover that I
> don't have
> > LWP installed. My sysadmin however, will install anything for me as long
> as
> > I provide him with an RPM for it.
> >
> > I don't mean to sound lazy, and I have just checked rpmfind.net, but I
> can't
> > quickly put my hands on an rpm which includes LWP::Simple for Red Hat 7.0
> >
> ]- get checkinstall and install LWP on your computer as stated in the
> checkinstall docs and it will make the RPM for U :")
> http://mayams.net/~izto/checkinstall-en.html
> after u install checkinstall u have to do something like this :
> 
> perl Makefile.PL
> make
> make test
> checkinstall make install
> 
> HtH
> =
> iVAN
> [EMAIL PROTECTED]
> =
> 
> 
> 

--
[EMAIL PROTECTED]   | Always bear in mind that your own resolution to
http://BareMetal.com/  | success is more important than any other one
web hosting since '95  | thing. - Abraham Lincoln




Re: [OT] ApacheCon BOF

2001-03-19 Thread Tom Brown

> > 
> > "mod_perl: 20 billion hits served"
> > And turn the "m" into a stylized arch. :)
> 
> LOL!! Way to go, Randal!
> 
> Or how about a play on the old Superman line?
> A graphic with O'Reilly's Eagle and a caption like
> 
> "Look! Up on the net! It's a bird! It's a plane!
>  No, it's... wait, it *is* a bird."
> 
> And then we could have JAmPH submissions for the conference theme to be
> printed on the pocket? Lord knows we could come up with some beautiful
> one-liners, and what's a better way to represent Perl? ;o]

I would humbly suggest that displaying typical write-only-code
isn't the best way to promote mod_perl.

... and the real reason I posted was simply to say that I support
the idea of making the shirts available to NON-attendees. Last
year's shirt looked really good, but alas...

-Tom




(changing userids/2.0/suexec) was: RE: security!

2001-03-01 Thread Tom Brown

> > 
> > > This is a general Unix webserver issue and not specific to 
> > > mod_perl, so I've marked your message [OT] for off-topic.
> > 
> > Well, workarounds are available for specific webserver environments, so I
> > don't believe it's an inappropriate question.
> > 
> > With CGI, you use the suexec mechanism to start executable programs as a
> > particular user.  AFAIK you can't impersonate a user on unixy environments
> > without forking a new process.  And forking a new process under mod_perl
> > really defeats the purpose.

Changing userids has nothing to do with fork()... the problem is simply
that it requires root privileges, and since you need to give them up
permanently if you're going to run someone else's "insecure" code, that
usually means a temporary process... (which typically means fork()ing a
short-lived process, so you could make the connection)...

The Apache 2.0 model seems to include a mechanism for routing requests to
a group of apache child processes which have _already_ switched to the
target userid... in short, the pre-fork model is extended to have classes
of pre-forked processes... it seems to be a mighty good fix for this
particular problem.

That said, I only took about two minutes reading one of the URLs posted
earlier today, but I got that far and said "that'll work!" and quit
reading until I have time to actually test some of this ...

-Tom





Re: trouble with path_info

2001-02-13 Thread Tom Brown

On Tue, 13 Feb 2001, Pierre Phaneuf wrote:

> Pierre Phaneuf wrote:
> 
> Does anyone has an idea about this? I think I have proper behavior from
> my perl handler by installing it at the root of the server, but this is
> no real solution!
> 
> What I am doing wrong here???
> 
> > I'm really stumped with that one. How come Apache::Registry gets the
> > right information and I don't??? I tried doing the exact same thing, to
> > no avail.

Because with Apache::Registry the "file" actually exists and has a 'real'
URI ... so it's easy to determine which parts are the script name and which
parts are 'extra path info' ... this is quite likely done outside of
mod_perl by the default handlers (sorry, no time to look up which
request stages are involved)...

I would _guess_ that with a handler, it's not so clear... it would seem
that script_name should be whatever you have in the <Location> directive,
and anything else would be path_info ... 





Re: [OT] Apache wedges when log filesystem is full

2001-01-17 Thread Tom Brown

On Wed, 17 Jan 2001, Andrew Ho wrote:

> Hello,
> 
> The other day we had a system fail because the partition that holds the
> logs became full, and Apache stopped responding to requests. Deleting some
> old log files in that partition solved the problem.
> 
> We pipe logs to cronolog (http://www.ford-mason.co.uk/resources/cronolog/)
> to roll them daily, so this introduces a pipe and also means that the
> individual logfile being written to was relatively small.
> 
> While this faulted our monitoring (we should have been monitoring free
> space on /var), I am also interested in something I was unable to find
> on-line: how to configure Apache to robustly failover in such a case,
> e.g. keep serving responses but just stop writing to logs.

I haven't tested anything I say below, but I believe that ...

... the child processes would have blocked because the pipe they
were writing to got full. The simple fix is to have cronolog
keep reading the logging info even if it can't write it out.
This isn't an Apache config issue; it's an operating
system/IPC/logging-agent issue. 

It is possible you could modify apache to set the pipe to
non-blocking and thus it would simply get back an error message
when it tried to write(2) to the logging process/pipe... but it's
probably a better idea to do it on the other side (keep reading
from the pipe)... at least in our shop the log-agent is much
simpler than apache, and a more logical place to put custom code.

Alternatively, of course, you could write your own log handler
instead of using the default apache ones... and now that I think
about it, that probably was your question, wasn't it :-(

-Tom

--
[EMAIL PROTECTED]   | Put all your eggs in one basket and 
http://BareMetal.com/  |  WATCH THAT BASKET!
web hosting since '95  | - Mark Twain





Re: fork inherits socket connection

2000-12-19 Thread Tom Brown

> 
> Yes, yes, yes it was a bad suggestion. Sorry about that.
> I still didn't complete this section, looking for a clean solution to find
> a way to close only the fd that keeps the socket busy.
> So far you can use the closing fds in loop -- at least it works.

yuck... you'd either have to find some way to extract that information
directly from apache, or loop through all the file descriptors calling
getsockname() and then close any descriptor bound to port 80/443/?
(depends how your daemon is set up; if you've got multiple Listen
statements you will have to close multiple sockets, but normally it
would just be one or two, e.g. 0.0.0.0:80 and/or 0.0.0.0:443).
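The getsockname() walk might look roughly like this; a throwaway listener on an ephemeral port stands in for the inherited port-80/443 socket, since the real thing would run inside a forked Apache child:

```perl
#!/usr/bin/perl
# Walk the low file descriptors, find any that is a TCP socket bound
# to the busy port, and close it. The listener below is a demo
# stand-in for the Listen socket a forked child would inherit.
use strict;
use warnings;
use Socket qw(sockaddr_in);
use IO::Socket::INET;
use POSIX ();

my $listener = IO::Socket::INET->new(Listen => 5, LocalAddr => '127.0.0.1')
    or die "listen: $!";
my $busy_port = $listener->sockport;

for my $fd (0 .. 255) {
    open(my $copy, '<&', $fd) or next;  # dup(2); closing the copy is harmless
    my $name = getsockname($copy);      # undef unless $fd is a socket
    close($copy);
    next unless defined $name;
    my $port = eval { (sockaddr_in($name))[0] };  # dies on non-INET names
    next unless defined $port && $port == $busy_port;
    POSIX::close($fd);                  # shut the inherited listen socket
}

my $status = defined(getsockname($listener)) ? 'still open' : 'listener closed';
print "$status\n";
```

With multiple Listen statements you'd collect a list of busy ports and match against all of them.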

--
[EMAIL PROTECTED]   | Don't go around saying the world owes you a living;
http://BareMetal.com/  | the world owes you nothing; it was here first.
web hosting since '95  | - Mark Twain




Re: greetings and questions

2000-12-14 Thread Tom Brown

On Thu, 14 Dec 2000, Tom Brown wrote:

> On Thu, 14 Dec 2000, Ajit Deshpande wrote:
> > > 2. The POD for Apache::Registry says that it doesn't like __END__ and
> > > __DATA__ tokens.  So what effect do these actually have if left in?  Does
> 
> In scripts? it's a syntax error, but that's a completely separate issue
> from modules which get used "as is" ...
> 
> scripts get wrapped inside braces (and probably an eval) and obviously if
> you cut off the closing braces with an __END__ you're going to be in
> trouble...
> 
> Apache::Registry isn't that big or complex ... have a look at it...

Sorry, I realized I oversimplified my answer and there are good
conclusions to be had from a better one... so my apologies for following
up on my own post... 

here's the immediately relevant code from Apache::Registry.pm

    my $eval = join(
        '',
        'package ',
        $package,
        ';use Apache qw(exit);',
        'sub handler {',
        $line,
        $sub,
        "\n}", # last line comment without newline?
    );
    compile($eval);

... what you think of as your script is $sub in this join()... $package is
the Registry-assigned package name, and $line is a hint to the Perl compiler
to get the reported line numbers right...
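The POD's warning about __END__ falls straight out of that wrapping; here's a small demo of the failure mode (the script bodies are invented):

```perl
#!/usr/bin/perl
# Why __END__ breaks a Registry script: the body is spliced into
# "sub handler { ... }", and __END__ stops the parse right there,
# so the closing brace Registry appends is never seen.
use strict;
use warnings;

my @results;
for my $body ("my \$x = 1;", "my \$x = 1;\n__END__\nstray notes") {
    my $wrapped = "package Demo; sub handler { $body\n}";
    eval "$wrapped; 1";
    push @results, $@ ? "fails" : "compiles";
}
print "@results\n";   # plain body compiles; the __END__ body fails
```

The same cut-off applies to __DATA__, which is why the Registry POD warns about both tokens.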






Re: greetings and questions

2000-12-14 Thread Tom Brown

On Thu, 14 Dec 2000, Ajit Deshpande wrote:
> > 2. The POD for Apache::Registry says that it doesn't like __END__ and
> > __DATA__ tokens.  So what effect do these actually have if left in?  Does

In scripts? it's a syntax error, but that's a completely separate issue
from modules which get used "as is" ...

scripts get wrapped inside braces (and probably an eval) and obviously if
you cut off the closing braces with an __END__ you're going to be in
trouble...

Apache::Registry isn't that big or complex ... have a look at it...




Re: Certification

2000-12-07 Thread Tom Brown

On Thu, 7 Dec 2000, Matt Sergeant wrote:

> On Thu, 7 Dec 2000, J. J. Horner wrote:
> 
> > If I'm way off base, please let me know.  I'm spending considerable
> > brain power on this idea and if I'm wasting it, I need to know.  I
> > don't have much spare brain power and I could use it to try to figure
> > out my wife . . .
> 
> Ask yourself this question: Are you in need of a mod_perl job? If so, I'm
> willing to bet that there are employers who would snap you up in a second.
> 
> As has been said a few times here, certification is pretty pointless
> unless you need some distinguishing factor. With mod_perl, the
> distinguishing factor is that you're available!

(my apologies if this has already been said, I'm still catching up...)

yes and no.

having a certification program implies a lot more than just that there
will be something employers can look at. 

I would expect that the real value comes from the fact that a lot of hard
work has gone into building a training program, which will by its
nature create more mod_perl programmers ... how many is subject to
question, but if you can point prospective candidates at the list of
hungry employers, then it should be fairly successful...

It's my belief that part of the reason microsoft has been so successful
is that they have made it so easy for schools/institutes to teach their
material ... thus more students studying the M$ way, thus more folks
"selling" microsoft solutions...

... anyone who wants to teach an NT course just asks microsoft for the
curriculum... but wanna teach a linux course and your options are (or
were, things may have changed) less clear, and you're more likely to have
to build it yourself... given the quality and motivation levels of most
schools/institutes/instructors the choice is clear... especially when
they get to ride on the promotion bandwagon that microsoft has prepared...


--
[EMAIL PROTECTED]   | What I like about deadlines is the lovely
http://BareMetal.com/  | whooshing they make as they rush past.
web hosting since '95  | - Douglas Adams


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




Re: RFC: mod_perl advocacy project resurrection

2000-12-06 Thread Tom Brown

On Wed, 6 Dec 2000, Ben Thompson wrote:

> On Tue, Dec 05, 2000 at 09:32:41AM -0800, brian moseley wrote:
> > 
> > if you really feel the need to compete with php in the
> > lowest tier web app space, you need to make simplicity your
> > #1 goal. php is awesome entry level technology, and i almost
> > always recommend it over perl to people who only have the
> > desire to do casual programming for personal sites and small
> > projects. and that's a significant percentage of the people
> > i know doing web programming.
> 
> Actually, PHP's advantage is that you can install it and all 250 sites
> on that machine can use it without problems. You just can't do that
> sanely under mod_perl.

Being in the webhosting industry, and running modperl-space.com, I'd
suggest that this really is an issue... even for the hobbyist discussed
earlier it's non-trivial to get a semi-serious mod-perl site online... the
gap between running it off your cable modem at home and a dedicated server
at a co-location facility is pretty big... 

Our standard PHP configuration is CGI-based, which gives us all
the suexec benefits, plus process count/size/cpu limiting by
userid, etc...  For folks that go beyond php-cgi, we can go to
mod_php, but it's rare... with mod_perl, there is no half step
unless you want to call it perl-CGI ... and even then we all know
the troubles of taking CGI/run-once Perl into a persistent
environment...









Re: how to really bang on a script?

2000-10-28 Thread Tom Brown

On Sat, 28 Oct 2000, martin langhoff wrote:

> Chris,
> 
>   i'd bet my head a few months ago someone announced an apache::bench
> module, that would take a log and run it as a benchmarking sequence of
> HTTP requests. just get to the list archives and start searching with

I wrote a simple perl script (that forks multiple children and uses IPC
to get multiple processes banging on your box) that runs from a parsed
log... but it was more to test functionality than as a benchmarking tool.
It _should_ still be floating around here...

> benchmarks and logs. CPAN is your friend, also.
> 
>   there are at least 2 or 3 benching perl scripts available. I bet at
> least one does what you need. but I may still lose my bet ... 
> 
> 
> 
> 
> m
> 

--
[EMAIL PROTECTED]   | Don't go around saying the world owes you a living;
http://BareMetal.com/  | the world owes you nothing; it was here first.
web hosting since '95  | - Mark Twain




Re: Apache::GzipChain

2000-10-28 Thread Tom Brown

On Sat, 28 Oct 2000, G.W. Haywood wrote:

> Hi there,
> 
> On Sat, 28 Oct 2000, Jerrad Pierce wrote:
> 
> > Is anybody using GzipChain?
> 
> IIRC, Josh said he was.  He didn't complain about it.  Raved, in fact.
> 
> > Is there some known means of verifying that it is in fact working properly?
> 
> LWP?

Better to use your logs... LWP won't trigger it... if you download a 100k
page (check "view page info" or save it to disk) but your access
log shows 15k, you know it's doing its job.  Hhmm, seems I was using
Apache::Gzip until I got my ADSL back... but at that time it was a
non-trivial exercise, and compressing _everything_ (including PHP scripts
etc.) required using LWP internally, which worked even better for
checking functionality, because you had two log entries: the raw one from
localhost, and the compressed one from the remote agent, with
"appropriate" variances in their sizes :-)

> 
> 73,
> Ged.
> 

--
[EMAIL PROTECTED]   | Don't go around saying the world owes you a living;
http://BareMetal.com/  | the world owes you nothing; it was here first.
web hosting since '95  | - Mark Twain




Re: Bug in mod_perl

2000-10-09 Thread Tom Brown


Interesting, the Mason bug report I just filed is obviously mis-filed.

Apache::Registry scripts suffer the same behaviour.


On Mon, 9 Oct 2000, Dave Rolsky wrote:

> Try the following handler:
> 
> package Foo;
> 
> use Apache::Request;
> 
> sub handler
> {
> my $r = shift;
> 
> my (@vars) = ( 'abc', "abc\0def", "def" );
> 
> $r->send_http_header;
> $r->print("$_\n") foreach @vars;
> }
> 
> 
> 1;
> 
> 
> I'm using mod_perl 1.24/Apache 1.3.12/Perl 5.00503 and find that I receive
> no output after the \0.  Is this a mod_perl or Apache bug?  Or is it a
> client bug (using Netscape 4.75) or is it the expected behavior.
> 
> -dave
> 
> /*==
> www.urth.org
> We await the New Sun
> ==*/
> 
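For the record, the truncation isn't happening on the Perl side: Perl strings carry an explicit length and happily hold NUL bytes. My guess (an assumption, not checked against the source) is that the bytes are dropped wherever the buffer crosses into NUL-terminated C string handling inside Apache:

```perl
#!/usr/bin/perl
# Perl strings are length-counted, not NUL-terminated, so nothing
# is lost before the data is handed to $r->print().
use strict;
use warnings;

my $s = "abc\0def";
my $len   = length($s);        # the NUL counts as an ordinary byte
my $c_len = index($s, "\0");   # where C's strlen() would stop
print "perl sees $len bytes; strlen would see $c_len\n";
```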

--
[EMAIL PROTECTED]   | Drive thy business, or it will drive thee.
http://BareMetal.com/  | - Benjamin Franklin
web hosting since '95  | 




Re: suexec: disabled?

2000-09-01 Thread Tom Brown

On Thu, 31 Aug 2000, Bakki Kudva wrote:

> I recently upgraded to perl5.6 and added php4 to my apache server. I
> don't know what I did wrong but I am getting the following errors.
> If I do a httpd -l I get...
> 
> suexec: disabled; invalid wrapper /usr/local/apache/bin/suexec

your copy of suexec is most likely installed in another location. Either
that or there are some sanity checks that your binary is
failing... in either case, for mod_php and mod_perl it doesn't
matter -- suexec is only for mod_cgi stuff.

> Also I cannot browse anything in htdocs becuase I get a "You don't have
> permission to access / on this server." and the error log contains..
> 
> 192.168.0.252 - - [31/Aug/2000:17:13:35 -0400] "GET / HTTP/1.0" 403 279
> 
> Where do I start to look for this permissions problem? The htdocs looks
> is owned by 'nobody'.

And nobody's home directory is probably mode 0700 while the webserver is
running as "httpd", or vice-versa? The _error_ log would show the problem;
you have quoted the access log.





Re: Proxy setup w/ SSL (fwd)

2000-08-08 Thread Tom Brown


Stas wanted me to send this to the list, so I'll do that... I've
also done a little testing, and it looks like mod_status is
showing 4 keepalive connections on my old (1.2.6 redhat secure) SSL
server to my netscape 4.72 browser ... 

Srv  PID   Acc       M CPU  SS Conn Child Slot Host           VHost                Request
0    13413 4/4/4     K 0.08  4 0.9  0.00  0.00 216.86.106.124 secure.baremetal.com GET /icons/burst.gif HTTP/1.0
2    8451  11/14/14  K 0.23  4 3.0  0.01  0.01 216.86.106.124 secure.baremetal.com GET /icons/forward.gif HTTP/1.0
3    8450  7/11/11   K 0.35  4 1.6  0.01  0.01 216.86.106.124 secure.baremetal.com GET /icons/sound.gif HTTP/1.0
4    8449  6/10/10   W 0.31  0 5.0  0.01  0.01 216.86.106.124 secure.baremetal.com GET /server-status HTTP/1.0

I'm not sure why only 28 files are shown in the "this connection"
column; there were 29 icons, an html file, and the status page...

  
  Srv   Server number
  PID   OS process ID
  Acc   Number of accesses this connection / this child / this slot
  M     Mode of operation
  CPU   CPU usage, number of seconds
  SS    Seconds since beginning of most recent request
  Conn  Kilobytes transferred this connection
  Child Megabytes transferred this child
  Slot  Total megabytes transferred this slot



   Date: Tue, 8 Aug 2000 11:43:49 -0700 (PDT)
   From: Tom Brown <[EMAIL PROTECTED]>
   To: Stas Bekman <[EMAIL PROTECTED]>
   Subject: Re: Proxy setup w/ SSL

   > > > initiating many connections and downloading all the objects (e.g. images)
   > > > in parallel, the objects are downloaded sequencially.
   > > 
   > > No. AFAIK It still opens up multiple/parallel connections... it just
   > > doesn't go through the handshake stuff repeatedly...
   > 
   > Really? That's what I was always told. Any pointers to read about
   > this. Thanks!

   Sorry, no... although it should be easy enough to test; even mod_status
   should provide enough information...

   Part of my logic is that the browser doesn't even know if the connection
   is going to be keep-alive until it gets the first response... so if you
   load a page from domain.com, and it contains 20 images from
   images.domain.com, there would have to be a "test load" of the first image
   before deciding whether to open up multiple connections ... strikes me as
   simpler to just proceed as normal and use the pipelining on all
   connections if it is available...  

   (maybe things are different for SSL than normal connections, but again, I
   can't see why they would be...)





Re: template kit.....

2000-07-29 Thread Tom Brown

On Fri, 28 Jul 2000, Paul J. Lucas wrote:

> On Fri, 28 Jul 2000, Denton River wrote:
> 
> > It's been a long time since I have done a job without using sessions. I would
> > really like to have this feature included in the kit I'm using, and I think
> > a lot of developers are with me on this one.
> 
>   What I don't understand is *why*.  Why can't you use two
>   independent pieces of software: one for templates and the other
>   for sessions, that work perfectly well together (or separately)?
> 
>   I personally prefer smaller, more easily understandable pieces
>   to large, complex, feature-bloated software.

Agreed. It seems to me that someone should write a simple package for
tying in Apache::Session (or similar) in a transparent manner, perhaps
using an early handler stage of the request and leaving the session info
in $r->pnotes() ?? (via an object/typeglob/whatever??)  That said, it's
late and I'm really not in the appropriate state for making _solid_
contributions ;-)
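
Something along those lines, perhaps (a completely untested sketch for a
mod_perl 1.x server; the package name, session store paths, and cookie
plumbing are all invented for illustration):

```perl
package Apache::AutoSession;   # hypothetical module name
use strict;
use Apache::Constants qw(OK);
use Apache::Cookie ();
use Apache::Session::File ();

# httpd.conf:  PerlInitHandler Apache::AutoSession
sub handler {
    my $r = shift;

    my %cookies = Apache::Cookie->fetch;
    my $id = $cookies{SESSION} ? $cookies{SESSION}->value : undef;

    # tie the session hash; Apache::Session generates an id if none given
    tie my %session, 'Apache::Session::File', $id,
        { Directory     => '/tmp/sessions',          # invented paths
          LockDirectory => '/tmp/sessions/locks' };

    # park it where any later handler stage can get at it
    $r->pnotes(SESSION => \%session);

    # hand the browser a cookie on its first visit
    Apache::Cookie->new($r,
        -name  => 'SESSION',
        -value => $session{_session_id},
    )->bake unless $id;

    return OK;
}

1;
```

A content handler would then just grab `$r->pnotes('SESSION')` without
caring how the session got there.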

> 
>   - Paul
> 

--
[EMAIL PROTECTED]   | Don't go around saying the world owes you a living;
http://BareMetal.com/  | the world owes you nothing; it was here first.
web hosting since '95  | - Mark Twain




Re: Re: Re: redirecting a domain [OT]

2000-07-16 Thread Tom Brown

On Sun, 16 Jul 2000, Barry Hoggard wrote:

> Nothing is wrong with that solution if you only have a few domains.  
> We own a lot of misspellings of our company name, so I don't want to 
> add each of them individually to the conf file.

double that count... since you also nicely solved the problem of
domain.com versus www.domain.com, which becomes significant when cookies
are brought into play...

-Tom

>  Begin Original Message 
> 
> From: Todd Finney <[EMAIL PROTECTED]>
> Sent: Sun, 16 Jul 2000 22:33:18 -0400
> To: mod_perl <[EMAIL PROTECTED]>
> Subject: Re: Re: redirecting a domain [OT]
> 
> 
> At 10:23 PM 7/16/00, Barry Hoggard wrote:
> >No!  That's a silly way to do it.  You want to use mod_rewrite.
> >Here's the relevant part of my httpd.conf:
> >
> >RewriteEngine On
> >RewriteCond %{HTTP_HOST}  !^www.investorama.com$
> >RewriteCond %{HTTP_HOST}  !^$
> >RewriteRule /?(.*) http://www.investorama.com/$1 [R=permanent,L]
> 
> Too complicated.  What's wrong with this:
> 
> 
> ServerName www.domain.org
> Redirect permanent / http://www.domain.net/
> 




Re: Perl Registry ... Memory consumption.

2000-07-03 Thread Tom Brown


first off, David Hodgkinson's comments are correct... Stas's guide is very
thorough, and covers pretty much all of this.

On Mon, 3 Jul 2000, Nigel Hamilton wrote:

> Hi,
>   I've been trying to setup mod_perl in an Apache/Red Hat 
> Linux/mySQL environment for the last couple of weeks.
>   
>   When running Apache::Registry on production, mod_perl chewed up
> all the available memory (at that stage only 64 Meg) and the system
> started to use swap memory. 
> 
>   So we doubled the memory to 128 Meg and over the next 10 minutes
> of using the site (having restarted the server) we watched as all the
> extra memory was used up and it went back into swap!
> 
>   So here are some questions:
> 
> 1. What is a 'ball-park' figure for mod_perl memory requirements?

There is no such thing. It depends entirely on your application. A client
of ours runs a modperl-based sniffer, and that family of mod_perl httpds
can hardly be told apart from a normal family, but he only has a few
scripts, and they "don't do much" (log some info and throw out a 47-byte
transparent gif).

On the other hand, if your application deals with big strings, lots of
database queries (per page), etc., then it will be much more resource
intensive.
 
> 2. What is the entry level spec for a mod_perl/Apache/Linux server?

depends on your application, but these days I wouldn't even bother staging
a machine with less than 192 meg of RAM; it is cheap compared to the
cost of having to piss around with your box or buy faster drives.
 
> 3. Should I reduce the number of Apache child processes (currently 10) ...
> to reduce memory consumption?

you haven't said whether you're using mod_proxy and a front_end/back_end
config (see the guide). You haven't said how many simultaneous hits you
expect...

> 
> 4. Will mod_perl always ramp up its memory consumption to whatever is
> available?

no.
 
> 5. Should we throw money at the problem and just buy more memory?

As I see it, you have two choices: take the time to figure out what part
of your app is using up memory, or buy more memory... 
 
> 6. I have not been able to get Apache::DBI to work. Will Apache::DBI
> significantly reduce memory consumption (every script opens a DB handle
> for session checking)?

I strongly doubt it. If anything the connection caching is going to cost
you memory... Apache::DBI is intended to save CPU cycles.
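
For what it's worth, getting Apache::DBI going usually takes nothing more
than loading it before DBI in a startup file (the DSN, credentials, and
path below are made up for the example):

```perl
# startup.pl, pulled in with:  PerlRequire /path/to/startup.pl
use Apache::DBI ();   # must be loaded before DBI or any DBD driver

# optionally open the handle when each child starts, rather than
# lazily on the first request:
Apache::DBI->connect_on_init(
    "DBI:mysql:database=sessions;host=localhost",   # made-up DSN
    "webuser", "secret",
    { RaiseError => 1, AutoCommit => 1 },
);

1;
```

After that, every DBI->connect with the same arguments returns the cached
per-child handle.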
 
>   Any help, and war stories that you may have would be much
> appreciated ...
> 
> Nige
> 
> 
> On Sun, 2 Jul 2000, Christopher Suarez wrote:
> 
> > I'm running rh linux 6.2 on a laptop and have tried for sometime to get
> > apache::DBI runnning.
> > 
> > modperl works fine with apache but when I add the line 
> > 
> > PerlModule Apache::DBI to srm.conf, httpd.conf or "use Apache::DBI" to
> > startup.pl and then PerlRequire /path/startup.pl to httpd.conf it won't
> > work. apache will start ok (saying it's using mod_perl) but won't shut down
> > ok and error_log doesn't say anything's wrong. 
> > the webserver won't respond; on contacting 127.0.0.1 in a browser it'll
> > report connection refused. 
> > 
> > versions:
> > 
> > apache-1.3.12-2
> > apache-devel-1.3.12-2
> > perl-2.00503-10
> > perl-DBI-1.13-1
> > perl-DBD-msql-mysql-1.2210-1
> > perl-Apache-DBI-0.01-2
> > perl-Apache-DBILogin-1.5-2
> > perl-Apache-DBILogger-0.93-2
> > perl-Apache-RedirectDBI-0.01-2
> > mod_perl-1.21-10
> > strange???
> > 
> 

--
[EMAIL PROTECTED]   | Don't go around saying the world owes you a living;
http://BareMetal.com/  | the world owes you nothing; it was here first.
web hosting since '95  | - Mark Twain





Re: [RFC] Swapping Prevention

2000-06-20 Thread Tom Brown

On Tue, 20 Jun 2000, Joshua Chamas wrote:

> > your machine. Therefore you should configure the server, so that the
> > maximum number of possible processes will be small enough using the
> > C directive. This will ensure that at the peak hours the
> > system won't swap. Remember that swap space is an emergency pool, not
> > a resource to be used routinely.  If you are low on memory and you
> > badly need it, buy it or reduce a number of processes to prevent
> > swapping.
> 
> One common mistake that people make is to not load
> test against a server to trigger the full MaxClients
> in production.  In order to prevent swapping, one must
> simulate the server at its point of greatest RAM stress, 
> and one can start to do this by running ab against
> a program so that each of the MaxClient httpd processes
> goes through its MaxRequests.  

attached is a little perl script (3.5k) that I use to "replay" a log file
as fast as it will go... the script uses Sys-5 message queues and forks
off the number of children specified by -n to read these messages and make
the given request... Since we use it for benchmarking machines hosting
lots of virtual servers, the input file format is actually 

  host uri status size

(space separated) and then it whines about responses that don't match the
status or have a vastly differing size... 

This version isn't as polished as the previous one was... but I left that
under /tmp/ for too long :-(

Anyhow, my beef with ab is that it just pounds on one page (but it's 
very good at that).

Anyhow, I haven't generally used it for testing MaxClients settings, but
that would just mean increasing the -n parameter... it is
_simple_ and doesn't deal with stuff like keep-alives, and it's
not the fastest client, so it would be plain stupid to try to use
this to benchmark the number of GIFs per second a faster CPU
could serve. Basically, we generally use it for correctness testing rather
than all-out performance.


-Tom

p.s. Options...
   # -n = num threads;
   # -v = verbose
   # -d = debug
   # -s = allowed size variance (bytes)
   # -p = allowed size variance (%)
   # -w = warn if response takes more than this seconds



#!/usr/bin/perl
#
# script to read in a file of "host uri status size"
# lines, and feed the requests to a pool of child processes via Sys5
# message queue.

# constants

my $MSG_NORM = 2;
my $MSG_EXIT = 1;
my $MSG_SIZE = 1;
my $KEY = 1984;  # use fixed key instead of IPC_PRIVATE
my $TIMEOUT = 10;

# use & init

use strict;
use IPC::SysV qw(IPC_CREAT S_IRWXU S_IRWXG S_IRWXO IPC_NOWAIT);
use IPC::Msg;
use IO::Socket::INET;

use Getopt::Std;
use vars qw($opt_n $opt_v $opt_d $opt_s $opt_p $opt_w);  
getopts("n:vdw:s:p:");
   # -n = num threads;
   # -v = verbose
   # -d = debug
   # -s = allowed size variance (bytes)
   # -p = allowed size variance (%)
   # -w = warn if response takes more than this seconds

$opt_v++ if ($opt_d);

# defaults

$opt_n = 10 unless ($opt_n);
$opt_p = 5 unless (defined $opt_p);
$opt_s = 50 unless (defined $opt_s);
$opt_w = 0 unless (defined $opt_w);

my %DNS_HASH = ();

##

my $msg = new IPC::Msg($KEY, IPC_CREAT | S_IRWXU | S_IRWXG | S_IRWXO)
   or die "message queue creation failed! ($!)";

# drain queue, since it might have old messages in it.
{ my $i=1e6;
  my $buffer;
  while ($i-- > 0 && $msg->rcv($buffer, $MSG_SIZE, $MSG_NORM, IPC_NOWAIT) ) {
# drop it ;-)
  }
  while ($i-- > 0 && $msg->rcv($buffer, $MSG_SIZE, $MSG_EXIT, IPC_NOWAIT) ) {
# drop it ;-)
  }
}

my $pid;
for (my $i=$opt_n; $i > 0; $i--) {
    $pid = fork();
    die "fork failed ($!)" unless defined $pid;  # fork returns undef on failure
    last if ($pid == 0);  # child
}

if ($pid != 0) { # parent

    # send messages
    while (defined (my $line = <>)) {
        $msg->snd($MSG_NORM,$line,undef);  # tough huh?
    }

    for (my $i=$opt_n; $i > 0; $i--) {
        $msg->snd($MSG_EXIT,"goodbye",undef); # one for each child
    }
    while ( wait() > 0 ) {
        # reap children.
    }
    $msg->remove;  # remove the message queue
    print "parent finished\n" if ($opt_v);
} else { # children
  child:
    while ( 1 ) {  # loop until told to exit
        my $buffer = '';
        my $timeout = $TIMEOUT;
        while ($timeout > 0) {
            if ( $msg->rcv($buffer, $MSG_SIZE, $MSG_NORM, IPC_NOWAIT) ) {
                print "$$: $buffer\n" if ($opt_d);
                get_request($buffer);
                next child;
            } else {
                last child if ($msg->rcv($buffer, $MSG_SIZE, $MSG_EXIT, IPC_NOWAIT));
                sleep 1;
                $timeout--;
            }
        }
    }
    print "child $$: finished\n" if ($opt_v);
}

Re: cgiwrap for Apache::ASP?

2000-04-16 Thread Tom Brown

On Mon, 17 Apr 2000, Ime Smits wrote:

> | Also, my system has cgiexec (does suid for CGI scripts) installed. The
> | cgiexec documentation says that once cgiexec is installed, it is a
> | security risk if people can execute code as "nobody" since that user has
> | special access to the cgiexec code. Right now, anyone can execute code as
> | nobody by writing ASP code, so in essence I have a security hole in my
> | system, and I DO need cgiexec.
> 
> Like I said, doing something like suEXEC will solve your file access
> problems, but it won't prevent people from messing up things like the
> $Session and $Application objects which are accessible to all users running
> their site on this webserver. It won't even prevent a user from redefining
> scalars, subroutines or even complete modules which don't belong to
> their own scripts.

Huh? SuEXEC only works with mod_cgi (e.g. it requires the exec() part of
its name to get the Su part); it is not applicable to the persistent
mod_perl world.

The rest of your discussion seems to relate to the persistence of
the mod_perl environment.



-Tom




Re: cgiwrap for Apache::ASP?

2000-04-16 Thread Tom Brown

> Also, my system has cgiexec (does suid for CGI scripts) installed. The
> cgiexec documentation says that once cgiexec is installed, it is a
> security risk if people can execute code as "nobody" since that user has
> special access to the cgiexec code. Right now, anyone can execute code as
> nobody by writing ASP code, so in essence I have a security hole in my
> system, and I DO need cgiexec.
> 
> So, does anyone have suggestions on how to do suid for ASP scripts?

no (because there isn't an easy, or even moderately difficult one), but
the solution to the "nobody" problem is to run your mod_perl webserver
under a "modperl" userid. 

--
[EMAIL PROTECTED]   | Put all your eggs in one basket and 
http://BareMetal.com/  |  WATCH THAT BASKET!
web hosting since '95  | - Mark Twain





Re: mod_perl virtual web hosting

2000-04-12 Thread Tom Brown

> >
> >I'm reading between the lines here, but it sounds like you are trying to
> >have _one_ parent apache daemon that services _everything_ on the machine
> >(likely _more_ than one website), which would imply that you are going to
> >have an _extremely_ low hit ratio on your mod_perl scripts.
> 
> nahh, that's not where we were going with it. I am pretty sure it's just a
> "maximum flexibility" feature they want to have on hand to minimize tech
> support, etc.  Why does DSO waste so much memory? I thought DSO would mean
> all processes share the resident copy of the perl library?

I'm speaking through my hat for a couple of reasons.
a) I've never used DSO
b) I don't know that much about your configuration.

1) if mod_perl isn't loaded by the parent apache process, then every time
it gets loaded by a child process, the memory consumed by mod_perl itself
is _not_ shared. (it's possible the actual code [text segment] might be
shared, but any data structures etc will not be -- dynamic linking is
not my cup of tea).

2) if you had (and it sounds like I was wrong) one parent daemon handling
the requests for 100 virtual servers, then on average you'd have 100 times
as many child processes as needed for 'just' the mod_perl site. That also
means that your scripts are compiled and cached in 100 times more daemons
than needed, and conversely (same issue, different angle), you are 100
times _less_ likely to find that your script has already been compiled
when you hit a given page.

Hopefully someone on the list can provide the definitive answers; lord
knows this is one list that still has a high Guru/beginner ratio ;-) And
the debian fellow debugging the DSO trouble sounded knowledgeable :-)

> >it strikes me that you _want_ a frontend proxy to feed your requests to
> >the smallest number of backend daemons which are most likely to already
> >have compiled your scripts. This saves memory and CPU, while simplifying
> >the configuration, and of course, for a dedicated backend daemon, DSO buys
> >nothing... even if that daemon uses access handlers, it still always needs
> >mod_perl
> 
> remember we're talking an entire ISP, not just a website. I think it might
> be a mighty pain to have everyone running or sharing some backend mod_perl
> server. Logs and all that.

Huh? The same techniques used to separate the logs for the frontend
server(s) can be used to split the backend logs, if you even use any log
info other than the error_log from the backend (they would just repeat the
frontend logs).

> >That said, we bought modperl-space.com back when domains suddenly got
> >cheap, but haven't put together a mod_perl package because we really don't
> >know what folks want/are using it for.
> 
> seems like most folks with enough ?? to be using mod_perl are either
> working corporate or have their own hosts and don't have to deal with ISP's
> on the mod_perl issue.  

True.

--
[EMAIL PROTECTED]   | Drive thy business, or it will drive thee.
http://BareMetal.com/  | - Benjamin Franklin
web hosting since '95  | 




Re: mod_perl virtual web hosting

2000-04-12 Thread Tom Brown

On Wed, 12 Apr 2000, Jesse Wolfe wrote:

> I am working with www.superb.net to get their mod_perl up and working
> again. They have great infrastructure, lots of great tools, and an amazing
> price.
> They had apache/mod_perl for awhile, and upgrades broke it.  I expect they
> will have it in a week or two, if we can use all these dynamic/shared
> modules as planned. 

strikes me (as an owner of a web hosting service) that DSO is the wrong
answer. What does DSO buy you? NOTHING except a complete waste of
memory... 

I'm reading between the lines here, but it sounds like you are trying to
have _one_ parent apache daemon that services _everything_ on the machine
(likely _more_ than one website), which would imply that you are going to
have an _extremely_ low hit ratio on your mod_perl scripts.

it strikes me that you _want_ a frontend proxy to feed your requests to
the smallest number of backend daemons which are most likely to already
have compiled your scripts. This saves memory and CPU, while simplifying
the configuration, and of course, for a dedicated backend daemon, DSO buys
nothing... even if that daemon uses access handlers, it still always needs
mod_perl
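
For anyone who hasn't seen it, the basic shape of that setup is roughly the
following (ports and paths invented for the example; Stas's guide covers the
real details):

```apacheconf
# frontend httpd.conf -- small, mod_perl-free server on port 80
Port 80
ProxyPass        /perl/ http://127.0.0.1:8080/perl/
ProxyPassReverse /perl/ http://127.0.0.1:8080/perl/

# backend httpd.conf -- heavy mod_perl server, local connections only
Port 8080
BindAddress 127.0.0.1
<Location /perl/>
  SetHandler perl-script
  PerlHandler Apache::Registry
  PerlSendHeader On
  Options +ExecCGI
</Location>
```

The frontend children stay tiny and soak up the slow clients; only the
/perl/ requests ever touch the fat backend daemons.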

That said, we bought modperl-space.com back when domains suddenly got
cheap, but haven't put together a mod_perl package because we really don't
know what folks want/are using it for.

> 
> I'm moving my mod-perl project there once I get it going. Have you checked
> out the list on perl.apache.org's documentation?
> 
> Jesse
> 
> 
> At 01:26 PM 4/12/00 -0400, Gagan Prakash wrote:
> >Hello,
> >
> >I have been looking for mod_perl virtual web hosting companies who have fast
> >servers and good infrastructure but the two I have found so far have either
> >had problems with their mod_perl setups (they installed the module, did not
> >change apache configs or changed them incorrectly) or have been very slow.
> >These two are www.123hostme.com or www.olm.net.

last I heard, olm.net treated their resellers well, but their
tech support to direct clients was supposed to be pretty poor.

> >
> >I would greatly appreciate if somebody could point me in a better direction.
> >
> >Thanks
> >Gagan

--
[EMAIL PROTECTED]   | Drive thy business, or it will drive thee.
http://BareMetal.com/  | - Benjamin Franklin
web hosting since '95  | 




Re: mod_ssl in fronend-backend Apache configuration

2000-01-31 Thread Tom Brown

On Mon, 31 Jan 2000, BeerBong wrote:

> Hello all!
> 
> I need encrypted access to some directories on some virtual hosts.
> 
> I have lightweight proxy apache server and backend mod_perl server.
> 
> mod_ssl is not a light thing, and I need encrypt mod_perl'd script results
> only, therefore I think that mod_ssl should be in back-end server. Am I
> right ? Does mod_proxy pass ssl encrypted data?


the only "mod_proxy" that I've seen that will be an SSL client is the one
in stronghold... if there is a public/open source one around somewhere,
I'd love to know...

> 
> And if I'm right and such configuration is possible and correct, are there
> any examples of configuring 2 apache servers
> frontend - mod_proxy
> backend - mod_perl + mod_ssl
> on the net ?
> Any advices are welcome.
> 
> Thanx in advance.
> --
> Sergey Polyakov (BeerBong)
> Chief of Web Lab (http://www.mustdie.ru/~beerbong)
> 
> 
> 
> 

--
[EMAIL PROTECTED]   | Drive thy business, or it will drive thee.
http://BareMetal.com/  | - Benjamin Franklin
web hosting since '95  | 



Re: Problems with RedHat

1999-11-25 Thread Tom Brown

On Thu, 25 Nov 1999, Robert Locke wrote:

> 
> I actually commented out the "exit" line below that and let it
> continue as if there were no error.
> 
> Then, when I ran "make", I discovered the actual error, which in my
> case involved not having gdbm properly installed.

here's the fix

[root@qmail /usr/lib]# ln -s libgdbm.so.2.0.0 libgdbm.so

so that:

[root@qmail /usr/lib]# ls -l libgdb*
lrwxrwxrwx   1 root  root  16 Nov 25 10:35 libgdbm.so -> libgdbm.so.2.0.0
lrwxrwxrwx   1 root  root  16 Nov 18  1997 libgdbm.so.2 -> libgdbm.so.2.0.0
-rw-r--r--   1 root  root   26041 Oct 15  1997 libgdbm.so.2.0.0 


Note that this isn't really (at least by redhat's definition) an
installation problem  (rpm -V was clean)

hhmm, maybe it is, perl -V shows -lgdbm, but the compiler doesn't find
libgdbm.so.2 when compiling/linking with -lgdbm, so something's a little
out of whack. :-(

> 
> Good luck,
> 
> Rob
> 
> 
> 
>  >if ./helpers/TestCompile sanity; then
>  > 
>  > change it to:
>  > 
>  >if ./helpers/TestCompile -v sanity; then
>  > 
>  > and try again.  Now you should get a useful error message.
>  > 
>  > -Rasmus
>  > 

--
[EMAIL PROTECTED]   | Don't go around saying the world owes you a living;
http://BareMetal.com/  | the world owes you nothing; it was here first.
web hosting since '95  | - Mark Twain




Re: extra '0' when printing to files other than STDOUT?

1999-10-05 Thread Tom Brown

On 5 Oct 1999, Chip Turner wrote:

> Tom Brown <[EMAIL PROTECTED]> writes:
> 
> > great ideas... I'm reading that man page now... It does match the symptom,
> > but it's gotta be corruption or an external module that we didn't write,
> > since I'd never heard of that variable... And this code is fairly
> > simple...
> 
> Did you try adding the "local $\ = undef" before the offending lines
> to eliminate the possibility?  The symptoms are very much like what a
> bad $\ would produce.  Are you running any Apache::Registry scripts or
> whatnot?  You never know what might be running; it's worth a shot just
> to eliminate the possibility.

That does seem to be the problem... Thanks Chip!!

I really haven't the faintest clue where that is getting set... This is an
internal-use-only machine and only runs about 30 different
scripts, _all_ of which were written here (mostly by me)... it
pretty much has to be in an included module :-(
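
For anyone chasing the same symptom: the effect is easy to reproduce in
plain perl (the rogue value, addresses, and in-memory filehandles are
invented for the demo):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# $\ is the output record separator: perl appends it to every print.
# A stray assignment anywhere in the interpreter produces exactly the
# mid-stream garbage described in this thread.
$\ = "0";   # simulate the rogue module's setting

my $dirty = '';
open(my $dh, '>', \$dirty) or die "open: $!";
print $dh "MAIL FROM: a\@example.com\n";
print $dh "RCPT TO: b\@example.com\n";
close($dh);
# $dirty now has a '0' jammed after each print

my $clean = '';
open(my $ch, '>', \$clean) or die "open: $!";
{
    local $\ = undef;   # the suggested guard: shields just this block
    print $ch "MAIL FROM: a\@example.com\n";
    print $ch "RCPT TO: b\@example.com\n";
}
close($ch);
# $clean comes out without the stray zeros
```

Wrapping the suspect prints in `local $\ = undef` narrows down whether $\
is the culprit without having to find who set it first.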

-Tom





extra '0' when printing to files other than STDOUT?

1999-10-04 Thread Tom Brown


Anyone seen anything like this?

running modperl 1.21 on linux with a workaround for the corrupted PATH
environment variable issue... perl 5.004_05 ...

This code:

  open( MAIL, "| $MAILER" ) || die "can't open $MAILER";
  print MAIL <<"EOM";
HELO localhost
ONEX
MAIL FROM: $from
EOM

  foreach (split(/,/,$to)) {
 print MAIL <<"EOM";
RCPT TO: $_
EOM
  }
  print MAIL <<"EOM";
DATA
To: $to
From: accounts\@baremetal.com
Subject: $subject

$body
.
QUIT
EOM
  close MAIL || die "couldn't close mailer";

is producing the following output when traced:

write(5, "HELO localhost\nONEX\nMAIL FROM: [EMAIL PROTECTED]\n0RCPT
TO: [EMAIL PROTECTED]\n0RCPT TO: [EMAIL PROTECTED]\n0DATA\nTo:
[EMAIL PROTECTED],[EMAIL PROTECTED]\nFrom:
[EMAIL PROTECTED]\nSubject: Warning, tbrown paid an invoice (7851)
and has held virtuals\n\nWarning, tbrown paid an invoice (7851)\nand has
the following held or scheduled held  virtuals:\n\nathleteschoice.net -
1999-08-02\n\n.\nQUIT\n0", 411) = 411
close(5)= 0


There's an extra 0 before the two RCPT TO lines, the DATA line, and after
the QUIT line.

Needless to say, the messages aren't getting out when the relevant SMTP
commands are getting corrupted... The "sendmail -bs" technique is useful
when the return envelope address needs to be specified and can't be
trusted on the command line. (and isn't really the issue here anyway.)

If I kill -HUP the webserver the issue will go away for a while.

Anyone have any ideas? I rewrote a major section of the invoice generation
code to buffer up the output and use one print statement... in that
section it reduced the number of extraneous zeros to one, but that's still
ugly...

The code is "use strict" and perl -w friendly. Everything is my()'d and
properly initialized ... That said, those points seem irrelevant; this
looks like something nasty inside the perl or mod_perl guts

The app was written under mod_perl, and I can post or mail the whole thing
if need be. Unfortunately, it takes a while to get the problem to appear,
although it's constant once it does.

The application does "use CGI;" and proceeds to send all its normal
output to STDOUT, as compared to using $cgi->print or anything funky...

The box is "internal use only" so I can take it apart and rebuild Perl or
whatever, but I'd really rather not have to, since we do use mod_perl on
production servers and the advantage has always been that it is so easy to
set up... (although perhaps not so easy to code for :-)

-Tom