required files not in a namespace?

2003-03-18 Thread Justin Luster

Are "required" files in a namespace under Apache::Registry
in Mod_Perl?  I have just done a simple test that seems to show that they
are not in a namespace.  In the documentation (http://perl.apache.org/docs/1.0/guide/intro.html#Apache__Registry)
it says that the initial script is stored under a unique key to "prevent
script name collisions".  This seems to work for the main script
that you call, but not for the "required" files that the script uses.

 

Here is my example:

 

Test1 Directory

    test1.pl

    libone.pl

 

Test2 Directory

    test1.pl

    libone.pl

 

test1.pl contains:

    print "Content-type: text/html\n\n";
    print "<HTML>";
    print "<HEAD>";
    print "<TITLE>Debug Test #1</TITLE>";
    print "</HEAD>";
    print "<BODY>";

    require "libone.pl";

    libone::A();

    print "Debug Test #1 Successful";
    print "</BODY></HTML>\n\n";

 

libone.pl contains:

    #!/usr/bin/perl

    package libone;

    sub A
    {
        print "Hello from A";
    }

    return 1;

 

In the 2nd copy of libone.pl, contained under the Test2
directory, I changed the output slightly to say "Hello from A - 2nd
version".

I then restart Apache and try test1.pl from the Test1 directory and I
see "Hello from A".  I then try the test1.pl from the Test2
directory and I still get "Hello from A".  If I restart Apache
and try test1.pl from the Test2 directory first I see "Hello from A - 2nd
version".  If I then try test1.pl from the Test1 directory I still
get "Hello from A - 2nd version".

 

So it appears as though Apache::Registry is caching the first instance
of libone.pl and is not namespacing it as it does with the initial script
test1.pl.

 

If this is the case, what can I do to fix it?  I view it as a big
problem: if my script is deployed, I do not know whether it is going to
use my libone.pl or a libone.pl written by someone else that has already
been stored in the cache.
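The usual fix, echoed elsewhere in this archive, is to turn each helper
into a real module with its own unique package name, so the two copies
can never collide; a minimal sketch, with hypothetical file layout and
names:

    # File: /path/to/Test1/LibOne.pm -- one unique package per application
    package Test1::LibOne;
    use strict;

    sub A
    {
        print "Hello from A";
    }

    1;

    # File: /path/to/Test1/test1.pl
    use lib '/path/to';        # hypothetical parent dir of Test1/
    use Test1::LibOne;
    Test1::LibOne::A();

The Test2 copy would declare package Test2::LibOne, so neither cached
copy can shadow the other.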

 

I hope you understand my problem.  Please help.

 

Thanks.

 

My installation information is below.

 

MOD_PERL => mod_perl/1.26

SERVER_SOFTWARE => Apache/1.3.27 (Unix) (Red-Hat/Linux) mod_python/2.7.6
Python/1.5.2 mod_ssl/2.8.12 OpenSSL/0.9.6b DAV/1.0.2 PHP/4.1.2 mod_perl/1.26
mod_throttle/3.1.2

PERL_SEND_HEADER => On


RE: make errors with mod_perl-1.99_08 on aix 4.3.3 & 5.1

2003-03-04 Thread Justin Derrick
I've been following this thread for a few days now, and I just 
thought that I'd mention that this compile problem appears to be the 
same on AIX 5.1 ML02 with C for AIX 6.0.  I may be able to offer 
access to this system for one individual to assist in the process of 
debugging this, since it's been mentioned that access to an AIX box 
is a problem.

-JD.

At 5:02 PM -0500 3/3/03, Priest, Darryl - BALTO wrote:
#> I got the new CVS version, applied the patch, and got a bit further
#
#good, I've committed that patch.
#
#>, but
#> it's still dying with:
#>
#> cd "src/modules/perl" && make -f Makefile.modperl
#> rm -f mod_perl.so
#> ld -bhalt:4 -bM:SRE
#> -bI:/usr/local/perl5.8.0/lib/5.8.0/aix/CORE/perl.exp -bE:mod_perl.exp
#> -bnoentry -lc -L/usr/local/libmod_perl.lo modperl_interp.lo
#> modperl_tipool.lo modperl_log.lo modperl_config.lo modperl_cmd.lo
#> modperl_options.lo modperl_callback.lo modperl_handler.lo modperl_gtop.lo
#> modperl_util.lo modperl_io.lo modperl_filter.lo modperl_bucket.lo
#> modperl_mgv.lo modperl_pcw.lo modperl_global.lo modperl_env.lo
#> modperl_cgi.lo modperl_perl.lo modperl_perl_global.lo modperl_perl_pp.lo
#> modperl_sys.lo modperl_module.lo modperl_svptr_table.lo modperl_const.lo
#> modperl_constants.lo modperl_hooks.lo modperl_directives.lo modperl_flags.lo
#> modperl_xsinit.lo  -bE:/usr/local/perl5.8.0/lib/5.8.0/aix/CORE/perl.exp
#> -brtl -L/usr/local/lib -b32
#> /usr/local/perl5.8.0/lib/5.8.0/aix/auto/DynaLoader/DynaLoader.a
#> -L/usr/local/perl5.8.0/lib/5.8.0/aix/CORE -lperl -lbind -lnsl -ldl -lld -lm
#> -lc -lcrypt -lbsd -lPW  -o mod_perl.so
#> ld: 0706-004 Cannot find or read export file: mod_perl.exp
#> ld:accessx(): A file or directory in the path name does not exist.
#> make: 1254-004 The error code from the last command is 255.
#
#> To get that far, in the src/modules/perl/Makefile.modperl I added
#> definitions for BASEEXT and PERL_INC, as copied from modperl-2.0/Makefile,
#> as shown below, since they were missing.
#
#why would you need them? I mean what was the error that you had to add them?
Without PERL_INC I got this error:

ld -bhalt:4 -bM:SRE -bI:/perl.exp -bE:.exp -bnoentry -lc
-L/usr/local/libmod_perl.lo modperl_interp.lo modperl_tipool.lo
modperl_log.lo modperl_config.lo modperl_cmd.lo modperl_options.lo
modperl_callback.lo modperl_handler.lo modperl_gtop.lo modperl_util.lo
modperl_io.lo modperl_filter.lo modperl_bucket.lo modperl_mgv.lo
modperl_pcw.lo modperl_global.lo modperl_env.lo modperl_cgi.lo
modperl_perl.lo modperl_perl_global.lo modperl_perl_pp.lo modperl_sys.lo
modperl_module.lo modperl_svptr_table.lo modperl_const.lo
modperl_constants.lo modperl_hooks.lo modperl_directives.lo modperl_flags.lo
modperl_xsinit.lo  -bE:/usr/local/perl5.8.0/lib/5.8.0/aix/CORE/perl.exp
-brtl -L/usr/local/lib -b32
/usr/local/perl5.8.0/lib/5.8.0/aix/auto/DynaLoader/DynaLoader.a
-L/usr/local/perl5.8.0/lib/5.8.0/aix/CORE -lperl -lbind -lnsl -ldl -lld -lm
-lc -lcrypt -lbsd -lPW  -o mod_perl.so
ld: 0706-003 Cannot find or read import file: /perl.exp
ld:accessx(): A file or directory in the path name does not exist.
ld: 0706-004 Cannot find or read export file: .exp
ld:accessx(): A file or directory in the path name does not exist.
make: 1254-004 The error code from the last command is 255.
So I looked at the -bI:/perl.exp flag and at Makefile.modperl, and saw
that it referenced PERL_INC, which didn't appear to be defined anywhere.
After setting PERL_INC = /usr/local/perl5.8.0/lib/5.8.0/aix/CORE
I got this error next:
ld -bhalt:4 -bM:SRE
-bI:/usr/local/perl5.8.0/lib/5.8.0/aix/CORE/perl.exp -bE:.exp -bnoentry -lc
-L/usr/local/libmod_perl.lo modperl_interp.lo modperl_tipool.lo
modperl_log.lo modperl_config.lo modperl_cmd.lo modperl_options.lo
modperl_callback.lo modperl_handler.lo modperl_gtop.lo modperl_util.lo
modperl_io.lo modperl_filter.lo modperl_bucket.lo modperl_mgv.lo
modperl_pcw.lo modperl_global.lo modperl_env.lo modperl_cgi.lo
modperl_perl.lo modperl_perl_global.lo modperl_perl_pp.lo modperl_sys.lo
modperl_module.lo modperl_svptr_table.lo modperl_const.lo
modperl_constants.lo modperl_hooks.lo modperl_directives.lo modperl_flags.lo
modperl_xsinit.lo  -bE:/usr/local/perl5.8.0/lib/5.8.0/aix/CORE/perl.exp
-brtl -L/usr/local/lib -b32
/usr/local/perl5.8.0/lib/5.8.0/aix/auto/DynaLoader/DynaLoader.a
-L/usr/local/perl5.8.0/lib/5.8.0/aix/CORE -lperl -lbind -lnsl -ldl -lld -lm
-lc -lcrypt -lbsd -lPW  -o mod_perl.so
ld: 0706-004 Cannot find or read export file: .exp
ld:accessx(): A file or directory in the path name does not exist.
make: 1254-004 The error code from the last command is 255.
Noticed -bE:.exp and looked in the makefile for where that was coming from,
which led me to set BASEEXT = mod_perl, since it was referenced in
Makefile.modperl but not defined.  That got a bit further, but still
without success.
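For clarity, the two assignments described above, as they would sit in
src/modules/perl/Makefile.modperl (values taken from this message):

    BASEEXT  = mod_perl
    PERL_INC = /usr/local/perl5.8.0/lib/5.8.0/aix/CORE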
#
#> BASEEXT = mod_perl
#
#what if you replace it 

RE: "do" as temp solution for "require" problem ?

2003-01-28 Thread Justin Luster
When a Perl script runs under Mod_Perl the current working directory is
no longer the location of the Perl script (I think it is where
Apache.exe is).  So when you require an additional file it does not look
in the same directory as your original script for the file.  One
alternative that has been mentioned is to place your included file in
one of the locations of the @INC array.  Another option that I have used
is to add the path of the original Perl file to the @INC array so that
included files will be looked for there too.
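A minimal sketch of that second option, with a hypothetical path:

    # add the script's directory to @INC at compile time, since the
    # current working directory under mod_perl is Apache's, not the script's
    use lib '/home/site/cgi-bin';

    require "helper.pl";   # now found via @INC even though cwd is elsewhere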


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] 
Sent: Tuesday, January 28, 2003 11:51 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: "do" as temp solution for "require" problem ?

Hi,
Yes, I am using Apache::Registry; how did you know that? ;-)
In fact I am trying to change the CGI-Perl pages of 
http://www.deweertsport.be to mod_perl.
As I was used to working with include files in PHP, I sort of continued this
way of making pages in Perl-CGI.
If you look at the previously mentioned site, you can see there is only 
one file, but it contains a lot of includes.
- a random function for the banners on top
- a file for the navigation on the left which includes a file for the 
date and a file for the counter (mysql database)
- the content pages with different files for the forms redirected per 
OS and type of Browser.
The reason why I work that way is to have a sort of frame in which the 
content is included, directed via the variables of the URL.
That gives me a good overview on how the site is built and it makes it 
easy to maintain.
Now, with mod_perl this is a whole different story. Probably I need to
review my strategy, as things get more complicated regarding using
"use", or "require" ... or "do".
Would using Apache::PerlRun be a better option to deal with this way of 
building a website?
Thanks for your advice!
Bart

On Tuesday, January 28, 2003, at 05:21 PM, Randal L. Schwartz wrote:

>> "mail@adventureforum" == mail@adventureforum net 
>> <[EMAIL PROTECTED]> writes:
>
> mail@adventureforum> I am using: mod_perl/1.26
>
> mail@adventureforum> Now I tried to include subroutines from an 
> external .pl file with
> mail@adventureforum> "require".
>
> This smells a bit like you're using Apache::Registry (you haven't said
> yet) and you've moved some subroutines into a separate file, but not a
> separate package, and you either aren't aware or don't understand the
> significance of the fact that every Apache::Registry script runs in a
> different package.
>
> Could that be the case?
>
> If you're using Apache::Registry, and you're not properly using
> packages, you'll get burned.  Turn your external code into a real
> module, and things will work again.  Use "use", not "require", not
> "do".
>
> print "Just another (mod) Perl hacker,"
>
> -- 
> Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777

> 0095
> <[EMAIL PROTECTED]> http://www.stonehenge.com/merlyn/>
> Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
> See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl 
> training!
>




RE: Resetting cache Apache::Registry

2002-12-03 Thread Justin Luster
Thank you.  That is a good suggestion.

-Original Message-
From: Geoffrey Young [mailto:[EMAIL PROTECTED]] 
Sent: Tuesday, December 03, 2002 9:56 AM
To: Justin Luster
Cc: [EMAIL PROTECTED]
Subject: Re: Resetting cache Apache::Registry


> The situation is that I'm using a shared server from a 3rd party hosting
> provider and I do not have control over what they have in their Apache
> configuration file.  Every time I make a change to a helper file I need
> them to restart Apache.

on the Cobalt and Ensim systems I have websites at, I was able to just 
create a .htaccess file with

PerlInitHandler Apache::StatINC

in it and let that take care of everything.

HTH

--Geoff


Resetting cache Apache::Registry

2002-12-03 Thread Justin Luster

I know that when you "require" or "use" helper files in a Perl script
under Apache::Registry, changes made to the helper files are not
recognized until you restart Apache.  In the documentation it says that
you can change the Apache configuration file to do this for you.  What I
want to know is whether there is a way to clear out the files or code in
the Apache::Registry cache via a Perl script.  I would like to write a
simple Perl script (clear.pl) with some code like $r->ResetCache that
would clear out the cache and allow my other Perl script to see the new
changes in its helper files.
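As far as I know there is no $r->ResetCache; the closest do-it-yourself
sketch of a clear.pl (helper names hypothetical) just deletes the %INC
entries so the next require recompiles them, with the big caveat that it
only affects the one Apache child that happens to serve the request:

    # clear.pl -- forget the helpers so the next "require" recompiles them
    print "Content-type: text/plain\n\n";
    for my $file ('libone.pl', 'helper.pl') {   # hypothetical helper names
        delete $INC{$file};
    }
    print "helper cache cleared for this child\n";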

 

The situation is that I’m using a shared server from a
3rd party hosting provider and I do not have control over what they
have in their Apache configuration file.  Every time I make a change to a
helper file I need them to restart Apache.

 

Are there any ideas out there in the world of Mod_Perl
gurus?

 

Thank you.


Re: state of the art throttling?

2002-11-21 Thread Justin
Well for the purposes of documentation, I'll follow up
to myself.
I was pointed at a netfilter module (rule) available
as a patch, called iplimit, which limits simultaneous
open tcp connections to N from either a single IP or from
a netblock.. this helps a lot..
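For the record, the rule looks something like this (syntax as I recall it
from the iplimit patch; verify against your patch-o-matic version):

    # drop new connections when one IP already has 8 open to port 80
    iptables -A INPUT -p tcp --syn --dport 80 \
        -m iplimit --iplimit-above 8 -j DROP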
-Justin

On Thu, Nov 21, 2002 at 05:45:36PM -0500, Justin wrote:
> What is the state of the art now in apache or modperl
> related modules that will throttle based on a combination
> of the following metrics:
> 
>   * recent bandwidth per IP
>   * recent request count per IP
>   * max number of parallel requests per IP
> 
> I'm using a tweaked version of the Stonehenge utility
> and it works ok but a bad robot (and there are SO many
> now) can fill all request slots before a long enough
> measurement period has elapsed to start denying it
> service..  plus the process of denial is not insignificant
> because the recent request record has to be opened and
> summed for each new request.. ideally the IP or IP+ua
> combination should be just bounced out for a defined
> period of time to cool off.
> 
> Also this mystical throttle module I'm hoping exists
> would sit at the front end, along with mod_rewrite,
> rather than be installed on multiple back end modperl
> servers..
> 
> Something that crawled the apache status tree to deny
> requests when more than N servers are already engaged
> in serving the same IP, would be ideal.. Since I
> offload image serving, I think this would not hurt
> any legit users.
> 
> thanks!
> -Justin




state of the art throttling?

2002-11-21 Thread Justin
What is the state of the art now in apache or modperl
related modules that will throttle based on a combination
of the following metrics:

  * recent bandwidth per IP
  * recent request count per IP
  * max number of parallel requests per IP

I'm using a tweaked version of the Stonehenge utility
and it works ok but a bad robot (and there are SO many
now) can fill all request slots before a long enough
measurement period has elapsed to start denying it
service..  plus the process of denial is not insignificant
because the recent request record has to be opened and
summed for each new request.. ideally the IP or IP+ua
combination should be just bounced out for a defined
period of time to cool off.

Also this mystical throttle module I'm hoping exists
would sit at the front end, along with mod_rewrite,
rather than be installed on multiple back end modperl
servers..

Something that crawled the apache status tree to deny
requests when more than N servers are already engaged
in serving the same IP, would be ideal.. Since I
offload image serving, I think this would not hurt
any legit users.

thanks!
-Justin



RE: Using Perl END{} with Apache::Registry

2002-11-13 Thread Justin Luster
After doing some additional testing it appears that this problem only
occurs on my Windows machine with Apache 2.0.  I tried it on my Linux
box Apache 1.3 and things worked fine.  Since I am not using Windows in
a production environment I will be OK.

Thanks anyway.



-Original Message-
From: Jim Schueler [mailto:jschueler@;tqis.com] 
Sent: Tuesday, November 12, 2002 9:02 PM
To: Justin Luster
Subject: RE: Using Perl END{} with Apache::Registry


Pity that the module doesn't help.

I spent many hours testing END {} block behavior in Apache::Registry and
relied heavily on logged error messages.  I cannot confirm your hypothesis
that END {} blocks affect error reporting.

When testing code, reliable failures are important.  If it won't fail
predictably, it won't run predictably.  I recommend you double check
your assumption that "It seems to be working fine."

Apache::Registry is tricky because there's so much uncertainty about
the state of a process.  For example, it's impossible to determine which
sequence various scripts will run in.  One of the reasons I recommend my
Apache::ChildExit module is because otherwise, all of a process's
encountered END {} blocks are run at the end of a request, including END {}
blocks from other scripts and all modules that have been imported over
the lifetime of the process.  Apache::ChildExit eliminates the possibility
that unexpected or unknown END {} blocks will impact the process state
because it ensures that END {} blocks are only run when the process
terminates.
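For illustration, a tiny Registry script that makes this visible in the
error log (a sketch; under Apache::Registry the END block runs at the end
of every request, not at child exit):

    print "Content-type: text/plain\n\n";
    print "request served by child $$\n";

    END { warn "END block ran in child $$\n"; }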

 -Jim


On Tue, 12 Nov 2002, Justin Luster wrote:

> Thanks for the reply.  Unfortunately I need the END block to run for
> every request.  I just was wondering why it altered the way error
> messages were logged.
> 
> Thanks.
> 
> -Original Message-
> From: Jim Schueler [mailto:jschueler@;tqis.com] 
> Sent: Tuesday, November 12, 2002 2:41 PM
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]
> Subject: RE: Using Perl END{} with Apache::Registry
> 
> Hello Justin.
> 
> I've done a little work on a similar problem due to Apache::Registry's
> unusual treatment of END {} blocks.  You may want to take a look at
> the module I recently submitted:
> 
> http://www.cpan.org/authors/id/T/TQ/TQISJIM/ChildExit_0-1.tar.gz
> 
>  -Jim
> 
> > Hi, I'm trying to use the END{ } block in my Perl Scripts to do some
> > code clean up (making sure files are not locked) at the end of each
> > request.  It seems to be working fine.  I'm using Apache::Registry to
> > run a regular Perl script.  I'm having a problem with error messages.
> > 
> > I have an included file that I'm requiring:
> > 
> > require "test.pl";
> > 
> > Without the END { } block if the script cannot find test.pl I get a
> > Server error 500 and an appropriate error message in the log file.  When
> > I include the END{ } block I get no Server Error and no message in the
> > log file.  It is almost as if the END{ } is overwriting the
> > ModPerlRegistry error system.
> > 
> > Any ideas?
> > 
> > Thanks.
> 
> 
> 


RE: Using Perl END{} with Apache::Registry

2002-11-12 Thread Justin Luster
No.  If there is an END block, empty or not, then the error logging does
not happen.

By the way do you know of any way to capture what would have been logged
and print it through Apache->something?

Thanks.

-Original Message-
From: Perrin Harkins [mailto:perrin@;elem.com] 
Sent: Tuesday, November 12, 2002 3:34 PM
To: Justin Luster
Cc: [EMAIL PROTECTED]
Subject: Re: Using Perl END{} with Apache::Registry

Justin Luster wrote:

> I have an included file that I'm requiring:
>
> require "test.pl";
>
> Without the END { } block if the script cannot find test.pl I get a
> Server error 500 and an appropriate error message in the log file.  When
> I include the END{ } block I get no Server Error and no message in the
> log file.  It is almost as if the END{ } is overwriting the
> ModPerlRegistry error system.


Does it make any difference if you change what's in the END block?

- Perrin


RE: Using Perl END{} with Apache::Registry

2002-11-12 Thread Justin Luster
Thanks for the reply.  Unfortunately I need the END block to run for
every request.  I just was wondering why it altered the way error
messages were logged.

Thanks.

-Original Message-
From: Jim Schueler [mailto:jschueler@;tqis.com] 
Sent: Tuesday, November 12, 2002 2:41 PM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: RE: Using Perl END{} with Apache::Registry

Hello Justin.

I've done a little work on a similar problem due to Apache::Registry's
unusual treatment of END {} blocks.  You may want to take a look at
the module I recently submitted:

http://www.cpan.org/authors/id/T/TQ/TQISJIM/ChildExit_0-1.tar.gz

 -Jim

> Hi, I'm trying to use the END{ } block in my Perl Scripts to do some
> code clean up (making sure files are not locked) at the end of each
> request.  It seems to be working fine.  I'm using Apache::Registry to
> run a regular Perl script.  I'm having a problem with error messages.

> 
>  
> 
> I have an included file that I'm requiring:
> 
>  
> 
> require "test.pl";
> 
>  
> 
> Without the END { } block if the script cannot find test.pl I get a
> Server error 500 and an appropriate error message in the log file.  When
> I include the END{ } block I get no Server Error and no message in the
> log file.  It is almost as if the END{ } is overwriting the
> ModPerlRegistry error system.
> 
>  
> 
> Any ideas?
> 
>  
> 
> Thanks.


Using Perl END{} with Apache::Registry

2002-11-12 Thread Justin Luster

Hi, I’m trying to use the END{ } block in my Perl
Scripts to do some code clean up (making sure files are not locked) at the end
of each request.  It seems to be working fine.  I’m using Apache::Registry
to run a regular Perl script.  I’m having a problem with error
messages.  

 

I have an included file that I’m requiring:

 

require “test.pl”;

 

Without the END { } block if the script cannot find test.pl I
get a Server error 500 and an appropriate error message in the log file. 
When I include the END{ } block I get no Server Error and no message in the log
file.  It is almost as if the END{ } is overwriting the ModPerlRegistry
error system.  

 

Any ideas?

 

Thanks.


Graphics and mod_perl

2002-10-02 Thread Justin Luster


I'm new to mod_perl and I'm really enjoying it.  It
has really improved performance.  Right now I'm just using ModPerl::Registry
to speed up things.  I have a question about showing graphics using a Perl
script running through mod_perl.

 

Using Perl under regular CGI to create a dynamic web page I have always
used:

 

print "<IMG SRC='thefile.jpg'>";

 

and this would display the graphic to the web page assuming that the
file “thefile.jpg”  was in the same directory as the Perl
script .  If the graphic was in another directory then something like:

 

print "<IMG SRC='/images/thefile.jpg'>";

 

was used.  Now that I’m using mod_perl and an Alias to my cgi-bin
I’m having difficulties knowing how to reference the graphics file. 
The following directive is in my httpd.conf file.

 

Alias /ssiweb/ "C:/Server/htdocs/develop/cgi-bin/"

<Location /ssiweb/>
  SetHandler perl-script
  PerlResponseHandler ModPerl::Registry
  Options +ExecCGI
  PerlOptions +ParseHeaders
</Location>

 

It seems that the current working directory for the Perl scripts when
run under mod_perl is in the bin directory where Apache.exe is.  I have considered
using absolute paths but even that does not seem to work correctly with
graphics.  I could also do something like:

 

print "<IMG SRC='http://myserver/thefile.jpg'>";

 

but it seems that there is a delay in displaying the graphic when I do
this.

 

Where is the current working directory when running a Perl script under
mod_perl?
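For what it's worth, the browser resolves an IMG SRC against the page's
URL, not against the script's filesystem directory, so one approach is to
serve the images statically under their own URL prefix and print
root-relative links; a sketch with a hypothetical Alias:

    # httpd.conf would carry something like (hypothetical):
    #   Alias /images/ "C:/Server/htdocs/develop/images/"
    # and the script prints a URL, not a file path:
    print qq{<IMG SRC="/images/thefile.jpg">};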

 

I would appreciate any help.

 

Thanks.

 
Re: Filehandles

2002-08-29 Thread Justin Luster

The stress tool that I'm using is from Microsoft and is a free download.  It
is called Web Application Stress.  There is a setting in this tool called
Concurrent Connections (threads).  As I mentioned before I am able to do
this no problem under regular CGI with 10 concurrent users but when I run
mod_perl all heck breaks loose.

- Original Message -
From: "Perrin Harkins" <[EMAIL PROTECTED]>
To: "Justin Luster" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Thursday, August 29, 2002 1:44 PM
Subject: Re: Filehandles


> Justin Luster wrote:
> > The load tester that I'm using works fine with my script outside of
> > mod_perl.
>
> Does it work when running them concurrently under CGI?
>
> > When multiple concurrent users begin hitting the script under mod_perl,
> > using Apache::Registry or Apache::PerlRun, all heck breaks loose.
>
> Hmmm, which version of mod_perl are you using?  If you're using 2.x, I
> can't help you there since I've never tried it on Win32.  Maybe Randy
> will have some ideas.  If you have 1.x, there is no concurrency under
> mod_perl -- requests are serialized.
>
> - Perrin
>
>
>

Re: Filehandles

2002-08-29 Thread Justin Luster

Thanks for responding so quickly.

flock does work under Windows 2000 and above.

The load tester that I'm using works fine with my script outside of
mod_perl.  My script works inside of mod_perl with only one concurrent user.
When multiple concurrent users begin hitting the script under mod_perl,
using Apache::Registry or Apache::PerlRun, all heck breaks loose.  It seems
that the file locking breaks down.  Another thing that is totally bizarre is
that stuff is being written to all kinds of files that I'm not writing to.
Even my scripts themselves will sometimes have text appended to the end of
them.  Another problem is that Apache will give me an error message saying
that it is trying to write to memory that is locked.

Do you have any idea where I can go from here?

What is the address of the Win32 mod_perl mailing list?

Thanks.

- Original Message -
From: "Perrin Harkins" <[EMAIL PROTECTED]>
To: "Justin Luster" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Thursday, August 29, 2002 12:11 PM
Subject: Re: Filehandles


> Justin Luster wrote:
> > Does anyone know anything about flock and mod_perl?
>
> Yes.  There is no problem with flock and mod_perl.  However, if you were
> to open a filehandle in startup.pl and then use that same filehandle
> after forking, that could be a problem.
>
> Personally I would suspect Windows in this case.  I don't know about XP,
> but Windows 95/98/ME did not have a working flock.  If XP is based on
> the NT code, it may not have that problem.  Even so, I would try testing
> that first, or maybe asking about it on the Win32 Perl mailing list.
>
> - Perrin
>
>
>
>

Filehandles

2002-08-29 Thread Justin Luster



I'm using mod_perl 2.0 and Apache::Registry on a 
Windows XP machine.  I'm using a load tester to test my Perl CGI 
program.  I'm reading and writing to files and I'm using flock to control 
collisions.  I keep getting an error when the load tester is going (5 
concurrent users).  It seems that the file handles are getting messed 
up.  The script will write to files that I never write to but only 
read.  
 
Does anyone know anything about flock and 
mod_perl?  Have you ever seen file handles get messed up where things are 
being written to the wrong file?
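For comparison, the usual safe pattern is to open and flock a fresh
lexical filehandle inside each request, and never to reuse a handle
opened at server startup; a sketch with a hypothetical path:

    use Fcntl qw(:flock);

    open my $fh, '>>', 'C:/data/records.txt' or die "open: $!";
    flock($fh, LOCK_EX) or die "flock: $!";
    print $fh "one record\n";
    close $fh;    # closing the handle also releases the lock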
 
Thanks.


HTTP_HOST clarification

2001-09-09 Thread Justin Rains

   I forgot to mention I am trying to access the http_host variable through an 
authentication script..

Thanks!
Justin



==
Justin C. Rains, President WSI.com Consulting

_
Get your own FREE branded portal! Visit www.wsicnslt.com to learn more



Getting to the $ENV{HTTP_HOST} variable

2001-09-08 Thread Justin Rains

   Hi all. I am relatively new to mod_perl. I have a script that works using the 
following:

my $dm = $s->server_hostname;

But in my normal perl scripts I am now using the HTTP_HOST value.. I tried this:

my $env = $r->subprocess_env;
%ENV = %$env;
my $dm = $ENV{'HTTP_HOST'};

without any luck. Is it possible to get to the HTTP_HOST value in mod_perl? If I need 
to send my complete code just e-mail me and let me know.. Thanks!
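One approach that should work without copying the whole environment,
assuming $r is the mod_perl 1 request object, is to read the Host header
directly:

    my $dm = $r->header_in('Host');   # same value CGI exposes as HTTP_HOST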

Justin

==
Justin C. Rains, President WSI.com Consulting

_
Get your own FREE branded portal! Visit www.wsicnslt.com to learn more



Errors when trying to use AuthAny.pm

2001-07-11 Thread Justin Rains

Hi all. I am relatively new to mod_perl so try to bear with me. I am trying to use 
the AuthAny.pm module to provide some basic authentication. First off.. Do I put it in 
the same directory as Registry.pm? That is where I have it now. In my httpd.conf file 
I put the following in:

<Location /tools>
AuthName Test
AuthType Basic
PerlAuthenHandler AuthAny
require valid-user
</Location>

I am running on a cobalt raq 3. Here is what I have in AuthAny.pm:

package Apache::AuthAny;
# file: Apache/AuthAny.pm


use strict;
use Apache::Constants qw(:common);


sub handler {
my $r = shift;

my($res, $sent_pw) = $r->get_basic_auth_pw;
return $res if $res != OK;


my $user = $r->connection->user;
unless($user and $sent_pw) {
$r->note_basic_auth_failure;
$r->log_reason("Both a username and password must be provided", $r->filename);
return AUTH_REQUIRED;
}


return OK;
}


1;
__END__

The error log message is:
[Wed Jul 11 09:04:59 2001] [error] (2)No such file or directory: access to /tools/ 
failed for nr2-216-196-142-76.fuse.net, reason: User not known to the underlying 
authentication module

Am I missing something here? I am using the standard apache that came with the raq.
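A guess at the cause: the file declares package Apache::AuthAny (and, per
its own comment, expects to live at Apache/AuthAny.pm, i.e. in the Apache/
subdirectory next to Registry.pm), but the config names the handler as
just AuthAny. With the short name the Perl handler is never found, and
Apache's default authentication module answers instead, which matches the
logged error. The directive would need the full package name:

    PerlAuthenHandler Apache::AuthAny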

Thanks for any help!
Justin

==
Justin Rains
WSI.com Consulting
http://www.wsicnslt.com/

_
Buy gift certificates online @ http://gc.portalplanet.com/



Re: IP based instant throttle?

2001-06-08 Thread Justin

I'm glad I haven't got your user.. I think most any site on the
net can be brought to its knees by, for example, stuffing its
site search form with random but very common words and pressing
the post button and issuing these requests as frequently as
possible from a long list of open proxies.. or how about repeatedly
fetching random pages of very old postings such that the SQL
server index/table memory cache becomes useless... nightmare ;)

All one can do is respond with appropriate measures at the time
of the attack, which is why working in modperl is cool because
of the ease with which one can patch in defenses and modify
them while live.

Writing a short script that takes the last 20 minutes of access_log
and automatically identifies abuse based on frequency of request,
IPs and URL patterns, and drops the route to those IPs is a
good start.. to have this auto-triggered from a site monitoring
script is even better.
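A rough sketch of such a script; the log path and threshold are
hypothetical, and it prints the route commands for review instead of
running them (a real version would also window on timestamps to get just
the last 20 minutes):

    #!/usr/bin/perl
    # count hits per IP in the access log and propose routes to drop
    use strict;

    my %hits;
    open my $log, '<', '/var/log/httpd/access_log' or die "open: $!";
    while (<$log>) {
        my ($ip) = split ' ';    # first field of common log format
        $hits{$ip}++;
    }
    close $log;

    for my $ip (sort { $hits{$b} <=> $hits{$a} } keys %hits) {
        last if $hits{$ip} < 1000;               # hypothetical threshold
        print "route add -host $ip reject\n";    # review before executing
    }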

-Justin

On Thu, Jun 07, 2001 at 08:37:04PM -0700, Jeremy Rusnak wrote:
> Hi all,
> 
> Just thought I would add my two cents...I run an online gaming site
> and the end users often decide to mess with our systems.  We service
> a pretty juvenile crowd in some regards.  So there definately is a
> need for better protection from floods.
> 
> I've had one user in particular who has been attacking our site
> regularly for the past year and a half.  He'll setup a couple
> machines with scripts to call forum posting scripts with random
> information passed into them.  He'll call a generic CGI script
> ten times a second because he can tell it slows down the server.
> He'll bombard the servers with huge UDP packets.  He bulk E-mails
> viruses and zombies to our users.  It's insane.
> 
> In short, this is a big issue for sites that get a decent amount of
> traffic.  Better flood protection is always a good thing.
> 
> We've got a great Cisco firewall that stops a lot of these kinds
> of things, but this fellow discovered open proxies and has been
> a pain ever since.  He has a script that bombards us using a
> different proxy every five seconds.  (There are lists out there
> updated in real-time with hundreds of open proxies thanks to
> the "privacy advocates" on the Net.)
> 
> By the way, the guy is in Spain so the government can't/won't do
> anything.  We've blocked half the providers in Spain as a result
> of him getting a new IP when he has been stupid enough to use
> a real IP.
> 
> So I would suggest that rate limiting based on IP address is a
> start, but it isn't the end all.  You've got to have a big bag
> of tricks.  Don't just look for one solution.
> 
> Jeremy
> 
> -Original Message-
> From: Martin Redington [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, June 07, 2001 6:08 PM
> To: [EMAIL PROTECTED]
> Cc: Justin
> Subject: Re: IP based instant throttle?
> 
> 
> 
> Do you get flooded that frequently that this is an issue?
> 
> I've seen DOS, and various buffer overflows etc. in the real world, but 
> I've never seen this.
> 
> Unless its happening very often, I would have thought that some 
> monitoring and a 2am "Deny from ip" in your httpd.conf would be 
> enough ...
> 
> 
> On Friday, June 8, 2001, at 01:50  am, Justin wrote:
> 
> > Does anyone see the value in a Throttle module that looked at
> > the apache parent status block and rejected any request where
> > another child was already busy servicing *that same IP* ?
> > (note: the real IP is in the header in a backend setup so it
> >  is not possible to dig it out across children without
> >  creating another bit of shared memory or using the filesystem?).
> >
> > I'm still finding existing throttle modules do not pickup and
> > block parallel or fast request streams fast enough .. ok there are
> > no massive outages but 10 seconds of delay for everyone because
> > all demons are busy servicing the same guy before we can conclude
> > we're being flooded is not really great.. modperl driven forums
> > (or PHP ones even) can be killed this way since there are so
> > many links on one page, all active..
> >
> > thanks for any thoughts on this.
> >
> > -Justin
> >




Re: IP based instant throttle?

2001-06-08 Thread Justin

good ideas, thanks.

as someone said its cloggage on the backend due to either
SQL server contention or more likely largish pages draining
to the user even with all the buffers en-route helping to
mitigate this. you can't win : if they are on a modem they can
tie up 8 modperl demons, and if they are on a cable modem they
can disrupt your SQL server creating select/insert locks and
yet more stalled stuff. A cable modem user could request
1mbit/s of dynamic content.. that's a big ask..

Since the clogging is not images (that is hopefully handled by
an image server like mathopd), its modperl pages, I'm going to
try a timed ban triggered by parallel requests from a single IP.

And yes it does happen often enough to annoy.. (often might be
two or three times a day, even though as a percentage of uniques
it's very tiny). Since many of the culprits don't even know what
they've got installed on their PCs, are on dhcp addresses,
and probably never return anyway, IP bans after the event are
never any good and may hit the next user who picks up the IP.

-Justin

On Thu, Jun 07, 2001 at 07:34:45PM -0700, Randal L. Schwartz wrote:
> >>>>> "Justin" == Justin  <[EMAIL PROTECTED]> writes:
> 
> Justin> Does anyone see the value in a Throttle module that looked at
> Justin> the apache parent status block and rejected any request where
> Justin> another child was already busy servicing *that same IP* ?
> Justin> (note: the real IP is in the header in a backend setup so it
> Justin>  is not possible to dig it out across children without
> Justin>  creating another bit of shared memory or using the filesystem?).
> 
> Justin> I'm still finding existing throttle modules do not pickup and
> Justin> block parallel or fast request streams fast enough .. ok there are
> Justin> no massive outages but 10 seconds of delay for everyone because
> Justin> all demons are busy servicing the same guy before we can conclude
> Justin> we're being flooded is not really great.. modperl driven forums
> Justin> (or PHP ones even) can be killed this way since there are so
> Justin> many links on one page, all active.. 
> 
> It would be pretty simple, basing it on my CPU-limiting throttle that
> I've published in Linux Magazine
> <http://www.stonehenge.com/merlyn/LinuxMag/col17.html>.  Just grab a
> flock on the CPU-logging file in the post-read-request phase instead
> of writing to it.  If you can't get the flock, reject the request.
> Release the flock by closing the file in the log phase.
> 
> But this'd sure mess up my ordinary visit to you, since my browser
> makes 4 connections in parallel to fetch images, and I believe most
> browsers do that these days.
> 
> -- 
> Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
> <[EMAIL PROTECTED]> http://www.stonehenge.com/merlyn/>
> Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
> See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!
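
For reference, a sketch of that flock idea as a mod_perl 1 handler; the
module name and lock-file location are made up, and it releases the lock
in a cleanup rather than a log-phase handler:

    package My::OnePerIP;
    use strict;
    use Apache::Constants qw(OK DECLINED FORBIDDEN);
    use Fcntl qw(:flock);

    sub handler {    # install as PerlPostReadRequestHandler
        my $r  = shift;
        my $ip = $r->connection->remote_ip;
        open my $fh, '>', "/var/tmp/throttle.$ip" or return DECLINED;
        # if another child already holds this IP's lock, reject the request
        return FORBIDDEN unless flock $fh, LOCK_EX | LOCK_NB;
        # hold the lock until the request finishes, then let it go
        $r->register_cleanup(sub { close $fh; 0 });
        return OK;
    }

    1;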




IP based instant throttle?

2001-06-07 Thread Justin

Does anyone see the value in a Throttle module that looked at
the apache parent status block and rejected any request where
another child was already busy servicing *that same IP* ?
(note: the real IP is in the header in a backend setup so it
 is not possible to dig it out across children without
 creating another bit of shared memory or using the filesystem?).

I'm still finding existing throttle modules do not pickup and
block parallel or fast request streams fast enough .. ok there are
no massive outages but 10 seconds of delay for everyone because
all demons are busy servicing the same guy before we can conclude
we're being flooded is not really great.. modperl driven forums
(or PHP ones even) can be killed this way since there are so
many links on one page, all active.. 

thanks for any thoughts on this.

-Justin



success with Apache::Compress

2001-02-22 Thread Justin

Hi, after looking at mod_gzip, Apache::Gzip, Apache::GzipChain
and so on, I decided to try Apache::Compress, with some doubt that
it was "worth it"

There were a few hiccups, but it worked out great.

To test "in production" I created an alias /x to my cgi location,
and then added

Alias /y /path/to/my/modperl/programs/
Alias /x /path/to/my/modperl/programs/
PerlModule Apache::Filter
PerlModule Apache::RegistryNG
PerlModule Apache::RegistryBB

# original gateway
<Location /y>
  ..
  SetHandler perl-script
  PerlHandler Apache::RegistryBB->handler
  ..
</Location>

# optional compressing gateway
<Location /x>
  ..
  SetHandler perl-script
  PerlSetVar Filter on
  PerlHandler Apache::RegistryFilter Apache::Compress
  ..
</Location>

I also changed Apache::RegistryFilter to subclass from Apache::RegistryBB
(bare bones), as that is my preferred harness.. (if you mix 
Registry classes, mod_perl will not recognize the same script as
the same script and recompile it again, wasting memory).

Ok so I found one problem ..

If a client is 1.1 but does not accept gzip, then I'd get an internal
server error and a perl complaint about an invalid file handle GE.
I fixed this by changing the fallback (non compressing filter) code in
Apache::Compress to print, rather than use send_fd() .. I don't know why
this worked, but it does..

Next problem .. the zoo of browsers out there that lie about being
able to handle gzip.. I don't want to assume they all work, then have
users look at pages of gibberish, so I used mod_rewrite to conditionally
use the compressing gateway initially ONLY for two browser regexps:
(but they are by far the most popular):

RewriteCond %{HTTP_USER_AGENT} ^Mozilla/[45].*Windows.* [OR]
RewriteCond %{HTTP_USER_AGENT} ^Mozilla/[45].*Opera.*
RewriteRule ^(.*) http://back.end.server.ip/x/mysite$1 [P]

One could also, I guess, offer a new subdomain using mod_rewrite..
 www.mysite.com (directed to non compressing URL)
 fast.mysite.com (directed to compressing URL)

Next problem .. I had an unfortunate habit of printing
Location: blah\n\n
from my mod_perl programs which breaks under the compressing filter
chain so rather than fix this now, I used mod_rewrite RewriteRule
again to strip out my URLs that have this kind of redirect, to
point to the /y (non compressing) handler regardless.

Result: this is the good part :-)

* Since all my content is dynamic and most is HTML with fiddly stuff,
  nett site outgoing bandwidth (and bill) has dropped by 2/3rds !!

* the modperl machine is now working harder.. where previously it was
  delivering approx 300k/sec and was 20% cpu busy, it is now delivering
  the same page count at only 142kb/sec of compressed data and is 40% cpu
  busy .. since it is SMP I'm guessing it takes 40% of one PIII 1ghz
  to handle the load of about 140k/sec of compressed html..

* page sizes are MUCH smaller, in many/most cases...
  home page --> 50k --> 12k
  one 50 post forum thread --> 120k --> 22k
  large html table with finnicky cell colors etc --> 87k --> 8k

* load times from the users perspective, even on a DSL line, feel
  twice as fast. On a modem, for some examples above, it would feel
  5x faster! my subjective feel under MSIE on just a 300mhz laptop,
  but connected to a 784kbps DSL line, was that pages appear
  twice as fast.. 

* my bandwidth bill drops by 2/3rd (would be much more but html was
  not 100% of bandwidth, plus I'm conservative about switching it
  on for more browsers)..

AWESOME..

-Justin



Re: success with Apache::Compress

2001-02-21 Thread Justin

Great, thanks.
please excuse the enthusiasm, here is some more: I've been
long convinced that speed always wins in speed vs features..
Apache::Compress gives you more speed without feature reduction!
that's worth getting enthusiastic about - it's like free money :-)

Also.. popular portals are probably not implementing content 
compression yet(?) due to worries over patchy browser support,
so that just makes it even better..
-Justin

On Wed, Feb 21, 2001 at 07:26:03PM -0500, Geoffrey Young wrote:
>  
> 
> -Original Message-
> > From: Justin
> > To: [EMAIL PROTECTED]
> > Sent: 2/21/01 5:19 PM
> > Subject: success with Apache::Compress
> > 
> > Hi, after looking at mod_gzip, Apache::Gzip, Apache::GzipChain
> > and so on, I decided to try Apache::Compress, with some doubt that
> > it was "worth it"
> >
> > There were a few hiccups, but it worked out great.
> >
> [snip]
> > 
> > * page sizes are MUCH smaller, in many/most cases...
> >  home page --> 50k --> 12k
> > one 50 post forum thread --> 120k --> 22k
> > large html table with finnicky cell colors etc --> 87k --> 8k
> 
> well, I was going to wait until after TPC5 to release it, but you sound so
> excited :)
> 
> I came up with Apache::Clean while preparing some slides:
> 
>   http://morpheus.laserlink.net/~gyoung/modules/Apache-Clean-0.01.tar.gz
> 
> it's just a simple interface into Paul Lindner's nifty HTML::Clean, but set
> up as a PerlHandler that can stand on its own or be used with the latest
> Apache::Filter.
> 
> Just in case you want to eek out that last bit of bandwidth - I saw about
> another 10% drop when combining Apache::Clean with Apache::Compress
> 
> Anyway, I'll probably put it on CPAN tomorrow...
> 
> --Geoff
> 
> >
> > * load times from the users perspective, even on a DSL line, feel
> >   twice as fast. On a modem, for some examples above, it would feel
> >   5x faster! my subjective feel under MSIE on just a 300mhz laptop,
> >   but connected to a 784kbps DSL line, was that pages appear
> >   twice as fast.. 
> >
> > * my bandwidth bill drops by 2/3rd (would be much more but html was
> >  not 100% of bandwidth, plus I'm conservative about switching it
> >  on for more browsers)..
> >
> > AWESOME..
> >
> > -Justin

-- 
Justin Beech  http://www.dslreports.com
Phone:212-269-7052 x252 FAX inbox: 212-937-3800
mailto:[EMAIL PROTECTED] --- http://dslreports.com/contacts



Re: the edge of chaos (URL correction)

2001-01-05 Thread Justin

My bad. It is
  www.dslreports.com/front/example.gif
Sorry for those curious enough to check the URL out.

On Thu, Jan 04, 2001 at 06:10:09PM -0500, Rick Myers wrote:
> On Jan 04, 2001 at 17:55:54 -0500, Justin twiddled the keys to say:
> > 
> > If you want to see what happens to actual output when this
> > happens, check this gif:
> >http://www.dslreports.com/front/eth0-day.gif
> 
> You sure about this URL? I get a 404...
> 
> Rick Myers[EMAIL PROTECTED]
> 
> The Feynman Problem   1) Write down the problem.
> Solving Algorithm 2) Think real hard.
>   3) Write down the answer.




Re: the edge of chaos

2001-01-04 Thread Justin

I need more horsepower. Yes, I'd agree with that!

However... which web solution would you prefer:

A. (ideal)
load equals horsepower:
  all requests serviced in <=250ms
load slightly more than horsepower:
  linear falloff in response time, as a function of % overload

..or..

B. (modperl+front end)
load equals horsepower:
  all requests serviced in <=250ms
sustained load *slightly* more than horsepower
  site too slow to be usable by anyone, few seeing pages

Don't all benchmarks (of disk, webservers, and so on)
always continue increasing load well past optimal levels,
to check there are no nasty surprises out there?

regards
-justin

On Thu, Jan 04, 2001 at 11:10:25AM -0500, Vivek Khera wrote:
> >>>>> "J" == Justin  <[EMAIL PROTECTED]> writes:
> 
> J> When things get slow on the back end, the front end can fill with
> J> 120 *requests* .. all queued for the 20 available modperl slots..
> J> hence long queues for service, results in nobody getting anything,
> 
> You simply don't have enough horsepower to serve your load, then.
> 
> Your options are: get more RAM, get faster CPU, make your application
> smaller by sharing more code (pretty much whatever else is in the
> tuning docs), or split your load across multiple machines.
> 
> If your front ends are doing nothing but buffering the pages for the
> mod_perl backends, then you probably need to lower the ratio of
> frontends to back ends from your 6 to 1 to something like 3 to 1.
> 
> -- 
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> Vivek Khera, Ph.D.Khera Communications, Inc.
> Internet: [EMAIL PROTECTED]   Rockville, MD   +1-240-453-8497
> AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/

-- 
Justin Beech  http://www.dslreports.com
Phone:212-269-7052 x252 FAX inbox: 212-937-3800
mailto:[EMAIL PROTECTED] --- http://dslreports.com/contacts



Re: the edge of chaos

2001-01-04 Thread Justin

Hi,
Thanks for the links! But I wasn't sure what in the first link
was useful for this problem, and the vacuum bots discussion
is really a different topic.
I'm not talking of vacuum bot load. This is real world load.

Practical experiments (ok - the live site :) convinced me that 
the well recommended modperl setup of fe/be suffers from failure
and much wasted page production when load rises just a little
above *maximum sustainable throughput* ..

If you want to see what happens to actual output when this
happens, check this gif:
   http://www.dslreports.com/front/eth0-day.gif
From 11am to 4pm (in the jaggie middle section delineated by
the red bars) I was madly doing sql server optimizations to
get my head above water.. just before 11am, response time
was sub-second. (That whole day represents about a million
pages). Minutes after 11am, response rose fast to 10-20 seconds
and few people would wait that long, they just hit stop..
(which doesn't provide my server any relief from their request).

By 4pm I'd got the SQL server able to cope with current load,
and everything was fine after that..

This is all moot if you never plan to get anywhere near max
throughput.. nevertheless.. as a business, if incoming load
does rise (hopefully because of press) I'd rather lose 20% of
visitors to a "sluggish" site, than lose 100% of visitors
because the site is all but dead..

I received a helpful recommendation to look into "lingerd" ...
that would seem one approach to solve this issue.. but a
lingerd setup is quite different from popular recommendations.
-Justin

On Thu, Jan 04, 2001 at 11:06:35AM -0500, Geoffrey Young wrote:
> 
> 
> > -Original Message-
> > From: G.W. Haywood [mailto:[EMAIL PROTECTED]]
> > Sent: Thursday, January 04, 2001 10:35 AM
> > To: Justin
> > Cc: [EMAIL PROTECTED]
> > Subject: Re: the edge of chaos
> > 
> > 
> > Hi there,
> > 
> > On Thu, 4 Jan 2001, Justin wrote:
> > 
> > > So dropping maxclients on the front end means you get clogged
> > > up with slow readers instead, so that isnt an option..
> > 
> > Try looking for Randall's posts in the last couple of weeks.  He has
> > some nice stuff you might want to have a play with.  Sorry, I can't
> > remember the thread but if you look in Geoff's DIGEST you'll find it.
> 
> I think you mean this:
> http://forum.swarthmore.edu/epigone/modperl/phoorimpjun
> 
> and this thread:
> http://forum.swarthmore.edu/epigone/modperl/zhayflimthu
> 
> (which is actually a response to Justin :)
> 
> > 
> > Thanks again Geoff!
> 
> glad to be of service :)
> 
> --Geoff
> 
> > 
> > 73,
> > Ged.
> > 

-- 
Justin Beech  http://www.dslreports.com
Phone:212-269-7052 x252 FAX inbox: 212-937-3800
mailto:[EMAIL PROTECTED] --- http://dslreports.com/contacts



Re: the edge of chaos

2001-01-03 Thread Justin

Yep, I am familiar with MaxClients .. there are two backend servers
of 10 modperl processes each (MaxClients=start=10). That's sized
about right. They can all pump away at the same time doing about
20 pages per second. The problem comes when they are asked to do
21 pages per second :-)

There is one frontend mod_proxy.. currently has MaxClients set
to 120 processes (it doesn't serve images).. the actual number
in use near peak output varies from 60 to 100, depending on the
mix of clients using the system. Keepalive is *off* on that
(again, since it doesn't serve images).

When things get slow on the back end, the front end can fill with
120 *requests* .. all queued for the 20 available modperl slots..
hence long queues for service, results in nobody getting anything,
results in a dead site. I don't mind performance limits, just don't
like the idea that pushing beyond 100% (which can even happen with
one of the evil site hoovers hitting you) results in site death.

So dropping maxclients on the front end means you get clogged
up with slow readers instead, so that isn't an option..

-Justin

On Wed, Jan 03, 2001 at 11:57:17PM -0600, Jeff Sheffield wrote:
> this is not the solution...
> but it could be a bandaid until you find one.
> set the MaxClients # lower.
> 
> # Limit on total number of servers running, i.e., limit on the number
> # of clients who can simultaneously connect --- if this limit is ever
> # reached, clients will be LOCKED OUT, so it should NOT BE SET TOO
> LOW.
> # It is intended mainly as a brake to keep a runaway server from
> taking
> # the system with it as it spirals down...
> #
> MaxClients 150
> 
> >On Wed, Jan 03, 2001 at 10:25:04PM -0500, Justin wrote:
> > Hi, and happy new year!
> > 
> > My modperl/mysql setup does not degrade gracefully when reaching
> > and pushing maximum pages per second  :-) if you could plot 
> > throughput, it rises to ceiling, then collapses to half or less,
> > then slowly recovers .. rinse and repeat.. during the collapses,
> > nobody but real patient people are getting anything.. most page
> > production is wasted: goes from modperl-->modproxy-->/dev/null
> > 
> > I know exactly why .. it is because of a long virtual
> > "request queue" enabled by the front end .. people "leave the
> > queue" but their requests do not.. pressing STOP on the browser
> > does not seem to signal mod_proxy to cancel its pending request,
> > or modperl, to cancel its work, if it has started.. (in fact if
> > things get real bad, you can even find much of your backend stuck
> > in a "R" state waiting for the Apache timeout variable
> > to tick down to zero..)
> > 
> > Any thoughts on solving this? Am I wrong in wishing that STOP
> > would function through all the layers?
> > 
> > thanks
> > -Justin
> Thanks, 
> Jeff
> 
> ---
> | "0201: Keyboard Error.  Press F1 to continue."  |
> |  -- IBM PC-XT Rom, 1982 |
> ---
> | Jeff Sheffield  |
> | [EMAIL PROTECTED]  |
> | AIM=JeffShef|
> ---




the edge of chaos

2001-01-03 Thread Justin

Hi, and happy new year!

My modperl/mysql setup does not degrade gracefully when reaching
and pushing maximum pages per second  :-) if you could plot 
throughput, it rises to ceiling, then collapses to half or less,
then slowly recovers .. rinse and repeat.. during the collapses,
nobody but real patient people are getting anything.. most page
production is wasted: goes from modperl-->modproxy-->/dev/null

I know exactly why .. it is because of a long virtual
"request queue" enabled by the front end .. people "leave the
queue" but their requests do not.. pressing STOP on the browser
does not seem to signal mod_proxy to cancel its pending request,
or modperl, to cancel its work, if it has started.. (in fact if
things get real bad, you can even find much of your backend stuck
in a "R" state waiting for the Apache timeout variable
to tick down to zero..)

Any thoughts on solving this? Am I wrong in wishing that STOP
would function through all the layers?

thanks
-Justin



experience on modperl-killing "vacuum bots"

2000-12-20 Thread Justin

Hi again,

Tracing down periods of unusual modperl overload I've
found it is usually caused by someone using an aggressive
site mirror tool of some kind.

The Stonehenge Throttle (a lifesaver) module was useful
to catch the really evil ones that masquerade as a real
browser ..  although the version I grabbed did need to be
tweaked as when you get really hit hard, the determination
that yes, it is that spider again, involved a long read loop
of a rapidly growing fingerprint of doom.. to the point
where the determination that it was the same evil spider
was taking quite a long time per hit! (some real nasty ones
can hit you with 1000s of requests per minute!)

Also - sleeping to delay the reader as it reached the
soft limit was also bad news for modperl.

So I changed it to be more brutal about number of requests
per time frame, and bytes read per time frame, and also
black-list the md5 of the IP/useragent combination for
longer when that does happen. Matching on IP/useragent
combo is necessary rather than just IP to avoid blocking
big proxy on one IP which are in use in some large companies
and some telco ISPs.

In filtering error_logs over time, I've assembled a list
of nastys that have triggered the throttle repeatedly.

The trouble is, the throttle can take some time to 
wake up which can still floor your server for very
short periods..
So I also simply outright ban these user agents:

(EmailSiphon)|(LinkWalker)|(WebCapture)|(w3mir)|
(WebZIP)|(Teleport Pro)|(PortalBSpider)|(Extractor)|
(Offline Explorer)|(WebCopier)|(NetAttache)|(iSiloWeb)|
(eCatch)|(ecila)|(WebStripper)|(Oxxbot)|(MuscatFerret)|
(AVSearch)|(MSIECrawler)|(SuperBot 2.4)

Nasty little collection huh..
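
One way to wire in a list like the one above, sketched as a mod_perl 1
access handler (the module name is hypothetical and the pattern is
abbreviated to the first few entries):

    package My::BanAgents;
    use strict;
    use Apache::Constants qw(OK FORBIDDEN);

    my $bad = qr/EmailSiphon|LinkWalker|WebCapture|w3mir|WebZIP|Teleport Pro/;
        # ...extend with the rest of the list above

    sub handler {    # install as PerlAccessHandler
        my $r  = shift;
        my $ua = $r->header_in('User-Agent') || '';
        return FORBIDDEN if $ua =~ $bad;
        return OK;
    }

    1;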

MSIECrawler is particularly annoying. I think that is
when somebody uses one of the bill gates IE5 "ideas":
save for offline view, or something.

Anyway.. hope this is helpful next time your modperl
server gets so busy you have to wait 10 seconds just to
get a server-status URL to return.

This also made me think that perhaps it would be nice
to design a setup that reserved 1 or 2 modperl processes
for serving (say) the home page .. that way, when the site
gets jammed up at least new visitors get a reasonably 
fast home page to look at (perhaps including an alert
warning against slow response lower down..).. that is
better than them coming in from a news article or search
engine, and getting no response at all.

It would also be nice for mod_proxy to have a better
way of controlling timeout on fetching from the backend,
and the page to show in case timeout occurs.. has anyone
done something here? then after 10 seconds (say) mod_proxy
can show a pretty page explaining that due to the awesome
success of your product/service, the website is busy and
please try again very soon :-) [we should be so lucky].
At the moment what happens under load is mod_proxy seems
to queue the request up (via the tcp listen queue) .. the
user might give up and press stop or reload (mod_proxy does
not seem to know this) and thus queue up another request via
another front end, and pretty soon there is a 10 second
page backlog for everyone and loads of useless requests to
start to fill ..

-Justin



Re: recommendation for image server with modperl

2000-12-20 Thread Justin

I did try thttpd.

As I understood it, and I did send an email to acmesoftware to ask
but got no reply, thttpd does not handle keep-alive, and indeed
users complained that images "came in slowly". I also observed this.
I'm happy to be corrected, maybe I picked up the wrong version or
did not study the source carefully enough. I could not find any
config variables relating to keep-alive either..

I found some benchmarks which showed mathopd and thttpd similar in
speed. Only linux kernel httpd can do better than either.. but
request rates per second of 1000+ is of academic interest only..

-Justin

On Tue, Dec 19, 2000 at 08:37:23PM -0800, Perrin Harkins wrote:
> On Tue, 19 Dec 2000, Justin wrote:
> > I've been catching up on the modperl list archives, and would 
> > just like to recommend "mathopd" as an image web server.
> 
> I think you'll find thttpd (http://www.acme.com/software/thttpd/) faster
> and somewhat better documented.  However, I'd like to point out that we've
> had no problems using Apache as an image server.  We need the ability to
> serve HTTPS images, which mathopd and thttpd can't do, but more than that
> we've found the performance to be more than good enough with a stripped
> down Apache server.
> 
> > After having difficulties with the sheer number of front end apache
> > processes necessary to handle 10 backend modperls, (difficulties: high
> > load average and load spikes, kernel time wasted scheduling lots of
> > httpds, higher than expected latency on simple requests)
> 
> Load averages are tricky beasts.  The load can get high on our machines
> when many processes are running, but it doesn't seem to mean much: almost
> no CPU is being used, the network is not saturated, the disk is quiet,
> response is zippy, etc.  This leads me to think that these load numbers
> are not significant.
> 
> Select-based servers are very cool though, and a good option for people
> who don't need SSL and want to squeeze great performance out of budget
> hardware.
> 
> - Perrin

-- 
Justin Beech  http://www.dslreports.com
Phone:212-269-7052 x252 FAX inbox: 212-937-3800
mailto:[EMAIL PROTECTED] --- http://dslreports.com/contacts



recommendation for image server with modperl

2000-12-19 Thread Justin

Hi, 
I've been catching up on the modperl list archives, and would 
just like to recommend "mathopd" as an image web server.

After having difficulties with the sheer number of front-end
apache processes necessary to handle 10 backend modperls
(difficulties: high load average and load spikes, kernel time
wasted scheduling lots of httpds, higher than expected latency
on simple requests), I switched all images to another IP
address on the same box (extending their IMG SRC somewhat,
obviously), now served by mathopd.

Mathopd is a simple select-based, HTTP 1.1 (keep-alive)
compliant tiny webserver that is very configurable. I was
doubtful that it would hold up (it comes with zero documentation;
the docs say "read the source"). But it has not crashed *once*,
and it's very nice to see just one 22MB process with 500+ minutes
of CPU time (for several weeks of work), and have images
come in fast and reliably.

It uses select and as many file handles per process as you have.
If load increases beyond (say) your limit of 1024 fds, it re-uses
currently unused but kept-alive fds, meaning graceful
degradation. It is also exceedingly fast, much faster than
apache at serving images (that doesn't matter much, but it does
mean it's frugal with your CPU).
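
For anyone curious what "select-based" means in practice, a toy
sketch of the single-process pattern (this is not mathopd's
actual code, just the shape of the idea):

  use strict;
  use IO::Select ();
  use IO::Socket::INET ();

  my $listen = IO::Socket::INET->new(LocalPort => 8080, Listen => 128,
                                     ReuseAddr => 1) or die "listen: $!";
  my $sel = IO::Select->new($listen);

  # one process, one loop, many open client sockets
  while (my @ready = $sel->can_read) {
      for my $fh (@ready) {
          if ($fh == $listen) {
              $sel->add(scalar $listen->accept);  # new keep-alive client
          }
          elsif (!sysread($fh, my $buf = '', 8192)) {
              $sel->remove($fh);                  # client gone; reclaim the fd
              close $fh;
          }
          else {
              # parse the request in $buf, syswrite the response, and
              # leave $fh in the set for the next keep-alive request
          }
      }
  }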

Of course I still have 120+ apache httpds (now just being the front
end for *page* requests), and my wish is that mathopd would add
proxy and regexp rewrite capability; then I could do away with
apache completely on the front end!! Or I guess apache2 with
mod_rewrite and mod_proxy would solve that, at the risk of
thread-related teething problems.

Just a recommendation from left field.
-Justin



JOB - NYC - Looking for a Linux/Apache/mod_perl/mysql programmer

2000-09-15 Thread Justin

Hi, 
We just placed a little job ad targeted at New York City residents on
dslreports.com. We're looking for an enthusiastic employee #4 who is
very comfortable with apache/modperl/linux/mysql.

We cannot give an accurate job description because there is so much
we need to DO; you can carve out your own area if you can convince me
and my partner. It could be:
  * community/forum coding
  * and/or .. network monitoring (for users) systems coding 
  * and/or .. mysql related work (optimizations, upgrades, etc)
  * and/or .. web design within the constraints of a modperl setup
  * and/or .. a little sysadmin (if you like that)
  * and/or .. automation of security scan tools
  * and/or .. a bit of java applet work

dslreports.com has a lot of daily users and gets a lot of feedback,
so if you enjoy the idea of giving users what they want, and doing
it fast, without focus groups and without a lot of project management,
then this position would be ideal for you.. 

Telecommuting is fine: unlimited DSL provided to your home. Come to
the office every second day if you prefer ..

Please reply to me and not the list.
thanks
-Justin



Re: tracking down why a module was loaded?

2000-09-15 Thread Justin

Thanks for the hint. It worked perfectly. I didn't connect
cluck and BEGIN. Doh.

Nobody spotted the unintended irony in my question:
perl-status itself (Apache::Status) was the one pulling in CGI.pm!

Turns out, if you load Apache::Request before Apache::Status,
it uses that instead of the (large) CGI.pm. That wasn't mentioned
in my Apache::Status manual :-(
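
In other words, something like this in startup.pl (going only by
the behavior observed above; I have not seen it documented):

  # startup.pl: load Apache::Request first so Apache::Status
  # uses it for argument parsing instead of pulling in CGI.pm
  use Apache::Request ();
  use Apache::Status ();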

-Justin

On Thu, Sep 14, 2000 at 10:46:39PM +0200, Stas Bekman wrote:
> On Thu, 14 Sep 2000, Justin wrote:
> 
> > Can anyone tell me the easiest slickest way of determining
> > what was responsible for requesting a module, having discovered
> > that it has been loaded when viewing perl-status?
> 
> A.pm:
> -
> package A;
> use Carp ();
> BEGIN { Carp::cluck("I don't want to wake up!!!")}
> 1;
> 
> test.pl
> ---
> require "A.pm";
> 
> now run:
> % perl test.pl
> I don't want to wake up!!! at A.pm line 4
>   A::BEGIN() called at A.pm line 4
>   eval {...} called at A.pm line 4
>   require A.pm called at test.pl line 1
> 
> Also see:
> perldoc -f caller
> 
> 
> > And while I've got the podium: I would like to congratulate Doug and
> > everyone involved in modperl. By checking pcdataonline.com, I found
> > our entirely modperl site is now #1850 in the top 10,000 websites,
> > with over 10 million *entirely dynamic* pages pushed out a month, to
> > half a million unique users. This is a single Dell 2300 box with
> > two PII 450 CPUs and a gig of memory (the MySQL server is on another
> > box). Most pages are built in 20-100ms of user time. The same box
> > runs backend and frontend servers, serves images as well, plus a hunk
> > of other processes.
> >
> > If a Fortune 500 company asked Razorfish / Concrete Media to build
> > them a website that would scale to that, with entirely dynamic pages,
> > they'd be dragging the client down to Sun to pick out the color of
> > their Enterprise 4000, or Microsoft would be pushing a cluster of
> > multiprocessor Compaqs and NT servers with all possible software
> > trimmings added. And then you'd need a team of specialists to keep
> > the whole thing moving. No wonder dotcoms go broke.
> 
> Wow! That's cool!
> 
> > modperl is the best kept secret on the net. Shame!
> 
> :)
> 
> _
> Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
> http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
> mailto:[EMAIL PROTECTED]   http://apachetoday.com http://jazzvalley.com
> http://singlesheaven.com http://perlmonth.com   perl.org   apache.org
> 

-- 
Justin Beech  http://www.dslreports.com
Phone:212-269-7052 x252 FAX inbox: 212-937-3800
mailto:[EMAIL PROTECTED] --- http://dslreports.com/contacts



tracking down why a module was loaded?

2000-09-14 Thread Justin

Can anyone tell me the easiest slickest way of determining
what was responsible for requesting a module, having discovered
that it has been loaded when viewing perl-status?


And while I've got the podium: I would like to congratulate Doug and
everyone involved in modperl. By checking pcdataonline.com, I found
our entirely modperl site is now #1850 in the top 10,000 websites,
with over 10 million *entirely dynamic* pages pushed out a month, to
half a million unique users. This is a single Dell 2300 box with
two PII 450 CPUs and a gig of memory (the MySQL server is on another
box). Most pages are built in 20-100ms of user time. The same box
runs backend and frontend servers, serves images as well, plus a hunk
of other processes.

If a Fortune 500 company asked Razorfish / Concrete Media to build
them a website that would scale to that, with entirely dynamic pages,
they'd be dragging the client down to Sun to pick out the color of
their Enterprise 4000, or Microsoft would be pushing a cluster of
multiprocessor Compaqs and NT servers with all possible software
trimmings added. And then you'd need a team of specialists to keep
the whole thing moving. No wonder dotcoms go broke.

modperl is the best kept secret on the net. Shame!

- 
Justin Beech  http://www.dslreports.com



Re: Installing mod_perl

2000-08-29 Thread Justin Wheeler

It appears your Apache wasn't compiled with support for loading
modules. You either need to recompile Apache with mod_perl statically
linked in, or recompile Apache with DSO support.
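
For the archives, one common recipe for the DSO route at the time
looked roughly like this (paths and version numbers are
placeholders; check the INSTALL docs that ship with your versions):

  # build Apache 1.3.x with DSO support...
  cd apache_1.3.x
  ./configure --prefix=/usr/local/apache --enable-module=so
  make && make install

  # ...then build mod_perl as a DSO against it via apxs
  cd ../mod_perl-1.xx
  perl Makefile.PL USE_APXS=1 WITH_APXS=/usr/local/apache/bin/apxs EVERYTHING=1
  make && make install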

--
Regards,
Justin Wheeler
[EMAIL PROTECTED]


On Tue, 29 Aug 2000, Marco Marchi wrote:

> Hi all,
> I'm a newcomer to this mailing list.
> I have installed mod_perl (rel. 1.24) on my machine (Linux, kernel rel.
> 2.2.13). Apache is already configured and running (rel. 1.3.9). But the
> plug-in (i.e. mod_perl) is not running: I mean, when I run httpd the machine
> gives the following error message:
> "Syntax error on line 207 of (path follows)
> Invalid command LoadModule, perhaps mis-spelled or defined by a module
> not included in the server configuration".
> Looking at line 207 of httpd.conf, that is where the list of modules to
> load begins, with the instruction "LoadModule etc.".
> 
> Can anybody help me to solve this problem?
> 
> Thanks
> 
> Marco
> 
> 
> 
> ---
> Marco Marchi - Resp ISO 9000 & ICT
> Audio Lab Srl - Gruppo Sistemi Integrati
> Modena - V. D'Avia Sud, 198/1 - 41010
> Tel. +39-059-343424 - Fax +39-059-344955
> Bologna - V. della Barca, 26 - 40133
> Tel. +39-051-6198620 Fax +39-051-6193400
> ---
> 




Args and Params.

2000-08-25 Thread Justin Wheeler

While writing an Apache module using mod_perl 1.24, I followed the
instructions in my "Writing Apache Modules with Perl and C" book.  The
book told me to get at all parameters like this:

my %params;
my @args = ($r->args, $r->content); # This line is the line in question.
while (my ($name, $value) = splice @args, 0, 2) {
    push @{ $params{$name} }, $value;
}

The line I commented stops Apache from doing anything; the code just
halts.  If I don't send anything in POST or GET form, it runs fine and
serves the pages.  But if I do send any data, it will stop
running.  CGI::param, however, works fine.  I would prefer not to use it
if I don't have to, though.  Bug?
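
If installing libapreq is an option, here is a hedged alternative
sketch using Apache::Request, which parses both GET and POST data
without CGI.pm and avoids the $r->content call entirely:

  use Apache::Request ();

  my $apr = Apache::Request->new($r);

  my %params;
  for my $name ($apr->param) {            # all parameter names
      # param($name) in list context returns every value for that key,
      # so multi-valued fields are preserved
      push @{ $params{$name} }, $apr->param($name);
  }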

--
Regards,
Justin Wheeler
[EMAIL PROTECTED]






Re: followup to MaxChildRequests question ;-(

2000-06-23 Thread Justin

On Fri, Jun 23, 2000 at 09:54:49AM -0400, Vivek Khera wrote:
> >>>>> "J" == Justin  <[EMAIL PROTECTED]> writes:
> 
> J> It's still worth stating, for the purposes of getting into the
> J> modperl archive at least, that MaxChildRequests is to be avoided
> J> for modperl backend setups.
> 
> I disagree.  You just need to have enough back-ends to make sure that
> you don't overwhelm the remaining ones for enough time that it takes
> to fire up a replacement.
> 
> And add more RAM.  You can never have enough RAM.  (and be sure to
> configure your operating system to allow mod_perl to use that
> additional RAM).
> 

I totally agree on the RAM - no paging on my box, that's the easiest
way to insanity.. Nevertheless, OK, let me modify this: avoid
MaxChildRequests when it is set at a fairly low level, AND traffic
is sufficient to cycle children every 10 minutes or so, AND your program
is so large that it takes 10+ seconds for a new child to get going
when the box is under pressure, because you then observe, I think,
a coordinated rush for the exit(0) every 10 minutes.

The goal of producing a stable back-end (not one where apache is
constantly killing and starting big children in response to spikes
in load) produces a stable of httpds that tick down at, on average,
the same rate.

On the questions from the previous post (sorry not to respond quickly):
MaxRequests was 12, MinSpareServers was 2, StartServers was 10; this
would lead to a steady state of 12 processes.

I gave up on SizeLimit because I don't want to worry about what limit to
pick. What I settled on was this: simply increment REQUEST_NO in
each child, and do:
 if ($REQUEST_NO++ > 500 && !($REQUEST_NO % 3) && rand() > 0.98) {
   $r->child_terminate;
 }

Probably a better way is:
 if ($REQUEST_NO++ > (400 + $my_slot * 100)) {
   $r->child_terminate;
 }

(MaxRequestsPerChild is set to something huge like 5000)

but I didn't know the variable for the slot number and only just thought
of that..

thanks for the help!
-Justin



followup to MaxChildRequests question ;-(

2000-06-22 Thread Justin

I RTFM'd some more..

Seems like Apache::SizeLimit helps; if I switch to that, it
would avoid the galloping-herd-off-a-cliff problem I am seeing.

It's still worth stating, for the purposes of getting into the
modperl archive at least, that MaxChildRequests is to be avoided
for modperl backend setups.

It would be nice to have Apache::RandomRequestLimit instead though..
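
For reference, the Apache::SizeLimit wiring is small. A sketch,
with numbers that are pure guesses you would tune per application:

  # in startup.pl
  use Apache::SizeLimit ();
  $Apache::SizeLimit::MAX_PROCESS_SIZE       = 30_000;  # KB; a guess
  $Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 2;       # halve the check overhead

  # in httpd.conf:
  #   PerlFixupHandler Apache::SizeLimit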

-Justin



MaxChildRequests and modperl - issue

2000-06-22 Thread Justin

The backend/frontend setup works well, except for the following problem
that I think is a risk for loaded sites:

I set MaxChildRequests to 500, to effectively clean house every
now and again, as is recommended. I have about 12 backend
modperl servers, and that handles the load fine (about 250 front
ends).

What happens, though, is they all tick up towards MaxChildRequests,
and they pretty much all start getting close to 500 over a period of
less than a minute (about 10-20 minutes after a server restart).

What I think happens is the children die after their last request,
and Apache does not kick off a new child straight away. MinFree is
set to 2. As 12 becomes 11 becomes 10 becomes 9, my backend is
getting less and less powerful and more and more swamped. When
Apache does wake up and spawn a new child, it takes many seconds
to interpret all the (large amount of) perl code and modules that
are not in the parent: up to 10 seconds now, since it's only getting
a fraction of the box. This gives the unlucky user a dead browser.
Worse, the remaining live servers are dying faster now as they
handle more and more of the load, and rush towards 500 to contribute
to the same traffic jam of booting children.

The effect is essentially that what starts out as random child
death/restart collapses to all backend processes rebooting at the
same time, and an effectively dead server for about 20 seconds.

Any easy fixes to this? Am I the first to find this? Am I missing
something obvious? Perhaps having my own ChildRequest counter and
dying myself, but more randomly, is a better way?
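
One sketch of that last idea: give each child its own randomly
drawn limit so the exits cannot synchronize (the numbers are
arbitrary; this mirrors what the follow-up above settles on):

  # in the mod_perl handler; $LIMIT is drawn once per child
  use vars qw($REQUEST_NO $LIMIT);
  $LIMIT ||= 450 + int rand 100;     # each child gets a limit in 450..549

  if (++$REQUEST_NO > $LIMIT) {
      $r->child_terminate;           # exit cleanly after this request
  }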

thanks
-Justin
dslreports.com