RE: squid performance

2000-01-17 Thread Gerald Richter

>
> Gerald, thanks for your answer.
> I'm still confused... which is the right scenario:
>
> 1) a mod_perl process generates a response of 64k, if the
> ProxyReceiveBufferSize is 64k, the process gets released immediately, as
> all 64k are buffered at the socket, then a proxy process comes in, picks
> 8k of data every time and sends down the wire.
>

Yes, although I am not quite sure whether it gets released immediately, or
whether it has to wait until the whole transmission is successful.

> 2) a mod_perl process generates a response of 64k, a proxy request reads
> from mod_perl socket by 8k chunks and sends down the socket, No matter
> what's the client's speed the data gets buffered once again at the socket.
> So even if the client is slow the proxy server completes the proxying of
> 64k data even before the client was able to absorb the data. Thus the
> system socket serves as another buffer on the way to the client.
>

Yes, too (but the receive and transmit buffers may be of different sizes,
depending on the OS).

What I don't know is whether the call to close the socket waits until all
data has actually been sent successfully. If it doesn't wait, you may not
be notified of any failure; but because the proxying Apache can write to
the socket transmit buffer as fast as it can read, it should be possible
for the proxying Apache to copy all the data from the receive buffer to the
transmit buffer and then release the receive buffer, so the mod_perl Apache
is free to do other things while the proxying Apache still waits until the
client acknowledges successful transmission. (That last part is the one I
am not sure about.)

>
> Also if the scenario 1 is the right one and it looks like:
>
> [  socket  ]
> [mod_perl] => [  ] => [mod_proxy] => wire
> [  buffer  ]
>
> When the buffer size is of 64k and the generated data is 128k, is it a
> shift register (pipeline) alike buffer, so every time a chunk of 8k is
> picked by mod_proxy, new 8k can enter the buffer. Or no new data can enter
> the buffer before it gets empty, i.e. all 64k get read by mod_proxy?
>
> As you understand the pipeline mode provides a better performance as it
> releases the heavy mod_perl process as soon as the amount of data awaiting
> to be sent to the client is equal to socket buffer size + 8k. I think it's
> not a shift register buffer type...
>

That depends on your OS, but a normal OS should of course use a pipelined
buffer (often it's implemented as a ring buffer: when the write pointer
reaches the end of the buffer, it wraps around to the start and stops when
it hits the current read position in the buffer).

Gerald



RE: squid performance

2000-01-17 Thread Stas Bekman

> > > Yes, as Joshua posted today morning (at least it was morning in
> > germany :-),
> > > the application buffer size is hardcoded, the size is 8192 (named
> > > IOBUFSIZE). You will find it in proxy_util.c:ap_proxy_send_fb().
> > >
> > > The ProxyReceiveBufferSize set the receive buffer size of the socket, so
> > > it's an OS issue.
> >
> > Which means that setting of ProxyReceiveBufferSize higher than 8k is
> > usless unless you modify the sources. Am I right? (I want to make it as
> > clear as possible i in the Guide)
> >
> 
> No, that means that Apache reads (and writes) the data of the request in
> chunks of 8K, but the OS is providing a buffer with the size of
> ProxyReceiveBufferSize (as far as you don't hit a limit). So the proxied
> request data is buffered by the OS and if the whole page fit's inside the OS
> buffer the sending Apache should be imediately released after sending the
> page, while the proxing Apache can read and write the data in 8 K chunks as
> slow as the client is.

Gerald, thanks for your answer.
I'm still confused... which is the right scenario:

1) A mod_perl process generates a response of 64k. If
ProxyReceiveBufferSize is 64k, the process is released immediately, as
all 64k are buffered at the socket; then a proxy process comes in, picks
up 8k of data at a time and sends it down the wire.

2) A mod_perl process generates a response of 64k; a proxy process reads
from the mod_perl socket in 8k chunks and writes them down its own socket.
No matter what the client's speed is, the data gets buffered once again at
that socket, so even if the client is slow, the proxy server completes the
proxying of the 64k of data before the client has absorbed it. Thus the
system socket serves as another buffer on the way to the client.

3) neither of them

Also if the scenario 1 is the right one and it looks like:

  [  socket  ]
[mod_perl] => [  ] => [mod_proxy] => wire
  [  buffer  ]

When the buffer size is 64k and the generated data is 128k, is it a
shift-register-like (pipelined) buffer, so that every time a chunk of 8k is
picked up by mod_proxy, a new 8k can enter the buffer? Or can no new data
enter the buffer before it is empty, i.e. until all 64k have been read by
mod_proxy?

As you understand, the pipelined mode provides better performance, as it
releases the heavy mod_perl process as soon as the amount of data awaiting
delivery to the client equals the socket buffer size + 8k. I think it's
not a shift-register type of buffer...

Thank you!

> 
> That's the result of the discussion. I didn't tried it out myself until now
> if it really behaves this way. I will do so the next time and let you know
> if I find any different behaviour.
> 
> Gerald
> 
> 
> 



___
Stas Bekman  mailto:[EMAIL PROTECTED]  http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC http://www.stason.org/stas/TULARC
perl.apache.org  modperl.sourcegarden.org  perlmonth.com  perl.org
single o-> + single o-+ = singlesheaven  http://www.singlesheaven.com



RE: squid performance

2000-01-17 Thread Gerald Richter

Hi Stas,

> >
> > Yes, as Joshua posted today morning (at least it was morning in
> germany :-),
> > the application buffer size is hardcoded, the size is 8192 (named
> > IOBUFSIZE). You will find it in proxy_util.c:ap_proxy_send_fb().
> >
> > The ProxyReceiveBufferSize set the receive buffer size of the socket, so
> > it's an OS issue.
>
> Which means that setting of ProxyReceiveBufferSize higher than 8k is
> usless unless you modify the sources. Am I right? (I want to make it as
> clear as possible i in the Guide)
>

No, that means that Apache reads (and writes) the request data in chunks
of 8K, but the OS provides a buffer of size ProxyReceiveBufferSize (as
long as you don't hit a limit). So the proxied request data is buffered
by the OS, and if the whole page fits inside the OS buffer, the sending
Apache should be released immediately after sending the page, while the
proxying Apache can read and write the data in 8K chunks as slowly as the
client requires.

That's the result of the discussion. I haven't yet tried it out myself to
see if it really behaves this way. I will do so next time and let you know
if I find any different behaviour.

Gerald




Re: ASP->Loader result in 'Attempt to free non-existent shared...'

2000-01-17 Thread Joshua Chamas

Dmitry Beransky wrote:
> 
> Hi again, folks,
> 
> Last Saturday after manually relinking SDBM_File with a reference to
> mod_perl libperl.so, I was able to preload Apache::ASP and precompile the
> asp scripts from startup.pl without any segfaults.   This however, resulted
> in a different problem.  I didn't notice it right away (don't know how I
> could've missed it), but now every time a child process is been shutdown a
> hole slew of 'null: Attempt to free non-existent shared string during
> global destruction' messages (on the order of 2500 per process) is been
> dumped into the error log.  I've narrowed the problem down to the
> ASP->Loader call in startup.pl.  Any chance anybody knows what's going
> on?  Is it possible to at least somehow disable this error?
> 

You could try a " local $^W = 0; " before the Apache::ASP->Loader()
call; this might do away with these messages altogether.  But there
is something else wrong lurking here, 2500 errors!!, so keep
this in the back of your mind in case anything else comes up.

-- Joshua
_
Joshua Chamas   Chamas Enterprises Inc.
NodeWorks >> free web link monitoring   Huntington Beach, CA  USA 
http://www.nodeworks.com  1-714-625-4051



Can't find modules with mod_perl install

2000-01-17 Thread gnielson

I am trying to figure out why I cannot start up httpd now that I have
compiled Apache 1.3.6 for Unix with mod_perl 1.21 and perl 5.004_01
under Redhat Linux 5.0, and cannot run my startup script, even though I
have added directories to the module search path in my startup script.

Whenever I try to run my startup.pl script, I get the error:

Can't locate object method "module" via package "Apache" at
/usr/lib/perl5/site_perl/Apache/DBI.pm line  202.
BEGIN failed--compilation aborted at startup.pl line 23.

FYI, line 202 is:  if ($INC{'Apache.pm'} and
Apache->module('Apache::Status'));

I added to my startup.pl script:
BEGIN {
 unshift @INC, "/usr/local/apache/lib /usr/lib/perl5/site_perl/Apache"  ;
 }
because I would do a locate command and find that some of my *.pm files
were not in the @INC path. 

Whenever I try to run httpd -d /usr/local/apache, I get numerous errors,
even though I am pretty sure that with my additions to @INC I have the
correct paths of all the modules listed. Any help much appreciated. Output
from running httpd below:
 
[root@localhost bin]# ./httpd -d /usr/local/apache
[Mon Jan 17 21:38:33 2000] [error] [Mon Jan 17 21:38:33 2000] startup.pl:
[Mon Jan 17 21:38:33 2000] Cookie.pm: [Mon Jan 17 21:38:33 2000]
Cookie.pm: [Mon Jan 17 21:38:33 2000] Table.pm: [Mon Jan 17 21:38:33 2000]
Table.pm: Can't find loadable object for module Apache::Table in @INC
(/usr/local/apache/lib /usr/local/apache/lib
/usr/lib/perl5/site_perl/Apache /usr/lib/perl5/i386-linux/5.00401
/usr/lib/perl5 /usr/lib/perl5/site_perl/i386-linux
/usr/lib/perl5/site_perl . /usr/local/apache/ /usr/local/apache/lib/perl)
at /usr/lib/perl5/site_perl/mod_perl.pm line 14
[Mon Jan 17 21:38:33 2000] startup.pl: [Mon Jan 17 21:38:33 2000]
Cookie.pm: [Mon Jan 17 21:38:33 2000] Cookie.pm: BEGIN failed--compilation
aborted at /usr/lib/perl5/site_perl/Apache/Cookie.pm line 5.
[Mon Jan 17 21:38:33 2000] startup.pl: BEGIN failed--compilation aborted
at /usr/local/apache/conf/startup.pl line 26.

Syntax error on line 48 of /usr/local/apache/conf/httpd.conf:
[Mon Jan 17 21:38:33 2000] startup.pl: [Mon Jan 17 21:38:33 2000]
Cookie.pm: [Mon Jan 17 21:38:33 2000] Cookie.pm: [Mon Jan 17 21:38:33
2000] Table.pm: [Mon Jan 17 21:38:33 2000] Table.pm: Can't find loadable
object for module Apache::Table in @INC (/usr/local/apache/lib
/usr/local/apache/lib /usr/lib/perl5/site_perl/Apache
/usr/lib/perl5/i386-linux/5.00401 /usr/lib/perl5
/usr/lib/perl5/site_perl/i386-linux /usr/lib/perl5/site_perl .
/usr/local/apache/ /usr/local/apache/lib/perl) at
/usr/lib/perl5/site_perl/mod_perl.pm line 14
[Mon Jan 17 21:38:33 2000] startup.pl: [Mon Jan 17 21:38:33 2000]
Cookie.pm: [Mon Jan 17 21:38:33 2000] Cookie.pm: BEGIN failed--compilation
aborted at /usr/lib/perl5/site_perl/Apache/Cookie.pm line 5.
[Mon Jan 17 21:38:33 2000] startup.pl: BEGIN failed--compilation aborted
at /usr/local/apache/conf/startup.pl line 26.




ASP->Loader result in 'Attempt to free non-existent shared...'

2000-01-17 Thread Dmitry Beransky

Hi again, folks,

Last Saturday, after manually relinking SDBM_File with a reference to
mod_perl's libperl.so, I was able to preload Apache::ASP and precompile the
asp scripts from startup.pl without any segfaults.  This, however, resulted
in a different problem.  I didn't notice it right away (don't know how I
could've missed it), but now every time a child process is shut down, a
whole slew of 'null: Attempt to free non-existent shared string during
global destruction' messages (on the order of 2500 per process) is
dumped into the error log.  I've narrowed the problem down to the
ASP->Loader call in startup.pl.  Any chance anybody knows what's going
on?  Is it possible to at least somehow disable this error?

Thanks
Dmitry 



Re: redhat apache and modperl oh my! just so anyone else knows

2000-01-17 Thread Michael

> so i guess this is the only way to do it with dso
> 
> for apache 1.3.9
>  ./configure --enable-rule=shared_core --with-perl=/usr/bin/perl
> --enable-module=so --enable-module=rewrite
> 
> for modperl-1.21
>  perl Makefile.PL USE_APXS=1 USE_DSO=1 WITH_APXS=/usr/sbin/apxs
> EVERYTHING=1
> 
This does not appear to work all the time. My build looks like this:
openssl-0.9.4
apache-1.3.9
mod_perl-1.21

./configure --prefix=/usr/local/apache \
--enable-rule=SHARED_CORE \
--with-layout=Apache \
--enable-module=so
make
make install

then.

perl Makefile.PL \
 USE_APXS=1 \
 USE_DSO=1 \
 WITH_APXS=/usr/local/apache/bin/apxs \
 EVERYTHING=1

make

gives ..

In file included from /usr/local/apache/include/httpd.h:78,
                 from mod_perl.h:114,
                 from mod_perl.c:60:
/usr/local/apache/include/buff.h:74: openssl/ssl.h: No such file or directory
/usr/local/apache/include/buff.h:77: #error "Don't use OpenSSL/SSLeay versions less than 0.9.2b, they have a serious security problem!"
make[1]: *** [mod_perl.lo] Error 1
make[1]: Leaving directory `/usr/src/mod_perl-1.21/apaci'
make: *** [apxs_libperl] Error 2

Any ideas -- I've tried every conceivable build possibility with and 
without DSO and still can't get ssl + modperl to coexist. I know it 
is possible as I have earlier versions running in production, but 
don't seem to be able to upgrade to this set of pieces.
[EMAIL PROTECTED]



Re: redhat apache and modperl oh my! just so anyone else knows

2000-01-17 Thread Clay

Aaron Johnson wrote:

> Do you feel good enough about the compile to make it into an RPM
> to replace the old one that was referenced earlier?
>
> I for one think a lot of new users would find it helpful and the one
> that is offered now is Apache 1.3.6.
>
> Aaron Johnson
>
> Clay wrote:
> >
> > ok i recompiled both apache and modperl  on redhat 6.1 and things are a go
> > , my own modules are in etc
> > yay!
> >
> > so i guess this is the only way to do it with dso
> >
> > for apache 1.3.9
> >  ./configure --enable-rule=shared_core --with-perl=/usr/bin/perl
> > --enable-module=so --enable-module=rewrite
> >
> > for modperl-1.21
> >  perl Makefile.PL USE_APXS=1 USE_DSO=1 WITH_APXS=/usr/sbin/apxs
> > EVERYTHING=1
> >
> > ill leave the compiles here in case anyone needs them
> > contact me post haste if you do

I really have no experience in making rpm's

i do know how to make slp's for stampede {my #1 choice}

but if anyone wants my compiled dirs i could pass them along so someone can
make them into rpms
?






Re: redhat apache and modperl oh my! just so anyone else knows

2000-01-17 Thread Aaron Johnson

Do you feel good enough about the compile to make it into an RPM
to replace the old one that was referenced earlier?

I for one think a lot of new users would find it helpful and the one
that is offered now is Apache 1.3.6.

Aaron Johnson

Clay wrote:
> 
> ok i recompiled both apache and modperl  on redhat 6.1 and things are a go
> , my own modules are in etc
> yay!
> 
> so i guess this is the only way to do it with dso
> 
> for apache 1.3.9
>  ./configure --enable-rule=shared_core --with-perl=/usr/bin/perl
> --enable-module=so --enable-module=rewrite
> 
> for modperl-1.21
>  perl Makefile.PL USE_APXS=1 USE_DSO=1 WITH_APXS=/usr/sbin/apxs
> EVERYTHING=1
> 
> ill leave the compiles here in case anyone needs them
> contact me post haste if you do



Re: redhat apache and modperl oh my! just so anyone else knows

2000-01-17 Thread Clay

ok i recompiled both apache and modperl  on redhat 6.1 and things are a go
, my own modules are in etc
yay!

so i guess this is the only way to do it with dso

for apache 1.3.9
 ./configure --enable-rule=shared_core --with-perl=/usr/bin/perl
--enable-module=so --enable-module=rewrite


for modperl-1.21
 perl Makefile.PL USE_APXS=1 USE_DSO=1 WITH_APXS=/usr/sbin/apxs
EVERYTHING=1

ill leave the compiles here in case anyone needs them
contact me post haste if you do



Re: redhat apache and modperl oh my!

2000-01-17 Thread Clay

Aaron Johnson wrote:

> Clay,
>
> Well I agree after seeing your startup.pl that the problem you are
> expriencing is not with the DSO, however from past posts with problems
> there are several modules that do not play well with DSO.
>

which ones?

in my experience i havent found any that cause me problems
Pg, Email::Valid
and my own have never caused major probs, such as memory leaks etc

well i guess its back to the table



>
> However as far as any problem with using the Red Hat distributed RPM
> of Apache and mod_perl on 6.1 I can not say, I always compile my own.
>
>

me too but i thought redhat was supposed to make these things easy!

lol



it would seem that the earlier mentioned broken apxs is indeed the prob:




(cd ./apaci && make install;)
make[1]: Entering directory `/home/DrFrog/mod_perl-1.21/apaci'
/usr/sbin/apxs -i -a -n perl libperl.so
cp libperl.so modules/libperl.so
cp: cannot create regular file `modules/libperl.so': No such file or
directory
apxs:Break: Command failed with rc=65536
make[1]: *** [install] Error 1
make[1]: Leaving directory `/home/DrFrog/mod_perl-1.21/apaci'
make: *** [apxs_install] Error 2



Re: redhat apache and modperl oh my!

2000-01-17 Thread Aaron Johnson

Clay,

Well, I agree after seeing your startup.pl that the problem you are
experiencing is not with the DSO; however, from past posts with problems,
there are several modules that do not play well with DSO.

However as far as any problem with using the Red Hat distributed RPM
of Apache and mod_perl on 6.1 I can not say, I always compile my own.

Aaron Johnson

Clay wrote:
> 
> DSO is not the issue
> 
> that works just fine
> i know it does,
> the problem seems to be with the default binary packages that redhat 6.1
> comes with
> 
> i ve just uninstalled/installed twice same results
> and there is no errors out in the error log
> 
> i need this for work or i wouldnt be stressin!



RE: squid performance

2000-01-17 Thread Stas Bekman

> > > No, that's the size of the system call buffer.  It is not an
> > > application buffer.
> >
> > So how one should interpret the info at:
> > http://www.apache.org/docs/mod/mod_proxy.html#proxyreceivebuffersize
> >
> > 
> > The ProxyReceiveBufferSize directive specifies an explicit network buffer
> > size for outgoing HTTP and FTP connections, for increased throughput. It
> > has to be greater than 512 or set to 0 to indicate that the system's
> > default buffer size should be used.
> > 
> >
> > So what's the application buffer parameter? A hardcoded value?
> >
> 
> Yes, as Joshua posted today morning (at least it was morning in germany :-),
> the application buffer size is hardcoded, the size is 8192 (named
> IOBUFSIZE). You will find it in proxy_util.c:ap_proxy_send_fb().
> 
> The ProxyReceiveBufferSize set the receive buffer size of the socket, so
> it's an OS issue.

Which means that setting ProxyReceiveBufferSize higher than 8k is
useless unless you modify the sources. Am I right? (I want to make it as
clear as possible in the Guide.)

___
Stas Bekman  mailto:[EMAIL PROTECTED]  http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC http://www.stason.org/stas/TULARC
perl.apache.org  modperl.sourcegarden.org  perlmonth.com  perl.org
single o-> + single o-+ = singlesheaven  http://www.singlesheaven.com



RE: squid performance

2000-01-17 Thread Gerald Richter

> > No, that's the size of the system call buffer.  It is not an
> > application buffer.
>
> So how one should interpret the info at:
> http://www.apache.org/docs/mod/mod_proxy.html#proxyreceivebuffersize
>
> 
> The ProxyReceiveBufferSize directive specifies an explicit network buffer
> size for outgoing HTTP and FTP connections, for increased throughput. It
> has to be greater than 512 or set to 0 to indicate that the system's
> default buffer size should be used.
> 
>
> So what's the application buffer parameter? A hardcoded value?
>

Yes, as Joshua posted this morning (at least it was morning in Germany :-),
the application buffer size is hardcoded; the size is 8192 bytes (named
IOBUFSIZE). You will find it in proxy_util.c:ap_proxy_send_fb().

ProxyReceiveBufferSize sets the receive buffer size of the socket, so
it's an OS issue.

Gerald


-
Gerald Richter    ecos electronic communication services gmbh
Internetconnect * Webserver/-design/-datenbanken * Consulting

Post:   Tulpenstrasse 5 D-55276 Dienheim b. Mainz
E-Mail: [EMAIL PROTECTED] Voice:+49 6133 925151
WWW:http://www.ecos.de  Fax:  +49 6133 925152
-



Re: redhat apache and modperl oh my!

2000-01-17 Thread Ask Bjoern Hansen

On Mon, 17 Jan 2000, Clay wrote:

> no, i have only used the redhat packages, i have extensivley search all
> related new s groups etc,

Please try the rpm at http://perl.apache.org/rpm/ instead.


 - ask

-- 
ask bjoern hansen - 
more than 60M impressions per day, 



RE: Apache locking up on WinNT

2000-01-17 Thread Gerald Richter


> >Yes, on NT all accesses to the perl part are serialized. This will not
> >change before mod_perl 2.0
> >
> 
> I had a horrible feeling it was going to have something to do 
> with the fact
> that Apache on NT is multi-threaded and perl isn't (yet). 
> 
> I assume that Apache::Registry has the same problems.  

Yes

> However, good old
> fashioned CGI scripts in the /cgi-bin directory should be OK?  

Yes

> Does anybody
> have any performance stats between perl running externally on Apache and
> PerlIS on IIS?
> 



Re: squid performance

2000-01-17 Thread Ask Bjoern Hansen

On Mon, 17 Jan 2000, G.W. Haywood wrote:

> > At ValueClick we can't use the caching for obvious reasons so we're using
> > a bunch of apache/mod_proxy processes in front of the apache/mod_perl
> > processes to save memory.
> > 
> > Even with our average <1KB per request we can keep hundreds of mod_proxy
> > childs busy with very few active mod_perl childs.
> 
> Would it be breaching any confidences to tell us how many
> kilobyterequests per memorymegabyte or some other equally daft
> dimensionless numbers?

Uh, I don't understand the question.

The replies to the requests are all redirects to the real content (which
is primarily served by Akamai) so it's quite non-typical.


 - ask

-- 
ask bjoern hansen - 
more than 60M impressions per day, 



Re: DynaLoader/MakeMaker problem? - Apache::ASP: crash when placed in startup.pl

2000-01-17 Thread Matt Sergeant

On Sun, 16 Jan 2000, Alan Burlison wrote:
> I think we have a strong case for:
> 
> a) Requesting that MakeMaker adds a dependency between the .so files it
> generates and the perl libperl.so
> 
> b) Requesting that a 'remove a module' method is added to DynaLoader

Option b would be very useful for mod_perl because you could remove modules
that are used at startup but not needed for the rest of the server's life.
For example, say in your <Perl> section you want to parse an XML config file
that changes your httpd configuration somehow, so you load in XML::Parser.
But now you've got XML::Parser in each of your child procs, where you
don't need it and can't unload it. Being able to call
DynaLoader::RemoveFromMemory(XML::Parser) would be ideal (yes, it could be
dangerous - so are most power tools).

-- 


Details: FastNet Software Ltd - XML, Perl, Databases.
Tagline: High Performance Web Solutions
Web Sites: http://come.to/fastnet http://sergeant.org
Available for Consultancy, Contracts and Training.



OpenSSL upgrade => modperl handler suddenly gives 403 ("Directory index forbidden by rule")?

2000-01-17 Thread dave-mlist

Recently I upgraded to OpenSSL following the HOWTO.  Version info:

Redhat 6.1
apache_1.3.9
mod_perl-1.21
mod_ssl-2.4.9-1.3.9
openssl-0.9.4

I'm using Randal's photo handler Stonehenge::Pictures; here is my
.htaccess file:

| SetHandler perl-script
| PerlHandler Stonehenge::Pictures
| PerlSendHeader On

Here is the logfile snippet that I get when I try to access that
directory:

| Directory index forbidden by rule: /home/httpd/wuertele.com/html/photos/

Everything worked fine before the upgrade.  Advice greatly
appreciated.  Appended please find my httpd.conf file.

Dave

##
## httpd.conf -- Apache HTTP server configuration file
##

#
# Based upon the NCSA server configuration files originally by Rob McCool.
#
# This is the main Apache server configuration file.  It contains the
# configuration directives that give the server its instructions.
# See http://www.apache.org/docs/> for detailed information about
# the directives.
#
# Do NOT simply read the instructions in here without understanding
# what they do.  They're here only as hints or reminders.  If you are unsure
# consult the online docs. You have been warned.  
#
# After this file is processed, the server will look for and process
# /usr/local/apache/conf/srm.conf and then /usr/local/apache/conf/access.conf
# unless you have overridden these with ResourceConfig and/or
# AccessConfig directives here.
#
# The configuration directives are grouped into three basic sections:
#  1. Directives that control the operation of the Apache server process as a
# whole (the 'global environment').
#  2. Directives that define the parameters of the 'main' or 'default' server,
# which responds to requests that aren't handled by a virtual host.
# These directives also provide default values for the settings
# of all virtual hosts.
#  3. Settings for virtual hosts, which allow Web requests to be sent to
# different IP addresses or hostnames and have them handled by the
# same Apache server process.
#
# Configuration and logfile names: If the filenames you specify for many
# of the server's control files begin with "/" (or "drive:/" for Win32), the
# server will use that explicit path.  If the filenames do *not* begin
# with "/", the value of ServerRoot is prepended -- so "logs/foo.log"
# with ServerRoot set to "/usr/local/apache" will be interpreted by the
# server as "/usr/local/apache/logs/foo.log".
#

### Section 1: Global Environment
#
# The directives in this section affect the overall operation of Apache,
# such as the number of concurrent requests it can handle or where it
# can find its configuration files.
#

#
# ServerType is either inetd, or standalone.  Inetd mode is only supported on
# Unix platforms.
#
ServerType standalone

#
# ServerRoot: The top of the directory tree under which the server's
# configuration, error, and log files are kept.
#
# NOTE!  If you intend to place this on an NFS (or otherwise network)
# mounted filesystem then please read the LockFile documentation
# (available at http://www.apache.org/docs/mod/core.html#lockfile>);
# you will save yourself a lot of trouble.
#
# Do NOT add a slash at the end of the directory path.
#
ServerRoot /etc/httpd

#
# The LockFile directive sets the path to the lockfile used when Apache
# is compiled with either USE_FCNTL_SERIALIZED_ACCEPT or
# USE_FLOCK_SERIALIZED_ACCEPT. This directive should normally be left at
# its default value. The main reason for changing it is if the logs
# directory is NFS mounted, since the lockfile MUST BE STORED ON A LOCAL
# DISK. The PID of the main server process is automatically appended to
# the filename. 
#
#LockFile /usr/local/apache/logs/httpd.lock

#
# PidFile: The file in which the server should record its process
# identification number when it starts.
#
#PidFile /usr/local/apache/logs/httpd.pid
PidFile /var/run/httpd.pid

#
# ScoreBoardFile: File used to store internal server process information.
# Not all architectures require this.  But if yours does (you'll know because
# this file will be  created when you run Apache) then you *must* ensure that
# no two invocations of Apache share the same scoreboard file.
#
#ScoreBoardFile /usr/local/apache/logs/httpd.scoreboard
ScoreBoardFile /var/run/httpd.scoreboard

#
# In the standard configuration, the server will process this file,
# srm.conf, and access.conf in that order.  The latter two files are
# now distributed empty, as it is recommended that all directives
# be kept in a single file for simplicity.  The commented-out values
# below are the built-in defaults.  You can have the server ignore
# these files altogether by using "/dev/null" (for Unix) or
# "nul" (for Win32) for the arguments to the directives.
#
#ResourceConfig conf/srm.conf
#AccessConfig conf/access.conf

#
# Timeout: The number of seconds before receives and sends time out.
#
Timeout 300

#
# KeepAlive: Whether or not to allow persistent connections (more than
# one request per connection). Set to "Off" to deactivate.
#

Re: redhat 6.1 apache and modperl woes !!

2000-01-17 Thread Clay

Naren Dasu wrote:

> I seem to be having problems with Red Hat 6.1 & mod_perl. Compiled and
> installed from scratch. This is an out-of-the box Penguin Computer running
> RH6.1.
>
> The strange behavior that manifests itself as follows:
>
> Config 1
> 
> SetHandler server-status
> Order deny,allow
> Deny from all
> Allow from .divatv.com
> 
>
> The above configuration fails, I get the "Client does not have permission"
> in the error_log file.
>
> Config 2
>
> BUT this works ... I did this to test if the server-status modules were
> properly installed.
>
> 
> SetHandler server-status
> Order allow,deny
> Allow from all
> 
>
> I am running out of ideas on why Config 1 fails.  I also tried a bunch of
> chmod/chgrp commands to change the permissions on the files, but no luck.
> Could someone shed some light on this ?
>
> thanks a bunch
> naren
>
> At 12:26 PM 1/17/00 -0500, you wrote:
> >Clay wrote:
> >>
> >> so i am just wanting to know what anyone
> >> has found out on mod perl not working properly
> >> under redhat 6.1?
> >
> >If you install everything (including modperl) from RedHat's RPMs, no
> problem (I
> >did this on five very different boxes, some new, some upgraded). If you
> try to
> >do it yourself by building from sources, etc, ... oh well - then RedHat is in
> >the way and gets all confused..
> >Attachment Converted: "c:\eudora\attach\korte14.vcf"
> >

ok for server-status im not sure, but here is the perl-status block


  SetHandler  perl-script
  PerlHandler Apache::Status
  


as for modperl actually workin' my startup works as long as i dont put in additional
modules besides
use Apache::Registry();
use Apache::Constants();
use CGI qw(-compile :all);
 use CGI::Carp () ;
in my startup.pl

otherwise apache dies with no errors anywhere

if i try to add my own module, or any others, it totally fails, and no one has found
out anything on this, by the sounds of it


i find it strange  that no one has seen this! since rh6.1 has been out since before
christmas



redhat 6.1 apache and modperl woes !!

2000-01-17 Thread Naren Dasu

I seem to be having problems with Red Hat 6.1 & mod_perl. Compiled and
installed from scratch. This is an out-of-the box Penguin Computer running
RH6.1.

The strange behavior manifests itself as follows: 

Config 1

SetHandler server-status
Order deny,allow
Deny from all
Allow from .divatv.com


The above configuration fails: I get the "Client does not have permission"
in the error_log file. 


Config 2

BUT this works ... I did this to test if the server-status modules were
properly installed. 


SetHandler server-status
Order allow,deny
Allow from all


I am running out of ideas on why Config 1 fails.  I also tried a bunch of
chmod/chgrp commands to change the permissions on the files, but no luck.
Could someone shed some light on this ? 

thanks a bunch 
naren 



At 12:26 PM 1/17/00 -0500, you wrote:
>Clay wrote:
>> 
>> so i am just wanting to know what anyone
>> has found out on mod perl not working properly
>> under redhat 6.1?
>
>If you install everything (including modperl) from RedHat's RPMs, no
problem (I
>did this on five very different boxes, some new, some upgraded). If you
try to
>do it yourself by building from sources, etc, ... oh well - then RedHat is in
>the way and gets all confused..



RE: Off Topic Questions

2000-01-17 Thread G.W. Haywood

Hi all,

On Mon, 17 Jan 2000, Stas Bekman wrote:

> Please try to keep it clean and not encourage off-topic questions. 

Sorry, Stas, you're quite right.  I often do reply privately to the
off-topic questions, and I suppose even that might be construed as
encouraging them.  I do however also try to point out gently that it
*is* off topic, and so shouldn't be here on the List, and also when
you should press `D'.  Er, now.

It's not always easy to know where to draw the line.  And unlike many,
I like the prickly feeling I get at the back of my neck when I think
that the likes of the Camel Book's authors are probably dissecting my
replies and will blow me out of the water without mercy if I get it
wrong.  It's kind of like doing homework exercises, and it's amazing
what you learn when you try to explain something to someone else.  I
guess I'll lose some of these joys if I don't reply to the completely
off-topic questions that go to the List.  But that's OK.

For the record, I'm happy to answer Perl-specific questions if mailed
to me privately.  I can't guarantee to cope with the demand, even if
there's only one question in my mailbox.  And it has to be said that
if you have to ask me (and if I can answer it:) then it's probably a
dumb question.  But that's OK, too.

As for keeping it clean, well I suppose it was a racist joke...

73,
Ged.



RE: squid performance

2000-01-17 Thread Stas Bekman

> On Mon, 17 Jan 2000, Markus Wichitill wrote:
> 
> > > So, if you want to increase RCVBUF size above 65535, the default max
> > > value, you have to raise first the absolute limit in
> > > /proc/sys/net/core/rmem_max, 
> > 
> > Is "echo 131072 > /proc/sys/net/core/rmem_max" the proper way to do
> > this? I don't have much experience with /proc, but this seems to work.
> 
> Yes, that's the way described in the Linux kernel documentation and the
> one I use myself.

So you should put this into /etc/rc.d/rc.local ?

> > If it's ok, it could be added to the Guide, which already mentions how
> > to change it in FreeBSD.
> 
> I'd also like to see this info added to the Guide.

Of course! Thanks for this factoid!
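As a sketch of the rc.local entry asked about above (an assumption, not a tested recipe: it presumes a 2.2-series Linux kernel with /proc mounted, and 131072 is only an example value):

```apache
# /etc/rc.d/rc.local fragment: raise the socket receive-buffer ceiling
# at boot so ProxyReceiveBufferSize values above 64k actually take effect.
echo 131072 > /proc/sys/net/core/rmem_max
```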

___
Stas Bekmanmailto:[EMAIL PROTECTED]  http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC http://www.stason.org/stas/TULARC
perl.apache.orgmodperl.sourcegarden.org   perlmonth.comperl.org
single o-> + single o-+ = singlesheavenhttp://www.singlesheaven.com



Re: redhat apache and modperl oh my!

2000-01-17 Thread Clay

DSO is not the issue

that works just fine,
I know it does;
the problem seems to be with the default binary packages that redhat 6.1
comes with

I've just uninstalled/installed twice, same results,
and there are no errors in the error log

I need this for work or I wouldn't be stressin'!




RE: squid performance

2000-01-17 Thread Radu Greab



On Mon, 17 Jan 2000, Markus Wichitill wrote:

> > So, if you want to increase RCVBUF size above 65535, the default max
> > value, you have to raise first the absolute limit in
> > /proc/sys/net/core/rmem_max, 
> 
> Is "echo 131072 > /proc/sys/net/core/rmem_max" the proper way to do
> this? I don't have much experience with /proc, but this seems to work.

Yes, that's the way described in the Linux kernel documentation and the
one I use myself.

> If it's ok, it could be added to the Guide, which already mentions how
> to change it in FreeBSD.

I'd also like to see this info added to the Guide.


Radu Greab




Re: redhat apache and modperl oh my!

2000-01-17 Thread Aaron Johnson

I have had the same experience as Stas.

The Red Hat RPM uses Dynamic Shared Objects (DSO) for all the
modules.  This is NOT the ideal way to run mod_perl, I am not saying
you can't, but a lot of modules won't preload under these conditions.

my $.02

Aaron Johnson

Stas Bekman wrote:
> 
> > Clay wrote:
> > >
> > > so i am just wanting to know what anyone
> > > has found out on mod perl not working properly
> > > under redhat 6.1?
> 
> Clay, did you try to find your answer in the list's archives? (hint: at
> perl.apache.org) There is no need to roll the broken record again. Thank
> you!
> 
> > If you install everything (including modperl) from RedHat's RPMs, no
> > problem (I did this on five very different boxes, some new, some
> > upgraded). If you try to do it yourself by building from sources, etc,
> > ... oh well - then RedHat is in the way and gets all confused..
> 
> Gerd, there is no problem building mod_perl from scratch. I have done it
> with all versions from the past 2.5 years, I think. Make sure to remove
> the apache and mod_perl RPMs first!!!
> 
> ___
> Stas Bekmanmailto:[EMAIL PROTECTED]  http://www.stason.org/stas
> Perl,CGI,Apache,Linux,Web,Java,PC http://www.stason.org/stas/TULARC
> perl.apache.orgmodperl.sourcegarden.org   perlmonth.comperl.org
> single o-> + single o-+ = singlesheavenhttp://www.singlesheaven.com



Re: redhat apache and modperl oh my!

2000-01-17 Thread Clay


no, I have only used the redhat packages; I have extensively searched all
related newsgroups etc.

the startup file I've included is the one I've borrowed from the mod_perl
book or the guide online.
I have never had problems up until redhat 6.1 {stampede, slack and redhat 6
all worked fine}

I realize this is not mod_perl's fault, but if anyone has any hints please
let me know

I've included the startup; notice the rem'd out ones: if I load any of them
it cans out, I know they are installed!


 startup.pl


Re: redhat apache and modperl oh my!

2000-01-17 Thread Stas Bekman

> Clay wrote:
> > 
> > so i am just wanting to know what anyone
> > has found out on mod perl not working properly
> > under redhat 6.1?

Clay, did you try to find your answer in the list's archives? (hint: at
perl.apache.org) There is no need to roll the broken record again. Thank
you!

> If you install everything (including modperl) from RedHat's RPMs, no
> problem (I did this on five very different boxes, some new, some
> upgraded). If you try to do it yourself by building from sources, etc,
> ... oh well - then RedHat is in the way and gets all confused.. 

Gerd, there is no problem building mod_perl from scratch. I have done it
with all versions from the past 2.5 years, I think. Make sure to remove
the apache and mod_perl RPMs first!!!


___
Stas Bekmanmailto:[EMAIL PROTECTED]  http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC http://www.stason.org/stas/TULARC
perl.apache.orgmodperl.sourcegarden.org   perlmonth.comperl.org
single o-> + single o-+ = singlesheavenhttp://www.singlesheaven.com



Re: squid performance

2000-01-17 Thread Stas Bekman

On Mon, 17 Jan 2000, Vivek Khera wrote:

> > "OB" == Oleg Bartunov <[EMAIL PROTECTED]> writes:
> 
> OB> I always thought ProxyReceiveBufferSize is supposed to be a 
> OB> buffer size. I have it 1Mb  - FreeBSD, Apache 1.3.9
> 
> No, that's the size of the system call buffer.  It is not an
> application buffer.

So how should one interpret the info at:
http://www.apache.org/docs/mod/mod_proxy.html#proxyreceivebuffersize


The ProxyReceiveBufferSize directive specifies an explicit network buffer
size for outgoing HTTP and FTP connections, for increased throughput. It
has to be greater than 512 or set to 0 to indicate that the system's
default buffer size should be used.


So what's the application buffer parameter? A hardcoded value?

Oleg, were you able to benchmark the buffer size change?



___
Stas Bekmanmailto:[EMAIL PROTECTED]  http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC http://www.stason.org/stas/TULARC
perl.apache.orgmodperl.sourcegarden.org   perlmonth.comperl.org
single o-> + single o-+ = singlesheavenhttp://www.singlesheaven.com



Re: redhat apache and modperl oh my!

2000-01-17 Thread Gerd Kortemeyer

Clay wrote:
> 
> so i am just wanting to know what anyone
> has found out on mod perl not working properly
> under redhat 6.1?

If you install everything (including modperl) from RedHat's RPMs, no problem (I
did this on five very different boxes, some new, some upgraded). If you try to
do it yourself by building from sources, etc, ... oh well - then RedHat is in
the way and gets all confused..




Re: Run away processes

2000-01-17 Thread Bill Moseley

At 06:48 PM 1/17/00 +0200, Stas Bekman wrote:
>> The httpd.conf Timeout setting doesn't affect mod_perl, it seems, even if
>> the client breaks the connection.
>> 
>> Is there a recommendation on how to catch & stop run away mod_perl programs
>> in a way that's _not_ part of the run away program.  Or is this even
>> possible?  Some type of watchdog, just like httpd.conf Timeout?
>
>Try Apache::SafeHang
>http://www.singlesheaven.com/stas/modules/Apache-SafeHang-0.01.tar.gz

Oh, ya.  Thanks.

I'm curious.  What is the reason Timeout doesn't work?   Does Timeout only
work with mod_cgi?




Bill Moseley
mailto:[EMAIL PROTECTED]



redhat apache and modperl oh my!

2000-01-17 Thread Clay

so i am just wanting to know what anyone
has found out on mod perl not working properly
under redhat 6.1?

thanks





Re: squid performance

2000-01-17 Thread Vivek Khera

> "OB" == Oleg Bartunov <[EMAIL PROTECTED]> writes:

OB> I always thought ProxyReceiveBufferSize is supposed to be a 
OB> buffer size. I have it 1Mb  - FreeBSD, Apache 1.3.9

No, that's the size of the system call buffer.  It is not an
application buffer.



Re: Apache locking up on WinNT

2000-01-17 Thread Ian Struble

Without going into too much detail, I can tell you that you have probably
discovered for yourself that mod_perl on NT only has a single perl
interpreter thread.  Try putting a proxy in front of some backend mod_perl
procs so that when Joe super-slow-connection comes along he doesn't get to
tie up that thread.  But try to get the proxy on a unix box, because when I
did this on an NT machine mod_proxy was a dog.  

If you are really stuck on NT you may want to mess around with 
ActiveState's PerlEx.  I have not done anything with it myself but I 
gather that it tries to do the same thing as mod_perl.  The only 
difference being that it performs a lot better than mod_perl does on NT 
since it isn't crippled by the single interpreter issue.

Ian

On Mon, 17 Jan 2000, Matthew Robinson wrote:

> 
> I am currently in the process of transferring a database driven site from
> IIS to Apache on NT using mod_perl.  Apache seems to lock up after about
> 10-20 minutes and the only way to get things going again is to restart
> Apache (Apache is running from the console not as a service).
> 
> The site isn't particularly heavily loaded, currently handling a request
> every 5-10 seconds.
> 
> I have also noticed that on some occasions (after the lock up) the
> error.log contains an entry stating that one of my content handlers is not
> defined, the content handler works fine until this point.  I have checked
> the FAQ's etc and I am almost 100% certain that I don't have a problem with
> my namespace.
> 
> When the server locks up netstat lists a number of clients who have a
> TIME_WAIT status on port 8080 but these connections are not listed in
> /server-status.
> 
> I am using Apache/1.3.9 (Win32) with mod_perl/1.21 which I downloaded in
> December last year.  My worry is that there is a problem with Apache on NT
> and mod_perl, given that Apache on NT is multi-threaded.
> 
> Unfortunately, I am stuck in NT due to parts of the legacy system,
> otherwise I would move to Linux or FreeBSD.  If anyone can offer any
> suggestions I would be most grateful as the only alternative I have is to
> re-engineer the site in IIS.
> 
> If anyone has any suggestions, or would like further specific detail then
> please let me know.
> 
> Thanks
> 
> Matt
> 
> --
> Matthew RobinsonE: [EMAIL PROTECTED]
> Torrington Interactive Ltd  W: www.torrington.net
> 4 Printing House Yard   T: (44) 171 613 7200
> LONDON E2 7PR   F: (44) 171 613 7201
> 



RE: squid performance

2000-01-17 Thread Markus Wichitill

> So, if you want to increase RCVBUF size above 65535, the default max
> value, you have to raise first the absolute limit in
> /proc/sys/net/core/rmem_max, 

Is "echo 131072 > /proc/sys/net/core/rmem_max" the proper way to do this? I don't have 
much experience with /proc, but this seems to work. If it's ok, it could be added to 
the Guide, which already mentions how to change it in FreeBSD.



Re: Run away processes

2000-01-17 Thread Stas Bekman

> The httpd.conf Timeout setting doesn't affect mod_perl, it seems, even if
> the client breaks the connection.
> 
> Is there a recommendation on how to catch & stop run away mod_perl programs
> in a way that's _not_ part of the run away program.  Or is this even
> possible?  Some type of watchdog, just like httpd.conf Timeout?

Try Apache::SafeHang
http://www.singlesheaven.com/stas/modules/Apache-SafeHang-0.01.tar.gz

It should be renamed one day when I get back to work on it, into something
like Apache::Watchdog::RunAwayProc, as was kindly suggested by Ken Williams
(the Apache::Watchdog:: part :)

___
Stas Bekmanmailto:[EMAIL PROTECTED]  http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC http://www.stason.org/stas/TULARC
perl.apache.orgmodperl.sourcegarden.org   perlmonth.comperl.org
single o-> + single o-+ = singlesheavenhttp://www.singlesheaven.com



Re: squid performance

2000-01-17 Thread G.W. Haywood

Hi there,

On Mon, 17 Jan 2000, Ask Bjoern Hansen wrote:

> At ValueClick we can't use the caching for obvious reasons so we're using
> a bunch of apache/mod_proxy processes in front of the apache/mod_perl
> processes to save memory.
> 
> Even with our average <1KB per request we can keep hundreds of mod_proxy
> childs busy with very few active mod_perl childs.

Would it be breaching any confidences to tell us how many
kilobyterequests per memorymegabyte or some other equally daft
dimensionless numbers?

73,
Ged.



Re: squid performance

2000-01-17 Thread Stas Bekman

> > On Solaris, default seems to be 256K ...
> 
> As I remember, that's what Linux defaults to.  Don't take my word for
> it, I can't remember exactly where or when I read it - but I think it
> was in this List some time during the last couple of months!

Guide is your friend :)
http://perl.apache.org/guide/scenario.html#Building_and_Using_mod_proxy


You can control the buffering feature with ProxyReceiveBufferSize
directive: 

ProxyReceiveBufferSize 16384

The above setting will set the buffer size to 16KB. If it is not set
explicitly, or is set to 0, then the default buffer size is used. It may
not be smaller than 512, and it should be a multiple of 512.

Both the default and the maximum possible value depend on the OS. For
example on Linux with kernel 2.2.5 the maximum and default values are
either 32k or 64k (hint: grep the kernel sources for the SK_RMEM_MAX
variable). If you set the value bigger than the limit, the default one
will be used.

Under FreeBSD it's possible to configure the kernel to have bigger socket
buffers: 

   % sysctl -w kern.ipc.maxsockbuf=2621440

When you tell the kernel to use bigger sockets you can set bigger values
for ProxyReceiveBufferSize, i.e. 1048576 (1MB) and bigger. 

So basically, to get an immediate release of the mod_perl server from
stale waiting, ProxyReceiveBufferSize should be set to a value greater
than the biggest response produced by any mod_perl script, but not bigger
than the limit. But even if not every request's output is small enough, or
the buffer big enough to absorb it all, you've got an improvement, since
the processes that generated smaller responses will be immediately
released. 
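To illustrate how the directive fits the light-frontend/heavy-backend setup discussed in this thread, here is a hedged httpd.conf sketch for the frontend proxy (the hostname, port, path and 64k value are assumptions, not taken from the Guide):

```apache
# Frontend (plain Apache + mod_proxy) fragment: relay /perl/ requests to
# the mod_perl backend and buffer up to 64k of its response, so the heavy
# backend child is released as soon as the buffer absorbs its output.
ProxyReceiveBufferSize 65536
ProxyPass        /perl/ http://127.0.0.1:8080/perl/
ProxyPassReverse /perl/ http://127.0.0.1:8080/perl/
```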




___
Stas Bekmanmailto:[EMAIL PROTECTED]  http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC http://www.stason.org/stas/TULARC
perl.apache.orgmodperl.sourcegarden.org   perlmonth.comperl.org
single o-> + single o-+ = singlesheavenhttp://www.singlesheaven.com



Run away processes

2000-01-17 Thread Bill Moseley

The httpd.conf Timeout setting doesn't affect mod_perl, it seems, even if
the client breaks the connection.

Is there a recommendation on how to catch & stop run away mod_perl programs
in a way that's _not_ part of the run away program.  Or is this even
possible?  Some type of watchdog, just like httpd.conf Timeout?

Thanks,

Bill Moseley
mailto:[EMAIL PROTECTED]



Re: Program very slow

2000-01-17 Thread Stas Bekman

> An Englishman asked an Irishman for directions to a place some
> distance away.  The Irishman replied, "T'be sure, if oi was going
> there, oi wouldn't start from here!".
> 
> This *is* a bit off-topic, but the guy needs help.

Ged, this is a wonderful thing that you do.

But, if you help someone with off-topic questions, please reply in person,
not to the list (like others did). Also make sure you stress in your
answer that this question should be asked somewhere else. 

Why's that? Because we have worked hard to avoid a situation where the
list becomes ask-everything-and-you-will-be-answered and loses its value,
and, worse, its best contributors. 

Please try to keep it clean and not encourage off-topic questions. 

Thank you for understanding!

> Press `D' if you're bored already.
> 
> On Sun, 16 Jan 2000, Kader Ben wrote:
> 
> > I want to check if @rec contains the string "Unknown" but when I do
> > so the program is very very slow (this process 6M file into @rec
> > array). Is there any other away to rewrite this code?
> 
> > for ($i = 0; $i < scalar(@rec); $i++) { $rec[$i] = '"'.$rec[$i].'"'; }
> > if($rec[16] eq '"Unknown"') { Alert_Unknown_ChannelID($rec[0]); }
> > else { my $out = join(',', @rec) . "\n"; print (G $out); }
> > }
> 
> I'd really need more to go on than you've given, so I'll make some
> wild assumptions, and here goes...
> 
> It's horribly inefficient to read a big file into an array with a
> large number of elements only to process it with things like:
> 
> $rec[$i] = '"'.$rec[$i].'"';
> 
> Think about what you're asking.  Each element has to grow by a couple
> of bytes...
> 
> Maybe you can manipulate smaller chunks of the file?  If you must add
> the quotes, do it before the pieces go into the array.  If you don't
> need to do any more processing on the array, just put the first 16
> elements into it (I assume they're relatively small), something like
> the code below.  Process the 16 element array as you do now, but deal
> with the remaining input on the fly, without putting it in an array.
> Try to use $_ wherever you can.
> 
> I take it you *are* using the `-w' switch and `use strict;'?
> 
> 73,
> Ged.
> 
> #!/usr/bin/perl -w
> # Read a file, put quotes around all the lines.
> # You can probably tell I'm really a `C' programmer.
> 
> use strict;
> 
> my @rec=();# Small array, big file
> my $fileName = "/home/ged/website/input/create/data/input/catalogue.srt";
> 
> open(FD,$fileName);
> 
> # Read first bit of file
> my $i=0;
> while( $rec[$i++] = <FD> ) { last if $i==16; }
> 
> # My file has newlines so chop 'em off before wrapping with quotes.
> # More efficient to print this inside the body of the while() above,
> # maybe you don't need the array at all...
> for( $i=0; $i<=$#rec; ) { chop($rec[$i]); print "\"$rec[$i++]\"\n"; }
> 
> # Announce
> print "* Here we are at line 16. *\n";
> 
> # Add quotes on the fly.  O'course we don't have to do it at all...
> if( 1 ) { while( <FD> ) { chop; print "\"$_\"\n"; } }
> 
> close(FD);
> 
> # EOF: ged.pl
> 
> 
> 



___
Stas Bekmanmailto:[EMAIL PROTECTED]  http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC http://www.stason.org/stas/TULARC
perl.apache.orgmodperl.sourcegarden.org   perlmonth.comperl.org
single o-> + single o-+ = singlesheavenhttp://www.singlesheaven.com




RE: Apache locking up on WinNT

2000-01-17 Thread Matthew Robinson

>> I added the warns to the scripts and it appears that access to the modules
>> is serialised.  Each call to the handler has to run to completion before
>> any other handlers can execute.
>>
>
>Yes, on NT all accesses to the perl part are serialized. This will not
>change before mod_perl 2.0
>
>Gerald

I had a horrible feeling it was going to have something to do with the fact
that Apache on NT is multi-threaded and perl isn't (yet). 

I assume that Apache::Registry has the same problems.  However, good old
fashioned CGI scripts in the /cgi-bin directory should be OK?  Does anybody
have any performance stats between perl running externally on Apache and
PerlIS on IIS?

Matt
--
Matthew RobinsonE: [EMAIL PROTECTED]
Torrington Interactive Ltd  W: www.torrington.net
4 Printing House Yard   T: (44) 171 613 7200
LONDON E2 7PR   F: (44) 171 613 7201



Re: squid performance

2000-01-17 Thread G.W. Haywood

Hi there,

On Mon, 17 Jan 2000, Joshua Chamas wrote:

> On Solaris, default seems to be 256K ...

As I remember, that's what Linux defaults to.  Don't take my word for
it, I can't remember exactly where or when I read it - but I think it
was in this List some time during the last couple of months!

> I needed to buffer up to 3M files, which I did by dynamically 
> allocating space in ap_proxy_send_fb.

For such large transfers between proxy and server, is there any reason
why one shouldn't just dump it into a tempfile in a ramdisk for the
proxy to deal with at its leisure, and let the OS take care of all the
virtual and sharing stuff?  After all, that's what it's for...

73
Ged.



Re: squid performance

2000-01-17 Thread Ask Bjoern Hansen

On Sun, 16 Jan 2000, DeWitt Clinton wrote:

[...]
> On that topic, is there an alternative to squid?  We are using it
> exclusively as an accelerator, and don't need 90% of it's admittedly
> impressive functionality.  Is there anything designed exclusively for this
> purpose?

At ValueClick we can't use the caching for obvious reasons so we're using
a bunch of apache/mod_proxy processes in front of the apache/mod_perl
processes to save memory.

Even with our average <1KB per request we can keep hundreds of mod_proxy
childs busy with very few active mod_perl childs.


  - ask

-- 
ask bjoern hansen - 
more than 60M impressions per day, 



RE: squid performance

2000-01-17 Thread Vivek Khera

> "GR" == Gerald Richter <[EMAIL PROTECTED]> writes:

>> Lately I've been using apache on the front end with mod_rewrite and
>> mod_proxy to send mod_perl-required page requests to the heavy back

GR> Do you know how this works with slow clients compared to
GR> squid? I always thought (but never tried) that one benefit of squid is
GR> that it temporarily caches (or should we rather say buffers?) the

Squid does indeed cache and buffer the output like you describe.  I
don't know if Apache does so, but in practice, it has not been an
issue for my site, which is quite busy (about 700k pages per month).

I think if you can avoid hitting a mod_perl server for the images,
you've won more than half the battle, especially on a graphically
intensive site.

-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.Khera Communications, Inc.
Internet: [EMAIL PROTECTED]   Rockville, MD   +1-301-545-6996
PGP & MIME spoken herehttp://www.kciLink.com/home/khera/



RE: Apache locking up on WinNT

2000-01-17 Thread Gerald Richter

>
> I added the warns to the scripts and it appears that access to the modules
> is serialised.  Each call to the handler has to run to completion before
> any other handlers can execute.
>

Yes, on NT all accesses to the perl part are serialized. This will not
change before mod_perl 2.0

Gerald



RE: Apache locking up on WinNT

2000-01-17 Thread Matthew Robinson

At 01:26 PM 1/17/00 +0100, Gerald Richter wrote:
>I would suggest, put
>
>warn "foo" ;
>
>allover in your perl code and then look in the error log and see where the
>last warn came from, then you have the place where it lock up.
>
>Gerald

I added the warns to the scripts and it appears that access to the modules
is serialised.  Each call to the handler has to run to completion before
any other handlers can execute.  

I think the problem I am getting is that somebody on a slow link comes
along and effectively limits every user to run at that speed.  If I had
been more patient I would have got a response, once the queue of requests
cleared.

Can anyone verify this to be the case?  If it is, I will have to go back
to IIS (for the time being), as I don't have time to work around this.


If you look at the following section of the log you will see that all of
the accesses are sequential but there is a period when a number of requests
appear to be bunched up after a slowish response. Names have been changed
to protect the innocent.

[Mon Jan 17 13:48:53 2000] [warn] Module::A::handler start
[Mon Jan 17 13:48:53 2000] [warn] Module::A::handler exit
[Mon Jan 17 13:48:57 2000] [warn] Module::B::handler start
[Mon Jan 17 13:48:57 2000] [warn] Module::B::handler exit
[Mon Jan 17 13:49:07 2000] [warn] Module::C::handler start
[Mon Jan 17 13:49:29 2000] [warn] Module::C::handler exit
[Mon Jan 17 13:49:29 2000] [warn] Module::D::handler start
[Mon Jan 17 13:49:29 2000] [warn] Module::D::handler exit
[Mon Jan 17 13:49:29 2000] [warn] Module::C::handler start
[Mon Jan 17 13:50:02 2000] [warn] Module::C::handler exit
[Mon Jan 17 13:50:14 2000] [warn] Module::A::handler start
[Mon Jan 17 13:50:14 2000] [warn] Module::A::handler exit
[Mon Jan 17 13:50:14 2000] [warn] Module::C::handler start
[Mon Jan 17 13:50:21 2000] [warn] Module::C::handler exit
[Mon Jan 17 13:50:22 2000] [warn] Module::C::handler start
[Mon Jan 17 13:50:51 2000] [warn] Module::C::handler exit

Matt
--
Matthew RobinsonE: [EMAIL PROTECTED]
Torrington Interactive Ltd  W: www.torrington.net
4 Printing House Yard   T: (44) 171 613 7200
LONDON E2 7PR   F: (44) 171 613 7201



RE: squid performance

2000-01-17 Thread radu



On Mon, 17 Jan 2000, Gerald Richter wrote:

> Look at proxy_http.c line 263 (Apache 1.3.9):
> 
>   if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
>  (const char *) &conf->recv_buffer_size, sizeof(int))
> 
> I am not an expert in socket programming, but the setsockopt man page on my
> Linux says: "The system places an absolute limit on these values", but
> doesn't say what this limit is.


For 2.2 kernels the max limit is in /proc/sys/net/core/rmem_max and the
default value is in /proc/sys/net/core/rmem_default. It's good to note the
following comment from the kernel source:

"Don't error on this BSD doesn't and if you think about it this is right.
Otherwise apps have to play 'guess the biggest size' games. RCVBUF/SNDBUF
are treated in BSD as hints."

So, if you want to increase the RCVBUF size above 65535, the default max
value, you first have to raise the absolute limit in
/proc/sys/net/core/rmem_max; otherwise you might think that by calling
setsockopt you increased it to, say, 1 MB, but in fact the RCVBUF size is
still 65535.


HTH,
Radu Greab



RE: Apache locking up on WinNT

2000-01-17 Thread Gerald Richter

>
> I have just gone back and checked the logs.  The majority of the time the
> server locks up without putting anything in the error log.  Currently,
> there are only about 10 content handlers in the system and I am fairly
> confident they work.
>
> When it locks up I can still telnet into the server and get a connection
> immediately but I don't get a response.  I have waited considerably longer
> than the Timeout (60 secs) and the connections are not terminated.
>

I guess it locks up somewhere in the perl part, because perl isn't
reentrant, so everything else has to wait!

> On some occasions /server-status has reported that almost all of the
> threads are reading (although they don't give the remote addresses).
> Normally, I would expect a maximum of about 5 threads doing concurrent
> processing.
>
> I am fairly happy to accept that the problem is in my scripts but I am not
> entirely sure what I am doing wrong.
>

I would suggest, put

warn "foo" ;

allover in your perl code and then look in the error log and see where the
last warn came from, then you have the place where it lock up.

Gerald



Re: Apache locking up on WinNT

2000-01-17 Thread Matthew Robinson

At 12:20 PM 1/17/00 +0100, Waldek Grudzien wrote:
>> I am currently in the process of transferring a database driven site from
>> IIS to Apache on NT using mod_perl.  Apache seems to lock up after about
>> 10-20 minutes and the only way to get things going again is to restart
>> Apache (Apache is running from the console not as a service).
>
>Well - I am using the NT distribution 
>(APACHE 1.3.9 / mod_perl 1.21/ PERL 5.005_003)
>downloaded from  :
>ftp://theoryx5.uwinnipeg.ca/pub/other/
>
>and noticed no problem with it.
>Maybe the problem lies in your scripts (have
>you analysed the entry from the apache log you described)?

I have just gone back and checked the logs.  The majority of the time the
server locks up without putting anything in the error log.  Currently,
there are only about 10 content handlers in the system and I am fairly
confident they work.

When it locks up I can still telnet into the server and get a connection
immediately but I don't get a response.  I have waited considerably longer
than the Timeout (60 secs) and the connections are not terminated.

On some occasions /server-status has reported that almost all of the
threads are reading (although they don't give the remote addresses).
Normally, I would expect a maximum of about 5 threads doing concurrent
processing.

I am fairly happy to accept that the problem is in my scripts but I am not
entirely sure what I am doing wrong.

Matt

>
>Regards,
>
>Waldek Grudzien
>_
>http://www.uhc.lublin.pl/~waldekg/
>University Health Care
>Lublin/Lubartow, Poland
>tel. +48 81 44 111 88
>ICQ # 20441796
>
--
Matthew RobinsonE: [EMAIL PROTECTED]
Torrington Interactive Ltd  W: www.torrington.net
4 Printing House Yard   T: (44) 171 613 7200
LONDON E2 7PR   F: (44) 171 613 7201



RE: accessing request headers from CGI?

2000-01-17 Thread Thomas Corte


Hi,

On Mon, 17 Jan 2000, Gerald Richter wrote:

> Normally only those which your server sets up in the environment for you.
> Print out the whole %ENV to see what you can get.

Thanks for the quick response! I totally forgot the header-to-environment
mapping done by Apache, and it turns out that my (non-standard) header
fields are indeed copied to %ENV, at least by apache (prefixed with
"HTTP_")!

If Fasttrack behaves the same way, my problem is solved.
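The mapping described above can be sketched as a tiny CGI script (a minimal illustration, assuming a CGI/1.1-style server that exports request headers as HTTP_* environment variables; the X-Demo header name is invented for the example):

```shell
#!/bin/sh
# Dump every request header the server passed in as an HTTP_* environment
# variable, with the prefix stripped.  A request carrying "X-Demo: 42"
# would show up here as the line "X_DEMO=42".
echo 'Content-Type: text/plain'
echo
env | sed -n 's/^HTTP_//p'
```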

_

Thomas Corte
<[EMAIL PROTECTED]>



Re: Apache locking up on WinNT

2000-01-17 Thread Waldek Grudzien

> I am currently in the process of transferring a database driven site from
> IIS to Apache on NT using mod_perl.  Apache seems to lock up after about
> 10-20 minutes and the only way to get things going again is to restart
> Apache (Apache is running from the console not as a service).

Well - I am using the NT distribution 
(APACHE 1.3.9 / mod_perl 1.21 / PERL 5.005_003)
downloaded from  :
ftp://theoryx5.uwinnipeg.ca/pub/other/

and noticed no problem with it.
Maybe the problem lies in your scripts (have
you analysed the entry from the apache log you described)?

Regards,

Waldek Grudzien
_
http://www.uhc.lublin.pl/~waldekg/
University Health Care
Lublin/Lubartow, Poland
tel. +48 81 44 111 88
ICQ # 20441796



RE: accessing request headers from CGI?

2000-01-17 Thread Gerald Richter

>
> i am trying to read the headers of an incoming HTTP request
> in a CGI script. It seems to me that the only way to do so is
> to use mod_perl methods since the standard CGI interface does not
> provide request header access - or am I missing something here?
>
> The problem is that in this specific project I am stuck with
> a non-apache http-server (NS fasttrack) - do I have
> a chance to get the http headers without mod_perl?
>

Normally only those which your server sets up in the environment for you.
Print out the whole %ENV to see what you can get.

Gerald

-
Gerald Richter      ecos electronic communication services gmbh
Internetconnect * Webserver/-design/-datenbanken * Consulting

Post:   Tulpenstrasse 5     D-55276 Dienheim b. Mainz
E-Mail: [EMAIL PROTECTED]   Voice: +49 6133 925151
WWW:    http://www.ecos.de  Fax:   +49 6133 925152
-




accessing request headers from CGI?

2000-01-17 Thread Thomas Corte


Hi,

I am trying to read the headers of an incoming HTTP request
in a CGI script. It seems to me that the only way to do so is
to use mod_perl methods, since the standard CGI interface does not
provide request header access - or am I missing something here?

The problem is that in this specific project I am stuck with
a non-Apache HTTP server (NS FastTrack) - do I have
a chance to get the HTTP headers without mod_perl?

_

Thomas Corte
<[EMAIL PROTECTED]>



Re: squid performance

2000-01-17 Thread Joshua Chamas

Gerald Richter wrote:
> 
> I have seen this in the source too; that's why I wrote that it will not work
> with Apache, because most pages will be greater than 8K. Patching Apache is
> one possibility, that's right, but I just looked at the
> ProxyReceiveBufferSize which Oleg pointed to, and this one sets the socket
> options and therefore should do the same job (as far as the OS supports it).
> Look at proxy_http.c line 263 (Apache 1.3.9):
> 
> if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
>    (const char *) &conf->recv_buffer_size, sizeof(int))
> 
> I am not an expert in socket programming, but the setsockopt man page on my
> Linux says: "The system places an absolute limit on these values", but
> doesn't say what this limit is.
> 

On Solaris, default seems to be 256K ...

tcp_max_buf

 Specifies the maximum buffer size a user is allowed to specify with the
 SO_SNDBUF or SO_RCVBUF options. Attempts to use larger buffers fail with
 EINVAL. The default is 256K. It is unwise to make this parameter much larger
 than the maximum buffer size your applications require, since that could
 allow malfunctioning or malicious applications to consume unreasonable
 amounts of kernel memory.

I needed to buffer up to 3M files, which I did by dynamically 
allocating space in ap_proxy_send_fb.  I didn't know that you 
could up the tcp_max_buf at the time, and would be interested 
in anyone's experience in doing so, whether this can actually 
be used to buffer large files.  Save me a source tweak in 
the future. ;)

-- Joshua
_
Joshua Chamas   Chamas Enterprises Inc.
NodeWorks >> free web link monitoring   Huntington Beach, CA  USA 
http://www.nodeworks.com                1-714-625-4051



Apache locking up on WinNT

2000-01-17 Thread Matthew Robinson


I am currently in the process of transferring a database driven site from
IIS to Apache on NT using mod_perl.  Apache seems to lock up after about
10-20 minutes and the only way to get things going again is to restart
Apache (Apache is running from the console not as a service).

The site isn't particularly heavily loaded, currently handling a request
every 5-10 seconds.

I have also noticed that on some occasions (after the lock-up) the
error log contains an entry stating that one of my content handlers is not
defined; the content handler works fine until that point. I have checked
the FAQs etc. and I am almost 100% certain that I don't have a problem with
my namespace.

When the server locks up netstat lists a number of clients who have a
TIME_WAIT status on port 8080 but these connections are not listed in
/server-status.

I am using Apache/1.3.9 (Win32) with mod_perl/1.21 which I downloaded in
December last year.  My worry is that there is a problem with Apache on NT
and mod_perl, given that Apache on NT is multi-threaded.

Unfortunately, I am stuck on NT due to parts of the legacy system;
otherwise I would move to Linux or FreeBSD. If anyone can offer any
suggestions I would be most grateful, as the only alternative I have is to
re-engineer the site in IIS.

If anyone has any suggestions, or would like further specific detail then
please let me know.

Thanks

Matt

--
Matthew RobinsonE: [EMAIL PROTECTED]
Torrington Interactive Ltd  W: www.torrington.net
4 Printing House Yard   T: (44) 171 613 7200
LONDON E2 7PR   F: (44) 171 613 7201



RE: squid performance

2000-01-17 Thread Gerald Richter

Joshua,
>
> I don't know what squid's buffer is like, but back in apache
> 1.3.4, the proxy buffer IOBUFSIZE was #defined to 8192 bytes,
> which would be used in proxy_util.c:ap_proxy_send_fb() to loop
> over content being proxy passed in 8K chunks, passing that
> on to the client.
>
> So if all the web files are <8K, perfect, but I'd suggest
> increasing the value that ap_proxy_send_fb uses to buffer
> to the largest size output commonly sent by the mod_perl server.
> If this is done, then apache's mod_proxy can be used as
> effectively as squid to buffer output from a mod_perl server.
>
I have seen this in the source too; that's why I wrote that it will not work
with Apache, because most pages will be greater than 8K. Patching Apache is
one possibility, that's right, but I just looked at the
ProxyReceiveBufferSize which Oleg pointed to, and this one sets the socket
options and therefore should do the same job (as far as the OS supports it).
Look at proxy_http.c line 263 (Apache 1.3.9):

if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
   (const char *) &conf->recv_buffer_size, sizeof(int))

I am not an expert in socket programming, but the setsockopt man page on my
Linux says: "The system places an absolute limit on these values", but
doesn't say what this limit is.

Gerald



Re: squid performance

2000-01-17 Thread Oleg Bartunov

I always thought ProxyReceiveBufferSize is supposed to be a
buffer size. I have it set to 1MB - FreeBSD, Apache 1.3.9.

Oleg
On Mon, 17 Jan 2000, Joshua Chamas wrote:

> Date: Mon, 17 Jan 2000 00:36:17 -0800
> From: Joshua Chamas <[EMAIL PROTECTED]>
> To: Gerald Richter <[EMAIL PROTECTED]>
> Cc: Vivek Khera <[EMAIL PROTECTED]>, Modperl list <[EMAIL PROTECTED]>
> Subject: Re: squid performance
> 
> Gerald Richter wrote:
> > 
> > > These are on the same server, and all images and CGI's run on the
> > > small apache, and the page contents are dynamically generated by a
> > > heavy back-end proxied transparently.  The front end apache proxies to
> > > a different back-end based on the hostname it is contacted under.
> > >
> > Do you know how this works with slow clients compared to squid? I always
> > thought (but never tried) that one benefit of squid is that it temporarily
> > caches (better to say buffers?) the output generated by mod_perl scripts, so
> > the script can run as fast as possible and deliver its output to squid
> > while squid delivers the output to a slower client; the process running
> > mod_perl can then already serve the next request, therefore keeping the
> > number of mod_perl processes small.
> > 
> > Does this work this way with squid? I don't think this will work with
> > Apache and a simple ProxyPass...
> > 
> 
> Gerald,
> 
> I don't know what squid's buffer is like, but back in apache 
> 1.3.4, the proxy buffer IOBUFSIZE was #defined to 8192 bytes, 
> which would be used in proxy_util.c:ap_proxy_send_fb() to loop 
> over content being proxy passed in 8K chunks, passing that
> on to the client.  
> 
> So if all the web files are <8K, perfect, but I'd suggest 
> increasing the value that ap_proxy_send_fb uses to buffer
> to the largest size output commonly sent by the mod_perl server.
> If this is done, then apache's mod_proxy can be used as 
> effectively as squid to buffer output from a mod_perl server.
> 
> Regards,
> 
> Joshua
> _
> Joshua Chamas                         Chamas Enterprises Inc.
> NodeWorks >> free web link monitoring Huntington Beach, CA  USA 
> http://www.nodeworks.com              1-714-625-4051
> 

_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83



Re: squid performance

2000-01-17 Thread Joshua Chamas

Gerald Richter wrote:
> 
> > These are on the same server, and all images and CGI's run on the
> > small apache, and the page contents are dynamically generated by a
> > heavy back-end proxied transparently.  The front end apache proxies to
> > a different back-end based on the hostname it is contacted under.
> >
> Do you know how this works with slow clients compared to squid? I always
> thought (but never tried) that one benefit of squid is that it temporarily
> caches (better to say buffers?) the output generated by mod_perl scripts, so
> the script can run as fast as possible and deliver its output to squid
> while squid delivers the output to a slower client; the process running
> mod_perl can then already serve the next request, therefore keeping the
> number of mod_perl processes small.
> 
> Does this work this way with squid? I don't think this will work with
> Apache and a simple ProxyPass...
> 

Gerald,

I don't know what squid's buffer is like, but back in apache 
1.3.4, the proxy buffer IOBUFSIZE was #defined to 8192 bytes, 
which would be used in proxy_util.c:ap_proxy_send_fb() to loop 
over content being proxy passed in 8K chunks, passing that
on to the client.  

So if all the web files are <8K, perfect, but I'd suggest 
increasing the value that ap_proxy_send_fb uses to buffer
to the largest size output commonly sent by the mod_perl server.
If this is done, then apache's mod_proxy can be used as 
effectively as squid to buffer output from a mod_perl server.

Regards,

Joshua
_
Joshua Chamas   Chamas Enterprises Inc.
NodeWorks >> free web link monitoring   Huntington Beach, CA  USA 
http://www.nodeworks.com                1-714-625-4051