Re: horrible memory consumption

2000-01-20 Thread Vivek Khera

> "JT" == Jason Terry <[EMAIL PROTECTED]> writes:

JT> Is there a way I can tell where my memory usage is going in an
JT> Apache child?  I have a server that starts with acceptable
JT> numbers, but after a while it turns into this

It would probably be best if you started by reading through the
performance tuning guide and Stas' mod_perl guide on how to reduce
memory consumption.  Basically, when you have complex mod_perl
operations going on, you want to offload non-mod_perl related tasks
(images, static content) to other servers.



Re: squid performance

2000-01-20 Thread Leslie Mikesell

According to Greg Stark:

> > I think if you can avoid hitting a mod_perl server for the images,
> > you've won more than half the battle, especially on a graphically
> > intensive site.
> 
> I've learned the hard way that a proxy does not completely replace the need to
> put images and other static components on a separate server. There are
> two reasons that you really really want to be serving images from another
> server (possibly running on the same machine, of course).

I agree that it is correct to serve images from a lightweight server
but I don't quite understand how these points relate.  A proxy should
avoid the need to hit the backend server for static content if the
cache copy is current unless the user hits the reload button and
the browser sends the request with 'pragma: no-cache'.

> 1) Netscape/IE won't intermix slow dynamic requests with fast static requests
>on the same keep-alive connection

I thought they just opened several connections in parallel without regard
for the type of content.

> 2) static images won't be delayed when the proxy gets bogged down waiting on
>the backend dynamic server.

Is this under NT where mod_perl is single threaded?  Serving a new request
should not have any relationship to delays handling other requests on
unix unless you have hit your child process limit.

> Eg, if the dynamic content generation becomes slow enough to cause a 2s
> backlog of connections for dynamic content, then a proxy will not protect the
> static images from that delay. Netscape or IE may queue those requests after
> another dynamic content request, and even if they don't the proxy server will
> eventually have every slot taken up waiting on the dynamic server. 

A proxy that already has the cached image should deliver it with no
delay, and a request back to the same server should be serviced
immediately anyway.

> So *every* image on the page will have another 2s latency, instead of just a
> 2s latency for the entire page. This is worst in Netscape, of course,
> where the page can't draw until all the image sizes are known.

Putting the sizes in the IMG SRC tag is a good idea anyway.
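Spelling the dimensions out in the tag lets the browser reserve the space and lay out the page before the images arrive; a tiny illustration (the file name and sizes are made up):

```html
<img src="/images/logo.gif" width="120" height="40" alt="logo">
```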

> This doesn't mean having a proxy is a bad idea. But it doesn't replace putting
> your images on pics.mydomain.foo, even if that resolves to the same address,
> and running a separate apache instance for them.

This is a good idea because it is easy to move to a different machine
if the load makes it necessary.  However, a simple approach is to
use a non-mod_perl apache as a non-caching proxy front end for the
dynamic content and let it deliver the static pages directly.  A
short stack of RewriteRules can arrange this if you use the 
[L] or [PT] flags on the matches you want the front end to serve
and the [P] flag on the matches to proxy.
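A hypothetical httpd.conf fragment for such a front end (the URL prefixes and the backend port are invented for illustration):

```apache
# Front-end (non-mod_perl) Apache: serve static content itself,
# proxy the dynamic URLs through to the mod_perl backend.
RewriteEngine On
# Static matches: stop rewriting ([L]) and let this server deliver them.
RewriteRule ^/images/ - [L]
RewriteRule ^/static/ - [L]
# Everything else is proxied ([P]) to the backend on port 8080.
RewriteRule ^/(.*)$ http://127.0.0.1:8080/$1 [P]
```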

  Les Mikesell
[EMAIL PROTECTED]



RE: How do you turn logging off completely in Embperl?

2000-01-20 Thread Gerald Richter

>
> That's what I thought. Setting 'EMBPERL_DEBUG 0' should really
> turn off any
> kind of logging including even trying to open the log file.
>

Look at epio.c function OpenLog line 838

if (r -> bDebug == 0)
return ok ; /* never write to logfile if debugging is disabled */

If the DEBUG flags are zero, Embperl should never open the log; but if you do
one request with EMBPERL_DEBUG != 0, the logfile is opened and will stay open.

> > I consider this a bug and a security
> > hazard
> > (writing anything blindly to /tmp can have potentially lethal
> side effects,
> > eg: user foo puts in a symlink from /tmp/embperl.log to
> anything owned by the
> > user running the server and that file gets embperl logs
> appended to it!).
> >

If the logfile really gets opened before you have a chance to set
EMBPERL_DEBUG to 0, then it's a bug and a security hole, but I can't see how
that happens right now; maybe I am overlooking something...

> > The log file is tied to at a few different spots within the
> code. None of
> > these check the setting of EMBPERL_DEBUG before tying to the
> log. They should
> > only tie to the log if the debug setting is not zero.
> >

The logfile is only opened at this one place, in the OpenLog function I
mentioned above, and that function checks the debug setting _before_ opening
the log. So if EMBPERL_DEBUG is zero, the log file will never get opened, and
every other function simply discards anything you try to write to the logfile
while it is not open.

Gerald

-
Gerald Richter    ecos electronic communication services gmbh
Internetconnect * Webserver/-design/-datenbanken * Consulting

Post:   Tulpenstrasse 5 D-55276 Dienheim b. Mainz
E-Mail: [EMAIL PROTECTED] Voice:+49 6133 925151
WWW:http://www.ecos.de  Fax:  +49 6133 925152
-



Re: Using mod_backhand for load balancing and failover.

2000-01-20 Thread Leslie Mikesell

According to Jeffrey W. Baker:
> 
> Is anyone using mod_backhand (http://www.backhand.org/) for load
> balancing?  I've been trying to get it to work but it is really flaky. 
> For example, it doesn't seem to distribute requests for static content. 
> Bah.

I just started to look at it (and note that there was a recent update) but
haven't gotten it configured yet.  I thought it distributed whatever it
is configured to handle - it shouldn't be aware of the content type.
The parts I don't like just from looking at it are that the backend
servers all have to have the module included as well (I was hoping
to balance some non-apache servers too) and it looks like it may
be difficult or impossible to make it mesh with RewriteRules.

The mod_jserv load balancing looks much nicer at least at first
glance, but of course that doesn't help for mod_perl.   
 
Les Mikesell
 [EMAIL PROTECTED]



RE: How do you turn logging off completely in Embperl?

2000-01-20 Thread Jason Bodnar

There must be a bug somewhere because I had EMBPERL_DEBUG = 0 and was getting
errors about not being able to write to /tmp/embperl.log.

This is with v 1.2b4 I believe so if this has changed recently that may be why
I got the errors.

On 20-Jan-00 Gerald Richter wrote:
>>
>> That's what I thought. Setting 'EMBPERL_DEBUG 0' should really
>> turn off any
>> kind of logging including even trying to open the log file.
>>
> 
> Look at epio.c function OpenLog line 838
> 
> if (r -> bDebug == 0)
>   return ok ; /* never write to logfile if debugging is disabled */
> 
> If the DEBUG flags are zero, Embperl should never open the log; but if you do
> one request with EMBPERL_DEBUG != 0, the logfile is opened and will stay open.
> 
>> > I consider this a bug and a security
>> > hazard
>> > (writing anything blindly to /tmp can have potentially lethal
>> side effects,
>> > eg: user foo puts in a symlink from /tmp/embperl.log to
>> anything owned by the
>> > user running the server and that file gets embperl logs
>> appended to it!).
>> >
> 
> If the logfile really gets opened before you have a chance to set
> EMBPERL_DEBUG to 0, then it's a bug and a security hole, but I can't see how
> that happens right now; maybe I am overlooking something...
> 
>> > The log file is tied to at a few different spots within the
>> code. None of
>> > these check the setting of EMBPERL_DEBUG before tying to the
>> log. They should
>> > only tie to the log if the debug setting is not zero.
>> >
> 
> The logfile is only opened at this one place, in the OpenLog function I
> mentioned above, and that function checks the debug setting _before_ opening
> the log. So if EMBPERL_DEBUG is zero, the log file will never get opened, and
> every other function simply discards anything you try to write to the logfile
> while it is not open.
> 
> Gerald
> 
> -
> Gerald Richter    ecos electronic communication services gmbh
> Internetconnect * Webserver/-design/-datenbanken * Consulting
> 
> Post:   Tulpenstrasse 5 D-55276 Dienheim b. Mainz
> E-Mail: [EMAIL PROTECTED] Voice:+49 6133 925151
> WWW:http://www.ecos.de  Fax:  +49 6133 925152
> -

---
Jason Bodnar + [EMAIL PROTECTED] + Tivoli Systems

I swear I'd forget my own head if it wasn't up my ass. -- Jason Bodnar



Re: squid performance

2000-01-20 Thread Leslie Mikesell

According to Greg Stark:

> I tried to use the minspareservers and maxspareservers and the other similar
> parameters to let apache tune this automatically and found it didn't work out
> well with mod_perl. What happened was that starting up perl processes was the
> single most cpu intensive thing apache could do, so as soon as it decided it
> needed a new process it slowed down the existing processes and put itself into
> a feedback loop. I prefer to force apache to start a fixed number of processes
> and just stick with that number.

I've never noticed that effect, but I thought that apache always
grew in increments of 'StartServers', so I've tried to keep that
small, equal to MinSpareServers, and an even divisor of MaxSpareServers,
just on general principles.  Maybe you are starting a large number
as you cross the MinSpareServers boundary.
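For reference, the directives under discussion look like this in httpd.conf (the numbers are purely illustrative, following the rule of thumb above):

```apache
StartServers      5    # grow the pool in small increments
MinSpareServers   5    # equal to StartServers
MaxSpareServers  10    # StartServers is an even divisor of this
MaxClients       50    # hard ceiling on the number of children
```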

  Les Mikesell
   [EMAIL PROTECTED]



RE: How do you turn logging off completely in Embperl?

2000-01-20 Thread Gerald Richter

>
> There must be a bug somewhere because I had EMBPERL_DEBUG = 0 and
> was getting
> errors about not being able to write to /tmp/embperl.log.
>
> This is with v 1.2b4 I believe so if this has changed recently
> that may be why
> I got the errors.
>

This hasn't changed recently, but it is possible that in 1.2b4 there is
some debug output that is written before the first request, in which case
the log file will be opened. Are you able to upgrade to 1.2.1 (you should do
this anyway) and see if the problem disappears?

Anyway, I will put this on the TODO list: make it more robust so the
logfile cannot be accidentally opened.

Gerald

-
Gerald Richter    ecos electronic communication services gmbh
Internetconnect * Webserver/-design/-datenbanken * Consulting

Post:   Tulpenstrasse 5 D-55276 Dienheim b. Mainz
E-Mail: [EMAIL PROTECTED] Voice:+49 6133 925151
WWW:http://www.ecos.de  Fax:  +49 6133 925152
-




RE: How do you turn logging off completely in Embperl?

2000-01-20 Thread Gerald Richter

> The file gets created even if EMBPERL_DEBUG = 0 is set in 1.2.0. The
> Changes.pod from
> 1.2.1 doesn't indicate that this problem was corrected, so it
> should still be
> present.
>
> Try setting EMBPERL_DEBUG = 0 and not setting a EMBPERL_LOG (so
> it is set to
> default). Make sure the /tmp/embperl.log doesn't exist before starting the
> server. Start the server. The log won't exist. Request a page that uses
> Embperl. The log file will now exist.
>

ok, you are right. I found the bug. Seems like I introduced it sometime in
the past while rearranging the setup of the request.

You could try the following: in file epmain.c, in function SetupRequest at
about line 1786, there is a call to OpenLog. Place

if (pConf -> bDebug)

in front of it, so it now looks like:

if (pConf -> bDebug)
    {
    if ((rc = OpenLog (pCurrReq, NULL, 2)) != ok)
        LogError (pCurrReq, rc) ;
    }

Now Embperl should be quiet!

Gerald



RE: How do you turn logging off completely in Embperl?

2000-01-20 Thread Christian Gilmore

Thanks, Gerald! It might be a good idea as well to make the default log file
be auto-configured during build time (if it finds apache sources) to point to
embperl.log in whatever apache has configured as the log directory. Using /tmp
is just a bad, bad idea.

Regards,
Christian

> -Original Message-
> From: Gerald Richter [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, January 20, 2000 3:26 PM
> To: Christian Gilmore; 'Jason Bodnar'
> Cc: mod_perl Maillinglist
> Subject: RE: How do you turn logging off completely in Embperl?
>
>
> > The file gets created even if EMBPERL_DEBUG = 0 is set in 1.2.0. The
> > Changes.pod from
> > 1.2.1 doesn't indicate that this problem was corrected, so it
> > should still be
> > present.
> >
> > Try setting EMBPERL_DEBUG = 0 and not setting a EMBPERL_LOG (so
> > it is set to
> > default). Make sure the /tmp/embperl.log doesn't exist
> before starting the
> > server. Start the server. The log won't exist. Request a
> page that uses
> > Embperl. The log file will now exist.
> >
>
> ok, you are right. I found the bug. Seems like I introduced it
> sometime in
> the past while rearranging the setup of the request.
>
> You could try the following: in file epmain.c, in function
> SetupRequest at about
> line 1786, there is a call to OpenLog. Place
>
> if (pConf -> bDebug)
>
> in front of it, so it now looks like:
>
> if (pConf -> bDebug)
>     {
>     if ((rc = OpenLog (pCurrReq, NULL, 2)) != ok)
>         LogError (pCurrReq, rc) ;
>     }
>
> Now Embperl should be quiet!
>
> Gerald
>
>



httpd.conf's 407 setting doesn't quite work

2000-01-20 Thread Nancy Lin


Hi 

I don't know if this is a problem w/ modperl or apache itself.

I'm running proxy server apache 1.3.9 and modperl 1.21.  I'm using modperl
to authenticate my users.  When a
user is invalid, my code does:

  } else {
  loginfo($r, "AuthenSession::handler: bad password") ;
  $r->note_basic_auth_failure;
  return AUTH_REQUIRED;
  }

On Netscape 3.x, a little window pops up saying authentication failed, do
you want to retry?  Here's the part I don't quite understand.  If I
configure httpd.conf with 'ErrorDocument 407 "Wrong Password!', that's
what I'll see when I click on the Cancel button on that little popup.
But, if I configure httpd.conf with 'ErrorDocument 407 /error.html, it
gives me the default error 407 page.  I'm not sure why it's doing that.  I
would rather point this to a file than write it in httpd.conf.

My httpd.conf has:

<Directory ...>
Options Indexes FollowSymLinks ExecCGI
AllowOverride None
Order Allow,Deny
Allow from All
#require valid-user
</Directory>

<Location ...>
order deny,allow
allow from all
AuthName "Test"
AuthType Basic
PerlAuthenHandler Apache::AuthenSession
require valid-user
</Location>


Thanks

-- 
Nancy




RE: How do you turn logging off completely in Embperl?

2000-01-20 Thread Gerald Richter

> Thanks, Gerald! It might be a good idea as well to make the 
> default log file
> be auto-configured during build time (if it finds apache sources) 
> to point to
> embperl.log in whatever apache has configured as the log 
> directory. Using /tmp
> is just a bad, bad idea.
> 

I put it on the TODO list.

Gerald



problem

2000-01-20 Thread Etienne Pelaprat

Hi All,

I've hit a problem that I can't seem to rectify.  I compile 
mod_perl with EVERYTHING=1, but in one of my modules, I get the error:

[Wed Jan 19 20:30:05 2000] null: Rebuild with -DPERL_STACKED_HANDLERS 
to $r->push_handlers at /usr/local/apache/lib/perl/Apache/BOSCIndex.pm 
line 37.

This is the module that I wrote:

package Apache::BOSCIndex;

# use
use strict;
use Audrey;
use Audrey::Display;
use Audrey::News;
use CGI qw(:standard);
use Apache::Constants qw(OK DECLINED);

my $q = CGI->new;
my $Audrey = Audrey->new;
my $Display = Audrey::Display->new;
my $News = Audrey::News->new;

sub handler {
my $r = shift;

# PerlTransHandler
return DECLINED unless $r->uri eq "/";
$r->handler( "perl-script" );
$r->push_handlers( PerlHandler => sub {
my $user_id = $Audrey->get_user_id( $r );
$Display->update_stats( $user_id, "Front_Page" );

# print the header
$r->content_type('text/html');
$r->send_http_header;

# load the profile
$Display->load_user( $user_id );

# now print the html
$Display->pre_main( $r, $user_id, "BeOSCentral - always" );
$News->print_news_items( $r );
$Display->post_main( $r, $user_id );
});

return OK;
}

# return true
1;

I use a PerlTransHandler because that module is attached to the root of
my site, and so whenever an img src points to the root, it would try to
call this module.  But when I do this I get the above error.  I asked
Randal Schwartz and he thought that rebuilding mod_perl with EVERYTHING=1
would fix it, but it hasn't.  Do you have any suggestions?

How do I rebuild with -DPERL_STACKED_HANDLERS on?

Thanks in advance,

Etienne



httpd not copied into APACHE_PREFIX

2000-01-20 Thread Wang, Pin-Chieh

Hi,
I am building mod_perl-1.21 into apache_1.3.9 using apaci. 
I run the following commands under mod_perl-1.21 directory
perl Makefile.PL EVERYTHING=1 APACHE_PREFIX=/usr/local/apache
make
make test
make install
Everything looks fine and httpd was created in apache_1.3.9/src, but it was
not copied into /usr/local/apache/bin. After I manually copied the httpd
file and tried to start it using apachectl, I got
./apachectl start: httpd started
but it did not create httpd.pid in the logs directory, nor did httpd really
start.
Any one can give a hint?
I am running Solaris 2.6
Thanks,
PC



HELP.... apache::session, Invalid Arg: SysVSemaphoreLocker.pm line 63

2000-01-20 Thread Keith Kwiatek

Hey Guys,

Just installed Apache::Session and tried to run the example.perl script that
comes with the install.

I keep getting "Invalid Argument" at SysVSemaphoreLocker.pm line 63.

any idea what is going on?

Keith



Re: problem

2000-01-20 Thread Cliff Rayman

unfortunately PERL_STACKED_HANDLERS used to be
experimental and therefore EVERYTHING includes just
about EVERYTHING except PERL_STACKED_HANDLERS.

i think you need to add PERL_STACKED_HANDLERS=1 to
your long list of Makefile.PL parameters.
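the rebuild then looks something like this (directory names and the install prefix are assumptions about the original poster's setup):

```shell
cd mod_perl-1.21
perl Makefile.PL EVERYTHING=1 PERL_STACKED_HANDLERS=1 \
    USE_APACI=1 APACHE_PREFIX=/usr/local/apache
make
make test
make install
```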

this has been discussed in the mail archives so you can search
there and you'll find the original message from Doug.

cliff rayman
genwax.com

Etienne Pelaprat wrote:

> Hi All,
>
> I've hit a problem that I can't seem to rectify.  I compile
> mod_perl with EVERYTHING=1, but in one of my modules, I get the error:
>
> [Wed Jan 19 20:30:05 2000] null: Rebuild with -DPERL_STACKED_HANDLERS
> to $r->push_handlers at /usr/local/apache/lib/perl/Apache/BOSCIndex.pm
> line 37.
>
> This is that module that I wrote:
>
> package Apache::BOSCIndex;
>
> # use
> use strict;
> use Audrey;
> use Audrey::Display;
> use Audrey::News;
> use CGI qw(:standard);
> use Apache::Constants qw(OK DECLINED);
>
> my $q = CGI->new;
> my $Audrey = Audrey->new;
> my $Display = Audrey::Display->new;
> my $News = Audrey::News->new;
>
> sub handler {
> my $r = shift;
>
> # PerlTransHandler
> return DECLINED unless $r->uri eq "/";
> $r->handler( "perl-script" );
> $r->push_handlers( PerlHandler => sub {
> my $user_id = $Audrey->get_user_id( $r );
> $Display->update_stats( $user_id, "Front_Page" );
>
> # print the header
> $r->content_type('text/html');
> $r->send_http_header;
>
> # load the profile
> $Display->load_user( $user_id );
>
> # now print the html
> $Display->pre_main( $r, $user_id, "BeOSCentral - always" );
> $News->print_news_items( $r );
> $Display->post_main( $r, $user_id );
> });
>
> return OK;
> }
>
> # return true
> 1;
>
> I use a PerlTransHandler because that module is attached to the root of
> my site, and so whenever an img src points to the root, it would try to
> call this module.  But when I do this I get the above error.  I asked
> Randal Schwartz and he thought that rebuilding mod_perl with EVERYTHING=1
> would fix it, but it hasn't.  Do you have any suggestions?
>
> How do I rebuild with -DPERL_STACKED_HANDLERS on?
>
> Thanks in advance,
>
> Etienne



Re: How do you handle simultaneous/duplicate client requests with modperl?

2000-01-20 Thread Stas Bekman

> I have a mod_perl application that takes a request from a client,  then does
> some transaction processing with a remote system, which then returns a
> success/fail result to the client. The transaction MUST happen only ONCE per
> client session.
> 
> PROBLEM: the client clicks the submit button twice, thus sending two
> requests, spawning two different processes to do the same remote
> transaction. BUT, the client request MUST be processed only ONCE for a given
> session_id. The first request will start a process to initiate the remote
> transaction, and then the second request process starts, not knowing about
> the first process. The result is that the client has the transaction
> performed two times!
> 
> How do you handle this? My first thought is to write a "processing status"
> value to the session hash (using apache::session) AS SOON as the first
> request is received, and then when the second duplicate request is received,
> check the "processing status" in the session hash. If the processing status
> is "in progress", then wait till the processing status in the session hash
> is updated by the first request process and return the result.
> 
> Is my concept on target? Is my implementation right? (or should I write
> directly to the files system?)
> 
> Does anyone have any experience with such things? Can you give me your
> wisdom?

If the form page is generated dynamically, insert a magic (unique)
number/string as a hidden field; when you do the INSERT, insert the magic
number into the DB along with the request. Before inserting the record, check
whether there is already a record with this magic number. If there is, do not
insert; otherwise continue.

I dunno what SQL engine implementation you use, but if it's
multithreaded (not msql!) you can have a race problem, just like with
external file locking (not flock!): you do the check, find nothing, but
the process gets context-switched and another thread inserts the same
record. Then when control comes back to your process (thread), it happily
duplicates the record, without knowing that another thread already did
that. So what you need is an atomic operation (test and insert). Read
your engine's manual to find out how to do:

INSERT (something) if not SELECT (magic is already there)
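With DBI, this test-and-insert might be sketched as follows (the `txn` table and its UNIQUE index on `magic` are hypothetical; the point is that the constraint, not the application, enforces once-only):

```perl
use strict;
use DBI;

# Returns true if this request "won" and should perform the remote
# transaction; false if the magic number was already recorded.
sub claim_transaction {
    my ($dbh, $magic) = @_;
    my $ok = eval {
        local $dbh->{RaiseError} = 1;
        # On a duplicate submit this INSERT violates the UNIQUE index
        # on `magic` and dies, so the eval returns undef.
        $dbh->do('INSERT INTO txn (magic) VALUES (?)', undef, $magic);
        1;
    };
    return $ok ? 1 : 0;
}
```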

Another solution is to use JavaScript to maintain the state (the easiest),
but the user can turn it off, so you cannot rely on it.

Hope this helps...

To Victor's note about not being able to perform a double click (resubmit)
-- Netscape and IE aren't the only browsers/user agents. Use LWP,
LWP::ParallelUA or a similar module instead, to run not just 2 requests in
parallel but many of them. In fact you can act as a super-human, submitting
hundreds of requests in the same millisecond. Hey, isn't that called
benchmarking? :")

___
Stas Bekmanmailto:[EMAIL PROTECTED]  http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC http://www.stason.org/stas/TULARC
perl.apache.orgmodperl.sourcegarden.org   perlmonth.comperl.org
single o-> + single o-+ = singlesheavenhttp://www.singlesheaven.com



Re: How do you handle simultaneous/duplicate client requests with modperl?

2000-01-20 Thread Jeffrey W. Baker

Keith Kwiatek wrote:
> 
> Hello,
> 
> I have a mod_perl application that takes a request from a client,  then does
> some transaction processing with a remote system, which then returns a
> success/fail result to the client. The transaction MUST happen only ONCE per
> client session.
> 
> PROBLEM: the client clicks the submit button twice, thus sending two
> requests, spawning two different processes to do the same remote
> transaction. BUT, the client request MUST be processed only ONCE for a given
> session_id. The first request will start a process to initiate the remote
> transaction, and then the second request process starts, not knowing about
> the first process. The result is that the client has the transaction
> performed two times!
> 
> How do you handle this? My first thought is to write a "processing status"
> value to the session hash (using apache::session) AS SOON as the first
> request is received, and then when the second duplicate request is received,
> check the "processing status" in the session hash. If the processing status
> is "in progress", then wait till the processing status in the session hash
> is updated by the first request process and return the result.
> 
> Is my concept on target? Is my implementation right? (or should I write
> directly to the files system?)

Yes yes no.  Apache::Session effectively serializes all requests for the
same session_id, so using a flag in the session hash is race-safe.

-jwb



Re: httpd not copied into APACHE_PREFIX

2000-01-20 Thread John M Vinopal

perl Makefile.PL USE_APACI=1 EVERYTHING=1 APACHE_PREFIX=/usr/local/apache

On Thu, Jan 20, 2000 at 03:36:44PM -0600, Wang, Pin-Chieh wrote:
> Hi,
> I am building mod_perl-1.21 into apache_1.3.9 using apaci. 
> I run the following commands under mod_perl-1.21 directory
> perl Makefile.PL EVERYTHING=1 APACHE_PREFIX=/usr/local/apache



RE: Why does Apache do this braindamaged dlclose/dlopen stuff?

2000-01-20 Thread G.W. Haywood

Hi all,

On Wed, 19 Jan 2000, Gerald Richter wrote:

> in the long term, the solution that you have prefered in previous
> mail, not to unload modperl at all, maybe the better one

As I understand it with Apache/mod_perl:

1.  The parent (contains the Perl interpreter) fires up, initialises
things and launches some children.  Any memory it leaks stays
leaked until restart.  That could be weeks away.  Apart from
making copies of it, most of the time it doesn't do much with the
interpreter.  More waste.

2.  The children occasionally get the coup de grace, so we recover
any memory they leaked.  They do lots with the interpreter.

3.  When the parent fork()s a new child it can fork some leaked memory
too, which gradually will become unshared, so the longer this goes
on the closer we get to bringing the whole system to its knees.

So in the longer term, is there a reason the parent has to contain the
interpreter at all?  Can't it just do a system call when it needs one?
It seems a bit excessive to put aside a couple of megabytes of system
memory just to run startup.pl.  If one could get around any process
communication difficulties, the children could be just the same as
they are now, but exec()ed instead of fork()ed by a (smaller) child
process which has never leaked any memory.  The exec() latency isn't
an issue because of the way that Apache preforks a pool of processes
and the overhead will be minimal if the children live long enough.

Please tell me if I have got this all around my neck.

73,
Ged.



Re: Apache::ASP Debugging

2000-01-20 Thread G.W. Haywood

Hi there,

On Thu, 20 Jan 2000, suresh gopal wrote:

> since the eval of the program puts some of the errors to STDERR -
> which goes to the log file (I hope this explanation is correct). Is
> there a way to display this information in the browser output?

I don't think I'd want to do that.  What's the reason for asking?

73,
Ged.
 




Re: oracle : The lowdown

2000-01-20 Thread G.W. Haywood

Hi there,

On Thu, 20 Jan 2000, Perrin Harkins wrote:

> We're veering WAY off-topic here

Maybe.  But I for one am happy for the diversion.  A lot of mod-perl
sites are doing just this kind of thing - after all, mod-perl is just
a link in a chain, it's of no use intrinsically without some things to
link together!  Greg has given me some valuable insights.

> you can't guarantee your data will be in a consistent state without
> transactions or some other way to do atomic updates
[snip]
> (e.g. you're running a message board and who cares if a post gets
> lost somewhere) then transactions might be considered unnecessary

Might be?  Having worked with a BTREE/ISAM package written in C and
assembler for the last 15 years or so, I wouldn't dream of using a DB
for some of this stuff.  It would just get in the way and be 100 times
slower than my C code.  I lock records as necessary so the data will
*always* be consistent and a whole bunch of gotchas simply evaporates.
For a lot of things on the Web, you can even get away with just the
operating system and flat files half the time.

I've got to admit that the way machine performance is going there may
come a time when it's just not worth the extra effort of tinkering in
the guts but we aren't nearly there yet.  Why do so many people seem
to insist on using a sledgehammer to crack a nut?  Horses for courses,
as we join our metaphors around here.

Just my 0.02p...

73,
Ged.




Re: Can't exec programs ?

2000-01-20 Thread Pierre-Yves BONNETAIN


[EMAIL PROTECTED] said:
> you'll get a better idea of the problem running strace (or truss) 
> against the server.  in any case, you should avoid any code that's 
> forking a process, since it's throwing performance out the window. 
   Is there a 'nice way' (meaning, a patch or manual change I can make to
those modules) to prevent forking or, rather, replace it with something else
that gets me the same thing?  I can spend (a lot of) time looking for
system() and backticks in the modules I use, but if I need the functionality,
how can I 'correct' the code of those modules?
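For the Cwd case this thread is about, one in-process replacement looks like this (a sketch, not a patch to Cwd.pm itself):

```perl
# `pwd` in backticks forks a shell and trusts $ENV{PATH}; the POSIX
# binding stays in-process and is immune to a corrupted PATH.
use POSIX qw(getcwd);

my $dir = getcwd();    # instead of: chomp(my $dir = `pwd`);
```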

> 
> > On Thu, 6 Jan 2000, Pierre-Yves BONNETAIN wrote:
> > 
> > > [Wed Jan  5 17:46:49 2000] null: Can't exec "pwd": Permission denied at
> > > /usr/lib/perl5/5.00503/Cwd.pm line 82.
> 
> This is most likely due to a corruption of the PATH environment
> variable. In my case, Daniel Jacobowitz fixed this problem on Debian,
> I think by upgrading to the latest mod_perl snapshot.
> 
   I thought I had the latest mod_perl, but...
   Still, your diagnosis seems to be right. I got rid of those errors by
changing the .pm files and including FULL PATH information ('/bin/pwd' instead
of 'pwd'). And one of my tests, printing the $PATH, displayed weird characters
at the beginning of this variable (@n:/usr/bin: instead of /bin:/usr/bin).

[EMAIL PROTECTED] said:
> There is a patch to correct the PATH environment variable corruption 
> problem, if you'd rather not go to the development mod_perl snapshot. 
>  I applied the patch to mod_perl version 1.21 on Red Hat Linux 6.0 
> and it has been working fine for me.

> The patch was forwarded to me, originally authored by Doug 
> MacEachern. 
   And I will test it as soon as I get my dirty hands on the webserver.

   Thanks for everything !
-- Pierre-Yves BONNETAIN
   http://www.rouge-blanc.com



Re: oracle : The lowdown

2000-01-20 Thread Perrin Harkins

"G.W. Haywood" wrote:
> On Thu, 20 Jan 2000, Perrin Harkins wrote:
> > you can't guarantee your data will be in a consistent state without
> > transactions or some other way to do atomic updates
> [snip]
> > (e.g. you're running a message board and who cares if a post gets
> > lost somewhere) then transactions might be considered unnecessary
> 
> Might be?  Having worked with a BTREE/ISAM package written in C and
> assembler for the last 15 years or so, I wouldn't dream of using a DB
> for some of this stuff.  It would just get in the way and be 100 times
> slower than my C code.  I lock records as necessary so the data will
> *always* be consistent and a whole bunch of gotchas simply evaporates.

Right, you've just implemented simple transactions.  Your locking
serializes access to the data and solves race condition problems.
- Perrin



Re: squid performance

2000-01-20 Thread Greg Stark


Vivek Khera <[EMAIL PROTECTED]> writes:

> Squid does indeed cache and buffer the output like you describe.  I
> don't know if Apache does so, but in practice, it has not been an
> issue for my site, which is quite busy (about 700k pages per month).
> 
> I think if you can avoid hitting a mod_perl server for the images,
> you've won more than half the battle, especially on a graphically
> intensive site.

I've learned the hard way that a proxy does not completely replace the need to
put images and other static components on a separate server. There are
two reasons that you really really want to be serving images from another
server (possibly running on the same machine, of course).

1) Netscape/IE won't intermix slow dynamic requests with fast static requests
   on the same keep-alive connection

2) static images won't be delayed when the proxy gets bogged down waiting on
   the backend dynamic server.

Both of these result in a very slow user experience if the dynamic content
server gets at all slow -- even out of proportion to the slowdown. 

Eg, if the dynamic content generation becomes slow enough to cause a 2s
backlog of connections for dynamic content, then a proxy will not protect the
static images from that delay. Netscape or IE may queue those requests after
another dynamic content request, and even if they don't the proxy server will
eventually have every slot taken up waiting on the dynamic server. 

So *every* image on the page will have another 2s latency, instead of just a
2s latency for the entire page. This is worst in Netscape, of course,
where the page can't draw until all the image sizes are known.

This doesn't mean having a proxy is a bad idea. But it doesn't replace putting
your images on pics.mydomain.foo, even if that resolves to the same address,
and running a separate apache instance for them.
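A minimal sketch of that setup, in Apache 1.3-style configuration (hostname, port and paths are placeholders): a slim static-only Apache answering on pics.mydomain.foo alongside the mod_perl server.

```apache
# Slim static-only Apache instance (no mod_perl compiled in).
# Hostname, port and paths are illustrative.
Port 80
ServerName pics.mydomain.foo
DocumentRoot /www/images
KeepAlive On      # cheap to keep alive when only serving static files
MaxClients 150    # lightweight children, so a high limit is affordable
```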

-- 
greg



Re: horrible memory consumption

2000-01-20 Thread Stas Bekman

> > Is there a way I can find out where all this RAM is being used.  Or does
> > anyone have any suggestions (besides limiting the MaxRequestsPerChild)
> 
>If anyone knows how to figure out shared vs not shared memory in a 
> process on Linux, I'd be interested in that too...and it sounds like Jason
> could benefit from the info as well. I know that Apache::Gtop can be 
> used for mod_perl from reading The Guide (thanks Stas!) but I'm interested 
> in finding out these numbers for other non-mod_perl binaries too (such as
> an apache+mod_proxy binary). Also if anyone has any good pointers to info
> on dynamic linking and libraries (again, oriented somewhat towards Linux),
> I've yet to see anything that's explained things sufficiently to me yet. 
> Thanks.

GTop (not Apache::Gtop) is written by Doug, not me :) So the credits go to
Doug.

What you are talking about is Apache::VMonitor which shows you almost
everything the top(1) does and much more. You can monitor the
apache/mod_perl processes and any other non-mod_perl processes as well.

When you click on the process id you get lots of information about it,
including memory maps and sizes of the loaded libs.

This all of course uses GTop, which in turn uses libgtop from the GNOME
project, which has lately been reported to be ported to new platforms as well.

Enjoy!

___
Stas Bekman    mailto:[EMAIL PROTECTED]    http://www.stason.org/stas
Perl, CGI, Apache, Linux, Web, Java, PC    http://www.stason.org/stas/TULARC
perl.apache.org    modperl.sourcegarden.org    perlmonth.com    perl.org
single o-> + single o-+ = singlesheaven    http://www.singlesheaven.com



Re: squid performance

2000-01-20 Thread Greg Stark


"G.W. Haywood" <[EMAIL PROTECTED]> writes:

> Would it be breaching any confidences to tell us how many
> kilobyterequests per memorymegabyte or some other equally daft
> dimensionless numbers?

I assume the number you're looking for is an ideal ratio between the proxy and
the backend server? No single number exists. You need to monitor your system
and tune. 

In theory you can calculate it by knowing the size of the average request, and
the latency to generate an average request in the backend. If your pages take
200ms to generate, and they're 4k on average, then they'll take 1s to spool
out to a 56kbs link and you'll need a 5:1 ratio. In practice however that
doesn't work out so cleanly because the OS is also doing buffering and because
it's really the worst case you're worried about, not the average.

If you have the memory you could just shoot for the most processes you can
handle, something like 256:32 for example is pretty aggressive. If your
backend scripts are written efficiently you'll probably find the backend
processes are nearly all idle.

I tried to use the minspareservers and maxspareservers and the other similar
parameters to let apache tune this automatically and found it didn't work out
well with mod_perl. What happened was that starting up perl processes was the
single most cpu intensive thing apache could do, so as soon as it decided it
needed a new process it slowed down the existing processes and put itself into
a feedback loop. I prefer to force apache to start a fixed number of processes
and just stick with that number.
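Pinning the pool size as Greg describes is a matter of making the spare-server bounds meet the start value; a sketch for httpd.conf (the numbers are illustrative, not a recommendation):

```apache
# A fixed pool of mod_perl children: Apache never forks under load,
# so there are no mid-request perl startup storms.
StartServers     32
MinSpareServers  32
MaxSpareServers  32
MaxClients       32
```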

-- 
greg



Re: Run away processes

2000-01-20 Thread Greg Stark


Stas Bekman <[EMAIL PROTECTED]> writes:

> > Is there a recommendation on how to catch & stop run away mod_perl programs
> > in a way that's _not_ part of the run away program.  Or is this even
> > possible?  Some type of watchdog, just like httpd.conf Timeout?
> 
> Try Apache::SafeHang
> http://www.singlesheaven.com/stas/modules/Apache-SafeHang-0.01.tar.gz

Runaway? you mean 100% CPU ? Set up Apache::Resource then.
This isn't related to Oracle by any chance is it? 
We had this problem inside the Oracle libs at one point.

-- 
greg



Re: Run away processes

2000-01-20 Thread Stas Bekman

On 20 Jan 2000, Greg Stark wrote:

> 
> Stas Bekman <[EMAIL PROTECTED]> writes:
> 
> > > Is there a recommendation on how to catch & stop run away mod_perl programs
> > > in a way that's _not_ part of the run away program.  Or is this even
> > > possible?  Some type of watchdog, just like httpd.conf Timeout?
> > 
> > Try Apache::SafeHang
> > http://www.singlesheaven.com/stas/modules/Apache-SafeHang-0.01.tar.gz
> 
> Runaway? you mean 100% CPU ? Set up Apache::Resource then.
> This isn't related to Oracle by any chance is it? 
> We had this problem inside the Oracle libs at one point.

The process can be "runaway" waiting for some event to happen, or failing to
complete for some other reason. It might use 0% CPU in this case and be
untrappable by Apache::Resource. I've shown a few examples in the debug
chapter of the guide.

___
Stas Bekman    mailto:[EMAIL PROTECTED]    http://www.stason.org/stas
Perl, CGI, Apache, Linux, Web, Java, PC    http://www.stason.org/stas/TULARC
perl.apache.org    modperl.sourcegarden.org    perlmonth.com    perl.org
single o-> + single o-+ = singlesheaven    http://www.singlesheaven.com



Re: How to make EmbPerl stuff new content into a existing frame?

2000-01-20 Thread Gerald Richter

>
> When the user hits the login button, I am calling a CGI script that
> validates the login against a database.  I can't make it have a action
> that loads a HTML page before the script is executed.  Therefore the
> script has to reload the frame with frame pages.  I also need to pass
> values to the frame, as in the example link above.
>
> Can you make a redirection have a "target=frame" and
> "?parameter=value" to do this?
>
No, you can't do this, but you can say in your form  , so the cgi script will
be displayed on the whole screen; when the cgi does the redirect, it will
request the frame page

Gerald




Re: oracle : The lowdown

2000-01-20 Thread Perrin Harkins

Greg Stark wrote:
> Actually for web sites the lack of transactions is more of a boon than a
> problem.

We're veering WAY off-topic here, but the fact is you can't guarantee
your data will be in a consistent state without transactions or some
other way to do atomic updates.  Anything short of that suffers from
race conditions.  So, if that's no big deal for your data (e.g. you're
running a message board and who cares if a post gets lost somewhere)
then transactions might be considered unnecessary.

> For example, it makes it very hard to mix any kind of long running query with
> OLTP transactions against the same data, since rollback data accumulates very
> quickly. I would give some appendage for a while to tell Oracle to just use
> the most recent data for a long running query without attempting to rollback
> to a consistent view.

I believe setting the isolation level for dirty reads will allow you to
do exactly that.  You can keep the appendage.

- Perrin



Re: oracle : The lowdown

2000-01-20 Thread Perrin Harkins

Perrin Harkins wrote:
> Greg Stark wrote:
> > For example, it makes it very hard to mix any kind of long running query with
> > OLTP transactions against the same data, since rollback data accumulates very
> > quickly. I would give some appendage for a while to tell Oracle to just use
> > the most recent data for a long running query without attempting to rollback
> > to a consistent view.
> 
> I believe setting the isolation level for dirty reads will allow you to
> do exactly that.

Oh, silly me.  Oracle doesn't appear to offer dirty reads.  The lowest
level of isolation is "read committed" which reads all data that was
committed at the time the query began, but doesn't preserve that state
for future queries.  So, if you have lots of uncommitted data or you
commit lots of data to the table being queried while the query is
running you could make your rollback segment pretty big.  But, if you
can afford Oracle, you can afford RAM.

- Perrin



Re: Why does Apache do this braindamaged dlclose/dlopen stuff?

2000-01-20 Thread Gerald Richter

>
> Yes and no. If XS libraries are written with OO-style wrappers (which,
IMHO,
> they always should be), then surely you can catch the unloading in a
DESTROY
> sub and use that to do the deallocation of resources? Perl can only manage
> Perl resources, and extension resources should be the responsibility of
the
> the programmer.
>
That's why Doug wrote that we need the perl_destruct/perl_free additionally
to the dlclose. These two routines will take care of the Perl part
destruction, i.e. call the DESTROYs

Gerald




Re: How download data/file in http [Embperl 1.2.0]

2000-01-20 Thread Gerald Richter



Hi

I am using Embperl 1.2.0.

This is the problem:
I have a form with a submit button to download some data (or a file). I want
the user to be able to save these data. When I submit it, the header that I
want to send to generate the download is printed in the page (...and the
data too) and after it the html part.

What's wrong ? The header ? The Method ?

Thanks

-

The result is:

Content-type: application/octet-stream
Content-Transfer-Encoding: binary
Content-Disposition: attachment; filename="logs.txt"
one line one line one line one line one line ... etc...

...and the html part

The file "this_file.epl":

[- if ( $fdat{'export'} )
   {
     print "Content-type: application/octet-stream\n";
     print "Content-Transfer-Encoding: binary\n";
     print "Content-Disposition: attachment; filename=\"logs.txt\"\r\n\r\n";
Don't use print inside an Embperl page (unless you set optRedirectStdout).
You can't print headers inside an Embperl page; that is done by Embperl.
Assign them to the %http_headers hash, or use the $req_rec->header_out
mod_perl function.

Take a look at the Embperl FAQ for examples
 
Gerald
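A hedged illustration of Gerald's suggestion: in the Embperl 1.x documentation the outgoing-header hash is spelled %http_headers_out, and the page fragment below sets the download headers through it instead of print. Treat it as a sketch; check the spelling against your Embperl version.

```
[- if ( $fdat{'export'} )
   {
     $http_headers_out{'Content-Type'}              = 'application/octet-stream';
     $http_headers_out{'Content-Transfer-Encoding'} = 'binary';
     $http_headers_out{'Content-Disposition'}       = 'attachment; filename="logs.txt"';
   }
-]
```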

   


Re: question?

2000-01-20 Thread Rod Butcher

If I understand you correctly, you don't want to use mod_perl, just
Perl.
If so, the easiest way to use perl is to download ActivePerl (free) from the
ActiveState website http://www.activestate.com/ActivePerl/download.htm
It comes with great documentation, install it as per documentation.

If you wish to use CGI, the first line of each script should start with 
#!x:/yyy/perl.exe
where x is the drive and yyy is the directory where perl.exe is.
This is known as the shebang line in the trade.

You should join the activestate perl newsgroup at 
http://www.activestate.com/lyris/lyris.pl?join=perl-win32-users

Best Rgds
Rod Butcher

Jingtao Yun wrote:
> 
> Hi,
>I installed Apache Server for NT on my machine. But
> I don't know how to get perl to work not using module perl.
> Any message will be appreciated.
> 

-- 
Rod Butcher | "... I gaze at the beauty of the world,
Hyena Holdings Internet | its wonders and its miracles and out of
  Programming   | sheer joy I laugh even as the day laughs.
("it's us or the vultures") | And then the people of the jungle say,
[EMAIL PROTECTED] | 'It is but the laughter of a hyena'".
|Kahlil Gibran..  The Wanderer



Re: A fix in sight: Apache::ASP: crash when placed in startup.pl

2000-01-20 Thread Tim Bunce

On Wed, Jan 19, 2000 at 09:11:51AM +, Alan Burlison wrote:
> 
> I've posted a bug against DynaLoader to P5P.  I think the fix is
> probably to put an END handler in DynaLoader to unload the XS modules,
> but I'm not familiar enough with the perl interpreter cleanup processing
> to know if this is the best way.

Seems reasonable. Though people will object to the overhead for normal
perl unless it only does it when perl_destruct_level > 0 (as it is for
mod_perl and should be for all embedded perls).

Also, the mod_perl patch I just saw posted should probably remove
entries from DynaLoader::dl_librefs as it unloads them. Likewise for
any new DynaLoader END code. That way they'll play safe together.

Tim.
[Architect and author of much of DynaLoader in years gone by.]



Re: Apache locking up on WinNT

2000-01-20 Thread Waldek Grudzien

> > I added the warns to the scripts and it appears that access to the
modules
> > is serialised.  Each call to the handler has to run to completion before
> > any other handlers can execute.
> >
>
> Yes, on NT all accesses to the perl part are serialized. This will not
> change before mod_perl 2.0

Oh my ...
Indeed this happens. This is horrible ;o(
and makes mod_perl unusable for an NT web site with many visitors ;o(
What do you think - for an intranet web application, is it reasonable
to run a few Apaches (with mod_perl) on the same box?
[and assign users to the different Apaches]. How many Apaches can
I start ?

BTW Does anyone know when mod_perl 2.0 is supposed to be released ?

Best regards

Waldek Grudzien
_
http://www.uhc.lublin.pl/~waldekg/
University Health Care
Lublin/Lubartow, Poland
tel. +48 81 44 111 88
ICQ # 20441796



Re: Apache locking up on WinNT

2000-01-20 Thread Gerald Richter

> Oh my ...
> Indeed this happens. This is horrible ;o(
> and make mod_perl unusable with NT web site with many visitors ;o(
> How do you think - for intranet web application is it reasonable
> to run few Apaches (with mod_perl)  on the same box ?

yes, but as far as I know that isn't possible as a service. You must start
them from the DOS prompt. (I didn't look at this since 1.3.6, so there may be
a change in 1.3.9)

> [and assign users to the different apache]. How many Apache can
> I start ?
>

Only a matter of your memory...

You should also consider moving long running scripts to good old (external)
cgi scripts

> BTW Does anyone know when mod_perl 2.0 is supposed to be released ?
>

It should be coming when Apache 2.0 is coming; I don't know when this will
be, but I don't expect it in the near future

Gerald




EMBPERL_SESSION_ARGS problem

2000-01-20 Thread Jean-Philippe FAUVELLE

The following directives work fine with Embperl 1.2.0

>PerlSetEnv EMBPERL_SESSION_ARGS
"DataSource=dbi:mysql:database=www_sessions;host=dev2-sparc UserName=www
Password=secret"

>PerlSetEnv EMBPERL_SESSION_ARGS
"DataSource=dbi:mysql:database=www_sessions;host=localhost UserName=www
Password=secret"


But these ones cause a permanent fatal error.

>PerlSetEnv EMBPERL_SESSION_ARGS
"DataSource=dbi:mysql:database=www_sessions;host=dev2-sparc.dev.fth.net
UserName=www Password=secret"

>PerlSetEnv EMBPERL_SESSION_ARGS
"DataSource=dbi:mysql:database=www_sessions;host=193.252.66.1 UserName=www
Password=secret"


>[6431]ERR: 24: Line 13: Error in Perl code: DBI->connect failed: Access
denied for user: 'www@dev2-sparc' (Using password: YES) at
>/usr/local/lib/perl5/site_perl/5.005/Apache/Session/DBIStore.pm line 117
>
>Apache/1.3.9 (Unix) mod_perl/1.21 HTML::Embperl 1.2.0 [Thu Jan 20 12:17:42
2000]
>
>HTTP/1.1 500 Internal Server Error Date: Thu, 20 Jan 2000 11:17:41 GMT
Server: Apache/1.3.9 (Unix) mod_perl/1.21 Connection: close

Note that the two parameters differ only by the target hostname...
and that the sql server is local.

Could this be a bug in the parameter parser ?

Regards.

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Jean-Philippe FAUVELLE
<[EMAIL PROTECTED]> <[EMAIL PROTECTED]>
Responsable du Pole de Developpement Unix/Web
Departement Developpement Applicatif
France Telecom Hosting
40 rue Gabriel Crie, 92245 Malakoff Cedex, France
[http://www.fth.net/] [+33 (0) 1 46 12 67 89]
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=




Re: squid performance

2000-01-20 Thread Stas Bekman

On 20 Jan 2000, Greg Stark wrote:
> I tried to use the minspareservers and maxspareservers and the other similar
> parameters to let apache tune this automatically and found it didn't work out
> well with mod_perl. What happened was that starting up perl processes was the
> single most cpu intensive thing apache could do, so as soon as it decided it
> needed a new process it slowed down the existing processes and put itself into
> a feedback loop. I prefer to force apache to start a fixed number of processes
> and just stick with that number.

This shouldn't happen if you preload most or all of the code that you use.
The fork is very efficient on modern OSes, and since most use a
copy-on-write method, the spawning of a new process should be almost
unnoticeable.
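Preloading, as The Guide describes, happens in a startup file pulled in before the children are forked; a minimal sketch (the module list and path are illustrative):

```perl
# startup.pl, loaded from httpd.conf with:  PerlRequire /path/to/startup.pl
# Everything use'd here is compiled once in the parent, and those pages
# stay shared among the children via copy-on-write.
use strict;
use CGI ();
CGI->compile(':all');   # precompile CGI.pm's autoloaded methods too
use DBI ();
1;
```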

___
Stas Bekman    mailto:[EMAIL PROTECTED]    http://www.stason.org/stas
Perl, CGI, Apache, Linux, Web, Java, PC    http://www.stason.org/stas/TULARC
perl.apache.org    modperl.sourcegarden.org    perlmonth.com    perl.org
single o-> + single o-+ = singlesheaven    http://www.singlesheaven.com



Re: EMBPERL_SESSION_ARGS problem

2000-01-20 Thread Gerald Richter



>
> >[6431]ERR: 24: Line 13: Error in Perl code: DBI->connect failed: Access
> denied for user: 'www@dev2-sparc' (Using password: YES) at
> >/usr/local/lib/perl5/site_perl/5.005/Apache/Session/DBIStore.pm line 117
> >
> Note that the two parameters differ only by the target hostname...
> and that the sql server is local.
>
> Could this be a bug in the parameter parser ?
>

I don't think so, because the ARGS are only split on \s+, so there shouldn't
be a problem with dots (though I never tried it). To verify this you may
search for

push @args, $1 ;
push @args, $2 ;

(about line 369 in Embperl.pm) and insert a

warn "$1 = $2" ;

afterwards. Now you should see the result of the parsing when you start
Apache

Gerald




Re: Apache locking up on WinNT

2000-01-20 Thread Waldek Grudzien

> > Oh my ...
> > Indeed this happens. This is horrible ;o(
> > and make mod_perl unusable with NT web site with many visitors ;o(
> > How do you think - for intranet web application is it reasonable
> > to run few Apaches (with mod_perl)  on the same box ?
>
> yes, but as far as I know that isn't possible as service. You must start
> them from the DOS prompt. (I didn't look at this since 1.3.6, so there may
> be a change in 1.3.9)

In 1.3.9 you can register as many services as you want... ;o)

> > [and assign users to the different apache]. How many Apache can
> > I start ?
> >
>
> Only a matter of your memory...

How many Apaches with mod_perl can a 128 MB NT box take without hurting?
(for an application with 91 scripts and modules totalling 376 KB of
Perl code)
(or how much RAM should I add ?)

> You should also consider to move long running script to good old (external)
> cgi scripts

I know -  now... ;o)

> > BTW Does anyone know when mod_perl 2.0 is supposed to be released ?
> >
>
> It should be comming when Apache 2.0 is comming, don't know when this will
> be, but I don't expect it in the near future

It is not good news ;-(. In the meantime many fellows may choose PHP instead
of Perl solutions ;o( (moreover, speed charts say PHP is a little bit faster
than mod_perl: http://www.chamas.com/hello_world.html)
I like Perl so much that I wouldn't like to be forced (by the boss ;o)) some
day to start the next web apps with PHP ;o(

Best regards,

Waldek Grudzien
_
http://www.uhc.lublin.pl/~waldekg/
University Health Care
Lublin/Lubartow, Poland
tel. +48 81 44 111 88
ICQ # 20441796



Performance ?'s regarding Apache::Request

2000-01-20 Thread Clifford Lang

mod_perl 1.21
Apache 1.3.9
Solaris 2.5.1, Linux 6.0

Is this a good or bad idea?

I want to create an inheritable module based on Apache::Request mainly for
uploading files, then create individual PerlHandler modules for individual
page content.

If I do this, will the uploaded files end up increasing the memory
consumption of the module, or is all memory freed after the file upload
process?

I was going to use "register_cleanup(\&CGI::_reset_globals)" to clear the
CGI environment but don't know if that frees up the memory.  If used, should
the register_cleanup be attached to the original request object ($r =
shift;), or the $apr=Apache::Request->new($r) object?

Should I (How can I) destroy the entire page after each request?  Doing so
would lose some of the reason for using mod_perl, I'd like to write these
handlers with all dynamic content (no "real" page source).  Some thoughts
for destruction would be to create the Apache::Request::Upload as a blessed
object, then destroy it on page completion - Is this wise or even possible?
Any pointers on how to accomplish this?


TIA,  Cliff



Re: Apache locking up on WinNT

2000-01-20 Thread Gerald Richter

> > > [and assign users to the different apache]. How many Apache can
> > > I start ?
> > >
> >
> > Only a matter of your memory...
>
> How much apaches with mod_perl won't hurt 128 MB NT box ?
> (for application with 91 scripts and modules with summary
> PERL code 376 KB)
> (or how many RAM should I add ?)
>

I think you have to try it out. The code should be shared in memory, but the
data is unique for every Apache. Use the NT System Monitor to make sure not
too much memory swapping takes place.

>
> It is not good news ;-(. Many fellows can choose meantime PHP instead PERL
> solutions ;o( (moreover speed charts  says PHPis a little bit faster than
> mod_perl
> http://www.chamas.com/hello_world.html)

I wouldn't give much for these benchmarks, because they only test the
startup time. In a real application, maybe with database access, things may
look different...

> I like PERL so much so I wouldn't like to be forced (by boss ;o)) some day
> to
> start next web apps with PHP ;o(
>

Of course you are right, but somebody has to make the changes in mod_perl


Gerald




Re: horrible memory consumption

2000-01-20 Thread Bill


Stas Bekman wrote:
> >If anyone knows how to figure out shared vs not shared memory in a
> > process on Linux, I'd be interested in that too...and it sounds like Jason
> > could benefit from the info as well. I know that Apache::Gtop can be
> > used for mod_perl from reading The Guide (thanks Stas!) but I'm interested
> > in finding out these numbers for other non-mod_perl binaries too (such as
> > an apache+mod_proxy binary). Also if anyone has any good pointers to info
> > on dynamic linking and libraries (again, oriented somewhat towards Linux),
> > I've yet to see anything that's explained things sufficiently to me yet.
> > Thanks.
> 
> GTop (not Apache::Gtop) is written by Doug, not me :) So the credits go to
> Doug.

   Well, I meant thanks for mentioning it in the guide. I hadn't heard of
it before. :)

> What you are talking about is Apache::VMonitor which shows you almost
> everything the top(1) does and much more. You can monitor the
> apache/mod_perl processes and any other non-mod_perl processes as well.
> 
> When you click on the process id you get lots of information about it,
> including memory maps a sizes of the loaded libs.
> 
> This all of course uses GTop, which in turn uses libgtop from the GNOME
> project and lately reported to be ported to new platforms as well.
> 
> Enjoy!

   Interesting. I'll have to check it out. Thanks...

- Bill



RE: mod_rewrite and Apache::Cookie

2000-01-20 Thread Geoffrey Young

for anyone interested...

I wrote a PerlTransHandler and removed mod_rewrite and am seeing the same
problem as outlined below...

can anyone verify this?

--Geoff

> -Original Message-
> From: Geoffrey Young 
> Sent: Wednesday, January 19, 2000 9:27 AM
> To: '[EMAIL PROTECTED]'
> Subject: mod_rewrite and Apache::Cookie
> 
> 
> hi all..
> 
> I've noticed that using mod_rewrite with Apache::Cookie 
> exhibits odd behavior...
> 
> scenario:
>   foo.cgi uses Apache::Cookie to set a cookie
>   mod_rewite writes all requests for index.html to 
> /perl-bin/foo.cgi
> 
> problem:
>   access to /perl-bin/foo.cgi sets the cookie properly
>   access to /  or index.html runs foo.cgi, and attempts 
> to set the cookie, but $cookie->bake issues the generic:
> Warning: something's wrong at 
> /usr/local/apache/perl-bin/foo.cgi line 34.
> 
> While I know I can use a PerlTransHandler here (and probably 
> will now), does anyone have any ideas about this behavior?
> 
> In the meanwhile, if I find out anything more while 
> investigating, I'll post it...
> 
> --Geoff
> 



Re: How do I package my DBI subroutines?

2000-01-20 Thread G.W. Haywood

Hi there,

On Wed, 19 Jan 2000, Keith Kwiatek wrote:

> We have recently installed a new machine with Apache/1.3.9
> mod_perl/1.21 mod_ssl/2.4.10 OpenSSL/0.9.4 perl 5.004_04 configured

> A perl transaction handler that works fine on Apache/1.3.6
> mod_perl/1.21 5.00503 is now intermittantly dying on the new box
> with the following error;

> Can't call method "register_cleanup" on an undefined value at
> /usr/lib/perl5/5.00503/CGI.pm at line 263

Did you get a reply yet?

Your installation is apparently out of order.  You claim to have
configured perl 5.004_04 on the new box yet the error message says:

/usr/lib/perl5/5.00503/CGI.pm at line 263
   ^^^
which is the wrong version of Perl.  This doesn't seem like a good
idea to me.  I'd delete the lot and reinstall everything from scratch.

> I have some dbi subroutines that I want my mod_perl program to
> use. how do I go about packaging these (or whatever the correct
> terminology is)

I think you'll find everything you need for this in Stas' Guide.

73,
Ged.



How do you handle simultaneous/duplicate client requests with modperl?

2000-01-20 Thread Keith Kwiatek

Hello,

I have a mod_perl application that takes a request from a client,  then does
some transaction processing with a remote system, which then returns a
success/fail result to the client. The transaction MUST happen only ONCE per
client session.

PROBLEM: the client clicks the submit button twice, thus sending two
requests and spawning two different processes to do the same remote
transaction. BUT the client request MUST be processed only ONCE for a given
session_id. The first request starts a process to initiate the remote
transaction, and then the second request's process starts, not knowing about
the first process. The result is that the client has the transaction
performed two times!

How do you handle this? My first thought is to write a "processing status"
value to the session hash (using apache::session) AS SOON as the first
request is received, and then when the second duplicate request is received,
check the "processing status" in the session hash. If the processing status
is "in progress", then wait till the processing status in the session hash
is updated by the first request process and return the result.

Is my concept on target? Is my implementation right? (or should I write
directly to the files system?)

Does anyone have any experience with such things? Can you give me your
wisdom?

Keith



question about PerlChildInitHandler & PerlChildExitHandler

2000-01-20 Thread Huan He

Hello,

Are there some similar directives for the threads on NT ? 
The problem I run into is: on UNIX, I open sockets during the
PerlChildInitHandler phase and close them during the PerlChildExitHandler
phase, but when I try to port my code to the NT platform, I don't know how
to do it. Does anyone know any documents or web sites I can refer to
regarding the NT mod_perl environment ?

Thanks,
Huan



Compile error with mod_perl_1.21

2000-01-20 Thread Asghar Nafarieh


Hi,

I get the following link error when I try to make apache_1.3.9 with 
mod_perl-1.21. Am I missing a library module?

Thanks,

-Asghar


This is how I built it:
cd mod_perl-1.21
perl Makefile.PL PREP_HTTPD=1
make
make test
make install

cd ../apache_1.3.9
./configure --with-layout=RedHat --target=perlhttpd 
--activate-module=src/modules/perl/libperl.a






gcc -c  -I./os/unix -I./include   -DLINUX=2 -DTARGET=\"perlhttpd\" -DUSE_HSREGEX 
-DUSE_EXPAT -I./lib/expat-lite `./apaci` buildmark.c
gcc  -DLINUX=2 -DTARGET=\"perlhttpd\" -DUSE_HSREGEX -DUSE_EXPAT 
-I./lib/expat-lite `./apaci`\
  -o perlhttpd buildmark.o modules.o modules/perl/libperl.a 
modules/standard/libstandard.a main/libmain.a ./os/unix/libos.a ap/libap.a 
regex/libregex.a lib/expat-lite/libexpat.a  -lm -lcrypt
modules/perl/libperl.a(mod_perl.o): In function `perl_shutdown':
mod_perl.o(.text+0xf8): undefined reference to `PL_perl_destruct_level'
mod_perl.o(.text+0x102): undefined reference to `PL_perl_destruct_level'
mod_perl.o(.text+0x10c): undefined reference to `PL_perl_destruct_level'
mod_perl.o(.text+0x13b): undefined reference to `Perl_av_undef'

 MORE ERROR



Re: How do you handle simultaneous/duplicate client requests with modperl?

2000-01-20 Thread Victor Zamouline

>If the processing status
>is "in progress"

Be careful if you expect the "in progress" status change to "done". If an
error happens and the status never switches to "done", the session ID will
remain forever "in progress" and a second attempt of the same request will
be refused by your own application.

I use the "done" status immediately in my own applications.

Just for info, my own server has plenty of pages where there is a "double
click risk", but I have never been able to reproduce the problem by
double-clicking the "Submit" button myself. Do both Netscape and MSIE have
"doubleclickproof" Submit buttons?

Also, I have not yet received a single problem report from my own server due
to that problem, although my clientele is entirely computer-illiterate.

Anyway, I guess there is absolutely no way to know whether receiving two
identical requests from the same SessionID means that the user
double-clicked Submit. I don't think you can rely on the time interval
between the two requests, due to network latency and the fact that the two
requests may be handled by two distinct children.

Vic.



Re: oracle : The lowdown

2000-01-20 Thread Ed Phillips

For those of you tired of this thread please excuse me, but
here is MySQL's current position statement on and discussion
about transactions:

Disclaimer: I just helped Monty write this partly in response to
some of the fruitful, to me, discussion on this list. I know
this is not crucial to mod_perl but I find the "wise men who 
are enquirers into many things" to be one of the great things
about this list, to paraphrase old Heraclitus. I learn quite
a bit about quite a few things by following leads and hints here
as well as by seeing others' problems.

I'd love to see your criticism of the below either here or
off the list.


Ed
-


The question is often asked, by the curious and the critical, "Why is
MySQL not a transactional database?" or "Why does MySQL not support
transactions?"

MySQL has made a conscious decision to support another paradigm for 
data integrity, "atomic operations." It is our thinking and experience 
that atomic operations offer equal or even better integrity with much 
better performance. We, nonetheless, appreciate and understand the 
transactional database paradigm and plan, in the next few releases, 
on introducing transaction safe tables on a per table basis. We will 
be giving our users the possibility to decide if they need
the speed of atomic operations or if they need to use transactional 
features in their applications. 

How does one use the features of MySQl to maintain rigorous integrity 
and how do these features compare with the transactional paradigm?

First, in the transactional paradigm, if your applications are written 
in a way that is dependent on the calling of "rollback" instead of "commit" 
in critical situations, then transactions are more convenient. Moreover, 
transactions ensure that unfinished updates or corrupting activities 
are not committed to the database; the server is given the opportunity 
to do an automatic rollback and your database is saved. 

MySQL, in almost all cases, allows you to solve potential problems
by including simple checks before updates and by running simple
scripts that check the databases for inconsistencies and automatically
repair them, or warn you when they occur. Note that just by using the
MySQL log, or even adding one extra log, one can normally fix tables
perfectly with no loss of data integrity.
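The "simple checks before updates" mentioned above can often be folded
into the update itself. A minimal sketch (the accounts table and its
columns are hypothetical, not part of the original text):

```sql
-- Guarded update: the WHERE clause repeats the integrity check, so
-- the row is only changed if the precondition still holds at the
-- moment the update runs.
UPDATE accounts
   SET balance = balance - 25
 WHERE acct_id = 42
   AND balance >= 25;
```

If the affected-row count comes back as zero, the check failed and the
application can report an error to the user, with no rollback needed.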

Moreover, "fatal" transactional updates can be rewritten to be
atomic. In fact, we will go so far as to say that all integrity
problems that transactions solve can be handled with LOCK TABLES or
atomic updates, ensuring that you never get an automatic abort from
the database, which is a common problem with transactional databases.
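As a hedged illustration of the claim above, a two-row update that a
transactional system would wrap in BEGIN/COMMIT can be made atomic
with LOCK TABLES (table and column names here are hypothetical):

```sql
-- Both updates happen under one write lock, so no other client can
-- observe the funds "in flight" between the two statements.
LOCK TABLES accounts WRITE;
UPDATE accounts SET balance = balance - 100 WHERE acct_id = 1;
UPDATE accounts SET balance = balance + 100 WHERE acct_id = 2;
UNLOCK TABLES;
```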
 
Not even transactions can prevent all loss if the server goes down;
in such cases even a transactional system can lose data. The
difference between systems lies in just how small the window is in
which they could lose data. No system is 100% secure, only "secure
enough". Even Oracle, reputed to be the safest of transactional
databases, is reported to sometimes lose data in such situations.

To be safe with MySQL you only need to have backups and have the update
logging turned on.  With this you can recover from any situation that you could
with any transactional database.  It is, of course, always good to have
backups, independent of which database you use.

The transactional paradigm has its benefits and its drawbacks. Many
users and application developers depend on the ease with which they
can code around problems where an "abort" appears or is necessary,
and they may have to do a little more work with MySQL, either by
thinking differently or by writing more. If you are new to the atomic
operations paradigm, or more familiar or comfortable with
transactions, do not jump to the conclusion that MySQL has not
addressed these issues. Reliability and integrity are foremost in our
minds.

Recent estimates are that there are more than 1,000,000 mysqld
servers currently running, many of which are in production
environments. We very, very seldom hear from our users that they have
lost any data, and in almost all of those cases user error was
involved. This is, in our opinion, the best proof of MySQL's
stability and reliability.

Lastly, in situations where integrity is of the highest importance,
MySQL's current features allow for transaction-level or better
reliability and integrity.

If you lock tables with LOCK TABLES, all updates will stall until the
integrity checks are made. If you take only a read lock (as opposed
to a write lock), then reads and inserts are still allowed to happen.
The newly inserted records will not be seen by any of the clients
that hold a READ lock until they release their read locks. With
INSERT DELAYED you can queue inserts into a local queue until the
locks are released, without making the client wait for the insert to
complete.
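A hedged sketch of the read-lock and delayed-insert behaviour just
described (the log_entries table is hypothetical):

```sql
-- Other clients may continue reading, and may even insert, while
-- this READ lock is held; this client sees a consistent snapshot.
LOCK TABLES log_entries READ;
SELECT COUNT(*) FROM log_entries;
UNLOCK TABLES;

-- A writer that does not want to block on the lock hands the row to
-- the server's delayed-insert queue and returns immediately.
INSERT DELAYED INTO log_entries (msg) VALUES ('request logged');
```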


Atomic, in the sense that we mean it, is nothing magical; it only means
that you can be sure that while each specific up