On Fri, Apr 28, 2000 at 03:01:01PM -0500, Igor Chudov @ home wrote:
>
> Glad you liked it! I am having fun writing it... I hope that it is helpful
> to children...
Maybe, but if I see more of those punch-the-monkey ads, I'm going to retire
to a log cabin in the woods and write an anti-web manifesto.
A better way? How about: the httpd stuffs a new record into an SQL
table and returns, and then have another daemon that just pulls
new records off the top of the SQL table for processing sequentially...
that way you process them as fast as you can, but don't give a bad guy
(or a bad spider) a chance
On Sat, 15 Apr 2000 [EMAIL PROTECTED] wrote:
>
> > It is basically very memory inefficient. Each Apache process has
> > its own registry, which holds the compiled perl scripts: a copy of
> > each for each process. This has become an issue for one of the
> > companies that I work for, and
Hope this helps...
GNU gdb 19991004
(gdb) httpd
Program received signal SIGSEGV, Segmentation fault.
0x806412e in perl_handler (r=0x8727d9c) at mod_perl.c:844
844 dPPREQ;
(gdb) bt
#0 0x806412e in perl_handler (r=0x8727d9c) at mod_perl.c:844
#1 0x8097e23 in ap_invoke_handler (r=0x8727
The usual reasons: I bet they use Lotus Notes internally for
everything, and Domino is the only choice for those
companies that have chained themselves^h^h^h^h made a strategic
decision to go Notes.
-Justin
On Sun, Apr 23, 2000 at 02:24:20PM -0400, gnielson wrote:
> I am just curious why they are
After much fast progress building a new machine, I'm stuck.
This is a vanilla RH6.2 box with almost nothing on it... no
residue from RPM perl or httpd (deselected at machine blast time).
I've built perl 5.6.0 (all tested out ok), also built apache 1.3.12,
both with and without Ben-SSL (all tested out ok)
I looked at mod_proxy and found the pass-through buffer size
is IOBUFSIZ; it reads that much from the remote server, then
writes it to the client, in a loop.
Squid uses 16K.
Neither is enough.
In an effort to get those mod_perl daemons to free up for long
requests, it is possible to patch mod_proxy to read as
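The buffering idea above can be sketched as follows (a Python illustration, not the actual mod_proxy C patch; stream objects stand in for sockets, and all names and the buffer cap are hypothetical):

```python
import io

# Instead of relaying in small IOBUFSIZ-sized chunks while both ends
# stay tied up, read the whole backend response (up to a cap) first,
# so the backend (e.g. a mod_perl child) is freed before the slow
# client has been served.

def relay_with_readahead(backend, client, max_buffer=256 * 1024):
    buffered = backend.read(max_buffer)   # drain the backend quickly
    backend.close()                       # backend worker is now free
    view = memoryview(buffered)
    chunk = 4096
    for off in range(0, len(view), chunk):
        client.write(view[off:off + chunk])   # trickle out to the client
    return len(buffered)
```

The trade-off is memory in the proxy versus occupancy of a heavyweight mod_perl process: holding a few hundred KB per slow client in a lightweight proxy is much cheaper than holding a whole perl interpreter hostage for the duration of the transfer.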
Hi, I am switching my mod_perl site to squid in httpd accelerator mode
and everything works as advertised, but I was very surprised to find
squid 10x slower than Apache on a cached 6k GIF as measured by
ApacheBench... 100 requests/second vs. almost 1000 for Apache.
The squid logs show all cache hits.
Hi, I've just emerged from about 5 months of fairly continuous late-night
development; what started as a few Berkeley DB files and a tiny
CGI perl script has now grown to (for me) a monster size, kept largely
at bay by mod_perl.
I hope this story will be interesting to those who are taking the first steps
Thanks,
I will probably do that next. I am not clear what the difference is
between running squid and using mod_proxy, in the case of all-dynamic
content... a remark in the tuning guide that I saw back in June was vague about
this, saying it wasn't clear whether squid can cache larger documents