pnotes preserved across calls?!
Hi perlites, I'm getting my brain twisted here...

[FYI: Apache 2.0.51, MP 2.0.1, Linux 2.6.5]

Still debugging, but it *seems* like $r->pnotes is being preserved between requests. I'm storing a hash ref in it like this:

  $r->pnotes('cookies', $cookie_hash);

Are there any circumstances where pnotes could be preserved between requests to the same thread? I presume $r is never re-used...

(I know refs themselves will be preserved, so I'm only concerned that the values in pnotes itself are being preserved, not the values they refer to.)

cheers
John
Re: pnotes preserved across calls?!
A little bit more code would help to see what you are really doing ;-)

Tom

John ORourke wrote:
> Hi perlites, I'm getting my brain twisted here...
>
> [FYI: Apache 2.0.51, MP 2.0.1, Linux 2.6.5]
>
> Still debugging, but it *seems* like $r->pnotes is being preserved
> between requests. I'm storing a hash ref in it like this:
>
>   $r->pnotes('cookies', $cookie_hash);
>
> Are there any circumstances where pnotes could be preserved between
> requests to the same thread? I presume $r is never re-used...
>
> (I know refs themselves will be preserved, so I'm only concerned that the
> values in pnotes itself are being preserved, not the values they refer to.)
>
> cheers
> John
Re: build problems/not finding libapr
Thanks, that fixed it. I knew it was something very simple, and I can stop beating my head against the wall. For the record, my Makefile.PL call is as follows:

  perl Makefile.PL MP_USE_STATIC=1 \
    MP_AP_PREFIX=/home/albert/download/httpd-2.0.55 \
    MP_AP_CONFIGURE="--with-mpm=worker --enable-proxy --enable-proxy-http \
      --enable-ssl --enable-so --enable-rewrite"

-albert

On 19.12.2005, at 06:11, Philip M. Gollucci wrote:

> What was your Makefile.PL line for mp2 and your ./configure line for httpd?
>
> The reason this is failing is that /home/albert/download/httpd-2.0.55/httpd/lib
> is not in your LD_LIBRARY_PATH. Run
>
>   ldconfig -m /home/albert/download/httpd-2.0.55/httpd/lib
>
> or
>
>   setenv LD_LIBRARY_PATH /home/albert/download/httpd-2.0.55/httpd/lib
>
> before `make test` in mod_perl2.
>
> --
> "Love is not the one you can picture yourself marrying,
> but the one you can't picture the rest of your life without."
>
> "It takes a minute to have a crush on someone, an hour to like someone,
> and a day to love someone, but it takes a lifetime to forget someone..."
>
> Philip M. Gollucci ([EMAIL PROTECTED]) 301.254.5198
> Consultant / http://p6m7g8.net/Resume/resume.shtml
> Senior Software Engineer - TicketMaster - http://ticketmaster.com
>
> On Mon, 19 Dec 2005, Albert Vernon Smith wrote:
>
>> -8<-- Start Bug Report 8<--
>> 1. Problem Description:
>>
>> I am trying to build mod_perl 2.0.2 against httpd 2.0.55 (static mod_perl
>> and worker MPM). Everything runs fine, except when I run 'make test' all
>> of the tests in t/apr-util fail with similar errors. For example:
>>
>>   t/apr-ext/uri
>>   Can't load '/home/albert/download/mod_perl-2.0.2/blib/arch/auto/APR/APR.so'
>>   for module APR: libaprutil-0.so.0: cannot open shared object file: No such
>>   file or directory at /usr/lib64/perl5/5.8.5/x86_64-linux-thread-multi/DynaLoader.pm line 230.
>>   at /home/albert/download/mod_perl-2.0.2/blib/lib/APR/URI.pm line 23
>>   Compilation failed in require at /home/albert/download/mod_perl-2.0.2/blib/lib/APR/URI.pm line 23.
>>   BEGIN failed--compilation aborted at /home/albert/download/mod_perl-2.0.2/blib/lib/APR/URI.pm line 23.
>>   Compilation failed in require at /home/albert/download/mod_perl-2.0.2/t/lib/TestAPRlib/uri.pm line 11.
>>   BEGIN failed--compilation aborted at /home/albert/download/mod_perl-2.0.2/t/lib/TestAPRlib/uri.pm line 11.
>>   Compilation failed in require at t/apr-ext/uri.t line 7.
>>   BEGIN failed--compilation aborted at t/apr-ext/uri.t line 7.
>>   dubious
>>   Test returned status 255 (wstat 65280, 0xff00)
>>
>> Following a suggestion in the troubleshooting guide, I looked with
>> 'ldd blib/arch/auto/APR/APR.so | grep apr' and I get:
>>
>>   libaprutil-0.so.0 => not found
>>   libapr-0.so.0 => not found
>>
>> So, it seems that compilation of APR is not finding libapr or libaprutil
>> for some reason.
Re: pnotes preserved across calls?!
Quite right Tom... I think I found the problem anyway; it appears to work fine now - see below. I've collapsed a few function calls to make it clearer, but you'll see what silliness was going on...

I'm using method handlers, and the object is preserved between requests, like so:

  package DataSite;
  $__PACKAGE__::persistent = __PACKAGE__->new();

httpd.conf:

  PerlFixupHandler $DataSite::persistent->fixup

Then I allow each phase to collect the cookies like this... spot the preserved reference...

  sub fixup : method {
      my ($self, $r) = @_;
      my $cookies = $self->get_cookies($r);
      if ($$cookies{shopping_cart}) {
          $self->{shopping_cart} = $$cookies{'shopping_cart'};
          # oh boy now I feel stupid...
      }
  }

And just for the record, get_cookies (written this way so every phase can call it but the cookies are only fetched once):

  sub get_cookies {
      my ($self, $r) = @_;
      my $cookies = $r->pnotes('cookies') and return $cookies;
      $cookies = Apache2::Cookie::Validated->fetch($r, $my_secret_key);
      $r->pnotes('cookies', $cookies);
      return $cookies;
  }

(Apache2::Cookie::Validated is a subclass I made which validates cookie content to prevent tampering - useful for authentication and online ordering cookies. Hoping to get time to CPANify it one day.)

cheers
John

Tom Schindl wrote:
> A little bit more code would help to see what you are really doing ;-)
> Tom
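John's bug is reproducible outside Apache in a few lines of plain Perl. The sketch below is illustrative only (the class and method names are made up, and a plain hash stands in for pnotes): one long-lived handler object serves many "requests", and anything stashed in $self on one request survives into the next, while anything stashed in the per-request store does not.

```perl
use strict;
use warnings;

package LeakyHandler;

sub new { bless {}, shift }

# Simulates one request. $req stands in for the per-request store (pnotes);
# $cookies is this request's parsed cookie hash.
sub handle_request {
    my ($self, $req, $cookies) = @_;

    # Per-request storage: discarded along with $req when the request ends.
    $req->{cookies} = $cookies;

    # Per-object storage: the object outlives the request, so this value
    # survives into the NEXT request - the "silliness" in the thread.
    $self->{shopping_cart} = $cookies->{shopping_cart}
        if exists $cookies->{shopping_cart};

    return $self->{shopping_cart};
}

package main;

# One object reused for every request, like $DataSite::persistent above.
my $persistent = LeakyHandler->new;

# Request 1 carries a cart cookie; request 2 carries none.
my $cart1 = $persistent->handle_request({}, { shopping_cart => 'widget' });
my $cart2 = $persistent->handle_request({}, {});

print "request 1 cart: $cart1\n";
print "request 2 cart: $cart2\n";    # still 'widget' - leaked from request 1
```

The fix is exactly what John found: keep per-request data in the per-request store ($r or pnotes), never in the persistent handler object.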
Re: go crazy with me
Please, I specifically asked not to tell me how else to do it. I want to know how, if at all, it's possible to do it under Apache/mod_perl. I know I can do it 1,000,000 other ways that I'm totally not interested in. I just want everyone to focus on what's possible with Apache/mod_perl, and nothing outside of Apache/mod_perl. If it's not possible, so be it.

I'm not trying to solve a problem here. I've already solved the problem. What I'm trying to do is see if I can solve it better with Apache/mod_perl.

Sorry if my words sound harsh. They aren't meant to. They're simply meant to stress that solutions outside of Apache/mod_perl aren't what I'm interested in.

On Mon, 19 Dec 2005 10:31:05 -0500 Jonathan Vanasco <[EMAIL PROTECTED]> wrote:
> You could look into the Twisted framework for python:
> http://twistedmatrix.com/
> It's a really solid networking framework, but it's python based (not perl).
>
> On Dec 19, 2005, at 12:48 AM, JT Smith wrote:
>> Yup, I've actually already done it that way with both
>> Parallel::ForkManager in one instance and Proc::Queue as an alternative.
>> I added in event handling with both Event and Event::Lib as separate
>> trials. All those implementations were relatively easy to do. But the
>> question becomes, why? If everything else is running in Apache, why start
>> a separate service to run these tasks? And again, I said I want to go
>> crazy. Let's not figure out how else we could do that (I already know
>> that), but how could we do it using Apache?
>>
>> However, you're right, I should look back at the list archives and see
>> what conclusions other people asking similar questions came to. I guess I
>> hadn't considered that this question would have been asked before.
>>
>> On Mon, 19 Dec 2005 00:28:42 -0500 Perrin Harkins <[EMAIL PROTECTED]> wrote:
>>> On Sun, 2005-12-18 at 21:18 -0600, JT Smith wrote:
>>>> I want to turn it into a workflow system. If you think about it,
>>>> workflow is nothing but a set of transactional tasks (nothing new) with
>>>> two additional components (here's where it gets weird). The two
>>>> additional components are cron (scheduling) and queue (a task executor).
>>>
>>> Earl Cahill and I talked about how to use apache for a queue system on
>>> the list a while back. In the end, we both decided it was a bad idea.
>>> Apache is a very flexible network server, but this task is very unlike a
>>> network server. I ended up writing a simple forking daemon with
>>> Parallel::ForkManager that stores the queue in a database, and I think
>>> Earl ended up with something similar.
>>>
>>> - Perrin

JT ~ Plain Black
ph: 703-286-2525 ext. 810
fax: 312-264-5382
http://www.plainblack.com
I reject your reality, and substitute my own. ~ Adam Savage
Re: build problems/not finding libapr
> Thanks, that fixed it. I knew it was something very simple, and I can stop
> beating my head against the wall. For the record, my Makefile.PL call is as
> follows:
>
>   perl Makefile.PL MP_USE_STATIC=1 \
>     MP_AP_PREFIX=/home/albert/download/httpd-2.0.55 \
>     MP_AP_CONFIGURE="--with-mpm=worker --enable-proxy --enable-proxy-http \
>       --enable-ssl --enable-so --enable-rewrite"

Just so you know, you should add the ldconfig -m to run at boot time, or httpd will fail to start, as that path will not be in the LD_LIBRARY_PATH again.

--
Philip M. Gollucci ([EMAIL PROTECTED]) 301.254.5198
Consultant / http://p6m7g8.net/Resume/resume.shtml
Senior Software Engineer - TicketMaster - http://ticketmaster.com
Re: go crazy with me
On Sun, 2005-12-18 at 23:48 -0600, JT Smith wrote:
> I added in event handling with both Event and Event::Lib as separate trials.

I just used a short sleep with Time::HiRes between polling the database for new jobs.

> If everything else is running in Apache, why start a separate service to
> run these tasks?

Because everything else sounds much harder and less reliable. That was my reason.

> And again, I said I want to go crazy. Let's not figure out how else we
> could do that (I already know that), but how could we do it using Apache?

I came up with about a half dozen possible ways of doing our queue system. Only one of them used apache as the only daemon. I looked into custom protocol handlers and the rest of the mod_perl 2 API, and there's nothing I can see that would make a time-based system possible. It would require rewriting some C code and probably changing things that are not part of the module API.

The idea I had for handling events without all that goes like this:

Run a mod_perl server that the job submitters contact via HTTP. When a process gets a request, it checks to see if there are enough listener processes free for accepting jobs (as opposed to processing jobs). If there are not, it adds the job to the queue and goes back to listening for requests. If there are, it processes the job. This ensures that processing jobs does not starve the ability to accept new ones.

A process which has started working on a job will loop (keeping the current request alive), pulling jobs off the queue and working on them until the queue is empty again, when it will allow the request to finish and go back to sleep. In other words, new requests will start child processes working, and all processes that get started will stay working until the queue is empty again.

Pros:
* No polling except by processes that have just finished a job and are deciding whether or not to exit. (This may still be more than the polling done by a simple perl daemon.)
* Quick pickup of new jobs.

Cons:
* Clustering would require some kind of custom load balancer that would know which machines were actually busiest. This might involve reading the scoreboard, or something more complex. Much harder than other approaches.
* No obvious way to tell how many processes are working vs. listening. Would probably need to use something like Cache::FastMmap to track this.
* The whole idea is fairly hard to explain, which probably means it's too complex and will be hard to build and debug.

Anyway, feel free to expand on this idea or try it out. As complex as it is, it avoids having to delve into the guts of the httpd code.

- Perrin
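Perrin's accept-or-drain scheme can be sketched in plain Perl, with an in-memory array standing in for the shared queue. Everything here is invented for illustration (it is not mod_perl API, and real worker accounting would need something like Cache::FastMmap as noted above): a submission either gets queued, or claims a worker that keeps pulling jobs until the queue is empty.

```perl
use strict;
use warnings;

my @queue;               # stand-in for the shared job queue
my $busy_workers  = 0;
my $total_workers = 4;   # pretend pool size

# Called for each incoming job submission.
sub handle_submission {
    my ($job) = @_;
    my $free = $total_workers - $busy_workers;
    if ($free <= 1) {
        # Not enough free listeners left: queue the job, return immediately.
        push @queue, $job;
        return 'queued';
    }
    # Enough listeners remain: this worker processes until the queue drains.
    $busy_workers++;
    my $done = drain_queue($job);
    $busy_workers--;
    return "processed $done";
}

sub drain_queue {
    my ($job) = @_;
    my $done = 0;
    while (defined $job) {
        $job->();              # do the work
        $done++;
        $job = shift @queue;   # keep pulling until the queue is empty
    }
    return $done;
}

my @log;
print handle_submission(sub { push @log, 'a' }), "\n";   # processed 1
push @queue, sub { push @log, 'b' };                     # a job queued earlier
print handle_submission(sub { push @log, 'c' }), "\n";   # processed 2
print "@log\n";                                          # a c b
```

Note how the second submission also drains the previously queued job before returning: that is the "stay working until the queue is empty again" property.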
Re: go crazy with me
JT Smith wrote:
> Yup, I've actually already done it that way with both Parallel::ForkManager
> in one instance and Proc::Queue as an alternative. I added in event handling
> with both Event and Event::Lib as separate trials. All those implementations
> were relatively easy to do. But the question becomes, why? If everything
> else is running in Apache, why start a separate service to run these tasks?
> And again, I said I want to go crazy. Let's not figure out how else we could
> do that (I already know that), but how could we do it using Apache?

Here at mailchannels.com we first used mp2 to handle the email traffic shaping entirely inside mod_perl2, but the nature of our product is so different from serving HTTP that it just won't scale (mostly memory-wise, but also too many processes). We have now switched to having Event::Lib (over libevent) do all the non-blocking IO and using mp2's protocol handler to do blocking IO (like network-bound operations). The performance is just amazing - hardly any memory used, and we can easily handle a thousand concurrent connections on very low-end hardware.

Switching to event-based flow was a challenge, since you no longer have the normal logic flow. But we have written a few abstraction layers and now it's almost easy. We are planning to release our AsyncIO abstraction module on CPAN once we have some spare resources.

I highly recommend Event::Lib, at least for its wonderful maintainer: Tassilo von Parseval, who's a great perl/C/XS expert and who is resolving any problems with Event::Lib almost as soon as we are posting the bug reports. I wish more CPAN authors were as responsive as Tassilo is :)

--
Stas Bekman mailto:[EMAIL PROTECTED] http://stason.org/
MailChannels: Assured Messaging(TM) http://mailchannels.com/
The "Practical mod_perl" book http://modperlbook.org/
http://perl.apache.org/ http://perl.org/ http://logilune.com/
Re: go crazy with me
On Mon, 19 Dec 2005, Perrin Harkins wrote:
> processes free for accepting jobs (as opposed to processing jobs). If
> there are not, it adds the job to the queue and goes back to listening
> for requests. If there are, it processes the job. This ensures that
> processing jobs does not starve the ability to accept new ones.

Be careful with this approach, because this idea, while sweet and peachy on paper, is often fundamentally wrong in practice.

My place of employment implemented complex call-processing software based on queues precisely because "processing jobs does not starve the ability to accept new ones". I considered that the implementation would have terrible consequences for performance and behavior under overload, and my coworkers strongly disagreed. So I set up a test:

  (call generator) -> (our software) -> (call receiver)

At 20 CPS, our software ate 20% of the CPU and 20% of the memory. Thus it wouldn't be all that unreasonable to expect 30 CPS to be possible. As it turned out, problems of lock contention were harsh enough that raising the rate of call attempts per second on the call generator meant that the call receiver would only get 5 call attempts per second.

But that failure was much more about lock contention than it was about application workflow. The real demonstration was yet to come. I fetched one of our developers back to our desk to point out the contention issues. Rather than re-starting all the software, I told the call generator to "pause" - ie, stop making new phone calls. Stunningly, the moment I did this, the call receiver went from 5 call attempts per second to 10, and kept receiving call attempts for several minutes (even though they were all invalid and the generator had been stopped long ago).

What was happening? The application had been taking messages into the queue, promising the call generator to handle them. Thus the queue kept growing, and growing, and growing...

Now, a queue is of course applicable in some places. The application mentioned in this thread is one of them. But IMNSHO, you should never ever think that a queue's ability to allow you to accept new work while you're busy processing current work is a good thing unless this trait is (a) necessary for performance and (b) carefully constrained to keep things under control (ie, the kernel's network buffers).

Cheers,
Chase
Re: go crazy with me
On Mon, 2005-12-19 at 13:43 -0600, Chase Venters wrote:
> What was happening? The application had been taking messages into the
> queue, promising the call generator to handle them. Thus the queue kept
> growing, and growing, and growing...

That is what a queue is supposed to do when the demand exceeds the capacity.

> Now, a queue is of course applicable in some places. The application
> mentioned in this thread is one of them. But IMNSHO, you should never ever
> think that a queue's ability to allow you to accept new work while you're
> busy processing current work is a good thing unless this trait is (a)
> necessary for performance and (b) carefully constrained to keep things
> under control (ie, the kernel's network buffers).

A queue is just a method for handling bursty demand in applications where it would be too expensive to always provide enough throughput to handle all requests immediately. You have to build the system with enough throughput to handle the common load level without backing up. If you don't need to handle bursts that exceed capacity, or you don't mind telling clients to go away when capacity is full, there's no good reason to use a queue.

- Perrin
Circular References
I've got some code that gets away with a lot of circular references because of the "magic load order" in the startup.pl file(s). Are there any CPAN modules, e.g. Module::ScanDeps, that might be able to programmatically identify these? It's not always "A uses B and B uses A" - it might be many levels deep.

Philip M. Gollucci ([EMAIL PROTECTED]) 301.254.5198
Consultant / http://p6m7g8.net/Resume/resume.shtml
Senior Software Engineer - TicketMaster - http://ticketmaster.com
Re: Circular References
On Mon, 2005-12-19 at 15:11 -0500, Philip M. Gollucci wrote: > I've got some code that gets away with a lot of circular references > because of the "magic load order" in the startup.pl file(s). Usually circular references like this are not a problem in Perl. The only issue I know of is when you try to use imported subs or variables at compile time. - Perrin
Re: go crazy with me
On Mon, 19 Dec 2005, Perrin Harkins wrote:
> On Mon, 2005-12-19 at 13:43 -0600, Chase Venters wrote:
>> What was happening? The application had been taking messages into the
>> queue, promising the call generator to handle them. Thus the queue kept
>> growing, and growing, and growing...
>
> That is what a queue is supposed to do when the demand exceeds the
> capacity.

Indeed. And that's why using a queue is sometimes wrong.

> A queue is just a method for handling bursty demand in applications where
> it would be too expensive to always provide enough throughput to handle
> all requests immediately. You have to build the system with enough
> throughput to handle the common load level without backing up. If you
> don't need to handle bursts that exceed capacity, or you don't mind
> telling clients to go away when capacity is full, there's no good reason
> to use a queue.

To be fair, it depends a bit on what your application is. In handling calls, it's particularly nasty because a user expects a call to complete rather quickly, and if they're simply put on some line tens of thousands of calls long, to be addressed several minutes from now, they'll just hang up and complain.

An example application for a queue in a web application would be to set up a request to send out an e-mail. (Simple apps simply invoke sendmail directly, but the MTA can elect to queue the outgoing message, as qmail does. [Sendmail might too; I've just never been a fan or a user :P]) The advantage here is interactive response - the user doesn't have to wait for the SMTP client to look up the MX records, initiate a session, send the message...

So if your server can handle 10 mails per second and you never *ask* it to do more, you have no problems. But let's suppose you start asking it for 15 mails per second. This spike lasts five seconds. You've now built your load up by 25 messages (more if you factor in that spending the time accepting the extra five takes time away from sending the 10). If you were to continue from that point forward with 10 mails per second, you'd be permanently 25 messages behind, until your activity dips low enough to catch back up. In this case, the queue has done its job.

I don't want to imply that queues are bad or wrong... just that you have to be careful when you consider accepting work while busy a good property to have. If you spend much of your time close to capacity, queues can bite you in the ass. And if you do happen to spend much time over your capacity limits, the problem will at best be nasty and at worst be a damn nightmare, depending on what it is you're using the queues for.

Cheers,
Chase
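Chase's arithmetic is easy to check with a toy simulation, using the numbers from the example above (capacity 10 mails/s, a 5-second burst at 15 mails/s, then steady arrivals at exactly capacity; this is an illustration, not a model of a real MTA):

```perl
use strict;
use warnings;

my $capacity = 10;   # mails the server can send per second
my $backlog  = 0;    # messages accepted but not yet sent

# One second of simulated time: accept everything, send at most $capacity.
sub tick {
    my ($arrivals) = @_;
    $backlog += $arrivals;
    my $sent = $backlog < $capacity ? $backlog : $capacity;
    $backlog -= $sent;
}

tick(15) for 1 .. 5;                          # the 5-second burst at 15/s
print "backlog after burst: $backlog\n";      # 25, as in the thread

tick(10) for 1 .. 60;                         # a minute at exactly capacity
print "backlog a minute later: $backlog\n";   # still 25 - never catches up
```

The backlog only shrinks once demand dips below capacity, which is exactly the "until your activity dips low enough to catch back up" point above.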
Re: Circular References
> On Mon, 2005-12-19 at 15:11 -0500, Philip M. Gollucci wrote:
>> I've got some code that gets away with a lot of circular references
>> because of the "magic load order" in the startup.pl file(s).
>
> Usually circular references like this are not a problem in Perl. The
> only issue I know of is when you try to use imported subs or variables
> at compile time.

And "I" am then what ?
Re: Circular References
On Mon, 2005-12-19 at 15:19 -0500, Philip M. Gollucci wrote: > > On Mon, 2005-12-19 at 15:11 -0500, Philip M. Gollucci wrote: > >> I've got some code that gets away with a lot of circular references > >> because of the "magic load order" in the startup.pl file(s). > > > > Usually circular references like this are not a problem in Perl. The > > only issue I know of is when you try to use imported subs or variables > > at compile time. > And "I" am then what ? PPI? Kind of sounds like more trouble than manually finding the right order though. - Perrin
Re: Circular References
On Mon, 19 Dec 2005, Perrin Harkins wrote:
> Usually circular references like this are not a problem in Perl. The
> only issue I know of is when you try to use imported subs or variables
> at compile time.

Generally, this is true. Do beware though of code that may lean on perl's grammar a bit much:

  func_foo 'asdf';

will fail compilation if Perl has never seen 'func_foo' before, whereas:

  func_foo('asdf');

will not.

Cheers,
Chase
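The difference is easy to demonstrate with string eval (func_foo is, of course, a sub that is never declared anywhere):

```perl
use strict;
use warnings;

# Without parentheses, perl must already know at compile time that func_foo
# is a sub; otherwise the bareword-plus-string form is a syntax error.
eval q{ func_foo 'asdf'; };
print "bareword form:      $@";      # compile-time "syntax error"

# With parentheses it parses fine as a sub call and only fails at runtime.
eval q{ func_foo('asdf'); };
print "parenthesized form: $@";      # runtime "Undefined subroutine"
```

This is why circular use chains can bite: if module A calls one of B's exported subs without parentheses before B has been compiled, the call fails at compile time rather than at runtime.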
Re: go crazy with me
Hello,

On Monday 19 December 2005 04:18, JT Smith wrote:
> Apache is the ultimate event handler. It's listening for socket events. Why
> couldn't we change it just a bit to listen to timer events and thusly kick
> off an execution once per minute to check a cron tab. The reading of cron
> tabs is the easy part (DateTime::Cron::Simple for example). What would it
> take to just get Apache to handle events other than a socket request?
> Is it possible? Of course it is, presumably it already knows how to handle
> signals. If it couldn't, there wouldn't be a way for us to issue a SIGHUP
> to do a soft restart. So, how do we get it to also handle timer events?
>
> Any ideas?

I investigated the signal approach with MP1 for a client of mine, and it is feasible. The idea was to build a queue system that was able to accept messages via HTTP and used signals to wake up a child dedicated to delivery.

Here is a simple piece of code that sticks an Apache child in a loop and controls it via signals (USR1 and USR2). We used it as a PerlLogHandler.

  package Test::ChildLoop;

  use strict;
  use Apache::Constants qw(OK DECLINED);

  my $usr1 = 0;
  sub catch_usr1 {
      my $signame = shift;
      $usr1++;
      warn "Somebody sent me a SIG$signame";
      return;
  }
  $SIG{USR1} = \&catch_usr1;

  my $exit = 0;
  sub catch_usr2 {
      my $signame = shift;
      $exit++;
      warn "Somebody sent me a SIG$signame";
      return;
  }
  $SIG{USR2} = \&catch_usr2;

  sub handler {
      my $r = shift;
      my $count = 0;
      while (not $exit) {
          warn "child $$ waited for $count seconds and got $usr1 usr1 signals\n";
          sleep 5;
          $count += 5;
          warn "hey, we got a shutdown\n" if $exit;
      }
      return OK;
  }

  1;

They ended up using a modified version of Postfix, so I didn't investigate further, but I'm still interested in this approach.

HTH,
Valerio
How do I check a socket to know it is not closed?
I'm trying to tweak a custom Apache connection handler. The short description of what I want to do is this: I receive a connection from an application. I read the message, which tells me what work I should be doing. I go off and do the work. In the time that I do the work, the application could decide that it has waited too long and time out the connection. When I get done working, I want to check the socket to make sure it is still connected before I send data across it.

I have looked at the docs, but I'm not making much headway. Checking the connection to see if it aborted ($c->aborted) did not give me what I need. I've tried polling the socket to see that I can send output on it:

  $sock = $c->client_socket();
  ...
  $sock->poll( $c->pool, 5, APR::Const::POLLOUT );

but this gives me the same result when I close the connection as when I leave it open.

Any ideas or pointers on how to do this? Even a redirect to some (helpful) docs (not the "Possible values: " docs) would be helpful.

Thanks.
Ivan
Re: How do I check a socket to know it is not closed?
On Mon, 19 Dec 2005, Ivan Heffner wrote:
> Any ideas or pointers on how to do this? Even a redirect to some (helpful)
> docs (not the "Possible values: " docs) would be helpful.

t/protocol/TestProtocol/echo_timeout.pm

Philip M. Gollucci ([EMAIL PROTECTED]) 301.254.5198
Consultant / http://p6m7g8.net/Resume/resume.shtml
Senior Software Engineer - TicketMaster - http://ticketmaster.com
Re: How do I check a socket to know it is not closed?
I tried that and it didn't work. I have this code:

  my $len = $sock->recv($buff, $want);

  warn "got the message\n";
  sleep 10;
  warn "sending the response\n";

  my $wlen = eval { $sock->send( "I heard $buff\n" ) };
  if ($@) {
      warn "They hung up!\n";
  }
  else {
      warn "sent $wlen bytes\n";
      $sock->send( "It's been good talking to you. Good-bye\n" );
      $sock->close();
  }

I get the following in the log:

  got the message
  sending the response
  sent 13 bytes
  [Mon Dec 19 17:07:23 2005] [error] APR::Socket::send: (32) Broken pipe
  at /home/iheffner/proto/lib/ProtoConnection.pm line 55

I need to know that the client app just stopped listening. There is no explicit timeout on the socket.

Ivan

On 12/19/05, Philip M. Gollucci <[EMAIL PROTECTED]> wrote:
> On Mon, 19 Dec 2005, Ivan Heffner wrote:
> > Any ideas or pointers on how to do this? Even a redirect to some
> > (helpful) docs (not the "Possible values: " docs) would be helpful.
>
> t/protocol/TestProtocol/echo_timeout.pm
Re: How do I check a socket to know it is not closed?
I just asked this question elsewhere, and got a good working response. Below is my trivialised code:

  my $BytesRead = $sock->sysread($buffer, 1024);
  if (!defined($BytesRead)) {
      print "WARNING: Connection lost!\n";
      exit;
  }

Because sysread is blocking (I don't know how to unblock it), use IO::Select to test for data before calling sysread. I'm working on a clean socket reading method. Write me if you've got it before me.

----- Original Message -----
From: "Ivan Heffner" <[EMAIL PROTECTED]>
To: "Philip M. Gollucci" <[EMAIL PROTECTED]>
Sent: Tuesday, December 20, 2005 9:10 AM
Subject: Re: How do I check a socket to know it is not closed?

> I tried that and it didn't work. I have this code:
>
>   my $len = $sock->recv($buff, $want);
>   ...
>
> I get the following in the log:
>
>   got the message
>   sending the response
>   sent 13 bytes
>   [Mon Dec 19 17:07:23 2005] [error] APR::Socket::send: (32) Broken pipe
>   at /home/iheffner/proto/lib/ProtoConnection.pm line 55
>
> I need to know that the client app just stopped listening. There is no
> explicit timeout on the socket.
>
> Ivan
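For plain Perl sockets, the usual trick is: select for readability, then sysread. A socket that is readable but returns zero bytes has been closed by the peer. Below is a self-contained sketch using a socketpair; whether this maps directly onto APR::Socket inside a protocol handler is exactly the open question in this thread, so treat it as an illustration of the technique, not mod_perl API.

```perl
use strict;
use warnings;
use IO::Select;
use Socket;

# A connected pair of sockets so the example runs standalone.
socketpair(my $client, my $server, AF_UNIX, SOCK_STREAM, PF_UNSPEC)
    or die "socketpair: $!";

# Returns 1 if the peer has closed, 0 if the connection still looks alive.
# Caveat: if the peer sent data before closing, this consumes one byte;
# a real implementation would buffer it.
sub peer_closed {
    my ($sock) = @_;
    my $sel = IO::Select->new($sock);
    return 0 unless $sel->can_read(0);   # nothing readable: not known closed
    my $n = sysread($sock, my $buf, 1);  # readable + 0 bytes == EOF
    return (defined $n && $n == 0) ? 1 : 0;
}

print "before close: ", peer_closed($server), "\n";   # 0
close $client;                                        # peer goes away
print "after close:  ", peer_closed($server), "\n";   # 1
```

Note that this only detects a close the kernel has already seen; a peer that silently vanished (cable pull, crash without FIN) still only shows up as a failed write, like the Broken pipe above.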
Re: go crazy with me
Just went to your company web site and read that you got the White Camel award. Congrats, both on the award and your new career! We're talking to the Director of Development here, guys... :)

----- Original Message -----
From: "Stas Bekman" <[EMAIL PROTECTED]>
To: "JT Smith" <[EMAIL PROTECTED]>
Sent: Tuesday, December 20, 2005 3:33 AM
Subject: Re: go crazy with me

> Here at mailchannels.com we have first used mp2 to handle the email
> traffic shaping entirely inside mod_perl2, but the nature of our product
> is so different from serving HTTP, it just won't scale (mostly
> memory-wise, but also too many processes). We have now switched to having
> Event::Lib (over libevent) doing all the non-blocking IO and using mp2's
> protocol handler to do blocking IO (like network-bound operations). The
> performance is just amazing, hardly any memory used and we can easily
> handle a thousand concurrent connections on very low-end hardware.
Two Problems: fatalsToBrowser and ./
Hi modperl mailing list,

I'm new to mod_perl and have two problems I couldn't solve:

1st: I can't get my error messages displayed in the browser. "use CGI::Carp qw/fatalsToBrowser/;" doesn't work anymore. I googled for it but didn't find anything really helpful. Is there a way to write errors to the browser and the Apache log?

2nd: My ./ dir is now equal to my docroot. Is there an easy way to get the path where the script is? (Any other than substituting around with $ENV{SCRIPT_FILENAME}?)

Ok, that's it. Thanks in advance,
Michael Saller.
Re: go crazy with me
Foo Ji-Haw wrote:
> Just went to your company web site and read that you got the White Camel
> award. Congrats, both on the award and your new career!

Thanks for the kind words, Foo!

> We're talking to the Director of Development here, guys... :)

Hehe, don't let titles mislead you :)

BTW, we are always looking for bright people to join our team. So if you want to work in a fun team, have challenging projects, and like Vancouver, BC, drop me a note!

--
Stas Bekman mailto:[EMAIL PROTECTED] http://stason.org/
MailChannels: Assured Messaging(TM) http://mailchannels.com/
The "Practical mod_perl" book http://modperlbook.org/
http://perl.apache.org/ http://perl.org/ http://logilune.com/
What is the difference between static and DSO mod_perl?
How do I set up httpd.conf when using static (non-DSO) mod_perl?

DSO mod_perl httpd.conf:
===
LoadModule perl_module modules/mod_perl.so
PerlRequire "/usr/local/perlmods/startfile.pl"
===

Will I need to recompile apache when I modify my (perl) code, when using non-DSO mod_perl?