Looking at your source, a lot of these modules are not on CPAN, so I am
unsure as to what is going on under the hood. I am no expert on POE
internals, but various actual experts can be found in #poe on
irc.perl.org.

Before you go there, though, I have a couple of suggestions. First, set
up your program in a test environment that's as close to your
production environment as possible, benchmark it, and then use the
various trace options to look for places where you are losing time.
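
For example, in the POE versions I've used, the trace flags are
constants you define before POE is compiled; TRACE_DEFAULT turns on
every trace category (see perldoc POE::Kernel for the individual
flags). A minimal sketch:

    #!/usr/bin/perl
    # The constant must exist before "use POE" so that POE::Kernel
    # sees it at compile time and enables its tracing code paths.
    sub POE::Kernel::TRACE_DEFAULT () { 1 }

    use POE;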

Next, upgrade any modules you are "use"-ing to newer versions through
the CPAN shell and benchmark again.

Finally, add

use constant LOG_OUTPUT => 0;

and change all your print_log(...) calls to print_log(...) if (LOG_OUTPUT);

(I know it seems minor, but I've had cases where it makes a significant
difference.)
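
A minimal sketch of the idea, assuming print_log() is your own logging
helper (the helper body here is just for illustration):

    use constant LOG_OUTPUT => 0;

    # stand-in for your real print_log(); prints a timestamped line
    sub print_log { print STDERR scalar(localtime), ' ', @_, "\n" }

    # Because LOG_OUTPUT is a true compile-time constant, perl folds
    # it and removes this whole statement when it is false, so the
    # disabled logging calls cost nothing at runtime.
    print_log("handling request") if LOG_OUTPUT;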

You also probably don't need to undef all those variables; Perl will
clean up after itself once they go out of scope.

Can you describe what is going on in the program? I am mainly wondering
why you need to fork out the requests to the CONDA action queue.

I am sorry I can't be more help, since I am having a bit of trouble
really understanding what is going on behind the scenes here, but I
suggest you look at POE::Wheel::Run to spin long blocking work out into
a separate process, and that you ask in #poe on IRC.
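
A minimal POE::Wheel::Run sketch of that pattern; handle_request() is a
hypothetical stand-in for whatever blocking work your daemon does, and
sig_child() needs a reasonably recent POE:

    use POE qw(Wheel::Run);

    # hypothetical stand-in for a long, blocking request handler
    sub handle_request { sleep 2; print "done\n" }

    POE::Session->create(
        inline_states => {
            _start => sub {
                my ($kernel, $heap) = @_[KERNEL, HEAP];
                # run the blocking work in a child process so the
                # kernel's event loop stays responsive
                $heap->{wheel} = POE::Wheel::Run->new(
                    Program     => sub { handle_request() },
                    StdoutEvent => 'child_output',
                    CloseEvent  => 'child_closed',
                );
                # register for SIGCHLD so POE reaps the child for us
                $kernel->sig_child($heap->{wheel}->PID, 'child_reaped');
            },
            child_output => sub { print "child: $_[ARG0]\n" },
            child_closed => sub { delete $_[HEAP]{wheel} },
            child_reaped => sub { },    # nothing to do; POE did the reap
        },
    );

    POE::Kernel->run;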

On 10/31/06, Fei Liu <[EMAIL PROTECTED]> wrote:
Hi Guillermo
Many thanks for your response. Here is some relevant information you
might find interesting:

ls -la /usr/lib/perl5/site_perl/5.8.6/POE/ -R
/usr/lib/perl5/site_perl/5.8.6/POE/:
total 12
drwxr-xr-x   3 root root 4096 Oct 25 17:23 ./
drwxr-xr-x  48 root root 4096 Oct 25 17:25 ../
drwxr-xr-x   4 root root 4096 Oct 25 17:23 Component/

/usr/lib/perl5/site_perl/5.8.6/POE/Component:
total 24
drwxr-xr-x  4 root root 4096 Oct 25 17:23 ./
drwxr-xr-x  3 root root 4096 Oct 25 17:23 ../
drwxr-xr-x  3 root root 4096 Oct 25 17:23 Client/
drwxr-xr-x  2 root root 4096 Oct 25 17:23 DaemonStatus/
-rw-r--r--  1 root root 6606 Jul 26 11:36 DaemonStatus.pm

/usr/lib/perl5/site_perl/5.8.6/POE/Component/Client:
total 20
drwxr-xr-x  3 root root 4096 Oct 25 17:23 ./
drwxr-xr-x  4 root root 4096 Oct 25 17:23 ../
drwxr-xr-x  2 root root 4096 Oct 25 17:23 NAD/
-rw-r--r--  1 root root 6926 Jul 26 11:28 NAD.pm

/usr/lib/perl5/site_perl/5.8.6/POE/Component/Client/NAD:
total 12
drwxr-xr-x  2 root root 4096 Oct 25 17:23 ./
drwxr-xr-x  3 root root 4096 Oct 25 17:23 ../
-rw-r--r--  1 root root 2771 Oct 10  2002 Exception.pm

/usr/lib/perl5/site_perl/5.8.6/POE/Component/DaemonStatus:
total 12
drwxr-xr-x  2 root root 4096 Oct 25 17:23 ./
drwxr-xr-x  4 root root 4096 Oct 25 17:23 ../
-rw-r--r--  1 root root  672 Jun  6  2003 Exception.pm

#ls /usr/lib/perl5/vendor_perl/5.8.6/POE
POE/    POE.pm
# ls /usr/lib/perl5/vendor_perl/5.8.6/POE
API/        Component.pm  Driver.pm  Filter.pm  Loop/    Macro/
Pipe/    Preprocessor.pm  Queue.pm   Resource.pm   Session.pm  Wheel.pm
Component/  Driver/       Filter/    Kernel.pm  Loop.pm  NFA.pm
Pipe.pm  Queue/           Resource/  Resources.pm  Wheel/

Inside the attached module, you will find more POE modules that we use
in this client/server system.

No, I haven't tried upgrading POE, and I am afraid that won't happen
any time soon in the production environment, given how tightly the
system's pieces are designed to fit together. But I can certainly try
it in a test environment; should I just do perl -MCPAN -e 'install POE'?

Yes, Time::HiRes is installed and enabled before 'use POE'.

I've attached the POE workhorse module to this email. I am not sure
that it is the cause of the performance issue, though, because my
timing does not show it slowing things down.

Let me know if you need more information about this case. One of my
colleagues claims that somehow the response isn't picked up from the
actual request handler after the handler process exits: Perl/POE spends
up to 60ms in a waitpid call waiting for the handler to complete, and
only then reaps the response.
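
For reference, here is the difference between a blocking wait and the
non-blocking poll an event loop needs; this is an illustration, not
code from our system:

    use POSIX qw(WNOHANG);

    my $child_pid = fork();
    die "fork failed: $!" unless defined $child_pid;
    if ($child_pid == 0) { sleep 1; exit 0 }    # pretend request handler

    # waitpid($child_pid, 0) would block until the child exits.
    # With WNOHANG it returns 0 immediately while the child is still
    # running, so the loop can keep doing other work in the meantime.
    while (waitpid($child_pid, WNOHANG) == 0) {
        select(undef, undef, undef, 0.01);      # 10ms of "other work"
    }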

Fei

Guillermo Roditi wrote:
> Can you provide us with a list of the POE-related modules you are using?
>
> Have you tried (in a test environment) upgrading any components to
> look for a difference in speed?
>
> Do you have Time::HiRes installed?
>
> Can you paste some of the more relevant code to the list? (In your
> case this seems like it would be the point where sessions and wheels
> are created and scheduled for polling, the message flow, and the
> related yield, delay, etc. calls.)
>
> In my personal opinion, I have found that performance responds best
> to minimizing the number of events needed for X to happen and to
> minimizing logging upon events (I was surprised at how much better my
> performance was when I turned off debug, info, and warn output in
> Log::Log4perl [if you use it]).
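>
> For instance, a minimal sketch using Log::Log4perl's documented easy
> mode; adjust to however your logger is actually configured:
>
>     use Log::Log4perl qw(:easy);
>
>     # raise the root logger to ERROR so DEBUG/INFO/WARN calls
>     # short-circuit cheaply instead of formatting and writing output
>     Log::Log4perl->easy_init($ERROR);
>
>     DEBUG("now suppressed, and nearly free");
>     ERROR("still logged");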
>
> On 10/31/06, Fei Liu <[EMAIL PROTECTED]> wrote:
>> Hello, I am new to the POE architecture and its intricacies. I am
>> maintaining code developed on such an architecture:
>>
>> Client<->SOAP<->Server (scheduled by POE).
>>
>> A request is delivered from client to server through SOAP, and POE
>> queues the requests and handles them. The problem is that we are
>> getting extremely poor performance from this message flow. I've
>> tested each individual component but POE (the code is almost
>> unreadable, partially due to my unfamiliarity with POE). Each
>> component responds to a request blazingly fast (on the order of a
>> millisecond). However, the POE kernel is queueing up all the requests
>> before they are sent to the actual request handlers, and the request
>> handlers are not used efficiently. For some reason, the responses
>> from the request handlers are not picked up by the POE kernel until
>> after 60ms. I've timed almost everything, including the POE state
>> machines. I still see relatively fast processing from POE itself (on
>> the order of 10ms).
>>
>> Thus my problem requires a good understanding of the inner workings
>> of POE. Someone said POE is designed to be responsive on the order of
>> a second, and that I can't expect better performance than that. Is
>> this statement true?
>>
>> If you can provide some pointers on understanding POE or general POE
>> performance tuning tips, I'd really appreciate them.
>>
>> Fei
>>