On Feb 18, 2007, at 15:00, Michael Hare wrote:

Hello-

Two items/questions regarding Client::Ping:

1) Client::Ping payload option?
Would you consider a patch for Client::Ping that would allow a user to
set their own data pattern?

my $datapattern = delete $params{DataPattern};
$datapattern = 'Use POE!' x 7 unless defined $datapattern;
...
$heap->{data}          = $datapattern;

It's a good idea.  I added the option as "Payload".

I would like the option of using a small payload.

That's now up to you. I have noticed in previous tests that some firewalls drop ICMP packets with nonstandard payload sizes. If you run into mysterious packet drops, that might be it.
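
Something like this, roughly (a sketch, not necessarily the final
interface; I'm treating Payload as a spawn() parameter here, alongside
the other options):

use strict;
use warnings;
use POE qw(Component::Client::Ping);

POE::Component::Client::Ping->spawn(
    Alias   => "pinger",
    Timeout => 5,
    Payload => "tinyping",   # 8 octets instead of the default 56 ('Use POE!' x 7)
);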

2) First Client::Ping post 'slow'?
I noticed that the first IPs posted get handled much more slowly than
subsequent IPs. In this example, I have parallelism set to 4, so I get
4 late replies (if I had set it to 14, I'd get 14):

4.79.64.42 is alive (packet return time: 0.938143014907837)
38.99.202.26 is alive (packet return time: 0.935018062591553)
38.101.160.229 is alive (packet return time: 0.927701950073242)
63.145.159.18 is alive (packet return time: 0.923621892929077)
65.77.115.178 is alive (packet return time: 0.00751113891601562)
65.113.85.6 is alive (packet return time: 0.00957703590393066)
..

From a tcpdump, I can see the packets are hitting the host quickly; they
are just taking a long time to get processed.

13:39:09.030498 205.213.110.242 > 4.79.64.42: icmp: echo request (DF)
13:39:09.034209 4.79.64.42 > 205.213.110.242: icmp: echo reply (DF)
[others with short response times]

Any thoughts? I can hack around this by pinging $parallelism throwaway
addresses first and ignoring their replies, but I am curious what the
source of this problem is and whether it is fixable.  I currently have
an ICMP polling model using threads::shared that I would like to replace
with this much simpler model.

I suspect the problem is how you're priming the component. I assume that you're posting all your requests at once and letting POE::Component::Client::Ping's Parallelism feature handle the throttling. If I'm right, the first $parallelism response times are inflated by the time it takes to process the remaining N-$parallelism requests in the queue.

The problem occurs because POE's event queue is ordered by event due times. Both post()ed and I/O events are due to be dispatched at Time::HiRes::time(), which effectively makes them FIFO.

By posting all your requests at once, you've delayed any I/O events until after all the requests can be received by POE::Component::Client::Ping.

The component sends the first $parallelism ICMP ping packets immediately. Then it receives N-$parallelism more requests, and finally the first ICMP pong response. At that point, POE's event queue is relatively quiet, so the next ICMP ping/pong happens quickly.
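
For illustration, here's roughly the kind of priming I mean (a guess at
your setup, reusing the addresses from your output):

use strict;
use warnings;
use POE qw(Component::Client::Ping);

my @addresses = qw(
    4.79.64.42  38.99.202.26  38.101.160.229
    63.145.159.18  65.77.115.178  65.113.85.6
);

# Raw ICMP sockets usually require root.
POE::Component::Client::Ping->spawn(
    Alias       => "pinger",
    Parallelism => 4,
    OneReply    => 1,   # one response per request: a reply or a timeout
);

POE::Session->create(
    inline_states => {
        _start => sub {
            # Post every request up front.  Parallelism throttles the
            # actual sends, but all of these events are queued ahead of
            # any ICMP replies.
            $_[KERNEL]->post(pinger => ping => pong => $_) for @addresses;
        },
        pong => sub {
            my ($request, $response) = @_[ARG0, ARG1];
            my ($req_addr)           = @$request;
            my ($resp_addr, $rtt)    = @$response;
            if (defined $resp_addr) {
                print "$req_addr is alive (packet return time: $rtt)\n";
            }
            else {
                print "$req_addr timed out\n";
            }
        },
    },
);

POE::Kernel->run();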

There are a few possible workarounds:

As you say, you can discard the first $parallelism responses, but that's cheesy.

Another workaround would be to handle the throttling yourself: send $parallelism requests, then send a new request as each response arrives. (There's a rough sketch of this near the end of this message.)

A third way, internal to POE::Component::Client::Ping, might be to delay the initial ICMP pinging for a brief time, perhaps 1/10 second. That should push the initial ICMP sends beyond the processing of the queued requests. Or provide a way to say "queue this until I say go", and then a "go" message that indicates the last request has been handled.

Finally, we could make POE's I/O events truly FIFO. Right now filehandles are checked for readiness between FIFO timeslices. This is the mechanism that delays their delivery until after all the ping client requests are dispatched. We can dispatch I/O events at more finely grained intervals, at the expense of FIFO event throughput.

POE's I/O dispatch granularity is defined by POE::Resource::Events' _data_ev_dispatch_due() method. It may be possible to redefine the dispatch in a subclass, but POE::Kernel would need a way to let people load their own replacements.
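
Here's a rough, untested sketch of the second workaround (throttling the
requests yourself and feeding the component one new address per response):

use strict;
use warnings;
use POE qw(Component::Client::Ping);

my @addresses = qw(
    4.79.64.42  38.99.202.26  38.101.160.229
    63.145.159.18  65.77.115.178  65.113.85.6
);
my $parallelism = 4;

POE::Component::Client::Ping->spawn(
    Alias    => "pinger",
    OneReply => 1,
);

POE::Session->create(
    inline_states => {
        _start => sub {
            my ($kernel, $heap) = @_[KERNEL, HEAP];
            $heap->{queue} = [@addresses];

            # Prime only $parallelism requests.  The rest wait in our own
            # queue instead of piling up in POE's event queue.
            $kernel->yield("send_next") for 1 .. $parallelism;
        },
        send_next => sub {
            my ($kernel, $heap) = @_[KERNEL, HEAP];
            my $addr = shift @{ $heap->{queue} };
            return unless defined $addr;
            $kernel->post(pinger => ping => pong => $addr);
        },
        pong => sub {
            my ($kernel, $request, $response) = @_[KERNEL, ARG0, ARG1];
            my ($req_addr)        = @$request;
            my ($resp_addr, $rtt) = @$response;
            print defined($resp_addr)
                ? "$req_addr is alive (packet return time: $rtt)\n"
                : "$req_addr timed out\n";

            # One response in, one new request out.
            $kernel->yield("send_next");
        },
    },
);

POE::Kernel->run();

With that, the round-trip times should come back consistent, since the
component never has a backlog of request events to chew through before
it can see I/O.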

--
Rocco Caputo - [EMAIL PROTECTED]

