You know, I thought of something along those lines, but I can't
see how it would make the receiver use less CPU permanently.
It seems like it ought to simply build up a backlog, then go back
to normal CPU usage.

Can you think of any way that a backlog would cause the receiver to
stay at low CPU?

Now that I can make this happen easily, instead of waiting forever-
and-a-day, I will get callgrind snapshots of both programs while the
test is fast and while it is slow. That just has to show me
something.

----- Original Message -----
> On 27. 10. 14 09:10, Michael Goulish wrote:
> > Earlier I reported a very gradual slowdown in the performance
> > of my simple 1-sender 1-receiver test, on RHEL 7.0 and Fedora 20
> > but not on RHEL 6.3 .
> >
> > The slowdown caused the test to end up running at half speed after
> > a billion or two billion messages (which took hours to run).
> >
> >
> > I now know how to cause this slowdown to happen any time, and
> > it works just as well on RHEL 6.3 as it does on RHEL 7.0 .
> >
> > All I have to do is make the machine busy.  Even though I do not
> > swamp all processors -- in fact, I leave a couple processors idle --
> > my receiver program slows down when the machine becomes busy --
> > ***and it never recovers***.
> >
> 
> Michael,
> 
> this is totally a wild guess, but looking at your psend.c, the only
> thing that jumps out is that you couple 1:1 the number of deliveries
> created and the number of calls to pn_driver_wait(). So if anything
> happens (which I cannot explain /what/) where the sender starts to
> lag in talking to the receiver, it may not be able to dig itself out.
> 
> Maybe try to create the first delivery before the loop, and create
> the next delivery after pn_link_advance().
> 
> Bozzo
> 
