----- Original Message -----
> On Tue, Feb 26, 2013 at 12:18 PM, Ken Giusti <kgiu...@redhat.com>
> wrote:
> 
> > Hi Mick
> >
> > ----- Original Message -----
> > >
> > > One hole that I feel like I'm seeing in the messenger
> > > interface concerns credit.
> > >
> > > I have a way of using credit to set a max number of
> > > messages that a recv call should return in one gulp,
> > > or a way of doing ... something that I'm still figuring
> > > out ... by setting credit=-1.
> > >
> > > What I don't have is any way of getting guidance about
> > > what effect my credit allocation is having.
> > >
> > > A messenger app might have some flexibility in how
> > > much time it spends servicing incoming messages vs.
> > > time it spends doing its own processing, and it might
> > > be able to allocate more time to servicing incoming
> > > messages if it knows more about what's happening.
> > >
> > > Alternatively, it might want to set the credit allocated
> > > per recv call based on the number of current incoming
> > > links.  ( And assume that the credit will be distributed
> > > round-robin across all incoming links. )
> > >
> > > Would it be practical / desirable / catastrophic
> > > to expose current backlog or number of incoming links,
> > > or both, at the messenger level ?
> > >
> > > Or would that violate part of the messenger API philosophy?
> > > ( And if so, what is that philosophy?  I want to be able
> > > to explain it. )
> > >
> >
> > I feel your pain - the more I use the "recv(n)" interface, the more I
> > think it's an awkward approach to the credit/flow issue.
> >
> 
> I don't think it's aiming to solve a credit/flow issue at all. That's the
> job of the messenger implementation. It's simply providing a way for the
> application to ask for n messages.
> 
> 
> >
> > From what I understand, Messenger's model is based on two queues - one
> > for incoming messages and one for outgoing messages.
> >
> > For flow control, the recv(n) method indicates how many messages are
> > allowed on the incoming queue (max n).  But recv(n) is also used as the
> > "pend for incoming messages" call.  So every time my app needs to fetch
> > some messages, it also needs to compute flow control.  It's that dual
> > use that I don't like.
> >
> 
> As I mentioned above, I don't think recv should be thought of as a flow
> control thing. It does provide input into what messenger does for flow
> control, but it's really just a way for the app to fetch messages, and so
> far I've been considering three scenarios:
> 
>   (1) an app wants to receive and process messages indefinitely, in which
>       case pn_messenger_recv(-1) now does that job pretty nicely
>   (2) an app wants to make a simple request/response, in which case it
>       wants to receive exactly one message back, and getting any more
>       would be a bug
>   (3) a generalized form of (2), where an app makes N requests and
>       processes each response as it arrives back. In this case you have
>       to do pn_messenger_recv(N - what you've already processed).
> 
> I think these are probably the 3 most common scenarios, and I can see how
> using a pattern like (3) to cater to scenario (1) would be awkward;
> however I think it's less awkward when used in scenario (3).
> 

Thanks, I think you've clarified my understanding of the intended behavior of N 
in messenger::recv(N) - it places a limit on the number of messages the 
application can pn_messenger_get() for that invocation of recv(N).  If N==-1, 
then the limit is unspecified.

My understanding is that this API makes no explicit guarantees about what is 
happening at the wire level - for example, more messages may in fact arrive 
via the network - is that correct?

I can buy that.
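
Just to check my reading, here's roughly what I'd expect scenario (3) to look
like in terms of the C API.  This is a sketch only - error handling is
omitted, and process() stands in for the app's own logic:

   /* given: pn_messenger_t *messenger (started), and N requests
      already put/sent on it */
   int processed = 0;
   pn_message_t *msg = pn_message();
   while (processed < N) {
      /* cap what get() may return to the responses still outstanding */
      pn_messenger_recv(messenger, N - processed);
      while (pn_messenger_incoming(messenger)) {
         pn_messenger_get(messenger, msg);
         process(msg);          /* app-specific handling */
         processed++;
      }
   }
   pn_message_free(msg);

(Scenario (2) is just the N==1 case, and scenario (1) would be the same inner
loop driven by pn_messenger_recv(messenger, -1), running indefinitely.)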


> That said I'm open to trying to simplify the API here, but I fundamentally
> don't think of this as a wire level flow control API. I get the impression
> from the comments in this thread that there is an idea that the app
> developer somehow has more knowledge or is in a better position to
> distribute credit than the messenger implementation, whereas I think the
> opposite is true. In my experience, the nature/shape of incoming message
> traffic is not a variable that is well known at development time. Perhaps
> you can define the extreme bounds of what loads you want to be able to
> handle, but at runtime there are many unpredictable factors:
> 
>   - your service can go from idle to extreme bursts with no warning
>   - round trip times can fluctuate based on general network activity
>   - message processing times might vary due to other activity on
>     your machine or degradation in services you depend on
>   - your load might be unevenly distributed across different
>     links/connections
>   - a buggy or malicious app might be (unintentionally) DoSing you
> 
> All of these factors and more go into determining the optimal and/or fair
> credit allocation at any given point in time, and that means a robust flow
> control algorithm really needs to be dynamic in nature. Not only that, but
> a robust flow control algorithm is a huge part of the value that messaging
> infrastructure provides, and should really be a fundamentally separate
> concern from how apps logically process messages.
> 
> 
> > Since Messenger models a queue of incoming messages, I'd rather see
> > flow control configured as thresholds on that queue, and recv() not
> > take any arguments at all.
> >
> > Something like this:
> >
> >  Messenger m;
> >  ...
> >  m.set_flow_stop( 10000 );
> >  m.set_flow_resume( 9000 );
> >  ...
> >  for (;;) {
> >     m.recv();                 // pend for incoming messages - no count argument
> >     while (m.incoming()) {
> >        Message msg;
> >        m.get( msg );
> >        // ... process msg ...
> >     }
> >  }
> >
> > IMHO, this is a lot "cleaner" than the current approach.  Of course,
> > some may find my sample names too cryptic :)
> >
> 
> I think this limits the API to only the first scenario I described above.
> At least it's not clear to me how you'd fetch exactly N messages.
> 
> 
> >
> > From an implementation point of view, the "flow stop" threshold is
> > really just a suggestion for how much credit should be distributed
> > across the links.  We could distribute more, as we would need to if
> > the number of links is greater than the flow stop threshold.  Or less,
> > assuming a point of diminishing returns.
> >
> > Once the flow stop threshold is hit, credit would be drained from all
> > links.  No further credit would be granted until the number of
> > "queued" messages drops below "flow resume".
> >
> > This is the same model we use for queue flow control in the C++
> > broker.
> >
> 
> This is starting to mix two things: (1) how the application fetches
> messages from the messenger, and (2) how to tune the messenger's internal
> flow control algorithm in the specific case that the application wants to
> receive messages indefinitely. I think (2) is premature given that we
> haven't really done any performance work yet. Ideally I'd say we don't
> want to have to tune it, rather just give it some bounds to work within,
> e.g. limit it to no more than X megabytes or no more than Y messages.
> 

Bingo!  That's _exactly_ my problem: the current impl tries to map the recv(N) 
argument onto the proper distribution of link credit.  And it does so badly in 
the multi-link case (without the ability to really drain, as you point out).  
Using N makes it hard to optimize credit allocation for any scenario that 
involves more than one underlying link (probably the case for scenario (1) and 
some cases of scenario (3)).  Good multi-link optimization is complicated 
because N is unconstrained between calls to recv(N) - it's very hard to do 
anything predictive and fair under those conditions.

However, establishing bounds not based on recv(N) - as you point out - would go 
a long way toward making a very nice multi-link credit distribution impl 
possible.
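
To make that concrete, here's the sort of thing I have in mind for the
internals.  Purely hypothetical names (flow_state_t, distribute_credit, etc.) -
nothing like this exists in proton today:

   /* Hypothetical sketch: derive per-link credit from a flow-stop
    * threshold instead of from recv(N), draining once the incoming
    * queue crosses that threshold. */

   typedef struct {
      int credit;               /* credit currently granted on this link */
   } link_state_t;

   typedef struct {
      int flow_stop;            /* stop granting credit at this queue depth */
      int flow_resume;          /* resume once the queue drains below this */
      int incoming_queued;      /* messages sitting on the incoming queue */
      int flow_stopped;         /* currently in the stopped state? */
      int link_count;
      link_state_t *links;
   } flow_state_t;

   static void distribute_credit(flow_state_t *m)
   {
      if (m->incoming_queued >= m->flow_stop) {
         /* threshold hit: drain credit from every link */
         for (int i = 0; i < m->link_count; i++)
            m->links[i].credit = 0;       /* i.e. issue AMQP drain */
         m->flow_stopped = 1;
      } else if (!m->flow_stopped || m->incoming_queued < m->flow_resume) {
         /* hand out the remaining budget evenly, independent of recv(N) */
         m->flow_stopped = 0;
         int budget = m->flow_stop - m->incoming_queued;
         int per_link = budget / m->link_count;
         if (per_link < 1)
            per_link = 1;                 /* more links than budget: grant a minimum */
         for (int i = 0; i < m->link_count; i++)
            m->links[i].credit = per_link;
      }
   }

Note the per-link minimum: as I said above, when there are more links than
flow-stop budget we'd have to distribute more than the threshold suggests.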

From a documentation point of view - yes, certainly, a description of recv(N) 
shouldn't involve describing credit at all.



> In any case I think we need to be clear on the application scenarios we're
> trying to support. I've given 3 common ones above. Are there cases that
> you think are missing, and do you have a better way to cater to the 3 I've
> mentioned?
> 
> --Rafael
> 
