Gordon,

Further digging with network-level sniffing shows that the bulk msg_size is
limited to 16373 bytes (~16K). This observation is in line with a previously
reported issue:

http://grokbase.com/t/qpid/users/163z91rhdy/ssl-maximum-message-size

As suggested, I have also posted the question on the Azure SB forum, to find
out whether there are knobs in the SB configuration to make this un-acked
buffer size larger on AMQP over SSL.

Coming back to your suggestion about unsettled messages: can you guide me on
what the client-side configuration should be (if any) to force the server to
keep sending without waiting for a flow-control ack from the client (the
number of unsettled messages?). I would like the server to stop when the
link-credit runs out, but not at the max buffer of 16 KB. Ideally what I need
is at-least-once behavior, but I am ready to sacrifice this requirement to
get better performance.
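
In case it helps frame the question, this is the kind of client-side change I
had in mind: a minimal sketch using Proton's AtMostOnce link option, which
asks the sender to settle before sending, so the server does not have to
track unsettled deliveries. The address is a placeholder, and whether Service
Bus honors this option is exactly what I am unsure about:

from proton.handlers import MessagingHandler
from proton.reactor import AtMostOnce, Container

class PreSettledRecv(MessagingHandler):
    def __init__(self, url):
        super(PreSettledRecv, self).__init__(prefetch=10000)
        self.url = url

    def on_start(self, event):
        # AtMostOnce sets snd-settle-mode=settled on the link, trading
        # at-least-once delivery for (hopefully) fewer flow-control stalls.
        event.container.create_receiver(self.url, options=AtMostOnce())

    def on_message(self, event):
        print(event.message.body)

Container(PreSettledRecv("amqps://<namespace>.servicebus.windows.net/kukatopic/Subscriptions/kukasub")).run()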

Currently the client is requesting its rcv-settle-mode to be "unsettled" and
the server is sending its rcv-settle-mode=settled, and my simplistic receiver
initializes the messaging handler like this:
from proton.handlers import MessagingHandler

class Recv(MessagingHandler):
    def __init__(self):
        super(Recv, self).__init__(prefetch=100, auto_accept=True,
                                   auto_settle=True)
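
For context, the rest of the receiver is just the stock Proton pattern; a
sketch of the whole thing (the on_start/on_message bodies and the address are
my reconstruction, not the exact script):

from proton.handlers import MessagingHandler
from proton.reactor import Container

class Recv(MessagingHandler):
    def __init__(self, url):
        super(Recv, self).__init__(prefetch=100, auto_accept=True,
                                   auto_settle=True)
        self.url = url

    def on_start(self, event):
        # The container opens the connection and creates the receiving link.
        event.container.create_receiver(self.url)

    def on_message(self, event):
        # With auto_accept/auto_settle, each delivery is disposed of
        # automatically after this callback returns.
        print(event.message.body)

Container(Recv("amqps://<namespace>.servicebus.windows.net/kukatopic/Subscriptions/kukasub")).run()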


[name="bc599ddc-74df-46b0-800c-401aed27f321-kukatopic/Subscriptions/kukasub",handle=0,
role=true, snd-settle-mode=2, rcv-settle-mode=0, source=@source(40)
[address="kukatopic/Subscriptions/kuka\

sub", durable=0, timeout=0, dynamic=false], target=@target(41) [durable=0,
timeout=0, dynamic=false], initial-delivery-count=0, max-message-size=0]

[0xb58380]:0 -> @flow(19) [incoming-window=2147483647,
next-outgoing-id=0,outgoing-window=2147483647, handle=0, delivery-count=0,
link-credit=10000, drain=false]

[0xb58380]:0 <- @open(16)
[container-id="fa8f5d5577be485ebd7f5ebdbdfd9ca1_G13", max-frame-size=65536,
channel-max=4999, idle-time-out=240000]

[0xb58380]:0 <- @begin(17) [remote-channel=0, next-outgoing-id=1,
incoming-window=5000, outgoing-window=2147483647, handle-max=255]

[name="bc599ddc-74df-46b0-800c-401aed27f321-kukatopic/Subscriptions/kukasub",
handle=0, role=false, rcv-settle-mode=1, source=@source(40)
[address="kukatopic/Subscriptions/kukasub", durable=0, \

timeout=0, dynamic=false], target=@target(41) [durable=0, timeout=0,
dynamic=false], initial-delivery-count=0, max-message-size=266240]
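
One more thing I can try, in case the automatic prefetch top-up is itself the
bottleneck: granting link credit by hand. This is only a sketch of the idea
(prefetch=0 disables the handler's built-in credit replenishment, and
receiver.flow() grants credit explicitly; the class name and address are
placeholders):

from proton.handlers import MessagingHandler
from proton.reactor import Container

class ManualCreditRecv(MessagingHandler):
    def __init__(self, url, batch=10000):
        # prefetch=0 means no automatic flow controller is installed.
        super(ManualCreditRecv, self).__init__(prefetch=0)
        self.url = url
        self.batch = batch

    def on_start(self, event):
        event.container.create_receiver(self.url)

    def on_link_opened(self, event):
        if event.receiver:
            event.receiver.flow(self.batch)  # initial credit grant

    def on_message(self, event):
        # Top the credit back up before it runs dry, so the server always
        # sees a large outstanding link-credit.
        if event.receiver.credit < self.batch // 2:
            event.receiver.flow(self.batch - event.receiver.credit)

Container(ManualCreditRecv("amqps://<namespace>.servicebus.windows.net/kukatopic/Subscriptions/kukasub")).run()

If the stall persists even with the credit pinned at 10000, that would point
back at the server's 16 KB write buffer rather than anything on my side.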



On Wed, Aug 9, 2017 at 1:56 AM, Gordon Sim <g...@redhat.com> wrote:

> On 09/08/17 08:22, Pankaj Bhagra wrote:
>
>> Thanks, Gordon, for looking into my query. What you said makes sense;
>> however, I am still searching for the reason behind the flow control and
>> the limited batch size.
>>
>> As per your suggestion I tried increasing the link-credit to 10k and then
>> 100k, but that doesn't change much. My understanding of prefetch is that
>> it is a number of packets, not a number of bytes (I confirmed this by
>> reducing the prefetch to 2, after which I see only 1 packet per bulk
>> message, i.e. half of the window size).
>>
>> The size of each packet is roughly 900 B, and as you can see I am not
>> able to read more than 12 packets per batch in the complete logs below.
>> So, looking back, yes: 12 x 900 B is greater than 5 KB, so the subject
>> line may need correction; the limit looks like about 2x that, i.e. 10 KB.
>>
>> I would appreciate it if someone could suggest some more knobs I should
>> play with to figure out where this limit of 10 KB is coming from.
>>
>
> My guess is that it is a service-bus choice (i.e. the buffer size it
> writes with). In itself that shouldn't require a roundtrip to get more. If
> that is happening it could conceivably be something to do with the number
> of unsettled messages?
>
> It may be worth asking about the issue on the service bus forums.
>
