This doesn't make sense to me. To reach the global limit of 5000 messages
that Allan mentions, you must have at least two open connections and flood
each of them with 2500 unread messages.

Can you confirm that this is what is happening, Bharat?

If you only have one connection, it should *never* be aborted because of
overload. Instead, the sending process should be blocked until the receiver
has read enough messages, just as you would expect.
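
To make that concrete: with a minimal sender sketch along these lines
(the service type and instance are made-up values, and it assumes a peer
is already listening on that name), send() should simply stop returning
once the receiver stops reading, instead of the connection being reset:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/tipc.h>

    #ifndef AF_TIPC
    #define AF_TIPC 30              /* not always visible to user space */
    #endif

    int main(void)
    {
        struct sockaddr_tipc srv;
        char buf[1024] = { 0 };
        int sd = socket(AF_TIPC, SOCK_STREAM, 0);

        memset(&srv, 0, sizeof(srv));
        srv.family = AF_TIPC;
        srv.addrtype = TIPC_ADDR_NAME;
        srv.addr.name.name.type = 18888;      /* made-up service type */
        srv.addr.name.name.instance = 17;     /* made-up instance     */

        if (sd < 0 || connect(sd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
            perror("connect");
            return 1;
        }

        /* If the peer never calls recv(), this loop should eventually
         * block inside send() -- it should NOT fail with a reset. */
        for (;;) {
            if (send(sd, buf, sizeof(buf), 0) < 0) {
                perror("send");     /* unexpected if flow control works */
                break;
            }
        }
        close(sd);
        return 0;
    }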

If not, there is either a bug somewhere, or a misunderstanding of what you
are trying to do.

///jon

Bharat Joshi wrote:
> Hi Allan,
>
>> TIPC currently *does* have a mechanism in place to allow a congested
>> receiver to provide back pressure to its connection-oriented peer,
>> thereby automatically blocking the peer from sending additional messages
>> until the receiver begins consuming messages.  A connection-oriented
>> sender keeps track of the number of messages it has sent to the
>> receiver, and also records acknowledgements from the receiver as to how
>> many messages it has consumed; the difference between the two is the
>> number of "sent, but not acknowledged" messages, and reflects the
>> sender's view of the size of the receiver's receive queue.  If this
>> value reaches 1024, TIPC should block the sender automatically until
>> more acknowledgements are received.
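
In code terms, the check Allan describes boils down to something like the
following; the names are illustrative, not the actual TIPC source:

    #define CONN_WINDOW 1024        /* max "sent, but not acknowledged" */

    struct conn_state {
        unsigned long sent_cnt;     /* messages sent to the peer         */
        unsigned long acked_cnt;    /* messages the peer reports reading */
    };

    /* The difference is the sender's view of the receiver's queue
     * depth; once it reaches the window, the sender is put to sleep
     * until more acknowledgements arrive. */
    static int conn_may_send(const struct conn_state *c)
    {
        return c->sent_cnt - c->acked_cnt < CONN_WINDOW;
    }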
>
> Yes, I see this mechanism, but unfortunately it does not work for my
> case. My receiver is not reading from its receive queue, so the queue
> fills up. When that happens, I do not want TIPC to tear down the
> connection; I just want the sender blocked until space becomes
> available in the receive queue again. The issue here is that the
> socket's receive queue is not big enough to hold that many packets.
>
> Anyway, thanks for the reply.
>
> Thanks,
> Bharat
>
>> Now, I can think of at least one way in which this flow control
>> mechanism can be circumvented (and there may be more).  It could be
>> that the
>> receiver begins rejecting messages because TIPC's global socket queue
>> limit has been reached (meaning there are at least 5000 messages waiting
>> on all socket queues taken as a whole), rather than because its own
>> receive queue has reached the per-socket 2500 message limit normally
>> required to trigger rejection.
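
Put differently, the rejection test being described looks roughly like
this (invented names, not the actual source):

    /* Per-socket and global thresholds from Allan's description. */
    #define SOCKET_QUEUE_LIMIT 2500   /* one socket's receive queue     */
    #define GLOBAL_QUEUE_LIMIT 5000   /* all socket queues taken as one */

    static int must_reject(unsigned int this_sock_qlen,
                           unsigned int all_socks_qlen)
    {
        return this_sock_qlen >= SOCKET_QUEUE_LIMIT ||
               all_socks_qlen >= GLOBAL_QUEUE_LIMIT;
    }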
>>
>> Does this seem to be a likely scenario for your system?
>>
>> Regards,
>> Al
>>
>>
>>> -----Original Message-----
>>> From: [EMAIL PROTECTED]
>>> [mailto:[EMAIL PROTECTED] On
>>> Behalf Of Bharat Joshi
>>> Sent: Wednesday, November 14, 2007 10:07 AM
>>> To: Horvath, Elmer
>>> Cc: [email protected]
>>> Subject: Re: [tipc-discussion] Query on the overloading of a receiver
>>>
>>> Hi Elmer,
>>>
>>>       Thanks for your response. Please see my replies inline.
>>>
>>> Regards,
>>> Bharat
>>>
>>>> The behaviour you have observed is the way that TIPC handles a full
>>>> queue.  It is difficult to distinguish between an application that
>>>> has simply gone to do something else for a while (allowing the
>>>> queue to fill) and one which has died and will never come back to
>>>> process its queue.
>>>>
>>> But I guess if the process dies, the socket would be closed and the
>>> socket receive queue would be gone. Right?
>>>
>>>> Your suggestion is interesting and has merit, depending on the
>>>> application.  Since you are using stream sockets, the only thing
>>>> that comes to mind is to do what you suggest, but at the
>>>> application level.  I.e., have the client send some sort of
>>>> application-level acknowledgement that it is processing data, and
>>>> use this to synchronize the transmission of data to it and keep
>>>> the queue sizes reasonable.  In this way, only those applications
>>>> that need this functionality would use it, and the TIPC code base
>>>> would remain uncluttered and would not have to try to distinguish
>>>> between a slow and a dead application receiving data.
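
For reference, the application-level scheme Elmer is suggesting could
look roughly like this; the window size, the credit message layout, and
all names are invented for illustration:

    #include <stddef.h>
    #include <sys/socket.h>

    #define APP_WINDOW 64   /* max msgs in flight without an app-level ack */

    /* Client side: consume data and periodically report the running
     * count of consumed messages back to the server. */
    static void consumer_loop(int sd)
    {
        char msg[1024];
        unsigned int consumed = 0;

        while (recv(sd, msg, sizeof(msg), 0) > 0) {
            /* ... process msg ... */
            consumed++;
            if (consumed % (APP_WINDOW / 2) == 0)
                send(sd, &consumed, sizeof(consumed), 0);
        }
    }

    /* Server side: before each send, wait until the client has caught
     * up enough to keep the receive queue size reasonable. */
    static void producer_send(int sd, const char *msg, size_t len,
                              unsigned int *sent, unsigned int *acked)
    {
        unsigned int consumed;

        while (*sent - *acked >= APP_WINDOW)
            if (recv(sd, &consumed, sizeof(consumed), 0) > 0)
                *acked = consumed;        /* credit from the client */

        send(sd, msg, len, 0);
        (*sent)++;
    }

(The sketch pretends the 4-byte credit arrives atomically; on a stream
socket a real implementation would need proper framing.)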
>>> But why don't we add this to TIPC itself? I would say that if the
>>> receive queue is full, we send an OVERLOAD message back to the
>>> sender, and the sender treats this as port congestion and sleeps.
>>> Then, when the receiver reads some messages, it invokes
>>> tipc_acknowledge(), which sends a protocol message to the other
>>> end and wakes the sender up.
>>>
>>> I raised this on the mailing list because I wanted to know why
>>> TIPC does not behave this way.
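
As a sketch, the sender-side states in that proposal would be something
like the following; all names are invented, this is not actual TIPC code:

    #include <stdbool.h>

    enum sender_state { SENDING, CONGESTED };

    struct sender { enum sender_state state; };

    /* Receive queue full: the peer answers with an OVERLOAD protocol
     * message, and the sender sleeps instead of seeing an abort. */
    static void on_overload_msg(struct sender *s)
    {
        s->state = CONGESTED;
    }

    /* The receiver read some messages and called tipc_acknowledge();
     * the resulting protocol message wakes the sender up again. */
    static void on_ack_msg(struct sender *s)
    {
        s->state = SENDING;
    }

    static bool may_send(const struct sender *s)
    {
        return s->state == SENDING;
    }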
>>>
>>>> Maybe others have other suggestions for you.
>>>>
>>> I will wait for other replies as well.
>>>
>>>> Elmer
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: [EMAIL PROTECTED]
>>>> [mailto:[EMAIL PROTECTED] On Behalf Of
>>>> Bharat Joshi
>>>> Sent: Wednesday, November 14, 2007 4:47 AM
>>>> To: [email protected]
>>>> Subject: [tipc-discussion] Query on the overloading of a receiver
>>>>
>>>> Hi,
>>>>
>>>>      I have a stream-socket client-server application using TIPC
>>>> sockets.  Both client and server run on the same node.  After the
>>>> client connects to the server, the server starts sending 1 kB
>>>> messages to the client.
>>>>
>>>>      I see that if the client's receive queue is full, TIPC returns
>>>> a TIPC_ERR_OVERLOAD error through a reject message.  This results
>>>> in the server tearing down the connection.  Shouldn't the server
>>>> instead wait for the client to process its messages, and resume
>>>> sending once the client can accept more?
>>>>
>>>>     Is this behavior correct?
>>>>
>>>> Thanks,
>>>> Bharat
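
For anyone wanting to reproduce the scenario: a client along these lines
(same made-up service values as the sender sketch above) connects and
then deliberately never reads, so its receive queue just fills up:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/tipc.h>

    #ifndef AF_TIPC
    #define AF_TIPC 30
    #endif

    int main(void)
    {
        struct sockaddr_tipc srv;
        int sd = socket(AF_TIPC, SOCK_STREAM, 0);

        memset(&srv, 0, sizeof(srv));
        srv.family = AF_TIPC;
        srv.addrtype = TIPC_ADDR_NAME;
        srv.addr.name.name.type = 18888;   /* must match the server's bind() */
        srv.addr.name.name.instance = 17;

        if (sd < 0 || connect(sd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
            perror("connect");
            return 1;
        }

        pause();   /* never recv(): let the receive queue fill up */
        return 0;
    }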


