Yup, that's what I have observed with XMPP and wrote about in my last mail. Thanks for digging another level deeper to explain it. But my question was: why is the previous request replied to when another request comes in from the client?
When I work with an event-based push server, I can actually avoid sending an empty response to the client request that was on hold. By doing so, I save the browser from making another unnecessary request to the server.

== xmpp case ==
- A client request is on hold, waiting for new data from the server.
- A new client request goes out to publish a message.
- The server responds to the previously held request, then processes the new incoming request.
- Finally, the client fires another request to maintain that one necessary connection with the CM.

== event based push case ==
- A client request is on hold, waiting for new data.
- Another request comes from the client to publish a message.
- The server responds to the client's new request while still holding the client's previous request.

Is this something related to how XMPP works, perhaps because of the incremental rid and the related parts of the spec?

Abhinav Singh,
Bangalore, India
http://abhinavsingh.com

________________________________
From: Mridul Muralidharan <[email protected]>
To: Bidirectional Streams Over Synchronous HTTP <[email protected]>
Sent: Sun, January 10, 2010 3:23:06 AM
Subject: Re: [BOSH] Pipelining / avoiding use of 2x HTTP-sockets

For XMPP you need to simulate async two-way communication between client and server, but HTTP is a client-initiated protocol and the server can't push messages to clients. Instead of going over other approaches, I will briefly explain how 124 works.

When a client request comes in, if it contains a request payload, the CM will forward it to the XMPP server. If there are pending XMPP server response(s) for the client, the CM will immediately respond to the client with them. If there are no pending responses for the client, the CM will defer sending a response for a configured amount of time (determined by the initial handshake request). This is so that if, during that period, the server has to send something to the client, there is a pending request to send it on.
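To make that concrete, here is a minimal toy sketch (in Python) of the hold-and-flush logic just described. The class and method names are purely illustrative, not from any real connection manager, and it ignores the negotiated 'wait' timeout, retransmits, and error paths:

```python
class ConnectionManager:
    """Toy model of the BOSH CM hold logic (illustrative only)."""

    def __init__(self):
        self.held = None     # rid of the single request the CM may hold
        self.pending = []    # responses queued from the xmpp server

    def on_client_request(self, rid, payload=None):
        """Handle one client request; return the (rid, body) replies sent."""
        sent = []
        if payload is not None:
            self.forward_to_server(payload)
        # Only one request is ever held: a newer request forces an
        # immediate (possibly empty) reply to the previously held one.
        if self.held is not None:
            sent.append((self.held, self.pending))
            self.held, self.pending = None, []
        # If data is already waiting, answer right away; otherwise hold
        # this request (a real CM would time it out after 'wait' seconds).
        if self.pending:
            sent.append((rid, self.pending))
            self.pending = []
        else:
            self.held = rid
        return sent

    def on_server_data(self, stanza):
        """Async data from the xmpp server goes out on the held request."""
        self.pending.append(stanza)
        if self.held is None:
            return []  # client will pick it up on its next request
        sent = [(self.held, self.pending)]
        self.held, self.pending = None, []
        return sent

    def forward_to_server(self, payload):
        pass  # stub: a real CM relays this over the xmpp stream

# The flow from this thread:
cm = ConnectionManager()
cm.on_client_request(1)                       # rid 1 is held (empty poll)
print(cm.on_client_request(2, "<message/>"))  # rid 1 flushed empty: [(1, [])]
print(cm.on_server_data("<presence/>"))       # out on held rid 2: [(2, ['<presence/>'])]
```

This is exactly the behaviour observed above: the held request is answered empty as soon as the publish request arrives, and the later async data rides the newer held request.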
The CM ensures that at any given point in time only a single request is 'held' with it, and no more: if a new client request comes in while a previous (based on rid) request is being held, it will immediately respond to the old request and 'hold' on to the new one.

From a client's point of view, things are simpler: whenever the client has to send a message to the server, it simply sends a new-rid request. The only constraint is that if the CM has responded to all requests (that is, there are no pending requests at the CM), the client must immediately initiate an empty request.

I am glossing over retransmission, negotiated timeouts, error paths, etc. here, but hopefully this explains the working.

So the bottom line is: the CM always has one request on which it can send a response back to the client for any async notification from the XMPP server. This is ensured both by the CM (by responding immediately to older requests when a newer one comes in, and holding on to only one request) and by the client (by immediately ensuring there is a pending request at the CM whenever all requests have been responded to). Similarly, the client can always send its messages to the server immediately, since the CM never holds more than one request, leaving at least one more for the client.

Hopefully this clarifies at least some things.

Regards,
Mridul

--- On Sat, 9/1/10, Abhinav Singh <[email protected]> wrote:

> From: Abhinav Singh <[email protected]>
> Subject: Re: [BOSH] Pipelining / avoiding use of 2x HTTP-sockets
> To: "Bidirectional Streams Over Synchronous HTTP" <[email protected]>
> Date: Saturday, 9 January, 2010, 8:09 PM
>
> Hi,
>
> I tried setting up a BOSH connection manager with PHP some time back.
> I noticed that if you have a pending request from the browser and the
> user tries to send another message, the previous pending request is
> for some reason immediately replied to by the jabber server via the
> conn manager. Finally, the ajax call corresponding to the sent message
> is processed and replied to.
> Thereafter, another request is put on hold for any incoming data.
>
> Various docs say we can have 2 requests per domain at a single time,
> but I actually never see that happen: my previous request is never
> replied to at a later point when data actually becomes available.
> However, since everything works fine, I think this is how it should
> be. I checked speeqe and other web-based BOSH implementations, and the
> same thing seems to happen there too.
>
> Since this seemed like a relevant ongoing thread, I thought I would
> raise my point here. Is this how it should be?
>
> Abhinav Singh,
> Bangalore, India
> http://abhinavsingh.com/blog
>
> From: Mridul Muralidharan <[email protected]>
> To: Bidirectional Streams Over Synchronous HTTP <[email protected]>
> Sent: Sat, January 9, 2010 4:24:46 AM
> Subject: Re: [BOSH] Pipelining / avoiding use of 2x HTTP-sockets
>
> --- On Sat, 9/1/10, Peter Saint-Andre <[email protected]> wrote:
>
> > From: Peter Saint-Andre <[email protected]>
> > Subject: Re: [BOSH] Pipelining / avoiding use of 2x HTTP-sockets
> > To: "Bidirectional Streams Over Synchronous HTTP" <[email protected]>
> > Date: Saturday, 9 January, 2010, 1:50 AM
> >
> > On 12/30/09 8:47 AM, Mridul Muralidharan wrote:
> > >
> > > Ian should really write up some document describing the way 124 is
> > > supposed to work; I have seen it confuse quite a lot of people.
> >
> > Ian disappeared quite a while ago.
>
> Ah! Was not aware of that.
>
> > > 124 requires that when the client wants to send a request, it
> > > should be able to as soon as possible, since the previous request
> > > from the client would typically be blocked at the CM if there is
> > > no response to be returned.
> >
> > Correct.
>
> > > This means that: a) the client uses 'another' connection to talk
> > > to the CM.
> > > In this case, the CM will immediately respond back on the previous
> > > connection and 'block' on the new connection (for returning
> > > responses with minimum delay when the server needs to send async
> > > messages back).
> >
> > Yes, that is the pattern we assume.
>
> > > b) If the client uses the same socket (for whatever reason:
> > > pipelining POSTs is really weird behavior IIRC), then the CM
> > > should detect the availability of a new request from the client
> > > and send a response back for the previous request.
> > >
> > > (b) is not required, since most, if not all, impls do not pipeline
> > > POST requests.
> >
> > Mridul, I agree with your later message that pipelining POSTs should
> > be strongly discouraged, as it already is in RFC 2616. Do we need
> > some text about that in XEP-0124?
>
> I always assumed that was implicit, but I guess it is not obvious when
> starting out. Considering the confusion it raised, I think you are
> right: we might want to discourage it strongly, with references to the
> HTTP RFC for the "why".
>
> Regards,
> Mridul
>
> > Peter
> >
> > --
> > Peter Saint-Andre
> > https://stpeter.im/
