Necko-folk,

This is the second of my design sketches for SPDY. I just want to keep necko-devs apprised of the basic approach so you can let me know if there is something important to consider that I have not. These are just broad, high-level notes, and there are certainly details and holes that haven't been worked out yet. As you know I've spent some time with the pipelining code, so some of this borrows the bits I like from its relationship to nsHttpTransaction.
I like this approach because it really doesn't modify the flow and expectations of the base HTTP code very much at all. It also keeps the lingua franca of headers in the HTTP format we already use (sometimes flattened!) to interact with interfaces outside of the necko stack proper. (For instance, the upload stream can start by including HTTP-formatted request headers that are external to the nsHttpRequestHead object.)

The first step is to define a spdy-session object, which contains some state and a set of spdy-streams, along with a couple of indexes to those streams. Each stream corresponds to one nsHttpTransaction. One index is a log-n lookup by stream ID for use by the inbound frame decoder; the other is a FIFO of the write-ready streams. spdy-session implements nsAHttpTransaction, nsAHttpSegmentReader, and nsAHttpSegmentWriter, and probably contains refs to the real nsHttpConnection and to the real nsHttpTransaction that corresponds to each stream in the session.

After NPN indicates we are doing SPDY, instantiate the spdy-session object and replace nsHttpConnection::mTransaction with it (adding in the original nsHttpTransaction as the first spdy-stream, of course). In nsHttpConnectionMgr, if a connection entry indicates a live connection that is using SPDY, then dispatch a new stream on that connection using a new AddTransaction() method on nsHttpConnection. The implementation of that just adds the new nsHttpTransaction object into the spdy-session.

This is going to create a slight race condition where we open two or more SPDY sessions in parallel at first, before we determine SPDY is active. We'll mitigate that by only using one of them once we have determined what is going on, and then aggressively cleaning up the others as soon as their transactions are complete, so it should be short lived. We also want to keep the nsConnectionEntry object, with a SPDY bit in it, around when there are no live connections so we don't need to relearn this info soon.

For data input: OnSocketReadable calls mTransaction->WriteSegments(). The implementation in the new spdy-session object can read the frame header. If it is a control frame it can just act on it as necessary; if it is a data frame it can look up the appropriate stream (along with its real transaction) using the stream index, set the "readable stream and length" pointers, and then call through to RealTransaction->WriteSegments() with the spdy object as the nsAHttpSegmentWriter. When OnWriteSegment() is called it can limit the reading from the socket to the number of bytes indicated in the frame header without incurring any extra reads or copies. If the stream state indicates that we are still reading the HTTP headers, this is the place to convert them from SPDY compressed format into HTTP format. This path also needs to update a counter of "unacknowledged bytes", and if it passes some kind of watermark the stream object needs to be marked write-ready and added to the write-ready FIFO so it can write out a window update.
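To make that concrete, here is a minimal standalone sketch of the two stream indexes and the input-side frame dispatch described above. All the names and types here (SpdyStream, SpdySessionSketch, OnFrameHeader, the std:: containers) are illustrative stand-ins I've invented for the sketch, not the real necko classes or interfaces:

  #include <cstdint>
  #include <deque>
  #include <map>
  #include <memory>

  // Stand-in for the per-stream state; the real thing would hold a ref to
  // the stream's nsHttpTransaction, its window counters, header state, etc.
  struct SpdyStream {};

  class SpdySessionSketch {
   public:
    // Called from the WriteSegments() path once a full frame header has
    // been read off the socket.
    void OnFrameHeader(bool isControl, uint32_t streamId, uint32_t dataLen) {
      if (isControl) {
        // act on SETTINGS / WINDOW_UPDATE / etc. right here
        return;
      }
      auto it = mStreamsById.find(streamId);  // the log-n index
      if (it == mStreamsById.end())
        return;                               // unknown stream id
      // Remember which stream owns the next dataLen bytes so that
      // OnWriteSegment() can cap socket reads at the frame boundary,
      // avoiding any extra reads or copies.
      mCurrentInputStream = it->second.get();
      mInputBytesRemaining = dataLen;
    }

    // Called e.g. when a stream's unacknowledged-bytes counter crosses the
    // watermark and it owes the peer a window update.
    void MarkWriteReady(SpdyStream* stream) {
      mWriteReady.push_back(stream);
    }

   private:
    std::map<uint32_t, std::unique_ptr<SpdyStream>> mStreamsById;
    std::deque<SpdyStream*> mWriteReady;      // fifo of write-ready streams
    SpdyStream* mCurrentInputStream = nullptr;
    uint32_t mInputBytesRemaining = 0;
  };

std::map happens to give the log-n lookup by stream ID mentioned above; a hashtable would work just as well if ordering doesn't matter.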
For data output: OnSocketWritable calls mTransaction->ReadSegments(). The new spdy object can figure out which stream gets attention by first looking at a transaction-that-blocked-mid-frame variable and finishing up that frame if necessary. Assuming that is cleared up, it pulls from the FIFO to find a stream that needs data written out - that might be part of the HTTP request or it might be control data such as a window update - and writes out some of that, up to a max frame size. If the frame is incomplete (due to TCP EWOULDBLOCK bubbled up through the socket stream) the stream goes into that blocked-mid-frame variable and we schedule another write callback. If the whole frame is written out but there is more data (e.g. the frame did not cover the entire contents of the POST) then the stream is added back to the write-ready FIFO. This allows multiplexing of the streams on this connection, especially if we stop iterating the FIFO after a few turns in order to allow the connection manager to run and potentially add new streams into the active spdy object - it is important to remember to do that, because the amount of data absorbed by buffering on the write path will take quite a while to actually transmit. As with the read path, the actual writing happens by calling RealTransaction->ReadSegments() and passing the spdy object as the nsAHttpSegmentReader. In the nsAHttpSegmentReader implementation of OnReadSegment() the appropriate frame header is added, sending the SYN_STREAM and converting the HTTP-formatted headers to SPDY-formatted headers.
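And a matching sketch of the output-side scheduling, under the same caveat: WriteFrameChunk(), HasMoreToWrite(), and ResumeWriteLater() are hypothetical helpers standing in for the real framing and socket code, with WriteFrameChunk() assumed to return false when EWOULDBLOCK bubbles up mid-frame:

  #include <deque>

  struct SpdyStream;  // same stand-in as in the previous sketch

  // Hypothetical helpers assumed for this sketch:
  bool WriteFrameChunk(SpdyStream* s);  // false => EWOULDBLOCK mid-frame
  bool HasMoreToWrite(SpdyStream* s);   // e.g. rest of a POST body pending
  void ResumeWriteLater();              // schedule another write callback

  struct OutputState {
    SpdyStream* blockedMidFrame = nullptr;  // frame that stalled mid-write
    std::deque<SpdyStream*> writeReady;     // the write-ready fifo
  };

  void OnSocketWritableSketch(OutputState& out) {
    // A frame that blocked mid-write must be finished before anything
    // else; frames cannot be interleaved on the wire.
    if (out.blockedMidFrame) {
      if (!WriteFrameChunk(out.blockedMidFrame))
        return ResumeWriteLater();          // still blocked
      out.blockedMidFrame = nullptr;
    }
    // Cap the turns per callback so the connection manager gets a chance
    // to run and add new streams before we buffer too much output.
    const int kMaxTurns = 4;                // arbitrary for the sketch
    for (int turn = 0; turn < kMaxTurns && !out.writeReady.empty(); ++turn) {
      SpdyStream* stream = out.writeReady.front();
      out.writeReady.pop_front();
      if (!WriteFrameChunk(stream)) {       // partial frame: park it
        out.blockedMidFrame = stream;
        return ResumeWriteLater();
      }
      if (HasMoreToWrite(stream))           // e.g. more of a large POST
        out.writeReady.push_back(stream);   // requeue at the back
    }
  }

Requeueing at the back of the FIFO is what round-robins the streams, and capping the turns per callback is what gives the connection manager its chance to add new streams before the write path soaks up too much buffered data.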
Fun details to deal with after the basics are working: server push and its interaction with the cache. Also, SETTINGS provides a whole bunch of interesting metadata - e.g. RTT and CWND. We should provide those to the server where possible, and we should record them via telemetry when we receive them.

-Pat