On Fri, Aug 05, 2016 at 03:06:28PM -0700, Junio C Hamano wrote:

> Torsten Bögershausen <tbo...@web.de> writes:
> > On 2016-08-03 18.42, larsxschnei...@gmail.com wrote:
> >> The filter is expected to respond with the result content in zero
> >> or more pkt-line packets and a flush packet at the end. Finally, a
> >> "result=success" packet is expected if everything went well.
> >> ------------------------
> >> packet:          git< SMUDGED_CONTENT
> >> packet:          git< 0000
> >> packet:          git< result=success\n
> >> ------------------------
> > I would really send the diagnostics/return codes before the content.
> I smell the assumption "by the time the filter starts output, it
> must have finished everything and knows both size and the status".
> I'd prefer to have a protocol that allows us to do streaming I/O on
> both ends when possible, even if the initial version of the filters
> (and the code that sits on the Git side) hold everything in-core
> before starting to talk.
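
Just to make the framing in the quoted example concrete, here is a toy
Python sketch of pkt-line framing (function names are mine; the 4-hex-digit
length prefix covers the whole packet, "0000" is a flush packet, and the
65516-byte payload cap comes from the pkt-line format):

```python
def pkt_line(data: bytes) -> bytes:
    """Frame one payload as a pkt-line packet: 4 hex digits of total
    length (header included), then the payload."""
    assert len(data) <= 65516, "pkt-line payload limit"
    return b"%04x" % (len(data) + 4) + data

FLUSH = b"0000"  # flush packet: length 0, no payload

def smudge_response(content: bytes) -> bytes:
    # Content packets, a flush, then the status packet, as in the
    # example above.
    return pkt_line(content) + FLUSH + pkt_line(b"result=success\n")
```

So pkt_line(b"hi") is b"0006hi", and the smudged content is followed by
"0000" and then the result packet.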

I think you really want to handle both cases:

  - the server says "no, I can't fulfill your request" (e.g., HTTP 404)

  - the server can abort an in-progress response to indicate that it
    could not be fulfilled completely (in HTTP chunked encoding, this
    requires hanging up before sending the final EOF chunk)

If we expect the second case to be rare, then hanging up before sending
the flush packet is probably OK. But we could also have a trailing error
code after the data to say "ignore that, we saw an error, but I can
still handle more requests".

It is true that you don't need the up-front status code in that case
(you can send an empty body and say "ignore that, we saw an error") but
that feels a little weird. And I expect it makes the client's life
easier to get a code up front, before it starts taking steps to handle
what it _thinks_ is probably a valid response.


PS I haven't followed HTTP/2 development much, but I think it solves the
   "hangup" issue by putting each request/response in its own framed
   stream. I actually wonder if that is a direction we will want to go
   eventually, too, for the same reason that HTTP/2 did: multiple async
   requests across a single connection.

   We already have some precedent in the sideband protocol. So imagine,
   for example, that we could ask the filter to work on several files
   simultaneously, by sending

     git> \1[file1 content]
     git> \2[file2 content]
     git> \1[file1 content]

   and so on. I don't think this is something that needs to happen in
   the initial protocol (it's not like git can do parallel checkout
   right now anyway). If there's a capability negotiation at the front
   of the protocol, then an async feature can be worked out later. Just
   food for thought at this point.
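
   For what it's worth, the demux side of that interleaving is trivial;
   a toy Python sketch (names mine, one leading sideband byte per packet
   as in the example above):

```python
def demux(packets):
    """Route sideband-prefixed payloads into per-channel buffers.

    Each packet's first byte is the channel (\1, \2, ...); the rest
    is content for that channel's stream.
    """
    streams = {}
    for pkt in packets:
        band, payload = pkt[0], pkt[1:]
        streams.setdefault(band, bytearray()).extend(payload)
    return streams
```

   So the three packets above would reassemble file1's bytes in order on
   channel 1 and file2's on channel 2, independent of the interleaving.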