On Wed, 2013-11-06 at 15:42 -0800, Shawn Pearce wrote:
> On Wed, Nov 6, 2013 at 1:41 PM, Carlos Martín Nieto <c...@elego.de> wrote:
> > On Wed, 2013-11-06 at 12:32 -0800, Junio C Hamano wrote:
> >> I'll queue these for now, but I doubt the wisdom of this series,
> >> given that the ship has already sailed a long time ago.
> >>
> >> Currently, no third-party implementation of a receiving end can
> >> accept thin push, because "thin push" is not a capability that needs
> >> to be checked by the current clients.  People will have to wait
> >> until the clients with 2/2 patch are widely deployed before starting
> >> to use such a receiving end that is incapable of "thin push".
> >>
> >> Wouldn't the world be a better place if instead they used that time
> >> waiting to help such a third-party receiving end to implement "thin
> >> push" support?
> >>
> >
> > Support in the code isn't always enough. The particular case that
> > brought this on is one where the index-pack implementation can deal with
> > thin packs just fine.
> >
> > This particular service takes the pack which the client sent and does
> > post-processing on it to store it elsewhere. During the receive-pack
> > equivalent, there is no git object db that it can query for the missing
> > base objects. I realise this is a pretty unusual situation.
> How... odd?
> At Google we have made effort to ensure servers can accept thin packs,
> even though it's clearly easier to accept non-thin, because clients in
> the wild already send thin packs and changing the deployed clients is
> harder than implementing the existing protocol.

It is harder, but IMO also more correct, as thin packs are an
optimisation that was added somewhat later. That's not to say it isn't
worth attempting; it's a trade-off between the complexity of the
communication between the pieces and the amount of extra data you're
willing to put up with.
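For a receiving end that does have an object database, completing a thin
pack is a one-liner. A sketch of what the receive-pack side runs (the
pack filename here is a hypothetical placeholder):

```shell
#!/bin/sh
# Sketch: completing an incoming thin pack on the receiving side.
# "incoming.pack" is a placeholder for the pack data the client sent.
# --fix-thin appends the missing delta bases from the local object db,
# which is exactly the step a receiver without an object db cannot do.
git index-pack --stdin --fix-thin < incoming.pack
```

This is why the service described above is stuck: --fix-thin has nowhere
to find the base objects during the transfer.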

The Google (Code) servers don't just support thin packs; for
upload-pack, they force them upon the client, which is quite
frustrating, as the server won't even tell you why it closed the
connection and sends a 500 instead. But that's a different story.

> If the server can't complete the pack, I guess this also means the
> client cannot immediately fetch from the server it just pushed to?

Not all the details have been worked out yet, but the new history should
be converted into the target format before reporting success and closing
the connection. The Git frontend/protocol is one way of putting data
into the system, but it is not the system's native storage format. The
database where this is getting stored only has very limited knowledge of

I'll reroll the series with "no-thin" as mentioned elsewhere in this
thread.
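Until a server-side capability like that is deployed, the client-side
workaround is the existing push option that disables thin packs
entirely. A sketch (remote and branch names are placeholders):

```shell
#!/bin/sh
# Sketch: opt out of thin packs on the client. Every delta in the
# resulting pack is made against another object in the same pack, so
# the receiver never needs to look up a missing base object.
# "origin" and "master" are placeholder names.
git push --no-thin origin master
```

The proposed capability would let the server request this behaviour
itself instead of relying on users to pass the flag.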


To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
