Re: Proposal for missing blob support in Git repos

2017-05-04 Thread Jonathan Tan

On 05/03/2017 09:29 PM, Junio C Hamano wrote:

Jonathan Tan  writes:


I see the semantics as "don't write what you already have", where
"have" means what you have in local storage, but if you extend "have"
to what upstream has, then yes, you're right that this changes
(ignoring shallow clones).

This does remove a resistance that we have against hash collision (in
that normally we would have the correct object for a given hash and
can resist other servers trying to introduce a wrong object, but now
that is no longer the case), but I think it's better than consulting
the hook whenever you want to write anything (which is also a change
in semantics in that you're consulting an external source whenever
you're writing an object, besides the performance implications).


As long as the above pros-and-cons analysis is understood and we are
striking a balance between performance and strictness with such an
understanding of the implications, I am perfectly fine with the
proposal.  That is why my comment has never been "I think that is
wrong" but consistently was "I wonder if that is a good thing."

Thanks.


Noted - if/when I update the patch, I'll include this information.


Re: Proposal for missing blob support in Git repos

2017-05-03 Thread Junio C Hamano
Jonathan Tan  writes:

> I see the semantics as "don't write what you already have", where
> "have" means what you have in local storage, but if you extend "have"
> to what upstream has, then yes, you're right that this changes
> (ignoring shallow clones).
>
> This does remove a resistance that we have against hash collision (in
> that normally we would have the correct object for a given hash and
> can resist other servers trying to introduce a wrong object, but now
> that is no longer the case), but I think it's better than consulting
> the hook whenever you want to write anything (which is also a change
> in semantics in that you're consulting an external source whenever
> you're writing an object, besides the performance implications).

As long as the above pros-and-cons analysis is understood and we are
striking a balance between performance and strictness with such an
understanding of the implications, I am perfectly fine with the
proposal.  That is why my comment has never been "I think that is
wrong" but consistently was "I wonder if that is a good thing."

Thanks.


Re: Proposal for missing blob support in Git repos

2017-05-02 Thread Jonathan Tan

On 05/02/2017 11:32 AM, Ævar Arnfjörð Bjarmason wrote:

On Tue, May 2, 2017 at 7:21 PM, Jonathan Tan  wrote:

On Mon, May 1, 2017 at 6:41 PM, Junio C Hamano  wrote:

Jonathan Tan  writes:


On 05/01/2017 04:29 PM, Junio C Hamano wrote:

Jonathan Tan  writes:


Thanks for your comments. If you're referring to the codepath
involving write_sha1_file() (for example, builtin/hash-object ->
index_fd or builtin/unpack-objects), that is fine because
write_sha1_file() invokes freshen_packed_object() and
freshen_loose_object() directly to check if the object already exists
(and thus does not invoke the new mechanism in this patch).


Is that a good thing, though?  It means that an attacker can
feed one version to the remote object store your "grab blob" hook
gets the blobs from, and have you add a colliding object locally,
and the usual "are we recording the same object as an existing one?"
check is bypassed.


If I understand this correctly, what you mean is the situation where
the hook adds an object to the local repo, overriding another object
of the same name?


No.

write_sha1_file() pays attention to objects already in the local
object store to avoid hash collisions that can be used to replace a
known-to-be-good object and that is done as a security measure.
What I am reading in your response is that this new mechanism
bypasses that, and I was wondering if that is a good thing.


Oh, what I meant was that write_sha1_file() bypasses the new
mechanism, not that the new mechanism bypasses the checks in
write_sha1_file().

To be clear, here's what happens when write_sha1_file() is invoked
(before and after this patch - this patch does not affect
write_sha1_file at all):
1. (some details omitted)
2. call freshen_packed_object
3. call freshen_loose_object if necessary
4. write object (if freshen_packed_object and freshen_loose_object do
not both return 0)

Nothing changes in this patch (whether a hook is defined or not).


But don't the semantics change in the sense that before
core.missingBlobCommand you couldn't write a new blob SHA1 that was
already part of your history,


Strictly speaking, you can already do this if you don't have the blob in 
your local repo (for example, with shallow clones - you likely wouldn't 
have blobs pointed to by historical commits outside whatever depth is set).
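
As a rough shell illustration of that point (the clone URL and the
historical blob hash below are placeholders):

  # In a shallow clone, a blob referenced only by commits outside the
  # shallow boundary is not in local storage...
  git clone --depth=1 https://example.com/repo.git shallow
  cd shallow

  # ...so the local existence check cannot see it:
  git cat-file -e <historical-blob-sha1> || echo "not present locally"

  # and new content is hashed and written without being compared
  # against that absent blob:
  echo "some content" | git hash-object -w --stdin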


> whereas with this change

write_sha1_file() might write what it considers to be a new blob, but
it's actually colliding with an existing blob, but write_sha1_file()
doesn't know that because knowing would involve asking the hook to
fetch the blob?


Yes, this might happen.

I see the semantics as "don't write what you already have", where "have" 
means what you have in local storage, but if you extend "have" to what 
upstream has, then yes, you're right that this changes (ignoring shallow 
clones).


This does remove a resistance that we have against hash collision (in 
that normally we would have the correct object for a given hash and can 
resist other servers trying to introduce a wrong object, but now that is 
no longer the case), but I think it's better than consulting the hook 
whenever you want to write anything (which is also a change in semantics 
in that you're consulting an external source whenever you're writing an 
object, besides the performance implications).



And here's what happens when has_sha1_file (or another function listed
in the commit message) is invoked:
1. check for existence of packed object of the requested name
2. check for existence of loose object of the requested name
3. check again for existence of packed object of the requested name
4. if a hook is defined, invoke the hook and repeat 1-3

Here, in step 4, the hook could do whatever it wants to the repository.


This might be a bit of early bikeshedding, but then again the lack of
early bikeshedding tends to turn into standards.

Wouldn't it be better to name this core.missingObjectCommand & have
the hook take a list on stdin like:

<id> <sha1> <type> <reason> [<optional data>]

And have the hook respond:

<id> <sha1> <status> [<optional data>]

I.e. what you'd do now is send this to the hook:

1 <sha1> blob missing

And the hook would respond:

1 <sha1> ok

But this leaves the door open to addressing this potential edge case with
writing new blobs in the future, i.e. write_sha1_file() could call it
as:

1 <sha1> blob new

And the hook could either respond immediately as:

1 <sha1> ok

If it's in some #YOLO mode where it's not going to check for colliding
blobs over the network, or alternatively ask the parent repo if it
has those blobs, and if so print:

1 <sha1> collision

Or something like that.

This also enables future lazy loading of trees/commits from the same
API, and for the hook to respond out-of-order to the input it gets as
it can, since each request is prefixed with an incrementing request
id.


My initial thought is that it would be better to extend hook support by 
adding configuration options for separate hooks instead of extending an 
existing protocol. For example, with t

Re: Proposal for missing blob support in Git repos

2017-05-02 Thread Ævar Arnfjörð Bjarmason
On Tue, May 2, 2017 at 7:21 PM, Jonathan Tan  wrote:
> On Mon, May 1, 2017 at 6:41 PM, Junio C Hamano  wrote:
>> Jonathan Tan  writes:
>>
>>> On 05/01/2017 04:29 PM, Junio C Hamano wrote:
 Jonathan Tan  writes:

> Thanks for your comments. If you're referring to the codepath
> involving write_sha1_file() (for example, builtin/hash-object ->
> index_fd or builtin/unpack-objects), that is fine because
> write_sha1_file() invokes freshen_packed_object() and
> freshen_loose_object() directly to check if the object already exists
> (and thus does not invoke the new mechanism in this patch).

Is that a good thing, though?  It means that an attacker can
feed one version to the remote object store your "grab blob" hook
gets the blobs from, and have you add a colliding object locally,
and the usual "are we recording the same object as an existing one?"
check is bypassed.
>>>
>>> If I understand this correctly, what you mean is the situation where
>>> the hook adds an object to the local repo, overriding another object
>>> of the same name?
>>
>> No.
>>
>> write_sha1_file() pays attention to objects already in the local
>> object store to avoid hash collisions that can be used to replace a
>> known-to-be-good object and that is done as a security measure.
>> What I am reading in your response is that this new mechanism
>> bypasses that, and I was wondering if that is a good thing.
>
> Oh, what I meant was that write_sha1_file() bypasses the new
> mechanism, not that the new mechanism bypasses the checks in
> write_sha1_file().
>
> To be clear, here's what happens when write_sha1_file() is invoked
> (before and after this patch - this patch does not affect
> write_sha1_file at all):
> 1. (some details omitted)
> 2. call freshen_packed_object
> 3. call freshen_loose_object if necessary
> 4. write object (if freshen_packed_object and freshen_loose_object do
> not both return 0)
>
> Nothing changes in this patch (whether a hook is defined or not).

But don't the semantics change in the sense that before
core.missingBlobCommand you couldn't write a new blob SHA1 that was
already part of your history, whereas with this change
write_sha1_file() might write what it considers to be a new blob, but
it's actually colliding with an existing blob, but write_sha1_file()
doesn't know that because knowing would involve asking the hook to
fetch the blob?

> And here's what happens when has_sha1_file (or another function listed
> in the commit message) is invoked:
> 1. check for existence of packed object of the requested name
> 2. check for existence of loose object of the requested name
> 3. check again for existence of packed object of the requested name
> 4. if a hook is defined, invoke the hook and repeat 1-3
>
> Here, in step 4, the hook could do whatever it wants to the repository.

This might be a bit of early bikeshedding, but then again the lack of
early bikeshedding tends to turn into standards.

Wouldn't it be better to name this core.missingObjectCommand & have
the hook take a list on stdin like:

<id> <sha1> <type> <reason> [<optional data>]

And have the hook respond:

<id> <sha1> <status> [<optional data>]

I.e. what you'd do now is send this to the hook:

1 <sha1> blob missing

And the hook would respond:

1 <sha1> ok

But this leaves the door open to addressing this potential edge case with
writing new blobs in the future, i.e. write_sha1_file() could call it
as:

1 <sha1> blob new

And the hook could either respond immediately as:

1 <sha1> ok

If it's in some #YOLO mode where it's not going to check for colliding
blobs over the network, or alternatively ask the parent repo if it
has those blobs, and if so print:

1 <sha1> collision

Or something like that.

This also enables future lazy loading of trees/commits from the same
API, and for the hook to respond out-of-order to the input it gets as
it can, since each request is prefixed with an incrementing request
id.
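
A minimal sketch of a hook speaking the request/response format proposed
above, assuming whitespace-separated "<id> <sha1> <type> <reason>" lines
on stdin and "<id> <sha1> <status>" lines on stdout; the two helper
commands are placeholders, not anything from the patch:

  #!/bin/sh
  # Hypothetical core.missingObjectCommand hook, for illustration only.
  while read -r id sha1 type reason _extra; do
      case "$reason" in
      missing)
          # Try to obtain the object, then report status for this request.
          fetch-object-from-upstream "$sha1" || exit 1
          echo "$id $sha1 ok"
          ;;
      new)
          # Ask the parent repo whether it already has this hash.
          if upstream-has-object "$sha1"; then
              echo "$id $sha1 collision"
          else
              echo "$id $sha1 ok"
          fi
          ;;
      esac
  done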


Re: Proposal for missing blob support in Git repos

2017-05-02 Thread Jonathan Tan
On Mon, May 1, 2017 at 6:41 PM, Junio C Hamano  wrote:
> Jonathan Tan  writes:
>
>> On 05/01/2017 04:29 PM, Junio C Hamano wrote:
>>> Jonathan Tan  writes:
>>>
 Thanks for your comments. If you're referring to the codepath
 involving write_sha1_file() (for example, builtin/hash-object ->
 index_fd or builtin/unpack-objects), that is fine because
 write_sha1_file() invokes freshen_packed_object() and
 freshen_loose_object() directly to check if the object already exists
 (and thus does not invoke the new mechanism in this patch).
>>>
>>> Is that a good thing, though?  It means that an attacker can
>>> feed one version to the remote object store your "grab blob" hook
>>> gets the blobs from, and have you add a colliding object locally,
>>> and the usual "are we recording the same object as an existing one?"
>>> check is bypassed.
>>
>> If I understand this correctly, what you mean is the situation where
>> the hook adds an object to the local repo, overriding another object
>> of the same name?
>
> No.
>
> write_sha1_file() pays attention to objects already in the local
> object store to avoid hash collisions that can be used to replace a
> known-to-be-good object and that is done as a security measure.
> What I am reading in your response is that this new mechanism
> bypasses that, and I was wondering if that is a good thing.

Oh, what I meant was that write_sha1_file() bypasses the new
mechanism, not that the new mechanism bypasses the checks in
write_sha1_file().

To be clear, here's what happens when write_sha1_file() is invoked
(before and after this patch - this patch does not affect
write_sha1_file at all):
1. (some details omitted)
2. call freshen_packed_object
3. call freshen_loose_object if necessary
4. write object (if freshen_packed_object and freshen_loose_object do
not both return 0)

Nothing changes in this patch (whether a hook is defined or not).

And here's what happens when has_sha1_file (or another function listed
in the commit message) is invoked:
1. check for existence of packed object of the requested name
2. check for existence of loose object of the requested name
3. check again for existence of packed object of the requested name
4. if a hook is defined, invoke the hook and repeat 1-3

Here, in step 4, the hook could do whatever it wants to the repository.
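
For a concrete (if simplistic) picture of what step 4 could invoke, a
hook might, for example, copy the requested objects from another local
clone; the path below is a placeholder, and this is only an illustration,
not the hook from the patch:

  #!/bin/sh
  # Illustrative missing-blob hook: reads one requested hash per line on
  # stdin and copies each blob from a full local clone into this repo's
  # object store (hash-object -w stores it under the same SHA-1).
  full_clone=/path/to/full-clone    # placeholder path
  tmp=$(mktemp) || exit 1
  trap 'rm -f "$tmp"' EXIT
  while read -r sha1; do
      git -C "$full_clone" cat-file blob "$sha1" >"$tmp" || exit 1
      git hash-object -w -t blob --stdin <"$tmp" >/dev/null || exit 1
  done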


Re: Proposal for missing blob support in Git repos

2017-05-01 Thread Junio C Hamano
Jonathan Tan  writes:

> On 05/01/2017 04:29 PM, Junio C Hamano wrote:
>> Jonathan Tan  writes:
>>
>>> Thanks for your comments. If you're referring to the codepath
>>> involving write_sha1_file() (for example, builtin/hash-object ->
>>> index_fd or builtin/unpack-objects), that is fine because
>>> write_sha1_file() invokes freshen_packed_object() and
>>> freshen_loose_object() directly to check if the object already exists
>>> (and thus does not invoke the new mechanism in this patch).
>>
>> Is that a good thing, though?  It means that an attacker can
>> feed one version to the remote object store your "grab blob" hook
>> gets the blobs from, and have you add a colliding object locally,
>> and the usual "are we recording the same object as an existing one?"
>> check is bypassed.
>
> If I understand this correctly, what you mean is the situation where
> the hook adds an object to the local repo, overriding another object
> of the same name?

No.  

write_sha1_file() pays attention to objects already in the local
object store to avoid hash collisions that can be used to replace a
known-to-be-good object and that is done as a security measure.
What I am reading in your response is that this new mechanism
bypasses that, and I was wondering if that is a good thing.



Re: Proposal for missing blob support in Git repos

2017-05-01 Thread Brandon Williams
On 05/01, Jonathan Tan wrote:
> On 05/01/2017 04:29 PM, Junio C Hamano wrote:
> >Jonathan Tan  writes:
> >
> >>Thanks for your comments. If you're referring to the codepath
> >>involving write_sha1_file() (for example, builtin/hash-object ->
> >>index_fd or builtin/unpack-objects), that is fine because
> >>write_sha1_file() invokes freshen_packed_object() and
> >>freshen_loose_object() directly to check if the object already exists
> >>(and thus does not invoke the new mechanism in this patch).
> >
> >Is that a good thing, though?  It means that an attacker can
> >feed one version to the remote object store your "grab blob" hook
> >gets the blobs from, and have you add a colliding object locally,
> >and the usual "are we recording the same object as an existing one?"
> >check is bypassed.
> 
> If I understand this correctly, what you mean is the situation where
> the hook adds an object to the local repo, overriding another object
> of the same name? If yes, I think that is the nature of executing an
> arbitrary command. If we really want to avoid that, we could drop
> the hook functionality (and instead, for example, provide the URL of
> a Git repo from which we can communicate using a new
> fetch-blob protocol), although that would reduce the usefulness of
> this, especially during the transition period in which we don't have
> any sort of batching of requests.

If I understand correctly this is where we aim to be once all is said
and done.  I guess the question is what are we willing to do during the
transition phase.

-- 
Brandon Williams


Re: Proposal for missing blob support in Git repos

2017-05-01 Thread Jonathan Tan

On 05/01/2017 04:29 PM, Junio C Hamano wrote:

Jonathan Tan  writes:


Thanks for your comments. If you're referring to the codepath
involving write_sha1_file() (for example, builtin/hash-object ->
index_fd or builtin/unpack-objects), that is fine because
write_sha1_file() invokes freshen_packed_object() and
freshen_loose_object() directly to check if the object already exists
(and thus does not invoke the new mechanism in this patch).


Is that a good thing, though?  It means that an attacker can
feed one version to the remote object store your "grab blob" hook
gets the blobs from, and have you add a colliding object locally,
and the usual "are we recording the same object as an existing one?"
check is bypassed.


If I understand this correctly, what you mean is the situation where the 
hook adds an object to the local repo, overriding another object of the 
same name? If yes, I think that is the nature of executing an arbitrary 
command. If we really want to avoid that, we could drop the hook 
functionality (and instead, for example, provide the URL of a Git repo 
instead from which we can communicate using a new fetch-blob protocol), 
although that would reduce the usefulness of this, especially during the 
transition period in which we don't have any sort of batching of requests.


Re: Proposal for missing blob support in Git repos

2017-05-01 Thread Junio C Hamano
Jonathan Tan  writes:

> Thanks for your comments. If you're referring to the codepath
> involving write_sha1_file() (for example, builtin/hash-object ->
> index_fd or builtin/unpack-objects), that is fine because
> write_sha1_file() invokes freshen_packed_object() and
> freshen_loose_object() directly to check if the object already exists
> (and thus does not invoke the new mechanism in this patch).

Is that a good thing, though?  It means that an attacker can
feed one version to the remote object store your "grab blob" hook
gets the blobs from, and have you add a colliding object locally,
and the usual "are we recording the same object as an existing one?"
check is bypassed.



Re: Proposal for missing blob support in Git repos

2017-05-01 Thread Jonathan Tan

On 04/30/2017 08:57 PM, Junio C Hamano wrote:

One thing I wonder is what the performance impact of a change like
this to the codepath that wants to see if an object does _not_ exist
in the repository.  When creating a new object by hashing raw data,
we see if an object with the same name already exists before writing
the compressed loose object out (or comparing the payload to detect
hash collision).  With "missing blob" support, we'd essentially
spawn an extra process every time we want to create a new blob
locally, and most of the time that is done only to hear the external
command say "no, we've never heard of such an object", with a
possibly large latency.

If we do not have to worry about that (or if it is no use to worry
about it, because we cannot avoid it if we wanted to do the lazy
loading of objects from elsewhere), then the patch presented here
looked like a sensible first step towards the stated goal.

Thanks.


Thanks for your comments. If you're referring to the codepath involving 
write_sha1_file() (for example, builtin/hash-object -> index_fd or 
builtin/unpack-objects), that is fine because write_sha1_file() invokes 
freshen_packed_object() and freshen_loose_object() directly to check if 
the object already exists (and thus does not invoke the new mechanism in 
this patch).


Having said that, looking at other parts of the fetching mechanism, 
there are a few calls to has_sha1_file() and others that might need to 
be checked. (We have already discussed one - the one in rev-list when 
invoked to check connectivity.) I could take a look at that, but was 
hoping for discussion on what I've sent so far (so that I know that I'm 
on the right track, and because it somewhat works, albeit slowly).


Re: Proposal for missing blob support in Git repos

2017-04-30 Thread Junio C Hamano
Jonathan Tan  writes:

> In order to determine the code changes in sha1_file.c necessary, I
> investigated the following:
>  (1) functions in sha1_file that take in a hash, without the user
>  regarding how the object is stored (loose or packed)
>  (2) functions in sha1_file that operate on packed objects (because I
>  need to check callers that know about the loose/packed distinction
>  and operate on both differently, and ensure that they can handle
>  the concept of objects that are neither loose nor packed)
>
> For (1), I looked through all non-static functions in sha1_file.c that
> take in an unsigned char * parameter. The ones that are relevant, and my
> modifications to them to resolve this problem, are:
>  - sha1_object_info_extended (fixed in this commit)
>  - sha1_object_info (auto-fixed by sha1_object_info_extended)
>  - read_sha1_file_extended (fixed by fixing read_object)
>  - read_object_with_reference (auto-fixed by read_sha1_file_extended)
>  - force_object_loose (only called from builtin/pack-objects.c, which
>already knows that at least one pack contains this object)
>  - has_sha1_file_with_flags (fixed in this commit)
>  - assert_sha1_type (auto-fixed by sha1_object_info)
>
> As described in the list above, several changes have been included in
> this commit to fix the necessary functions.
>
> For (2), I looked through the same functions as in (1) and also
> for_each_packed_object. The ones that are relevant are:
>  - parse_pack_index
>- http - indirectly from http_get_info_packs
>  - find_pack_entry_one
>- this searches a single pack that is provided as an argument; the
>  caller already knows (through other means) that the sought object
>  is in a specific pack
>  - find_sha1_pack
>- fast-import - appears to be an optimization to not store a
>  file if it is already in a pack
>- http-walker - to search through a struct alt_base
>- http-push - to search through remote packs
>  - has_sha1_pack
>- builtin/fsck - fixed in this commit
>- builtin/count-objects - informational purposes only (check if loose
>  object is also packed)
>- builtin/prune-packed - check if object to be pruned is packed (if
>  not, don't prune it)
>- revision - used to exclude packed objects if requested by user
>- diff - just for optimization
>  - for_each_packed_object
>- reachable - only to find recent objects
>- builtin/fsck - fixed in this commit
>- builtin/cat-file - see below
>
> As described in the list above, builtin/fsck has been updated. I have
> left builtin/cat-file alone; this means that cat-file
> --batch-all-objects will only operate on objects physically in the repo.

One thing I wonder is what the performance impact of a change like
this to the codepath that wants to see if an object does _not_ exist
in the repository.  When creating a new object by hashing raw data,
we see if an object with the same name already exists before writing
the compressed loose object out (or comparing the payload to detect
hash collision).  With "missing blob" support, we'd essentially
spawn an extra process every time we want to create a new blob
locally, and most of the time that is done only to hear the external
command say "no, we've never heard of such an object", with a
possibly large latency.

If we do not have to worry about that (or if it is no use to worry
about it, because we cannot avoid it if we wanted to do the lazy
loading of objects from elsewhere), then the patch presented here
looked like a sensible first step towards the stated goal.

Thanks.


Proposal for missing blob support in Git repos

2017-04-26 Thread Jonathan Tan
Here is a proposal for missing blob support in Git repos. There have
been several other proposals [1] [2]; this is similar to those except
that I have provided a more comprehensive analysis of the changes that
need to be done, and have made those changes (see commit below the
scissors line).

This proposal is limited to local handling using a user-specified hook.
I have another proposal out [3] for what a potential default hook should
be - how Git on the server can serve blobs to repos that have missing
blobs.

[1] <20170304191901.9622-1-mar...@efaref.net>
[2] <1488999039-37631-1-git-send-email-...@jeffhostetler.com>
[3] 

-- 8< --
sha1_file, fsck: add missing blob hook support

Currently, Git does not handle well repos that have very large numbers
of blobs, or repos that wish to minimize manipulation of certain blobs
(for example, because they are very large), even if the user operates
mostly on part of the repo. This is because Git is designed on the
assumption that every blob referenced by a tree object is available
somewhere in the repo storage.

As a first step to reducing this problem, add rudimentary support for
missing blobs by teaching sha1_file to invoke a hook whenever a blob is
requested but missing, and by updating fsck to tolerate missing blobs.
The hook is a shell command that can be configured through "git config";
this hook takes in a list of hashes and writes (if successful) the
corresponding objects to the repo's local storage.
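
For example (core.missingBlobCommand is the configuration name used in
this thread; the hook body and the blob server URL below are illustrative
placeholders, not part of this commit), the hook could be wired up as:

  git config core.missingBlobCommand "$HOME/bin/fetch-missing-blobs"

with $HOME/bin/fetch-missing-blobs being something like:

  #!/bin/sh
  # Read one missing hash per line on stdin and write the corresponding
  # blob into the local object store; BLOB_SERVER is an assumed HTTP
  # service that returns a blob's raw contents given its hash.
  BLOB_SERVER=https://example.com/blobs
  tmp=$(mktemp) || exit 1
  trap 'rm -f "$tmp"' EXIT
  while read -r sha1; do
      curl -fsS "$BLOB_SERVER/$sha1" >"$tmp" || exit 1
      git hash-object -w -t blob --stdin <"$tmp" >/dev/null || exit 1
  done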

This commit does not include support for generating such a repo; neither
has any command (other than fsck) been modified to either tolerate
missing blobs (without invoking the hook) or be more efficient in
invoking the missing blob hook. Only a fallback is provided in the form
of sha1_file invoking the missing blob hook when necessary.

In order to determine the code changes in sha1_file.c necessary, I
investigated the following:
 (1) functions in sha1_file that take in a hash, without the user
 regarding how the object is stored (loose or packed)
 (2) functions in sha1_file that operate on packed objects (because I
 need to check callers that know about the loose/packed distinction
 and operate on both differently, and ensure that they can handle
 the concept of objects that are neither loose nor packed)

For (1), I looked through all non-static functions in sha1_file.c that
take in an unsigned char * parameter. The ones that are relevant, and my
modifications to them to resolve this problem, are:
 - sha1_object_info_extended (fixed in this commit)
 - sha1_object_info (auto-fixed by sha1_object_info_extended)
 - read_sha1_file_extended (fixed by fixing read_object)
 - read_object_with_reference (auto-fixed by read_sha1_file_extended)
 - force_object_loose (only called from builtin/pack-objects.c, which
   already knows that at least one pack contains this object)
 - has_sha1_file_with_flags (fixed in this commit)
 - assert_sha1_type (auto-fixed by sha1_object_info)

As described in the list above, several changes have been included in
this commit to fix the necessary functions.

For (2), I looked through the same functions as in (1) and also
for_each_packed_object. The ones that are relevant are:
 - parse_pack_index
   - http - indirectly from http_get_info_packs
 - find_pack_entry_one
   - this searches a single pack that is provided as an argument; the
 caller already knows (through other means) that the sought object
 is in a specific pack
 - find_sha1_pack
   - fast-import - appears to be an optimization to not store a
 file if it is already in a pack
   - http-walker - to search through a struct alt_base
   - http-push - to search through remote packs
 - has_sha1_pack
   - builtin/fsck - fixed in this commit
   - builtin/count-objects - informational purposes only (check if loose
 object is also packed)
   - builtin/prune-packed - check if object to be pruned is packed (if
 not, don't prune it)
   - revision - used to exclude packed objects if requested by user
   - diff - just for optimization
 - for_each_packed_object
   - reachable - only to find recent objects
   - builtin/fsck - fixed in this commit
   - builtin/cat-file - see below

As described in the list above, builtin/fsck has been updated. I have
left builtin/cat-file alone; this means that cat-file
--batch-all-objects will only operate on objects physically in the repo.

Some alternative designs that I considered but rejected:

 - Storing a list of hashes of missing blobs, possibly with metadata
   (much like the shallow list). Having such a list would allow for
   things like better error messages, attaching metadata (for example,
   file size or binary/text nature) to each blob, and configuring
   different hooks for each blob, but it is difficult to scale to large
   repos.
 - Adding a hook whenever a packed blob is requested, not on any blob.
   That is, whenever we attempt to search the packfiles for a blob, if
   it is missi