On Wed, Mar 15, 2017 at 02:48:57PM +0100, Thomas Braun wrote:

> I cannot comment on the git-lfs issues. The issue that you cannot
> properly use files larger than 4GB on Windows (no matter if 32-bit or
> 64-bit) is known; see my findings from May last year [1]. Unfortunately
> nobody, including me, has found time to fix the underlying issue
> properly.

I suspect the fix is going to be quite involved. The use of "unsigned
long" for object sizes is all over the code base.

> My band-aid patch from [1]
> 
> diff --git a/pack-write.c b/pack-write.c
> index 33293ce..ebb8b0a 100644
> --- a/pack-write.c
> +++ b/pack-write.c
> @@ -313,6 +313,9 @@ int encode_in_pack_object_header(enum object_type type, uintmax_t size, unsigned
>         if (type < OBJ_COMMIT || type > OBJ_REF_DELTA)
>                 die("bad type %d", type);
> 
> +       if (bitsizeof(unsigned long) != bitsizeof(uintmax_t) && size > (unsigned long) size)
> +               die("Cannot handle files this big");
> +
>         c = (type << 4) | (size & 15);
>         size >>= 4;
>         while (size) {
> 
> would at least tell the user much earlier about the problem. I can
> submit the above diff as a proper patch if it is deemed a worthy change.

I agree that detecting the situation in the meantime is a good idea.
The patch above handles the bulk-checkin code path, I'd guess. It
might be nice to have similar checks in other places, too:

  - when reading from an existing packfile

    Looks like we may already have such a check in
    unpack_object_header_buffer().

  - when taking in new objects via index-pack or unpack-objects (to
    catch a fetch of a too-big object)

    I think index-pack.c:unpack_raw_entry() would want a similar check
    to what is in unpack_object_header_buffer() (sketched below).
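
    To illustrate, a rough sketch (simplified; not git's actual code)
    of the kind of shift guard unpack_object_header_buffer() applies
    while decoding the varint-encoded size, which unpack_raw_entry()
    could mirror:

        #include <limits.h>
        #include <stddef.h>

        #define bitsizeof(x) (CHAR_BIT * sizeof(x))

        /*
         * Decode the size from a pack object header; return bytes
         * consumed, or 0 if the input is truncated or the size would
         * overflow "unsigned long".
         */
        static size_t decode_header_size(const unsigned char *buf,
                                         size_t len, unsigned long *sizep)
        {
                unsigned long size;
                unsigned shift = 4;
                size_t used = 0;
                unsigned char c;

                if (!len)
                        return 0;
                c = buf[used++];
                size = c & 15;  /* low 4 size bits; bits 4-6 are the type */
                while (c & 0x80) {
                        if (used == len || shift >= bitsizeof(unsigned long))
                                return 0;  /* reject, don't silently wrap */
                        c = buf[used++];
                        size += (unsigned long)(c & 0x7f) << shift;
                        shift += 7;
                }
                *sizep = size;
                return used;
        }

    The point is just that the decoder bails out once the shift would
    pass the width of "unsigned long", so an over-large object in an
    incoming pack yields an error instead of a wrapped size.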

-Peff
