Michael Haggerty <mhag...@alum.mit.edu> writes:

>       if (read_ref_full(refname, base, 1, &flag))
>               return -1;
>  
> -     if ((flag & REF_ISPACKED)) {
> +     /*
> +      * If the reference is packed, read its ref_entry from the
> +      * cache in the hope that we already know its peeled value.
> +      * We only try this optimization on packed references because
> +      * (a) forcing the filling of the loose reference cache could
> +      * be expensive and (b) loose references anyway usually do not
> +      * have REF_KNOWS_PEELED.
> +      */
> +     if (flag & REF_ISPACKED) {
>               struct ref_entry *r = get_packed_ref(refname);

This code makes the reader wonder what happens when a new loose ref
masks a stale packed ref, but the worry is unfounded: in such a case
read_ref_full() would not have given us REF_ISPACKED in the flag in
the first place.

But somehow the calling sequence looks like such a mistake waiting
to happen.  It would be much clearer if a function that returns a
"struct ref_entry *" were used instead of read_ref_full() above, and
we checked (r->flag & REF_ISPACKED) in the conditional, without a
separate get_packed_ref(refname) call.
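
Something along these lines, perhaps (resolve_ref_entry() is a
made-up name for such a lookup function, not something that exists
in refs.c today):

	struct ref_entry *r = resolve_ref_entry(refname);

	if (!r)
		return -1;
	if (r->flag & REF_ISPACKED) {
		/* a packed entry may already know its peeled value */
		if (peel_entry(r))
			return -1;
		hashcpy(sha1, r->u.value.peeled);
		return 0;
	}
	/* otherwise fall back to peeling the object ourselves */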

> -
> -             if (r && (r->flag & REF_KNOWS_PEELED)) {
> -                     if (is_null_sha1(r->u.value.peeled))
> -                         return -1;
> +             if (r) {
> +                     if (peel_entry(r))
> +                             return -1;
>                       hashcpy(sha1, r->u.value.peeled);
>                       return 0;
>               }