On Fri, Feb 24, 2017 at 06:36:00PM +0000, HW42 wrote:

> > +ifdef USE_SHA1DC
> > +   SHA1_HEADER = "sha1dc/sha1.h"
> > +   LIB_OBJS += sha1dc/sha1.o
> > +   LIB_OBJS += sha1dc/ubc_check.o
> > +else
> >  ifdef BLK_SHA1
> >     SHA1_HEADER = "block-sha1/sha1.h"
> >     LIB_OBJS += block-sha1/sha1.o
> > @@ -1403,6 +1412,7 @@ else
> >  endif
> >  endif
> >  endif
> > +endif
> 
> This sets SHA1_MAX_BLOCK_SIZE and the compiler flags for Apple
> CommonCrypto even if the user selects USE_SHA1DC. The same happens for
> BLK_SHA1. Is this intended?

No, it's not. I suspect that setting BLK_SHA1 has the same problem even in
the current code, then.
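
For what it's worth, one way to fix it would be to guard the CommonCrypto
setup on the SHA-1 selection, so it only kicks in when CommonCrypto actually
provides the implementation. A rough sketch (the 1GiB value is what I believe
the Makefile currently uses for CommonCrypto chunking; whatever else that
block sets would move inside the guard with it):

ifdef APPLE_COMMON_CRYPTO
ifndef USE_SHA1DC
ifndef BLK_SHA1
	# CommonCrypto needs SHA-1 writes chunked, but that only
	# matters when CommonCrypto provides the implementation
	SHA1_MAX_BLOCK_SIZE = 1024L*1024L*1024L
endif
endif
endif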

> > +void git_SHA1DCUpdate(SHA1_CTX *ctx, const void *vdata, unsigned long len)
> > +{
> > +   const char *data = vdata;
> > +   /* We expect an unsigned long, but sha1dc only takes an int */
> > +   while (len > INT_MAX) {
> > +           SHA1DCUpdate(ctx, data, INT_MAX);
> > +           data += INT_MAX;
> > +           len -= INT_MAX;
> > +   }
> > +   SHA1DCUpdate(ctx, data, len);
> > +}
> 
> I think you can simply change the len parameter from unsigned to
> size_t (or unsigned long) in SHA1DCUpdate().
> https://github.com/cr-marcstevens/sha1collisiondetection/pull/6

Yeah, I agree that is a cleaner solution. My focus was on changing the
(presumably tested) sha1dc code as little as possible.
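
For reference, once SHA1DCUpdate() takes a size_t (or unsigned long) len as
in that pull request, the wrapper above can collapse to a plain pass-through.
A sketch, assuming the widened upstream signature:

#include "sha1dc/sha1.h"

void git_SHA1DCUpdate(SHA1_CTX *ctx, const void *vdata, unsigned long len)
{
	/*
	 * With a size_t len, SHA1DCUpdate() can take the whole buffer
	 * in one call, so the INT_MAX chunking loop goes away.
	 */
	SHA1DCUpdate(ctx, vdata, len);
}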

-Peff
