Re: [PATCH 2/6] sha1-lookup: handle duplicate keys with GIT_USE_LOOKUP

2013-08-23 Thread Junio C Hamano
Jeff King p...@peff.net writes:

 Furthermore, we know that one of our endpoints must be
 the edge of the run of duplicates. For example, given this
 sequence:

  idx 0 1 2 3 4 5
  key A C C C C D

 If we are searching for B, we might hit the duplicate run
 at lo=1, hi=3 (e.g., by first mi=3, then mi=0). But we can
 never have lo > 1, because B < C. That is, if our key is
 less than the run, we know that lo is the edge, but we can
 say nothing of hi. Similarly, if our key is greater than
 the run, we know that hi is the edge, but we can say
 nothing of lo. But that is enough for us to return not
 only not found, but show the position at which we would
 insert the new item.

This is somewhat tricky and may deserve an in-code comment.

 diff --git a/sha1-lookup.c b/sha1-lookup.c
 index c4dc55d..614cbb6 100644
 --- a/sha1-lookup.c
 +++ b/sha1-lookup.c
 @@ -204,7 +204,30 @@ int sha1_entry_pos(const void *table,
* byte 0 thru (ofs-1) are the same between
* lo and hi; ofs is the first byte that is
* different.
 +  *
 +  * If ofs==20, then no bytes are different,
 +  * meaning we have entries with duplicate
 +  * keys. We know that we are in a solid run
 +  * of this entry (because the entries are
 +  * sorted, and our lo and hi are the same,
 +  * there can be nothing but this single key
 +  * in between). So we can stop the search.
 +  * Either one of these entries is it (and
 +  * we do not care which), or we do not have
 +  * it.
*/
 + if (ofs == 20) {
 + mi = lo;
 + mi_key = base + elem_size * mi + key_offset;
 + cmp = memcmp(mi_key, key, 20);

I think we already know that mi_key[0:ofs_0] and key[0:ofs_0] are
the same at this point and we do not have to compare the full 20 bytes
again, but this is done only once and the better readability of the
above trumps the micro-optimization possibility, I think.

 + if (!cmp)
 + return mi;
 + if (cmp < 0)
 + return -1 - hi;
 + else
 + return -1 - lo;
 + }
 +
   hiv = hi_key[ofs_0];
   if (ofs_0 < 19)
   hiv = (hiv << 8) | hi_key[ofs_0+1];
 diff --git a/t/lib-pack.sh b/t/lib-pack.sh
 new file mode 100644
 index 000..61c5d19
 --- /dev/null
 +++ b/t/lib-pack.sh
 @@ -0,0 +1,78 @@
 +#!/bin/sh
 +#
 +# Support routines for hand-crafting weird or malicious packs.
 +#
 +# You can make a complete pack like:
 +#
 +#   pack_header 2 >foo.pack &&
 +#   pack_obj e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 >>foo.pack &&
 +#   pack_obj e68fe8129b546b101aee9510c5328e7f21ca1d18 >>foo.pack &&
 +#   pack_trailer foo.pack
 +
 +# Print the big-endian 4-byte octal representation of $1
 +uint32_octal() {

micronit (style):

uint32_octal () {

 + n=$1
 + printf '\%o' $(($n / 16777216)); n=$((n % 16777216))
 + printf '\%o' $(($n / 65536)); n=$((n % 65536))
 + printf '\%o' $(($n /  256)); n=$((n %  256))
 + printf '\%o' $(($n   ));
 +}
 +
 +# Print the big-endian 4-byte binary representation of $1
 +uint32_binary() {
 + printf $(uint32_octal $1)
 +}
 +
 +# Print a pack header, version 2, for a pack with $1 objects
 +pack_header() {
 + printf 'PACK' &&
 + printf '\0\0\0\2' &&
 + uint32_binary $1
 +}
 +
 +# Print the pack data for object $1, as a delta against object $2 (or as a full
 +# object if $2 is missing or empty). The output is suitable for including
 +# directly in the packfile, and represents the entirety of the object entry.
 +# Doing this on the fly (especially picking your deltas) is quite tricky, so we
 +# have hardcoded some well-known objects. See the case statements below for the
 +# complete list.

Cute ;-) I like the idea of having this function with a right API in
place, and cheating by limiting its implementation to what is
necessary.
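
For illustration, those helpers combine into a pack that contains the
same entry twice (which is the case the series ultimately exercises)
roughly like this; the file name is arbitrary and the snippet is only a
sketch, not a quote from the patch:

  # build a two-object pack whose entries are both the empty blob,
  # which pack_obj already knows how to emit as a full object
  pack_header 2 >dup.pack &&
  pack_obj e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 >>dup.pack &&
  pack_obj e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 >>dup.pack &&
  pack_trailer dup.pack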

Thanks.


Re: [PATCH 2/6] sha1-lookup: handle duplicate keys with GIT_USE_LOOKUP

2013-08-23 Thread Jeff King
On Fri, Aug 23, 2013 at 09:41:57AM -0700, Junio C Hamano wrote:

 Jeff King p...@peff.net writes:
 
  Furthermore, we know that one of our endpoints must be
  the edge of the run of duplicates. For example, given this
  sequence:
 
   idx 0 1 2 3 4 5
   key A C C C C D
 
  If we are searching for B, we might hit the duplicate run
  at lo=1, hi=3 (e.g., by first mi=3, then mi=0). But we can
  never have lo > 1, because B < C. That is, if our key is
  less than the run, we know that lo is the edge, but we can
  say nothing of hi. Similarly, if our key is greater than
  the run, we know that hi is the edge, but we can say
  nothing of lo. But that is enough for us to return not
  only "not found", but show the position at which we would
  insert the new item.
 
 This is somewhat tricky and may deserve an in-code comment.

Do you want me to re-roll, pushing it down into the comment, or do you
want to mark it up yourself? I think there might be some value in the
latter as your re-writing of it as a comment may cross-check that my
logic is sound.

  +   if (ofs == 20) {
  +   mi = lo;
  +   mi_key = base + elem_size * mi + key_offset;
  +   cmp = memcmp(mi_key, key, 20);
 
 I think we already know that mi_key[0:ofs_0] and key[0:ofs_0] are
 the same at this point and we do not have to compare the full 20 bytes
 again, but this is done only once and the better readability of the
 above trumps the micro-optimization possibility, I think.

Yes, I had the same idea, and came to the same conclusion (though if
anybody did want to try it, note that we have just overwritten the old
ofs_0, so you would want to bump the new code up above that line).

  +uint32_octal() {
 
 micronit (style):
 
   uint32_octal () {

Hmph. I always forget which one we prefer, and we seem to have equal
numbers of both already. Again, want a re-roll or to mark it up
yourself?

  +# Print the pack data for object $1, as a delta against object $2 (or as a full
  +# object if $2 is missing or empty). The output is suitable for including
  +# directly in the packfile, and represents the entirety of the object entry.
  +# Doing this on the fly (especially picking your deltas) is quite tricky, so we
  +# have hardcoded some well-known objects. See the case statements below for the
  +# complete list.
 
 Cute ;-) I like the idea of having this function with a right API in
 place, and cheating by limiting its implementation to what is
 necessary.

Just for reference, the procedure I used to generate the base data is
reasonably straightforward:

  sha1=$(printf "%s" "$content" | git hash-object -w --stdin)
  echo $sha1 | git pack-objects --stdout >tmp.pack
  tail -c +13 tmp.pack >no-header.pack
  head -c -20 no-header.pack >no-trailer.pack
  od -b no-trailer.pack | grep ' ' | cut -d' ' -f2- | tr ' ' '\\'

Since we want binary, we can skip the od call at the end (I needed it
to convert to something readable to hand printf). But head -c is not
portable, nor is head with a negative count.
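
If portability ever became an issue, one assumed workaround (not what
the series does) is to measure the file first and let dd drop the
20-byte trailer:

  # strip the trailing 20-byte pack trailer without "head -c -20"
  size=$(wc -c <no-header.pack) &&
  dd if=no-header.pack of=no-trailer.pack bs=1 count=$((size - 20)) 2>/dev/null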

To find items in the same fanout, I just used for-loops to calculate the
sha1s of all 2-byte blobs. And that is why we have the odd magic \7\76
blob.
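
A rough reconstruction of that loop (an assumption, not a copy of the
actual script) looks something like:

  # hash every possible 2-byte blob; after sorting, blobs whose sha1s
  # share a leading byte (i.e. the same fanout bucket) end up adjacent
  for a in $(seq 0 255); do
    for b in $(seq 0 255); do
      oa=$(printf '%03o' "$a") &&
      ob=$(printf '%03o' "$b") &&
      sha1=$(printf "\\$oa\\$ob" | git hash-object --stdin) &&
      printf '%s \\%s\\%s\n' "$sha1" "$oa" "$ob"
    done
  done | sort >two-byte-blobs.txt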

Making the deltas was considerably less elegant, since we cannot provoke
pack-objects to pick arbitrary deltas (and it will not even try to delta
tiny objects, anyway, which would bloat our samples). I ended up with
the horrible patch below. We _could_ clean it up (error-checking? Who
needs it?) and make it a debug-and-testing-only option for pack-objects,
but I just didn't think the grossness was worth it. Still, it's probably
worth documenting here on the list in case somebody else ever needs to
add new samples to lib-pack.sh.

---
 builtin/pack-objects.c | 30 ++
 1 file changed, 30 insertions(+)

diff --git a/builtin/pack-objects.c b/builtin/pack-objects.c
index 8da2a66..e8937f5 100644
--- a/builtin/pack-objects.c
+++ b/builtin/pack-objects.c
@@ -2442,6 +2442,7 @@ int cmd_pack_objects(int argc, const char **argv, const char *prefix)
const char *rp_av[6];
int rp_ac = 0;
int rev_list_unpacked = 0, rev_list_all = 0, rev_list_reflog = 0;
+   int magic = 0;
struct option pack_objects_options[] = {
OPT_SET_INT('q', "quiet", &progress,
N_("do not show progress meter"), 0),
@@ -2505,6 +2506,7 @@ int cmd_pack_objects(int argc, const char **argv, const char *prefix)
N_("pack compression level")),
OPT_SET_INT(0, "keep-true-parents", &grafts_replace_parents,
N_("do not hide commits by grafts"), 0),
+   OPT_BOOL(0, "magic", &magic, "make deltas"),
OPT_END(),
};
 
@@ -2520,6 +2522,34 @@ int cmd_pack_objects(int argc, const char **argv, const char *prefix)
argc = parse_options(argc, argv, prefix, pack_objects_options,

Re: [PATCH 2/6] sha1-lookup: handle duplicate keys with GIT_USE_LOOKUP

2013-08-23 Thread Nicolas Pitre
On Fri, 23 Aug 2013, Jeff King wrote:

 Making the deltas was considerably less elegant, since we cannot provoke
 pack-objects to pick arbitrary deltas (and it will not even try to delta
 tiny objects, anyway, which would bloat our samples). I ended up with
 the horrible patch below. We _could_ clean it up (error-checking? Who
 needs it?) and make it a debug-and-testing-only option for pack-objects,
 but I just didn't think the grossness was worth it. Still, it's probably
 worth documenting here on the list in case somebody else ever needs to
 add new samples to lib-pack.sh.

Maybe using test-delta (from test-delta.c) would have helped here?

In any case, if something needs to be permanently added into the code to 
help in the creation of test objects, I think test-delta.c is a far 
better place than pack-objects.c.


Nicolas


Re: [PATCH 2/6] sha1-lookup: handle duplicate keys with GIT_USE_LOOKUP

2013-08-23 Thread Jeff King
On Fri, Aug 23, 2013 at 02:54:19PM -0400, Nicolas Pitre wrote:

 On Fri, 23 Aug 2013, Jeff King wrote:
 
  Making the deltas was considerably less elegant, since we cannot provoke
  pack-objects to pick arbitrary deltas (and it will not even try to delta
  tiny objects, anyway, which would bloat our samples). I ended up with
  the horrible patch below. We _could_ clean it up (error-checking? Who
  needs it?) and make it a debug-and-testing-only option for pack-objects,
  but I just didn't think the grossness was worth it. Still, it's probably
  worth documenting here on the list in case somebody else ever needs to
  add new samples to lib-pack.sh.
 
 Maybe using test-delta (from test-delta.c) would have helped here?
 
 In any case, if something needs to be permanently added into the code to 
 help in the creation of test objects, I think test-delta.c is a far 
 better place than pack-objects.c.

*forehead palm*

I didn't even know we had test-delta. Yes, that is obviously a way
better place (I initially looked at pack-objects because it has the
helpers to do compression and the type/size header properly).
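
For reference, a possible invocation (going by the usage string in
test-delta.c; the file names and content variables here are
placeholders) would be something like:

  # compute the raw delta that turns base.raw into target.raw; the result
  # could then be wrapped by hand into a delta entry for lib-pack.sh
  printf '%s' "$base_content" >base.raw &&
  printf '%s' "$target_content" >target.raw &&
  test-delta -d base.raw target.raw target.delta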

-Peff


Re: [PATCH 2/6] sha1-lookup: handle duplicate keys with GIT_USE_LOOKUP

2013-08-23 Thread Johannes Sixt

On 23.08.2013 01:14, Jeff King wrote:

+++ b/t/t5308-pack-detect-duplicates.sh
@@ -0,0 +1,73 @@
+#!/bin/sh
+
+test_description='handling of duplicate objects in incoming packfiles'
+. ./test-lib.sh
+. ../lib-pack.sh


This should be

. "$TEST_DIRECTORY"/lib-pack.sh

to support running tests with --root (also in patch 3/6).
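
That is, the top of the test script would then read roughly:

  test_description='handling of duplicate objects in incoming packfiles'
  . ./test-lib.sh
  . "$TEST_DIRECTORY"/lib-pack.sh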

-- Hannes



Re: [PATCH 2/6] sha1-lookup: handle duplicate keys with GIT_USE_LOOKUP

2013-08-23 Thread Jeff King
On Fri, Aug 23, 2013 at 09:41:39PM +0200, Johannes Sixt wrote:

 On 23.08.2013 01:14, Jeff King wrote:
 +++ b/t/t5308-pack-detect-duplicates.sh
 @@ -0,0 +1,73 @@
 +#!/bin/sh
 +
 +test_description='handling of duplicate objects in incoming packfiles'
 +. ./test-lib.sh
 +. ../lib-pack.sh
 
 This should be
 
  . "$TEST_DIRECTORY"/lib-pack.sh
 
 to support running tests with --root (also in patch 3/6).

Doh, you would think that I would remember that, as the one who
introduced --root in the first place.

Will fix. Thanks for noticing.

-Peff


[PATCH 2/6] sha1-lookup: handle duplicate keys with GIT_USE_LOOKUP

2013-08-23 Thread Jeff King
The sha1_entry_pos function tries to be smart about
selecting the middle of a range for its binary search by
looking at the value differences between the lo and hi
constraints. However, it is unable to cope with entries with
duplicate keys in the sorted list.

We may hit a point in the search where both our lo and
hi point to the same key. In this case, the range of
values between our endpoints is 0, and trying to scale the
difference between our key and the endpoints over that range
is undefined (i.e., divide by zero). The current code
catches this with an assert(lov < hiv).

Moreover, after seeing that the first 20 bytes of the key are
the same, we will try to establish a value from the 21st
byte, which is nonsensical.

Instead, we can detect the case that we are in a run of
duplicates, and simply do a final comparison against any one
of them (since they are all the same, it does not matter
which). If the keys match, we have found our entry (or one
of them, anyway).  If not, then we know that we do not need
to look further, as we must be in a run of the duplicate
key.

Signed-off-by: Jeff King p...@peff.net
Acked-by: Nicolas Pitre n...@fluxnic.net
---
 sha1-lookup.c | 47 +++
 t/lib-pack.sh | 78 +++
 t/t5308-pack-detect-duplicates.sh | 73 
 3 files changed, 198 insertions(+)
 create mode 100644 t/lib-pack.sh
 create mode 100755 t/t5308-pack-detect-duplicates.sh

diff --git a/sha1-lookup.c b/sha1-lookup.c
index c4dc55d..2dd8515 100644
--- a/sha1-lookup.c
+++ b/sha1-lookup.c
@@ -204,7 +204,54 @@ int sha1_entry_pos(const void *table,
 * byte 0 thru (ofs-1) are the same between
 * lo and hi; ofs is the first byte that is
 * different.
+*
+* If ofs==20, then no bytes are different,
+* meaning we have entries with duplicate
+* keys. We know that we are in a solid run
+* of this entry (because the entries are
+* sorted, and our lo and hi are the same,
+* there can be nothing but this single key
+* in between). So we can stop the search.
+* Either one of these entries is it (and
+* we do not care which), or we do not have
+* it.
+*
+* Furthermore, we know that one of our
+* endpoints must be the edge of the run of
+* duplicates. For example, given this
+* sequence:
+*
+* idx 0 1 2 3 4 5
+* key A C C C C D
+*
+* If we are searching for B, we might
+* hit the duplicate run at lo=1, hi=3
+* (e.g., by first mi=3, then mi=0). But we
+* can never have lo > 1, because B < C.
+* That is, if our key is less than the
+* run, we know that lo is the edge, but
+* we can say nothing of hi. Similarly,
+* if our key is greater than the run, we
+* know that hi is the edge, but we can
+* say nothing of lo.
+*
+* Therefore if we do not find it, we also
+* know where it would go if it did exist:
+* just on the far side of the edge that we
+* know about.
 */
+   if (ofs == 20) {
+   mi = lo;
+   mi_key = base + elem_size * mi + key_offset;
+   cmp = memcmp(mi_key, key, 20);
+   if (!cmp)
+   return mi;
+   if (cmp < 0)
+   return -1 - hi;
+   else
+   return -1 - lo;
+   }
+
hiv = hi_key[ofs_0];
if (ofs_0 < 19)
hiv = (hiv << 8) | hi_key[ofs_0+1];
diff --git a/t/lib-pack.sh b/t/lib-pack.sh
new file mode 100644
index 000..fecd5a0
--- /dev/null
+++ b/t/lib-pack.sh
@@ -0,0 +1,78 @@
+#!/bin/sh
+#
+# Support routines for hand-crafting weird or malicious packs.
+#
+# You can make a complete pack like:
+#
+#   pack_header 2 >foo.pack &&
+#   pack_obj e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 >>foo.pack &&
+#   pack_obj e68fe8129b546b101aee9510c5328e7f21ca1d18 >>foo.pack &&
+#   pack_trailer foo.pack
+
+# Print 

[PATCH 2/6] sha1-lookup: handle duplicate keys with GIT_USE_LOOKUP

2013-08-22 Thread Jeff King
The sha1_entry_pos function tries to be smart about
selecting the middle of a range for its binary search by
looking at the value differences between the lo and hi
constraints. However, it is unable to cope with entries with
duplicate keys in the sorted list.

We may hit a point in the search where both our lo and
hi point to the same key. In this case, the range of
values between our endpoints is 0, and trying to scale the
difference between our key and the endpoints over that range
is undefined (i.e., divide by zero). The current code
catches this with an assert(lov < hiv).

Moreover, after seeing that the first 20 bytes of the key are
the same, we will try to establish a value from the 21st
byte, which is nonsensical.

Instead, we can detect the case that we are in a run of
duplicates, and simply do a final comparison against any one
of them (since they are all the same, it does not matter
which). If the keys match, we have found our entry (or one
of them, anyway).  If not, then we know that we do not need
to look further, as we must be in a run of the duplicate
key.

Furthermore, we know that one of our endpoints must be
the edge of the run of duplicates. For example, given this
sequence:

 idx 0 1 2 3 4 5
 key A C C C C D

If we are searching for B, we might hit the duplicate run
at lo=1, hi=3 (e.g., by first mi=3, then mi=0). But we can
never have lo > 1, because B < C. That is, if our key is
less than the run, we know that lo is the edge, but we can
say nothing of hi. Similarly, if our key is greater than
the run, we know that hi is the edge, but we can say
nothing of lo. But that is enough for us to return not
only "not found", but show the position at which we would
insert the new item.

Signed-off-by: Jeff King p...@peff.net
---
 sha1-lookup.c | 23 
 t/lib-pack.sh | 78 +++
 t/t5308-pack-detect-duplicates.sh | 73 
 3 files changed, 174 insertions(+)
 create mode 100644 t/lib-pack.sh
 create mode 100755 t/t5308-pack-detect-duplicates.sh

diff --git a/sha1-lookup.c b/sha1-lookup.c
index c4dc55d..614cbb6 100644
--- a/sha1-lookup.c
+++ b/sha1-lookup.c
@@ -204,7 +204,30 @@ int sha1_entry_pos(const void *table,
 * byte 0 thru (ofs-1) are the same between
 * lo and hi; ofs is the first byte that is
 * different.
+*
+* If ofs==20, then no bytes are different,
+* meaning we have entries with duplicate
+* keys. We know that we are in a solid run
+* of this entry (because the entries are
+* sorted, and our lo and hi are the same,
+* there can be nothing but this single key
+* in between). So we can stop the search.
+* Either one of these entries is it (and
+* we do not care which), or we do not have
+* it.
 */
+   if (ofs == 20) {
+   mi = lo;
+   mi_key = base + elem_size * mi + key_offset;
+   cmp = memcmp(mi_key, key, 20);
+   if (!cmp)
+   return mi;
+   if (cmp < 0)
+   return -1 - hi;
+   else
+   return -1 - lo;
+   }
+
hiv = hi_key[ofs_0];
if (ofs_0 < 19)
hiv = (hiv << 8) | hi_key[ofs_0+1];
diff --git a/t/lib-pack.sh b/t/lib-pack.sh
new file mode 100644
index 000..61c5d19
--- /dev/null
+++ b/t/lib-pack.sh
@@ -0,0 +1,78 @@
+#!/bin/sh
+#
+# Support routines for hand-crafting weird or malicious packs.
+#
+# You can make a complete pack like:
+#
+#   pack_header 2 >foo.pack &&
+#   pack_obj e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 >>foo.pack &&
+#   pack_obj e68fe8129b546b101aee9510c5328e7f21ca1d18 >>foo.pack &&
+#   pack_trailer foo.pack
+
+# Print the big-endian 4-byte octal representation of $1
+uint32_octal() {
+   n=$1
+   printf '\%o' $(($n / 16777216)); n=$((n % 16777216))
+   printf '\%o' $(($n / 65536)); n=$((n % 65536))
+   printf '\%o' $(($n /  256)); n=$((n %  256))
+   printf '\%o' $(($n   ));
+}
+
+# Print the big-endian 4-byte binary representation of $1
+uint32_binary() {
+   printf $(uint32_octal $1)
+}
+
+# Print a pack header, version 2, for a pack with $1 objects
+pack_header() {
+   printf 'PACK' &&
+   printf '\0\0\0\2' &&
+   uint32_binary $1
+}
+
+# Print the pack data for object $1, as a delta against object $2 (or as a full
+# object if $2 is missing or empty). The output is