Re: [PATCH 10/10] pack-revindex: radix-sort the revindex

2013-07-11 Thread Jeff King
On Wed, Jul 10, 2013 at 06:47:49PM +0530, Ramkumar Ramachandra wrote:

  For a 64-bit off_t, using 16-bit digits gives us k=4.
 
 Wait, isn't off_t a signed data type?  Did you account for that in
 your algorithm?

It is signed, but the values we are storing in the revindex are all
positive file offsets. Right-shifting a signed type that holds a
non-negative value is explicitly allowed in C.
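
As a minimal stand-alone illustration (not code from the patch; the
offset value is made up), extracting 16-bit digits from a non-negative
off_t behaves as you would expect:

  #include <assert.h>
  #include <sys/types.h>

  int main(void)
  {
          /* a positive file offset, as we would see in the revindex */
          off_t offset = 0x12345678;

          /* right-shifting a non-negative signed value is well-defined */
          unsigned low  = offset & 0xffff;
          unsigned high = (offset >> 16) & 0xffff;

          assert(low == 0x5678);
          assert(high == 0x1234);
          return 0;
  }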

  -static int cmp_offset(const void *a_, const void *b_)
  +/*
  + * This is a least-significant-digit radix sort.
  + */
 
 Any particular reason for choosing LSD, and not MSD?

Simplicity. An MSD implementation should have the same algorithmic
complexity, and in theory one can do MSD in-place. I'm happy enough with
the speedup here, but if you want to take a stab at beating my times
with MSD, please feel free.

The other usual downside of MSD is that it is typically not stable,
but we don't care about that here. We know that our sort keys are
unique.

  +#define DIGIT_SIZE (16)
  +#define BUCKETS (1 << DIGIT_SIZE)
 
 Okay, NUMBER_OF_BUCKETS = 2^RADIX, and you choose a hex radix.  Is
 off_t guaranteed to be fixed-length though?  I thought only the ones
 in stdint.h were guaranteed to be fixed-length?

I'm not sure what you mean by fixed-length. If you mean does it have the
same size on every platform, then no. It will typically be 32-bit on
platforms without largefile support, and 64-bit elsewhere. But it
shouldn't matter. We first sort the entries by the lower 16 bits, then,
if we have more bits, by the next 16 bits, and so on. We quit when the
maximum value to sort (which we know ahead of time from the size of the
packfile) is smaller than the 16-bit digit we are currently on. So we do
not need to know the exact size of off_t, only the maximum value in our
list (which must, by definition, fit in an off_t).
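
To make that concrete, here is a rough sketch (not the patch itself; the
function name and values are mine) of how the number of passes falls out
of the maximum offset rather than out of sizeof(off_t):

  #include <stdio.h>

  #define DIGIT_SIZE 16

  /*
   * How many 16-bit radix passes do we need for a given maximum offset?
   * We stop as soon as shifting the maximum right by `bits` leaves zero.
   */
  static int radix_passes(unsigned long long max)
  {
          int bits = 0, passes = 0;

          while (max >> bits) {
                  passes++;
                  bits += DIGIT_SIZE;
          }
          return passes;
  }

  int main(void)
  {
          printf("%d\n", radix_passes(3ULL << 30));  /* ~3G pack: 2 passes */
          printf("%d\n", radix_passes(1ULL << 47));  /* huge pack: 3 passes */
          return 0;
  }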

  +   /*
  +* We want to know the bucket that a[i] will go into when we are using
  +* the digit that is N bits from the (least significant) end.
  +*/
  +#define BUCKET_FOR(a, i, bits) (((a)[(i)].offset >> (bits)) & (BUCKETS-1))
 
 Ouch!  This is unreadable.  Just write an inline function instead?  A
 % would've been easier on the eyes, but you chose base-16.

I specifically avoided an inline function because they are subject to
compiler settings. This isn't just "it would be a bit faster if this got
inlined, and OK otherwise"; it is "this would be horribly slow if not
inlined".

I'm also not sure that

  static inline unsigned bucket_for(const struct revindex *a,
                                    unsigned i,
                                    unsigned bits)
  {
          return a[i].offset >> bits & (BUCKETS-1);
  }

is actually any more readable.

I'm not sure what you mean by base-16. No matter the radix digit size,
as long as it is an integral number of bits, we can mask it off, which
is more efficient than modulo. A good compiler should see that the
modulus is a constant power of two and convert it to a bit-mask anyway,
but I'm not sure I agree that modular arithmetic is more readable here.
This is fundamentally a bit-twiddling operation, as we are shifting and
masking.

I tried to explain it in the comment; suggestions to improve that are
welcome.
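
For what it's worth, here is a tiny stand-alone check (illustration
only, not part of the patch) that the mask and the modulo pick out the
same digit when the bucket count is a power of two:

  #include <assert.h>

  #define DIGIT_SIZE 16
  #define BUCKETS (1 << DIGIT_SIZE)

  int main(void)
  {
          unsigned long long offset = 0xdeadbeefcafeULL;
          int bits;

          /* masking with BUCKETS-1 and reducing modulo BUCKETS agree */
          for (bits = 0; bits < 48; bits += DIGIT_SIZE)
                  assert(((offset >> bits) & (BUCKETS - 1)) ==
                         ((offset >> bits) % BUCKETS));
          return 0;
  }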

  +   /*
  +* We need O(n) temporary storage, so we sort back and forth between
  +* the real array and our tmp storage. To keep them straight, we always
  +* sort from a into buckets in b.
  +*/
  +   struct revindex_entry *tmp = xcalloc(n, sizeof(*tmp));
 
 Shouldn't this be sizeof (struct revindex_entry), since tmp hasn't
 been declared yet?

No, the variable is already declared by the time its initializer is
evaluated; it simply has no value yet. Despite its syntax, sizeof() is
not a function and does not care about the state of the variable, only
its type.
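
A stand-alone illustration of that point (using a simplified stand-in
struct, not the real revindex_entry):

  #include <stdio.h>
  #include <stdlib.h>

  struct entry { long long offset; unsigned nr; }; /* simplified stand-in */

  int main(void)
  {
          /*
           * sizeof(*tmp) is computed from tmp's type alone; the fact
           * that tmp has no value yet at this point does not matter.
           */
          struct entry *tmp = malloc(10 * sizeof(*tmp));

          if (!tmp)
                  return 1;
          printf("each entry is %zu bytes\n", sizeof(*tmp));
          free(tmp);
          return 0;
  }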

 Also, s/n/revindex_nr/, and something more appropriate for tmp?

What name would you suggest that would be more appropriate for tmp?

  +   int bits = 0;
  +   unsigned *pos = xmalloc(BUCKETS * sizeof(*pos));
 
 sizeof(unsigned int), for clarity, if not anything else.

I disagree; in general, I prefer sizeof(*var) over sizeof(type), because
the latter repeats the type, and there is no compile-time check that you
have gotten it right.

In the initializer it is less important, because the type is right
there. But when you are later doing:

  memset(pos, 0, BUCKETS * sizeof(*pos));

this is much more robust. If somebody changes the type of pos, the
memset line does not need to be changed. If you used sizeof(unsigned),
the code is now buggy (and the compiler cannot notice).
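
A small sketch of the failure mode being avoided (hypothetical code, not
from the patch):

  #include <stdlib.h>
  #include <string.h>

  #define BUCKETS (1 << 16)

  int main(void)
  {
          /*
           * If this type later changes (say, to size_t), both lines
           * below stay correct because they refer to *pos rather than
           * to a hard-coded type name.
           */
          unsigned *pos = malloc(BUCKETS * sizeof(*pos));

          if (!pos)
                  return 1;
          memset(pos, 0, BUCKETS * sizeof(*pos));
          free(pos);
          return 0;
  }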

 You picked malloc over calloc here, because you didn't want to incur
 the extra cost of zero-initializing the memory?

Yes. We have to zero-initialize in each loop, so there is no point
spending the extra effort on calloc.

We could also xcalloc inside each loop iteration, but since we need the
same-size allocation each time, I hoisted the malloc out of the loop.

 Also, pos is the actual buckets array, I presume (hence unsigned,

Re: [PATCH 10/10] pack-revindex: radix-sort the revindex

2013-07-11 Thread Jeff King
On Wed, Jul 10, 2013 at 10:10:16AM -0700, Brandon Casey wrote:

  On the linux.git repo, with about 3M objects to sort, this
  yields a 400% speedup. Here are the best-of-five numbers for
  running echo HEAD | git cat-file --batch-disk-size, which
  is dominated by time spent building the pack revindex:
 
            before     after
    real    0m0.834s   0m0.204s
    user    0m0.788s   0m0.164s
    sys     0m0.040s   0m0.036s
 
  On a smaller repo, the radix sort would not be
  as impressive (and could even be worse), as we are trading
  the log(n) factor for the k=4 of the radix sort. However,
  even on git.git, with 173K objects, it shows some
  improvement:
 
            before     after
    real    0m0.046s   0m0.017s
    user    0m0.036s   0m0.012s
    sys     0m0.008s   0m0.000s
 
 k should only be 2 for git.git.  I haven't packed in a while, but I
 think it should all fit within 4G.  :)  The pathological case would be
 a pack file with very few very very large objects, large enough to
 push the pack size over the 2^48 threshold so we'd have to do all four
 radixes.

Yeah, even linux.git fits into k=2. And that does more or less explain
the numbers in both cases.

For git.git, with 173K objects, log(n) is ~18, so regular sort is 18n.
With a radix sort of k=2, which has a constant factor of 2 (you can see
by looking at the code that we go through the list twice per radix), we
have 4n. So there should be a 4.5x speedup. We don't quite get that,
which is probably due to the extra bookkeeping on the buckets.

For linux.git, with 3M objects, log(n) is ~22, so the speedup we hope
for is 5.5x. We end up with 4x.
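
To spell out the "twice per radix" claim with plain integer keys, here
is a compact stand-alone sketch of one such pass (illustration only, not
the patch; the real code works on struct revindex_entry via BUCKET_FOR):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  #define DIGIT_SIZE 16
  #define BUCKETS (1 << DIGIT_SIZE)

  /*
   * One counting-sort pass on the 16-bit digit starting at `bits`. The
   * two O(n) traversals are the counting loop and the placement loop;
   * the prefix-sum loop is O(BUCKETS).
   */
  static void radix_pass(const unsigned long long *a, unsigned long long *b,
                         unsigned n, int bits, unsigned *pos)
  {
          unsigned i;

          memset(pos, 0, BUCKETS * sizeof(*pos));
          for (i = 0; i < n; i++)                 /* traversal 1: count */
                  pos[(a[i] >> bits) & (BUCKETS - 1)]++;
          for (i = 1; i < BUCKETS; i++)           /* cumulative offsets */
                  pos[i] += pos[i - 1];
          for (i = n; i > 0; i--)                 /* traversal 2: place */
                  b[--pos[(a[i - 1] >> bits) & (BUCKETS - 1)]] = a[i - 1];
  }

  int main(void)
  {
          unsigned long long keys[] = { 70000, 3, 65536, 42, 131071 };
          unsigned long long out[5];
          unsigned *pos = malloc(BUCKETS * sizeof(*pos));
          unsigned i;

          if (!pos)
                  return 1;
          radix_pass(keys, out, 5, 0, pos);  /* sort by the low 16 bits only */
          for (i = 0; i < 5; i++)
                  printf("%llu\n", out[i]);
          free(pos);
          return 0;
  }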

 It's probably worth mentioning here and/or in the code that k is
 dependent on the pack file size and that we can jump out early for
 small pack files.  That's my favorite part of this code by the way. :)

Yeah, I agree it is probably worth mentioning along with the numbers; it
is where half of our speedup is coming from. I think the "max >> bits"
loop condition deserves to be commented, too. I'll add that.

Also note that my commit message still refers to --batch-disk-size
which does not exist anymore. :) I didn't update the timings in the
commit message for my re-roll, but I did confirm that they are the same.

  +   /*
  +* We need O(n) temporary storage, so we sort back and forth between
  +* the real array and our tmp storage. To keep them straight, we always
  +* sort from a into buckets in b.
  +*/
  +   struct revindex_entry *tmp = xcalloc(n, sizeof(*tmp));
 
 Didn't notice it the first time I read this, but do we really need
 calloc here?  Or will malloc do?

No, a malloc should be fine. I doubt it matters much, but there's no
reason not to go the cheap route.

  +   struct revindex_entry *a = entries, *b = tmp;
  +   int bits = 0;
  +   unsigned *pos = xmalloc(BUCKETS * sizeof(*pos));
  +
  +   while (max >> bits) {
  +   struct revindex_entry *swap;
  +   int i;
 
 You forgot to make i unsigned.  See below too...

Oops. Thanks for catching.

  +   /*
  +* Now we can drop the elements into their correct buckets (in
  +* our temporary array).  We iterate the pos counter backwards
  +* to avoid using an extra index to count up. And since we are
  +* going backwards there, we must also go backwards through the
  +* array itself, to keep the sort stable.
  +*/
  +   for (i = n - 1; i >= 0; i--)
  +   b[--pos[BUCKET_FOR(a, i, bits)]] = a[i];
 
 ...which is why the above loop still works.

Since we are iterating by ones, I guess I can just compare to UINT_MAX.
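
For example, a sketch of the counting-down loop with an unsigned index
(illustration only): decrementing past zero wraps to UINT_MAX, so that
becomes the terminating value.

  #include <limits.h>
  #include <stdio.h>

  int main(void)
  {
          unsigned n = 5, i;

          /* visits n-1 down to 0; then i-- wraps to UINT_MAX and we stop */
          for (i = n - 1; i != UINT_MAX; i--)
                  printf("%u\n", i);
          return 0;
  }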

-Peff


[PATCH 10/10] pack-revindex: radix-sort the revindex

2013-07-10 Thread Jeff King
The pack revindex stores the offsets of the objects in the
pack in sorted order, allowing us to easily find the on-disk
size of each object. To compute it, we populate an array
with the offsets from the sha1-sorted idx file, and then use
qsort to order it by offsets.

That does O(n log n) offset comparisons, and profiling shows
that we spend most of our time in cmp_offset. However, since
we are sorting on a simple off_t, we can use numeric sorts
that perform better. A radix sort can run in O(k*n), where k
is the number of digits in our number. For a 64-bit off_t,
using 16-bit digits gives us k=4.

On the linux.git repo, with about 3M objects to sort, this
yields a 400% speedup. Here are the best-of-five numbers for
running echo HEAD | git cat-file --batch-disk-size, which
is dominated by time spent building the pack revindex:

          before     after
  real    0m0.834s   0m0.204s
  user    0m0.788s   0m0.164s
  sys     0m0.040s   0m0.036s

On a smaller repo, the radix sort would not be
as impressive (and could even be worse), as we are trading
the log(n) factor for the k=4 of the radix sort. However,
even on git.git, with 173K objects, it shows some
improvement:

          before     after
  real    0m0.046s   0m0.017s
  user    0m0.036s   0m0.012s
  sys     0m0.008s   0m0.000s

Signed-off-by: Jeff King p...@peff.net
---
I changed a few things from the original, including:

  1. We take an unsigned number of objects to match the fix in the
 last patch.

  2. The 16-bit digit size is factored out to a single place, which
 avoids magic numbers and repeating ourselves.

  3. The digits variable is renamed to bits, which is more accurate.

  4. The outer loop condition uses the simpler while (max >> bits).

  5. We use memcpy instead of an open-coded loop to copy the whole array
 at the end. The individual bucket-assignment is still done by
 struct assignment. I haven't timed if memcpy would make a
 difference there.

  6. The 64K*sizeof(int) pos array is now heap-allocated, in case
 there are platforms with a small stack.

I re-ran my timings to make sure none of the above impacted them; it
turned out the same.

 pack-revindex.c | 84 +
 1 file changed, 79 insertions(+), 5 deletions(-)

diff --git a/pack-revindex.c b/pack-revindex.c
index 1aa9754..9365bc2 100644
--- a/pack-revindex.c
+++ b/pack-revindex.c
@@ -59,11 +59,85 @@ static int cmp_offset(const void *a_, const void *b_)
/* revindex elements are lazily initialized */
 }
 
-static int cmp_offset(const void *a_, const void *b_)
+/*
+ * This is a least-significant-digit radix sort.
+ */
+static void sort_revindex(struct revindex_entry *entries, unsigned n, off_t max)
 {
-   const struct revindex_entry *a = a_;
-   const struct revindex_entry *b = b_;
-   return (a->offset < b->offset) ? -1 : (a->offset > b->offset) ? 1 : 0;
+   /*
+* We use a digit size of 16 bits. That keeps our memory
+* usage reasonable, and we can generally (for a 4G or smaller
+* packfile) quit after two rounds of radix-sorting.
+*/
+#define DIGIT_SIZE (16)
+#define BUCKETS (1 << DIGIT_SIZE)
+   /*
+* We want to know the bucket that a[i] will go into when we are using
+* the digit that is N bits from the (least significant) end.
+*/
+#define BUCKET_FOR(a, i, bits) (((a)[(i)].offset >> (bits)) & (BUCKETS-1))
+
+   /*
+* We need O(n) temporary storage, so we sort back and forth between
+* the real array and our tmp storage. To keep them straight, we always
+* sort from a into buckets in b.
+*/
+   struct revindex_entry *tmp = xcalloc(n, sizeof(*tmp));
+   struct revindex_entry *a = entries, *b = tmp;
+   int bits = 0;
+   unsigned *pos = xmalloc(BUCKETS * sizeof(*pos));
+
+   while (max >> bits) {
+   struct revindex_entry *swap;
+   int i;
+
+   memset(pos, 0, BUCKETS * sizeof(*pos));
+
+   /*
+* We want pos[i] to store the index of the last element that
+* will go in bucket i (actually one past the last element).
+* To do this, we first count the items that will go in each
+* bucket, which gives us a relative offset from the last
+* bucket. We can then cumulatively add the index from the
+* previous bucket to get the true index.
+*/
+   for (i = 0; i < n; i++)
+   pos[BUCKET_FOR(a, i, bits)]++;
+   for (i = 1; i < BUCKETS; i++)
+   pos[i] += pos[i-1];
+
+   /*
+* Now we can drop the elements into their correct buckets (in
+* our temporary array).  We iterate the pos counter backwards
+* to avoid using an extra index to count up. And since we are
+* going backwards there, we must also go 

Re: [PATCH 10/10] pack-revindex: radix-sort the revindex

2013-07-10 Thread Jeff King
On Wed, Jul 10, 2013 at 07:55:57AM -0400, Jeff King wrote:

   5. We use memcpy instead of an open-coded loop to copy the whole array
  at the end. The individual bucket-assignment is still done by
  struct assignment. I haven't timed if memcpy would make a
  difference there.

I just timed this, and I can't measure any difference. I think the
struct assignment is the more readable option, and I do not think any
compilers should have trouble with it. But if they do, we can switch it
for a memcpy.

-Peff


Re: [PATCH 10/10] pack-revindex: radix-sort the revindex

2013-07-10 Thread Ramkumar Ramachandra
Jeff King wrote:
 That does O(n log n) offset comparisons, and profiling shows
 that we spend most of our time in cmp_offset. However, since
 we are sorting on a simple off_t, we can use numeric sorts
 that perform better. A radix sort can run in O(k*n), where k
 is the number of digits in our number. For a 64-bit off_t,
 using 16-bit digits gives us k=4.

Wait, isn't off_t a signed data type?  Did you account for that in
your algorithm?

 On the linux.git repo, with about 3M objects to sort, this
 yields a 400% speedup. Here are the best-of-five numbers for
 running echo HEAD | git cat-file --batch-disk-size, which
 is dominated by time spent building the pack revindex:

Okay.

 diff --git a/pack-revindex.c b/pack-revindex.c
 index 1aa9754..9365bc2 100644
 --- a/pack-revindex.c
 +++ b/pack-revindex.c
 @@ -59,11 +59,85 @@ static int cmp_offset(const void *a_, const void *b_)
 /* revindex elements are lazily initialized */
  }

 -static int cmp_offset(const void *a_, const void *b_)
 +/*
 + * This is a least-significant-digit radix sort.
 + */

Any particular reason for choosing LSD, and not MSD?

 +#define DIGIT_SIZE (16)
 +#define BUCKETS (1 << DIGIT_SIZE)

Okay, NUMBER_OF_BUCKETS = 2^RADIX, and you choose a hex radix.  Is
off_t guaranteed to be fixed-length though?  I thought only the ones
in stdint.h were guaranteed to be fixed-length?

 +   /*
 +* We want to know the bucket that a[i] will go into when we are using
 +* the digit that is N bits from the (least significant) end.
 +*/
 +#define BUCKET_FOR(a, i, bits) (((a)[(i)].offset >> (bits)) & (BUCKETS-1))

Ouch!  This is unreadable.  Just write an inline function instead?  A
% would've been easier on the eyes, but you chose base-16.

 +   /*
 +* We need O(n) temporary storage, so we sort back and forth between
 +* the real array and our tmp storage. To keep them straight, we always
 +* sort from a into buckets in b.
 +*/
 +   struct revindex_entry *tmp = xcalloc(n, sizeof(*tmp));

Shouldn't this be sizeof (struct revindex_entry), since tmp hasn't
been declared yet?  Also, s/n/revindex_nr/, and something more
appropriate for tmp?

 +   struct revindex_entry *a = entries, *b = tmp;

It's starting to look like you have something against descriptive names ;)

 +   int bits = 0;
 +   unsigned *pos = xmalloc(BUCKETS * sizeof(*pos));

sizeof(unsigned int), for clarity, if not anything else.  You picked
malloc over calloc here, because you didn't want to incur the extra
cost of zero-initializing the memory?  Also, pos is the actual buckets
array, I presume (hence unsigned, because there can't be a negative
number of keys in any bucket)?

 +   while (max >> bits) {

No clue what max is.  Looked at the caller and figured out that it's
the pack-size, although I'm still clueless about why it's appearing
here.

 +   struct revindex_entry *swap;
 +   int i;
 +
 +   memset(pos, 0, BUCKETS * sizeof(*pos));

Ah, so that's why you used malloc there.  Wait, shouldn't this be
memset(pos, 0, sizeof(*pos))?

 +   for (i = 0; i < n; i++)
 +   pos[BUCKET_FOR(a, i, bits)]++;

Okay, so you know how many numbers are in each bucket.

 +   for (i = 1; i < BUCKETS; i++)
 +   pos[i] += pos[i-1];

Cumulative sums; right.

 +   for (i = n - 1; i >= 0; i--)
 +   b[--pos[BUCKET_FOR(a, i, bits)]] = a[i];

Classical queue.  You could've gone for something more complex, but I
don't think it would have been worth the extra complexity.

 +   swap = a;
 +   a = b;
 +   b = swap;

Wait a minute: why don't you just throw away b?  You're going to
rebuild the queue in the next iteration anyway, no?  a is what is
being sorted.

 +   /* And bump our bits for the next round. */
 +   bits += DIGIT_SIZE;

I'd have gone for a nice for-loop.

 +   /*
 +* If we ended with our data in the original array, great. If not,
 +* we have to move it back from the temporary storage.
 +*/
 +   if (a != entries)
 +   memcpy(entries, tmp, n * sizeof(*entries));

How could a be different from entries?  It has no memory allocated for
itself, no?  Why did you even create a, and not directly operate on
entries?

 +   free(tmp);
 +   free(pos);

Overall, I found it quite confusing :(

 +#undef BUCKET_FOR

Why not DIGIT_SIZE and BUCKETS too, while at it?


Re: [PATCH 10/10] pack-revindex: radix-sort the revindex

2013-07-10 Thread Brandon Casey
On Wed, Jul 10, 2013 at 4:55 AM, Jeff King p...@peff.net wrote:
 The pack revindex stores the offsets of the objects in the
 pack in sorted order, allowing us to easily find the on-disk
 size of each object. To compute it, we populate an array
 with the offsets from the sha1-sorted idx file, and then use
 qsort to order it by offsets.

 That does O(n log n) offset comparisons, and profiling shows
 that we spend most of our time in cmp_offset. However, since
 we are sorting on a simple off_t, we can use numeric sorts
 that perform better. A radix sort can run in O(k*n), where k
 is the number of digits in our number. For a 64-bit off_t,
 using 16-bit digits gives us k=4.

 On the linux.git repo, with about 3M objects to sort, this
 yields a 400% speedup. Here are the best-of-five numbers for
 running echo HEAD | git cat-file --batch-disk-size, which
 is dominated by time spent building the pack revindex:

            before     after
    real    0m0.834s   0m0.204s
    user    0m0.788s   0m0.164s
    sys     0m0.040s   0m0.036s

 On a smaller repo, the radix sort would not be
 as impressive (and could even be worse), as we are trading
 the log(n) factor for the k=4 of the radix sort. However,
 even on git.git, with 173K objects, it shows some
 improvement:

            before     after
    real    0m0.046s   0m0.017s
    user    0m0.036s   0m0.012s
    sys     0m0.008s   0m0.000s

k should only be 2 for git.git.  I haven't packed in a while, but I
think it should all fit within 4G.  :)  The pathological case would be
a pack file with very few very very large objects, large enough to
push the pack size over the 2^48 threshold so we'd have to do all four
radixes.

It's probably worth mentioning here and/or in the code that k is
dependent on the pack file size and that we can jump out early for
small pack files.  That's my favorite part of this code by the way. :)

 Signed-off-by: Jeff King p...@peff.net
 ---
 I changed a few things from the original, including:

   1. We take an unsigned number of objects to match the fix in the
  last patch.

   2. The 16-bit digit size is factored out to a single place, which
  avoids magic numbers and repeating ourselves.

   3. The digits variable is renamed to bits, which is more accurate.

   4. The outer loop condition uses the simpler while (max >> bits).

   5. We use memcpy instead of an open-coded loop to copy the whole array
  at the end. The individual bucket-assignment is still done by
  struct assignment. I haven't timed if memcpy would make a
  difference there.

   6. The 64K*sizeof(int) pos array is now heap-allocated, in case
  there are platforms with a small stack.

 I re-ran my timings to make sure none of the above impacted them; it
 turned out the same.

  pack-revindex.c | 84 +
  1 file changed, 79 insertions(+), 5 deletions(-)

 diff --git a/pack-revindex.c b/pack-revindex.c
 index 1aa9754..9365bc2 100644
 --- a/pack-revindex.c
 +++ b/pack-revindex.c
 @@ -59,11 +59,85 @@ static int cmp_offset(const void *a_, const void *b_)
 /* revindex elements are lazily initialized */
  }

 -static int cmp_offset(const void *a_, const void *b_)
 +/*
 + * This is a least-significant-digit radix sort.
 + */
 +static void sort_revindex(struct revindex_entry *entries, unsigned n, off_t max)
  {
 -   const struct revindex_entry *a = a_;
 -   const struct revindex_entry *b = b_;
 -   return (a->offset < b->offset) ? -1 : (a->offset > b->offset) ? 1 : 0;
 +   /*
 +* We use a digit size of 16 bits. That keeps our memory
 +* usage reasonable, and we can generally (for a 4G or smaller
 +* packfile) quit after two rounds of radix-sorting.
 +*/
 +#define DIGIT_SIZE (16)
 +#define BUCKETS (1 << DIGIT_SIZE)
 +   /*
 +* We want to know the bucket that a[i] will go into when we are using
 +* the digit that is N bits from the (least significant) end.
 +*/
 +#define BUCKET_FOR(a, i, bits) (((a)[(i)].offset >> (bits)) & (BUCKETS-1))
 +
 +   /*
 +* We need O(n) temporary storage, so we sort back and forth between
 +* the real array and our tmp storage. To keep them straight, we always
 +* sort from a into buckets in b.
 +*/
 +   struct revindex_entry *tmp = xcalloc(n, sizeof(*tmp));

Didn't notice it the first time I read this, but do we really need
calloc here?  Or will malloc do?

 +   struct revindex_entry *a = entries, *b = tmp;
 +   int bits = 0;
 +   unsigned *pos = xmalloc(BUCKETS * sizeof(*pos));
 +
 +   while (max >> bits) {
 +   struct revindex_entry *swap;
 +   int i;

You forgot to make i unsigned.  See below too...

 +
 +   memset(pos, 0, BUCKETS * sizeof(*pos));
 +
 +   /*
 +* We want pos[i] to store the index of the last element that
 +* will go in bucket i (actually one past the last