On 03/13/2014 11:07 PM, Jeff King wrote:
On Thu, Mar 13, 2014 at 03:01:09PM -0700, Shawn Pearce wrote:
It would definitely be good to have throughput measurements while
writing out the pack. However, I'm not sure we have anything useful to
count. We know the total number of objects we're
On Fri, Mar 14, 2014 at 4:43 PM, Michael Haggerty <mhag...@alum.mit.edu> wrote:
Would it be practical to change it to a percentage of bytes written?
Then we'd have progress info that is both convenient *and* truthful.
I agreed for a second, then remembered that we don't know the final
pack size
On Fri, Mar 14, 2014 at 10:29 PM, Jeff King <p...@peff.net> wrote:
If an object is reused, we already know its compressed size. If it's
not reused and is a loose object, we could use on-disk size. It's a
lot harder to estimate a non-reused, deltified object. All we have is
the uncompressed
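The per-object size estimate described above could be sketched roughly like this (hypothetical types and helper names for illustration only, not git's actual API; the 2:1 deflate guess for new deltas is an assumption):

```c
/* Sketch of the size estimate discussed above: a reused object's
 * compressed size is already known, a loose object can be charged its
 * on-disk size, but for a freshly deltified object only the
 * uncompressed size is available, so we can only guess at a ratio. */
enum obj_kind { OBJ_REUSED, OBJ_LOOSE, OBJ_NEW_DELTA };

struct obj_info {
	enum obj_kind kind;
	unsigned long compressed_size;   /* known when reusing from a pack */
	unsigned long on_disk_size;      /* known for loose objects */
	unsigned long uncompressed_size; /* all we have for a new delta */
};

static unsigned long estimate_wire_size(const struct obj_info *o)
{
	switch (o->kind) {
	case OBJ_REUSED:
		return o->compressed_size;
	case OBJ_LOOSE:
		return o->on_disk_size;
	case OBJ_NEW_DELTA:
	default:
		/* crude assumption: deflate roughly halves the data */
		return o->uncompressed_size / 2;
	}
}
```

Summing these estimates over all objects would give an approximate total for a bytes-based percentage, at the cost of the new-delta guess being off by whatever the real compression ratio turns out to be.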
On Wed, Mar 12, 2014 at 05:21:21PM -0700, Shawn Pearce wrote:
Today I tried pushing a copy of linux.git from a client that had
bitmaps into a JGit server. The client stalled for a long time with no
progress, because it reused the existing pack. No progress appeared
while it was sending the
On Thu, Mar 13, 2014 at 03:01:09PM -0700, Shawn Pearce wrote:
It would definitely be good to have throughput measurements while
writing out the pack. However, I'm not sure we have anything useful to
count. We know the total number of objects we're reusing, but we're not
actually parsing
Jeff King <p...@peff.net> writes:
There are a few ways around this:
1. Add a new phase "Writing packs" which counts from 0 to 1. Even
though it's more accurate, moving from 0 to 1 really isn't that
useful (the throughput is, but the 0/1 just looks like noise).
2. Add a new phase
On Thu, Mar 13, 2014 at 06:07:54PM -0400, Jeff King wrote:
3. Use the regular "Writing objects" progress, but fake the object
count. We know we are writing M bytes with N objects. Bump the
counter by 1 for every M/N bytes we write.
Here is that strategy. I think it looks pretty
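The counter-faking trick in option 3 can be sketched like this (a standalone illustration, not git's actual progress code; the struct and function names are made up for the example):

```c
#include <stdio.h>

/* Sketch of strategy 3: the sender knows it will write M bytes
 * covering N objects, so it advances the usual "Writing objects"
 * counter by one for every M/N bytes that go out on the wire. */
struct fake_progress {
	unsigned long total_bytes;   /* M: pack size to be sent */
	unsigned long total_objects; /* N: objects in the pack */
	unsigned long bytes_written;
	unsigned long objects_shown; /* fake per-object counter */
};

static void note_bytes_written(struct fake_progress *p, unsigned long n)
{
	p->bytes_written += n;
	/* keep objects_shown at floor(bytes_written * N / M), capped at N */
	unsigned long want = (unsigned long)((double)p->bytes_written *
					     p->total_objects / p->total_bytes);
	if (want > p->total_objects)
		want = p->total_objects;
	while (p->objects_shown < want) {
		p->objects_shown++;
		fprintf(stderr, "Writing objects: %3lu%% (%lu/%lu)\r",
			100 * p->objects_shown / p->total_objects,
			p->objects_shown, p->total_objects);
	}
}
```

Driving this from the network-write loop (e.g. once per buffer flushed) makes the counter tick smoothly from 0 to N even though no objects are actually being parsed, so the user sees familiar, truthful-enough progress during pack reuse.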
Today I tried pushing a copy of linux.git from a client that had
bitmaps into a JGit server. The client stalled for a long time with no
progress, because it reused the existing pack. No progress appeared
while it was sending the existing file on the wire:
$ git push git://localhost/linux.git