On Tue, Jan 13, 2015 at 11:33:08PM +0100, Johannes Sixt wrote:

> BTW, is it the incompressibility where the time is lost or lack of
> sparseness of the files? How does the timing change with this patch on
> top?

Oh, good call. It's the incompressibility. Which makes perfect sense.
Once we copy the file into the object database, that copy is not sparse.
But in the genrandom version, it _is_ a million times bigger. :)
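To illustrate the point (a sketch, not part of the patch): random bytes barely compress, while a uniform run of bytes deflates to almost nothing, which is why the object-database copy of the genrandom file stays full size.

```shell
# Compressed size of 2MB of random data vs. 2MB of zero bytes.
# The random stream stays roughly its original size; the zeros
# shrink to a few kilobytes.
head -c 2000000 /dev/urandom | gzip -c | wc -c
head -c 2000000 /dev/zero    | gzip -c | wc -c
```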

With the patch below, my timings go back to ~0.7s (actually, they seem
slightly _better_ on average than what is in "master" now, but there is
quite a bit of run-to-run noise, so it may not be meaningful).

> diff --git a/t/t1050-large.sh b/t/t1050-large.sh
> index f653121..9cf4e0e 100755
> --- a/t/t1050-large.sh
> +++ b/t/t1050-large.sh
> @@ -9,10 +9,10 @@ test_expect_success setup '
>       # clone does not allow us to pass core.bigfilethreshold to
>       # new repos, so set core.bigfilethreshold globally
>       git config --global core.bigfilethreshold 200k &&
> -     test-genrandom seed1 2000000 >large1 &&
> +     printf "\0%2000000s" X >large1 &&
>       cp large1 large2 &&
>       cp large1 large3 &&
> -     test-genrandom seed2 2500000 >huge &&
> +     printf "\0%2500000s" Y >huge &&
>       GIT_ALLOC_LIMIT=1500k &&
>       export GIT_ALLOC_LIMIT
>  '
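For anyone puzzling over the printf trick above: `%2000000s` left-pads its argument with spaces to the requested field width, so the output is one NUL byte followed by 1999999 spaces and the literal `X` -- 2000001 bytes total, and highly compressible. A quick sanity check:

```shell
# One NUL, then "X" padded to a 2000000-character field with spaces:
# total size is 2000001 bytes, almost all of it a single repeated byte.
printf "\0%2000000s" X >large1
wc -c <large1
```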

I think with this squashed in, I have no complaints at all about your
patch.

-Peff