On Tue, 26 Jul, Jeff Garzik wrote:
> AFAICT this is
> just a complete waste of time. Why does this occur?
> Packing 1394 objects
> Unpacking 1394 objects
> 100% (1394/1394) done
> It doesn't seem to make any sense to perform work, then immediately undo
> that work, just for a local pull.
First, make sure you have a recent git; it does better at optimizing the
objects, so there are fewer of them. Of course, the above could be a real
pull of a fair amount of work, but check that your git has this commit:
Be more aggressive about marking trees uninteresting
because otherwise you sometimes get a fair number of extra objects:
git-rev-list wasn't always being very careful, and took more objects than
it strictly needed.
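(As a quick sanity check, assuming you build git from a clone of git's own
source repository and your git is new enough to have "git log --grep", you
can search the history for that commit's subject line:

    # In your clone of the git source tree:
    git log --grep='Be more aggressive about marking trees uninteresting'

If that prints a commit, your git has the fix.)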
Secondly, what's the problem? Sure, I could special-case the local case,
but do you really want to have two _totally_ different code-paths? In
other words, it's absolutely NOT a complete waste of time: it's very much
a case of trying to have a unified architecture, and the fact that it
spends a few seconds doing things in a way that is network-transparent is
time well spent.
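To make that concrete: the pull command is identical whether the source is
a local directory or a remote URL. The path and URL below are made-up
placeholders:

    # Same command, same code-path, local or remote:
    git pull /path/to/other/repo master
    git pull git://example.com/project.git master

The local case just happens to have a very fast "network".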
Put another way: do you argue that X network transparency is a total waste
of time? You could certainly optimize X if you always made it be
local-machine only. Or you could make tons of special cases, and have X
have separate code-paths for local clients and for remote clients, rather
than just always opening a socket connection.
See? Trying to have one really solid code-path is not a waste of time.
We do end up having a special code-path for "clone" (the "-l" flag), which
does need it, but I seriously doubt you need it for a local pull. The most
expensive operation in a local pull tends to be (if the repositories are
unpacked and cold-cache) just figuring out the objects to pull, not the
packing/unpacking per se.
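For reference, here's a rough sketch of the contrast (the repository paths
are made up):

    # Local clone: "-l" hardlinks the object files instead of copying them.
    git clone -l /path/to/source/repo new-copy

    # Local pull: no special flag; the generic, network-transparent
    # machinery handles it.
    cd existing-repo
    git pull /path/to/source/repo master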