On 19 Aug 2012, at 02:02, Junio C Hamano wrote:
> Alexey Muranov <alexey.mura...@gmail.com> writes:
>> I hope my opinion might be useful because i do not know anything
>> about the actual implementation of Git,...
> That sounds like contradiction.
I think that the implementation (the code), the model, and the interface are
separate things, and that the model can be understood without knowing the
implementation.
At the top level, for example, one does not need to know how commit storage is
optimized; it is enough to understand that each commit is a snapshot of a
subtree in a file directory.
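To illustrate the "commit as snapshot" model without touching implementation details, here is a minimal sketch (the repository and file names are arbitrary, chosen for the example): inspecting a commit object shows that it names a complete tree, not a diff against the previous commit.

```shell
# Create a throwaway repository and make one commit.
tmp=$(mktemp -d)
cd "$tmp"
git init -q snapshot-demo
cd snapshot-demo
echo hello > a.txt
git add a.txt
git -c user.email=you@example.com -c user.name=You commit -q -m "first"

# The commit object records a tree (a snapshot of the whole directory)
# plus parent and author metadata -- no deltas at this level.
# Delta compression happens lower down, in pack files.
git cat-file -p HEAD
```

The output begins with a `tree <sha1>` line, which is the snapshot; how that tree is stored on disk is the implementation's concern, not the model's.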
>> To just give a quick idea of my ideas, i thought that 'fetching'
>> in Git was an inevitable evil that stands apart from other
>> operations and is necessary only because the computer
>> communication on Earth is not sufficiently developed to keep all
>> Git repositories constantly in sync,...
> It is a feature, not a symptom of an insufficiently developed
> technology, that I do not have to know what random tweaks and
> experiments are done in repositories of 47 thousands people who
> clone from me, and I can sync with any one of them only when I know
> there is something worth looking at when I say "git fetch".
Currently, one of the main functions of 'fetch', apart from updating the
remote-tracking branches, is downloading the remote objects. This part is
necessary only because of insufficiently developed technology.
The other main function is updating the local copies of remote branches (the
remote-tracking branches); this is what I described as "taking a [...]".
I did not understand what you meant by
"I do not have to know what random tweaks and experiments are done in
repositories of 47 thousands people who clone from me, and I can sync with any
one of them only when I know there is something worth looking at when I say
"git fetch"."
How is it possible to know and not to know what is going on in the remote
repository at the same time?