On Thursday, August 08, 2013 10:56:38 am Junio C Hamano wrote:
> I thought the discussion was about making the local gc
> cheaper, and the "Imagine we have a cheap way" was to
> address it by assuming that the daily "pack young
> objects into a single pack" can be sped up if we did not
> have to traverse history.  More permanent packs (the
> older ones in "set of packs staggered by age" Martin
> proposes) in the repository should go through the normal
> history traversal route.

Assuming I understand what you are suggesting, would these 
"young objects" still get deduped efficiently without doing 
any history traversal (it sounds like they would)?  In other 
words, the scheme would save time by not pruning unreferenced 
objects, but it would still dedupe and delta-compress, so 
creating these young-object packs would still be a big win?  
Concretely, is there still a good chance that my 317 new pack 
files, which included a 33M pack file, would get consolidated 
down to something near 8M?
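
(For concreteness, here is roughly what I imagine that could 
look like with existing plumbing.  This is only a sketch, 
untested, and since nothing is walked it would keep 
unreachable loose objects too:

    # Pack every loose object into one pack without walking
    # history, then drop the now-redundant loose copies.
    objdir=$(git rev-parse --git-dir)/objects
    (cd "$objdir" && find ?? -type f) |
        tr -d / |        # "ab/cdef..." -> 40-char object id
        git pack-objects --quiet "$objdir/pack/pack"
    git prune-packed

pack-objects would still do its normal delta search within 
that set of objects, which I assume is where the dedup and 
compression win comes from.)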

If so, then yeah, this might be nice, especially if skipping 
the history traversal is what makes it fast.  Today my 
solution mostly saves IO, not time.  It does still save some 
time, I believe I have seen up to a 50% savings, but that is 
nothing compared to the massive, several-orders-of-magnitude 
IO savings.  But if what you suggest could also give 
orders-of-magnitude time savings on top of the IO 
improvements I am already seeing, then repacking regularly 
would suddenly become very cheap even on large repos.
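
(For reference, my current staggered roll-up is roughly the 
sketch below, much simplified from the real script: it lists 
the objects in the small/young packs directly and writes one 
consolidated pack, so the big old packs are never rewritten 
and no history is walked.

    # .idx size is used here as a crude proxy for pack size;
    # "-size -8M" is GNU find syntax.
    cd "$(git rev-parse --git-dir)"/objects/pack
    for idx in $(find . -name '*.idx' -size -8M); do
        git show-index < "$idx" | cut -d' ' -f2
    done | git pack-objects --quiet pack
    # ...then delete the old small packs whose objects were
    # consolidated, and run git prune-packed.

The remaining cost there is mostly IO on the small packs 
themselves, not history traversal.)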

The only time-consuming piece left would be pruning, then?  
Could bitmaps eventually help out there?
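
(My understanding of the JGit bitmaps is that a bitmap stored 
alongside a pack records which objects are reachable from 
selected commits, so the "what is still referenced" answer 
that pruning needs would no longer require a full object 
walk.  If something similar lands in core git, usage might 
look like the following; the flag and config names are my 
guesses at what an interface could be, hedged on whatever 
actually gets merged:

    # Hypothetical interface until bitmap support is merged:
    git repack -a -d -b                  # write bitmap with the pack
    git config repack.writeBitmaps true  # or make it the default

Bitmaps would not make deleting objects any faster, but the 
reachability computation that decides what to prune is the 
expensive part.)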

