On Wed, Oct 13, 2021 at 2:50 PM Grant Edwards <grant.b.edwa...@gmail.com> wrote:
>
> Is there some reason it should default
> to doing unlimited depth fetch operations?
If all you want is a repo, there's no reason to set the depth higher. If you want to see the history, then you'll want it all. However, once you have an initial sync, I don't think git should go back and fetch all the history unless you explicitly ask it to, so I don't see why this would cause issues after the initial sync. If you were fetching all the history, it would be the FIRST sync that caused all the issues. Well, unless portage is going out and trying to pull it all in (and if so, I'd think it would have done that from the start).

Once you have the full repo, subsequent syncs should be very fast and shouldn't use much CPU server-side. The git client sends the remote server its current head, and then the server walks back from its own head until it finds yours, which will only be a short distance if you've synced recently. Then only the new objects in between have to be sent. The whole thing is de-duplicated and copy-on-write just due to its data structure.

I'm suspecting some sort of server-side issue - maybe an intermittent one. Either that, or portage really is trying to pull in all that history after the initial sync.

Another option is to do a pull from the github mirror. That same repo is hosted on both gentoo's server and github, and they're identical (the content hash tells you as much), so you should be able to do a pull from either seamlessly. The signatures/etc are applied to both as well.

Some don't care for github not being FOSS, but if you're just using it as a mirror, I'd argue it is no different than if one of the gazillion distfile mirrors happened to run IIS or have firmware that wasn't coreboot. It is just another mirror.

--
Rich
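The "server walks back from its head until it finds yours" behavior described above can be sketched in a few lines. This is a toy model only - it assumes a linear history, whereas real git walks a commit DAG and negotiates "have"/"want" lines over the pack protocol - but it shows why a recent sync sends almost nothing while an initial sync sends everything:

```python
def objects_to_send(server_history, client_tip):
    """Toy model of git fetch negotiation over a linear history.

    server_history: commit ids on the server, oldest first.
    client_tip: the client's current head, or None for an initial clone.
    Returns the commits the client is missing, oldest first.
    """
    missing = []
    # Walk back from the server's tip toward older commits.
    for commit in reversed(server_history):
        if commit == client_tip:
            # Found the common point the client advertised; stop walking.
            return list(reversed(missing))
        missing.append(commit)
    # Client had nothing in common: send the whole history (initial sync).
    return list(reversed(missing))

history = ["c1", "c2", "c3", "c4", "c5"]
print(objects_to_send(history, "c3"))  # recent sync: only ['c4', 'c5'] travel
print(objects_to_send(history, None))  # initial sync: all five commits travel
```

The commit ids and the `objects_to_send` helper are invented for illustration; the point is just that the work is proportional to how far behind you are, not to the total size of the history.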
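The "content hash tells you as much" point rests on git being content-addressed: a blob's id is the SHA-1 of a `blob <size>\0` header plus the raw bytes, so identical content produces the identical object id regardless of which mirror serves it. A minimal sketch (the sample content is made up; the hashing scheme itself is how git computes blob ids):

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    # Git hashes a type/size header followed by the raw content.
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Hypothetical file content, "fetched" from two different mirrors:
on_gentoo = git_blob_id(b"DIST example-1.0.tar.gz 12345 ...\n")
on_github = git_blob_id(b"DIST example-1.0.tar.gz 12345 ...\n")
print(on_gentoo == on_github)  # True: same content, same object id, either mirror works
```

This is also why the de-duplication mentioned above falls out of the data structure for free: two trees that contain the same file simply point at the same object id.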