Today we discussed this at triage. We're leaning towards changing the default from 20 to 10, since 10 appears to fix the problem[0] at the cost of only about a 30% increase in sync time.
One question, though, is how we should treat existing data, because most Remotes at this point probably have a value of 20 for download_concurrency. We came up with two options that we would like some feedback on.

# Option 1: Migrate 20 to 10

This would be a migration in pulpcore that would update download_concurrency to 10 for all Remotes whose download_concurrency is set to 20. Something like:

    Remote.objects.filter(download_concurrency=20).update(download_concurrency=10)

# Option 2: Documentation

This would be similar to the migration approach, but instead of modifying our users' data, we'd document how they could do it themselves. So something like:

    pulpcore-manager shell_plus -c "Remote.objects.filter(download_concurrency=20).update(download_concurrency=10)"

Any feedback is welcome.

[0] https://pulp.plan.io/issues/7186#note-2

David

On Mon, Jul 27, 2020 at 2:57 PM Grant Gainey <ggai...@redhat.com> wrote:
> Hey folks,
>
> Looking into issue 7212 <https://pulp.plan.io/issues/7212>, over the
> weekend I did some ad-hoc evaluations of sync performance at various
> concurrency settings. I wrote up my observations here:
>
> https://hackmd.io/@ggainey/pulp3_sync_concurrency
>
> Just thought folks might be interested.
>
> G
> --
> Grant Gainey
> Principal Software Engineer, Red Hat System Management Engineering
>
>
> _______________________________________________
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
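To make the semantics of the Option 1 migration concrete, here is a minimal pure-Python sketch (plain dicts stand in for Remote rows; `migrate_concurrency` is a hypothetical name, not actual pulpcore code). The point is that only remotes sitting at exactly the old default of 20 are touched; any other user-tuned value is left alone.

```python
# Sketch of the Option 1 data-migration semantics, mirroring
# Remote.objects.filter(download_concurrency=20).update(download_concurrency=10).
# Dicts stand in for Remote rows; this is illustrative, not pulpcore code.

OLD_DEFAULT = 20
NEW_DEFAULT = 10

def migrate_concurrency(remotes):
    """Set download_concurrency to NEW_DEFAULT only where it is
    exactly OLD_DEFAULT; return the number of remotes changed."""
    changed = 0
    for remote in remotes:
        if remote["download_concurrency"] == OLD_DEFAULT:
            remote["download_concurrency"] = NEW_DEFAULT
            changed += 1
    return changed

remotes = [
    {"name": "a", "download_concurrency": 20},  # old default -> migrated to 10
    {"name": "b", "download_concurrency": 5},   # user-tuned -> untouched
    {"name": "c", "download_concurrency": 20},  # old default -> migrated to 10
]
print(migrate_concurrency(remotes))  # -> 2
```

One caveat either way: the filter can't distinguish "still at the old default" from "user explicitly chose 20", so Option 1 would also silently change remotes where 20 was a deliberate choice, which is part of why Option 2 (letting users run it themselves) is on the table.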