Hi all,

After some discussion, we have concluded that we would much prefer to
perform the gitlab data migration again from scratch rather than live
with the off-by-one issue.

Daniel, I've added you in CC to humbly request your assistance in
this matter.

Essentially, as described in the quoted text below, the migration
missed one issue, which has resulted in 600+ issue numbers being off
by one; this is quite a hassle (references to issues in many commit
messages and migrated pull requests are now incorrect, etc.).

On Sun, 2021-01-10 at 18:28 +0900, Tristan Van Berkom wrote:
> Hi all,
[...]
> All migrated issues up until #800 have the same issue number in gitlab
> and in github; however, issue #801 was skipped in the migration, due to
> the migration script believing that #801 was in fact the already
> migrated #423.
> 
> The migration script avoids accidentally migrating the same record
> twice, probably in case the script needed to be restarted (which it
> did):
> 
>   https://github.com/Cynical-Optimist/node-gitlab-2-github/blob/master/src/index.ts#L281
> 
> The net result is that we do not have an issue #801 in github, and
> every subsequent issue in github is numbered "${gitlab_issue} - 1".
[...]
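
For reference, the way I understand that duplicate check is that it
compares issue titles against what already exists in github, which is
how two distinct issues sharing a title (#423 and #801) can collide.
A rough sketch in TypeScript of that kind of check follows; the
identifiers here are illustrative, not the script's actual ones:

    // Illustrative sketch of title-based duplicate detection, as I
    // understand the script's behaviour; identifiers are made up.
    interface GitlabIssue {
      iid: number;   // gitlab issue number
      title: string;
    }

    // Keep only the gitlab issues whose title does not already exist
    // in github. Two distinct gitlab issues with identical titles
    // (like #423 and #801) collide here, so the second one is never
    // migrated.
    function issuesToMigrate(
      gitlabIssues: GitlabIssue[],
      githubTitles: Set<string>
    ): GitlabIssue[] {
      return gitlabIssues.filter(
        issue => !githubTitles.has(issue.title.trim())
      );
    }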

What we would need help with is much like what we went through over
Christmas (but now that we know all the ins and outs of it, it should
run more smoothly):

* Delete the repo
* Recreate the bare repo
* Give the migration user temporary write access to the repo:
  https://github.com/BuildStream-Migration-Bot
* Enable the issues and pull requests features on the repo
  (issue tracking is disabled by default in the ASF repos)
* Temporarily disable automated emails during the migration (ASF
  infra has hooks which send emails for new issues and pull requests
  to the mailing list).
* Set the default branch to `master` (for this and the feature
  toggles above, see the rough API sketch below)
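
For what it's worth, the repo settings steps above map onto a single
PATCH /repos/{owner}/{repo} call in the github REST API; here is a
rough TypeScript sketch of what we are asking for (assuming admin
rights, a token, and a runtime with global fetch; infra presumably
has its own tooling for this):

    // Rough sketch of the repo settings changes via the github REST
    // API (PATCH /repos/{owner}/{repo}); only meant to illustrate the
    // request, infra will have its own way of doing this.
    const GITHUB_TOKEN = process.env.GITHUB_TOKEN!; // placeholder

    async function configureRepo(owner: string, repo: string): Promise<void> {
      const res = await fetch(`https://api.github.com/repos/${owner}/${repo}`, {
        method: "PATCH",
        headers: {
          Authorization: `token ${GITHUB_TOKEN}`,
          Accept: "application/vnd.github.v3+json",
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          has_issues: true,         // enable the issue tracker
          default_branch: "master", // the branch must already exist
        }),
      });
      if (!res.ok) {
        throw new Error(`github API returned ${res.status}`);
      }
    }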

To avoid repeating the issue, I would modify the title of #801 in
gitlab so that it does not get mistaken for #423, and then rerun the
process.
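
Concretely, that rename is just a one-off call to the gitlab issue
edit API (PUT /projects/:id/issues/:issue_iid) before rerunning,
something along these lines (a sketch only: the project id and token
are placeholders, and it assumes a runtime with global fetch):

    // One-off sketch: retitle gitlab issue #801 so that it no longer
    // collides with #423. PROJECT_ID and the token are placeholders.
    const PROJECT_ID = 12345; // placeholder, not the real project id
    const GITLAB_TOKEN = process.env.GITLAB_TOKEN!;

    async function retitleIssue(iid: number, title: string): Promise<void> {
      const url =
        `https://gitlab.com/api/v4/projects/${PROJECT_ID}` +
        `/issues/${iid}?title=${encodeURIComponent(title)}`;
      const res = await fetch(url, {
        method: "PUT",
        headers: { "PRIVATE-TOKEN": GITLAB_TOKEN },
      });
      if (!res.ok) {
        throw new Error(`gitlab API returned ${res.status}`);
      }
    }

    // e.g.: await retitleIssue(801, "Original title (disambiguated)");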

Would infra be able to assist us with this?

Apologies for the hassle; we really should have caught this earlier
in our test runs, but the problem went unnoticed until this weekend.

Cheers,
    -Tristan

