Thanks.

--
Respectfully,

Mark Sauder
c: 801.698.8786 <[email protected]>


On Sat, Sep 14, 2019 at 9:43 PM Mark Sauder <[email protected]> wrote:

> Dear Eigen,
>
> I feel like I've given enough time for someone else to speak up about this
> but haven't seen this commentary. There is a reason that Eigen hasn't
> migrated to git earlier, and it is the same reason that there are 6-year-old
> PRs waiting in the queue, not closed out or commented on... Eigen is not
> well.
>
> Eigen Curators, you have not listened to your audience; you have not
> accepted good, hard work to keep your library alive and improved; you have
> not kept up with the times. Simply put, you have let opportunity pass Eigen
> by for more than half a decade. For the past 5 years, virtually no one
> outside of a few maintainers has done anything for Eigen because PRs don't
> get accepted and change is not embraced.
>
> Please don't let this chance get missed. Migrate to git, start accepting
> good work, embrace a fresh start. And most importantly, don't let only 5
> maintainers do all of the work. If you can't do that, you will only see
> less activity, more forks as people take what they want, less usage of
> Eigen as a package, more fragmentation, and worse results. This has been
> the case for Eigen since 2015.
>
> This is not meant to be negative, only an honest retrospective. This is a
> very good chance for Eigen to see a brighter future than it has. It's dying
> right now, but that can be changed. I think this will be my last email to
> the community in this forum, but I will watch from afar.
>
> Thanks for everything!
>
> --
> Respectfully,
>
> Mark Sauder
> c: 801.698.8786 <[email protected]>
>
>
> On Wed, Sep 11, 2019 at 10:04 AM Gael Guennebaud <
> [email protected]> wrote:
>
>> To prepare the migration from Bitbucket, I started to play a bit with its
>> API to see what could be done. So far I've quickly drafted two (ugly)
>> Python scripts to archive the forks and pull requests. Since this is a
>> one-shot for us, I did not care about robustness, safety, generality,
>> beauty, etc.
>>
>> You can see them here:
>> https://gitlab.com/ggael/bitbucket-migration-tools and contribute!
>>
>> ** Forks **
>>
>> You can see the summary of the fork script here:
>> http://manao.inria.fr/eigen_tmp/archive_forks_log.html
>>
>> The hg clones (history + checkout) represent 20 GB, maybe 12 GB if we
>> remove the checkouts. Among the 460 forks, 214 seem to have no changes at
>> all (according to "hg out") and could be dropped. I don't know yet where
>> to host them, though.
>>
>> This script can be run incrementally.
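For illustration, a minimal sketch of how the "hg out" check for empty forks could work. This is not taken from the actual script; the one-clone-per-subdirectory layout, and relying on the default path recorded in each clone, are assumptions.

```python
import subprocess
from pathlib import Path

def fork_is_empty(hg_out_text):
    """Classify the text output of `hg out`: Mercurial prints
    'no changes found' when a fork has no extra changesets."""
    return "no changes found" in hg_out_text

def empty_forks(clones_root):
    """Yield clone directories (hypothetical layout: one fork per
    subdirectory of clones_root) that have nothing to push."""
    for clone in Path(clones_root).iterdir():
        if not (clone / ".hg").is_dir():
            continue  # not a Mercurial clone
        result = subprocess.run(
            ["hg", "out"], cwd=clone,
            capture_output=True, text=True,
        )
        # `hg out` also exits with status 1 when nothing is outgoing.
        if result.returncode == 1 or fork_is_empty(result.stdout):
            yield clone
```

Forks flagged this way could be dropped from the archive, shrinking the 20 GB of clones before deciding where to host the rest.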
>>
>>
>> ** Pull-Requests **
>>
>> You can find the output of the pull-request script here:
>> http://manao.inria.fr/eigen_tmp/pullrequests/
>>
>> There is a short summary, and then for each PR a static .html file plus
>> diff/patch files and other details. For instance, see:
>> http://manao.inria.fr/eigen_tmp/pullrequests/OPEN/686/pr686.html
>>
>> Currently this script cannot be run incrementally. You have to run it
>> just before closing the respective repository!
>>
>> Also, this script does not grab inline comments; only the main
>> discussion is archived. Inline comments could be obtained by iterating
>> over the "activity" pages, but I don't think that's worth the effort
>> because they would be difficult to exploit anyway.
>>
>>
>> ** hg to git **
>>
>> As discussed in the other thread, if we switch from hg to git, then all
>> hashes will have to be updated. Generating a map file is easy, so
>> updating the links/hashes in bug comments and PR comments should not be
>> too difficult (we only have to figure out the right regex to catch all
>> the variants).
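As a rough sketch of that comment-side conversion (the regex and the map format are assumptions, not the final script): given a map from full hg changeset hashes to git commit hashes, abbreviated references in free text can be remapped while preserving their length.

```python
import re

# Candidate hashes: hex runs of 7 to 40 characters on word boundaries.
HASH_RE = re.compile(r"\b[0-9a-f]{7,40}\b")

def remap_hashes(text, hg_to_git):
    """Replace hg changeset hashes (full or abbreviated) in free text
    with their git counterparts, keeping the abbreviation length.
    `hg_to_git` maps full 40-char hg hashes to full git hashes."""
    def repl(match):
        h = match.group(0)
        # Linear scan is fine for a one-shot; a prefix index would
        # speed this up for hundreds of thousands of comments.
        for hg, git in hg_to_git.items():
            if hg.startswith(h):
                return git[:len(h)]
        return h  # not a known changeset: leave it alone
    return HASH_RE.sub(repl, text)
```

A real pass would also need to rewrite full changeset URLs, which is where "the right regex to catch all the variants" gets tricky.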
>>
>> However, updating the hashes within the commit messages will require
>> rewriting the whole history in a careful order. Does anyone here feel
>> brave enough to write such a script? If not, I guess we could live with
>> an online PHP script doing the hash conversion on demand. I don't think
>> we'll have to follow such hashes very frequently.
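To make the ordering constraint concrete, here is a toy model (not git plumbing; a real rewrite would go through something like fast-export/fast-import): each commit id depends on its message and its parents' ids, so messages must be rewritten parents-first while the old-to-new map grows. The two hash functions stand in for hg and git ids and are purely illustrative.

```python
import hashlib

def old_id(message, parent_ids):
    """Toy stand-in for an hg changeset hash."""
    return hashlib.sha1((message + "".join(parent_ids)).encode()).hexdigest()

def new_id(message, parent_ids):
    """Toy stand-in for the corresponding git commit hash."""
    return hashlib.sha256((message + "".join(parent_ids)).encode()).hexdigest()[:40]

def rewrite_history(commits, topo_order):
    """commits: {old id: (message, [old parent ids])};
    topo_order lists every commit after all of its parents."""
    old_to_new, new_messages = {}, {}
    for old in topo_order:
        message, parents = commits[old]
        # Assuming messages only reference ancestors, the topological
        # order guarantees every mentioned hash is already remapped.
        for o, n in old_to_new.items():
            message = message.replace(o, n)
        new_messages[old] = message
        old_to_new[old] = new_id(message, [old_to_new[p] for p in parents])
    return old_to_new, new_messages
```

The subtlety the sketch exposes: rewriting a message changes the commit's own hash, which in turn changes every descendant's hash, so the map can only be built incrementally in that parents-first order.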
>>
>> cheers,
>> gael
>>
>>
>>
