Hi GSoC contributors (Yes, you're already contributing, because discussions advance the tasks even if you're not selected for Summer of Code support!)
I'm collecting the MRs I know about here. I will not be working on them immediately, so don't get too excited yet. But you can help me out by letting me know if I missed any of yours. Please reply to this message, adding them in numerical order. Please check whether others have already replied, and add to the tail of the thread. *Keep only the MR lists and headings!* My posts are tl;dr as it is; nobody needs to read them twice. ;-)

If you are able to, assign your MRs to me (@yaseppochi). If yours are on the list, please check that they are tagged with "GSoC", and if possible add other appropriate tags from the drop-down menu, such as "easy" and "beginner-friendly".

For each MR, please check:

- The pipeline passes without warnings. Unfortunately, there's an upstream issue with Python 3.13 and Postorius that causes hundreds of warnings; for some reason this does not affect HyperKitty. We're working on this, and it may delay approval of Postorius MRs until next week. Some MRs with warnings may be approved, but that will take extra time. MRs that fail the tests will not be approved.

- It is cross-linked with an appropriate issue. MRs that do not have a linked issue will not be approved. In most cases there will be an existing issue, which you should update by adding a link to your MR in a comment. If not, you must create one, paying special attention to point (3) below. This may seem like useless bureaucracy, but it is not: (1) at least in my workflow, scanning issues gets priority over scanning MRs; (2) users who don't program read issues but not MRs; (3) issue and MR descriptions have different purposes: issues report symptoms of problems and specifications for fixes and features, while MRs discuss implementation; (4) if you have a well-written issue description, it can be copied and pasted into mailing list posts.

- It has tests for the behaviors specified to be fixed.
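To make that last requirement concrete, here is a minimal sketch of what a test contribution can look like. Mailman's suites use both unit tests and doctests; every name below (`subject_prefix`, `TestSubjectPrefix`) is invented for illustration and does not correspond to actual Mailman code.

```python
"""Hypothetical example: one function covered by both test styles."""

import unittest


def subject_prefix(subject, prefix="[list] "):
    """Prepend the list prefix unless it is already present.

    The docstring doubles as a doctest:

    >>> subject_prefix("Hello")
    '[list] Hello'
    >>> subject_prefix("[list] Hello")
    '[list] Hello'
    """
    if subject.startswith(prefix):
        return subject
    return prefix + subject


class TestSubjectPrefix(unittest.TestCase):
    """Unit-test style: one assertion per specified behavior."""

    def test_prefix_added(self):
        self.assertEqual(subject_prefix("Hello"), "[list] Hello")

    def test_prefix_not_duplicated(self):
        self.assertEqual(subject_prefix("[list] Hello"), "[list] Hello")
```

Such a file can be exercised with `python -m doctest -v example.py` for the doctest and `python -m unittest example` for the test case; in the real tree you would follow the conventions of the existing tests next to the module you touched.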
In the case of a *regression*, there may already be a test (that's pretty much the definition of a regression). However, for user-reported defects and new features, you'll need to provide them. (Exception: cosmetic changes in the web UIs, which need to be verified by hand.) There are two kinds of tests used in Mailman: *unit tests* (run with tox and nose2, and I think pytest more recently), usually found in a separate directory "near" the code to be tested, and *doctests*, which you will find in a nearby reStructuredText documentation file. If you can't figure out how testing works, or at least cargo-cult it from existing tests for the module, then at minimum add a comment to the MR explaining what needs to be tested (one-liners should almost always be enough).

There are quite a few more MRs than I expected, and it's going to take many hours to get through all of them. I plan to start with the core MRs in numerical order, then HyperKitty, and finally Postorius (hoping that the warnings problem can be fixed quickly). I will try to get at least one MR marked "mergeable" for each contributor, but once you have an MR marked "mergeable", the rest of yours go to the end of the line. I will start actually merging, in batches, in the middle of next week.

Mailman core: !1438 !1439 !1445 !1449 !1451 !1454 !1458 !1461 !1462

Postorius: !1063 !1064 !1065 !1074 !1075 !1077

HyperKitty: !701 !702 !703 !704 !705

Steve

--
GNU Mailman consultant (installation, migration, customization)
Sirius Open Source
https://www.siriusopensource.com/
Software systems consulting in Europe, North America, and Japan

_______________________________________________
Mailman-Developers mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://mail.python.org/mailman3/lists/mailman-developers.python.org/
Mailman FAQ: https://wiki.list.org/x/AgA3
Security Policy: https://wiki.list.org/x/QIA9
