[issue45344] Have zipapp respect SOURCE_DATE_EPOCH
New submission from Nate Woods:

I have a small patch that would make zipapp respect SOURCE_DATE_EPOCH. This ensures the zip bundles created by zipapp have consistent hashes regardless of when the source files were last touched. The idea came to my attention recently when I came across https://reproducible-builds.org/

I can convert my changes to a PR if it's deemed interesting or useful, but I would like to respect the core maintainers' time. Please let me know if these changes are not desired or not worthwhile and I'll find somewhere else to put them.

Also, I'm completely new here, so I apologize if there is anything I'm doing against protocol. I didn't find any issues in the tracker pertaining to this, and it seemed small and contained enough to be something I could try out. Hopefully this issue finds the maintainers well.

--
components: Library (Lib)
files: zipapp-respect-source-date-epoch.patch
keywords: patch
messages: 403041
nosy: bign8
priority: normal
severity: normal
status: open
title: Have zipapp respect SOURCE_DATE_EPOCH
type: enhancement
versions: Python 3.11
Added file: https://bugs.python.org/file50320/zipapp-respect-source-date-epoch.patch

___
Python tracker <https://bugs.python.org/issue45344>
___
Python-bugs-list mailing list
Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
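The SOURCE_DATE_EPOCH convention is simple enough to sketch. The helper below is hypothetical (it is not the attached patch); it shows how a zip-writing tool could derive a fixed `date_time` tuple from the environment variable so archive contents hash identically across builds. The clamp to 1980 is needed because the ZIP format cannot represent earlier dates.

```python
import os
import time

def zip_timestamp(default):
    """Return a zipfile date_time tuple honoring SOURCE_DATE_EPOCH.

    Falls back to `default` (e.g. the file's mtime tuple) when the
    variable is unset. ZIP cannot store dates before 1980, so the
    epoch value is clamped to 1980-01-01 UTC.
    """
    epoch = os.environ.get("SOURCE_DATE_EPOCH")
    if epoch is None:
        return default
    t = time.gmtime(max(int(epoch), 315532800))  # 315532800 == 1980-01-01 UTC
    return (t.tm_year, t.tm_mon, t.tm_mday, t.tm_hour, t.tm_min, t.tm_sec)
```

Passing a tuple like this as the `date_time` argument of `zipfile.ZipInfo` is what makes repeated builds byte-identical.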
[issue33777] dummy_threading: .is_alive method returns True after execution has completed
Nate Atkinson added the comment:

To be clear: is_alive() doesn't *always* return True. It returns True until .join() is called.

Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from dummy_threading import Thread
>>> def f(): print('foo')
...
>>> t = Thread(target=f)
>>> t.start()
foo
>>> t.is_alive()
True
>>> t.join()
>>> t.is_alive()
False

I would expect is_alive() to return True while the target function is executing and False after execution has completed. Instead, .is_alive() continues to return True after execution of the target function has completed.

--

___
Python tracker <https://bugs.python.org/issue33777>
___
[issue33777] dummy_threading: .is_alive method returns True after execution has completed
Nate Atkinson added the comment:

I notice that I may have inadvertently assigned this to the wrong group. I suspect this should apply to "Library" rather than "Core". Sorry!

--

___
Python tracker <https://bugs.python.org/issue33777>
___
[issue33777] dummy_threading: .is_alive method returns True after execution has completed
New submission from Nate Atkinson:

Here's what I expect to happen (Python 2 behavior):

Python 2.7.14+ (default, Dec 5 2017, 15:17:02)
[GCC 7.2.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from dummy_threading import Thread
>>> def f(): print 'foo'
...
>>> t = Thread(target=f)
>>> t.start()
foo
>>> t.is_alive()
False
>>>

Here's what actually happens (Python 3.6):

Python 3.6.4 (default, Jan 5 2018, 02:13:53)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from dummy_threading import Thread
>>> def f(): print('foo')
...
>>> t = Thread(target=f)
>>> t.start()
foo
>>> t.is_alive()
True
>>>

After completion of the target function, I would expect .is_alive() to return False for an instance of dummy_threading.Thread. Instead, it returns True until the instance's .join() method is called.

--
components: Interpreter Core
messages: 318795
nosy: njatkinson
priority: normal
severity: normal
status: open
title: dummy_threading: .is_alive method returns True after execution has completed
type: behavior
versions: Python 3.6

___
Python tracker <https://bugs.python.org/issue33777>
___
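For contrast, a real threading.Thread behaves the way the reporter expects, and dummy_threading was meant to mirror that API. This sketch uses threading (dummy_threading was later removed entirely in Python 3.9); the helper name and polling approach are illustrative:

```python
import threading
import time

def alive_after_completion(timeout=5.0):
    """Start a thread, wait for its target to finish WITHOUT calling join(),
    and report what is_alive() says afterwards."""
    t = threading.Thread(target=lambda: None)
    t.start()
    deadline = time.monotonic() + timeout
    # Poll instead of joining, to show that liveness flips on its own
    # once the target returns.
    while t.is_alive() and time.monotonic() < deadline:
        time.sleep(0.01)
    return t.is_alive()
```

With real threads the return value is False: a completed thread reports not-alive even before join() is called, which is the invariant the dummy implementation violated.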
[issue32220] multiprocessing: passing file descriptor using reduction breaks duplex pipes on darwin
Nate <pm62...@nate.sh> added the comment:

According to https://developer.apple.com/library/content/qa/qa1541/_index.html some bugs were fixed in 10.5. I'm not sure whether the original attempt to patch the problem was happening on < 10.5, or whether this was still a problem in 10.5+. I can't for the life of me find it again, but I had found another source that claimed the true fixes for OS X came out with 10.7.

In any case, because this code is specifically part of the multiprocessing package, where multiple processes should be *expected* to access the pipe, it's disastrous for this code to be reading/writing an acknowledge packet in this manner. This is a hard case to test for, as timing matters: the duplex pipe doesn't get confused/corrupted unless one process is sending/receiving a message over the pipe at the same moment that another process is executing the acknowledge logic. It's reproducible, but not 100% of the time.

Personally, I've restructured my code to use one pipe exclusively for file descriptor passing, and a separate Queue (or Pipe pair) for custom message passing. If a better fix cannot be established, at a minimum the documentation for multiprocessing and the Pipe class should be updated with a big red warning about passing file descriptors on OS X/macOS/darwin.

--

___
Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue32220>
___
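The restructuring described above can be sketched as follows. The worker/payload names are illustrative, and the sketch assumes a Unix platform with the fork start method; it uses the documented send_handle/recv_handle helpers from multiprocessing.reduction so that the descriptor-passing connection carries nothing but descriptors:

```python
import multiprocessing as mp
import os
from multiprocessing.reduction import recv_handle, send_handle

def worker(fd_conn, messages):
    # fd_conn carries ONLY descriptors, so any platform-specific acknowledge
    # bytes can never interleave with our own traffic, which all travels
    # over the Queue instead.
    fd = recv_handle(fd_conn)
    with os.fdopen(fd, "r") as f:
        messages.put(f.read())

def main():
    ctx = mp.get_context("fork")       # assumes a Unix platform
    fd_parent, fd_child = ctx.Pipe()   # dedicated descriptor-passing channel
    messages = ctx.Queue()             # all custom message passing
    p = ctx.Process(target=worker, args=(fd_child, messages))
    p.start()
    # Create the OS pipe after the fork so the child doesn't inherit the
    # write end (otherwise the reader would never see EOF).
    read_fd, write_fd = os.pipe()
    send_handle(fd_parent, read_fd, p.pid)
    os.close(read_fd)
    with os.fdopen(write_fd, "w") as w:
        w.write("payload")
    result = messages.get(timeout=10)
    p.join()
    return result
```

Keeping descriptor passing on its own connection is exactly the isolation the darwin ACK workaround silently assumes but the duplex-pipe API does not enforce.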
[issue32220] multiprocessing: passing file descriptor using reduction breaks duplex pipes on darwin
New submission from Nate <pm62...@nate.sh>:

In multiprocessing/reduction.py, there is a hack workaround in the sendfds() and recvfds() functions for darwin, gated by the "ACKNOWLEDGE" constant. The code references issue #14669 for why this was added in the first place. This bug exists in both 3.6.3 and the latest 3.7.0a2.

When a file descriptor is received, this workaround sends an acknowledgement message to the sender. The problem is that this completely breaks duplex pipes depending on the timing of the acknowledgement messages, as the "sock.send(b'A')" and "sock.recv(1) != b'A'" calls are interwoven with my own messages.

Specifically, I have a parent process with child processes. I send socket file descriptors from the parent to the children, and am also duplexing messages from the child processes to the parent. If I am in the process of sending/receiving a message around the same time as the workaround performs this acknowledge step, the workaround corrupts the pipe.

In a multi-process program, each end of a pipe must only be read or written by a single process, but this workaround breaks that requirement. A different workaround must be found for the original bug that prompted this "acknowledge" step, because library code must not interfere with the duplex pipe.

--
components: Library (Lib)
messages: 307649
nosy: frickenate
priority: normal
severity: normal
status: open
title: multiprocessing: passing file descriptor using reduction breaks duplex pipes on darwin
type: behavior
versions: Python 3.6, Python 3.7

___
Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue32220>
___
[issue30824] Add mimetype for extension .json
Changes by Nate Tangsurat <e4r7h...@gmail.com>:

--
pull_requests: +3082

___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30824>
___
[issue30772] If I make an attribute "
Nate Soares added the comment:

To be clear, the trouble I was trying to point at is that if foo.py didn't have __all__, then it would still have a BB attribute. But if the module is given __all__, the BB is normalized away into a B. This seems like pretty strange/counterintuitive behavior. For instance, I found this bug when I added __all__ to a mathy library, where other modules had previously been happily importing BB and using .BB etc. with no trouble.

In other words, I could accept "BB gets normalized to B always", but the current behavior is "modules are allowed to have a BB attribute but only if they don't use __all__, because __all__ requires putting the BB through a process that normalizes it to B, and which otherwise doesn't get run".

If this is "working as intended" then w/e, I'll work around it, but I want to make sure that we all understand the inconsistency before letting this bug die in peace :-)

On Wed, Jun 28, 2017 at 10:55 AM Brett Cannon <rep...@bugs.python.org> wrote:
>
> Changes by Brett Cannon <br...@python.org>:
>
> --
> resolution: -> not a bug
> stage: -> resolved
> status: open -> closed
>
> ___
> Python tracker <rep...@bugs.python.org>
> <http://bugs.python.org/issue30772>
> ___

--
title: If I make an attribute "[a unicode version of B]", it gets assigned to "[ascii B]", and so on. -> If I make an attribute "

___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30772>
___
[issue30772] If I make an attribute "[a unicode version of B]", it gets assigned to "[ascii B]", and so on.
New submission from Nate Soares:

[NOTE: In this comment, I use BB to mean unicode character 0x1D539, b/c the issue tracker won't let me submit a comment with unicode characters in it.]

Directory structure:

repro/
    foo.py
    test_foo.py

Contents of foo.py:

BB = 1
__all__ = ['BB']

Contents of test_foo.py:

from .foo import *

Error message:

AttributeError: module 'repro.foo' has no attribute 'BB'

If I change foo.py to have `__all__ = ['B']` (note that 'B' is not the same as 'BB'), then everything works "fine", modulo the fact that now foo.B is a thing and foo.BB is not a thing. [Recall that in the above, BB is a placeholder for U+1D539, which the issue tracker prevents me from writing here.]

--
components: Unicode
messages: 296928
nosy: Nate Soares, ezio.melotti, haypo
priority: normal
severity: normal
status: open
title: If I make an attribute "[a unicode version of B]", it gets assigned to "[ascii B]", and so on.
versions: Python 3.6

___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30772>
___
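The mechanism behind the mismatch is that the parser NFKC-normalizes identifiers (per PEP 3131), while the string literals inside __all__ are left untouched. A quick demonstration of both halves:

```python
import unicodedata

# U+1D539 is MATHEMATICAL DOUBLE-STRUCK CAPITAL B; NFKC folds it to ASCII "B".
folded = unicodedata.normalize("NFKC", "\U0001D539")

# The compiler applies the same normalization to identifiers while parsing,
# so assigning through the fancy name produces an attribute named plain "B".
ns = {}
exec("\U0001D539 = 1", ns)
```

So the module really does end up with only a "B" attribute, while an unnormalized __all__ entry keeps asking for the original string and fails.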
[issue29822] inspect.isabstract does not work on abstract base classes during __init_subclass__
Changes by Nate Soares <so8...@gmail.com>:

--
pull_requests: +2045

___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29822>
___
[issue29581] __init_subclass__ causes TypeError when used with standard library metaclasses (such as ABCMeta)
Changes by Nate Soares <so8...@gmail.com>:

--
pull_requests: +1390

___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29581>
___
[issue29822] inspect.isabstract does not work on abstract base classes during __init_subclass__
Nate Soares added the comment:

I didn't know about issue29638, and I'm not sure whether my PR fixes it. Looking at that bug, I don't think that my PR would fix it, because I still trust TPFLAGS_IS_ABSTRACT when __abstractmethods__ exists. That said, I'm not clear on how the cache works, so it's possible that my PR would fix 29638.

My issue appears when one uses 3.6's new __init_subclass__ hook with an ABC. __init_subclass__ is run by type.__new__, which means that, as of 3.6, users can (in a natural/reasonable way) inspect ABCMeta classes before ABCMeta.__new__ finishes executing. I didn't see any reasonable way to have ABCMeta.__new__ finish setting up the ABC before calling super().__new__, so I figured the fix should go into inspect.isabstract. But there may be better solutions I just didn't think of.

--

___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29822>
___
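The general idea can be sketched in pure Python (this is a sketch of the approach described, not the actual patch, and the function name is made up). When the class under construction doesn't yet have its own __abstractmethods__ (ABCMeta.__new__ hasn't finished, so type's data descriptor raises rather than inheriting the bases' value), fall back to scanning for __isabstractmethod__ markers:

```python
import abc

def isabstract_sketch(cls):
    # Once ABCMeta.__new__ has finished, __abstractmethods__ exists on the
    # class itself and is authoritative.
    methods = getattr(cls, "__abstractmethods__", None)
    if methods is not None:
        return bool(methods)
    # Mid-construction (e.g. inside __init_subclass__) the attribute isn't
    # set yet; look for abstract methods defined on the class, or inherited
    # from a base and not overridden with a concrete implementation.
    if any(getattr(v, "__isabstractmethod__", False) for v in vars(cls).values()):
        return True
    for base in cls.__bases__:
        for name in getattr(base, "__abstractmethods__", ()):
            if getattr(getattr(cls, name, None), "__isabstractmethod__", False):
                return True
    return False
```

This mirrors the bookkeeping ABCMeta itself performs, just done eagerly at inspection time instead of at class-creation time.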
[issue29822] inspect.isabstract does not work on abstract base classes during __init_subclass__
Changes by Nate Soares <so8...@gmail.com>:

--
pull_requests: +556

___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29822>
___
[issue29822] inspect.isabstract does not work on abstract base classes during __init_subclass__
New submission from Nate Soares:

Here's an example test that fails:

    def test_isabstract_during_init_subclass(self):
        from abc import ABCMeta, abstractmethod
        isabstract_checks = []
        class AbstractChecker(metaclass=ABCMeta):
            def __init_subclass__(cls):
                isabstract_checks.append(inspect.isabstract(cls))
        class AbstractClassExample(AbstractChecker):
            @abstractmethod
            def foo(self):
                pass
        class ClassExample(AbstractClassExample):
            def foo(self):
                pass
        self.assertEqual(isabstract_checks, [True, False])

To run the test, you'll need to be on a version of Python where bpo-29581 is fixed (e.g., a cpython branch with https://github.com/python/cpython/pull/527 merged) in order for __init_subclass__ to work with ABCMeta at all in the first place. I have a simple patch to inspect.isabstract that fixes this, and will make a PR shortly.

--
messages: 289682
nosy: So8res
priority: normal
severity: normal
status: open
title: inspect.isabstract does not work on abstract base classes during __init_subclass__
versions: Python 3.7

___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29822>
___
[issue29581] __init_subclass__ causes TypeError when used with standard library metaclasses (such as ABCMeta)
New submission from Nate Soares:

I believe I've found a bug (or at least a critical shortcoming) in the way that Python 3.6's __init_subclass__ interacts with abc.ABCMeta (and, presumably, most other metaclasses in the standard library). In short, if a class subclasses both an abstract class and a class that uses __init_subclass__, and the __init_subclass__ uses keyword arguments, this will often lead to TypeErrors (because the metaclass gets confused by the keyword arguments to __new__ that were meant for __init_subclass__).

Here's an example of the failure. This code:

from abc import ABCMeta

class Initifier:
    def __init_subclass__(cls, x=None, **kwargs):
        super().__init_subclass__(**kwargs)
        print('got x', x)

class Abstracted(metaclass=ABCMeta):
    pass

class Thingy(Abstracted, Initifier, x=1):
    pass

thingy = Thingy()

raises this TypeError when run:

Traceback (most recent call last):
  File "", line 10, in
    class Thingy(Abstracted, Initifier, x=1):
TypeError: __new__() got an unexpected keyword argument 'x'

See http://stackoverflow.com/questions/42281697/typeerror-when-combining-abcmeta-with-init-subclass-in-python-3-6 for further discussion.

--
messages: 287966
nosy: Nate Soares
priority: normal
severity: normal
status: open
title: __init_subclass__ causes TypeError when used with standard library metaclasses (such as ABCMeta)
type: behavior
versions: Python 3.6

___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29581>
___
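The resolution that eventually landed makes ABCMeta.__new__ accept **kwargs and forward them to type.__new__, which delivers them to __init_subclass__. On an interpreter that includes that fix, the reporter's example runs cleanly; this variant captures the keyword in a list instead of printing, so the result is checkable:

```python
from abc import ABCMeta

captured = []

class Initifier:
    def __init_subclass__(cls, x=None, **kwargs):
        super().__init_subclass__(**kwargs)
        captured.append(x)

class Abstracted(metaclass=ABCMeta):
    pass

# With the fix, the class keyword x=1 reaches __init_subclass__ instead of
# blowing up in ABCMeta.__new__.
class Thingy(Abstracted, Initifier, x=1):
    pass
```

The design point is that any metaclass sitting between type and the user must accept and pass through class keyword arguments it doesn't itself consume, or it breaks the __init_subclass__ protocol for everyone downstream.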
[issue8561] Install .exes generated with distutils to not do a CRC check
New submission from Nate DeSimone <nateman1...@gmail.com>:

During network transit, a .exe generated with distutils may become corrupted. The binary-executable portion of the file is typically small compared to the full package, so it is possible for the installer to run and lay down bad files. It would be nice if the setup program ran a CRC check on itself before running.

--
assignee: tarek
components: Distutils
messages: 104451
nosy: Nate.DeSimone, tarek
priority: normal
severity: normal
status: open
title: Install .exes generated with distutils to not do a CRC check
type: feature request
versions: Python 2.5, Python 2.6, Python 2.7, Python 3.1, Python 3.2, Python 3.3

___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue8561>
___
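A self-check like the one requested boils down to a streamed CRC-32 over the payload. This chunked helper is illustrative (it is not distutils code); the installer would compare the result against a checksum embedded at build time:

```python
import zlib

def file_crc32(path, chunk_size=64 * 1024):
    # Stream the file in chunks so large installers never need to fit
    # entirely in memory.
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            crc = zlib.crc32(chunk, crc)
    return crc & 0xFFFFFFFF  # normalize to an unsigned 32-bit value
```

CRC-32 only detects accidental corruption (the network-transit case described here); guarding against tampering would require a cryptographic hash instead.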
[issue2986] difflib.SequenceMatcher not matching long sequences
New submission from Nate [EMAIL PROTECTED]: The following code shows no matches though the strings clearly match. from difflib import * a = '''39043203381559556628573221727792187279924711093861125152794523529732793117520068565885125032447020125028126531603069277213510312502702798781521250210814711252468946033191629862834564694482932523354428149539640297186717055152464370568794560959154441746654640262554157367545426801783736754129988985714104837148017837367541448283617148017837367541330684087148017837367541408596657148017837367541538510044714801783736754157158643714106907148017837367541474888907148017837362059576680178373675454488017831041705391546777051025363147367544777801783736754152171032271480178373675417378111377148017837367541727911516714801783736754176929952714801783736754175759835714801783736754173989658714801783104170550264677705512355737056879456095915445625329640826754157363006104258329145203115148103015957219995715478978791137801783736189510219832803777819819892374989136789814142131989249498926799891648825778109447511028842170482589787911378017831041705118365420736273279818012793603261597148017837361! 
71798080178310415420736447510213871790638471586131412631592131012571210126718031314200414571314893700123874777987006697747115770067074789312578013869801783104120529166337056879456095918495136604565251349544838956219513495753741344870733943253617458316356794745831634651172458316348316144586052838244151360641656349118903581890331689038658903263218549028909605134957536316060''' b = '''46343203381559556628573221727792187279924711093861125152794523529732793117520068565885125032447020125028126531603069277213510312502702798781521250210814711252468946033191629862834564694482932523354428149539640297186717055152464370568794560959154441746654640262554157367545426801783736754129988985714104837148017837367541448283617148017837367541330684087148017837367541408596657148017837367541538510044714801783736754157158643714106907148017837367541474888907148017837362059576680178373675454488017831041705391546777051025363147367544777801783736754131821081171480178373675417378111377148017837367541727911516714801783736754176929952714801783736754175759835714801783736754173989658714801783104170550264677705512355737056879456095915445625329640826754157363006104258329145203115148103015957219995715478978791137801783736189510219832803777819819892374989136789814142131989249498926799891648825778109447511028842170482589787911378017831041705118365420736273279818012793603261597148017837361! 
71798080178310415420736447510213871790638471412131420041457131485122165131466702097131466731723131466741536131466751581131466771649131466761975131467212090131467261974131467231858131467201556131467212538131467221553131467221943131467231748131466711452131467271787131412578013869801783104154307361718482280178373638585436251621338931320893185072980138084820801545115716861861152948618615002682261422349251058108327767521397977810837298017831041205291663370568794560959184951366045652513495448389562195134957537413448707339432536174583163'''

lst = [(a, b)]
for a, b in lst:
    print "---"
    s = SequenceMatcher(None, a, b)
    print "length of a is %d" % len(a)
    print "length of b is %d" % len(b)
    print s.find_longest_match(0, len(a), 0, len(b))
    print s.ratio()
    for block in s.get_matching_blocks():
        m = a[block[0]:block[0]+block[2]]
        print "a[%d] and b[%d] match for %d elements and it is \"%s\"" % (block[0], block[1], block[2], m)

--
components: Extension Modules
messages: 67428
nosy: hagna
severity: normal
status: open
title: difflib.SequenceMatcher not matching long sequences
versions: Python 2.5

__
Tracker [EMAIL PROTECTED] <http://bugs.python.org/issue2986>
__
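For later readers hitting the same symptom: this behavior comes from SequenceMatcher's popularity heuristic, which in later releases (Python 2.7.1+/3.x) is exposed as the autojunk parameter. When the second sequence is 200+ elements long, any element occurring in more than 1% of it is treated as junk and excluded from matching. A small sketch with constructed data rather than the report's digit strings:

```python
from difflib import SequenceMatcher

# Two long sequences dominated by one "popular" character, with differing
# first elements so no accidental common prefix exists.
a = "a" + "x" * 300
b = "b" + "x" * 300

# Default heuristic treats "x" as junk: the 300-character run is invisible.
default_match = SequenceMatcher(None, a, b).find_longest_match(
    0, len(a), 0, len(b))

# Disabling autojunk restores the expected full-length match.
fixed_match = SequenceMatcher(None, a, b, autojunk=False).find_longest_match(
    0, len(a), 0, len(b))
```

On Python 2.5 (the version in this report) there was no way to disable the heuristic, which is why the long numeric strings above produce no matches.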