[issue45344] Have zipapp respect SOURCE_DATE_EPOCH
New submission from Nate Woods:

I have a small patch that would make zipapp respect SOURCE_DATE_EPOCH. This will ensure the zip bundles created by zipapp have consistent hashes regardless of when the source files were last touched. The idea came to my attention recently when I came across https://reproducible-builds.org/.

I can convert my changes to a PR if it's deemed interesting or useful, but I would like to respect the core maintainers' time. Please let me know if these changes are not desired or not worthwhile and I'll seek to find somewhere else to put them.

Also, I'm completely new here, so I apologize if there is anything I'm doing against protocol. I didn't find any issues in the tracker pertaining to this, and it seemed small and contained enough to be something I could try out. Hopefully this issue finds the maintainers well.

--
components: Library (Lib)
files: zipapp-respect-source-date-epoch.patch
keywords: patch
messages: 403041
nosy: bign8
priority: normal
severity: normal
status: open
title: Have zipapp respect SOURCE_DATE_EPOCH
type: enhancement
versions: Python 3.11
Added file: https://bugs.python.org/file50320/zipapp-respect-source-date-epoch.patch

___
Python tracker <https://bugs.python.org/issue45344>
___

___
Python-bugs-list mailing list
Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
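For readers unfamiliar with the convention: SOURCE_DATE_EPOCH is a Unix timestamp that build tools use in place of "now". The mechanism can be sketched as below — this is an illustration of the idea, not the attached patch; the helper name and its placement inside zipapp are my assumptions:

```python
import os
import time


def zip_timestamp():
    """Return the (year, month, day, hour, minute, second) tuple stamped
    into zip entries, honoring SOURCE_DATE_EPOCH when set so that two
    builds of the same sources produce byte-identical archives.

    Hypothetical helper; the real change would live in zipapp itself.
    """
    epoch = os.environ.get("SOURCE_DATE_EPOCH")
    ts = int(epoch) if epoch is not None else time.time()
    return time.gmtime(ts)[:6]
```

Each archive member would then be written via something like `zipfile.ZipInfo(arcname, date_time=zip_timestamp())` instead of letting zipfile pick up the file's mtime.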
[issue33777] dummy_threading: .is_alive method returns True after execution has completed
Nate Atkinson added the comment:

To be clear -- is_alive() doesn't *always* return True. It returns True until .join() is called.

Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from dummy_threading import Thread
>>> def f(): print('foo')
...
>>> t = Thread(target=f)
>>> t.start()
foo
>>> t.is_alive()
True
>>> t.join()
>>> t.is_alive()
False

I would expect is_alive() to return True while the target function is executing and to return False after the execution has completed. Instead, .is_alive() continues to return True after execution of the target function has completed.

--

___
Python tracker <https://bugs.python.org/issue33777>
___
[issue33777] dummy_threading: .is_alive method returns True after execution has completed
Nate Atkinson added the comment:

I notice that I may have inadvertently assigned this to the wrong group. I suspect that this should apply to "Library" rather than "Core". Sorry!

--

___
Python tracker <https://bugs.python.org/issue33777>
___
[issue33777] dummy_threading: .is_alive method returns True after execution has completed
New submission from Nate Atkinson:

Here's what I expect to happen (Python 2 behavior):

Python 2.7.14+ (default, Dec 5 2017, 15:17:02)
[GCC 7.2.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from dummy_threading import Thread
>>> def f(): print 'foo'
...
>>> t = Thread(target=f)
>>> t.start()
foo
>>> t.is_alive()
False
>>>

Here's what actually happens (Python 3.6):

Python 3.6.4 (default, Jan 5 2018, 02:13:53)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from dummy_threading import Thread
>>> def f(): print('foo')
...
>>> t = Thread(target=f)
>>> t.start()
foo
>>> t.is_alive()
True
>>>

After completion of the target function, I would expect .is_alive() to return False for an instance of dummy_threading.Thread. Instead, it returns True until the instance's .join() method is called.

--
components: Interpreter Core
messages: 318795
nosy: njatkinson
priority: normal
severity: normal
status: open
title: dummy_threading: .is_alive method returns True after execution has completed
type: behavior
versions: Python 3.6

___
Python tracker <https://bugs.python.org/issue33777>
___
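For contrast, the real threading module already behaves the way the report expects: is_alive() drops to False once the target returns, whether or not join() is ever called. A small demonstration (polling briefly rather than joining, to avoid racing the thread's shutdown):

```python
import threading
import time

done = threading.Event()
t = threading.Thread(target=done.set)
t.start()
done.wait()

# Never call join(); just give the thread a moment to finish run().
for _ in range(1000):
    if not t.is_alive():
        break
    time.sleep(0.01)

assert not t.is_alive()   # False even though join() was never called
```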
[issue32220] multiprocessing: passing file descriptor using reduction breaks duplex pipes on darwin
Nate <pm62...@nate.sh> added the comment:

According to https://developer.apple.com/library/content/qa/qa1541/_index.html some bugs were fixed in 10.5. Not sure if the original attempt to patch the problem was happening on < 10.5, or if this was still a problem in 10.5+. I can't for the life of me find it again, but I had found another source that claimed the true fixes for OS X came out with 10.7.

In any case, because this code is specifically part of the multiprocessing package, whereby it should be *expected* for multiple processes to be accessing the pipe, it's disastrous for this code to be reading/writing an acknowledge packet in this manner.

This is a hard case to test for, as timing matters. The duplex pipe doesn't get confused/corrupted unless one process is sending/receiving a message over the pipe at the same moment that another process is executing your acknowledge logic. It's reproducible, but not 100%.

Personally, I've restructured to using one pipe exclusively for file descriptor passing, and using a separate Queue (or Pipe pair) for custom message passing.

If a better fix cannot be established, at a minimum the documentation for multiprocessing and the Pipe class should be updated with a big red warning about passing file descriptors on OS X/macOS/darwin.

--

___
Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue32220>
___
[issue32220] multiprocessing: passing file descriptor using reduction breaks duplex pipes on darwin
New submission from Nate <pm62...@nate.sh>:

In multiprocessing/reduction.py, there is a hack workaround in the sendfds() and recvfds() methods for darwin, as determined by the "ACKNOWLEDGE" constant. There is a reference to issue #14669 in the code related to why this was added in the first place. This bug exists in both 3.6.3 and the latest 3.7.0a2.

When a file descriptor is received, this workaround/hack sends an acknowledgement message to the sender. The problem is that this completely breaks duplex pipes depending on the timing of the acknowledgement messages, as your "sock.send(b'A')" and "sock.recv(1) != b'A'" calls are being interwoven with my own messages.

Specifically, I have a parent process with child processes. I send socket file descriptors from the parent to the children, and am also duplexing messages from the child processes to the parent. If I am in the process of sending/receiving a message around the same time as your workaround is performing this acknowledge step, then your workaround corrupts the pipe.

In a multi-process program, each end of a pipe must only be read or written to by a single process, but this workaround breaks this requirement. A different workaround must be found for the original bug that prompted this "acknowledge" step to be added, because library code must not be interfering with the duplex pipe.

--
components: Library (Lib)
messages: 307649
nosy: frickenate
priority: normal
severity: normal
status: open
title: multiprocessing: passing file descriptor using reduction breaks duplex pipes on darwin
type: behavior
versions: Python 3.6, Python 3.7

___
Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue32220>
___
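The restructuring mentioned in the follow-up — one pipe reserved exclusively for descriptor passing — can be sketched with the reduction helpers directly. This is a same-process Linux demonstration (on macOS the very acknowledgement handshake under discussion would make this sequential demo block), so the timing hazard from the report never arises:

```python
import multiprocessing as mp
import os
from multiprocessing.reduction import send_handle, recv_handle

# A duplex Pipe on Unix is backed by a socketpair, which is what
# send_handle()/recv_handle() need for SCM_RIGHTS descriptor passing.
fd_sender, fd_receiver = mp.Pipe()   # reserved for descriptors ONLY

rfd, wfd = os.pipe()
os.write(wfd, b'payload')
os.close(wfd)

send_handle(fd_sender, rfd, os.getpid())
os.close(rfd)                        # the receiver now owns a duplicate

received_fd = recv_handle(fd_receiver)
with os.fdopen(received_fd, 'rb') as f:
    data = f.read()                  # b'payload'
```

Custom application messages would then travel over a separate Queue (or a second Pipe), so library-internal traffic and user traffic never share a connection.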
[issue30824] Add mimetype for extension .json
Changes by Nate Tangsurat <e4r7h...@gmail.com>:

--
pull_requests: +3082

___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30824>
___
[issue30772] If I make an attribute "
Nate Soares added the comment:

To be clear, the trouble I was trying to point at is that if foo.py didn't have __all__, then it would still have a BB attribute. But if the module is given __all__, the BB is normalized away into a B. This seems like pretty strange/counterintuitive behavior. For instance, I found this bug when I added __all__ to a mathy library, where other modules had previously been happily importing BB and using .BB etc. with no trouble.

In other words, I could accept "BB gets normalized to B always", but the current behavior is "modules are allowed to have a BB attribute but only if they don't use __all__, because __all__ requires putting the BB through a process that normalizes it to B, and which otherwise doesn't get run".

If this is "working as intended" then w/e, I'll work around it, but I want to make sure that we all understand the inconsistency before letting this bug die in peace :-)

On Wed, Jun 28, 2017 at 10:55 AM Brett Cannon <rep...@bugs.python.org> wrote:
> Changes by Brett Cannon <br...@python.org>:
>
> --
> resolution: -> not a bug
> stage: -> resolved
> status: open -> closed
>
> ___
> Python tracker <rep...@bugs.python.org>
> <http://bugs.python.org/issue30772>
> ___

--
title: If I make an attribute "[a unicode version of B]", it gets assigned to "[ascii B]", and so on. -> If I make an attribute "

___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30772>
___
[issue30772] If I make an attribute "[a unicode version of B]", it gets assigned to "[ascii B]", and so on.
New submission from Nate Soares:

[NOTE: In this comment, I use BB to mean unicode character 0x1D539, b/c the issue tracker won't let me submit a comment with unicode characters in it.]

Directory structure:

repro/
    foo.py
    test_foo.py

Contents of foo.py:

BB = 1
__all__ = ['BB']

Contents of test_foo.py:

from .foo import *

Error message:

AttributeError: module 'repro.foo' has no attribute 'BB'

If I change foo.py to have `__all__ = ['B']` (note that 'B' is not the same as 'BB'), then everything works "fine", modulo the fact that now foo.B is a thing and foo.BB is not a thing. [Recall that in the above, BB is a placeholder for U+1D539, which the issue tracker prevents me from writing here.]

--
components: Unicode
messages: 296928
nosy: Nate Soares, ezio.melotti, haypo
priority: normal
severity: normal
status: open
title: If I make an attribute "[a unicode version of B]", it gets assigned to "[ascii B]", and so on.
versions: Python 3.6

___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30772>
___
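What's happening under the hood (my gloss, consistent with the eventual "not a bug" resolution): the parser NFKC-normalizes identifiers per PEP 3131, so U+1D539 in source code becomes ASCII 'B' at compile time, while the plain strings in __all__ are never normalized — hence the AttributeError on star-import:

```python
import unicodedata

# U+1D539 (MATHEMATICAL DOUBLE-STRUCK CAPITAL B) NFKC-normalizes to 'B'.
assert unicodedata.normalize('NFKC', '\U0001D539') == 'B'

# Identifiers are normalized when compiled, so assigning to the fancy B
# actually creates an attribute named plain 'B'...
ns = {}
exec('\U0001D539 = 1', ns)
assert 'B' in ns
assert '\U0001D539' not in ns

# ...but __all__ holds un-normalized strings, so a lookup of the fancy
# name fails even though the assignment "worked".
```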
[issue29822] inspect.isabstract does not work on abstract base classes during __init_subclass__
Changes by Nate Soares <so8...@gmail.com>:

--
pull_requests: +2045

___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29822>
___
[issue29581] __init_subclass__ causes TypeError when used with standard library metaclasses (such as ABCMeta)
Changes by Nate Soares <so8...@gmail.com>:

--
pull_requests: +1390

___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29581>
___
[issue29822] inspect.isabstract does not work on abstract base classes during __init_subclass__
Nate Soares added the comment:

I didn't know about issue29638, and I'm not sure whether my PR fixes it. Looking at that bug, I don't think that my PR would fix it, because I still trust TPFLAGS_IS_ABSTRACT when __abstractmethods__ exists. That said, I'm not clear on how the cache works, so it's possible that my PR would fix 29638.

My issue appears when one uses py3.6's new __init_subclass__ hook with an ABC. __init_subclass__ is run by type.__new__, which means that, as of py3.6, users can (in a natural/reasonable way) inspect ABCMeta classes before ABCMeta.__new__ finishes executing. I didn't see any reasonable way to have ABCMeta.__new__ finish setting up the ABC before calling super().__new__, so I figured the fix should go into inspect.isabstract. But there may be better solutions I just didn't think of.

--

___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29822>
___
[issue29822] inspect.isabstract does not work on abstract base classes during __init_subclass__
Changes by Nate Soares <so8...@gmail.com>:

--
pull_requests: +556

___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29822>
___
[issue29822] inspect.isabstract does not work on abstract base classes during __init_subclass__
New submission from Nate Soares:

Here's an example test that fails:

def test_isabstract_during_init_subclass(self):
    from abc import ABCMeta, abstractmethod
    isabstract_checks = []

    class AbstractChecker(metaclass=ABCMeta):
        def __init_subclass__(cls):
            isabstract_checks.append(inspect.isabstract(cls))

    class AbstractClassExample(AbstractChecker):
        @abstractmethod
        def foo(self):
            pass

    class ClassExample(AbstractClassExample):
        def foo(self):
            pass

    self.assertEqual(isabstract_checks, [True, False])

To run the test, you'll need to be on a version of python where bpo-29581 is fixed (e.g., a cpython branch with https://github.com/python/cpython/pull/527 merged) in order for __init_subclass__ to work with ABCMeta at all in the first place.

I have a simple patch to inspect.isabstract that fixes this, and will make a PR shortly.

--
messages: 289682
nosy: So8res
priority: normal
severity: normal
status: open
title: inspect.isabstract does not work on abstract base classes during __init_subclass__
versions: Python 3.7

___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29822>
___
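A sketch of one way such a fix can work (an illustration only — the submitted patch may differ): when TPFLAGS_IS_ABSTRACT has not been set yet and __abstractmethods__ has not been computed, the class is still mid-construction, so fall back to scanning for __isabstractmethod__ markers directly:

```python
import abc


def isabstract_fallback(cls):
    """Sketch: detect abstractness even before ABCMeta.__new__ finishes.

    Hypothetical stand-in for the logic inside inspect.isabstract; not
    the actual patch.
    """
    if getattr(cls, '__abstractmethods__', None):
        return True
    if not issubclass(type(cls), abc.ABCMeta):
        return False
    if hasattr(cls, '__abstractmethods__'):
        # ABCMeta setup already ran and found nothing abstract.
        return False
    # Still under construction: look for abstract methods ourselves.
    for name in dir(cls):
        if getattr(getattr(cls, name, None), '__isabstractmethod__', False):
            return True
    return False
```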
[issue29581] __init_subclass__ causes TypeError when used with standard library metaclasses (such as ABCMeta)
New submission from Nate Soares:

I believe I've found a bug (or, at least, critical shortcoming) in the way that python 3.6's __init_subclass__ interacts with abc.ABCMeta (and, presumably, most other metaclasses in the standard library). In short, if a class subclasses both an abstract class and a class-that-uses-__init_subclass__, and the __init_subclass__ uses keyword arguments, then this will often lead to TypeErrors (because the metaclass gets confused by the keyword arguments to __new__ that were meant for __init_subclass__).

Here's an example of the failure. This code:

from abc import ABCMeta

class Initifier:
    def __init_subclass__(cls, x=None, **kwargs):
        super().__init_subclass__(**kwargs)
        print('got x', x)

class Abstracted(metaclass=ABCMeta):
    pass

class Thingy(Abstracted, Initifier, x=1):
    pass

thingy = Thingy()

raises this TypeError when run:

Traceback (most recent call last):
  File "<string>", line 10, in <module>
    class Thingy(Abstracted, Initifier, x=1):
TypeError: __new__() got an unexpected keyword argument 'x'

See http://stackoverflow.com/questions/42281697/typeerror-when-combining-abcmeta-with-init-subclass-in-python-3-6 for further discussion.

--
messages: 287966
nosy: Nate Soares
priority: normal
severity: normal
status: open
title: __init_subclass__ causes TypeError when used with standard library metaclasses (such as ABCMeta)
type: behavior
versions: Python 3.6

___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29581>
___
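For readers on current CPython, where ABCMeta forwards class keywords to type.__new__, the reporter's pattern works; a minimal runnable reduction of the example (with a list standing in for the print call so the behavior is checkable):

```python
from abc import ABCMeta

received = []


class Initifier:
    def __init_subclass__(cls, x=None, **kwargs):
        super().__init_subclass__(**kwargs)
        received.append(x)


class Abstracted(metaclass=ABCMeta):
    pass


# On a fixed interpreter this class statement no longer raises TypeError,
# and Initifier.__init_subclass__ receives x=1.
class Thingy(Abstracted, Initifier, x=1):
    pass
```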
Re: Task Engine Framework?
On Dec 7, 8:32 am, Adam Tauno Williams <awill...@whitemice.org> wrote:
> On Mon, 2010-12-06 at 15:11 -0800, Nate wrote:
> > Hello, I'm in the process of developing a task engine / workflow module
> > for my Python application and I'm wondering if anyone knows of existing
> > code that could be used or adapted. Since I know that's far too generic
> > a question, let me share my goals:
> > 1) Support long running operations (think backing up millions of files) where:
> >    - The operation can be paused (application closed) and the operation resumed later.
> >    - Individual tasks can be chained, run in parallel, or looped over (the workflow part)
>
> We have something like that in OIE (OpenGroupware Integration Engine).
> http://sourceforge.net/projects/coils/. These things tend to turn out
> to be quite specific [and thus not generic]. But if you have any
> questions feel free to ask. The focus in OIE was the ability to
> describe processes in BPML and facilitate process management [creating,
> queuing, parking (stopping for later resume) of business / ETL tasks].
> Parts of the code aren't especially elegant but it does move a fairly
> large amount of data every day.
>
> > 2) Would like to graph each defined operation (task A starts task B with
> > parameters...) for documenting algorithms in a Software Design Document
> > 3) Each individual task in the operation would be a self-contained class.
> > I'd imagine implementing its action by defining a doTask() method
> > Hopefully that's clear. I just feel like someone must have already
> > solved this elegantly. I greatly enjoy Python and I look forward to
> > proving its use as a valuable language for a Masters student even
> > though everyone thinks I should use C# :-).

Thank you, I'll take a look at the project. At the very least, seeing someone else's solution would be helpful. I'm trying desperately hard to keep the code simple :-)

-Nate

--
http://mail.python.org/mailman/listinfo/python-list
Task Engine Framework?
Hello,

I'm in the process of developing a task engine / workflow module for my Python application and I'm wondering if anyone knows of existing code that could be used or adapted. Since I know that's far too generic a question, let me share my goals:

1) Support long running operations (think backing up millions of files) where:
   - The operation can be paused (application closed) and the operation resumed later.
   - Individual tasks can be chained, run in parallel, or looped over (the workflow part)

2) Would like to graph each defined operation (task A starts task B with parameters...) for documenting algorithms in a Software Design Document

3) Each individual task in the operation would be a self-contained class. I'd imagine implementing its action by defining a doTask() method.

Hopefully that's clear. I just feel like someone must have already solved this elegantly. I greatly enjoy Python and I look forward to proving its use as a valuable language for a Masters student even though everyone thinks I should use C# :-).

Thanks!
-Nate
Masters Student at Eastern Washington University
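The three goals above compose naturally around a resumable queue of task objects whose progress is plain data. A deliberately tiny sketch of that shape (all names hypothetical, not from an existing framework; doTask() follows the naming in the post):

```python
import json


class Task:
    """Self-contained unit of work: subclasses implement doTask()."""
    def doTask(self, state):
        raise NotImplementedError


class BackupFile(Task):
    """Toy task: pretend to back up one file by recording its path."""
    def __init__(self, path):
        self.path = path

    def doTask(self, state):
        state.setdefault('done', []).append(self.path)


class Engine:
    """Runs tasks in sequence; progress lives in a JSON-able dict, so an
    operation can be checkpointed when the application closes and
    resumed later from the saved state."""
    def __init__(self, tasks, state=None):
        self.tasks = tasks
        self.state = state or {'next': 0}

    def run(self, steps=None):
        end = len(self.tasks) if steps is None else min(
            len(self.tasks), self.state['next'] + steps)
        while self.state['next'] < end:
            self.tasks[self.state['next']].doTask(self.state)
            self.state['next'] += 1

    def checkpoint(self):
        return json.dumps(self.state)
```

Chaining, parallelism, and the operation graph would layer on top (e.g. each task declaring its successors), but pausing/resuming already falls out of keeping all progress in `state`.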
[issue8561] Install .exes generated with distutils to not do a CRC check
New submission from Nate DeSimone <nateman1...@gmail.com>:

During network transit, .exes generated with distutils may become corrupted. The part of the file that is a binary executable is small compared to the full package typically, so it is possible for the installer to run and lay down bad files. It would be nice if the setup program ran a CRC check on itself before running.

--
assignee: tarek
components: Distutils
messages: 104451
nosy: Nate.DeSimone, tarek
priority: normal
severity: normal
status: open
title: Install .exes generated with distutils to not do a CRC check
type: feature request
versions: Python 2.5, Python 2.6, Python 2.7, Python 3.1, Python 3.2, Python 3.3

___
Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue8561>
___
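The request amounts to a self-check before install: append a checksum to the payload at build time and verify it at startup. A minimal illustration of that mechanism (my sketch, not distutils code):

```python
import struct
import zlib


def append_crc(payload: bytes) -> bytes:
    """Append a little-endian CRC32 trailer, as a build step might."""
    return payload + struct.pack('<I', zlib.crc32(payload) & 0xFFFFFFFF)


def verify_crc(blob: bytes) -> bool:
    """What the installer would run on itself before laying down files."""
    payload, trailer = blob[:-4], blob[-4:]
    return struct.unpack('<I', trailer)[0] == (zlib.crc32(payload) & 0xFFFFFFFF)
```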
Re: os.system vs subprocess
I ended up going with this: http://code.activestate.com/recipes/440554/ — it seems to feed me new lines of output at least before the subprocess finishes, with some adjustment of the time delays. I guess I'll just be packing winpy into the installer. Or something.
os.system vs subprocess
I get different behavior with os.system and subprocess (no surprise there I guess), but I was hoping for some clarification, namely why.

If I type this directly into the command window:

java -Xms128M -Xmx512M -jar gmapcreator.jar -dfile=censettings.xml > mapoutput.txt

mapoutput.txt stores the output:

Command line mode: input file=censettings.xml
1358 files will be created in C:\Documents and Settings\Nate\Desktop\freqanalysis\tilefiles\CENSUS1-tiles
1358 tiles created out of 1358 in 16 seconds

If I execute said command with subprocess, the output is not written to mapoutput.txt - the output just appears in the command window. If I execute said command with os.system, the output is written to mapoutput.txt like I expected.

In reality all I want to do is access the first two lines of the above output before the process finishes, something which I haven't been able to manage with subprocess so far. I saw that somehow I might be able to use os.read(), but this is my first attempt at working with pipes/processes, so I'm a little overwhelmed. Thanks!
Re: os.system vs subprocess
On Jun 21, 2:12 pm, Chris Rebert <c...@rebertia.com> wrote:
> On Sun, Jun 21, 2009 at 10:12 AM, Nate <walton.nathan...@gmail.com> wrote:
> > I get different behavior with os.system and subprocess (no surprise
> > there I guess), but I was hoping for some clarification, namely why.
> > If I type this directly into the command window:
> > java -Xms128M -Xmx512M -jar gmapcreator.jar -dfile=censettings.xml > mapoutput.txt
> > mapoutput.txt stores the output:
> > Command line mode: input file=censettings.xml
> > 1358 files will be created in C:\Documents and Settings\Nate\Desktop\freqanalysis\tilefiles\CENSUS1-tiles
> > 1358 tiles created out of 1358 in 16 seconds
> > If I execute said command with subprocess, the output is not written
> > to mapoutput.txt - the output just appears in the command window.
>
> Show us the subprocess version of your code. People tend to not get
> the parameters quite right if they're not familiar with the library.
>
> Cheers,
> Chris
> --
> http://blog.rebertia.com

Here it is:

gmapcreator = subprocess.Popen(
    "java -Xms128M -Xmx512M -jar gmapcreator.jar -dfile=censettings.xml",
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
Re: os.system vs subprocess
On Jun 21, 3:49 pm, Christian Heimes <li...@cheimes.de> wrote:
> Nate wrote:
> > gmapcreator = subprocess.Popen(
> >     "java -Xms128M -Xmx512M -jar gmapcreator.jar -dfile=censettings.xml",
> >     stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
>
> Try this:
>
> gmapcreator = subprocess.Popen(
>     ["java", "-Xms128M", "-Xmx512M", "-jar", "gmapcreator.jar",
>      "-dfile=censettings.xml"],
>     stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
> out, err = gmapcreator.communicate("stdin input")
>
> The subprocess doesn't use the shell so you have to apply the command
> as a list of strings. Do you really need stdin, stdout and stderr as
> pipes? If you aren't careful with the API your program can block.
>
> Christian

Christian,

Thanks for your response. Related to this talk about shells, maybe you could point me towards a resource where I could read about how windows commands are processed with/without shells? I guess I assumed all subprocess commands were interpreted by the same thing, cmd.exe, or perhaps the shell. I guess this also ties in with me wondering what the whole subprocess Popen instantiation encompassed (the __init__ of the subprocess.Popen class?).

I don't think I need stdin, stdout, stderr as pipes, I just was faced with several options for how to capture these things and I chose subprocess.PIPE. I could also os.pipe my own pipes, or create a file descriptor in one of a couple of ways, right? I'm just unclear on which method is preferable for me.

All I want to do is pull the first 2 lines from the output before gmapcreator.jar finishes. The first 2 lines of output are always the same except a few numbers.
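For the stated goal — grabbing the first two lines of output before the process exits — reading the stdout pipe line by line is usually enough. A runnable sketch with a stand-in child process (the java command from the thread is replaced by a Python one-liner so the example is self-contained):

```python
import subprocess
import sys

# Stand-in for: java ... -jar gmapcreator.jar -dfile=censettings.xml
child = [sys.executable, '-c',
         "print('Command line mode: input file=censettings.xml');"
         "print('1358 files will be created');"
         "print('...more output...')"]

proc = subprocess.Popen(child, stdout=subprocess.PIPE,
                        universal_newlines=True)

# readline() returns as soon as the child flushes each line, long
# before the process finishes.
first_two = [proc.stdout.readline().rstrip('\n') for _ in range(2)]

proc.communicate()   # drain the rest so the child can exit cleanly
```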
[issue2986] difflib.SequenceMatcher not matching long sequences
New submission from Nate [EMAIL PROTECTED]:

The following code shows no matches though the strings clearly match.

from difflib import *

a = '''3904320338155955662857322172779218727992471109386112515279452352973279311752006856588512503244702012502812653160306927721351031250270279878152125021081471125246894603319162986283456469448293252335442814953964029718671705515246437056879456095915444174665464026255415736754542680178373675412998898571410483714801783736754144828361714801783736754133068408714801783736754140859665714801783736754153851004471480178373675415715864371410690714801783736754147488890714801783736205957668017837367545448801783104170539154677705102536314736754477780178373675415217103227148017837367541737811137714801783736754172791151671480178373675417692995271480178373675417575983571480178373675417398965871480178310417055026467770551235573705687945609591544562532964082675415736300610425832914520311514810301595721999571547897879113780178373618951021983280377781981989237498913678981414213198924949892679989164882577810944751102884217048258978791137801783104170511836542073627327981801279360326159714801783736171798080178310415420736447510213871790638471586131412631592131012571210126718031314200414571314893700123874777987006697747115770067074789312578013869801783104120529166337056879456095918495136604565251349544838956219513495753741344870733943253617458316356794745831634651172458316348316144586052838244151360641656349118903581890331689038658903263218549028909605134957536316060'''
b = '''4634320338155955662857322172779218727992471109386112515279452352973279311752006856588512503244702012502812653160306927721351031250270279878152125021081471125246894603319162986283456469448293252335442814953964029718671705515246437056879456095915444174665464026255415736754542680178373675412998898571410483714801783736754144828361714801783736754133068408714801783736754140859665714801783736754153851004471480178373675415715864371410690714801783736754147488890714801783736205957668017837367545448801783104170539154677705102536314736754477780178373675413182108117148017837367541737811137714801783736754172791151671480178373675417692995271480178373675417575983571480178373675417398965871480178310417055026467770551235573705687945609591544562532964082675415736300610425832914520311514810301595721999571547897879113780178373618951021983280377781981989237498913678981414213198924949892679989164882577810944751102884217048258978791137801783104170511836542073627327981801279360326159714801783736171798080178310415420736447510213871790638471412131420041457131485122165131466702097131466731723131466741536131466751581131466771649131466761975131467212090131467261974131467231858131467201556131467212538131467221553131467221943131467231748131466711452131467271787131412578013869801783104154307361718482280178373638585436251621338931320893185072980138084820801545115716861861152948618615002682261422349251058108327767521397977810837298017831041205291663370568794560959184951366045652513495448389562195134957537413448707339432536174583163'''

lst = [(a, b)]
for a, b in lst:
    print "---"
    s = SequenceMatcher(None, a, b)
    print "length of a is %d" % len(a)
    print "length of b is %d" % len(b)
    print s.find_longest_match(0, len(a), 0, len(b))
    print s.ratio()
    for block in s.get_matching_blocks():
        m = a[block[0]:block[0]+block[2]]
        print "a[%d] and b[%d] match for %d elements and it is \"%s\"" % (block[0], block[1], block[2], m)

--
components: Extension Modules
messages: 67428
nosy: hagna
severity: normal
status: open
title: difflib.SequenceMatcher not matching long sequences
versions: Python 2.5

__
Tracker [EMAIL PROTECTED] <http://bugs.python.org/issue2986>
__
Re: Not understanding absolute_import
On Apr 5, 8:33 am, Nate Finch [EMAIL PROTECTED] wrote:
> I've been trying to use from absolute_import and it's giving me a hell
> of a headache. I can't figure out what it's *supposed* to do, or maybe
> rather, it doesn't seem to be doing what I *think* it's supposed to be
> doing.

No one? Is this too simple a question, or is it just that no one is using this?

-Nate
Re: Plugin architecture - how to do?
On Apr 5, 10:57 am, [EMAIL PROTECTED] wrote:
> I'm making a program that consists of a main engine + plugins. Both
> are in Python. My question is, how do I go about importing arbitrary
> code and have it be able to use the engine's functions, classes, etc?

For a true plugin architecture, you don't have the main engine calling methods on the plugin. What you do is have an API on your engine with methods the plugin can call and events it can hook into. The events are how the engine communicates state to any plugins, without having to know who they are or what they do.

Your engine has a configuration that tells it what plugins to load (which the plugins presumably modified when they installed themselves) or otherwise has some standard way that the engine can figure out what plugins need to be loaded.

Now granted, I don't specifically know how to do this via python.. but, maybe what I've said will send you in the right direction.

-Nate
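Concretely, that pattern can be as small as an engine with a subscribe/emit API plus a loader that imports each configured plugin module and hands it the engine. A sketch (all names here are hypothetical, including the `register(engine)` convention each plugin is assumed to expose):

```python
import importlib


class Engine:
    """Plugins call the engine's API and hook its events; the engine
    never needs to know plugin internals."""

    def __init__(self):
        self._hooks = {}

    def subscribe(self, event, callback):
        """Plugins hook engine events here."""
        self._hooks.setdefault(event, []).append(callback)

    def emit(self, event, **data):
        """The engine announces state changes to whoever subscribed."""
        for callback in self._hooks.get(event, []):
            callback(**data)

    def load_plugins(self, module_names):
        """module_names would come from the engine's configuration."""
        for name in module_names:
            module = importlib.import_module(name)
            module.register(self)   # assumed plugin entry point
```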
Not understanding absolute_import
I've been trying to use from absolute_import and it's giving me a hell of a headache. I can't figure out what it's *supposed* to do, or maybe rather, it doesn't seem to be doing what I *think* it's supposed to be doing.

For example (actual example from my code, assume all files have from __future__ import absolute_import):

/project
    /common
        guid.py (has class Guid)
        __init__.py (has from .guid import Guid)
    /relate
        relatable.py (has from .common import Guid and class Relatable(Guid))
        __init__.py (has from .relatable import Relatable)

Now, this all compiles ok and if I change the imports, it does not. So obviously this is working. However, I can't figure out *why* it works. In relatable.py, shouldn't that need to be from ..common import Guid? from . should import stuff from the current directory, from .foo should import stuff from module foo in the current directory. To go up a directory, you should need to use .. but if I do that, python complains that I've gone up too many levels.

So, I don't understand... if the way I have above is correct, what happens if I put a common.py in the relate directory? How would you differentiate between that and the common package? I don't understand why .common works from relatable. According to the docs and according to what seems to be common sense, it really seems like it should be ..common.

-Nate
Re: Why NOT only one class per file?
So, here's a view from a guy who's not a python nut and has a long history of professional programming in other languages (C++, C, C#, Java).

I think you're all going about this the wrong way. There's no reason to *always* have one class per file. However, there's also no reason to have 1600 lines of code and 50 classes in one file either. You talk about the changing file dance, but what about the scroll for 30 seconds dance? What about the six different conflicts in source control because everything's in one file dance?

I think it doesn't matter how many classes and/or functions you have in one file as long as you keep files to a reasonable size. If you've ever worked on a medium to large-scale project using multiple developers and tens or hundreds of thousands of lines of code, then you know that keeping code separate in source control is highly important. If I'm working on extending one class and someone else is working on another class... it's much less of a headache if they're in separate files in source control. Then you don't have to worry about conflicts and merging. I think that's the number one benefit for short files (not necessarily just one class per file).

My cutoff is around 500 lines per file. If a file goes much over that, it's really time to start looking for a way to break it up. Sure, if all your classes are 4 lines long, then by all means, stick them all in one file. But I don't think that's really any kind of valid stance to argue from. Sure, if you don't need organization, it doesn't matter what organization technique you use. But if you do need organization, it does matter. And I think one class per file is an acceptable way to go about it, especially when your classes tend to be over a couple hundred lines long.

-Nate
Re: Why NOT only one class per file?
On Apr 5, 10:48 am, Bruno Desthuilliers bruno. [EMAIL PROTECTED] wrote:

    Nate Finch wrote:
        So, here's a view from a guy who's not a python nut and has a long history of professional programming in other languages (C++, C, C#, Java)
    There are quite a few professional programmers here with experience on medium to large to huge projects with different languages, you know.

Sorry, I meant to go back and edit that phrase to sound less condescending. I know there are a lot of professional programmers on here, and I didn't mean to imply otherwise. It wasn't supposed to be a contrast to everyone, just introducing myself. I totally agree with you... there's a balance between too many files and files that are too large.

As to the guy who writes 1000+ line classes: dude, refactor some. You're trying to make the class do too much, almost by definition. We have *some* classes that big, and they're marked as "needs refactor". It's certainly not a common occurrence, though. Usually they're UI classes, since they require a lot of verbose positioning of elements and hooking of events.

And while people are reading this thread, let me plug my other thread, asking about absolute_import. I'd really love some help :)

-Nate
--
http://mail.python.org/mailman/listinfo/python-list
import and global namespace
I'd like a module that I'm importing to be able to use objects in the global namespace into which it's been imported. Is there a way to do that?

thanks,
nate
--
http://mail.python.org/mailman/listinfo/python-list
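A module doesn't automatically see the namespace it was imported into; the usual answer is to pass the objects (or the whole namespace) in explicitly. A minimal sketch of that pattern, with the imported module built inline via types.ModuleType so it runs as one script (the module and function names are hypothetical, not from the thread):

```python
import types

# Stand-in for the imported module (normally a separate helper.py file).
helper = types.ModuleType("helper")
exec(
    "def greet(namespace):\n"
    "    # Look up an object from the caller's namespace.\n"
    "    return 'hello, ' + namespace['name']\n",
    helper.__dict__,
)

name = "nate"
# The importer hands its globals over explicitly.
print(helper.greet(globals()))  # hello, nate
```

Explicit passing keeps the dependency visible; having a module reach back into whoever imported it tends to make code hard to follow.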
Re: I thought I'd 'got' globals but...
try this:

    gname = 'nate'

    def test():
        gname = 'amy'
        print gname

    test()
    print gname

outputs: 'amy' 'nate'

whereas this:

    gname = 'nate'

    def test():
        global gname
        gname = 'amy'
        print gname

    test()
    print gname

outputs: 'amy' 'amy'

Luis M. González wrote:

    Bruno Desthuilliers wrote:

        def doIt(name=None):
            global gname
            if name is None:
                name = gname
            else:
                gname = name

    Sorry for this very basic question, but I don't understand why I should add the global into the function body before using it. This function works even if I don't add the global. Just to try this out, I wrote this variant:

        gname = 'Luis'

        def doIt2(name=None):
            if name is None:
                name = gname
            return name

        print doIt2()  -- returns Luis.

    So, what's the point of writing the function this way instead?

        def doIt2(name=None):
            global gname
            if name is None:
                name = gname
            return name

    luis
--
http://mail.python.org/mailman/listinfo/python-list
Re: I thought I'd 'got' globals but...
    OK, so I should include the global only if I plan to modify it. Otherwise, I don't need to include it. Am I right?

I guess you could say that's true. I'm hardly an expert, so I couldn't say there aren't other potential ramifications. (anyone?) But, as a rule, I would always declare the global, because 1) it makes my intention clear, and 2) if I later decided to modify the global variable, I would be less likely to introduce a bug by forgetting to declare it global.

--
http://mail.python.org/mailman/listinfo/python-list
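The rule being discussed can be shown concretely. A short sketch (in modern Python 3 syntax; the thread's snippets use Python 2's print statement): reading a global needs no declaration, but assigning without the global statement silently creates a local name instead:

```python
gname = 'Luis'

def read_only():
    return gname          # reading falls through to the module global

def shadow():
    gname = 'Nate'        # assignment makes gname *local*; global untouched
    return gname

def rebind():
    global gname          # declared: assignment now rebinds the global
    gname = 'Nate'
    return gname

print(read_only())  # Luis
print(shadow())     # Nate
print(read_only())  # Luis  (shadow() never touched the global)
print(rebind())     # Nate
print(read_only())  # Nate  (rebind() really changed it)
```

A related trap worth knowing: a function that first reads gname and later assigns it, without the global declaration, raises UnboundLocalError on the read, because the assignment anywhere in the body makes the name local for the whole function.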
Re: setting variables from a tuple NEWB
manstey wrote:

    Hi, if I have a tuple like this:

        tupGlob = (('VOWELS', 'aeiou'), ('CONS', 'bcdfgh'))

    is it possible to write code using tupGlob that is equivalent to:

        VOWELS = 'aeiou'
        CONS = 'bcdfgh'

could you use a dictionary instead? i.e.

    >>> tupGlob = {'VOWELS': 'aeiou', 'CONS': 'bcdfgh'}
    >>> tupGlob['VOWELS']
    'aeiou'
    >>> tupGlob['VOWELS'] = 'aeiou AndSometimesY'
    >>> tupGlob['VOWELS']
    'aeiou AndSometimesY'

nate
--
http://mail.python.org/mailman/listinfo/python-list
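For completeness, the literal thing asked for is also possible: dict() accepts the pair tuple directly, and globals() is writable at module scope. A sketch (the dictionary approach in the reply above is usually the better design, since generated variable names are invisible to readers and tools):

```python
tupGlob = (('VOWELS', 'aeiou'), ('CONS', 'bcdfgh'))

# Turn each (name, value) pair into a real module-level variable.
globals().update(dict(tupGlob))

print(VOWELS)  # aeiou
print(CONS)    # bcdfgh
```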
Re: SWIG, C++, and Mac OS X
why are you modifying your setup.py file? can't you just run:

    swig -c++ -python exampleFile.i

have you seen http://www.swig.org/tutorial.html ? does it describe something like what you'd like to do?

n
--
http://mail.python.org/mailman/listinfo/python-list
mac findertools restart() does not work?
>>> import findertools
>>> findertools.restart()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/Library/Frameworks/Python.framework/Versions/2.4//lib/python2.4/plat-mac/findertools.py", line 90, in restart
    finder.restart()
  File "/Library/Frameworks/Python.framework/Versions/2.4//lib/python2.4/plat-mac/lib-scriptpackages/Finder/Legacy_suite.py", line 29, in restart
    raise aetools.Error, aetools.decodeerror(_arguments)
aetools.Error: (-1708, 'the AppleEvent was not handled by any handler', None)

does anyone know why findertools.restart() isn't working on my machine (see above)? is there something more I need to do?
--
http://mail.python.org/mailman/listinfo/python-list
Re: replace a method in class: how?
I don't know a ton about this, but it seems to work if you use:

    This.update = another_update

(instead of t.update = another_update). After saying This.update = another_update, if you enter This.update, it says "unbound method This.another_update" (it's unbound), but if you call t.update(5) it will print:

    another 5

--
http://mail.python.org/mailman/listinfo/python-list
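A short reconstruction of the scenario in modern Python 3 (the original class and method bodies were not quoted, so these are assumptions matching the names in the thread; note that Python 3 dropped the "unbound method" concept, so This.update is just a plain function there). Assigning on the class replaces the method for every instance, existing and future:

```python
class This:
    def update(self, n):
        return 'original %d' % n

def another_update(self, n):
    return 'another %d' % n

t = This()
print(t.update(5))             # original 5
This.update = another_update   # patch the *class*, not the instance
print(t.update(5))             # another 5
```

Assigning to the instance (t.update = another_update) instead stores a bare function on that one object, so it would not receive self automatically; patching the class is what makes the replacement behave like a normal method.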
SendKeys for mac?
does anyone know if there is a way to inject keyboard events to a mac similar to the way SendKeys works for a windows machine? (Can you point me at it?) thanks, n -- http://mail.python.org/mailman/listinfo/python-list
python texts?
To everyone who took the time to reply: thank you. I have a better idea of where to go after Learning Python. I still do not have a good idea of where this book will put me in the grand scheme of things, but oh well. I suppose that is something I will find out soon enough.

Once again, thank you for your responses.
--Nate
--
http://mail.python.org/mailman/listinfo/python-list
maximum integer length?
Hey everyone, I am trying to figure out what is the largest integer I can represent. Let's say with 400 megabytes of memory at my disposal. I have tried a few things:

    c = 2**100
    d = 2**200
    print c**d

Obviously I didn't have enough memory for that, but I was able to do c**3. (I think, anyway; it is still trying to display the result.)

So I am just wondering how long an integer can be with 400 megabytes of memory. I guess this is a question of logic? Each digit takes up a byte, right? If I have 400 megabytes, that would mean I could have a long integer with up to 419,430,400 digits? Really I am not sure on this one, and that is why I am asking. It is just bugging me that I am not sure how it works... I know this probably seems trivial and just a stupid question, but I find this type of stuff interesting...
--
http://mail.python.org/mailman/listinfo/python-list
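In a modern CPython (these introspection tools postdate the 2.4-era post), this can be measured directly: long integers are stored as binary "digits" of sys.int_info.bits_per_digit bits each (30 on most builds, 15 on some older or smaller ones), packed into words of sys.int_info.sizeof_digit bytes, so each byte holds roughly 2.26 *decimal* digits rather than one. A sketch:

```python
import sys

# How this CPython stores a long integer.
print(sys.int_info.bits_per_digit, sys.int_info.sizeof_digit)

# Memory actually used by a big int: a small fixed header plus one
# digit word per bits_per_digit bits of the value.
print(sys.getsizeof(2 ** 100))
print(sys.getsizeof(2 ** 1000))

# Rough capacity of 400 MB, ignoring the per-object header.
bytes_available = 400 * 1024 * 1024
words = bytes_available // sys.int_info.sizeof_digit
bits = words * sys.int_info.bits_per_digit
decimal_digits = int(bits * 0.30103)  # log10(2) ~ 0.30103
print(bits, decimal_digits)
```

On a 30-bit-digit build this works out to roughly 3.1 billion bits, on the order of 9 * 10**8 decimal digits, so noticeably more than the one-digit-per-byte estimate in the post. The practical limit is memory (and, for printing, the cost of binary-to-decimal conversion), not any fixed maximum integer size.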
python texts?
Hello everyone, can anyone recommend a python text progression for me? Assuming I have no knowledge of python, which books should I progress through? I prefer published books that I can actually hold in my hands, but if there are some awesome tutorials on-line, I guess I am game.

At this moment I am reading Learning Python, 2nd edition, from O'Reilly. I am enjoying it at the moment, and I intend to be done with it in a week. But I am not sure where it will put me in the grand scheme of programming with python. So perhaps a more direct question would be: what do I read after this book? Should I read something before this book? Should I ditch this book?

Thanks,
--Nate
--
http://mail.python.org/mailman/listinfo/python-list
Re: Great books on Python?
On Sun, 11 Dec 2005 06:15:17 -0800, Tolga wrote: I am not unfamiliar to programming but a newbie in Python. Could you recommend me (a) great book(s) to start with? Free online books or solid books are welcome. Thanx in advance. O'Reilly's Learning Python Second Edition covers up to version 2.3 and presumes a bit of knowledge with C. I've found it well written with a rather low count of errors. - Nate -- The optimist proclaims that we live in the best of all possible worlds, the pessimist fears this is true. -- http://mail.python.org/mailman/listinfo/python-list