Re: JUST GOT HACKED
On 02-10-13 15:17, Steven D'Aprano wrote: [...]

And you don't treat all others in the way you hope to be treated if you would be in their shoes. I suspect that should you one day feel so frustrated you need to vent, you will hope to get treated differently than how you treat those that need to vent now. You are very selective about the people in whose shoes you can imagine yourself.

I am only an imperfect human being. I don't always live up to my ideals. Sometimes I behave poorly. I have a tendency to react to newbies' poor questions with sarcasm. Perhaps a little bit of sarcasm is okay, but there is a fine line between making a point and being unnecessarily nasty. If I cross that line, I hope that somebody will call me out on it.

Steve, it is not about failing. It is about not even trying. You don't follow the principle of treating others in the way you hope to be treated if you were in their shoes. You just mention the principle if you think it can support your behaviour. Your paragraph above is a nice illustration. Suppose you develop a new interest in which you are now the newbie, and you go to a newsgroup or forum where, as a newbie, you ask a poor question. Are you hoping they will answer with sarcasm? I doubt that very much. Yet here you are telling us sarcasm is okay as long as you don't cross the line, instead of showing us that you would like to follow this principle although you find it hard in a number of circumstances. -- Antoon Pardon -- https://mail.python.org/mailman/listinfo/python-list
Nodebox(v1) on the web via RapydScript
Hello, Nodebox is a program in the spirit of Processing, but for Python. The first version runs only on the Mac. Tom, the creator, has partly ported it to JavaScript. But many of you dislike JavaScript. The solution was to use a Python-to-JavaScript translator. Of the two great solutions, Brython and RapydScript, I've chosen RapydScript (Brython and RapydScript do not pursue the same goal). You can see a preview of 'Nodebox on the Web', namely 'RapydBox', here: http://salvatore.pythonanywhere.com/RapydBox Regards -- https://mail.python.org/mailman/listinfo/python-list
Re: Running code from source that includes extension modules
On 2 October 2013 23:28, Michael Schwarz michi.schw...@gmail.com wrote: I will look into that too, that sounds very convenient. But am I right that, to use Cython, the non-Python code needs to be written in the Cython language, which means I can't just copy/paste C code into it? For my current project, this is exactly what I do, because the C code I use already existed.

It's better than that. Don't copy/paste your code. Just declare it in Cython and you can call straight into the existing C functions, cutting out most of the boilerplate involved in making C code accessible to Python: http://docs.cython.org/src/userguide/external_C_code.html

You'll sometimes need a short Cython wrapper function to convert from Python types to corresponding C types. But this is about 5 lines of easy-to-read Cython code vs. maybe 30 lines of hard-to-follow C code. Having written CPython extension modules both by hand and using Cython, I strongly recommend using Cython. Oscar -- https://mail.python.org/mailman/listinfo/python-list
Re: Lowest Value in List
subhabangal...@gmail.com wrote:

Dear Group,

I am trying to work out a solution to the following problem in Python.

The Problem: Suppose I have three lists. Each list has 10 elements in ascending order. I have to construct one list of 10 elements which are the lowest values among the 30 elements present in the three given lists.

The Solution: I tried to address the issue in the following ways:

a) I took three lists, like,

    list1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    list2 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    list3 = [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4]

I tried to take their sum and convert it to a set to drop the repeating elements:

    set_sum = set(list1 + list2 + list3)
    set([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, -1, -5, -4, -3, -2])

In the next step I tried to convert it back to a list,

    list_set = list(set_sum)

which gave the value

    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, -1, -5, -4, -3, -2]

Now, I imported heapq,

    import heapq

and took the result as

    result = heapq.nsmallest(10, list_set)

which gave

    [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4]

b) I am thinking of working out another approach. I am taking the lists again as

    list1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    list2 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    list3 = [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4]

As they are in ascending order, I am trying to take the first four/five elements of each list, like,

    list1_4 = list1[:4]
    list2_4 = list2[:4]
    list3_4 = list3[:4]

Now, I am trying to add them as

    list11 = list1_4 + list2_4 + list3_4

thus giving us the result

    [1, 2, 3, 4, 0, 1, 2, 3, -5, -4, -3, -2]

Now, we are trying to sort the list of the set of the sum as

    sort_sum = sorted(list(set(list11)))

giving us the required result

    [-5, -4, -3, -2, 0, 1, 2, 3, 4]

If taking four elements of each list gives us fewer than 10 elements in the final value (as we are making a set to avoid repeating numbers), we increase the element count by one or two, and if the final result has more than 10 we take the first ten.

Are these approaches fine? Or should we think some other way? 
If any learned member of the group can kindly let me know how to solve it, I would be grateful.

A bit late to the show, here's my take. You could separate your problem into three simpler ones: (1) combine multiple sequences into one big sequence, (2) filter out duplicate items, (3) find the largest items.

(1) is covered by the stdlib:

    items = itertools.chain.from_iterable([list1, list2, list3])

(2) is easy assuming the items are hashable:

    def unique(items):
        seen = set()
        for item in items:
            if item not in seen:
                seen.add(item)
                yield item

    items = unique(items)

(3) is also covered by the stdlib:

    largest = heapq.nlargest(3, items)

This approach has one disadvantage: the `seen` set in unique() may grow indefinitely if the sequence passed to it is long and has an unlimited number of distinct duplicates. So here's an alternative using a heap and a set, both limited by the length of the result:

    import heapq

    def unique_nlargest(n, items):
        items = iter(items)
        heap = []
        seen = set()
        for item in items:
            if item not in seen:
                seen.add(item)
                heapq.heappush(heap, item)
                if len(heap) > n:
                    max_discard = heapq.heappop(heap)
                    seen.remove(max_discard)
                    break
        for item in items:
            if item > max_discard and item not in seen:
                seen.add(item)
                max_discard = heapq.heappushpop(heap, item)
                seen.remove(max_discard)
        return heap

    if __name__ == "__main__":
        print(unique_nlargest(3, [1,2,3,4,5,4,3,2,1,6,2,7]))

I did not test it, so there may be bugs, but the idea behind the code is simple: you can remove from the set all items that are below the minimum item in the heap. Thus both lengths can never grow beyond n (or n+1 in my actual implementation). -- https://mail.python.org/mailman/listinfo/python-list
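[Editorial aside: since the input lists in the original question are already sorted, step (1) can also be done lazily with heapq.merge, which keeps the combined stream sorted, so deduplication only needs to remember the previous item. A sketch using the question's own data; `dedupe_sorted` is an illustrative helper name:]

```python
import heapq
from itertools import islice

list1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
list2 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
list3 = [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4]

# merge() lazily combines already-sorted inputs into one sorted stream
merged = heapq.merge(list1, list2, list3)

def dedupe_sorted(items):
    # the stream is sorted, so only the previous item needs remembering
    last = object()  # sentinel that compares unequal to everything
    for item in items:
        if item != last:
            yield item
            last = item

smallest10 = list(islice(dedupe_sorted(merged), 10))
print(smallest10)  # [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4]
```

This needs O(1) extra memory beyond the output, at the price of requiring sorted inputs.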
Re: Goodbye: was JUST GOT HACKED
On Thu, 03 Oct 2013 09:21:08 +0530, Ravi Sahni wrote: On Thu, Oct 3, 2013 at 2:43 AM, Walter Hurry walterhu...@lavabit.com wrote: Ding ding! Nikos is simply trolling. It's easy enough to killfile him but inconvenient to skip all the answers to his lengthy threads. If only people would just ignore him! Hello Walter Hurry please wait! Did I do/say something wrong?!

Don't worry about it Ravi, you haven't done anything wrong. Walter is not a regular here. At best he is a lurker who neither asks Python questions nor answers them. In the last four months, I can see four posts from him: three are complaining about Nikos, and one is a two-line "Me too!" response to a post about defensive programming.

If one of us should go it should be me -- I'm just a newbie here.

No, you are welcome here. You've posted more in just a few days than Walter has in months. We need more people like you. -- Steven -- https://mail.python.org/mailman/listinfo/python-list
Re: JUST GOT HACKED
On Thu, 03 Oct 2013 09:01:29 +0200, Antoon Pardon wrote: You don't follow the principle of treating others in the way you hope to be treated if you were in their shoes. [...] Suppose you develop a new interest in which you are now the newbie and you go to a newsgroup or forum where as a newbie you ask a poor question. Are you hoping they will answer with sarcasm? I doubt that very much.

Then you would be wrong. You don't know me very well at all. If I asked a dumb question -- not an ignorant question, but a dumb question -- then I hope somebody will rub my nose in it. Sarcasm strikes me as a good balance between being too namby-pamby to correct me for wasting everyone's time, and being abusive.

An ignorant question would be: "I don't understand closures, can somebody help me?" or even: "I wrote this function: def f(arg=[]): arg.append(1); return arg and it behaves strangely. Is that a bug in Python?" This, on the other hand, is a dumb question: "I wrote a function to print prime numbers, and it didn't work. What did I do wrong?"

In the last case, the question simply is foolish. Short of mind-reading, how is anyone supposed to know which of the infinite number of errors I made? In this case, I would *much* prefer a gentle, or even not-so-gentle, reminder of my foolishness via a sarcastic retort about looking in crystal balls or reading minds, than either being ignored or being abused. And quite frankly, although I might *prefer* a gentle request asking for more information, I might *need* something harsher for the lesson to really sink in. Negative reinforcement is a legitimate teaching tool, provided it doesn't cross the line into abuse. -- Steven -- https://mail.python.org/mailman/listinfo/python-list
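[Editorial aside: the `def f(arg=[])` example quoted above is a real Python gotcha, not hypothetical. A quick sketch of why it "behaves strangely": the default list is evaluated once, at function definition time, and shared across calls:]

```python
def f(arg=[]):          # the [] default is created once, not per call
    arg.append(1)
    return arg

print(f())  # [1]
print(f())  # [1, 1] -- the same list object is reused across calls
assert f() is f()       # both calls return the very same list object
```

The usual idiom is a `None` default with `if arg is None: arg = []` inside the body.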
Re: Goodbye: was JUST GOT HACKED
On Thu, Oct 3, 2013 at 5:05 PM, Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote: On Thu, 03 Oct 2013 09:21:08 +0530, Ravi Sahni wrote: On Thu, Oct 3, 2013 at 2:43 AM, Walter Hurry walterhu...@lavabit.com wrote: Ding ding! Nikos is simply trolling. It's easy enough to killfile him but inconvenient to skip all the answers to his lengthy threads. If only people would just ignore him! Hello Walter Hurry please wait! Did I do/say something wrong?! Don't worry about it Ravi, you haven't done anything wrong. Walter is not a regular here. At best he is a lurker who neither asks Python questions nor answers them. In the last four months, I can see four posts from him: three are complaining about Nikos, and one is a two-line "Me too!" response to a post about defensive programming. If one of us should go it should be me -- I'm just a newbie here. No, you are welcome here. You've posted more in just a few days than Walter has in months. We need more people like you.

Thanks for the welcome! But no thanks for the non-welcome -- I don't see why Walter Hurry (or anyone else) should be unwelcome just because I am welcome. The world (and the Python list, hopefully!!) is big enough for all of us -- Ravi -- https://mail.python.org/mailman/listinfo/python-list
Re: Tail recursion to while iteration in 2 easy steps
On Wed, Oct 2, 2013, at 17:33, Terry Reedy wrote: 5. Conversion of apparent recursion to iteration assumes that the function really is intended to be recursive. This assumption is the basis for replacing the recursive call with assignment and an implied internal goto. The programmer can determine that this semantic change is correct; the compiler should not assume that. (Because of Python's late name-binding semantics, recursive *intent* is better expressed in Python with iterative syntax than function call syntax.)

Speaking of assumptions, I would almost say that we should make the assumption that operators (other than the __i*__ in-place family, and setitem/setattr/etc.) are not intended to have visible side effects. This would open a _huge_ field of potential optimizations - including that this would no longer be a semantic change (since relying on one of the operators being allowed to change the binding of fact would no longer be guaranteed). -- https://mail.python.org/mailman/listinfo/python-list
Re: Tail recursion to while iteration in 2 easy steps
On Wed, Oct 2, 2013, at 21:46, MRAB wrote: The difference is that a tuple can be reused, so it makes sense for the compiler to produce it as a const. (Much like the interning of small integers.) The list, however, would always have to be copied from the compile-time object. So that object itself would be a phantom, used only as the template with which the list is to be made. The key point here is that the tuple is immutable, including its items.

Hey, while we're on the subject, can we talk about frozen(set|dict) literals again? I really don't understand why this discussion fizzles out whenever it's brought up on python-ideas. -- https://mail.python.org/mailman/listinfo/python-list
Re: Tail recursion to while iteration in 2 easy steps
On Wed, Oct 2, 2013, at 22:34, Steven D'Aprano wrote: You are both assuming that LOAD_CONST will re-use the same tuple (1, 2, 3) in multiple places. But that's not the case, as a simple test will show you:

    >>> def f():
    ...     return (1, 2, 3)
    ...
    >>> f() is f()
    True

It does, in fact, re-use it when it is _the same LOAD_CONST instruction_. -- https://mail.python.org/mailman/listinfo/python-list
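[Editorial aside: both halves of the exchange above can be checked directly. Note this is a CPython implementation detail; the language does not guarantee identity of constants:]

```python
def f():
    return (1, 2, 3)

# The same LOAD_CONST instruction yields the same tuple object on each call...
assert f() is f()

# ...because the tuple literal was folded into the function's constants table.
assert (1, 2, 3) in f.__code__.co_consts
```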
Re: Tail recursion to while iteration in 2 easy steps
Alain Ketterlin al...@dpt-info.u-strasbg.fr wrote: Terry Reedy tjre...@udel.edu writes: Part of the reason that Python does not do tail call optimization is that turning tail recursion into while iteration is almost trivial, once you know the secret of the two easy steps. Here it is. Assume that you have already done the work of turning a body recursive ('not tail recursive') form like

    def fact(n):
        return 1 if n <= 1 else n * fact(n-1)

into a tail recursion like [...]

How do you know that either `<=` or `*` didn't rebind the name fact to something else? I think that's the main reason why Python cannot apply any procedural optimization (even things like inlining are impossible, or possible only under very conservative assumptions that make it worthless).

That isn't actually sufficient reason. It isn't hard to imagine adding a TAIL_CALL opcode to the interpreter that checks whether the function to be called is the same as the current function and, if it is, just updates the arguments and jumps to the start of the code block. If the function doesn't match it would simply fall through to doing the same as the current CALL_FUNCTION opcode. There is an issue that you would lose stack frames in any traceback. Also it means code for this modified Python wouldn't run on other non-modified interpreters, but it is at least theoretically possible without breaking Python's assumptions. -- Duncan Booth http://kupuguy.blogspot.com -- https://mail.python.org/mailman/listinfo/python-list
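[Editorial aside: for concreteness, the transformation the thread's subject line refers to can be sketched by hand. `fact_tail` and `fact_iter` are illustrative names; the accumulator parameter is what makes the recursive call a tail call:]

```python
def fact_tail(n, acc=1):
    # tail-recursive form: the recursive call is the very last thing done
    if n <= 1:
        return acc
    return fact_tail(n - 1, acc * n)

def fact_iter(n, acc=1):
    # step 1: replace the recursive call with rebinding of the parameters;
    # step 2: wrap the body in a loop (the implied "goto" back to the top)
    while n > 1:
        n, acc = n - 1, acc * n
    return acc

assert fact_tail(5) == fact_iter(5) == 120
```

The iterative form runs in constant stack space, which is the whole point of the exercise.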
Multiple scripts versus single multi-threaded script
What is the difference between running multiple python scripts and a single multi-threaded script? May I know what are the pros and cons of each approach? Right now, my preference is to run multiple separate python scripts because it is simpler. -- https://mail.python.org/mailman/listinfo/python-list
Re: Tail recursion to while iteration in 2 easy steps
On 2013-10-03, Duncan Booth duncan.booth@invalid.invalid wrote: How do you know that either `<=` or `*` didn't rebind the name fact to something else? I think that's the main reason why Python cannot apply any procedural optimization (even things like inlining are impossible, or possible only under very conservative assumptions that make it worthless). That isn't actually sufficient reason. It isn't hard to imagine adding a TAIL_CALL opcode to the interpreter that checks whether the function to be called is the same as the current function and, if it is, just updates the arguments and jumps to the start of the code block.

Tail call optimization doesn't involve verification that the function is calling itself; you just have to verify that the call is in tail position. The current frame would be removed from the stack and replaced with the one that results from calling the function.

There is an issue that you would lose stack frames in any traceback.

I don't think that's a major issue. Frames that got replaced would be quite uninteresting.

Also it means code for this modified Python wouldn't run on other non-modified interpreters, but it is at least theoretically possible without breaking Python's assumptions.

In any case it's so easy to implement yourself I'm not sure there's any point. -- Neil Cerutti -- https://mail.python.org/mailman/listinfo/python-list
Re: [Python-Dev] summing integer and class
This list is for development OF Python, not for development in Python. For that reason, I will redirect this to python-list as well. My actual answer is below.

On Thu, Oct 3, 2013 at 6:45 AM, Igor Vasilyev igor.vasil...@oracle.com wrote: Hi. Example test.py:

    class A():
        def __add__(self, var):
            print("I'm in A class")
            return 5

    a = A()
    a + 1
    1 + a

Execution:

    $ python test.py
    I'm in A class
    Traceback (most recent call last):
      File "../../test.py", line 7, in <module>
        1+a
    TypeError: unsupported operand type(s) for +: 'int' and 'instance'

So adding an integer to a class works fine, but adding a class to an integer fails. I could not understand why it happens. In Objects/abstract.c we have the following function:

Based on the code you provided, you are only overloading the __add__ operator, which is only called when an A is added to something else, not when something is added to an A. You can also override the __radd__ method to perform the swapped addition. See http://docs.python.org/2/reference/datamodel.html#object.__radd__ for the documentation (it is just below the entry on __add__).

Note that for many simple cases, you could define just a single function, which is then used as both the __add__ and __radd__ operator. For example, you could modify your A sample class to look like:

    class A():
        def __add__(self, var):
            print("I'm in A")
            return 5
        __radd__ = __add__

Which will produce:

    >>> a = A()
    >>> a + 1
    I'm in A
    5
    >>> 1 + a
    I'm in A
    5

Chris -- https://mail.python.org/mailman/listinfo/python-list
feature requests
Hi, hope this is the right group for this: I miss two basic (IMO) features in parallel processing:

1. make `threading.Thread.start()` return `self`

I'd like to be able to `workers = [Thread(params).start() for params in whatever]`. Right now, it's 5 ugly, menial lines:

    workers = []
    for params in whatever:
        thread = threading.Thread(params)
        thread.start()
        workers.append(thread)

2. make multiprocessing pools (incl. ThreadPool) limit the size of their internal queues

As it is now, the queue will greedily consume its entire input, and if the input is large and the pool workers are slow in consuming it, this blows up RAM. I'd like to be able to `pool = Pool(4, max_qsize=1000)`. Same with the output queue (finished tasks). Or does anyone know of a way to achieve this? -- https://mail.python.org/mailman/listinfo/python-list
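[Editorial aside: until such a feature exists, a tiny wrapper gives the one-liner today. A sketch; `start_thread`, `run`, and `whatever` are made-up names, and `target=`/`args=` are used because `Thread(params)` alone would not pass arguments to the worker:]

```python
import threading

def start_thread(*args, **kwargs):
    # construct, start, and hand back the thread in one step
    t = threading.Thread(*args, **kwargs)
    t.start()
    return t

def run(x):
    return x  # placeholder worker

whatever = [(1,), (2,), (3,)]
workers = [start_thread(target=run, args=params) for params in whatever]
for t in workers:
    t.join()
```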
Re: feature requests
On Fri, Oct 4, 2013 at 2:12 AM, macker tester.teste...@gmail.com wrote: I'd like to be able to `workers = [Thread(params).start() for params in whatever]`. Right now, it's 5 ugly, menial lines:

    workers = []
    for params in whatever:
        thread = threading.Thread(params)
        thread.start()
        workers.append(thread)

You could shorten this by iterating twice, if that helps:

    workers = [Thread(params).start() for params in whatever]
    for thrd in workers:
        thrd.start()

ChrisA -- https://mail.python.org/mailman/listinfo/python-list
Re: Multiple scripts versus single multi-threaded script
In article f01b2e7a-9fc7-4138-bb6e-447d31179...@googlegroups.com, JL lightai...@gmail.com wrote: What is the difference between running multiple python scripts and a single multi-threaded script? May I know what are the pros and cons of each approach? Right now, my preference is to run multiple separate python scripts because it is simpler.

First, let's take a step back and think about multi-threading vs. multi-processing in general (i.e. in any language).

Threads are lighter-weight. That means it's faster to start a new thread (compared to starting a new process), and a thread consumes fewer system resources than a process. If you have lots of short-lived tasks to run, this can be significant. If each task will run for a long time and do a lot of computation, the cost of startup becomes less of an issue because it's amortized over the longer run time.

Threads can communicate with each other in ways that processes can't. For example, file descriptors are shared by all the threads in a process, so one thread can open a file (or accept a network connection), then hand the descriptor off to another thread for processing. Threads also make it easy to share large amounts of data because they all have access to the same memory. You can do this between processes with shared memory segments, but it's more work to set up.

The downside to threads is that all of this sharing makes them much more complicated to use properly. You have to be aware of how all the threads are interacting, and mediate access to shared resources. If you do that wrong, you get memory corruption, deadlocks, and all sorts of (extremely) difficult to debug problems. A lot of the really hairy problems (i.e. things like one thread continuing to use memory which another thread has freed) are solved by using a high-level language like Python which handles all the memory allocation for you, but you can still get deadlocks and data corruption.

So, the full answer to your question is very complicated. 
However, if you're looking for a short answer, I'd say just keep doing what you're doing using multiple processes and don't get into threading. -- https://mail.python.org/mailman/listinfo/python-list
Re: feature requests
On 2013-10-04 02:21, Chris Angelico wrote:

    workers = []
    for params in whatever:
        thread = threading.Thread(params)
        thread.start()
        workers.append(thread)

You could shorten this by iterating twice, if that helps:

    workers = [Thread(params).start() for params in whatever]
    for thrd in workers:
        thrd.start()

Do you mean

    workers = [Thread(params) for params in whatever]
    for thrd in workers:
        thrd.start()

? (Thread(params) vs. Thread(params).start() in your list comp) -tkc -- https://mail.python.org/mailman/listinfo/python-list
Re: feature requests
On Fri, Oct 4, 2013 at 2:42 AM, Tim Chase python.l...@tim.thechases.com wrote: Do you mean workers = [Thread(params) for params in whatever] for thrd in workers: thrd.start() ? (Thread(params) vs. Thread(params).start() in your list comp) Whoops, copy/paste fail. Yes, that's what I meant. Thanks for catching! ChrisA -- https://mail.python.org/mailman/listinfo/python-list
Re: Multiple scripts versus single multi-threaded script
On Fri, Oct 4, 2013 at 2:41 AM, Roy Smith r...@panix.com wrote: The downside to threads is that all of this sharing makes them much more complicated to use properly. You have to be aware of how all the threads are interacting, and mediate access to shared resources. If you do that wrong, you get memory corruption, deadlocks, and all sorts of (extremely) difficult to debug problems. A lot of the really hairy problems (i.e. things like one thread continuing to use memory which another thread has freed) are solved by using a high-level language like Python which handles all the memory allocation for you, but you can still get deadlocks and data corruption.

With CPython, you don't have the low-level headaches like that; you have one very simple protection, a Global Interpreter Lock (GIL), which guarantees that no two threads will execute Python bytecode simultaneously. The interpreter's own structures can't be corrupted, so the hairiest memory problems disappear (though deadlocks from your own locks are, of course, still possible). ChrisA -- https://mail.python.org/mailman/listinfo/python-list
Re: Multiple scripts versus single multi-threaded script
On Fri, Oct 4, 2013 at 2:01 AM, JL lightai...@gmail.com wrote: What is the difference between running multiple python scripts and a single multi-threaded script? May I know what are the pros and cons of each approach? Right now, my preference is to run multiple separate python scripts because it is simpler. (Caveat: The below is based on CPython. If you're using IronPython, Jython, or some other implementation, some details may be a little different.) Multiple threads can share state easily by simply referencing each other's variables, but the cost of that is that they'll never actually execute simultaneously. If you want your scripts to run in parallel on multiple CPUs/cores, you need multiple processes. But if you're doing something I/O bound (like servicing sockets), threads work just fine. As to using separate scripts versus the multiprocessing module, that's purely a matter of what looks cleanest. Do whatever suits your code. ChrisA -- https://mail.python.org/mailman/listinfo/python-list
Re: Rounding off Values of dicts (in a list) to 2 decimal points
On Wednesday, October 2, 2013 10:01:16 AM UTC-7, tri...@gmail.com wrote: I am trying to round off values in a dict to 2 decimal points but have been unsuccessful so far. The input I have is like this:

    y = [{'a': 80.0, 'b': 0.0786235, 'c': 10.0, 'd': 10.6742903},
         {'a': 80.73246, 'b': 0.0, 'c': 10.780323, 'd': 10.0},
         {'a': 80.7239, 'b': 0.7823640, 'c': 10.0, 'd': 10.0},
         {'a': 80.7802313217234, 'b': 0.0, 'c': 10.0, 'd': 10.9762304}]

I want to round off all the values to two decimal points using the ceil function. Here's what I have:

    def roundingVals_toTwoDeci():
        global y
        for d in y:
            for k, v in d.items():
                v = ceil(v*100)/100.0
        return

    roundingVals_toTwoDeci()

But it is not working - I am still getting the old values.

I am not sure what's going on but here's the current scenario: I get the values with 2 decimal places as I originally required. When I do json.dumps(), it works fine. The goal is to send them to a URL and so I do a urlencode. When I decode the urlencoded string, it gives me the same good old 2 decimal places. But, for some reason, at the URL, when I check, it no longer limits the values to 2 decimal places, but shows values like 9.10003677694312. What's going on? Here's the code that I have:

    class LessPrecise(float):
        def __repr__(self):
            return str(self)

    def roundingVals_toTwoDeci(y):
        for d in y:
            for k, v in d.iteritems():
                d[k] = LessPrecise(round(v, 2))
            return

    roundingVals_toTwoDeci(y)
    j = json.dumps(y)
    print j

At this point, print j gives me

    [{"a": 80.0, "b": 0.0, "c": 10.0, "d": 10.0}, {"a": 100.0, "b": 0.0, "c": 0.0, "d": 0.0}, {"a": 80.0, "b": 0.0, "c": 10.0, "d": 10.0}, {"a": 90.0, "b": 0.0, "c": 0.0, "d": 10.0}]

then I do

    params = urllib.urlencode({'thekey': j})

I then decode params and print it, and it gives me

    thekey=[{"a": 80.0, "b": 0.0, "c": 10.0, "d": 10.0}, {"a": 100.0, "b": 0.0, "c": 0.0, "d": 0.0}, {"a": 80.0, "b": 0.0, "c": 10.0, "d": 10.0}, {"a": 90.0, "b": 0.0, "c": 0.0, "d": 10.0}]

However, at the URL, the values show up as 90.43278694123 -- https://mail.python.org/mailman/listinfo/python-list
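[Editorial aside: the first version quoted above rebinds only the loop variable `v`, not the dictionary entry, which is why the old values survive. A minimal sketch of the difference; the sample data is illustrative:]

```python
from math import ceil

y = [{'a': 1.2345, 'b': 2.3456}]

# Rebinding the loop variable changes only the local name, not the dict:
for d in y:
    for k, v in d.items():
        v = ceil(v * 100) / 100.0   # d[k] is untouched
assert y[0]['a'] == 1.2345

# Assigning back through the key actually updates the dict:
for d in y:
    for k, v in d.items():
        d[k] = ceil(v * 100) / 100.0
assert y[0] == {'a': 1.24, 'b': 2.35}
```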
Re: Tail recursion to while iteration in 2 easy steps
On Wed, Oct 2, 2013 at 10:46 AM, rusi wrote: 4. There is a whole spectrum of such optimizations -- 4a. e.g. a single-call structural recursion example does not need to push a return address on the stack. It only needs to store the recursion depth: if zero, jump to the outside return address; if greater than zero, jump to the internal return address. 4b. An example like quicksort, in which one call is a tail call, can be optimized with your optimization and the other, inner one with 4a above.

I am interested in studying this 'whole spectrum of optimizations' further. Any pointers? Thanks -- Ravi -- https://mail.python.org/mailman/listinfo/python-list
Re: Rounding off Values of dicts (in a list) to 2 decimal points
trip...@gmail.com wrote: On Wednesday, October 2, 2013 10:01:16 AM UTC-7, tri...@gmail.com wrote: I am trying to round off values in a dict to 2 decimal points but have been unsuccessful so far. The input I have is like this:

    y = [{'a': 80.0, 'b': 0.0786235, 'c': 10.0, 'd': 10.6742903},
         {'a': 80.73246, 'b': 0.0, 'c': 10.780323, 'd': 10.0},
         {'a': 80.7239, 'b': 0.7823640, 'c': 10.0, 'd': 10.0},
         {'a': 80.7802313217234, 'b': 0.0, 'c': 10.0, 'd': 10.9762304}]

I want to round off all the values to two decimal points using the ceil function. Here's what I have:

    def roundingVals_toTwoDeci():
        global y
        for d in y:
            for k, v in d.items():
                v = ceil(v*100)/100.0
        return

    roundingVals_toTwoDeci()

But it is not working - I am still getting the old values. I am not sure what's going on but here's the current scenario: I get the values with 2 decimal places as I originally required. When I do json.dumps(), it works fine. The goal is to send them to a URL and so I do a urlencode. When I decode the urlencoded string, it gives me the same good old 2 decimal places. But, for some reason, at the URL, when I check, it no longer limits the values to 2 decimal places, but shows values like 9.10003677694312. What's going on? Here's the code that I have:

    class LessPrecise(float):
        def __repr__(self):
            return str(self)

    def roundingVals_toTwoDeci(y):
        for d in y:
            for k, v in d.iteritems():
                d[k] = LessPrecise(round(v, 2))
            return

That should only process the first dict in the list, due to a misplaced return. 
    roundingVals_toTwoDeci(y)
    j = json.dumps(y)
    print j

At this point, print j gives me

    [{"a": 80.0, "b": 0.0, "c": 10.0, "d": 10.0}, {"a": 100.0, "b": 0.0, "c": 0.0, "d": 0.0}, {"a": 80.0, "b": 0.0, "c": 10.0, "d": 10.0}, {"a": 90.0, "b": 0.0, "c": 0.0, "d": 10.0}]

then I do

    params = urllib.urlencode({'thekey': j})

I then decode params and print it, and it gives me

    thekey=[{"a": 80.0, "b": 0.0, "c": 10.0, "d": 10.0}, {"a": 100.0, "b": 0.0, "c": 0.0, "d": 0.0}, {"a": 80.0, "b": 0.0, "c": 10.0, "d": 10.0}, {"a": 90.0, "b": 0.0, "c": 0.0, "d": 10.0}]

However, at the URL, the values show up as 90.43278694123

Can you give the actual code, including the decoding part? Preferably you'd put both encoding and decoding into one small self-contained demo script. -- https://mail.python.org/mailman/listinfo/python-list
ipy %run noob confusion
I have some rather complex code that works perfectly well if I paste it in by hand to ipython, but if I use %run it can't find some of the libraries, but others it can. The confusion seems to have to do with matplotlib. I get it in stream by:

    %pylab osx

and do a bunch of stuff interactively that works just fine, for example:

    clf()

But I want it to run on a %run, but %pylab is (apparently) not allowed in a %run script, and importing matplotlib explicitly doesn't work... I mean, it imports, but then clf() is only defined in the module, not interactively. More confusing, if I do all the setup interactively and then try to just run my script, again, clf() [etc.] don't work (don't appear to exist), even though I can do them interactively. There seems to be some sort of scoping problem... or, put more correctly, my problem is that I don't seem to understand the scoping. Like, are %run scripts eval'ed in some closed context that doesn't work the same way as ipython interactive? Is there any way to really do what I mean, which is: please just read in commands from that script (short of getting out and passing my script through stdin to ipython)? Thanks! -- https://mail.python.org/mailman/listinfo/python-list
Re: feature requests
On 10/03/2013 09:12 AM, macker wrote: Hi, hope this is the right group for this: I miss two basic (IMO) features in parallel processing:

1. make `threading.Thread.start()` return `self`

I'd like to be able to `workers = [Thread(params).start() for params in whatever]`. Right now, it's 5 ugly, menial lines:

    workers = []
    for params in whatever:
        thread = threading.Thread(params)
        thread.start()
        workers.append(thread)

Ugly, menial lines are a clue that a function to hide it could be useful.

2. make multiprocessing pools (incl. ThreadPool) limit the size of their internal queues

As it is now, the queue will greedily consume its entire input, and if the input is large and the pool workers are slow in consuming it, this blows up RAM. I'd like to be able to `pool = Pool(4, max_qsize=1000)`. Same with the output queue (finished tasks).

Have you verified that this is a problem in Python?

Or does anyone know of a way to achieve this?

You could try subclassing. -- ~Ethan~ -- https://mail.python.org/mailman/listinfo/python-list
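[Editorial aside: for the second request (bounding the pool's internal queue), one possible workaround, shown as a sketch rather than a stdlib feature, is to throttle submission with a semaphore that a completion callback releases. `bounded_map` and `max_pending` are made-up names, and the `error_callback` parameter of apply_async requires Python 3:]

```python
import threading
from multiprocessing.pool import ThreadPool

def bounded_map(pool, func, items, max_pending=1000):
    """Submit items to pool, keeping at most max_pending tasks in flight."""
    sem = threading.BoundedSemaphore(max_pending)

    def release(_result):
        sem.release()  # runs when a task finishes (or fails)

    pending = []
    for item in items:
        sem.acquire()  # blocks instead of letting the queue grow unbounded
        pending.append(pool.apply_async(func, (item,),
                                        callback=release,
                                        error_callback=release))
    return [p.get() for p in pending]

pool = ThreadPool(4)
print(bounded_map(pool, lambda x: x * 2, range(10), max_pending=3))
pool.close()
pool.join()
```

The producer loop never gets more than `max_pending` items ahead of the workers, so memory stays bounded even for huge inputs.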
compare two list of dictionaries
Dear All, I have two list of dictionaries like below: In the below dictionaries the value of ip can be either hostname or ip address. output1=[ {'count': 3 , 'ip': 'xxx.xx.xxx.1'}, {'count': 4, 'ip': 'xxx.xx.xxx.2'}, {'count': 8, 'ip': 'xxx.xx.xxx.3'}, {'count': 10, 'ip': 'xxx.xx.xxx.4'}, {'count': 212, 'ip': 'hostname1'}, {'count': 27, 'ip': 'hostname2'}, {'count': 513, 'ip': 'hostname3'}, {'count': 98, 'ip': 'hostname4'}, {'count': 1, 'ip': 'hostname10'}, {'count': 2, 'ip': 'hostname8'}, {'count': 3, 'ip': 'xxx.xx.xxx.11'}, {'count': 90, 'ip': 'xxx.xx.xxx.12'}, {'count': 12, 'ip': 'xxx.xx.xxx.13'}, {'count': 21, 'ip': 'xxx.xx.xxx.14'}, {'count': 54, 'ip': 'xxx.xx.xxx.15'}, {'count': 34, 'ip': 'xxx.xx.xxx.16'}, {'count': 11, 'ip': 'xxx.xx.xxx.17'}, {'count': 2, 'ip': 'xxx.xx.xxx.18'}, {'count': 19, 'ip': 'xxx.xx.xxx.19'}, {'count': 21, 'ip': 'xxx.xx.xxx.20'}, {'count': 25, 'ip': 'xxx.xx.xxx.21'}, {'count': 31, 'ip': 'xxx.xx.xxx.22'}, {'count': 43, 'ip': 'xxx.xx.xxx.23'}, {'count': 46, 'ip': 'xxx.xx.xxx.24'}, {'count': 80, 'ip': 'xxx.xx.xxx.25'}, {'count': 91, 'ip': 'xxx.xx.xxx.26'}, {'count': 90, 'ip': 'xxx.xx.xxx.27'}, {'count': 10, 'ip': 'xxx.xx.xxx.28'}, {'count': 3, 'ip': 'xxx.xx.xxx.29'}] In the below dictionaries have either hostname or ip or both. output2=( {'hostname': 'INNCHN01', 'ip_addr': 'xxx.xx.xxx.11'}, {'hostname': 'HYDRHC02', 'ip_addr': 'xxx.xx.xxx.12'}, {'hostname': 'INNCHN03', 'ip_addr': 'xxx.xx.xxx.13'}, {'hostname': 'MUMRHC01', 'ip_addr': 'xxx.xx.xxx.14'}, {'hostname': 'n/a', 'ip_addr': 'xxx.xx.xxx.15'}, {'hostname': 'INNCHN05', 'ip_addr': 'xxx.xx.xxx.16'}, {'hostname': 'hostname1', 'ip_addr': 'n/a'}, {'hostname': 'hostname2', 'ip_addr': 'n/a'}, {'hostname': 'hostname10', 'ip_addr': ''}, {'hostname': 'hostname8', 'ip_addr': ''}, {'hostname': 'hostname200', 'ip_addr': 'xxx.xx.xxx.200'}, {'hostname': 'hostname300', 'ip_addr': 'xxx.xx.xxx.400'}, ) trying to get the following difference from the above dictionary 1). 
compare the value of 'ip' in output1 dictionary with either 'hostname' or 'ip_addr' in output2 dictionary and print their intersection. Tried below code:

for doc in output1:
    for row in output2:
        if (row['hostname'] == doc['ip']) or (row['ip_addr'] == doc['ip']):
            print doc['ip'], doc['count']

*output:* hostname1 212 hostname2 27 hostname10 1 hostname8 2 xxx.xx.xxx.11 3 xxx.xx.xxx.12 90 xxx.xx.xxx.13 12 xxx.xx.xxx.14 21 xxx.xx.xxx.15 54 xxx.xx.xxx.16 34 2). need to print the below output if the value of 'ip' in output1 dictionary is not in output2 dictionary (ip/hostname which is in output1 and not in output2): xxx.xx.xxx.1 3 xxx.xx.xxx.2 4 xxx.xx.xxx.3 8 xxx.xx.xxx.4 10 hostname3 513 hostname4 98 xxx.xx.xxx.17 11 xxx.xx.xxx.18 2 xxx.xx.xxx.19 19 xxx.xx.xxx.20 21 xxx.xx.xxx.21 25 xxx.xx.xxx.22 31 xxx.xx.xxx.23 43 xxx.xx.xxx.24 46 xxx.xx.xxx.25 80 xxx.xx.xxx.26 91 xxx.xx.xxx.27 90 xxx.xx.xxx.28 10 xxx.xx.xxx.29 3 3). IP addresses which are there only in output2 dictionary: xxx.xx.xxx.200 xxx.xx.xxx.400 Any help would be really appreciated. Thank you Thanks Mohan L -- https://mail.python.org/mailman/listinfo/python-list
Why didn't my threads exit correctly ?
Hi list, I've written an example script using threading, as follows. It looks like it hangs when the list l_ip is empty. Any suggestions for debugging threading in Python?

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re
import os
import threading
from Queue import Queue
from time import sleep

l_ip = []
l_result = []
result = re.compile(r'[1-3] received')

class ping(threading.Thread):

    def __init__(self, l_ip, l_result):
        threading.Thread.__init__(self)
        self.l_ip = l_ip
        #self.l_result = l_result

    def run(self):
        while True:
            try:
                ip = self.l_ip.pop()
            except IndexError as e:
                print e
                break
            ping_out = os.popen(''.join(['ping -q -c3 ', ip]), 'r')
            print 'Ping ip:%s' % ip
            while True:
                line = ping_out.readline()
                if not line: break
                if result.findall(line):
                    l_result.append(ip)
                    break

queue = Queue()

for i in range(1, 110):
    l_ip.append(''.join(['192.168.1.', str(i)]))
for i in xrange(10):
    t = ping(l_ip, l_result)
    t.start()
    queue.put(t)
queue.join()
print 'Result will go here.'
for i in l_result:
    print 'IP %s is OK' % i

-- All the best! http://luolee.me -- https://mail.python.org/mailman/listinfo/python-list
Re: Rounding off Values of dicts (in a list) to 2 decimal points
On 2013-10-03, trip...@gmail.com trip...@gmail.com wrote: thekey=[{"a": 80.0, "b": 0.0, "c": 10.0, "d": 10.0}, {"a": 100.0, "b": 0.0, "c": 0.0, "d": 0.0}, {"a": 80.0, "b": 0.0, "c": 10.0, "d": 10.0}, {"a": 90.0, "b": 0.0, "c": 0.0, "d": 10.0}] However, at the URL, the values show up as 90.43278694123 You'll need to convert them to strings yourself before submitting them, by using % formatting or str.format. -- Neil Cerutti -- https://mail.python.org/mailman/listinfo/python-list
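Either of the formatting options Neil mentions produces a fixed two-decimal string (Python 3 sketch; `v` stands in for one of the dict values):

```python
v = 90.43278694123

s1 = '%.2f' % v            # old-style % formatting
s2 = '{:.2f}'.format(v)    # str.format
```

The point is that the rounding has to happen at string-conversion time, just before the value is submitted, rather than relying on round() earlier in the pipeline.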
Re: Get the selected tab in a enthought traits application
Here's the answer:

from enthought.traits.api import HasTraits, Str, List, Button, Any
from enthought.traits.ui.api import View, Item
from enthought.traits.ui.api import ListEditor

class A(HasTraits):
    StringA = Str
    view = View(Item('StringA'))

class B(HasTraits):
    StringB = Str
    view = View(Item('StringB'))

class C(HasTraits):
    MyList = List(HasTraits)
    MyButton = Button(label='Test')
    SelectedTab = Any

    def _MyButton_fired(self):
        if self.SelectedTab == self.MyList[0]:
            print self.MyList[0].StringA
        if self.SelectedTab == self.MyList[1]:
            print self.MyList[1].StringB

    view = View(Item('MyList', style='custom', show_label=False,
                     editor=ListEditor(use_notebook=True, deletable=False,
                                       dock_style='tab',
                                       selected='SelectedTab')),
                Item('MyButton', show_label=False))

a = A()
b = B()
c = C()
c.MyList = [a, b]
c.configure_traits()

-- https://mail.python.org/mailman/listinfo/python-list
Re: Rounding off Values of dicts (in a list) to 2 decimal points
On Thursday, October 3, 2013 11:03:17 AM UTC-7, Neil Cerutti wrote: [quoted data snipped] You'll need to convert them to strings yourself before submitting them, by using % formatting or str.format. -- Neil Cerutti I thought the class 'LessPrecise' converts them to strings. But even when I try doing it directly without the class at all, as in str(round(v, 2)), it gives all the expected values (as in {"a": 10.1, "b": 3.4, etc.}) but at the URL, it gives all the decimal places - 10.78324783923783 -- https://mail.python.org/mailman/listinfo/python-list
Re: Why didn't my threads exit correctly ?
On 03/10/2013 18:37, 李洛 wrote: Hi list, I write an example script using threading as follow. It look like hang when the list l_ip is empty. And any suggestion with debug over the threading in Python ? [script snipped] queue.join() will block until the queue is empty, which it never is! You're putting the workers in the queue, whereas the normal way of doing it is to put them into a list, the inputs into a queue, and the outputs into another queue. The workers then 'get' from the input queue, do some processing, and 'put' to the output queue. -- https://mail.python.org/mailman/listinfo/python-list
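The pattern MRAB describes — workers in a list, inputs in one queue, outputs in another — might look like this (a Python 3 sketch; `check()` is a stand-in for the real ping, invented here so the example is self-contained):

```python
import threading
from queue import Queue

def check(ip):
    # Stand-in for the real ping: pretend even last octets are reachable.
    return int(ip.rsplit('.', 1)[1]) % 2 == 0

def worker(in_q, out_q):
    while True:
        ip = in_q.get()
        if ip is None:          # sentinel: no more work for this worker
            in_q.task_done()
            break
        if check(ip):
            out_q.put(ip)
        in_q.task_done()

in_q = Queue()
out_q = Queue()
workers = [threading.Thread(target=worker, args=(in_q, out_q))
           for _ in range(4)]
for w in workers:
    w.start()
for i in range(1, 11):
    in_q.put('192.168.1.%d' % i)
for _ in workers:
    in_q.put(None)              # one sentinel per worker
in_q.join()                     # unblocks once every item got task_done()
for w in workers:
    w.join()
results = sorted(out_q.queue)
```

Because every `get` is paired with a `task_done()`, `in_q.join()` returns once all the input (including the sentinels) has been processed — unlike the original script, where `join()` was called on a queue that nothing ever drained.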
Re: Multiple scripts versus single multi-threaded script
In article mailman.684.1380819470.18130.python-l...@python.org, Chris Angelico ros...@gmail.com wrote: On Fri, Oct 4, 2013 at 2:41 AM, Roy Smith r...@panix.com wrote: The downside to threads is that all of of this sharing makes them much more complicated to use properly. You have to be aware of how all the threads are interacting, and mediate access to shared resources. If you do that wrong, you get memory corruption, deadlocks, and all sorts of (extremely) difficult to debug problems. A lot of the really hairy problems (i.e. things like one thread continuing to use memory which another thread has freed) are solved by using a high-level language like Python which handles all the memory allocation for you, but you can still get deadlocks and data corruption. With CPython, you don't have any headaches like that; you have one very simple protection, a Global Interpreter Lock (GIL), which guarantees that no two threads will execute Python code simultaneously. No corruption, no deadlocks, no hairy problems. ChrisA Well, the GIL certainly eliminates a whole range of problems, but it's still possible to write code that deadlocks. All that's really needed is for two threads to try to acquire the same two resources, in different orders. I'm running the following code right now. It appears to be doing a pretty good imitation of a deadlock. Any similarity to current political events is purely intentional.

import threading
import time

lock1 = threading.Lock()
lock2 = threading.Lock()

class House(threading.Thread):
    def run(self):
        print "House starting..."
        lock1.acquire()
        time.sleep(1)
        lock2.acquire()
        print "House running"
        lock2.release()
        lock1.release()

class Senate(threading.Thread):
    def run(self):
        print "Senate starting..."
        lock2.acquire()
        time.sleep(1)
        lock1.acquire()
        print "Senate running"
        lock1.release()
        lock2.release()

h = House()
s = Senate()
h.start()
s.start()

Similarly, I can have data corruption.
I can't get memory corruption in the way you can get in a C/C++ program, but I can certainly have one thread produce data for another thread to consume, and then (incorrectly) continue to mutate that data after it relinquishes ownership. Let's say I have a Queue. A producer thread pushes work units onto the Queue and a consumer thread pulls them off the other end. If my producer thread does something like:

work = {'id': 1, 'data': "The Larch"}
my_queue.put(work)
work['id'] = 3

I've got a race condition where the consumer thread may get an id of either 1 or 3, depending on exactly when it reads the data from its end of the queue (more precisely, exactly when it uses that data). Here's a somewhat different example of data corruption between threads:

import threading
import random
import sys

sketch = "The Dead Parrot"

class T1(threading.Thread):
    def run(self):
        current_sketch = str(sketch)
        while 1:
            if sketch != current_sketch:
                print "Blimey, it's changed!"
                return

class T2(threading.Thread):
    def run(self):
        sketches = ["Piranah Brothers", "Spanish Enquisition", "Lumberjack"]
        while 1:
            global sketch
            sketch = random.choice(sketches)

t1 = T1()
t2 = T2()
t2.daemon = True
t1.start()
t2.start()
t1.join()
sys.exit()

-- https://mail.python.org/mailman/listinfo/python-list
Re: Multiple scripts versus single multi-threaded script
On Fri, Oct 4, 2013 at 4:28 AM, Roy Smith r...@panix.com wrote: Well, the GIL certainly eliminates a whole range of problems, but it's still possible to write code that deadlocks. All that's really needed is for two threads to try to acquire the same two resources, in different orders. I'm running the following code right now. It appears to be doing a pretty good imitation of a deadlock. Any similarity to current political events is purely intentional. Right. Sorry, I meant that the GIL protects you from all that happening in the lower level code (even lower than the Senate, here), but yes, you can get deadlocks as soon as you acquire locks. That's nothing to do with threading, you can have the same issues with databases, file systems, or anything else that lets you lock something. It's a LOT easier to deal with deadlocks or data corruption that occurs in pure Python code than in C, since Python has awesome introspection facilities... and you're guaranteed that corrupt data is still valid Python objects. As to your corrupt data example, though, I'd advocate a very simple system of object ownership: as soon as the object has been put on the queue, it's owned by the recipient and shouldn't be mutated by anyone else. That kind of system generally isn't hard to maintain. ChrisA -- https://mail.python.org/mailman/listinfo/python-list
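If the producer can't promise to stop mutating after the hand-off, a defensive copy at the hand-off point gives the consumer the same ownership guarantee Chris describes (a Python 3 sketch, not the only way to enforce the discipline):

```python
import copy
from queue import Queue

my_queue = Queue()

work = {'id': 1, 'data': 'The Larch'}
my_queue.put(copy.deepcopy(work))  # consumer owns an independent snapshot
work['id'] = 3                     # later mutation can no longer race

received = my_queue.get()          # still sees id == 1
```

The deepcopy costs time and memory, which is why the convention "once it's on the queue, it's theirs" is usually preferred when you control both sides.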
Re: Multiple scripts versus single multi-threaded script
On 3/10/2013 12:50, Chris Angelico wrote: On Fri, Oct 4, 2013 at 2:41 AM, Roy Smith r...@panix.com wrote: The downside to threads is that all of of this sharing makes them much more complicated to use properly. You have to be aware of how all the threads are interacting, and mediate access to shared resources. If you do that wrong, you get memory corruption, deadlocks, and all sorts of (extremely) difficult to debug problems. A lot of the really hairy problems (i.e. things like one thread continuing to use memory which another thread has freed) are solved by using a high-level language like Python which handles all the memory allocation for you, but you can still get deadlocks and data corruption. With CPython, you don't have any headaches like that; you have one very simple protection, a Global Interpreter Lock (GIL), which guarantees that no two threads will execute Python code simultaneously. No corruption, no deadlocks, no hairy problems. ChrisA The GIL takes care of the gut-level interpreter issues like reference counts for shared objects. But it does not avoid deadlock or hairy problems. I'll just show one, trivial, problem, but many others exist. If two threads process the same global variable as follows, myglobal = myglobal + 1 Then you have no guarantee that the value will really get incremented twice. Presumably there's a mutex/critsection function in the threading module that can make this safe, but once you use it in two different places, you raise the possibility of deadlock. On the other hand, if you're careful to have the thread use only data that is unique to that thread, then it would seem to be safe. However, you still have the same risk if you call some library that wasn't written to be thread safe. I'll assume that print() and suchlike are safe, but some third party library could well use the equivalent of a global variable in an unsafe way. -- DaveA -- https://mail.python.org/mailman/listinfo/python-list
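The usual fix for Dave's `myglobal = myglobal + 1` example is a `threading.Lock` around the read-modify-write (a Python 3 sketch):

```python
import threading

myglobal = 0
lock = threading.Lock()

def bump(times):
    global myglobal
    for _ in range(times):
        with lock:                     # read-modify-write as one atomic step
            myglobal = myglobal + 1

threads = [threading.Thread(target=bump, args=(10000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the lock, the final value is guaranteed to be 20000; without it, two threads can each read the old value before either writes, losing increments. And as Dave notes, once you hold more than one such lock in different places, deadlock becomes possible.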
Re: compare two list of dictionaries
On 03/10/2013 17:11, Mohan L wrote: Dear All, I have two list of dictionaries like below. In the first, the value of 'ip' can be either a hostname or an IP address. [output1 data snipped] In the second, each entry has either a hostname, an IP address, or both.
[output2 data snipped] trying to get the following difference from the above dictionary 1). compare the value of 'ip' in output1 dictionary with either 'hostname' or 'ip_addr' in output2 dictionary and print their intersection. Tried below code:

for doc in output1:
    for row in output2:
        if (row['hostname'] == doc['ip']) or (row['ip_addr'] == doc['ip']):
            print doc['ip'], doc['count']

[matching output snipped] 1. Create a dict from output1 in which the key is the ip and the value is the count. 2. Create a set from output2 containing all the hostnames and ip_addrs. 3. Get the intersection of the keys of the dict with the set. 4. Print the entries of the dict for each member of the intersection. 2). need to print the below output if the value of 'ip' in output1 dictionary is not in output2 dictionary (ip/hostname which is in output1 and not in output2): [expected output snipped] 1. Get the difference between the keys of the dict and the intersection. 2.
Print the entries of the dict for each member of the difference. 3). Ip address with is there only in output2 dictionary. xxx.xx.xxx.200 xxx.xx.xxx.400 1. Create a set from output2 containing all the ip_addrs. 2. Get the difference between the set and the keys of the dict created from output1. Any help would be really appreciated. Thank you -- https://mail.python.org/mailman/listinfo/python-list
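MRAB's numbered steps could be sketched like this (a Python 3 sketch using a tiny made-up sample in place of the poster's full data):

```python
# Hypothetical miniature versions of the poster's two structures.
output1 = [{'count': 3, 'ip': '10.0.0.1'},
           {'count': 212, 'ip': 'hostname1'},
           {'count': 8, 'ip': '10.0.0.9'}]
output2 = ({'hostname': 'hostname1', 'ip_addr': 'n/a'},
           {'hostname': 'INNCHN01', 'ip_addr': '10.0.0.1'},
           {'hostname': 'hostname200', 'ip_addr': '10.0.0.200'})

# Step 1: dict keyed by ip with the count as value.
counts = {d['ip']: d['count'] for d in output1}
# Step 2: set of all hostnames and ip_addrs from output2.
known = ({r['hostname'] for r in output2} |
         {r['ip_addr'] for r in output2})

both = counts.keys() & known            # 1) in both
only_in_1 = counts.keys() - known       # 2) only in output1
only_in_2 = ({r['ip_addr'] for r in output2}
             - counts.keys() - {'n/a', ''})  # 3) addresses only in output2

for ip in both:
    print(ip, counts[ip])
```

Dict key views support set operations directly, so no nested loops over both structures are needed.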
Re: Tail recursion to while iteration in 2 easy steps
On 10/2/2013 10:34 PM, Steven D'Aprano wrote: You are both assuming that LOAD_CONST will re-use the same tuple (1, 2, 3) in multiple places. No I did not. To save tuple creation time, a pre-compiled tuple is reused when its display expression is re-executed. If I had been interested in multiple occurrences of the same display, I would have tested.

def f():
    a = 1, 'a', '', 'bbb'; x = 1, 'a', '', 'bbb'
    b = 1, 'a', '', 'bbb'
    c = 'a'
    d = ''

f.__code__.co_consts
(None, 1, 'a', '', 'bbb', (1, 'a', '', 'bbb'), (1, 'a', '', 'bbb'), (1, 'a', '', 'bbb'))

Empirically, ints and strings are checked for prior occurrence in co_consts before being added. I suspect None is too, but will not assume. How is the issue of multiple occurrences of constants relevant to my topic statement? Let me quote it, with misspellings corrected. CPython core developers have been very conservative about what transformations they put into the compiler. [misspellings corrected] Aha! Your example and that above reinforce this statement. Equal tuples are not necessarily identical and cannot necessarily be substituted for each other in all code.

(1, 2) == (1.0, 2.0)
True

But replacing (1.0, 2.0) with (1, 2), by only storing the latter, would not be valid without some possibly tricky context analysis. The same is true for equal numbers, and the optimizer pays attention.

def g():
    a = 1
    b = 1.0

g.__code__.co_consts
(None, 1, 1.0)

For numbers, the proper check is relatively easy:

for item in const_list:
    if type(x) is type(item) and x == item:
        break  # identical item already in working list
else:
    const_list.append(x)

Writing a valid recursive function to do the same for tuples, and proving its validity to enough other core developers to make it accepted, is much harder and hardly seems worthwhile. It would probably be easier to compare the parsed AST subtrees for the displays rather than the objects created from them.

py> def f():
...     a = (1, 2, 3)
...
b = (1, 2, 3) [snip] So even though both a and b are created by the same LOAD_CONST byte-code, I am not sure what you mean by 'created'. LOAD_CONST puts the address of an object in co_consts on the top of the virtual machine stack. the object is not re-used (although it could be!) It can be reused, in this case, because the constant displays are identical, as defined above. and two distinct tuples are created. Because it is not easy to make the compiler see that only one is needed. -- Terry Jan Reedy -- https://mail.python.org/mailman/listinfo/python-list
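The constant handling Terry describes can be checked directly (a CPython implementation detail — the exact contents of co_consts vary by version, so this is a sketch rather than a guarantee):

```python
def f():
    # A tuple display of constants is pre-built at compile time
    # and stored in the code object's constant pool.
    a = (1, 2, 3)
    return a

def g():
    # 1 and 1.0 compare equal, but the compiler keys constants by
    # (type, value), so they are kept as distinct entries.
    a = 1
    b = 1.0
    return a, b
```

In CPython the folded tuple shows up in `f.__code__.co_consts`, and `g.__code__.co_consts` contains both an int 1 and a float 1.0 rather than merging them.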
Re: ipy %run noob confusion
On 10/3/2013 1:42 PM, jshra...@gmail.com wrote: I have some rather complex code that works perfectly well if I paste it in by hand to ipython, but if I use %run it can't find some of the libraries, but others it can. Ipython is a separate product built on top of Python. If no answer here, look for an ipython-specific list or discussion group. -- Terry Jan Reedy -- https://mail.python.org/mailman/listinfo/python-list
Re: ipy %run noob confusion
On 03/10/2013 20:26, Terry Reedy wrote: On 10/3/2013 1:42 PM, jshra...@gmail.com wrote: I have some rather complex code that works perfectly well if I paste it in by hand to ipython, but if I use %run it can't find some of the libraries, but others it can. Ipython is a separate product built on top of Python. If no answer here, look for an ipython-specific list or discussion group. Such as news.gmane.org/gmane.comp.python.ipython.user -- Roses are red, Violets are blue, Most poems rhyme, But this one doesn't. Mark Lawrence -- https://mail.python.org/mailman/listinfo/python-list
Re: Multiple scripts versus single multi-threaded script
In article mailman.691.1380825390.18130.python-l...@python.org, Chris Angelico ros...@gmail.com wrote: As to your corrupt data example, though, I'd advocate a very simple system of object ownership: as soon as the object has been put on the queue, it's owned by the recipient and shouldn't be mutated by anyone else. Well, sure. I agree with you that threading in Python is about a zillion times easier to manage than threading in C/C++, but there are still things you need to think about when using threading in Python which you don't need to think about if you're not using threading at all. Transfer of ownership when you put something on a queue is one of those things. So, I think my original statement: if you're looking for a short answer, I'd say just keep doing what you're doing using multiple processes and don't get into threading. is still good advice for somebody who isn't sure they need threads. On the other hand, for somebody who is interested in learning about threads, Python is a great platform to learn because you get to experiment with the basic high-level concepts without getting bogged down in pthreads minutiae. And, as Chris pointed out, if you get it wrong, at least you've still got valid Python objects to puzzle over, not a smoking pile of bits on the floor. -- https://mail.python.org/mailman/listinfo/python-list
Re: wil anyone ressurect medusa and pypersist?
On Thursday, May 16, 2013 11:15:45 AM UTC-7, vispha...@gmail.com wrote: www.prevayler.org in python = pypersist medusa = python epoll web server and ftp server eventy and async wow interesting sprevayler ?? cl-prevalence -- https://mail.python.org/mailman/listinfo/python-list
Re: Multiple scripts versus single multi-threaded script
On Fri, Oct 4, 2013 at 5:53 AM, Roy Smith r...@panix.com wrote: So, I think my original statement: if you're looking for a short answer, I'd say just keep doing what you're doing using multiple processes and don't get into threading. is still good advice for somebody who isn't sure they need threads. On the other hand, for somebody who is interested in learning about threads, Python is a great platform to learn because you get to experiment with the basic high-level concepts without getting bogged down in pthreads minutiae. And, as Chris pointed out, if you get it wrong, at least you've still got valid Python objects to puzzle over, not a smoking pile of bits on the floor. Agree wholeheartedly to both halves. I was just explaining a similar concept to my brother last night, with regard to network/database request handling: 1) The simplest code starts, executes, and finishes, with no threads, fork(), or other confusions, no shared state or anything. Execution can be completely predicted by eyeballing the source code. You can pretend that you have a dedicated CPU core that does nothing but run your program. 2) Threaded code adds a measure of complexity that you have to get your head around. Now you need to concern yourself with preemption, multiple threads doing things in different orders, locking, shared state, etc, etc. But you can still pretend that the execution of one job will happen as a single thing, top down, with predictable intermediate state, if you like. (Python's threading and multiprocess modules both follow this style, they just have different levels of shared state.) 3) Asynchronous code adds significantly more "get your head around" complexity, since you now have to retain state for multiple jobs/requests in the same thread. You can't use local variables to keep track of where you're up to.
Most likely, your code will do some tiny thing, update the state object for that request, fire off an asynchronous request of your own (maybe to the hard disk, with a callback when the data's read/written), and then return, back to some main loop. Now imagine you have a database written in style #1, and you have to drag it, kicking and screaming, into the 21st century. Oh look, it's easy! All you have to do is start multiple threads doing the same job! And then you'll have some problems with simultaneous edits, so you put some big fat locks all over the place to prevent two threads from doing the same thing at the same time. Even if one of those threads was handling something interactive and might hold its lock for some number of minutes. Suboptimal design, maybe, but hey, it works right? That's what my brother has to deal with every day, as a user of said database... :| ChrisA -- https://mail.python.org/mailman/listinfo/python-list
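Chris's style 3 — per-request state objects driven from a main loop — can be caricatured in a few lines (a toy Python 3 sketch, not a real event loop; the job fields are invented for illustration):

```python
# Each job carries its own progress in a state dict, because the
# tick function returns to the main loop between steps and so
# cannot keep its place in local variables.
jobs = [{'name': 'a', 'step': 0, 'log': []},
        {'name': 'b', 'step': 0, 'log': []}]

def tick(job):
    # Do one tiny piece of work, record progress, then return.
    job['log'].append('%s:%d' % (job['name'], job['step']))
    job['step'] += 1

# The "main loop": keep ticking until every job has done 3 steps.
while any(job['step'] < 3 for job in jobs):
    for job in jobs:
        if job['step'] < 3:
            tick(job)
```

Both jobs make interleaved progress in a single thread, which is exactly the state-juggling that makes the async style harder to get your head around.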
Literal syntax for frozenset, frozendict (was: Tail recursion to while iteration in 2 easy steps)
random...@fastmail.us writes: Hey, while we're on the subject, can we talk about frozen(set|dict) literals again? I really don't understand why this discussion fizzles out whenever it's brought up on python-ideas. Can you start us off by searching for previous threads discussing it, and summarise the arguments here? -- \ “If you ever catch on fire, try to avoid seeing yourself in the | `\mirror, because I bet that's what REALLY throws you into a | _o__) panic.” —Jack Handey | Ben Finney -- https://mail.python.org/mailman/listinfo/python-list
Re: Tail recursion to while iteration in 2 easy steps
On Wed, 02 Oct 2013 22:41:00 -0400, Terry Reedy wrote: I am referring to constant-value objects included in the code object. def f(): return (1,2,3) f.__code__.co_consts (None, 1, 2, 3, (1, 2, 3)) Okay, now that's more clear. I didn't understand what you meant before. So long as we understand we're talking about a CPython implementation detail. None is present as the default return, even if not needed for a particular function. Every literal is also tossed in, whether needed or not. which in Python 3.3 understands tuples like (1, 2, 3), but not lists. The byte-code does not understand anything about types. LOAD_CONST n simply loads the (n+1)st object in .co_consts onto the top of the stack. Right, this is more clear to me now. As I understand it, the contents of code objects are implementation details, not required for implementations. For example, IronPython provides a co_consts attribute, but it only contains None. Jython doesn't provide a co_consts attribute at all. So while it's interesting to discuss what CPython does, we should not be fooled into thinking that this is guaranteed by every Python. I can imagine a Python implementation that compiles constants into some opaque object like __closure__ or co_code. In that case, it could treat the list in for i in [1, 2, 3]: ... as a constant too, since there is no fear that some other object could reach into the opaque object and change it. Of course, that would likely be a lot of effort for very little benefit. The compiler would have to be smart enough to see that the list was never modified or returned. Seems like a lot of trouble to go to just to save creating a small list. More likely would be implementations that didn't re-use constants, than implementations that aggressively re-used everything possible. -- Steven -- https://mail.python.org/mailman/listinfo/python-list
Re: Literal syntax for frozenset, frozendict
On 10/03/2013 05:18 PM, Ben Finney wrote: random...@fastmail.us writes: Hey, while we're on the subject, can we talk about frozen(set|dict) literals again? I really don't understand why this discussion fizzles out whenever it's brought up on python-ideas. Can you start us off by searching for previous threads discussing it, and summarise the arguments here? And then start a new thread. :) -- ~Ethan~ -- https://mail.python.org/mailman/listinfo/python-list
Re: Goodbye: was JUST GOT HACKED
On Thu, 03 Oct 2013 17:31:44 +0530, Ravi Sahni wrote: On Thu, Oct 3, 2013 at 5:05 PM, Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote: No, you are welcome here. You've posted more in just a few days than Walter has in months. We need more people like you. Thanks for the welcome! But No thanks for the non-welcome -- I dont figure why Walter Hurry (or anyone else) should be unwelcome just because I am welcome. Who said Walter was unwelcome? It's *his* choice to leave, nobody is kicking him out. Regards, -- Steven -- https://mail.python.org/mailman/listinfo/python-list
Re: Tail recursion to while iteration in 2 easy steps
On Thu, 03 Oct 2013 10:09:25 -0400, random832 wrote: Speaking of assumptions, I would almost say that we should make the assumption that operators (other than the __i family, and setitem/setattr/etc) are not intended to have visible side effects. This would open a _huge_ field of potential optimizations - including that this would no longer be a semantic change (since relying on one of the operators being allowed to change the binding of fact would no longer be guaranteed). I like the idea of such optimizations, but I'm afraid that your last sentence seems a bit screwy to me. You seem to be saying, if we make this major semantic change to Python, we can then retroactively declare that it's not a semantic change at all, since under the new rules, it's no different from the new rules. Anyway... I think that it's something worth investigating, but it's not as straightforward as you might hope. There almost certainly is code out in the world that uses operator overloading for DSLs. For instance, I've played around with something vaguely like this DSL: chain = Node('spam') chain << 'eggs' chain << 'ham' chain.head <= 'cheese' where I read << as appending and <= as inserting. I was never quite happy with the syntax, so my experiments never went anywhere, but I expect that some people, somewhere, have. This is a legitimate way to use Python, and changing the semantics to prohibit it would be a Bad Thing. However, I can imagine something like a __future__ directive that enables, or disables, such optimizations on a per-module basis. In Python 3, it would have to be disabled by default. Python 4000 could make the optimizations enabled by default and use the __future__ machinery to disable it. -- Steven -- https://mail.python.org/mailman/listinfo/python-list
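An operator-overloading DSL of the sort Steven describes might look like this (a hypothetical Python 3 sketch; the `Node` class and its behaviour are invented here, not Steven's actual code):

```python
class Node:
    def __init__(self, value):
        self.values = [value]

    def __lshift__(self, other):
        # '<<' appends — a visible side effect on self, which is exactly
        # what an "operators are pure" optimizer would be entitled to
        # assume away, breaking this DSL.
        self.values.append(other)
        return self            # returning self lets appends chain

chain = Node('spam')
chain << 'eggs' << 'ham'
```

Here the value of the expression is discarded entirely; the whole point of evaluating `<<` is its side effect, which is why assuming side-effect-free operators is a real semantic change.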
Re: compare two list of dictionaries
On Fri, Oct 4, 2013 at 12:14 AM, MRAB pyt...@mrabarnett.plus.com wrote: On 03/10/2013 17:11, Mohan L wrote: Dear All, I have two lists of dictionaries like below. In the dictionaries below, the value of 'ip' can be either a hostname or an IP address.

output1=[
{'count': 3, 'ip': 'xxx.xx.xxx.1'},
{'count': 4, 'ip': 'xxx.xx.xxx.2'},
{'count': 8, 'ip': 'xxx.xx.xxx.3'},
{'count': 10, 'ip': 'xxx.xx.xxx.4'},
{'count': 212, 'ip': 'hostname1'},
{'count': 27, 'ip': 'hostname2'},
{'count': 513, 'ip': 'hostname3'},
{'count': 98, 'ip': 'hostname4'},
{'count': 1, 'ip': 'hostname10'},
{'count': 2, 'ip': 'hostname8'},
{'count': 3, 'ip': 'xxx.xx.xxx.11'},
{'count': 90, 'ip': 'xxx.xx.xxx.12'},
{'count': 12, 'ip': 'xxx.xx.xxx.13'},
{'count': 21, 'ip': 'xxx.xx.xxx.14'},
{'count': 54, 'ip': 'xxx.xx.xxx.15'},
{'count': 34, 'ip': 'xxx.xx.xxx.16'},
{'count': 11, 'ip': 'xxx.xx.xxx.17'},
{'count': 2, 'ip': 'xxx.xx.xxx.18'},
{'count': 19, 'ip': 'xxx.xx.xxx.19'},
{'count': 21, 'ip': 'xxx.xx.xxx.20'},
{'count': 25, 'ip': 'xxx.xx.xxx.21'},
{'count': 31, 'ip': 'xxx.xx.xxx.22'},
{'count': 43, 'ip': 'xxx.xx.xxx.23'},
{'count': 46, 'ip': 'xxx.xx.xxx.24'},
{'count': 80, 'ip': 'xxx.xx.xxx.25'},
{'count': 91, 'ip': 'xxx.xx.xxx.26'},
{'count': 90, 'ip': 'xxx.xx.xxx.27'},
{'count': 10, 'ip': 'xxx.xx.xxx.28'},
{'count': 3, 'ip': 'xxx.xx.xxx.29'}]

The dictionaries below have either hostname or ip or both.
output2=(
{'hostname': 'INNCHN01', 'ip_addr': 'xxx.xx.xxx.11'},
{'hostname': 'HYDRHC02', 'ip_addr': 'xxx.xx.xxx.12'},
{'hostname': 'INNCHN03', 'ip_addr': 'xxx.xx.xxx.13'},
{'hostname': 'MUMRHC01', 'ip_addr': 'xxx.xx.xxx.14'},
{'hostname': 'n/a', 'ip_addr': 'xxx.xx.xxx.15'},
{'hostname': 'INNCHN05', 'ip_addr': 'xxx.xx.xxx.16'},
{'hostname': 'hostname1', 'ip_addr': 'n/a'},
{'hostname': 'hostname2', 'ip_addr': 'n/a'},
{'hostname': 'hostname10', 'ip_addr': ''},
{'hostname': 'hostname8', 'ip_addr': ''},
{'hostname': 'hostname200', 'ip_addr': 'xxx.xx.xxx.200'},
{'hostname': 'hostname300', 'ip_addr': 'xxx.xx.xxx.400'},
)

I am trying to get the following differences from the above dictionaries:

1) Compare the value of 'ip' in the output1 dictionaries with both 'hostname' and 'ip_addr' in the output2 dictionaries and print their intersection. I tried the code below:

for doc in output1:
    for row in output2:
        if (row['hostname'] == doc['ip']) or (row['ip_addr'] == doc['ip']):
            print doc['ip'], doc['count']

Output:

hostname1 212
hostname2 27
hostname10 1
hostname8 2
xxx.xx.xxx.11 3
xxx.xx.xxx.12 90
xxx.xx.xxx.13 12
xxx.xx.xxx.14 21
xxx.xx.xxx.15 54
xxx.xx.xxx.16 34

1. Create a dict from output1 in which the key is the ip and the value is the count.
2. Create a set from output2 containing all the hostnames and ip_addrs.
3. Get the intersection of the keys of the dict with the set.
4. Print the entries of the dict for each member of the intersection.

2) Need to print the below output if the value of 'ip' in the output1 dictionaries is not there in the output2 dictionaries (ip/hostname which is in output1 and not in output2):

xxx.xx.xxx.1 3
xxx.xx.xxx.2 4
xxx.xx.xxx.3 8
xxx.xx.xxx.4 10
hostname3 513
hostname4 98
xxx.xx.xxx.17 11
xxx.xx.xxx.18 2
xxx.xx.xxx.19 19
xxx.xx.xxx.20 21
xxx.xx.xxx.21 25
xxx.xx.xxx.22 31
xxx.xx.xxx.23 43
xxx.xx.xxx.24 46
xxx.xx.xxx.25 80
xxx.xx.xxx.26 91
xxx.xx.xxx.27 90
xxx.xx.xxx.28 10
xxx.xx.xxx.29 3

1. Get the difference between the keys of the dict and the intersection.
2.
Print the entries of the dict for each member of the difference. #!/bin/env python import sys output1=[ {'count': 3 , 'ip': 'xxx.xx.xxx.1'}, {'count': 4, 'ip': 'xxx.xx.xxx.2'}, {'count': 8, 'ip': 'xxx.xx.xxx.3'}, {'count': 10, 'ip': 'xxx.xx.xxx.4'}, {'count': 212, 'ip': 'hostname1'}, {'count': 27, 'ip': 'hostname2'}, {'count': 513, 'ip': 'hostname3'}, {'count': 98, 'ip': 'hostname4'}, {'count': 1, 'ip': 'hostname10'}, {'count': 2, 'ip': 'hostname8'}, {'count': 3, 'ip': 'xxx.xx.xxx.11'}, {'count': 90, 'ip': 'xxx.xx.xxx.12'}, {'count': 12, 'ip': 'xxx.xx.xxx.13'}, {'count': 21, 'ip': 'xxx.xx.xxx.14'}, {'count': 54, 'ip': 'xxx.xx.xxx.15'}, {'count': 34, 'ip': 'xxx.xx.xxx.16'}, {'count': 11, 'ip': 'xxx.xx.xxx.17'}, {'count': 2, 'ip': 'xxx.xx.xxx.18'}, {'count': 19, 'ip': 'xxx.xx.xxx.19'}, {'count': 21, 'ip': 'xxx.xx.xxx.20'}, {'count': 25, 'ip': 'xxx.xx.xxx.21'}, {'count': 31, 'ip': 'xxx.xx.xxx.22'}, {'count': 43, 'ip': 'xxx.xx.xxx.23'}, {'count': 46, 'ip': 'xxx.xx.xxx.24'}, {'count': 80, 'ip': 'xxx.xx.xxx.25'}, {'count': 91, 'ip': 'xxx.xx.xxx.26'}, {'count': 90, 'ip': 'xxx.xx.xxx.27'}, {'count': 10, 'ip': 'xxx.xx.xxx.28'}, {'count': 3, 'ip': 'xxx.xx.xxx.29'}] output2=(('INNCHN01','xxx.xx.xxx.11'), ('HYDRHC02', 'xxx.xx.xxx.12'), ('INNCHN03','xxx.xx.xxx.13'), ('MUMRHC01','xxx.xx.xxx.14'), ('n/a','xxx.xx.xxx.15'), ('INNCHN05','xxx.xx.xxx.16'), ('hostname1','n/a'),
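MRAB's four-step recipe can be sketched as follows. This is Python 3 syntax, and the sample data here is simplified stand-in data, not the original listings:

```python
# Simplified stand-in data with the same shape as output1/output2 above.
output1 = [{'count': 3, 'ip': '10.0.0.1'},
           {'count': 212, 'ip': 'hostname1'},
           {'count': 11, 'ip': '10.0.0.17'}]
output2 = ({'hostname': 'hostname1', 'ip_addr': 'n/a'},
           {'hostname': 'INNCHN01', 'ip_addr': '10.0.0.1'})

# 1. Dict from output1: key is the ip (or hostname), value is the count.
counts = {d['ip']: d['count'] for d in output1}

# 2. Set from output2 containing every hostname and ip_addr.
known = {d['hostname'] for d in output2} | {d['ip_addr'] for d in output2}

# 3. Intersection: entries of output1 that also appear in output2.
for ip in sorted(counts.keys() & known):
    print(ip, counts[ip])

# 4. (Question 2) Difference: entries of output1 missing from output2.
for ip in sorted(counts.keys() - known):
    print(ip, counts[ip])
```

Dict views support set operations directly, so no nested loops over output2 are needed; the whole comparison is two O(1)-per-element set lookups instead of an O(n*m) scan.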
[issue19151] Docstring and WindowsRegistryFinder wrong relative to importlib._bootstrap._get_supported_file_loaders()
New submission from Eric Snow: Changeset 1db6553f3f8c for issue #15576 changed importlib._bootstrap._get_supported_file_loaders() to return a list of 2-tuples instead of 3-tuples. However, the docstring for the function was not updated to reflect that. More importantly, WindowsRegistryFinder.find_module() still expects 3-tuples. The fix is relatively trivial (patch attached). However, I'm not aware of any reports of problems related to what should be a very broken WindowsRegistryFinder. So I just wanted to double-check that I haven't missed something here. Undoubtedly the code simply hasn't gotten a lot of exposure since the patch (Aug. 2012). -- assignee: eric.snow components: Interpreter Core files: fix-get-supported-file-loaders.diff keywords: patch messages: 198883 nosy: brett.cannon, eric.snow priority: normal severity: normal stage: commit review status: open title: Docstring and WindowsRegistryFinder wrong relative to importlib._bootstrap._get_supported_file_loaders() type: behavior versions: Python 3.3, Python 3.4 Added file: http://bugs.python.org/file31951/fix-get-supported-file-loaders.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19151 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19148] Minor issues with Enum docs
Georg Brandl added the comment: Patch LGTM. -- nosy: +georg.brandl ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19148 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19150] IDLE shell fails: ModifiedInterpreter instance has no attribute 'interp'
Ned Deily added the comment: That still doesn't explain the problem. How are you trying to compile the program? For example, one way would be to use your mouse to select the Run menu item and then the Run Module option. Or use the F5 function key shortcut for that. Another somewhat unusual thing is No Subprocess status in the shell window. How are you starting IDLE? From a shell? If so, with what arguments? Also, can you show the contents of any files in your ~/.idlerc directory? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19150 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19152] ExtensionFileLoader missing get_filename()
New submission from Eric Snow: Any reason why ExtensionFileLoader does not implement get_filename()? I'm guessing it just slipped through the cracks. It should be there (and be registered as implementing ExecutionLoader). -- assignee: eric.snow components: Interpreter Core messages: 198886 nosy: brett.cannon, eric.snow priority: normal severity: normal status: open title: ExtensionFileLoader missing get_filename() type: behavior versions: Python 3.3, Python 3.4 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19152 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18037] 2to3 passes through string literal which causes SyntaxError in 3.x
Roundup Robot added the comment: New changeset 5e8de100f708 by Serhiy Storchaka in branch '2.7': Issue #18037: 2to3 now escapes '\u' and '\U' in native strings. http://hg.python.org/cpython/rev/5e8de100f708 New changeset 5950dd4cd9ef by Serhiy Storchaka in branch '3.3': Issue #18037: 2to3 now escapes '\u' and '\U' in native strings. http://hg.python.org/cpython/rev/5950dd4cd9ef New changeset 7d3695937362 by Serhiy Storchaka in branch 'default': Issue #18037: 2to3 now escapes '\u' and '\U' in native strings. http://hg.python.org/cpython/rev/7d3695937362 -- nosy: +python-dev ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18037 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18965] 2to3 can produce illegal bytes literals
Changes by Serhiy Storchaka storch...@gmail.com: Removed file: http://bugs.python.org/file31657/2to3_nonascii_bytes.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18965 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18965] 2to3 can produce illegal bytes literals
Serhiy Storchaka added the comment: Added a test. -- Added file: http://bugs.python.org/file31952/2to3_nonascii_bytes.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18965 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18037] 2to3 passes through string literal which causes SyntaxError in 3.x
Changes by Serhiy Storchaka storch...@gmail.com: -- resolution: - fixed stage: patch review - committed/rejected status: open - closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18037 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18965] 2to3 can produce illegal bytes literals
Serhiy Storchaka added the comment: Backported to 2.7. -- Added file: http://bugs.python.org/file31953/2to3_nonascii_bytes-2.7.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18965 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18965] 2to3 can produce illegal bytes literals
Changes by Serhiy Storchaka storch...@gmail.com: Removed file: http://bugs.python.org/file31952/2to3_nonascii_bytes.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18965 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19143] Finding the Windows version getting messier
Changes by Serhiy Storchaka storch...@gmail.com: -- nosy: +serhiy.storchaka ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19143 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19014] Allow memoryview.cast() for empty views
Stefan Krah added the comment: Ok, I think the main reason for disallowing zeros in view-shape here was that casts are undefined if the shape argument is also given:

x = memoryview(b'')
x.cast('d', shape=[1])

Now, this case *is* already caught at a later stage, since there isn't enough space for the cast. Nevertheless, the code is tricky, so I'd prefer to be conservative and catch such shape arguments earlier. I left a suggestion in Rietveld. I would commit it myself, but I'm moving and my infrastructure is a mess. It would be great if one of you could take this one. I'll try to review the general case for ndim > 1 later, but that's not particularly important right now. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19014 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
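With the change applied, the two behaviours under discussion can be illustrated like this (the 'd' format and shape value are just examples):

```python
x = memoryview(b'')

# A zero-length view can now be cast to any format: 0 bytes is trivially
# a multiple of every itemsize, so the result is an empty view.
d = x.cast('d')
print(len(d))  # 0

# An explicit shape implying a non-zero size is still rejected, because
# product(shape) * itemsize doesn't match the (zero-byte) buffer size.
try:
    x.cast('d', shape=[1])
except (TypeError, ValueError) as exc:
    print('rejected:', exc)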
[issue19141] Windows Launcher fails to respect PATH
Mark Hammond added the comment: I am trying to draw attention to the situation where the script has no shebang line, and there is no other explicit configuration info for py.exe. In that case, the user should just type python scriptname.py - py.exe is for cases where just specifying python doesn't do the right thing. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19141 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18725] Multiline shortening
Serhiy Storchaka added the comment: Could anyone please review the patch? -- keywords: +needs review ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18725 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19021] AttributeError in Popen.__del__
Antoine Pitrou added the comment:
> There is a regression in 3.4 due to changes in shutdown procedure. This code correctly works in 3.3. There are more than a dozen places in the stdlib which rely upon accessibility of builtins.
Well, perhaps we can special-case builtins not to be wiped at shutdown. However, there is another problem here in that the Popen object survives until the builtins module is wiped. This should be investigated too. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19021 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19108] Benchmark runner tries to execute external Python command and fails on error reporting
Brett Cannon added the comment: I should mention any solution for the command-line should take a N.N value *only* and not just 2/3 for instances where tests do not work with the latest version of Python yet. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19108 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19151] Docstring and WindowsRegistryFinder wrong relative to importlib._bootstrap._get_supported_file_loaders()
Brett Cannon added the comment: LGTM; just watch any Windows buildbot for possible failure. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19151 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19152] ExtensionFileLoader missing get_filename()
Brett Cannon added the comment: Just an oversight. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19152 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19021] AttributeError in Popen.__del__
Richard Oudkerk added the comment:
> Well, perhaps we can special-case builtins not to be wiped at shutdown. However, there is another problem here in that the Popen object survives until the builtins module is wiped. This should be investigated too.
Maybe it is because it uses the evil resuscitate-in-__del__ trick. I presume that if the child process survives during shutdown, then the popen object is guaranteed to survive too. We could get rid of the trick:
* On Windows __del__ is unneeded since we don't need to reap zombie processes.
* On Unix __del__ could just add self._pid (rather than self) to the list _active. _cleanup() would then use os.waitpid() to check the pids in _active.
The hardest thing about making such a change is that test_subprocess currently uses _active. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19021 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19108] Benchmark runner tries to execute external Python command and fails on error reporting
Stefan Behnel added the comment: I'm having trouble understanding your last comment. Are you saying that you want the exact value to be a two-digit version and therefore use separate arguments for both Pythons (e.g. --basever 2.7 --cmpver 3.3), or that you want it to accept two-digit versions instead of major-only regardless of the number of arguments (e.g. --pyversion 2.7,3.3)? And are you saying that you want to use the version also for, say, disabling a test because it doesn't run in Py3.4 yet, despite supporting Py3.3, for example? A comma is what is used by --args, BTW, that's why I'd reuse it for the versions argument, i.e. --pyversion 2.7,3.3 and additionally --pyversion 3.3 for the simple case. I haven't come up with a patch yet because it requires a number of non-local changes throughout the file. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19108 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19108] Benchmark runner tries to execute external Python command and fails on error reporting
Brett Cannon added the comment: I want to only accept major.minor version specifications; how that is done on the command-line I don't care and leave up to you. And yes, the version may be used in the future to disable tests that e.g. don't work on Python 3.4 like Chameleon. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19108 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
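One hypothetical way for the runner to enforce major.minor-only versions on the command line, using Stefan's comma-separated spelling. The flag name and helper are made up for illustration, not taken from the benchmark runner:

```python
import argparse
import re


def version_pair(text):
    """Parse '2.7,3.3' (or a single '3.3') into a tuple of version strings,
    rejecting bare major versions like '2' or '3'."""
    versions = text.split(',')
    for v in versions:
        if not re.fullmatch(r'\d+\.\d+', v):
            raise argparse.ArgumentTypeError(
                'expected a major.minor version, got %r' % v)
    return tuple(versions)


parser = argparse.ArgumentParser()
parser.add_argument('--pyversion', type=version_pair, default=('2.7', '3.3'))
args = parser.parse_args(['--pyversion', '2.7,3.3'])
print(args.pyversion)  # ('2.7', '3.3')
```

Keeping the validation in an argparse `type` callable means a bare `--pyversion 3` fails at parse time with a clear message instead of surfacing later as a failed subprocess launch.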
[issue19153] Embedding into a shared library fails again
New submission from Rinat: I have the same error as described here: http://bugs.python.org/issue4434 I did everything according to this article http://docs.python.org/2/extending/embedding.html#compiling-and-linking-under-unix-like-systems and more, but when I try to call the interpreter from C++ it fails with these errors:

python_support::pre_process_payment() failed
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "hidden-path/payments/__init__.py", line 1, in <module>
    from .factory import *
  File "hidden-path/payments/factory.py", line 7, in <module>
    import psycopg2
  File "/usr/local/lib/python2.7/dist-packages/psycopg2/__init__.py", line 50, in <module>
    from psycopg2._psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID
ImportError: /usr/local/lib/python2.7/dist-packages/psycopg2/_psycopg.so: undefined symbol: PyExc_SystemError

Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 66, in apport_excepthook
    from apport.fileutils import likely_packaged, get_recent_crashes
  File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module>
    from apport.report import Report
  File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module>
    from xml.parsers.expat import ExpatError
  File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module>
    from pyexpat import *
ImportError: /usr/lib/python2.7/lib-dynload/pyexpat.so: undefined symbol: _Py_ZeroStruct

Original exception was:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "hidden-path/payments/__init__.py", line 1, in <module>
    from .factory import *
  File "hidden-path/payments/factory.py", line 7, in <module>
    import psycopg2
  File "/usr/local/lib/python2.7/dist-packages/psycopg2/__init__.py", line 50, in <module>
    from psycopg2._psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID
ImportError: /usr/local/lib/python2.7/dist-packages/psycopg2/_psycopg.so: undefined symbol: PyExc_SystemError

Environment: 3.2.0-54-generic-pae
#82-Ubuntu (12.04) g++ (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3 Boost.Python 1.48 Python 2.7.3 (default, Sep 26 2013, 20:08:41) [GCC 4.6.3] on linux2 No custom builds. Every package from ubuntu repository. Psycopg2 installed by pip Project is big so i can't to provide code -- components: Library (Lib) messages: 198900 nosy: rinatous priority: normal severity: normal status: open title: Embedding into a shared library fails again type: behavior versions: Python 2.7 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19153 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
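A workaround commonly suggested for this class of undefined-symbol failure (an assumption on my part; the report itself proposes no fix) is that the embedding application loads libpython with its symbols hidden (RTLD_LOCAL), so extension modules like _psycopg.so and pyexpat.so cannot resolve PyExc_SystemError. Re-opening libpython with RTLD_GLOBAL before importing any extension module promotes its symbols into the global namespace; the library name below is a guess for this Ubuntu setup:

```python
import ctypes
import ctypes.util

# Hypothetical library name; adjust to whichever interpreter is embedded.
libname = ctypes.util.find_library('python2.7') or 'libpython2.7.so.1.0'
try:
    # RTLD_GLOBAL makes libpython's symbols visible to later-loaded
    # extension modules, which is what the undefined-symbol errors need.
    ctypes.CDLL(libname, mode=ctypes.RTLD_GLOBAL)
except OSError:
    pass  # that libpython isn't installed on this machine; nothing to promote

import xml.parsers.expat  # this import is what previously failed
```

The equivalent fix on the C++ side would be to dlopen libpython with the RTLD_GLOBAL flag (or link the host binary against libpython directly) instead of relying on a locally-scoped load.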
[issue18805] ipaddress netmask/hostmask parsing bugs
pmoody added the comment: I've got a patch from pmarks that I've applied to ipaddr and the google code version of ipaddress-py. I'll get it applied to the hg ipaddress. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18805 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19154] AttributeError: 'NoneType' in http/client.py when using select when file descriptor is closed.
New submission from Florent Viard: In Lib/http/client.py +682 (formerly httplib):

def fileno(self):
    return self.fp.fileno()

This function should be modified to handle the case where the http request has already completed and so fp is closed. E.g.:

def fileno(self):
    if self.fp:
        return self.fp.fileno()
    else:
        return -1

I encountered the issue in the following context:

while 1:
    read_list = select([req], ...)[0]
    if read_list:
        req.read(CHUNK_SIZE)
    ...

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nappstore/server_comm.py", line 211, in download_file
    ready = select.select([req], [], [], timeout)[0]
  File "/usr/lib/python2.7/socket.py", line 313, in fileno
    return self._sock.fileno()
  File "/usr/lib/python2.7/httplib.py", line 655, in fileno
    return self.fp.fileno()
AttributeError: 'NoneType' object has no attribute 'fileno'

For the returned value, I'm not sure, because there are currently two different conventions among other objects that expose a fileno: in Lib/fileinput.py, -1 is returned in case of ValueError (no fileno value as fp was closed), but in Lib/socket.py a ValueError is raised in that case, and the default fileno value for a socket is None. -- components: Library (Lib) messages: 198902 nosy: fviard priority: normal severity: normal status: open title: AttributeError: 'NoneType' in http/client.py when using select when file descriptor is closed. type: crash versions: Python 2.6, Python 2.7, Python 3.1, Python 3.2, Python 3.3, Python 3.4, Python 3.5 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19154 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19014] Allow memoryview.cast() for empty views
Roundup Robot added the comment: New changeset b08e092df155 by Antoine Pitrou in branch '3.3': Issue #19014: memoryview.cast() is now allowed on zero-length views. http://hg.python.org/cpython/rev/b08e092df155 New changeset 1e13a58c1b92 by Antoine Pitrou in branch 'default': Issue #19014: memoryview.cast() is now allowed on zero-length views. http://hg.python.org/cpython/rev/1e13a58c1b92 -- nosy: +python-dev ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19014 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19014] Allow memoryview.cast() for empty views
Antoine Pitrou added the comment: Applied Stefan's suggestion. Thanks for the review :) -- resolution: - fixed stage: needs patch - committed/rejected status: open - closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19014 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19154] AttributeError: 'NoneType' in http/client.py when using select when file descriptor is closed.
Changes by R. David Murray rdmur...@bitdance.com: -- nosy: +r.david.murray ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19154 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19014] Allow memoryview.cast() for empty views
Serhiy Storchaka added the comment: Thank you Antoine. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19014 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19152] ExtensionFileLoader missing get_filename()
Roundup Robot added the comment: New changeset 0d079c66dc23 by Eric Snow in branch 'default': [issue19152] Add ExtensionFileLoader.get_filename(). http://hg.python.org/cpython/rev/0d079c66dc23 -- nosy: +python-dev ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19152 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19152] ExtensionFileLoader missing get_filename()
Eric Snow added the comment: I realized after I committed that this should probably be back-ported to 3.3. I'll take care of that in a few hours. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19152 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19151] Docstring and WindowsRegistryFinder wrong relative to importlib._bootstrap._get_supported_file_loaders()
Eric Snow added the comment: changeset: 85941:152f7235667001fe7cb3c90ad79ab421ef8c03bb user:Eric Snow ericsnowcurren...@gmail.com date:Thu Oct 03 12:08:55 2013 -0600 summary: [issue19951] Fix docstring and use of _get_suppported_file_loaders() to reflect 2-tuples. (wrong issue # in commit message) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19151 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19151] Docstring and WindowsRegistryFinder wrong relative to importlib._bootstrap._get_supported_file_loaders()
Roundup Robot added the comment: New changeset a329474cfe0c by Eric Snow in branch 'default': [issue19151] Fix issue number in Misc/NEWS entry. http://hg.python.org/cpython/rev/a329474cfe0c -- nosy: +python-dev ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19151 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19151] Docstring and WindowsRegistryFinder wrong relative to importlib._bootstrap._get_supported_file_loaders()
Eric Snow added the comment: As with #19152, I'll need to backport this to 3.3. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19151 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19087] bytearray front-slicing not optimized
Antoine Pitrou added the comment: Here is a slightly modified patch implementing Serhiy's suggestion. -- Added file: http://bugs.python.org/file31954/bytea_slice3.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19087 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18986] Add a case-insensitive case-preserving dict
Antoine Pitrou added the comment: Raymond, have you had time to look at this? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18986 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17442] code.InteractiveInterpreter doesn't display the exception cause
Changes by Antoine Pitrou pit...@free.fr: -- nosy: +georg.brandl, serhiy.storchaka ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17442 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19151] Docstring and WindowsRegistryFinder wrong relative to importlib._bootstrap._get_supported_file_loaders()
Roundup Robot added the comment: New changeset 32b18998a560 by Eric Snow in branch '3.3': [issue19151] Fix docstring and use of _get_suppported_file_loaders() to reflect 2-tuples. http://hg.python.org/cpython/rev/32b18998a560 -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19151 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19152] ExtensionFileLoader missing get_filename()
Roundup Robot added the comment: New changeset 832579dbafd6 by Eric Snow in branch '3.3': [issue19152] Add ExtensionFileLoader.get_filename(). http://hg.python.org/cpython/rev/832579dbafd6 -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19152 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19152] ExtensionFileLoader missing get_filename()
Changes by Eric Snow ericsnowcurren...@gmail.com: -- resolution: - fixed stage: - committed/rejected status: open - closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19152 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19151] Docstring and WindowsRegistryFinder wrong relative to importlib._bootstrap._get_supported_file_loaders()
Changes by Eric Snow ericsnowcurren...@gmail.com: -- resolution: - fixed stage: commit review - committed/rejected status: open - closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19151 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19087] bytearray front-slicing not optimized
STINNER Victor added the comment: bytea_slice3.patch looks simpler than bytea_slice2.patch, I prefer it. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19087 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19155] Display stack info with color in pdb (the w command)
New submission from Peng Yu: Currently, the w command does not show the stack info in color. I think that it might be visually helpful to add some color to emphasize the current frame, etc. May I suggest adding this feature to pdb? Thanks. -- messages: 198916 nosy: Peng.Yu priority: normal severity: normal status: open title: Display stack info with color in pdb (the w command) type: enhancement ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19155 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
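pdb has no colour support today, so the feature would have to post-process the stack listing itself. A rough sketch of the idea using ANSI escape codes (the helper name is hypothetical, not an existing pdb hook):

```python
def colorize_stack(lines, current):
    """Return stack-trace lines with the current frame highlighted in bold."""
    BOLD, RESET = '\033[1m', '\033[0m'
    return [BOLD + line + RESET if i == current else line
            for i, line in enumerate(lines)]


stack = ['  File "a.py", line 3, in f',
         '  File "a.py", line 7, in g']
for line in colorize_stack(stack, current=1):
    print(line)
```

A real implementation would also need to check that stdout is a terminal (e.g. via `sys.stdout.isatty()`) before emitting escape codes, so that piped or logged pdb sessions stay clean.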
[issue19152] ExtensionFileLoader missing get_filename()
Brett Cannon added the comment: Actually you need to back out the 3.3 commit. That's a new API in a bugfix release and that's bad. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19152 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19152] ExtensionFileLoader missing get_filename()
Changes by Brett Cannon br...@python.org: -- status: closed - open ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19152 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19152] ExtensionFileLoader missing get_filename()
Eric Snow added the comment: Dang it. I was thinking of it as a bug that the method wasn't there, but you're right regardless. Revert coming. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19152 ___
[issue19152] ExtensionFileLoader missing get_filename()
Roundup Robot added the comment: New changeset 7ed717bd5faa by Eric Snow in branch '3.3': [issue19152] Revert 832579dbafd6. http://hg.python.org/cpython/rev/7ed717bd5faa -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19152 ___
[issue19152] ExtensionFileLoader missing get_filename()
Eric Snow added the comment: Thanks for noticing that, Brett. -- status: open -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19152 ___
[issue19152] ExtensionFileLoader missing get_filename()
Berker Peksag added the comment: It would be good to add a versionadded (or versionchanged) tag. -- nosy: +berker.peksag versions: -Python 3.3 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19152 ___
[issue19146] Improvements to traceback module
Antoine Pitrou added the comment:

> And adding __slots__ to a namedtuple subclass doesn't work.

Are you sure? I do it all the time.

-- nosy: +pitrou ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19146 ___
[issue19156] Enum helper functions test-coverage
New submission from CliffM: Added some tests for the _is_sunder and _is_dunder helper functions in the enum module. -- components: Tests files: enum.patch keywords: patch messages: 198923 nosy: CliffM priority: normal severity: normal status: open title: Enum helper functions test-coverage type: enhancement versions: Python 3.4 Added file: http://bugs.python.org/file31955/enum.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19156 ___
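For readers unfamiliar with these private enum helpers: they classify member names by their underscore pattern. A minimal sketch of the behaviour such tests would pin down (this is an assumed reimplementation for illustration, not the code from CPython's enum.py):

```python
# Sketch of the naming checks performed by enum's private helpers
# (assumed behaviour; names is_dunder/is_sunder are illustrative).

def is_dunder(name):
    """True for __double_underscore__ names such as '__repr__'."""
    return (len(name) > 4 and
            name[:2] == name[-2:] == '__' and
            name[2] != '_' and name[-3] != '_')

def is_sunder(name):
    """True for _single_underscore_ names such as '_order_'."""
    return (len(name) > 2 and
            name[0] == name[-1] == '_' and
            name[1] != '_' and name[-2] != '_')
```

Dunder names are skipped entirely during Enum class creation, while sunder names like _order_ are reserved for enum machinery, which is why getting these predicates right matters.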
[issue19146] Improvements to traceback module
Guido van Rossum added the comment:

Well, this is what I get:

$ python3
Python 3.4.0a1+ (default:41de6f0e62fd+, Aug 27 2013, 18:44:07)
[GCC 4.2.1 Compatible Apple LLVM 4.2 (clang-425.0.28)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from collections import namedtuple
>>> A = namedtuple('A', 'foo bar')
>>> class B(A):
...     __slots__ = ['baz']
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: nonempty __slots__ not supported for subtype of 'A'

When I try to set __slots__ on an existing namedtuple it doesn't complain, but it doesn't work either:

>>> A.__slots__ = ['xxx']
>>> a.xxx = 1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'A' object has no attribute 'xxx'

What am I doing wrong?

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19146 ___
[issue19146] Improvements to traceback module
Antoine Pitrou added the comment:

> >>> class B(A):
> ...     __slots__ = ['baz']
> ...
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> TypeError: nonempty __slots__ not supported for subtype of 'A'

Ah, ok, you're right. I only use empty __slots__ :-)

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19146 ___
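The distinction the two are converging on can be shown in a few lines (a minimal sketch; the class names A, B, C are illustrative): an empty __slots__ on a namedtuple subclass works and suppresses the instance __dict__, while a nonempty one is rejected at class-creation time because tuple is a variable-length type and CPython cannot lay out extra slots after its items.

```python
from collections import namedtuple

A = namedtuple('A', 'foo bar')

# Empty __slots__ works: the subclass simply gains no per-instance
# __dict__, so arbitrary attributes cannot be attached.
class B(A):
    __slots__ = ()

b = B(1, 2)
try:
    b.baz = 3          # no __dict__, so this fails
    no_dict = False
except AttributeError:
    no_dict = True

# Nonempty __slots__ fails immediately when the class is created.
try:
    class C(A):
        __slots__ = ('baz',)
    err = ''
except TypeError as exc:
    err = str(exc)     # "nonempty __slots__ not supported ..."
```

This matches both observations in the thread: Antoine's empty-__slots__ pattern is fine, and Guido's nonempty list raises the TypeError shown in his transcript.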