[issue26843] tokenize does not include Other_ID_Start or Other_ID_Continue in identifier
Joshua Landau added the comment: Sorry, I'd stumbled on my old comment on the closed issue and completely forgot about the *last* time I did the same thing. -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue26843> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue26843] tokenize does not include Other_ID_Start or Other_ID_Continue in identifier
New submission from Joshua Landau: This is effectively a continuation of https://bugs.python.org/issue9712. The line in Lib/tokenize.py

    Name = r'\w+'

must be changed to a regular expression that accepts Other_ID_Start at the start and Other_ID_Continue elsewhere. Hence tokenize does not accept '℘·'. See the reference here: https://docs.python.org/3.5/reference/lexical_analysis.html#identifiers

I'm unsure whether unicode normalization (aka the `xid` properties) needs to be dealt with too. Credit to toriningen from http://stackoverflow.com/a/29586366/1763356.

-- components: Library (Lib) messages: 264145 nosy: Joshua.Landau priority: normal severity: normal status: open title: tokenize does not include Other_ID_Start or Other_ID_Continue in identifier type: behavior versions: Python 3.5 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue26843> ___
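The mismatch is easy to demonstrate with the stdlib alone. A minimal check (the relevant code points are '℘', U+2118, which carries Other_ID_Start, and '·', U+00B7, which carries Other_ID_Continue):

```python
import re

# '℘·' satisfies the language reference's definition of an identifier...
print('℘·'.isidentifier())         # True

# ...but \w (the class behind tokenize's Name pattern) does not
# cover the Other_ID_Start / Other_ID_Continue code points.
print(re.fullmatch(r'\w+', '℘·'))  # None
```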
[issue21593] Clarify re.search documentation first match
Changes by Joshua Landau <joshua.landau...@gmail.com>: -- versions: +Python 3.6 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue21593> ___
[issue21593] Clarify re.search documentation first match
Joshua Landau added the comment: This should also be applied to regex.search's docstring. https://docs.python.org/3.5/library/re.html#re.regex.search -- resolution: fixed -> ; status: closed -> open ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21593 ___
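For reference, the behaviour the docstring should spell out, namely that search() returns the leftmost match, can be shown directly:

```python
import re

# search() scans the string left to right and stops at the first
# position where the pattern matches.
m = re.search(r'\d+', 'ab12cd34')
print(m.group())  # '12'
print(m.span())   # (2, 4)
```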
[issue24194] tokenize yield an ERRORTOKEN if an identifier uses Other_ID_Start or Other_ID_Continue
New submission from Joshua Landau: This is valid:

    ℘· = 1
    print(℘·)  # 1

But this gives an error token:

    from io import BytesIO
    from tokenize import tokenize

    stream = BytesIO('℘·'.encode('utf-8'))
    print(*tokenize(stream.read), sep='\n')
    # TokenInfo(type=56 (ENCODING), string='utf-8', start=(0, 0), end=(0, 0), line='')
    # TokenInfo(type=53 (ERRORTOKEN), string='℘', start=(1, 0), end=(1, 1), line='℘·')
    # TokenInfo(type=53 (ERRORTOKEN), string='·', start=(1, 1), end=(1, 2), line='℘·')
    # TokenInfo(type=0 (ENDMARKER), string='', start=(2, 0), end=(2, 0), line='')

This is a continuation of http://bugs.python.org/issue9712. I'm not able to reopen the issue, so I thought I should report it anew. It is tokenize that is wrong - Other_ID_Start and Other_ID_Continue are documented to be valid: https://docs.python.org/3.5/reference/lexical_analysis.html#identifiers

-- components: Library (Lib) messages: 243188 nosy: Joshua.Landau priority: normal severity: normal status: open title: tokenize yield an ERRORTOKEN if an identifier uses Other_ID_Start or Other_ID_Continue type: behavior versions: Python 3.4 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue24194 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: There is a change as part of this to make dict building more like list and set building, which both have this behaviour. The same changes have likely occurred before whenever BUILD_LIST and BUILD_SET were introduced, and this behaviour seems particularly undefined. That said, I did overlook the difference. Hopefully there's agreement that it doesn't matter. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue9712] tokenize yield an ERRORTOKEN if the identifier starts with a non-ascii char
Joshua Landau added the comment: This doesn't seem to be a complete fix; the regex used does not include Other_ID_Start or Other_ID_Continue from https://docs.python.org/3.5/reference/lexical_analysis.html#identifiers Hence tokenize does not accept '℘·'. Credit to modchan from http://stackoverflow.com/a/29586366/1763356. -- nosy: +Joshua.Landau versions: +Python 3.4 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue9712 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: I don't know the etiquette rules for the issue tracker, but I'd really appreciate having something to debug -- it's working for me, you see. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: Special-cased `(*i for i in x)` to use YIELD_FROM instead of looping. Speed improved, albeit still only half as fast as chain.from_iterable. Fixed error message check in test_syntax and removed semicolons. -- Added file: http://bugs.python.org/file37928/starunpack30.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: Quick-fix for Guido's bug attached. I'm not familiar with this part of the code, yet, so take this tentatively. I just changed

    while (containers > 1) {

to

    while (containers) {

---

@Guido My comments were assuming `f(**x for x in y)` meant `f({**x for x in y})`. I see your reasoning, but I don't like how your version has

    (x for y in z for x in y) == (*y for y in z)
    f(x for y in z for x in y) != f(*y for y in z)

This seems like a tripping point. I've never wanted to unpack a 2D iterable into an argument list, so personally I'm not convinced by the value-add either. -- Added file: http://bugs.python.org/file37866/starunpack19.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: Update for the error messages fix. I've put aside the idea of unifying things for now because there are a couple of interdependencies I wasn't expecting and I absolutely don't want the fast-path for f(x) to get slower. -- Added file: http://bugs.python.org/file37867/starunpack20.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: If we're supporting f(**x for x in y), surely we should also support f(x: y for x, y in z). I personally don't like this idea. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue23316] Incorrect evaluation order of function arguments with *args
New submission from Joshua Landau: It is claimed that all expressions are evaluated left-to-right, including in functions¹. However, f(*a(), b=b()) will evaluate b() before a(). ¹ https://docs.python.org/3/reference/expressions.html#evaluation-order -- components: Interpreter Core messages: 234672 nosy: Joshua.Landau priority: normal severity: normal status: open title: Incorrect evaluation order of function arguments with *args type: behavior versions: Python 3.4, Python 3.5 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue23316 ___
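A small instrumented repro of the call in the report. The helper names (`a`, `b`, `f`, `calls`) are illustrative; on the 3.4/3.5 interpreters the report targets, b() ran first, while recent CPython evaluates the arguments left to right as documented:

```python
calls = []

def a():
    calls.append('a')
    return ()

def b():
    calls.append('b')
    return 0

def f(*args, **kwargs):
    pass

# Documented order is left to right, so a() should run before b().
f(*a(), b=b())
print(calls)
```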
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: Amazing, thanks. I also just uncovered http://bugs.python.org/issue23316; we'll need to support a patch for that. In fact, bad evaluation order is why I haven't yet gotten down my unification strategy. I wouldn't worry about extra opcodes when using *args or **kwargs, though; what matters is mostly avoiding extra copies. I guess a few `timeit`s will show whether I'm right or totally off-base. Most of what's needed for the error stuff is already implemented; one just needs to set the top bit flag (currently just 18) to 1 + arg_count_on_stack(), which is a trivial change. I'll push a patch for that after I'm done fiddling with the unification idea. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: I've looked at BUILD_MAP(n). It seems to work and has speed improvements, but:

- I was wrong about the 16-bit int thing. It turns out CPython is happily treating them as 32 bit as long as they are prefixed by an EXTENDED_ARG bytecode: https://docs.python.org/3/library/dis.html#opcode-EXTENDED_ARG This could be used by BUILD_MAP rather than falling back to BUILD_MAP_UNPACK.

- It's probably best to not include it here, since it's a disjoint issue. This patch wouldn't really be affected by its absence.

Also, if we limit BUILD_MAP_MERGE to 255 dictionaries, this will also apply to the {**a, **b, **c, ...} syntax, although I really can't see it being a problem. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
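EXTENDED_ARG can be observed from Python code. A sketch (on CPython 3.6+, where opargs are a single byte, so any operand index above 255 needs the prefix; the generated function and its variable names are made up for the demo):

```python
import dis

# Generate a function with 300 local variables so that the later
# STORE_FAST/LOAD_FAST opcodes need operand indices above 255,
# which no longer fit in a single byte-sized oparg.
names = [f"v{i}" for i in range(300)]
body = "\n    ".join(f"{name} = {i}" for i, name in enumerate(names))
src = f"def f():\n    {body}\n    return {names[-1]}"

namespace = {}
exec(src, namespace)

listing = dis.Bytecode(namespace["f"]).dis()
print("EXTENDED_ARG" in listing)  # True on CPython 3.6+
```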
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: Why would that simplify things? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: I phrased that badly. Whilst I can see minor simplifications to BUILD_MAP_UNPACK, the only way to add more information to CALL_FUNCTION_XXX would be through EXTENDED_ARG. This seems like it would outweigh any benefits, and the tiny duplication of error checking removed would be far dwarfed by the unpacking code in CALL_FUNCTION_XXX. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: We wouldn't actually need to raise it from somewhere else; the line numbering and frame are already correct. The only difficulty is that the traceback currently says

    # func(a=1, **{'a': 1})
    TypeError: func() got multiple values for keyword argument 'arg'

To do this from the UNPACK opcode would require knowing where the function is in order to print its name. (We also need to know whether to do the check at all, so we'd be hijacking some bits of the UNPACK opcode anyway.) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
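The error being discussed is reproducible on any released CPython; a minimal demo (the function name `func` is just for illustration):

```python
def func(a):
    return a

try:
    # A keyword argument duplicated by a **-unpacking.
    func(a=1, **{'a': 1})
except TypeError as exc:
    message = str(exc)

print(message)  # ... got multiple values for keyword argument 'a'
```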
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: The function object that's on the stack. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: Just before any arguments to the function. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: No, that happens in CALL_FUNCTION_KW:

    import dis
    dis.dis("f(x=1, **{'x': 1})")
      1           0 LOAD_NAME                0 (f)
                  3 LOAD_CONST               0 ('x')
                  6 LOAD_CONST               1 (1)
                  9 LOAD_CONST               1 (1)
                 12 LOAD_CONST               0 ('x')
                 15 BUILD_MAP                1
                 18 CALL_FUNCTION_KW       256 (0 positional, 1 keyword pair)
                 21 RETURN_VALUE

There's no call to BUILD_MAP_UNPACK at all. Namely, it's raised from update_keyword_args (in turn from ext_do_call).

--- Tangential note: ---

In fact, it seems the only reason we keep the mess of unpacking in two places rather than just using BUILD_TUPLE_UNPACK and BUILD_MAP_UNPACK unconditionally is that CALL_FUNCTION_XXX looks to be slightly more efficient by only dealing with the case of a single unpack at the end. I think I see how to make the _UNPACKs fast enough for this case, though, so maybe we could remove it and unify a few things. I'd need to write it up, though. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: I imagine it like (in the map unpacking code)

    func_offset = (oparg >> 8) & 0xFF;
    num_maps = oparg & 0xFF;

    // later
    if (func_offset) {
        // do checks
        if (repeated_argument) {
            raise_error_from_function(PEEK(func_offset + num_maps));
        }
    }

This code should be relatively quick, in an already-slow opcode, and rather short. If adding to CALL_FUNCTION_XXX, you would have to add an EXTENDED_ARG opcode (because CALL_FUNCTION_XXX already uses the bottom 16 bits), add checks for the top bits in the opcode, and duplicate the (large) dictionary merging function. This doesn't seem like it saves much work. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
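The proposed oparg layout, function offset in the high byte and map count in the low byte, can be sketched in Python to check the arithmetic (the helper names are made up for the demo):

```python
FUNC_SHIFT = 8
BYTE_MASK = 0xFF

def pack(func_offset, num_maps):
    # Function offset in the high byte, number of maps in the low byte.
    assert 0 <= func_offset <= BYTE_MASK and 0 <= num_maps <= BYTE_MASK
    return (func_offset << FUNC_SHIFT) | num_maps

def unpack(oparg):
    return (oparg >> FUNC_SHIFT) & BYTE_MASK, oparg & BYTE_MASK

oparg = pack(3, 2)
print(oparg)          # 770
print(unpack(oparg))  # (3, 2)
```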
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: The stack will have the function, then any number of positional arguments, then optionally an *args, then any number (>= 2) of maps to unpack. To get to the function, you need to know the sum count of all of these. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: I think I've fixed the memory leaks (plural). There were also a host of other problems with the _UNPACK opcodes in ceval. Here are the things I remember fixing, although I think I did slightly more:

- Not throwing an error when PyDict_New or PyDict_Update fails.
- Not doing Py_DECREF on stack items being popped.
- Not checking if intersection is non-NULL.
- Not doing Py_DECREF on intersection.

Now the primary problem is giving good errors; I don't know how to make them look like they came from the function call. I'm not sure I want to, either, since these opcodes are used elsewhere. I do need to check something about this (what requirements are there on how you leave the stack when you goto error?), but that's an easy fix if my current guess isn't right. -- Added file: http://bugs.python.org/file37811/starunpack13.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment:

> The _UNPACK opcodes are new in this changelist.

Yup, but they're used in the other unpacking syntax too: (*(1, 2, 3), *(4, 5, 6)) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: According to the standard, int can be only 16 bits long so that only leaves 255/255. However, if the offset is on top of the dictionary count, this is easily enough to clear the limits for the maximum function size (worst case is a merge of 255 dicts with an offset of 1 or a merge of 2 dicts with an offset of 254). -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: Functions are already limited to 255 arguments, so I don't think so. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: We wouldn't want to replace STORE_MAP since that's used in dictionary comprehensions, but replacing BUILD_MAP with BUILD_MAP(n) sounds like a great idea. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: Good catch. CALL_FUNCTION seems to split its oparg into two to give it a positional-keyword pair, so this seems fine. I'd hope we can do the same thing; personally I would do:

    BUILD_MAP_UNPACK(position_of_function_in_stack_or_0 << 8 | number_to_pack)

This way if building for a function we can do the check *and* give good errors that match the ones raised from CALL_FUNCTION. When the top 8 bits are 0, we don't do checks. What do you think? Would dual-usage be too confusing? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: Some of the tests seemed to be failing simply because they were incorrect. This fixes that. -- Added file: http://bugs.python.org/file37806/starunpack12.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: The problem seems to be that with the removal of

    -        else if (TYPE(ch) == STAR) {
    -            vararg = ast_for_expr(c, CHILD(n, i+1));
    -            if (!vararg)
    -                return NULL;
    -            i++;
    -        }
    -        else if (TYPE(ch) == DOUBLESTAR) {
    -            kwarg = ast_for_expr(c, CHILD(n, i+1));
    -            if (!kwarg)
    -                return NULL;
    -            i++;
    -        }

the code will ignore any subnodes that aren't of type argument. However, the grammar still says

    arglist: (argument ',')* (argument [','] | '*' test [',' '**' test] | '**' test)

so this is incorrect. Here's an example of what you might get:

    inner(
        a,         # argument comma
        *bcd,      # star test comma
        e,         # argument comma
        f=6,       # argument comma
        **{g: 7},  # doublestar test comma
        h=8,       # argument comma
        **{i:9}    # doublestar test
    )

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: This was a rather minor fix; I basically moved from STORE_SUBSCR to STORE_MAP and fixed a BUILD_MAP opcode. -- Added file: http://bugs.python.org/file37795/starunpack7.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: Aye, I'd done so (see starunpack7.diff). It was the fuss to reapply it on top of your newer diff and making sure I'd read at least *some* of the devguide before barging on. Anyhow, here's another small fix to deal with the [*[0] for i in [0]] problem. I'm not nearly confident I can touch the grammar, though, so the other problems are out of my reach. -- Added file: http://bugs.python.org/file37798/starunpack8.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: I'm getting

    f(x=5, **{'x': 1}, **{'x': 3}, y=2)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: f() got multiple values for keyword argument 'x'

Which, as I understand, is the correct result. I'm using starunpack8.diff right now. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment:

> The problem with using STORE_MAP is you create a new dict for each keyword argument in that situation.

You don't; if you look at the disassembly for producing a built-in dict (dis.dis('{1:2, 2:3, 3:4}')) you'll see they use STORE_MAP too. STORE_MAP seems to just be the map equivalent of LIST_APPEND. I've done simple timings that show my version being faster... Unfortunately, it points out there is definitely a memory leak. This reproduces:

    def f(a): pass

    while True:
        f(**{}, a=1)

This goes for both patches 8 and 9. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: I think I've got it working; I'm just working out how to make a patch and adding a test or two. I think I'll also need to sign the contributor agreement. While I'm at it, here are a few other deviations from the PEP:

- {*()} and {**{}} aren't supported
- [*[0] for i in [0]] gives a SystemError
- return *(1, 2, 3), fails whilst *(1, 2, 3), succeeds

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
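For context, the first and third deviations were resolved before release; on any CPython with PEP 448 (3.5+) these forms work, while starred expressions in comprehensions stayed disallowed:

```python
# Empty unpackings parse and build the expected empty containers.
print({*()} == set())   # True
print({**{}} == {})     # True

# A bare starred tuple display in a return statement.
def g():
    return *(1, 2, 3),

print(g())                         # (1, 2, 3)
print((*(1, 2, 3), *(4, 5, 6)))    # (1, 2, 3, 4, 5, 6)
```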
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: 2 here as well:

     15 LOAD_CONST               2 ('w')
     18 LOAD_CONST               3 (1)
     21 BUILD_MAP                1
     24 LOAD_CONST               4 (2)
     27 LOAD_CONST               5 ('x')
     30 STORE_MAP
     31 BUILD_MAP                1
     34 LOAD_CONST               6 (3)
     37 LOAD_CONST               7 ('y')
     40 STORE_MAP
     41 LOAD_CONST               8 (4)
     44 LOAD_CONST               9 ('z')
     47 STORE_MAP
     48 LOAD_CONST              10 (5)
     51 LOAD_CONST              11 ('r')
     54 STORE_MAP
     55 BUILD_MAP_UNPACK         2

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
Re: Trees
On 20 January 2015 at 04:21, Dan Stromberg <drsali...@gmail.com> wrote:
> On Mon, Jan 19, 2015 at 6:46 PM, Mark Lawrence <breamore...@yahoo.co.uk> wrote:
>> I don't know if you've seen this http://kmike.ru/python-data-structures/ but maybe of interest.
> I've seen it. It's a nice page. I attempted to get my treap port in there since it has a Cython version, but it didn't seem to take. I've mostly focused on pure python that runs on CPython 2.x, CPython 3.x, Pypy, Pypy3 and Jython.

http://www.grantjenks.com/docs/sortedcontainers/

SortedContainers is seriously great stuff; faster and lower memory than the Cython variants yet pure Python and more featureful. -- https://mail.python.org/mailman/listinfo/python-list
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: I take it back; that just causes

    f(**{}, c=2)
    XXX lineno: 1, opcode: 105
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    SystemError: unknown opcode

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: This causes a segmentation fault if any keyword arguments come after a **-unpack. Minimal demo: f(**x, x=x) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
[issue2292] Missing *-unpacking generalizations
Joshua Landau added the comment: Just change

    if (!PyUnicode_Compare(tmp, key)) {

when iterating over prior keyword arguments to

    if (tmp && !PyUnicode_Compare(tmp, key)) {

since tmp (the argument's name) can now be NULL. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2292 ___
Re: jitpy - Library to embed PyPy into CPython
On 7 December 2014 at 14:31, Albert-Jan Roskam <fo...@yahoo.com.dmarc.invalid> wrote:
>> On Sun, Dec 7, 2014 11:06 AM CET Stefan Behnel wrote:
>> I think this is trying to position PyPy more in the same corner as other JIT compilers for CPython, as opposed to keeping it a completely separate thing which suffers from being not CPython. It's a huge dependency, but so are others.
> You mean like psyco? Well, if implementation differences between cpython and pypy are a problem, it might be useful. I've only come across a few unimportant ones. But then, I never reimplement __del__. http://pypy.readthedocs.org/en/latest/cpython_differences.html

Some libraries don't work on PyPy; SciPy for example. If you want to use SciPy but use PyPy where appropriate, this is a good bet. -- https://mail.python.org/mailman/listinfo/python-list
Re: Python handles globals badly.
On 3 December 2014 at 04:32, Skybuck Flying <skybuck2...@hotmail.com> wrote:
> I am still new at python and definetly don't feel comfortable with the object feature, though I did use it for these variables which are actually objects.

If you are being serious, please take into consideration that there is no way you are going to convince anyone on this list with the way you are asking. For the most part, people will be convinced when you show them something that would improve *their* life, not something that would improve yours. Being an open-source project led largely by volunteers, either you convince people on their terms or you go fork the project. To do the former, find an example in the standard library or some popular codebase and show how what you're suggesting could improve the code. If your example is good, you'll get support for the idea. If the example is bad, people will point out how better to approach this. -- https://mail.python.org/mailman/listinfo/python-list
Re: Python, C++ interaction
On 3 December 2014 at 08:29, Michael Kreim <mich...@perfect-kreim.de> wrote:
> What are you using to wrap C++ classes for Python? Can you recommend swig? Should I give it another try? Did I misunderstood ctypes?

The PyPy guys would love it if you used CFFI. Cython is also a wonderful approach. There's a lot of support for Cython, but it's also possible to find comparisons in CFFI's favour: http://eev.ee/blog/2013/09/13/cython-versus-cffi/ I've not actually used CFFI, but I've heard good things about it.

> Also later on in our code development we would like to pass python lambda functions to C++. So far I understood that this seems not to be possible. Do you have any ideas or hints on this topics?

This is definitely possible. You'll need to build C++ against CPython and use the Python C API. I've personally found that Boost::Python simplified this dramatically and IMHO if most of your code needs to use Python types directly from C++ I'd scrap Cython altogether and just use Boost. Here's some stuff on callbacks with CFFI: https://cffi.readthedocs.org/en/release-0.5/#callbacks -- https://mail.python.org/mailman/listinfo/python-list
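As a stdlib-only illustration of handing a Python lambda to C code (the same idea CFFI callbacks build on), here is the classic ctypes + libc qsort sketch. It assumes a Unix-like system where libc and its qsort are loadable:

```python
import ctypes
from ctypes.util import find_library

# Fall back to the main program's symbols if find_library fails.
libc = ctypes.CDLL(find_library("c") or None)

# qsort's comparator type: int (*cmp)(const void *, const void *)
CMPFUNC = ctypes.CFUNCTYPE(ctypes.c_int,
                           ctypes.POINTER(ctypes.c_int),
                           ctypes.POINTER(ctypes.c_int))

values = (ctypes.c_int * 5)(5, 1, 4, 2, 3)
# The Python lambda is wrapped into a C-callable function pointer.
compare = CMPFUNC(lambda a, b: a[0] - b[0])

libc.qsort(values, len(values), ctypes.sizeof(ctypes.c_int), compare)
print(list(values))  # [1, 2, 3, 4, 5]
```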
Re: What for -- for? (was A bug?)
On 29 October 2014 03:22, Rustom Mody <rustompm...@gmail.com> wrote:
> Yesterday I was trying to introduce python to some senior computer scientists. Tried showing a comprehension-based dir-walker vs a for-loop based one:
>
>     def dw(p):
>         if isfile(p): return [p]
>         else: return [p] + [c for f in listdir(p) for c in dw(p+'/'+f)]
>
> ... Comment to me: "Well this is neat and compact, but it does not add anything fundamental (over usual index based for-loops)". I tried to say that 'for' over general sequences is quite different and significantly more powerful than C/Pascal for over indexes + explicit indexing.

If you really want to show the generality of iteration, I suggest you start with iterators:

    def walk(path):
        yield path
        if isdir(path):
            for name in iterdir(path):
                for file in walk(path + "/" + name):
                    yield file

This is fundamentally inexpressible with indexes. It also lends itself to expressing delegation (e.g. yield from walk(path + "/" + name)). -- https://mail.python.org/mailman/listinfo/python-list
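A runnable version of that generator sketch using os in place of the assumed isdir/iterdir helpers, exercised against a throwaway directory tree:

```python
import os
import tempfile

def walk(path):
    # Yield the path itself, then recurse into directory children.
    yield path
    if os.path.isdir(path):
        for name in sorted(os.listdir(path)):
            yield from walk(os.path.join(path, name))

with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "pkg"))
    open(os.path.join(root, "pkg", "mod.py"), "w").close()
    seen = [os.path.relpath(p, root) for p in walk(root)]

print(seen)  # ['.', 'pkg', 'pkg/mod.py'] on POSIX
```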
Re: A bug?
On 28 October 2014 00:36, Denis McMahon <denismfmcma...@gmail.com> wrote:
> d = [[list(range(1,13))[i*3+j] for j in range(3)] for i in range(4)]

A quick note. Ranges (even 2.7's xrange) are all indexable. The cast to a list isn't needed. -- https://mail.python.org/mailman/listinfo/python-list
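The point is quick to verify; ranges support indexing and slicing directly:

```python
r = range(1, 13)

# No list() cast needed: ranges index and slice lazily.
print(r[5])          # 6
print(r[3:6])        # range(4, 7)
print(list(r[3:6]))  # [4, 5, 6]
```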
Re: id == vs is
On 27 October 2014 00:12, Dan Stromberg drsali...@gmail.com wrote: Are the following two expressions the same? x is y; id(x) == id(y) Much of the time, but not all the time. The obvious exception is if id is redefined, but that one's kind of boring. The real thing to watch out for is if the object that x points to is garbage collected before y is evaluated:

    nothing = "!"
    id("hello" + nothing) == id("hello" + nothing)  # True
    ("hello" + nothing) is ("hello" + nothing)  # False

Since in the first case the ("hello" + nothing) gets garbage collected, CPython is allowed to re-use its id. If instead you assign them outside of the expression:

    nothing = "!"
    x = "hello" + nothing
    y = "hello" + nothing
    id(x) == id(y)  # False

the collection cannot happen. Note that in this case CPython is allowed to deduplicate these strings anyway (although in this case it does not), so using is here is not safe. -- https://mail.python.org/mailman/listinfo/python-list
Re: (test) ? a:b
On 26 October 2014 01:03, Ben Finney ben+pyt...@benfinney.id.au wrote: Steven D'Aprano steve+comp.lang.pyt...@pearwood.info writes: I suspect that Guido and the core developers disagree with you, since they had the opportunity to fix that in Python 3 and didn't. That doesn't follow; there are numerous warts in Python 2 that were not fixed in Python 3. As I understand it, the preservation of bool–int equality has more to do with preserving backward compatibility. Guido van Rossum answered Jul 28 '11 at 21:20, http://stackoverflow.com/questions/3174392/is-it-pythonic-to-use-bools-as-ints False==0 and True==1, and there's nothing wrong with that. -- https://mail.python.org/mailman/listinfo/python-list
Re: (test) ? a:b
On 27 October 2014 02:28, Ben Finney ben+pyt...@benfinney.id.au wrote: Joshua Landau jos...@landau.ws writes: Guido van Rossum answered Jul 28 '11 at 21:20, http://stackoverflow.com/questions/3174392/is-it-pythonic-to-use-bools-as-ints False==0 and True==1, and there's nothing wrong with that. Guido is incorrect. I've already stated what's wrong. You were arguing about what Guido thinks. I'm pretty sure Guido gets first say in that, regardless of whether anyone agrees with him. Regardless, I feel you're making this out as a black and white issue. Guido isn't incorrect, he just has a different opinion. Designing a language and calling things wrong or right gets you Haskell. You can discuss the advantages of each approach without drawing lines in the sand. Although if you do want a language like Haskell, there are a few great choices to choose from. -- https://mail.python.org/mailman/listinfo/python-list
Re: How do I check if a string is a prefix of any possible other string that matches a given regex.
On 7 October 2014 17:15, jonathan.slend...@gmail.com wrote: Probably I'm turning the use of regular expressions upside down with this question. I don't want to write a regex that matches prefixes of other strings, I know how to do that. I want to generate a regex -- given another regex --, that matches all possible strings that are a prefix of a string that matches the given regex. [...] Logically, I'd think it should be possible by running the input string against the state machine that the given regex describes, and if at some point all the input characters are consumed, it's a match. (We don't have to run the regex until the end.) But I cannot find any library that does it... How wide a net are you counting regular expressions to be? What grammar are you using? -- https://mail.python.org/mailman/listinfo/python-list
[issue22451] filtertuple, filterstring and filterunicode don't have optimization for PyBool_Type
Joshua Landau added the comment: That sounds OK to me. It's a bit of a non-issue once you know about it. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue22451 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue22451] filtertuple, filterstring and filterunicode don't have optimization for PyBool_Type
New submission from Joshua Landau: All code referred to is from bltinmodule.c, Python 2.7.8: https://github.com/python/cpython/blob/2.7/Python/bltinmodule.c filter implements an optimization for PyBool_Type, making it equivalent to Py_None: # Line 303 if (func == (PyObject *)PyBool_Type || func == Py_None) The specializations for tuples, byte strings and unicode don't have this: # Lines 2776, 2827, 2956, 2976 if (func == Py_None) This is a damper against recommending `filter(bool, ...)`. --- Python 3 of course does not have these specializations, so has no bug. -- components: Library (Lib) messages: 227199 nosy: Joshua.Landau priority: normal severity: normal status: open title: filtertuple, filterstring and filterunicode don't have optimization for PyBool_Type type: performance versions: Python 2.7 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue22451 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue22451] filtertuple, filterstring and filterunicode don't have optimization for PyBool_Type
Joshua Landau added the comment: It's solely a speed thing. I think it was an oversight that the optimisation was only applied to lists. I didn't expect the optimisation to break when applied to tuples. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue22451 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
Re: We made from water every living thing...
On 8 September 2014 12:54, David H. Lipman DLipman~nospam~@verizon.net wrote: From: Ned Batchelder n...@nedbatchelder.com On 9/7/14 5:41 PM, Tony the Tiger wrote: Now, kindly get the fuck outta here, you fucking retard! That was unnecessary, ineffective, and totally outside the bounds of this community's norms: http://www.python.org/psf/codeofconduct The Python.Org CoC does not extend to Usenet unless a FAQ is properly published for news:comp.lang.python I don't think allowing people to be disrespectful because they accessed the forum in a different way is a good idea. I'd rather we all just be nice. -- https://mail.python.org/mailman/listinfo/python-list
Re: How to turn a string into a list of integers?
On 3 September 2014 15:48, c...@isbd.net wrote: Peter Otten __pete...@web.de wrote: [ord(c) for c in This is a string] [84, 104, 105, 115, 32, 105, 115, 32, 97, 32, 115, 116, 114, 105, 110, 103] There are other ways, but you have to describe the use case and your Python version for us to recommend the most appropriate. That looks OK to me. It's just for outputting a string to the block write command in python-smbus which expects an integer array. Just be careful about Unicode characters. -- https://mail.python.org/mailman/listinfo/python-list
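A short sketch of the Unicode caveat (the "café"/"€" strings are illustrative): ord() of a non-ASCII character can exceed 255, which a single-byte SMBus write cannot accept, so encoding first is the safe route.

```python
text = "café"
print([ord(c) for c in text])        # [99, 97, 102, 233] -- 233 still fits a byte
print(ord("€"))                      # 8364 -- too big for one byte
# Encoding first always yields values in range(256):
print(list(text.encode("utf-8")))    # [99, 97, 102, 195, 169]
```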
Re: Working with decimals
On 23 August 2014 23:53, Chris Angelico ros...@gmail.com wrote: On Sun, Aug 24, 2014 at 8:47 AM, Joshua Landau jos...@landau.ws wrote: On 23 August 2014 23:31, Chris Angelico ros...@gmail.com wrote: I'd say "never" is too strong (there are times when it's right to put an import inside a function), but yes, in this case it should really be at the top of the function. But do any of them apply to "import math"? Yep. If you have only one function that will ever use it, and that function often won't ever be called, then putting the import inside the function speeds up startup. Anything that cuts down on I/O can give a dramatic performance improvement. python -c "import time; a = time.time(); import math; b = time.time(); print(b-a)" 0.0005981922149658203 *squints eyes* Is math not already imported by start-up? However, you won't need the import at all if you let the formatting function do the rounding for you. Can that floor? I'm not sure, dig into the format spec and see! FWIW, I haven't seen something that does so. -- https://mail.python.org/mailman/listinfo/python-list
Re: Global indent
On 23 August 2014 22:55, Rustom Mody rustompm...@gmail.com wrote: On Sunday, August 24, 2014 2:27:56 AM UTC+5:30, Joshua Landau wrote: Ay, so is any editor with an API. I use Sublime mostly because it's pretty, fast and has a Python-based API. The only actual feature it has that some others don't is multiple selections, and even then a lot do. You mean this? http://emacsrocks.com/e13.html Yup. -- https://mail.python.org/mailman/listinfo/python-list
Re: Working with decimals
On 24 August 2014 20:19, Ian Kelly ian.g.ke...@gmail.com wrote: On Sun, Aug 24, 2014 at 1:17 PM, Ian Kelly ian.g.ke...@gmail.com wrote: On Sun, Aug 24, 2014 at 1:12 PM, Joshua Landau jos...@landau.ws wrote: Is math not already imported by start-up? Why would it be? It's easy to check, by the way: $ python -c "import sys; print(sys.modules['math'])" I don't mean into the global namespace, but imported by other modules (like the builtins) and thus cached, making instantiation trivial. -- https://mail.python.org/mailman/listinfo/python-list
Re: Working with decimals
On 24 August 2014 20:25, Joshua Landau jos...@landau.ws wrote: On 24 August 2014 20:19, Ian Kelly ian.g.ke...@gmail.com wrote: On Sun, Aug 24, 2014 at 1:17 PM, Ian Kelly ian.g.ke...@gmail.com wrote: On Sun, Aug 24, 2014 at 1:12 PM, Joshua Landau jos...@landau.ws wrote: Is math not already imported by start-up? I don't mean into the global namespace, but imported by other modules (like the builtins) and thus cached, making instantiation trivial. Although it doesn't seem to be:

    python -c "import sys; print('math' in sys.modules)"
    False

An even easier check:

    python -c "import time; a = time.time(); import math; b = time.time(); print(b-a)"
    0.0006012916564941406
    python -c "import math, time; a = time.time(); import math; b = time.time(); print(b-a)"
    9.5367431640625e-06

I guess I'm just pessimistic. Even so, that's not much reason to hide the import inside a function. -- https://mail.python.org/mailman/listinfo/python-list
Re: Working with decimals
On 24 August 2014 20:40, Ian Kelly ian.g.ke...@gmail.com wrote: That's the same check I posted, just using the in operator instead of a straight lookup and raising an error. I think I need to take a break from the internet. This is the second time in as many threads that I've responded with what I'm commenting on. *sigh* -- https://mail.python.org/mailman/listinfo/python-list
Re: Global indent
(Since this is already an editor war...) On 23 August 2014 10:41, Christian Gollwitzer aurio...@gmx.de wrote: Sometimes I impress my colleagues with what they call magic, i.e. creating special repeated lists of numbers by a few keystrokes in gvim, and that has triggered the request from them to learn a bit of (g)vim. I have yet to be truly impressed by Vim, in that Sublime Text with a few extensions seems to do the same things just as easily. I find that Vim and Emacs users consistently underrate the powers of these editors, presumably because they've never put nearly as much effort into them as they have into their Vim or Emacs. For example, to make a numbered list in (my) Sublime Text (fully custom shortcuts ahead):

    Alt-1 Alt-0 Alt-0    to repeat the next command 100 times
    Enter                to insert 100 new lines, so 101 in total
    Ctrl-A               to select all text (can be done more fancily, but keep this simple for now)
    Ctrl-l               to select lines (creates multiple selections), ignoring the blank end of selection
    $:                   to write some text
    Ctrl-Shift-Home      to select to beginning of line
    Ctrl-e               to replace $ with consecutive numbers (also supports using Python's {} with all of its formatting options)

With an increment function and macros:

    1:                   to write some text
    Ctrl-q               to start macro recording
    Ctrl-d               to duplicate line (and select it)
    Left                 to go to start of selection
    INCREMENT            to increment number (emulated by evaluating 1+number with Python [1, +, Ctrl-left, Shift-Home, Ctrl-Shift-e])
    Ctrl-q               to finish macro recording
    Alt-1 Alt-0 Alt-0    to repeat the next command 100 times
    Ctrl-Shift-q         to repeat macro

Compare with Vim: http://stackoverflow.com/questions/4224410/macro-for-making-numbered-lists-in-vim -- https://mail.python.org/mailman/listinfo/python-list
Re: Global indent
On 23 August 2014 17:17, Christian Gollwitzer aurio...@gmx.de wrote: Am 23.08.14 16:19, schrieb Joshua Landau: On 23 August 2014 10:41, Christian Gollwitzer aurio...@gmx.de wrote: Sometimes I impress my colleagues with what they call magic, i.e. creating special repeated lists of numbers by a few keystrokes in gvim, and that has triggered the request from them to learn a bit of (g)vim. I have yet to be truly impressed by Vim, in that Sublime Text with a few extensions seems to do the same things just as easily. I find that Vim and Emacs users consistently underrate the powers of these editors, presumably because they've never put nearly as much effort into them as they have into their Vim or Emacs. I never looked into Sublime, because it costs money. But no doubt it is a powerful editor, judging from comments of other people. Ay, so is any editor with an API. I use Sublime mostly because it's pretty, fast and has a Python-based API. The only actual feature it has that some others don't is multiple selections, and even then a lot do. My point is more about how using Emacs or Vim and having a powerful editor is mostly the symptom of the same thing, not a causal relation. For example, to make a numbered list in (my) Sublime Text (fully custom shortcuts ahead): [ ... some keystrokes ...] I'd actually do this in gvim to put numbers at each line: - Select text (by mouse, or v + cursor movements) - ! awk '{print NR". "$0}' Yes, it is cheating, it pipes the selected text through an external tool. But why should I do the tedious exercise of constructing an editor macro, when an external tool like awk can do the same so much easier? Because it normally happens more like this:

    Move       to copy something that I wish to postfix with a number
    Ctrl-d     a few times to select copies of that fragment
    Write $    and select it
    Ctrl-e     to turn $s into numbers

Luckily that one doesn't happen too often either because numbering things sequentially is better left to loops.
The key binding is primarily used for evaluating snippets of code inline. -- https://mail.python.org/mailman/listinfo/python-list
Re: Working with decimals
On 23 August 2014 18:47, Seymore4Head Seymore4Head@hotmail.invalid wrote: Anyone care to suggest what method to use to fix the decimal format? It sounds like you want a primer on floating point. The documentation of the decimal module is actually a good read, although I don't doubt there are even better resources somewhere: https://docs.python.org/3/library/decimal.html Note that you probably also want to use the decimal module, seeing as it's good at storing decimals. Finally, look at moneyfmt in the decimal docs: https://docs.python.org/3/library/decimal.html#recipes -- https://mail.python.org/mailman/listinfo/python-list
Re: Working with decimals
On 23 August 2014 22:13, Seymore4Head Seymore4Head@hotmail.invalid wrote:

    def make_it_money(number):
        import math
        return '$' + str(format(math.floor(number * 100) / 100, ',.2f'))

So for one, "import math" should never go inside a function; you should hoist it to the top of the file with all the other imports. You then have

    def make_it_money(number):
        return '$' + str(format(math.floor(number * 100) / 100, ',.2f'))

Consider the '$' + STUFF. This takes your formatted string (something like '12.43') and adds a $ to the front. So then consider

    str(format(math.floor(number * 100) / 100, ',.2f'))

The first thing to note is that format is defined like so:

    help(format)
    # Help on built-in function format in module builtins:
    #
    # format(...)
    #     format(value[, format_spec]) -> string
    #
    #     Returns value.__format__(format_spec)
    #     format_spec defaults to ""

format returns a string, so the str call is unneeded. You then consider that format takes two arguments: math.floor(number * 100) / 100 and ',.2f'. Looking at the (well hidden ;P) documentation (https://docs.python.org/3/library/string.html#formatspec) you find: "The ',' option signals the use of a comma for a thousands separator. For a locale aware separator, use the 'n' integer presentation type instead." and "The precision is a decimal number indicating how many digits should be displayed after the decimal point for a floating point value formatted with 'f' and 'F', or before and after the decimal point for a floating point value formatted with 'g' or 'G'." So this says two decimal places with a comma separator. Then consider

    math.floor(number * 100) / 100

This takes a number, say 12345.6789, multiplies it by 100, to say 1234567.89, floors it, to say 1234567 and then divides by 100, to say, 12345.67. In other words it floors to two decimal places. The one thing to note is that binary floating point doesn't divide exactly by 100, so this might not actually give a perfect answer. It'll probably be good enough for your purposes though.
-- https://mail.python.org/mailman/listinfo/python-list
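The floor-then-divide approach explained in the message above can also be written with the decimal module, which sidesteps the binary-float caveat; a hedged sketch (the function name mirrors the one in the thread, but the Decimal approach is a suggestion, not the original code):

```python
from decimal import Decimal, ROUND_FLOOR

def make_it_money(number):
    # Convert via str() so the Decimal sees the number as written, then
    # truncate toward negative infinity at two places -- the Decimal
    # analogue of math.floor(number * 100) / 100.
    d = Decimal(str(number)).quantize(Decimal("0.01"), rounding=ROUND_FLOOR)
    # Decimal supports the same format spec, so ',.2f' still works:
    return "$" + format(d, ",.2f")

print(make_it_money(12345.6789))  # $12,345.67
```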
Re: Working with decimals
On 23 August 2014 23:31, Chris Angelico ros...@gmail.com wrote: On Sun, Aug 24, 2014 at 7:47 AM, Joshua Landau jos...@landau.ws wrote: So for one import math should never go inside a function; you should hoist it to the top of the file with all the other imports. I'd say never is too strong (there are times when it's right to put an import inside a function), but yes, in this case it should really be at the top of the function. But do any of them apply to import math? However, you won't need the import at all if you let the formatting function do the rounding for you. Can that floor? -- https://mail.python.org/mailman/listinfo/python-list
Re: Python 3 is killing Python
On 15 July 2014 23:40, Abhiram R abhi.darkn...@gmail.com wrote: On Wed, Jul 16, 2014 at 4:00 AM, Kevin Walzer k...@codebykevin.com wrote: ...but Unix/newsgroup ettiquette says that it's gauche to [top post], because it presents an unacceptable cognitive burden to the user trying to catch the context of the thread by forcing them to read your reply first, before they read the preceding quoted comments. Aah. Understood. Apologies for the noobishness :) Also heinous is the crime of not trimming. A post should contain all of the context needed to understand the reply, in order, and nothing more. -- https://mail.python.org/mailman/listinfo/python-list
Re: OT: This Swift thing
On 12 June 2014 03:08, Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote: We know *much more* about generating energy from E = mc^2 than we know about optimally flipping bits: our nuclear reactions convert something of the order of 0.1% of their fuel to energy, that is, to get a certain yield, we merely have to supply about a thousand times more fuel than we theoretically needed. That's about a thousand times better than the efficiency of current bit-flipping technology. You're comparing a one-use device to a trillion-use device. I think that's unfair. Tell me when you find an atom splitter that works a trillion times. Then tell me what its efficiency is, because it's not nearly 0.1%. -- https://mail.python.org/mailman/listinfo/python-list
Re: try/except/finally
On 8 June 2014 08:12, Marko Rauhamaa ma...@pacujo.net wrote: Does anyone have an example motivating a return from finally? It seems to me it would always be a bad idea as it silently clears all unexpected exceptions. In a general sense:

    try:
        something_that_can_break()
        return foo()  # before clean_up
    finally:
        clean_up()
        if default:
            return default()  # after clean_up()

What's the best replacement? Note: I've never done this. --- I do sometimes use

    try:
        return x
    finally:
        x += 1

over

    ret = x
    x += 1
    return ret

now-a-days. -- https://mail.python.org/mailman/listinfo/python-list
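A minimal runnable illustration of the "silently clears all unexpected exceptions" point raised above:

```python
def swallow():
    try:
        raise RuntimeError("unexpected failure")
    finally:
        # A return inside finally discards the in-flight exception
        # entirely -- the caller never sees the RuntimeError:
        return "default"

print(swallow())  # default
```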
Re: try/except/finally
On 6 June 2014 18:39, Roy Smith r...@panix.com wrote: The only way I can think of to bypass a finally block would be to call os._exit(), or send yourself a kill signal. If you're willing to use implementation details...

---

    # BreakN.py
    import sys

    # Turn tracing on if it is off
    if sys.gettrace() is None:
        sys.settrace(lambda frame, event, arg: None)

    def break_n(n):
        frame = sys._getframe().f_back
        for _ in range(n):
            frame.f_trace = skip_function_tracer
            frame = frame.f_back

    def skip_function_tracer(frame, event, arg):
        try:
            # Skip this line
            while True:
                frame.f_lineno += 1
        except ValueError as e:
            # Finished tracing function; remove trace
            pass

---

    # Thing_you_run.py
    from BreakN import break_n

    def foo():
        try:
            print("I am not skipped")
            break_n(1)
            print("I am skipped")
            ...
        finally:
            print("I am skipped")
            ...

    foo()  # I am not skipped

-- https://mail.python.org/mailman/listinfo/python-list
Re: Unicode and Python - how often do you index strings?
On 4 June 2014 15:50, Michael Torrie torr...@gmail.com wrote: On 06/04/2014 12:50 AM, wxjmfa...@gmail.com wrote: [Things] [Reply to things] Please. Just don't. -- https://mail.python.org/mailman/listinfo/python-list
[issue21642] _ if 1else _ does not compile
Joshua Landau added the comment: Here's a minimal example of the difference:

    1e
    # ... etc ...
    # SyntaxError: invalid token

    1t
    # ... etc ...
    # SyntaxError: invalid syntax

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21642 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21642] _ if 1else _ does not compile
New submission from Joshua Landau: By the docs, "Except at the beginning of a logical line or in string literals, the whitespace characters space, tab and formfeed can be used interchangeably to separate tokens. Whitespace is needed between two tokens only if their concatenation could otherwise be interpreted as a different token (e.g., ab is one token, but a b is two tokens)." "_ if 1else _" should compile equivalently to "_ if 1 else _". The tokenize module does this correctly:

    import io
    import tokenize

    def print_tokens(string):
        tokens = tokenize.tokenize(io.BytesIO(string.encode("utf8")).readline)
        for token in tokens:
            print("{:12}{}".format(tokenize.tok_name[token.type], token.string))

    print_tokens("_ if 1else _")
    # ENCODING    utf-8
    # NAME        _
    # NAME        if
    # NUMBER      1
    # NAME        else
    # NAME        _
    # ENDMARKER

but it fails when compiled with, say, compile, eval or ast.parse.

    import ast

    compile("_ if 1else _", "<string>", "eval")
    # Traceback (most recent call last):
    #   File "<stdin>", line 32, in <module>
    #   File "<string>", line 1
    #     _ if 1else _
    #           ^
    # SyntaxError: invalid token

    eval("_ if 1else _")
    # Traceback (most recent call last):
    #   File "<stdin>", line 40, in <module>
    #   File "<string>", line 1
    #     _ if 1else _
    #           ^
    # SyntaxError: invalid token

    ast.parse("_ if 1else _")
    # Traceback (most recent call last):
    #   File "<stdin>", line 48, in <module>
    #   File "/usr/lib/python3.4/ast.py", line 35, in parse
    #     return compile(source, filename, mode, PyCF_ONLY_AST)
    #   File "<unknown>", line 1
    #     _ if 1else _
    #           ^
    # SyntaxError: invalid token

Further, some other forms work:

    1 if 0b1else 0  # 1
    1 if 1jelse 0   # 1

See http://stackoverflow.com/questions/23998026/why-isnt-this-a-syntax-error-in-python particularly, http://stackoverflow.com/a/23998128/1763356 for details.
-- messages: 219614 nosy: Joshua.Landau priority: normal severity: normal status: open title: _ if 1else _ does not compile type: compile error versions: Python 3.4 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21642 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21593] Clarify re.search documentation first match
New submission from Joshua Landau: The documentation for re.search does not state that it returns the first match. This should be added, or a clarification added if this is implementation-defined. https://docs.python.org/3/library/re.html#re.search --- See also http://stackoverflow.com/questions/23906400/is-regular-expression-search-guaranteed-to-return-first-match -- assignee: docs@python components: Documentation messages: 219270 nosy: Joshua.Landau, docs@python priority: normal severity: normal status: open title: Clarify re.search documentation first match versions: Python 2.7, Python 3.1, Python 3.2, Python 3.3, Python 3.4, Python 3.5 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21593 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21587] Tabs in source
New submission from Joshua Landau: There are tabs in the source: http://hg.python.org/cpython/file/5c8d71516235/Include/listobject.h#l49 I don't really know, but this seems like a release blocker to me ;). -- messages: 219183 nosy: Joshua.Landau priority: normal severity: normal status: open title: Tabs in source ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21587 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21547] '!s' formatting documentation bug
New submission from Joshua Landau: In the docs for 2.x about the formatting syntax: https://docs.python.org/2/library/string.html#format-string-syntax it says Two conversion flags are currently supported: '!s' which calls str() on the value, and '!r' which calls repr(). but for unicode formatters, '!s' calls unicode() instead. See http://stackoverflow.com/questions/23773816/why-python-str-format-doesnt-call-str for the question that found this. -- assignee: docs@python components: Documentation messages: 218863 nosy: Joshua.Landau, docs@python priority: normal severity: normal status: open title: '!s' formatting documentation bug versions: Python 2.7 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21547 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
Re: Python Internet Database
On 9 May 2014 22:06, Chris Angelico ros...@gmail.com wrote: On Sat, May 10, 2014 at 6:45 AM, jun...@gmail.com wrote: 2 - Jit compiler for using from a web server. I mean, one has a web server running under Apache in a hosting service like Hostgator, Daddy Host or another inexpensive service. I decide to run a few applications in Racket, but the application requires number crunching. I install the Jit Racket in the hosting service, and call it from my dynamic generated page. My programs will run almost at the speed of optimised C. For number crunching, you can use the numpy library, which is highly efficient. For general JIT compilation of actual Python code, PyPy will do that. AFAIK there's no standard module for that, though. There's also Numba for JIT compilation of Numpy code inside CPython. -- https://mail.python.org/mailman/listinfo/python-list
Inconsistent viewkeys behaviour
Is there any reference for this strange behaviour on Python 2:

    >>> set() > dict().viewkeys()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: can only compare to a set
    >>> dict().viewkeys() > set()
    False

? -- https://mail.python.org/mailman/listinfo/python-list
Re: symple programming task
On 20 April 2014 20:27, Ivan Ivanivich ivriabt...@gmail.com wrote: thanks, i found the bag G'day. This [https://xkcd.com/979/] applies to threads ending in nvm, solved it too. I know the problem in your case isn't likely to be widely useful, but there are other benefits of pointing out what you've done. For example the list members can tell you if your solution misses anything. -- https://mail.python.org/mailman/listinfo/python-list
Re: Martijn Faassen: The Call of Python 2.8
On 15 April 2014 06:03, Marko Rauhamaa ma...@pacujo.net wrote: Terry Reedy tjre...@udel.edu: Any decent system should have 3.4 available now. Really, now? Which system is that? Arch is on 3.4 *default*.

    $ python
    Python 3.4.0 (default, Mar 17 2014, 23:20:09)
    [...]

-- https://mail.python.org/mailman/listinfo/python-list
Re: Martijn Faassen: The Call of Python 2.8
On 15 April 2014 23:18, Ned Batchelder n...@nedbatchelder.com wrote: On 4/15/14 5:34 PM, Joshua Landau wrote: Arch is on 3.4 *default*. $ python Python 3.4.0 (default, Mar 17 2014, 23:20:09) [...] Yeah, that's the wrong way to do it, and they shouldn't have done that. python needs to mean Python 2.x for a long time. Why? The only things that break are things outside of the official repos, and the vast majority of the user repository works flawlessly. If I get something from the source, I normally run it explicitly (python the_thing) and on the very rare occasion it breaks (when it's 2.x and uses python to mean python2) I can trivially patch or wrap it, and file a bug report. The python = python3 choice of Arch is not what takes up maintenance time, and it's good to prepare developers ahead of time. That's what rolling release is all about: getting the best and preparing the rest. -- https://mail.python.org/mailman/listinfo/python-list
Re: Martijn Faassen: The Call of Python 2.8
On 16 April 2014 01:42, Devin Jeanpierre jeanpierr...@gmail.com wrote: Yes. Software included in Arch, and programs installed via distutils, will both work correctly under Arch. [...] I don't like how Arch created a situation where it was impossible to support Arch and Debian at the same time with standalone Python 2.x programs (due to a missing python2 and differing python in Debian). Let the developers aim at Debian and other mainstream distros and Arch will clean it up for its own use. Isn't that how it normally works? This did, however, quickly result in python2 symlinks, which I think is extremely good in the long run to have ingrained in people's habits. I don't like how the migration was not communicated sufficiently clearly to users[*], so that when they saw weird Python errors, they came to the Python community instead of to Arch That's not expected Arch user behaviour ;). I don't like how their new and unusual executable naming scheme forced into existence a PEP [1] to figure out how to bring Python and Debian into line, and I don't like how Debian was forced to do extra work to make life easier for Python 2.x developers and resolve problems that only existed because of what Arch did. I don't agree entirely. Arch was early, perhaps earlier than reasonable, but python2 was going to be needed soon anyway, especially since it significantly aids adoption of the version-prepended names. It's worth stating clearly: there is actually no technical benefit to changing what the python symlink points to. If we want to do such a thing, it is for cultural reasons, and there is no urgency to it. It can be done over an extremely long period of time. This is Arch. The fact that it *can* be done over a long period of time falls far behind the cultural reasons in level of importance. 
[*] One might also ask why they didn't do a phase where python2 was python 2.x, python3 was python 3.x, and python was 2.x but also gave a warning to upgrade your stuff because the meaning of the symlink was changing. There is no good reason. The stated reason was that warnings are annoying -- so they broke everything instead of giving warnings. [2] [2] https://mail.python.org/pipermail/python-dev/2010-November/105299.html Thanks for the read; I found it rather entertaining. Apologies about the #python grief. I disagree with you about the warnings. Arch is made to move fast and this is made abundantly clear: @https://wiki.archlinux.org/index.php/The_Arch_Way This user-centric design necessarily implies a certain do-it-yourself approach to using the Arch distribution. Rather than pursuing assistance or requesting a new feature to be implemented by developers, Arch Linux users have a tendency to solve problems themselves and generously share the results with the community and development team – a do first, then ask philosophy. This is especially true for user-contributed packages found in the Arch User Repository – the official Arch Linux repository for community-maintained packages. If people want to run Arch but don't want the Arch way, then there's not much we can do about it. Arch isn't going to compromise its demographic because a different demographic is also using it. -- https://mail.python.org/mailman/listinfo/python-list
Re: python obfuscate
On 11 April 2014 02:29, Wesley nisp...@gmail.com wrote: Does python has any good obfuscate? Most other people on the list will point out why such a thing is mostly pointless and you don't really need it. However, if this really is your major blocker to using Python, I suggest compiling with Cython. There are downsides, but untyped Cython basically compiles the bytecode into C without actually changing the program, making compatibility really good. It's very difficult to reverse-engineer, largely because there aren't specialised tools to do it. But I do warn that it's adding another abstracting step that doesn't improve - it probably harms - the overall usability of the product. Further, a determined hacker can circumvent it, much as they can circumvent everything else. -- https://mail.python.org/mailman/listinfo/python-list
Re: python obfuscate
On 11 April 2014 10:17, Sturla Molden sturla.mol...@gmail.com wrote: Joshua Landau jos...@landau.ws wrote: However, if this really is your major blocker to using Python, I suggest compiling with Cython. Cython retains all the code as text, e.g. to generate readable exceptions. Users can also still steal the extension modules and use them in their own code. In general, Cython is not useful as an obfuscation tool. Ah, thanks for the info. I imagine it's perfectly easy to get around that, though, through basic removal at the C phase. I doubt it's worthwhile doing so, but deobfuscation will still be harder than a .pyc. -- https://mail.python.org/mailman/listinfo/python-list
Re: Balanced trees
On 18 March 2014 01:01, Daniel Stutzbach stutzb...@google.com wrote: I would love to include macro-benchmarks. I keep waiting for the PyPy benchmark suite to get ported to Python 3... *grins* "Delete a slice" is fudged by its inclusion of multiplication, which is far faster on blists. I admit that it's not obvious how to fix this. I could move the initialization into the timed part, similar to what I did for sort (see below). That has downsides too, of course, but it might be an improvement. You could try making a baseline and subtracting it: `timer("del x[len(x)//4:3*len(x)//4]; x *= 2") - timer("x * 2")`. Not ideal, but closer, assuming that the multiplication isn't much larger than the deletion. Error would be summed. "Sort *" are really unfair because they put initialisation in the timed part. That's a limitation of timeit. The setup step is only executed once. If I put the initialization there, every sort after the first one would be sorting a pre-sorted list. If you compare "Create from an iterator" and "Sort a random list", you'll see that the initialization cost is dwarfed by the sorting cost for n > 15 or so. This argument is slightly less convincing without the overhead of the keys. It might be worth doing a subtraction and adding some error-bars as I suggest above. Nevertheless, I do agree for n > some small n, which is all that matters anyway. ...and all have keys. If you use classes with __lt__ methods instead of keys, the cost is dominated by the calls to __lt__. You're right that I should include both, though. This argument doesn't make sense to me. The only time this happens is when you have a non-primitive and your transformation gives a primitive which has optimised comparisons. This typically only happens when the key is a getitem or getattr, in which case it's just meaningless overhead. I see little reason to care about the key's cost in those cases. That's definitely a cache issue, which is always a risk with micro-benchmarks. 
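The baseline-subtraction idea can be sketched with the stdlib `timeit` module. A plain list and the sizes used here are illustrative stand-ins, not blist measurements:

```python
import timeit

# Illustrative sketch: time the compound statement, then subtract the
# time of the multiplication alone to estimate the deletion's cost.
# The pairing works because deleting half the list and then doubling
# it keeps len(x) stable across iterations.
setup = "x = list(range(1000))"
compound = timeit.timeit("del x[len(x)//4:3*len(x)//4]; x *= 2",
                         setup=setup, number=1000)
baseline = timeit.timeit("x * 2", setup=setup, number=1000)
estimated_delete_cost = compound - baseline
```

As the message notes, the errors of the two timings add together, so this only gives an estimate, and it assumes the multiplication cost is comparable in both runs.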
I agree it's more interesting to pick items randomly instead of always querying the same index. The overhead of choice() is kind of a problem, though. Since I'm only plotting up to 10**5, I'd expect these to look more or less flat. You could try jumping around to avoid the cache without using random numbers. Something like idx = (idx + LARGE_PRIME) % n might have less overhead. Further, the subtraction method would probably work fine for that. Also, I don't think the cache is all bad. Chances are a lot of list accesses have a lot of data locality. Thanks for all of the feedback. Thanks in turn for the module :). -- https://mail.python.org/mailman/listinfo/python-list
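The prime-stride suggestion above might look like the following sketch; the particular prime is an arbitrary choice for illustration:

```python
# Stepping by a prime that is coprime with n visits every index exactly
# once per cycle, defeating data-cache locality without the overhead of
# random.choice().
LARGE_PRIME = 15485863  # arbitrary large prime

def stride_indices(n, start=0):
    idx = start
    for _ in range(n):
        yield idx
        idx = (idx + LARGE_PRIME) % n

# Every index is hit once per full cycle when gcd(LARGE_PRIME, n) == 1:
assert sorted(stride_indices(1000)) == list(range(1000))
```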
Re: Balanced trees
On 17 March 2014 21:16, Daniel Stutzbach stutzb...@google.com wrote: On Fri, Mar 14, 2014 at 6:13 PM, Joshua Landau jos...@landau.ws wrote: Now, I understand there are downsides to blist. Particularly, I've looked through the benchmarks and they seem untruthful. I worked hard to make those benchmarks as fair as possible. I recognize that evaluating your own work always runs the risk of introducing hidden biases, and I welcome input on how they could be improved. Thanks. First, I want to state that there are two aspects to my claim. The first is that these benchmarks do not represent typical use-cases. I will not go too far into this, though, because it's mostly obvious. The second is the flaws in the benchmarks themselves. I'll go through in turn some that are apparent to me: "Create from an iterator" gives me relatively different results when I run it (Python 3). "Delete a slice" is fudged by its inclusion of multiplication, which is far faster on blists. I admit that it's not obvious how to fix this. "First in, first out (FIFO)" should be `x.append(0); x.pop(0)`. "Last in, first out (LIFO)" should use `pop()` over `pop(-1)`, although I admit it shouldn't make a meaningful difference. "Sort *" are really unfair because they put initialisation in the timed part and all have keys. The benchmarks on GitHub are less bad, but the website really should include all of them and fix the remaining problems. I do understand that TimSort isn't the most suited algorithm, though, so I won't read too far into these results. Further, some of these tests don't show growth where they should, such as in getitem. 
The growth is readily apparent when measured as such:

    python -m timeit -s "from random import choice; import blist; lst = blist.blist(range(10**0))" "choice(lst)"
    100 loops, best of 3: 1.18 usec per loop
    python -m timeit -s "from random import choice; import blist; lst = blist.blist(range(10**8))" "choice(lst)"
    100 loops, best of 3: 1.56 usec per loop

Lower size ranges are hidden by the function-call overhead. Perhaps this effect is to do with caching, in which case the limits of the cache should be explained more readily. Nevertheless, my enthusiasm for blist as an alternative stdlib implementation remains. There are obvious and large advantages to be had, sometimes when you wouldn't even expect. The slower aspects of blist are also rarely part of the bottlenecks of programs. So yeah, go for it. -- https://mail.python.org/mailman/listinfo/python-list
Re: Balanced trees
On 8 March 2014 20:37, Mark Lawrence breamore...@yahoo.co.uk wrote: I've found this link useful http://kmike.ru/python-data-structures/ I also don't want all sorts of data structures added to the Python library. I believe that there are advantages to leaving specialist data structures on pypi or other sites, plus it means Python in a Nutshell can still fit in your pocket and not a 40 ton articulated lorry, unlike the Java equivalent. The thing we really need is for the blist containers to become stdlib (but not to replace the current list implementation). The rejected PEP (http://legacy.python.org/dev/peps/pep-3128/) misses a few important points, largely in how the log(n) has a really large base: random.choice went from 1.2µs to 1.6µs from n=1 to n=10⁸, vs 1.2µs for a standard list. Further, it's worth considering a few advantages: * copy is O(1), allowing code to avoid mutation by just copying its input, which is good practice. * FIFO is effectively O(1), as the time just about doubles from n=1 to n=10⁸ so will never actually branch that much. There is still a speed benefit of collections.deque, but it's much, much less significant. This is very useful when considering usage as a multi-purpose data structure, and removes demand for explicit linked lists (which have foolishly been reimplemented loads of times). * It reduces demand for trees: * There are efficient implementations of sortedlist, sortedset and sorteddict. * Slicing, slice assignment and slice deletion are really fast. * Addition of lists is sublinear. Instead of list(itertools.chain(...)), one can add in a loop and end up *faster*. I think blist isn't very popular not because it isn't really good, but because it isn't a specialised structure. It is, however, almost there for almost every circumstance. This can help keep the standard library clean, especially of tree data structures. Here's what we kill: * Linked lists and doubly-linked lists, which are scarily popular for whatever reason. 
Sometimes people claim that collections.deque isn't powerful enough for whatever they want, and blist will almost definitely sate those cases. * Balanced trees, with blist.sortedlist. This is actually needed right now. * Poor performance in the cases where a lot of list merging and pruning happens. * Most uses of bisect. * Some instances where two data structures are used in parallel in order to keep performance fast on disparate operations (like `x in y` and `y[i]`). Now, I understand there are downsides to blist. Particularly, I've looked through the benchmarks and they seem untruthful. Further, we'd need a maintainer. Finally, nobody jumps at blists because they're rarely the obvious solution. Rather, they attempt to be a different general solution. Hopefully, though, a stdlib inclusion could make them a lot more obvious, and support in some current libraries could make them feel more at home. I don't know whether this is a good idea, but I do feel that it is more promising and general than having a graph in the standard library. -- https://mail.python.org/mailman/listinfo/python-list
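The "most uses of bisect" point above refers to the manual stdlib pattern that a built-in sortedlist would subsume. A sketch of that pattern (this is plain `bisect`, not blist's API):

```python
import bisect

# Keeping a plain list sorted by hand: each insert is O(n) because of
# the element shift, which is what blist.sortedlist is meant to avoid.
items = []
for value in [5, 1, 4, 2, 3]:
    bisect.insort(items, value)

assert items == [1, 2, 3, 4, 5]
# Rank and membership queries come from binary search:
assert bisect.bisect_left(items, 3) == 2
```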
Re: Tuples and immutability
On 28 February 2014 14:43, Chris Angelico ros...@gmail.com wrote: On Sat, Mar 1, 2014 at 1:41 AM, Joshua Landau jos...@landau.ws wrote: Would it be better to add a check here, such that if this gets raised to the top-level it includes a warning ("Addition was inplace; variable probably mutated despite assignment failure")? That'd require figuring out whether or not the variable was actually mutated, and that's pretty hard to work out. It does not. First, the warning is actually an attachment to the exception so is only shown if the exception is uncaught. This should basically never happen in working code. The warning exists only to remove likely misunderstanding in these odd cases. Even if `x = (1,); x[0] += 1` warned "addition was inplace; possible mutation occurred" or whatever phrasing you wish, this would only cause a quick check of the results. -- https://mail.python.org/mailman/listinfo/python-list
Re: Tuples and immutability
On 9 March 2014 18:13, Chris Angelico ros...@gmail.com wrote: I think I see what you're saying here. But ignore top-level; this should just be a part of the exception message, no matter what. I don't think I was clear, but yes. That. What you're saying is that this should notice that it's doing an augmented assignment and give some more text. This can be done; all you need to do is catch the error and reraise it with more info: [...] Now you can look at writing an import hook that does an AST transform, locating every instance of item assignment and wrapping it like that. It's certainly possible. I'm not sure how much benefit you'd get, but it could be done. I would probably implement it closer to home. Inside tuple.__getitem__, there would be something like

    if context_is_augmented_assignment():
        raise TypeError(message + warning)
    else:
        raise TypeError(message)

which would have much lower technical costs. It does depend on how costly context_is_augmented_assignment is, though. A speed argument is totally valid, but I'd hope it's possible to work around. -- https://mail.python.org/mailman/listinfo/python-list
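A minimal sketch of the catch-and-reraise approach discussed above; the helper name and wording are invented for illustration:

```python
# Hypothetical wrapper: re-raise the TypeError from a failed item
# assignment with a hint that an in-place mutation may have happened.
def augmented_setitem(obj, index, value):
    try:
        obj[index] = value
    except TypeError as e:
        raise TypeError(str(e) + " (addition was in-place; the element "
                                 "may have mutated anyway)") from e

t = ("spam", [10, 30], "eggs")
inner = t[1]
inner += [20]                        # the list mutates in place first...
try:
    augmented_setitem(t, 1, inner)   # ...then the write-back fails
except TypeError as e:
    assert "in-place" in str(e)

assert t[1] == [10, 30, 20]          # mutation happened despite the error
```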
Re: Tuples and immutability
On 27 February 2014 16:14, Chris Angelico ros...@gmail.com wrote: On Fri, Feb 28, 2014 at 3:01 AM, Eric Jacoboni eric.jacob...@gmail.com wrote:

    >>> a_tuple = ("spam", [10, 30], "eggs")
    >>> a_tuple[1] += [20]
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: 'tuple' object does not support item assignment

Ok... I accept this message as += is a reassignment of a_tuple[1] and a tuple is immutable... But, then, why is a_tuple still modified? This is a common confusion. The += operator does two things. First, it asks the target to please do an in-place addition of the other operand. Then, it takes whatever result the target gives back, and assigns it back into the target. So with a list, it goes like this:

    >>> foo = [10, 30]
    >>> foo.__iadd__([20])
    [10, 30, 20]
    >>> foo = _

Would it be better to add a check here, such that if this gets raised to the top-level it includes a warning ("Addition was inplace; variable probably mutated despite assignment failure")? -- https://mail.python.org/mailman/listinfo/python-list
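The two-step behaviour described above can be verified directly:

```python
# Step one of += mutates the list inside the tuple; step two (the
# write-back into the tuple) raises, but the mutation has already stuck.
a_tuple = ("spam", [10, 30], "eggs")
try:
    a_tuple[1] += [20]
except TypeError:
    pass  # 'tuple' object does not support item assignment

assert a_tuple[1] == [10, 30, 20]
```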
Re: Explanation of list reference
On 15 February 2014 14:20, Ben Finney ben+pyt...@benfinney.id.au wrote: Joshua Landau jos...@landau.ws writes: Here, I give you a pdf. Hopefully this isn't anti mailing-list-etiquette. This forum is read in many different contexts, and attachments aren't appropriate. You should simply put the text directly into your message, if it's short enough. If it's long, then put it online somewhere and send a link with a description; or, better, make it shorter so it is reasonable to put the text in a message :-) A PDF seemed like a better use of my time than ASCII art. Nevertheless, I agree that I can always give a link; I was just wondering if there was an environment under which this was actively harmful. -- https://mail.python.org/mailman/listinfo/python-list
Re: Calculator Problem
On 5 February 2014 02:22, Dan Sommers d...@tombstonezero.net wrote: On Tue, 04 Feb 2014 19:53:52 -0500, Roy Smith wrote: In article ed1c2ddd-f704-4d58-a5a4-aef13de88...@googlegroups.com, David Hutto dwightdhu...@gmail.com wrote: Can anyone point out how using an int as a var is possible one = 42 (ducking and running) int = 42 (ducking lower and running faster) globals()[1] = 42 (limbo, limbo, limbo like me) -- https://mail.python.org/mailman/listinfo/python-list
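Spelling the joke out, in a hedged rendering (the exact form of the last quoted line is ambiguous in the archive):

```python
# Three ways Python happily binds 42 to something named like a number
# or a builtin type:
one = 42                  # a name that merely reads like a number
int = 42                  # shadows the builtin int in this namespace
globals()["one"] = 42     # the same binding again, via the namespace dict

assert one == 42
assert int == 42
del int                   # un-shadow the builtin again
```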
Re: 1 0 == True - False
On 30 January 2014 20:38, Chris Angelico ros...@gmail.com wrote: Why is tuple unpacking limited to the last argument? Is it just for the parallel with the function definition, where anything following it is keyword-only? You're not the first person to ask that: http://www.python.org/dev/peps/pep-0448/ If you're able and willing to implement it, I believe the support is there. The primary reason I know of for its non-inclusion was that it was first proposed (with code) during a feature freeze. -- https://mail.python.org/mailman/listinfo/python-list
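For the record, the proposal discussed here was later accepted as PEP 448 for Python 3.5. The generalisations it added look like:

```python
# Unpacking is no longer limited to the final argument, and it works
# inside list/set/dict displays too (Python 3.5+).
def f(*args):
    return args

first, rest = [1, 2], [3, 4]
assert f(*first, 99, *rest) == (1, 2, 99, 3, 4)
assert [*first, *rest] == [1, 2, 3, 4]
assert {**{"a": 1}, **{"b": 2}} == {"a": 1, "b": 2}
```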
Re: 1 0 == True - False
On 31 January 2014 00:10, Rotwang sg...@hotmail.co.uk wrote: On a vaguely-related note, does anyone know why iterable unpacking in calls was removed in Python 3? I mean things like def f(x, (y, z)): return (x, y), z I don't have a use case in mind, I was just wondering. http://www.python.org/dev/peps/pep-3113/ -- https://mail.python.org/mailman/listinfo/python-list
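The removal PEP 3113 describes leaves a simple mechanical rewrite; the Python 3 equivalent of the quoted definition is:

```python
# Python 3 version of `def f(x, (y, z))`: accept the tuple whole and
# unpack it in the body.
def f(x, yz):
    y, z = yz
    return (x, y), z

assert f(1, (2, 3)) == ((1, 2), 3)
```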
Re: Need help vectorizing code
On 18 January 2014 20:51, Kevin K richyoke...@gmail.com wrote:

    def foo(X, y, mylambda, N, D, epsilon):
        ...
        for j in xrange(D):
            aj = 0
            cj = 0
            for i in xrange(N):
                aj += 2 * (X[i,j] ** 2)
                cj += 2 * (X[i,j] * (y[i] - w.transpose()*X[i].transpose() + w[j]*X[i,j]))

Currently this just computes and throws away values... -- https://mail.python.org/mailman/listinfo/python-list
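A hedged, pure-Python sketch of actually keeping those per-column values. The quoted snippet uses NumPy matrices and an undefined `w`; here `X` is a list of rows, the dot product is written out, and the names follow the snippet only loosely:

```python
# Accumulate aj and cj for every column j instead of discarding them.
def column_terms(X, y, w):
    N, D = len(X), len(X[0])
    a, c = [], []
    for j in range(D):
        aj = sum(2 * X[i][j] ** 2 for i in range(N))
        cj = sum(2 * X[i][j]
                 * (y[i]
                    - sum(wk * xik for wk, xik in zip(w, X[i]))
                    + w[j] * X[i][j])
                 for i in range(N))
        a.append(aj)
        c.append(cj)
    return a, c

a, c = column_terms([[1.0, 2.0], [3.0, 4.0]], [1.0, 1.0], [0.5, 0.5])
assert a == [20.0, 40.0]  # 2*(1 + 9) and 2*(4 + 16)
```

Vectorising this with NumPy would replace both inner loops with array expressions, but the version above makes the bookkeeping explicit.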
Re: Is it possible to protect python source code by compiling it to .pyc or .pyo?
On 17 January 2014 00:58, Sam lightai...@gmail.com wrote: I would like to protect my python source code. It need not be foolproof as long as it adds inconvenience to pirates. Is it possible to protect python source code by compiling it to .pyc or .pyo? Does .pyo offer better protection? If you're worried about something akin to corporate espionage or some-such, I don't know of a better way than ShedSkin or Cython. Both of those will be far harder to snatch the source of. Cython will be particularly easy to use as it is largely compatible with Python codebases. I offer no opinions, however, on whether this is a task worth doing. I only suggest you consider the disadvantages and how they apply to your individual case. -- https://mail.python.org/mailman/listinfo/python-list
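For completeness, producing a .pyc needs nothing beyond the stdlib, though as the reply notes, bytecode decompilers make this weak protection at best:

```python
import os
import py_compile
import tempfile

# Byte-compile a throwaway module; py_compile.compile returns the path
# of the generated bytecode file.
src = os.path.join(tempfile.mkdtemp(), "secret.py")
with open(src, "w") as f:
    f.write("MAGIC = 42\n")

pyc_path = py_compile.compile(src, cfile=src + "c")
assert os.path.exists(pyc_path)
```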
Re: Please stop the trolling
On 23 December 2013 20:53, Terry Reedy tjre...@udel.edu wrote: On 12/23/2013 2:05 PM, wxjmfa...@gmail.com wrote: Le lundi 23 décembre 2013 18:59:41 UTC+1, Wolfgang Keller a écrit : [me] I'll note that Python core developers do care about memory leaks. And that's a really good thing. Memory? Let me laugh! [snip repeated (for about the 5th time) posting of single character memory sizes] Jim, since I know you are smart enough to know the difference between a small fixed size and a continuous leak, I can only think that the purpose of your repeated post was to get a similarly inane response from a couple of other people. That is the definition of trolling. It is disrespectful of others and in my opinion is a violation of the Python Code of Conduct, which *does* apply to python-list. Please desist. Agreed; it's also a shame that it's so easy for such diversions to snowball so often, and thus derail so many threads. Sometimes I feel that we have so much trolling on this list because we're easy targets. Nonetheless, I join in to ask jmf to stop posting these threads as they do cause harm to the community. If you do not, I would rather some form of moderation be applied. A thanks to everyone who hasn't been so easily caught up in this malarky. -- https://mail.python.org/mailman/listinfo/python-list
Re: New user's initial thoughts / criticisms of Python
On 11 November 2013 10:39, Chris Angelico ros...@gmail.com wrote: On Mon, Nov 11, 2013 at 9:09 PM, lorenzo.ga...@gmail.com wrote: Regarding the select statement, I think the most Pythonic approach is using dictionaries rather than nested ifs. Supposing we want to decode abbreviated day names ("mon") to full names ("Monday"): You can't [normally], for instance, build up a dictionary that handles inequalities, but you can do that with elif. [...] Consider the following logic: A 'minor weapon' is based on a roll of a 100-sided dice. If it's 01 to 70, "+1 weapon: 2,000gp [weapon]"; if it's 71 to 85, "+2 weapon: 8,000gp [weapon]"; if 86 to 90, "Specific weapon [minor specific weapon]"; and if 91 to 100, "Special ability [minor special weapon] and roll again". My code to handle that starts out with this array:

    "minor weapon": ({
        70, "+1 weapon: 2,000gp [weapon]",
        85, "+2 weapon: 8,000gp [weapon]",
        90, "Specific weapon [minor specific weapon]",
        100, "Special ability [minor special weapon] and roll again",
    }),

(that's Pike; in Python it'd be a list, or maybe a tuple of tuples), and denormalizes it into a lookup table by creating 70 entries quoting the first string, 15 quoting the second, 5, and 10, respectively. So, with a bit of preprocessing, a lookup table (which in this case is an array (list), but could just as easily be a dict) can be used to handle inequalities. 
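The denormalisation Chris describes can be sketched in Python: expand the (threshold, description) pairs into a flat 100-entry list so that each dice roll becomes an O(1) index:

```python
# Expand the threshold table into one entry per possible roll (1-100).
# A roll r is looked up as lookup[r - 1].
minor_weapon_table = [
    (70, "+1 weapon: 2,000gp [weapon]"),
    (85, "+2 weapon: 8,000gp [weapon]"),
    (90, "Specific weapon [minor specific weapon]"),
    (100, "Special ability [minor special weapon] and roll again"),
]

lookup = []
for threshold, description in minor_weapon_table:
    lookup.extend([description] * (threshold - len(lookup)))

assert len(lookup) == 100
assert lookup[0] == "+1 weapon: 2,000gp [weapon]"   # roll of 01
assert lookup[70] == "+2 weapon: 8,000gp [weapon]"  # roll of 71
assert lookup[99].startswith("Special ability")     # roll of 100
```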
The obvious way to me is a binary search:

    from bisect import bisect_left

    class FloorMap:
        def __init__(self, dct):
            self.indexes = sorted(list(dct))
            self.dct = dct

        def __getitem__(self, itm):
            index = self.indexes[bisect_left(self.indexes, itm)]
            return self.dct[index]

    minor_weapon = FloorMap({
        70: "+1 weapon: 2,000gp [weapon]",
        85: "+2 weapon: 8,000gp [weapon]",
        90: "Specific weapon [minor specific weapon]",
        100: "Special ability [minor special weapon] and roll again",
    })

    minor_weapon[63]
    # '+1 weapon: 2,000gp [weapon]'

The precise details of the wrapper class here are just to make initialisation pretty; it could be done straight from a pair of lists too:

    from bisect import bisect_left

    minor_weapon_indexes = 70, 85, 90, 100
    minor_weapon_descriptions = (
        "+1 weapon: 2,000gp [weapon]",
        "+2 weapon: 8,000gp [weapon]",
        "Specific weapon [minor specific weapon]",
        "Special ability [minor special weapon] and roll again",
    )

    minor_weapon_descriptions[bisect_left(minor_weapon_indexes, 80)]
    # '+2 weapon: 8,000gp [weapon]'

Compare to:

    if 80 <= 70:
        res = "+1 weapon: 2,000gp [weapon]"
    elif 80 <= 85:
        res = "+2 weapon: 8,000gp [weapon]"
    elif 80 <= 90:
        res = "Specific weapon [minor specific weapon]"
    elif 80 <= 100:
        res = "Special ability [minor special weapon] and roll again"

which although shorter¹ is a lot less data-driven and much less reusable. ¹ Longer if you ignore the import and class declaration. -- https://mail.python.org/mailman/listinfo/python-list
Re: New user's initial thoughts / criticisms of Python
On 11 November 2013 22:21, Chris Angelico ros...@gmail.com wrote: On Tue, Nov 12, 2013 at 7:50 AM, Joshua Landau jos...@landau.ws wrote: The obvious way to me is a binary search: Which makes an O(log n) search where I have an O(1) lookup. The startup cost of denormalization doesn't scale, so when the server keeps running for two years or more, it's definitely worth processing it that way. log 4 is tiny so I'd expect constant factors to be far more significant here. Then you add on the better asymptotic behaviours for large n, space wise, and the simplicity of implementation. This just seems like a premature optimisation to me, I guess. I agree that your way is faster; I just don't see a single case in which I'd care. I do see several circumstances (large or floating numbers) in which I'd probably prefer my way. Feel free to disagree, I'm not really trying to convince you. -- https://mail.python.org/mailman/listinfo/python-list
Re: New user's initial thoughts / criticisms of Python
On 9 November 2013 13:08, John von Horn j@btinternet.com wrote: I'm Mr. Noobie here, I've just started easing into Python (2.7.4) and am enjoying working along to some youtube tutorials. I've done a little programming in the past. I've just got a few thoughts I'd like to share and ask about: * Why not allow floater=float(int1/int2) - rather than floater=float(int1)/float(int2)? Give me a float (or an error message) from evaluating everything in the brackets. Don't make me explicitly convert everything myself (unless I want to) In Python 2, `int1/int2` does integer division. So `float(that_result)` gives a truncated float. `int1/float(int2)` obviously avoids this by dividing by a float. If `float(int1/int2)` were to return the same value as `float(int1)/float(int2)`, `float` could not be a normal function, and would have to be magic. Magic is bad. Fortunately, Python 3 does the sane thing and just does floating-point division by default. Great, eh? * No sign of a select .. case statement Another useful tool in the programmer's toolbox Select DayofWeek case "mon" ... end select `select` is quite an odd statement, in that in most cases it's just a weaker variant of `if`. By the time you're at the point where a `select` is actually more readable you're also at the point where a different control flow is probably a better idea. Things like dictionaries or variables pointing to functions are really useful and can be encapsulated in a class quite well. This is a bit more advanced but largely more rigorous. But most of the time the idea is just that an `if` is more explicit, and it's not like a `case` statement can be optimised as it can in lower-level languages with simpler datatypes. * Call me pedantic but why do we need a trailing comma for a list of one item? Keep it intuitive and allow lstShopping=[] or ["Bread"] or ["Bread", "Milk", "Hot Chocolate"] I don't like ["Bread",]. It bugs me. You don't. 
You might be confused because you need to write `("hello",)`, but that's because the *comma* makes a *tuple*, not the brackets. Imagine if `2 * (1/2)` gave you `(0.5, 0.5)`! Is everyone happy with the way things are? Could anyone recommend a good, high level language for CGI work? Not sure if I'm going to be happy with Perl (ahhh, get him, he's mentioned Perl and is a heretic!) or Python. I would very much value any constructive criticism or insights. I don't know squat about CGI, but I think Python can grow on anyone open to the idea of programming at a high level. Not high level as in expert but as in using higher-level constructs, if you get the meaning. Python implements these quite well. Your first two complaints are not new, though. -- https://mail.python.org/mailman/listinfo/python-list
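The two answers above can be demonstrated concretely (Python 3 semantics):

```python
# Division: / is true division, // is floor division, so no float()
# casts are needed in Python 3.
assert 1 / 2 == 0.5
assert 1 // 2 == 0

# Tuples: the comma makes the tuple, not the brackets, so one-element
# lists need no trailing comma at all.
assert ["Bread"] == list(("Bread",))
assert ("hello",) != ("hello")   # the right-hand side is just a string
assert type((1)) is int          # parentheses alone do not make a tuple
```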