Re: tokenize.untokenize adding line continuation characters

2017-01-17 Thread Rotwang
On Tuesday, January 17, 2017 at 11:11:27 AM UTC, Peter Otten wrote:
> Rotwang wrote:
> 
> > Here's something odd I've found with the tokenize module: tokenizing 'if
> > x:\n    y' and then untokenizing the result adds '\\\n' to the end.
> > Attempting to tokenize the result again fails because of the backslash
> > continuation with nothing other than a newline after it. On the other
> > hand, if the original string ends with a newline then it works fine. Can
> > anyone explain why this happens?
> 
> > I'm using Python 3.4.3 on Windows 8. Copypasted from iPython:
> 
> This looks like a bug...
> 
> $ python3.4 -c 'import tokenize as t, io; 
> print(t.untokenize(t.tokenize(io.BytesIO(b"if x:\n  y").readline)))'
> b'if x:\n  y\\\n'
> 
> ...that is fixed now:
> 
> $ python3.6 -c 'import tokenize as t, io; 
> print(t.untokenize(t.tokenize(io.BytesIO(b"if x:\n  y").readline)))'
> b'if x:\n  y'
> 
> A quick search brought up multiple bug reports, among them
> 
> http://bugs.python.org/issue12691

Ah, thanks. I did search for bug reports before I started this thread but 
didn't find anything.
-- 
https://mail.python.org/mailman/listinfo/python-list


tokenize.untokenize adding line continuation characters

2017-01-16 Thread Rotwang
Here's something odd I've found with the tokenize module: tokenizing 'if x:\n    y' 
and then untokenizing the result adds '\\\n' to the end. Attempting to 
tokenize the result again fails because of the backslash continuation with 
nothing other than a newline after it. On the other hand, if the original 
string ends with a newline then it works fine. Can anyone explain why this 
happens?

I'm using Python 3.4.3 on Windows 8. Copypasted from iPython:


import tokenize, io

tuple(tokenize.tokenize(io.BytesIO('if x:\n    y'.encode()).readline))
Out[2]: 
(TokenInfo(type=56 (ENCODING), string='utf-8', start=(0, 0), end=(0, 0), 
line=''),
 TokenInfo(type=1 (NAME), string='if', start=(1, 0), end=(1, 2), line='if 
x:\n'),
 TokenInfo(type=1 (NAME), string='x', start=(1, 3), end=(1, 4), line='if x:\n'),
 TokenInfo(type=52 (OP), string=':', start=(1, 4), end=(1, 5), line='if x:\n'),
 TokenInfo(type=4 (NEWLINE), string='\n', start=(1, 5), end=(1, 6), line='if 
x:\n'),
 TokenInfo(type=5 (INDENT), string='    ', start=(2, 0), end=(2, 4), line='    y'),
 TokenInfo(type=1 (NAME), string='y', start=(2, 4), end=(2, 5), line='    y'),
 TokenInfo(type=6 (DEDENT), string='', start=(3, 0), end=(3, 0), line=''),
 TokenInfo(type=0 (ENDMARKER), string='', start=(3, 0), end=(3, 0), line=''))

tokenize.untokenize(_).decode()
Out[3]: 'if x:\n    y\\\n'

tuple(tokenize.tokenize(io.BytesIO(_.encode()).readline))
---------------------------------------------------------------------------
TokenError                                Traceback (most recent call last)
 in ()
----> 1 tuple(tokenize.tokenize(io.BytesIO(_.encode()).readline))

C:\Program Files\Python34\lib\tokenize.py in _tokenize(readline, encoding)
558 else:  # continued statement
559 if not line:
--> 560 raise TokenError("EOF in multi-line statement", (lnum, 0))
561 continued = 0
562 

TokenError: ('EOF in multi-line statement', (3, 0))
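For contrast, a minimal sketch (Python 3, same modules as in the session above) showing that the round-trip is clean when the source ends with a newline; the final comparison is just the lossless-round-trip property the tokenize docs describe:

```python
import io
import tokenize

# Same source as above, but with a trailing newline.
src = b"if x:\n    y\n"

toks = list(tokenize.tokenize(io.BytesIO(src).readline))
out = tokenize.untokenize(toks)

# The untokenized result tokenizes again without a TokenError,
# and the token types and strings match the originals.
toks2 = list(tokenize.tokenize(io.BytesIO(out).readline))
assert [t[:2] for t in toks2] == [t[:2] for t in toks]
```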
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Passing a frame to pdb.Pdb.set_trace

2014-06-25 Thread Rotwang

On 24/06/2014 20:10, Rotwang wrote:

Hi all, I've found something weird with pdb and I don't understand it. I
want to define a function mydebugger() which starts the debugger in the
caller's frame. The following is copied from IDLE with Python 2.7.3
(I've since tried it with 3.3.0 and the same thing happens):

[...]


Never mind, I figured out what's going on thanks to an excellent article 
by Ned Batchelder explaining the internals of pdb:


http://nedbatchelder.com/text/trace-function.html
--
https://mail.python.org/mailman/listinfo/python-list


Passing a frame to pdb.Pdb.set_trace

2014-06-24 Thread Rotwang
Hi all, I've found something weird with pdb and I don't understand it. I 
want to define a function mydebugger() which starts the debugger in the 
caller's frame. The following is copied from IDLE with Python 2.7.3 
(I've since tried it with 3.3.0 and the same thing happens):



Python 2.7.3 (default, Feb 27 2014, 19:58:35)
[GCC 4.6.3] on linux2
Type copyright, credits or license() for more information.
>>> import pdb, sys
>>> def f(x):
	mydebugger()

>>> def mydebugger():
	frame = sys._getframe().f_back
	pdb.Pdb().set_trace(frame)

>>> f(4)
--Return--
> (2)f()->None
(Pdb) x
4


This is what I expect: sys._getframe().f_back gives f's frame, so the 
call to mydebugger() within f does approximately the same thing as if 
I'd just called pdb.set_trace() instead. But when I add another 
statement to mydebugger, this happens:



>>> def mydebugger():
	frame = sys._getframe().f_back
	pdb.Pdb().set_trace(frame)
	print 'hmm'

>>> f(4)
--Call--
> /usr/lib/python2.7/idlelib/rpc.py(546)__getattr__()
-> def __getattr__(self, name):
(Pdb) x
*** NameError: name 'x' is not defined
(Pdb) w #Where am I?
   (1)()
   /usr/lib/python2.7/idlelib/run.py(97)main()
-> ret = method(*args, **kwargs)
   /usr/lib/python2.7/idlelib/run.py(298)runcode()
-> exec code in self.locals
   (1)()
   (2)f()
   (4)mydebugger()
> /usr/lib/python2.7/idlelib/rpc.py(546)__getattr__()
-> def __getattr__(self, name):


The same thing happens if I define

frame = sys._getframe().f_back.f_back

(i.e. the debugger starts in the same place) for example, though if I define

frame = sys._getframe()

then the debugger starts in mydebugger's frame as I would expect. Also, 
whether it goes wrong depends on what the third line of mydebugger is; 
some kinds of statement consistently cause the problem and others 
consistently don't.


When I try the above simple code in the terminal rather than IDLE it 
works as it should, but in the more complicated version where I first 
noticed the problem it still goes wrong. Can anyone help?
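(A minimal sketch of the frame mechanics involved, runnable outside IDLE: sys._getframe().f_back really is the caller's frame, carrying the caller's locals.)

```python
import sys

def caller_frame():
    # One level up from this function's own frame is the caller's frame.
    return sys._getframe().f_back

def f(x):
    return caller_frame()

frame = f(4)
assert frame.f_code.co_name == 'f'   # it is f's frame...
assert frame.f_locals['x'] == 4      # ...with f's local variables
```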

--
https://mail.python.org/mailman/listinfo/python-list


Re: Values and objects

2014-05-11 Thread Rotwang

On 11/05/2014 04:11, Steven D'Aprano wrote:

[...]

And try running
this function in both 2.7 and 3.3 and see if you can explain the
difference:

def test():
 if False: x = None
 exec("x = 1")
 return x


I must confess to being baffled by what happens in 3.3 with this 
example. Neither locals() nor globals() has x = 1 immediately before the 
return statement, so what is exec("x = 1") actually doing?


--
https://mail.python.org/mailman/listinfo/python-list


Re: Values and objects

2014-05-11 Thread Rotwang

On 11/05/2014 19:40, Ned Batchelder wrote:

On 5/11/14 9:46 AM, Rotwang wrote:

On 11/05/2014 04:11, Steven D'Aprano wrote:

[...]

And try running
this function in both 2.7 and 3.3 and see if you can explain the
difference:

def test():
 if False: x = None
 exec("x = 1")
 return x


I must confess to being baffled by what happens in 3.3 with this
example. Neither locals() nor globals() has x = 1 immediately before the
return statement, so what is exec("x = 1") actually doing?



The same happens if you try to modify locals():

>>> def test():
...     if 0: x = 1
...     locals()['x'] = 13
...     return x
...
>>> test()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 4, in test
UnboundLocalError: local variable 'x' referenced before assignment

The doc for exec says:

 Note: The default locals act as described for function locals()
 below: modifications to the default locals dictionary should not be
 attempted. Pass an explicit locals dictionary if you need to see
 effects of the code on locals after function exec() returns.

The doc for locals says:

 Note: The contents of this dictionary should not be modified;
 changes may not affect the values of local and free variables used
 by the interpreter.

This is a tradeoff of practicality for purity: the interpreter runs
faster if you don't make locals() a modifiable dict.
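As the quoted documentation suggests, passing an explicit namespace to exec() makes the assignment observable; a minimal sketch (Python 3):

```python
def test():
    ns = {}
    # With an explicit locals dict, the effect of the exec'd code
    # can be read back after exec() returns.
    exec("x = 1", globals(), ns)
    return ns['x']

assert test() == 1
```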


Thanks.
--
https://mail.python.org/mailman/listinfo/python-list


Re: Why has __new__ been implemented as a static method?

2014-05-04 Thread Rotwang

On 04/05/2014 15:16, Steven D'Aprano wrote:

On Sun, 04 May 2014 20:03:35 +1200, Gregory Ewing wrote:


Steven D'Aprano wrote:

If it were a class method, you would call it by MyBaseClass.__new__()
rather than explicitly providing the cls argument.


But that wouldn't be any good, because the base __new__ needs to receive
the actual class being instantiated, not the class that the __new__
method belongs to.



Which is exactly what method descriptors -- whether instance methods or
class descriptors -- can do. Here's an example, using Python 2.7:

class MyDict(dict):
    @classmethod
    def fromkeys(cls, *args, **kwargs):
        print "Called from", cls
        return super(MyDict, cls).fromkeys(*args, **kwargs)

class AnotherDict(MyDict):
 pass


And in use:

py> MyDict.fromkeys('abc')
Called from <class '__main__.MyDict'>
{'a': None, 'c': None, 'b': None}
py> AnotherDict().fromkeys('xyz')
Called from <class '__main__.AnotherDict'>
{'y': None, 'x': None, 'z': None}


In both cases, MyDict's fromkeys method receives the class doing the
calling, not the class where the method is defined.


Yes, when a classmethod bound to a subclass or an instance is called. 
But this is irrelevant to Gregory's point:


On 04/05/2014 04:37, Steven D'Aprano wrote:

On Sun, 04 May 2014 11:21:53 +1200, Gregory Ewing wrote:

Steven D'Aprano wrote:

I'm not entirely sure what he means by upcalls, but I believe it
means to call the method further up (that is, closer to the base) of
the inheritance tree.


I think it means this:

 def __new__(cls):
MyBaseClass.__new__(cls)

which wouldn't work with a class method, because MyBaseClass.__new__
would give a *bound* method rather than an unbound one.


If it were a class method, you would call it by MyBaseClass.__new__()
rather than explicitly providing the cls argument.



The relevant behaviour is this:

>>> class C:
	@classmethod
	def m(cls):
		print("Called from", cls)

>>> class D(C):
	@classmethod
	def m(cls):
		C.m()


>>> C.m()
Called from <class '__main__.C'>
>>> D.m()
Called from <class '__main__.C'>


If __new__ were a classmethod, then a call to MyBaseClass.__new__() 
within the body of MySubClass.__new__ would pass MyBaseClass to the 
underlying function, not MySubClass. This means that


class MySubClass(MyBaseClass):
def __new__(cls):
return MyBaseClass.__new__()

would fail, since it would return an instance of MyBaseClass rather than 
MySubClass.
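A minimal sketch (Python 3) of the behaviour under discussion: because __new__ is an implicit static method, the upcall passes the actual class being instantiated explicitly, and the subclass gets an instance of itself:

```python
class MyBaseClass:
    def __new__(cls):
        # cls is whatever class is being instantiated, passed explicitly.
        return super().__new__(cls)

class MySubClass(MyBaseClass):
    def __new__(cls):
        # Upcall with cls by hand; a classmethod would have bound
        # MyBaseClass here instead.
        return MyBaseClass.__new__(cls)

assert type(MySubClass()) is MySubClass
```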

--
https://mail.python.org/mailman/listinfo/python-list


Re: Scoping rules for class definitions

2014-04-08 Thread Rotwang

On 04/04/2014 19:55, Ian Kelly wrote:

On Fri, Apr 4, 2014 at 12:37 PM, Rotwang sg...@hotmail.co.uk wrote:

Hi all. I thought I had a pretty good grasp of Python's scoping rules, but
today I noticed something that I don't understand. Can anyone explain to me
why this happens?


>>> x = 'global'
>>> def f1():
...     x = 'local'
...     class C:
...         y = x
...     return C.y
...
>>> def f2():
...     x = 'local'
...     class C:
...         x = x
...     return C.x
...
>>> f1()
'local'
>>> f2()
'global'


Start by comparing the disassembly of the two class bodies:


>>> dis.dis(f1.__code__.co_consts[2])
  3           0 LOAD_NAME                0 (__name__)
              3 STORE_NAME               1 (__module__)
              6 LOAD_CONST               0 ('f1.<locals>.C')
              9 STORE_NAME               2 (__qualname__)

  4          12 LOAD_CLASSDEREF          0 (x)
             15 STORE_NAME               3 (y)
             18 LOAD_CONST               1 (None)
             21 RETURN_VALUE
>>> dis.dis(f2.__code__.co_consts[2])
  3           0 LOAD_NAME                0 (__name__)
              3 STORE_NAME               1 (__module__)
              6 LOAD_CONST               0 ('f2.<locals>.C')
              9 STORE_NAME               2 (__qualname__)

  4          12 LOAD_NAME                3 (x)
             15 STORE_NAME               3 (x)
             18 LOAD_CONST               1 (None)
             21 RETURN_VALUE

The only significant difference is that the first uses
LOAD_CLASSDEREF, which I guess is the class version of LOAD_DEREF for
loading values from closures, at line 4 whereas the second uses
LOAD_NAME.  So the first one knows about the x in the nonlocal scope,
whereas the second does not and just loads the global (since x doesn't
yet exist in the locals dict).

Now why doesn't the second version also use LOAD_CLASSDEREF?  My guess
is because it's the name of a local; if it were referenced a second
time in the class then the second LOAD_CLASSDEREF would again get the
x from the nonlocal scope, which would be incorrect.


Thanks (sorry for the slow reply, I've had a busy few days).

For anyone who's interested, I also found an interesting discussion of 
the above in the following thread:


https://mail.python.org/pipermail/python-dev/2002-April/023427.html
--
https://mail.python.org/mailman/listinfo/python-list


Scoping rules for class definitions

2014-04-04 Thread Rotwang
Hi all. I thought I had a pretty good grasp of Python's scoping rules, 
but today I noticed something that I don't understand. Can anyone 
explain to me why this happens?


>>> x = 'global'
>>> def f1():
	x = 'local'
	class C:
		y = x
	return C.y

>>> def f2():
	x = 'local'
	class C:
		x = x
	return C.x

>>> f1()
'local'
>>> f2()
'global'
--
https://mail.python.org/mailman/listinfo/python-list


Re: Import order question

2014-03-10 Thread Rotwang

On 18/02/2014 23:28, Rotwang wrote:

[...]

I have music software that's a single 9K-line Python module, which I
edit using Notepad++ or gedit.


Incidentally, in the time since I wrote the above I've started using 
Sublime Text 3, following somebody on c.l.p's recommendation (I 
apologise that I forget who). I can heartily recommend it.


--
https://mail.python.org/mailman/listinfo/python-list


Re: property confusion

2014-02-21 Thread Rotwang

On 21/02/2014 18:58, K Richard Pixley wrote:

Could someone please explain to me why the two values at the bottom of
this example are different?

Python-3.3 if it makes any difference.

Is this a difference in evaluation between a class attribute and an
instance attribute?


Yes, see below.



--rich

class C:
    def __init__(self):
        self._x = None

    def getx(self):
        print('getx')
        return self._x
    def setx(self, value):
        print('setx')
        self._x = value
    def delx(self):
        print('delx')
        del self._x
    x = property(getx, setx, delx, "I'm the 'x' property.")

class D:
    def getx(self):
        print('getx')
        return self._x
    def setx(self, value):
        print('setx')
        self._x = value
    def delx(self):
        print('delx')
        del self._x

    def __init__(self):
        self._x = None
        self.x = property(self.getx, self.setx, self.delx,
                          "I'm the 'x' property.")

type(C().x)
type(D().x)


Properties are implemented as descriptors, i.e. objects that have 
__get__, __set__ and/or __delete__ methods:


http://docs.python.org/3/reference/datamodel.html#invoking-descriptors

If a is an instance and type(a) has an attribute named 'x' which is a 
descriptor then a.x is transformed into


type(a).__dict__['x'].__get__(a, type(a));

in the case of your code, C.__dict__['x'] is the property you defined, 
and its __get__ method calls getx. But this transformation only applies 
to attributes found on the type, not on the instance, so a property 
stored in an instance's __dict__, as in D, is never invoked as a 
descriptor; D().x just gives you D().__dict__['x'], which is the 
property object itself.
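A minimal sketch of the difference (hypothetical classes, not the original poster's):

```python
# Property on the class: a descriptor, so attribute access is transformed.
class C:
    x = property(lambda self: 42)

# Property stored on the instance: no descriptor lookup happens.
class D:
    def __init__(self):
        self.x = property(lambda self: 42)

c = C()
assert c.x == 42
assert C.__dict__['x'].__get__(c, C) == 42   # what c.x does under the hood
assert isinstance(D().x, property)           # just the property object itself
```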

--
https://mail.python.org/mailman/listinfo/python-list


Re: Import order question

2014-02-18 Thread Rotwang

On 18/02/2014 21:44, Rick Johnson wrote:

[...]

Are you telling me you're willing to search through a single
file containing 3,734 lines of code (yes, Tkinter) looking
for a method named destroy of a class named OptionMenu
(of which three other classes contain a method of the same
exact name!), when all you needed to do was open one single
module named tk_optionmenu.py and do a single search for
def destroy?

You must be trolling!


I have music software that's a single 9K-line Python module, which I 
edit using Notepad++ or gedit. If I wish to find e.g. the method "edit" 
of class "sequence" I can type


Ctrl-f "class seq" Return "def edit(" Return
--
https://mail.python.org/mailman/listinfo/python-list


Re: Import order question

2014-02-18 Thread Rotwang

On 18/02/2014 23:41, Rick Johnson wrote:

On Tuesday, February 18, 2014 5:28:21 PM UTC-6, Rotwang wrote:


[snipped material restored for context]


On 18/02/2014 21:44, Rick Johnson wrote:

[...]

Are you telling me you're willing to search through a single
file containing 3,734 lines of code (yes, Tkinter) looking
for a method named destroy of a class named OptionMenu
(of which three other classes contain a method of the same
exact name!), when all you needed to do was open one single
module named tk_optionmenu.py and do a single search for
def destroy?

You must be trolling!


I have music software that's a single 9K-line Python module, which I
edit using Notepad++ or gedit. If I wish to find e.g. the method "edit"
of class "sequence" I can type
  Ctrl-f "class seq" Return "def edit(" Return


This is not about how to use a search function


No, it's about your incredulity that someone would search for a method 
in a large file that contains several methods of the same name. However, 
the existence of search functions makes this completely trivial.

--
https://mail.python.org/mailman/listinfo/python-list


Re: Explanation of list reference

2014-02-17 Thread Rotwang

On 17/02/2014 06:21, Steven D'Aprano wrote:

On Mon, 17 Feb 2014 11:54:45 +1300, Gregory Ewing wrote:

[...]

[1] Mathematicians tried this. Everything is a set! Yeah, right...


No, that's okay. You only get into trouble when you have self-referential
sets, like the set of all sets that don't contain themselves.


Actually there's nothing wrong with self-referential sets per se. For 
example set theory with Aczel's anti-foundation axiom instead of the 
axiom of foundation is consistent if ZF is; see e.g.


http://en.wikipedia.org/wiki/Non-well-founded_set_theory

The trouble comes not with self-reference, but with unrestricted 
comprehension.

--
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-13 Thread Rotwang
What's this? A discussion about angels dancing on a the head of a pin? 
Great, I'm in.


On 13/02/2014 14:00, Marko Rauhamaa wrote:

Oscar Benjamin oscar.j.benja...@gmail.com:


This isn't even a question of resource constraints: a digital computer
with infinite memory and computing power would still be limited to
working with countable sets, and the real numbers are just not
countable. The fundamentally discrete nature of digital computers
prevents them from being able to truly handle real numbers and real
computation.


Well, if your idealized, infinite, digital computer had ℵ₁ bytes of RAM
and ran at ℵ₁ hertz and Python supported transfinite iteration, you
could easily do reals:

 def real_sqrt(y):
     for x in continuum(0, max(1, y)):
         # Note: x is not traversed in the < order but some other
         # well-ordering, which has been proved to exist.
         if x * x == y:
             return x
     assert False

The function could well return in finite time with a precise result for
any given nonnegative real argument.



Minor point: ℵ₁ does not mean the cardinality c of the continuum, it 
means the smallest cardinal larger than ℵ₀. It has been proved that the 
question of whether ℵ₁ == c is independent of ZFC, so it is in a sense 
unanswerable.


More importantly, though, such a computer could not complete the above 
iteration in finite time unless time itself is not real-valued. That's 
because if k is an uncountable ordinal then there is no strictly 
order-preserving function from k to the unit interval [0, 1]. For 
suppose otherwise, and let f be such a function. Let S denote the set of 
successor ordinals in k, and let L denote the set of limit ordinals in 
k. Then lambda x: x + 1 is an injective function from L (or L with a 
single point removed if k is the successor of a limit ordinal) to S, so 
that S is at least as large as L and since k == S | L it follows that S 
is uncountable.


For each x + 1 in S, let g(x + 1) = f(x + 1) - f(x) > 0. Let F be any 
finite subset of S and let y = max(F). It is clear that f(y) >= sum(g(x) 
for x in F). Since also f(y) <= 1, we have sum(g(x) for x in F) <= 1 
for all finite F. In particular, for any integer n > 0, the set S_n = {x 
for x in S if g(x) > 1/n} has len(S_n) < n. But then S is the union of 
the countable collection {S_n for n in N} of finite sets, so is 
countable; a contradiction.
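The counting step of the argument above, restated compactly (same notation; F ranges over finite subsets of S):

```latex
% g is strictly positive at successors, with bounded finite sums:
g(x+1) = f(x+1) - f(x) > 0, \qquad \sum_{x \in F} g(x) \le f(\max F) \le 1 .
% Hence each S_n is finite, and S is a countable union of finite sets:
S_n = \{\, x \in S : g(x) > 1/n \,\}, \qquad |S_n| < n, \qquad
S = \bigcup_{n \ge 1} S_n .
```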



On 13/02/2014 19:47, Marko Rauhamaa wrote:

My assumption was you could execute ℵ₁ statements per second. That
doesn't guarantee a finite finish time but would make it possible. That
is because

ℵ₁ * ℵ₁ = ℵ₁ = ℵ₁ * 1


I don't think that's enough - assuming the operations of your processor 
during a second can be indexed by some ordinal k with len(k) == c, if 
each of the c operations per iteration must be complete before the next 
step of the for loop is complete then you need an injective function 
from c * c to k that preserves the lexicographic ordering. I don't know 
whether such a function exists for arbitrary such k, but k can be chosen 
in advance so that it does.

--
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-13 Thread Rotwang

On 13/02/2014 22:00, Marko Rauhamaa wrote:

Rotwang sg...@hotmail.co.uk:


  for x in continuum(0, max(1, y)):
      # Note: x is not traversed in the < order but some other
      # well-ordering, which has been proved to exist.
      if x * x == y:
          return x


[...]


Restoring for context:


The function could well return in finite time with a precise result
for any given nonnegative real argument.




More importantly, though, such a computer could not complete the above
iteration in finite time unless time itself is not real-valued. That's
because if k is an uncountable ordinal then there is no strictly
order-preserving function from k to the unit interval [0, 1].


If you read the code comment above, the transfinite iterator yields the
whole continuum, not in the < order (which is impossible), but in some
other well-ordering (which is known to exist). Thus, we can exhaust the
continuum in ℵ₁ discrete steps.


Yes, I understood that. But my point was that it can't carry out those 
ℵ₁ discrete steps in finite time (assuming that time is real-valued), 
because there's no way to embed them in any time interval without 
changing their order. Note that this is different to the case of 
iterating over a countable set, since the unit interval does have 
countable well-ordered subsets.

--
https://mail.python.org/mailman/listinfo/python-list


Re: PyWart: More surpises via implict conversion to boolean (and other steaming piles!)

2014-02-10 Thread Rotwang

On 10/02/2014 18:45, Rick Johnson wrote:

[...]

 3. Implicit introspection is evil, i prefer all
 references to a callable's names to result in a CALL
 to that callable, not an introspection!


So, for example, none of

isinstance(x, myclass)

map(myfunc, range(10))

x = property(x_get, x_set)

would still work?
--
https://mail.python.org/mailman/listinfo/python-list


Re: generator slides review and Python doc (+/- text bug)

2014-02-03 Thread Rotwang

On 03/02/2014 13:59, wxjmfa...@gmail.com wrote:

[...]

I noticed the same effect with the Python doc
since ? (long time).

Eg.

The Python Tutorial
appears as
The Python Tutorial¶

with a visible colored ¶, 'PILCROW SIGN',
blueish in Python 3, red in Python 2.7.6.


Hint: try clicking the ¶.
--
https://mail.python.org/mailman/listinfo/python-list


Re: generator slides review and Python doc (+/- text bug)

2014-02-03 Thread Rotwang

On 03/02/2014 18:37, wxjmfa...@gmail.com wrote:

[...]

Hint: try clicking the ¶.


I never was aware of this feature. Is it deliverate?


Do you mean deliberate? Of course it is.



It gives to me the feeling of a badly programmed
html page, especially if this sign does correspond
to an eol!


Why on Earth would the sign correspond to an EOL? The section sign and 
pilcrow have a history of being used to refer to sections and paragraphs 
respectively, so using them for permalinks to individual sections of a 
web page makes perfect sense.


--
https://mail.python.org/mailman/listinfo/python-list


Re: Try-except-finally paradox

2014-01-30 Thread Rotwang

On 30/01/2014 06:33, Andrew Berg wrote:

On 2014.01.29 23:56, Jessica Ross wrote:

I found something like this in a StackOverflow discussion.

>>> def paradox():
...     try:
...         raise Exception("Exception raised during try")
...     except:
...         print "Except after try"
...         return True
...     finally:
...         print "Finally"
...         return False
...     return None
...
>>> return_val = paradox()
Except after try
Finally
>>> return_val
False

I understand most of this.
What I don't understand is why this returns False rather than True.
Does the finally short-circuit the return in the except block?


My guess would be that the interpreter doesn't let the finally block
get skipped under any circumstances, so the return value gets set to
True, but then it forces the finally block to be run before returning,
which changes the return value to False.


Mine too. We can check that the interpreter gets as far as evaluating 
the return value in the except block:


>>> def paradox2():
	try:
		raise Exception("Raise")
	except:
		print("Except")
		return [print("Return"), True][1]
	finally:
		print("Finally")
		return False
	return None

>>> ret = paradox2()
Except
Return
Finally
>>> ret
False
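The same behaviour as a minimal, self-contained Python 3 sketch: the finally clause always runs, and a return there replaces the return value already pending from the except clause.

```python
def paradox():
    try:
        raise Exception("raised during try")
    except Exception:
        return True    # this return value is set first...
    finally:
        return False   # ...then the finally clause overrides it

assert paradox() is False
```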
--
https://mail.python.org/mailman/listinfo/python-list


Re: 1 0 == True - False

2014-01-30 Thread Rotwang

On 30/01/2014 12:49, Dave Angel wrote:

[...]

For hysterical reasons,  True and False are instances of class
  bool, which is derived from int. So for comparison purposes
  False==0 and True==1. But in my opinion,  you should never take
  advantage of this, except when entering obfuscation
  contests.


Really? I take advantage of it quite a lot. For example, I do things 
like this:


'You have scored %i point%s' % (score, 's'*(score != 1))
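A few more examples of the same idea (bool is a subclass of int, so True == 1 and False == 0):

```python
assert isinstance(True, int)
assert True == 1 and False == 0
assert True + True == 2  # bools participate in arithmetic

score = 1
msg = 'You have scored %i point%s' % (score, 's' * (score != 1))
assert msg == 'You have scored 1 point'

score = 3
msg = 'You have scored %i point%s' % (score, 's' * (score != 1))
assert msg == 'You have scored 3 points'
```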
--
https://mail.python.org/mailman/listinfo/python-list


Re: 1 0 == True - False

2014-01-30 Thread Rotwang

On 30/01/2014 23:36, Joshua Landau wrote:

On 30 January 2014 20:38, Chris Angelico ros...@gmail.com wrote:


Why is tuple unpacking limited to the last argument? Is it just for
the parallel with the function definition, where anything following it
is keyword-only?


You're not the first person to ask that:
http://www.python.org/dev/peps/pep-0448/


On a vaguely-related note, does anyone know why iterable unpacking in 
calls was removed in Python 3? I mean things like


def f(x, (y, z)):
return (x, y), z

I don't have a use case in mind, I was just wondering.
--
https://mail.python.org/mailman/listinfo/python-list


Re: Removal of iterable unpacking in function calls

2014-01-30 Thread Rotwang

On 31/01/2014 00:21, Ben Finney wrote:

Rotwang sg...@hotmail.co.uk writes:


On a vaguely-related note, does anyone know why iterable unpacking in
calls was removed in Python 3?


This is explained in the PEP which described its removal
URL:http://www.python.org/dev/peps/pep-3113/, especially
URL:http://www.python.org/dev/peps/pep-3113/#why-they-should-go.


Ah, I had not seen that. Thanks.
--
https://mail.python.org/mailman/listinfo/python-list


Re: Guessing the encoding from a BOM

2014-01-17 Thread Rotwang

On 17/01/2014 18:43, Tim Chase wrote:

On 2014-01-17 09:10, Mark Lawrence wrote:

Slight aside, any chance of changing the subject of this thread, or
even ending the thread completely?  Why?  Every time I see it I
picture Inspector Clouseau, "A BOM!!!" :)


In discussions regarding BOMs, I regularly get the "All your base" 
meme from a couple years ago stuck in my head: "Somebody set us up the 
bomb!"


ITYM "Somebody set up us the bomb."

--
https://mail.python.org/mailman/listinfo/python-list


Re: Open Question - I'm a complete novice in programming so please bear with me...Is python equivalent to C, C++ and java combined?

2014-01-12 Thread Rotwang

On 12/01/2014 05:58, Chris Angelico wrote:

[...]

(BTW, is there no better notation than six nested for/range for doing
6d6? I couldn't think of one off-hand, but it didn't really much
matter anyway.)


If you're willing to do an import, then how about this:

>>> from itertools import product
>>> len([x for x in product(range(1, 7), repeat = 6) if sum(x) < 14])/6**6
0.03587962962962963
--
https://mail.python.org/mailman/listinfo/python-list


Re: Understanding decorator and class methods

2014-01-08 Thread Rotwang

On 08/01/2014 19:56, axis.of.wea...@gmail.com wrote:

can someone please explain why the following works, in contrast to the second 
example?

def decorator(func):
 def on_call(*args):
 print args
 return func(args)
 return on_call

class Foo:
 @decorator
 def bar(self, param1):
 print 'inside bar'

f=Foo()
f.bar(4)  # from where is the decorator getting the Foo instance?



I understand why the following works/does not work

class decorator2:
 def __init__(self, func):
 self.func=func
 def __call__(self, *args):
 self.func(*args)

class Foo2:
 @decorator2
 def bar2(self, param): pass


f2 = Foo2()
Foo2.bar2(f2, 4) # works, Foo2 instance and param are passed to decorator2 call
f2.bar2(4) # does not work, Foo2 instance is missing, decorator2 cannot invoke 
method bar


From http://docs.python.org/3/reference/datamodel.html:

Instance methods

An instance method object combines a class, a class instance and
any callable object (normally a user-defined function).

[...]

User-defined method objects may be created when getting an
attribute of a class (perhaps via an instance of that class), if
that attribute is a user-defined function object or a class method
object.

[...]

Note that the transformation from function object to instance
method object happens each time the attribute is retrieved from the
instance. In some cases, a fruitful optimization is to assign the
attribute to a local variable and call that local variable. Also
notice that this transformation only happens for user-defined
functions; other callable objects (and all non-callable objects)
are retrieved without transformation.


Notice the last sentence in particular. After being decorated by 
decorator2 Foo2.bar2 is not a user-defined function (i.e. an instance of 
types.FunctionType), so is not transformed into a method upon being 
accessed through an instance. I suppose you could create a class that 
mimics the behaviour of methods, though I don't know why you would want 
to. The following is tested with 3.3.0; I expect someone who knows more 
than I will probably be along soon to point out why it's stupid.


class decorator3:
def __init__(self, func):
self.func = func
def __call__(self, *args, **kwargs):
print('Calling func(self, *%r, **%r)' % (args, kwargs))
return self.func(self.__self__, *args, **kwargs)
def __get__(self, instance, owner):
self.__self__ = instance
return self

class Foo3:
@decorator3
def bar3(self, param):
return self, param

>>> f3 = Foo3()
>>> f3.bar3('param')
Calling func(self, *('param',), **{})
(<__main__.Foo3 object at 0x02BDF198>, 'param')
--
https://mail.python.org/mailman/listinfo/python-list


Re: One liners

2013-12-07 Thread Rotwang

On 07/12/2013 12:41, Jussi Piitulainen wrote:

[...]

   if tracks is None:
  tracks = []


Sorry to go off on a tangent, but in my code I often have stuff like 
this at the start of functions:


tracks = something if tracks is None else tracks

or, in the case where I don't intend for the function to be passed 
non-default Falsey values:


tracks = tracks or something

Is there any reason why the two-line version that avoids the ternary 
operator should be preferred to the above?
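For what it's worth, the two one-line idioms are not interchangeable; a minimal sketch of the difference (hypothetical function names):

```python
def with_is_none(tracks=None):
    # Only None triggers the default.
    tracks = [] if tracks is None else tracks
    return tracks

def with_or(tracks=None):
    # Any falsey value triggers the default.
    tracks = tracks or ['default']
    return tracks

assert with_is_none([]) == []        # explicit empty list is kept
assert with_or([]) == ['default']    # explicit empty list is replaced
```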

--
https://mail.python.org/mailman/listinfo/python-list


Re: Managing Google Groups headaches

2013-12-07 Thread Rotwang

On 07/12/2013 16:08, Roy Smith wrote:

In article 31f1bb84-1432-446c-a7d4-79ce16f2a...@googlegroups.com,
  wxjmfa...@gmail.com wrote:


It is on this level the FSR fails.


What is FSR?  I apologize if this was explained earlier in the thread
and I can't find the reference.


It's the Flexible String Representation, introduced in Python 3.3:

http://www.python.org/dev/peps/pep-0393/
--
https://mail.python.org/mailman/listinfo/python-list


Re: One liners

2013-12-07 Thread Rotwang

On 07/12/2013 16:25, Steven D'Aprano wrote:

On Sat, 07 Dec 2013 16:13:09 +, Rotwang wrote:


On 07/12/2013 12:41, Jussi Piitulainen wrote:

[...]

if tracks is None:
   tracks = []


Sorry to go off on a tangent, but in my code I often have stuff like
this at the start of functions:

  tracks = something if tracks is None else tracks

or, in the case where I don't intend for the function to be passed
non-default Falsey values:

  tracks = tracks or something

Is there any reason why the two-line version that avoids the ternary
operator should be preferred to the above?


Only if you need to support Python 2.4, which doesn't have the ternary if
operator :-)


Thanks, and likewise to everyone else who replied.



Re: Why is there no natural syntax for accessing attributes with names not being valid identifiers?

2013-12-06 Thread Rotwang

On 06/12/2013 16:51, Piotr Dobrogost wrote:

[...]

I thought of that argument later the next day. Your proposal does
unify access if the old obj.x syntax is removed.


As long as obj.x is a very concise way to get attribute named 'x' from
object obj it's somehow odd that identifier x is treated not like
identifier but like string literal 'x'. If it were treated like an
identifier then we would get attribute with name being value of x
instead attribute named 'x'. Making it possible to use string literals
in the form obj.'x' as proposed this would make getattr basically
needless as long as we use only variable not expression to denote
attribute's name.


But then every time you wanted to get an attribute with a name known at 
compile time you'd need to write obj.'x' instead of obj.x, thereby 
requiring two additional keystrokes. Given that the large majority of 
attribute access Python code uses dot syntax rather than getattr, this 
seems like it would massively outweigh the eleven keystrokes one saves 
by writing obj.'x' instead of getattr(obj,'x').




Re: Why is there no natural syntax for accessing attributes with names not being valid identifiers?

2013-12-04 Thread Rotwang

On 04/12/2013 20:07, Piotr Dobrogost wrote:

[...]


Unless we compare with what we have now, which gives 9 (without space) or 10 
(with space):
x = obj.'value-1'
x = getattr(obj, 'value-1')


That is not a significant enough savings to create new syntax.


Well, 9 characters is probably significant enough saving to create new syntax 
but saving these characters is only a side effect and is not the most important 
aspect of this proposal which leads us to the next point.


Remember the Python philosophy that there ought to be one way to do it.


Funny you use this argument against my idea as this idea comes from following 
this rule whereas getattr goes against it. Using dot is the main syntax to 
access attributes. Following this, the syntax I'm proposing is much more in 
line with this primary syntax than getattr is. If there ought to be only one 
way to access attributes then it should be dot notation.


I believe that you are missing the point of getattr. It's not there so 
that one can use arbitrary strings as attribute names; it's there so 
that one can get attributes with names that aren't known until run time. 
For this purpose the dot-notation-with-quotes you suggest above is not 
good enough. For suppose e.g. that one does this:


name = 'attribute'
x.name

How would the interpreter know whether you're asking for getattr(x, 
name) or getattr(x, 'name')?
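
The distinction is easy to demonstrate (toy example of my own):

```python
class Obj:
    name = "the literal attribute 'name'"
    attribute = "the attribute whose name is stored in the variable"

x = Obj()
name = "attribute"
print(x.name)            # dot syntax always means the literal name
print(getattr(x, name))  # getattr evaluates its argument at run time
```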



Re: Got a Doubt ! Wanting for your Help ! Plz make it ASAP !

2013-11-27 Thread Rotwang

On 27/11/2013 08:31, Antoon Pardon wrote:

Op 27-11-13 09:19, Chris Angelico schreef:

[...]

Do you mean standard British English, standard American English,
standard Australian English, or some other?


Does that significantly matter or are you just looking for details
you can use to disagree? As far as I understand the overlap between
standard British English and standard American English is so large
that it doesn't really matter for those who had to learn the language.


The overlap is large, yes, but there are differences in vocabulary that 
are just as likely to cause confusion as the doubt/question distinction.


http://www.youtube.com/watch?v=LVzwJ6WWlHA&t=5m06s


Re: Method chaining

2013-11-24 Thread Rotwang

On 24/11/2013 14:27, Steven D'Aprano wrote:

On Sat, 23 Nov 2013 19:53:32 +, Rotwang wrote:


On 22/11/2013 11:26, Steven D'Aprano wrote:

A frequently missed feature is the ability to chain method calls:

[...]

chained([]).append(1).append(2).append(3).reverse().append(4)
#=> returns [3, 2, 1, 4]


That's pretty cool. However, I can imagine it would be nice for the
chained object to still be an instance of its original type.


Why?


Well, if one intends to pass such an object to a function that does 
something like this:


def f(obj):
    if isinstance(obj, class1):
        do_something(obj)
    elif isinstance(obj, class2):
        do_something_else(obj)

then

 f(chained(obj).method1().method2())

looks nicer than

 f(chained(obj).method1().method2().obj)

and usually still works with the version of chained I posted last night. 
This isn't a wholly hypothetical example, I have functions in my own 
software that perform instance checks and that I often want to pass 
objects that I've just mutated several times.




During the chained call, you're only working with the object's own
methods, so that shouldn't matter. Even if a method calls an external
function with self as an argument, and the external function insists on
the original type, that doesn't matter because the method sees only the
original (wrapped) object, not the chained object itself.

In other words, in the example above, each of the calls to list.append
etc. see only the original list, not the chained object.

The only time you might care about getting the unchained object is at the
end of the chain. This is where Ruby has an advantage, the base class of
everything has a method useful for chaining methods, Object.tap, where in
Python you either keep working with the chained() wrapper object, or you
can extract the original and discard the wrapper.



How about something like this:

def getr(self, name):
    obj = super(type(self), self).__getattribute__(name)


I don't believe you can call super like that. I believe it breaks when
you subclass the subclass.


Yes. I don't know what I was thinking with the various super calls I 
wrote last night, apart from being buggy they're completely unnecessary. 
Here's a better version:


class dummy:
    pass

def initr(self, obj):
    object.__setattr__(self, '__obj', obj)
def getr(self, name):
    try:
        return object.__getattribute__(self, name)
    except AttributeError:
        return getattr(self.__obj, name)
def methr(method):
    def cmethod(*args, **kwargs):
        try:
            args = list(args)
            self = args[0]
            args[0] = self.__obj
        except (IndexError, AttributeError):
            self = None
        result = method(*args, **kwargs)
        return self if result is None else result
    try:
        cmethod.__qualname__ = method.__qualname__
    except AttributeError:
        pass
    return cmethod

class chained(type):
    typedict = {}
    def __new__(cls, obj):
        if isinstance(type(obj), chained):
            return obj
        if type(obj) not in cls.typedict:
            dict = {}
            for t in reversed(type(obj).__mro__):
                dict.update({k: methr(v) for k, v in t.__dict__.items()
                             if callable(v) and k != '__new__'})
            dict.update({'__init__': initr, '__getattribute__': getr})
            cls.typedict[type(obj)] = type.__new__(cls, 'chained%s'
                % type(obj).__name__, (dummy, type(obj)), dict)
        return cls.typedict[type(obj)](obj)


Re: Method chaining

2013-11-23 Thread Rotwang

On 22/11/2013 11:26, Steven D'Aprano wrote:

A frequently missed feature is the ability to chain method calls:

x = []
x.append(1).append(2).append(3).reverse().append(4)
#=> x now equals [3, 2, 1, 4]


This doesn't work with lists, as the methods return None rather than
self. The class needs to be designed with method chaining in mind before
it will work, and most Python classes follow the lead of built-ins like
list and have mutator methods return None rather than self.

Here's a proof-of-concept recipe to adapt any object so that it can be
used for chaining method calls:


class chained:
    def __init__(self, obj):
        self.obj = obj
    def __repr__(self):
        return repr(self.obj)
    def __getattr__(self, name):
        obj = getattr(self.obj, name)
        if callable(obj):
            def selfie(*args, **kw):
                # Call the method just for side-effects, return self.
                _ = obj(*args, **kw)
                return self
            return selfie
        else:
            return obj


chained([]).append(1).append(2).append(3).reverse().append(4)
#=> returns [3, 2, 1, 4]


That's pretty cool. However, I can imagine it would be nice for the 
chained object to still be an instance of its original type. How about 
something like this:


def getr(self, name):
    obj = super(type(self), self).__getattribute__(name)
    if callable(obj):
        def selfie(*args, **kwargs):
            result = obj(*args, **kwargs)
            return self if result is None else result
        return selfie
    return obj

class chained(type):
    typedict = {}
    def __new__(cls, obj):
        if type(obj) not in cls.typedict:
            cls.typedict[type(obj)] = type.__new__(
                cls, 'chained%s' % type(obj).__name__,
                (type(obj),), {'__getattribute__': getr})
        return cls.typedict[type(obj)](obj)


# In the interactive interpreter:
>>> d = chained({}).update({1: 2}).update({3: 4})
>>> d
{1: 2, 3: 4}
>>> type(d)
<class '__main__.chaineddict'>
>>> isinstance(d, dict)
True


The above code isn't very good - it will only work on types whose 
constructor will copy an instance, and it discards the original. And its 
dir() is useless. Can anyone suggest something better?



Re: Method chaining

2013-11-23 Thread Rotwang

On 23/11/2013 19:53, Rotwang wrote:

[...]

That's pretty cool. However, I can imagine it would be nice for the
chained object to still be an instance of its original type. How about
something like this:

[crap code]

The above code isn't very good - it will only work on types whose
constructor will copy an instance, and it discards the original. And its
dir() is useless. Can anyone suggest something better?


Here's another attempt:

class dummy:
    pass

def initr(self, obj):
    super(type(self), self).__setattr__('__obj', obj)
def getr(self, name):
    try:
        return super(type(self), self).__getattribute__(name)
    except AttributeError:
        return getattr(self.__obj, name)
def methr(method):
    def selfie(self, *args, **kwargs):
        result = method(self.__obj, *args, **kwargs)
        return self if result is None else result
    return selfie

class chained(type):
    typedict = {}
    def __new__(cls, obj):
        if type(obj) not in cls.typedict:
            dict = {}
            for t in reversed(type(obj).__mro__):
                dict.update({k: methr(v) for k, v in t.__dict__.items()
                             if callable(v) and k != '__new__'})
            dict.update({'__init__': initr, '__getattribute__': getr})
            cls.typedict[type(obj)] = type.__new__(cls, 'chained%s'
                % type(obj).__name__, (dummy, type(obj)), dict)
        return cls.typedict[type(obj)](obj)


This solves some of the problems in my earlier effort. It keeps a copy 
of the original object, while leaving its interface pretty much 
unchanged; e.g. repr does what it's supposed to, and getting or setting 
an attribute of the chained object gets or sets the corresponding 
attribute of the original. It won't work on classes with properties, 
though, nor on classes with callable attributes that aren't methods (for 
example, a class with an attribute which is another class).



Re: Method chaining

2013-11-23 Thread Rotwang

On 24/11/2013 00:28, Rotwang wrote:

[...]

This solves some of the problems in my earlier effort. It keeps a copy
of the original object,


Sorry, I meant that it keeps a reference to the original object.


Re: Getting globals of the caller, not the defining module

2013-11-14 Thread Rotwang

On 11/11/2013 12:02, sg...@hotmail.co.uk wrote:

(Sorry for posting through GG, I'm at work.)

On Monday, November 11, 2013 11:25:42 AM UTC, Steven D'Aprano wrote:

Suppose I have a function that needs access to globals:

# module A.py
def spam():
 g = globals()  # this gets globals from A
 introspect(g)

As written, spam() only sees its own globals, i.e. those of the module in
which spam is defined. But I want spam to see the globals of the caller.

# module B
import A
A.spam()  # I want spam to see globals from B

I can have the caller explicitly pass the globals itself:

def spam(globs=None):
 if globs is None:
 globs = globals()
 introspect(globs)

But since spam is supposed to introspect as much information as possible,
I don't really want to do that. What (if anything) are my other options?


How about this?

# module A.py
import inspect
def spam():
 return inspect.stack()[1][0].f_globals


Bump. Did this do what you wanted, or not?


Re: Getting globals of the caller, not the defining module

2013-11-12 Thread Rotwang

On 12/11/2013 01:57, Terry Reedy wrote:

On 11/11/2013 7:02 AM, sg...@hotmail.co.uk wrote:

(Sorry for posting through GG, I'm at work.)

On Monday, November 11, 2013 11:25:42 AM UTC, Steven D'Aprano wrote:

Suppose I have a function that needs access to globals:

# module A.py
def spam():
 g = globals()  # this gets globals from A
 introspect(g)

As written, spam() only sees its own globals, i.e. those of the
module in
which spam is defined. But I want spam to see the globals of the caller.

# module B
import A
A.spam()  # I want spam to see globals from B

I can have the caller explicitly pass the globals itself:

def spam(globs=None):
 if globs is None:
 globs = globals()
 introspect(globs)

But since spam is supposed to introspect as much information as
possible,
I don't really want to do that. What (if anything) are my other options?


How about this?

# module A.py
import inspect
def spam():
 return inspect.stack()[1][0].f_globals


In Python 3, the attribute is __globals__.


Er... no it isn't? Sorry if I'm mistaken but I believe you're thinking 
of the attribute formerly known as func_globals. But in the above 
inspect.stack()[1][0] is not a function, it's a frame object. In fact 
it's the same thing as sys._getframe().f_back, I think.
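
A quick check that the two expressions really denote the same frame, and that its f_globals are the caller's (assumes CPython, since sys._getframe is an implementation detail):

```python
import inspect
import sys

def probe():
    frame = inspect.stack()[1][0]
    assert frame is sys._getframe().f_back  # same caller frame, two spellings
    return frame.f_globals

CALLER_MARKER = "visible"
g = probe()
print(g["CALLER_MARKER"])  # 'visible': probe() saw the *caller's* globals
```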



Re: converting letters to numbers

2013-10-16 Thread Rotwang

On 14/10/2013 06:02, Steven D'Aprano wrote:

On Sun, 13 Oct 2013 20:13:32 -0700, Tim Roberts wrote:


def add(c1, c2):
  % Decode
  c1 = ord(c1) - 65
  c2 = ord(c2) - 65
  % Process
  i1 = (c1 + c2) % 26
  % Encode
  return chr(i1+65)


Python uses # for comments, not %, as I'm sure you know. What language
were you thinking off when you wrote the above?


Maybe TeX?



Re: I am never going to complain about Python again

2013-10-10 Thread Rotwang

On 10/10/2013 16:51, Neil Cerutti wrote:

[...]

Mixed arithmetic always promotes to the wider type (except in
the case of complex numbers (Ha!)).

r == c is equivalent to r == abs(c), which returns the magintude
of the complex number.


What?

>>> -1 == -1 + 0j
True
>>> -1 == abs(-1 + 0j)
False
>>> 1 == 0 + 1j
False
>>> 1 == abs(0 + 1j)
True


Re: python function parameters, debugging, comments, etc.

2013-10-02 Thread Rotwang

On 02/10/2013 11:15, Oscar Benjamin wrote:

On 2 October 2013 00:45, Rotwang sg...@hotmail.co.uk wrote:


So the upside of duck-typing is clear. But as you've already discovered, so
is the downside: Python's dynamic nature means that there's no way for the
interpreter to know what kind of arguments a function will accept, and so a
user of any function relies on the function having clear documentation.


It is still necessary to document the arguments of functions in
explicitly typed languages. Knowing that you need a list of strings
does not mean that you know what the function expects of the values of
the strings and what it will try to do with them.

[...]


Well, yes. I didn't intend to suggest otherwise.



Re: python function parameters, debugging, comments, etc.

2013-10-01 Thread Rotwang

On 01/10/2013 23:54, Chris Friesen wrote:


I've got a fair bit of programming experience (mostly kernel/POSIX stuff in C). 
 I'm fairly new to python though, and was hoping for some advice.

Given the fact that function parameters do not specify types, when you're 
looking at someone else's code how the heck do you know what is expected for a 
given argument?  (Especially in a nontrivial system where the parameter is just 
passed on to some other function and may not be evaluated for several nested 
function calls.)

Is the recommendation to have comments for each function describing the 
expected args?

[...]


In the Python community, one of the programming styles that is 
encouraged is duck-typing. What this means is that rather than writing 
functions that check whether arguments passed to that function are of a 
specific type, the function should simply use any methods of those 
arguments it requires; that way the function will still work if passed 
an argument whose type is a custom type defined by the user which has 
the right interface so that the function body still makes sense (if it 
quacks like a duck, then the function might as well treat it like a 
duck). If a user passes an argument which doesn't have the right methods 
then the function will fail, but the traceback that the interpreter 
provides will often have enough information to make it clear why it failed.


(see http://docs.python.org/3/glossary.html#term-duck-typing )


So the upside of duck-typing is clear. But as you've already discovered, 
so is the downside: Python's dynamic nature means that there's no way 
for the interpreter to know what kind of arguments a function will 
accept, and so a user of any function relies on the function having 
clear documentation. There are several ways to document a function; 
apart from comments, functions also have docstrings, which will be 
displayed, along with the function's signature, when you call 
help(function). A docstring is a string literal which occurs as the 
first statement of a function definition, like this:


def foo(x, y = 2):
    '''This function takes an argument x, which should be iterable, and
    a function y, which should be a numeric type. It does nothing.'''
    pass


If I call help(foo), I get this:

Help on function foo in module __main__:

foo(x, y=2)
    This function takes an argument x, which should be iterable, and
    a function y, which should be a numeric type. It does nothing.


In Python 3.0 and later, functions can also have annotations; they have 
no semantics in the language itself but third-party modules can use them 
if they choose to do so. They look like this:


def foo(x: str, y: int = 2, z: 'Hello' = None) -> tuple:
    return x, y, z

For more about annotations, see here:

http://www.python.org/dev/peps/pep-3107/
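
Annotations are simply stored on the function object, so third-party tools can read them back; continuing the sketch above:

```python
def foo(x: str, y: int = 2, z: 'Hello' = None) -> tuple:
    return x, y, z

print(foo.__annotations__)
# e.g. {'x': <class 'str'>, 'y': <class 'int'>, 'z': 'Hello', 'return': <class 'tuple'>}
```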


So the short answer is that Python gives you several methods for making 
it clear what kind of arguments the functions you define should be 
passed, but unfortunately you'll likely encounter functions written by 
people who made no use of those methods. On the plus side, Python's 
exception reporting is good, so if in doubt just try using a function in 
the interactive interpreter and see what happens (with the usual caveats 
about using untrusted code, obviously).



Re: lambda - strange behavior

2013-09-20 Thread Rotwang

On 20/09/2013 16:21, Kasper Guldmann wrote:

I was playing around with lambda functions, but I cannot seem to fully grasp
them. I was running the script below in Python 2.7.5, and it doesn't do what
I want it to. Are lambda functions really supposed to work that way. How do
I make it work as I intend?

f = []
for n in range(5):
    f.append( lambda x: x*n )

assert( f[4](2) == 8 )
assert( f[3](3) == 9 )
assert( f[2](2) == 4 )
assert( f[1](8) == 8 )
assert( f[0](2) == 0 )


This is a common gotcha. In the function lambda x: x*n, the name n is a global
variable, which means that when the function is called it searches 
globals() for the current value of n, which is 4 (since that was its 
value when the for loop ended). There are several ways to define 
functions that depend on the values bound to names at creation time, 
like you're trying to do. One is to use the fact that default function 
arguments are evaluated when the function is created. So this will work:


f = []
for n in range(5):
    f.append(lambda x, n = n: x*n)


Another is to use the fact that arguments passed in function calls are 
evaluated when the function is called. That means that you can define a 
function which takes a parameter as an argument and returns a function 
which depends on that parameter to get the desired behaviour, like this:


f = []
for n in range(5):
    f.append((lambda n: lambda x: x*n)(n))
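
A third option (not in the reply above) is functools.partial, which freezes the value of n without a nested lambda:

```python
from functools import partial
import operator

# Each callable remembers its own n, bound at creation time.
f = [partial(operator.mul, n) for n in range(5)]

assert f[4](2) == 8
assert f[3](3) == 9
assert f[1](8) == 8
assert f[0](2) == 0
```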


Re: dynamic function parameters for **kwargs

2013-09-20 Thread Rotwang

On 20/09/2013 16:51, bab mis wrote:

Hi ,
I have a function as below:

def func(**kwargs):
 ...
 ...




args = "a='b',c='d'"

i want to call func(args) so that my function call will take a var as an 
parameter.
it fails with an error typeError: fun() takes exactly 0 arguments (1 given)
. Is there any other way to get the same.


It fails because args is a string and func(args) is passing a single 
string as a positional argument to func, rather than passing the keyword 
arguments a and c. Not sure if I've understood your question, but


args = {'a': 'b', 'c': 'd'}
# or equivalently args = dict(a='b', c='d')
func(**args)

will work. If you need args to be a string like the one in your post 
then you could try


eval('func(%s)' % args)

or

func(**eval('dict(%s)' % args))

but that's only something that should be done if you trust the user who 
will decide what args is (since malicious code passed to eval() can do 
pretty much anything).
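
If the format of the string can be changed to a dict literal, ast.literal_eval avoids the code-execution risk entirely (a sketch under that assumption):

```python
import ast

def func(**kwargs):
    return kwargs

args = "{'a': 'b', 'c': 'd'}"    # dict-literal syntax instead of keyword syntax
parsed = ast.literal_eval(args)  # only evaluates literals, never arbitrary code
print(func(**parsed))  # {'a': 'b', 'c': 'd'}
```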



Re: Weird ttk behaviour

2013-09-17 Thread Rotwang

On 16/09/2013 19:43, Serhiy Storchaka wrote:

16.09.13 19:28, Rotwang wrote:

On Windows 7 (sys.version is '3.3.0 (v3.3.0:bd8afb90ebf2, Sep 29 2012,
10:57:17) [MSC v.1600 64 bit (AMD64)]') there's no problem; f() works
fine in the first place. Does anybody know what's going on?


What _root.wantobjects() returns?


It returns True, both before and after the call to 
style.theme_use('newtheme'). Why do you ask? I've no idea what 
Tk.wantobjects is supposed to do (I tried help() and some web searches 
with no luck).




Re: Weird ttk behaviour

2013-09-17 Thread Rotwang

On 16/09/2013 23:34, Chris Angelico wrote:

On Tue, Sep 17, 2013 at 2:28 AM, Rotwang sg...@hotmail.co.uk wrote:

If I then uncomment those two lines, reload the module and call f() again
(by entering tkderp.reload(tkderp).f()), the function works like it was
supposed to in the first place: two warnings, no exceptions. I can reload
the module as many times as I like and f() will continue to work without any
problems.


Reloading modules in Python is a bit messy. Are you able to tinker
with it and make it work in some way without reloading? It'd be easier
to figure out what's going on that way.


I can't think what else I could try, do you have any suggestions? The 
problem first appeared in a much larger module (I was trying to replace 
some tkinter widgets that looked bad on Linux with their ttk 
equivalents); the only reason I noticed the thing about reloading is 
that I was trying to reproduce the error in a short module by repeatedly 
making changes and reloading.



Re: Weird ttk behaviour

2013-09-17 Thread Rotwang

On 17/09/2013 12:32, Chris Angelico wrote:

[...]

If reloading and doing it again makes things different, what happens
if you simply trigger your code twice without reloading?

I've no idea if it'll help, it just seems like an attack vector on the
problem, so to speak.


Thanks for the suggestion, here's what I've found with some more 
testing. If I rewrite the function f() like this:


def f(fail):
    style = ttk.Style(_root)
    style.theme_create('newtheme', parent = 'default')
    tk.messagebox.showwarning('test', 'test')
    if fail:
        style.theme_use('newtheme')
        tk.messagebox.showwarning('test', 'test')


then I import the module and call f(False) followed by f(True), the 
second call raises an exception just like the original function. I've 
tried variations of the above, such as defining a module-level global 
style instead of having one created during the function call, and the 
end result is always the same. However, suppose instead I define two 
modules, tkderp and tkderp2; both have a function f as defined in my OP, 
but the first has the last two lines of f commented out, and the second 
doesn't (i.e. tkderp is the modified tkderp from before, and tkderp2 is 
the original). Then I do this:


>>> import tkderp
>>> tkderp.f()
>>> import tkderp2
>>> tkderp2.f()


In that case the second call to f() works fine - two warnings, no 
exception. In fact, if I replace tkderp with this:


# begin tkderp.py

import tkinter as tk

_root = tk.Tk()
_root.withdraw()

# end tkderp.py


then simply importing tkderp before tkderp2 is enough to make the latter 
work properly - this


>>> import tkderp2
>>> tkderp2.f()


raises an exception, but this

>>> import tkderp
>>> import tkderp2
>>> tkderp2.f()


doesn't. Any ideas what may be going on?


Re: Weird ttk behaviour

2013-09-17 Thread Rotwang

On 17/09/2013 15:35, Chris Angelico wrote:

On Wed, Sep 18, 2013 at 12:25 AM, Rotwang sg...@hotmail.co.uk wrote:

In fact, if I replace tkderp with this:


# begin tkderp.py

import tkinter as tk

_root = tk.Tk()
_root.withdraw()

# end tkderp.py


then simply importing tkderp before tkderp2 is enough to make the latter
work properly


Nice piece of detective work! Alas, I don't know tkinter well enough
to help with the details, but this is exactly what I'd like to hear if
I were trying to pick this up and debug it :)


I don't know tkinter well enough either, but the fact that it behaves 
differently on Linux and Windows suggests to me that at least one 
version is bugging out. Do you think this is worth raising on 
bugs.python.org?




Tkinter experts, anywhere? Where's Ranting Rick when you need him...


Last time I saw him was in this thread:

https://mail.python.org/pipermail/python-list/2013-June/650257.html


Weird ttk behaviour

2013-09-16 Thread Rotwang

Hi all,

I've just started trying to learn how to use ttk, and I've discovered 
something that I don't understand. I'm using Python 3.3.0 in Linux Mint 
15. Suppose I create the following module:


# begin tkderp.py

import tkinter as tk
import tkinter.messagebox as _
from tkinter import ttk
from imp import reload


_root = tk.Tk()
_root.withdraw()

def f():
    style = ttk.Style(_root)
    style.theme_create('newtheme', parent = 'default')
    tk.messagebox.showwarning('test', 'test')
    style.theme_use('newtheme')
    tk.messagebox.showwarning('test', 'test')

# end tkderp.py


The function f() is supposed to spawn two warning dialogs. AIUI, the 
style.theme.use('newtheme') line shouldn't make any difference - it 
would refresh any existing ttk widgets and alter the appearance of any 
subsequently created ones if I had changed any of the new theme's 
settings, but in the above context nothing it does should be discernible 
to the user. If I try to call f() the first warning gets shown, but the 
second one causes an exception:


Python 3.3.1 (default, Apr 17 2013, 22:30:32)
[GCC 4.7.3] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tkderp
>>> tkderp.f()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/phil/Python/tkderp.py", line 17, in f
    tk.messagebox.showwarning('test', 'test')
  File "/usr/lib/python3.3/tkinter/messagebox.py", line 87, in showwarning
    return _show(title, message, WARNING, OK, **options)
  File "/usr/lib/python3.3/tkinter/messagebox.py", line 72, in _show
    res = Message(**options).show()
  File "/usr/lib/python3.3/tkinter/commondialog.py", line 48, in show
    s = w.tk.call(self.command, *w._options(self.options))
_tkinter.TclError: unknown color name ""


If I try reloading the module and calling the function again, this time 
the first of the two warnings raises the exception. Since I don't really 
understand how the ttk.Style class works, I can't say whether this 
behaviour is expected or not. But here's what's weird. Suppose that I 
comment out the last two lines in the definition of f(), like so:


# begin modified tkderp.py

import tkinter as tk
import tkinter.messagebox as _
from tkinter import ttk
from imp import reload


_root = tk.Tk()
_root.withdraw()

def f():
    style = ttk.Style(_root)
    style.theme_create('newtheme', parent = 'default')
    tk.messagebox.showwarning('test', 'test')
    #style.theme_use('newtheme')
    #tk.messagebox.showwarning('test', 'test')

# end modified tkderp.py


Unsurprisingly, importing the module and calling f() displays a single 
warning dialog and raises no exception. If I then uncomment those two 
lines, reload the module and call f() again (by entering 
tkderp.reload(tkderp).f()), the function works like it was supposed to 
in the first place: two warnings, no exceptions. I can reload the module 
as many times as I like and f() will continue to work without any problems.


On Windows 7 (sys.version is '3.3.0 (v3.3.0:bd8afb90ebf2, Sep 29 2012, 
10:57:17) [MSC v.1600 64 bit (AMD64)]') there's no problem; f() works 
fine in the first place. Does anybody know what's going on?



Re: back with more issues

2013-08-12 Thread Rotwang

On 12/08/2013 06:54, Dave Angel wrote:

[...]

This function makes no sense to me.  A function should have three
well-defined pieces:  what are its parameters, what does it do, what are
its side-effects, and what does it return.


No! A function should have *four* well-defined pieces: what are its 
parameters, what does it do, what are its side-effects, what does it 
return, and an almost fanatical devotion to the Pope [etc.]



Re: Newbie: static typing?

2013-08-06 Thread Rotwang

On 06/08/2013 11:07, Rui Maciel wrote:

Joshua Landau wrote:


Unless you have a very good reason, don't do this [i.e. checking
arguments for type at runtime and raising TypeError]. It's a damn pain
when functions won't accept my custom types with equivalent
functionality -- Python's a duck-typed language and it should behave
like one.


In that case what's the pythonic way to deal with standard cases like this
one?

<code>
class SomeModel(object):
    def __init__(self):
        self.label = "this is a label attribute"

    def accept(self, visitor):
        visitor.visit(self)
        print("visited: ", self.label)


class AbstractVisitor(object):
    def visit(self, element):
        pass


class ConcreteVisitorA(AbstractVisitor):
    def visit(self, element):
        element.label = "ConcreteVisitorA operated on this model"

class ConcreteVisitorB(AbstractVisitor):
    def visit(self, element):
        element.label = "ConcreteVisitorB operated on this model"


model = SomeModel()

operatorA = ConcreteVisitorA()

model.accept(operatorA)

operatorB = ConcreteVisitorB()

model.accept(operatorB)

not_a_valid_type = "foo"

model.accept(not_a_valid_type)
</code>


The Pythonic way to deal with it is exactly how you deal with it above. 
When the script attempts to call model.accept(not_a_valid_type) an 
exception is raised, and the exception's traceback will tell you exactly 
what the problem was (namely that not_a_valid_type does not have a 
method called visit). In what way would runtime type-checking be any 
better than this? There's an obvious way in which it would be worse, 
namely that it would prevent the user from passing a custom object to 
SomeModel.accept() that has a visit() method but is not one of the types 
for which you thought to check.
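
To make the point concrete, here is a small sketch (my own example, not the
code quoted above): a visitor class that inherits from nothing in particular
is still accepted, and an invalid argument fails with a self-explanatory error.

```python
# Hypothetical model/visitor pair illustrating duck typing: accept() works
# with ANY object that has a visit() method, subclass or not.
class Model:
    def __init__(self):
        self.label = "initial"

    def accept(self, visitor):
        visitor.visit(self)


class CustomVisitor:  # deliberately NOT a subclass of any abstract visitor
    def visit(self, element):
        element.label = "visited by CustomVisitor"


m = Model()
m.accept(CustomVisitor())
print(m.label)  # -> visited by CustomVisitor

# An object with no visit() method fails with an informative traceback:
try:
    m.accept("foo")
except AttributeError as e:
    print(e)  # -> 'str' object has no attribute 'visit'
```

No isinstance check is needed: the AttributeError already names the actual
problem.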


--
http://mail.python.org/mailman/listinfo/python-list


Re: Script that converts between indentation and curly braces in Python code

2013-07-31 Thread Rotwang

On 31/07/2013 14:55, Chris Angelico wrote:

[...]


Since the braced version won't run anyway, how about a translation like this:

def foo():
    print("Hello,
world!")
    for i in range(5):
        foo()
    return 42

--

0-def foo():
4-print("Hello,
0-world!")
4-for i in range(5):
8-foo()
4-return 42

That's a simple translation that guarantees safe round-tripping, and
you can probably do it with a one-liner fwiw... let's see...

# Assumes spaces OR tabs but not both
# Can't see an easy way to count leading spaces other than:
# len(s)-len(s.lstrip())


How about len(s.expandtabs()) - len(s.lstrip()) instead?
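
For instance (my example, not from the thread), wrapping that expression in a
helper counts tabs at their display width:

```python
# Counting leading whitespace in display columns: expand tabs (default tab
# stop is 8), then subtract the length of the fully left-stripped line.
def indent_width(s):
    return len(s.expandtabs()) - len(s.lstrip())

assert indent_width("    x = 1") == 4   # four spaces
assert indent_width("\tx = 1") == 8     # one tab counts as 8 columns
assert indent_width("\t  x = 1") == 10  # tab, then two spaces
```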
--
http://mail.python.org/mailman/listinfo/python-list


Re: Unexpected results comparing float to Fraction

2013-07-29 Thread Rotwang

On 29/07/2013 17:40, Ian Kelly wrote:

On Mon, Jul 29, 2013 at 10:20 AM, Chris Angelico ros...@gmail.com wrote:

On Mon, Jul 29, 2013 at 5:09 PM, MRAB pyt...@mrabarnett.plus.com wrote:

I'm surprised that Fraction(1/3) != Fraction(1, 3); after all, floats
are approximate anyway, and the float value 1/3 is more likely to be
Fraction(1, 3) than Fraction(6004799503160661, 18014398509481984).


At what point should it become Fraction(1, 3)?


At the point where the float is exactly equal to the value you get
from the floating-point division 1/3.


But the interpreter has no way of knowing that the value 1/3 that's been 
passed to the Fraction constructor was obtained from the division 1/3, 
rather than, say, 11/30 or 
6004799503160661/18014398509481984. How do you propose the constructor 
should decide between the many possible fractions that round to the same 
float, if not by choosing the one that evaluates to it exactly?
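
To spell out the exact value in question (my example):
float.as_integer_ratio shows the dyadic rational that the literal 1/3
actually denotes.

```python
from fractions import Fraction

# The binary double written as 1/3 is exactly this ratio, whose denominator
# is a power of two (2**54):
num, den = (1 / 3).as_integer_ratio()
print(num, den)  # 6004799503160661 18014398509481984

# Fraction(float) preserves that exact value, hence the inequality at issue:
assert Fraction(1 / 3) == Fraction(num, den)
assert Fraction(1 / 3) != Fraction(1, 3)
```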


Personally the behaviour in the OP is exactly what I would expect.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Unexpected results comparing float to Fraction

2013-07-29 Thread Rotwang

On 29/07/2013 17:20, Chris Angelico wrote:

On Mon, Jul 29, 2013 at 5:09 PM, MRAB pyt...@mrabarnett.plus.com wrote:

I'm surprised that Fraction(1/3) != Fraction(1, 3); after all, floats
are approximate anyway, and the float value 1/3 is more likely to be
Fraction(1, 3) than Fraction(6004799503160661, 18014398509481984).


At what point should it become Fraction(1, 3)?


>>> Fraction(0.3)
Fraction(5404319552844595, 18014398509481984)
>>> Fraction(0.33)
Fraction(5944751508129055, 18014398509481984)
>>> Fraction(0.333)
Fraction(5998794703657501, 18014398509481984)
>>> Fraction(0.3333333)
Fraction(6004798902680711, 18014398509481984)
>>> Fraction(0.3333333333)
Fraction(6004799502560181, 18014398509481984)
>>> Fraction(0.3333333333333)
Fraction(6004799503160061, 18014398509481984)
>>> Fraction(0.3333333333333333)
Fraction(6004799503160661, 18014398509481984)

Rounding off like that is a job for a cool library function (one of
which was mentioned on this list a little while ago, I believe), but
not IMO for the Fraction constructor.


How about this?

>>> from fractions import Fraction
>>> help(Fraction.limit_denominator)
Help on function limit_denominator in module fractions:

limit_denominator(self, max_denominator=1000000)
    Closest Fraction to self with denominator at most max_denominator.

>>> Fraction('3.141592653589793').limit_denominator(10)
Fraction(22, 7)
>>> Fraction('3.141592653589793').limit_denominator(100)
Fraction(311, 99)
>>> Fraction(4321, 8765).limit_denominator(10000)
Fraction(4321, 8765)

>>> Fraction(1/3).limit_denominator()
Fraction(1, 3)
--
http://mail.python.org/mailman/listinfo/python-list


Re: Explain your acronyms (RSI?)

2013-07-06 Thread Rotwang

On 06/07/2013 20:38, Terry Reedy wrote:

"rms has crippling RSI" (anonymous, as quoted by Skip).

I suspect that 'rms' = Richard M Stallman (but why lower case? to insult
him?). I 'know' that RSI = Roberts Space Industries, a game company
whose Kickstarter project I supported. Whoops, wrong context. How about
'Richard Stallman Insanity' (his personal form of megalomania)? That
makes the phrase is a claim I have read others making.

Lets continue and see if that interpretation works. should indicate
that emacs' ergonomics is not right. Aha! Anonymous believes that using
his own invention, emacs, is what drove Richard crazy. He would not be
the first self invention victim.

But Skip mentions 'worse for wrists'. So RSI must be a physical rather
than mental condition. Does 'I' instead stand for Inoperability?,
Instability?, or what?

Let us try Google. Type in RSI and it offers 'RSI medications' as a
choice. Sound good, as it will eliminate all the companies with those
initials. The two standard medical meanings of RSI seem to be Rapid
Sequence Intubation and Rapid Sequence Induction. But those are
procedures, not chronic syndromes. So I still do not know what the
original poster, as quoted by Skip, meant.


Repetitive strain injury, I assume. Not sure if you're joking but over 
here the top 7 hits for RSI on Google, as well as the three ads that 
precede them, are repetitive strain injury-related.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Simple recursive sum function | what's the cause of the weird behaviour?

2013-07-06 Thread Rotwang

On 06/07/2013 19:43, Joshua Landau wrote:

On 6 July 2013 13:59, Russel Walker russ.po...@gmail.com wrote:

Since I've already wasted a thread I might as well...

Does this serve as an acceptable solution?

def supersum(sequence, start=0):
 result = type(start)()
 for item in sequence:
 try:
 result += supersum(item, start)
 except:
 result += item
 return result


It's probably more robust to do:

def supersum(sequence, start=0):
    for item in sequence:
        try:
            result = result + supersum(item, start)
        except:
            result = result + item
    return result


I assume you meant to put result = start in there at the beginning.



as that way you aren't assuming the signature of type(start).


It's not quite clear to me what the OP's intentions are in the general 
case, but calling supersum(item, start) seems odd - for example, is the 
following desirable?


>>> supersum([[1], [2], [3]], 4)
22

I would have thought that the correct answer would be 10. How about 
the following?


def supersum(sequence, start = 0):
result = start
for item in reversed(sequence):
try:
result = supersum(item, result)
except:
result = item + result
return result
--
http://mail.python.org/mailman/listinfo/python-list


Re: Simple recursive sum function | what's the cause of the weird behaviour?

2013-07-06 Thread Rotwang

On 06/07/2013 21:10, Rotwang wrote:

[...]

It's not quite clear to me what the OP's intentions are in the general
case, but calling supersum(item, start) seems odd - for example, is the
following desirable?

>>> supersum([[1], [2], [3]], 4)
22

I would have thought that the correct answer would be 10. How about
the following?

def supersum(sequence, start = 0):
 result = start
 for item in reversed(sequence):
 try:
 result = supersum(item, result)
 except:
 result = item + result
 return result


Sorry, I've no idea what I was thinking with that reversed thing. The 
following seems better:


def supersum(sequence, start = 0):
result = start
for item in sequence:
try:
result = supersum(item, result)
except:
result = result + item
return result
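
For what it's worth, here is that version again with the bare except narrowed
to TypeError (my tweak, so that unrelated errors aren't silently swallowed),
plus a couple of checks of the intended behaviour:

```python
def supersum(sequence, start=0):
    result = start
    for item in sequence:
        try:
            # Recurse into nested sequences, threading the running total.
            result = supersum(item, result)
        except TypeError:  # item isn't iterable -- just add it
            result = result + item
    return result

assert supersum([[1], [2], [3]], 4) == 10   # 4 + 1 + 2 + 3, start counted once
assert supersum([1, [2, [3, [4]]]]) == 10   # arbitrary nesting
assert supersum([]) == 0
```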
--
http://mail.python.org/mailman/listinfo/python-list


Re: Explain your acronyms (RSI?)

2013-07-06 Thread Rotwang

On 06/07/2013 21:11, Stefan Behnel wrote:

Rotwang, 06.07.2013 21:51:

On 06/07/2013 20:38, Terry Reedy wrote:

rms has crippling RSI (anonymous, as quoted by Skip).
[...]
Let us try Google. Type in RSI and it offers 'RSI medications' as a
choice. Sound good, as it will eliminate all the companies with those
initials. The two standard medical meanings of RSI seem to be Rapid
Sequence Intubation and Rapid Sequence Induction. But those are
procedures, not chronic syndromes. So I still do not know what the
original poster, as quoted by Skip, meant.


Repetitive strain injury, I assume. Not sure if you're joking but over here
the top 7 hits for RSI on Google, as well as the three ads that precede
them, are repetitive strain injury-related.


Both of you might want to delete your browser cookies, log out of your
Google accounts, and then retry. Maybe disabling JavaScript helps. Or
enabling the Privacy Mode in your browser. Or try a different browser all
together. Or a different search engine. Google has lots of ways to detect
who's asking.


The results I mentioned above were in private browsing in FF. I'm in the 
UK though so that certainly will have made a difference.


--
http://mail.python.org/mailman/listinfo/python-list


Re: Default scope of variables

2013-07-05 Thread Rotwang

On 05/07/2013 02:24, Steven D'Aprano wrote:

On Thu, 04 Jul 2013 17:54:20 +0100, Rotwang wrote:
[...]

Anyway, none of the calculations that have been given takes into account
the fact that names can be /less/ than one million characters long.



Not in *my* code they don't!!!

*wink*



The
actual number of non-empty strings of length at most 100 characters,
that consist only of ascii letters, digits or underscores, and that
don't start with a digit, is

sum(53*63**i for i in range(100)) == 53*(63**100 - 1)//62



I take my hat of to you sir, or possibly madam. That is truly an inspired
piece of pedantry.


FWIW, I'm male.



It's perhaps worth mentioning that some non-ascii characters are allowed
in identifiers in Python 3, though I don't know which ones.


PEP 3131 describes the rules:

http://www.python.org/dev/peps/pep-3131/


Thanks.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Default scope of variables

2013-07-04 Thread Rotwang

Sorry to be OT, but this is sending my pedantry glands haywire:

On 04/07/2013 08:06, Dave Angel wrote:

On 07/04/2013 01:32 AM, Steven D'Aprano wrote:



   SNIP


Well, if I ever have more than 63,000,000 variables[1] in a function,
I'll keep that in mind.


 SNIP


[1] Based on empirical evidence that Python supports names with length at
least up to one million characters long, and assuming that each character
can be an ASCII letter, digit or underscore.



Well, the number wouldn't be 63,000,000.  Rather it'd be 63**100

I probably have it wrong, but I think that looks like:

859,122,[etc.]


variables.  (The number has 180 digits)


That's 63**100. Note that 10**100 has 101 digits, and is 
somewhat smaller than 63**100.


Anyway, none of the calculations that have been given takes into account 
the fact that names can be /less/ than one million characters long. The 
actual number of non-empty strings of length at most 100 characters, 
that consist only of ascii letters, digits or underscores, and that 
don't start with a digit, is


sum(53*63**i for i in range(100)) == 53*(63**100 - 1)//62
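
The closed form is just the geometric series 53*(1 + 63 + ... + 63**(n-1));
a quick check (my example) for small maximum lengths:

```python
# 53 choices for the first character (letters + underscore), 63 for each
# subsequent one (letters, digits, underscore); sum over lengths 1..n.
def count_names(n):
    return sum(53 * 63**i for i in range(n))

for n in (1, 2, 5, 20):
    assert count_names(n) == 53 * (63**n - 1) // 62

assert count_names(1) == 53            # one-character names: a-z, A-Z, _
assert count_names(2) == 53 + 53 * 63  # plus the two-character ones
```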


It's perhaps worth mentioning that some non-ascii characters are allowed 
in identifiers in Python 3, though I don't know which ones.

--
http://mail.python.org/mailman/listinfo/python-list


Re: What is the semantics meaning of 'object'?

2013-06-26 Thread Rotwang

On 25/06/2013 23:57, Chris Angelico wrote:

On Wed, Jun 26, 2013 at 8:38 AM, Mark Janssen dreamingforw...@gmail.com wrote:

Combining integers with sets I can make
a Rational class and have infinite-precision arithmetic, for example.


Combining two integers lets you make a Rational. Python integers are
already infinite-precision. Or are you actually talking of using
machine words and sets as your fundamental? Also, you need an
ordered set - is the set {5,3} greater or less than the set {2} when
you interpret them as rationals? One must assume, I suppose, that any
one-element set represents the integer 1, because any number divided
by itself is 1. Is the first operand 3/5 or 5/3?


You could use Kuratowski ordered pairs:

http://en.wikipedia.org/wiki/Ordered_pair#Kuratowski_definition

Not that doing so would be sensible, of course. I don't know much about 
low-level data structures but it seems obvious that it's much easier to 
implement an ordered container type than an unordered set on a computer.
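
For the curious, a toy version with frozensets (my illustration, and
emphatically not a sensible way to build rationals):

```python
# Kuratowski pair: (a, b) := {{a}, {a, b}}, encoded with frozensets so the
# result is itself hashable.
def kpair(a, b):
    return frozenset({frozenset({a}), frozenset({a, b})})

def kfirst(p):
    # The first coordinate is the element common to every member of the pair.
    return next(iter(frozenset.intersection(*p)))

assert kpair(5, 3) != kpair(3, 5)  # order is recoverable, unlike {5, 3}
assert kfirst(kpair(5, 3)) == 5
assert kfirst(kpair(3, 5)) == 3
```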

--
http://mail.python.org/mailman/listinfo/python-list


Re: What is the semantics meaning of 'object'?

2013-06-24 Thread Rotwang

On 24/06/2013 07:31, Steven D'Aprano wrote:

On Mon, 24 Jun 2013 02:53:06 +0100, Rotwang wrote:


On 23/06/2013 18:29, Steven D'Aprano wrote:

On Sat, 22 Jun 2013 23:40:53 -0600, Ian Kelly wrote:

[...]

Can you elaborate or provide a link?  I'm curious to know what other
reason there could be for magic methods to behave differently from
normal methods in this regard.


It's an efficiency optimization. I don't quite get the details, but
when you run something like a + b, Python doesn't search for __add__
using the normal method lookup procedure. That allows it to skip
checking the instance __dict__, as well as __getattribute__ and
__getattr__.


It's not just an efficiency optimisation, it's actually necessary in
cases where a dunder method gets called on a type. Consider what happens
when one calls repr(int), for example - if this tried to call
int.__repr__() by the normal lookup method, it would call the unbound
__repr__ method of int with no self argument:



I don't know about *necessary*, after all, classic classes manage just
fine in Python 2.x:

py> class OldStyle:
...     def __repr__(self):
...         return "Spam"
...
py> repr(OldStyle())
'Spam'
py> repr(OldStyle)
'<class __main__.OldStyle at 0xb7553e0c>'


Point taken. It's also possible to override the __repr__ method of an 
old-style instance and have the change recognised by repr, so repr(x) 
isn't simply calling type(x).__repr__(x) in general.




I daresay that there are good reasons why new-style classes don't do the
same thing, but the point is that had the Python devs had been
sufficiently interested in keeping the old behaviour, and willing to pay
whatever costs that would require, they could have done so.


Sure, though the above behaviour was probably easier to achieve with 
old-style classes than it would have been with new-style classes because 
all instances of old-style classes have the same type. But I don't doubt 
that you're correct that they could have done it if they wanted.

--
http://mail.python.org/mailman/listinfo/python-list


Re: What is the semantics meaning of 'object'?

2013-06-23 Thread Rotwang

On 23/06/2013 18:29, Steven D'Aprano wrote:

On Sat, 22 Jun 2013 23:40:53 -0600, Ian Kelly wrote:

[...]

Can you elaborate or provide a link?  I'm curious to know what other
reason there could be for magic methods to behave differently from
normal methods in this regard.


It's an efficiency optimization. I don't quite get the details, but when
you run something like a + b, Python doesn't search for __add__ using
the normal method lookup procedure. That allows it to skip checking the
instance __dict__, as well as __getattribute__ and __getattr__.


It's not just an efficiency optimisation, it's actually necessary in 
cases where a dunder method gets called on a type. Consider what happens 
when one calls repr(int), for example - if this tried to call 
int.__repr__() by the normal lookup method, it would call the unbound 
__repr__ method of int with no self argument:


>>> int.__repr__()
Traceback (most recent call last):
  File "<pyshell#1>", line 1, in <module>
    int.__repr__()
TypeError: descriptor '__repr__' of 'int' object needs an argument

By bypassing the instance-first lookup and going straight to the 
object's type's dictionary, repr(int) instead calls type.__repr__(int), 
which works:


>>> type.__repr__(int)
"<class 'int'>"

This is explained here:

http://docs.python.org/3.3/reference/datamodel.html#special-lookup
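
The flip side of the same rule is easy to demonstrate (my example): an
instance attribute named __repr__ is honoured by explicit lookup but bypassed
by repr() itself.

```python
class Spam:
    def __repr__(self):
        return "Spam()"

s = Spam()
s.__repr__ = lambda: "instance repr"  # shadows the method in the instance dict

print(s.__repr__())  # -> instance repr   (ordinary attribute lookup)
print(repr(s))       # -> Spam()          (special lookup goes via type(s))
```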
--
http://mail.python.org/mailman/listinfo/python-list


Re: Default Value

2013-06-22 Thread Rotwang

On 22/06/2013 03:01, I wrote:

On 22/06/2013 02:15, Rick Johnson wrote:

[...]

This is what should happen:

 py> def foo(arg=[]):
 ... arg.append(1)
 ... print(arg)
 ...
 py> foo()
 [1]
 py> foo()
 [1]
 py> foo()
 [1]

Yes, Yes, YES! That is intuitive! That is sane! Now, what if
we pass a reference to a mutable object? What then. Well, let's
see:

 py> lst = range(5)
 py> lst
 [0, 1, 2, 3, 4]
 py> def foo(arg=lst):
 ... arg.append(1)
 ... print(arg)
 ...
 py> foo()
 [0, 1, 2, 3, 4, 1]
 py> foo()
 [0, 1, 2, 3, 4, 1, 1]

That's fine. Because the object was already created OUTSIDE
the subroutine. So therefore, modifications to the mutable
are not breaking the fundamental of statelessness INSIDE
subroutines. The modification is merely a side effect, and
the subroutine is unconcerned with anything that exists
beyond it's own scope.

IS ALL THIS REGISTERING YET? DO YOU UNDERSTAND?


No, I don't. These two special cases are not sufficient for me to
determine what semantics you are proposing for the general case. For
example, what happens in the second example if lst is rebound? Does the
default stay the same or does it change to the new value of lst? What
about if you pass a call as a default argument, and then subsequently
change the behaviour of the callable? Does the argument get re-evaluated
every time foo() is called, or is the argument guaranteed to be the same
every time? If the latter, what happens if the arguments type is
modified (e.g. by changing one of its class attributes)? What about
defining functions dynamically, with default arguments that are only
known at runtime? Is there any way to avoid the second type of behaviour
in this case? If so, how? If not, isn't that likely to prove at least as
big a gotcha to people who don't know the rules of RickPy as the problem
you're trying to solve?


Since you haven't answered this, allow me to suggest something. One 
thing that a language could do is treat default arguments as mini 
function definitions with their own scope, which would be the same as 
the scope of the function itself but without the function's arguments. 
In other words, in this alternative version of Python, the following


def f(x = <expression>):
    <stuff>

would be equivalent, in actual Python, to this:

default = lambda: <expression>
sentinel = object()
def f(x = sentinel):
    if x is sentinel:
        x = default()
    <stuff>

That would be something that could be implemented, and as far as I can 
tell it's consistent with the examples you've given of how you would 
like Python to work. Is this indeed the kind of thing you have in mind?
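
Applied to the mutable-default case that started the thread, that translation
would behave like this (my sketch of the proposed semantics, written in
current Python):

```python
# Emulating "late-bound" defaults with a sentinel: each call that omits the
# argument evaluates the default expression afresh.
default = lambda: []  # the default expression, deferred
sentinel = object()

def foo(x=sentinel):
    if x is sentinel:
        x = default()
    x.append(1)
    return x

assert foo() == [1]
assert foo() == [1]      # no state carried between calls
shared = [0]
foo(shared)
assert shared == [0, 1]  # explicitly passed mutables still mutate in place
```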


--
http://mail.python.org/mailman/listinfo/python-list


Re: Default Value

2013-06-22 Thread Rotwang

On 22/06/2013 19:49, Rick Johnson wrote:

On Saturday, June 22, 2013 12:19:31 PM UTC-5, Rotwang wrote:

On 22/06/2013 02:15, Rick Johnson wrote:
IS ALL THIS REGISTERING YET? DO YOU UNDERSTAND?


No, I don't. These two special cases are not sufficient
for me to determine what semantics you are proposing for
the general case.



  QUESTION 1:


For example, what happens in the second
example if lst is rebound?


Well nothing could happen, or something significant could
happen. How do you expect me to determine an answer for
such a general question when i have no context, or any
algorithms to work from.


You don't have an algorithm to work from? I assumed you did. If you 
don't have an algorithm in mind as an alternative to Python's existing 
behaviour, then what exactly is your point?


Anyway, I already proposed an algorithm that is consistent with your 
examples, in


  http://mail.python.org/pipermail/python-list/2013-June/650447.html

. Is that what you have in mind?



Maybe you are trying to set a situation that results in
chaos, okay, then chaos will happen.


No, I'm not. I'm simply trying to find out what alternative behaviour 
you're proposing for Python, because the examples you've given so far 
are not sufficient to explain what happens in the general case.




But who's fault is the
chaos when a programmer is rebinding names? The programmer
is ultimately responsible for his action.

   Remember my Apathetic Approach?


  QUESTION 2:


Does the default stay the same or does it change to the
new value of lst?


Here you go again, you cannot seem to comprehend very simple
concepts. My second example clearly explains what will happen


  py> lst = range(5)
  py> lst
  [0, 1, 2, 3, 4]
  py> def foo(arg=lst):
  ... arg.append(1)
  ... print(arg)
  ...
  py> foo()
  [0, 1, 2, 3, 4, 1]
  py> foo()
  [0, 1, 2, 3, 4, 1, 1]


No, it doesn't, because lst is not rebound in your second example. As 
such, entering "lst = []; foo()" could return [1], or it could return 
[0, 1, 2, 3, 4, 1, 1, 1]. Both of those could result from behaviours 
that gave the exact same output as the two examples you've given. It is 
literally impossible for me to know what you would like to see happen in 
this case from the limited information you've given me.


On the other hand, from the specification I gave earlier it's easy to 
see that "lst = []; foo()" would return [1]. Is this what you want?




Do you see how the name 'lst' is bound to a list object
living in global module scope? Do you see how the default
argument (named `arg`) is a reference to a global list named
`lst`?

  ARE YOU FOLLOWING ALONG SO FAR?

Since the mutable arg exists OUTSIDE the subroutine, the
subroutine merely need to provide access to the global
*name* `lst` from within the body of the subroutine, and the
subroutine merely do this ONCE at compile time!

  ARE YOU FAMILIAR WITH THE CONCEPT OF POINTERS?


Yes, but Python doesn't have them. Are you proposing that it should have 
them, and that they should play some role in Python default binding 
behaviour? If so then a better way of making people aware of this would 
be to tell them, rather than expecting them to guess and then pretending 
that their failure to do so represents some failing on their part.





  QUESTION 3-A:


What about if you pass a call as a default argument, and
then subsequently change the behaviour of the callable?


Hmm, let's have a thought experiment. What if i give you a
digital birthday card. And when you open the card you see a
friendly note from me:

  "Rotwang is my best friend in the whole wide world."

But unbeknownst to you, i have actually hacked the card and
installed a secret wifi device. I've made the message
mutable -- hmm, surely nothing bad could arise from
mutability correct?

  *evil-grin*

A few hours later when you open the card to show to some
friends, you are alarmed to see a new message:

  "Rotwang is a degenerative penis condition"

Now, besides being embarrassed, and probably angry as hell,
do you understand the dangers of mutable objects?


Yes. As I expect you already know, I'm well aware of the dangers of 
mutable objects. But that isn't what I'm asking you about. I'm asking 
you what method you propose to avoid one of those dangers. Why don't you 
just answer the question, instead of answering a different question I 
didn't ask?




I'm not suggesting you don't ever want to use the power of
mutability, no, but you do need to have a healthy respect for
mutability -- because it can really ruin your day

Re: Default Value

2013-06-21 Thread Rotwang

On 21/06/2013 18:01, Rick Johnson wrote:


[stuff]


It isn't clear to me from your posts what exactly you're proposing as an 
alternative to the way Python's default argument binding works. In your 
version of Python, what exactly would happen when I passed a mutable 
argument as a default value in a def statement? E.g. this:


 a = [1, 2, 3]
 a.append(a)
 b = object()
 def f(x = [None, b, [a, [4]]]):
... pass # do something

What would you like to see the interpreter do in this case?
--
http://mail.python.org/mailman/listinfo/python-list


Re: Default Value

2013-06-21 Thread Rotwang

On 21/06/2013 19:26, Rick Johnson wrote:

On Friday, June 21, 2013 12:47:56 PM UTC-5, Rotwang wrote:

It isn't clear to me from your posts what exactly you're
proposing as an alternative to the way Python's default
argument binding works. In your version of Python, what
exactly would happen when I passed a mutable argument as a
default value in a def statement? E.g. this:

   a = [1, 2, 3]
   a.append(a)
   b = object()
   def f(x = [None, b, [a, [4]]]):
... pass # do something

What would you like to see the interpreter do in this case?


Ignoring that this is a completely contrived example that has
no use in the real world, here are one of three methods by
which i can handle this:


I didn't ask what alternative methods of handling default argument 
binding exist (I can think of several, but none of them strikes me as 
preferable to how Python currently does it). I asked what would happen 
in /your/ version of Python. Which of the alternatives that you present 
would have been implemented, if you had designed the language?





  The Benevolent Approach:

I could cast a virtual net over my poor lemmings before
they jump off the cliff by throwing an exception:

   Traceback (most recent screw-up last):
Line BLAH in SCRIPT
 def f(x = [None, b, [a, [4]]]):
   ArgumentError: No mutable default arguments allowed!


So how does the interpreter know whether an arbitrary object passed as a 
default value is mutable or not?


Not that it really matters. Elsewhere in this thread you wrote:


Let's imagine for a second if Python allowed mutable keys in
a dictionary,


which it does


would you be surprised if you used a mutable
for a key, then you mutated the mutable key, then you could
not access the value from the original key?

 ## Imaginary code ##
 py> lst = [3]
 py> d = {1:2, lst:4}
 py> lst.append(10)
 py> d[lst]
 opps, KeyError!

Would you REALLY be so surprised? I would not. But more
importantly, does Python need to protect you from being such
an idiot? I don't think so!


Now, I don't really believe that you think that the user shouldn't be 
protected from doing one idiotic thing with mutable dict keys but should 
be protected from doing another idiotic thing with mutable default 
arguments, especially as you've already been given a use case for the 
latter. So I assume that The Benevolent Approach is not the approach you 
would have gone for if you had designed the language, right? If so then 
let's ignore it.





  The Apathetic Approach:

I could just assume that a programmer is responsible for the
code he writes. If he passes mutables into a function as
default arguments, and then mutates the mutable later, too
bad, he'll understand the value of writing solid code after
a few trips to exception Hell.


It seems to me that this is exactly what currently happens.




  The Malevolent Approach (disguised as beneva-loon-icy):

I could use early binding to confuse the hell out of him and
enjoy the laughs with all my ivory tower buddies as he falls
into fits of confusion and rage. Then enjoy again when he
reads the docs. Ahh, the gift that just keeps on giving!


My question was about how you think the language should work, not about 
what your buddies should or shouldn't enjoy. In terms of how a language 
actually works, is there any difference between The Malevolent Approach 
and The Apathetic Approach? And is there any difference between either 
of them and what Python currently does?





  Conclusion:

As you can probably guess the malevolent approach has some
nice fringe benefits.

You know, out of all these post, not one of you guys has
presented a valid use-case that will give validity to the
existence of this PyWart -- at least not one that CANNOT be
reproduced by using my fine examples.


Of course using a mutable default as a cache can be reproduced by other 
means, as can another common use case that I don't think anyone's 
mentioned yet (defining functions parametrised by variables whose values 
aren't known until runtime). That's hardly an argument against it - you 
might as well argue that Python shouldn't have decorators, or that it 
shouldn't have for loops because their behaviour can be reproduced with 
while loops.


But this is beside the point anyway, until you present an alternative to 
Python's current behaviour. If you do so then we can start debating the 
relative merits of the two approaches.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Default Value

2013-06-21 Thread Rotwang

On 22/06/2013 02:15, Rick Johnson wrote:

On Friday, June 21, 2013 6:40:51 PM UTC-5, Rotwang wrote:

On 21/06/2013 19:26, Rick Johnson wrote:
[...]
I didn't ask what alternative methods of handling default
argument binding exist (I can think of several, but none
of them strikes me as preferable to how Python currently
does it). I asked what would happen in /your/ version of
Python. Which of the alternatives that you present would
have been implemented, if you had designed the language?


The apathetic approach. However, you fail to understand that
whilst Python's current implementation is partly apathetic,
is is also benevolent, and malevolent simultaneously. My
approach is purely apathetic. I'll explain later. Stay
tuned.


OK...



[...]
So how does the interpreter know whether an arbitrary
object passed as a default value is mutable or not? Not
that it really matters.


Well i'm glad it does not matter to you because it does not
matter to me either. *shrugs*


Let's imagine for a second if Python allowed mutable keys in
a dictionary,

which it does


Am i just to take your word for this? You cannot provide an
example?


Of course I can:

>>> class hashablelist(list):
...     def __hash__(self):
...         return hash(tuple(self))
...
>>> x = hashablelist(range(4))
>>> x
[0, 1, 2, 3]
>>> # Let's try using x as a dict key:
>>> d = {x: 'Hello'}
>>> d[x]
'Hello'
>>> # Let's try mutating it:
>>> x.append(4)
>>> x
[0, 1, 2, 3, 4]



Here, allow me to break the ice:

 # Literal
 py> d = {[1]:2}
 Traceback (most recent call last):
   File "<pyshell#0>", line 1, in <module>
     d = {[1]:2}
 TypeError: unhashable type: 'list'
 # Symbol
 py> lst = [1]
 py> d = {lst:2}
 Traceback (most recent call last):
   File "<pyshell#2>", line 1, in <module>
     d = {lst:2}
 TypeError: unhashable type: 'list'

Hmm, maybe only certain mutables work?


Try reading the tracebacks. Notice how they don't say anything about 
mutability?




Great, more esoteric
rules! Feel free to enlighten me since i'm not going to
waste one second of my time pursuing the docs just to learn
about ANOTHER unintuitive PyWart i have no use for.


Sure. In order to be used as a dictionary key a Python object has to be 
hashable. That means its type has to define a __hash__ method, which is 
called by the builtin function hash. The __hash__ method should return 
an int, and objects that compare equal should have the same hash.
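
A minimal example of those rules in action (mine, not from the thread):

```python
# A hashable user-defined type: __eq__ and __hash__ agree, so distinct but
# equal instances find the same dictionary entry.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)

    def __hash__(self):
        return hash((self.x, self.y))

d = {Point(1, 2): "here"}
print(d[Point(1, 2)])  # -> here (a fresh but equal key hits the same slot)
```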




Now, I don't really believe that you think that the user
shouldn't be protected from doing one idiotic thing with
mutable dict keys but should be protected from doing
another idiotic thing with mutable default arguments,
especially as you've already been given a use case for the
latter. So I assume that The Benevolent Approach is not
the approach you would have gone for if you had designed
the language, right? If so then let's ignore it.


You are correct. Finally, we can agree on something.



   The Apathetic Approach:

I could just assume that a programmer is responsible for the
code he writes. If he passes mutables into a function as
default arguments, and then mutates the mutable later, too
bad, he'll understand the value of writing solid code after
a few trips to exception Hell.


It seems to me that this is exactly what currently happens.


(is this lazy readers day? I swear i explained this earlier)

And here is where you are wrong. In the current implementation
python functions carry the state of mutable default arguments
between successive calls. That's a flaw.


But your description of The Apathetic Approach doesn't say anything 
about functions carrying the state of mutable default arguments, or 
otherwise. How am I supposed to know how your proposed approach treats 
mutable defaults if you don't tell me, even after I explicitly ask?




Observe:

 py> def foo(arg=[]):
 ... arg.append(1)
 ... print(arg)
 ...
 py> foo()
 [1]
 py> foo()
 [1, 1]
 py> foo()
 [1, 1, 1]


Yes, I am well aware of how Python currently treats mutable default 
arguments.




No, no, NO! That's wrong! Subroutines should be stateless.
That means that an empty mutable default argument will
ALWAYS be empty at each call of the subroutine.  This is
what should happen:

 py> def foo(arg=[]):
 ... arg.append(1)
 ... print(arg)
 ...
 py> foo()
 [1]
 py> foo()
 [1]
 py> foo()
 [1]

Yes, Yes, YES! That is intuitive! That is sane! Now, what if
we pass a reference to a mutable object? What then. Well, let's
see:

 py> lst = range(5)
 py> lst
 [0, 1, 2, 3, 4]
 py> def foo(arg=lst):
 ... arg.append(1)
 ... print(arg)
 ...
 py> foo()
 [0, 1, 2, 3, 4, 1]
 py> foo()
 [0, 1, 2, 3, 4, 1, 1]

That's fine. Because the object was already created OUTSIDE
the subroutine. So therefore, modifications

Re: Default Value

2013-06-21 Thread Rotwang

On 22/06/2013 03:18, Chris Angelico wrote:

On Sat, Jun 22, 2013 at 12:01 PM, Rotwang sg...@hotmail.co.uk wrote:

>>> class hashablelist(list):
...     def __hash__(self):
...         return hash(tuple(self))


There's a vulnerability in that definition:


>>> a = hashablelist((1, [], 3))
>>> a
[1, [], 3]
>>> {a: 1}
Traceback (most recent call last):
  File "<pyshell#255>", line 1, in <module>
    {a: 1}
  File "<pyshell#249>", line 3, in __hash__
    return hash(tuple(self))
TypeError: unhashable type: 'list'

Of course, if you monkey-patch list itself to have this functionality,
or always use hashablelist instead of list, then it will work. But
it's still vulnerable.


Quite right, sorry. I should have called my class 
sometimeshashablelist, or something.


--
http://mail.python.org/mailman/listinfo/python-list


Re: Differences of != operator behavior in python3 and python2 [ bug? ]

2013-05-12 Thread Rotwang

On 13/05/2013 00:40, Ian Kelly wrote:

On Sun, May 12, 2013 at 5:23 PM, Mr. Joe titani...@gmail.com wrote:

I seem to stumble upon a situation where != operator misbehaves in
python2.x. Not sure if it's my misunderstanding or a bug in python
implementation. Here's a demo code to reproduce the behavior -


The != operator is implemented by the __ne__ special method.  In
Python 3, the default implementation of __ne__ is to call __eq__ and
return the opposite of whatever it returns.


One should be aware, however, that this doesn't necessarily apply to 
classes inheriting from builtins other than object (a fact that recently 
bit me on the a***):


>>> class spam:
...     def __eq__(self, other):
...         print('spam')
...         return super().__eq__(other)
...
>>> class eggs(list):
...     def __eq__(self, other):
...         print('eggs')
...         return super().__eq__(other)
...
>>> spam() != spam()
spam
spam
True
>>> eggs() != eggs()
False
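For readers wondering why eggs() != eggs() never prints: list supplies its own __ne__, which does not route through __eq__, so the object.__ne__ fallback never runs. A sketch of the workaround (the eggs2 name is invented for illustration) is to define __ne__ explicitly:

```python
# Restore __eq__-based != for a list subclass by overriding __ne__.
class eggs2(list):
    def __eq__(self, other):
        print('eggs2')
        return super().__eq__(other)

    def __ne__(self, other):
        # delegate to __eq__ and negate, as object.__ne__ does in Python 3
        result = self.__eq__(other)
        if result is NotImplemented:
            return result
        return not result

print(eggs2() != eggs2())  # prints 'eggs2', then False
```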
--
http://mail.python.org/mailman/listinfo/python-list


Re: The type/object distinction and possible synthesis of OOP and imperative programming languages

2013-04-15 Thread Rotwang

On 15/04/2013 22:13, Dave Angel wrote:

On 04/15/2013 01:43 PM, Antoon Pardon wrote:

[...]

I had gotten my hopes up after reading this but then I tried:


$ python3
Python 3.2.3 (default, Feb 20 2013, 17:02:41)
[GCC 4.7.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> class vslice(slice):
...     pass
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: type 'slice' is not an acceptable base type


It seems types and classes are still not mere synonyms.


No, it seems you're trying to use an internal detail as though it were a
supported feature.

 From page:
 http://docs.python.org/3.3/reference/datamodel.html#types

Internal types
A few types used internally by the interpreter are exposed to the user.
Their definitions may change with future versions of the interpreter,
but they are mentioned here for completeness.



To be fair, one can't do this either:

Python 3.3.0 (v3.3.0:bd8afb90ebf2, Sep 29 2012, 10:57:17) [MSC v.1600 64 
bit (AMD64)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> class C(type(lambda: None)):
...     pass
...
Traceback (most recent call last):
  File "<pyshell#2>", line 1, in <module>
    class C(type(lambda: None)):
TypeError: type 'function' is not an acceptable base type


and I don't think that FunctionType would be considered an internal 
detail, would it? Not that I'd cite the fact that not all types can be 
inherited from as evidence that types and classes are not synonyms, mind.

--
http://mail.python.org/mailman/listinfo/python-list


Re: howto remove the thousand separator

2013-04-15 Thread Rotwang

On 15/04/2013 08:03, Steven D'Aprano wrote:

On Mon, 15 Apr 2013 03:19:43 +0100, Rotwang wrote:

[...]

(Sorry for linking to Google Groups. Does anyone know of a better c.l.p.
web archive?)


The canonical (although possibly not the best) archive for c.l.p. is the
python-list mailing list archive:

http://mail.python.org/mailman/listinfo/python-list


Thanks to both you and Ned.
--
http://mail.python.org/mailman/listinfo/python-list


Re: The type/object distinction and possible synthesis of OOP and imperative programming languages

2013-04-15 Thread Rotwang

On 15/04/2013 23:32, Chris Angelico wrote:

On Tue, Apr 16, 2013 at 8:12 AM, Rotwang sg...@hotmail.co.uk wrote:

Traceback (most recent call last):
  File "<pyshell#2>", line 1, in <module>
    class C(type(lambda: None)):
TypeError: type 'function' is not an acceptable base type


and I don't think that FunctionType would be considered an internal
detail, would it? Not that I'd cite the fact that not all types can be
inherited from as evidence that types and classes are not synonyms, mind.


Actually, I'm not sure how you'd go about inheriting from a function.
Why not just create a bare class, then assign its __call__ to be the
function you're inheriting from?


No idea. I wasn't suggesting that trying to inherit from FunctionType 
was a sensible thing to do; I was merely pointing out that slice's 
status as an internal feature was not IMO relevant to the point that 
Antoon was making.
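Chris's suggestion can be sketched like this (an illustration, not code from the thread; the greet and Wrapper names are invented). Instead of subclassing types.FunctionType, which raises TypeError as shown above, give a plain class a __call__ that is the function:

```python
def greet(name):
    return 'Hello, ' + name

class Wrapper:
    # staticmethod stops the instance being passed as a first argument
    # when Python looks __call__ up on the type.
    __call__ = staticmethod(greet)

w = Wrapper()
print(w('world'))  # Hello, world
```

The instance then behaves like the function when called, while remaining an ordinary class that can carry extra attributes and methods.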

--
http://mail.python.org/mailman/listinfo/python-list


Re: howto remove the thousand separator

2013-04-14 Thread Rotwang

On 15/04/2013 02:14, Steven D'Aprano wrote:

On Sun, 14 Apr 2013 17:44:28 -0700, Mark Janssen wrote:


On Sun, Apr 14, 2013 at 5:29 PM, Steven D'Aprano
steve+comp.lang.pyt...@pearwood.info wrote:

On Sun, 14 Apr 2013 12:06:12 -0700, Mark Janssen wrote:


cleaned=''
for c in myStringNumber:
if c != ',':
  cleaned+=c
int(cleaned)


due to being an O(N**2)  algorithm.


What on earth makes you think that is an O(n**2) algorithm and not O(n)?


Strings are immutable. Consider building up a single string from four
substrings:

s = ''
s += 'fe'
s += 'fi'
s += 'fo'
s += 'fum'

Python *might* optimize the first concatenation, '' + 'fe', to just reuse
'fe', (but it might not). Let's assume it does, so that no copying is
needed. Then it gets to the second concatenation, and now it has to copy
characters, because strings are immutable and cannot be modified in
place.


Actually, I believe that CPython is optimised to modify strings in place 
where possible, so that the above would surprisingly turn out to be 
O(n). See the following thread where I asked about this:


http://groups.google.com/group/comp.lang.python/browse_thread/thread/990a695fe2d85c52

(Sorry for linking to Google Groups. Does anyone know of a better c.l.p. 
web archive?)
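Whether or not a given implementation performs the in-place optimisation discussed above, the portable way to build a string from pieces is ''.join, which does the whole concatenation in one pass. A small sketch (my example, not code from the post):

```python
parts = ['fe', 'fi', 'fo', 'fum']

# O(n) on every implementation: one allocation, one copy.
s = ''.join(parts)

# Repeated concatenation is O(n**2) in the worst case, unless the
# implementation happens to optimise in-place appends as CPython does.
t = ''
for p in parts:
    t += p

print(s)  # fefifofum
```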

--
http://mail.python.org/mailman/listinfo/python-list


Re: Python 3.3 Tkinter Fullscreen - Taskbar not Hiding

2013-04-04 Thread Rotwang

On 04/04/2013 14:49, Jason Swails wrote:

I've added some comments about the code in question as well...

On Wed, Apr 3, 2013 at 11:45 PM, teslafreque...@aol.com
mailto:teslafreque...@aol.com wrote:

Hi, I am working with Tkinter, and I have set up some simple code to
run:

import tkinter
import re
from tkinter import *


If you import everything from tkinter into your top-level namespace,
then the import tkinter at the top serves no purpose.


I don't know whether this applies to the OP's code, but I can think of 
at least one reason why one would want both "import module" and "from 
module import *" at the top of one's code: monkey patching.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Python 3.3 Tkinter Fullscreen - Taskbar not Hiding

2013-04-04 Thread Rotwang

On 04/04/2013 20:00, Jason Swails wrote:

On Thu, Apr 4, 2013 at 1:30 PM, Rotwang sg...@hotmail.co.uk
mailto:sg...@hotmail.co.uk wrote:
[...]

I don't know whether this applies to the OP's code, but I can think
of at least one reason why one would want both "import module" and
"from module import *" at the top of one's code: monkey patching.


That was not happening in the OP's code (it actually had no references
to tkinter after the initial import).


Sure.



That said, if you change any
attributes inside tkinter (by binding names inside tkinter to another
object) after the top three lines, those changes will not percolate down
to the attributes imported via from tkinter import * -- you would
obviously have to do that work before importing the tkinter namespace
into the toplevel namespace.


What I had in mind was something like this:

# begin module derp.py

global_variable = 4

def f():
    print('global_variable == %i' % global_variable)

# end module derp.py

>>> # in the interactive interpreter...
>>> import derp
>>> from derp import *
>>> global_variable = 5
>>> f()
global_variable == 4
>>> derp.global_variable = 5
>>> f()
global_variable == 5


Off the top of my head I don't know whether there's any purpose to doing 
that kind of thing with tkinter, but I can conceive that it might be 
useful for e.g. changing widget default behaviour or something.




I'd be interested to see if there's actually an example where someone
does this in a way that would not be done better another way.


No idea, sorry.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Decorating functions without losing their signatures

2013-04-03 Thread Rotwang

On 03/04/2013 02:05, Rotwang wrote:

[...]

After thinking about it for a while I've come up with the following
abomination:

import inspect

def sigwrapper(sig):
  if not isinstance(sig, inspect.Signature):
    sig = inspect.signature(sig)
  def wrapper(f):
    ps = 'args = []\n\t\t'
    ks = 'kwargs = {}\n\t\t'
    for p in sig.parameters.values():
      if p.kind in (p.POSITIONAL_ONLY, p.POSITIONAL_OR_KEYWORD):
        ps = '%sargs.append(%s)\n\t\t' % (ps, p.name)
      elif p.kind == p.VAR_POSITIONAL:
        ps = '%sargs.extend(%s)\n\t\t' % (ps, p.name)
      elif p.kind == p.KEYWORD_ONLY:
        ks = '%skwargs[%r] = %s\n\t\t' % (ks, p.name, p.name)
      elif p.kind == p.VAR_KEYWORD:
        ks = '%skwargs.update(%s)\n\t\t' % (ks, p.name)
    loc = {'wrapped': f}
    defstring = ('def wrapouter(wrapped = wrapped):'
                 '\n\tdef wrapinner%s:'
                 '\n\t\t%s%sreturn wrapped(*args, **kwargs)'
                 '\n\treturn wrapinner' % (sig, ps, ks))
    exec(defstring, f.__globals__, loc)
    return loc['wrapouter']()
  return wrapper


Oops! Earlier I found out the hard way that this fails when the 
decorated function has arguments called 'args' or 'kwargs'. Here's a 
modified version that fixes said bug, but presumably not the many others 
I haven't noticed yet:


def sigwrapper(sig):
  if not isinstance(sig, inspect.Signature):
    sig = inspect.signature(sig)
  n = 0
  while True:
    pn = 'p_%i' % n
    kn = 'k_%i' % n
    if pn not in sig.parameters and kn not in sig.parameters:
      break
    n += 1
  ps = '%s = []\n\t\t' % pn
  ks = '%s = {}\n\t\t' % kn
  for p in sig.parameters.values():
    if p.kind in (p.POSITIONAL_ONLY, p.POSITIONAL_OR_KEYWORD):
      ps = '%s%s.append(%s)\n\t\t' % (ps, pn, p.name)
    elif p.kind == p.VAR_POSITIONAL:
      ps = '%s%s.extend(%s)\n\t\t' % (ps, pn, p.name)
    elif p.kind == p.KEYWORD_ONLY:
      ks = '%s%s[%r] = %s\n\t\t' % (ks, kn, p.name, p.name)
    elif p.kind == p.VAR_KEYWORD:
      ks = '%s%s.update(%s)\n\t\t' % (ks, kn, p.name)
  defstring = ('def wrapouter(wrapped = wrapped):'
               '\n\tdef wrapinner%s:'
               '\n\t\t%s%sreturn wrapped(*%s, **%s)'
               '\n\treturn wrapinner' % (sig, ps, ks, pn, kn))
  def wrapper(f):
    loc = {'wrapped': f}
    exec(defstring, f.__globals__, loc)
    return loc['wrapouter']()
  return wrapper
--
http://mail.python.org/mailman/listinfo/python-list


Re: Decorating functions without losing their signatures

2013-04-03 Thread Rotwang

On 03/04/2013 05:15, Steven D'Aprano wrote:

On Wed, 03 Apr 2013 02:05:31 +0100, Rotwang wrote:


Hi all,

Here's a Python problem I've come up against and my crappy solution.
Hopefully someone here can suggest something better. I want to decorate
a bunch of functions with different signatures;

[...]

After thinking about it for a while I've come up with the following
abomination:

[...]

It seems to work, but I don't like it. Does anyone know of a better way
of doing the same thing?



Wait until Python 3.4 or 3.5 (or Python 4000?) when functools.wraps
automatically preserves the function signature?

Alas, I think this is a hard problem to solve with current Python. You
might like to compare your solution with that of Michele Simionato's
decorator module:

http://micheles.googlecode.com/hg/decorator/documentation.html


See this for some other ideas:

http://numericalrecipes.wordpress.com/2009/05/25/signature-preserving-
function-decorators/



Good luck!


Thanks. It'll take me a while to fully absorb the links, but it looks 
like both are similarly based on abusing the exec function.


Thanks to Jan too.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Decorating functions without losing their signatures

2013-04-03 Thread Rotwang

On 04/04/2013 02:18, Michele Simionato wrote:

On Wednesday, April 3, 2013 3:05:31 AM UTC+2, Rotwang wrote:

After thinking about it for a while I've come up with the following

abomination


Alas, there is actually no good way to implement this feature in pure
Python without abominations. Internally the decorator module does
something similar to what you are doing. However, instead of cooking up
yourself your custom solution, it is probably better if you stick to
the decorator module which has been used in production for several
years and has hundreds of thousands of downloads. I am not claiming
that it is bug free, but it is stable, bug reports come very rarely and
it works for all versions of Python from 2.5 to 3.3.


Thanks, I'll check it out. Looking at the link Steven provided, I didn't 
see an easy way to add additional keyword-only arguments to a function's 
signature, though (but then I've yet to figure out how the FunctionMaker 
class works).


--
http://mail.python.org/mailman/listinfo/python-list


Decorating functions without losing their signatures

2013-04-02 Thread Rotwang

Hi all,

Here's a Python problem I've come up against and my crappy solution. 
Hopefully someone here can suggest something better. I want to decorate 
a bunch of functions with different signatures; for example, I might 
want to add some keyword-only arguments to all functions that return 
instances of a particular class so that the caller can create instances 
with additional attributes. So I do something like this:


import functools

def mydecorator(f):
    @functools.wraps(f)
    def wrapped(*args, attribute = None, **kwargs):
        result = f(*args, **kwargs)
        result.attribute = attribute
        return result
    return wrapped

@mydecorator
def f(x, y = 1, *a, z = 2, **k):
    return something

The problem with this is, when I subsequently type 'f(' in IDLE, the 
signature prompt that appears is not very useful; it looks like this:


(*args, attribute=None, **kwargs)

whereas I'd like it to look like this:

(x, y=1, *a, z=2, attribute=None, **k)


After thinking about it for a while I've come up with the following 
abomination:


import inspect

def sigwrapper(sig):
  if not isinstance(sig, inspect.Signature):
    sig = inspect.signature(sig)
  def wrapper(f):
    ps = 'args = []\n\t\t'
    ks = 'kwargs = {}\n\t\t'
    for p in sig.parameters.values():
      if p.kind in (p.POSITIONAL_ONLY, p.POSITIONAL_OR_KEYWORD):
        ps = '%sargs.append(%s)\n\t\t' % (ps, p.name)
      elif p.kind == p.VAR_POSITIONAL:
        ps = '%sargs.extend(%s)\n\t\t' % (ps, p.name)
      elif p.kind == p.KEYWORD_ONLY:
        ks = '%skwargs[%r] = %s\n\t\t' % (ks, p.name, p.name)
      elif p.kind == p.VAR_KEYWORD:
        ks = '%skwargs.update(%s)\n\t\t' % (ks, p.name)
    loc = {'wrapped': f}
    defstring = ('def wrapouter(wrapped = wrapped):'
                 '\n\tdef wrapinner%s:'
                 '\n\t\t%s%sreturn wrapped(*args, **kwargs)'
                 '\n\treturn wrapinner' % (sig, ps, ks))
    exec(defstring, f.__globals__, loc)
    return loc['wrapouter']()
  return wrapper

The function sigwrapper() may be passed an inspect.Signature object sig 
(or function, if that function has the right signature) and returns a 
decorator that gives any function the signature sig. I can then replace 
my original decorator with something like


def mydecorator(f):
    sig = inspect.signature(f)
    sig = do_something(sig) # add an additional kw-only argument to sig
    @functools.wraps(f)
    @sigwrapper(sig)
    def wrapped(*args, attribute = None, **kwargs):
        result = f(*args, **kwargs)
        result.attribute = attribute
        return result
    return wrapped

It seems to work, but I don't like it. Does anyone know of a better way 
of doing the same thing?
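An aside for readers on later Pythons (not part of the original post): since Python 3.4, inspect.signature follows the __wrapped__ attribute that functools.wraps sets, so for the plain wrapping case tools that use inspect.signature already see the original parameters without any exec tricks. It doesn't solve the harder problem here, adding a new keyword-only argument, because it reports the *original* signature unchanged:

```python
import functools
import inspect

def mydecorator(f):
    @functools.wraps(f)   # copies metadata and sets wrapped.__wrapped__ = f
    def wrapped(*args, **kwargs):
        return f(*args, **kwargs)
    return wrapped

@mydecorator
def f(x, y=1, *a, z=2, **k):
    pass

# inspect.signature follows __wrapped__, so the original parameters show up:
print(inspect.signature(f))  # (x, y=1, *a, z=2, **k)
```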

--
http://mail.python.org/mailman/listinfo/python-list


Re: Excel column 256 limit

2013-03-19 Thread Rotwang

On 18/03/2013 15:50, Steven D'Aprano wrote:

[...]

Gnumeric is Linux-only


No it isn't. I use it on Windows 7 with no problem.


--
I have made a thing that superficially resembles music:

http://soundcloud.com/eroneity/we-berated-our-own-crapiness
--
http://mail.python.org/mailman/listinfo/python-list


Re: Do you feel bad because of the Python docs?

2013-02-26 Thread Rotwang

On 26/02/2013 12:54, Steven D'Aprano wrote:

One week ago, JoePie91 wrote a blog post challenging the Python
community and the state of Python documentation, titled:

The Python documentation is bad, and you should feel bad.

http://joepie91.wordpress.com/2013/02/19/the-python-documentation-is-bad-
and-you-should-feel-bad/

It is valuable to contrast and compare the PHP and Python docs:

http://php.net/manual/en/index.php
http://www.python.org/doc/

There's no doubt that one of PHP's strengths, perhaps its biggest
strength, is the good state of documentation. But should we feel bad
about Python's docs?


I strongly disagree with most of what the author writes. To start with, 
there's this:


  Let’s start out with a simple example. Say you are a developer that
  just started using PHP, and you want to know how to get the current
  length of an array. You fire up a browser and Google for “PHP array
  length site:php.net”. The first result is spot-on, and one minute
  later, you know that count($arr) will suffice.

  Now let’s say that you wish to do the same in Python. In this case,
  you would Google for “Python list length site:docs.python.org”, and
  the first result is… a page with several chapters on standard types?

It seems to me that this is /completely/ the wrong way for a developer 
who's new to Python to go about learning the language. If you don't know 
enough Python to be familiar with len(), the sensible thing to is not to 
try coding by finding out individual language features as and when you 
need them, but to read the tutorial, systematically from start to 
finish. This list is continually being bombarded with questions from 
people who tried the former only to become stuck when something didn't 
work the way they thought it should (I've been guilty of this too), 
because knowing vocabulary is not the same thing as knowing how a 
language works. The majority of such questions could have been answered 
by simply reading the tutorial. More still could be answered by reading 
the language reference, which really isn't very long.


That's not to say that experienced users don't need to look things up, 
but then why would one restrict a web search to docs.python.org? Almost 
every question I've had about how to do something in Python has already 
been asked at StackExchange, and Google will find it.


  When you Google for something, you will end up on a page that
  explains a lot of things, including what you’re looking for. But how
  are you supposed to know where on the page it is, or whether it’s
  even on the page at all? The problem here is that the particular
  operation you are trying to find documentation on, does not have its
  own page.

And the solution is Ctrl-f.

  The general norm for the Python community appears to be that if you
  are not already familiar with the language, you do not deserve help.
  If you do something in a less-than-optimal way, other Python
  developers will shout about how horrible you are without bothering to
  explain much about what you did wrong. When you ask out of curiosity
  how a certain thing works, and that thing is considered a bad
  practice, you will get flamed like there’s no tomorrow – even if you
  had no intention of ever implementing it.

This is not my experience at all. Even when asking questions that I 
could have answered myself if I had RTFM, I've received helpful advice 
and nothing that could be construed as a flame. I don't know how this 
compares to other programming language communities, but it's much 
friendlier to newcomers here than, say, sci.math (whose competent 
regulars are understandably suspicious of people asking idiotic 
questions, given how many of those people turn out to be cranks).


  PHP solves [ambiguity] by having examples for every single function
  and class. If you’re not sure what is meant with a certain sentence in
  the description, you just look at one of the included examples, and
  all ambiguity is removed. It’s immediately obvious how to use things.

Python solves this by having an interactive interpreter. The tutorial 
goes to the trouble of pointing out that "[i]t helps to have a Python 
interpreter handy for hands-on experience."


  If you are an experienced developer, then you are most likely in a
  very bad position to judge how beginner-friendly the documentation
  for a language is.

  [...]

  Most of all, accept that your personal experiences with Python, as an
  experienced developer, are not worth very much. Listen to the newbies
  when they tell you the documentation is hard to read or find stuff in.

But I'm not an experienced developer. I'm an amateur hobbyist who came 
to learn Python having only had any real programming experience with BBC 
BASIC and OPL (both as a child). I read the tutorial, then I read the 
language reference, now I'm reading the library reference. They're all fine.



--
I have made a thing that superficially resembles music:


Re: Awsome Python - chained exceptions

2013-02-20 Thread Rotwang

On 20/02/2013 11:50, Steven D'Aprano wrote:


[...alternatives to Google...]

Or if your ISP provides Usenet access, you can use a News client to read it
via comp.lang.python, or gmane.comp.python.general.


And if it doesn't, you can get free Usenet access that includes most of 
the text-only groups (including c.l.p) from eternal-september.org.


  http://www.eternal-september.org/


--
I have made a thing that superficially resembles music:

http://soundcloud.com/eroneity/we-berated-our-own-crapiness
--
http://mail.python.org/mailman/listinfo/python-list


Re: Is Python venerable?

2013-02-20 Thread Rotwang

On 20/02/2013 03:53, Barry W Brown wrote:

[...]

Homer Simpson put it accurately last night.  I used to be with it when
I was younger.  But it moved and now what I am with is no longer it.


Sorry to be pedantic, but the quote you're thinking of is from Abe Simpson:

http://www.youtube.com/watch?v=-cjHTcIgxHc&t=41


--
I have made a thing that superficially resembles music:

http://soundcloud.com/eroneity/we-berated-our-own-crapiness
--
http://mail.python.org/mailman/listinfo/python-list


String concatenation benchmarking weirdness

2013-01-11 Thread Rotwang

Hi all,

the other day I 2to3'ed some code and found it ran much slower in 3.3.0 
than 2.7.2. I fixed the problem but in the process of trying to diagnose 
it I've stumbled upon something weird that I hope someone here can 
explain to me. In what follows I'm using Python 2.7.2 on 64-bit Windows 
7. Suppose I do this:


from timeit import timeit

# find out how the time taken to append a character to the end of a byte
# string depends on the size of the string

results = []
for size in range(0, 1001, 10):
    results.append(timeit("y = x + 'a'",
                          setup = "x = 'a' * %i" % size, number = 1))

If I plot results against size, what I see is that the time taken 
increases approximately linearly with the size of the string, with the 
string of length 1000 taking about 4 milliseconds. On the other 
hand, if I replace the statement to be timed with "x = x + 'a'" instead 
of "y = x + 'a'", the time taken seems to be pretty much independent of 
size, apart from a few spikes; the string of length 1000 takes about 
4 microseconds.


I get similar results with strings (but not bytes) in 3.3.0. My guess is 
that this is some kind of optimisation that treats strings as mutable 
when carrying out operations that result in the original string being 
discarded. If so it's jolly clever, since it knows when there are other 
references to the same string:


timeit("x = x + 'a'", setup = "x = y = 'a' * %i" % size, number = 1)
# grows linearly with size

timeit("x = x + 'a'", setup = "x, y = 'a' * %i, 'a' * %i"
       % (size, size), number = 1)
# stays approximately constant

It also can see through some attempts to fool it:

timeit("x = ('' + x) + 'a'", setup = "x = 'a' * %i" % size, number = 1)
# stays approximately constant

timeit("x = x*1 + 'a'", setup = "x = 'a' * %i" % size, number = 1)
# stays approximately constant

Is my guess correct? If not, what is going on? If so, is it possible to 
explain to a programming noob how the interpreter does this? And is 
there a reason why it doesn't work with bytes in 3.3?



--
I have made a thing that superficially resembles music:

http://soundcloud.com/eroneity/we-berated-our-own-crapiness
--
http://mail.python.org/mailman/listinfo/python-list


Re: String concatenation benchmarking weirdness

2013-01-11 Thread Rotwang

On 11/01/2013 20:16, Ian Kelly wrote:

On Fri, Jan 11, 2013 at 12:03 PM, Rotwang sg...@hotmail.co.uk wrote:

Hi all,

the other day I 2to3'ed some code and found it ran much slower in 3.3.0 than
2.7.2. I fixed the problem but in the process of trying to diagnose it I've
stumbled upon something weird that I hope someone here can explain to me.

[stuff about timings]

Is my guess correct? If not, what is going on? If so, is it possible to
explain to a programming noob how the interpreter does this?


Basically, yes.  You can find the discussion behind that optimization at:

http://bugs.python.org/issue980695

It knows when there are other references to the string because all
objects in CPython are reference-counted.  It also works despite your
attempts to fool it because after evaluating the first operation
(which is easily optimized to return the string itself in both cases),
the remaining part of the expression is essentially x = TOS + 'a',
where x and the top of the stack are the same string object, which is
the same state the original code reaches after evaluating just the x.


Nice, thanks.



The stated use case for this optimization is to make repeated
concatenation more efficient, but note that it is still generally
preferable to use the ''.join() construct, because the optimization is
specific to CPython and may not exist for other Python
implementations.


The slowdown in my code was caused by a method that built up a string of 
bytes by repeatedly using +=, before writing the result to a WAV file. 
My fix was to replace the bytes string with a bytearray, which seems
about as fast as the rewrite I just tried with b''.join. Do you know 
whether the bytearray method will still be fast on other implementations?
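A sketch of the bytearray approach (illustrative, not the actual WAV-writing code from the thread): because bytearray is mutable, += extends the buffer in place rather than copying the whole string each time, so it doesn't depend on the CPython-specific concatenation optimisation.

```python
import struct

samples = bytearray()
for n in range(4):
    # pack each value as a 16-bit little-endian sample and append in place
    samples += struct.pack('<h', n * 1000)

print(bytes(samples))  # b'\x00\x00\xe8\x03\xd0\x07\xb8\x0b'
```

bytes(samples) at the end gives an immutable snapshot suitable for writing to a file.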



--
I have made a thing that superficially resembles music:

http://soundcloud.com/eroneity/we-berated-our-own-crapiness
--
http://mail.python.org/mailman/listinfo/python-list


Re: Confused compare function :)

2012-12-06 Thread Rotwang

On 06/12/2012 08:49, Bruno Dupuis wrote:

On Thu, Dec 06, 2012 at 04:32:34AM +, Steven D'Aprano wrote:

On Thu, 06 Dec 2012 03:22:53 +, Rotwang wrote:


On 06/12/2012 00:19, Bruno Dupuis wrote:

[...]

Another advice: never ever

except XXXError:
  pass

at least log, or count, or warn, or anything, but don't pass.


Really? I've used that kind of thing several times in my code. For
example, there's a point where I have a list of strings and I want to
create a list of those ints that are represented in string form in my
list, so I do this:

listofints = []
for k in listofstrings:
    try:
        listofints.append(int(k))
    except ValueError:
        pass

Another example: I have a dialog box with an entry field where the user
can specify a colour by entering a string, and a preview box showing the
colour. I want the preview to automatically update when the user has
finished entering a valid colour string, so whenever the entry field is
modified I call this:

def preview(*args):
    try:
        previewbox.config(bg = str(entryfield.get()))
    except tk.TclError:
        pass

Is there a problem with either of the above? If so, what should I do
instead?


They're fine.

Never, ever say that people should never, ever do something.


*cough*



Well, depending on the context (who provides listofstrings?) I would
log or count errors on the first one... or not.


The actual reason for the first example is that I have a text widget 
with a bunch of tags (which are identified by strings), and I want to 
add a new tag whose name doesn't coincide with any of the existing tag 
names. I achieve this by setting my listofstrings equal to the list of 
existing tag names, and setting the new tag name as


str(max(listofints) + 1) if listofints else '0'

I realise that there are a bunch of other ways I could have done this. 
But I haven't a clue how I could rewrite the second example without 
using a try statement (other than by writing a function that would 
recognise when a string defines a valid Tkinter colour, including the 
long and possibly version-dependent list of colours with Zoolanderesque 
names like 'LightSteelBlue3').




On the second one, I would split the expression, because (not sure of
that point, i didn't import tk for years) previewbox.config and
entryfield.get may raise a tk.TclError for different reasons.

The point is Exceptions are made for error handling, not for normal
workflow.


Although I'm something of a noob, I'm pretty sure the Python community 
at large would disagree with this, as evidenced by the fact that 'EAFP' 
is an entry in the official Python glossary:


EAFP
Easier to ask for forgiveness than permission. This common Python
coding style assumes the existence of valid keys or attributes and
catches exceptions if the assumption proves false. This clean and
fast style is characterized by the presence of many try and except
statements. The technique contrasts with the LBYL style common to
many other languages such as C.

(from http://docs.python.org/2/glossary.html)
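The contrast the glossary draws can be shown in a few lines (my example, not from the glossary):

```python
d = {'a': 1}

# LBYL: check first (racy if the condition can change between
# the check and the use, e.g. with files or shared state).
if 'b' in d:
    value = d['b']
else:
    value = 0

# EAFP: just try it and handle the failure.
try:
    value = d['b']
except KeyError:
    value = 0

print(value)  # 0
```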

--
I have made a thing that superficially resembles music:

http://soundcloud.com/eroneity/we-berated-our-own-crapiness
--
http://mail.python.org/mailman/listinfo/python-list


Re: Confused compare function :)

2012-12-06 Thread Rotwang

On 06/12/2012 04:32, Steven D'Aprano wrote:

On Thu, 06 Dec 2012 03:22:53 +, Rotwang wrote:

[...]

Is there a problem with either of the above? If so, what should I do
instead?


They're fine.

Never, ever say that people should never, ever do something.


*cough*


Thanks.


--
I have made a thing that superficially resembles music:

http://soundcloud.com/eroneity/we-berated-our-own-crapiness
--
http://mail.python.org/mailman/listinfo/python-list


Re: Confused compare function :)

2012-12-05 Thread Rotwang

On 06/12/2012 00:19, Bruno Dupuis wrote:

[...]

Another advice: never ever

except XXXError:
 pass

at least log, or count, or warn, or anything, but don't pass.


Really? I've used that kind of thing several times in my code. For 
example, there's a point where I have a list of strings and I want to 
create a list of those ints that are represented in string form in my 
list, so I do this:


listofints = []
for k in listofstrings:
try:
listofints.append(int(k))
except ValueError:
pass

Another example: I have a dialog box with an entry field where the user 
can specify a colour by entering a string, and a preview box showing the 
colour. I want the preview to automatically update when the user has 
finished entering a valid colour string, so whenever the entry field is 
modified I call this:


def preview(*args):
try:
previewbox.config(bg = str(entryfield.get()))
except tk.TclError:
pass

Is there a problem with either of the above? If so, what should I do 
instead?



--
I have made a thing that superficially resembles music:

http://soundcloud.com/eroneity/we-berated-our-own-crapiness
--
http://mail.python.org/mailman/listinfo/python-list


Re: [newbie] A question about lists and strings

2012-08-10 Thread Rotwang

On 10/08/2012 10:59, Peter Otten wrote:

[...]

If you have understood the above here's a little brain teaser:


a = ([1,2,3],)
a[0] += [4, 5]

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment


a[0]


What are the contents of a[0] now?


Ha, nice.
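For the record, the teaser resolves the way it does because `a[0] += [4, 5]` first calls `list.__iadd__`, which extends the list in place, and only then attempts the (forbidden) tuple item assignment, so the mutation survives the TypeError:

```python
a = ([1, 2, 3],)
try:
    a[0] += [4, 5]      # list.__iadd__ extends in place, *then*
except TypeError:       # the tuple item assignment raises
    pass
assert a[0] == [1, 2, 3, 4, 5]
```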


--
I have made a thing that superficially resembles music:

http://soundcloud.com/eroneity/we-berated-our-own-crapiness
--
http://mail.python.org/mailman/listinfo/python-list


Re: Getting started with IDLE and Python - no highlighting and no execution

2012-08-05 Thread Rotwang

On 06/08/2012 00:46, PeterSo wrote:

I am just starting to learn Python, and I like to use the editor
instead of the interactive shell. So I wrote the following little
program in IDLE

# calculating the mean

data1=[49, 66, 24, 98, 37, 64, 98, 27, 56, 93, 68, 78, 22, 25, 11]

def mean(data):
return sum(data)/len(data)

mean(data1)


There is no syntax highlighting and when I ran it F5, I got the
following in the shell window.


=============================== RESTART ===============================

Any ideas?


I don't know what editor you're using or how it works, but I'm guessing 
that pressing f5 runs what you've written as a script, right? In that 
case the interpreter doesn't automatically print the result of 
expressions in the same way that the interactive interpreter does; you 
didn't tell it to print anything, so it didn't.




If I added print mean(data1), it gave me an invalid syntax error

# calculating the mean

data1=[49, 66, 24, 98, 37, 64, 98, 27, 56, 93, 68, 78, 22, 25, 11]
data2=[1,2,3,4,5]

def mean(data):
return sum(data)/len(data)

mean(data1)
print mean(data1)


If you're using Python 3.x, you'll need to replace

print mean(data1)

with

print(mean(data1))

since the print statement has been replaced with the print function in 
Python 3.


If you're instead using Python 2.x then I don't know what the problem 
is, but in that case your mean() function won't work properly - the 
forward slash operator between a pair of ints gives you floor division 
by default, so you should instead have it return something like 
float(sum(data))/len(data).



--
I have made a thing that superficially resembles music:

http://soundcloud.com/eroneity/we-berated-our-own-crapiness
--
http://mail.python.org/mailman/listinfo/python-list


Re: Getting started with IDLE and Python - no highlighting and no execution

2012-08-05 Thread Rotwang

On 06/08/2012 02:01, Matthew Barnett wrote:

On 06/08/2012 01:58, MRAB wrote:

On 06/08/2012 01:09, Rotwang wrote:

On 06/08/2012 00:46, PeterSo wrote:

I am just starting to learn Python, and I like to use the editor
instead of the interactive shell. So I wrote the following little
program in IDLE

[...]


I don't know what editor you're using or how it works, but I'm guessing
that pressing f5 runs what you've written as a script, right? In that
case the interpreter doesn't automatically print the result of
expressions in the same way that the interactive interpreter does; you
didn't tell it to print anything, so it didn't.


It looks like it's IDLE.


Actually, he does say that it's IDLE at the start.
[snip]


Doh! Not sure how I missed that, sorry.

--
I have made a thing that superficially resembles music:

http://soundcloud.com/eroneity/we-berated-our-own-crapiness
--
http://mail.python.org/mailman/listinfo/python-list


Re: Sudden doubling of nearly all messages

2012-07-22 Thread Rotwang

On 21/07/2012 19:16, Rick Johnson wrote:

[...]

It's due to the new Google Groups interface. They started forcing everyone to 
use the new buggy version about a week ago EVEN THOUGH the old interface is 
just fine.


I disagree - the old interface was dreadful and needed fixing.

The new one is much worse, obviously.

--
I have made a thing that superficially resembles music:

http://soundcloud.com/eroneity/we-berated-our-own-crapiness
--
http://mail.python.org/mailman/listinfo/python-list


Re: lambda in list comprehension acting funny

2012-07-12 Thread Rotwang

On 12/07/2012 04:59, Steven D'Aprano wrote:

On Wed, 11 Jul 2012 08:41:57 +0200, Daniel Fetchinson wrote:


funcs = [ lambda x: x**i for i in range( 5 ) ]


Here's another solution:

from functools import partial
funcs = [partial(lambda i, x: x**i, i) for i in range(5)]


Notice that the arguments i and x are defined in the opposite order.
That's because partial only applies positional arguments from the left.
If there was a right partial that applies from the right, we could use
the built-in instead:

funcs = [rpartial(pow, i) for i in range(5)]


I don't think anyone has suggested this yet:

funcs = [(lambda i: lambda x: x**i)(i) for i in range(5)]

It's a bit more opaque than other solutions, but it fits in one line and 
avoids defining functions with an implicit second argument, if that's 
desirable for some reason.
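Checking the one-liner against the closure pitfall it works around (the naive version closes over the loop variable, so every function sees its final value):

```python
funcs = [(lambda i: lambda x: x**i)(i) for i in range(5)]
assert [f(2) for f in funcs] == [1, 2, 4, 8, 16]

# the plain version shares one i among all five lambdas:
naive = [lambda x: x**i for i in range(5)]
assert [f(2) for f in naive] == [16, 16, 16, 16, 16]
```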



--
I have made a thing that superficially resembles music:

http://soundcloud.com/eroneity/we-berated-our-own-crapiness
--
http://mail.python.org/mailman/listinfo/python-list


Re: ANN: Celery 3.0 (chiastic slide) released!

2012-07-07 Thread Rotwang

On 07/07/2012 19:26, Ask Solem wrote:

===
  Celery 3.0 (Chiastic Slide) Released!
===


Does this have anything to do with the Autechre album?

--
I have made a thing that superficially resembles music:

http://soundcloud.com/eroneity/we-berated-our-own-crapiness
--
http://mail.python.org/mailman/listinfo/python-list


Re: cPickle - sharing pickled objects between scripts and imports

2012-06-25 Thread Rotwang

On 24/06/2012 00:17, Steven D'Aprano wrote:

On Sat, 23 Jun 2012 19:14:43 +0100, Rotwang wrote:


The problem is that if the object was
pickled by the module run as a script and then unpickled by the imported
module, the unpickler looks in __main__ rather than mymodule for the
object's class, and doesn't find it.


Possibly the solution is as simple as aliasing your module and __main__.
Untested:

# When running as a script
import __main__
sys['mymodule'] = __main__


??? What is sys here?



# When running interactively
import mymodule
__main__ = mymodule


of some variation thereof.

Note that a full solution to this problem actually requires you to deal
with three cases:

1) interactive interpreter, __main__ normally would be the interpreter
global scope

2) running as a script, __main__ is your script

3) imported into another module which is running as a script, __main__
would be that module.


I had not thought of that.



In the last case, monkey-patching __main__ may very well break that
script.


My original solution will also cause problems in this case. Thanks.

--
Hate music? Then you'll hate this:

http://tinyurl.com/psymix
--
http://mail.python.org/mailman/listinfo/python-list


cPickle - sharing pickled objects between scripts and imports

2012-06-23 Thread Rotwang
Hi all, I have a module that saves and loads data using cPickle, and 
I've encountered a problem. Sometimes I want to import the module and 
use it in the interactive Python interpreter, whereas sometimes I want 
to run it as a script. But objects that have been pickled by running the 
module as a script can't be correctly unpickled by the imported module 
and vice-versa, since how they get pickled depends on whether the 
module's __name__ is '__main__' or 'mymodule' (say). I've tried to get 
around this by adding the following to the module, before any calls to 
cPickle.load:


if __name__ == '__main__':
import __main__
def load(f):
p = cPickle.Unpickler(f)
def fg(m, c):
if m == 'mymodule':
return getattr(__main__, c)
else:
m = __import__(m, fromlist = [c])
return getattr(m, c)
p.find_global = fg
return p.load()
else:
def load(f):
p = cPickle.Unpickler(f)
def fg(m, c):
if m == '__main__':
return globals()[c]
else:
m = __import__(m, fromlist = [c])
return getattr(m, c)
p.find_global = fg
return p.load()
cPickle.load = load
del load


It seems to work as far as I can tell, but I'll be grateful if anyone 
knows of any circumstances where it would fail, or can suggest something 
less hacky. Also, do cPickle.Pickler instances have some attribute 
corresponding to find_global that lets one determine how instances get 
pickled? I couldn't find anything about this in the docs.



--
Hate music? Then you'll hate this:

http://tinyurl.com/psymix
--
http://mail.python.org/mailman/listinfo/python-list


Re: cPickle - sharing pickled objects between scripts and imports

2012-06-23 Thread Rotwang

On 23/06/2012 17:13, Peter Otten wrote:

Rotwang wrote:


Hi all, I have a module that saves and loads data using cPickle, and
I've encountered a problem. Sometimes I want to import the module and
use it in the interactive Python interpreter, whereas sometimes I want
to run it as a script. But objects that have been pickled by running the
module as a script can't be correctly unpickled by the imported module
and vice-versa, since how they get pickled depends on whether the
module's __name__ is '__main__' or 'mymodule' (say). I've tried to get
around this by adding the following to the module, before any calls to
cPickle.load:

if __name__ == '__main__':
  import __main__
  def load(f):
  p = cPickle.Unpickler(f)
  def fg(m, c):
  if m == 'mymodule':
  return getattr(__main__, c)
  else:
  m = __import__(m, fromlist = [c])
  return getattr(m, c)
  p.find_global = fg
  return p.load()
else:
  def load(f):
  p = cPickle.Unpickler(f)
  def fg(m, c):
  if m == '__main__':
  return globals()[c]
  else:
  m = __import__(m, fromlist = [c])
  return getattr(m, c)
  p.find_global = fg
  return p.load()
cPickle.load = load
del load


It seems to work as far as I can tell, but I'll be grateful if anyone
knows of any circumstances where it would fail, or can suggest something
less hacky. Also, do cPickle.Pickler instances have some attribute
corresponding to find_global that lets one determine how instances get
pickled? I couldn't find anything about this in the docs.


if __name__ == '__main__':
    from mymodule import *

But I think it would be cleaner to move the classes you want to pickle into
another module and import that either from your main script or the
interpreter. That may also spare you some fun with unexpected isinstance()
results.


Thanks.

--
Hate music? Then you'll hate this:

http://tinyurl.com/psymix
--
http://mail.python.org/mailman/listinfo/python-list


Re: cPickle - sharing pickled objects between scripts and imports

2012-06-23 Thread Rotwang

On 23/06/2012 18:31, Dave Angel wrote:

On 06/23/2012 12:13 PM, Peter Otten wrote:

Rotwang wrote:


Hi all, I have a module that saves and loads data using cPickle, and
I've encountered a problem. Sometimes I want to import the module and
use it in the interactive Python interpreter, whereas sometimes I want
to run it as a script. But objects that have been pickled by running the
module as a script can't be correctly unpickled by the imported module
and vice-versa, since how they get pickled depends on whether the
module's __name__ is '__main__' or 'mymodule' (say). I've tried to get
around this by adding the following to the module, before any calls to
cPickle.load:

if __name__ == '__main__':
  import __main__
  def load(f):
  p = cPickle.Unpickler(f)
  def fg(m, c):
  if m == 'mymodule':
  return getattr(__main__, c)
  else:
  m = __import__(m, fromlist = [c])
  return getattr(m, c)
  p.find_global = fg
  return p.load()
else:
  def load(f):
  p = cPickle.Unpickler(f)
  def fg(m, c):
  if m == '__main__':
  return globals()[c]
  else:
  m = __import__(m, fromlist = [c])
  return getattr(m, c)
  p.find_global = fg
  return p.load()
cPickle.load = load
del load


It seems to work as far as I can tell, but I'll be grateful if anyone
knows of any circumstances where it would fail, or can suggest something
less hacky. Also, do cPickle.Pickler instances have some attribute
corresponding to find_global that lets one determine how instances get
pickled? I couldn't find anything about this in the docs.

if __name__ == '__main__':
    from mymodule import *

But I think it would be cleaner to move the classes you want to pickle into
another module and import that either from your main script or the
interpreter. That may also spare you some fun with unexpected isinstance()
results.






I would second the choice to just move the code to a separately loaded
module, and let your script simply consist of an import and a call into
that module.

It can be very dangerous to have the same module imported two different
ways (as __main__ and as mymodule), so i'd avoid anything that came
close to that notion.


OK, thanks.



Your original problem is probably that you have classes with two leading
underscores, which causes the names to be mangled with the module name.
You could simply remove one of the underscores for all such names, and
see if the pickle problem goes away.


No, I don't have any such classes. The problem is that if the object was 
pickled by the module run as a script and then unpickled by the imported 
module, the unpickler looks in __main__ rather than mymodule for the 
object's class, and doesn't find it. Conversely if the object was 
pickled by the imported module and then unpickled by the module run as a 
script then the unpickler reloads the module and makes objects 
referenced by the original object into instances of 
mymodule.oneofmyclasses, whereas (for reasons unknown to me) the object 
itself is an instance of __main__.anotheroneofmyclasses. This means that 
any method of anotheroneofmyclasses that calls isinstance(attribute, 
oneofmyclasses) doesn't work the way it should.


--
Hate music? Then you'll hate this:

http://tinyurl.com/psymix
--
http://mail.python.org/mailman/listinfo/python-list


Strange threading behaviour

2012-06-21 Thread Rotwang
Hi all, I'm using Python 2.7.2 on Windows 7 and a module I've written is 
acting strangely. I can reproduce the behaviour in question with the 
following:


--- begin bugtest.py ---

import threading, Tkinter, os, pickle

class savethread(threading.Thread):
def __init__(self, value):
threading.Thread.__init__(self)
self.value = value
def run(self):
print 'Saving:',
with open(os.path.join(os.getcwd(), 'bugfile'), 'wb') as f:
pickle.dump(self.value, f)
print 'saved'

class myclass(object):
def gui(self):
root = Tkinter.Tk()
root.grid()
def save(event):
savethread(self).start()
root.bind('s', save)
root.wait_window()

m = myclass()
m.gui()

--- end bugtest.py ---


Here's the problem: suppose I fire up Python and type

>>> import bugtest

and then click on the Tk window that spawns and press 's'. Then 
'Saving:' gets printed, and an empty file named 'bugfile' appears in my 
current working directory. But nothing else happens until I close the Tk 
window; as soon as I do so the file is written to and 'saved' gets 
printed. If I subsequently type


>>> bugtest.m.gui()

and then click on the resulting window and press 's', then 'Saving: 
saved' gets printed and the file is written to immediately, exactly as I 
would expect. Similarly if I remove the call to m.gui from the module 
and just call it myself after importing then it all works fine. But it 
seems as if calling the gui within the module itself somehow stops 
savethread(self).run from finishing its job while the gui is still alive.


Can anyone help?
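One cause that fits this exact symptom is Python 2's global import lock: a thread started while a module is still being imported can block as soon as it performs an import itself (pickle can trigger imports lazily), and it stays blocked until the top-level import finishes, i.e. until wait_window() returns. If that is what is happening here, moving the m.gui() call behind the usual guard so nothing is started at import time should fix it; a minimal sketch of the pattern (helper names invented):

```python
import threading

def do_save(results):
    # stand-in for savethread.run's pickling work
    results.append('saved')

def main():
    results = []
    t = threading.Thread(target=do_save, args=(results,))
    t.start()
    t.join()
    return results

# no thread is started at import time; work only runs when the
# module is executed as a script
if __name__ == '__main__':
    print(main())
```

With this shape, `import bugtest` no longer starts a thread while the import lock is held.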


--
Hate music? Then you'll hate this:

http://tinyurl.com/psymix
--
http://mail.python.org/mailman/listinfo/python-list

