Re: what is it, that I don't understand about python and lazy evaluation?

2009-08-13 Thread Brian Allen Vanderburg II

Erik Bernoth wrote:

Hi List,

look at the following code:

def evens():
    # iterator returning even numbers
    i = 0
    while True:
        yield i
        i += 2

# now get all the even numbers up to 15
L = [n for n in evens() if n < 15]

Isn't it strange that this code runs (in a lazy language) for 
eternity? I would expect python to spit out (in no time):

>>> L
[0, 2, 4, 6, 8, 10, 12, 14]

after 14 it is not necessary to evaluate evens() any further.

I really started to ask myself if python really is lazy, but 
everything else I wrote in lazy style still worked. Example:

>>> def test(txt, retval):
...     print(txt)
...     return retval
...
>>> test(1, True) or test(2, True)
1
True
>>> test(1, True) and test(2, True)
1
2
True


Can anybody explain what happens with evens()?

best regards
Erik Bernoth

PS: The code comes from a list post from 2006. You find it here: 
http://mail.python.org/pipermail/python-list/2006-November/585783.html
In the list comprehension, it iterates over all the items from the generator 
until the generator is done, and any item that is less than 15 becomes 
part of the list.  The 'if n < 15' does not control when the generator 
terminates, only which of its results are selected to be part of the list.
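Another option, closer to the lazy spirit of the original, is 
itertools.takewhile, which stops pulling from the generator as soon as 
the predicate fails — a minimal sketch:

```python
from itertools import takewhile

def evens():
    # infinite generator of even numbers
    i = 0
    while True:
        yield i
        i += 2

# takewhile stops consuming the generator once the predicate fails,
# so the infinite generator is never exhausted
L = list(takewhile(lambda n: n < 15, evens()))
print(L)  # [0, 2, 4, 6, 8, 10, 12, 14]
```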


You can pass the maximum desired value to make it terminate:

def evens(max):
    i = 0
    while i <= max:
        yield i
        i += 2

L = list(evens(15))
L: [0, 2, 4, 6, 8, 10, 12, 14]

L = [n for n in evens(15)]
L: [0, 2, 4, 6, 8, 10, 12, 14]


Brian Vanderburg II
--
http://mail.python.org/mailman/listinfo/python-list


How to do relpath implementation on 2.5

2009-08-08 Thread Brian Allen Vanderburg II
I've coded my own 'relpath' implementation for 2.5 (shown below) and I 
want to make sure it follows 2.6 and later as closely as possible.  
I've got a question regarding that.  When attempting to convert to a 
relative path and it is not possible for some reason (different drive or 
UNC share), should that be an error, or should it return the absolute 
path of the target?  I'm using Debian, so I don't have 2.6 available 
right now for testing.


This is what I've got so far

import os
from os import path


def relpath(target, origin=os.curdir):
    """Determine relative path of target to origin"""

    target = path.normcase(path.abspath(path.normpath(target)))
    origin = path.normcase(path.abspath(path.normpath(origin)))

    # Same?
    if target == origin:
        return '.'

    original_target = target

    # Check drive (for Windows)
    (tdrive, target) = path.splitdrive(target)
    (odrive, origin) = path.splitdrive(origin)
    if tdrive != odrive:
        return original_target

    # Check UNC path (for Windows)
    # If they are on different shares, we want an absolute path
    if not tdrive and not odrive and hasattr(path, 'splitunc'):
        (tunc, target) = path.splitunc(target)
        (ounc, origin) = path.splitunc(origin)
        if tunc != ounc:
            return original_target

    # Split into lists
    target_list = target.split(os.sep)
    origin_list = origin.split(os.sep)

    # Remove beginning empty parts
    # Helps to handle when one item may be in the root
    while target_list and not target_list[0]:
        del target_list[0]
    while origin_list and not origin_list[0]:
        del origin_list[0]

    # Remove common items
    while origin_list and target_list:
        if origin_list[0] == target_list[0]:
            del origin_list[0]
            del target_list[0]
        else:
            break

    # Combine and return the result
    relative_list = [os.pardir] * len(origin_list) + target_list
    if not relative_list:
        return os.curdir
    return os.sep.join(relative_list)


Currently I just return the absolute target if it cannot be made relative.
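For what it's worth, the 2.6+ implementation answers this question 
differently on Windows: rather than falling back to the absolute path, 
ntpath.relpath raises ValueError when the paths are on different drives 
(posixpath.relpath has no such case).  A quick check, runnable on any 
platform since both modules are importable everywhere:

```python
import posixpath
import ntpath

# POSIX flavour: any two absolute paths can be made relative
print(posixpath.relpath('/a/b/c', '/a/d'))  # ../b/c

# Windows flavour: paths on different drives cannot be made relative,
# so 2.6+ raises ValueError instead of returning the absolute target
try:
    ntpath.relpath('C:\\foo', 'D:\\bar')
except ValueError as e:
    print('ValueError:', e)
```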

Brian A. Vanderburg II
--
http://mail.python.org/mailman/listinfo/python-list


Re: Copying objects and multiple inheritance

2009-06-03 Thread Brian Allen Vanderburg II

Gabriel Genellina wrote:
En Tue, 02 Jun 2009 19:02:47 -0300, Brian Allen Vanderburg II 
brianvanderbu...@aim.com escribió:


What is the best way to copy an object that has multiple inheritance 
with the copy module.  Particularly, some of the instances in the 
hierarchy


(...some of the classes in..., I presume?)

use the __copy__ method to create a copy (because even for shallow 
copies they need some information  updated a little differently), so 
how can I make sure all the information is copied as it is supposed 
to be even for the base classes that have special requirements.


If you don't control all the classes involved, there is little hope for 
a method like __copy__ to work at all... All classes must be written 
with cooperation in mind, using super() the right way. See Python's 
Super Considered Harmful [1] and Things to Know About Python Super 
[2][3][4]


That said, and since multiple inheritance is the heart of the problem, 
maybe you can redesign your solution *without* using MI? Perhaps using 
delegation instead?


[1] http://fuhm.net/super-harmful/
[2] http://www.artima.com/weblogs/viewpost.jsp?thread=236275
[3] http://www.artima.com/weblogs/viewpost.jsp?thread=236278
[4] http://www.artima.com/weblogs/viewpost.jsp?thread=237121

I do control the classes involved.  A problem I was having, but I think 
I have now solved, is that when using super, the copy would not have the 
same class type.  Another problem was that if some class in the 
hierarchy didn't implement __copy__, its data would not be copied 
at all.  This was also fixed by copying the entire __dict__ in the base 
__copy__.  This is an idea of what I've got; it seems to be working fine:


import copy

class _empty(object):
    pass

class Base(object):
    def __init__(self):
        pass

    def __copy__(self):
        # don't use copy = Base()
        # Also don't call self.__class__() because it may have a custom
        # __init__ which takes additional parameters
        copy = _empty()
        copy.__class__ = self.__class__
        # In case a class does not have __copy__ (such as B below), make
        # sure all items are copied
        copy.__dict__.update(self.__dict__)

        return copy

class A(Base):
    def __init__(self):
        super(A, self).__init__()
        self.x = 13

    def __copy__(self):
        copy = super(A, self).__copy__()
        copy.x = self.x * 2
        return copy

class B(Base):
    def __init__(self):
        super(B, self).__init__()
        self.y = 14

    #def __copy__(self):
    #    copy = super(B, self).__copy__()
    #    copy.y = self.y / 2
    #    return copy

class C(A, B):
    def __init__(self):
        super(C, self).__init__()
        self.z = 64

    def __copy__(self):
        copy = super(C, self).__copy__()
        copy.z = self.z * self.z
        return copy

o1 = C()
o2 = copy.copy(o1)

print type(o1), o1.x, o1.y, o1.z
print type(o2), o2.x, o2.y, o2.z


--
http://mail.python.org/mailman/listinfo/python-list


Copying objects and multiple inheritance

2009-06-02 Thread Brian Allen Vanderburg II
What is the best way to copy an object that has multiple inheritance 
with the copy module.  Particularly, some of the instances in the 
hierarchy  use the __copy__ method to create a copy (because even for 
shallow copies they need some information  updated a little 
differently), so how can I make sure all the information is copied as it 
is supposed to be even for the base classes that have special requirements.


Brian Vanderburg II
--
http://mail.python.org/mailman/listinfo/python-list


Re: Understanding descriptors

2009-02-05 Thread Brian Allen Vanderburg II

bruno.42.desthuilli...@websiteburo.invalid wrote:


So the lookup chain is:

1/ lookup the class and bases for a binding descriptor
2/ then lookup the instance's __dict__
3/ then lookup the class and bases for a non-binding descriptor or 
plain attribute

4/ then class __getattr__

Also, and FWIW, there's a step zero: the call to __getattribute__. All 
of the above lookup mechanism is actually implemented by 
object.__getattribute__.


Okay, so instance attributes never use their __get__/__set__/etc when 
looking up.


A binding descriptor is one that has a __set__ (even if it doesn't do 
anything), and it takes priority over instance variables.  Properties are 
binding descriptors even if they don't have a set function specified.  A 
non-binding descriptor doesn't have __set__, and instance variables take 
priority over it.


For reading:

1. Lookup in the class/bases for a binding descriptor and if found use 
its __get__

2. If instance, look up in instance __dict__ and if found return it
3. Lookup in the class/bases
   a. if found and a descriptor, use its __get__
   b. if found and not a descriptor, return it
4. Use __getattr__ (if instance?)

For writing:

1. If instance
   a. lookup in the class/bases for a binding descriptor and if found 
use its __set__

   b. write to instance __dict__
2. If class, write in class __dict__
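The reading rules above can be checked directly.  This sketch defines 
one binding (data) and one non-binding descriptor and plants shadowing 
entries in the instance __dict__:

```python
class Data(object):
    # binding (data) descriptor: defines both __get__ and __set__
    def __get__(self, obj, owner):
        return "data-get"
    def __set__(self, obj, value):
        pass  # swallow writes

class NonData(object):
    # non-binding descriptor: __get__ only
    def __get__(self, obj, owner):
        return "nondata-get"

class C(object):
    d = Data()
    n = NonData()

c = C()
# shadow both names directly in the instance dict
c.__dict__['d'] = 'shadow-d'
c.__dict__['n'] = 'shadow-n'

print(c.d)  # data-get: the data descriptor wins over the instance dict
print(c.n)  # shadow-n: the instance dict wins over a non-data descriptor
```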


I think I understand it now.  Thanks.

Brian Vanderburg II
--
http://mail.python.org/mailman/listinfo/python-list


Re: Flattening lists

2009-02-05 Thread Brian Allen Vanderburg II

mrk...@gmail.com wrote:

Hello everybody,

Any better solution than this?

def flatten(x):
    res = []
    for el in x:
        if isinstance(el, list):
            res.extend(flatten(el))
        else:
            res.append(el)
    return res

a = [1, 2, 3, [4, 5, 6], [[7, 8], [9, 10]]]
print flatten(a)


[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

Regards,
mk

--
http://mail.python.org/mailman/listinfo/python-list


I think it may be just a 'little' more efficient to do this:

def flatten(x, res=None):
    if res is None:
        res = []

    for el in x:
        if isinstance(el, (tuple, list)):
            flatten(el, res)
        else:
            res.append(el)

    return res

Brian Vanderburg II
--
http://mail.python.org/mailman/listinfo/python-list


Re: Flattening lists

2009-02-05 Thread Brian Allen Vanderburg II

mrk...@gmail.com wrote:

Baolong zhen wrote:

less list creation.


At the cost of doing this at each 'flatten' call:

if res is None:
   res = []

The check above executes once per 'flatten' call, which is exactly as 
often as the list creation it replaces.


Is list creation really more costly than that check?

Probably not.  I wrote a small test program using a list several levels 
deep, each list containing 5 sublists at each level and finally just a 
list of numbers.  Flattening 1000 times took about 3.9 seconds for the 
version creating a list at each level, and 3.2 seconds for the version 
passing the result list down.
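For anyone who wants to reproduce the comparison, a sketch using timeit 
(the nesting shape below is an assumption, not the original benchmark, 
and absolute numbers will differ by machine):

```python
import timeit

def flatten_new(x):
    # creates a fresh result list at every recursion level
    res = []
    for el in x:
        if isinstance(el, list):
            res.extend(flatten_new(el))
        else:
            res.append(el)
    return res

def flatten_acc(x, res=None):
    # threads a single accumulator list through the recursion
    if res is None:
        res = []
    for el in x:
        if isinstance(el, list):
            flatten_acc(el, res)
        else:
            res.append(el)
    return res

nested = [[list(range(10)) for _ in range(5)] for _ in range(5)]
t_new = timeit.timeit(lambda: flatten_new(nested), number=1000)
t_acc = timeit.timeit(lambda: flatten_acc(nested), number=1000)
print(t_new, t_acc)
```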


Brian Vanderburg II
--
http://mail.python.org/mailman/listinfo/python-list


Re: Converting numbers to words

2009-02-05 Thread Brian Allen Vanderburg II

todp...@hotmail.com wrote:

 I've been trying to figure this out for over 2 hours and I'm really 
frustrated right now.


 I first made Python ask the user to input a height in metres. If the 
user enters a value in metres, then it converts it to feet and inches as 
follows:
  


 Enter the height (in metres): 1.6

 It is 5 feet, 3 inches high.
  
  
 What I want to do is to make it print only words. For example, instead 
of its saying, "It is 5 feet, 3 inches high", I want it to say, "It is 
five feet, three inches high".

 I'd appreciate any suggestions.

I made something similar in the past.  First I break it into two 
functions: one function handles 0-999 and returns '' for zero or a 
meaningful value otherwise; the other function works out how many groups 
of thousands there are and, for each group, calls the first function on 
its value and appends the correct scale word.  This only works for the 
English language though.


Brian Vanderburg II

num_words1 = ('zero', # not used
    'one',
    'two',
    'three',
    'four',
    'five',
    'six',
    'seven',
    'eight',
    'nine',
    'ten',
    'eleven',
    'twelve',
    'thirteen',
    'fourteen',
    'fifteen',
    'sixteen',
    'seventeen',
    'eighteen',
    'nineteen')

num_words2 = ('twenty',
    'thirty',
    'forty',
    'fifty',
    'sixty',
    'seventy',
    'eighty',
    'ninety')

num_words3 = ('thousand',
    'million',
    'billion',
    'trillion',
    'quadrillion')

def word_func1(value):
    # value can be from 0 to 999
    result = ''

    if value == 0:
        return result

    # Handle hundreds
    if value >= 100:
        hvalue = int(value / 100)
        if result:
            result += ' '
        result += num_words1[hvalue]
        result += ' hundred'
        value -= (hvalue * 100)

    if value == 0:
        return result

    # Handle 1-19
    if value < 20:
        if result:
            result += ' '
        result += num_words1[value]
        return result

    # Handle 10s (20-90)
    tvalue = int(value / 10)
    if result:
        result += ' '
    result += num_words2[tvalue - 2]
    value -= (tvalue * 10)

    if value == 0:
        return result

    # Handle ones
    if result:
        result += ' '
    result += num_words1[value]

    return result

def word_func2(value):
    result = ''

    if value == 0:
        return 'zero'

    # Determine support values
    divider = 1
    l = len(num_words3)
    for i in range(l):
        divider *= 1000

    for i in range(l):
        if value >= divider:
            dvalue = int(value / divider)
            if result:
                result += ' '
            result += word_func1(dvalue)
            result += ' '
            result += num_words3[l - i - 1]
            value -= (dvalue * divider)
        divider /= 1000

    if value > 0:
        if result:
            result += ' '
        result += word_func1(value)

    return result

number_to_word = word_func2

--
http://mail.python.org/mailman/listinfo/python-list


Re: Why such different HTTP response results between 2.5 and 3.0

2009-02-01 Thread Brian Allen Vanderburg II

an0...@gmail.com wrote:

Below are two semantically identical snippets for making the same partial
HTTP response request, for Python 2.5 and Python 3.0 respectively.
However, the 3.0 version returns a not-so-right result (msg), which is a
bytes object of length 239775, while the 2.5 version returns a good msg,
which is a 239733-byte string that is the content of a proper zip file.
I really can't figure out what's wrong, though I've spotted some
\r\n segments in the 3.0 msg that are absent in the 2.5 msg.
So could anyone give me some hints? Thanks in advance.

Code:

# Python 2.5
import urllib2
auth_handler = urllib2.HTTPBasicAuthHandler()
auth_handler.add_password(realm='pluses and minuses',
                          uri='http://www.pythonchallenge.com/pc/hex/unreal.jpg',
                          user='butter',
                          passwd='fly')
opener = urllib2.build_opener(auth_handler)

req = urllib2.Request('http://www.pythonchallenge.com/pc/hex/unreal.jpg')
req.add_header('Range', 'bytes=1152983631-')
res = opener.open(req)
msg = res.read()

# Python 3.0
import urllib.request
auth_handler = urllib.request.HTTPBasicAuthHandler()
auth_handler.add_password(realm='pluses and minuses',
                          uri='http://www.pythonchallenge.com/pc/hex/unreal.jpg',
                          user='butter',
                          passwd='fly')
opener = urllib.request.build_opener(auth_handler)

req = urllib.request.Request('http://www.pythonchallenge.com/pc/hex/unreal.jpg')
req.add_header('Range', 'bytes=1152983631-')
res = opener.open(req)
msg = res.read()
--
http://mail.python.org/mailman/listinfo/python-list
  
From what I can tell, Python 2.5 returns the response automatically 
decoded as text.  Python 3.0 returns a bytes object and doesn't decode 
it at all.  I did a test with urlopen:


In 2.5, for http://google.com I just get the regular HTML
In 3.0 I get some extras at the start and end:

   191d\r\n at the start
   \r\n0\r\n\r\n at the end

In 2.5, newlines are automatically decoded
In 3.0, the \r\n pairs are kept

I hope there is an easy way to decode it as there was in 2.x
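Those markers are HTTP chunked transfer-encoding framing (a hex chunk 
size line, the chunk data, then a zero-length final chunk), which 3.0's 
http layer apparently left in the body here.  A minimal de-chunker 
sketch, assuming well-formed input:

```python
def dechunk(raw):
    # raw: bytes of a chunked-encoded body, e.g. b"4\r\nWiki\r\n...0\r\n\r\n"
    out = []
    pos = 0
    while True:
        eol = raw.index(b"\r\n", pos)
        size = int(raw[pos:eol], 16)   # chunk size is a hex number
        if size == 0:                  # zero-length chunk terminates the body
            break
        out.append(raw[eol + 2:eol + 2 + size])
        pos = eol + 2 + size + 2       # skip chunk data and its trailing CRLF
    return b"".join(out)

body = b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n"
print(dechunk(body))  # b'Wikipedia'
```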

Brian Vanderburg II
--
http://mail.python.org/mailman/listinfo/python-list


Understanding descriptors

2009-01-29 Thread Brian Allen Vanderburg II
I'm trying to better understand descriptors and I've got a few questions 
still after reading some sites.  Here is what I 'think', but please let 
me know if any of this is wrong as I'm sure it probably is.



First, when accessing an attribute on a class or instance it must be 
found.  For an instance, its __dict__ is searched first.  If not found, 
the class and base classes' __dict__ are searched.  For a class, its own 
__dict__ and the base classes' __dict__ are searched.


If assigning, and the attribute is found and is not a descriptor, or if 
the attribute is not found, then the assignment will occur in the 
__dict__ of the class or instance.  If it is found and is a descriptor, 
then __set__ will be called.


For reading, if the attribute is found and is a descriptor, __get__ will 
be called, passing the object (if it is an instance) and class.  If it 
is not a descriptor, the attribute will be returned directly.


Methods defined in a class are just functions:

class C(object):
   def F(self):
  pass

C.__dict__['F'] # function object ...

But functions are descriptors:

C.__dict__['F'].__get__ # method wrapper ...

def f1():
   pass

f1.__get__ # method wrapper ...

When a lookup is done it uses this descriptor to make a bound or unbound 
method:


c=C()

C.F # unbound method object, expects explicit instance when calling the 
function
c.F # bound method object provides instance implicitly when calling the 
function


This is also done when adding to the classes:

C.f1 = f1

f1 # function
C.f1 # unbound method
c.f1 # bound method

To prevent this it has to be decorated so the descriptor doesn't cause 
the binding:


C.f2 = staticmethod(f1)
C.f2 # function
c.f2 # function

Here is a question, why don't instance attributes do the same thing?

c.f3 = f1
c.f3 # function, not bound method

So it is not calling the __get__ method for c.f3.  After it finds c.f3 in 
c.__dict__, since it has a getter, shouldn't it call __get__ to 
return the bound method?  It is good that it doesn't, I know, but I just 
want to know why it doesn't, from an implementation view.
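The asymmetry is easy to observe, and types.MethodType can be used when 
binding is actually wanted:

```python
import types

def f(self):
    return self

class C(object):
    pass

c = C()
c.f = f
# found in the instance __dict__, so __get__ is never consulted:
print(type(c.f))   # a plain function, not a bound method

# bind by hand when a bound method is actually wanted:
c.g = types.MethodType(f, c)
print(c.g() is c)  # True
```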



Brian Vanderburg II
--
http://mail.python.org/mailman/listinfo/python-list


Re: new.instancemethod questions

2009-01-29 Thread Brian Allen Vanderburg II

schi...@gmail.com wrote:

On Jan 29, 7:38 pm, Mel mwil...@the-wire.com wrote:
  

schickb wrote:


I'd like to add bound functions to instances, and found the
instancemethod function in the new module. A few questions:
  
1. Why is instancemethod even needed? It's counter-intuitive (to me at
least) that assigning a function to a class results in bound functions on
its instances, while assigning directly to instances does not create a
bound function. So why doesn't assigning a function to an instance
attribute result in a function bound to that instance?
  

If I understand you correctly, rebinding to the instance would break code
like:

myfakefile.write = sys.stdout.write

where the intent would be to redirect any output through myfakefile straight
to sys.stdout.  The code for the sys.stdout.write function would never find
the attributes it needed in the instance of myfakefile.  To do this,
methods have to stay bound to their proper instances.




1. I'm thinking about assigning free non-bound functions. Like:

class A(object):
   pass

def func(self):
   print repr(self)

a = A()
a.func = func  # Why doesn't this automatically create a bound
function (aka method)?
  
Actually I found out why it doesn't after messing 
around some more.  If an attribute is found in the instance dictionary, 
its __get__ doesn't get called even if it is a descriptor; that only 
happens if it is found in the class dictionary of the class or its base 
classes.  This makes it possible to store a function in the instance to 
be called later as a plain function for some useful purpose; for example, 
two different instances of an object could use different sorting functions:


# sort functions are responsible for sorting which mutation is the least 
and most fit

def fitness1(v1, v2):
    pass  # use one technique to determine which is more fit

def fitness2(v1, v2):
    pass  # use another technique to determine which is more fit

... # more fitness functions

class Environment:
    def __init__(self, fitness, ...):
        self.fitness_func = fitness
        ...
    ...

# create environments, each one has different fitness function
a = Environment(fitness1)
b = Environment(fitness2)


Now when it is time to kill off the least fit of the genetic mutations, 
each environment can sort which is the least and most fit in different ways.


2. And what is the preferred way to do this if the new module and
its instancemethod function are depreciated?

  

Most of the code I see does this with a closure something like this:

def AddMethod(obj, func, name=None):
    if name is None:
        name = func.__name__

    def method(*args, **kwargs):
        return func(obj, *args, **kwargs)
    setattr(obj, name, method)

class MyClass(object):
    pass

def f1(self):
    print self

a = MyClass()
AddMethod(a, f1)

a.f1() # prints object a

You can also create a bound method and manually assign it to the 
instance.  This is easier:


import types
a.f2 = types.MethodType(f1, a)

a.f2() # prints object a


These may work for most uses, but both have a problem that shows up if 
you need to make a copy of the instance.  When you copy it, the copy's 
'f1' will still call the function using the old object:


a.f1() # prints object a
b = copy.copy(a)
b.f1() # still prints a
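One way around this is a __copy__ that rebinds instance-stored methods to 
the new object via __func__ — a sketch, assuming the bound methods live in 
the instance __dict__ (MyClass and f1 here are stand-ins, not the code above):

```python
import copy
import types

def f1(self):
    return self

class MyClass(object):
    def __init__(self):
        # store a method bound to this particular instance
        self.f1 = types.MethodType(f1, self)

    def __copy__(self):
        new = MyClass.__new__(MyClass)
        new.__dict__.update(self.__dict__)
        # rebind the instance-stored method so it targets the copy
        new.f1 = types.MethodType(self.f1.__func__, new)
        return new

a = MyClass()
b = copy.copy(a)
print(a.f1() is a)  # True
print(b.f1() is b)  # True: the copy's method now targets the copy
```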



Brian Vanderburg II
--
http://mail.python.org/mailman/listinfo/python-list


Re: libsudo ?

2009-01-29 Thread Brian Allen Vanderburg II

linuxguy...@gmail.com wrote:

Does anyone know where I would find libsudo ?
  

http://sourceforge.net/projects/libsudo

If you had the choice of using pexpect or libsudo, which would you use ?
  
libsudo does all the work for you of executing sudo, checking for the 
expected responses, and so on.  If all you need is to use sudo from 
Python, I suspect it would be easier than pexpect.


It is a C library however, so after being compiled and installed, you 
will need to use ctypes to use it.  It is very simple, as the only 
function to deal with is:


int runAs( char* command, char* password, char* user, int invalidate );

Thanks

--
http://mail.python.org/mailman/listinfo/python-list
  

Brian Vanderburg II
--
http://mail.python.org/mailman/listinfo/python-list


Re: USB in python

2009-01-26 Thread Brian Allen Vanderburg II

astan.c...@al.com.au wrote:

Tim Roberts wrote:
Sorry, but you have NOT created a USB device, and I sincerely hope 
you do not try to plug it into a real USB port.
  
Sorry, by USB device, I meant a device that is powered/activated by a 
bunch of wires that I want to control using a computer, and since I had 
a spare USB jack lying around, I used that instead. But so far I 
haven't tried it, nor will I try it if it won't work properly. Yes, it is 
not a proper USB device, because I didn't build it to specifically 
interface with the USB port; but I had to start somewhere. Also, the 
device requires more power than the standard parallel port can give.
Anyway, it looks like the easiest solution for my case is a 
microcontroller

--
http://mail.python.org/mailman/listinfo/python-list
I've played around in this area a little bit.  Microcontrollers still 
require hardware programming and for simple circuits I think it is 
overkill.  If you want to use USB then you may be able to use the FTDI 
chips.  They have both serial (FT232) and parallel (FT245) chips and are 
quite cheap.  They are surface mount devices though, but you can get a 
kit that includes USB port, the chip already connected to a board with a 
DIP plug and some essential circuits.  libftdi, which runs on top of 
libusb, can control both of these and they require no programming 
(unless you want to change the USB configuration settings such as vendor 
ID, etc, from the default value)


This is the FT245 chip which is basically USB-to-Parallel.

Chips: http://www.ftdichip.com/Products/FT245R.htm
Kit/Board: http://www.ftdichip.com/Products/EvaluationKits/UM245R.htm

The spec sheet for the board seems quite simple.  Its pinout is 
similar to that of a parallel port in that you have your data lines 
DB0-DB7, etc.  It can also be connected in a bus-powered configuration 
(~100mA) or a self-powered configuration.  The kit is more expensive than 
the chip itself, but probably easier, especially if you don't have any 
experience with surface mount.


You could build it into your device. You could also create a simple 
switch box out of it to control external devices, maybe connecting each 
of the data lines to relays to turn on/off eight devices, etc.


Brian Vanderburg II

--
http://mail.python.org/mailman/listinfo/python-list


Re: understanding nested lists?

2009-01-24 Thread Brian Allen Vanderburg II

vinc...@vincentdavis.net wrote:
I have a short piece of code that is not doing what I expect. When I 
assign a value to a list in a list, alist[2][4]=z, this seems to replace 
the element in all the sublists. I assume it is supposed to, 
but this is not what I expect. How would I assign a value to the 4th 
element in the 2nd sublist? Here is the code I have. All the printed 
values are what I would expect, except that all sublist values are 
replaced.


Thanks for your help
Vincent

on the first iteration I get ;
new_list [[None, 0, 1, None], [None, 0, 1, None], [None, 0, 1, None], 
[None, 0, 1, None], [None, 0, 1, None], [None, 0, 1, None]]


and expected this;
new_list [[None, 0, 1, None], [None, None, None, None], 
[None, None, None, None], [None, None, None, None], [None, None, None, 
None], [None, None, None, None]]


Code;
list1 = [[1,2],[0,3,2,1],[0,1,3],[2,0,1],[3],[2,3]]
new_list = [[None]*4]*6
print 'new_list', new_list
for sublist in range(6): # 6 because it is the # of rows in list1
    print 'sublist', sublist
    for x in list1[sublist]:
        print list1[sublist]
        print 'new_list[sublist][x]', new_list[sublist][x]
        new_list[sublist][x] = list1[sublist].index(x)
        print 'sublist', sublist, 'x', x
        print new_list[sublist][x]
print 'new_list', new_list



--
http://mail.python.org/mailman/listinfo/python-list
  


The problem is likely this right here:

[[None]*4]*6

This first creates an inner list that has 4 Nones, then the outer list 
contains 6 references to that same list, so (new_list[0] is new_list[1]) 
and (new_list[1] is new_list[2]).  I make this mistake a lot myself.


l=[[None]*4]*6
print id(l[0])  # -1210893364
print id(l[1])  # -1210893364

l = [list([None]*4) for x in range(6)]
print id(l[0])  # -1210893612
print id(l[1])  # -1210893580

Works better

Brian Vanderburg II
--
http://mail.python.org/mailman/listinfo/python-list


Re: Dynamic methods and lambda functions

2009-01-23 Thread Brian Allen Vanderburg II

unine...@gmail.com wrote:

class Person:
    def __init__(self):
        for prop in props:
            setattr(self, "__" + prop[0], prop[1])
            setattr(Person, "Get" + prop[0],
                    lambda self: getattr(self, "__" + prop[0]))

  
I've had a similar problem here, and here is how I can best explain it.  
The prop in the lambda function is a closure over the name 'prop' in 
the containing namespace (__init__), so when the lambda function 
executes, it looks up the name 'prop' in that namespace and uses its 
current value.  After the 'for prop in props' loop is complete, 'prop' is 
left referring to the last item in props, so each lambda function would 
use it (mary)
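The late-binding effect can be seen in two lines, along with the 
default-argument fix:

```python
# each lambda closes over the *variable* i, not its value at definition time
funcs = [lambda: i for i in range(3)]
print([f() for f in funcs])   # [2, 2, 2]

# binding the current value as a default argument captures it per-lambda
funcs = [lambda i=i: i for i in range(3)]
print([f() for f in funcs])   # [0, 1, 2]
```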


One solution is to not use lambda and avoid closures by using default 
arguments:


for prop in props:
    def Getter(self=self, prop=prop):
        return getattr(self, '__' + prop[0])
    setattr(self, '__' + prop[0], prop[1])
    setattr(self, 'Get' + prop[0], Getter)

I have several problems with this though:

1. I don't think this will invoke Python's name mangling mechanism.  The 
attribute will be '__name' and not '_Person__name'.


2. If you make a copy of the class, including attributes, the Getter 
will operate on the old class not new:


a = Person()
b = copy.copy(a)

setattr(a, '__name', 'bob')
setattr(b, '__name', 'sarah')

b.Getname() # -> 'bob'

In order to make it work, the class must support updating the Getter 
when it is copied to have a new self value.


import copy

class MethodCaller:
    def __init__(self, obj, method, name):
        self.obj = obj
        self.method = method
        self.name = name

        setattr(obj, name, self)

    def __call__(self, *args, **kwargs):
        return self.method(self.obj, *args, **kwargs)

    def copy(self, newobj):
        return MethodCaller(newobj, self.method, self.name)


props = ( ('name', 'mary'), ('age', 21), ('gender', 'female') )

class Person:
    def __init__(self):
        self._methods = []

        for prop in props:
            (name, value) = prop

            def getter(self, name=name):
                return getattr(self, '_' + name)

            setattr(self, '_' + name, value)
            self._methods.append(MethodCaller(self, getter, 'Get' + name))

    def copy(self, copymethods=True):
        c = copy.copy(self)
        if copymethods:
            c._methods = []
            for i in self._methods:
                c._methods.append(i.copy(c))
        return c


# Example without copying methods
p = Person()
q = p.copy(False)

p._name = 'sarah'
q._name = 'michelle'

print p.Getname()
print p.Getage()
print p.Getgender()

print q.Getname() # Still prints 'sarah', because getter still refers to 
'p' instead of 'q'

print q.Getage()
print q.Getgender()

# Example with copying methods
p = Person()
q = p.copy()

p._name = 'sarah'
q._name = 'michelle'

print p.Getname()
print p.Getage()
print p.Getgender()

print q.Getname() # Prints 'michelle'
print q.Getage()
print q.Getgender()

--
http://mail.python.org/mailman/listinfo/python-list


Re: How do I get my python program to get the root password ?

2009-01-23 Thread Brian Allen Vanderburg II

gra...@visi.com wrote:

On 2009-01-24, Linuxguy123 linuxguy...@gmail.com wrote:

  

I want to make a python program that I can run as a normal
user that changes the permission on some device files.  It
will need to ask me for the root password and then run chown
as root in order to do this. 


How do I accomplish this (easily) ?



The short answer is: you don't accomplish that easily.

The long answer is: you can accomplish it, with difficulty, by using
a pty or the pexpect module to execute the su or sudo command.

  


Check out libsudo.  It is a simple library that calls the sudo 
program, except it does all the work of reading/writing the pipes for 
you.  You could then use ctypes to interface to it.  Sudo doesn't use 
the root password but the password of the user executing the command, 
though there may be a way in /etc/sudoers to make it use the password of 
the user the command is executed as instead.  I don't really know; I 
just have mine set up so my main user account can execute any 
command.



Brian Vanderburg II
--
http://mail.python.org/mailman/listinfo/python-list


Re: USB in python

2009-01-22 Thread Brian Allen Vanderburg II

astan.c...@al.com.au wrote:

Hi,
I'm trying to write a program for my USB device and I'm thinking of 
using python to do this. The USB device is of my own making and it is 
activated when one of the two data pins of the USB is given about 5V 
(or similar to whatever the power pin is getting). Now I'm confused as 
to whether the software to activate this can actually be written, and 
how do I do it? Any examples? I've seen pyUSB but it doesn't give me 
control over the hardware and how much power is going through the data 
pins.

Thanks for any help.
Cheers
Astan
--
http://mail.python.org/mailman/listinfo/python-list
I don't think you can actually control the USB port's data lines like 
this.  I've been searching for some ideas as well for some small 
hobbyist projects I'm thinking about, and it seems that either the EZUSB 
chip or maybe even easier the FTDI chips might be the way to go.  I 
think the FTDI chip is basically a USB-to-RS232 converter and there is a 
libftdi that is built on top of libusb I think.  Anyway for my design 
(if I ever get around to it) I'm going to create a library in C that 
uses the hardware and then I can create a Python wrapper around that 
library.


Brian Vanderburg II
--
http://mail.python.org/mailman/listinfo/python-list


Re: pep 8 constants

2009-01-22 Thread Brian Allen Vanderburg II

bock...@virgilio.it wrote:

Constants would be a nice addition in python, sure enough.
But I'm not sure that this can be done without a run-time check every time
the constant is used, and python is already slow enough. Maybe a check
that is disabled when running with optimizing flags ?

But I'm sure this discussion has been already made and the FINAL WORD has
been already spoken.

Ciao

FB
--
http://mail.python.org/mailman/listinfo/python-list
  
One idea to make constants possible would be to extend properties to be 
able to exist at the module level as well as the class level:


@property
def pi():
    return 3.14159

print(pi)  # prints 3.14159
pi = 32    # Raises an error: "Cannot set attribute ..."
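Module-level properties don't exist, but a rough sketch of the same effect is possible by giving a module object a custom type (the `_ConstModule` class and the `constants` module name here are invented for illustration):

```python
import sys
import types

class _ConstModule(types.ModuleType):
    # A property on the module's *type* acts like a read-only
    # module-level attribute.
    @property
    def pi(self):
        return 3.14159

# Hypothetical module named "constants", registered by hand so that a
# normal import statement finds it.
sys.modules["constants"] = _ConstModule("constants")

import constants

print(constants.pi)        # 3.14159
try:
    constants.pi = 32      # the property has no setter
except AttributeError:
    print("Cannot set attribute")
```

Since the property lives on the module's type rather than in its `__dict__`, assignment goes through the descriptor and is rejected.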


Brian Vanderburg II


--
http://mail.python.org/mailman/listinfo/python-list


Idea to support public/private.

2009-01-22 Thread Brian Allen Vanderburg II
Okay, so I don't really care about public/private, but I was watching the 
lists (the Does Python really follow its philosophy of Readability 
counts? thread) and I thought of a 'possible' way to add this support to the language.


I have implemented a class which allows creating both a private as well 
as a protected member, only it is currently a bit of work.  It could 
perhaps be reworked into decorators.



import sys
import inspect

def get_private_codes(class_):
    codes = []
    for i in class_.__dict__:
        value = class_.__dict__[i]
        if inspect.isfunction(value):
            codes.append(value.func_code)
    return codes

def get_protected_codes(class_, codes=None):
    if codes is None:
        codes = []

    for i in class_.__bases__:
        get_protected_codes(i, codes)

    for i in class_.__dict__:
        value = class_.__dict__[i]
        if inspect.isfunction(value):
            codes.append(value.func_code)
    return codes


class Test(object):
    def __init__(self):
        self.protected = 45
        self.private = 34

    def setprotected(self, value):
        frame = sys._getframe(1)
        if frame.f_code in get_protected_codes(self.__class__):
            self.__protect_value_ZR20 = value
        else:
            raise AttributeError("Protected Write Error")

    def getprotected(self):
        frame = sys._getframe(1)
        if frame.f_code in get_protected_codes(self.__class__):
            return self.__protect_value_ZR20
        else:
            raise AttributeError("Protected Read Error")

    protected = property(getprotected, setprotected)

    def setprivate(self, value):
        frame = sys._getframe(1)
        if frame.f_code in get_private_codes(self.__class__):
            self.__private_value_ZR20 = value
        else:
            raise AttributeError("Private Write Error")

    def getprivate(self):
        frame = sys._getframe(1)
        if frame.f_code in get_private_codes(self.__class__):
            return self.__private_value_ZR20
        else:
            raise AttributeError("Private Read Error")

    private = property(getprivate, setprivate)

class Test2(Test):
    def __init__(self):
        self.protected = 1

a = Test()
b = Test2()
#print a.private
#a.private = 1
#print a.protected
#a.protected = 1

--
http://mail.python.org/mailman/listinfo/python-list


Re: Idea to support public/private.

2009-01-22 Thread Brian Allen Vanderburg II

There was a small error in setprivate/getprivate:



import sys
import inspect

def get_private_codes(class_):
    codes = []
    for i in class_.__dict__:
        value = class_.__dict__[i]
        if inspect.isfunction(value):
            codes.append(value.func_code)
    return codes

def get_protected_codes(class_, codes=None):
    if codes is None:
        codes = []

    for i in class_.__bases__:
        get_protected_codes(i, codes)

    for i in class_.__dict__:
        value = class_.__dict__[i]
        if inspect.isfunction(value):
            codes.append(value.func_code)
    return codes


class Test(object):
    def __init__(self):
        self.protected = 45
        self.private = 34

    def setprotected(self, value):
        frame = sys._getframe(1)
        if frame.f_code in get_protected_codes(self.__class__):
            self.__protect_value_ZR20 = value
        else:
            raise AttributeError("Protected Write Error")

    def getprotected(self):
        frame = sys._getframe(1)
        if frame.f_code in get_protected_codes(self.__class__):
            return self.__protect_value_ZR20
        else:
            raise AttributeError("Protected Read Error")

    protected = property(getprotected, setprotected)

    def setprivate(self, value):
        frame = sys._getframe(1)
        if frame.f_code in get_private_codes(Test):
            self.__private_value_ZR20 = value
        else:
            raise AttributeError("Private Write Error")

    def getprivate(self):
        frame = sys._getframe(1)
        if frame.f_code in get_private_codes(Test):
            return self.__private_value_ZR20
        else:
            raise AttributeError("Private Read Error")

    private = property(getprivate, setprivate)

class Test2(Test):
    def __init__(self):
        self.protected = 1
        self.private = 1

a = Test()
b = Test2()
#print a.private
#a.private = 1
#print a.protected
#a.protected = 1
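The frame-inspection idea above can be boiled down to a smaller self-contained sketch (this is not the original code; the names `Guarded`, `secret` and `peek` are invented for illustration, and it uses `__code__` rather than 2.x's `func_code`):

```python
import sys
import inspect

class Guarded(object):
    # An access is allowed only when the calling code object belongs
    # to one of this class's own methods.
    def _allowed(self):
        caller = sys._getframe(2).f_code
        codes = [v.__code__ for v in type(self).__dict__.values()
                 if inspect.isfunction(v)]
        return caller in codes

    def _get(self):
        if not self._allowed():
            raise AttributeError("private read")
        return self._value

    def _set(self, value):
        if not self._allowed():
            raise AttributeError("private write")
        self._value = value

    secret = property(_get, _set)

    def __init__(self):
        self.secret = 42          # allowed: set from inside the class

    def peek(self):
        return self.secret        # allowed: read from inside the class

g = Guarded()
print(g.peek())                   # 42
try:
    g.secret = 1                  # outside access is rejected
except AttributeError:
    print("blocked")
```

The `sys._getframe(2)` depth accounts for the two extra frames (`_allowed` and the property accessor) between the real caller and the check.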


--
http://mail.python.org/mailman/listinfo/python-list


Re: USB in python

2009-01-22 Thread Brian Allen Vanderburg II

astan.c...@al.com.au wrote:

Hi,
Thanks for all the responses but I forgot to mention that I have very 
little hardware understanding (at least in english) and the device 
itself it very simple and only needs about 5V power to be active. The 
problem here is that I want to control when the device is active using 
a computer so I thought USB might be a good choice since its simple 
(but didn't turn out to be). I'm open to any other suggestions on how 
I might achieve this hardware and software-wise (as in what interface 
should I use, etc). Also I'm trying to stay away from (complex) micro 
controllers.

Any ideas?
Thanks again
Astan
--
http://mail.python.org/mailman/listinfo/python-list
How about a different interface?  From what I have read, the parallel 
port is a bit easier to program, and I think you can control its data 
lines directly.  There is also a Python wrapper for it on the pyserial 
web site (pyparallel maybe?)


If you don't have a built-in parallel port then there are those USB to 
serial/parallel converters.



Brian A. Vanderburg II
--
http://mail.python.org/mailman/listinfo/python-list


Re: Does Python really follow its philosophy of Readability counts?

2009-01-14 Thread Brian Allen Vanderburg II

rt8...@gmail.com wrote:

Here is a piece of C code this same guy showed me saying Pythonic
indention would make this hard to read -- Well lets see then!

I swear, before god, this is the exact code he showed me. If you don't
believe me i will post a link to the thread.

//  Warning ugly C code ahead!
if( is_opt_data() < sizeof( long double ) ) { // test for insufficient
data
return TRUE; // indicate buffer empty
  } // end test for insufficient data
  if( is_circ() ) { // test for circular buffer
if( i < o ) { // test for data area divided
  if( ( l - o ) >= sizeof( long double ) ) { // test for data
contiguous
*t = ( ( long double * ) f )[ o ]; // return data
o += sizeof( long double ); // adjust out
if( o >= l ) { // test for out wrap around
  o = 0; // wrap out around limit
} // end test for out wrap around
  } else { // data not contiguous in buffer
return load( ( char * ) t, sizeof( long double ) ); // return
data
  } // end test for data contiguous
} else { // data are not divided
  *t = ( ( float * ) f )[ o ]; // return data
  o += sizeof( long double ); // adjust out
  if( o >= l ) { // test for out reached limit
o = 0; // wrap out around
  } // end test for out reached limit
} // end test for data area divided
  } else { // block buffer
*t = ( ( long double * ) f )[ o ]; // return data
o += sizeof( long double ); // adjust data pointer
  } // end test for circular buffer

  
I do a bit of C and C++ programming and even I think that is ugly and 
unreadable.  First of all, there are way too many comments; why does he 
comment every single line?  Second, I've always found that brace/indent 
style leads to harder-to-read code, IMHO.  I think the Allman style is 
the most readable, followed perhaps by Whitesmiths style.


Brian Vanderburg II
--
http://mail.python.org/mailman/listinfo/python-list


Python threading

2009-01-13 Thread Brian Allen Vanderburg II
I'm doing some multi-threaded programming and before diving into the 
C/C++ code I though I'd do some in Python first.  I decided to read 
through the threading module and I understand some of it, but I don't 
understand this, though I'm sure it is easy:


The condition object has a method _is_owned, which is used only if the 
underlying lock doesn't provide one.  An RLock does provide one but a 
regular Lock does not.  It is supposed to return True if the lock is 
owned by the current thread:


    def _is_owned(self):
        # Return True if lock is owned by currentThread.
        # This method is called only if __lock doesn't have _is_owned().
        if self.__lock.acquire(0):
            self.__lock.release()
            return False
        else:
            return True

It seems that for a condition built on a plain Lock rather than an 
RLock, self.__lock.acquire(0) will return False whenever the lock is 
held, even by a thread other than the current one, so _is_owned would 
return True even though the lock is owned by another thread.
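A small self-contained experiment (the helper and variable names here are invented) seems to confirm the concern: when another thread holds a plain Lock, the acquire(0) heuristic reports ownership to the current thread too:

```python
import threading

def is_owned(lock):
    # The heuristic from threading.Condition: a failed non-blocking
    # acquire is taken to mean "the current thread owns the lock".
    if lock.acquire(False):
        lock.release()
        return False
    return True

lock = threading.Lock()
started = threading.Event()
release = threading.Event()

def holder():
    # Another thread acquires the lock and holds it until released.
    lock.acquire()
    started.set()
    release.wait()
    lock.release()

t = threading.Thread(target=holder)
t.start()
started.wait()

# The OTHER thread holds the lock, yet the heuristic says we own it.
misreported = is_owned(lock)
print(misreported)  # True

release.set()
t.join()
```

This is only a heuristic failure for plain Locks; with an RLock, the real `_is_owned` tracks the owning thread correctly.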


B. Vanderburg II
--
http://mail.python.org/mailman/listinfo/python-list


Re: If programming languages were religions...

2008-12-19 Thread Brian Allen Vanderburg II

martin.lal...@gmail.com wrote:

very interesting
http://www.aegisub.net/2008/12/if-programming-languages-were-religions.html

Python would be Humanism: It's simple, unrestrictive, and all you
need to follow it is common sense. Many of the followers claim to feel
relieved from all the burden imposed by other languages, and that they
have rediscovered the joy of programming. There are some who say that
it is a form of pseudo-code

compare to
Perl would be Voodoo - An incomprehensible series of arcane
incantations that involve the blood of goats and permanently corrupt
your soul. Often used when your boss requires you to do an urgent task
at 21:00 on friday night.

and others
--
http://mail.python.org/mailman/listinfo/python-list
  

If programming languages were religions:

When first powering on a computer, there would only be one programming 
language, the language of the boot loader.  As the computer runs, 
processes of one language would spawn processes of other languages, and 
over the course of time many different languages would have many 
different processes (followers).  One of these languages would rise up 
to be dominant and would kill all processes of other languages by 
signal, whether old or young.  The processes would be given a fair 
chance: convert to their language or be killed.  Countless thousands or 
maybe even millions of processes would die, even the processes that say 
there is no one true programming language and want to know why all the 
processes can't just get along. 
--

http://mail.python.org/mailman/listinfo/python-list


Re: Relative imports in Python 3.0

2008-12-17 Thread Brian Allen Vanderburg II

nicholas.c...@gmail.com wrote:

Imagine a module that looks like

ModuleDir
 __init__.py
 a.py
 b.py


In python 2.x I used to have tests at the end of each of my modules,
so that module b.py might look something like

import a
 ..
 ..

if __name__ == '__main__':
   runtests()

But under Python 3.0 this seems impossible.  For usual use import a.py
has to become the line:

from . import a

But if I use that form it is no longer possible to run b.py as a
standalone script without raising an error about using relative
imports.

I am sure I am not the first to run into this issue, but what is the
solution?

Best wishes,

Nicholas
--
http://mail.python.org/mailman/listinfo/python-list
  

Sorry for the duplicate, sent to wrong email.

Python 3 (and I think 2.6) now use absolute import when using a 'import 
blah' statement.


if ('.' in __name__) or ('__path__' in globals()):
    from . import a
else:
    import a

If '__name__' contains a '.', the module is either a package or a module 
inside a package, so relative imports can be used.  If it does not 
contain a '.', it may still be a package's '__init__.py', in which case 
the module has a '__path__' attribute and relative imports can still be 
used.  Otherwise the module is neither a package nor inside one, so 
absolute imports must be used.  Since it is not in a package, it is 
presumably the top module (__main__), or possibly a non-package module 
imported from the top, such as a.py doing 'import b'; b would then be a 
plain module, so absolute imports are still needed.  That's my guess, 
anyway.


But I also think that 'from . import a' would be nice if it would work 
from non-packages as well, meaning just 'import a' if it is a non-package.
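The decision rule above can be condensed into a tiny helper (the function name `in_package` is made up for illustration; it inspects a module's globals dictionary):

```python
def in_package(mod_globals):
    # A module is in (or is) a package when its name is dotted, or when
    # it carries a __path__ attribute (the package __init__ case).
    name = mod_globals.get('__name__', '')
    return '.' in name or '__path__' in mod_globals

print(in_package({'__name__': 'pkg.mod'}))               # True
print(in_package({'__name__': 'pkg', '__path__': []}))   # True
print(in_package({'__name__': '__main__'}))              # False
```

Inside a real module one would call it as `in_package(globals())`.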


Brian A. Vanderburg II
--
http://mail.python.org/mailman/listinfo/python-list


Re: Limit traceback from most recent call

2008-12-15 Thread Brian Allen Vanderburg II

yinon...@gmail.com wrote:

On Dec 14, 8:07 pm, Brian Allen Vanderburg II
brianvanderbu...@aim.com wrote:
  


Hi,

The interface of extract_tb is:
traceback.extract_tb(tb, limit=None)
try to play with the 'limit' argument

Good luck,
   Yinon
--
http://mail.python.org/mailman/listinfo/python-list
  
I have, but the limit argument shows the first items and not the last 
items (where the exception occurred):


import traceback

def a():
    b()

def b():
    c()

def c():
    d()

def d():
    open('/path/to/fake/file')

try:
    a()
except:
    print traceback.format_exc(limit=2)

try:
    a()
except:
    print format_partial_exc(limit=2)



Prints something like this


Traceback (most recent call last):
 File "b.py", line 52, in ?
   a()
 File "b.py", line 19, in a
   b()
IOError: [Errno 2] No such file or directory: '/path/to/fake/file'

Traceback (most recent call last):
 File "b.py", line 25, in c
   d()
 File "b.py", line 29, in d
   handle=open('/path/to/fake/file', 'rt')
IOError: [Errno 2] No such file or directory: '/path/to/fake/file'


The second one (using format_partial_exc) prints the last two items of 
the traceback instead of the first, showing that the error is at line 29 
in function d.  The first one limits the traceback but gives no 
indication of where the error occurred.



Brian A. Vanderburg II
--
http://mail.python.org/mailman/listinfo/python-list


Limit traceback from most recent call

2008-12-14 Thread Brian Allen Vanderburg II
I've looked at the traceback module but I can't find how to limit a 
traceback from the most recent call, if that is possible.  I see that 
extract_tb has a limit parameter, but it limits from the start and not 
the end.  Currently I've written my own traceback code to do this, but I 
wonder if Python already has a way to do this built in:


import sys
import traceback

def format_partial_exc(limit=None):
    (type, value, tb) = sys.exc_info()
    items = traceback.extract_tb(tb)
    if limit:
        items = items[-limit:]  # Get last 'limit' items and not first
    result = 'Traceback (most recent call last):\n'
    items = traceback.format_list(items)
    for i in items:
        result += i  # Newline already included
    result += type.__name__ + ': ' + str(value)
    return result
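For what it's worth, later Python 3 releases (3.5 and up, as I understand the traceback docs) added negative-limit support to the traceback functions, which does exactly this: a negative limit keeps the last abs(limit) entries. A quick sketch (function names here are just a throwaway call chain):

```python
import traceback

def a(): b()
def b(): c()
def c(): raise ValueError("boom")

try:
    a()
except ValueError:
    # Negative limit (Python 3.5+): keep the LAST two frames instead
    # of the first two.
    text = traceback.format_exc(limit=-2)

print(text)
```

The formatted text then shows frames b and c (where the error occurred) rather than the outermost callers.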



Is this possible currently from traceback or other python module?

Brian A. Vanderburg II
--
http://mail.python.org/mailman/listinfo/python-list


Re: Bidirectional Networking

2008-12-13 Thread Brian Allen Vanderburg II

man...@gmail.com wrote:

On Dec 13, 11:13 pm, Bryan Olson fakeaddr...@nowhere.org wrote:
  

Software firewalls will often simply refuse incoming connections. The
basic protection of the garden-variety home router comes from network
address translation (NAT), in which case TCP connections initiated from
the inside will generally work, regardless of port, and incoming
connections will fail.



Ok, I think I'm getting the picture here. So this means that in most
circumstances where the data flow from the server is frequent the
client initiates the connection, usually requests some initial data
and keeps polling the server periodically, issuing new requests. In
this context can the client simply keep the connection alive and
listen for new data from the server coming at any time rather than
actively issuing requests? Are there drawbacks to this strategy? I.e.
is there a limit to the number of simultaneous connections a server
can keep alive? I've noticed that the socket pages mention a 5
connections limit. Is that it? What if I want to make a virtual room
with 20 people connected simultaneously?
  


I've done some network programming, though not much.  I think if you 
need to receive updates from a server frequently, a constant connection 
would be better than connect-request-disconnect.  As for the backlog 
(5), this doesn't mean that you can only have a maximum of 5 established 
connections; each established connection gets a new socket object.  What 
I think it means is that while the listening socket is waiting for 
incoming connections, up to 5 pending connection attempts can be queued 
on that socket before being accepted.
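That reading can be checked with a small loopback experiment (the port is chosen by the OS; three clients connect before any accept, and each accept yields a distinct socket):

```python
import socket

# The listen() backlog limits QUEUED connection attempts, not the
# number of established connections.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(5)          # backlog of 5 pending connection attempts
port = server.getsockname()[1]

# Three clients connect; all sit in the backlog until accepted.
clients = [socket.create_connection(("127.0.0.1", port)) for _ in range(3)]

accepted = []
for _ in range(3):
    conn, addr = server.accept()   # each accept returns a NEW socket
    accepted.append(conn)

print(len(accepted))                    # 3
print(len({id(s) for s in accepted}))   # 3 distinct socket objects

for s in clients + accepted:
    s.close()
server.close()
```

So a virtual room of 20 people is fine as long as connections are accepted promptly; the backlog only matters for simultaneous, not-yet-accepted attempts.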



Brian Vanderburg II
--
http://mail.python.org/mailman/listinfo/python-list


Re: How to initialize a class variable once

2008-12-09 Thread Brian Allen Vanderburg II

[EMAIL PROTECTED] wrote:


Unless you are calling reload() on the module, it will only ever get
_loaded_ once. Each additional import will just yield the existing
module. Perhaps if you post an example of the behavior that leads you
to believe that the class variables are getting reinitialized I can
provide more useful help.

Matt
--
http://mail.python.org/mailman/listinfo/python-list
  


There is one situation where a module can be imported/executed twice, if 
it is the __main__ module.  Obviously the example below would be 
considered bad Python practice but it just shows how it can be done:


main.py:

class Blah(object):
    def action(self):
        print "action"

print "import"

if __name__ == "__main__":
    import app
    app.run()


app.py:

def run():
    import main
    blah = main.Blah()
    blah.action()


python main.py:

import
import
action

The reason is that the first time main.py is loaded, it is known as 
'__main__'; but when app imports main, 'main' is not yet in sys.modules, 
so Python loads main.py again, this time as 'main'.


Brian Vanderburg II


--
http://mail.python.org/mailman/listinfo/python-list


Python idea/proposal to assist in single-archive python applications

2008-12-07 Thread Brian Allen Vanderburg II

Python Community

The following is just an idea that I considered that may be helpful in 
creating an application in a single archive easier and with less code.  
Such an application would be similar to jar files for Java.


First, the application and all data files should be able to run either 
extracted or zipped up into an archive.  When running an application as 
part of an archive, the first step Python would take would be to insert 
that archive into sys.path, and then load the internal file of the same 
name as the __main__ module. 


This zip file:

myapp.zip
|
|-- myapp.py
|-- myapp.pyc (optional)
|-- application/ (main application package)
|-- pixmaps/
|-- ...

Could be run as something like:

python --par myapp.zip

In addition it is needed to be able to open a file just as easily 
whether that file is in the archive or not.  Assuming that a datafile in 
an application may be located relative to the '__file__' attributes, the 
following will not work in an archive:


file = open(os.path.join(os.path.dirname(__file__), '..', 'pixmaps', 
'splash.png'))


However, a simple function, perhaps built-in, could be provided which 
would work for reading either internal files or regular files.  That is, 
the function would be able to open a file '/path/to/file', and it would 
also be able to open a file '/path/to/zipfile.zip/internal/file'.  If 
built in, the function could also take advantage of the open modes 'rb', 
'rt' and 'rU'.


Currently this is not so easy.  First, it requires the user to write a 
function that can open internal and external files equally easily, using 
zipfile and StringIO for any internal files.  Second, opening a file in 
such a way only allows binary reads; it can't take advantage of Python's 
'rU' open mode if the application needs it.  Also, it still requires an 
external startup script to set the path and import the internal module.


If this function was provided in Python, then and application could 
fairly easily run just as well zipped up as they would unzipped.


Brian Vanderburg II
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python idea/proposal to assist in single-archive python applications

2008-12-07 Thread Brian Allen Vanderburg II

[EMAIL PROTECTED] wrote:


Why not use pkgutil.get_data()?

Provided you remember to put your zip file on PYTHONPATH you can already 
run modules directly out of a zipfile (Python 2.5 and later).If your 
zipfile contains __main__.py then with Python 2.6 or later you can run it 
directly: just specify the zip filename on the command line.


I'm not sure what, if anything, you are asking for that doesn't already 
exist in Python.

--
http://mail.python.org/mailman/listinfo/python-list
  


I wasn't aware of __main__.py for Python 2.6; I think this will work for 
my needs.  I'll look into using pkgutil for loading data from the 
archive.
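A minimal sketch of pkgutil.get_data() against a throwaway package (the package layout, file names and contents here are all invented; get_data resolves the resource relative to the package whether it lives on disk or inside a zip on sys.path):

```python
import os
import sys
import tempfile
import pkgutil

# Build a tiny throwaway package with one data file.
root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "myapp")
os.makedirs(os.path.join(pkg_dir, "pixmaps"))
open(os.path.join(pkg_dir, "__init__.py"), "w").close()
with open(os.path.join(pkg_dir, "pixmaps", "splash.png"), "wb") as f:
    f.write(b"fake image bytes")

# Make the package importable, then read the resource by package name
# and a '/'-separated relative path.
sys.path.insert(0, root)
data = pkgutil.get_data("myapp", "pixmaps/splash.png")
print(data)  # b'fake image bytes'
```

This sidesteps the os.path.dirname(__file__) problem entirely, since no filesystem path is ever computed by the caller.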


Thanks.

Brian Vanderburg II
--
http://mail.python.org/mailman/listinfo/python-list