Re: utf-8 and ctypes

2010-09-29 Thread Brendan Miller
2010/9/29 Lawrence D'Oliveiro l...@geek-central.gen.new_zealand:
 In message mailman.1132.1285714474.29448.python-l...@python.org, Brendan
 Miller wrote:

 It seems that characters not in the ascii subset of UTF-8 are
 discarded by c_char_p during the conversion ...

 Not a chance.

 ... or at least they don't print out when I go to print the string.

 So it seems there’s a problem on the printing side. What happens when you
 construct a UTF-8-encoded string directly in Python and try printing it the
 same way?

Doing this seems to confirm something is broken in ctypes w.r.t. UTF-8...

if I enter:
str = "日本語のテスト"

Then:
print str
日本語のテスト

However, when I create a string buffer, pass it into my c++ code, and
write the same UTF-8 string into it, python seems to discard pretty
much all the text. The same code works for pure ascii strings.

Python code:
_std_string_size = _lib_mbxclient.std_string_size
_std_string_size.restype = c_long
_std_string_size.argtypes = [c_void_p]

_std_string_copy = _lib_mbxclient.std_string_copy
_std_string_copy.restype = None
_std_string_copy.argtypes = [c_void_p, POINTER(c_char)]

# This function works for ascii, but breaks on strings with UTF-8!
def std_string_to_string(str_ptr):
    buf = create_string_buffer(_std_string_size(str_ptr))
    _std_string_copy(str_ptr, buf)
    return buf.raw

C++ code:

extern "C"
long std_string_size(string* str)
{
    return str->size();
}

extern "C"
void std_string_copy(string* str, char* buf)
{
    std::copy(str->begin(), str->end(), buf);
}
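
A quick way to take the terminal out of the equation (a hypothetical
debugging sketch, not from the thread): compare the raw bytes against the
expected UTF-8 encoding, using repr so the console encoding can't hide a
mismatch.

# -*- coding: utf-8 -*-
expected = u"日本語のテスト".encode("utf-8")
result = std_string_to_string(str_ptr)   # assumes str_ptr holds the same text
print repr(result)                       # repr escapes the bytes, so nothing
print repr(expected)                     # is silently dropped by the terminal
assert result == expected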
-- 
http://mail.python.org/mailman/listinfo/python-list


utf-8 and ctypes

2010-09-28 Thread Brendan Miller
I'm using python 2.5.

Currently I have some python bindings written in ctypes. On the C
side, my strings are in utf-8. On the python side I use
ctypes.c_char_p to convert my strings to python strings. However, this
seems to break for non-ascii characters.

It seems that characters not in the ascii subset of UTF-8 are
discarded by c_char_p during the conversion, or at least they don't
print out when I go to print the string.

Does python not support utf-8 strings? Is there some other way I
should be doing the conversion?
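
For what it's worth, c_char_p in Python 2 deals in raw bytes, so a quick
sanity check outside the library (my own sketch, not from the thread) is:

from ctypes import c_char_p

data = '\xe6\x97\xa5\xe6\x9c\xac'   # UTF-8 bytes for two kanji
print repr(c_char_p(data).value)    # the bytes come back intact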

Thanks,
Brendan
-- 
http://mail.python.org/mailman/listinfo/python-list


starting repl programmatically

2010-05-20 Thread Brendan Miller
I have a python script that sets up some environmental stuff. I would
then like to be able to change back to interactive mode and use that
environment. What's the best way to do that?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: starting repl programmatically

2010-05-20 Thread Brendan Miller
python -i myscript.py

almost does what I want. The only problem is if I exit with exit(0) it
does *not* enter interactive mode. I have to run off the end of the
script as near as I can tell. Is there another way to exit without
breaking python -i?
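
An alternative that sidesteps -i entirely: the standard code module can
start a REPL from inside the script, so exiting early is just a matter of
not calling it (a sketch; the namespace contents are made up):

import code

env = {'greeting': 'hello'}   # whatever the setup script prepared
code.interact(banner='environment ready', local=env)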

On Thu, May 20, 2010 at 4:57 PM, Brendan Miller catph...@catphive.net wrote:
 I have a python script that sets up some environmental stuff. I would
 then like to be able to change back to interactive mode and use that
 environment. What's the best way to do that?

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ctypes: delay conversion from c_char_p to string

2010-04-22 Thread Brendan Miller
On Thu, Apr 22, 2010 at 7:49 AM, Zvezdan Petkovic zvez...@zope.com wrote:

 On Apr 21, 2010, at 6:29 PM, Brendan Miller wrote:

 Here's the method I was using. Note that tmp_char_ptr is of type
 c_void_p. This should avoid the memory leak, assuming I am
 interpreting the semantics of the cast correctly. Is there a cleaner
 way to do this with ctypes?

    def get_prop_string(self, prop_name):
        # Have to work with c_void_p to prevent ctypes from copying to a string
        # without giving me an opportunity to destroy the original string.
        tmp_char_ptr = _get_prop_string(self._props, prop_name)
        prop_val = cast(tmp_char_ptr, c_char_p).value
        _string_destroy(tmp_char_ptr)
        return prop_val

 Is this what you want?


 import ctypes.util


 libc = ctypes.CDLL(ctypes.util.find_library('libc'))

 libc.free.argtypes = [ctypes.c_void_p]
 libc.free.restype = None
 libc.strdup.argtype = [ctypes.c_char_p]
 libc.strdup.restype = ctypes.POINTER(ctypes.c_char)


 def strdup_and_free(s):
    s_ptr = libc.strdup(s)
    print s_ptr.contents
    i = 0
    while s_ptr[i] != '\0':
        print s_ptr[i],
        i += 1
    print
    libc.free(s_ptr)

Ah, so c_char_p's are converted to python strings by ctypes, but
POINTER(c_char) is *not*.
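
Applying that to the earlier get_prop_string, declaring the restype as
POINTER(c_char) suppresses the automatic copy, which avoids the c_void_p
dance on the way in (a sketch of the same function under that approach):

from ctypes import POINTER, c_char, c_char_p, cast

_get_prop_string.restype = POINTER(c_char)   # no copy-to-string on return

def get_prop_string(props, prop_name):
    ptr = _get_prop_string(props, prop_name)
    val = cast(ptr, c_char_p).value   # the copy now happens under our control
    _string_destroy(ptr)              # safe to free the original
    return val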

Thanks
-- 
http://mail.python.org/mailman/listinfo/python-list


ctypes errcheck question

2010-04-21 Thread Brendan Miller
According to the ctypes docs: http://docs.python.org/library/ctypes.html

An errcheck function should return the args tuple when used with "out"
parameters (section 15.15.2.4, "Function prototypes"). However, in other
cases it says to return the result, or whatever result you want
returned from the function.

I have a single errcheck function that checks result codes and throws
an exception if the result indicates an error. I'd like to reuse it
with a number of different functions, some of which are specified with
out parameters, and some of which are specified in the normal method:

func = my_lib.func
func.restype = c_long
func.argtypes = [c_int, etc]

So, my question is: can I have my errcheck function return the args
tuple in both cases? The documentation seems kind of ambiguous about
how it will behave if a function loaded without output parameters
returns the arguments tuple from errcheck.
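
For what it's worth, a sketch of a shared checker under that reading,
assuming the libraries signal failure with a negative result code:

def check_result(result, func, args):
    # assumption: negative return values mean failure
    if isinstance(result, (int, long)) and result < 0:
        raise RuntimeError('%s failed with code %d'
                           % (getattr(func, '__name__', '?'), result))
    return args   # hand the args tuple back for ctypes' normal processing

func.errcheck = check_result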

Thanks,
Brendan
-- 
http://mail.python.org/mailman/listinfo/python-list


ctypes: delay conversion from c_char_p to string

2010-04-21 Thread Brendan Miller
I have a function exposed through ctypes that returns a c_char_p.
Since I need to deallocate that c_char_p, it's inconvenient that
ctypes copies the c_char_p into a string instead of giving me the raw
pointer. I believe this will cause a memory leak, unless ctypes is
smart enough to free the string itself after the copy... which I
doubt.

Is there some way to tell ctypes to return an actual c_char_p, or is
my best bet to return a c_void_p and cast to c_char_p when I'm reading
to convert to a string?

Thanks
Brendan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ctypes: delay conversion from c_char_p to string

2010-04-21 Thread Brendan Miller
Here's the method I was using. Note that tmp_char_ptr is of type
c_void_p. This should avoid the memory leak, assuming I am
interpreting the semantics of the cast correctly. Is there a cleaner
way to do this with ctypes?

def get_prop_string(self, prop_name):
# Have to work with c_void_p to prevent ctypes from copying to a string
# without giving me an opportunity to destroy the original string.
tmp_char_ptr = _get_prop_string(self._props, prop_name)
prop_val = cast(tmp_char_ptr, c_char_p).value
_string_destroy(tmp_char_ptr)
return prop_val

On Wed, Apr 21, 2010 at 3:15 PM, Brendan Miller catph...@catphive.net wrote:
 I have a function exposed through ctypes that returns a c_char_p.
 Since I need to deallocate that c_char_p, it's inconvenient that
 ctypes copies the c_char_p into a string instead of giving me the raw
 pointer. I believe this will cause a memory leak, unless ctypes is
 smart enough to free the string itself after the copy... which I
 doubt.

 Is there some way to tell ctypes to return an actual c_char_p, or is
 my best bet to return a c_void_p and cast to c_char_p when I'm reading
 to convert to a string?

 Thanks
 Brendan

-- 
http://mail.python.org/mailman/listinfo/python-list


gnu readline licensing?

2010-04-20 Thread Brendan Miller
Python provides a GNU readline interface... since readline is a GPLv3
library, doesn't that make python subject to the GPL? I'm confused
because I thought python had a more BSD style license.

Also, I presume programs written with the readline interface would
still be subject to GPL... might want to put a warning about that in
the python library docs.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: gnu readline licensing?

2010-04-20 Thread Brendan Miller
On Tue, Apr 20, 2010 at 11:38 AM, Robert Kern robert.k...@gmail.com wrote:
 On 4/20/10 1:09 PM, Brendan Miller wrote:

 Python provides a GNU readline interface... since readline is a GPLv3
 library, doesn't that make python subject to the GPL? I'm confused
 because I thought python had a more BSD style license.

 The PSF License is more BSD-styled, yes. The readline module can also be
 built against the API-compatible, BSD-licensed libedit library. Python's
 source distribution (even the readline module source) does not have to be
 subject to the GPL, though it should be (and is) GPL-compatible.

 Also, I presume programs written with the readline interface would
 still be subject to GPL... might want to put a warning about that in
 the python library docs.

 *When* someone builds a binary of the Python readline module against the GNU
 readline library, then that binary module is subject to the terms of the
 GPL. Any programs that distribute with and use that binary are also subject
 to the terms of the GPL (though it can have a non-GPL, GPL-compatible
 license like the PSF License). This only applies when they are combined with
 the GNU readline library, not before. The program must have a GPL-compatible
 license in order to be distributed that way. It can also be distributed
 independently of GNU readline under any license.


Hmm... So if I ship python to a customer with proprietary software
that runs on top of it, then I need to be careful to disable
libreadline? Is there a configure flag for this or something?

Since libreadline is the default for Linux systems, and Python's
license advertises itself as not being copyleft, and being embeddable
and shippable... It would be nice if this were made clear. Maybe a
note here about libreadline: http://python.org/psf/license/

It seems to me that the whole of the python distribution would be GPL
after being built with libreadline, so this would be an easy trap to
fall into if you didn't realize that python used libreadline.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ctypes question

2010-04-15 Thread Brendan Miller
On Wed, Apr 14, 2010 at 12:12 PM, Mark Dickinson dicki...@gmail.com wrote:
 On Apr 14, 7:09 pm, Brendan Miller catph...@catphive.net wrote:
 I'm using python 2.5.2.

 I have a ctypes function with argtypes like this:

 _create_folder.argyptes = [c_void_p, c_int]

 Is that line a cut-and-paste?  If so, try 'argtypes' instead of
 'argyptes'.  :)

Woops! Well, nice to know that ctypes works as I expect.
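
With the attribute spelled correctly, ctypes does reject the string; a
quick sketch of what the fixed version raises:

from ctypes import ArgumentError, c_void_p, c_int

_create_folder.argtypes = [c_void_p, c_int]   # spelled correctly this time

try:
    _create_folder(some_pointer, "asdf")
except ArgumentError, e:
    print "rejected as expected:", e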
-- 
http://mail.python.org/mailman/listinfo/python-list


ctypes question

2010-04-14 Thread Brendan Miller
I'm using python 2.5.2.

I have a ctypes function with argtypes like this:

_create_folder.argyptes = [c_void_p, c_int]

The issue I am having is that I can call it like this

_create_folder(some_pointer, "asdf")

and it won't raise a TypeError. Why would it accept a string for an
integer argument?

I didn't see this behavior documented when I read through
http://docs.python.org/library/ctypes.html, maybe I'm just missing
it... if that's the case a reference would be appreciated.

Brendan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: order that destructors get called?

2010-04-08 Thread Brendan Miller
Thanks Steven and Gabriel. Those are very informative responses.

In my case my resource isn't bound to a lexical scope, but the:

def __del__(self,
  delete_my_resource=delete_my_resource):

pattern works quite well. I've made sure to prevent my class from
being part of a circular reference, so that the __del__ shouldn't be
an issue.

Brendan
-- 
http://mail.python.org/mailman/listinfo/python-list


order that destructors get called?

2010-04-07 Thread Brendan Miller
I'm used to C++ where destructors get called in reverse order of
construction, like this:

{
Foo foo;
Bar bar;

// calls Bar::~Bar()
// calls Foo::~Foo()
}

I'm writing a ctypes wrapper for some native code, and I need to manage some
memory. I'm wrapping the memory in a python class that deletes the underlying
 memory when the python class's reference count hits zero.

When doing this, I noticed some odd behaviour. I had code like this:

def delete_my_resource(res):
    pass  # deletes res

class MyClass(object):
    def __del__(self):
        delete_my_resource(self.res)

o = MyClass()

What happens is that as the program shuts down, delete_my_resource is released
*before* o is released. So when __del__ gets called, delete_my_resource is now
None.

Obviously, MyClass needs to hang onto a reference to delete_my_resource.

What I'm wondering is if there's any documented order that reference counts
get decremented when a module is released or when a program terminates.

What I would expect is reverse order of definition but obviously that's not
the case.
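
For later readers, the workaround settled on in the follow-up, spelled out
as a sketch:

def delete_my_resource(res):
    pass   # release the underlying native resource here

class MyClass(object):
    # The default argument captures delete_my_resource when the class is
    # defined, so __del__ can still reach it during interpreter shutdown.
    def __del__(self, delete_my_resource=delete_my_resource):
        delete_my_resource(self.res)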

Brendan
-- 
http://mail.python.org/mailman/listinfo/python-list


rstring vs Rstring

2010-01-16 Thread Brendan Miller
Is there any difference whatsoever between a raw string beginning with
the capital R or one with the lower case r, e.g. r"string" vs
R"string"?
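
For the record, the two prefixes are interchangeable:

>>> r"a\nb" == R"a\nb"
True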
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: iterators and views of lists

2009-12-18 Thread Brendan Miller
On Fri, Dec 18, 2009 at 10:39 AM, Carl Banks pavlovevide...@gmail.com wrote:
 On Dec 17, 10:00 pm, Brendan Miller catph...@catphive.net wrote:
 On Thu, Dec 17, 2009 at 6:44 PM, Steven D'Aprano

 st...@remove-this-cybersource.com.au wrote:
  On Thu, 17 Dec 2009 12:07:59 -0800, Brendan Miller wrote:

  I was thinking it would be cool to make python more usable in
  programming competitions by giving it its own port of the STL's
  algorithm library, which needs something along the lines of C++'s more
  powerful iterators.

  For the benefit of those of us who aren't C++ programmers, what do its
  iterators do that Python's don't?

 Python iterators basically only have one operation:

 next(), which returns the next element or throws StopIteration.

 In C++ terminology this is an "Input iterator". It is good for writing
 for each loops or map reduce operations.

 Hmm.  I guess the main thing that's bothering me about this whole
 thread is that the true power of Python iterators is being overlooked
 here, and I don't think you're being fair to call Python iterators
 weak (as you did in another thread) or say that they are only good
 for for-else type loops.

 The fact is, Python iterators have a whole range of powers that C++
 iterators do not.  C++ iterators, at least the ones that come in STL,
 are limited to iterating over pre-existing data structures.  Python
 iterators don't have this limation--the data being iterated over can
 be virtual data like an infinite list, or data generated on the fly.
 This can be very powerful.

 Here's a cool slideshow on what can be done with iterators in Python
 (generators specifically):

 http://www.dabeaz.com/generators/

 It is true that Python iterators can't be used to mutate the
 underlying structure--if there is actual underlying data structure--
 but it doesn't mean they are weak or limited.  Python and C++
 iterators are similar in their most basic usage, but grow more
 powerful in different directions.


When I said they are weak I meant it in the sense that the algorithms
writeable against an InputIterator interface (which is what python's
iterator protocol provides) are a proper subset of the algorithms that
can be written against a RandomAccessIterator interface. The class of
algorithms expressible against a python iterator is indeed limited to
those that can be expressed with a for each loop or map/reduce
operation.

Yes, python iterators can indeed iterate over infinite lists. All that
you need to do is have the next() operation never throw. The same
thing can be done in c++, or any other language that has iterators.

Generators provide a convenient way to write input iterators; however,
the same thing can be done in any language. It just requires more
boilerplate code to keep track of the iteration state.

Of course, that's not to say generators aren't important. They make it
that much easier to write your own iterators, so in a practical sense,
people are more likely to write their own iterators in python than in
C++.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: iterators and views of lists

2009-12-18 Thread Brendan Miller
On Fri, Dec 18, 2009 at 2:47 PM, Bearophile bearophileh...@lycos.com wrote:
 Brendan Miller:
 I agree though, it doesn't matter to everyone and anyone. The reason I
 was interested was because i was trying to solve some specific
 problems in an elegant way. I was thinking it would be cool to make
 python more usable in programming competitions by giving it its own
 port of the STL's algorithm library, which needs something along the
 lines of C++'s more powerful iterators.

 It seems you have missed my post, so here it is, more explicitly:

 http://www.boostcon.com/site-media/var/sphene/sphwiki/attachment/2009/05/08/iterators-must-go.pdf

 Bye,
 bearophie
 --
 http://mail.python.org/mailman/listinfo/python-list


Andrei is arguing for replacing iterators with ranges, which are
equivalently powerful to C++ iterators but easier to use. Actually,
what I want in python is something like this.

If you look at Anh Hai Trinh's posts, he implemented something
basically like a python version of Andrei's ranges, which he called
listagents. That pdf is interesting though because he's thought
through what an interface for bidrectional ranges should look like
which I had not yet.

However, you should note that Andrei's ranges allow mutation of the
original datastructure they are a range over. My impression from your
earlier post was that you disagreed with that idea of mutating
algorithms and wanted something more functional, whereas I, and Andrei
in that pdf, are more concerned with imperative programming and in
place algorithms.

I don't want to get into a big discussion about FP vs imperative
programming, as that is simply too large a topic and I have had that
discussion many times before. I'm more of a turing machine than a
lambda calculus guy myself, but if other people make other choices
that's fine with me.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: iterators and views of lists

2009-12-17 Thread Brendan Miller
On Thu, Dec 17, 2009 at 8:41 AM, Anh Hai Trinh anh.hai.tr...@gmail.com wrote:
 I have a couple of thoughts:
 1. Since [:] by convention already creates a copy, it might violate
 people's expectations if that syntax were used.

 Indeed, listagent returns self on __getitem__[:]. What I meant was
 this:

 >>> x = [0, 1, 2, 3, 4, 5, 6, 7]
 >>> a = listagent(x)[::2]
 >>> a[:] = listagent(x)[::-2]

 And we get x = [7, 1, 5, 3, 3, 5, 1, 7], the copying happens in-place,
 of course.


 2. I'd give the listagent the mutable sequence interface

 Done!  I put the code in a repository here for those who might be
 interested:
 http://github.com/aht/listagent.py.

 In retrospect, the Python gurus here was right though. Copy, modify
 then replace is good enough, if not better since items are just
 pointers instead of values. For reversing, you need to translate all
 the indices (which cost exactly one addition and one multiplication
 per index). Is that cheaper than copying all the pointers to a new
 list?  For sorting, you definitely need to construct a lookup table
 since the sort algorithm needs to look over the indices multiple
 times, which means you are using two pointer indirections per index.
 Is that cheaper than just copying all the pointers to a new list? Even
 if there is any benefit at all, you'll need to implement listagent in
 C to squeeze it out.

 However, using listagents is faster if you just want a few items out
 of the slice. And it's cute.

Well, it doesn't really need to be any slower than a normal list. You
only need to translate indexes and do extra additions because it's in python.
However, if listagent were written in C, you would just have a pointer
into the contents of the original list, and the length, which is all
that list itself has.

I don't actually expect you to write that, I'm just pointing it out.

As for copying pointers not taking much time... that depends on how
long the list is. if you are working with small sets of data, you can
do almost anything and it will be efficient. However, if you have
megabytes or gigabytes of data (say you are working with images or
video), then the difference between an O(1) or an O(n) operation is a
big deal.

I agree though, it doesn't matter to everyone and anyone. The reason I
was interested was because i was trying to solve some specific
problems in an elegant way. I was thinking it would be cool to make
python more usable in programming competitions by giving it its own
port of the STL's algorithm library, which needs something along the
lines of C++'s more powerful iterators.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: iterators and views of lists

2009-12-17 Thread Brendan Miller
On Thu, Dec 17, 2009 at 6:44 PM, Steven D'Aprano
st...@remove-this-cybersource.com.au wrote:
 On Thu, 17 Dec 2009 12:07:59 -0800, Brendan Miller wrote:

 I was thinking it would be cool to make python more usable in
 programming competitions by giving it its own port of the STL's
 algorithm library, which needs something along the lines of C++'s more
 powerful iterators.

 For the benefit of those of us who aren't C++ programmers, what do its
 iterators do that Python's don't?

Python iterators basically only have one operation:

next(), which returns the next element or throws StopIteration.

In C++ terminology this is an "Input iterator". It is good for writing
for each loops or map reduce operations.

An input iterator can't mutate the data it points to.

C++ also has progressively stronger iterators:
http://www.sgi.com/tech/stl/Iterators.html

InputIterator - read only, one direction, single pass
ForwardIterator - read/write, one direction, multi pass
BidirectionalIterator - read/write, can move in either direction
RandomAccessIterator - read/write, can move in either direction by an
arbitrary amount in constant time (as powerful as a pointer)

Each only adds extra operations over the one before. So a
RandomAccessIterator can be used anywhere a InputIterator can, but not
vice versa.

Also, this is a duck typing relationship, not a formal class
inheritance. Anything that quacks like a RandomAccessIterator is a
RandomAccessIterator, but there is no actual RandomAccessIterator
class.

So, for instance, the STL sort function takes a pair of random access
iterators delimiting a range, and can sort any datastructure that can
provide that powerful of an iterator (arrays, vectors, deques).

http://www.sgi.com/tech/stl/sort.html

MyCollection stuff;
// put some stuff in stuff

sort(stuff.begin(), stuff.end());

Where begin() and end() by convention return iterators pointing to the
beginning and end of the sequence.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: iterators and views of lists

2009-12-16 Thread Brendan Miller
On Wed, Dec 16, 2009 at 4:16 AM, Paul Rudin paul.nos...@rudin.co.uk wrote:
 Steven D'Aprano st...@remove-this-cybersource.com.au writes:


 I'm sympathetic to your request for list views. I've often wanted some
 way to cleanly and neatly do this:

 for item in seq[1:]:
     process(item)

 without making an unnecessary copy of almost all of seq.


 I don't know how it's implemented - but presumably itertools.islice
 could provide what you're asking for?
 --
 http://mail.python.org/mailman/listinfo/python-list


itertools.islice returns an iterator. My main point is that in python,
iterators are weak and can't be used to write many types of
algorithms. They only go in one direction and they can't write to the
collection.

Another poster mentioned a stream library that is also iterator based.
Basically I'm saying that anything with iterators is pretty limited
because of the interface they present. See section 6.5 here for the
iterator protocol:

http://docs.python.org/library/stdtypes.html

Basically this is only useful for writing for loops or map/reduce
operations. However, python's primary datastructures, the dynamic
array (list) and hashtable (dictionary) are more powerful than that.
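
Concretely (a sketch): islice avoids the copy, but there is no way to
write back through the iterator:

from itertools import islice

x = [1, 2, 3, 4]
for item in islice(x, 1, None):   # no copy made, but read-only, forward-only
    item = item * 2               # rebinds the local name; x is untouched
print x                           # still [1, 2, 3, 4]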
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: iterators and views of lists

2009-12-16 Thread Brendan Miller
On Wed, Dec 16, 2009 at 12:38 PM, Anh Hai Trinh anh.hai.tr...@gmail.com wrote:
 On Dec 16, 2:48 pm, Brendan Miller catph...@catphive.net wrote:

 No, that's what I'm getting at... Most of the existing mutating
 algorithms in python (sort, reverse) operate over entire collections,
 not partial collections delimited by indexes... which would be really
 awkward anyway.

 Ok it can be done! The code is here: http://gist.github.com/258134.

 >>> from listagent import listagent
 >>> x = [22, 7, 2, -5, 8, 4]
 >>> listagent(x)[1:].sort()
 >>> x
 [22, -5, 2, 4, 7, 8]
 >>> listagent(x)[::2].reverse()
 >>> x
 [7, -5, 2, 4, 22, 8]

 Basically the agent refers to the original list only by address
 translation, and indeed made no copy of anything whatever! I
 implemented Shell sort but I suppose others are possible.

 The implementation is incomplete, for now you cannot do slice
 assignment, i.e.

 >>> a = listagent(x)[::-2]
 >>> a[1:] = [4, 2]

 but it should be easy to implement, and I'm too sleepy.

Very cool, that's more or less what I was thinking of.

I have a couple of thoughts:
1. Since [:] by convention already creates a copy, it might violate
people's expectations if that syntax were used.

2. I'd give the listagent the mutable sequence interface, and then
make new algorithms (except sort and reverse, which are required by the
mutable sequence) as functions that just expect an object that is
mutable sequence compatible. That way the algorithm is decoupled from the
datastructure, and the same code can handle both lists and listagents,
or even potentially other datastructures that can expose the same
interface.

For instance, I was coding up some stl algorithms in python including
next_permutation. So, if you wanted to make a generic next_permutation
in python, you wouldn't want to make it a member of listagent, because
then regular lists couldn't use it conveniently. However, if it were
just a function that accepted anything that implemented mutable
sequence, it could operate both over lists and over listagent without
any conversion.
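
A sketch of what such a generic function might look like, assuming nothing
beyond __len__, __getitem__ and __setitem__:

def next_permutation(seq):
    # Rearranges seq in place into the next lexicographic permutation.
    # Returns False if seq was already the last permutation.
    i = len(seq) - 2
    while i >= 0 and not (seq[i] < seq[i + 1]):
        i -= 1
    if i < 0:
        return False
    j = len(seq) - 1
    while not (seq[i] < seq[j]):
        j -= 1
    seq[i], seq[j] = seq[j], seq[i]
    lo, hi = i + 1, len(seq) - 1
    while lo < hi:                  # reverse the tail in place
        seq[lo], seq[hi] = seq[hi], seq[lo]
        lo += 1
        hi -= 1
    return True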
-- 
http://mail.python.org/mailman/listinfo/python-list


iterators and views of lists

2009-12-15 Thread Brendan Miller
I was trying to reimplement some of the C++ library of generic
algorithms in python, but I was finding that this is
problematic to do in a generic way because there isn't any
equivalent of c++'s forward iterators, random access iterators, etc.
i.e. all python iterators are just input iterators that can't mutate
the sequence they iterate over nor move backwards or by an arbitrary
offset.

I'm wondering if anyone has done work towards creating more powerful
iterators for python, or creating some more pythonic equivalent.

In particular, I was thinking that slices are almost equivalent to a
range of random access iterators, except that they create an
unnecessary copy. If there was a version of slice that defined a
mutable view over the original list, you'd have something equivalent
to a pair of random access iterators.

For instance, if you wanted to reverse a segment of a list in place
you would be able to do:

list = [1,2,3,4]

#pretend this is some variant of slice that produces views instead
#obviously this couldn't use the same syntax, and might be a method instead.
view = list[1:]
view.reverse()

print list
[1,4,3,2]
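
A minimal sketch of what such a view could look like as a plain object
(my own illustration, not a proposal for syntax):

class ListView(object):
    def __init__(self, data, start, stop):
        self.data, self.start, self.stop = data, start, stop
    def __len__(self):
        return self.stop - self.start
    def __getitem__(self, i):
        return self.data[self.start + i]
    def __setitem__(self, i, value):
        self.data[self.start + i] = value
    def reverse(self):
        # reverses the underlying segment in place, no copy
        lo, hi = self.start, self.stop - 1
        while lo < hi:
            self.data[lo], self.data[hi] = self.data[hi], self.data[lo]
            lo += 1
            hi -= 1

lst = [1, 2, 3, 4]
ListView(lst, 1, len(lst)).reverse()
print lst   # [1, 4, 3, 2]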

This doesn't really handle forward and bidirectional iterators...
which I guess would be good for algorithms that operator over disk
base datastructures...

Anyone else had similar thoughts?

Brendan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: iterators and views of lists

2009-12-15 Thread Brendan Miller
On Tue, Dec 15, 2009 at 9:09 PM, Terry Reedy tjre...@udel.edu wrote:
 On 12/15/2009 10:39 PM, Brendan Miller wrote:
 I'm wondering if anyone has done work towards creating more powerful
 iterators for python, or creating some more pythonic equivalent.

 For sequences, integer indexes let you do anything you want that the
 container supports.

No, that's what I'm getting at... Most of the existing mutating
algorithms in python (sort, reverse) operate over entire collections,
not partial collections delimited by indexes... which would be really
awkward anyway.

Currently people slice and dice with well... slices, but those are
copying, so if you want to operate over part of a range you make a
copy, perform the operation, then copy the results back in.
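
Concretely, reversing a segment today costs two copies (a sketch):

x = [1, 2, 3, 4]
x[1:] = x[:0:-1]   # build a reversed copy of x[1:], then slice-assign it back
print x            # [1, 4, 3, 2]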

I was thinking you'd want something like random access iterators in
c++, or pointers in c, to write typical in place algorithmic code. To
me, something like non-copying slices (maybe you'd call it a list
view?) would seem functionally similar and maybe more pythonic.
-- 
http://mail.python.org/mailman/listinfo/python-list


PyHeapTypeObject

2009-04-11 Thread Brendan Miller
What's the point of PyHeapTypeObject in Include/object.h? Why does the
layout of object types need to be different on the heap vs statically
allocated?

Thanks,
Brendan
--
http://mail.python.org/mailman/listinfo/python-list


py2exe linux equivalent

2009-03-20 Thread Brendan Miller
I have a python application that I want to package up and deploy to
various people using RHEL 4.

I'm using python 2.6 to develop the app. The RHEL 4 machines have an
older version of python I'd rather not code against (although that's
an option). My main stumbling block is I need to use a couple of
python modules (paramiko and pycrypto) that include C bits in them.

Is there any tool out there that can pull in my dependencies and give
me a packaged binary that I can hand off to my users without worrying
about them having my modules or the right version of python? Extra
credit if it generates an RPM for me.

It really doesn't matter if the binary generated is somewhat bloated
with excess dependencies. It can include glibc for all I care.

The main thing keeping me from using all kinds of python in my linux
development at work is not being able to package up the results and
hand them off in a convenient way.

Thanks,
Brendan
--
http://mail.python.org/mailman/listinfo/python-list


Re: py2exe linux equivalent

2009-03-20 Thread Brendan Miller
 platform. AFAICT there are RHEL4 rpms for these, and RHEL4 already comes
 with its own version of Python so it seems you are attempting to make
 things much more difficult than need be.

There are no RPMs in our repository for the third party modules I
need... If it was that easy I wouldn't be asking.
--
http://mail.python.org/mailman/listinfo/python-list


Re: py2exe linux equivalent

2009-03-20 Thread Brendan Miller
So it sounds like the options are PyInstaller, cx_freeze, and
bbfreeze. Has anyone used any of these, and know which one works best
on linux?
--
http://mail.python.org/mailman/listinfo/python-list


PyYaml in standard library?

2009-02-18 Thread Brendan Miller
I'm just curious whether PyYaml is likely to end up in the standard
library at some point?
--
http://mail.python.org/mailman/listinfo/python-list


Re: PyYaml in standard library?

2009-02-18 Thread Brendan Miller
On Wed, Feb 18, 2009 at 1:34 AM, Chris Rebert c...@rebertia.com wrote:
 On Wed, Feb 18, 2009 at 1:11 AM, Brendan Miller catph...@catphive.net wrote:
 I'm just curious whether PyYaml is likely to end up in the standard
 library at some point?

 I don't personally have a direct answer to your question, but I can
 point out that JSON and YAML are mostly compatible (JSON is almost a
 perfect YAML subset) and the `json` module is already in the Python
 std lib.

 Cheers,
 Chris

Yes, JSON is an (I think unintentional) subset of YAML... but a fairly
small subset.

A list in YAML looks like

--- # my list
- Elem 1
- Elem 2

Whereas in JSON you have ["Elem 1", "Elem 2"]. People say JSON is a
subset because YAML will also accept the JSON style syntax if you want
to do something inline for convenience:

--- # my list containing a sublist in the second element.
- Elem 1
- [Sub elem 1, Sub elem 2]

But this is really a special purpose syntax in the context of YAML.

I think the json module sticks everything on the same line by default,
which isn't readable for large data structures. My impression is that
YAML is more readable than JSON, whereas JSON is mostly for browser
interop.
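
(Its dumps function does take an indent argument for multi-line output,
though:)

import json
print json.dumps({"elems": ["Elem 1", "Elem 2"]}, indent=2)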
--
http://mail.python.org/mailman/listinfo/python-list


documentation link for python 3.0.1 on python.org is broken

2009-02-15 Thread Brendan Miller
Like the title says.
--
http://mail.python.org/mailman/listinfo/python-list


install modules for specific python version

2009-01-31 Thread Brendan Miller
I have several version of python running side by side on my ubuntu
install (2.5,2.6,3.0).

I'm installing a module with a setup.py script, in this case
logilab-common, so that I can get pylint going. However, I need to
install into python 2.6, but by default it picks out 2.5 and throws
things in the site packages for that version.

Is there a standard way to specify what version of python you want to
install into? I originally installed my other python versions with the
altinstall method.
--
http://mail.python.org/mailman/listinfo/python-list


import reassignment different at module and function scope

2009-01-30 Thread Brendan Miller
If I:

import sys

sys = sys.version

This executes fine, but:

import sys

def f():
    sys = sys.version

This gives an error indicating that the sys on the right hand side of =
is undefined. What gives?
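
(The usual explanation: the assignment makes sys a local name for the
whole body of f, so the right-hand side reads an as-yet-unbound local.
Binding the result to a different name avoids it, as a sketch:)

import sys

def f():
    version = sys.version   # no assignment to 'sys', so the global is visible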
--
http://mail.python.org/mailman/listinfo/python-list


Re: what's the point of rpython?

2009-01-21 Thread Brendan Miller
On Wed, Jan 21, 2009 at 8:19 AM, Scott David Daniels
scott.dani...@acm.org wrote:
 Brendan Miller wrote:

 On Tue, Jan 20, 2009 at 10:03 PM, Paul Rubin
 http://phr.cx@nospam.invalid wrote:

   Of course I'm aware of the LOCK prefix but it slows
 down the instruction enormously compared with a non-locked instruction.

 I'm curious about that. I've been looking around for timing
 information on the lock signal, but am having some trouble finding
 them. Intuitively, given that the processor is much faster than the
 bus, and you are just waiting for the processor to complete an addition or
 comparison before putting the new memory value on the bus, it seems like
 there should be very little additional bus contention vs a normal add
 instruction.

 The opcode cannot simply talk to its cache, it must either go directly
 to off-chip memory or communicate to other processors that it (and it
 alone) owns the increment target.

Oh, right. *light bulb goes on* I wasn't thinking about cache at all.
--
http://mail.python.org/mailman/listinfo/python-list


Re: what's the point of rpython?

2009-01-20 Thread Brendan Miller
On Tue, Jan 20, 2009 at 3:46 AM, Paul Rubin
http://phr.cx@nospam.invalid wrote:
 s...@pobox.com writes:
 Carl, I'm quite unfamiliar with Boost and am not a C++ person, so may have
 read what you saw but not recognized it in the C++ punctuation soup.  I
 couldn't find what you referred to.  Can you provide a URL?

 http://www.boost.org/doc/libs/1_37_0/libs/smart_ptr/shared_ptr.htm#ThreadSafety

I think you are misreading that. It says that multiple assignments to
different copies of a shared_ptr in different threads are fine. This is
the important bit because copies of the same pointer share the same
reference count, and assignments and resets will decrement that ref
count. They say they don't handle mutations of the *same* pointer in
different threads, which is a different issue.

The programmer is responsible for synchronizing access to the pointer,
and the pointed to object, but not the ref count. This may be not be
obvious if you don't use shared_ptr a lot.

You also mentioned in an earlier post that most processors don't
support automic increments... I'm hesitant to dispute you here because
this is outside of my field of expertise. However, a quick google
search for "x86 atomic increment" comes up with this:

XADD

http://www.codemaestro.com/reviews/8
http://siyobik.info/index.php?module=x86&id=159

Again, I'm not an assembly guru, but this sounds like exactly what
you'd want. It gains exclusive access to system memory in a
multi-processor environment without leaving user space. Thus XADD is
an atomic increment/decrement. It would be educational if someone more
familiar with x86 than me could speak to the performance merits of this
on modern multicore machines.
--
http://mail.python.org/mailman/listinfo/python-list


Re: what's the point of rpython?

2009-01-20 Thread Brendan Miller
On Tue, Jan 20, 2009 at 6:29 PM, Paul Rubin
http://phr.cx@nospam.invalid wrote:
 Rhodri James rho...@wildebst.demon.co.uk writes:
  What cpu's do you know of that can atomically increment and decrement
  integers without locking?

 x86 (and pretty much any 8080 derivative, come to think of it).

 It would not have occurred to me that lock inc increments without
 locking.  I understand that's different from a lock value sitting in
 the data object but I thought that lock-free algorithm meant one
 that didn't assert any of these hardware locks either.  Maybe I'm
 wrong.

Right... I was wondering about that. Well, any kind of memory access
gets exclusive control of the bus except on NUMA, but I'm wondering
how CMPXCHG
http://en.wikipedia.org/wiki/Compare-and-swap

compares to XADD performance wise.

It seems to me that both of them must pull the old value across the
bus, hang onto the bus, and move the new value in. Maybe since XADD
needs to perform arithmetic there will be a few cycles lag between
getting the old value and pushing the new value? Maybe CMPXCHG doesn't
go through the ALU?

If the bus isn't just sitting idle and you can immediately push out
the new value then there's no real locking. Actually this article
explicitly mentions CMPXCHG as lock free.

http://en.wikipedia.org/wiki/Lock-free_and_wait-free_algorithms
--
http://mail.python.org/mailman/listinfo/python-list


Re: what's the point of rpython?

2009-01-20 Thread Brendan Miller
On Tue, Jan 20, 2009 at 10:03 PM, Paul Rubin
http://phr.cx@nospam.invalid wrote:
 Rhodri James rho...@wildebst.demon.co.uk writes:
 You asked a question about CPUs with atomic update, strongly implying
 there were none.  All I did was supply a counter-example,

 Well, more specifically, atomic update without locking, but I counted
 the LOCK prefix as locking while other people didn't, and that caused
 some confusion.  Of course I'm aware of the LOCK prefix but it slows
 down the instruction enormously compared with a non-locked instruction.

I'm curious about that. I've been looking around for timing
information on the lock signal, but am having some trouble finding
them. Intuitively, given that the processor is much faster than the
bus, and you are just waiting for the processor to complete an addition or
comparison before putting the new memory value on the bus, it seems like
there should be very little additional bus contention vs a normal add
instruction.
--
http://mail.python.org/mailman/listinfo/python-list


Re: pep 8 constants

2009-01-19 Thread Brendan Miller
 Constants would be a nice addition in python, sure enough.

My original question was about PEP-8 and whether it is pythonic to use
all caps to denote a variable that shouldn't be changed. More of a
style question than a language question.

I actually think *enforcing* constantness seems to go against the
grain of the language, so to speak.
--
http://mail.python.org/mailman/listinfo/python-list


Re: what's the point of rpython?

2009-01-19 Thread Brendan Miller
Maybe I'm missing something here but a lock free algorithm for
reference counting seems pretty trivial. As long as you can atomically
increment and decrement an integer without locking you are pretty much
done.

For a reference implementation of lock free reference counting on all
common platforms check out boost's implementation of shared_ptr (a
reference counting smart pointer designed around multithreaded use
cases).

 There are well known concurrent and parallel GC techniques that

Hmm... I didn't really mention poor parallelism as a problem of GC. As
I see it there are two trade offs that have to be made for GC vs
alternative techniques: the embarrassing pause during compaction, which
makes it impossible to use for applications like interactive video
display that can't halt to compact a several gigabyte heap without
causing stutter, and the loose memory profile.

Maybe the document you sent me addresses those, and I'd be interested
if it did. It's 150~ pages though so I haven't really had time to read
it yet.

As far as parallelism problems with GC go... the only ones I can
imagine is that if you had a lot of threads going generating lots of
garbage you would need to start to compact more frequently. Since
compaction halts all threads, this could potentially cause very
frequent compactions? Is that what you were getting at? I'd wondered
about that before, but didn't know for a fact whether it came up in
real world scenarios.
--
http://mail.python.org/mailman/listinfo/python-list


Re: what's the point of rpython?

2009-01-18 Thread Brendan Miller
On Sat, Jan 17, 2009 at 7:57 PM, Paul Rubin
http://phr.cx@nospam.invalid wrote:
 alex23 wuwe...@gmail.com writes:
 Here's an article by Guido talking about the last attempt to remove
 the GIL and the performance issues that arose:

 I'd welcome a set of patches into Py3k *only if* the performance for
 a single-threaded program (and for a multi-threaded but I/O-bound
 program) *does not decrease*.

 The performance decrease is an artifact of CPython's rather primitive
 storage management (reference counts in every object).  This is
 pervasive and can't really be removed.  But a new implementation
 (e.g. PyPy) can and should have a real garbage collector that doesn't
 suffer from such effects.
 --
 http://mail.python.org/mailman/listinfo/python-list


That's interesting, I hadn't heard the reference counting mechanism
was related to the GIL. Is it just that you need to lock the reference
count before mutating it if there's no GIL? Really, that shouldn't be
the case. Reference counting can be done with a lock free algorithm.

Garbage collection is definitely in vogue right now. However, people
tend to treat it more like a religion than an algorithm. Garbage
collection vs reference counting  actually has some trade offs both
ways. GC gets you some amortized performance gains, and some space
gains because you don't need to hang on to a bunch of counts. However,
GC also has the problem of having a very loose memory profile and poor
interactive performance during compaction if the heap is large. In
some cases this discussion becomes complicated with python because
python has both reference counting and GC.
--
http://mail.python.org/mailman/listinfo/python-list


Re: braces fixed '#{' and '#}'

2009-01-18 Thread Brendan Miller
Yes, I also recently noticed the bug in python's parser that doesn't
let it handle squiggly braces and the bug in the lexer that makes white
space significant. I'm surprised the devs haven't noticed this yet.

On Sat, Jan 17, 2009 at 2:09 AM, v4vijayakumar
vijayakumar.subbu...@gmail.com wrote:
 I saw some code where someone is really managed to import braces from
 __future__. ;)

 def test():
 #{
     print "hello"
 #}

This seems like the best workaround.  Hopefully python curly brace
support will be fixed soon. I think technically a language can't be
turing complete without curly braces right? That's definitely how I
read this:

http://www.thocp.net/biographies/papers/turing_oncomputablenumbers_1936.pdf

"If the negation of what Gödel has shown had been proved, i.e. if, for each U,
either U or –U is provable, then we should have an immediate solution of the
Entscheidungsproblem." As a corollary we also have that real
programmers use squiggly braces and everyone else is nubs.
--
http://mail.python.org/mailman/listinfo/python-list


Re: what's the point of rpython?

2009-01-17 Thread Brendan Miller
 The goals of the pypy project seems to be to create a fast python
 implementation. I may be wrong about this, as the goals seem a little
 amorphous if you look at their home page.

 The home page itself is ambiguous, and does oversell the performance
 aspect. The *actual* goal as outlined by their official docs is to
 implement Python in Python, at every level.

Ok fair enough. In some ways I see that as more of a purely
intellectual exercise than the practical endeavor that I assumed the
project was originally.

However, one of the links I was sent had one of the devs talking about
using the translation process to make C/Java/LLVM implementations out
of the same interpreter code. I'll say that makes a lot of sense.

Another question I was wondering about is whether they plan on
maintaining good C bindings? Will existing bindings for third party
libraries be able to work?

Also, are they going to do away with the GIL? The python devs seem to
consider the GIL a non-issue, though they may change their mind in 3
years when we all have 32 core desktops, until then getting rid of the
GIL would make pypy pretty attractive in some quarters. I know the
scons project was running into GIL issues.

Finally, I'm pretty unclear on what versions of python pypy is targeting.
--
http://mail.python.org/mailman/listinfo/python-list


what's the point of rpython?

2009-01-16 Thread Brendan Miller
So I kind of wanted to ask this question on the pypy mailing list..
but there's only a pypy-dev list, and I don't want to put noise on the
dev list.

What's the point of RPython? By this, I don't mean "What is RPython?"
I get that. I mean, why?

The goals of the pypy project seems to be to create a fast python
implementation. I may be wrong about this, as the goals seem a little
amorphous if you look at their home page.

So, to do that this RPython compiler was created? Except that it
doesn't compile python, it compiles a static subset of python that has
type inference like ML.

This RPython thing seems like a totally unnecessary intermediate step.
Instead of writing an implementation of one language, there's an
implementation of two languages.

Actually, it seems like it harms the ultimate goal. For the
interpreted python to be fast, the interpreter has to be written in a
reasonably fast language. ML, and C++ compilers have had a lot of work
put into their optimization steps. A lot of the performance boost you
get from a static language is that knowing the types at compile time
lets you inline code like crazy, unroll loops, and even execute code
at compile time.

RPython is a statically typed language because I guess the developers
associate static languages with speed? Except that they use it to
generate C code, which throws away the type information they need to
get the speed increase. Huh? I thought the goal was to write a fast
dynamic language, not a slow static one?

Is this going anywhere or is this just architecture astronautics?

The RPython project seems kind of interesting to me and I'd like to
see more python implementations, but looking at the project I can't
help but think that they haven't really explained *why* they are doing
the things they are doing.

Anyway, I can tell this is the sort of question that some people will
interpret as rude. Asking hard questions is never polite, but it is
always necessary :)

Thanks,
Brendan
--
http://mail.python.org/mailman/listinfo/python-list


pep 8 constants

2009-01-13 Thread Brendan Miller
PEP 8 doesn't mention anything about using all caps to indicate a constant.

Is all caps meaning "don't reassign this var" a strong enough
convention to not be considered violating good python style? I see a
lot of people using it, but I also see a lot of people writing
non-pythonic code... so I thought I'd see what the consensus is.

Brendan
--
http://mail.python.org/mailman/listinfo/python-list


Re: pep 8 constants

2009-01-13 Thread Brendan Miller
 FOO = 1

 def f(x=FOO):
   ...


 Use this instead:

 def f(x=1):
   ...


I tend to use constants as a means of avoiding the proliferation of
magic literals for maintenance reasons... Like say if your example of
FOO would have been used in 10 places. Maybe it is more pythonic to
denote such a thing as simply a normal variable? That doesn't
seem to give a hint that it shouldn't be assigned a second time.
--
http://mail.python.org/mailman/listinfo/python-list


Re: best python unit testing framwork

2008-11-14 Thread Brendan Miller
On Thu, Nov 13, 2008 at 3:54 AM, James Harris
[EMAIL PROTECTED] wrote:
 On 11 Nov, 22:59, Brendan Miller [EMAIL PROTECTED] wrote:
 What would heavy python unit testers say is the best framework?

 I've seen a few mentions that maybe the built in unittest framework
 isn't that great. I've heard a couple of good things about py.test and
 nose. Are there other options? Is there any kind of consensus about
 the best, or at least how they stack up to each other?

 You don't mention what you want from testing so it's hard to say which
 is best as it depends on one's point of view.

 For example, I had a requirement to test more than just Python
 programs. I wanted to test code written in any language. None of the
 frameworks I found supported this so I wrote my own tester. It
 interacts with programs via file streams - principally stdin, stdout
 and stderr though others can be added as needed.

I was speaking to unit tests (tests of individual classes and
functions in isolation of the rest of the program) and a test driven
development approach. I find those to be much more useful, and good
for pinpointing bugs and guiding development. In contrast regression
tests tend to be slow, and are the programmatic equivalent of kicking
the tires and seeing if they fall off.

I've also seen regression tests lead to a "sweep the bugs under the
rug" mentality where developers will code to prevent errors from
crashing the regression test, e.g. by catching and swallowing all
exceptions without fixing the underlying problem. It's easy to fool
regression tests since what it does works at such a high level that
most aspects of program correctness can't be directly checked. This is
very frustrating to me because it actually leads to lower code
quality.


 One nice by-product is that test code does not bloat-out the original
 source which remains unchanged.

That's the main reason most people don't write unit tests. It forces
them to properly decouple their code so that parts can be used
independently of one another. Adding such things to messy ball of mud
code after the fact is an enourmous pain in the butt. Thankfully,
since python is duck typed and doesn't require lots of boilerplate
writing interfaces and abstract factories (since the class object
itself acts as an abstract factory), writing properly decoupled code
in the first place doesn't look nearly as hard as in C++ or Java.
--
http://mail.python.org/mailman/listinfo/python-list


best python unit testing framwork

2008-11-11 Thread Brendan Miller
What would heavy python unit testers say is the best framework?

I've seen a few mentions that maybe the built in unittest framework
isn't that great. I've heard a couple of good things about py.test and
nose. Are there other options? Is there any kind of consensus about
the best, or at least how they stack up to each other?

Brendan
--
http://mail.python.org/mailman/listinfo/python-list


hiding modules in __init__.py

2008-10-18 Thread Brendan Miller
How would I implement something equivalent to java's "package private" in
python?

Say if I have

package/__init__.py
package/utility_module.py

and utility_module.py is an implementation detail subject to change.

Is there some way to use __init__.py to hide modules that I don't want
clients to see? Or is the best practice just to name the module you don't
want clients to use _utility_module and have it private by convention?
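
A common arrangement combines both ideas: underscore-name the module, then
re-export the public pieces from __init__.py (a sketch; names hypothetical):

# package/__init__.py
from package._utility_module import useful_function

__all__ = ['useful_function']   # what 'from package import *' exposes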

Thanks,
Brendan
--
http://mail.python.org/mailman/listinfo/python-list


portable way to tell what Popen will call

2008-05-13 Thread Brendan Miller
I need a portable way to tell what subprocess.Popen will call.

For instance on unix systems, Popen will work for files flagged with the
executable bit, whereas on windows Popen will work on files ending in the
.exe extension (and I don't think anything else). Is there a portable way
to check what Popen will work on without actually executing it? Do I have to
write a bunch of platform specific code here?
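
There doesn't seem to be a single stdlib call for this; a best-effort
sketch of the platform-specific check (not authoritative):

import os

def looks_executable(path):
    if os.name == 'nt':
        # Windows resolves executables by extension, per PATHEXT
        exts = os.environ.get('PATHEXT', '.EXE').lower().split(os.pathsep)
        return os.path.splitext(path)[1].lower() in exts
    return os.access(path, os.X_OK)   # POSIX: check the execute bit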

Thanks,
Brendan
--
http://mail.python.org/mailman/listinfo/python-list


portable /dev/null

2008-05-02 Thread Brendan Miller
Hi,

I have functions that take a file object and write to it. In some cases I
just want to throw out what is written to that file object. I want
something like open('/dev/null', 'w'), but portable.

It needs to have an underlying file descriptor/file handle, as it will be
passed to non python code.

Is there a portable /dev/null somewhere in the standard library?

Thanks,
Brendan
--
http://mail.python.org/mailman/listinfo/python-list


Re: portable /dev/null

2008-05-02 Thread Brendan Miller
On Fri, 02 May 2008 21:41:36 +0200, Christian Heimes wrote:

 Brendan Miller schrieb:
 Hi,
 
 I have functions that take a file object and write to it. In some cases I
 just want to throw out what is written to that file object. I want
 something like open('/dev/null', 'w'), but portable.
 
 import os
 null = open(os.devnull, "wb")
 
 :)
 
 Christian

Awesome. Thanks.

Brendan
--
http://mail.python.org/mailman/listinfo/python-list


portable fork+exec/spawn

2008-05-01 Thread Brendan Miller
I want to spawn a child process based on an external executable that I have
the path for. I then want to wait on that executable, and capture its
output.

In the os module, fork is only supported on unix, but spawn is only
supported on windows.

The os.system call is implemented by calling the C system call, which is of
course inefficient and has portability gotchas because it calls the
underlying system shell (sh for unix, cmd for windows, and who knows what
on non unix, non windows platforms like VMS and mac os9). Most importantly,
os.system forces you to wait on the process.

Is there an actual portable means of asynchronously spawning a process and
getting a handle to it, being able to write stdin and read stdout, or does
everyone just write their own wrapper for fork and spawn?

Sorry if this post sounds a little complainy. I was just surprised to find
the os module lacking in portable abstractions over system services.

Brendan
--
http://mail.python.org/mailman/listinfo/python-list


Re: portable fork+exec/spawn

2008-05-01 Thread Brendan Miller
On Fri, 02 May 2008 13:25:55 +1000, Ben Finney wrote:

 URL:http://docs.python.org/lib/module-subprocess.html

Awesome. This is exactly what I was hoping existed.
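
For anyone landing here later, the minimal pattern looks like this (a
sketch; the child path is hypothetical):

import subprocess

proc = subprocess.Popen(['/path/to/child', 'arg'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)
out, _ = proc.communicate('input for the child\n')   # write stdin, read stdout
print proc.returncode, repr(out)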
--
http://mail.python.org/mailman/listinfo/python-list