[RELEASE] Nevow 0.11.1

2014-06-22 Thread exarkun

Hello,

I'm pleased to announce the release of Nevow 0.11.1.

Nevow is a web application construction kit written in Python and based 
on Twisted. It is designed to allow the programmer to express as much of 
the view logic as desired in Python, and includes a pure Python XML 
expression syntax named stan to facilitate this. However, it also 
provides rich support for designer-edited templates, using a very small 
XML attribute language to provide bi-directional template manipulation 
capability.
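
For anyone who hasn't seen stan before, a minimal sketch of what it looks 
like (just an illustration, with hypothetical content):

    from nevow import tags as T
    from nevow import flat

    # Build the document tree as nested Python expressions, then flatten
    # it into a string of XML.
    document = T.html[T.body[T.h1["Hello"], T.p["Rendered from Python"]]]
    print flat.flatten(document)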


This release includes a number of minor new features and bug fixes.  It 
also includes changes to modernize Nevow's packaging - installation of 
Nevow using `pip` is now supported.  This release also marks the move of 
Nevow development to Github.


You can read about all of the changes in this release in the NEWS file:

   https://github.com/twisted/nevow/blob/release-0.11.1/NEWS.txt

You can find Nevow on PyPI:

   https://pypi.python.org/pypi/Nevow

Or on Github:

   https://github.com/twisted/nevow

Enjoy!

Jean-Paul
http://as.ynchrono.us/
--
https://mail.python.org/mailman/listinfo/python-announce-list

   Support the Python Software Foundation:
   http://www.python.org/psf/donations/


[ANN] pyOpenSSL 0.14

2014-02-24 Thread exarkun

Greetings fellow Pythoneers,

I'm happy to announce that pyOpenSSL 0.14 is now available.

pyOpenSSL is a set of Python bindings for OpenSSL.  It includes some 
low-level cryptography APIs but is primarily focused on providing an API 
for using the TLS protocol from Python.


Check out the PyPI page (https://pypi.python.org/pypi/pyOpenSSL) for 
downloads.


This release of pyOpenSSL adds:

* Support for TLSv1.1 and TLSv1.2

* First-class support for PyPy

* New flags, such as MODE_RELEASE_BUFFERS and OP_NO_COMPRESSION

* Some APIs to access the SSL session cache

* A variety of bug fixes for error handling cases
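
As a rough sketch of how a couple of these fit together (using the 
OpenSSL.SSL constants named above; treat the details as illustrative):

    from OpenSSL import SSL

    # Ask for TLSv1.2 and disable TLS compression on the context.
    ctx = SSL.Context(SSL.TLSv1_2_METHOD)
    ctx.set_options(SSL.OP_NO_COMPRESSION)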

Additionally, there are three major changes to the project:

First, the documentation has been converted from LaTeX (CPython's 
previous documentation system) to Sphinx (CPython's new documentation 
system ;).  You can find the new documentation on the PyPI documentation 
site (https://pythonhosted.org/pyOpenSSL/) or on Read the Docs 
(https://pyopenssl.readthedocs.org/).


Second, pyOpenSSL is no longer implemented in C as a collection of 
extension modules using the Python/C API. Instead, pyOpenSSL is now a 
pure-Python project with a dependency on a new project, cryptography 
(https://github.com/pyca/cryptography), which provides (among other 
things) a cffi-based interface to OpenSSL.


This change means that pyOpenSSL development is now more accessible to 
Python programmers with little or no experience with C. This is also how 
pyOpenSSL is now able to support PyPy.


Finally, the project's code hosting has moved from Launchpad to Github. 
Many branches remain only on Launchpad along with their associated bug 
reports. Over the coming releases I hope that the fixes and features in 
these branches will be ported to Git and incorporated into the 
pyOpenSSL master development branch. Bug tracking has been disabled on 
Launchpad so that the amount of useful information hosted there can 
gradually dwindle to nothing. Please use Github 
(https://github.com/pyca/pyopenssl) for further development and bug 
reporting.


Thanks and enjoy,
Jean-Paul
--
https://mail.python.org/mailman/listinfo/python-announce-list

   Support the Python Software Foundation:
   http://www.python.org/psf/donations/


pyOpenSSL 0.13 release

2011-09-04 Thread exarkun

Hello all,

I'm happy to announce the release of pyOpenSSL 0.13.  With this release, 
pyOpenSSL now supports OpenSSL 1.0.  Additionally, pyOpenSSL now works 
with PyPy.


Apart from those two major developments, the following interesting 
changes have been made since the last release:


 * (S)erver (N)ame (I)ndication is now supported.
 * There are now APIs with which the underlying OpenSSL version can be
   queried.
 * The peer's certificate chain for a connection can now be inspected.
 * New methods have been added to PKey and X509 objects exposing more
   OpenSSL functionality.
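
For example, the SNI support amounts to a server-name callback on the 
Context; a sketch (the certificate-selection logic is left out):

    from OpenSSL import SSL

    ctx = SSL.Context(SSL.SSLv23_METHOD)

    def pick_certificate(connection):
        # Called during the handshake; a server can inspect the name the
        # client asked for and switch certificates accordingly.
        print 'client asked for', connection.get_servername()

    ctx.set_tlsext_servername_callback(pick_certificate)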


Release files are available on PyPI.  The latest development version and 
the issue tracker can be found on Launchpad.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Announcing pyOpenSSL 0.12

2011-04-11 Thread exarkun

Exciting news everyone,

I have just released pyOpenSSL 0.12.  pyOpenSSL provides Python bindings 
to a number of OpenSSL APIs, including certificates, public and private 
keys, and of course running TLS (SSL) over sockets or arbitrary 
in-memory buffers.


This release fixes an incompatibility with Python 2.7 involving 
memoryviews.  It also exposes the info callback constants used to 
report progress of the TLS handshake and later steps of SSL connections. 
Perhaps most interestingly, it also adds support for inspecting 
arbitrary X509 extensions.
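
The extension inspection looks roughly like this (cert_pem here is a 
hypothetical PEM-encoded certificate string):

    from OpenSSL import crypto

    cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem)
    for index in range(cert.get_extension_count()):
        extension = cert.get_extension(index)
        print extension.get_short_name(), str(extension)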


http://python.org/pypi/pyOpenSSL - check it out.

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


[ANN] pyOpenSSL 0.11 released

2010-11-01 Thread exarkun

Hello all,

I'm happy to announce the release of pyOpenSSL 0.11.  The primary change 
from the last release is that Python 3.2 is now supported.  Python 2.4 
through Python 2.7 are still supported as well.  This release also fixes 
a handful of bugs in error handling code.  It also adds APIs for 
generating and verifying cryptographic signatures and it improves the 
test suite to cover nearly 80% of the implementation.
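
The signature APIs referred to are, as far as I recall, crypto.sign() and 
crypto.verify(); a quick sketch:

    from OpenSSL import crypto

    key = crypto.PKey()
    key.generate_key(crypto.TYPE_RSA, 2048)
    # Sign some bytes with the private key and a digest name.
    signature = crypto.sign(key, "some data", "sha1")
    # crypto.verify(certificate, signature, "some data", "sha1") raises an
    # exception if the signature does not match.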


Downloads and more details about the release can be found on the release 
page:


   https://launchpad.net/pyopenssl/main/0.11

Enjoy,
Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Determine sockets in use by python

2010-09-30 Thread exarkun

On 07:32 pm, jmellan...@lbl.gov wrote:

Thanks, I realized that even if I found out relevant info on the
socket, I would probably need to use ctypes to  provide a low level
interface to select, as the socket wouldn't be a python socket object,
unless there is some way to promote a c socket to a python socket
object.

Appreciate the info, folks.


There are a few options to help with that part of it:

 * select() works with integer file descriptors
 * socket.socket.fromfd gives you a socket object from an integer file 
descriptor
 * os.read and os.write let you read and write directly on file 
descriptors (although it sounds like you might not need this)
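
Put together, that looks something like this (the descriptor number is 
hypothetical - it would come from whatever the library reports):

    import select
    import socket

    fd = 7  # hypothetical file descriptor owned by the C library
    sock = socket.fromfd(fd, socket.AF_INET, socket.SOCK_STREAM)
    # Block until the descriptor is readable, then let the library's
    # event processing run.
    readable, writable, errored = select.select([sock], [], [])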


Jean-Paul

On Thu, Sep 30, 2010 at 7:14 AM, Jean-Paul Calderone
exar...@twistedmatrix.com wrote:

On Sep 29, 4:08 pm, Jim Mellander jmellan...@lbl.gov wrote:


On Wed, Sep 29, 2010 at 11:05 AM, Gary Herron gher...@digipen.edu 
wrote:

 On 09/29/2010 09:50 AM, Jim Mellander wrote:

 Hi:

 I'm a newbie to python, although not to programming.  Briefly, I 
am

 using a binding to an external library used for communication in a
 client-server context, with the server in python.  Typically, I 
would
 set this up with event callbacks, and then enter a select loop, 
which,
 most the time idles and processes input events when the socket 
shows

 activity, kinda like:

 while True:
     socket.select((my_socket),(),())
     process_event()

 Unfortunately, the API does not expose the socket to the script 
level,

 and the developer recommends a busy loop:

 while True:
     sleep(1)
     process_event()

 which I hope to avoid, for many reasons.  If the socket can be 
exposed

 to the script level, then the problem would be solved.

 Failing that, it would be nice to be able to pythonically 
determine

 the sockets in use and select on those.  Does anyone have any
 suggestions on how to proceed?

 Thanks in advance

 It's certain that any answer to this will depend on which operating
 system you are using.  So do tell: What OS?

Hi Gary:

Certainly not windows.  I'm developing on OS/X but for production
probably Linux and FreeBSD

(I'm hoping for something a bit more portable than running 'lsof' and
parsing the output, but appreciate any/all advice)


Linux has /proc/self/fd and OS X has /dev/fd.  Those both suppose you
have some way of determining which file descriptor corresponds to the
socket or sockets that the library is using, of course.  Vastly better
would be to convince the author to expose that information via a real
API.

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


-- 
http://mail.python.org/mailman/listinfo/python-list


[ANN] filepath 0.1

2010-06-23 Thread exarkun

Hello all,

I'm happy to announce the initial release of filepath.

filepath is an abstract interface to the filesystem.  It provides APIs 
for path name manipulation and for inspecting and modifying the 
filesystem (for example, renaming files, reading from them, etc). 
filepath's APIs are intended to be easier to use correctly and safely 
than those of the standard library os.path module.
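
A small sketch of the sort of API this refers to (assuming the standalone 
package exposes the same FilePath class as twisted.python.filepath):

    from filepath import FilePath

    base = FilePath('/tmp/example')   # hypothetical directory
    child = base.child('notes.txt')   # child() refuses to escape base
    child.setContent('hello')         # write the file
    print child.getContent()          # read it back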


filepath is a re-packaging of the twisted.python.filepath module 
independent from Twisted (except for the test suite which still depends 
on Twisted Trial).


The low number of this release reflects the newness of this packaging. 
The implementation is almost entirely mature and well tested in real- 
world situations from its time as part of Twisted.


You can find the package on PyPI or Launchpad:

   http://pypi.python.org/pypi/filepath/0.1
   https://launchpad.net/filepath

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


[ANN] python-signalfd 0.1 released

2010-06-21 Thread exarkun

Hello all,

I'm happy to announce the initial release of python-signalfd.  This 
simple package wraps the sigprocmask(2) and signalfd(2) calls, useful 
for interacting with POSIX signals in slightly more advanced ways than 
can be done with the built-in signal module.


You can find the package on PyPI or Launchpad:

   http://pypi.python.org/pypi/python-signalfd/0.1
   https://launchpad.net/python-signalfd

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: a +b ?

2010-06-13 Thread exarkun

On 04:25 pm, wuwe...@gmail.com wrote:

Steven D'Aprano st...@remove-this-cybersource.com.au wrote:
No, I think your code is very simple. You can save a few lines by
writing it like this:

s = input('enter two numbers: ')
t = s.split()
print(int(t[0]) + int(t[1]))  # no need for temporary variables a and b


Not that we're playing a round of code golf here, but this is a
slightly nicer take on your version:

one, two = input('enter two numbers: ').split()
print(int(one) + int(two))

I like names over subscripts, but that's just me :)


Fore!

   print(sum(map(int, input('enter two numbers: ').split())))

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: __getattribute__ and methods proxying

2010-06-12 Thread exarkun

On 07:01 pm, g.rod...@gmail.com wrote:

Hi,
I have a class which looks like the one below.
What I'm trying to accomplish is to wrap all my method calls and
attribute lookups into a proxy method which translates certain
exceptions into others.
The code below *apparently* works: the original method is called but
for some reason the except clause is totally ignored.

I thought __getattribute__ was designed for such kind of things
(proxying) but apparently it seems I was wrong.



import errno


class NoSuchProcess(Exception): pass
class AccessDenied(Exception): pass


class Process(object):

    def __getattribute__(self, name):
        # wrap all method calls and attributes lookups so that
        # underlying OSError exceptions get translated into
        # NSP and AD exceptions
        try:
            print "here 1!"
            return object.__getattribute__(self, name)
        except OSError, err:
            print "here 2!"
            if err.errno == errno.ESRCH:
                raise NoSuchProcess
            if err.errno == errno.EPERM:
                raise AccessDenied

    def cmdline(self):
        raise OSError("bla bla")


proc = Process()
proc.cmdline()


You've proxied attribute access here.  But no OSError is raised by the 
attribute access.  It completes successfully.  Then, the cmdline method 
which was retrieved by the attribute access is called, normally, with 
none of your other code getting involved.  This raises an OSError, which 
your code doesn't handle because it has already returned.
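
If the goal is to translate exceptions raised when the methods are called, 
one way (a sketch, certainly not the only one) is to have __getattribute__ 
return a wrapper around the method instead of the method itself:

    import errno
    import functools

    class NoSuchProcess(Exception): pass
    class AccessDenied(Exception): pass

    class Process(object):
        def __getattribute__(self, name):
            attr = object.__getattribute__(self, name)
            if name.startswith('__') or not callable(attr):
                return attr
            # Translate OSError raised when the method is *called*, not
            # when it is merely looked up.
            @functools.wraps(attr)
            def wrapper(*args, **kwargs):
                try:
                    return attr(*args, **kwargs)
                except OSError, err:
                    if err.errno == errno.ESRCH:
                        raise NoSuchProcess()
                    if err.errno == errno.EPERM:
                        raise AccessDenied()
                    raise
            return wrapper

        def cmdline(self):
            raise OSError(errno.ESRCH, "bla bla")

    proc = Process()
    proc.cmdline()   # now raises NoSuchProcess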


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: error in importing numpy

2010-06-05 Thread exarkun

On 12:26 pm, michelpar...@live.com wrote:


Hi,
I am using Ubuntu 9.10. I just installed Python 2.6.1 in 
/opt/python2.6 for using it with Wing IDE for debugging symbols. I also 
installed numpy in Python 2.6.1 using the --prefix method. But when I 
import numpy I get the following error:


ImportError: undefined symbol: _PyUnicodeUCS4_IsWhitespace
please help me.


Your numpy is compiled for a UCS4 build of Python.  But you have a UCS2 
build of Python.  Rebuild one of them to match up with the other.
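
A quick way to check which kind of build an interpreter is:

    import sys

    # 1114111 (0x10FFFF) means a UCS4 build; 65535 (0xFFFF) means UCS2.
    print sys.maxunicode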


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Plotting in batch with no display

2010-06-04 Thread exarkun

On 08:01 pm, h.schaat...@surrey.ac.uk wrote:

Admittedly not the strongest reason, but yet an important one,
for switching from Matlab to python/numpy/scipy/matplotlib,
is that Matlab is very cumbersome to run in batch.

Now I discover that some of the matplotlib.pyplot functions
(incl. plot and contour) insist on opening an X11 window
(just like Matlab does).  I would have preferred just to create
the plot straight on file.  The extra window is a nuisance on my
laptop, it is deep-felt pain if I try to run it remotely.  It fails
completely if I run it without any display at all.

Oddly, the bar() function does not open a window by default.
I was very happy with that.  It works exactly the way I want.
(Why isn't pyplot more consistent?)

Is there something I have missed?  Is it possible to create
standard 2D plots and contour plots without a display, writing
the graphics straight into a PDF file?


It's possible to plot with matplotlib without a display.  I'm not 
surprised you didn't figure out how, though; it's not all that obvious.


Check out the matplotlib.use function.  For example:

   import matplotlib
   matplotlib.use('agg')
   import pylab
   pylab.plot([1, 3, 5])
   fig = file('foo.png', 'wb')
   pylab.savefig(fig, format='png')
   fig.close()

This should work fine without a display.

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: SOAP with fancy HTTPS requirements

2010-06-03 Thread exarkun

On 03:23 pm, li...@zopyx.com wrote:

Hi there,

I need to implement the following:

sending SOAP requests and receiving SOAP responses
over HTTPS with

- authentication based on client-certificates _and_ basic authorization
- verification of the server cert

The client cert is protected with a passphrase and there must be some
mechanism for passing the passphrase to Python.

Is there some SOAP module doing this out-of-the-box?

I tried it myself with httplib.HTTPSConnection but I could not find a way
to pass the passphrase to the HTTPSConnection. Python always pops up
with an input for the passphrase (likely this is coming from OpenSSL).

Any ideas?


You'll find this easier with one of the third-party SSL libraries, like 
M2Crypto or pyOpenSSL.  The stdlib SSL support is fairly minimal.  For 
example, I *don't* see any support for passphrase-protected private keys 
in the Python 2.6 SSL APIs.  Compare this to the pyOpenSSL API 
load_privatekey, which accepts the passphrase as an argument:


 http://packages.python.org/pyOpenSSL/openssl-crypto.html

Or lets you specify a callback which will be called whenever a 
passphrase is required, set_passwd_cb:


 http://packages.python.org/pyOpenSSL/openssl-context.html
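
Roughly (key_pem and the passphrase here are hypothetical):

    from OpenSSL import crypto

    # The passphrase (or a callable returning it) goes straight into
    # load_privatekey as the optional third argument.
    key = crypto.load_privatekey(crypto.FILETYPE_PEM, key_pem, "passphrase")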

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: FIle transfer over network - with Pyro?

2010-06-03 Thread exarkun

On 06:58 pm, strom...@gmail.com wrote:

On Jun 3, 10:47 am, Nathan Huesken pyt...@lonely-star.org wrote:

Hi,

I am writing a network application which needs from time to time do
file transfer (I am writing the server as well as the client).
For simple network messages, I use pyro because it is very 
comfortable.

But I suspect, that doing a file transfer is very inefficient over
pyro, am I right (the files are pretty big)?

I somehow need to ensure that the client requesting a file transfer is
the same client getting the file. So some sort of authentication is
needed.

What library would you use to do the file transfer?
Regards,
Nathan


I've never used Pyro, but for a fast network file transfer in Python,
I'd probably use the socket module directly, with a cache oblivious
algorithm:
  http://en.wikipedia.org/wiki/Cache-oblivious_algorithm

It doesn't use sockets, it uses files, but I recently did a Python
progress meter application that uses a cache oblivious algorithm that
can get over 5 gigabits/second throughput (that's without the network
in the picture, though if it were used on 10 Gig-E with a suitable
transport it could probably do nearly that), on a nearly-modern PC
running Ubuntu with 2 cores.  It's at:
  http://stromberg.dnsalias.org/~strombrg/gprog/ .


This seems needlessly complicated.  Do you have a hard drive that can 
deliver 5 gigabits/second to your application?  More than likely not.


A more realistic answer is probably to use something based on HTTP. 
This solves a number of real-world problems, like the exact protocol to 
use over the network, and detecting network issues which cause the 
transfer to fail.  It also has the benefit that there's plenty of 
libraries already written to help you out.


Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: asyncore loop and cmdloop problem

2010-05-25 Thread exarkun

On 04:31 pm, kak...@gmail.com wrote:

On May 25, 5:47 pm, kak...@gmail.com kak...@gmail.com wrote:

On May 25, 5:23 pm, Michele Simionato michele.simion...@gmail.com
wrote:

 On May 25, 2:56 pm, kak...@gmail.com kak...@gmail.com wrote:

  Could you please provide me with a simple example of how to do this
  with threads.
  I don't know where to put the cmdloop().
  Please help, i' m so confused!
  Thank you

 What are you trying to do? Do you really need to use the standard
 library? Likely what you want to accomplish is already implemented in
 Twisted; I remember there was something like that in their examples
 directory.

Thank you,
and sorry for the mistake i did before with my post.

Antonis


hi again. i installed twisted, but since i m not familiar with it, do
you remember which example you have in mind?
What i want to accomplish is something like asterisk. while you send
commands from the asterisk cli, at the same time
the underlying protocol sends you notifications to the console.
without exiting the application.
thanks


You can find a couple simple examples here:

 http://twistedmatrix.com/documents/current/core/examples/

Search for stdin and stdio.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Traceback spoofing

2010-05-21 Thread exarkun

On 01:42 am, tjre...@udel.edu wrote:

On 5/21/2010 7:22 PM, Zac Burns wrote:

Why can't I inherit from traceback to 'spoof' tracebacks?


Because a) they are, at least in part, an internal implementation 
detail of CPython,


But you can just say this about anything, since there is no Python 
specification.  So it's mostly meaningless.
and b) even if you could, Python would use the builtin original with 
exceptions,


Only if it were implemented that way.  One could certainly imagine an 
implementation with different behavior.
and c) you can probably do anything sensible you want with them by 
wrapping them, as in, define a class with a traceback as the main 
instance attribute.


Except you can't re-raise them.

Something like this feature has been proposed before.  The only objections 
that I've ever heard raised are that it's harder to implement on CPython 
than anyone is willing to tackle.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: How to measure speed improvements across revisions over time?

2010-05-11 Thread exarkun

On 08:13 pm, m...@tplus1.com wrote:

I know how to use timeit and/or profile to measure the current run-time
cost of some code.

I want to record the time used by some original implementation, then
after I rewrite it, I want to find out if I made stuff faster or slower,
and by how much.

Other than me writing down numbers on a piece of paper on my desk, does
some tool that does this already exist?

If it doesn't exist, how should I build it?


http://github.com/tobami/codespeed sounds like the kind of thing you're 
looking for.  You can see an example of what it does at 
http://speed.pypy.org/


Jean-Paul


Matt

--
http://mail.python.org/mailman/listinfo/python-list

--
http://mail.python.org/mailman/listinfo/python-list


Re: Windows - select.select, timeout and KeyboardInterrupt

2010-05-08 Thread exarkun

On 07:48 am, l...@geek-central.gen.new_zealand wrote:

In message mailman.2760.1273288730.23598.python-l...@python.org,
exar...@twistedmatrix.com wrote:
This is a good example of why it's a bad idea to use select on Windows.
Instead, use WaitForMultipleObjects.


How are you supposed to write portable code, then?


With WaitForMultipleObjects on Windows, epoll on Linux, kqueue on BSD, 
event completion on Solaris, etc...


Sound like more work than using select() everywhere?  Yea, a bit.  But 
not once you abstract it away from your actual application code.  After 
all, it's not like these *do* different things.  They all do the same 
thing (basically) - differently.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Windows - select.select, timeout and KeyboardInterrupt

2010-05-08 Thread exarkun

On 11:47 am, g.rod...@gmail.com wrote:

2010/5/7 Antoine Pitrou solip...@pitrou.net:

On Fri, 07 May 2010 21:55:15 +0200, Giampaolo Rodolà wrote:
Of course, but 30 seconds look a little bit too much to me, also because
(I might be wrong here) I noticed that a smaller timeout seems to result
in better performances.


That's probably bogus.


Probably, I'll try to write a benchmark script and see what happens.

Plus, if scheduled callbacks are ever gonna be added to asyncore we
would be forced to lower the default timeout anyway in order to have a
decent reactivity.


Why?


Assuming loop() function does something like this:

...
select.select(r, w, e, timeout)
scheduler()  # checks for scheduled calls to be fired
...

...imagine a case where there's a connection (aka a dispatcher
instance) which does not receive or send any data *and* a scheduled
call which is supposed to be fired after, say, 5 seconds.
The entire loop would hang on select.select() which won't return for
30 seconds because the socket is not ready for reading and/or writing
resulting in scheduler() be called too late.


This would be an intensely lame way to implement support for scheduled 
callbacks.  Try this trivial modification instead, to make it awesome:


   ...
   select.select(r, w, e, scheduler.time_until_next_call())
   scheduler.run()
   ...

(Or maybe just use Twisted. ;)

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Windows - select.select, timeout and KeyboardInterrupt

2010-05-07 Thread exarkun

On 7 May, 07:25 pm, p.f.mo...@gmail.com wrote:

On 7 May 2010 15:36, Giampaolo Rodolà g.rod...@gmail.com wrote:

You can easily avoid this by setting a lower timeout when calling
asyncore.loop(), like 1 second or less (for example, Twisted uses
0.001 secs).


Thanks, that's what I was considering.


This is a good example of why it's a bad idea to use select on Windows. 
Instead, use WaitForMultipleObjects.

Actually there's no reason for asyncore to have such a high default
timeout (30 seconds).


I assumed it was to avoid busy waiting.

I think this should be signaled on the bug tracker.


If a longer timeout doesn't have issues with busy waiting, then I'd 
agree.


The *default* timeout is only the default.  An application that knows 
better can supply a different value.  I'm not sure how much good can be 
done by fiddling with the default.


Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Fix bugs without breaking down existing code

2010-04-25 Thread exarkun

On 10:02 am, elvismoodbi...@gmail.com wrote:

Say, a Standard Library function works in a way it was not supposed
to.

Developers (who use Python) handle this issue themselves.
And then, you (a python-core developer) fix the behavior of the
function.

Although you have "fixed" the bug, anyone who upgrades will be in
trouble.  Their code may no longer work.

How do you go about this issue?
How do you fix things without breaking down existing systems?


CPython itself has no policy about such things.  Each case is handled 
independently by whomever is working on it.


http://twistedmatrix.com/trac/wiki/CompatibilityPolicy might be 
interesting, though.  The general idea there (and I'm not sure how well 
it's actually expressed there) is that if you want new behavior, then 
(with a few exceptions) you add a new API: you don't change an existing 
API.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: ssl module doesn't validate that domain of certificate is correct

2010-04-19 Thread exarkun

On 04:51 pm, na...@animats.com wrote:

   I'm converting some code from M2Crypto to the new ssl module, and
I've found what looks like a security hole.  The ssl module will
validate the certificate chain, but it doesn't check that the 
certificate

is valid for the domain.

   Here's the basic code:

sk = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock = ssl.wrap_socket(sk, ca_certs=certfile,
                       cert_reqs=ssl.CERT_REQUIRED)
sock.connect((domain, 443))
cert = sock.getpeercert()
print('SSL cert for %s:' % (domain,))
for fieldname in cert:
    print('%s = %s' % (fieldname, cert[fieldname]))

Note that I'm sending a CA cert list and am specifying CERT_REQUIRED,
so I should get a proper cert check.

Now let's try a host that presents the wrong SSL cert. Try, in
a browser,

https://www.countrysidecabinetry.com

You'll get an error.  But the ssl module is happy with this cert:

SSL cert for www.countrysidecabinetry.com:
notAfter = Dec  8 23:30:48 2010 GMT
subject = ((('serialNumber', u'E5gMXaDjnqfFPID2KNdLTVNEE6PjtqOr'),),
           (('countryName', u'US'),),
           (('organizationName', u'customlaserengravings.com'),),
           (('organizationalUnitName', u'GT57631608'),),
           (('organizationalUnitName', u'See www.rapidssl.com/resources/cps (c)09'),),
           (('organizationalUnitName', u'Domain Control Validated - RapidSSL(R)'),),
           (('commonName', u'customlaserengravings.com'),))

Note that the cert is for customlaserengravings.com, but is being
presented by countrysidecabinetry.com.  Fail.

When I try this with M2Crypto, I get an SSL.Checker.WrongHost 
exception.

That's what should happen.


It's a bit debatable.  There probably should be a way to make this 
happen, but it's far from clear that it's the only correct behavior. 
And, as it turns out, there is a way to make it happen - call 
getpeercert() and perform the check yourself. ;)
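
A very rough sketch of that do-it-yourself check, working on the dict 
returned by getpeercert() (it ignores subjectAltName and wildcard rules, 
which a real check must handle):

    def cert_matches_host(cert, hostname):
        for rdn in cert.get('subject', ()):
            for name, value in rdn:
                if name == 'commonName':
                    return value.lower() == hostname.lower()
        return False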


Here's some related discussion for an equivalent API in a different 
module:


 http://twistedmatrix.com/trac/ticket/4023

At the very least, the documentation for this should be very clear about 
what is and is not being checked.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: ssl module doesn't validate that domain of certificate is correct

2010-04-19 Thread exarkun

On 05:49 pm, na...@animats.com wrote:

exar...@twistedmatrix.com wrote:

On 04:51 pm, na...@animats.com wrote:
   I'm converting some code from M2Crypto to the new ssl module, and
I've found what looks like a security hole.  The ssl module will
validate the certificate chain, but it doesn't check that the
certificate is valid for the domain.

...
It's a bit debatable.  There probably should be a way to make this 
happen, but it's far from clear that it's the only correct behavior. 
And, as it turns out, there is a way to make it happen - call 
getpeercert() and perform the check yourself. ;)


   Checking it yourself is non-trivial.


Yes.  It'd be nice to having something in the stdlib which accepted a 
hostname and a certificate and told you if they line up or not.

The SSL module doesn't seem to let you read all the cert extensions,


Yes.  That sucks.  It was argued about on python-dev and ultimately the 
people writing the code didn't want to expose everything.   I don't 
remember the exact argument for that position.

   It's very bad for the ssl module to both ignore this check and
not have that mentioned prominently in the documentation.


I agree.  As I said, I think the behavior should be well documented.

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: [Possibly OT] Comments on PyPI

2010-04-11 Thread exarkun

On 09:24 am, st...@remove-this-cybersource.com.au wrote:

On Sun, 11 Apr 2010 10:13:16 +0200, Martin v. Loewis wrote:

Steven D'Aprano wrote:
How do I leave comments on PyPI? There's a checkbox "Allow comments on
releases" which I have checked, but no obvious way to actually post a
comment.


You need to login in order to rate or comment.


I already am logged in.

A more specific place to ask PyPI questions is catalog-...@python.org.


Another mailing list to subscribe too :(

Thanks Martin.


You can't comment on your own packages.

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Browser-based MMOG web framework

2010-04-01 Thread exarkun

On 01:14 am, srosbo...@gmail.com wrote:

Hi, I could use some advice on my project.

It's a browser-based MMOG: The High Seas (working title)

Basically it is a trading game set in 1600s or 1700s ... inspirations:
Patrician 3, Mine Things, Space Rangers 2, ...

Travel between cities takes several days: game updates trading ship
positions every 10 minutes.  Apart from that it handles player input
to buy/sell goods, if their ship is in port.

I want the game logic and world state data storage on a webserver,
with players connecting via web browser.  Also, I want to make an
admin mode client for me to keep track of the world and add changes
to game world stuff.

I want to use Python but I haven't ever used it in a web context.

http://wiki.python.org/moin/WebFrameworks lists several different
options for Python Web Frameworks: Django, Grok, Pylons, TurboGears,
web2py, Zope.  I've heard of Django and Grok...that's about my level
of knowledge here.

My question: can any of these frameworks help me with what I'm trying
to do?


This is something that Twisted and Nevow Athena will probably be really 
good at doing (a lot better than the ones you've mentioned above, I 
think).


You can find an Athena introduction here (at least for now, the content 
might move to another site before too long):


   http://divmodsphinx.funsize.net/nevow/chattutorial/

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Writing tests for the Python bug tracker

2010-03-20 Thread exarkun

On 06:52 am, st...@remove-this-cybersource.com.au wrote:


but when I try running the test, I get an error:

$ python test_unicode_interpolation.py
Options: {'delimiter': None}
str of options.delimiter = None
repr of options.delimiter = None
len of options.delimiter
Traceback (most recent call last):
  File "test_unicode_interpolation.py", line 3, in <module>
    import test.test_support
  File "/home/steve/python/test.py", line 8, in <module>
    print "len of options.delimiter", len(options.delimiter)
TypeError: object of type 'NoneType' has no len()


What am I doing wrong?


Take a careful look at the stack being reported.  Then, think of a 
better name than test for your file.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Asynchronous HTTP client

2010-03-07 Thread exarkun

On 06:53 am, ping.nsr@gmail.com wrote:

Hi,

I'm trying to find a way to create an asynchronous HTTP client so I
can get responses from web servers in a way like

 async_http_open('http://example.com/', callback_func)
 # immediately continues, and callback_func is called with response
as arg when it is ready

It seems twisted can do it, but I hesitate to bring in such a big
package as a dependency because my client should be light. Asyncore
and asynchat are lighter but they don't speak HTTP. The asynchttp
project on sourceforge is a fusion between asynchat and httplib, but
it hasn't been updated since 2001 and is seriously out of sync with
httplib.


Why should it be light?  In what way would using Twisted cause 
problems for you?


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Asynchronous HTTP client

2010-03-07 Thread exarkun

On 02:40 pm, ping.nsr@gmail.com wrote:

2010/3/7 exar...@twistedmatrix.com

On 06:53 am, ping.nsr@gmail.com wrote:

Hi,

I'm trying to find a way to create an asynchronous HTTP client so I
can get responses from web servers in a way like

 async_http_open('http://example.com/', callback_func)
 # immediately continues, and callback_func is called with response
as arg when it is ready

It seems twisted can do it, but I hesitate to bring in such a big
package as a dependency because my client should be light. Asyncore
and asynchat are lighter but they don't speak HTTP. The asynchttp
project on sourceforge is a fusion between asynchat and httplib, but
it hasn't been updated since 2001 and is seriously out of sync with
httplib.


Why should it be light?  In what way would using Twisted cause 
problems

for you?

Jean-Paul


I'm writing an open source python client for a web service. The client 
may

be used in all kinds of environments - Linux, Mac OS X, Windows, web
hosting, etc by others. It is not impossible to have twisted as a
dependency, but that makes deployment a larger job than simply 
uploading a

Python file.


Twisted is packaged for many Linux distributions (perhaps most of them). 
Many web hosts provide it.  It's also shipped with OS X.


Windows may be an issue, but note that there's a binary Windows 
installer (as of 10.0, an MSI, so installation can be easily automated).
I'm willing to use twisted, but I'd like to explore lighter 
alternatives

first.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: threading and signals - main thread solely responsible for signal handling?

2010-02-13 Thread exarkun

On 04:43 pm, malig...@gmail.com wrote:

The main part of my script is a function that does many long reads
(urlopen, it's looped). Since I'm hell-bent on employing SIGINFO to
display some stats, I needed to run foo() as a seperate thread to
avoid getting errno 4 (interrupted system call) errors (which occur if
SIGINFO is received while urlopen is setting itself up/waiting for a
response). This does the job, SIGINFO is handled without ever brutally
interrupting urlopen.

The problem is that after starting foo as a thread, my main thread has
nothing left to do - unless it receives a signal, and I am forced to
keep it in some sort of loop so that ANY signal handling can still
occur. I thought I'd just occupy it with a simple while 1: pass loop
but that, unfortunately, means 100% CPU usage.

Is there any way I could put the main thread to sleep? Or perhaps my
approach is totally wrong?


I don't think those two options are mutually exclusive. ;)

MRAB suggested you time.sleep() in a loop, which is probably fine. 
However, if you want to have even /less/ activity than that in the main 
thread, take a look at signal.pause().
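
For example (signal.pause() is POSIX-only):

    import signal

    # The main thread sleeps until any signal arrives; the handler runs,
    # pause() returns, and we go back to waiting.
    while True:
        signal.pause()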


Also, perhaps not terribly interesting, signal.siginterrupt() was 
recently introduced, which will let you avoid EINTR if SIGINFO is 
received while urlopen is in a syscall (but will also prevent the signal 
from being handled until the syscall returns on its own).


And there's always Twisted  friends. :)

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: merge stdin, stdout?

2010-02-04 Thread exarkun

On 01:56 am, jonny.lowe.12...@gmail.com wrote:

Hi everyone,

Is there an easy way to merge stdin and stdout? For instance suppose I
have script that prompts for a number and prints the number. If you
execute this with redirection from a file say input.txt with 42 in the
file, then executing

./myscript  input.txt  output.txt

the output.txt might look like this:

Enter a number:
You entered 42.

What I want is to have an easy way to merge input.txt and the stdout
so that output.txt look like:

Enter a number: 42
You entered 42.

Here, the first 42 is of course from input.txt.


It sounds like you might be looking for script(1)?

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: lists as an efficient implementation of large two-dimensional arrays(!)

2010-02-02 Thread exarkun

On 08:36 pm, gerald.brit...@gmail.com wrote:
On Tue, Feb 2, 2010 at 3:14 PM, Mitchell L Model mlm...@comcast.net 
wrote:

I need a 1000 x 1000 two-dimensional array of objects. (Since they are
instances of application classes it appears that the array module is
useless;



Did you try it with an array object using the array module?


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Overcoming python performance penalty for multicore CPU

2010-02-02 Thread exarkun

On 11:02 pm, na...@animats.com wrote:

   I know there's a performance penalty for running Python on a
multicore CPU, but how bad is it?  I've read the key paper
(www.dabeaz.com/python/GIL.pdf), of course.  It would be adequate
if the GIL just limited Python to running on one CPU at a time,
but it's worse than that; there's excessive overhead due to
a lame locking implementation.  Running CPU-bound multithreaded
code on a dual-core CPU runs HALF AS FAST as on a single-core
CPU, according to Beasley.


It's not clear that Beasley's performance numbers apply to any platform 
except OS X, which has a particularly poor implementation of the 
threading primitives CPython uses to implement the GIL.


You should check to see if it actually applies to your deployment 
environment.


The GIL has been re-implemented recently.  Python 3.2, I think, will 
include the new implementation, which should bring OS X performance up 
to the level of other platforms.  It may also improve certain other 
aspects of thread switching.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Function name unchanged in error message

2010-01-29 Thread exarkun

On 02:10 pm, c...@rebertia.com wrote:
On Fri, Jan 29, 2010 at 5:30 AM, andrew cooke and...@acooke.org 
wrote:

Is there any way to change the name of the function in an error
message?  In the example below I'd like the error to refer to bar(),
for example (the motivation is related to function decorators - I'd like
the wrapper function to give the same name)

>>> def foo():
...     return 7
...
>>> foo.__name__ = 'bar'
>>> foo(123)

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: foo() takes no arguments (1 given)


It gets weirder:

>>> print(foo)
<function bar at 0x37b830>


The name is represented in (at least) two places, on the function object 
and on the code object:


    >>> def foo(): pass
    ...
    >>> foo.func_name
    'foo'
    >>> foo.func_code.co_name
    'foo'
    >>> foo.func_name = 'bar'
    >>> foo
    <function bar at 0xb74f2cdc>
    >>> foo.func_code.co_name = 'baz'
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: readonly attribute
new.function and new.code will let you construct new objects with 
different values (and copying over whichever existing attributes you 
want to preserve).


Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Portable way to tell if a process is still alive

2010-01-28 Thread exarkun

On 10:50 am, gand...@shopzeus.com wrote:


Suppose we have a program that writes its process id into a pid file. 
Usually the program deletes the pid file when it exits... But in 
some cases (for example, killed with kill -9 or TerminateProcess) pid 
file is left there. I would like to know if a process (given with its 
process id) is still running or not. I know that this can be done 
with OS specific calls. But that is not portable. It can also be done 
by executing ps -p 23423 with subprocess module, but that is not 
portable either. Is there a portable way to do this?


If not, would it be a good idea to implement this (I think very 
primitive) function in the os module?


Not only is there no way to do it portably, there is no way to do it
reliably for the general case. The problem is that processes do not 
have
unique identifiers. A PID only uniquely identifies a running process; 
once

the process terminates, its PID becomes available for re-use.


Non-general case: the process is a service and only one instance should 
be running. There could be a pid file left on the disk. It is possible 
to e.g. mount procfs, and check if the given PID belongs to a command 
line / executed program that is in question. It cannot be guaranteed 
that a service will always delete its pid file when it exits. This 
happens, for example, when somebody kills it with kill -9 or it exits on 
signal 11, etc. It actually did happen to me, and then the service 
could not be restarted because the PID file was there. (It is an error 
to run two instances of the same service, but it is also an error to 
not run it.) What I would like to do upon startup is to check if the 
process is already running. This way I could create a guardian that 
checks other services, and (re)starts them if they stopped working.


And no, it is not a solution to write good a service that will never 
stop, because:


1. It is particularly not possible in my case - there is a software bug 
in a third-party library that causes my service to exit on various weird 
signals.
2. It is not possible anyway. There are users killing processes 
accidentally, and other unforeseen bugs.
3. In a mission critical environment, I would use a guardian even if 
guarded services are not likely to stop


I understand that this is a whole different question now, and possibly 
there is no portable way to do it.  Just I wonder if there are others 
facing a similar problem here. Any thoughs or comments - is it bad that 
I would like to achieve? Is there a better approach?


I've been pondering using a listening unix socket for this.  As long as 
the process is running, a client can connect to the unix socket.  As 
soon as the process isn't, no matter the cause, clients can no longer 
connect to it.


A drawback of this approach in some cases is probably that the process 
should be accepting these connections (and then dropping them).  This 
may not always be easy to add to an existing app.
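
A sketch of the client side of that idea (the socket path is made up):

    import socket

    def service_is_running(path='/var/run/myservice.sock'):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
        except socket.error:
            return False
        else:
            return True
        finally:
            s.close()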


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Great example of a python module/package following up to date conventions.

2010-01-28 Thread exarkun

On 07:28 pm, j...@joshh.co.uk wrote:

On 2010-01-28, Big Stu stu.dohe...@gmail.com wrote:

I'm hoping someone on here can point me to an example of a python
package that is a great example of how to put it all together.  I'm
hoping for example code that demonstrates:


Surely most of the Standard Library should satisfy all your
requirements?


Have you actually looked at any of the standard library?

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Great example of a python module/package following up to date conventions.

2010-01-28 Thread exarkun

On 07:49 pm, stu.dohe...@gmail.com wrote:



Have you actually looked at any of the standard library?

Jean-Paul


I'm looking at urllib2 right now and it is covering a bunch of the
bases I'm looking for.  And grepping in the /usr/lib/python2.5/ folder
for import statements on various things I'm interested in is bringing
up some good examples to check out as well.  Given that I'm still
fairly novice to this I'm not yet in the position to make a good
judgment on what is and isn't a good python practice so I was hoping
someone on here might be able to point at a module or 2 that has
really done a good job of following best practices.

Seems like a reasonable question with an answer that others in a
similar position to me might find useful.


You're right.  I was actually wondering if Josh had looked before 
suggesting it. :)  The stdlib varies wildly in quality, with much of it 
not serving as a particularly good example of most of the points you 
mentioned.


urllib2 is probably better than a lot, but, for example, even it only 
manages about 75% line coverage by its test suite.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Library support for Python 3.x

2010-01-27 Thread exarkun

On 07:03 pm, no.em...@nospam.invalid wrote:

a...@pythoncraft.com (Aahz) writes:

From my POV, your question would be precisely identical if you had
started your project when Python 2.3 was just released and wanted to
know if the libraries you selected would be available for Python 2.6.


I didn't realize 2.6 broke libraries that had worked in 2.3, at least 
on

any scale.  Did I miss something?


Lots of tiny things change.  Any of these can break a library.  With the 
3 releases between 2.3 and 2.6, there are lots of opportunities for 
these changes.  I don't know what you mean by "any scale", but I know 
that I've seen lots of things break on 2.6 that worked on 2.3, 2.4, or 
2.5.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: myths about python 3

2010-01-27 Thread exarkun

On 10:07 pm, pavlovevide...@gmail.com wrote:

On Jan 27, 12:56 pm, John Nagle na...@animats.com wrote:

Arguably, Python 3 has been rejected by the market.


No it's not fathomably arguable, because there's no reasonable way
that Python 3 could have fully replaced Python 2 so quickly.

At best, you could reasonably argue there hasn't been enough time to
tell.

Instead, there's
now Python 2.6, Python 2.7, and Python 2.8.


It was always the plan to continue developing Python 2.x alongside
Python 3.x during the transition period.

Last I heard, don't remember where, the plan was for Python 2.7 to be
the last version in the Python 2 line.  If that's true, Python 3
acceptance is further along at this point than anticipated, since they
originally thought they might have to go up to 2.9.


This assumes that the decision to stop making new 2.x releases is based 
on Python 3 adoption, rather than on something else.  As far as I can 
tell, it's based on the personal desire of many of the core developers 
to stop bothering with 2.x.  In other words, it's more a gauge of 
adoption of Python 3 amongst Python core developers.


Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittest help needed!

2010-01-14 Thread exarkun

On 06:33 pm, rolf.oltm...@gmail.com wrote:

Hi Python gurus,

I'm quite new to Python and have a problem. Following code resides in
a file named test.py
---
import unittest


class result(unittest.TestResult):
    pass


class tee(unittest.TestCase):
    def test_first(self):
        print 'first test'
        print '-'
    def test_second(self):
        print 'second test'
        print '-'
    def test_final(self):
        print 'final method'
        print '-'

r = result()
suite = unittest.defaultTestLoader.loadTestsFromName('test.tee')

suite.run(r)

---

Following is the output when I run it
---
final method
-
first test
-
second test
-
final method
-
first test
-
second test
-

---

Looks like it's running every test twice, I cannot figure out why?


When you run test.py, it gets to the loadTestsFromName line.  There, it
imports the module named test in order to load tests from it.  To import
that module, it runs test.py again.  By the time it finishes running the
contents of test.py there, it has run all of your tests once, since part
of test.py is suite.run(r).  Having finished that, the import of the test
module is complete and the loadTestsFromName call completes.  Then, the
suite.run(r) line runs again, and all your tests run again.

You want to protect the suite stuff in test.py like this:

   if __name__ == '__main__':
   ...

Or you want to get rid of it entirely and use `python -m unittest 
test.py`

(with a sufficiently recent version of Python), or another runner, like
Twisted Trial or one of the many others available.
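
Concretely, the guarded tail of test.py would look something like this:

    if __name__ == '__main__':
        r = result()
        suite = unittest.defaultTestLoader.loadTestsFromName('test.tee')
        suite.run(r)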

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Writing a string.ishex function

2010-01-14 Thread exarkun

On 08:15 pm, da...@druid.net wrote:

On 14 Jan 2010 19:19:53 GMT
Duncan Booth duncan.bo...@invalid.invalid wrote:

 ishex2 = lambda s: not(set(s)-set(string.hexdigits)) # Yours
 ishex3 = lambda s: not set(s)-set(string.hexdigits)  # Mine

 I could actually go three better:

 ishex3=lambda s:not set(s)-set(string.hexdigits)

But none of those pass your own ishex('') should return False test.


Good point.  Obviously a unit test was missing.

Of course, all this is good clean fun but I wonder how useful an ishex
method really is.  Personally I would tend to do this instead.

try: x = isinstance(s, int) and s or int(s, 0)
except ValueError: [handle invalid input]

IOW return the value whether it is a decimal string (e.g. 12), a hex
string (e.g. 0xaf) or even if it is already an integer.  Of course,
you still have to test for '' and None.


Still missing some unit tests.  This one fails for 0.  Spend a few more 
lines and save yourself some bugs. :)
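
For the curious: the and/or idiom treats 0 as false, so the expression 
falls through to int(s, 0) with an integer argument:

    >>> s = 0
    >>> isinstance(s, int) and s or int(s, 0)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: int() can't convert non-string with explicit base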


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Do I have to use threads?

2010-01-06 Thread exarkun

On 04:26 am, adityashukla1...@gmail.com wrote:

Hello people,

I have 5 directories corresponding to 5 different urls. I want to
download images from those urls and place them in the respective
directories. I have to extract the contents and download them
simultaneously. I can extract the contents and do them one by one. My
question is: for doing it simultaneously, do I have to use threads?

Please point me in the right direction.


See Twisted,

 http://twistedmatrix.com/

in particular, Twisted Web's asynchronous HTTP client,

 http://twistedmatrix.com/documents/current/web/howto/client.html
 http://twistedmatrix.com/documents/current/api/twisted.web.client.html
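
A rough sketch of what that looks like with the older getPage API (the 
URLs and paths are made up):

    from twisted.internet import defer, reactor
    from twisted.web.client import getPage

    def save(data, path):
        open(path, 'wb').write(data)

    downloads = [
        getPage('http://example.com/a.png').addCallback(save, 'dir1/a.png'),
        getPage('http://example.com/b.png').addCallback(save, 'dir2/b.png'),
        ]
    # The downloads run concurrently; stop the reactor when all finish.
    defer.DeferredList(downloads).addCallback(lambda ignored: reactor.stop())
    reactor.run()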

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Speeding up network access: threading?

2010-01-04 Thread exarkun

On 04:22 pm, m...@privacy.net wrote:

Hello,

what would be best practise for speeding up a larger number of http-get 
requests done via urllib? Until now they are made in sequence, each 
request taking up to one second. The results must be merged into a 
list, while the original sequence needs not to be kept.


I think speed could be improved by parallelizing. One could use multiple 
threads.
Are there any python best practises, or even existing modules, for 
creating and handling a task queue with a fixed number of concurrent 
threads?


Using multiple threads is one approach.  There are a few thread pool 
implementations lying about; one is part of Twisted, 
http://twistedmatrix.com/documents/current/api/twisted.python.threadpool.ThreadPool.html.
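
For comparison, a minimal fixed-size pool using only the standard library 
might look like this (the URLs are made up):

    import threading
    import urllib2
    import Queue

    urls = ['http://example.com/a', 'http://example.com/b']
    tasks = Queue.Queue()
    for url in urls:
        tasks.put(url)

    results = []
    results_lock = threading.Lock()

    def worker():
        while True:
            try:
                url = tasks.get_nowait()
            except Queue.Empty:
                return
            data = urllib2.urlopen(url).read()
            with results_lock:
                results.append(data)

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()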


Another approach is to use non-blocking or asynchronous I/O to make 
multiple requests without using multiple threads.  Twisted can help you 
out with this, too.  There's two async HTTP client APIs available.  The 
older one:


http://twistedmatrix.com/documents/current/api/twisted.web.client.getPage.html
http://twistedmatrix.com/documents/current/api/twisted.web.client.HTTPClientFactory.html

And the newer one, introduced in 9.0:

http://twistedmatrix.com/documents/current/api/twisted.web.client.Agent.html

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: subprocess.Popen and ordering writes to stdout and stderr

2009-12-17 Thread exarkun

On 09:15 pm, ch...@simplistix.co.uk wrote:

Hi All,

I have this simple function:

from subprocess import Popen, PIPE, STDOUT

def execute(command):
    process = Popen(command.split(), stderr=STDOUT, stdout=PIPE)
    return process.communicate()[0]

..but my unit test for it fails:

from testfixtures import tempdir,compare
from unittest import TestCase

class TestExecute(TestCase):

    @tempdir()
    def test_out_and_err(self, d):
        path = d.write('test.py', '\n'.join((
            "import sys",
            "sys.stdout.write('stdout\\n')",
            "sys.stderr.write('stderr\\n')",
            "sys.stdout.write('stdout2\\n')",
            )), path=True)
        compare('stdout\nstderr\nstdout2\n',
                execute(sys.executable+' '+path))

...because:

AssertionError:
@@ -1,4 +1,4 @@
-stdout
-stderr
-stdout2
+stdout
+stdout2
+stderr

...the order of the writes isn't preserved.
How can I get this to be the case?


You probably just need to flush stdout and stderr after each write.  You 
set them up to go to the same underlying file descriptor, but they still 
each have independent buffering on top of that.
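
In other words, test.py itself can force the interleaving by flushing (a 
sketch of the adjusted script):

    import sys

    sys.stdout.write('stdout\n'); sys.stdout.flush()
    sys.stderr.write('stderr\n'); sys.stderr.flush()
    sys.stdout.write('stdout2\n'); sys.stdout.flush()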


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: subprocess.Popen and ordering writes to stdout and stderr

2009-12-17 Thread exarkun

On 09:56 pm, ch...@simplistix.co.uk wrote:

exar...@twistedmatrix.com wrote:

How can I get this to be the case?


You probably just need to flush stdout and stderr after each write. 
You set them up to go to the same underlying file descriptor, but they 
still each have independent buffering on top of that.


Okay, but if I do:

os.system(sys.executable+' '+path)

...with test.py as-is, I get things in the correct order.


libc is probably giving you line buffering when you use os.system 
(because the child process inherits the parent's stdio, and the parent's 
stdio is probably a pty, and that's the policy libc implements).


When you use subprocess.Popen, the child process gets a pipe (ie, not a 
pty) for its stdout/err, and libc gives you block buffering instead.


This makes the difference, since your test writes all end with \n - 
flushing the libc buffer when it's in line buffered mode, but not 
otherwise.


Try the os.system version with the parent process's stdio not attached 
to a pty (say, 'cat | program | cat') or try giving the subprocess.Popen 
version a pty of its own (I'm not sure how you do this with 
subprocess.Popen, though).  You should be able to observe the behavior 
change based on this.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Dangerous behavior of list(generator)

2009-12-14 Thread exarkun

On 06:46 am, tjre...@udel.edu wrote:

On 12/13/2009 10:29 PM, exar...@twistedmatrix.com wrote:

Doesn't matter. Sometimes it makes sense to call it directly.


It only makes sense to call next (or .__next__) when you are prepared 
to explicitly catch StopIteration within a try..except construct.

You did not catch it, so it stopped execution.

Let me repeat: StopIteration is intended only for stopping iteration. 
Outside that use, it is a normal exception with no special meaning.


You cut out the part of my message where I wrote that one might have 
forgotten the exception handling code that you posit is required, and 
that the current behavior makes debugging this situation unnecessarily 
challenging.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Dangerous behavior of list(generator)

2009-12-14 Thread exarkun

On 02:58 pm, m...@egenix.com wrote:

exar...@twistedmatrix.com wrote:

On 08:45 am, tjre...@udel.edu wrote:

Tom Machinski wrote:

In most cases, `list(generator)` works as expected. Thus,
`list(generator expression)` is generally equivalent to 
`[generator

expression]`.

Here's a minimal case where this equivalence breaks, causing a 
serious

and hard-to-detect bug in a program:

   def sit(): raise StopIteration()


StopIteration is intended to be used only within the .__next__ method
of iterators. The devs know that other 'off-label' use results in the
inconsistency you noted, but their and my view is 'don't do that'.


Which is unfortunate, because it's not that hard to get StopIteration
without explicitly raising it yourself and this behavior makes it
difficult to debug such situations.

What's with this view, exactly?  Is it just that it's hard to 
implement

the more desirable behavior?


I'm not exactly sure what you're asking for.

The StopIteration exception originated as part of the for-loop
protocol. Later on it was generalized to apply to generators
as well.

The reason for using an exception is simple: raising and catching
exceptions is fast at C level and since the machinery for
communicating exceptions up the call stack was already there
(and doesn't interfere with the regular return values), this
was a convenient method to let the upper call levels know
that an iteration has ended (e.g. a for-loop 4 levels up the
stack).

I'm not sure whether that answers your question, but it's the
reason for things being as they are :-)


I'm asking about why the behavior of a StopIteration exception being 
handled from the `expression` of a generator expression to mean stop 
the loop is accepted by the devs as acceptable.  To continue your 
comparison to for loops, it's as if a loop like this:


   for a in b:
   c

actually meant this:

   for a in b:
   try:
   c
   except StopIteration:
   break

Note, I know *why* the implementation leads to this behavior.  I'm 
asking why the devs *accept* this.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Dangerous behavior of list(generator)

2009-12-14 Thread exarkun

On 06:00 pm, tjre...@udel.edu wrote:

On 12/14/2009 10:21 AM, exar...@twistedmatrix.com wrote:

I'm asking about why the behavior of a StopIteration exception being
handled from the `expression` of a generator expression to mean stop
the loop is accepted by the devs as acceptable.


Any unhandled exception within a loop stops the loop,
and the exception is passed to the surrounding code.

To continue your
comparison to for loops, it's as if a loop like this:

for a in b:
c

actually meant this:

for a in b:
try:
c
except StopIteration:
break


No it does not.


No what does not?  I said "it's as if".  This is a hypothetical.  I'm 
not claiming this is the actual behavior of anything.

Note, I know *why* the implementation leads to this behavior.


You do not seem to know what the behavior is.
Read what I wrote last night.


Well, I'm a bit tired of this thread.  Please disregard my question 
above.  I'm done here.  Sorry for the confusion.  Have a nice day.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Dangerous behavior of list(generator)

2009-12-13 Thread exarkun

On 08:45 am, tjre...@udel.edu wrote:

Tom Machinski wrote:

In most cases, `list(generator)` works as expected. Thus,
`list(generator expression)` is generally equivalent to `[generator
expression]`.

Here's a minimal case where this equivalence breaks, causing a serious
and hard-to-detect bug in a program:

   def sit(): raise StopIteration()


StopIteration is intended to be used only within the .__next__ method 
of iterators. The devs know that other 'off-label' use results in the 
inconsistency you noted, but their and my view is 'don't do that'.


Which is unfortunate, because it's not that hard to get StopIteration 
without explicitly raising it yourself and this behavior makes it 
difficult to debug such situations.


What's with this view, exactly?  Is it just that it's hard to implement 
the more desirable behavior?


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Dangerous behavior of list(generator)

2009-12-13 Thread exarkun

On 08:18 pm, st...@remove-this-cybersource.com.au wrote:

On Sun, 13 Dec 2009 14:35:21 +, exarkun wrote:
StopIteration is intended to be used only within the .__next__ method 
of

iterators. The devs know that other 'off-label' use results in the
inconsistency you noted, but their and my view is 'don't do that'.


Which is unfortunate, because it's not that hard to get StopIteration
without explicitly raising it yourself and this behavior makes it
difficult to debug such situations.


I can't think of any way to get StopIteration without explicitly 
raising
it yourself. It's not like built-ins or common data structures 
routinely
raise StopIteration. I don't think I've *ever* seen a StopIteration 
that

I didn't raise myself.


Call next on an iterator.  For example:  iter(()).next()


What's with this view, exactly?  Is it just that it's hard to 
implement

the more desirable behavior?


What is that more desirable behaviour? That StopIteration is used to
signal that Python should stop iterating except when you want it to be
ignored? Unfortunately, yes, it's quite hard to implement do what the
caller actually wants, not what he asked for behaviour -- and even if 
it

were possible, it goes against the grain of the Zen of Python.

If you've ever had to debug faulty Do What I Mean software, you'd see
this as a good thing.


I have plenty of experience developing and debugging software, Steven. 
Your argument is specious, as it presupposes that only two possibilities 
exist: the current behavior or some kind of magical faerie land 
behavior.


I'm surprised to hear you say that the magical faerie land behavior 
isn't desirable either, though.  I'd love a tool that did what I wanted, 
not what I asked.  The only serious argument against this, I think, is 
that it is beyond our current ability to create (and so anyone claiming 
to be able to do it is probably mistaken).


You chopped out all the sections of this thread which discussed the more 
desirable behavior.  You can go back and read them in earlier messages 
if you need to be reminded.  I'm not talking about anything beyond 
what's already been raised.


I'm pretty sure I know the answer to my question, though - it's hard to 
implement, so it's not implemented.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Dangerous behavior of list(generator)

2009-12-13 Thread exarkun

On 02:50 am, lie.1...@gmail.com wrote:

On 12/14/2009 9:45 AM, exar...@twistedmatrix.com wrote:

On 08:18 pm, st...@remove-this-cybersource.com.au wrote:

On Sun, 13 Dec 2009 14:35:21 +, exarkun wrote:

StopIteration is intended to be used only within the .__next__
method of
iterators. The devs know that other 'off-label' use results in the
inconsistency you noted, but their and my view is 'don't do that'.


Which is unfortunate, because it's not that hard to get 
StopIteration

without explicitly raising it yourself and this behavior makes it
difficult to debug such situations.


I can't think of any way to get StopIteration without explicitly 
raising
it yourself. It's not like built-ins or common data structures 
routinely
raise StopIteration. I don't think I've *ever* seen a StopIteration 
that

I didn't raise myself.


Call next on an iterator. For example: iter(()).next()


.next() is not meant to be called directly


Doesn't matter.  Sometimes it makes sense to call it directly.  And I 
was just giving an example of a way to get StopIteration raised without 
doing it yourself - which is what Steve said he couldn't think of.


I'm surprised to hear you say that the magical faerie land behavior
isn't desirable either, though. I'd love a tool that did what I 
wanted,

not what I asked. The only serious argument against this, I think, is
that it is beyond our current ability to create (and so anyone 
claiming

to be able to do it is probably mistaken).


In your world, this is what happens:
 list = [a, b, c]
 # print list
 print list
[a, b, c]
 # make a copy of list
 alist = list(llst) # oops a mistype
 alist = alist - ] + , d]
 print alist
[a, b, c, d]
 alist[:6] + i, + alist[6:]
 print alist
[a, i, b, c, d]
 print alist
 # hearing the sound of my deskjet printer...
 C:\fikle.text.write(alist)
 print open(C:\file.txt).read()
<h1>a</h1>
<ul>
<li>i</li>
<li>b</li>
<li>c d</li>
 # great, exactly what I needed


I don't understand the point of this code listing, sorry.  I suspect you 
didn't completely understand the magical faerie land I was describing - 
where all your programs would work, no matter what mistakes you made 
while writing them.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Dangerous behavior of list(generator)

2009-12-13 Thread exarkun

On 04:11 am, ste...@remove.this.cybersource.com.au wrote:

On Sun, 13 Dec 2009 22:45:58 +, exarkun wrote:

On 08:18 pm, st...@remove-this-cybersource.com.au wrote:

On Sun, 13 Dec 2009 14:35:21 +, exarkun wrote:
StopIteration is intended to be used only within the .__next__ 
method

of
iterators. The devs know that other 'off-label' use results in the
inconsistency you noted, but their and my view is 'don't do that'.


Which is unfortunate, because it's not that hard to get 
StopIteration

without explicitly raising it yourself and this behavior makes it
difficult to debug such situations.


I can't think of any way to get StopIteration without explicitly 
raising
it yourself. It's not like built-ins or common data structures 
routinely
raise StopIteration. I don't think I've *ever* seen a StopIteration 
that

I didn't raise myself.


Call next on an iterator.  For example:  iter(()).next()


Or in more recent versions of Python, next(iter(())).

Good example. But next() is a special case, and since next() is
documented as raising StopIteration if you call it and it raises
StopIteration, you have raised it yourself. Just not explicitly.


But if you mistakenly don't catch it, and you're trying to debug your 
code to find this mistake, you probably won't be aided in this pursuit 
by the exception-swallowing behavior of generator expressions.


What's with this view, exactly?  Is it just that it's hard to 
implement

the more desirable behavior?


What is that more desirable behaviour? That StopIteration is used 
to
signal that Python should stop iterating except when you want it to 
be
ignored? Unfortunately, yes, it's quite hard to implement do what 
the
caller actually wants, not what he asked for behaviour -- and even 
if

it were possible, it goes against the grain of the Zen of Python.

If you've ever had to debug faulty Do What I Mean software, you'd 
see

this as a good thing.


I have plenty of experience developing and debugging software, Steven.
Your argument is specious, as it presupposes that only two 
possibilities

exist: the current behavior or some kind of magical faerie land
behavior.

I'm surprised to hear you say that the magical faerie land behavior
isn't desirable either, though.  I'd love a tool that did what I 
wanted,

not what I asked.  The only serious argument against this, I think, is
that it is beyond our current ability to create (and so anyone 
claiming

to be able to do it is probably mistaken).


I'd argue that tools that do what you want rather than what you ask for
are not just currently impossible, but always will be -- no matter how
good the state of the art of artificial intelligent mind-reading 
software

becomes.


That may be true.  I won't try to make any predictions about the 
arbitrarily distant future, though.
You chopped out all the sections of this thread which discussed the 
more

desirable behavior.  You can go back and read them in earlier messages
if you need to be reminded.  I'm not talking about anything beyond
what's already been raised.


I'm glad for you. But would you mind explaining for those of us who aren't
mind-readers what YOU consider the more desirable behaviour?


The behavior of list comprehensions is pretty good.  The behavior of 
constructing a list out of a generator expression isn't as good.  The 
behavior which is more desirable is for a StopIteration raised out of 
the `expression` part of a `generator_expression` to not be treated 
identically to the way a StopIteration raised out of the `genexpr_for` 
part is.  This could provide behavior roughly equivalent to the behavior 
of a list comprehension.
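
A minimal illustration of the difference (Python 2 semantics; since Python
3.7 and PEP 479, a StopIteration escaping a generator is converted to a
RuntimeError instead):

    def sit():
        raise StopIteration()

    # List comprehension: the exception propagates to the caller.
    try:
        [sit() for x in range(3)]
    except StopIteration:
        print 'the list comprehension let StopIteration escape'

    # list(generator expression): the exception is swallowed and you
    # silently get a truncated (here, empty) list instead.
    print list(sit() for x in range(3))    # prints []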


If you're talking the list constructor and list comprehensions treating
StopIteration the same, then I don't think it is at all self-evident 
that
the current behaviour is a bad thing, nor that the only reason for it 
is

that to do otherwise would be hard.


I don't expect it to be self-evident.  I wasn't even trying to convince 
anyone that it's desirable (although I did claim it, so I won't fault 
anyone for making counter-arguments).  The only thing I asked was what 
the motivation for the current behavior is.  If the motivation is that 
it is self-evident that the current behavior is the best possible 
behavior, then someone just needs to say that and my question is 
answered. :)


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Socket question

2009-12-03 Thread exarkun

On 02:52 pm, fasteliteprogram...@gmail.com wrote:
Is there away in python i can connect to a server in socket to two 
servers at the same time or can't it be done?


I'm not sure what you're asking.  Can you clarify?

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: ANN: Twisted 9.0.0

2009-12-02 Thread exarkun

On 12:18 am, tjre...@udel.edu wrote:

Christopher Armstrong wrote:

= Twisted 9.0.0 =

I'm happy to announce Twisted 9, the first (and last) release of
Twisted in 2009. The previous release was Twisted 8.2 in December of
2008. Given that, a lot has changed!

This release supports Python 2.3 through Python 2.6, though it is the
last one that will support Python 2.3. The next release will support
only Python 2.4 and above. Twisted: the framework of the future!


Not unless it supports 3.1+. Is that in the cards (tickets)?


Somewhat.

A description of the plan on stackoverflow: http://bit.ly/6hWqYU

A message with some ticket links from a thread on the twisted-python 
mailing list: http://bit.ly/8csFSa


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Imitating tail -f

2009-11-30 Thread exarkun

On 11:15 am, p...@boddie.org.uk wrote:

On 22 Nov, 05:10, exar...@twistedmatrix.com wrote:


tail -f is implemented by sleeping a little bit and then reading to
see if there's anything new.


This was the apparent assertion behind the 99 Bottles concurrency
example:

http://wiki.python.org/moin/Concurrency/99Bottles

However, as I pointed out (and as others have pointed out here), a
realistic emulation of tail -f would actually involve handling
events from operating system mechanisms. Here's the exchange I had at
the time:

http://wiki.python.org/moin/Concurrency/99Bottles?action=diff&rev2=12&rev1=11

It can be very tricky to think up good examples of multiprocessing
(which is what the above page was presumably intended to investigate),
as opposed to concurrency (which can quite easily encompass responding
to events asynchronously in a single process).

Paul

P.S. What's Twisted's story on multiprocessing support? In my limited
experience, the bulk of the work in providing usable multiprocessing
solutions is in the communications handling, which is something
Twisted should do very well.


Twisted includes a primitive API for launching and controlling child 
processes, reactor.spawnProcess.  It also has several higher-level APIs 
built on top of this aimed at making certain common tasks more 
convenient.  There is also a third-party project called Ampoule which 
provides a process pool to which it is relatively straightforward to 
send jobs and then collect their results.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Can't print Chinese to HTTP

2009-11-30 Thread exarkun

On 05:05 pm, gnarlodi...@gmail.com wrote:

Thanks for the help, but it doesn't work. All I get is an error like:

UnicodeEncodeError: 'ascii' codec can't encode character '\\u0107' in
position 0: ordinal not in range(128)

It does work in Terminal interactively, after I import the sys module.
But my script doesn't act the same. Here is my entire script:

#!/usr/bin/python
print("Content-type:text/plain;charset=utf-8\n\n")
import sys
sys.stdout.buffer.write('f49\n'.encode("utf-8"))

All I get is the despised Internal Server Error with Console
reporting:

malformed header from script. Bad header=\xe6\x99\x89


As the error suggests, you're writing f49 to the headers section of the 
response.  This is because you're not ending the headers section with a 
blank line.  Lines in HTTP end with \r\n, not with just \n.
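
A sketch of one way to get the headers and body out in a well-defined
order is to write everything through the same binary stream (the body
text here is just a stand-in for yours):

    #!/usr/bin/python
    import sys

    out = sys.stdout.buffer
    out.write(b'Content-Type: text/plain; charset=utf-8\r\n')
    out.write(b'\r\n')                     # blank line ends the headers
    out.write('\u6649\n'.encode('utf-8'))  # body; substitute your own text
    out.flush()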


Have you considered using something with fewer sharp corners than CGI? 
You might find it more productive.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Imitating tail -f

2009-11-21 Thread exarkun

On 02:43 am, ivo...@gmail.com wrote:
I'm trying to simply imitate what tail -f does, i.e. read a file, 
wait

until it's appended to and process the new data, but apparently I'm
missing something.

The code is:

f = file(filename, "r", 1)
f.seek(-1000, os.SEEK_END)
ff = fcntl.fcntl(f.fileno(), fcntl.F_GETFL)
fcntl.fcntl(f.fileno(), fcntl.F_SETFL, ff | os.O_NONBLOCK)

pe = select.poll()
pe.register(f)
while True:
    print repr(f.read())
    print pe.poll(1000)

The problem is: poll() always returns that the fd is ready (without
waiting), but read() always returns an empty string. Actually, it
doesn't matter if I turn O_NDELAY on or off. select() does the same.

Any advice?


select(), poll(), epoll, etc. all have the problem where they don't 
support files (in the thing-on-a-filesystem sense) at all.  They just 
indicate the descriptor is readable or writeable all the time, 
regardless.


tail -f is implemented by sleeping a little bit and then reading to 
see if there's anything new.
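
A bare-bones version of that strategy looks something like this (a
sketch, not exactly what any particular tail binary does):

    import os
    import time

    def follow(path, interval=1.0):
        # Yield data as it is appended to the file, polling with a short sleep.
        with open(path) as f:
            f.seek(0, os.SEEK_END)     # start at the current end of the file
            while True:
                chunk = f.read()       # '' when there is nothing new yet
                if chunk:
                    yield chunk
                else:
                    time.sleep(interval)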


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: checking 'type' programmatically

2009-11-20 Thread exarkun

On 10:10 am, mrk...@gmail.com wrote:


Disclaimer: this is for exploring and debugging only. Really.

I can check type or __class__ in the interactive interpreter:

Python 2.6.2 (r262:71600, Jun 16 2009, 16:49:04)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-44)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import subprocess
>>> p=subprocess.Popen(['/bin/ls'],stdout=subprocess.PIPE,stderr=subprocess.PIPE)
>>> p
<subprocess.Popen object at 0xb7f2010c>
>>> (so, se) = p.communicate()
>>> so
'abc.txt\nbak\nbox\nbuild\ndead.letter\nDesktop\nhrs\nmbox\nmmultbench\nmmultbench.c\npyinstaller\nscreenlog.0\nshutdown\ntaddm_import.log\nv2\nvm\nworkspace\n'
>>> se
''
>>> so.__class__
<type 'str'>
>>> type(so)
<type 'str'>
>>> type(se)
<type 'str'>

But when I do something like this in code that is run non-interactively (as 
a normal program):


req.write('stderr type %s<br>' % type(se))
req.write('stderr class %s<br>' % str(se.__class__))

then I get empty output. WTF?

How do I get the type or __class__ into some object that I can display?


Hooray for HTML.

You asked a browser to render "stderr type <type 'str'><br>".  This 
isn't valid HTML, so pretty much any behavior goes.  In this case, the 
browser seems to be discarding the entire <type 'str'> - not too 
surprising, as it has some features in common with an html tag.


Try properly quoting your output (perhaps by generating your html with a 
real html generation library).
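
At a minimum, escaping the text before interpolating it into the page
makes the angle brackets visible instead of being eaten as tags
(cgi.escape is the Python 2 spelling; html.escape in Python 3):

    import cgi

    def html_repr(obj):
        # Escape &, < and > so the browser displays them literally.
        return cgi.escape(str(obj))

    print html_repr(type(''))    # prints &lt;type 'str'&gt;
    # e.g. req.write('stderr type %s<br>' % html_repr(type(se)))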


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: WindowsError is not available on linux?

2009-11-18 Thread exarkun

On 07:53 pm, a...@pythoncraft.com wrote:

In article mailman.599.1258510702.2873.python-l...@python.org,
Peng Yu  pengyu...@gmail.com wrote:


It's not clear to me whether WindowsError is available on linux or
not, after I read the document.


Here's what I told a co-worker to do yesterday:

if os.name == 'nt':
   DiskError = (OSError, WindowsError)
else:
   DiskError = WindowsError

try:
   disk_operation()
except DiskError:
   logit()


This isn't necessary.  WindowsError subclasses OSError.
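
So a portable version of the same idea can just catch OSError
(disk_operation and logit stand in for the real code, as above):

    try:
        disk_operation()
    except OSError:
        # On Windows this also catches WindowsError, since WindowsError
        # is a subclass of OSError; on Linux there is no WindowsError.
        logit()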

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Logic operators with in statement

2009-11-16 Thread exarkun

On 02:02 pm, mr.spoo...@gmail.com wrote:

Hi,
I'm trying to use logical operators (or, and) with the in statement,
but I'm having some problems to understand their behavior.


"and" and "or" have no particular interaction with "in".


In [1]: l = ['3', 'no3', 'b3']

In [2]: '3' in l
Out[2]: True

In [3]: '3' and '4' in l
Out[3]: False

In [4]: '3' and 'no3' in l
Out[4]: True

This seems to work as I expected.


What this actually does is '3' and ('no3' in l).  So it might have 
produced the result you expected, but it didn't work how you expected. 
:)
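
If you want to test membership of both values, spell out each test (or
use all()):

    l = ['3', 'no3', 'b3']

    print '3' in l and '4' in l              # False
    print all(x in l for x in ('3', 'no3'))  # True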


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Stagnant Frame Data?

2009-11-15 Thread exarkun

On 04:00 pm, __pete...@web.de wrote:

Mike wrote:

I'll apologize first for this somewhat lengthy example. It does
however recreate the problem I've run into. This is stripped-down code
from a much more meaningful system.

I have two example classes, AutoChecker and Snapshot that evaluate
variables in their caller's namespace using the frame stack. As
written, the output is not what is expected: the variables evaluate to
stagnant values.

However, if the one indicated line is uncommented, then the result is
as expected.

So my questions are: Is this a bug in Python? Is this an invalid use
of frame data? Why does the single line sys._getframe(1).f_locals
fix the behavior?


A simplified demonstration of your problem:

>>> def f(update):
...     a = locals()
...     x = 42
...     if update: locals()
...     print a
...
>>> f(False)
{'update': False}
>>> f(True)
{'a': {...}, 'x': 42, 'update': True}

The local namespace is not a dictionary, and the locals()/f_locals
dictionary contains a snapshot of the local namespace. Accessing the
f_locals attribute is one way to trigger an update of that snapshot.

What's puzzling is that the same dictionary is reused.


http://bugs.python.org/issue6116 is vaguely related.

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


[ANN] pyOpenSSL 0.10

2009-11-13 Thread exarkun

I'm happy to announce the release of pyOpenSSL 0.10.

pyOpenSSL 0.10 exposes several more OpenSSL APIs, including support for 
running TLS connections over in-memory BIOs, access to the OpenSSL 
random number generator, the ability to pass subject and issuer 
parameters when creating an X509Extension instance, more control over 
PKCS12 creation and an API for exporting PKCS12 objects, and APIs for 
controlling the client CA list servers send to clients.


Several bugs have also been fixed, including a crash when certain 
X509Extension instances are deallocated, a mis-handling of the OpenSSL 
error queue in the X509Name implementation, Windows build issues, and a 
possible double free when using a debug build.


The style of the docstrings for APIs implemented in C has also been 
changed throughout the project to be more useful to Python programmers. 
Extension type objects can also now be used to instantiate those types.


Many thanks to numerous people who contributed patches to this release.

You can find pyOpenSSL 0.10 on the Python Package Index:

   http://pypi.python.org/pypi/pyOpenSSL/0.10

You can now also find the pyOpenSSL documentation there:

   http://packages.python.org/pyOpenSSL/

As part of the ongoing transition away from SourceForge, I won't be 
uploading the release or the documentation to SourceForge.  Please 
continue to use the pyOpenSSL Launchpad page for bug reports:


   https://launchpad.net/pyopenssl

Enjoy!

Jean-Paul Calderone
--
http://mail.python.org/mailman/listinfo/python-list


Re: Cancelling a python thread (revisited...)

2009-11-08 Thread exarkun

On 12:40 pm, s...@uni-hd.de wrote:

On Nov 8, 4:27 am, Carl Banks pavlovevide...@gmail.com wrote:

It doesn't sound like the thread is communicating with the process
much.  Therefore:


There is quite a bit of communication -- the computation results are
visulized while they are generated.


I'm curious how this visualization works, since earlier you said 
something to the effect that there were no shared resources.  If you 
kill a thread and it had opened a window and was drawing on it, with 
most toolkits, you'll end up with a window stuck in your screen, won't 
you?

[snip]

I really don't get that.  If the reason would be that it is too much
work to
implement, then I could accept it.  But saying: We know it is useful,
but we
won't allow to do it, just does not seem reasonable.  Thread
cancellation
might be generally unsafe, but there are cases when it is safe.  It
should be
up to the user to decide it.  There are many things that do harm if
you don't
use them correctly, and of course it would be a bad idea to remove all
of
them from Python.


The CPython philosophy sort of follows the guideline that you should be 
allowed to do bad stuff if you want, except when that bad stuff would 
crash the interpreter (clearly ctypes is an exception to this rule of 
thumb).  I think this is the argument that has been applied in 
opposition to adding thread termination in the past, though I don't 
remember for sure.


Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Most efficient way to pre-grow a list?

2009-11-07 Thread exarkun

On 01:18 am, pavlovevide...@gmail.com wrote:

On Nov 7, 5:05 pm, sturlamolden sturlamol...@yahoo.no wrote:

On 7 Nov, 03:46, gil_johnson gil_john...@earthlink.net wrote:

 I don't have the code with me, but for huge arrays, I have used
 something like:

  arr[0] = initializer
  for i in range(N):
      arr.extend(arr)

 This doubles the array every time through the loop, and you can add
 the powers of 2 to get the desired result.
 Gil

You should really use append instead of extend. The above code is O
(N**2), with append it becomes O(N) on average.


I think the top one is O(N log N), and I'm suspicious that it's even
possible to grow a list in less than O(N log N) time without knowing
the final size in advance.  Citation?  Futhermore why would it matter
to use extend instead of append; I'd assume extend uses the same
growth algorithm.  (Although in this case since the extend doubles the
size of the list it most likely reallocates after each call.)

[None]*N is linear time and is better than growing the list using
append or extend.


The wikipedia page for http://en.wikipedia.org/wiki/Amortized_analysis 
conveniently uses exactly this example to explain the concept of 
amortized costs.
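
A rough way to see why [None]*N wins over growing the list incrementally
is to time the two (numbers vary by machine; this is just a sketch):

    import timeit

    n = 10 ** 6
    print timeit.timeit('[None] * n', 'n = %d' % n, number=10)
    print timeit.timeit('arr = []\nfor i in xrange(n): arr.append(None)',
                        'n = %d' % n, number=10)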


Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Aaaargh! global name 'eggz' is not defined

2009-10-29 Thread exarkun

On 09:52 pm, a...@pythoncraft.com wrote:

In article mailman.2279.1256851983.2807.python-l...@python.org,
Robert Kern  robert.k...@gmail.com wrote:


I like using pyflakes. It catches most of these kinds of typo errors, 
but is

much faster than pylint or pychecker.


Coincidentally, I tried PyFlakes yesterday and was unimpressed with the
way it doesn't work with import *.


Consider it (some very small, I'm sure) motivation to stop using import 
*, which is itself only something used in unimpressive software. ;)


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Web development with Python 3.1

2009-10-25 Thread exarkun

On 25 Oct, 11:52 pm, a...@baselinedata.co.uk wrote:


I am very much new to Python, and one of my first projects is a simple
data-based website. I am starting with Python 3.1 (I can hear many of 
you shouting "don't - start with 2.6"), but as far as I can see, none 
of the popular python-to-web frameworks (Django, CherryPy, web.py, 
etc.) are Python3 compatible yet.


So, what can I use to start my web programming experience using 3.1?

Any help would be appreciated.


"don't - start with 2.6"

Alan

--
http://mail.python.org/mailman/listinfo/python-list

--
http://mail.python.org/mailman/listinfo/python-list


Re: (from stdlib-sig) ctypes or struct from an h file

2009-10-18 Thread exarkun

On 08:13 pm, de...@nospam.web.de wrote:

Yuvgoog Greenle schrieb:

Is there a way that Python and C can have a shared definition for a
binary data structure?

It could be nice if:
1. struct or ctypes had a function that could parse a .h/.c/.cpp file
to auto-generate constructors
or
2. a ctypes definition could be exported to a .h file.

So my question is - is there a way to do this in the std-lib or even 
pypi?



--yuv


ps If this doesn't exist, then I'm probably going to open a project
and would like some tips/ideas.



gccxml can be used to do this, there is a ctypes utilities module that 
works with the output of gccxml.


Diez
--
http://mail.python.org/mailman/listinfo/python-list


ctypes_configure can do this, too, doesn't require gccxml, and works 
with non-gcc compilers.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: pickle's backward compatibility

2009-10-13 Thread exarkun

On 02:48 pm, m...@egenix.com wrote:

exar...@twistedmatrix.com wrote:

On 03:17 pm, pengyu...@gmail.com wrote:

Hi,

If I define my own class and use pickle to serialize the objects in
this class, will the serialized object be successfully read in later
version of python.

What if I serialize (using pickle) an object of a class defined in
python library, will it be successfully read in later version of
python?


Sometimes.  Sometimes not.  Python doesn't really offer any guarantees
regarding this.


I think this needs to be corrected: the pickle protocol versions are
compatible between Python releases, however, there are two things to
consider:

* The default pickle version sometimes changes between minor
  releases.

  This is easy to handle, though, since you can provide the pickle
  protocol version as parameter.

* The pickle protocol has changed a bit between 2.x and 3.x.

  This is mostly due to the fact that Python's native string
  format changed to Unicode in 3.x.


The pickle protocol isn't the only thing that determines whether an 
existing pickle can be loaded.  Consider this very simple example of a 
class which might exist in Python 2.x:


   class Foo:
       def __init__(self):
           self._bar = None

       def bar(self):
           return self._bar

Nothing particularly fancy or interesting going on there.  Say you write 
a pickle that includes an instance of this class.


Now consider this modified version of Foo from Python 2.(x+1):

   class Foo(object):            # The class is new-style now, because
                                 # someone felt like making it new style

       def __init__(self, baz):  # The class requires an argument to __init__
                                 # now to specify some new piece of info

           self.barValue = None  # _bar was renamed barValue because someone
                                 # thought it would make sense to expose the
                                 # info publically

           self._baz = baz

       def bar(self):
           return self.barValue  # Method was updated to use the new name of
                                 # the attribute

Three fairly straightforward changes.  Arguably making Foo new style and 
adding a required __init__ argument are not backwards compatible changes 
to the Foo class itself, but these are changes that often happen between 
Python releases.  I think that most people would not bother to argue 
that renaming _bar to barValue is an incompatibility, though.


But what happens when you try to load your Python 2.x pickle in Python 
2.(x+1)?


First, you get an exception like this:

 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "/usr/lib/python2.5/pickle.py", line 1374, in loads
     return Unpickler(file).load()
   File "/usr/lib/python2.5/pickle.py", line 858, in load
     dispatch[key](self)
   File "/usr/lib/python2.5/pickle.py", line 1070, in load_inst
     self._instantiate(klass, self.marker())
   File "/usr/lib/python2.5/pickle.py", line 1060, in _instantiate
     value = klass(*args)
 TypeError: in constructor for Foo: __init__() takes exactly 2 arguments (1 given)


But let's say the class didn't get changed to new-style after all... 
Then you can load the pickle, but what happens when you try to call the 
bar method?  You get this exception:


 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "<stdin>", line 6, in bar
 AttributeError: Foo instance has no attribute 'barValue'

So these are the kinds of things I am talking about when I say that 
there aren't really any guarantees.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: What do I do now?

2009-10-12 Thread exarkun

On 11 Oct, 10:53 pm, fordhai...@gmail.com wrote:
I've been programming since about 3 years, and come to think of it 
never
written anything large. I know a few languages: c, python, perl, java. 
Right

now, I just write little IRC bots that basically don't do anything.

I have two questions:

1) What should I start programming (project that takes 1-2 months, not 
very

short term)?


You should make sure you pick something you find interesting.  It can be 
a challenge to work on a long term project that isn't appealing to you 
personally in some way.

2) Whtat are some good open source projects I can start coding for?


I think that Twisted is one of the better projects to work on if you're 
looking to improve your skills.  We have a well-structured development 
process which includes lots of feedback from other developers.  This 
sort of feedback is one of the best ways I know of to improve ones 
development skills.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: pickle's backward compatibility

2009-10-12 Thread exarkun

On 03:17 pm, pengyu...@gmail.com wrote:

Hi,

If I define my own class and use pickle to serialize the objects in
this class, will the serialized object be successfully read in later
version of python.

What if I serialize (using pickle) an object of a class defined in
python library, will it be successfully read in later version of
python?


Sometimes.  Sometimes not.  Python doesn't really offer any guarantees 
regarding this.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Concurrent threads to pull web pages?

2009-10-02 Thread exarkun

On 05:48 am, wlfr...@ix.netcom.com wrote:

On Fri, 02 Oct 2009 01:33:18 -, exar...@twistedmatrix.com declaimed
the following in gmane.comp.python.general:

There's no need to use threads for this.  Have a look at Twisted:

  http://twistedmatrix.com/trac/


Strange... While I can easily visualize how to convert the 
problem

to a task pool -- especially given that code to do a single occurrence
is already in place...

... conversion to an event-dispatch based system is something 
/I/

can not imagine...


The cool thing is that there's not much conversion to do from the single 
request version to the multiple request version, if you're using 
Twisted.  The single request version looks like this:


   getPage(url).addCallback(pageReceived)

And the multiple request version looks like this:

   getPage(firstURL).addCallback(pageReceived)
   getPage(secondURL).addCallback(pageReceived)

Since the APIs don't block, doing things concurrently ends up being the 
easy thing.
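
Fleshed out into something runnable (a rough sketch against the current
getPage API; the URLs and callback names are placeholders):

    from twisted.internet import reactor
    from twisted.internet.defer import DeferredList
    from twisted.web.client import getPage

    urls = ['http://example.com/a', 'http://example.com/b']

    def pageReceived(contents):
        print 'got %d bytes' % len(contents)
        return contents

    def allDone(results):
        reactor.stop()

    ds = [getPage(url).addCallback(pageReceived) for url in urls]
    DeferredList(ds).addCallback(allDone)
    reactor.run()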


Not to say it isn't a bit of a challenge to get into this mindset, but I 
think anyone who wants to put a bit of effort into it can manage. :) 
Getting used to using Deferreds in the first place (necessary to 
write/use even the single request version) is probably where more people 
have trouble.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python RPG Codebase

2009-10-01 Thread exarkun

On 05:46 am, jackd...@gmail.com wrote:
On Thu, Oct 1, 2009 at 1:22 AM, Lanny lan.rogers.b...@gmail.com 
wrote:

I've been thinking about putting together a text based RPG written
fully in Python, possibly expanding to a MUD system. I'd like to know
if anyone feels any kind of need for this thing or if I'd be wasting
my time, and also if anyone would be interested in participating,
because of the highly modular nature of any RPG it should make for
easy cooperation.


You might not be aware that twisted (the popular asynchronous TCP/IP
server) was started as a MUD project.  It grew to do many other things
but Gylph still hacks on his MUD code in fits and starts (interesting
stuff too, not your typical LPC/Mush/MOO/Diku codebase).  There are
also a couple pure-python MUD clients if you're into that kind of
thing.

I still follow this stuff in passing because I learned more in
undergrad hacking LPC than I did in my classes proper (probably
because I spent hundreds of hours in class but thousands of hours
writing LPC).


This project's current home is 
http://divmod.org/trac/wiki/DivmodImaginary.  Unfortunately, there's 
not much there. :)  I'm hoping to give a talk on Imaginary at PyCon next 
year.  Part of my preparation for that will be writing a lot more 
documentation, improving the web site, and generally making things 
friendlier towards newcomers.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Concurrent threads to pull web pages?

2009-10-01 Thread exarkun

On 1 Oct, 09:28 am, nos...@nospam.com wrote:

Hello

I recently asked how to pull companies' ID from an SQLite 
database,

have multiple instances of a Python script download each company's web
page from a remote server, eg. www.acme.com/company.php?id=1, and use
regexes to extract some information from each page.

I need to run multiple instances to save time, since each page takes
about 10 seconds to be returned to the script/browser.

Since I've never written a multi-threaded Python script before, to
save time investigating, I was wondering if someone already had a
script that downloads web pages and save some information into a
database.


There's no need to use threads for this.  Have a look at Twisted:

 http://twistedmatrix.com/trac/

Here's an example of how to use the Twisted HTTP client:

 http://twistedmatrix.com/projects/web/documentation/examples/getpage.py

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Concurrent threads to pull web pages?

2009-10-01 Thread exarkun

On 01:36 am, k...@kyleterry.com wrote:

On Thu, Oct 1, 2009 at 6:33 PM, exar...@twistedmatrix.com wrote:

On 1 Oct, 09:28 am, nos...@nospam.com wrote:

Hello

   I recently asked how to pull companies' ID from an SQLite 
database,
have multiple instances of a Python script download each company's 
web

page from a remote server, eg. www.acme.com/company.php?id=1, and use
regexes to extract some information from each page.

I need to run multiple instances to save time, since each page takes
about 10 seconds to be returned to the script/browser.

Since I've never written a multi-threaded Python script before, to
save time investigating, I was wondering if someone already had a
script that downloads web pages and save some information into a
database.


There's no need to use threads for this.  Have a look at Twisted:

 http://twistedmatrix.com/trac/

Here's an example of how to use the Twisted HTTP client:

http://twistedmatrix.com/projects/web/documentation/examples/getpage.py


I don't think he was looking for a framework... Specifically a 
framework

that you work on.


He's free to use anything he likes.  I'm offering an option he may not 
have been aware of before.  It's okay.  It's great to have options.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Twisted PB: returning result as soon as ready

2009-09-28 Thread exarkun

On 25 Sep, 01:25 pm, jacopo.pe...@gmail.com wrote:

In the following chunk of code the CLIENT receives both the results
from "compute" at the same time (i.e. when the second one has
finished). This way it cannot start "elaborateResult" on the first
result while the SERVER is still running the second "compute".
How could I get the first result as soon as ready and therefore
parallelize things?
thanks, Jacopo

SERVER:

fibo=Fibonacci()
fact=pb.PBServerFactory(fibo)
reactor.listenTCP(port,  fact)
reactor.run()

CLIENT:

fact=pb.PBClientFactory()
reactor.connectTCP(host, port,   fact)
d=fact.getRootObject()
n1=1
d.addCallback(lambda obj: obj.callRemote("compute", n1))
d.addCallback(elaborateResult)

d2=fact.getRootObject()
n2=1
d2.addCallback(lambda obj: obj.callRemote("compute", n2))
d2.addCallback(elaborateResult)

reactor.run()


elaborateResult will be called the first time as soon as the Deferred 
returned by the first compute call fires.  This will happen as soon as 
the client receives the response from the server.


If you're seeing the first call of elaborateResult not happen until 
after the server has responded to the second compute call's Deferred 
fires, then it's probably because the two Deferreds are firing at almost 
exactly the same time, which means the server is returning the results 
at almost exactly the same time.


Considering your questions in another thread, my suspicion is that your 
Fibonacci calculator is blocking the reactor with its operation, and so 
even though it finishes doing the first calculation long before it 
finishes the second, it cannot actually *send* the result of the first 
calculation because the second calculation blocks it from doing so. 
Once the second calculation completes, nothing is blocking the reactor 
and both results are sent to the client.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Re: Twisted PB: returning result as soon as ready

2009-09-28 Thread exarkun

On 06:06 am, jacopo.pe...@gmail.com wrote:

Jean-Paul, thanks a lot for your patience.
I have read most of "The Twisted Documentation", which I think is 
very good for Deferred and ok for PB but it is really lacking on the 
Reactor. In my case it looks like this is key to achieve what I have in 
mind. I've also just received "Twisted Network Programming Essentials", 
but I don't expect too much from this book. Would you be able to 
suggest me a reference to understand the Reactors? I need to be aware 
of when this is blocked and how to avoid it.


I have a very simple objective in mind. I want to distribute some 
processing to different severs and collect and process results from a 
client as soon as they are ready. To achieve true parallelism it looks 
more complex than expected.


It would probably be best to move to the twisted-python mailing list. 
There are a lot more people there who can help out.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Catch script hangs

2009-09-27 Thread exarkun

On 10:40 pm, ba...@ymail.com wrote:

Due to an ftp server issue, my python script sometimes hangs whilst
downloading, unable to receive any more data. Is there any way that I
could have python check, maybe through a thread or something, whether
it has hanged (or just, if it's still active after 10 seconds, stop
it?). I have looked at threading but there does not seem to be a stop
method on threading, which is a problem. Could the lower level thread
module be a solution?


No.  There are a great many issues which arise when trying to forcibly 
terminate a thread.  Python doesn't expose this functionality because 
most platforms don't provide it in a safe or reliable way.


You could give Twisted's FTP client a try.  Since it isn't blocking, you 
don't need to use threads, so it's easy to have a timeout.


You could also explore solutions based on signal.alarm().  A single- 
threaded signal-based solution has some issues as well, but not nearly 
as many as a thread-based solution.
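
A sketch of the signal.alarm() approach (Unix-only; urlopen stands in
for whatever blocking call is hanging):

    import signal
    import urllib2

    class DownloadTimeout(Exception):
        pass

    def _timed_out(signum, frame):
        raise DownloadTimeout()

    def download(url, timeout=10):
        old_handler = signal.signal(signal.SIGALRM, _timed_out)
        signal.alarm(timeout)      # deliver SIGALRM after `timeout` seconds
        try:
            return urllib2.urlopen(url).read()
        finally:
            signal.alarm(0)        # cancel the pending alarm
            signal.signal(signal.SIGALRM, old_handler)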


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: is this whiff/wsgi claim true?

2009-09-26 Thread exarkun

On 25 Sep, 02:26 pm, aaron.watt...@gmail.com wrote:

Hi folks.  I just modified the WHIFF concepts index page

   http://aaron.oirt.rutgers.edu/myapp/docs/W1000.concepts

To include the following paragraph with a startling and arrogant
claim in the final sentence :)


Developers build WHIFF applications much like they build
static web content, PHP applications, JSP pages, or ASP
pages among others -- the developer drops files into a
directory, and the files are automatically used to respond
to URLs related to the filename.
**This intuitive and ubiquitous approach to organizing
web components is not automatically supported by other
WSGI infrastructures.**



This sounds like Twisted Web's RPY files:

 http://twistedmatrix.com/projects/web/documentation/howto/using-twistedweb.html#auto5


Although you may be talking about something that is more closely tied to 
WSGI than this feature is in Twisted Web.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Twisted: 1 thread in the reactor pattern

2009-09-26 Thread exarkun

On 25 Sep, 05:25 am, jacopo.pe...@gmail.com wrote:

On Sep 24, 7:54 pm, exar...@twistedmatrix.com wrote:

On 07:10 am, jacopo.pe...@gmail.com wrote:
On Sep 23, 5:57 pm, exar...@twistedmatrix.com wrote:
[snip]

[snip]


If you have a function that takes 5 minutes to run, then you're blocking
the reactor thread for 5 minutes and no other events are serviced until
the function finishes running.

You have to avoid blocking the reactor thread if you want other events
to continue to be serviced.  There are various strategies for avoiding
blocking.  Different strategies are appropriate for different kinds of
blocking code.

Jean-Paul


Even if the server is engaged in a 5 minutes processing other arriving
requests of callRemote() are queued and  Deferreds are returned
immediately.


Nope, they're not.  The bytes representing the new requests sit in the 
socket buffer until the function finishes processing and the reactor 
gets an opportunity to read them.


Could you suggest me any doc to better understand?


If you haven't read 
http://twistedmatrix.com/projects/core/documentation/howto/async.html 
yet, that may be a good idea.


Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Twisted: 1 thread in the reactor pattern

2009-09-24 Thread exarkun

On 07:10 am, jacopo.pe...@gmail.com wrote:

On Sep 23, 5:57 pm, exar...@twistedmatrix.com wrote:
[snip]


It isn't possible.  While the remote methods are running, other events
are not being serviced.  This is what is meant when people describe
Twisted as a *cooperative* multitasking system.  Event handlers 
(such
as remote methods) run for a short enough period of time that it 
doesn't
matter that the reactor is prevented from accepting new connections 
(or

what have you) for the duration of their execution.

Jean-Paul


Jean -Paul, not sure I have understood.
Say I have one server S and two clients C1 and C2 (all on separate
machines).

(a) C1 requests a remote call of f1() to S, f1() requires 5 minutes of
processing.
(b) S  puts f1() in a queue and returns immediately a Deferred to
C1.
(c) Now f1() starts and keeps S's processor busy for 5 mins
(d)  after few seconds C2 requests a remote call f2() to S.
(e) On S the processor is already engaged with f1() but still
"someone" on S is able to accept the request from C2, put it in a
queue (after f1()) and return a Deferred to C2.
(f) At some point after f1() is finished f2() will start

I believe (b) is what you say "run for a short enough period of time
that it doesn't
matter that the reactor is prevented from accepting new connections
(or
what have you) for the duration of their execution"?!


If you have a function that takes 5 minutes to run, then you're blocking 
the reactor thread for 5 minutes and no other events are serviced until 
the function finishes running.


You have to avoid blocking the reactor thread if you want other events 
to continue to be serviced.  There are various strategies for avoiding 
blocking.  Different strategies are appropriate for different kinds of 
blocking code.


Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Twisted 100% event driven or hybrid?

2009-09-23 Thread exarkun

On 05:55 am, jacopo.pe...@gmail.com wrote:

I am diving into Twisted and Perspective Broker (PB) in particular. I
am designing a system  having several models running on different
machines, they need to be recalculated periodically, I have to collect
the results, process them and start again from the beginning.

It is not clear to me if I can blend some event driven programming
with a more traditional one where the flow would be deterministic.
In my case I have to iterate on a list of models and send the request
of recalculation to the specific machine where the model resides. I
don't want to wait for each single result but I want to send all the
requests in one go. In this phase I am happy to have an event driven
framework with callbacks. Then I have to stop and wait for all the
results to be ready, I collect and process them. From now on I don't
need the system to be event driven any more, the processing should
occur only on the master machine, following a deterministic flow.
As soon as finished I am ready to start again to resubmit the models
for recalculation and so on. This should go on forever.

Is it possible to have a hybrid system like this? If I call
reactor.stop() at a certain point of the execution, where does the
execution continue from? Would this be a good design or in general is
better to keep a 100% event driven system even if I don't actually need
to handle asynchronicity for big chunks of the code.


If you're happy to block event processing, then there's no reason you 
can't do just that - once you have your results, start processing them 
in a blocking manner.  Twisted will not service events while you're 
doing this, but as long as you're happy with that, it doesn't really 
matter to Twisted.  You might not be as happy with this later on if your 
requirements change, but you can always worry about that later.


In particular, though, there's no reason or need to call reactor.stop() 
in order to switch to your non-event driven code.  Wherever you were 
thinking of putting that call, just put your non-event driven code there 
instead.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Twisted: 1 thread in the reactor pattern

2009-09-23 Thread exarkun

On 06:08 am, jacopo.pe...@gmail.com wrote:

I am diving into Twisted and Perspective Broker (PB) in particular and
I would like to understand more about what happens behind the
curtains.
Say I have a client and a server on two different machines, the server
gets callRemote()'s in an asynchronous way, these requests are parked
in a queue and then served sequentially (not in parallel - correct me
if I am wrong).


Since, as you point out below, there is only one thread, the remote 
methods can only be invoked one at a time.


However, rather central to the asynchronous operation of Twisted 
libraries and applications, the remote method itself may return before 
the remote call has been completely serviced.  So while only one remote 
method will run at a time, each of the two remote calls may run 
concurrently at some point before they are responded to.

If everything is implemented in a single thread, how
is it possible that while the processor is engaged in the processing
triggered by callRemote()'s, at the same time the reactor is ready to
listen/accept new events and put them in a queue? To me it looks like
there should be at least 2 processes, one for the reactor and one for
the rest.


It isn't possible.  While the remote methods are running, other events 
are not being serviced.  This is what is meant when people describe 
Twisted as a *cooperative* multitasking system.  Event handlers (such 
as remote methods) run for a short enough period of time that it doesn't 
matter that the reactor is prevented from accepting new connections (or 
what have you) for the duration of their execution.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Most active coroutine library project?

2009-09-23 Thread exarkun

On 05:00 pm, sajmik...@gmail.com wrote:

On Sun, Aug 23, 2009 at 11:02 AM, Phillip B Oldham
phillip.old...@gmail.com wrote:

I've been taking a look at the multitude of coroutine libraries
available for Python, but from the looks of the projects they all seem
to be rather quiet. I'd like to pick one up to use on a current
project but can't deduce which is the most popular/has the largest
community.

Libraries I looked at include: cogen, weightless, eventlet and
circuits (which isn't exactly coroutine-based but it's event-driven
model was intriguing).

Firstly, are there any others I've missed? And what would the
consensus be on the which has the most active community behind it?
--
http://mail.python.org/mailman/listinfo/python-list


Coroutines are built into the language.  There's a good talk about
them here: http://www.dabeaz.com/coroutines/


But what some Python programmers call coroutines aren't really the same 
as what the programming community at large would call a coroutine.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Twisted 100% event driven or hybrid?

2009-09-23 Thread exarkun

On 05:48 pm, mcfle...@vrplumber.com wrote:

exar...@twistedmatrix.com wrote:

On 05:55 am, jacopo.pe...@gmail.com wrote:

...
results to be ready, I collect and process them. From now on I don't
need the system to be event driven any more, the processing should
occur only on the master machine, following a deterministic flow.
As soon as finished I am ready to start again to resubmit the models
for recalculation and so on. This should go on forever.


Jean-Paul is obviously far more authoritative on the twisted way than
I am, so if he says you can just run your synchronous operation in- 
situ,

that's probably the way to go, but IIRC there's a
reactor.deferToThread() function which can run your synchronous code
off to the side, while allowing the twisted code to continue to
process incoming operations.  Thus you'd do something like:

def process( dataset ):
    dl = [ remote_call( x ) for x in dataset]
    dl = defer.DeferredList( dl )
    def on_all_results( results ):
        reactor.deferToThread( sync_process, (results,)).addCallback( process )
    return dl.addCallback( on_all_results )

(I'm typing all of that from the distance of a few years of memory
decay, so take it as loosely this, with the proper function names and
the like).  Threads aren't really part of the twisted way in my
understanding, but they can be used if necessary AFAIK, and they will
let your application remain responsive to network events during the
processing.


Yep, you're correct here Mike (except it's 
`twisted.internet.threads.deferToThread` rather than 
`twisted.internet.reactor.deferToThread`).  If it is safe to call 
`sync_process` in a thread, then this may be a good approach as well, 
and it will free up the reactor to continue to respond to events 
(assuming `sync_process` plays nicely - ie, is written in Python or is 
an extension that releases the GIL, of course).
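
For reference, here's roughly how I'd spell that pattern out
(remote_call and sync_process are the stand-ins from Mike's sketch):

    from twisted.internet import defer
    from twisted.internet.threads import deferToThread

    def process(dataset):
        dl = defer.DeferredList([remote_call(x) for x in dataset])
        def on_all_results(results):
            # Run the blocking post-processing in a worker thread so the
            # reactor keeps servicing network events, then start over.
            return deferToThread(sync_process, results).addCallback(
                lambda _: process(dataset))
        return dl.addCallback(on_all_results)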


In my post, I was trying to highlight the idea that there's not really 
anything special going on in a Twisted program.  You can choose to block 
if you wish, if the consequences (events go unserviced for a while) are 
acceptable to you.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Most active coroutine library project?

2009-09-23 Thread exarkun

On 08:16 pm, sajmik...@gmail.com wrote:

On Wed, Sep 23, 2009 at 2:05 PM,  exar...@twistedmatrix.com wrote:
[snip]


But what some Python programmers call coroutines aren't really the
same as what the programming community at large would call a coroutine.

Jean-Paul


Really?  I'm curious as to the differences.  (I just skimmed the entry
for coroutines in Wikipedia and PEP 342, but I'm not fully
enlightened.)


The important difference is that coroutines can switch across multiple 
stack frames.  Python's enhanced generators can still only switch 
across one stack frame - ie, from inside the generator to the frame 
immediately outside the generator.  This means that you cannot use 
enhanced generators to implement an API like this one:


    def doSomeNetworkStuff():
        s = corolib.socket()
        s.connect(('google.com', 80))
        s.sendall('GET / HTTP/1.1\r\nHost: www.google.com\r\n\r\n')
        response = s.recv(8192)

where connect, sendall, and recv don't actually block the entire calling 
thread, they only switch away to another coroutine until the underlying 
operation completes.  With real coroutines, you can do this.
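For example, with a coroutine library like the third-party greenlet package
(a rough sketch, not tied to any particular networking library), the switch
can happen several calls deep without any yields in the frames above it:

from greenlet import getcurrent, greenlet

def fake_recv():
    # From inside a nested call, switch back to the "scheduler" (the main
    # greenlet) until it hands us some data; a generator could not do this.
    return getcurrent().parent.switch('need data')

def handler():
    data = fake_recv()          # looks like an ordinary blocking call
    print('handler got: %r' % (data,))

task = greenlet(handler)
request = task.switch()         # runs handler() until fake_recv() switches out
print('task asked for: %r' % (request,))
task.switch('hello')            # resume the coroutine with the "received" data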


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Most active coroutine library project?

2009-09-23 Thread exarkun

On 09:40 pm, t...@urandom.ca wrote:

On Wed, 2009-09-23 at 20:50 +, exar...@twistedmatrix.com wrote:

immediately outside the generator.  This means that you cannot use
enhanced generators to implement an API like this one:

def doSomeNetworkStuff():
    s = corolib.socket()
    s.connect(('google.com', 80))
    s.sendall('GET / HTTP/1.1\r\nHost: www.google.com\r\n\r\n')
    response = s.recv(8192)

where connect, sendall, and recv don't actually block the entire calling
thread, they only switch away to another coroutine until the underlying
operation completes.  With real coroutines, you can do this.


I might be missing some subtlety of your point, but I've implemented
this functionality using generators in a library called Kaa[1].  In kaa,
your example looks like:

    import kaa

    @kaa.coroutine()
    def do_some_network_stuff():
        s = kaa.Socket()
        yield s.connect('google.com:80')
        yield s.write('GET / HTTP/1.1\nHost: www.google.com\n\n')
        response = yield s.read()

    do_some_network_stuff()
    kaa.main.run()


I specifically left out all yield statements in my version, since 
that's exactly the point here. :)  With real coroutines, they're not 
necessary - coroutine calls look just like any other call.  With 
Python's enhanced generators, they are.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Most active coroutine library project?

2009-09-23 Thread exarkun

On 10:00 pm, t...@urandom.ca wrote:

On Wed, 2009-09-23 at 21:53 +, exar...@twistedmatrix.com wrote:

I specifically left out all yield statements in my version, since
that's exactly the point here. :)  With real coroutines, they're not
necessary - coroutine calls look just like any other call.  With
Python's enhanced generators, they are.


Yes, I take your point.

Explicitly yielding may not be such a terrible thing, though.  It's more
clear when reading such code where it may return back to the scheduler.
This may be important for performance, unless of course the coroutine
implementation supports preemption.


Sure, no value judgement intended, except on the practice of taking 
words with well established meanings and re-using them for something 
else ;)


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Most active coroutine library project?

2009-09-23 Thread exarkun

On 10:18 pm, t...@urandom.ca wrote:

On Wed, 2009-09-23 at 22:07 +, exar...@twistedmatrix.com wrote:

Sure, no value judgement intended, except on the practice of taking
words with well established meanings and re-using them for something
else ;)


I think it's the behaviour that's important, and not the specific syntax
needed to implement that behaviour.

In other words, I disagree (if this is what you're suggesting) that
sticking yield in front of certain expressions makes it any less a
coroutine.


Alright, I won't pretend to have any particular insight into what the 
fundamental coroutineness of a coroutine is.


To me, the difference I outlined in this thread is important because it 
is a difference that is visible in the API (almost as if it were some 
unusual, extra part of the function's signature) to application code. If 
you have a send function that is what I have been calling a real 
coroutine, that's basically invisible.  Put another way, if you started 
with a normal blocking send function, then applications would be using 
it without yield.  If you used real coroutines to make it 
multitasking-friendly, then the same applications that were already 
using it would continue to work (at least, they might).  However, if you 
have something like Python's enhanced generators, then they all break 
very obviously, since send no longer returns the number of bytes 
written, but now returns a generator object, something totally 
different.
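A tiny illustration of that API difference (hypothetical names, with a stub
standing in for a real socket):

class FakeSocket:
    def send(self, data):
        return len(data)

def send_blocking(sock, data):
    return sock.send(data)      # the caller gets the byte count directly

def send_cooperative(sock, data):
    yield 'waiting'             # a cooperative switch point
    yield sock.send(data)       # the byte count only comes out via the generator

s = FakeSocket()
print(send_blocking(s, b'hi'))      # 2
print(send_cooperative(s, b'hi'))   # a generator object, not 2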


Now, I would say that there's not a huge amount of value in being 
able to make a function into a coroutine behind the application's back. 
All kinds of problems can result from this.  Others will certainly 
disagree with me and say that it's worth more than the cost of the 
trouble it might cause.  But either way, there's clearly *some* 
difference between the real coroutine way and the enhanced generators 
way.


If you think that's not an important difference, I don't mind.  I just 
hope I've made it clear why I initially said that enhanced generators 
aren't what a lot of people would call coroutines. :)


Now, requiring explicit yields does mean that the coroutine has
specific, well-defined points of reentry.  But I don't believe it's a
necessary condition that coroutines allow arbitrary (in the
non-deterministic sense) reentry points, only multiple.


I don't think "non-deterministic" is the right word to use here; at 
least, it's not what I was trying to convey as possible in coroutines. 
More like "invisible".


That aside, I do think that most people familiar with coroutines from 
outside of Python would disagree with this, but I haven't done a formal 
survey or anything, so perhaps I'm mistaken.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Single line output, updating output in place

2009-09-23 Thread exarkun

On 04:11 am, tusklah...@gmail.com wrote:

Hello, I'm a newb and have been playing with Python trying to print a
changing value to the screen that updates as the value changes. I have
this code, which is pretty much doing what I want:

#!/usr/bin/env python3

import time

text = input('Please enter something: ')

for c in text:
    print('This is what you entered:', '{0}'.format(c), '\033[A')
    if c == text[-1]:
        print('\n')
    time.sleep(1)



Which will output: "This is what you entered: text" with the text constantly
changing. It all stays on the same line, which is what I'm shooting for. So
my question is, the little bit that allows me to do this, the '\033[A', I
don't really know what that is. I was looking at other code while trying to
figure this out, and '\033[A' was used to do this, but I don't really know
what it is or where to find information on it. It's an escape code, isn't
it? But is it in Python, in Bash, or what? Forgive me if my question is
hazy, I'm just not sure why adding '\033[A' got it working for me; where
would I find the information that would have enabled me to know that this is
what I needed to use?


It's a vt102 control sequence.  It means "move the cursor up one row". 
vt102 is something your terminal emulator implements (or, heck, maybe 
you have a real physical vt102 terminal... nah).  To Python, it's just 
a few more meaningless bytes.


You can read all about vt102 on vt100.net:

   http://vt100.net/docs/vt102-ug/

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: socket send O(N**2) complexity

2009-09-21 Thread exarkun

On 08:00 pm, r...@freenet.co.uk wrote:
Zac Burns wrote in news:mailman.211.1253559803.2807.python-l...@python.org
in comp.lang.python:

The mysocket.mysend method given at
http://docs.python.org/howto/sockets.html has an (unwitting?) O(N**2)
complexity for long msg due to the string slicing.

I've been looking for a way to optimize this, but aside from a pure
python 'string slice view' that looks at the original string I can't
think of anything. Perhaps start and end keywords could be added to
send? I can't think of a reason for the end keyword,  but it would be
there for symmetry.


I ran this script on various versions of python I have access to:

#encoding: utf-8
raw_input( "start" )

s = 'x' * 100
r = [None] * 1000

raw_input( "allocated 1 meg + " )

for i in xrange(1000):
    r[i] = s[:]

raw_input( "end" )

Neither of the CPython versions (2.5 and 3.0 (with modified code))
exhibited any memory increase between "allocated 1 meg + " and "end"


You bumped into a special case that CPython optimizes.  s[:] is s.  If 
you repeat your test with s[1:], you'll see memory climb as one might 
normally expect.
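A quick way to check that special case (CPython-specific behavior, a sketch
not from the original post):

s = 'x' * 1000000
print(s[:] is s)     # True - the same object comes back, nothing is copied
print(s[1:] is s)    # False - this slice copies ~1MB of character data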

pypy-c (1.0.0) showed a 30k jump, and IronPython 2.0 showed a few megs
jump.

AIUI, as a python string is immutable, a slice of a string is a
new string which points (C char *) to the start of the slice data
and has a length that is the length of the slice, about 8 bytes
on a 32 bit machine.

So even though a slice assignment new_s = s[:] appears to a python
programmer to make a copy of s, it's only a few bytes of
metadata (the pointer and the length) that is really copied; the
string's character data stays where it is.

So the code you cite is in fact O(N) as the copy is constant size.


This is all (basically) valid for the special case of s[:].  For any other 
string slicing, though, the behavior is indeed O(N**2).


To the OP, you can get view-like behavior with the buffer builtin. 
Here's an example of its usage from Twisted, where it is employed for 
exactly the reason raised here:


http://twistedmatrix.com/trac/browser/trunk/twisted/internet/abstract.py#L93
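The idea, roughly (a sketch using memoryview, which plays a similar role to
the Python 2 buffer builtin, rather than Twisted's actual code):

def send_all(sock, data):
    view = memoryview(data)    # zero-copy view; slicing it copies no character data
    sent = 0
    while sent < len(data):
        n = sock.send(view[sent:])
        if n == 0:
            raise RuntimeError('connection closed')
        sent += n
    return sent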

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: How python source code in large projects are organized?

2009-09-20 Thread exarkun

On 07:10 pm, pengyu...@gmail.com wrote:

On Sun, Sep 20, 2009 at 11:31 AM, Daniel Fetchinson
fetchin...@googlemail.com wrote:

I am wondering what is the best way of organizing python source code
in large projects. There is package code and testing code. I'm
wondering if there has been any summary of previous practices.


I suggest looking at the source code of large projects like twisted,
PIL, django, turbogears, etc. These have different styles, pick the
one you like best.


Is there a webpage or a document that describes various practices?


I wrote very briefly on this topic, hope it helps:

 http://jcalderone.livejournal.com/39794.html

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: pyjamas pyv8run converts python to javascript, executes under command-line

2009-09-19 Thread exarkun

On 19 Sep, 11:04 pm, robert.k...@gmail.com wrote:

Daniel Fetchinson wrote:
the pyjamas project is taking a slightly different approach to achieve
this same goal: beat the stuffing out of the pyjamas compiler, rather
than hand-write such large sections of code in pure javascript, and
double-run regression tests (once as python, second time converted to
javascript under pyv8run, d8 or spidermonkey).

anyway, just thought there might be people who would be intrigued (or
horrified enough to care what's being done in the name of computer
science) by either of these projects.

I've added pyjamas to the implementations page on the Python Wiki in
the compilers section:

http://wiki.python.org/moin/implementation


In what way is pyjamas a python implementation? As far as I know
pyjamas is an application written in python that is capable of
generating javascript code. Does this make it a 'python
implementation'? That would be news to me but I've been wrong many
times before.


It converts Python code to Javascript.


The question is whether it converts Python code to JavaScript code with 
the same behavior.  I think you're implying that it does, but you left 
it implicit, and I think the point is central to deciding if pyjamas is 
a Python implementation or not, so I thought I'd try to make it 
explicit.


Does pyjamas convert any Python program into a JavaScript program with 
the same behavior?  I don't intend to imply that it doesn't - I haven't 
been keeping up with pyjamas development, so I have no idea.  I 
think that the case *used* to be (perhaps a year or more ago) that 
pyjamas only operated on a fairly limited subset of Python.  If this was 
the case but has since changed, it might explain why some people are 
confused to hear pyjamas called a Python implementation now.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: VT100 in Python

2009-09-14 Thread exarkun

On 09:29 am, n...@craig-wood.com wrote:

Wolfgang Rohdewald wolfg...@rohdewald.de wrote:

 On Sunday 13 September 2009, Nadav Chernin wrote:
 I'm writing a program that reads data from some instrument through
 RS232. This instrument sends data in VT100 format. I need only to
 extract the text without all other characters that describe how to
 represent data on the screen. Is there some library in python for
 converting VT100 strings?

 that should be easy using regular expressions


At a basic level parsing VT100 is quite easy, so you can get rid of
the VT100 control sequences.  They start with ESC, have other characters in
the middle then end with a letter (upper or lowercase), so a regexp will
make short work of them.  Something like r"\x1B[^A-Za-z]*[A-Za-z]"

You might need to parse the VT100 stream as VT100 builds up a screen
buffer though and the commands don't always come out in the order you
might expect.

I think twisted has VT100 emulator, but I couldn't find it in a brief
search just now.


Yep, though it's one of the parts of Twisted that only has API 
documentation and a few examples, no expository prose-style docs.  If 
you're feeling brave, though:


http://twistedmatrix.com/documents/current/api/twisted.conch.insults.insults.ITerminalTransport.html

http://twistedmatrix.com/documents/current/api/twisted.conch.insults.insults.ITerminalProtocol.html

 http://twistedmatrix.com/projects/conch/documentation/examples/ (the 
insults section)


It's not really all that complicated, but without adequate docs it can 
still be tricky to figure things out.  There's almost always someone on 
IRC (#twisted on freenode) to offer real-time help, though.
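If all you want is the plain text and the simple regexp approach quoted
above is good enough (it ignores cursor movement and the screen buffer
entirely), a minimal sketch:

import re

VT100_CONTROL = re.compile(r'\x1B[^A-Za-z]*[A-Za-z]')

def strip_vt100(data):
    return VT100_CONTROL.sub('', data)

print(strip_vt100('\x1B[2JHello, \x1B[1mworld\x1B[0m'))   # Hello, world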


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: python and openSSL

2009-09-09 Thread exarkun

On 9 Sep, 01:30 pm, luca...@gmail.com wrote:

Hi all.
I need a trick to do something like this:

openssl smime -decrypt -verify -inform DER -in ReadmeDiKe.pdf.p7m
-noverify -out ReadmeDike.pdf

To unwrap a p7m file and read its content.

I know that I could use something like:

import os
os.system('openssl ')


but I would like to use a python library that wraps openssl.

I've already installed pyOpenSSL, but I cannot understand how to run the same
command.


pyOpenSSL doesn't wrap the parts of OpenSSL which deal with S/MIME.  I think 
that M2Crypto does, though I haven't used those parts of it myself.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: HTTPS on Twisted

2009-09-07 Thread exarkun

On 07:20 pm, koranth...@gmail.com wrote:

On Sep 6, 7:53 pm, koranthala koranth...@gmail.com wrote:

Hi,
   For a financial application, I am creating a python tool which
uses HTTPS to transfer the data from client to server. Now, everything
works perfectly, since the SSL support comes free with Twisted.
   I have one problem though. As an upgrade, now, I have to send many
requests as the same client to the server. Many in the range of 10
msgs every second. Now, I am creating a separate TCP connection for
each and am sending the data. Is it possible to create just one SSL
and TCP connection and send each message over that using Twisted?
   I read through Twisted, but was unable to come up with the answer
though. Even if I have to use multiple TCP connections, is it possible
to have just one SSL connection?

   I think persistent connections should be possible for TCP, but is
it possible in Twisted?


You can probably get persistent http connections working with 
twisted.web.client (not with getPage or the other convenience 
functions though).


The new http client which is being developed will support this in a much 
simpler way.  With a bit of luck (or maybe some additional help from 
people interested in a really high-quality http client), this should be 
included in Twisted 9.0 which I hope will be out very soon.
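A rough sketch of how this looks with the Agent API that grew out of that
work (assuming a reasonably recent Twisted with TLS support installed; the
URL is just a placeholder):

from twisted.internet import reactor
from twisted.web.client import Agent, HTTPConnectionPool

pool = HTTPConnectionPool(reactor, persistent=True)
agent = Agent(reactor, pool=pool)

def show(response):
    # Requests to the same host reuse a cached connection from the pool.
    print(response.code)

d = agent.request(b'GET', b'https://example.com/')
d.addCallback(show)
d.addBoth(lambda _ignored: reactor.stop())
reactor.run()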


Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: First release of pyfsevents

2009-09-07 Thread exarkun

On 12:57 am, a...@pythoncraft.com wrote:
In article d103be2b-3f1e-46f3-9a03-46f7125f5...@r5g2000yqi.googlegroups.com,
Nicolas Dumazet  nicd...@gmail.com wrote:

On Sep 3, 10:33 pm, a...@pythoncraft.com (Aahz) wrote:


I'm curious why you went with FSEvents rather than kqueue. My company
discovered that FSEvents is rather coarse-grained: it only tells you that
there has been an event within a directory, it does *not* tell you
anything about the change!


It depends what you want to do with your events. In my case, knowing
that an event occurred in a directory is sufficient because I already
know the state of the directory.  If you look in the examples/ folder,
(watcher) you'll find that with very little work, you can maintain a
directory snapshot in memory and compare it against the new state of
the directory to know exactly what happened, when necessary.


Thanks!

kqueue has the limitation that kern.kq_calloutmax is usually set
at 4096. Meaning that one could not use this on a big (Mercurial)
repository with 5k files. FSEvents on the other hand saves us the
trouble to have to register each file individually.  Also, I am not
quite sure if we can use kqueue to register a directory, to be warned
when a file is created in this directory.


sigh  %(obscenity)s  I didn't realize that you had to register each
file individually with kqueue.  We were hoping to avoid having to write
watcher code because that is not reliable for renames (especially
multiple renames in quick succession).

Maybe we'll try using /dev/fsevents directly


Just a guess, but since the kqueue interface is based on file 
descriptors, not on file names, following renames reliably shouldn't be 
a problem with it.  If someone knows about this for sure though, it'd be 
nice to hear about it. :)  All of the kqueue documentation I've seen has 
been rather incomplete.
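A rough sketch of a descriptor-based watch using the stdlib select module
(BSD/OS X only; the path is just a placeholder):

import os
import select

fd = os.open('/tmp/watched-file', os.O_RDONLY)
kq = select.kqueue()
watch = select.kevent(
    fd,
    filter=select.KQ_FILTER_VNODE,
    flags=select.KQ_EV_ADD | select.KQ_EV_CLEAR,
    fflags=select.KQ_NOTE_WRITE | select.KQ_NOTE_RENAME | select.KQ_NOTE_DELETE)
kq.control([watch], 0, 0)          # register the watch once

while True:
    # Because the watch is on the descriptor, renaming the file doesn't
    # break it; events keep arriving for the same open file.
    for event in kq.control(None, 1, None):
        print('vnode event on fd %d, fflags %#x' % (event.ident, event.fflags))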


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: incorrect DeprecationWarning?

2009-09-05 Thread exarkun

On 12:20 pm, alan.is...@gmail.com wrote:

Alan G Isaac wrote:

Python 2.6.2 (r262:71605, Apr 14 2009, 22:40:02) [MSC v.1500 32 bit
(Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.

>>> class MyError(Exception):
...     def __init__(self, message):
...         Exception.__init__(self)
...         self.message = message
...
>>> e = MyError('msg')
__main__:4: DeprecationWarning: BaseException.message has been
deprecated as of Python 2.6


So? Why would that mean I cannot add such an attribute
to derived classes?




On 9/4/2009 6:42 PM, Terry Reedy wrote:
It does not mean that. Try printing e.message and you should see 
'msg'.

I believe what it does mean is that the special meaning of
exception.message (I have forgotten what it is) is gone in Python 3.

In Py3:

class MyError(Exception):
    def __init__(self, message):
        Exception.__init__(self)
        self.message = message

e = MyError('msg')
print(e.message)
# 'msg'

No warning any more.




Exactly!

I think you are missing my point.
I understand it is just a DeprecationWarning.
But **why** should I receive a deprecation warning
when I am **not** using the deprecated practice?
Since I am **not** using the deprecated practice, the
warning is incorrect. (See the class definition above.)
And this incorrect warning affects a lot of people!


You are using the deprecated practice.  Attributes are not scoped to a 
particular class.  There is only one message attribute on your 
MyError instance.  It does not belong just to MyError.  It does not 
belong just to Exception.  It does not belong just to BaseException. 
It is shared by all of them.  Because BaseException deprecates 
instances of it having a message attribute, any instance of any 
subclass of BaseException which uses this attribute will get the 
deprecation warning.  Perhaps you weren't intending to use the message 
attribute as BaseException was using it, but this doesn't matter. 
There is only one message attribute, and BaseException already 
claimed it, and then deprecated it.
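As an aside (not from the original thread): if renaming the attribute isn't
an option, this particular warning can be silenced with the stdlib warnings
machinery. A minimal sketch:

import warnings

warnings.filterwarnings(
    'ignore',
    message='BaseException.message has been deprecated',
    category=DeprecationWarning)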


What anyone who is **not** using the deprecated practice
should expect in Python 2.6 is the Py3 behavior.  That is
not what we get: we get instead an incorrect deprecation
warning.


Possibly so, but there is no way for the runtime to know that you're not 
trying to use the deprecated behavior.  All it can tell is that you're 
using the deprecated attribute name.  Perhaps you can come up with a way 
for it to differentiate between these two cases and contribute a patch, 
though.


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: incorrect DeprecationWarning?

2009-09-05 Thread exarkun

On 02:28 pm, alan.is...@gmail.com wrote:


I am not sure how best to deprecate dependence on the
Python 2.5 mistake, but this is not it.  And I know at
least one important library that is affected.


I'll agree that it's not great.  I certainly would have preferred it not 
to have been done.  It is futile to complain about this kind of thing on 
python-list, though.  Raise the issue on python-dev.  I don't think 
anyone will listen to you, but who knows until you try.  If you have an 
alternate suggestion to make, that might help gain some traction; if 
not, the issue will probably just be dismissed.  Even so, I suspect 
someone will say "This is irrelevant, just rename your attribute." 
Python developers aren't much concerned with this kind of thing.


Cynically,
Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python community buildbot page still 503

2009-09-03 Thread exarkun

On 06:23 pm, mar...@v.loewis.de wrote:

If I am not mistaken http://python.org/dev/buildbot/community/all/ has
been down since python.org had its harddrive issues.

Anyone know a time line on getting it back up and running.


This service is, unfortunately, unmaintained. It broke when I upgraded
the buildbot master to a new code base, and nobody upgraded the buildbot
configuration file.

So I have now removed it from the web server configuration, and put a
notice on the web site.


Um.  Where should I have been watching to get some warning about this? 
And now that I know, can you tell me what I need to do to restore it?


Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list

