Re: [Python-Dev] deleting setdefaultencoding in site.py is evil

2009-08-25 Thread exarkun

On 04:08 pm, ch...@simplistix.co.uk wrote:

Hi All,

Would anyone object if I removed the deletion of
sys.setdefaultencoding in site.py?


I'm guessing "yes!" so thought I'd state my reasons now:

This deletion appears to be pretty flimsy; reload(sys) and you have it 
back. Which is lucky, because I need it after it's been deleted...


The ability to change the default encoding is a misfeature.  There's 
essentially no way to write correct Python code in the presence of this 
feature.


Using setdefaultencoding is never the sensible way to deal with encoded 
strings.  Actually exposing this function in the sys module would lead 
all kinds of people who haven't fully grasped the way str, unicode, and 
encodings work to doing horrible things to create broken programs.  It's 
bad enough that it's already possible to get this function back with the 
reload(sys) trick.


Why? Well, because you can no longer put sitecustomize.py in a
project-specific location (http://bugs.python.org/issue1734860) and because for
some projects the only way I can deal with encoded strings sensibly is 
to use setdefaultencoding, in my case at the start of a script 
generated by zc.buildout's zc.recipe.egg (I *know* all the encodings in 
this project are utf-8, but I don't want to go playing whack-a-mole 
with whatever modules this rather large project uses that haven't been 
made properly unicode aware).


Yes, it needs to be used as early as possible, and the docs should say
this, but deleting it seems petty as a way of stopping its use, when
sitecustomize.py is too early and too system-wide, and spraying
.decode('utf-8') calls all over a code base made up of a load of eggs
managed by buildout simply isn't feasible...


Thoughts?


It may be a major task, but the best thing you can do is find each str 
and unicode operation in the software you're working with and make them 
correct with respect to your inputs and outputs.  Flipping a giant 
switch for the entire process is just going to change which things are 
wrong.
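
The boundary-by-boundary approach described here can be sketched in a
few lines. This is a hedged illustration in modern Python 3 terms,
where the str/bytes split makes the boundaries explicit; the variable
names are invented for the example:

```python
# Decode bytes exactly once, where they enter the program, and encode
# exactly once, where they leave -- rather than flipping a process-wide
# default encoding and hoping implicit conversions come out right.
raw = b'caf\xc3\xa9'            # bytes as read from a UTF-8 input
text = raw.decode('utf-8')      # explicit decode at the input boundary
assert text == 'caf\u00e9'      # now genuine text: 'café'
out = text.encode('utf-8')      # explicit encode at the output boundary
assert out == raw
```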


Jean-Paul
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] deleting setdefaultencoding in site.py is evil

2009-08-27 Thread exarkun

On 26 Aug, 11:51 pm, ch...@simplistix.co.uk wrote:

exar...@twistedmatrix.com wrote:
The ability to change the default encoding is a misfeature.  There's 
essentially no way to write correct Python code in the presence of 
this feature.


How so? If every single piece of text in your project is encoded in a 
superset of ascii (such as utf-8), why would this be a problem?
Even if you were evil/stupid and mixed encodings, surely all you'd get 
is different unicode errors or maybe the odd strange character during 
display?


This is what I meant when I said what I said about correct code.  If 
you're happy to have encoding errors and corrupt data, then I guess 
you're happy to have a function like setdefaultencoding.
It may be a major task, but the best thing you can do is find each str 
and unicode operation in the software you're working with and make 
them correct with respect to your inputs and outputs.  Flipping a 
giant switch for the entire process is just going to change which 
things are wrong.


Well, flipping that giant switch has worked in production for the past 
5 years, so I'm afraid I'll respectfully disagree. I'd suspect the 
pragmatics of real world software are why that function even exists, 
and it's extremely useful when used correctly...


I suppose it's fortunate for you that the function exists, then.  For my 
part, I have managed to write and operate a lot of code in production 
for at least as long without ever touching it.  Generally speaking, I 
also don't find that I encounter lots of unicode errors or corrupted 
data (*sometimes* I do; in those cases, I fix the broken code and it 
doesn't happen again).


Jean-Paul


Re: [Python-Dev] quick PEP 387 comments

2009-08-27 Thread exarkun

On 12:49 am, benja...@python.org wrote:

I should probably mark that PEP as abandoned or deferred, since for
various reasons, it seems like this is not what Python-dev feels is
needed [1].


Re-reading that thread, I see some good discussion about how to improve 
the PEP, a little bit of misunderstanding about what the PEP is about, 
and not a lot of strong opposition.  Maybe it's worth picking it up 
again?


Jean-Paul


Re: [Python-Dev] Fast Implementation for ZIP decryption

2009-08-30 Thread exarkun

On 12:59 pm, st...@pearwood.info wrote:

On Sun, 30 Aug 2009 06:55:33 pm Martin v. Löwis wrote:

> Does it sound worthy enough to create a patch for and integrate
> into python itself?

Probably not, given that people think that the algorithm itself is
fairly useless.


I would think that for most people, the threat model isn't "the CIA is
reading my files" but "my little brother or nosey co-worker is reading
my files", and for that, zip encryption with a good password is
probably perfectly adequate. E.g. OpenOffice uses it for
password-protected documents.

Given that Python already supports ZIP decryption (as it should), are
there any reasons to prefer the current pure-Python implementation over
a faster version?


Given that the use case is "protect my biology homework from my little 
brother", how fast does the implementation really need to be?  Is 
speeding it up from 0.1 seconds to 0.001 seconds worth the potential new 
problems that come with more C code (more code to maintain, less 
portability to other runtimes, potential for interpreter crashes or even 
arbitrary code execution vulnerabilities from specially crafted files)?
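
For reference, the stdlib interface under discussion is zipfile's pwd
argument, which applies the (pure-Python) legacy ZIP cipher when a
member is encrypted. A minimal sketch follows; note the archive here
is unencrypted, since the stdlib can read but not create encrypted
archives, so pwd is simply ignored:

```python
import io
import zipfile

# Build an (unencrypted) archive in memory; zipfile cannot *create*
# encrypted archives, only read them.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('homework.txt', 'biology notes')

# read() takes an optional pwd argument; the legacy ZIP decryption is
# applied only when the member's flags say it is actually encrypted.
with zipfile.ZipFile(buf) as zf:
    data = zf.read('homework.txt', pwd=b'little-brother-proof')

assert data == b'biology notes'
```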


Jean-Paul


Re: [Python-Dev] PEP 3145 (With Contents)

2009-09-15 Thread exarkun

On 04:25 pm, eric.pru...@gmail.com wrote:

I'm bumping this PEP again in hopes of getting some feedback.

Thanks,
Eric

On Tue, Sep 8, 2009 at 23:52, Eric Pruitt  
wrote:

PEP: 3145
Title: Asynchronous I/O For subprocess.Popen
Author: (James) Eric Pruitt, Charles R. McCreary, Josiah Carlson
Type: Standards Track
Content-Type: text/plain
Created: 04-Aug-2009
Python-Version: 3.2

Abstract:

   In its present form, the subprocess.Popen implementation is prone to
   dead-locking and blocking of the parent Python script while waiting
   on data from the child process.

Motivation:

   A search for "python asynchronous subprocess" will turn up numerous
   accounts of people wanting to execute a child process and communicate
   with it from time to time, reading only the data that is available
   instead of blocking to wait for the program to produce data [1] [2]
   [3].  The current behavior of the subprocess module is that when a
   user sends or receives data via the stdin, stderr and stdout file
   objects, deadlocks are common and documented [4] [5].  While
   communicate can be used to alleviate some of the buffering issues, it
   will still cause the parent process to block while attempting to read
   data when none is available to be read from the child process.

Rationale:

   There is a documented need for asynchronous, non-blocking
   functionality in subprocess.Popen [6] [7] [2] [3].  Inclusion of the
   code would improve the utility of the Python standard library on both
   Unix based and Windows builds of Python.  Practically every I/O
   object in Python has a file-like wrapper of some sort.  Sockets
   already act as such and for strings there is StringIO.  Popen can be
   made to act like a file by simply using the methods attached to the
   subprocess.Popen.stderr, stdout and stdin file-like objects.  But
   when using the read and write methods of those objects, you do not
   have the benefit of asynchronous I/O.  In the proposed solution the
   wrapper wraps the asynchronous methods to mimic a file object.

Reference Implementation:

   I have been maintaining a Google Code repository that contains all of
   my changes, including tests and documentation [9], as well as a blog
   detailing the problems I have come across in the development process
   [10].

   I have been working on implementing non-blocking asynchronous I/O in
   the subprocess.Popen module as well as a wrapper class for
   subprocess.Popen that makes it so that an executed process can take
   the place of a file by duplicating all of the methods and attributes
   that file objects have.


"Non-blocking" and "asynchronous" are actually two different things. 
From the rest of this PEP, I think only a non-blocking API is being 
introduced.  I haven't looked beyond the PEP, though, so I might be 
missing something.

   There are two base functions that have been added to the
   subprocess.Popen class: Popen.send and Popen._recv, each with two
   separate implementations, one for Windows and one for Unix based
   systems.  The Windows implementation uses ctypes to access the
   functions needed to control pipes in the kernel32 DLL in an
   asynchronous manner.  On Unix based systems, the Python interface for
   file control serves the same purpose.  The different implementations
   of Popen.send and Popen._recv have identical arguments to make code
   that uses these functions work across multiple platforms.


Why does the method for non-blocking read from a pipe start with an "_"? 
This is the convention (widely used) for a private API.  The name also 
doesn't suggest that this is the non-blocking version of reading. 
Similarly, the name "send" doesn't suggest that this is the non-blocking 
version of writing.

   When calling the Popen._recv function, it requires the pipe name be
   passed as an argument, so there exists the Popen.recv function that
   selects stdout as the pipe for Popen._recv by default.
   Popen.recv_err selects stderr as the pipe by default.  "Popen.recv"
   and "Popen.recv_err" are much easier to read and understand than
   "Popen._recv('stdout' ..." and "Popen._recv('stderr' ..."
   respectively.


What about reading from other file descriptors?  subprocess.Popen allows 
arbitrary file descriptors to be used.  Is there any provision here for 
reading and writing non-blocking from or to those?
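
As a point of comparison, on Unix the non-blocking read the PEP
proposes can already be approximated with a few lines of stdlib code.
This is a hedged, POSIX-only sketch using os.set_blocking from modern
Python, not the PEP's actual implementation:

```python
import os
import subprocess
import time

# Start a child that writes immediately, then stalls for a long time.
proc = subprocess.Popen(
    ['sh', '-c', 'printf ready; sleep 30'],
    stdout=subprocess.PIPE,
)
fd = proc.stdout.fileno()
os.set_blocking(fd, False)      # put the pipe into non-blocking mode

time.sleep(0.5)                 # give the child a moment to write
try:
    chunk = os.read(fd, 4096)   # returns whatever is available right now
except BlockingIOError:
    chunk = b''                 # nothing buffered; a blocking read would stall

proc.kill()
proc.wait()
```

A blocking read here would have hung until the sleep finished; the
non-blocking read returns the already-written bytes immediately.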

   Since the Popen._recv function does not wait on data to be produced
   before returning a value, it may return empty bytes.  Popen.asyncread
   handles this issue by returning all data read over a given time
   interval.


Oh.  Popen.asyncread?   What's that?  This is the first time the PEP 
mentions it.

   The ProcessIOWrapper class uses the asyncread and asyncwrite
   functions to allow a process to act like a file so that there are no
   blocking issues that can arise from using the stdout and stdin f

Re: [Python-Dev] PEP 3144 review.

2009-09-16 Thread exarkun

On 11:10 am, ncogh...@gmail.com wrote:

Steven D'Aprano wrote:
I've been skimming emails in this thread, since most of them go over 
my

head and I have no current need for an ipaddress module. But one thing
I noticed stands out and needs commenting on:

On Wed, 16 Sep 2009 11:05:26 am Peter Moody wrote:

On Tue, Sep 15, 2009 at 6:02 PM, Eric Smith 

wrote:

I completely agree. I don't know of any situation where I'd want a
network of "192.168.1.1/24" to be anything other than an error.

when you're entering the address of your nic.


Eric is talking about a network. Peter replies by talking about an
address.


Martin explained it better in another part of the thread:

if you know your address is 82.94.164.162, how do you compute
what you should spell for 82.94.164.162/27?


Or, to put it another way, given an arbitrary host in a network (e.g.
your own machine or the default gateway) and the netmask for that
network, calculate the network address.

With a "lax" parser on IPNetwork this is a trivial task - just create
the network object and then retrieve the network address from it.

If, on the other hand, IPNetwork demands that you already know the
network address before allowing you to create an IPNetwork object, then
you're pretty much out of luck - if all you have to work with are the 
IP

strings then this is actually a tricky calculation.

If the default IPNetwork constructor was made more strict, then this
functionality would have to be made available another way (probably as
an alternate constructor like IPNetwork.from_host_address rather than 
as

a boolean 'strict' option)


This seems to be the right solution to me, particularly the use of an 
alternate constructor rather than an ambiguously named flag.
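
For what it's worth, the design that eventually shipped as the
ipaddress module resolved the question roughly along these lines (a
sketch in the module's final spelling, which postdates this thread):
ip_interface accepts a host address with a prefix, while ip_network is
strict by default.

```python
import ipaddress

# A host address plus prefix length: the interface type keeps the host
# bits and can derive the containing network.
iface = ipaddress.ip_interface('192.168.1.1/24')
assert str(iface.network) == '192.168.1.0/24'

# The network type rejects host bits unless explicitly told otherwise.
net = ipaddress.ip_network('192.168.1.1/24', strict=False)
assert str(net) == '192.168.1.0/24'
try:
    ipaddress.ip_network('192.168.1.1/24')   # strict default
except ValueError:
    pass  # host bits set -> rejected, as Eric wanted
```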


Jean-Paul


Re: [Python-Dev] thinking about 2.7

2009-09-23 Thread exarkun

On 02:35 pm, benja...@python.org wrote:

Hi everyone,
I've started plotting the release of 2.7. I'd like to try for a final
release mid next summer. 3.2 should be released, if not at the same
time as 2.7, within a few weeks to avoid 2.x having features which 3.x
doesn't. If no one has problems with this, I will draft a schedule.

Are we still planning to make 3.3 the main development focus and start
the 5 years of 2.x maintenance after this release?


I hope that this decision will be delayed until the release is closer, 
so that it can be based on how 3.x adoption is progressing.

Additionally, I'm very apprehensive about doing any kind of release
without the buildbots running. Does anyone know when they might be up?


I was planning on replying to Antoine's earlier message about the 
buildbots after a sufficiently long silence.  I'll reply here instead.


Quite a few years of experience with a distributed team of build slave 
managers has shown me that by far the most reliable way to keep slaves 
online is to have them managed by a dedicated team.  This team doesn't 
need to be small, but since finding dedicated people can sometimes be 
challenging, I think small teams are the most likely outcome (possibly 
resulting in a team of one).  Adding more people who are only mildly 
interested doesn't help.  If, as I believe is the case with Python's 
buildbot configuration, the mildly interested people have sole control 
over certain slaves, then it is actually detrimental.


It's easy for someone to volunteer to set up a new slave.  It's even 
easy to make sure it keeps running for 6 months.  But it's not as easy 
to keep it running indefinitely.  This isn't about the software involved 
(at least not entirely).  It's about someone paying attention to whether 
the slave restarts on reboots, and about paying attention to whether the 
slave host has lost its network connection, or been decommissioned, or 
whether a system upgrade disabled the slave, or whatever other random 
administrative-like tasks are necessary to keep things running.  Casual 
volunteers generally just won't keep up with these tasks.


I suggest finding someone who's seriously interested in the quality of 
CPython and giving them the responsibility of keeping things operating 
properly.  This includes paying attention to the status of slaves, 
cajoling hardware operators into bringing hosts back online and fixing 
network issues, and finding replacements of the appropriate type 
(hardware/software platform) when a slave host is permanently lost.


I would also personally recommend that this person first (well, after 
tracking down all the slave operators and convincing them to bring their 
slaves back online) acquire shell access to all of the slave machines so 
that the owners of the slave hosts themselves no longer need to be the 
gating factor for most issues.


Jean-Paul


Re: [Python-Dev] thinking about 2.7

2009-09-23 Thread exarkun

On 06:03 pm, br...@python.org wrote:
On Wed, Sep 23, 2009 at 07:35, Benjamin Peterson  
wrote:

[snip]


Additionally, I'm very apprehensive about doing any kind of release
without the buildbots running. Does anyone know when they might be up?


I don't know the answer, but it might be "never". We used to do
releases without them, so it's not impossible. Just means you have to
really push the alphas, betas, and RCs.

Titus Brown is actually working on a buildbot alternative that's more
volunteer-friendly (pony-build). Hopefully that will be up and going
by then so we can contemplate using that.


I certainly wish Titus luck in his project, but I'm skeptical about 
pony-build magically fixing the problems CPython development has with 
buildbot.


Jean-Paul


Re: [Python-Dev] thinking about 2.7

2009-09-24 Thread exarkun

On 24 Sep, 11:27 pm, mar...@v.loewis.de wrote:

Additionally, I'm very apprehensive about doing any kind of release
without the buildbots running. Does anyone know when they might be 
up?


When I (or somebody else) contacts all the slave operators and asks 
them

to restart the buildbot slaves.


Does this mean you're taking responsibility for this task?  Or are you 
looking for a volunteer?


Jean-Paul


Re: [Python-Dev] PEP 3144 review.

2009-09-28 Thread exarkun

On 03:57 am, mar...@v.loewis.de wrote:

Finally, to Stephen's point about seeing the other side of the
argument, I wrote this offlist a week ago:

  I *understand* what you're saying, I *understand* that
192.168.1.1/24 isn't a network,


But you still want to treat it as one.

Could you explain what benefit there is for allowing the user to 
create

network objects that don't represent networks? Is there a use-case
where these networks-that-aren't-networks are something other than a
typo? Under what circumstances would I want to specify a network as
192.168.1.1/24 instead of 192.168.1.0/24?


It's fairly obvious to me why the library should support 192.168.1.1/24
as an input, and return a network.

End-users are likely going to enter such things (e.g. 
82.94.164.162/29),

as they will know one IP address in the network (namely, of one machine
that they are looking at), and they will know the prefix length (more
often, how large the network is - 8 or 16 or 32). So very clearly,
end users should not be required to make the computation in their 
heads.


So Python code has to make the computation, and it seems most natural
that the IP library is the piece of code that is able to compute a
network out of that input.


And this is a rather classic example of a misfeature.  "(Non-developer) 
End users will type it in" is not an argument for supporting a particular 
string format as the primary constructor for an object.  Constructors 
are for *developers*.  They should be made most useful for *developers*. 
The issue of dealing with user input (which may have many other quirks) 
is separate and should be dealt with separately.
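
Martin's computation is mechanical once a library exposes it; in the
ipaddress module that eventually grew out of this discussion, it is a
single attribute lookup (a sketch in the module's final spelling):

```python
import ipaddress

# Given one known host address and a prefix length, compute the
# network: 82.94.164.162 with /27 falls in the 82.94.164.160 block.
host = ipaddress.ip_interface('82.94.164.162/27')
assert str(host.network) == '82.94.164.160/27'
```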


Jean-Paul


Re: [Python-Dev] [New-bugs-announce] [issue7064] Python 2.6.3 / setuptools 0.6c9: extension module builds fail with KeyError

2009-10-09 Thread exarkun

On 5 Oct, 01:04 pm, ziade.ta...@gmail.com wrote:

On Mon, Oct 5, 2009 at 2:50 PM,   wrote:


   Ned> Due to a change in distutils released with Python 2.6.3,
   Ned> packages that use setuptools (version 0.6c9, as of this
   Ned> writing), or the easy_install command, to build C extension
   Ned> modules fail ...
   ...
   Ned> Among the packages known to be affected include lxml,
   Ned> zope-interface, jinja2, and, hence, packages dependent on these
   Ned> packages (e.g. sphinx, twisted, etc.).

Maybe the Python test suite should include tests with a small number of
widely used non-core packages like setuptools.  I realize the pybots
project exists to tackle this sort of stuff in greater detail.  I'm
thinking more of a smoke test than a comprehensive test suite covering
all external packages.  Setuptools is particularly important because so
many extension authors use it.  If it breaks, it implicitly breaks a
lot of PyPI packages.


Six months ago I created such a buildbot.  It downloads tarballs from
the community, runs a few distutils commands on them, and makes sure
the result is similar in 2.6/2.7, and for "sdist" that the resulting
tarball is similar.

It was running over Twisted and Numpy, but has been discontinued
because it was on my own server, where it was hard to keep it up
(cpu/bandwidth).

If the Snakebite project could host my buildbot (or at least slaves),
or if the PSF could pay for a dedicated server for this, we would be
able to trigger such warnings, and provide an e-mail service to
package maintainers, for example.

The build could occur every time Distutils *or* the project changes.


If you want, until Snakebite is up, I can probably provide a slave which 
can at least do this testing for Twisted and perhaps some other 
projects.


Jean-Paul


Re: [Python-Dev] Better module shutdown procedure

2009-10-14 Thread exarkun

On 08:16 pm, n...@arctrix.com wrote:

The current shutdown code in pythonrun.c zaps module globals by
setting them to None (an attempt to break reference cycles). That
causes problems since __del__ methods can try to use the globals
after they have been set to None.

The procedure implemented by http://bugs.python.org/issue812369
seems to be a better idea. References to modules are replaced by
weak references and the GC is allowed to cleanup reference cycles.

I would like to commit this change but I'm not sure if it is a good
time in the development cycle or if anyone has objections to the
idea. Please speak up if you have input.


I notice that the patch doesn't include any unit tests for the feature 
being provided (it does change test_sys.py - although I don't understand 
how that change is related to this feature).  I hope that regardless of 
whatever else is decided, if the change is to be made, it will be 
accompanied by new unit tests verifying its proper operation.
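
The difference between the two shutdown strategies can be illustrated
in miniature. This is a hedged sketch with a synthetic module object;
the real patch operates on sys.modules at interpreter shutdown:

```python
import gc
import types
import weakref

# Create a throwaway module with some state and a cycle back to itself
# (real modules commonly sit in reference cycles).
mod = types.ModuleType('scratch')
mod.data = ['payload']
mod.self_ref = mod

ref = weakref.ref(mod)
del mod                         # drop the only outside strong reference
gc.collect()                    # the cycle collector reclaims the module

# The module is gone, and nothing needed its globals set to None first,
# so any __del__ running during collection still saw intact globals.
assert ref() is None
```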


Jean-Paul


Re: [Python-Dev] nonlocal keyword in 2.x?

2009-10-22 Thread exarkun

On 08:24 pm, mar...@v.loewis.de wrote:

Mike Krell wrote:
Is there any possibility of backporting support for the nonlocal 
keyword

into a  2.x release?


If so, only into 2.7. Can you please explain why it would be desirable
to do that? 2.7 will likely be the last 2.x release, so only a fairly
small portion of the applications would be actually able to use this 
(or

any other new feature added to 2.7): most code supporting 2.x will also
have to support 2.6, so the keyword won't be available to such code,
anyway.


For the same reason that it is desirable to backport all of the other 
changes from 3.x - because it makes the 2.x to 3.x transition easier.


If Python 2.7 supports the nonlocal keyword, then 2.7 becomes that much 
better of a stepping stone towards 3.x.


You've suggested that most 2.x code will have to support 2.6 and so 
won't be able to use the nonlocal keyword even if it is added to 2.7. 
This precise argument could be applied to all of the features in 2.6 
which aim to bring it closer to 3.x.  Any program which must retain 
Python 2.5 compatibility will not be able to use them.  Yet 2.6 is a 
more useful stepping stone towards 3.x than 2.5 is.


So yes, it would be quite desirable to see nonlocal and as many other 
3.x features as possible backported for 2.7.  And depending on how close 
2.7 manages to get, it may make sense to backport anything that doesn't 
make it into 2.7 for a 2.8 release.


The 3.x transition is *hard*.  Anything that makes it easier is good.
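
For readers unfamiliar with the keyword in question: nonlocal lets a
nested function rebind a variable in its enclosing scope, which 2.x
code can only emulate with a mutable container. A sketch (the first
form is what a 2.7 backport would have enabled):

```python
def make_counter():
    count = 0
    def bump():
        nonlocal count          # rebind the enclosing variable (3.x syntax)
        count += 1
        return count
    return bump

# The 2.x workaround: smuggle the state inside a mutable object.
def make_counter_2x():
    count = [0]
    def bump():
        count[0] += 1           # mutation, not rebinding, so no keyword needed
        return count[0]
    return bump

c = make_counter()
c(); c()
assert c() == 3
assert make_counter_2x()() == 1
```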

Jean-Paul


Re: [Python-Dev] Possible language summit topic: buildbots

2009-10-25 Thread exarkun

On 12:16 pm, solip...@pitrou.net wrote:



For a), I think we can solve this only by redundancy, i.e. create more
build slaves, hoping that a sufficient number would be up at any point
in time.


We are already doing this, aren't we?
http://www.python.org/dev/buildbot/3.x/

It doesn't seem to work very well, it's a bit like a Danaides vessel.

The source of the problem is that such a system can degrade without
anybody taking action. If the web server's hard disk breaks down, 
people

panic and look for a solution quickly. If the source control is down,
somebody *will* "volunteer" to fix it. If the automated build system
produces results less useful, people will worry, but not take action.


Well, to be fair, buildbots breaking also happens much more frequently
(perhaps one or two orders of magnitude) than the SVN server or the Web
site going down. Maintaining them looks like a Sisyphean task, and 
nobody

wants that.


Perhaps this is a significant portion of the problem.  Maintaining a 
build slave is remarkably simple and easy.  I maintain about half a 
dozen slaves and spend at most a few minutes a month operating them. 
Actually setting one up in the first place might take a bit longer, 
since it involves installing the necessary software and making sure 
everything's set up right, but the actual slave configuration itself is 
one command:


 buildbot create-slave <basedir> <master> <name> <password>


Perhaps this will help dispel the idea that it is a serious undertaking 
to operate a slave.


The real requirement which some people may find challenging is that the 
slave needs to operate on a host which is actually online almost all of 
the time.  If you don't have such a machine, then there's little point 
offering to host a slave.

I don't know what kind of machines are the current slaves, but if they
are 24/7 servers, isn't it a bit surprising that the slaves would go 
down

so often? Is the buildbot software fragile? Does it require a lot of
(maintenance, repair) work from the slave owners?


As I have no specific experience maintaining any of the CPython build 
slaves, I can't speak to any maintenance issues which these slaves have 
encountered.  I would expect that they are as minimal as the issues I 
have encountered maintaining slaves for other projects, but perhaps this 
is wrong.  I do recall that there were some win32 issues (discussed on 
this list, I think) quite a while back, but I think those were resolved. 
I haven't heard of any other issues since then.  If there are some, 
perhaps the people who know about them could raise them and we could try 
to figure out how to resolve them.


Jean-Paul


Re: [Python-Dev] Possible language summit topic: buildbots

2009-10-25 Thread exarkun

On 09:47 am, mar...@v.loewis.de wrote:

Mark Dickinson wrote:

Would it be worth spending some time discussing the buildbot situation
at the PyCon 2010 language summit?  In the past, I've found the
buildbots to be an incredibly valuable resource;  especially when
working with aspects of Python or C that tend to vary significantly
from platform to platform (for me, this usually means floating-point,
and platform math libraries, but there are surely many other things it
applies to).  But more recently there seem to have been some
difficulties keeping a reasonable number of buildbots up and running.
A secondary problem is that it can be awkward to debug some of the
more obscure test failures on buildbots without having direct access
to the machine.  From conversations on IRC, I don't think I'm alone in
wanting to find ways to make the buildbots more useful.


These are actually two issues:
a) where do we get buildbot hardware and operators?
b) how can we reasonably debug problems occurring on buildbots

For a), I think we can solve this only by redundancy, i.e. create
more build slaves, hoping that a sufficient number would be up at
any point in time.

So: what specific kinds of buildbots do you think are currently 
lacking?

A call for volunteers will likely be answered quickly.

So the question is: how best to invest time and possibly money to
improve the buildbot situation (and as a result, I hope, improve the
quality of Python)?


I don't think money will really help (I'm skeptical in general that
money helps in open source projects). As for time: "buildbot scales",
meaning that the buildbot slave admins will all share the load, being
responsible only for their own slaves.


I think that money can help in two ways in this case.

First, there are now a multitude of cloud hosting providers which will 
operate a slave machine for you.  BuildBot has even begun to support 
this deployment use-case by allowing you to start up and shut down vms 
on demand to save on costs.  Amazon's EC2 service is supported out of 
the box in the latest release.


Second, there are a number of active BuildBot developers.  One of them 
has even recently taken a contract from Mozilla to implement some non- 
trivial BuildBot enhancements.  I think it very likely that he would 
consider taking such a contract from the PSF for whatever enhancements 
would help out the CPython buildbot.

On the master side: would you be interested in tracking slave admins?

What could be done to make maintenance of build
slaves easier?


This is something that only the slave admins can answer. I don't think
it's difficult - it's just that people are really unlikely to 
contribute

to the same thing over a period of five years at a steady rate. So we
need to make sure to find replacements when people drop out.


This is a good argument for VMs.  It's certainly *possible* to chase an 
ever changing set of platforms, but it strikes me as something of a 
waste of time.

The source of the problem is that such a system can degrade without
anybody taking action. If the web server's hard disk breaks down,
people panic and look for a solution quickly. If the source control
is down, somebody *will* "volunteer" to fix it. If the automated
build system produces results less useful, people will worry, but
not take action.


To me, that raises the question of why people aren't more concerned with 
the status of the build system.  Shouldn't developers care if the code 
they're writing works or not?


Jean-Paul


Re: [Python-Dev] [TIP] Possible language summit topic: buildbots

2009-10-25 Thread exarkun

On 12:48 pm, c...@msu.edu wrote:


[snip]

The most *exciting* part of pony-build, apart from the always-riveting
spectacle of "titus rediscovering problems that buildbot solved 5 years
ago", is the loose coupling of recording server to the build slaves and
build reporters.  My plan is to enable a simple and lightweight XML-RPC
and/or REST-ish interface for querying the recording server from scripts
or other Web sites.  This has Brett aquiver with anticipation, I gather
-- no more visual inspection of buildbot waterfall pages ;)


BuildBot has an XML-RPC interface.  So Brett can probably do what he 
wants with BuildBot right now.


Jean-Paul


Re: [Python-Dev] Possible language summit topic: buildbots

2009-10-25 Thread exarkun

On 05:47 pm, p.f.mo...@gmail.com wrote:

2009/10/25  :
Perhaps this is a significant portion of the problem.  Maintaining a 
build
slave is remarkably simple and easy.  I maintain about half a dozen 
slaves
and spend at most a few minutes a month operating them. Actually 
setting one
up in the first place might take a bit longer, since it involves 
installing
the necessary software and making sure everything's set up right, but 
the

actual slave configuration itself is one command:

  buildbot create-slave <basedir> <master:port> <name> <password>


Perhaps this will help dispel the idea that it is a serious undertaking
to operate a slave.

The real requirement which some people may find challenging is that the
slave needs to operate on a host which is actually online almost all of
the time.  If you don't have such a machine, then there's little point
offering to host a slave.
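Spelled out with its surrounding steps, the one-command setup described 
above looks roughly like this, following the BuildBot 0.7.x docs; the 
master host:port, slave name, and password are placeholders that a 
master admin would assign:

```shell
# Rough shape of the full slave-side setup (BuildBot 0.7.x era).
easy_install buildbot            # install the slave software
buildbot create-slave ~/pyslave master.example.org:9989 my-slave-name my-password
buildbot start ~/pyslave         # connects to the master and waits for builds
```

After that, the only recurring work is keeping the host online and 
occasionally upgrading the buildbot package.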


I have been seriously considering setting up one or more buildslaves
for a while now. However, my biggest issue is that they would be
running as VMs on my normal PC, which means that it's the issue of
keeping them continually online that hurts me.

If I could (say) just fire the slaves up for a set period, or fire
them up, have them do a build and report back, and then shut down,
that would make my life easier (regular activities rather than ongoing
sysadmin works better for me).

It sounds like a buildslave isn't really what I should be looking at.
Maybe Titus' push model pony-build project would make more sense for
me.


Maybe.  I wonder if Titus' "push model" (I don't really understand this 
term in this context) makes sense for continuous integration at all, 
though.  As a developer, I don't want to have access to build results 
across multiple platforms when someone else feels like it.  I want 
access when *I* feel like it.


Anyway, BuildBot is actually perfectly capable of dealing with this.  I 
failed to separate my assumptions about how everyone would want to use 
the system from what the system is actually capable of.


If you run a build slave and it's offline when a build is requested, the 
build will be queued and run when the slave comes back online.  So if 
the CPython developers want to work this way (I wouldn't), then we don't 
need pony-build; BuildBot will do just fine.


Jean-Paul


Re: [Python-Dev] Possible language summit topic: buildbots

2009-10-25 Thread exarkun




On 06:32 pm, mar...@v.loewis.de wrote:
I've been trying to get some feedback about firing up buildbots on Cloud
Servers for a while now and haven't had much luck.  I'd love to find a
way of having buildbots come to life, report to the mother ship, do the
build, then go away 'till next time they're required.


I'm not quite sure whom you have been trying to get feedback from, and
can't quite picture your proposed setup from above description.

In any case, ISTM that your approach isn't compatible with how buildbot
works today (not sure whether you are aware of that): a build slave
needs to stay connected all the time, so that the build master can
trigger a build when necessary.

So if your setup requires the slaves to shut down after a build, I don't
think this can possibly work.


This is supported in recent versions of BuildBot with a special kind of 
slave:


http://djmitche.github.com/buildbot/docs/0.7.11/#On_002dDemand-_0028_0022Latent_0022_0029-Buildslaves


Jean-Paul


Re: [Python-Dev] Possible language summit topic: buildbots

2009-10-25 Thread exarkun

On 25 Oct, 09:36 pm, db3l@gmail.com wrote:


I think the other issue most likely to cause a perceived "downtime"
with the Windows build slave is that I've had a handful of cases over
the past two years where the build slave appears to be operating
properly, but the master seems to just queue up jobs as if it were down.
The slave still shows an established TCP link to the master, so I
generally only catch this when I happen to peek at the status web page,
or catch a remark here on python-dev, so that can reduce availability.


This sounds like something that should be reported upstream. 
Particularly if you know how to reproduce it.  Has it been?


Jean-Paul


Re: [Python-Dev] Possible language summit topic: buildbots

2009-10-25 Thread exarkun

On 01:28 am, db3l@gmail.com wrote:

exar...@twistedmatrix.com writes:

This sounds like something that should be reported
upstream. Particularly if you know how to reproduce it.  Has it been?


No, largely because I can't reproduce it at all.  It's happened maybe
4-5 times in the past 2 years or so.  All that I see is that my end
looks good yet the master end seems not to be dispatching jobs (it
never shows an explicit disconnect for my slave though).

My best guess is that something disrupted the TCP connection, and that
the slave isn't doing anything that would let it know its connection
was dropped.  Although I thought there were periodic pings even from
the slave side.

Given the frequency, it's not quite high priority to me, though having
the master let the owner of a slave know when it's down would help cut
down on lost availability due to this case, so I suppose I could suggest
that feature to the buildbot developers.


This feature exists, at least.  BuildBot can email people when slaves 
are offline for more than some configured time limit.  I'm not sure if 
the CPython master is configured to do this or not.


It's easy to set up if not, the BuildSlave initializer accepts a list of 
email addresses that will be notified when that slave goes offline, 
notify_on_missing:


http://buildbot.net/apidocs/buildbot.buildslave.AbstractBuildSlave-class.html#__init__
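As a sketch, assuming the 0.7.x-era BuildSlave signature with the 
notify_on_missing and missing_timeout arguments mentioned above, a 
master-side declaration with that notification enabled might look like 
this (the slave name, password, and address are placeholders):

```python
# Sketch of a master.cfg fragment declaring a slave whose admin is
# emailed if the slave stays disconnected past missing_timeout.
from buildbot.buildslave import BuildSlave

c = BuildmasterConfig = {}
c['slaves'] = [
    BuildSlave(
        'windows-xp-x86', 'slavepassword',
        notify_on_missing=['slave-admin@example.org'],
        missing_timeout=3600,   # email after the slave has been gone an hour
    ),
]
```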


Jean-Paul


Re: [Python-Dev] Reworking the GIL

2009-10-26 Thread exarkun

On 04:18 pm, dan...@stutzbachenterprises.com wrote:
On Mon, Oct 26, 2009 at 10:58 AM, Antoine Pitrou 
wrote:

Er, I prefer to keep things simple. If you have lots of I/O you should
probably
use an event loop rather than separate threads.


On Windows, using a single-threaded event loop is sometimes impossible.
WaitForMultipleObjects(), which is the Windows equivalent to select() or
poll(), can handle a maximum of only 64 objects.


This is only partially accurate.  For one thing, WaitForMultipleObjects 
calls are nestable.  For another thing, Windows also has I/O completion 
ports which are not limited to 64 event sources.  The situation is 
actually better than on a lot of POSIXes.
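As an illustration of the usual nesting workaround, here is a sketch of 
partitioning handles so that each waiting thread stays under the 
64-object limit (MAXIMUM_WAIT_OBJECTS); the handle values are 
hypothetical integers standing in for real Windows HANDLEs:

```python
# Each chunk is small enough for one WaitForMultipleObjects call; a real
# design would hand each chunk to a worker thread (or wait on the worker
# threads themselves from a top-level wait).
MAXIMUM_WAIT_OBJECTS = 64

def partition_handles(handles, limit=MAXIMUM_WAIT_OBJECTS):
    """Split handles into chunks, each small enough for one wait call."""
    return [handles[i:i + limit] for i in range(0, len(handles), limit)]

# 200 hypothetical handles -> 4 groups: 64 + 64 + 64 + 8
groups = partition_handles(list(range(200)))
```

I/O completion ports avoid the partitioning entirely, which is why they
are the preferred mechanism for large numbers of event sources.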

Do we really need priority requests at all?  They seem counter to your
desire for simplicity and allowing the operating system's scheduler to 
do

its work.


Despite what I said above, however, I would also take a default position 
against adding any kind of more advanced scheduling system here.  It 
would, perhaps, make sense to expose the APIs for controlling the 
platform scheduler, though.


Jean-Paul


Re: [Python-Dev] "Buildbot" category on the tracker

2009-10-29 Thread exarkun

On 02:30 pm, solip...@pitrou.net wrote:


Hello,

What do you think of creating a "buildbot" category in the tracker?
There are often problems on specific buildbots which would be nice to
track, but there's nowhere to do so.


Is your idea that this would be for tracking issues with the *bots* 
themselves?  That is, not just for tracking cases where some test method 
fails on a particular bot, but for tracking cases where, say, a bot's 
host has run out of disk space and cannot run the tests at all?


For the case where a test is failing because of some platform or 
environment issue, it seems more sensible to track the ticket as 
relating to that platform or environment, or track it in relation to the 
feature it affects.


Of course, tickets could move between these classifications as 
investigation reveals new information about the problem.


Jean-Paul


Re: [Python-Dev] "Buildbot" category on the tracker

2009-10-29 Thread exarkun

On 29 Oct, 11:41 pm, jnol...@gmail.com wrote:

On Thu, Oct 29, 2009 at 7:04 PM,   wrote:

On 02:30 pm, solip...@pitrou.net wrote:


Hello,

What do you think of creating a "buildbot" category in the tracker?
There are often problems on specific buildbots which would be nice to
track, but there's nowhere to do so.


Is your idea that this would be for tracking issues with the *bots*
themselves?  That is, not just for tracking cases where some test method
fails on a particular bot, but for tracking cases where, say, a bot's
host has run out of disk space and cannot run the tests at all?

For the case where a test is failing because of some platform or
environment issue, it seems more sensible to track the ticket as
relating to that platform or environment, or track it in relation to the
feature it affects.

Of course, tickets could move between these classifications as
investigation reveals new information about the problem.


Then again, I know for a fact certain tests fail ONLY on certain
buildbots because of the way they're configured. For example, certain
multiprocessing tests will fail if /dev/shm isn't accessible on Linux,
and several of the buildbots are in tight chroot jails and don't have
that exposed.

Is it a bug in that buildbot, a platform specific bug, etc?


It's a platform configuration that can exist.  If you're rejecting that 
configuration and saying that CPython will not support it, then it's 
silly to have a buildbot set up that way, and presumably that should be 
changed.


The point is that this isn't about buildbot.  It's about CPython and 
what platforms it supports.  Categorizing it by "buildbot" is not 
useful, because no one is going to be cruising along looking for 
multiprocessing issues to fix by browsing tickets by the "buildbot" 
category.


If, on the other hand, (sticking with this example) /dev/shm-less 
systems are not a platform that CPython wants to support, then having a 
buildbot running on one is a bit silly.  It will probably always fail, 
and all it does is contribute another column of red.  Who does that 
help?




Jean-Paul


Re: [Python-Dev] "Buildbot" category on the tracker

2009-10-30 Thread exarkun

On 12:55 pm, jnol...@gmail.com wrote:
On Fri, Oct 30, 2009 at 4:53 AM, "Martin v. Löwis" wrote:

I'm confused: first you said they fail, now you say they get skipped.
Which one is it? I agree with R. David's analysis: if they fail, it's
a multiprocessing bug, if they get skipped, it's a flaw in the build
slave configuration (but perhaps only slightly so, because it is good
if both cases are tested - and we do have machines also that provide
/dev/shm).


They failed until we had the tests skip those platforms - at the time,
I felt that it was more of a bug with the build slave configuration
than a multiprocessing issue, I don't like skipping tests unless the
platform fundamentally isn't supported (e.g. broken semaphores for
some actions on OS/X) - linux platforms support this functionality
just fine - except when in locked-down chroot jails.

The only reason I brought it up was to point out the a buildbot
configuration on a given host can make tests fail even if those tests
would normally pass on that operating system.


Just as a build slave can be run in a chroot, so can any other Python 
program.  This is a real shortcoming of the multiprocessing module. 
It's entirely possible that people will want to run Python software in 
chroots sometimes.  So it's proper to acknowledge that this is an 
unsupported environment.  The fact that the kernel in use is the same as 
the kernel in use on another supported platform is sort of irrelevant. 
The kernel is just one piece of the system, there are many other 
important pieces.


Jean-Paul


[Python-Dev] Cloud build slaves (was Re: Possible language summit topic: buildbots)

2009-10-30 Thread exarkun

On 04:31 pm, c...@msu.edu wrote:

On Fri, Oct 30, 2009 at 04:21:06PM +, Antoine Pitrou wrote:


Hello,

Sorry for the little redundancy, I would like to underline Jean-Paul's
suggestion here:

Le Sun, 25 Oct 2009 14:05:12 +0000, exarkun a écrit :
> I think that money can help in two ways in this case.
>
> First, there are now a multitude of cloud hosting providers which will
> operate a slave machine for you.  BuildBot has even begun to support
> this deployment use-case by allowing you to start up and shut down vms
> on demand to save on costs.  Amazon's EC2 service is supported out of
> the box in the latest release.

I'm not a PSF member, but it seems to me that the PSF could ask Amazon
(or any other virtual machine business anyway) to donate a small number
of permanent EC2 instances in order to run buildslaves on.


[ ... ]

I'm happy to provide VMs or shell access for Windows (XP, Vista, 7);
Linux ia64; Linux x86; and Mac OS X.


Okay, let's move on this.  Martin has, I believe, said that potential 
slave operators only need to contact him to get credentials for new 
slaves.  Can you make sure to follow up with him to get slaves running 
on these machines?  Or would you rather give out access to someone else 
and have them do the build slave setup?

Others have made similar offers.


I'll similarly encourage them to take action, then.  Do you happen to 
remember who?

The architectures supported by the cloud services don't really add
anything (and generally don't have Mac OS X support, AFAIK).


That's not entirely accurate.  Currently, CPython has slaves on these 
platforms:


 - x86
   - FreeBSD
   - Windows XP
   - Gentoo Linux
   - OS X
 - ia64
   - Ubuntu Linux
 - Alpha
   - Debian Linux

So, assuming we don't want to introduce any new OS, Amazon could fill in 
the following holes:


 - x86
   - Ubuntu Linux
 - ia64
   - FreeBSD
   - Windows XP
   - Gentoo Linux

So very modestly, that's 4 currently missing slaves which Amazon's cloud 
service *does* add.  It's easy to imagine further additions it could 
make as well.

What we really need (IMO) is someone to dig into the tests to figure out
which tests fail randomly and why, and to fix them on specific
architectures that most of us don't personally use.  This is hard work
that is neither glamorous nor popular.


Sure.  That's certainly necessary.  I don't think anyone is suggesting 
that it's not.  Fortunately, adding more build slaves is not mutually 
exclusive with a developer fixing bugs in CPython.

I think the idea of paying a dedicated developer to make the
CPython+buildbot tests reliable is better, although I would still be -0
on it (I don't think the PSF should be paying for this kind of thing at
all).


I hope everyone is on board with the idea of fixing bugs in CPython, 
either in the actual implementation of features or in the tests for 
those features.  That being the case, the discussion of whether or not 
the PSF should try to fund such a task is perhaps best discussed on the 
PSF members list.


Jean-Paul


Re: [Python-Dev] [TIP] Possible language summit topic: buildbots

2009-10-30 Thread exarkun

On 04:42 pm, ole...@gmail.com wrote:

On Sun, Oct 25, 2009 at 9:13 AM,   wrote:

On 12:48 pm, c...@msu.edu wrote:


[snip]

The most *exciting* part of pony-build, apart from the always-riveting
spectacle of "titus rediscovering problems that buildbot solved 5 years
ago", is the loose coupling of recording server to the build slaves and
build reporters.  My plan is to enable a simple and lightweight XML-RPC
and/or REST-ish interface for querying the recording server from scripts
or other Web sites.  This has Brett aquiver with anticipation, I gather
-- no more visual inspection of buildbot waterfall pages ;)


BuildBot has an XML-RPC interface.  So Brett can probably do what he 
wants

with BuildBot right now.


... but pony-build follows a different model


But BuildBot exists, is deployed, and can be used now, without waiting.

(Sorry, I don't really understand what point you were hoping to make 
with your message, so I just thought I'd follow up in the same style and 
hope that you'll understand my message even if I don't understand yours 
:).


Jean-Paul


Re: [Python-Dev] EC2 buildslaves

2009-11-01 Thread exarkun

On 31 Oct, 08:13 pm, solip...@pitrou.net wrote:

Martin v. Löwis  v.loewis.de> writes:


Not sure whether it's still relevant after the offers of individually
donated hardware.


We'll see, indeed.

However, if you want to look into this, feel free to
set up EC2 slaves.


I only know how to set up mainstream Linux distros though (Debian- or
Redhat-lookalikes :-)). I've just played a bit and, after the hassle of
juggling with a bunch of different keys and credentials, setting up an
instance and saving an image for future use is quite easy.


Starting with a mainstream distro doesn't seem like a bad idea.  For
example, there isn't currently a 32-bit Ubuntu (any version) slave.
That would be a nice gap to fill in, right?


Jean-Paul


Re: [Python-Dev] PEP 3003 - Python Language Moratorium

2009-11-05 Thread exarkun

On 5 Nov, 11:55 pm, bobbyrw...@gmail.com wrote:

What exactly are those better ways? Document as deprecated only?

-Brett


A switch to ENABLE those warnings?


Lord knows I'm sick of filtering them out of logs.

A switch to enable deprecation warnings  would give developers a
chance to see them when migrating to a new version of python.  And it
would prevent users from having to deal with them.


PendingDeprecationWarning is silent by default and requires a switch to 
be enabled.
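A small demonstration of that default and of opting in, using only the 
stdlib warnings machinery; the old_api function here is hypothetical:

```python
import warnings

def old_api():
    """Hypothetical function warning about its own pending removal."""
    warnings.warn("old_api is going away", PendingDeprecationWarning)
    return 42

# With an explicit "ignore" filter (matching the silent default), the
# warning is swallowed and nothing is recorded.
with warnings.catch_warnings(record=True) as quiet:
    warnings.simplefilter("ignore", PendingDeprecationWarning)
    old_api()

# Opting in -- the in-process equivalent of a command-line switch such
# as -W always::PendingDeprecationWarning -- surfaces the warning.
with warnings.catch_warnings(record=True) as loud:
    warnings.simplefilter("always", PendingDeprecationWarning)
    old_api()
```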


Jean-Paul


Re: [Python-Dev] PEP 3003 - Python Language Moratorium

2009-11-07 Thread exarkun

On 12:10 pm, s...@pobox.com wrote:
   Guido> ... it's IMO pretty mysterious if you encounter this and don't
   Guido> already happen to know what it means.

If you require parens maybe it parses better:

   import (a or b or c) as mod

Given that the or operator short-circuits, I think that "(a or b or c)
terminates once a module is found" isn't too hard to grasp.


Did everyone forget what the subject of this thread is?

Jean-Paul


Re: [Python-Dev] PyPI comments and ratings, *really*?

2009-11-12 Thread exarkun

On 09:44 am, lud...@lericson.se wrote:
Why are there comments on PyPI? Moreso, why are there comments which I 
cannot control as a package author on my very own packages? That's 
just absurd.


It's *my* package, and so should be *my* choice if I want user input 
or not.


And ratings? I thought it was the Python Package Index, not the Python 
Anonymous Package Tribunal.


As I see it, there are only two ways to fix these misguided steps of 
development: throw them out, or make them opt-in settings.


The comments I simply do not understand. Why not instead provide a 
form for mailing the package author?
The ratings are just not what PyPI should be doing, is about, or what 
I signed up for.


See the various catalog-sig threads on this topic.

Jean-Paul


Re: [Python-Dev] PyPI comments and ratings, *really*?

2009-11-12 Thread exarkun

On 03:01 pm, dalc...@gmail.com wrote:
On Thu, Nov 12, 2009 at 12:32 PM, Jesse Noller wrote:
On Thu, Nov 12, 2009 at 9:25 AM, Barry Warsaw wrote:

On Nov 12, 2009, at 8:06 AM, Jesse Noller wrote:

Frankly, I agree with him. As implemented, I *and others* think this
is broken. I've taken the stance of not publishing things to PyPi
until A> I find the time to contribute to make it better or B> It
changes.


That's distressing.  For better or worse PyPI is the central repository
of 3rd party packages.  It should be easy, desirable, fun and socially
encouraged to get your packages there.


PyPI isn't a place to file bugs, complain something didn't work for
you if you didn't even have the common decency in some cases to email
them. Being unable, as an author, to remove, moderate, or even respond
to issues there bothers me quite a bit.


I also agree with you. I do not see the point to make PyPI yet another
social network.


+1

Jean-Paul


Re: [Python-Dev] Unittest/doctest formatting differences in 2.7a1?

2009-12-09 Thread exarkun

On 05:11 pm, lrege...@jarn.com wrote:
On Wed, Dec 9, 2009 at 17:34, Michael Foord wrote:

Can you be more specific?


Only with an insane amount of work. I'll hold that off for a while.


I don't know if this is related at all (and I guess we won't until 
Lennart can be more specific :), but here are some Twisted unit test 
failures which are probably due to unittest changes in 2.7:


===
[FAIL]: twisted.trial.test.test_loader.LoaderTest.test_sortCases

Traceback (most recent call last):
 File 
"/home/buildslave/pybot/trunk.glass-x86/build/Twisted/twisted/trial/test/test_loader.py", 
line 167, in test_sortCases

   [test._testMethodName for test in suite._tests])
twisted.trial.unittest.FailTest: not equal:
a = ['test_b', 'test_c', 'test_a']
b = ['test_c', 'test_b', 'test_a']

===
[FAIL]: twisted.trial.test.test_loader.ZipLoadingTest.test_sortCases

Traceback (most recent call last):
 File 
"/home/buildslave/pybot/trunk.glass-x86/build/Twisted/twisted/trial/test/test_loader.py", 
line 167, in test_sortCases

   [test._testMethodName for test in suite._tests])
twisted.trial.unittest.FailTest: not equal:
a = ['test_b', 'test_c', 'test_a']
b = ['test_c', 'test_b', 'test_a']

===
[FAIL]: twisted.trial.test.test_tests.TestSkipMethods.test_reasons

Traceback (most recent call last):
 File 
"/home/buildslave/pybot/trunk.glass-x86/build/Twisted/twisted/trial/test/test_tests.py", 
line 143, in test_reasons

   str(reason))
twisted.trial.unittest.FailTest: not equal:
a = 'skip1 (twisted.trial.test.test_tests.SkippingTests)'
b = 'skip1'

===
[FAIL]: twisted.trial.test.test_class.AttributeSharing.test_shared

Traceback (most recent call last):
 File 
"/home/buildslave/pybot/trunk.glass-x86/build/Twisted/twisted/trial/test/test_class.py", 
line 131, in test_shared

   'test_2')
twisted.trial.unittest.FailTest: not equal:
a = 'test_2 (twisted.trial.test.test_class.ClassAttributeSharer)'
b = 'test_2'

===

I'm not opposed to the improvement of unittest (or any part of Python). 
Perhaps more of the improvements can be provided in new APIs rather than 
by changing the behavior of existing APIs, though.


Jean-Paul


Re: [Python-Dev] Unittest/doctest formatting differences in 2.7a1?

2009-12-11 Thread exarkun

On 9 Dec, 06:09 pm, fuzzy...@voidspace.org.uk wrote:

On 09/12/2009 18:02, exar...@twistedmatrix.com wrote:

On 05:11 pm, lrege...@jarn.com wrote:
On Wed, Dec 9, 2009 at 17:34, Michael Foord wrote:

Can you be more specific?


Only with an insane amount of work. I'll hold that off for a while.


I don't know if this is related at all (and I guess we won't until 
Lennart can be more specific :), but here are some Twisted unit test 
failures which are probably due to unittest changes in 2.7:


===
[FAIL]: twisted.trial.test.test_loader.LoaderTest.test_sortCases

Traceback (most recent call last):
 File 
"/home/buildslave/pybot/trunk.glass-x86/build/Twisted/twisted/trial/test/test_loader.py", 
line 167, in test_sortCases

   [test._testMethodName for test in suite._tests])
twisted.trial.unittest.FailTest: not equal:
a = ['test_b', 'test_c', 'test_a']
b = ['test_c', 'test_b', 'test_a']


Is the order significant? Can you just compare sorted versions of the 
lists instead?


If the order *is* significant I would be interested to know which 
change caused this.


It looks like a change to shortDescription may be responsible for all 
the failures mentioned here.


The order *is* significant (it's a test for how tests are ordered...). 
The sorting code (which wasn't super awesome, I'll admit) it uses was 
broken by the change in the return value of shortDescription, something 
which is much more obvious in some of the other failures.

[snip]

a = 'skip1 (twisted.trial.test.test_tests.SkippingTests)'
b = 'skip1'


These two test failures are due to the change in repr of something I 
guess? Is a or b the original output?


b is the original, a is the produced value.  Here's a simpler example:

Python 2.6.4 (r264:75706, Nov  2 2009, 14:38:03) [GCC 4.4.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from twisted.test.test_abstract import AddressTests
>>> AddressTests('test_decimalDotted').shortDescription()
'test_decimalDotted'




Python 2.7a0 (trunk:76325M, Nov 16 2009, 09:50:40) [GCC 4.4.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from twisted.test.test_abstract import AddressTests
>>> AddressTests('test_decimalDotted').shortDescription()
'test_decimalDotted (twisted.test.test_abstract.AddressTests)'




Aside from compatibility issues, this seems like a not-too-ideal change 
in behavior for a method named "shortDescription", at least to me. 
Going out on a limb, I'll guess that it was made in order to provide a 
different user-facing experience in the stdlib's test runner.  If that's 
right, then I think it would make sense to revert shortDescription back 
to the previous behavior and either introduce a new method which returns 
this string or just put the logic all into the display code instead.
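For comparison, here is how shortDescription() behaves on released 
versions of unittest, where it returns the first line of the test 
method's docstring, or None when there is no docstring; the test classes 
below are hypothetical stand-ins for the Twisted ones:

```python
import unittest

class AddressTests(unittest.TestCase):
    """Hypothetical stand-in for twisted.test.test_abstract.AddressTests."""
    def test_decimalDotted(self):
        """Test decimal-dotted IPv4 parsing."""

class BareTests(unittest.TestCase):
    def test_nothing(self):
        pass

# First line of the docstring, stripped:
with_doc = AddressTests('test_decimalDotted').shortDescription()
# No docstring at all:
without_doc = BareTests('test_nothing').shortDescription()
```

The class-qualified "test_name (module.Class)" string seen in the 2.7 
alpha above is instead what id() and str() on a TestCase produce.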


I'm not opposed to the improvement of unittest (or any part of 
Python). Perhaps more of the improvements can be provided in new APIs 
rather than by changing the behavior of existing APIs, though.


Well, introducing lots of new APIs has its own problems of course.


It seems to me that it typically has fewer problems.

Hearing about difficulties changes cause is good though (the reason for
alphas I guess) and if it is possible to work out why things are broken
then maybe we can still have the new functionality without the breakage.


Of course.  Also, I should have pointed this out in my previous email, 
this information about failures is always easily available, at least for 
Twisted (and at most for any project interested in participating in the 
project), on the community buildbots page:


 http://www.python.org/dev/buildbot/community/trunk/

So anyone who cares to can check to see if their changes have broken 
things right away, instead of only finding out 6 or 12 or 18 months 
later. :)

The problem with Lennart's report is that it is just "things are broken"
without much clue. I am sympathetic to this (working out *exactly* what
is broken in someone else's code can be a painful chore) but I'm not
sure what can be done about it without more detail.


Certainly.  Perhaps Zope would like to add something to the community 
builders page.


Thanks,

Jean-Paul


Re: [Python-Dev] [RELEASED] Python 2.7 alpha 2

2010-01-12 Thread exarkun

On 12 Jan, 10:04 pm, mar...@v.loewis.de wrote:

[...]

I've done a fair bit of 3.x porting, and I'm firmly convinced that
2.x can do nothing:

[...]

Inherently, 2.8 can't improve on that.


I agree that there are limitations like the ones you've listed, but I
disagree with your conclusion.  Maybe you assume that it's just as hard
to move to 2.8 (using the py3k backports not available in say 2.5) as it
is to 3.x?


Not at all, no. I'd rather expect that code that runs on 2.7 will run
on 2.8 unmodified.

But a hypothetical 2.8 would also give people a way to move closer to
py3k without giving up on using all their 2.x-only dependencies.


How so? If they use anything that is new in 2.8, they *will* need to
drop support for anything before it, no???

I think it's much more likely that libraries like Twisted can support
2.8 in the near future than 3.x.


Most likely, Twisted "supports" 2.8 *today* (hopefully). But how does
that help Twisted in moving to 3.2?


I'm not reading this thread carefully enough to make any arguments on 
either side, but I can contribute a fact.


Twisted very likely does not support 2.8 today.  I base this on the fact 
that Twisted does not support 2.7 today, and I expect 2.8 will be more 
like 2.7 than it will be like 2.3 - 2.6 (which Twisted does support).


When I say "support" here, I mean "all of the Twisted unit tests pass on 
it".


Jean-Paul


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread exarkun

On 10:47 pm, tjre...@udel.edu wrote:

On 1/29/2010 4:19 PM, Collin Winter wrote:
On Fri, Jan 29, 2010 at 7:22 AM, Nick Coghlan 
wrote:



Agreed. We originally switched Unladen Swallow to wordcode in our
2009Q1 release, and saw a performance improvement from this across the
board. We switched back to bytecode for the JIT compiler to make
upstream merger easier. The Unladen Swallow benchmark suite should
provide a thorough assessment of the impact of the wordcode ->
bytecode switch. This would be complementary to a JIT compiler, rather
than a replacement for it.

I would note that the switch will introduce incompatibilities with
libraries like Twisted. IIRC, Twisted has a traceback prettifier that
removes its trampoline functions from the traceback, parsing CPython's
bytecode in the process. If running under CPython, it assumes that the
bytecode is as it expects. We broke this in Unladen's wordcode switch.
I think parsing bytecode is a bad idea, but any switch to wordcode
should be advertised widely.


Several years ago, there was serious consideration of switching to a
register-based vm, which would have been even more of a change. Since I
learned Python 1.4, Guido has consistently insisted that the CPython vm
is not part of the language definition and, as far as I know, he has
rejected any bytecode hackery in the stdlib. While he is not one to,
say, randomly permute the codes just to frustrate such hacks, I believe
he has always considered vm details private and subject to change and
any usage thereof 'at one's own risk'.


Language to such effect might be a useful addition to this page (amongst 
others, perhaps):


 http://docs.python.org/library/dis.html

which very clearly and helpfully lays out quite a number of APIs which 
can be used to get pretty deep into the bytecode.  If all of this is 
subject to be discarded at the first sign that doing so might be 
beneficial for some reason, don't keep it a secret that, today, people 
need to join python-dev to learn.
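For reference, the dis module's documented APIs make this kind of introspection straightforward; here is a small sketch using the modern interface (exact opcode names vary across CPython versions, which rather underlines the point that the VM is private):

```python
import dis

def add(a, b):
    return a + b

# Walk the compiled bytecode of `add` via the documented dis API,
# collecting the symbolic opcode names.
ops = [instr.opname for instr in dis.get_instructions(add)]
```

Code like this survives VM changes far better than parsing `co_code` by hand, since it goes through the supported abstraction.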


Jean-Paul


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread exarkun

On 10:55 pm, collinwin...@google.com wrote:


That people are directly munging CPython
bytecode means that CPython should provide a better, more abstract way
to do the same thing that's more resistant to these kinds of changes.


Yes, definitely!  Requesting a supported way to do the kind of 
introspection that you mentioned earlier (I think it's a little simpler 
than you recollect) has been on my todo list for a while.  The hold-up 
is just figuring out exactly what kind of introspection API would make 
sense.


It might be helpful to hear more about how the wordcode implementation 
differs from the bytecode implementation.  It's challenging to abstract 
from a single data point. :)


For what it's worth, I think this is the code in Twisted that Collin was 
originally referring to:


    # it is only really originating from
    # throwExceptionIntoGenerator if the bottom of the traceback
    # is a yield.
    # Pyrex and Cython extensions create traceback frames
    # with no co_code, but they can't yield so we know it's okay to
    # just return here.
    if ((not lastFrame.f_code.co_code) or
        lastFrame.f_code.co_code[lastTb.tb_lasti] != cls._yieldOpcode):
        return

And it's meant to be used in code like this:

def foo():
    try:
        yield
    except:
        # raise a new exception

def bar():
    g = foo()
    g.next()
    try:
        g.throw(...)
    except:
        # Code above is invoked to determine if the exception raised
        # was the one that was thrown into the generator or a different
        # one.
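For reference, the `_yieldOpcode` value the first snippet compares against can be derived from the stdlib opcode module rather than hard-coded; a sketch (the numeric value is CPython-version specific, which is exactly why this kind of check is fragile):

```python
import opcode

# Numeric opcode for YIELD_VALUE in the running interpreter; a
# bytecode-inspecting check like Twisted's would compare the byte at
# tb_lasti against this value.
yield_op = opcode.opmap["YIELD_VALUE"]
```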

Jean-Paul


Re: [Python-Dev] The fate of the -U option

2010-02-03 Thread exarkun

On 02:52 pm, m...@egenix.com wrote:


Note that in Python 2.7 you can use

from __future__ import unicode_literals

on a per module basis to achieve much the same effect.


In Python 2.6 as well.
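As a reminder of what the future import does: it is a per-module switch that makes unprefixed string literals unicode on Python 2; on Python 3 it is accepted as a no-op, so code carrying it still runs:

```python
from __future__ import unicode_literals

# On Python 2 with the import this literal is unicode; on Python 3
# literals are always text, so the import changes nothing.
s = "hello"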

Jean-Paul


Re: [Python-Dev] The fate of the -U option

2010-02-03 Thread exarkun

On 03:21 pm, m...@egenix.com wrote:

exar...@twistedmatrix.com wrote:

On 02:52 pm, m...@egenix.com wrote:


Note that in Python 2.7 you can use

from __future__ import unicode_literals

on a per module basis to achieve much the same effect.


In Python 2.6 as well.


Right, but there are a few issues in 2.6 that will be fixed
in 2.7.


Ah.  I don't see anything on the "What's New" page.  Got a link?

Jean-Paul


Re: [Python-Dev] Forking and Multithreading - enemy brothers

2010-02-04 Thread exarkun

On 04:58 pm, jaeda...@gmail.com wrote:

Jesse Noller  gmail.com> writes:

We already have an implementation that spawns a
subprocess and then pushes the required state to the child. The
fundamental need for things to be pickleable *all the time* kinda
makes it annoying to work with.


This requirement puts a fairly large additional strain on working with
unwieldy, wrapped C++ libraries in a multiprocessing environment.
I'm not very knowledgeable on the internals of the system, but would
it be possible to have some kind of fallback system whereby if an
object fails to pickle we instead send information about how to import
it?  This has all kinds of limitations - it only works for importable
things (i.e. not instances), it can potentially lose information
dynamically added to the object, etc., but I thought I would throw the
idea out there so someone knowledgeable can decide if it has any merit.


It's already possible to define pickling for arbitrary objects.  You 
should be able to do this for the kinds of importable objects you're 
talking about, and perhaps even for some of the actual instances (though 
that depends on how introspectable they are from Python, and whether the 
results of this introspection can be used to re-instantiate the object 
somewhere else).


Take a look at the copy_reg module.
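A sketch of the copy_reg approach (the module is spelled copyreg on Python 3): you register a reduction function that tells pickle how to reconstruct instances of a class. The Point class here is a stand-in for a wrapped extension object, not anything from the thread:

```python
import copyreg
import pickle

class Point:
    """Stand-in for a wrapped C++ object needing custom pickling."""
    def __init__(self, x, y):
        self.x, self.y = x, y

def reduce_point(p):
    # Return (callable, args): pickle will call Point(x, y) to rebuild
    # the object on the other side.
    return (Point, (p.x, p.y))

copyreg.pickle(Point, reduce_point)

restored = pickle.loads(pickle.dumps(Point(1, 2)))
```

The same hook works for objects defined in extension modules, as long as enough state is introspectable to reconstruct them.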

Jean-Paul


Re: [Python-Dev] IO module improvements

2010-02-05 Thread exarkun

On 03:57 pm, gu...@python.org wrote:
On Fri, Feb 5, 2010 at 5:28 AM, Antoine Pitrou  
wrote:

Pascal Chambon  gmail.com> writes:


By the way, I'm having trouble with the "name" attribute of raw 
files,

which can be string or integer (confusing), ambiguous if containing a
relative path,


Why is it ambiguous? It sounds like you're using str() of the name and
then can't tell whether the file is named e.g. '1' or whether it
refers to file descriptor 1 (i.e. sys.stdout).


I think string/integer and ambiguity were different points.  Here's the 
ambiguity:


    exar...@boson:~$ python
    Python 2.6.4 (r264:75706, Dec  7 2009, 18:45:15) [GCC 4.4.1] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import os, io
    >>> f = io.open('.bashrc')
    >>> os.chdir('/')
    >>> f.name
    '.bashrc'
    >>> os.path.abspath(f.name)
    '/.bashrc'
    >>>
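The ambiguity can be reproduced directly: f.name stores whatever string was passed to open(), so resolving it later uses whatever the current directory happens to be at that moment. A small sketch using a temporary directory (POSIX paths assumed):

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
open(os.path.join(tmp, "example.txt"), "w").close()

old_cwd = os.getcwd()
os.chdir(tmp)
f = open("example.txt")
relative = f.name                   # still just 'example.txt'
os.chdir("/")
resolved = os.path.abspath(f.name)  # now wrongly resolves against '/'
f.close()
os.chdir(old_cwd)
```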
Jean-Paul


Re: [Python-Dev] PEP 3147: PYC Repository Directories

2010-02-06 Thread exarkun

On 08:21 pm, ba...@python.org wrote:

On Feb 03, 2010, at 01:17 PM, Guido van Rossum wrote:

Can you clarify? In Python 3, __file__ always points to the source.
Clearly that is the way of the future. For 99.99% of uses of __file__,
if it suddenly never pointed to a .pyc file any more (even if one
existed) that would be just fine. So what's this talk of switching to
__source__?


Upon further reflection, I agree.  __file__ also points to the source 
in
Python 2.7.  Do we need an attribute to point to the compiled bytecode 
file?


What if, instead of trying to annotate the module object with this 
assortment of metadata - metadata which depends on lots of things, and 
can vary from interpreter to interpreter, and even from module to module 
(depending on how it was loaded) - we just stuck with the __loader__ 
annotation, and encouraged/allowed/facilitated the use of the loader 
object to learn all of this extra information?


Jean-Paul


Re: [Python-Dev] __file__ is not always an absolute path

2010-02-06 Thread exarkun

On 10:29 pm, gu...@python.org wrote:
On Sat, Feb 6, 2010 at 12:49 PM, Ezio Melotti  
wrote:

In #7712 I was trying to change regrtest to always run the tests in a
temporary CWD (e.g. /tmp/@test_1234_cwd/).
The patches attached to the issue add a context manager that changes
the CWD, and it works fine when I run ./python -m test.regrtest from
trunk/.  However, when I try from trunk/Lib/ it fails with ImportErrors
(note that the latest patch by Florent Xicluna already tries to work
around the problem). The traceback points to "the_package =
__import__(abstest, globals(), locals(), [])" in runtest_inner (in
regrtest.py), and a "print __import__('test').__file__" there returns
'test/__init__.pyc'.
This can be reproduced quite easily:

[snip]

I haven't tried to repro this particular example, but the reason is
that we don't want to have to call getpwd() on every import nor do we
want to have some kind of in-process variable to cache the current
directory. (getpwd() is relatively slow and can sometimes fail
outright, and trying to cache it has a certain risk of being wrong.)


Assuming you mean os.getcwd():

exar...@boson:~$ python -m timeit -s 'def f(): pass' 'f()'
1000 loops, best of 3: 0.132 usec per loop
exar...@boson:~$ python -m timeit -s 'from os import getcwd' 'getcwd()'
100 loops, best of 3: 1.02 usec per loop
exar...@boson:~$

So it's about 7x more expensive than a no-op function call.  I'd call
this pretty quick.  Compared to everything else that happens during an
import, I'm not convinced this wouldn't be lost in the noise.  I think
it's at least worth implementing and measuring.
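The same comparison can be scripted with the timeit module rather than its command line (absolute numbers are machine-dependent; the point is only the relative cost):

```python
import timeit

# Time a no-op function call versus os.getcwd(), mirroring the shell
# measurement above.
noop = timeit.timeit("f()", setup="def f(): pass", number=100_000)
cwd = timeit.timeit("getcwd()", setup="from os import getcwd",
                    number=100_000)
ratio = cwd / noop  # roughly the overhead factor being discussed
```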


Jean-Paul


Re: [Python-Dev] __file__ is not always an absolute path

2010-02-06 Thread exarkun

On 6 Feb, 11:53 pm, gu...@python.org wrote:

On Sat, Feb 6, 2010 at 3:22 PM,   wrote:

On 10:29 pm, gu...@python.org wrote:


[snip]

I haven't tried to repro this particular example, but the reason is
that we don't want to have to call getpwd() on every import nor do we
want to have some kind of in-process variable to cache the current
directory. (getpwd() is relatively slow and can sometimes fail
outright, and trying to cache it has a certain risk of being wrong.)


Assuming you mean os.getcwd():


Yes.

exar...@boson:~$ python -m timeit -s 'def f(): pass' 'f()'
1000 loops, best of 3: 0.132 usec per loop
exar...@boson:~$ python -m timeit -s 'from os import getcwd' 'getcwd()'
100 loops, best of 3: 1.02 usec per loop
exar...@boson:~$
So it's about 7x more expensive than a no-op function call. I'd call
this pretty quick. Compared to everything else that happens during an
import, I'm not convinced this wouldn't be lost in the noise. I think
it's at least worth implementing and measuring.


But it's a system call, and its speed depends on a lot more than the
speed of a simple function call. It depends on the OS kernel, possibly
on the filesystem, and so on.


Do you know of a case where it's actually slow?  If not, how convincing 
should this argument really be?  Perhaps we can measure it on a few 
platforms before passing judgement.


For reference, my numbers are from Linux 2.6.31 and my filesystem 
(though I don't think it really matters) is ext3.  I have eglibc 2.10.1 
compiled by gcc version 4.4.1.

Also "os.getcwd()" abstracts away
various platform details that the C import code would have to
replicate.


That logic can all be hidden behind a C API which os.getcwd() can then 
be implemented in terms of.  There's no reason for it to be any harder 
to invoke from C than it is from Python.

Really, the approach of preprocessing sys.path makes much
more sense. If an app wants sys.path[0] to be an absolute path too
they can modify it themselves.


That may turn out to be the less expensive approach.  I'm not sure in 
what other ways it is the approach that makes much more sense.


Quite the opposite: centralizing the responsibility for normalizing this 
value makes a lot of sense if you consider things like reducing code 
duplication and, in turn, removing the possibility for bugs.


Adding better documentation for __file__ is another task which I think 
is worth undertaking, regardless of whether any change is made to how 
its value is computed.  At the moment, the two or three sentences about 
it in PEP 302 are all I've been able to find, and they don't really get 
the job done.


Jean-Paul


Re: [Python-Dev] setUpClass and setUpModule in unittest

2010-02-09 Thread exarkun

On 10:42 pm, fuzzy...@voidspace.org.uk wrote:

On 09/02/2010 21:57, Ben Finney wrote:

Michael Foord  writes:

The next 'big' change to unittest will (may?) be the introduction of
class and module level setUp and tearDown. This was discussed on
Python-ideas and Guido supported them. They can be useful but are 
also
very easy to abuse (too much shared state, monolithic test classes 
and

modules). Several authors of other Python testing frameworks spoke up
*against* them, but several *users* of test frameworks spoke up in
favour of them. ;-)

I think the perceived need for these is from people trying to use the
'unittest' API for tests that are *not* unit tests.

That is, people have a need for integration tests (test this module's
interaction with some other module) or system tests (test the behaviour
of the whole running system). They then try to crowbar those tests into
'unittest' and find it lacking, since 'unittest' is designed for tests
of function-level units, without persistent state between those test
cases.


I've used unittest for long running functional and integration tests 
(in both desktop and web applications). The infrastructure it provides 
is great for this. Don't get hung up on the fact that it is called 
unittest. In fact for many users the biggest reason it isn't suitable 
for tests like these is the lack of shared fixture support - which is 
why the other Python test frameworks provide them and we are going to 
bring it into unittest.


For what it's worth, we just finished *removing* support for setUpClass 
and tearDownClass from Trial.


Jean-Paul


Re: [Python-Dev] setUpClass and setUpModule in unittest

2010-02-11 Thread exarkun

On 02:41 pm, ole...@gmail.com wrote:

On Thu, Feb 11, 2010 at 7:41 AM, Michael Foord
 wrote:

On 11/02/2010 12:30, Nick Coghlan wrote:


Michael Foord wrote:


I'm not sure what response I expect from this email, and neither 
option
will be implemented without further discussion - possibly at the 
PyCon

sprints - but I thought I would make it clear what the possible
directions are.


I'll repeat what I said in the python-ideas thread [1]: with the
advent of PEP 343 and context managers, I see any further extension of
the JUnit inspired setUp/tearDown nomenclature as an undesirable
direction for Python to take.

Instead, I believe unittest should be adjusted to allow appropriate
definition of context managers that take effect at the level of the
test module, test class and each individual test.

For example, given the following method definitions in
unittest.TestCase for backwards compatibility:

  def __enter__(self):
    self.setUp()

  def __exit__(self, *args):
    self.tearDown()

The test framework might promise to do the following for each test:

  with get_module_cm(test_instance): # However identified
    with get_class_cm(test_instance): # However identified
      with test_instance: # **
        test_instance.test_method()
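The quoted sketch can be made concrete; here is a minimal, runnable version of the backwards-compatibility bridge (get_module_cm and get_class_cm are left out, since they were only hypothetical in the proposal):

```python
import unittest

class CMTestCase(unittest.TestCase):
    # Bridge from the proposal: entering the test case runs setUp,
    # exiting runs tearDown.
    def __enter__(self):
        self.setUp()
        return self

    def __exit__(self, *exc_info):
        self.tearDown()
        return False

class Example(CMTestCase):
    def setUp(self):
        self.events = ["setUp"]

    def tearDown(self):
        self.events.append("tearDown")

    def test_noop(self):
        self.events.append("test")

# Drive one test through the context-manager protocol by hand.
case = Example("test_noop")
with case:
    case.test_noop()
```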




What Nick pointed out is the right direction (IMHO), and the one I had


Why?  Change for the sake of change is not a good thing.  What are the 
advantages of switching to context managers for this?


Perhaps the idea was more strongly justified in the python-ideas thread. 
Anyone have a link to that?

in mind since I realized that unittest extensibility is the key
feature that needs to be implemented . I even wanted to start a
project using this particular architecture to make PyUnit extensible.


What makes you think it isn't extensible now?  Lots of people are 
extending it in lots of ways.


Jean-Paul


Re: [Python-Dev] setUpClass and setUpModule in unittest

2010-02-11 Thread exarkun

On 10 Feb, 02:47 pm, ole...@gmail.com wrote:

On Tue, Feb 9, 2010 at 6:15 PM,   wrote:


For what it's worth, we just finished *removing* support for
setUpClass and tearDownClass from Trial.


Ok ... but why ? Are they considered dangerous for modern societies ?


Several reasons:

 - Over the many years the feature was available, we never found anyone 
actually benefiting significantly from it.  It was mostly used where 
setUp/tearDown would have worked just as well.


 - There are many confusing corner cases related to ordering and error 
handling (particularly in the face of inheritance).  Different users 
invariably have different expectations about how these things work, and 
there's no way to satisfy them all.  One might say that this could apply 
to any feature, but...


 - People are exploring other solutions (such as testresources) which 
may provide better functionality more simply and don't need support deep 
in the loader/runner/reporter implementations.


Jean-Paul


Re: [Python-Dev] setUpClass and setUpModule in unittest

2010-02-11 Thread exarkun

On 04:18 pm, tsea...@palladion.com wrote:


Just as a point of reference:  zope.testing[1] has a "layer" feature
which is used to support this usecase:  a layer is a class named as an
attribute of a testcase, e.g.:

 class FunctionalLayer:
     @classmethod
     def setUp(klass):
         """ Do some expensive shared setup.
         """
     @classmethod
     def tearDown(klass):
         """ Undo the expensive setup.
         """

 class MyTest(unittest.TestCase):
     layer = FunctionalLayer

The zope.testing testrunner groups testcase classes together by layer:
each layer's setUp is called, then the testcases for that layer are
run, then the layer's tearDown is called.

Other features:

- Layer classes can define per-testcase-method 'testSetUp' and
  'testTearDown' methods.

- Layers can be composed via inheritance, and don't need to call base
  layers' methods directly:  the testrunner does that for them.

These features have been in heavy use for about 3 1/2 years with a lot
of success.


[1] http://pypi.python.org/pypi/zope.testing/


On the other hand:

 http://code.mumak.net/2009/09/layers-are-terrible.html

I've never used layers myself, so I won't personally weigh in for or 
against.


Jean-Paul


Re: [Python-Dev] setUpClass and setUpModule in unittest

2010-02-12 Thread exarkun

On 07:48 pm, gu...@python.org wrote:

On Fri, Feb 12, 2010 at 7:49 AM, Michael Foord
 wrote:
My *hope* is that we provide a general solution, possibly based on all
or part of Test Resources, with an easy mechanism for the setUpClass
and setUpModule cases but one that also solves the more general case
of sharing fixtures between tests. If that doesn't turn out to be
possible then we'll go for a straight implementation of setUpClass /
setUpModule. I'm hoping I can get this together in time for the PyCon
sprints...


Do you have a reference for Test Resources?


http://pypi.python.org/pypi/testresources/0.2.2

[snip]

However from this example I *cannot* guess whether those resources are
set up and torn down per test or per test class. Also the notation


The idea is that you're declaring what the tests need in order to work. 
You're not explicitly defining the order in which things are set up and 
torn down.  That is left up to another part of the library to determine.


One such other part, OptimisingTestSuite, will look at *all* of your 
tests and find an order which involves the least redundant effort.


You might have something else that breaks up the test run across 
multiple processes and uses the resource declarations to run all tests 
requiring one thing in one process and all tests requiring another thing 
somewhere else.


You might have still something else that wants to completely randomize 
the order of tests, and sets up all the resources at the beginning and 
tears them down at the end.  Or you might need to be more 
memory/whatever conscious than that, and do each set up and tear down 
around each test.


The really nice thing here is that you're not constrained in how you 
group your tests into classes and modules by what resources you want to 
use in them.  You're free to group them by what they're logically 
testing, or in whatever other way you wish.


Jean-Paul


Re: [Python-Dev] setUpClass and setUpModule in unittest

2010-02-12 Thread exarkun

On 08:27 pm, gu...@python.org wrote:

On Fri, Feb 12, 2010 at 12:20 PM,   wrote:
The idea is that you're declaring what the tests need in order to
work.  You're not explicitly defining the order in which things are
set up and torn down.  That is left up to another part of the library
to determine.

One such other part, OptimisingTestSuite, will look at *all* of your
tests and find an order which involves the least redundant effort.


So is there a way to associate a "cost" with a resource? I may have
one resource which is simply a /tmp subdirectory (very cheap) and
another that requires starting a database service (very expensive).


I don't think so.  From the docs, "This TestSuite will introspect all 
the test cases it holds directly and if they declare needed resources, 
will run the tests in an order that attempts to minimise the number of 
setup and tear downs required.".


You might have something else that breaks up the test run across
multiple processes and uses the resource declarations to run all tests
requiring one thing in one process and all tests requiring another
thing somewhere else.


I admire the approach, though I am skeptical. We have a thing to split
up tests at Google which looks at past running times for tests to make
an informed opinion. Have you thought of that?
You might have still something else that wants to completely randomize
the order of tests, and sets up all the resources at the beginning and
tears them down at the end.  Or you might need to be more
memory/whatever conscious than that, and do each set up and tear down
around each test.


How does your code know the constraints?


To be clear, aside from OptimisingTestSuite, I don't think testresources 
implements any of the features I talked about.  They're just things one 
might want to and be able to implement, given a test suite which uses 
testresources.


The really nice thing here is that you're not constrained in how you
group your tests into classes and modules by what resources you want
to use in them.  You're free to group them by what they're logically
testing, or in whatever other way you wish.


I guess this requires some trust in the system. :-)


Jean-Paul


Re: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea)

2010-02-16 Thread exarkun

On 03:43 pm, dirk...@ochtman.nl wrote:
On Tue, Feb 16, 2010 at 16:26, Tres Seaver  
wrote:
Because timezones are defined politically, they change frequently.
pytz is released frequently (multiple times per year) to accommodate
those changes: I can't see any way to preserve that flexibility if the
package were part of stdlib.


By using what the OS provides. At least on Linux, the basic timezone
data is usually updated by other means (at least on the distro I'm
familiar with, it's updated quite often, too; through the package
manager). I'm assuming Windows and OS X would also be able to provide
something like this. I think pytz already looks at this data if it's
available (precisely because it might well be newer).


pytz includes its own timezone database.  It doesn't use the system 
timezone data, even on Linux.


Jean-Paul


Re: [Python-Dev] [PEP 3148] futures - execute computations asynchronously

2010-03-05 Thread exarkun

On 05:06 pm, c...@hagenlocher.org wrote:

On Fri, Mar 5, 2010 at 8:35 AM, Jesse Noller  wrote:


On Fri, Mar 5, 2010 at 11:21 AM, Daniel Stutzbach
 wrote:
> On Fri, Mar 5, 2010 at 12:03 AM, Brian Quinlan  
wrote:

>>
>> import futures
>
> +1 on the idea, -1 on the name.  It's too similar to "from 
__future__ import

> ...".

Futures is a common term for this, and it is implemented under this
name in other languages. I don't think we should be adopting things
that are common and found elsewhere, and then renaming them.


Another common term for this is a "promise".


Promises aren't exactly the same.  This would be a particularly bad name 
to apply here.


Jean-Paul


Re: [Python-Dev] [PEP 3148] futures - execute computations asynchronously

2010-03-05 Thread exarkun

On 07:10 pm, gu...@python.org wrote:

On Fri, Mar 5, 2010 at 10:30 AM,   wrote:

On 05:06 pm, c...@hagenlocher.org wrote:


On Fri, Mar 5, 2010 at 8:35 AM, Jesse Noller  
wrote:


On Fri, Mar 5, 2010 at 11:21 AM, Daniel Stutzbach
 wrote:
> On Fri, Mar 5, 2010 at 12:03 AM, Brian Quinlan 


> wrote:
>>
>> import futures
>
> +1 on the idea, -1 on the name.  It's too similar to "from 
__future__

> import
> ...".

Futures is a common term for this, and it is implemented under this
name in other languages. I don't think we should be adopting things
that are common and found elsewhere, and then renaming them.


Another common term for this is a "promise".


Promises aren't exactly the same.  This would be a particularly bad
name to apply here.


Please explain. Even the Wikipedia article
(http://en.wikipedia.org/wiki/Futures_and_promises), despite promising
to explain the difference, didn't explain it.


The "explicit" futures on the wikipedia page seems to cover what is 
commonly referred to as a future.  For example, Java's futures look like 
this.


The "implicit" futures are what is generally called a promise.  For 
example, E's promises look like this.


Though the difference is mainly one of API, it turns out to make a 
significant difference in what you can accomplish.  Promises are much 
more amenable to the pipelining optimization, for example.  They're also 
much harder to implement in Python without core language changes.
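In today's stdlib terms, PEP 3148's explicit style looks like this: the caller holds a future object and must unwrap it explicitly, whereas an implicit promise would transparently stand in for the value itself:

```python
from concurrent.futures import ThreadPoolExecutor

# Explicit (Java-style) future: the caller asks for the result; the
# future never masquerades as the value the way an E-style promise does.
with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(pow, 2, 10)
    value = fut.result()  # blocks until the computation completes
```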


Jean-Paul


Re: [Python-Dev] [PEP 3148] futures - execute computations asynchronously

2010-03-06 Thread exarkun

On 02:10 am, br...@sweetapp.com wrote:

On 7 Mar 2010, at 03:04, Phillip J. Eby wrote:

At 05:32 AM 3/6/2010, Brian Quinlan wrote:

Using twisted (or any other asynchronous I/O framework) forces you to
rewrite your I/O code. Futures do not.


Twisted's "Deferred" API has nothing to do with I/O.


I see, you just mean the API and not the underlying model.

We discussed the Deferred API on the stdlib-sig and I don't think that 
anyone expressed a preference for it over the one described in the PEP.


Do you have any concrete criticism?


From reading some of the stdlib-sig archives, it sounds like there is 
general agreement that Deferreds and Futures can be used to complement 
each other, and that getting code that is primarily Deferred-based to 
integrate with Future-based code or vice versa should eventually be 
possible.


Do I have the right sense of people's feelings?

And relatedly, once Futures are accepted and implemented, are people 
going to use them as an argument to exclude Deferreds from the stdlib 
(or be swayed by other people making such arguments)?  Hopefully not, 
given what I read on stdlib-sig, but it doesn't hurt to check...


Jean-Paul


Re: [Python-Dev] [PEP 3148] futures - execute computations asynchronously

2010-03-08 Thread exarkun

On 08:56 pm, digitalx...@gmail.com wrote:
On Mon, Mar 8, 2010 at 12:04 PM, Dj Gilcrease  
wrote:

A style I have used in my own code in the past is a Singleton class
with register and create methods, where the register takes a
name(string) and the class and the create method takes the name and
*args, **kwargs and acts as a factory.



So I decided to play with this design a little and since I made it a
singleton I decided to place all the thread/process tracking and exit
handle code in it instead of having the odd semi-global scoped
_shutdown, _thread_references, _remove_dead_thread_references and
_python_exit objects floating around in each executor file, seems to
work well. The API would be

from concurrent.futures import executors

executor = executors.create(NAME, *args, **kwargs) # NAME is 'process'
or 'thread' by default


To create your own executor you create your executor class and add the
following at the end


Getting rid of the process-global state like this simplifies testing 
(both testing of the executors themselves and of application code which 
uses them).  It also eliminates the unpleasant interpreter 
shutdown/module globals interactions that have plagued a number of 
stdlib systems that keep global state.
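For concreteness, the quoted registry design might be sketched roughly like this; the register/create names follow the proposal above and are illustrative, not an actual stdlib API:

```python
from concurrent.futures import ThreadPoolExecutor

# Module-level dict standing in for the proposed singleton registry.
_executor_classes = {}

def register(name, cls):
    _executor_classes[name] = cls

def create(name, *args, **kwargs):
    # Factory: look up the registered class and instantiate it.
    return _executor_classes[name](*args, **kwargs)

register('thread', ThreadPoolExecutor)

executor = create('thread', max_workers=2)
print(executor.submit(pow, 2, 10).result())   # 1024
executor.shutdown()
```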


Jean-Paul


Re: [Python-Dev] ffi junk messages

2010-04-07 Thread exarkun

On 01:29 pm, mar...@v.loewis.de wrote:

Mark Dickinson wrote:

On Wed, Apr 7, 2010 at 1:39 PM, Jeroen Ruigrok van der Werven
 wrote:

Before I file a bug report, is anyone else seeing this (in my case on
FreeBSD 8):

Modules/_ctypes/libffi/src/x86/sysv.S:360: Error: junk at end of 
line, first unrecognized character is `@'
Modules/_ctypes/libffi/src/x86/sysv.S:387: Error: junk at end of 
line, first unrecognized character is `@'
Modules/_ctypes/libffi/src/x86/sysv.S:423: Error: junk at end of 
line, first unrecognized character is `@'


It's on the buildbots, too.  See:

http://www.python.org/dev/buildbot/builders/x86%20FreeBSD%20trunk/builds/208/steps/compile/logs/stdio


Instead of submitting a bug report, it would be better to submit a


In *addition* to submitting a bug report, surely. :)

patch, though. Can you try having the build process use freebsd.S
instead of sysv.S?

Regards,
Martin


Re: [Python-Dev] Getting an optional parameter instead of creating a socket internally (was: Re: stdlib socket usage and " keepalive" )

2010-04-12 Thread exarkun

On 12 Apr, 11:19 pm, j...@jcea.es wrote:


On 04/13/2010 12:47 AM, Antoine Pitrou wrote:

Jesus Cea  jcea.es> writes:


PS: "socket.setdefaulttimeout()" is not enough, because it could
shutdown a perfectly functional connection, just because it was idle 
for

too long.


The socket timeout doesn't shutdown anything. It just puts a limit on 
how much
time recv() and send() can block. Then it's up to you to detect 
whether the
server is still alive (for example by pinging it through whatever 
means the

application protocol gives you).


A regular standard library (let's say, poplib) would abort after getting
the timeout exception.
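Antoine's point is worth demonstrating: a timeout only bounds how long a blocking call may wait, and the connection remains usable afterwards. A small sketch (POSIX, using a socketpair in place of a real server):

```python
import socket

a, b = socket.socketpair()
a.settimeout(0.1)            # bounds recv(); does not touch the connection
try:
    a.recv(1024)             # nothing to read yet
except socket.timeout:
    pass                     # timed out, but the socket is still fine
b.sendall(b'ping')
print(a.recv(1024))          # b'ping'
```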
4. Modify client libraries to accept a new optional socket-like 
object

as an optional parameter. This would allow things like transparent
compression or encryption, or to replace the socket connection by
anything else (read/write to shared memory or database, for example).


This could be useful too.


I have been thinking about this for years. Do you actually think this
could be formally proposed?


Every once in a while I make a little bit more progress on the PEP I'm 
working on for this.  If you want to talk more about this, you can find 
me in #python-dev or #twisted on freenode.


Jean-Paul


Re: [Python-Dev] MSDN licenses available for python-dev

2010-04-18 Thread exarkun

On 02:56 pm, techto...@gmail.com wrote:

Twisted folks will surely appreciate any help and may be able to
contribute back.
http://twistedmatrix.com/trac/wiki/Windows


Extra Windows and VS licenses would certainly be helpful for Twisted 
development, and might lead indirectly to CPython/Windows improvements. 
Is this the kind of use for which it is appropriate to request an MSDN 
license via the PSF?


Jean-Paul


Re: [Python-Dev] Enhanced tracker privileges for dangerjim to do triage.

2010-04-25 Thread exarkun

On 09:39 pm, solip...@pitrou.net wrote:

 pobox.com> writes:



Sean> However, I will step up for him and say that I've known him 
a
Sean> decade, and he's very trustworthy.  He has been the 
president (we
Sean> call that position Maximum Leader) of our Linux Users Group 
here

Sean> for 5 years or so.

Given that Sean is vouching for him I'm fine with it.


I'm not sure I agree. Of course it could be argued the risk is minimal, 
but I
think it's better if all people go through the same path of proving 
their

motivation and quality of work.
And if there's something wrong with that process we'd better address it 
than

give random privileges to people we like :)


+1

As others have said, I don't have any problem with Sean's judgement. 
That's not what's being questioned here.


Jean-Paul


Re: [Python-Dev] Enhanced tracker privileges for dangerjim to do triage.

2010-04-25 Thread exarkun

On 25 Apr, 11:18 pm, st...@holdenweb.com wrote:

Tres Seaver wrote:

Antoine Pitrou wrote:

 pobox.com> writes:
Sean> However, I will step up for him and say that I've known 
him a
Sean> decade, and he's very trustworthy.  He has been the 
president (we
Sean> call that position Maximum Leader) of our Linux Users 
Group here

Sean> for 5 years or so.

Given that Sean is vouching for him I'm fine with it.
I'm not sure I agree. Of course it could be argued the risk is 
minimal, but I
think it's better if all people go through the same path of proving 
their

motivation and quality of work.
And if there's something wrong with that process we'd better address 
it than

^^^

Don't overlook this part of Antoine's post.

give random privileges to people we like :)


I think there is a definite "unpriced externality" to keeping the
process barriers high here.  I don't believe from conversations at the
language summit / PyCon that the community is being overrun with 
hordes
of unworthies clamoring to triage Python bugs:  rather the opposite, 
in
fact.  It seems to me that backing from an established community 
member

ought to be enough to get a prospective triageur at least provisional
roles to do the work, with the caveat that it might be revoked if it
didn't turn out well.  If it does turn out well, then look to *expand*
that user's roles in the community, with a nice helping of public
acclaim to go with it.

I am not arguing for "making exceptions for friends" here;  rather 
that
the acknowledged issues with inclusiveness / expansion of the 
developer

community require making changes to the rules to encourage more
participation.

BTW, language like "prov[ing] their motivation" is itself 
demotivating,

and likely contributes to the status quo ante.


With my PSF hat on I'd like to support Tres here (and, by extension,
Sean's proposal). Lowering the barriers of entry is a desirable goal.

If adding people created work for already-busy developers then I'd be
against it*, but with Sean offering to mentor his new protege and 
ensure

that he limits his role to triage initially that doesn't seem to be an
issue.

Maybe it's time to review the way people "prove their motivation and 
the

quality of their work"?


Sounds good.  Why is the barrier for this permission any higher than 
someone asking for it?  Is there really a need to protect against 
contributors with malicious intent?


I think there should be a page on python.org that says all contributors 
are welcome, and one way to become a contributor is to wrangle the issue 
tracker, and explains what this involves (I don't really have any idea, 
actually; I assume it's things like setting the owner of new tickets to 
someone who might actually fix it, things that would happen 
automatically if roundup had the right information), and then anyone who 
steps up gets the necessary access.


Jean-Paul

regards
Steve

* I'd be against it, but I'd fight to change the development process so
that adding new people *didn't* create work. We should, in my opinion,
be looking for a continual influx of new worker bees.
--
Steve Holden   +1 571 484 6266   +1 800 494 3119



Re: [Python-Dev] Anyone can do patch reviews (was: Enhanced tracker privileges...)

2010-04-27 Thread exarkun

On 01:38 pm, rdmur...@bitdance.com wrote:
On Tue, 27 Apr 2010 11:15:49 +1000, Steven D'Aprano 
 wrote:

No, of course not. There are always other reasons, the biggest is too
many things to do and not enough time to do it. If I did review
patches, would they be accepted on the strength on my untrusted
reviews?


It is very very helpful for *anyone* to review patches.   Let's see if
I can clarify the process a little.  (This is, of course, my take
on it, others can chime in if they think I got anything wrong.)

Someone submits a bug.  Someone submits a patch to fix that bug (or add
the enhancement).  Is that patch ready for commit?  No.  Is it ready
for *commit review* (ie: someone with commit privileges to look at it
with an eye toward committing it)?  Probably not.

What makes a patch ready for commit review?  The patch should:

   1) conform to pep 7/8
   2) have unit tests that fail before the patch and succeed after
   3) have documentation updates if needed
   4) have a py3k port *if and only if* the port is non-trivial
   (well, if someone wants to add one when it is trivial that's OK,
   but it probably won't get used)
   5) if it is at all likely to have system dependencies, be tested
   on at least linux and windows


This list would make a good addition to one of the cpython development 
pages.  If potential contributors could find this information, then 
they'd be much more likely to participate by doing reviews.


Jean-Paul


[Python-Dev] Parallel test execution on buildbot

2010-05-08 Thread exarkun

Hi all,

Has anyone considered using regrtest's -j option in the buildbot 
configuration to speed up the test runs?  Antoine Pitrou pointed out 
that even for single CPU slaves, this could be a win due to the number 
of tests that spend time sleeping or waiting on I/O.  And on slaves with 
multiple CPUs it would clearly be even better.  eg, I don't know what 
hardware is actually in the Solaris slave (bot loewis-sun), but if it 
has the full 4 UltraSPARCs that it could, then running with -j4 or -j5 
there might bring its runtime down from nearly 100 minutes to 20 or 25 - 
competitive with some of the more reasonable slaves.


Jean-Paul


Re: [Python-Dev] configuring the buildbot to skip some tests?

2010-05-13 Thread exarkun

On 03:17 am, jans...@parc.com wrote:

I've got parc-tiger-1 up and running again.  It's failing on test_tk,
which makes sense, because it's running as a background twisted 
process,

and thus can't access the window server.  I should configure that out.


You can run it in an xvfb.

Jean-Paul


Re: [Python-Dev] robots exclusion file on the buildbot pages?

2010-05-15 Thread exarkun

On 05:48 pm, solip...@pitrou.net wrote:


Hello,

The buildbots are sometimes subject to a flood of "svn exception"
errors. It has been conjectured that these errors are caused by Web
crawlers pressing "force build" buttons without filling any of the
fields (of course, the fact that we get such ugly errors in the
buildbot results, rather than a clean error message when pressing
the button, is a buildbot bug in itself). Couldn't we simply exclude 
all

crawlers from the buildbot Web pages?


Most (all?) legitimate crawlers won't submit forms.  Do you think 
there's a non-form link to the force build URL (which _will_ accept a 
GET request to mean the same thing as a POST)?


One thing I have noticed is that spammers find these forms and submit 
them with garbage.  We can probably suppose that such people are going 
to ignore a robots.txt file.


Jean-Paul


Re: [Python-Dev] robots exclusion file on the buildbot pages?

2010-05-15 Thread exarkun

On 08:32 pm, mar...@v.loewis.de wrote:
I'd find it useful if the "branch" field was a choice pull-down 
listing

valid branches, rather than a plain text field, and if the "revision"
field always defaulted to "HEAD".  Seems to me that since the form is
coming from the buildmaster, that should be possible.


Unfortunately, these forms are deeply hidden in the buildbot code. So
I'd rather avoid editing them, or else upgrading to the next buildbot
version becomes even more tedious.


Someone sufficiently interested in this feature could work with buildbot 
upstream to get the feature added to an upcoming release, though 
(obviously).


Jean-Paul


Re: [Python-Dev] ssl

2010-06-05 Thread exarkun

On 08:34 am, krist...@ccpgames.com wrote:

Hello there.
I wanted to do some work on the ssl module, but I was a bit daunted at 
the prerequisites.  Is there anywhere that I can get at precompiled 
libs for the openssl that we use?
In general, getting all those "external" projects seems to be complex to 
build.  Is there a fast way?


I take it the challenge is that you want to do development on Windows? 
If so, this might help:


 http://www.slproweb.com/products/Win32OpenSSL.html

It's what I use for any Windows pyOpenSSL development I need to do.


What I want to do, is to implement a separate BIO for OpenSSL, one that 
calls back into python for writes and reads.  This is so that I can use 
my own sockets implementation for the actual IO, in particular, I want 
to funnel the encrypted data through our IOCompletion-based stackless 
sockets.


For what it's worth, Twisted's IOCP SSL support is implemented using 
pyOpenSSL's support of OpenSSL memory BIOs.  This is a little different 
from your idea: memory BIOs are a built-in part of OpenSSL, and just 
give you a buffer from which you can pull whatever bytes OpenSSL wanted 
to write (or a buffer into which to put bytes for OpenSSL to read).


I suspect this would work well enough for your use case.  Being able to 
implement an actual BIO in Python would be pretty cool, though.


If successful, I think this would be a useful addition to ssl.
You would do something like:

class BIO:
    def write(self, data): pass
    def read(self, nbytes): pass

import ssl
bio = BIO()
ssl_socket = ssl.wrap_bio(bio, ca_certs=...)


Hopefully this would integrate more nicely with the recent work Antoine 
has done with SSL contexts.  The preferred API for creating an SSL 
connection is now more like this:


   import ssl
   ctx = ssl.SSLContext(...)
   conn = ctx.wrap_socket(...)

So perhaps you want to add a wrap_bio method to SSLContext.  In fact, 
this would be the more general API, and could supersede wrap_socket: 
after all, socket support is just implemented with the socket BIOs. 
wrap_socket would become a simple wrapper around something like 
wrap_bio(SocketBIO(socket)).
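Sketched against memory BIOs, the shape of such an API might look like the following. (This is, as it happens, how the ssl module's later MemoryBIO/wrap_bio API from Python 3.5 ended up working; it's shown here only to illustrate the idea of driving the handshake through buffers rather than a socket.)

```python
import ssl

ctx = ssl.create_default_context()
incoming = ssl.MemoryBIO()   # ciphertext from the transport goes in here
outgoing = ssl.MemoryBIO()   # ciphertext OpenSSL wants sent comes out here
tls = ctx.wrap_bio(incoming, outgoing, server_hostname='example.com')

try:
    tls.do_handshake()
except ssl.SSLWantReadError:
    # OpenSSL has produced a ClientHello and wants a reply: ship
    # outgoing.read() over any transport you like, feed the response
    # into incoming.write(), and call do_handshake() again.
    hello = outgoing.read()
    print(len(hello) > 0)   # True
```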


I am new to OpenSSL, I haven't even looked at what a BIO looks like, 
but I read this:  http://marc.info/?l=openssl-users&m=99909952822335&w=2
which indicates that this ought to be possible.  And before I start 
experimenting, I need to get my OpenSSL external ready.


Any thoughts?


It should be possible.  One thing that's pretty tricky is getting 
threading right, though.  Python doesn't have to deal with this problem 
yet, as far as I know, because it never does something that causes 
OpenSSL to call back into Python code.  Once you have a Python BIO 
implementation, this will clearly be necessary, and you'll have to solve 
this.  It's certainly possible, but quite fiddly.


Jean-Paul


Re: [Python-Dev] Python Library Support in 3.x (Was: email package status in 3.X)

2010-06-19 Thread exarkun

On 10:59 am, arcri...@gmail.com wrote:

You mean Twisted support, because library support is at the point where
there are fewer actively maintained packages not yet ported than those 
which
are.  Of course if your Python experience is hyper-focused to one 
framework
that isn't ported yet, it will certainly seem like a lot, and you guys 
who

run #Python are clearly hyper-focused on Twisted.


Arc,

This isn't about Twisted.  Let's not waste everyone's time by trying to 
make it into a conflict between Twisted users and the rest of the Python 
community.


You listed six other major packages that you yourself use that aren't 
available on Python 3 yet, so why are you trying to say here that this 
is all about Twisted?

[snip]

This anti-Py3 rhetoric is damaging to the community and needs to stop.
We're moving forward toward Python 3.2 and beyond, complaining about it 
only

saps valuable developer time (including your own) from getting these
libraries you need ported faster.


No, it's not damaging.  Critical self-evaluation is a useful tool. 
Trying to silence differing perspectives is what's damaging to the 
community.


Jean-Paul


Re: [Python-Dev] Python Library Support in 3.x (Was: email package status in 3.X)

2010-06-19 Thread exarkun

On 01:09 pm, arcri...@gmail.com wrote:

[snip]
It is not "critical self-evaluation" to repeat "Python 3 is not ready" 
as
litany in #Python and your supporting website.  I use the word "litany" 
here

because #Python refers users to what appears to be a religious website
http://python-commandments.org/python3.html


It's not my website.  I don't own the domain, I don't control the 
hosting, I didn't generate the content, I have no access to change 
anything on it.  I've barely even frequented #python in the last three 
years.


Perhaps you were directing those comments at Stephen Thorne though 
(although I don't know if he's any more involved in it than I am so 
don't take this as anything but idle speculation).
I have further witnessed (and even been the other party to) you and 
other
ops in #Python telling package developers, who have clearly said that 
they

are working to port their legacy package to Py3, that "Python 3 is not
ready".


I'm not going to condone or condemn events which I didn't observe.

However you've never witnessed me discouraging developers who were 
actively porting software to Python 3 because I've never done it.  I'm 
sure this was an honest mistake and you simply confused me with someone 
else.

Besides rally against it what have you, as a Twisted developer, done
regarding the Python 3 migration process?


This, however, I find extremely insulting.  I don't answer to you.  The 
only reason I'm replying at all is to correct the two pieces of 
misinformation in your message.


I don't see how this discussion can go anywhere productive, so I'll do 
my best to make this my last post on the subject.  Obviously I made a 
mistake posting to the thread at all.


Jean-Paul


Re: [Python-Dev] OS X buildbots: why am I skipping these tests?

2010-06-30 Thread exarkun

On 05:24 am, mar...@v.loewis.de wrote:

Seems to work fine.  So this I don't understand.  Any ideas, anyone?


Didn't we discuss this before? The buildbot slave has no controlling
terminal anymore, hence it cannot open /dev/tty. If you are curious,
just patch your checkout to output the exact errno (e.g. to stdout),
and trigger a build through the web.


Could the test be rewritten (or supplemented) to use a pty?  Most or 
perhaps all of the same operations should be supported.


Jean-Paul


Re: [Python-Dev] OS X buildbots: why am I skipping these tests?

2010-06-30 Thread exarkun


On 04:44 pm, solip...@pitrou.net wrote:

On Wed, 30 Jun 2010 09:00:09 PDT
Bill Janssen  wrote:


So, my question then is, why are these skips "unexpected"?  Seems to 
me

that if this is the case, this test will never run on any platform.


You can change the value of the "usepty" option in your buildbot.tac.
(you will also have to restart the buildslave process)


But don't do this.  The usepty option is completely unrelated to the 
suggestion I was making.  Flipping it to True will only cause other 
things to break and have no impact on this test.


Jean-Paul


Re: [Python-Dev] OS X buildbots: why am I skipping these tests?

2010-06-30 Thread exarkun

On 04:26 pm, jans...@parc.com wrote:

exar...@twistedmatrix.com wrote:

Could the test be rewritten (or supplemented) to use a pty?  Most or
perhaps all of the same operations should be supported.


Buildbot seems to be explicitly not using a PTY.  From the the top of
the test output:

make buildbottest
in dir /Users/buildbot/buildarea/trunk.parc-leopard-1/build (timeout 
1800 secs)

watching logfiles {}
argv: ['make', 'buildbottest']
[...]
closing stdin
using PTY: False


This output is telling you that the build slave isn't giving the child 
processes it creates a pty.  What I had in mind was writing the test to 
create a new pty, instead of trying to use the controlling tty.  So 
basically, the two things are completely unrelated and this buildbot 
configuration isn't hurting anything (and in fact is likely helping 
quite a few things, so I suggest leaving it alone).


I believe this is specified by the build master.

This test seems to work on Ubuntu and FreeBSD, though.


That's interesting.  I wonder if those slaves are able to open /dev/tty 
for some reason?  The slave is supposed to detach from the controlling 
terminal when it daemonizes.  There could be a bug in that code, I 
suppose, or the slaves could be running without daemonization for some 
reason.  The operators would have to tell us about that, I think.  Or, 
another possibility is that /dev/tty doesn't work how I expect it to and 
on Ubuntu and FreeBSD it can be opened even if you don't have a 
controlling terminal.  Hopefully not, though.


Jean-Paul


Re: [Python-Dev] OS X buildbots: why am I skipping these tests?

2010-06-30 Thread exarkun

On 05:29 pm, mar...@v.loewis.de wrote:

Am 30.06.2010 13:32, schrieb exar...@twistedmatrix.com:

On 05:24 am, mar...@v.loewis.de wrote:

Seems to work fine.  So this I don't understand.  Any ideas, anyone?


Didn't we discuss this before? The buildbot slave has no controlling
terminal anymore, hence it cannot open /dev/tty. If you are curious,
just patch your checkout to output the exact errno (e.g. to stdout),
and trigger a build through the web.


Could the test be rewritten (or supplemented) to use a pty?  Most or
perhaps all of the same operations should be supported.


I'm not sure. It uses TIOCGPGRP, basically to establish that ioctl
can also put results into a Python array (IIUC). This goes back to
http://bugs.python.org/555817

Somebody rewriting it would need to make sure the original test purpose
is still met.


Absolutely.  And even so, it may still make sense to run the test 
against both /dev/tty and a pty (or whatever subset of those things can 
be acquired in the testing environment).


You can do a TIOCGPGRP on a new pty (created by os.openpty) but it 
produces somewhat less interesting results than doing it on /dev/tty. 
FIONREAD might be a nice alternative.  It produces interesting (ie, 
non-zero) values in an easily predictable/controllable way (it tells you 
how many bytes are in the read buffer).
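A sketch of that alternative (POSIX-specific): FIONREAD on a pipe stores a byte count into a Python array, exercising the same ioctl-into-array behaviour the original test was written to cover:

```python
import array
import fcntl
import os
import termios

r, w = os.pipe()
os.write(w, b'hello')

buf = array.array('i', [0])
# FIONREAD: the kernel writes the number of readable bytes into buf.
fcntl.ioctl(r, termios.FIONREAD, buf, True)
print(buf[0])   # 5
os.close(r)
os.close(w)
```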


Jean-Paul


Re: [Python-Dev] OS X buildbots: why am I skipping these tests?

2010-06-30 Thread exarkun

On 06:46 pm, exar...@twistedmatrix.com wrote:


On 04:44 pm, solip...@pitrou.net wrote:

On Wed, 30 Jun 2010 09:00:09 PDT
Bill Janssen  wrote:


So, my question then is, why are these skips "unexpected"?  Seems to 
me

that if this is the case, this test will never run on any platform.


You can change the value of the "usepty" option in your buildbot.tac.
(you will also have to restart the buildslave process)


But don't do this.  The usepty option is completely unrelated to the 
suggestion I was making.  Flipping it to True will only cause other 
things to break and have no impact on this test.


Ah, sorry.  I confused myself.  The option is related.  But it will also 
break other things, so I still would recommend looking for other 
solutions.


Jean-Paul


Re: [Python-Dev] Getting an optional parameter instead of creating a socket internally

2010-07-11 Thread exarkun

On 03:11 pm, j...@jcea.es wrote:


On 13/04/10 04:03, exar...@twistedmatrix.com wrote:

On 12 Apr, 11:19 pm, j...@jcea.es wrote:


On 04/13/2010 12:47 AM, Antoine Pitrou wrote:

Jesus Cea  jcea.es> writes:


PS: "socket.setdefaulttimeout()" is not enough, because it could
shutdown a perfectly functional connection, just because it was 
idle

for
too long.


The socket timeout doesn't shutdown anything. It just puts a limit 
on

how much
time recv() and send() can block. Then it's up to you to detect
whether the
server is still alive (for example by pinging it through whatever
means the
application protocol gives you).


A regular standard library (let's say, poplib) would abort, after 
getting

the timeout exception.
4. Modify client libraries to accept a new optional socket-like 
object

as an optional parameter. This would allow things like transparent
compression or encryption, or to replace the socket connection by
anything else (read/write to shared memory or database, for 
example).


This could be useful too.


I have been thinking about this for years. Do you actually think this
could be formally proposed?


Every once in a while I make a little bit more progress on the PEP I'm
working on for this.  If you want to talk more about this, you can 
find

me in #python-dev or #twisted on freenode.

Jean-Paul


Jean-Paul, I would like to have this for 3.2. How is the PEP going?


It's still little more than an outline.  You can see it here:

 http://twistedmatrix.com/trac/wiki/ProtocolPEP

And if you're interested in helping, we can figure out a way to do that 
(you can have edit permission on the wiki or we can move the document 
elsewhere, whatever).


Jean-Paul


Re: [Python-Dev] Getting an optional parameter instead of creating a socket internally

2010-07-12 Thread exarkun

On 12:30 pm, thebra...@brasse.org wrote:

On Sun, Jul 11, 2010 at 9:06 PM,  wrote:


It's still little more than an outline.  You can see it here:

 http://twistedmatrix.com/trac/wiki/ProtocolPEP

And if you're interested in helping, we can figure out a way to do 
that

(you can have edit permission on the wiki or we can move the document
elsewhere, whatever).

Jean-Paul
This seems like an interesting idea to me. I would like to figure out 
some
way I could help with the PEP. If you move the document, could you 
please

keep me updated on the new location?


Sure.  If it moves, there'll be a pointer from its existing location to 
the new one.


Jean-Paul


Re: [Python-Dev] EINVAL

2010-07-22 Thread exarkun

On 10:33 am, solip...@pitrou.net wrote:

On Thu, 22 Jul 2010 17:50:00 +0900
"Stephen J. Turnbull"  wrote:


I think that's Antoine's PEP 3151.  Interestingly, he doesn't mention
EINVAL at all.

http://www.python.org/dev/peps/pep-3151/


That's right. It is based on a survey of existing exception-catching
code in the stdlib. There's only one match in the whole Lib/ subtree:

$ grep -r EINVAL Lib/
Lib/plat-sunos5/STROPTS.py:968:EINVAL = 22

I guess EINVAL would most often indicate a programming error, which is
why it doesn't get handled specifically in except clauses.


For setgroups it means you exceeded a platform-specific limit.  On 
Windows, for non-blocking connect, it means wait a little longer.


Jean-Paul


Re: [Python-Dev] PEP 376 proposed changes for basic plugins support

2010-08-02 Thread exarkun

On 12:21 pm, m...@egenix.com wrote:

Tarek Ziad� wrote:
On Mon, Aug 2, 2010 at 3:06 AM, P.J. Eby  
wrote:

..


So without specific examples of why this is a problem, it's hard to 
see why
a special Python-specific set of configuration files is needed to 
resolve

it, vs. say, encouraging application authors to use the available
alternatives for doing plugin directories, config files, etc.


I don't have a specific example in mind, and I must admit that if an
application does the right thing
(provide the right configuration file), this activate feature is not
useful at all. So it seems to be a bad idea.

I propose that we drop the PLUGINS file idea and we add a new metadata
field called Provides-Plugin
in PEP 345, which will contain the info I've described minus the state
field. This will allow us to expose
plugins at PyPI.

IOW, have entry points like setuptools provides, but in a metadata
field instead of a entry_points.txt file.


Do we really need to make Python packaging even more complicated by
adding support for application-specific plugin mechanisms ?

Packages can already work as application plugins by simply defining
a plugins namespace package and then placing the plugin packages
into that namespace.

See Zope for an example of how well this simple mechanism works out in
practice: it simply scans the "Products" namespace for sub-packages and
then loads each sub-package it finds to have it register itself with
Zope.


This is also roughly how Twisted's plugin system works.  One drawback, 
though, is that it means potentially executing a large amount of Python 
in order to load plugins.  This can build up to a significant 
performance issue as more and more plugins are installed.


Jean-Paul


Re: [Python-Dev] PEP 376 proposed changes for basic plugins support

2010-08-02 Thread exarkun

On 01:27 pm, m...@egenix.com wrote:

exar...@twistedmatrix.com wrote:

On 12:21 pm, m...@egenix.com wrote:


See Zope for an example of how well this simple mechanism works out in
practice: it simply scans the "Products" namespace for sub-packages and
then loads each sub-package it finds to have it register itself with
Zope.


This is also roughly how Twisted's plugin system works.  One drawback,
though, is that it means potentially executing a large amount of 
Python

in order to load plugins.  This can build up to a significant
performance issue as more and more plugins are installed.


I'd say that it's up to the application to deal with this problem.

An application which requires lots and lots of plugins could
define a registration protocol that does not require loading
all plugins at scanning time.


It's not fixable at the application level, at least in Twisted's plugin 
system.  It sounds like Zope's system has the same problem, but all I 
know of that system is what you wrote above.  The cost increases with 
the number of plugins installed on the system, not the number of plugins 
the application wants to load.


Jean-Paul


Re: [Python-Dev] PEP 376 proposed changes for basic plugins support

2010-08-02 Thread exarkun

On 03:08 pm, mer...@netwok.org wrote:

On 02/08/2010 14:31, exar...@twistedmatrix.com wrote:

On 12:21 pm, m...@egenix.com wrote:

Do we really need to make Python packaging even more complicated by
adding support for application-specific plugin mechanisms ?

Packages can already work as application plugins by simply defining
a plugins namespace package and then placing the plugin packages
into that namespace. [...]


This is also roughly how Twisted's plugin system works.  One drawback,
though, is that it means potentially executing a large amount of 
Python

in order to load plugins.  This can build up to a significant
performance issue as more and more plugins are installed.


If namespace packages make it into Python, they would indeed solve a
part of the problem in a nice, generic way.


I don't think this solves the problem.  Twisted's plugin system already 
uses namespace packages.  It helps slightly, by spreading out your 
plugins, but you can still end up with lots of plugins in a particular 
namespace.

Regarding the performance
issue, I wonder if functions in pkgutil or importlib could allow one to
iterate over the plugins (i.e. submodules and subpackages of the
namespace package) without actually loading them. We would get only
their names though, not their description or any other information
useful to decide to activate them or not.


The trick is to continue to provide enough information so that the code 
iterating over the data can make a correct decision.  It's not clear 
that names are enough.
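For what it's worth, the pkgutil approach discussed above can already list plugin names without importing anything (plugin_names is a made-up helper, and, as noted, names are all you get):

```python
import pkgutil

def plugin_names(package):
    """List submodule/subpackage names under an already-imported
    package without importing them -- no descriptions, just names."""
    return [name for _finder, name, _ispkg
            in pkgutil.iter_modules(package.__path__)]
```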

Maybe importing is the way to
go, with a doc recommendation that people make their plugins 
subpackages

with an __init__ module containing only a docstring.

Regards


Jean-Paul


Re: [Python-Dev] Looking after the buildbots (in general)

2010-08-04 Thread exarkun

On 02:51 pm, ba...@python.org wrote:

On Aug 04, 2010, at 11:16 AM, Antoine Pitrou wrote:

I think the issue is that many core developers don't have the reflex
to check buildbot state after they commit some changes (or at least
on a regular, say weekly, basis), and so gradually the buildbots have
a tendency to turn from green to red, one after another.


I'd classify this as a failure of the tools, not of the developers.
These post-commit verification steps should be proactive, and scream
really loud (or even prevent future commits) until everything is green
again.  Buildbots themselves can be unstable, so this may or may not be
workable, and changing any of this will take valuable volunteer time.
It's also unsexy work.


How hard is it to look at a web page?

Jean-Paul


Re: [Python-Dev] Looking after the buildbots (in general)

2010-08-04 Thread exarkun

On 03:17 pm, fuzzy...@voidspace.org.uk wrote:

On 04/08/2010 16:15, exar...@twistedmatrix.com wrote:

On 02:51 pm, ba...@python.org wrote:

On Aug 04, 2010, at 11:16 AM, Antoine Pitrou wrote:

I think the issue is that many core developers don't have the reflex
to check buildbot state after they commit some changes (or at least
on a regular, say weekly, basis), and so gradually the buildbots 
have

a tendency to turn from green to red, one after another.


I'd classify this as a failure of the tools, not of the developers. 
These
post-commit verification steps should be proactive, and scream really 
loud (or
even prevent future commits) until everything is green again. 
Buildbots
themselves can be unstable, so this may or may not be workable, and 
changing

any of this will take valuable volunteer time. It's also unsexy work.


How hard is it to look at a web page?


The hard part is remembering.


Seems to be setting the bar awfully low for Python developers.

Jean-Paul


Re: [Python-Dev] Looking after the buildbots (in general)

2010-08-04 Thread exarkun

On 03:31 pm, ba...@python.org wrote:

On Aug 04, 2010, at 03:15 PM, exar...@twistedmatrix.com wrote:

On 02:51 pm, ba...@python.org wrote:

On Aug 04, 2010, at 11:16 AM, Antoine Pitrou wrote:

I think the issue is that many core developers don't have the reflex
to check buildbot state after they commit some changes (or at least
on a regular, say weekly, basis), and so gradually the buildbots 
have

a tendency to turn from green to red, one after another.


I'd classify this as a failure of the tools, not of the developers.

These post-commit verification steps should be proactive, and scream
really loud (or even prevent future commits) until everything is green
again.  Buildbots themselves can be unstable, so this may or may not be
workable, and changing any of this will take valuable volunteer time.
It's also unsexy work.


How hard is it to look at a web page?


That's not the right question :)

The real questions are: how hard is it to remember how to find the
appropriate web page


Oh, come on.  I don't believe this.

how hard is it to know which buildbots are *actually* stable enough
to rely on,


This is more plausible.  But it's not the tools' fault that the test 
suite has intermittent failures.  Developers choose to add new features 
or change existing ones instead of fixing bugs in existing code or 
tests.  I'd call that a developer failure.

how hard is it to decipher the results to know what they're
telling you?


Red bad, green good.

A much more plausible explanation, to me, is that most developers don't 
really care if things are completely working most of the time.  They're 
happy to push the work onto other developers and onto the release team. 
And as long as other developers let them get away with that, it's not 
likely to stop.


But perhaps the people picking up the slack here don't mind and are 
happy to keep doing it, in which case nothing needs to change.


Jean-Paul


Re: [Python-Dev] Looking after the buildbots (in general)

2010-08-04 Thread exarkun

On 03:53 pm, g.bra...@gmx.net wrote:

On 04.08.2010 17:15, exar...@twistedmatrix.com wrote:

On 02:51 pm, ba...@python.org wrote:

On Aug 04, 2010, at 11:16 AM, Antoine Pitrou wrote:

I think the issue is that many core developers don't have the reflex
to check buildbot state after they commit some changes (or at least
on a regular, say weekly, basis), and so gradually the buildbots 
have

a tendency to turn from green to red, one after another.


I'd classify this as a failure of the tools, not of the developers.
These
post-commit verification steps should be proactive, and scream really
loud (or
even prevent future commits) until everything is green again.
Buildbots
themselves can be unstable, so this may or may not be workable, and
changing
any of this will take valuable volunteer time.  It's also unsexy 
work.


How hard is it to look at a web page?


The hard part is to know *when* to look.  As you might have noticed, the
Python test suite does not run in ten seconds, especially on some of the
buildbots -- it can take 1-2 hours there to complete.  So if you look
too soon, you won't see all the results, and usually the slow systems
are the interesting ones.

Now we could of course have a commit hook that counts down two hours and
then sends an email to the committer "Now look at the buildbot!"...


I don't think it's that hard to take a look at the end of the day (or 
before starting anything else the next morning).  All it really takes is 
a choice on the part of each developer to care whether or not their 
changes are correct.


Jean-Paul


Re: [Python-Dev] Internal counter to debug leaking file descriptors

2010-08-31 Thread exarkun

On 05:22 pm, gl...@twistedmatrix.com wrote:


On Aug 31, 2010, at 10:03 AM, Guido van Rossum wrote:

On Linux you can look somewhere in /proc, but I don't know that it
would help you find where a file was opened.


"/dev/fd" is actually a somewhat portable way of getting this 
information.  I don't think it's part of a standard, but on Linux it's 
usually a symlink to "/proc/self/fd", and it's available on MacOS and 
most BSDs (based on a hasty and completely-not-comprehensive 
investigation).  But it won't help you find out when the FDs were 
originally opened, no.

___


On OS X and Solaris, dtrace and ustack will tell you exactly when and 
where the FDs were originally opened, though.  On Linux, SystemTap might 
give you the same information (but I know much less about SystemTap). 
If http://bugs.python.org/issue4111 is resolved, then this may even be 
possible without using a patched version of Python.
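A sketch of the /dev/fd listing described above (open_fds is an invented name; this shows *which* descriptors are open, not where they were opened):

```python
import os

def open_fds():
    """Return the numeric file descriptors currently open in this
    process, read from /dev/fd (a symlink to /proc/self/fd on Linux)."""
    return sorted(int(name) for name in os.listdir("/dev/fd"))
```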


Jean-Paul


Re: [Python-Dev] Volunteer help with porting

2010-09-07 Thread exarkun

On 01:33 pm, p...@phd.pp.ru wrote:

Hello. Thank you for the offer!

On Tue, Sep 07, 2010 at 06:36:10PM +0530, Prashant Kumar wrote:
My name is Prashant Kumar and I wish to contribute to the Python
development process by helping convert certain existing Python libraries
over to python3k.

Is there any way I could obtain a list of libraries which need to be
ported over to python3k, sorted by importance (by importance I mean
packages which serve as a dependency for a larger number of packages
being more important).


  As there is already a Python 3.2 alpha, the core of Python has already
been ported - and this mailing list is for discussion of the development
of Python itself, not about working on 3rd-party libraries.


How about the email package?

Jean-Paul


Re: [Python-Dev] Volunteer help with porting

2010-09-07 Thread exarkun

On 02:34 pm, p...@phd.pp.ru wrote:
On Tue, Sep 07, 2010 at 02:02:59PM -, exar...@twistedmatrix.com 
wrote:

On 01:33 pm, p...@phd.pp.ru wrote:
  As there is already a Python 3.2 alpha, the core of Python has already
been ported


How about the email package?


  What about email? It is a core library, right? It has been ported 
AFAIK.

Or have I missed something?


Are you just assuming that because there have been 3.x releases, 
everything in the standard library works properly in 3.x?  Or do you 
know something about the email package specifically?


Last I checked, there were a number of outstanding issues with email in 
3.x.  And as Michael Foord pointed out, there are other standard library 
modules in the same state.


Jean-Paul


Re: [Python-Dev] Atlassian and bitbucket merge

2010-09-29 Thread exarkun

On 01:13 am, st...@holdenweb.com wrote:

I see that Atlassian have just taken over BitBucket, the Mercurial
hosting company. IIRC Atlassian offered to host our issue tracking on
JIRA, but in the end we decided to eat our own dog food and went with
roundup.

I'm wondering if they'd be similarly interested in supporting our Hg
server. Or is self-hosting the only acceptable solution? From recent
mail it looks like we may be up and running on Hg fairly soon.


I know of two medium sized projects (smaller than CPython) that are 
switching away from BitBucket's free services because of their poor 
reliability.


Jean-Paul


Re: [Python-Dev] We should be using a tool for code reviews

2010-09-30 Thread exarkun

On 02:47 pm, jnol...@gmail.com wrote:
On Wed, Sep 29, 2010 at 2:32 PM, Guido van Rossum  
wrote:

I would like to recommend that the Python core developers start using
a code review tool such as Rietveld or Reviewboard. I don't really
care which tool we use (I'm sure there are plenty of pros and cons to
each) but I do think we should get out of the stone age and start
using a tool for the majority of our code reviews.

While I would personally love to see Rietveld declared the official
core Python code review tool, I realize that since I wrote as a Google
engineer and it is running on Google infrastructure (App Engine), I
can't be fully objective about the tool choice -- even though it is
open source, has several non-Googler maintainers, and can be run
outside App Engine as well.

But I do think that using a specialized code review tool rather than
unstructured email plus a general-purpose issue tracker can hugely
improve developer performance and also increase community
participation. (A code review tool makes it much more convenient for a
senior reviewer to impart their wisdom to a junior developer without
appearing judgmental or overbearing.)

See also this buzz thread:
http://www.google.com/buzz/115212051037621986145/At6Rj82Kret/When-will-the-Python-dev-community-start-using


--
--Guido van Rossum (python.org/~guido)


Regardless of the tool(s) used, code reviews are a fantastic
equalizer. If you have long time, experienced developers "submitting"
to the same rules that newer contributors have to follow then it helps
remove the idea that there is special treatment occurring.


Of course, this is only true if the core developers *do* submit to the 
same rules.  Is anyone proposing that current core committers have all 
their work reviewed before it is accepted?


(I am strongly in favor of this, but I don't think many core committers 
are.)


Jean-Paul


Re: [Python-Dev] Support for async read/write

2010-10-19 Thread exarkun

On 04:50 pm, j...@jcea.es wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Current Python lacks support for "aio_*" syscalls to do async IO. I
think this could be a nice addition for python 3.3.


Adding more platform wrappers is always nice.  Keep in mind that the 
quality of most (all?) aio_* implementations is spotty at best, though. 
On Linux, they will sometimes block (for example, if you fail to align 
buffers properly, or open a file without O_DIRECT, or if there are too 
many other aio operations active on the system at the time, etc).  Thus, 
these APIs are finicky at best, and the Python bindings will be 
similarly fragile.


Jean-Paul

If you agree, I will create an issue in the tracker. If you think the
idea is of no value, please say so for me to move on.


Do you have an application in mind?

Maybe a 3rd-party module, but I think this functionality should be
available in core Python.


Starting off as a 3rd party library to try to develop some interest and 
users (and thus experience) before adding it to the standard library 
might make sense (as it frequently does).

Thanks!.

PS: The function calls are: aio_cancel, aio_error, aio_fsync, aio_read,
aio_return, aio_write.


Jean-Paul


Re: [Python-Dev] Support for async read/write

2010-10-19 Thread exarkun

On 01:37 am, gl...@twistedmatrix.com wrote:


On Oct 19, 2010, at 8:09 PM, James Y Knight wrote:

There's a difference.

os._exit is useful. os.open is useful. aio_* are *not* useful. For
anything. If there's anything you think you want to use them for,
you're wrong. It either won't work properly or it will perform worse
than the simpler alternatives.



I'd like to echo this sentiment.  This is not about providing a 'safe' 
wrapper to hide some powerful feature of these APIs: the POSIX aio_* 
functions are really completely useless.


To quote the relevant standard (the POSIX aio specification):



APPLICATION USAGE

None.

RATIONALE

None.

FUTURE DIRECTIONS

None.

Not only is the performance usually worse than expected, the behavior
of the aio_* functions requires all kinds of subtle and mysterious
coordination with signal handling, which I'm not entirely sure Python
would even be able to pull off without some modifications to the signal
module.  (And, as Jean-Paul mentioned, if your OS kernel runs out of
space in a queue somewhere, completion notifications might just never
be delivered at all.)


Just to be clear, James corrected me there.  I thought Jesus was talking 
about the mostly useless Linux AIO APIs, which have the problems I 
described.  He was actually talking about the POSIX AIO APIs, which have 
a different set of problems making them a waste of time.


Jean-Paul


Re: [Python-Dev] SVN rev. 85392 broke module handling in py3k

2010-10-22 Thread exarkun

On 02:13 pm, stefan...@behnel.de wrote:

Benjamin Peterson, 22.10.2010 16:03:

2010/10/22 Stefan Behnel:
since SVN rev. 85392, Cython's installation fails on the py3k branch 
with a
weird globals error. I think it is related to some sys.modules magic 
that we

do in order to support running Cython in Python 3 using lib2to3.

Basically, what we do is, we import some parts of Cython at the
beginning that are Py3 clean, specifically some distutils build_ext
replacement for building Cython modules. Then we start up distutils,
which first runs lib2to3 on Cython's sources to convert them into Py3
code. When it then gets to building the binary modules, we remove all
Cython modules and packages from sys.modules and reimport their 2to3-ed
sources so that we can run the complete compiler during the
installation (to bootstrap parts of Cython into binary modules).

Since the above revision, this process bails out with an error when
accessing "os.path" because "os" is None. The "os" module is imported
globally in our early-imported build_ext module, more or less like 
this:


import os

from distutils.command import build_ext as _build_ext

class build_ext(_build_ext.build_ext):

    def build_extensions(self):
        print(os)  # prints None!

I suspect that the fact that we remove the modules from sys.modules
somehow triggers the cleanup of these modules while there are still
objects from these modules alive that refer to their globals. So, what
I think is happening is that the module cleanup sets the module's
globals to None before the objects from that module that refer to these
globals have actually gone out of scope.


Instances of classes don't refer to the module their class is defined 
in.  It seems more likely that the reason the module is garbage 
collected is that there really is nothing which refers to it anymore.


The behavior of setting the attributes of a module being freed to None 
has been in place for a long time, r85392 only restored it after a brief 
absence.


Perhaps Cython itself should keep the modules alive that it wants kept 
alive.  Alternatively, if Cython owns the code that's running into the 
zapped global, you could change it to not use globals.
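The "keep the modules alive" workaround can be sketched like this (purge_package is an invented helper, not Cython's actual code):

```python
import sys

def purge_package(prefix):
    """Drop a package's entries from sys.modules so a later import
    re-executes them, while returning strong references that keep the
    old module objects -- and hence their globals -- alive."""
    kept = {}
    for name in list(sys.modules):
        if name == prefix or name.startswith(prefix + "."):
            kept[name] = sys.modules.pop(name)
    return kept
```

The caller holds on to the returned dict for as long as any object from the old modules is still in use.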


Jean-Paul


Re: [Python-Dev] Issue 10194 - Adding a gc.remap() function

2010-10-26 Thread exarkun

On 08:28 pm, pinge...@yahoo.com wrote:

--- On Tue, 10/26/10, "Martin v. Löwis"  wrote:

I think this then mandates a PEP; I'm -1 on the feature also.


I am happy to write up a PEP for this feature.  I'll start that
process now, though if anyone feels that this idea has no chance of
acceptance please let me know.


This can be implemented with ctypes right now (I half did it several 
years ago).


Jean-Paul


Re: [Python-Dev] Issue 10194 - Adding a gc.remap() function

2010-10-27 Thread exarkun

On 26 Oct, 11:31 pm, pinge...@yahoo.com wrote:
--- On Tue, 10/26/10, exar...@twistedmatrix.com 
 wrote:

This can be implemented with ctypes right now (I half did
it several years ago).

Jean-Paul


Is there a trick to doing it this way, or are you suggesting
building a ctypes wrapper for each C type in the Python
library, and then effectively reimplementing tp_traverse
in Python?


That's the idea, yes.

Jean-Paul
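A much smaller cousin of that ctypes experiment can be written with gc.get_referrers alone, handling only dict referrers (remap_in_dicts is a toy name; a real gc.remap() would have to cover every container type, which is where re-implementing tp_traverse per type comes in):

```python
import gc

def remap_in_dicts(old, new):
    """Replace references to `old` with `new` -- but only those held
    in plain dicts.  A deliberately incomplete sketch of remapping."""
    for referrer in gc.get_referrers(old):
        if isinstance(referrer, dict):
            # Snapshot items before mutating the referrer dict.
            for key, value in list(referrer.items()):
                if value is old:
                    referrer[key] = new
```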






Re: [Python-Dev] MemoryError... how much memory?

2010-10-27 Thread exarkun

On 07:09 pm, facundobati...@gmail.com wrote:
On Wed, Oct 27, 2010 at 12:05 PM, Benjamin Peterson 
 wrote:

Isn't this usually when you do something like [None]*2**300? In that
case, wouldn't you know how much memory you're requesting?


It could happen on any malloc. It depends on how much you have free.

Don't think on getting a MemoryError on a python you just opened in
the console. Think about a server with a month of uptime, where you
have all the memory fragmented, etc.

Also, why is that useful?


It helps to determine why we're having some MemoryErrors on our
long-lived server, what the behaviour is when that happens, etc.


But... If you allocated all of your memory to some garbage, and then a 5 
byte string can't be allocated, you don't really care about the 5 byte 
string, you care about the garbage that's wasting your memory.


Tools like heapy will give you a lot of information.  Maybe it wouldn't 
hurt anyone to have more information in a MemoryError.  But I don't 
think it's going to help a lot either.  It's not the information that 
you're really interested in.
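Today's stdlib also has tracemalloc (added well after this thread) for exactly this kind of question -- finding the allocation sites holding your memory, rather than the allocation that happened to fail:

```python
import tracemalloc

tracemalloc.start()
hog = [str(i) * 50 for i in range(10_000)]  # stand-in for the real garbage
snapshot = tracemalloc.take_snapshot()
# The biggest allocation site -- on a long-lived server this points at
# the garbage wasting memory, which a MemoryError message never could.
top_stat = snapshot.statistics("lineno")[0]
```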


Jean-Paul


Re: [Python-Dev] Continuing 2.x

2010-10-28 Thread exarkun

On 04:04 pm, ba...@python.org wrote:


I'd *much* rather this enthusiasm be spent on making Python 3 rock, and 
in

porting third party code to Python 3.


Enthusiasm isn't fungible.

Jean-Paul


Re: [Python-Dev] Continuing 2.x

2010-10-29 Thread exarkun

On 02:51 am, br...@python.org wrote:

2010/10/28 Kristján Valur Jónsson :

Hi all.
This has been a lively discussion.
My desire to keep 2.x alive in some sense is my own and I don't know 
if anyone shares it but as a member of this community I think I'm 
allowed to voice it.  So, just to clarify my particular position, let 
me explain where all this comes from.

[snip]

And as everyone has said so far (and with which I agree), that's fine.
As long as it is not called Python 2.8 -- EVE-Python 2.8 or some Monty
Python reference -- then that's fine. And as pointed out by folks,
once Hg kicks in and we have user repos you can even host it on
hg.python.org yourself to give it some semblance of authority if you
want.


In case anyone was discouraged by the idea that a 2.x continuation would 
not be allowed to bear the name "Python" as Brett suggests here, I want 
to make a clarification.


Brett is speaking for himself here (and he never claimed otherwise!). 
However, decisions about where to allow the use of the "Python" 
trademark are made by the Python Software Foundation.  The PSF has not 
decided to reject any request by a 2.x continuation project to use the 
name "Python".  Of course, this does not mean they would necessarily 
allow such a use.  I just wanted to point out that they have not 
categorically rejected it, as one might be tempted to infer from Brett's 
message.


Jean-Paul


Re: [Python-Dev] Cleaning-up the new unittest API

2010-11-02 Thread exarkun

On 04:29 pm, fuzzy...@voidspace.org.uk wrote:

On 02/11/2010 16:23, Terry Reedy wrote:

On 11/2/2010 10:05 AM, C. Titus Brown wrote:
...but, as someone who has to figure out how to teach stuff to CSE 
undergrads

(and biology grads) I hate the statement "...any programmer should
expect this..."


And indeed I (intentionally) did not say that. People who are ignorant 
and inexperienced about something should avoid making expectations in 
any direction until they have read the doc and experimented a bit.
Expectations come from consistent behaviour. sorted behaves 
consistently for *most* of the built-in types and will also work for 
custom types that provide a 'standard' (total ordering) implementation 
of __lt__.


It is very easy to *not realise* that a consequence of sets (and 
frozensets) providing partial ordering through operator overloading is 
that sorting is undefined for them.


Perhaps.  The documentation for sets says this, though:

 Since sets only define partial ordering (subset relationships), the 
output of the list.sort() method is undefined for lists of sets.
Particularly as it still works for other mutable collections. Worth 
being aware that custom implementations of standard operators will 
break expectations of users who aren't intimately aware of the problem 
domains that the specific type may be created for.


I can't help thinking that most of this confusion is caused by using < 
for determining subsets.  If < were not defined for sets and people had 
to use "set.issubset" (which exists already), then sorting a list with 
sets would raise an exception, a much more understandable failure mode 
than getting back a list in arbitrary order.
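The behaviour in question fits in a few lines: since < on sets means "is a proper subset", many pairs compare False both ways, and that is exactly what makes list.sort() undefined for them:

```python
# Proper-subset comparisons: a total order would never allow the
# "False both ways" case on the third line.
assert {1} < {1, 2}                       # proper subset
assert not {1, 2} < {1}                   # superset
assert not {1} < {2} and not {2} < {1}    # incomparable sets
assert {1}.issubset({1, 2})               # the explicit spelling
```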


Jean-Paul


Re: [Python-Dev] On breaking modules into packages Was: [issue10199] Move Demo/turtle under Lib/

2010-11-02 Thread exarkun

On 12:47 am, ben+pyt...@benfinney.id.au wrote:

Antoine Pitrou  writes:

I don't agree with this. Until it's documented, it's an implementation
detail and should be able to change without notice.


If it's an implementation detail, shouldn't it be named as one (i.e.
with a leading underscore)?

If someone wants to depend on some undocumented detail of the
directory layout it's their problem (like people depending on bytecode
and other stuff).


I would say that names without a single leading underscore are part of
the public API, whether documented or not.


And if that isn't the rule, then what the heck is?

Jean-Paul


Re: [Python-Dev] Pickle alternative in stdlib (Was: On breaking modules into packages)

2010-11-04 Thread exarkun

On 06:28 am, techto...@gmail.com wrote:
On Wed, Nov 3, 2010 at 9:08 PM, Glyph Lefkowitz 
 wrote:


This is the strongest reason why I recommend to everyone I know that 
they
not use pickle for storage they'd like to keep working after upgrades 
[not
just of stdlib, but other 3rd party software or their own software]. 
:)


+1.
Twisted actually tried to preserve pickle compatibility in the bad old
days, but it was impossible.  Pickles should never really be saved to
disk unless they contain nothing but lists, ints, strings, and dicts.


But what is alternative in stdlib?
Don't you think that Python doesn't provide any?


Persistence is a very hard problem.  Lots and lots of trade-offs need to 
be made, and you generally want to tailor those trade-offs to the 
particular application at hand.  This probably means that the stdlib 
isn't a suitable place to try to solve the problem.


Look outside the stdlib and you'll find an extremely vibrant and diverse 
collection of software which is aimed at solving this problem, though.
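That said, for the "nothing but lists, ints, strings, and dicts" subset mentioned above, the stdlib does offer json, which ties the on-disk format to plain data rather than to class layouts that change across upgrades:

```python
import json

state = {"version": 1, "items": ["a", "b"], "count": 2}
blob = json.dumps(state)          # plain text, independent of any class
assert json.loads(blob) == state  # round-trips the safe subset
```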


Jean-Paul


Re: [Python-Dev] Pickle alternative in stdlib (Was: On breaking modules into packages)

2010-11-04 Thread exarkun

On 12:21 am, m...@gsites.de wrote:

On 04.11.2010 17:15, anatoly techtonik wrote:
> pickle is insecure, marshal too.

If the transport or storage layer is not safe, you should 
cryptographically sign the data anyway::

    import base64
    import hmac
    import pickle

    def pickle_encode(data, key):
        msg = base64.b64encode(pickle.dumps(data, -1))
        sig = base64.b64encode(hmac.new(key, msg).digest())
        return sig + ':' + msg

    def pickle_decode(data, key):
        if data and ':' in data:
            sig, msg = data.split(':', 1)
            if sig == base64.b64encode(hmac.new(key, msg).digest()):
                return pickle.loads(base64.b64decode(msg))
        raise pickle.UnpicklingError("Wrong or missing signature.")

Bottle (a web framework) uses a similar approach to store non-string 
data in client-side cookies. I don't see a (security) problem here.


Your pickle_decode leaks information about the key.  An attacker will 
eventually (in a few seconds to a few minutes, depending on the kind of 
access they have to this system) be able to determine your key and send 
you arbitrary pickles (i.e., execute arbitrary code on your system).


Oops.

This stuff is hard.  If you're going to mess around with it, make sure 
you're *serious* (better approach: don't mess around with it).
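[The specific leak in the quoted code is the `==` string comparison, which short-circuits at the first differing byte, so response timing lets an attacker recover a valid signature incrementally. A sketch of a constant-time check using `hmac.compare_digest` (which postdates this 2010 thread: it arrived in Python 2.7.7/3.3), with an explicit SHA-256 digest since modern `hmac.new` requires one:]

```python
import base64
import hashlib
import hmac
import pickle

def pickle_encode_ct(data, key):
    # Sign the base64-encoded pickle with HMAC-SHA256 (an explicit digest,
    # rather than the historical MD5 default the quoted code relied on).
    msg = base64.b64encode(pickle.dumps(data, -1))
    sig = base64.b64encode(hmac.new(key, msg, hashlib.sha256).digest())
    return sig + b':' + msg

def pickle_decode_ct(data, key):
    if data and b':' in data:
        sig, msg = data.split(b':', 1)
        expected = base64.b64encode(hmac.new(key, msg, hashlib.sha256).digest())
        # compare_digest runs in time independent of where the inputs differ,
        # so a wrong guess reveals nothing about the correct signature.
        if hmac.compare_digest(sig, expected):
            return pickle.loads(base64.b64decode(msg))
    raise pickle.UnpicklingError("Wrong or missing signature.")
```

This closes the timing channel but not the deeper problem Jean-Paul raises: if the key ever leaks by any other route, a valid signature still means "execute this pickle".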


Jean-Paul


Re: [Python-Dev] Snakebite, buildbot and low hanging fruit -- feedback wanted! (Was Re: SSH access against buildbot boxes)

2010-11-07 Thread exarkun

On 11:24 am, tr...@snakebite.org wrote:


2.  Address the second problem of the buildbot web interface sucking 
for non-standard branches.  I'm thinking along the lines of a hack to 
buildbot, such that upon creation of new per-activity branches off a 
mainline, something magically runs in the background and sets up a 
complete buildbot view at <python.snakebite.org/dev/buildbot/branch-name>, 
just as if you were looking at a trunk buildbot page.


This is basically trivial.  I gave #python-dev a tool for forcing 
builds, dunno if anyone still has a copy, but it's easy to reconstruct 
from <http://twistedmatrix.com/trac/browser/sandbox/exarkun/force-builds.py> 
(which is what the Twisted project uses).  Plus, you can add 
?branch=<name> to most BuildBot views to limit display of results to 
just builds for the named branch.

Titus, for example, alluded to some nifty way for a committer to push 
his local hg branch/changes somewhere, such that it would kick off 
builds on multiple platforms in the same sorta' vein as point 2, but 
able to leverage cloud resources like Amazon's EC2, not just Snakebite 
hardware.


BuildBot supports managing EC2 instance lifetimes to run builds.
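[As a sketch of what that looked like in the BuildBot 0.8.x era: a latent slave declared in master.cfg that BuildBot boots on EC2 when a build is queued and shuts down after an idle period. The class and parameter names below are recalled from that old API and all concrete values are placeholders, so treat the whole fragment as an assumption to check against your BuildBot version's docs:]

```python
# master.cfg fragment (BuildBot 0.8.x-era API, sketched from memory).
from buildbot.ec2buildslave import EC2LatentBuildSlave

c['slaves'] = [
    EC2LatentBuildSlave(
        'ec2-builder', 'slavepassword',
        'm1.large',                 # EC2 instance type
        ami='ami-12345678',         # hypothetical image with the build toolchain
        identifier='AKIA...',       # AWS access key (placeholder)
        secret_identifier='...',    # AWS secret key (placeholder)
        build_wait_timeout=600,     # idle seconds before the instance is stopped
    ),
]
```

The instance only runs while builds need it, which is the property that makes cloud resources like EC2 cheap enough for this kind of on-demand branch testing.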

Jean-Paul

