Re: [Python-Dev] requirements for moving __import__ over to importlib?

2012-02-09 Thread Mike Meyer
On Thu, 9 Feb 2012 14:19:59 -0500
Brett Cannon br...@python.org wrote:
 On Thu, Feb 9, 2012 at 13:43, PJ Eby p...@telecommunity.com wrote:
  Again, the goal is fast startup of command-line tools that only use a
  small subset of the overall framework; doing disk access for lazy imports
  goes against that goal.
 
 Depends on whether you consider stat calls the overhead vs. the actual disk
 read/write to load the data. Anyway, this is going to lead down to a
 discussion/argument over design parameters which I'm not up to having since
 I'm not actively working on a lazy loader for the stdlib right now.

For those of you not watching -ideas, or ignoring the Python TIOBE
-3% discussion, this would seem to be relevant to any discussion of
reworking the import mechanism:

http://mail.scipy.org/pipermail/numpy-discussion/2012-January/059801.html
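For readers following along, the lazy-import idea under discussion can be sketched as a proxy object that defers the real import (and its disk access) until first attribute access. This is an illustrative toy, not importlib's implementation; the `LazyModule` name is made up:

```python
import importlib

class LazyModule:
    """Defer the real import (and its disk access) until first attribute use."""
    def __init__(self, name):
        self._name = name
        self._mod = None

    def __getattr__(self, attr):
        # __getattr__ only fires for attributes not found normally, so the
        # _name/_mod slots set in __init__ never recurse through here.
        if self._mod is None:
            self._mod = importlib.import_module(self._name)
        return getattr(self._mod, attr)

json = LazyModule("json")       # no 'json' import has happened yet
print(json.dumps({"a": 1}))     # first use triggers the real import
```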

mike
-- 
Mike Meyer m...@mired.org http://www.mired.org/
Independent Software developer/SCM consultant, email for more information.

O ascii ribbon campaign - stop html mail - www.asciiribbon.org
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 408 -- Standard library __preview__ package

2012-01-28 Thread Mike Meyer
Antoine Pitrou solip...@pitrou.net wrote:

On Sat, 28 Jan 2012 13:14:36 -0500
Barry Warsaw ba...@python.org wrote:
 On Jan 28, 2012, at 09:15 AM, Guido van Rossum wrote:
 
 So I do not support the __preview__ package. I think we're better off
 flagging experimental modules in the docs than in their name. For the
 specific case of the regex module, the best way to adoption may just
 be to include it in the stdlib as regex and keep it there. Any other
 solution will just cause too much anxiety.
 
 +1
 
 What does the PEP give you above this simple as possible solution?

I think we'll just see folks using the unstable APIs and then
complaining when we remove them, even though they *know* *upfront* that
these APIs will go away.

That problem would be much worse if some modules were simply marked
experimental in the doc, rather than put in a separate namespace.
You will see people copying recipes found on the internet without
knowing that they rely on unstable APIs.

How about doing them the way we do deprecated modules, and have them spit
warnings to stderr?  Maybe add a flag and an environment variable to disable that.

-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.


Re: [Python-Dev] PEP 408 -- Standard library __preview__ package

2012-01-28 Thread Mike Meyer
Antoine Pitrou solip...@pitrou.net wrote:

Le samedi 28 janvier 2012 à 10:46 -0800, Mike Meyer a écrit :
 Antoine Pitrou solip...@pitrou.net wrote:
 You will see people copying recipes found on the internet without
 knowing that they rely on unstable APIs.
 
 How about doing them the way we do deprecated modules, and have
 them spit warnings to stderr?  Maybe add a flag and environment
 variable to disable that.

You're proposing that new experimental modules spit warnings when you
use them?

To be explicit, when the system loads them.

 I don't think that's a good way of promoting their use :)

And importing something from __preview__ or __experimental__ or whatever won't? 
This thread did include the suggestion that they go into their final location 
instead of a magic module.

(something we do want to do even though we also want to convey the idea
that they're not yet stable or fully approved)

Doing it with a message pointing at the page describing the status makes sure 
users read the docs before using them. That solves the problem of using them 
without realizing it.
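The warn-on-load idea could look something like the following sketch. The `preview_warning` helper and the `PY_NO_PREVIEW_WARN` opt-out variable are hypothetical names, not anything Python actually provides:

```python
import os
import warnings

def preview_warning(modname, docs_url):
    """Emit a warning when a preview module is loaded, pointing at its docs.

    PY_NO_PREVIEW_WARN is a hypothetical opt-out environment variable,
    not a real Python setting.
    """
    if not os.environ.get("PY_NO_PREVIEW_WARN"):
        warnings.warn(
            "%s is a preview module; its API may change. See %s"
            % (modname, docs_url),
            FutureWarning,
            stacklevel=3,
        )

# At the top of a hypothetical preview module:
preview_warning("regex", "http://docs.python.org/...")
```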

-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.


Re: [Python-Dev] Proposed PEP on concurrent programming support

2012-01-11 Thread Mike Meyer
On Wed, 4 Jan 2012 00:07:27 -0500
PJ Eby p...@telecommunity.com wrote:

 On Tue, Jan 3, 2012 at 7:40 PM, Mike Meyer m...@mired.org wrote:
  A suite is marked
  as a `transaction`, and then when an unlocked object is modified,
  instead of indicating an error, a locked copy of it is created to be
  used through the rest of the transaction. If any of the originals
  are modified during the execution of the suite, the suite is rerun
  from the beginning. If it completes, the locked copies are copied
  back to the originals in an atomic manner.
 I'm not sure if locked is really the right word here.  A private
 copy isn't locked because it's not shared.

Do you have a suggestion for a better word? Maybe the safe state
used elsewhere?

  For
  instance, combining STM with explicit locking would allow explicit
  locking when IO was required,
 I don't think this idea makes any sense, since STM's don't really
 lock, and to control I/O in an STM system you just STM-ize the
 queues. (Generally speaking.)

I thought about that. I couldn't convince myself that STM by itself is
sufficient. If you need to make irreversible changes to the state of
an object, you can't use STM, so what do you use? Can every such
situation be handled by creating safe values then using an STM to
update them?
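The transaction semantics described above (work on a private copy, rerun on conflict, commit atomically) can be illustrated with a toy retry loop over a shared dict. This is a sketch of the idea only; real STM implementations track reads and writes far more precisely:

```python
import threading

_commit_lock = threading.Lock()

def atomically(txn, shared):
    """Run txn against a private copy of `shared` (a dict).

    If the original changed while the transaction ran, rerun it from the
    beginning; otherwise copy the results back under a lock so the commit
    is atomic. A toy sketch of the semantics, not real STM.
    """
    while True:
        before = dict(shared)   # state the transaction started from
        work = dict(before)     # the private copy the suite mutates
        result = txn(work)
        with _commit_lock:
            if shared == before:        # nobody modified the originals
                shared.clear()
                shared.update(work)
                return result
        # conflict: another thread committed first; rerun the suite

account = {"balance": 100}
atomically(lambda s: s.__setitem__("balance", s["balance"] - 30), account)
print(account["balance"])  # 70
```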

   mike


Re: [Python-Dev] A question about the subprocess implementation

2012-01-07 Thread Mike Meyer
On Sat, 7 Jan 2012 21:25:37 + (UTC)
Vinay Sajip vinay_sa...@yahoo.co.uk wrote:

 The subprocess.Popen constructor takes stdin, stdout and stderr keyword
 arguments which are supposed to represent the file handles of the child 
 process.
 The object also has stdin, stdout and stderr attributes, which one would 
 naively
 expect to correspond to the passed in values, except where you pass in e.g.
 subprocess.PIPE (in which case the corresponding attribute would be set to an
 actual stream or descriptor).
 
 However, in common cases, even when keyword arguments are passed in, the
 corresponding attributes are set to None. The following script

Note that this is documented behavior for these attributes.

 This seems to me to contradict the principle of least surprise. One
 would expect, when a file-like object is passed in as a keyword
 argument, that it be placed in the corresponding attribute.

Since the only reason they exist is so you can access your end of a
pipe, setting them to anything would seem to be a bug. I'd argue that
their existence is more a POLA violation than them having the value
None. But None is easier than a call to hasattr.

 That way, if one wants to do p.stdout.close() (which is necessary in
 some cases), one doesn't hit an AttributeError because NoneType has
 no attribute 'close'.

You can close the object you passed in if it wasn't PIPE. If you
passed in PIPE, the object has to be exposed some way, otherwise you
*can't* close it.
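To make the documented behavior concrete: the attributes are only populated when PIPE is passed, which is exactly when the caller has no other handle on the stream. A small sketch (using sys.executable for portability):

```python
import sys
from subprocess import Popen, PIPE

cmd = [sys.executable, "-c", "print('hello')"]

p = Popen(cmd, stdout=PIPE)     # we asked for a pipe...
assert p.stdout is not None     # ...so Popen must expose our end of it
out = p.stdout.read()
p.stdout.close()
p.wait()

q = Popen(cmd)                  # stdout inherited: nothing to expose
q.wait()
assert q.stdout is None
```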

This did raise one interesting question, which will go to ideas...

   mike


-- 
Mike Meyer m...@mired.org http://www.mired.org/
Independent Software developer/SCM consultant, email for more information.

O ascii ribbon campaign - stop html mail - www.asciiribbon.org



 import os
 from subprocess import Popen, PIPE
 import tempfile
 
 cmd = 'ls /tmp'.split()
 
 p = Popen(cmd, stdout=open(os.devnull, 'w+b'))
 print('process output streams: %s, %s' % (p.stdout, p.stderr))
 p = Popen(cmd, stdout=tempfile.TemporaryFile())
 print('process output streams: %s, %s' % (p.stdout, p.stderr))
 
 prints
 
 process output streams: None, None
 process output streams: None, None
 
 under both Python 2.7 and 3.2. However, if subprocess.PIPE is passed in, then
 the corresponding attribute *is* set: if the last four lines are changed to
 
 p = Popen(cmd, stdout=PIPE)
 print('process output streams: %s, %s' % (p.stdout, p.stderr))
 p = Popen(cmd, stdout=open(os.devnull, 'w+b'), stderr=PIPE)
 print('process output streams: %s, %s' % (p.stdout, p.stderr))
 
 then you get
 
 process output streams: <open file '<fdopen>', mode 'rb' at 0x2088660>, None
 process output streams: None, <open file '<fdopen>', mode 'rb' at 0x2088e40>
 
 under Python 2.7, and
 
 process output streams: <_io.FileIO name=3 mode='rb'>, None
 process output streams: None, <_io.FileIO name=5 mode='rb'>
 
 This seems to me to contradict the principle of least surprise. One would
 expect, when a file-like object is passed in as a keyword argument, that it
 be placed in the corresponding attribute. That way, if one wants to do
 p.stdout.close() (which is necessary in some cases), one doesn't hit an
 AttributeError because NoneType has no attribute 'close'.
 


Re: [Python-Dev] A question about the subprocess implementation

2012-01-07 Thread Mike Meyer
On Sun, 8 Jan 2012 02:06:33 + (UTC)
Vinay Sajip vinay_sa...@yahoo.co.uk wrote:

 Mike Meyer mwm at mired.org writes:
 
  Since the only reason they exist is so you can access your end of a
  pipe, setting them to anything would seem to be a bug. I'd argue that
  their existence is more a pola violation than them having the value
  None. But None is easier than a call to hasattr.
 
 I don't follow your reasoning, re. why setting them to a handle used for
 subprocess output would be a bug - it's logically the same as the PIPE case.

No, it isn't. In the PIPE case, the value of the attributes isn't
otherwise available to the caller.

I think you're not following because you're thinking about what you
want to do with the attributes:

 storing it [the fd] in proc.stdout or proc.stderr?

As opposed to what they're used for, which is communicating the fd's
created in the PIPE case to the caller.  Would you feel the same way
if they were given the more accurate names pipe_input and
pipe_output?

  You can close the object you passed in if it wasn't PIPE. If you
  passed in PIPE, the object has to be exposed some way, otherwise you
  *can't* close it.
 Yes, I'm not disputing that I need to keep track of it - just that proc.stdout
 seems a good place to keep it.

I disagree. Having the proc object keep track of these things for you
is making it more complicated (by the admittedly trivial change of
assigning those two attributes when they aren't used) so you can make
your process creation code less complicated (by the equally trivial
change of assigning the values in those two attributes when they are
used). Since only the caller knows when this complication is needed,
that's the logical place to put it.

 That way, the closing code can be de-coupled from the code that sets
 up the subprocess.

There are other ways to do that. It's still the same tradeoff - you're
making the proc code more complicated to make the calling code
simpler, even though only the calling code knows if that's needed.
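The caller-side bookkeeping being argued for here is a one-line affair: keep your own reference to the file object you hand to Popen, and close it yourself. A minimal sketch:

```python
import os
import sys
import subprocess

# The caller created the file object, so the caller keeps the reference
# and closes it; Popen doesn't need to hand it back.
sink = open(os.devnull, "wb")
try:
    p = subprocess.Popen([sys.executable, "-c", "print('x')"], stdout=sink)
    p.wait()
finally:
    sink.close()    # closing is decoupled from the Popen call via the variable
```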

   mike
-- 
Mike Meyer m...@mired.org http://www.mired.org/
Independent Software developer/SCM consultant, email for more information.

O ascii ribbon campaign - stop html mail - www.asciiribbon.org


[Python-Dev] Proposed PEP on concurrent programming support

2012-01-03 Thread Mike Meyer
PEP: XXX
Title: Interpreter support for concurrent programming
Version: $Revision$
Last-Modified: $Date$
Author: Mike Meyer m...@mired.org
Status: Draft
Type: Informational
Content-Type: text/x-rst
Created: 11-Nov-2011
Post-History: 


Abstract
========

The purpose of this PEP is to explore strategies for making concurrent
programming in Python easier by allowing the interpreter to detect and
notify the user about possible bugs in concurrent access. The reason
for doing so is that Errors should never pass silently.

Such bugs are caused by allowing objects to be accessed simultaneously
from another thread of execution while they are being modified.
Currently, Python systems provide no support for detecting such bugs, falling
back on the underlying platform facilities and some tools built on top
of those.  While these tools allow prevention of such modification if
the programmer is aware of the need for them, there are no facilities
to detect that such a need might exist and warn the programmer of it.

The goal is not to prevent such bugs, as that depends on the
programmer getting the logic of the interactions correct, which the
interpreter can't judge.  Nor is the goal to warn the programmer about
any such modifications - the goal is to catch standard idioms making
unsafe modifications.  If the programmer starts tinkering with
Python's internals, it's assumed they are aware of these issues.


Rationale
=========

Concurrency bugs are among the hardest bugs to locate and fix.  They
result in corrupt data being generated or used in a computation.  Like
most such bugs, the corruption may not become evident until much later
and far away in the program.  Minor changes in the code can cause the
bugs to fail to manifest.  They may even fail to manifest from run to
run, depending on external factors beyond the control of the
programmer.

Therefore any help in locating and dealing with such bugs is valuable.
If the interpreter is to provide such help, it must be aware of when
things are safe to modify and when they are not. This means it will
almost certainly cause incompatible changes in Python, and may impose
costs so high for non-concurrent operations as to make it untenable.
As such, the final options discussed are destined for Python version 4
or later, and may never be implemented in any mainstream
implementation of Python.

Terminology
===========

The word thread is used throughout to mean concurrent thread of
execution.  Nominally, this means a platform thread.  However, it is
intended to include any threading mechanism that allows the
interpreter to change threads between or in the middle of a statement
without the programmer specifically allowing this to happen.

Similarly, the word interpreter means any system that processes and
executes Python language files.  While this normally means cPython,
the changes discussed here should be amenable to other
implementations.


Concept
=======

Locking object
--------------

The idea is that the interpreter should indicate an error anytime an
unlocked object is mutated.  For mutable types, this would mean
changing the value of the type. For Python class instances, this would
mean changing the binding of an attribute.  Mutating an object bound
to such an attribute isn't a change in the object the attribute
belongs to, and so wouldn't indicate an error unless the object bound
to the attribute was unlocked.
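As an illustration of the rule above, here is a toy wrapper that raises when an attribute is rebound on an "unlocked" object. The `Guarded` class and `UnlockedMutation` exception are hypothetical names for this sketch; nothing like this exists in the interpreter:

```python
class UnlockedMutation(Exception):
    pass

class Guarded:
    """Hypothetical sketch: attribute rebinding fails unless the object
    has been explicitly locked first."""
    def __init__(self, **attrs):
        object.__setattr__(self, "_locked", False)
        for k, v in attrs.items():
            object.__setattr__(self, k, v)   # bypass the guard during setup

    def lock(self):
        object.__setattr__(self, "_locked", True)

    def unlock(self):
        object.__setattr__(self, "_locked", False)

    def __setattr__(self, name, value):
        if not self._locked:
            raise UnlockedMutation("rebinding %r on an unlocked object" % name)
        object.__setattr__(self, name, value)

obj = Guarded(x=1)
obj.lock()
obj.x = 2          # fine: we hold the lock
obj.unlock()
try:
    obj.x = 3      # error: mutating an unlocked object
except UnlockedMutation as e:
    print("caught:", e)
```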

Locking by name
---------------

It's also been suggested that locking names would be useful.  That
is, to prevent a specific attribute of an object from being rebound,
or a key/index entry in a mapping object. This provides a finer
grained locking than just locking the object, as you could lock a
specific attribute or set of attributes of an object, without locking
all of them.

Unfortunately, this isn't sufficient: a set may need to be locked to
prevent deletions for some period, or a dictionary to prevent adding a
key, or a list to prevent changing a slice, etc.

So some other locking mechanism is required.  If that needs to specify
objects, some way of distinguishing between locking a name and locking
the object bound to the name needs to be invented, or there needs to
be two different locking mechanisms.  It's not clear that finer-grained
locking is worth adding yet another language mechanism for.


Alternatives
============


Explicit locking
----------------

These alternatives require that the programmer explicitly name
anything that is going to be changed, locking it before changing it.
This lets the interpreter get involved, but makes a number of errors
possible depending on the order in which locks are acquired.

Platform locks
''''''''''''''

The current tool set uses platform locks via a C extension.  The
problem with these is that the interpreter has no knowledge of them,
and so can't do anything about detecting the mutation of unlocked
objects.


A ``locking`` keyword
'''''''''''''''''''''

Adding a statement to tell the interpreter to lock objects

Re: [Python-Dev] Anyone still using Python 2.5?

2011-12-21 Thread Mike Meyer
On Thu, 22 Dec 2011 01:49:37 +
Michael Foord fuzzy...@voidspace.org.uk wrote:
 These figures can't possibly be true. No-one is using Python 3 yet. ;-)

Since you brought it up. Is anyone paying people (or trying to hire
people) to write Python 3?

Thanks,
mike
-- 
Mike Meyer m...@mired.org http://www.mired.org/
Independent Software developer/SCM consultant, email for more information.

O ascii ribbon campaign - stop html mail - www.asciiribbon.org


Re: [Python-Dev] Fixing the XML batteries

2011-12-09 Thread Mike Meyer
On Fri, 09 Dec 2011 09:02:35 +0100
Stefan Behnel stefan...@behnel.de wrote:

 a) The stdlib documentation should help users to choose the right
 tool right from the start.
 b) cElementTree should finally lose its special status as a
 separate library and disappear as an accelerator module behind
 ElementTree.

+1 and +1.
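For context, point b) would retire the import dance users currently do by hand, which falls back cleanly if the accelerator is absent:

```python
# The manual accelerator selection the proposal makes unnecessary:
# pick cElementTree if it exists, otherwise the pure-Python module.
try:
    import xml.etree.cElementTree as ET
except ImportError:
    import xml.etree.ElementTree as ET
```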

I've done a lot of XML work in Python, and unless you've got a
particular reason for wanting to use the DOM, ElementTree is the only
sane way to go.

I recently converted a middling-sized app from using the DOM to using
ElementTree, and wrote up some guidelines for the process for the
client. I can try to shake it out of my client's lawyers if it would
help with this, or if others are interested.
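For readers who haven't compared the two styles, a taste of the ElementTree API (no DOM boilerplate; attributes come back as plain strings):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring('<order><item sku="a1">2</item><item sku="b2">5</item></order>')
# Iterate matching elements directly; attributes are read with .get().
for item in doc.iter("item"):
    print(item.get("sku"), int(item.text))
```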

 mike


Re: [Python-Dev] [PATCH] Adding braces to __future__

2011-12-09 Thread Mike Meyer
On Fri, 9 Dec 2011 21:26:29 +0100
Cedric Sodhi man...@gmx.net wrote:
 Readable code, is it really an advantage?
 Of course it is.

Ok, you got that right.

 Forcing the programmer to write readable code, is that an advantage?
 No suspense, the answer is Of course not.

This is *not* an Of course. Readable code is *important*. Giving
programmers more power in exchange for less readable code is a bad
trade.  For an extended analysis, see:
http://blog.mired.org/2011/10/more-power-is-not-always-good-thing.html

One of Python's best points is that the community resists the urge to
add things just to add things. The community generally applies three
tests to any feature before accepting it:

1) It should have a good use case.
2) It should enable more readable code for that use case.
3) It shouldn't make writing unreadable code easy.

DB fails all three of these tests. It doesn't have a good use
case. The code you create using it is not more readable than the
alternative. And it definitely makes writing unreadable code easy.

And of course, it violates TOOWTDI.

mike


Re: [Python-Dev] Python XML Validator

2008-03-11 Thread Mike Meyer
On Tue, 11 Mar 2008 14:55:04 +0100 Stefan Behnel [EMAIL PROTECTED] wrote:
 (weird places these threads come up at, but now that it's here...)
 Mike Meyer wrote:
  On Tue, 04 Mar 2008 15:44:32 -0800 Ned Deily [EMAIL PROTECTED] wrote:
  In article [EMAIL PROTECTED],
   Mike Meyer [EMAIL PROTECTED] wrote:
  On Thu, 28 Feb 2008 23:42:49 + (UTC) Medhat Gayed 
  [EMAIL PROTECTED] wrote:
  lxml is good but not written in python and difficult to install and
  didn't work on MacOS X.

Please note that this original complaint is *not* mine. However...

 Due to a design problem in MacOS-X, not a problem in lxml.

I didn't find it noticeably harder to install lxml on MacOS-X than
most other systems.

 But it's not that hard to install either, as previous posts presented.

Depends on how you define hard. If I have to create a custom
environment with updated versions of system libraries just to use lxml,
I'd call that hard. That was pretty much the only route available
the first time I wanted lxml on OS-X. And ubuntu. And RHEL.

The second time for OS-X, I used an older version of lxml (1.3.6), and
just did setup.py install. Worked like a charm. That's not hard.

The only system where installing a modern version of lxml was easy
was FreeBSD, probably because libxml2 and libxslt aren't part of the
system software.

  However, the authors tend to require recent
  versions of libxml2 and libxslt, which means recent versions of lxml
  won't build and/or work with the libraries bundled with many Unix and
  Unix-like systems
 I wouldn't consider a dependency on an almost three year old library version
 recent, libxml2 2.6.20 was released in July 2005.

Well, if you're on a development box that you update regularly, you're
right: three years old is pretty old. If you're talking about a
production box that you don't touch unless you absolutely have to,
you're wrong: three years old is still pretty recent. For example, the
most recent release of RHEL is 4.6, which ships with libxml2 2.6.16.

  Which means you wind up having to
  build those yourself if you want a recent version of lxml, even if
  you're using a system that includes lxml in its package system.
 If you want a clean system, e.g. for production use, buildout has proven to be
 a good idea. And we also provide pretty good instructions on our web page on
 how to install lxml on MacOS-X and what to take care of.

Yes, but the proposal was to include it in the Python standard
library. Software that doesn't work on popular target platforms
without updating a standard system library isn't really suitable for
that.

 mike
-- 
Mike Meyer [EMAIL PROTECTED]  http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.


Re: [Python-Dev] Python XML Validator

2008-03-05 Thread Mike Meyer
On Wed, 05 Mar 2008 13:01:14 +1300 Greg Ewing [EMAIL PROTECTED] wrote:

 Mike Meyer wrote:
  Trying to install it from the repository is a PITA, because
  it uses both the easyinstall and Pyrex
 
 It shouldn't depend on Pyrex as long as it's distributed
 with the generated C files. If it's not, that's an
 oversight on the part of the distributor.

Sorry I wasn't clear. "From the repository" means building from
source checked out of the source repository, not from a
distribution.

mike
-- 
Mike Meyer [EMAIL PROTECTED]  http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.


Re: [Python-Dev] Python XML Validator

2008-03-05 Thread Mike Meyer
On Tue, 04 Mar 2008 15:44:32 -0800 Ned Deily [EMAIL PROTECTED] wrote:

 In article [EMAIL PROTECTED],
  Mike Meyer [EMAIL PROTECTED] wrote:
  On Thu, 28 Feb 2008 23:42:49 + (UTC) Medhat Gayed 
  [EMAIL PROTECTED] wrote:
   lxml is good but not written in python and difficult to install and
   didn't work on MacOS X.
  lxml is built on top of libxml2/libxslt, which are bundled with most
  Unix-like OS's (including Mac OS X), or available in their package
  systems. Trying to install it from the repository is a PITA, because
  it uses both the easyinstall and Pyrex (later Cython) packages - which
  aren't bundled with anything. On the other hand, if it's in the
  package system (I no longer have macports installed anywhere, but
  believe it was there at one time), that solves all those problems. I
  believe they've excised the easyinstall source dependencies, though.
 [...]
  If you just want an xml module in the standard library that's more
  complete, I'd vote for the source distribution of lxml, as that's C +
  Python and built on top of commonly available libraries. The real
  issue would be making current lxml work with the outdated versions
  of those libraries found in current OS distributions.
 
 I'm not sure what you perceive to be the problems with easy_install on 
 OSX; I find it makes life *much* simpler for managing python packages.

I don't, but the real issue is that it's been considered - and
rejected - for inclusion in the standard library multiple times. The
OP's request was for a validating XML parser in the standard
library. Any third party code that requires easy_install won't be
acceptable.

I think lxml is the best Python XML library that meets his
requirements, and it would make my life a lot easier if it were part
of the standard library. However, the authors tend to require recent
versions of libxml2 and libxslt, which means recent versions of lxml
won't build and/or work with the libraries bundled with many Unix and
Unix-like systems - including OSX. Which means you wind up having to
build those yourself if you want a recent version of lxml, even if
you're using a system that includes lxml in its package system.


 Be that as it may, since the release of lxml 2.0, the project has 
 updated the lxml website with useful information about source 
 installations and, in particular, OSX source installations:
 
 http://codespeak.net/lxml/build.html
 
 IIRC, here's what worked for me on Leopard (10.5.2) using the python.org 
 2.5.2, though it should work fine with the Apple-supplied 2.5.1:

This is similar to what I went through with 1.3.6 on Tiger, but I used
MacPorts. On Leopard, 1.3.6 builds out of the box. Just do sudo
python setup.py install and you're done. That's probably the easiest
way to get a validating xml parser on OS X at this time.

   mike
-- 
Mike Meyer [EMAIL PROTECTED]  http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.


Re: [Python-Dev] Python XML Validator

2008-03-04 Thread Mike Meyer
On Thu, 28 Feb 2008 23:42:49 + (UTC) Medhat Gayed [EMAIL PROTECTED] wrote:
 lxml is good but not written in python and difficult to install and
 didn't work on MacOS X.

lxml is built on top of libxml2/libxslt, which are bundled with most
Unix-like OS's (including Mac OS X), or available in their package
systems. Trying to install it from the repository is a PITA, because
it uses both the easyinstall and Pyrex (later Cython) packages - which
aren't bundled with anything. On the other hand, if it's in the
package system (I no longer have macports installed anywhere, but
believe it was there at one time), that solves all those problems. I
believe they've excised the easyinstall source dependencies, though.

Using lxml on OS X Tiger was problematical, because the versions of
python, libxml2 and libxslt provided with Tiger were pretty much all
older than lxml supported; I built python from macports, including
current versions of libxml2, libxslt and lxml, and everything worked
with no problems. (I later stopped working with this on the Mac
because I need cx_Oracle as well, which doesn't exist for intel macs).

On Leopard, Python is up to date, but libxml/libxslt seems a bit
behind for lxml 2.0.x (no schematron support being the obvious
problem). I went back to the 1.3.6 source tarball (which is what I'm
using everywhere anyway), and python setup.py install worked like a
charm. (So it looks like the easyinstall dependency is gone.)

Of course, the real issue here is that, while Python may come with
batteries included you only get the common sizes like A, C and D. If
you need a B cell, you're on your own. In XML land, validation is one
such case. Me, I'd like complete XPath support, and XSLT as well. The
same thing happens with other subsystems, like SSL: there's client-side
support, but not server-side (at least, not as of 2.4; I haven't checked 2.5).

If you just want an xml module in the standard library that's more
complete, I'd vote for the source distribution of lxml, as that's C +
Python and built on top of commonly available libraries. The real
issue would be making current lxml work with the outdated versions
of those libraries found in current OS distributions.

mike
-- 
Mike Meyer [EMAIL PROTECTED]  http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.


Re: [Python-Dev] [OT] Re: getpass and stdin

2008-02-26 Thread Mike Meyer
On Tue, 26 Feb 2008 15:32:03 -0500 Leif Walsh [EMAIL PROTECTED] wrote:
 On Tue, Feb 26, 2008 at 2:14 PM, Shaya Potter [EMAIL PROTECTED] wrote:
  1) I am willing to type in the password, which is obvious to anyone who
  can read a simple script.  That just doesn't work for a program you want
  to run in the background to type it in every time.
 
 I recommend you just hack on this getmail program and give it a daemon
 mode.  That shouldn't be too large of a task, and it will certainly be
 more secure (and you can even commit your changes as a new feature!).
 Otherwise, your best bet is probably, as Charles said, making the
 passfile work for you (maybe play with nfs and see if you can get it
 to hide things...I'm no wizard with it, but I'm willing to bet it's
 possible).

Actually, the easiest thing is probably to use a file that's not
really a file, like /dev/stdin or a process substitution such as <(cat -).

  mike

-- 
Mike Meyer [EMAIL PROTECTED]  http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.


Re: [Python-Dev] [Python-checkins] r60919 - peps/trunk/pep-0008.txt

2008-02-23 Thread Mike Meyer
On Fri, 22 Feb 2008 18:53:48 -0500 Barry Warsaw [EMAIL PROTECTED] wrote:
 In 50 years, our grandchildren will be writing code with brain
 implants and displays burned right into their retina, and they'll
 /still/ be subject to 79 characters.  I laugh every time I think that
 they'll have no idea why it is, but they'll still be unable to change
 it. :)

There are reasons (other than antiquated media formats) for a coding
standard to mandate a line length of 70-80 characters: to improve the
readability of the source code. Depending on who (and how) you ask,
the most comfortable line length for people to read will vary some,
but 70 characters is close to the maximum you'll get, because it gets
harder to track back across the page as lines get longer. While code
is so ragged that that's probably not as much of a problem, comments
aren't. Formatting those to a max of ~70 characters makes them easy to
read. Formatting your program so that block comments and code are
about the same makes optimal use of the display space.

Whether the readability issue has anything to do with why 80 column
cards dominated the industry (some were as short as 24 columns, and I
could swear I saw a reference to 120-column cards *somewhere*) is left
as an exercise for the reader.

 mike
-- 
Mike Meyer [EMAIL PROTECTED]  http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com