CrunchyFrog 0.3.0 released

2008-10-24 Thread Andi Albrecht
I'm pleased to announce CrunchyFrog 0.3.0.

CrunchyFrog is a database front-end for GNOME. Skip down for more information.

Download: http://crunchyfrog.googlecode.com/files/crunchyfrog-0.3.0.tar.gz


Changes in 0.3.0


New Features

 * Support for GNOME keyring.
 * Support for multiple statements in a SQL editor.
 * UI cleanups.

Bug Fixes
 * Removed obsolete dependencies (gtksourceview1, gdl).
 * Connection chooser no longer becomes insensitive when an editor is closed.
 * Performance improvements.

Translations

 * Danish
 * Dutch
 * French
 * German
 * Hebrew
 * Indonesian
 * Italian
 * Spanish
 * Swedish
 * Turkish

Thanks to all Launchpad contributors!

Complete change log: http://crunchyfrog.googlecode.com/svn/trunk/CHANGES


What is CrunchyFrog
===================

CrunchyFrog is a database navigator and query tool for GNOME.
Currently PostgreSQL, MySQL, Oracle, SQLite3, MS-SQL databases and LDAP
servers are supported for browsing and querying. More databases
and features can be added using the plugin system.
CrunchyFrog is licensed under the GPLv3 and is entirely written
in Python/PyGTK.

Homepage: http://cf.andialbrecht.de/
Screenshots: http://cf.andialbrecht.de/screenshots.html
Download: http://cf.andialbrecht.de/download.html

Development: http://crunchyfrog.googlecode.com/
Discussions: http://groups.google.com/group/crunchyfrog
Issues/Bugs: http://code.google.com/p/crunchyfrog/issues/list


Regards,

Andi
--
http://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations.html


Re: How to examine the inheritance of a class?

2008-10-24 Thread Steven D'Aprano
On Thu, 23 Oct 2008 20:53:03 -0700, John Ladasky wrote:

 On Oct 23, 6:59 pm, James Mills [EMAIL PROTECTED] wrote:
 
 Developer. NOT User.
 
 For the foreseeable future, this program is for my use only.  So the
 developer and the user are one and the same.
 
 And, thank you, __bases__ is what I was looking for.  Though Chris Mills
 also pointed out that isinstance() handles the type checking nicely.

issubclass() may be better than directly looking at __bases__.

This may not matter if you're only writing for yourself, but beware that 
the use of issubclass() and isinstance() will wreck duck-typing and 
prevent such things as delegation from working correctly.
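For illustration (a sketch added here, not part of the original exchange), the three approaches mentioned in this thread look like this:

```python
class Base(object):
    pass

class Child(Base):
    pass

# __bases__ lists only the direct base classes
assert Child.__bases__ == (Base,)

# issubclass() follows the whole inheritance chain
assert issubclass(Child, Base)

# isinstance() checks an instance against a class (or any subclass)
assert isinstance(Child(), Base)
```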



-- 
Steven

--
http://mail.python.org/mailman/listinfo/python-list


Re: Will Python 3 be stackless?

2008-10-24 Thread davy zhang
multiprocessing is good enough for now,

On Fri, Oct 24, 2008 at 4:30 AM, Diez B. Roggisch [EMAIL PROTECTED] wrote:
 Phillip B Oldham schrieb:

 On Thu, Oct 23, 2008 at 9:20 PM, Chris Rebert [EMAIL PROTECTED] wrote:

 No, it will definitely not.

 From your statement (and I'm terribly sorry if I've taken it out of

 context) it would seem that such features are frowned-upon. Is this
 correct? And if so, why?

 You got the wrong impression. It's not frowned upon. It just is a lot of
 extra effort to implement & thus makes the development of normal features
 more complex.

 Diez
 --
 http://mail.python.org/mailman/listinfo/python-list

--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread greg

Andy wrote:


1) Independent interpreters (this is the easier one--and solved, in
principle anyway, by PEP 3121, by Martin v. Löwis


Something like that is necessary for independent interpreters,
but not sufficient. There are also all the built-in constants
and type objects to consider. Most of these are statically
allocated at the moment.


2) Barriers to free threading.  As Jesse describes, this is simply
just the GIL being in place, but of course it's there for a reason.
It's there because (1) doesn't hold and there was never any specs/
guidance put forward about what should and shouldn't be done in multi-
threaded apps


No, it's there because it's necessary for acceptable performance
when multiple threads are running in one interpreter. Independent
interpreters wouldn't mean the absence of a GIL; it would only
mean each interpreter having its own GIL.

--
Greg
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread Martin v. Löwis
 You seem confused.  PEP 3121 is for isolated interpreters (ie emulated
 processes), not threading.

Just a small remark: this wasn't the primary objective of the PEP.
The primary objective was to support module cleanup in a reliable
manner, to allow eventually to get modules garbage-collected properly.
However, I also kept the isolated interpreters feature in mind there.

Regards,
Martin
--
http://mail.python.org/mailman/listinfo/python-list


Re: python3 - the hardest hello world ever ?

2008-10-24 Thread henning . vonbargen
 Many thanks, it works when setting the LANG environment variable.

BTW:
For Windows users, when running Python command-line programs,
you can also modify the properties of the cmd.exe window and
tell windows to use the TT Lucida Console font instead of the raster
font.

Then, before starting the Python program, do a
CHCP 1252
This way the sys.stdout.encoding will be cp1252
(tested with Python 2.4.3 and 2.5.1).
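As a quick check (a sketch, not from the original post), the encoding the console ends up with can be inspected from Python before printing anything non-ASCII:

```python
import sys

# The encoding Python uses for console output; on Windows this follows the
# active code page (e.g. cp1252 after "CHCP 1252"), elsewhere the locale.
# It may be None when stdout is not a real terminal.
print(sys.stdout.encoding)
```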
--
http://mail.python.org/mailman/listinfo/python-list


Re: Logger / I get all messages 2 times

2008-10-24 Thread ASh
On Oct 23, 5:10 pm, Diez B. Roggisch [EMAIL PROTECTED] wrote:
 ASh wrote:
  Hi,

  I have this source:

  import logging
  import logging.config

  logging.config.fileConfig("logging.properties")
  log = logging.getLogger("qname")
  log.debug("message")

  --- OUTPUT
  DEBUG logger_test:8:  message
  DEBUG logger_test:8:  message

  --- FILE CONFIG
  [formatters]
  keys: detailed

  [handlers]
  keys: console

  [loggers]
  keys: root, engine

  [formatter_detailed]
  format: %(levelname)s %(module)s:%(lineno)d:  %(message)s

  [handler_console]
  class: StreamHandler
  args: []
  formatter: detailed

  [logger_root]
  level: ERROR
  handlers: console

  [logger_engine]
  level: DEBUG
  qualname: qname
  handlers: console

  ---

  Why do I get the log 2 times?

 Because you add the handler console two times, to logger_engine and
 logger_root. You should only add it to root, or set propagate to false.

 Diez

What if I want to output only the specific logger to console and
ignore every other?
--
http://mail.python.org/mailman/listinfo/python-list


Re: Question about scope

2008-10-24 Thread Bruno Desthuilliers

Pat a écrit :
(snip)


Stripping out the extra variables and definitions, this is all that 
there is.

Whether or not this technique is *correct* programming is irrelevant.


It's obviously relevant. If it was correct, it would work, and you 
wouldn't be asking here !-)


I 
simply want to know why scoping doesn't work like I thought it would.



--- myGlobals.py file:

class myGlobals():
remote_device_enabled = bool


<irrelevant>
You're using the class as a bare namespace. FWIW, you could as well use 
the module itself - same effect, simplest code.

</irrelevant>


--- my initialize.py file:

from myGlobals import *
def initialize():
myGlobals.remote_device_enabled = True




--- my main.py file:

import from myGlobals import *


I assume the first import is a typo. But this sure means you didn't 
run that code.



RDE =  myGlobals.remote_device_enabled

def main():
if RDE:# this will not give me the correct value


For which definition of correct value ? You didn't import nor execute 
initialize() so far, so at this stage RDE is bound to the bool type 
object. FWIW, note that calling initialize() *after* the assignment to 
RDE won't change the fact that RDE will still be bound to the bool 
type object.
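To make Bruno's point concrete (a sketch added for clarity; the class name mirrors the OP's, the rest is made up): assignment copies the *current* binding, so rebinding the attribute later is not seen through the copied name.

```python
class MyGlobals(object):
    # stand-in for the myGlobals class in the original post
    remote_device_enabled = bool   # placeholder value, as in the OP

RDE = MyGlobals.remote_device_enabled   # copies the *current* binding (the bool type)

MyGlobals.remote_device_enabled = True  # what initialize() does, but too late for RDE

assert RDE is bool                      # RDE still refers to the old value
assert MyGlobals.remote_device_enabled is True
```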


<irrelevant>
You may want to have a look at how other Python applications manage 
application-wide settings.

</irrelevant>
--
http://mail.python.org/mailman/listinfo/python-list


Re: File Upload Size

2008-10-24 Thread rodmc
On Oct 13, 11:55 am, Diez B. Roggisch [EMAIL PROTECTED] wrote:
 rodmc wrote:
  On 13 Oct, 00:10, Mike Driscoll [EMAIL PROTECTED] wrote:
  On Oct 12, 9:34 am, rodmc [EMAIL PROTECTED] wrote:

   Hi,

   Is there a way to get the size of a file on a remote machine before it
   is uploaded? I would like to write some form of status counter which
   is updated as a fie is uploaded, and also to use this feature to
   prevent files which are too big from being uploaded.

   Best,

   rod

  Looks like ftplib does that. Check the
  docs:http://www.python.org/doc/2.5.2/lib/module-ftplib.html

  Mike

  Hi Mike,

  Thanks for this information I will look at it. The only condition is
  that everything must run via a webpage.

 Which is crucial information and rules out Mike's suggestion.

 And the answer is: no, you can't access file-attributes on remote machines.
 HTTP does require a content-length header though. If that exceeds a certain
 size, you can terminate the connection.

 You need to do that also if the client actually pushes more data than
 announced.

 And progress-counting can be done by counting the already arrived data & 
 making e.g. an Ajax-Call to fetch that from the server.

 Diez

Thanks. I have basic file uploading working, however is there a limit
to what can be uploaded via a form? It works perfectly for up to around
20MB then breaks. Also how do I retrieve the content-length header? I
am quite new to HTTP programming so sorry for the naive question.

Best,

rod
--
http://mail.python.org/mailman/listinfo/python-list


@property decorator doesn't raise exceptions

2008-10-24 Thread Rafe
Hi,

I've encountered a problem which is making debugging less obvious than
it should be. The @property decorator doesn't always raise exceptions.
It seems like it is bound to the class but ignored when called. I can
see the attribute using dir(self.__class__) on an instance, but when
called, python enters __getattr__. If I correct the bug, the attribute
calls work as expected and do not call __getattr__.

I can't seem to make a simple repro. Can anyone offer any clues as to
what might cause this so I can try to prove it?


Cheers,

- Rafe
--
http://mail.python.org/mailman/listinfo/python-list


dictionary

2008-10-24 Thread asit
what is wrong with the following code?

>>> d={"server":"mpilgrim", "database":"master",
... "uid":"sa",
... "pwd":"secret"}

>>> d
{'pwd': 'secret', 'database': 'master', 'uid': 'sa', 'server':
'mpilgrim'}

>>> ["%s"=%s" % (k,v) for k,v in d.items()]
  File "<stdin>", line 1
    ["%s"=%s" % (k,v) for k,v in d.items()]
            ^
SyntaxError: EOL while scanning single-quoted string
--
http://mail.python.org/mailman/listinfo/python-list


Re: dictionary

2008-10-24 Thread Duncan Booth
asit [EMAIL PROTECTED] wrote:

  ["%s"=%s" % (k,v) for k,v in d.items()]

The first " opens a string, the second " terminates it, the third " opens 
it again, and you don't have a fourth " in your line to close it.

Try using an editor which supports syntax colouring (even Idle does this) 
and the problem will be instantly apparent.
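For reference (added here, not in the original reply), the comprehension with balanced quotes:

```python
# The dict from the original post, with the string quoting restored
d = {"server": "mpilgrim", "database": "master", "uid": "sa", "pwd": "secret"}

# One "key=value" string per item; exactly two quotation marks around the format
pairs = ["%s=%s" % (k, v) for k, v in d.items()]

assert sorted(pairs) == ["database=master", "pwd=secret",
                         "server=mpilgrim", "uid=sa"]
```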

-- 
Duncan Booth http://kupuguy.blogspot.com
--
http://mail.python.org/mailman/listinfo/python-list


Re: substitution __str__ method of an instance

2008-10-24 Thread Duncan Booth
Steven D'Aprano [EMAIL PROTECTED] wrote:

 However, you can dispatch back to the instance if you really must:
 
 
 class MyObj(object):
     def __init__(self):
         self.__str__ = lambda self: "I'm an object!"
     def __str__(self):
         return self.__str__(self)
 
 
 But honestly, this sounds like a bad idea. If instances of the one 
class 
 have such radically different methods that they need to be treated 
like 
 this, I question whether they actually belong in the same class.
 

Another option would be to just change the class of the object:

>>> class C(object):
...     pass

>>> c = C()
>>> print c
<__main__.C object at 0x01180C70>
>>> def wrapstr(instance, fn=None):
...     if fn is None:
...         def fn(self): return "I'm an object"
...     Wrapper = type(instance.__class__.__name__, (instance.__class__,),
...         {'__str__':fn})
...     instance.__class__ = Wrapper


>>> wrapstr(c)
>>> print c
I'm an object
>>> isinstance(c, C)
True
>>> type(c)
<class '__main__.C'>
>>> wrapstr(c, lambda s: "object %s at %s" % (type(s).__name__, id(s)))
>>> print c
object C at 18353264

(I'll leave enhancing wrapstr so that it avoids multiple levels of 
wrapping as an exercise for anyone who actually wants to use it.)

-- 
Duncan Booth http://kupuguy.blogspot.com
--
http://mail.python.org/mailman/listinfo/python-list


Re: dictionary

2008-10-24 Thread Tim Chase

["%s"=%s" % (k,v) for k,v in d.items()]

  File "<stdin>", line 1
    ["%s"=%s" % (k,v) for k,v in d.items()]
            ^
SyntaxError: EOL while scanning single-quoted string


You have three quotation marks...  you want

  "%s=%s"

not

  "%s"=%s"

-tkc



--
http://mail.python.org/mailman/listinfo/python-list


Re: Building truth tables

2008-10-24 Thread andrea
On 26 Set, 20:01, Aaron \Castironpi\ Brady [EMAIL PROTECTED]
wrote:

 Good idea.  If you want prefixed operators: 'and( a, b )' instead of
 'a and b', you'll have to write your own.  ('operator.and_' is bitwise
 only.)  It may be confusing to mix prefix with infix: 'impl( a and b,
 c )', so you may want to keep everything prefix, but you can still use
 table( f, n ) like Tim said.

After a while I'm back, thanks a lot, the truth table creator works,
now I just want to parse some strings to make it easier to use.

Like

(P \/ Q) -> S == S

Must return a truth table with 2^3 lines...
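The enumeration behind such a table can be sketched with the standard library (an illustration added here, not andrea's actual code; the lambda encodes (P \/ Q) -> S):

```python
from itertools import product

def truth_table(variables, expr):
    """Enumerate all 2**n assignments; expr maps a {name: bool} dict to a bool."""
    rows = []
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        rows.append((env, expr(env)))
    return rows

# (P \/ Q) -> S, i.e. (P or Q) implies S
table = truth_table(["P", "Q", "S"],
                    lambda e: (not (e["P"] or e["Q"])) or e["S"])
assert len(table) == 2 ** 3
```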

I'm using pyparsing and this should be really simple, but it doesn't
allow me to recurse and that leaves me stuck.
The grammar BNF is:

Var :: = [A..Z]
Exp ::= Var | !Exp | Exp \/ Exp | Exp -> Exp | Exp /\ Exp | Exp == Exp

I tried different ways but I don't find a smart way to get from the
recursive bnf grammar to the implementation in pyparsing...
Any hint?
--
http://mail.python.org/mailman/listinfo/python-list


Re: File Upload Size

2008-10-24 Thread Diez B. Roggisch

Thanks. I have basic file uploading working, however is there a limit
to what can be uploaded via form? It works perfectly for up to around
20MB then breaks.



There is no limit, but the larger the upload, the larger the chance of a 
failure. I'm currently not exactly sure if there is a way to overcome 
this with a continuous upload scheme for browsers - maybe google helps.



Also how do I retrieve the content-length header? I
am quite new to HTTP programming so sorry for the naive question.



That depends on your HTTP-framework/libraries of choice.
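As a hedged illustration (assuming a CGI/WSGI-style environ dict; the helper name and the 20MB cap are made up to match the thread), checking the declared Content-Length before accepting an upload might look like:

```python
def check_upload(environ, max_bytes=20 * 1024 * 1024):
    """Reject uploads whose declared Content-Length is missing, bogus,
    or larger than max_bytes. The client may still lie, so the server
    must also stop reading once max_bytes have actually arrived."""
    try:
        length = int(environ.get("CONTENT_LENGTH") or 0)
    except ValueError:
        return False
    return 0 < length <= max_bytes

assert check_upload({"CONTENT_LENGTH": "1024"})
assert not check_upload({"CONTENT_LENGTH": str(30 * 1024 * 1024)})
assert not check_upload({})  # no declared length at all
```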

Diez
--
http://mail.python.org/mailman/listinfo/python-list


Re: @property decorator doesn't raise exceptions

2008-10-24 Thread Christian Heimes

Rafe wrote:

Hi,

I've encountered a problem which is making debugging less obvious than
it should be. The @property decorator doesn't always raise exceptions.
It seems like it is bound to the class but ignored when called. I can
see the attribute using dir(self.__class__) on an instance, but when
called, python enters __getattr__. If I correct the bug, the attribute
calls work as expected and do not call __getattr__.

I can't seem to make a simple repro. Can anyone offer any clues as to
what might cause this so I can try to prove it?


You must subclass from object to get a new-style class. Properties 
don't work correctly on old-style classes.
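One plausible repro (an assumption added here, not necessarily the OP's exact bug): even on a new-style class, an AttributeError raised *inside* a property getter is caught by the attribute machinery, which then silently falls back to __getattr__, hiding the real exception.

```python
class Widget(object):
    @property
    def name(self):
        # A genuine bug in the getter...
        raise AttributeError("real error hidden by __getattr__")

    def __getattr__(self, attr):
        # ...ends up here instead of propagating to the caller
        return "<fallback:%s>" % attr

w = Widget()
# The property exists on the class, yet __getattr__ answers the lookup:
assert w.name == "<fallback:name>"
```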


Christian

--
http://mail.python.org/mailman/listinfo/python-list


Re: look-behind fixed width issue (package re)

2008-10-24 Thread MRAB
On Oct 24, 6:29 am, Peng Yu [EMAIL PROTECTED] wrote:
 Hi,

 It seems that the current python requires a fixed-width pattern for look-
 behind. I'm wondering if there is any new development which makes
 variable-width patterns available for look-behind.

The re module is currently being worked on, but unfortunately it won't
appear until Python 2.7. Variable-width look-behind is one of the
improvements.
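Until then, a common workaround (a sketch added here, not from the original post) is to match the variable-width prefix normally and capture what the look-behind would have isolated:

```python
import re

# A variable-width look-behind such as (?<=foo|foobar) is rejected today:
try:
    re.compile(r"(?<=foo|foobar)\d+")
except re.error:
    pass  # "look-behind requires fixed-width pattern"

# Workaround: consume the prefix with a non-capturing group and
# capture the part you actually wanted.
m = re.search(r"(?:foo|foobar)(\d+)", "xx foobar42")
assert m.group(1) == "42"
```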
--
http://mail.python.org/mailman/listinfo/python-list


Re: regexp in Python (from Perl)

2008-10-24 Thread Pat

Bruno Desthuilliers wrote:

MRAB a écrit :

On Oct 19, 5:47 pm, Bruno Desthuilliers
[EMAIL PROTECTED] wrote:

Pat a écrit :

(snip)

ip = ip[ :-1 ]
ip += '9'

or:

ip = ip[:-1]+"9"


(snip)

>>> re.sub(r'^(((\d+)\.){3})\d+$', "\g<1>9", "192.168.1.1")
'192.168.1.9'

>>> re.sub(r'^(((\d+)\.){3})\d+$', "\g<1>9", "192.168.1.100")
'192.168.1.9'


The regular expression changes the last sequence of digits to
9 (192.168.1.100 => 192.168.1.9) but the other code replaces the
last digit (192.168.1.100 => 192.168.1.109).


Mmm - yes, true.

ip = ".".join(ip.split('.')[0:3] + ['9'])


As I first stated, in my very particular case, I knew that the last 
octet was always going to be a single digit.


But I did learn a lot from everyone else's posts for the more generic 
cases.  thx!

--
http://mail.python.org/mailman/listinfo/python-list


Re: dictionary

2008-10-24 Thread asit
On Oct 24, 3:06 pm, Tim Chase [EMAIL PROTECTED] wrote:
  ["%s"=%s" % (k,v) for k,v in d.items()]
    File "<stdin>", line 1
      ["%s"=%s" % (k,v) for k,v in d.items()]
              ^
  SyntaxError: EOL while scanning single-quoted string

 You have three quotation marks...  you want

    "%s=%s"

 not

    "%s"=%s"

 -tkc

Thanx
--
http://mail.python.org/mailman/listinfo/python-list


Re: Need some advice

2008-10-24 Thread Larry Bates

alex23 wrote:

On Oct 23, 3:15 pm, Larry Bates [EMAIL PROTECTED] wrote:

Bruno is correct, the protocol IS https; you don't type shttp into your browser
to get a secure http connection.


https[1] and shttp[2] are two entirely different protocols.

[1] http://en.wikipedia.org/wiki/Https
[2] http://en.wikipedia.org/wiki/Secure_hypertext_transfer_protocol


Ok, I stand corrected on shttp.  I've been a programmer for over 30 years and 
have been using python and web development for about 7 years and I've never seen 
any reference to it until now.  IMHO it wouldn't be a good idea to implement any 
production code utilizing such an obscure protocol when https is available.  You 
asked for advice in the Subject of OP, that's my advice.


-Larry
--
http://mail.python.org/mailman/listinfo/python-list


Re: look-behind fixed width issue (package re)

2008-10-24 Thread Gerhard Häring

MRAB wrote:

On Oct 24, 6:29 am, Peng Yu [EMAIL PROTECTED] wrote:

Hi,

It seems that the current python requires a fixed-width pattern for look-
behind. I'm wondering if there is any new development which makes
variable-width patterns available for look-behind.


The re module is currently being worked on, but unfortunately it won't
appear until Python 2.7. Variable-width look-behind is one of the
improvements.


Most probably a backport to Python 2.6 or even 2.5 under a different 
module name like re_ng wouldn't be too difficult to do for anybody that 
needs the new functionality and knows a bit about building extension 
modules.


-- Gerhard

--
http://mail.python.org/mailman/listinfo/python-list


Re: dictionary

2008-10-24 Thread Steven D'Aprano
On Fri, 24 Oct 2008 10:04:32 +, Duncan Booth wrote:

 asit [EMAIL PROTECTED] wrote:
 
  ["%s"=%s" % (k,v) for k,v in d.items()]
 
 The first " opens a string, the second " terminates it, the third "
 opens it again, and you don't have a fourth " in your line to close it.
 
 Try using an editor which supports syntax colouring (even Idle does
 this) and the problem will be instantly apparent.

Or just read the exception, which explained exactly what's wrong:

EOL while scanning single-quoted string


What are programmers coming to these days? When I was their age, we were 
expected to *read* the error messages our compilers gave us, not turn to 
the Interwebs for help as soon as there was the tiniest problem.


-- 
Steven
who is having a "you damn kids get off my lawn" moment...
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread sturlamolden

Instead of appdomains (one interpreter per thread), or free
threading, you could use multiple processes. Take a look at the new
multiprocessing module in Python 2.6. It has roughly the same
interface as Python's threading and queue modules, but uses processes
instead of threads. Processes are scheduled independently by the
operating system. The objects in the multiprocessing module also tend
to have much better performance than their threading and queue
counterparts. If you have a problem with threads due to the GIL, the
multiprocessing module will most likely take care of it.
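A minimal sketch of that interface (illustrative only; the function and variable names are made up):

```python
from multiprocessing import Pool

def square(n):
    # A plain module-level function, so it can be pickled
    # and shipped to the worker processes.
    return n * n

if __name__ == "__main__":
    pool = Pool(processes=2)   # two worker processes, each with its own GIL
    try:
        results = pool.map(square, range(5))
    finally:
        pool.close()
        pool.join()
    assert results == [0, 1, 4, 9, 16]
```

The `if __name__ == "__main__"` guard matters: on platforms that spawn rather than fork, the child re-imports the main module, and an unguarded Pool would recurse.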

There is a fundamental problem with using homebrew loading of multiple
(but renamed) copies of PythonXX.dll that is easily overlooked. That
is, extension modules (.pyd) are DLLs as well. Even if required by two
interpreters, they will only be loaded into the process image once.
Thus you have to rename all of them as well, or you will get havoc
with refcounts. Not to speak of what will happen if a Windows HANDLE
is closed by one interpreter while still needed by another. It is
almost guaranteed to bite you, sooner or later.

There are other options as well:

- Use IronPython. It does not have a GIL.

- Use Jython. It does not have a GIL.

- Use pywin32 to create isolated outproc COM servers in Python. (I'm
not sure what the effect of inproc servers would be.)

- Use os.fork() if your platform supports it (Linux, Unix, Apple,
Cygwin, Windows Vista SUA). This is the standard posix way of doing
multiprocessing. It is almost unbeatable if you have a fast copy-on-
write implementation of fork (that is, all platforms except Cygwin).

--
http://mail.python.org/mailman/listinfo/python-list


using modules in destructors

2008-10-24 Thread [EMAIL PROTECTED]
Hi

i have a class that makes temp folders to do work in. it keeps
track of them, so that in the __del__() it can clean them up. ideally
if the user of the module still has objects left at the end of their
program, they should be automatically cleaned up. in my destructor i
had a call to shutil.rmtree (which had been imported at the start of
my module), however when the destructor is called shutil has been
set to None.

i have made a minimal case to reproduce

#!/usr/bin/env python
import shutil
from math import *

class Foo(object):
def __init__(self):
print shutil
def __del__(self):
print shutil

if __name__ == '__main__':
print shutil
a = Foo()

this outputs
<module 'shutil' from '/usr/lib/python2.5/shutil.pyc'>
<module 'shutil' from '/usr/lib/python2.5/shutil.pyc'>
None

the odd thing is that if i remove the line "from math import *" then i
get the output
<module 'shutil' from '/usr/lib/python2.5/shutil.pyc'>
<module 'shutil' from '/usr/lib/python2.5/shutil.pyc'>
<module 'shutil' from '/usr/lib/python2.5/shutil.pyc'>

This seems inconsistent, and makes me wonder if it is a bug in the
interpreter.

As an ugly work around i have found that i can keep a reference to
shutil in the class.

class Foo(object):
def __init__(self):
self.shutil = shutil
print self.shutil
def __del__(self):
print shutil
print self.shutil

But given the difference an import statement can make, i am not sure
this is robust.

I have been working with Python 2.5.2 (r252:60911, Oct  5 2008,
19:24:49) from ubuntu intrepid.

(if google groups does bad things to the code formating, please see
http://ubuntuforums.org/showthread.php?p=6024623 )

Thanks

Sam
--
http://mail.python.org/mailman/listinfo/python-list


Re: File Upload Size

2008-10-24 Thread rodmc
Hi Diez,

Thanks, I will look on Google again, to date though all examples I
have used come up against similar problems. As for HTTP framework and
libraries, I will see what is currently supported. At present I am
using standard Python libraries.

Best,

rod

--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread Andy O'Meara
On Oct 24, 9:35 am, sturlamolden [EMAIL PROTECTED] wrote:
 Instead of appdomains (one interpreter per thread), or free
 threading, you could use multiple processes. Take a look at the new
 multiprocessing module in Python 2.6.

That's mentioned earlier in the thread.


 There is a fundamental problem with using homebrew loading of multiple
 (but renamed) copies of PythonXX.dll that is easily overlooked. That
 is, extension modules (.pyd) are DLLs as well.

Tell me about it--there's all kinds of problems and maintenance
liabilities with our approach.  That's why I'm here talking about this
stuff.

 There are other options as well:

 - Use IronPython. It does not have a GIL.

 - Use Jython. It does not have a GIL.

 - Use pywin32 to create isolated outproc COM servers in Python. (I'm
 not sure what the effect of inproc servers would be.)

 - Use os.fork() if your platform supports it (Linux, Unix, Apple,
 Cygwin, Windows Vista SUA). This is the standard posix way of doing
 multiprocessing. It is almost unbeatable if you have a fast copy-on-
 write implementation of fork (that is, all platforms except Cygwin).

This is discussed earlier in the thread--they're unfortunately all
out.

--
http://mail.python.org/mailman/listinfo/python-list


Re: using modules in destructors

2008-10-24 Thread Michele Simionato
This is expected behavior (see http://www.python.org/doc/essays/cleanup)
but it is definitely a wart of Python. The best advice I can give you
is *never* use __del__. There are alternatives,
such as the with statement, weak references or atexit.
See for instance  http://code.activestate.com/recipes/523007/
If you Google in this newsgroup for __del__ you will find a lot of
discussion.
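A small sketch of the atexit alternative for the temp-folder case (illustrative, not the OP's code; the helper name is made up):

```python
import atexit
import os
import shutil
import tempfile

def make_workdir():
    # Create a temp folder and schedule its removal at interpreter exit,
    # instead of relying on __del__ (where module globals such as shutil
    # may already have been set to None during shutdown).
    path = tempfile.mkdtemp(prefix="work-")
    atexit.register(shutil.rmtree, path, ignore_errors=True)
    return path

workdir = make_workdir()
assert os.path.isdir(workdir)   # exists now; removed automatically at exit
```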

  Michele Simionato
--
http://mail.python.org/mailman/listinfo/python-list


Re: Logger / I get all messages 2 times

2008-10-24 Thread Vinay Sajip
On Oct 24, 8:28 am, ASh [EMAIL PROTECTED] wrote:
 On Oct 23, 5:10 pm, Diez B. Roggisch [EMAIL PROTECTED] wrote:



  ASh wrote:
   Hi,

   I have this source:

   import logging
   import logging.config

   logging.config.fileConfig("logging.properties")
   log = logging.getLogger("qname")
   log.debug("message")

   --- OUTPUT
   DEBUG logger_test:8:  message
   DEBUG logger_test:8:  message

   --- FILE CONFIG
   [formatters]
   keys: detailed

   [handlers]
   keys: console

   [loggers]
   keys: root, engine

   [formatter_detailed]
   format: %(levelname)s %(module)s:%(lineno)d:  %(message)s

   [handler_console]
   class: StreamHandler
   args: []
   formatter: detailed

   [logger_root]
   level: ERROR
   handlers: console

   [logger_engine]
   level: DEBUG
   qualname: qname
   handlers: console

   ---

   Why do I get the log 2 times?

  Because you add the handler console two times, to logger_engine and
  logger_root. You should only add it to root, or set propagate to false.

  Diez

 What if I want to output only the specific logger to console and
 ignore every other?

Just add the handler to the logger you want and to no other logger.
Then, events logged with the logger you want (and its child loggers)
will be sent to the console. If you don't want the child loggers to
log to console, set their propagate value to 0.
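A programmatic sketch of that advice (illustrative only; the list-collecting handler is just there to count emissions):

```python
import logging

records = []

class ListHandler(logging.Handler):
    # Collects formatted messages so we can count how often one record is emitted
    def emit(self, record):
        records.append(record.getMessage())

handler = ListHandler()
root = logging.getLogger()
root.addHandler(handler)          # root has the handler too...

engine = logging.getLogger("qname")
engine.setLevel(logging.DEBUG)
engine.addHandler(handler)
engine.propagate = False          # ...but the record no longer travels up to root

engine.debug("message")
assert records == ["message"]     # emitted once, not twice
```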

Regards,

Vinay Sajip
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread Stefan Behnel
Terry Reedy wrote:
 Everything in DLLs is compiled C extensions.  I see about 15 for Windows
 3.0.

Ah, weren't that wonderful times back in the days of Win3.0, when DLL-hell was
inhabited by only 15 libraries? *sigh*

... although ... wait, didn't Win3.0 have more than that already? Maybe you
meant Windows 1.0?

SCNR-ly,

Stefan
--
http://mail.python.org/mailman/listinfo/python-list


Re: How to get the actual address of a object

2008-10-24 Thread mujunshan
On Oct 24, 1:10 pm, James Mills [EMAIL PROTECTED]
wrote:
 On Fri, Oct 24, 2008 at 2:58 PM,  [EMAIL PROTECTED] wrote:
  maybe id(x) can get it ,but how to cast it back into a object

 You can't. Python is NOT C/C++/Java or whatever.

 If you have a variable, x, and you want to copy it
 to another variable, y. Use assignment.

 Most (if not all) objects in python are referenced.
 A lot of types are also immutable.

 Describe your problem, perhaps we may be able to
 provide you a better solution ? "Can I statically re-cast
 an object into a different type by getting the address
 of another object" is not a very good problem.

 If you're after, coercing one type into another, for example:



  >>> x = 2
  >>> y = "2"
  >>> z = int(y)
  >>> x
 2
  >>> y
 '2'
  >>> z
 2

 cheers
 James

 --
 --
 -- Problems are solved by method

Thank you,James.
My original idea was to study all the contents of any object. I can do
it using the ctypes module.
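For the record (CPython-specific and generally unsafe; a sketch added here, not a recommendation), ctypes can turn the result of id() back into the object, since in CPython id() happens to be the object's memory address:

```python
import ctypes

x = ["any", "object"]
addr = id(x)   # CPython implementation detail: id() is the memory address

# Reinterpret the address as a Python object reference.
# Only valid while x is still alive; a stale address would crash.
y = ctypes.cast(addr, ctypes.py_object).value
assert y is x
```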
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python equivalent to SharePoint?

2008-10-24 Thread ivandatasync

I have read about both Plone and Alfresco being considered as alternatives to
Sharepoint and unfortunately they may not be enough if you require
everything Sharepoint has to offer. Plone and Alfresco are both great
applications but out of the box they are too focused to be complete
replacements. Sharepoint is quite the Monolithic beast when it comes to both
features and complexity. I have done some Sharepoint development and
customization work in a former life err job and although I would not
wish it on my worst competitor, it is very 'feature' rich. Either way, IMHO,
these all-in-one, frameworks are not the way to go. They just get too large
and encourage companies to keep their eggs all in one basket. 

Although I admit, I am very biased as I work on the Datasync Suite, which is
a Sharepoint competitor. Our approach is to not re-invent the wheel but
rather integrate open source applications that are very good for select
business units under a single web based portal. The applications we
integrate are written in various languages but our Suite that ties them all
together is written entirely in Python. Which has served as a marvelous glue
language and base for application extensions. I'll stop there because I
don't want to turn my post into more of an advertisement than it has already
become, but if you're interested, the link is in my sig.

Good Luck,
Ivan Ven Osdel
http://www.datasyncsuite.com/


Joe Strout-2 wrote:
 
 We've got a client who has been planning to use SharePoint for  
 managing their organization documents, but has recently dropped that  
 idea and is looking for an alternative.  Is there any Python package  
 with similar functionality?
 
 I confess that I've never used SharePoint myself, and what I know  
 about is mainly from these sources:
 
http://en.wikipedia.org/wiki/SharePoint
http://discuss.joelonsoftware.com/default.asp?joel.3.66103.7
 
 I found a reference to CPS, but its developers have dropped the Python  
 source to rewrite it in Java.  That's disturbing, and I don't want to  
 recommend an abandoned platform.  Anything else I should consider?
 
 Thanks,
 - Joe
 
 --
 http://mail.python.org/mailman/listinfo/python-list
 
 

-- 
View this message in context: 
http://www.nabble.com/Python-equivalent-to-SharePoint--tp19995715p20151313.html
Sent from the Python - python-list mailing list archive at Nabble.com.

--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread sturlamolden
On Oct 24, 3:58 pm, Andy O'Meara [EMAIL PROTECTED] wrote:

 This is discussed earlier in the thread--they're unfortunately all
 out.

It occurs to me that tcl is doing what you want. Have you ever thought
of not using Python?

That aside, the fundamental problem is what I perceive a fundamental
design flaw in Python's C API. In Java JNI, each function takes a
JNIEnv* pointer as their first argument. There is nothing that
prevents you from embedding several JVMs in a process. Python can
create embedded subinterpreters, but it works differently. It swaps
subinterpreters like a finite state machine: only one is concurrently
active, and the GIL is shared. The approach is fine, except it kills
free threading of subinterpreters. The argument seems to be that
Apache's mod_python somehow depends on it (for reasons I don't
understand).

--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread Andy O'Meara
On Oct 24, 2:12 am, greg [EMAIL PROTECTED] wrote:
 Andy wrote:
  1) Independent interpreters (this is the easier one--and solved, in
  principle anyway, by PEP 3121, by Martin v. Löwis

 Something like that is necessary for independent interpreters,
 but not sufficient. There are also all the built-in constants
 and type objects to consider. Most of these are statically
 allocated at the moment.


Agreed--I  was just trying to speak generally.  Or, put another way,
there's no hope for independent interpreters without the likes of PEP
3121.  Also, as Martin pointed out, there's the issue of module
cleanup some guys here may underestimate (and I'm glad Martin pointed
out the importance of it).  Without the module cleanup, every time a
dynamic library using python loads and unloads you've got leaks.  This
issue is a real problem for us since our software is loaded and
unloaded many many times in a host app (iTunes, WMP, etc).  I hadn't
raised it here yet (and I don't want to turn the discussion to this),
but lack of multiple load and unload support has been another painful
issue that we didn't expect to encounter when we went with python.


  2) Barriers to free threading.  As Jesse describes, this is simply
  just the GIL being in place, but of course it's there for a reason.
  It's there because (1) doesn't hold and there was never any specs/
  guidance put forward about what should and shouldn't be done in multi-
  threaded apps

 No, it's there because it's necessary for acceptable performance
 when multiple threads are running in one interpreter. Independent
 interpreters wouldn't mean the absence of a GIL; it would only
 mean each interpreter having its own GIL.


I see what you're saying, but let's note that what you're talking
about at this point is an interpreter containing protection from the
client level violating (supposed) direction put forth in python
multithreaded guidelines.  Glenn Linderman's post really gets at
what's at hand here.  It's really important to consider that it's not
a given that python (or any framework) has to be designed against
hazardous use.  Again, I refer you to the diagrams and guidelines in
the QuickTime API:

http://developer.apple.com/technotes/tn/tn2125.html

They tell you point-blank what you can and can't do, and it's that
simple.  Their engineers can then simply create the implementation
around those specs and not weigh any of the implementation down with
sync mechanisms.  I'm in the camp that simplicity and convention wins
the day when it comes to an API.  It's safe to say that software
engineers expect and assume that a thread that doesn't have contact
with other threads (except for explicit, controlled message/object
passing) will run unhindered and safely, so I raise an eyebrow at the
GIL (or any internal helper sync stuff) holding up a thread's
performance when the app is designed to not need lower-level global
locks.

Anyway, let's talk about solutions.  My company is looking to support a
python dev community endeavor that allows the following:

- an app makes N worker threads (using the OS)

- each worker thread makes its own interpreter, pops scripts off a
work queue, and manages exporting (and then importing) result data to
other parts of the app.  Generally, we're talking about CPU-bound work
here.

- each interpreter has the essentials (e.g. math support, string
support, re support, and so on -- I realize this is open-ended, but
work with me here).

Let's guesstimate about what kind of work we're talking about here and
if this is even in the realm of possibility.  If we find that it *is*
possible, let's figure out what level of work we're talking about.
From there, I can get serious about writing up a PEP/spec, paid
support, and so on.
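
For concreteness, the worker/queue structure sketched above can be mocked up today with the stdlib. This is only an illustrative toy under stated assumptions: in stock CPython these threads still share one GIL, so CPU-bound jobs do not actually run in parallel, and the "private interpreter per worker" part does not exist — that is exactly the gap being discussed.

```python
import queue
import threading

def worker(tasks, results):
    # Each worker pops jobs off a shared queue and pushes results
    # back.  In the design described above, each worker would also
    # own its own interpreter; in stock CPython all threads share
    # one GIL, so this only models the control flow, not the
    # parallelism.
    while True:
        job = tasks.get()
        if job is None:          # sentinel: shut this worker down
            break
        results.put(sum(i * i for i in range(job)))

tasks, results = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=worker, args=(tasks, results))
           for _ in range(4)]
for t in workers:
    t.start()
for job in (10, 100, 1000):      # stand-ins for "scripts" on the work queue
    tasks.put(job)
for _ in workers:
    tasks.put(None)              # one sentinel per worker
for t in workers:
    t.join()
print(sorted(results.queue))     # [285, 328350, 332833500]
```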

Regards,
Andy





--
http://mail.python.org/mailman/listinfo/python-list


Re: look-behind fixed width issue (package re)

2008-10-24 Thread Peng Yu
 Most probably a backport to Python 2.6 or even 2.5 under a different
 module name like re_ng wouldn't be too difficult to do for anybody that
 needs the new functionality and knows a bit about building extension
 modules.

I did a Google search, but I can't find any document that describes
it. Does it have almost the same functionality as the re package that
will be in Python 2.7? Where is the documentation for it?

If I just need variable width look-behind, I just replace re with
re_ng on python 2.5 or 2.6, right? Is re_ng available on python 2.4?

Thanks,
Peng
--
http://mail.python.org/mailman/listinfo/python-list


Re: More efficient array processing

2008-10-24 Thread sturlamolden
On Oct 23, 8:11 pm, John [H2O] [EMAIL PROTECTED] wrote:

 datagrid = numpy.zeros(360,180,3,73,20)

On a 32 bit system, try this instead:

datagrid = numpy.zeros((360,180,3,73,20), dtype=numpy.float32)

(if you can use single precision, that is.)
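
The reason the original call is doomed on a 32-bit system is sheer size; a quick back-of-the-envelope check (pure Python, no numpy needed):

```python
shape = (360, 180, 3, 73, 20)
n = 1
for dim in shape:
    n *= dim                    # 283,824,000 elements

gb_float64 = n * 8 / 2**30      # numpy's default dtype is float64
gb_float32 = n * 4 / 2**30      # half the footprint
print(n, round(gb_float64, 2), round(gb_float32, 2))
# ~2.11 GiB as float64 vs ~1.06 GiB as float32 -- and the former
# cannot even be addressed as one contiguous block by a 32-bit process.
```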












--
http://mail.python.org/mailman/listinfo/python-list


Re: dictionary

2008-10-24 Thread Peter Pearson
On 24 Oct 2008 13:17:45 GMT, Steven D'Aprano wrote:

 What are programmers coming to these days? When I was their age, we were 
 expected to *read* the error messages our compilers gave us, not turn to 
 the Interwebs for help as soon as there was the tiniest problem.

Yes, and what's more, the text of the error message was
IEH208.  After reading it several times, one looked it up
in a big fat set of books, where one found the explanation:

  IEH208: Your program contains an error.
  Correct the error and resubmit your job.

An excellent system for purging the world of the weak and
timid.

-- 
To email me, substitute nowhere-spamcop, invalid-net.
--
http://mail.python.org/mailman/listinfo/python-list


Re: look-behind fixed width issue (package re)

2008-10-24 Thread Steven D'Aprano
On Fri, 24 Oct 2008 07:43:16 -0700, Peng Yu wrote:

 Most probably a backport to Python 2.6 or even 2.5 under a different
 module name like re_ng wouldn't be too difficult to do for anybody that
 needs the new functionality and knows a bit about building extension
 modules.
 
 I did a Google search, but I can't find any document that describes it.
 Does it have almost the same functionality as the re package that will
 be in Python 2.7? Where is the documentation for it?
 
 If I just need variable width look-behind, I just replace re with re_ng
 on python 2.5 or 2.6, right? Is re_ng available on python 2.4?

re_ng doesn't exist yet, because Python 2.7 doesn't exist yet. 2.6 has 
only just come out -- it will probably be at least a year before 2.7 is 
out, and only then might people start back-porting the new re engine to 
2.5 or 2.6.
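
The restriction in question is easy to demonstrate against the current engine (a small sketch; `re_ng` is just a hypothetical name from this thread, not an installable module):

```python
import re

# Fixed-width look-behind compiles fine in the stdlib engine:
pattern = re.compile(r"(?<=foo)bar")
print(pattern.search("xfoobar").group())   # bar

# Variable-width look-behind is rejected:
try:
    re.compile(r"(?<=fo+)bar")
except re.error as exc:
    print(exc)   # complains that look-behind requires a fixed-width pattern
```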


-- 
Steven
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread Andy O'Meara


 That aside, the fundamental problem is what I perceive as a fundamental
 design flaw in Python's C API. In Java's JNI, each function takes a
 JNIEnv* pointer as its first argument. There is nothing that
 prevents you from embedding several JVMs in a process. Python can
 create embedded subinterpreters, but it works differently. It swaps
 subinterpreters like a finite state machine: only one is concurrently
 active, and the GIL is shared.

Bingo, it seems that you've hit it right on the head there.  Sadly,
that's why I regard this thread largely futile (but I'm an optimist
when it comes to cool software communities so here I am).  I've been
afraid to say it for fear of getting mauled by everyone here, but I
would definitely agree if there was a context (i.e. environment)
object passed around then perhaps we'd have the best of all worlds.
*winces*



  This is discussed earlier in the thread--they're unfortunately all
  out.

 It occurs to me that tcl is doing what you want. Have you ever thought
 of not using Python?

Bingo again.  Our research says that the options are tcl, perl
(although it's generally untested and not recommended by the
community--definitely dealbreakers for a commercial user like us), and
lua.  Also, I'd rather saw off my own right arm than adopt perl, so
that's out.  :^)

As I mentioned, we're looking to either (1) support a python dev
community effort, (2) make our own high-performance python interpreter
(that uses an env object as you described), or (3) drop python and go
to lua.  I'm favoring them in the order I list them, but the more I
discuss the issue with folks here, the more people seem to be
unfortunately very divided on (1).

Andy



--
http://mail.python.org/mailman/listinfo/python-list


Re: dictionary

2008-10-24 Thread Steven D'Aprano
On Fri, 24 Oct 2008 14:53:19 +, Peter Pearson wrote:

 On 24 Oct 2008 13:17:45 GMT, Steven D'Aprano wrote:

 What are programmers coming to these days? When I was their age, we
 were expected to *read* the error messages our compilers gave us, not
 turn to the Interwebs for help as soon as there was the tiniest problem.
 
 Yes, and what's more, the text of the error message was IEH208.  After
 reading it several times, one looked it up in a big fat set of books,
 where one found the explanation:
 
   IEH208: Your program contains an error. Correct the error and resubmit
   your job.
 
 An excellent system for purging the world of the weak and timid.

You had reference books? You were lucky! When I was a lad, we couldn't 
afford reference books. If we wanted to know what an error code meant, we 
had to rummage through the bins outside of compiler vendors' offices 
looking for discarded documentation.



-- 
Steven
--
http://mail.python.org/mailman/listinfo/python-list


Re: python extensions: including project local headers

2008-10-24 Thread J Kenneth King
Philip Semanchuk [EMAIL PROTECTED] writes:

 On Oct 23, 2008, at 3:18 PM, J Kenneth King wrote:

 Philip Semanchuk [EMAIL PROTECTED] writes:

 On Oct 23, 2008, at 11:36 AM, J Kenneth King wrote:


 Hey everyone,

 I'm working on a python extension wrapper around Rob Hess'
 implementation of a SIFT feature detector. I'm working on a
 computer-vision based project that requires interfacing with
 Python at
 the higher layers, so I figured the best way to handle this would be
 in
 C (since my initial implementation in python was ungodly and slow).

 I can get distutils to compile the extension and install it in the
 python path, but when I go to import it I get the wonderful
 exception:

 ImportError: /usr/lib/python2.5/site-packages/pysift.so: undefined
 symbol: _sift_features


 Kenneth,
 You're close but not interpreting the error quite correctly. This
 isn't an error from the compiler or preprocessor, it's a library
 error. Assuming this is dynamically linked, your OS is reporting
 that,
 at runtime, it can't find the library that contains _sift_features.
 Make sure that it's somewhere where your OS can find it.

 This is basically what I was looking for help with. So far the project
 directory is:

 /pysift
   /sift
     ..
     /include
       ..
       sift.h
     /src
       ..
       sift.c
   /src
     pysift.c
   setup.py

 I thought I could just #include "sift.h" in pysift.c as long as
 distutils passed the right -I path to gcc.

 That's true, and it sounds like you've got that part working.


 Maybe I should compile the sift code as a shared object and link it to
 my extension? How would I get distutils to build the makefile and tell
 gcc how to link it?

 Thanks for the reply. Python has spoiled me and my C is rather
 rusty. :)

 I don't know how to get setup.py to build a shared object separately.
 I am in the same Python/C situation as you. I'm scrubbing the rust off
 of my C skills and I'm also a n00b at developing extensions. I've
 learned a lot from looking at other people's setup code, so maybe I
 can help you there.

 My posix_ipc module links to the realtime lib rt and here's the
 relevant snippets of setup.py:

 --
 import distutils.core as duc

 libraries = [ ]

 libraries.append("rt")

 source_files = ["posix_ipc_module.c"]

 ext_modules = [ duc.Extension("posix_ipc",
   source_files,
   define_macros=define_macros,
   libraries=libraries
  )
   ]

 duc.setup(name="posix_ipc", version=VERSION, ext_modules=ext_modules)

 --

 You can download the whole thing here if you want to examine all the
 code:
 http://semanchuk.com/philip/posix_ipc/

 HTH
 Philip

I'll take a look, thanks! :)
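
For what it's worth, a minimal sketch of a setup.py for the layout shown above, which compiles sift.c directly into the extension so that _sift_features is resolved at link time rather than hunted for at import time. The paths follow the directory tree posted earlier; the version string is made up, and distutils is used to match the era of this thread (modern projects would use setuptools):

```python
from distutils.core import setup, Extension

# Building sift/src/sift.c into the pysift extension itself means the
# module contains _sift_features -- there is no separate shared
# library for the OS loader to locate when the module is imported.
pysift = Extension(
    "pysift",
    sources=["src/pysift.c", "sift/src/sift.c"],  # wrapper + SIFT implementation
    include_dirs=["sift/include"],                # lets pysift.c do `#include "sift.h"`
)

setup(name="pysift", version="0.1", ext_modules=[pysift])
```

Run with the usual `python setup.py build_ext --inplace` to check that the undefined-symbol error goes away.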
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread Patrick Stinson
I'm not finished reading the whole thread yet, but I've got some
things below to respond to this post with.

On Thu, Oct 23, 2008 at 9:30 AM, Glenn Linderman [EMAIL PROTECTED] wrote:
 On approximately 10/23/2008 12:24 AM, came the following characters from the
 keyboard of Christian Heimes:

 Andy wrote:

 2) Barriers to free threading.  As Jesse describes, this is simply
 just the GIL being in place, but of course it's there for a reason.
 It's there because (1) doesn't hold and there was never any specs/
 guidance put forward about what should and shouldn't be done in multi-
 threaded apps (see my QuickTime API example).  Perhaps if we could go
 back in time, we would not put the GIL in place, strict guidelines
 regarding multithreaded use would have been established, and PEP 3121
 would have been mandatory for C modules.  Then again--screw that, if I
 could go back in time, I'd just go for the lottery tickets!! :^)


 I've been following this discussion with interest, as it certainly seems
 that multi-core/multi-CPU machines are the coming thing, and many
 applications will need to figure out how to use them effectively.

 I'm very - not absolute, but very - sure that Guido and the initial
 designers of Python would have added the GIL anyway. The GIL makes Python
 faster on single core machines and more stable on multi core machines. Other
 language designers think the same way. Ruby recently got a GIL. The article
 http://www.infoq.com/news/2007/05/ruby-threading-futures explains the
 rationales for a GIL in Ruby. The article also holds a quote from Guido
 about threading in general.

 Several people inside and outside the Python community think that threads
 are dangerous and don't scale. The paper
 http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf sums it up
 nicely. It explains why modern processors are going to cause more and more
 trouble with the Java approach to threads, too.

 Reading this PDF paper is extremely interesting (albeit somewhat dependent
 on understanding abstract theories of computation; I have enough math
 background to follow it, sort of, and most of the text can be read even
 without fully understanding the theoretical abstractions).

 I have already heard people saying that Java applications are buggy.  I
 don't believe that general sequential programs written in Java are any
 buggier than programs written in other languages... so I had interpreted
 that to mean (based on some inquiry) that complex, multi-threaded Java
 applications are buggy.  And while I also don't believe that complex,
 multi-threaded programs written in Java are any buggier than complex,
 multi-threaded programs written in other languages, it does seem to be true
 that Java is one of the currently popular languages in which to write
 complex, multi-threaded programs, because of its language support for
 threads and concurrency primitives.  These reports were from people that are
 not programmers, but are field IT people, that have bought and/or support
 software and/or hardware with drivers, that are written in Java, and seem to
 have non-ideal behavior, (apparently only) curable by stopping/restarting
 the application or driver, or sometimes requiring a reboot.

 The paper explains many traps that lead to complex, multi-threaded programs
 being buggy, and being hard to test.  I have worked with parallel machines,
 applications, and databases for 25 years, and can appreciate the succinct
 expression of the problems explained within the paper, and can, from
 experience, agree with its premises and conclusions.  Parallel applications
 only have been commercial successes when the parallelism is tightly
 constrained to well-controlled patterns that could be easily understood.
  Threads, especially in cooperation with languages that use memory
 pointers, have the potential to get out of control, in inexplicable ways.


 Python *must* gain means of concurrent execution of CPU bound code
 eventually to survive on the market. But it must get the right means or we
 are going to suffer the consequences.

 This statement, after reading the paper, seems somewhat in line with the
 author's premise that language acceptability requires that a language be
 self-contained/monolithic, and potentially sufficient to implement itself.
  That seems to also be one of the reasons that Java is used today for
 threaded applications.  It does seem to be true, given current hardware
 trends, that _some mechanism_ must be provided to obtain the benefit of
 multiple cores/CPUs to a single application, and that Python must either
 implement or interface to that mechanism to continue to be a viable language
 for large scale application development.

 Andy seems to want an implementation of independent Python processes
 implemented as threads within a single address space, that can be
 coordinated by an outer application.  This actually corresponds to the model
 promulgated in the paper as being most likely to succeed.  

Re: Py2exe and Module Error...

2008-10-24 Thread Mike Driscoll
On Oct 23, 5:02 pm, [EMAIL PROTECTED] wrote:
 On Oct 22, 8:33 pm, Gabriel Genellina [EMAIL PROTECTED]
 wrote:



  En Wed, 22 Oct 2008 20:34:39 -0200, [EMAIL PROTECTED] escribió:

   I am using py2exe and everything is working fine except one module,
   ClientCookie, found here:

  http://wwwsearch.sourceforge.net/ClientCookie/

   Keeps coming up as not found no matter what I do. I have tried all
   these combinations from the command line:

  Add `import ClientCookie` to your setup.py (to make sure you actually
  *can* import it in your development environment).
  Also declare it as a required *package* (not module):

   setup(windows=["C:\\exe\\pos_final2.py"],
          ...
          options={'py2exe': {
            'packages': ['ClientCookie',]
          }})

  --
  Gabriel Genellina

 Ok, thank you for your reply Gabriel. I did as you said, including
 adding 'import ClientCookie' to setup.py and that worked fine when
 running the script it found it. However, when actually running it
 through py2exe after adding the package as you have said, it still
 says 'No module named ClientCookie'

 Any help would be greatly appreciated.

 Vince

Try asking at the py2exe mailing list as well. They can probably give
you some pointers.

Mike
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread Andy O'Meara

Glenn, great post and points!


 Andy seems to want an implementation of independent Python processes
 implemented as threads within a single address space, that can be
 coordinated by an outer application.  This actually corresponds to the
 model promulgated in the paper as being most likely to succeed.

Yeah, that's the idea--let the highest levels run and coordinate the
show.


 It does seem simpler and more efficient to simply copy
 data from one memory location to another, rather than send it in a
 message, especially if the data are large.

That's the rub...  In our case, we're doing image and video
manipulation--stuff not good to be messaging from address space to
address space.  The same argument holds for numerical processing with
large data sets.  The workers handing back huge data sets via
messaging isn't very attractive.

 One thing Andy hasn't yet explained (or I missed) is why any of his
 application is coded in a language other than Python.  

Our software runs in real time (so performance is paramount),
interacts with other static libraries, depends on worker threads to
perform real-time image manipulation, and leverages Windows and Mac OS
API concepts and features.  Python's performance hits have generally
been a huge challenge with our animators because they often have to go
back and massage their python code to improve execution performance.
So, in short, there are many reasons why we use python as a part
rather than a whole.

The other area of pain that I mentioned in one of my other posts is
that what we ship, above all, can't be flaky.  The lack of module
cleanup (intended to be addressed by PEP 3121), using a duplicate copy
of the python dynamic lib, and namespace black magic to achieve
independent interpreters are all examples that have made using python
for us much more challenging and time-consuming than we ever
anticipated.

Again, if it turns out nothing can be done about our needs (which
appears to be more and more like the case), I think it's important for
everyone here to consider the points raised here in the last week.
Moreover, realize that the python dev community really stands to gain
from making python usable as a tool (rather than a monolith).  This
fact alone has caused lua to *rapidly* rise in popularity with
software companies looking to embed a powerful, lightweight
interpreter in their software.

As a python language fan and enthusiast, don't let lua win!  (I say
this endearingly of course--I have the utmost respect for both
communities and I only want to see CPython be an attractive pick when
a company is looking to embed a language that won't intrude upon their
app's design).


Andy
--
http://mail.python.org/mailman/listinfo/python-list


print statements not sent to nohup.out

2008-10-24 Thread John [H2O]

Just a quick question.. what do I need to do so that my print statements are
caught by nohup??

Yes, I should probably be 'logging'... but hey..

Thanks!
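
For what it's worth, the usual culprit is stdout buffering: when stdout is a terminal it is line-buffered, but under nohup it is redirected to nohup.out and becomes block-buffered, so prints sit in the buffer until it fills or the process exits. A small sketch of the standard workaround (the messages here are made up for illustration):

```python
import sys

messages = []
for i in range(3):
    msg = "progress %d" % i
    messages.append(msg)
    print(msg)
    sys.stdout.flush()   # push each line out to nohup.out immediately
```

Alternatively, run the script unbuffered in the first place: `nohup python -u script.py &`.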
-- 
View this message in context: 
http://www.nabble.com/print-statements-not-sent-to-nohup.out-tp20152780p20152780.html
Sent from the Python - python-list mailing list archive at Nabble.com.

--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread Patrick Stinson
We are in the same position as Andy here.

I think that something that would help people like us produce
something in code form is a collection of information outlining the
problem and suggested solutions, appropriate parts of the CPython's
current threading API, and pros and cons of the many various proposed
solutions to the different levels of the problem. The most valuable
information I've found is contained in the many (lengthy!) discussions
like this one, a few related PEP's, and the CPython docs, but has
anyone condensed the state of the problem into a wiki or something
similar? Maybe we should start one?

For example, Guido's post here
http://www.artima.com/weblogs/viewpost.jsp?thread=214235 describes some
possible solutions to the problem, like interpreter-specific locks, or
fine-grained object locks, and he also mentions the primary
requirement of not harming the performance of single-threaded
apps. As I understand it, that requirement does not rule out new build
configurations that provide some level of concurrency, as long as you
can still compile python so as to perform as well on single-threaded
apps.

To add to the heap of use cases, the most important thing to us is
simply to have the python language and the sip/PyQt modules available to
us. All we wanted to do was embed the interpreter and language core as
a local scripting engine, so had we patched python to provide
concurrent execution, we wouldn't have cared about all of the other
unsupported extension modules since our scripts are quite
application-specific.

It seems to me that the very simplest move would be to remove global
static data so the app could provide all thread-related data, which
Andy suggests through references to the QuickTime API. This would
suggest compiling python without thread support so as to leave it up
to the application.

Anyway, I'm having fun reading all of these papers and news postings,
but it's true that code talks, and it could be a little easier if the
state of the problems was condensed. This could be an intense and fun
project, but frankly it's a little tough to keep it all in my head. Is
there a wiki or something out there or should we start one, or do I
just need to read more code?

On Fri, Oct 24, 2008 at 6:40 AM, Andy O'Meara [EMAIL PROTECTED] wrote:
 On Oct 24, 2:12 am, greg [EMAIL PROTECTED] wrote:
 Andy wrote:
  1) Independent interpreters (this is the easier one--and solved, in
  principle anyway, by PEP 3121, by Martin v. Löwis

 Something like that is necessary for independent interpreters,
 but not sufficient. There are also all the built-in constants
 and type objects to consider. Most of these are statically
 allocated at the moment.


 Agreed--I  was just trying to speak generally.  Or, put another way,
 there's no hope for independent interpreters without the likes of PEP
 3121.  Also, as Martin pointed out, there's the issue of module
 cleanup some guys here may underestimate (and I'm glad Martin pointed
 out the importance of it).  Without the module cleanup, every time a
 dynamic library using python loads and unloads you've got leaks.  This
 issue is a real problem for us since our software is loaded and
 unloaded many many times in a host app (iTunes, WMP, etc).  I hadn't
 raised it here yet (and I don't want to turn the discussion to this),
 but lack of multiple load and unload support has been another painful
 issue that we didn't expect to encounter when we went with python.


  2) Barriers to free threading.  As Jesse describes, this is simply
  just the GIL being in place, but of course it's there for a reason.
  It's there because (1) doesn't hold and there was never any specs/
  guidance put forward about what should and shouldn't be done in multi-
  threaded apps

 No, it's there because it's necessary for acceptable performance
 when multiple threads are running in one interpreter. Independent
 interpreters wouldn't mean the absence of a GIL; it would only
 mean each interpreter having its own GIL.


 I see what you're saying, but let's note that what you're talking
 about at this point is an interpreter containing protection from the
 client level violating (supposed) direction put forth in python
 multithreaded guidelines.  Glenn Linderman's post really gets at
 what's at hand here.  It's really important to consider that it's not
 a given that python (or any framework) has to be designed against
 hazardous use.  Again, I refer you to the diagrams and guidelines in
 the QuickTime API:

 http://developer.apple.com/technotes/tn/tn2125.html

 They tell you point-blank what you can and can't do, and it's that
 simple.  Their engineers can then simply create the implementation
 around those specs and not weigh any of the implementation down with
 sync mechanisms.  I'm in the camp that simplicity and convention wins
 the day when it comes to an API.  It's safe to say that software
 engineers expect and assume that a thread that doesn't have contact
 with other threads 

Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread Patrick Stinson
As a side note to the performance question, we are executing python
code in an audio thread that is used in all of the top-end music
production environments. We have found the language to perform
extremely well when executed at control-rate frequency, meaning we
aren't doing DSP computations, just responding to less-frequent events
like user input and MIDI messages.

So we are sitting on this music platform with unimaginable possibilities
in the music world (in which python plays no role), but those
little CPU spikes caused by the GIL at low latencies won't let us have
it. AFAIK, there is no music scripting language out there that would
come close, and yet we are so close! This is a big deal.

On Fri, Oct 24, 2008 at 7:42 AM, Andy O'Meara [EMAIL PROTECTED] wrote:

 Glenn, great post and points!


 Andy seems to want an implementation of independent Python processes
 implemented as threads within a single address space, that can be
 coordinated by an outer application.  This actually corresponds to the
 model promulgated in the paper as being most likely to succeed.

 Yeah, that's the idea--let the highest levels run and coordinate the
 show.


 It does seem simpler and more efficient to simply copy
 data from one memory location to another, rather than send it in a
 message, especially if the data are large.

 That's the rub...  In our case, we're doing image and video
 manipulation--stuff not good to be messaging from address space to
 address space.  The same argument holds for numerical processing with
 large data sets.  The workers handing back huge data sets via
 messaging isn't very attractive.

 One thing Andy hasn't yet explained (or I missed) is why any of his
 application is coded in a language other than Python.

 Our software runs in real time (so performance is paramount),
 interacts with other static libraries, depends on worker threads to
 perform real-time image manipulation, and leverages Windows and Mac OS
 API concepts and features.  Python's performance hits have generally
 been a huge challenge with our animators because they often have to go
 back and massage their python code to improve execution performance.
 So, in short, there are many reasons why we use python as a part
 rather than a whole.

 The other area of pain that I mentioned in one of my other posts is
 that what we ship, above all, can't be flaky.  The lack of module
 cleanup (intended to be addressed by PEP 3121), using a duplicate copy
 of the python dynamic lib, and namespace black magic to achieve
 independent interpreters are all examples that have made using python
 for us much more challenging and time-consuming than we ever
 anticipated.

 Again, if it turns out nothing can be done about our needs (which
 appears to be more and more like the case), I think it's important for
 everyone here to consider the points raised here in the last week.
 Moreover, realize that the python dev community really stands to gain
 from making python usable as a tool (rather than a monolith).  This
 fact alone has caused lua to *rapidly* rise in popularity with
 software companies looking to embed a powerful, lightweight
 interpreter in their software.

 As a python language fan and enthusiast, don't let lua win!  (I say
 this endearingly of course--I have the utmost respect for both
 communities and I only want to see CPython be an attractive pick when
 a company is looking to embed a language that won't intrude upon their
 app's design).


 Andy
 --
 http://mail.python.org/mailman/listinfo/python-list

--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread Terry Reedy

Stefan Behnel wrote:

Terry Reedy wrote:

Everything in DLLs is compiled C extensions.  I see about 15 for Windows
3.0.


Ah, weren't that wonderful times back in the days of Win3.0, when DLL-hell was
inhabited by only 15 libraries? *sigh*

... although ... wait, didn't Win3.0 have more than that already? Maybe you
meant Windows 1.0?

SCNR-ly,


Is that the equivalent of a smiley? Or did you really not understand 
what I wrote?


--
http://mail.python.org/mailman/listinfo/python-list


Re: dictionary

2008-10-24 Thread asit
On Oct 24, 8:01 pm, Steven D'Aprano [EMAIL PROTECTED]
cybersource.com.au wrote:
 On Fri, 24 Oct 2008 14:53:19 +, Peter Pearson wrote:
  On 24 Oct 2008 13:17:45 GMT, Steven D'Aprano wrote:

  What are programmers coming to these days? When I was their age, we
  were expected to *read* the error messages our compilers gave us, not
  turn to the Interwebs for help as soon as there was the tiniest problem.

  Yes, and what's more, the text of the error message was IEH208.  After
  reading it several times, one looked it up in a big fat set of books,
  where one found the explanation:

IEH208: Your program contains an error. Correct the error and resubmit
your job.

  An excellent system for purging the world of the weak and timid.

 You had reference books? You were lucky! When I was a lad, we couldn't
 afford reference books. If we wanted to know what an error code meant, we
 had to rummage through the bins outside of compiler vendors' offices
 looking for discarded documentation.

 --
 Steven

I don't have a reference book. I read from e-books and some printouts
of a reference card. Again, I had this error because my console has no
colour highlighting feature.
--
http://mail.python.org/mailman/listinfo/python-list


Re: dictionary

2008-10-24 Thread Terry Reedy

asit wrote:

what is wrong with the following code? 


>>> d = {"server": "mpilgrim", "database": "master",
... "uid": "sa",
... "pwd": "secret"}

>>> d

{'pwd': 'secret', 'database': 'master', 'uid': 'sa', 'server':
'mpilgrim'}


>>> ["%s=%s"" % (k,v) for k,v in d.items()]
  File "<stdin>", line 1
    ["%s=%s"" % (k,v) for k,v in d.items()]
                                           ^
SyntaxError: EOL while scanning single-quoted string


By "single-quoted", the message means quoted by ' or " rather than 
triple-quoted with ''' or """.
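With the quotation marks matched, the comprehension from this thread works as intended — a minimal sketch using the example dict (Python 2's dict ordering is arbitrary, so the items are sorted here for a stable result):

```python
# The dict from the thread; sorted() gives a deterministic order.
d = {"server": "mpilgrim", "database": "master", "uid": "sa", "pwd": "secret"}
pairs = ["%s=%s" % (k, v) for k, v in sorted(d.items())]
print(";".join(pairs))  # database=master;pwd=secret;server=mpilgrim;uid=sa
```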


--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread Jesse Noller
On Fri, Oct 24, 2008 at 10:40 AM, Andy O'Meara [EMAIL PROTECTED] wrote:
  2) Barriers to free threading.  As Jesse describes, this is simply
  just the GIL being in place, but of course it's there for a reason.
  It's there because (1) doesn't hold and there was never any specs/
  guidance put forward about what should and shouldn't be done in multi-
  threaded apps

 No, it's there because it's necessary for acceptable performance
 when multiple threads are running in one interpreter. Independent
 interpreters wouldn't mean the absence of a GIL; it would only
 mean each interpreter having its own GIL.


 I see what you're saying, but let's note that what you're talking
 about at this point is an interpreter containing protection from the
 client level violating (supposed) direction put forth in python
 multithreaded guidelines.  Glenn Linderman's post really gets at
 what's at hand here.  It's really important to consider that it's not
 a given that python (or any framework) has to be designed against
 hazardous use.  Again, I refer you to the diagrams and guidelines in
 the QuickTime API:

 http://developer.apple.com/technotes/tn/tn2125.html

 They tell you point-blank what you can and can't do, and it's that
 simple.  Their engineers can then simply create the implementation
 around those specs and not weigh any of the implementation down with
 sync mechanisms.  I'm in the camp that simplicity and convention wins
 the day when it comes to an API.  It's safe to say that software
 engineers expect and assume that a thread that doesn't have contact
 with other threads (except for explicit, controlled message/object
 passing) will run unhindered and safely, so I raise an eyebrow at the
 GIL (or any internal helper sync stuff) holding up a thread's
 performance when the app is designed to not need lower-level global
 locks.

 Anyway, let's talk about solutions.  My company is looking to support
 a python dev community endeavor that allows the following:

 - an app makes N worker threads (using the OS)

 - each worker thread makes its own interpreter, pops scripts off a
 work queue, and manages exporting (and then importing) result data to
 other parts of the app.  Generally, we're talking about CPU-bound work
 here.

 - each interpreter has the essentials (e.g. math support, string
 support, re support, and so on -- I realize this is open-ended, but
 work with me here).

 Let's guesstimate about what kind of work we're talking about here and
 if this is even in the realm of possibility.  If we find that it *is*
 possible, let's figure out what level of work we're talking about.
 From there, I can get serious about writing up a PEP/spec, paid
 support, and so on.

Point of order! Just for my own sanity if anything :) I think some
minor clarifications are in order.

What are threads within Python:

Python has built-in support for POSIX light weight threads. This is
what most people are talking about when they see, hear and say
"threads" - they mean Posix Pthreads
(http://en.wikipedia.org/wiki/POSIX_Threads) this is not what you
(Andy) seem to be asking for. PThreads are attractive due to the fact
they exist within a single interpreter, can share memory all willy
nilly, etc.

Python does, in fact, use OS-level pthreads when you request multiple threads.

The Global Interpreter Lock is fundamentally designed to make the
interpreter easier to maintain and safer: Developers do not need to
worry about other code stepping on their namespace. This makes things
thread-safe, inasmuch as having multiple PThreads within the same
interpreter space modifying global state and variables at once is,
well, bad. A C-level module, on the other hand, can sidestep/release
the GIL at will, and go on its merry way and process away.

POSIX Threads/pthreads/threads as we get from Java allow unsafe
programming styles. These programming styles are of the "shared
everything deadlock lol" kind. The GIL *partially* protects against
some of the pitfalls. You do not seem to be asking for pthreads :)

http://www.python.org/doc/faq/library/#can-t-we-get-rid-of-the-global-interpreter-lock
http://en.wikipedia.org/wiki/Multi-threading

However, then there are processes.

The difference between threads and processes is that they do *not
share memory* but they can share state via shared queues/pipes/message
passing - what you seem to be asking for - is the ability to
completely fork independent Python interpreters, with their own
namespace and coordinate work via a shared queue accessed with pipes
or some other communications mechanism. Correct?

Multiprocessing, as it exists within python 2.6 today, actually forks
(see trunk/Lib/multiprocessing/forking.py) a completely independent
interpreter per process created, and then constructs pipes to
inter-communicate and queues to do work coordination. I am not
suggesting this is good for you - I'm trying to get to exactly what
you're asking for.

Fundamentally, allowing total free-threading with Posix threads, 

Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread Jesse Noller
On Fri, Oct 24, 2008 at 12:30 PM, Jesse Noller [EMAIL PROTECTED] wrote:
 On Fri, Oct 24, 2008 at 10:40 AM, Andy O'Meara [EMAIL PROTECTED] wrote:
  2) Barriers to free threading.  As Jesse describes, this is simply
  just the GIL being in place, but of course it's there for a reason.
  It's there because (1) doesn't hold and there was never any specs/
  guidance put forward about what should and shouldn't be done in multi-
  threaded apps

 No, it's there because it's necessary for acceptable performance
 when multiple threads are running in one interpreter. Independent
 interpreters wouldn't mean the absence of a GIL; it would only
 mean each interpreter having its own GIL.


 I see what you're saying, but let's note that what you're talking
 about at this point is an interpreter containing protection from the
 client level violating (supposed) direction put forth in python
 multithreaded guidelines.  Glenn Linderman's post really gets at
 what's at hand here.  It's really important to consider that it's not
 a given that python (or any framework) has to be designed against
 hazardous use.  Again, I refer you to the diagrams and guidelines in
 the QuickTime API:

 http://developer.apple.com/technotes/tn/tn2125.html

 They tell you point-blank what you can and can't do, and it's that
 simple.  Their engineers can then simply create the implementation
 around those specs and not weigh any of the implementation down with
 sync mechanisms.  I'm in the camp that simplicity and convention wins
 the day when it comes to an API.  It's safe to say that software
 engineers expect and assume that a thread that doesn't have contact
 with other threads (except for explicit, controlled message/object
 passing) will run unhindered and safely, so I raise an eyebrow at the
 GIL (or any internal helper sync stuff) holding up a thread's
 performance when the app is designed to not need lower-level global
 locks.

 Anyway, let's talk about solutions.  My company is looking to support
 a python dev community endeavor that allows the following:

 - an app makes N worker threads (using the OS)

 - each worker thread makes its own interpreter, pops scripts off a
 work queue, and manages exporting (and then importing) result data to
 other parts of the app.  Generally, we're talking about CPU-bound work
 here.

 - each interpreter has the essentials (e.g. math support, string
 support, re support, and so on -- I realize this is open-ended, but
 work with me here).

 Let's guesstimate about what kind of work we're talking about here and
 if this is even in the realm of possibility.  If we find that it *is*
 possible, let's figure out what level of work we're talking about.
 From there, I can get serious about writing up a PEP/spec, paid
 support, and so on.

 Point of order! Just for my own sanity if anything :) I think some
 minor clarifications are in order.

 What are threads within Python:

 Python has built-in support for POSIX light weight threads. This is
 what most people are talking about when they see, hear and say
 "threads" - they mean Posix Pthreads
 (http://en.wikipedia.org/wiki/POSIX_Threads) this is not what you
 (Andy) seem to be asking for. PThreads are attractive due to the fact
 they exist within a single interpreter, can share memory all willy
 nilly, etc.

 Python does, in fact, use OS-level pthreads when you request multiple threads.

 The Global Interpreter Lock is fundamentally designed to make the
 interpreter easier to maintain and safer: Developers do not need to
 worry about other code stepping on their namespace. This makes things
 thread-safe, inasmuch as having multiple PThreads within the same
 interpreter space modifying global state and variables at once is,
 well, bad. A C-level module, on the other hand, can sidestep/release
 the GIL at will, and go on its merry way and process away.

 POSIX Threads/pthreads/threads as we get from Java allow unsafe
 programming styles. These programming styles are of the "shared
 everything deadlock lol" kind. The GIL *partially* protects against
 some of the pitfalls. You do not seem to be asking for pthreads :)

 http://www.python.org/doc/faq/library/#can-t-we-get-rid-of-the-global-interpreter-lock
 http://en.wikipedia.org/wiki/Multi-threading

 However, then there are processes.

 The difference between threads and processes is that they do *not
 share memory* but they can share state via shared queues/pipes/message
 passing - what you seem to be asking for - is the ability to
 completely fork independent Python interpreters, with their own
 namespace and coordinate work via a shared queue accessed with pipes
 or some other communications mechanism. Correct?

 Multiprocessing, as it exists within python 2.6 today, actually forks
 (see trunk/Lib/multiprocessing/forking.py) a completely independent
 interpreter per process created, and then constructs pipes to
 inter-communicate and queues to do work coordination. I am not
 suggesting this is good for you - I'm trying 

Re: @property decorator doesn't raise exceptions

2008-10-24 Thread Rafe
On Oct 24, 2:21 am, Christian Heimes [EMAIL PROTECTED] wrote:
 Rafewrote:
  Hi,

  I've encountered a problem which is making debugging less obvious than
  it should be. The @property decorator doesn't always raise exceptions.
  It seems like it is bound to the class but ignored when called. I can
  see the attribute using dir(self.__class__) on an instance, but when
  called, python enters __getattr__. If I correct the bug, the attribute
  calls work as expected and do not call __getattr__.

  I can't seem to make a simple repro. Can anyone offer any clues as to
  what might cause this so I can try to prove it?

 You must subclass from object to get a new style class. properties
 don't work correctly on old style classes.

 Christian

All classes are a sub-class of object. Any other ideas?

- Rafe
--
http://mail.python.org/mailman/listinfo/python-list


Re: OS 10.5 build 64 bits

2008-10-24 Thread Robin Becker

Graham Dumpleton wrote:



  http://code.google.com/p/modwsgi/wiki/InstallationOnMacOSX
  
http://developer.apple.com/releasenotes/OpenSource/PerlExtensionsRelNotes/index.html

The latter only works for Apple supplied Python as I understand it.

..
thanks for these, the mod_wsgi build contains interesting stuff; no doubt I'll 
burn my fingers with a few misguided fudges.

--
Robin Becker

--
http://mail.python.org/mailman/listinfo/python-list


Re: Porting VB apps to Python for Window / Linux use

2008-10-24 Thread Ed Leafe

On Oct 18, 2008, at 8:12 AM, Dotan Cohen wrote:


I often see mention of SMBs that either want to upgrade their Windows
installations, or move to Linux, but cannot because of inhouse VB
apps. Are there any Python experts who I can reference them to for
porting? I have nothing on hand at the moment, but I see this as a
need without an obvious answer.


	Sorry for the delay in responding, but someone just pointed out this  
post to me.


	You might want to take a look at Dabo, which is an integrated desktop  
application framework for Python (disclosure: I'm one of the authors).  
It allows you to visually create UIs that run unmodified on Windows,  
Linux and OS X.


You can learn about it at http://dabodev.com


-- Ed Leafe



--
http://mail.python.org/mailman/listinfo/python-list


Re: @property decorator doesn't raise exceptions

2008-10-24 Thread Peter Otten
Rafe wrote:

 On Oct 24, 2:21 am, Christian Heimes [EMAIL PROTECTED] wrote:
 Rafewrote:
  Hi,

  I've encountered a problem which is making debugging less obvious than
  it should be. The @property decorator doesn't always raise exceptions.
  It seems like it is bound to the class but ignored when called. I can
  see the attribute using dir(self.__class__) on an instance, but when
  called, python enters __getattr__. If I correct the bug, the attribute
  calls work as expected and do not call __getattr__.

  I can't seem to make a simple repro. Can anyone offer any clues as to
  what might cause this so I can try to prove it?

 You must subclass from object to get a new style class. properties
 don't work correctly on old style classes.

 Christian
 
 All classes are a sub-class of object. Any other ideas?

Hard to tell when you don't give any code. 

>>> class A(object):
...     @property
...     def attribute(self):
...         raise AttributeError
...     def __getattr__(self, name):
...         return "nobody expects the spanish inquisition"
...
>>> A().attribute
'nobody expects the spanish inquisition'

Do you mean something like this? I don't think the __getattr__() call can be
avoided here.
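A sketch of why the masking happens (the class names are invented for illustration): any AttributeError escaping a property getter makes the lookup fall back to `__getattr__`, so a permissive fallback silently hides bugs inside the property, while a strict fallback lets them surface:

```python
class Masked(object):
    @property
    def attribute(self):
        raise AttributeError("bug inside the property")  # e.g. a typo'd name
    def __getattr__(self, name):
        return "fallback"              # swallows the property's error

class Strict(object):
    @property
    def attribute(self):
        raise AttributeError("bug inside the property")
    def __getattr__(self, name):
        raise AttributeError(name)     # unknown names fail loudly

assert Masked().attribute == "fallback"   # the bug is silently masked
try:
    Strict().attribute
except AttributeError:
    print("property bug surfaced")     # reaches here
```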

Peter
--
http://mail.python.org/mailman/listinfo/python-list


Python barcode decoding

2008-10-24 Thread Robocop
Does anyone know of any decent (open source or commercial) python
barcode recognition tools or libraries.  I need to read barcodes from
pdfs or images, so it will involve some OCR algorithm.  I also only
need to read the code 93 symbology, so it doesn't have to be very
fancy.  The most important thing to me is that it outputs in some
python friendly way, or ideally that it is written in python.  Any
tips would be great!
--
http://mail.python.org/mailman/listinfo/python-list


portable python

2008-10-24 Thread asit
I code in both Windows and Linux. As Python is portable, the output
should be the same in both cases. But why is the following code perfect in
Windows but erroneous in Linux?

from socket import *
import sys

status={0:"open",10049:"address not available",10061:"closed",
10060:"timeout",10056:"already connected",10035:"filtered",11001:"IP
not found",10013:"permission denied"}

def scan(ip,port,timeout):
    s = socket(AF_INET, SOCK_STREAM)
    s.settimeout(timeout)
    try:
        result= s.connect_ex((ip, port))
    except:
        print "Cannot connect to IP"
        return
    s.close()
    return status[result]

if (len(sys.argv) == 4):
    ip=sys.argv[1]
    minrange = int(sys.argv[2])
    maxrange = int(sys.argv[3])
    timeout = 3

    ports=range(minrange,maxrange+1)

    for port in ports:
        print str(port) + " : " + scan(ip,port,timeout)
else:
    print "usage : " + sys.argv[0] + " <ip-address> <min-port
range> <max-port range>"
--
http://mail.python.org/mailman/listinfo/python-list


Re: What's the perfect (OS independent) way of storing filepaths ?

2008-10-24 Thread Lawrence D'Oliveiro
In message [EMAIL PROTECTED], Steven D'Aprano
wrote:

 Putting preferences files in the user's top level directory is horribly
 inconvenient for the user.

There is a way around this: redefine the HOME environment variable to be the
directory where you want the dotfiles to end up.
--
http://mail.python.org/mailman/listinfo/python-list


URL as input - IOError: [Errno 2] The system cannot find the path specified

2008-10-24 Thread Gilles Ganault
Hello

I'm trying to use urllib to download web pages with the GET method,
but Python 2.5.1 on Windows turns the URL into something funny:


url = "amazon.fr/search/index.php?url=search"

[...]

IOError: [Errno 2] The system cannot find the path specified:
'amazon.fr\\search\\index.php?url=search'

f = urllib.urlopen(url)


Any idea why it does this?

Thank you.
--
http://mail.python.org/mailman/listinfo/python-list


Re: URL as input - IOError: [Errno 2] The system cannot find the path specified

2008-10-24 Thread Marc 'BlackJack' Rintsch
On Fri, 24 Oct 2008 19:56:04 +0200, Gilles Ganault wrote:

 I'm trying to use urllib to download web pages with the GET method, but
 Python 2.5.1 on Windows turns the URL into something funny:
 
 
  url = "amazon.fr/search/index.php?url=search"

This URL lacks the protocol!  Correct would be http://amazon.fr… (I 
guess).

 [...]
 
 IOError: [Errno 2] The system cannot find the path specified:
 'amazon.fr\\search\\index.php?url=search'

Without protocol it seems that the 'file://' protocol is used and there 
is no such file on your system.
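One way to guard against the missing scheme before calling `urlopen` — a minimal sketch, where the helper name is an invention for illustration:

```python
try:
    from urllib.parse import urlparse   # Python 3
except ImportError:
    from urlparse import urlparse       # Python 2

def with_scheme(url, default="http"):
    # Prepend a scheme when one is missing so urlopen doesn't treat
    # the URL as a local file path.
    if not urlparse(url).scheme:
        return "%s://%s" % (default, url)
    return url

print(with_scheme("amazon.fr/search/index.php?url=search"))
# http://amazon.fr/search/index.php?url=search
```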

Ciao,
Marc 'BlackJack' Rintsch
--
http://mail.python.org/mailman/listinfo/python-list


from package import * without overwriting similarly named functions?

2008-10-24 Thread Reckoner

I have multiple packages that have many of the same function names. Is
it possible to do

from package1 import *
from package2 import *

without overwriting similarly named objects from package1 with
material in package2? How about a way to do this that at least gives a
warning?

Thanks.
--
http://mail.python.org/mailman/listinfo/python-list


Re: portable python

2008-10-24 Thread Marc 'BlackJack' Rintsch
On Fri, 24 Oct 2008 10:42:21 -0700, asit wrote:

 I code in both windows and Linux. As python is portable, the o/p should
 be same in both cases. But why the following code is perfect in windows
 but error one   in Linux ???

So what *is* the error on Linux!?

 def scan(ip,port,timeout):
     s = socket(AF_INET, SOCK_STREAM)
     s.settimeout(timeout)
     try:
         result= s.connect_ex((ip, port))
     except:
         print "Cannot connect to IP"
         return
     s.close()
     return status[result]

The bare ``except`` catches *all* errors in the ``try`` block, even those 
you might not know about because they don't belong to the set of exceptions 
you expected.  Like `NameError`, `MemoryError`, `KeyboardInterrupt`, …

And the function can return two quite different types…

 if (len(sys.argv) == 4):
     ip=sys.argv[1]
     minrange = int(sys.argv[2])
     maxrange = int(sys.argv[3])
     timeout = 3
 
     ports=range(minrange,maxrange+1)
 
     for port in ports:
         print str(port) + " : " + scan(ip,port,timeout)

…one of which is `None` and that will blow up here, regardless of 
platform.

In [18]: " : " + None
---
<type 'exceptions.TypeError'>             Traceback (most recent call 
last)

/home/bj/<ipython console> in <module>()

<type 'exceptions.TypeError'>: cannot concatenate 'str' and 'NoneType' 
objects
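A hedged sketch of the two fixes implied above — catch only socket errors rather than everything, and make every code path return a string. The status table here is a made-up subset for illustration, not the poster's full one:

```python
from socket import socket, AF_INET, SOCK_STREAM, error as SocketError

STATUS = {0: "open", 10061: "closed"}   # illustrative subset only

def scan(ip, port, timeout):
    s = socket(AF_INET, SOCK_STREAM)
    s.settimeout(timeout)
    try:
        result = s.connect_ex((ip, port))
    except SocketError:       # catch only socket errors, not NameError etc.
        return "cannot connect"
    finally:
        s.close()             # close the socket on every path
    return STATUS.get(result, "unknown (%d)" % result)  # always a string
```

With `STATUS.get(...)` the concatenation in the caller never sees `None`, so the TypeError above cannot occur.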

Ciao,
Marc 'BlackJack' Rintsch
--
http://mail.python.org/mailman/listinfo/python-list


big objects and avoiding deepcopy?

2008-10-24 Thread Reckoner
I am writing an algorithm that takes objects (i.e. graphs with
thousands of nodes) into a hypothetical state. I need to keep a
history of these  hypothetical objects depending on what happens to
them later. Note that these hypothetical objects are intimately
operated on, changed, and made otherwise significantly different from
the objects they were copied from.

I've been using deepcopy to push the objects into the hypothetical
state where I operate on them heavily. This is pretty slow since the
objects are very large.

Is there another way to do this without resorting to deepcopy?

by the way, the algorithm works fine. It's just this part of it that I
am trying to change.

Thanks in advance.
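One common alternative, assuming only a fraction of each hypothetical graph is actually touched: shallow-copy the top-level container and copy per-node data lazily on first write (copy-on-write). A toy sketch with invented names — not a drop-in for the poster's graph class:

```python
class Graph(object):
    """Toy graph: node -> set of neighbours."""
    def __init__(self, adj=None):
        self.adj = adj or {}

    def snapshot(self):
        # Shallow-copy only the top-level dict; neighbour sets stay
        # shared with the original until a node is actually modified.
        g = Graph(dict(self.adj))
        g._shared = set(self.adj)
        return g

    def edit_node(self, node):
        # Copy a neighbour set the first time it is written to.
        if getattr(self, "_shared", None) and node in self._shared:
            self.adj[node] = set(self.adj[node])
            self._shared.discard(node)
        return self.adj[node]

g = Graph({i: set([i + 1]) for i in range(1000)})
h = g.snapshot()                 # cheap compared to deepcopy
h.edit_node(0).add(99)           # only node 0's set is copied
assert 99 in h.adj[0] and 99 not in g.adj[0]
assert h.adj[1] is g.adj[1]      # untouched nodes still share storage
```

Whether this beats deepcopy depends entirely on how much of each hypothetical graph really gets modified.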
--
http://mail.python.org/mailman/listinfo/python-list


Re: from package import * without overwriting similarly named functions?

2008-10-24 Thread Mike Driscoll
On Oct 24, 1:06 pm, Reckoner [EMAIL PROTECTED] wrote:
 I have multiple packages that have many of the same function names. Is
 it possible to do

 from package1 import *
 from package2 import *

 without overwriting similarly named objects from package1 with
 material in package2? How about a way to do this that at least gives a
 warning?

 Thanks.

You can't do something like this:

from package1 import bork
from package2 import bork

and expect python to know that you want the bork from the first
package at one point and the other at another point. The latter will
basically overwrite the former. You should just import them like this:

import package1
import package2

package1.bork()
package2.bork()

Then Python will know what to do. If the name of the package is long,
you can do this too:

import reallylongnamedpackage as p

then it would be p.bork()

Then again, python is open source. Thus, you can modify the source to
do whatever you want if you have the patience and the knowledge to do
so.

Mike
--
http://mail.python.org/mailman/listinfo/python-list


Re: from package import * without overwriting similarly named functions?

2008-10-24 Thread Tim Chase

I have multiple packages that have many of the same function names. Is
it possible to do

from package1 import *
from package2 import *

without overwriting similarly named objects from package1 with
material in package2? How about a way to do this that at least gives a
warning?


Yeah, just use

  from package2 import *
  from package1 import *

then nothing in package1 will get tromped upon.

However, best practices suggest leaving them in a namespace and 
not using the import * mechanism for precisely this reason. 
You can always use module aliasing:


  import package1 as p1
  import package2 as p2

so you don't have to type package1.subitem, but can instead 
just write p1.subitem.  I prefer to do this with packages like 
Tkinter:


  import Tkinter as tk

  ...
  tk.Scrollbar...

so it doesn't litter my namespace, but also doesn't require me to 
type Tkinter.Scrollbar, prefixing with Tkinter. for everything.
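To get at least a warning, one can compare the names the two star-imports would inject before doing them — a sketch with an invented helper, using `math` and `cmath` (which really do both export `sqrt`) as the example:

```python
import math, cmath, warnings

def warn_on_collisions(mod1, mod2):
    # Names "import *" would inject: __all__ if present, else public names.
    def public(m):
        return set(getattr(m, "__all__",
                           [n for n in dir(m) if not n.startswith("_")]))
    clashes = public(mod1) & public(mod2)
    if clashes:
        warnings.warn("star-import collisions: %s" % sorted(clashes))
    return clashes

print(sorted(warn_on_collisions(math, cmath))[:3])
```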


-tkc




--
http://mail.python.org/mailman/listinfo/python-list


Re: How to examine the inheritance of a class?

2008-10-24 Thread Derek Martin
On Fri, Oct 24, 2008 at 11:59:46AM +1000, James Mills wrote:
 On Fri, Oct 24, 2008 at 11:36 AM, John Ladasky [EMAIL PROTECTED] wrote:
  etc.  The list of subclasses is not fully defined.  It is supposed to
  be extensible by the user.
 
 Developer. NOT User.

It's a semantic argument, but John's semantics are fine.  A library is
code intended to be consumed by developers.  The developers *are* the
users of the library.  *End users* use applications, not libraries.

-- 
Derek D. Martin
http://www.pizzashack.org/
GPG Key ID: 0x81CFE75D



pgpug97BBp01J.pgp
Description: PGP signature
--
http://mail.python.org/mailman/listinfo/python-list


Re: portable python

2008-10-24 Thread Jerry Hill
On Fri, Oct 24, 2008 at 1:42 PM, asit [EMAIL PROTECTED] wrote:
 I code in both windows and Linux. As python is portable, the o/p
 should be same in both cases. But why the following code is perfect in
 windows but error one   in Linux ???

What error message do you get in linux?  How are you running your code
in linux?  Your code seems to generally work on my Ubuntu linux box,
so you need to give us more information.

-- 
Jerry
--
http://mail.python.org/mailman/listinfo/python-list


Re: portable python

2008-10-24 Thread asit
On Oct 24, 11:18 pm, Jerry Hill [EMAIL PROTECTED] wrote:
 On Fri, Oct 24, 2008 at 1:42 PM, asit [EMAIL PROTECTED] wrote:
  I code in both windows and Linux. As python is portable, the o/p
  should be same in both cases. But why the following code is perfect in
  windows but error one   in Linux ???

 What error message do you get in linux?  How are you running your code
 in linux?  Your code seems to generally work on my Ubuntu linux box,
 so you need to give us more information.

 --
 Jerry

this the o/p
[EMAIL PROTECTED]:~/hack$ python portscan.py 59.93.128.10 10 20
Traceback (most recent call last):
  File "portscan.py", line 33, in <module>
    print str(port) + " : " + scan(ip,port,timeout)
  File "portscan.py", line 22, in scan
    return status[result]
KeyError: 11
[EMAIL PROTECTED]:~/hack$
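That `KeyError: 11` is the portability bug: the status table hard-codes Winsock error numbers, but on Linux `connect_ex` returns POSIX errno values (11 is `EAGAIN` there). A portable sketch using the stdlib `errno` module instead of a hand-written table:

```python
import errno, os

def describe(result):
    # connect_ex returns 0 on success, otherwise a platform errno value.
    if result == 0:
        return "open"
    name = errno.errorcode.get(result, "UNKNOWN")   # symbolic name, portably
    return "%s (%s)" % (os.strerror(result), name)

print(describe(errno.ECONNREFUSED))   # e.g. "Connection refused (ECONNREFUSED)"
```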
--
http://mail.python.org/mailman/listinfo/python-list


Re: URL as input - IOError: [Errno 2] The system cannot find the path specified

2008-10-24 Thread Gilles Ganault
On 24 Oct 2008 18:02:45 GMT, Marc 'BlackJack' Rintsch [EMAIL PROTECTED]
wrote:
This URL lacks the protocol!  Correct would be http://amazon.fr… (I 
guess).

Thanks, that did it :)
--
http://mail.python.org/mailman/listinfo/python-list


Python/Django Developer

2008-10-24 Thread harvey

My client in Jersey City, NJ 07302 is looking for a Python Developer. Below
is the job description:


Job Summary:

This is a programming position in the technical department of Advance
Internet, working on application development, application integration,
automated testing and deployment of applications, publishing structure and
unit testing in various development environments.  

Job Functions: 

   Develop extensible online applications
   Integrate vendor code into web tier
   Write and Maintain Unit Tests
   Develop and Maintain Automated Testing Frameworks
   Perform Load/Performance Testing
   Maintain & Enhance Build & Deployment Scripts
   Liaise with programmers (internal & external)
   Ability to set development standards
   Oversee incorporation of applications into web tier
   Assess stability of existing applications
   Coordinate conversion from legacy systems

Supervisory Responsibilities:

   None

Required Knowledge, Skills and Abilities:
Candidate needs to be aggressive in learning new things as well as taking
responsibility for work product and meeting deadlines with minimal
supervision.  They need to have worked in an online environment and have
published applications that have withstood live deployment. 
   Open source familiarity
   Django framework
   Python
   Other frameworks
   At least 2 standard templating languages such as Velocity, PHP, JSP
   Knowledge of quality control methods and philosophy
   Linux command line proficiency
   ANT/Maven, Build, Make
   Project management experience
   Excellent written and oral communication skills

Desired Skills/Experience:

   Moveable Type application knowledge
   Developing for a clustered server environment
   Ability to read/understand C 
   OO Perl

-- 
View this message in context: 
http://www.nabble.com/Python-Django-Developer-tp20155587p20155587.html
Sent from the Python - python-list mailing list archive at Nabble.com.

--
http://mail.python.org/mailman/listinfo/python-list


Urllib vs. FireFox

2008-10-24 Thread Gilles Ganault
Hello

After scratching my head as to why I failed finding data from a web
page using the re module, I discovered that a web page as downloaded by
urllib doesn't match what is displayed when viewing the source page in
FireFox.

For instance, when searching Amazon for Wargames:

URLLIB:
<a
href="http://www.amazon.fr/Wargames-Matthew-Broderick/dp/B4RJ7H"><span
class="srTitle">Wargames</span></a>
  
   ~ Matthew Broderick, Dabney Coleman, John Wood, et Ally Sheedy
<span class="bindingBlock">(<span class="binding">Cassette
vidéo</span> - 2000)</span></td></tr>

FIREFOX:
 <div class="productTitle"><a
href="http://www.amazon.fr/Wargames-Matthew-Broderick/dp/B4RJ7H/ref=sr_1_1?ie=UTF8&s=dvd&qid=1224872998&sr=8-1">
Wargames</a> <span class="binding"> ~ Matthew Broderick, Dabney
Coleman, John Wood, et Ally Sheedy</span><span class="binding">
(<span class="format">Cassette vidéo</span> - 2000)</span></div>

Why do they differ?

Thank you.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Urllib vs. FireFox

2008-10-24 Thread Stefan Behnel
Gilles Ganault wrote:
 After scratching my head as to why I failed finding data from a web
 page using the re module, I discovered that a web page as downloaded by
 urllib doesn't match what is displayed when viewing the source page in
 FireFox.
 
 For instance, when searching Amazon for Wargames:
 
 URLLIB:
 <a
 href="http://www.amazon.fr/Wargames-Matthew-Broderick/dp/B4RJ7H"><span
 class="srTitle">Wargames</span></a>
   
    ~ Matthew Broderick, Dabney Coleman, John Wood, et Ally Sheedy
 <span class="bindingBlock">(<span class="binding">Cassette
 vidéo</span> - 2000)</span></td></tr>
 
 FIREFOX:
  <div class="productTitle"><a
 href="http://www.amazon.fr/Wargames-Matthew-Broderick/dp/B4RJ7H/ref=sr_1_1?ie=UTF8&s=dvd&qid=1224872998&sr=8-1">
 Wargames</a> <span class="binding"> ~ Matthew Broderick, Dabney
 Coleman, John Wood, et Ally Sheedy</span><span class="binding">
 (<span class="format">Cassette vidéo</span> - 2000)</span></div>
 
 Why do they differ?

The browser sends a different client identifier than urllib, and the server
sends back different page content depending on what client is asking.
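If the server is varying markup on the client identifier, sending a browser-like `User-Agent` usually gets the browser's version of the page — a sketch (the agent string and helper name are arbitrary choices, not a recommendation):

```python
try:                                            # Python 3
    from urllib.request import Request, build_opener
except ImportError:                             # Python 2
    from urllib2 import Request, build_opener

def fetch(url, agent="Mozilla/5.0 (X11; Linux x86_64)"):
    # Many servers send different markup depending on User-Agent.
    req = Request(url, headers={"User-Agent": agent})
    return build_opener().open(req)

# The header handling can be checked without hitting the network:
req = Request("http://www.amazon.fr/", headers={"User-Agent": "Mozilla/5.0"})
assert req.get_header("User-agent") == "Mozilla/5.0"
```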

Stefan
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread Andy O'Meara



 The Global Interpreter Lock is fundamentally designed to make the
 interpreter easier to maintain and safer: Developers do not need to
 worry about other code stepping on their namespace. This makes things
 thread-safe, inasmuch as having multiple PThreads within the same
 interpreter space modifying global state and variable at once is,
 well, bad. A c-level module, on the other hand, can sidestep/release
 the GIL at will, and go on it's merry way and process away.

...Unless part of the C module execution involves the need do CPU-
bound work on another thread through a different python interpreter,
right? (even if the interpreter is 100% independent, yikes).  For
example, have a python C module designed to programmatically generate
images (and video frames) in RAM for immediate and subsequent use in
animation.  Meanwhile, we'd like to have a pthread with its own
interpreter with an instance of this module and have it dequeue jobs
as they come in (in fact, there'd be one of these threads for each
excess core present on the machine).  As far as I can tell, it seems
CPython's current state can't support CPU-bound parallelization in the same
address space (basically, it seems that we're talking about the
"embarrassingly parallel" scenario raised in that paper).  Why does it
have to be in same address space?  Convenience and simplicity--the
same reasons that most APIs let you hang yourself if the app does dumb
things with threads.  Also, when the data sets that you need to send
to and from each process is large, using the same address space makes
more and more sense.


 So, just to clarify - Andy, do you want one interpreter, $N threads
 (e.g. PThreads) or the ability to fork multiple heavyweight
 processes?

Sorry if I haven't been clear, but we're talking the app starting a
pthread, making a fresh/clean/independent interpreter, and then being
responsible for its safety at the highest level (with the payoff of
each of these threads executing without hindrance).  No different
than if you used most APIs out there where step 1 is always to make
and init a context object and the final step is always to destroy/take-
down that context object.

I'm a lousy writer sometimes, but I feel bad if you took the time to
describe threads vs processes.  The only reason I raised IPC with my
"messaging isn't very attractive" comment was to respond to Glenn
Linderman's points regarding tradeoffs of shared memory vs not.


Andy



--
http://mail.python.org/mailman/listinfo/python-list


Re: dictionary

2008-10-24 Thread mblume
Am Fri, 24 Oct 2008 05:06:23 -0500 schrieb Tim Chase:

 ["%s=%s"' % (k,v) for k,v in d.items()]
   File "<stdin>", line 1
 ["%s=%s"' % (k,v) for k,v in d.items()]
   ^
 SyntaxError: EOL while scanning single-quoted string
 
 You have three quotation marks...  you want
 
"%s=%s"
 
 not
 
"%s=%s"'
 
 -tkc

Or he may have wanted
 "%s = '%s'"
Another neat python feature.
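
A quick sketch of both spellings against a throwaway dict (names here
are arbitrary):

```python
d = {'a': 1, 'b': 2}

# Tim's corrected form: plain key=value strings
pairs = ["%s=%s" % (k, v) for k, v in d.items()]

# The variant with the value quoted
quoted = ["%s = '%s'" % (k, v) for k, v in d.items()]

print(sorted(pairs))   # ['a=1', 'b=2']
print(sorted(quoted))  # ["a = '1'", "b = '2'"]
```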
HTH
Martin
--
http://mail.python.org/mailman/listinfo/python-list


Consequences of importing the same module multiple times in C++?

2008-10-24 Thread Robert Dailey
Hi,

I'm currently using boost::python::import() to import Python modules,
so I'm not sure exactly which Python API function it is calling to
import these files. I posted to the Boost.Python mailing list with
this question and they said I'd probably get a better answer here, so
here it goes...

If I do the following:

using namespace boost::python;
import("__main__").attr("new_global") = 40.0f;
import("__main__").attr("another_global") = 100.0f;

Notice that I'm importing twice. What would be the performance
consequences of this? Do both import operations query the disk for the
module and load it into memory? Will the second call simply reference
a cached version of the module loaded at the first import() call?
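
(At the pure-Python level, at least, a quick check suggests a repeated
import is just a `sys.modules` cache lookup rather than a second disk
load, though I'm unsure how boost::python::import maps onto this:)

```python
import sys
import math

first = sys.modules['math']
import math  # repeated import: served from the sys.modules cache

print(sys.modules['math'] is first)  # True: same module object, no reload
```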

Thanks.
--
http://mail.python.org/mailman/listinfo/python-list


Re: portable python

2008-10-24 Thread mblume
Am Fri, 24 Oct 2008 11:33:33 -0700 schrieb asit:

 On Oct 24, 11:18 pm, Jerry Hill [EMAIL PROTECTED] wrote:
 On Fri, Oct 24, 2008 at 1:42 PM, asit [EMAIL PROTECTED] wrote:
  I code in both windows and Linux. As python is portable, the o/p
  should be same in both cases. But why the following code is perfect
  in windows but error one   in Linux ???

 What error message do you get in linux?  How are you running your code
 in linux?  Your code seems to generally work on my Ubuntu linux box, so
 you need to give us more information.

 --
 Jerry
 
 this the o/p
 [EMAIL PROTECTED]:~/hack$ python portscan.py 59.93.128.10 10 20
 Traceback (most recent call last):
   File "portscan.py", line 33, in <module>
 print str(port) + " : " + scan(ip,port,timeout)
   File "portscan.py", line 22, in scan
 return status[result]
 KeyError: 11
 [EMAIL PROTECTED]:~/hack$

Python is certainly portable, but in order to do its job, it relies
on the underlying OS and libraries to work, which may differ from OS
to OS. The status messages you define in your code 
(e.g. 10056: "already connected") seem to be specific to Windows, IIRC. 

HTH
Martin
--
http://mail.python.org/mailman/listinfo/python-list


Re: portable python

2008-10-24 Thread Jerry Hill
On Fri, Oct 24, 2008 at 2:33 PM, asit [EMAIL PROTECTED] wrote:
 this the o/p
 [EMAIL PROTECTED]:~/hack$ python portscan.py 59.93.128.10 10 20
 Traceback (most recent call last):
  File "portscan.py", line 33, in <module>
print str(port) + " : " + scan(ip,port,timeout)
  File "portscan.py", line 22, in scan
return status[result]
 KeyError: 11
 [EMAIL PROTECTED]:~/hack$

Oh, connect_ex is returning errno 11, which isn't in your dictionary
of statuses.  Did you think that the eight items in your status
dictionary were the only possible return values of connect_ex?  Since
the socket module is a thin wrapper over the c socket library, you'll
probably need to consult the documentation for that to see exactly
what's going on.  I'd start with "man connect" on your unix command
line, or this page:
http://www.opengroup.org/onlinepubs/009695399/functions/connect.html

You'd probably be better off using built in modules to map errno to a
message, like this:

from socket import *
import os

def scan(ip, port, timeout):
s = socket(AF_INET, SOCK_STREAM)
s.settimeout(timeout)
errno = s.connect_ex((ip, port))
return os.strerror(errno)
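
For what it's worth, the stray KeyError: 11 above corresponds to
EAGAIN on Linux, and the errno module already carries the symbolic
names, so no hand-rolled status table is needed:

```python
import errno
import os

# Translate a few connect()-style error codes to readable text
for code in (errno.ECONNREFUSED, errno.ETIMEDOUT, errno.EAGAIN):
    print(code, os.strerror(code))
```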



-- 
Jerry
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread Jesse Noller
On Fri, Oct 24, 2008 at 3:17 PM, Andy O'Meara [EMAIL PROTECTED] wrote:

 I'm a lousy writer sometimes, but I feel bad if you took the time to
 describe threads vs processes.  The only reason I raised IPC with my
 "messaging isn't very attractive" comment was to respond to Glenn
 Linderman's points regarding tradeoffs of shared memory vs not.


I actually took the time to bring anyone listening in up to speed, and
to clarify so I could better understand your use case. Don't feel bad,
things in the thread are moving fast and I just wanted to clear it up.

Ideally, we all want to improve the language, and the interpreter.
However trying to push it towards a particular use case is dangerous
given the idea of general use.

-jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: Urllib vs. FireFox

2008-10-24 Thread Rex
Right. If you want to get the same results with your Python script
that you did with Firefox, you can modify the browser headers in your
code.

Here's an example with urllib2:
http://vsbabu.org/mt/archives/2003/05/27/urllib2_setting_http_headers.html

By the way, if you're doing non-trivial web scraping, the mechanize
module might make your work much easier. You can install it with
easy_install.
http://wwwsearch.sourceforge.net/mechanize/
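
A minimal sketch of the header override (written here with the modern
urllib.request rather than 2008-era urllib2; the User-Agent string is
just a placeholder):

```python
import urllib.request

# Replace the default "Python-urllib/x.y" agent with a browser-like one
req = urllib.request.Request(
    "http://example.com/",
    headers={"User-Agent": "Mozilla/5.0"},
)
# urllib.request.urlopen(req) would then send the spoofed header
print(req.get_header("User-agent"))  # Mozilla/5.0
```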

--
http://mail.python.org/mailman/listinfo/python-list


Re: from package import * without overwriting similarly named functions?

2008-10-24 Thread Rex
If you're concerned about specific individual functions, you can use:

from package1 import some_function as f1
from package2 import some_function as f2
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread Rhamphoryncus
On Oct 24, 1:02 pm, Glenn Linderman [EMAIL PROTECTED] wrote:
 On approximately 10/24/2008 8:42 AM, came the following characters from
 the keyboard of Andy O'Meara:

  Glenn, great post and points!

 Thanks. I need to admit here that while I've got a fair bit of
 professional programming experience, I'm quite new to Python -- I've not
 learned its internals, nor even the full extent of its rich library. So
 I have some questions that are partly about the goals of the
 applications being discussed, partly about how Python is constructed,
 and partly about how the library is constructed. I'm hoping to get a
 better understanding of all of these; perhaps once a better
 understanding is achieved, limitations will be understood, and maybe
 solutions be achievable.

 Let me define some speculative Python interpreters; I think the first is
 today's Python:

 PyA: Has a GIL. PyA threads can run within a process; but are
 effectively serialized to the places where the GIL is obtained/released.
 Needs the GIL because that solves lots of problems with non-reentrant
 code (an example of non-reentrant code, is code that uses global (C
 global, or C static) variables – note that I'm not talking about Python
 vars declared global... they are only module global). In this model,
 non-reentrant code could include pieces of the interpreter, and/or
 extension modules.

 PyB: No GIL. PyB threads acquire/release a lock around each reference to
 a global variable (like with feature). Requires massive recoding of
 all code that contains global variables. Reduces performance
 significantly by the increased cost of obtaining and releasing locks.

 PyC: No locks. Instead, recoding is done to eliminate global variables
 (interpreter requires a state structure to be passed in). Extension
 modules that use globals are prohibited... this eliminates large
 portions of the library, or requires massive recoding. PyC threads do
 not share data between threads except by explicit interfaces.

 PyD: (A hybrid of PyA & PyC). The interpreter is recoded to eliminate
 global variables, and each interpreter instance is provided a state
 structure. There is still a GIL, however, because globals are
 potentially still used by some modules. Code is added to detect use of
 global variables by a module, or some contract is written whereby a
 module can be declared to be reentrant and global-free. PyA threads will
 obtain the GIL as they would today. PyC threads would be available to be
 created. PyC instances refuse to call non-reentrant modules, but also
 need not obtain the GIL... PyC threads would have limited module support
 initially, but over time, most modules can be migrated to be reentrant
 and global-free, so they can be used by PyC instances. Most 3rd-party
 libraries today are starting to care about reentrancy anyway, because of
 the popularity of threads.

PyE: objects are reclassified as shareable or non-shareable, many
types are now only allowed to be shareable.  A module and its classes
become shareable with the use of a __future__ import, and their
shareddict uses a read-write lock for scalability.  Most other
shareable objects are immutable.  Each thread is run in its own
private monitor, and thus protected from the normal threading memory
model nasties.  Alas, this gives you all the semantics, but you still
need scalable garbage collection.. and CPython's refcounting needs the
GIL.


  Our software runs in real time (so performance is paramount),
  interacts with other static libraries, depends on worker threads to
  perform real-time image manipulation, and leverages Windows and Mac OS
  API concepts and features.  Python's performance hits have generally
  been a huge challenge with our animators because they often have to go
  back and massage their python code to improve execution performance.
  So, in short, there are many reasons why we use python as a part
  rather than a whole.
[...]
  As a python language fan and enthusiast, don't let lua win!  (I say
  this endearingly of course--I have the utmost respect for both
  communities and I only want to see CPython be an attractive pick when
  a company is looking to embed a language that won't intrude upon their
  app's design).

I agree with the problem, and desire to make python fill all niches,
but let's just say I'm more ambitious with my solution. ;)
--
http://mail.python.org/mailman/listinfo/python-list


Re: Urllib vs. FireFox

2008-10-24 Thread Mike Driscoll
On Oct 24, 2:53 pm, Rex [EMAIL PROTECTED] wrote:
 Right. If you want to get the same results with your Python script
 that you did with Firefox, you can modify the browser headers in your
 code.

 Here's an example with 
 urllib2:http://vsbabu.org/mt/archives/2003/05/27/urllib2_setting_http_headers...

 By the way, if you're doing non-trivial web scraping, the mechanize
 module might make your work much easier. You can install it with
 easy_install.http://wwwsearch.sourceforge.net/mechanize/

Or if you just need to query stuff on Amazon, then you might find this
module helpful:

http://pypi.python.org/pypi/Python-Amazon/

---
Mike Driscoll

Blog:   http://blog.pythonlibrary.org
Python Extension Building Network: http://www.pythonlibrary.org
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python barcode decoding

2008-10-24 Thread Mike Driscoll
On Oct 24, 12:05 pm, Robocop [EMAIL PROTECTED] wrote:
 Does anyone know of any decent (open source or commercial) python
 barcode recognition tools or libraries.  I need to read barcodes from
 pdfs or images, so it will involve some OCR algorithm.  I also only
 need to read the code 93 symbology, so it doesn't have to be very
 fancy.  The most important thing to me is that it outputs in some
 python friendly way, or ideally that it is written in python.  Any
 tips would be great!

The closest thing I've heard of is the bar code stuff in reportlab.
This post talks a little about it:

http://mail.python.org/pipermail/python-list/2000-February/024893.html

And here's the official Reportlab site link: 
http://www.reportlab.org/downloads.html

Mike
--
http://mail.python.org/mailman/listinfo/python-list


Re: from package import * without overwriting similarly named functions?

2008-10-24 Thread Lawrence D'Oliveiro
In message
[EMAIL PROTECTED],
Reckoner wrote:

 I have multiple packages that have many of the same function names. Is
 it possible to do
 
 from package1 import *
 from package2 import *
 
 without overwriting similarly named objects from package1 with
 material in package2?

Avoid wildcard imports.
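
To see why, two stdlib modules that share a function name make the
point; qualified imports keep both callable:

```python
import math
import cmath

# Both modules define sqrt(); with explicit namespaces neither clobbers the other
print(math.sqrt(4))    # 2.0
print(cmath.sqrt(-1))  # 1j
```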

--
http://mail.python.org/mailman/listinfo/python-list


Global dictionary or class variables

2008-10-24 Thread Mr . SpOOn
Hi,
in an application I have to use some variables with fixed values.

For example, I'm working with musical notes, so I have a global
dictionary like this:

natural_notes = {'C': 0, 'D': 2, 'E': 4 }

This actually works fine. I was just thinking if it wasn't better to
use class variables.

Since I have a class Note, I could write:

class Note:
C = 0
D = 2
...

Which style may be better? Are both bad practices?
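
For comparison, the two spellings side by side (filling in the rest of
the natural scale's semitone offsets):

```python
natural_notes = {'C': 0, 'D': 2, 'E': 4, 'F': 5, 'G': 7, 'A': 9, 'B': 11}

class Note:
    # Same data as class attributes; Note.E reads a bit tighter
    C, D, E, F, G, A, B = 0, 2, 4, 5, 7, 9, 11

print(natural_notes['E'], Note.E)  # 4 4
```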
--
http://mail.python.org/mailman/listinfo/python-list


Re: Question about scope

2008-10-24 Thread Lawrence D'Oliveiro
In message [EMAIL PROTECTED], Steven D'Aprano
wrote:

 Why is it a class attribute instead of an instance attribute?

Singleton class.
--
http://mail.python.org/mailman/listinfo/python-list


Re: What's the perfect (OS independent) way of storing filepaths ?

2008-10-24 Thread Paul McNett

Steven D'Aprano wrote:

On Sun, 19 Oct 2008 20:50:46 +0200, Stef Mientki wrote:


Duncan, in windows it's beginning to become less common to store settings in
Docs&Settings,
because these directories are destroyed by roaming profiles 


The directories aren't destroyed by roaming profiles. When the user logs 
out, they get copied to the server. When they log in at a different 
machine, they get copied to the workstation.


So configuration information saved in c:\Documents and 
Settings\pmcnett\Application Data\My Application gets conveniently 
migrated from machine to machine where I happen to login.


A really nice feature.



Isn't *everything* destroyed by roaming profiles? *wink*


I've heard such bashing of roaming profiles but I've never had anything 
but love for them. That is, until people go and start saving their 
movies and pictures in My Documents. Which is why I set their home 
directory to a server share and have them save their docs there.



Seriously, I don't know anyone who has anything nice to say about roaming 
profiles.


I loathe Windows, but roaming profiles was one thing they did (mostly) 
right. I couldn't be happy in a world that didn't include roaming profiles.


Perhaps I'm not seeing the worst of it as I use Samba on Linux as the PDC?

Anyway, on Windows user configuration information should go in the 
user's Application Data directory. If you don't want it to roam, you can 
instead put it in the (hidden) Local Settings/Application Data directory.


Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: How to get the actual address of a object

2008-10-24 Thread James Mills
On Sat, Oct 25, 2008 at 12:25 AM,  [EMAIL PROTECTED] wrote:
 Thank you, James.
 My original idea was to study all the contents of any object. I can do
 it by using module ctypes.

You can simply just query its attributes.

Use __dict__ or dir(obj)

Example:

>>> x = 10
>>> dir(x)
['__abs__', '__add__', '__and__', '__class__', '__cmp__',
'__coerce__', '__delattr__', '__div__', '__divmod__', '__doc__',
'__float__', '__floordiv__', '__getattribute__', '__getnewargs__',
'__hash__', '__hex__', '__index__', '__init__', '__int__',
'__invert__', '__long__', '__lshift__', '__mod__', '__mul__',
'__neg__', '__new__', '__nonzero__', '__oct__', '__or__', '__pos__',
'__pow__', '__radd__', '__rand__', '__rdiv__', '__rdivmod__',
'__reduce__', '__reduce_ex__', '__repr__', '__rfloordiv__',
'__rlshift__', '__rmod__', '__rmul__', '__ror__', '__rpow__',
'__rrshift__', '__rshift__', '__rsub__', '__rtruediv__', '__rxor__',
'__setattr__', '__str__', '__sub__', '__truediv__', '__xor__']
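
Back to the original goal of peeking at an object's memory: in CPython
specifically, id() is the object's address, and ctypes can copy the
raw bytes out (implementation-specific, for study only):

```python
import ctypes
import sys

x = 10
# CPython detail: id(x) is the address of the object's C struct;
# string_at() copies sys.getsizeof(x) bytes starting there.
raw = ctypes.string_at(id(x), sys.getsizeof(x))
print(len(raw))
```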


cheers
James

-- 
--
-- Problems are solved by method
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread Andy O'Meara

Another great post, Glenn!!  Very well laid-out and posed!! Thanks for
taking the time to lay all that out.


 Questions for Andy: is the type of work you want to do in independent
 threads mostly pure Python? Or with libraries that you can control to
 some extent? Are those libraries reentrant? Could they be made
 reentrant? How much of the Python standard library would need to be
 available in reentrant mode to provide useful functionality for those
 threads? I think you want PyC


I think you've defined everything perfectly, and you're of
course correct about my love for the PyC model.  :^)

Like any software that's meant to be used without restrictions, our
code and frameworks always use a context object pattern so that
there's never any non-const global/shared data.  I would go so far as to
say that this is the case with more performance-oriented software than
you may think since it's usually a given for us to have to be parallel
friendly in as many ways as possible.  Perhaps Patrick can back me up
there.

As to what modules are essential...  As you point out, once
reentrant module implementations caught on in PyC or hybrid world, I
think we'd start to see real effort to whip them into compliance--
there's just so much to be gained imho.  But to answer the question,
there's the obvious ones (operator, math, etc), string/buffer
processing (string, re), C bridge stuff (struct, array), and OS basics
(time, file system, etc).  Nice-to-haves would be buffer and image
decompression (zlib, libpng, etc), crypto modules, and xml. As far as
I can imagine, I have to believe all of these modules already contain
little, if any, global data, so I have to believe they'd be super easy
to make PyC happy.  Patrick, what would you see you guys using?


  That's the rub...  In our case, we're doing image and video
  manipulation--stuff not good to be messaging from address space to
  address space.  The same argument holds for numerical processing with
  large data sets.  The workers handing back huge data sets via
  messaging isn't very attractive.

 In the module multiprocessing environment could you not use shared
 memory, then, for the large shared data items?


As I understand things, multiprocessing puts stuff in a child
process (i.e. a separate address space), so the only way to get stuff to/
from it is via IPC, which can include a shared/mapped memory region.
Unfortunately, a shared address region doesn't work when you have
large and opaque objects (e.g. a rendered CoreVideo movie in the
QuickTime API or 300 megs of audio data that just went through a
DSP).  Then you've got the hit of serialization if you've got
intricate data structures (that would normally need to be
serialized, such as a hashtable or something).  Also, if I may speak
for commercial developers out there who are just looking to get the
job done without new code, it's usually always preferable to use just a
single high-level sync object (for when the job is complete) than to
start child processes and use IPC.  The former is just WAY less
code, plain and simple.


Andy


--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread Glenn Linderman
On approximately 10/24/2008 1:09 PM, came the following characters from 
the keyboard of Rhamphoryncus:

On Oct 24, 1:02 pm, Glenn Linderman [EMAIL PROTECTED] wrote:

 [... PyA/PyB/PyC/PyD interpreter definitions snipped; quoted in full earlier in the thread ...]

PyE: objects are reclassified as shareable or non-shareable, many
types are now only allowed to be shareable.  A module and its classes
become shareable with the use of a __future__ import, and their
shareddict uses a read-write lock for scalability.  Most other
shareable objects are immutable.  Each thread is run in its own
private monitor, and thus protected from the normal threading memory
model nasties.  Alas, this gives you all the semantics, but you still
need scalable garbage collection.. and CPython's refcounting needs the
GIL.


Hmm.  So I think your PyE is an attempt to be more 
explicit about what I said above in PyC: "PyC threads do not share data 
between threads except by explicit interfaces."  I consider your 
definitions of shared data types somewhat orthogonal to the types of 
threads, in that both PyA and PyC threads could use these new shared 
data items.


I think/hope that you meant that many types are now only allowed to be 
non-shareable?  At least, I think that should be the default; they 
should be within the context of a single, independent interpreter 
instance, so other interpreters don't even know they exist, much less 
how to share them.  If so, then I understand most of the rest of your 
paragraph, and it could be a way of providing shared objects, perhaps.


I don't understand the comment that "CPython's refcounting needs the 
GIL"... yes, it needs the GIL if multiple threads see the object, but not 
for private objects... only one thread uses the private objects... so 
today's refcounting should suffice... with each interpreter doing its 
own refcounting and collecting its own garbage.


Shared objects would have to do refcounting in a protected way, under 
some lock.  One easy solution would be to have just two types of 
objects; non-shared private objects in a thread, and global shared 
objects; access to global shared objects would require grabbing the GIL, 
and then accessing the object, and releasing the GIL.   An interface 
could allow for grabbing 

Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread Jesse Noller
On Fri, Oct 24, 2008 at 4:51 PM, Andy O'Meara [EMAIL PROTECTED] wrote:

 In the module multiprocessing environment could you not use shared
 memory, then, for the large shared data items?


 As I understand things, the multiprocessing puts stuff in a child
 process (i.e. a separate address space), so the only to get stuff to/
 from it is via IPC, which can include a shared/mapped memory region.
 Unfortunately, a shared address region doesn't work when you have
 large and opaque objects (e.g. a rendered CoreVideo movie in the
 QuickTime API or 300 megs of audio data that just went through a
 DSP).  Then you've got the hit of serialization if you're got
 intricate data structures (that would normally would need to be
 serialized, such as a hashtable or something).  Also, if I may speak
 for commercial developers out there who are just looking to get the
 job done without new code, it's usually always preferable to just a
 single high level sync object (for when the job is complete) than to
 start a child processes and use IPC.  The former is just WAY less
 code, plain and simple.


Are you familiar with the API at all? Multiprocessing was designed to
mimic threading in about every way possible, the only restriction on
shared data is that it must be serializable, but event then you can
override or customize the behavior.

Also, inter process communication is done via pipes. It can also be
done with messages if you want to tweak the manager(s).
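
A minimal sketch of the shared-memory path (POSIX fork context assumed;
names are illustrative): the child mutates a multiprocessing.Array in
place, so the payload itself is never pickled:

```python
import multiprocessing

def double_all(shared):
    # Executes in the child; writes land directly in the shared buffer
    for i in range(len(shared)):
        shared[i] *= 2

def run():
    ctx = multiprocessing.get_context("fork")  # fork keeps the sketch guard-free
    buf = ctx.Array('d', [1.0, 2.0, 3.0])      # 'd' = C double
    p = ctx.Process(target=double_all, args=(buf,))
    p.start()
    p.join()
    return list(buf)

print(run())  # [2.0, 4.0, 6.0]
```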

-jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread Rhamphoryncus
On Oct 24, 2:59 pm, Glenn Linderman [EMAIL PROTECTED] wrote:
 On approximately 10/24/2008 1:09 PM, came the following characters from
 the keyboard of Rhamphoryncus:
  PyE: objects are reclassified as shareable or non-shareable, many
  types are now only allowed to be shareable.  A module and its classes
  become shareable with the use of a __future__ import, and their
  shareddict uses a read-write lock for scalability.  Most other
  shareable objects are immutable.  Each thread is run in its own
  private monitor, and thus protected from the normal threading memory
  model nasties.  Alas, this gives you all the semantics, but you still
  need scalable garbage collection.. and CPython's refcounting needs the
  GIL.

 Hmm.  So I think your PyE is an attempt to be more
 explicit about what I said above in PyC: "PyC threads do not share data
 between threads except by explicit interfaces."  I consider your
 definitions of shared data types somewhat orthogonal to the types of
 threads, in that both PyA and PyC threads could use these new shared
 data items.

Unlike PyC, there's a *lot* shared by default (classes, modules,
function), but it requires only minimal recoding.  It's as close to
having your cake and eating it too as you're gonna get.


 I think/hope that you meant that many types are now only allowed to be
 non-shareable?  At least, I think that should be the default; they
 should be within the context of a single, independent interpreter
 instance, so other interpreters don't even know they exist, much less
 how to share them.  If so, then I understand most of the rest of your
 paragraph, and it could be a way of providing shared objects, perhaps.

There aren't multiple interpreters under my model.  You only need
one.  Instead, you create a monitor, and run a thread on it.  A list
is not shareable, so it can only be used within the monitor it's
created within, but the list type object is shareable.

I've no interest in *requiring* a C/C++ extension to communicate
between isolated interpreters.  Without that they're really no better
than processes.
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread Rhamphoryncus
On Oct 24, 3:02 pm, Glenn Linderman [EMAIL PROTECTED] wrote:
 On approximately 10/23/2008 2:24 PM, came the following characters from the
 keyboard of Rhamphoryncus:

 On Oct 23, 11:30 am, Glenn Linderman [EMAIL PROTECTED] wrote:


 On approximately 10/23/2008 12:24 AM, came the following characters from
 the keyboard of Christian Heimes

 Andy wrote:
 I'm very - not absolute, but very - sure that Guido and the initial
 designers of Python would have added the GIL anyway. The GIL makes
 Python faster on single core machines and more stable on multi core
 machines.

 Actually, the GIL doesn't make Python faster; it is a design decision that
 reduces the overhead of lock acquisition, while still allowing use of global
 variables.

 Using finer-grained locks has higher run-time cost; eliminating the use of
 global variables has a higher programmer-time cost, but would actually run
 faster and more concurrently than using a GIL. Especially on a
 multi-core/multi-CPU machine.

Those globals include classes, modules, and functions.  You can't
have *any* objects shared.  Your interpreters are entirely isolated,
much like processes (and we all start wondering why you don't use
processes in the first place.)

Or use safethread.  It imposes safe semantics on shared objects, so
you can keep your global classes, modules, and functions.  Still need
garbage collection though, and on CPython that means refcounting and
the GIL.


 Another peeve I have is his characterization of the observer pattern.
 The generalized form of the problem exists in both single-threaded
 sequential programs, in the form of unexpected reentrancy, and message
 passing, with infinite CPU usage or infinite number of pending
 messages.


 So how do you get reentrancy in a single-threaded sequential program? I
 think only via recursion? Which isn't a serious issue for the observer
 pattern. If you add interrupts, then your program is no longer sequential.

Sorry, I meant recursion.  Why isn't it a serious issue for
single-threaded programs?  Just the fact that it's much easier to
handle when it does happen?


 Try looking at it on another level: when your CPU wants to read from a
 bit of memory controlled by another CPU it sends them a message
 requesting they get it for us.  They send back a message containing
 that memory.  They also note we have it, in case they want to modify
 it later.  We also note where we got it, in case we want to modify it
 (and not wait for them to do modifications for us).


 I understand that level... one of my degrees is in EE, and I started college
 wanting to design computers (at about the time the first microprocessor chip
 came along, and they, of course, have now taken over). But I was side-lined
 by the malleability of software, and have mostly practiced software during
 my career.

 Anyway, that is the level that Herb Sutter was describing in the Dr Dobbs
 articles I mentioned. And the overhead of doing that at the level of a cache
 line is high, if there is lots of contention for particular memory locations
 between threads running on different cores/CPUs. So to achieve concurrency,
 you must not only limit explicit software locks, but must also avoid memory
 layouts where data needed by different cores/CPUs are in the same cache
 line.

I suspect they'll end up redesigning the caching to use a size and
alignment of 64 bits (or smaller).  Same cache line size, but with
masking.

You still need to minimize contention of course, but that should at
least be more predictable.  Having two unrelated mallocs contend could
suck.


 Message passing vs shared memory isn't really a yes/no question.  It's
 about ratios, usage patterns, and tradeoffs.  *All* programs will
 share data, but in what way?  If it's just the code itself you can
 move the cache validation into software and simplify the CPU, making
 it faster.  If the shared data is a lot more than that, and you use it
 to coordinate accesses, then it'll be faster to have it in hardware.


 I agree there are tradeoffs... unfortunately, the hardware architectures
 vary, and the languages don't generally understand the hardware. So then it
 becomes an OS API, which adds the overhead of an OS API call to the cost of
 the synchronization... It could instead be (and in clever applications is) a
 non-portable assembly level function that wraps an OS locking or waiting
 API.

In practice I highly doubt we'll see anything that doesn't extend
traditional threading (posix threads, whatever MS has, etc).


 Nonetheless, while putting the shared data accesses in hardware might be
 more efficient per unit operation, there are still tradeoffs: A software
 solution can group multiple accesses under a single lock acquisition; the
 hardware probably doesn't have enough smarts to do that. So it may well
 require many more hardware unit operations for the same overall concurrently
 executed function, and the resulting performance may not be any better.

Speculative ll/sc? ;)
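The lock-grouping point above can be sketched in Python (all names are hypothetical): a single `threading.Lock` acquisition covers several related updates, whereas per-access hardware synchronization could not batch them like this.

```python
import threading

# Hypothetical shared state: three counters that must stay mutually consistent.
stats = {"reads": 0, "writes": 0, "total": 0}
lock = threading.Lock()

def record(reads, writes):
    # One lock acquisition groups all three updates into a single
    # critical section, instead of synchronizing each access separately.
    with lock:
        stats["reads"] += reads
        stats["writes"] += writes
        stats["total"] += reads + writes

threads = [threading.Thread(target=record, args=(2, 1)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(stats)  # {'reads': 16, 'writes': 8, 'total': 24}
```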


 

RE: Python-list Digest, Vol 61, Issue 368

2008-10-24 Thread Warren DeLano
 From: Andy O'Meara [EMAIL PROTECTED]

 Unfortunately, a shared address region doesn't work when you have
 large and opaque objects (e.g. a rendered CoreVideo movie in the
 QuickTime API or 300 megs of audio data that just went through a
 DSP).  Then you've got the hit of serialization if you've got
 intricate data structures (ones that would normally need to be
 serialized, such as a hashtable or something).  Also, if I may speak
 for commercial developers out there who are just looking to get the
 job done without new code, it's usually preferable to use just a
 single high level sync object (for when the job is complete) than to

Just to chime in as a CPython-based ISV from the scientific visualization
realm: we face the same problems and limitations due to the lack of threading
(or at least multiple independent interpreters).  A typical use case
might be a 1-3 GB dataset (molecular dynamics trajectory and derived
state) subjected to asynchronous random read/write by N threads each
running on one of N cores in parallel.

We get by jettisoning CPython almost entirely and working in C for all
tasks other than the most basic operations: thread creation, workload
scheduling, mutexes, and thread deletion.  

The biggest problem is not for the most compute-intensive tasks (where
use of C is justified), but for those relatively short-running but
algorithmically complex tasks which could be done much more easily from
Python than from C (e.g. data organization, U.I. survey/present tasks,
rarely used transformations, ad hoc scripting experiments, etc.).
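For what it's worth, a hedged sketch of the large-buffer case: `multiprocessing.shared_memory` (from much later Python releases, 3.8+) lets N workers mutate one buffer in place without serializing the payload. The sizes and names below are illustrative, and the disjoint offsets make explicit locking unnecessary.

```python
from multiprocessing import Process, shared_memory

def worker(name, offset, count):
    # Attach to the existing block by name; the payload itself is never
    # pickled or copied -- every process mutates the one shared buffer.
    shm = shared_memory.SharedMemory(name=name)
    for i in range(offset, offset + count):
        shm.buf[i] += 1
    shm.close()

def run_demo(nproc=4, chunk=256):
    # A tiny stand-in for a multi-gigabyte dataset.
    shm = shared_memory.SharedMemory(create=True, size=nproc * chunk)
    # Each worker owns a disjoint slice, so no lock is needed here.
    procs = [Process(target=worker, args=(shm.name, i * chunk, chunk))
             for i in range(nproc)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    data = bytes(shm.buf)
    shm.close()
    shm.unlink()
    return data

if __name__ == "__main__":
    print(run_demo() == b"\x01" * 1024)
```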

Cheers,
Warren


--
http://mail.python.org/mailman/listinfo/python-list


Re: Global dictionary or class variables

2008-10-24 Thread Chris Rebert
On Fri, Oct 24, 2008 at 1:44 PM, Mr. SpOOn [EMAIL PROTECTED] wrote:
 Hi,
 in an application I have to use some variables with fixed values.

 For example, I'm working with musical notes, so I have a global
 dictionary like this:

 natural_notes = {'C': 0, 'D': 2, 'E': 4 }

 This actually works fine. I was just thinking if it wasn't better to
 use class variables.

 Since I have a class Note, I could write:

 class Note:
C = 0
D = 2
...

 Which style maybe better? Are both bad practices?

Depends. Does your program use the note values as named constants, or
does it look up the values dynamically based on the note name?
If the former, then the class is marginally better, though putting the
assignments in a (possibly separate) module would probably be best. If
the latter, then the dictionary is fine.
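Both styles from the original post side by side; the `semitone` helper is a hypothetical illustration of the dynamic-lookup case, and `getattr` can bridge the two if needed.

```python
# Named-constant style: attribute access, readable at the call site.
class Note:
    C = 0
    D = 2
    E = 4

# Dynamic-lookup style: the key arrives as data at runtime.
natural_notes = {'C': 0, 'D': 2, 'E': 4}

def semitone(name):
    # A dict shines when the note name comes in as a string.
    return natural_notes[name]

print(Note.D)              # 2
print(semitone('E'))       # 4
print(getattr(Note, 'C'))  # 0 (dynamic access on the class, if needed)
```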

Cheers,
Chris
-- 
Follow the path of the Iguana...
http://rebertia.com

 --
 http://mail.python.org/mailman/listinfo/python-list

--
http://mail.python.org/mailman/listinfo/python-list


ANN: gui_support v1.5, a convenience library for wxPython

2008-10-24 Thread Stef Mientki

hello,

Although I personally hate to release a new version so soon,
the error reporting is so essential that updating is a must.

V1.5 changes
- errors (caught by the library) will now give a normal error report
- GUI preview function now available in this library

gui_support is a library for easy creation of GUI designs in wxPython.

Brief documentation can be found here
http://oase.uci.kun.nl/~mientki/data_www/pylab_works/pw_gui_support.html
(as this website will soon be moved, this doc page can always be found
through the redirector http://pic.flappie.nl
under the paragraph PyLab_Works | GUI_support)

Download:
http://pylab-works.googlecode.com/files/Data_Python_Test_V1_5.zip

cheers,
Stef
--
http://mail.python.org/mailman/listinfo/python-list


Re: dictionary

2008-10-24 Thread Craig Allen
When I was a baby programmer, even vendors didn't have documentation to
throw out... we just viewed the disassembled opcodes to find out how
things worked... we never did find out much, but I could make the speaker
click, and we were happy with it.
--
http://mail.python.org/mailman/listinfo/python-list


Re: How to examine the inheritance of a class?

2008-10-24 Thread Craig Allen

 Developer. NOT User.

I go around and around on this issue, and have ended up considering
anyone using my code a user; if it's a library or class system, that
user is likely a programmer.  I don't really think there is a strong
distinction... more and more, users can do sophisticated configuration
which, while not like what we do professionally as software engineers,
is not fundamentally different from programming, making the distinction
arbitrary.

I think the issue here for the term "user" is that there are many
kinds of user, many ways they can customize the system, and the
programmer/developer is just one kind of user.  No?
--
http://mail.python.org/mailman/listinfo/python-list


Re: How to examine the inheritance of a class?

2008-10-24 Thread Craig Allen

 Thank you, Chris.  Class.__bases__ is exactly what I wanted to see.
 And I thought I had tried isinstance(), and found it lacking -- but I
 just tried it again, and it does what I hoped it would do.

While isinstance is no doubt the proper way to access this
information, you may have run into problems because isinstance and
duck typing do not always work well together: having a particular
interface does not mean you inherited that interface.  Checking
__bases__ will not solve the problem either, though, so it's best to
use isinstance if you can and to document what you do well enough to
help people avoid breaking the assumptions of your system.

Not that you didn't know that... but I thought it was worth saying
because I, at least, increasingly use metaclass-style techniques
which unfortunately do not play well with certain methods of checking
type.
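A minimal illustration of that gap (class names hypothetical): __bases__ lists only direct parents, isinstance follows the whole chain, and neither detects a duck-typed match.

```python
class Duck:
    def quack(self):
        return "quack"

class Mallard(Duck):
    pass

class Robot:
    # Duck-typed: same interface, no inheritance relationship.
    def quack(self):
        return "quack"

print(Mallard.__bases__)            # the direct parent: (Duck,)
print(isinstance(Mallard(), Duck))  # True -- follows the whole chain
print(isinstance(Robot(), Duck))    # False -- duck typing is invisible here
print(hasattr(Robot(), "quack"))    # True
```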

-cba



--
http://mail.python.org/mailman/listinfo/python-list

