ANN: XYZCommander-0.0.2

2009-10-23 Thread Max E. Kuznecov
I'm pleased to announce XYZCommander version 0.0.2!

XYZCommander is a pure console visual file manager.
Main features:

* Tight integration with the Python run-time system — most of the
settings can be changed on the fly via the management console.
* Powerful configuration system - define your own actions, aliases,
internal commands, and key bindings.
* Extensible plug-in system - even core functionality is implemented
mainly using plug-ins, keeping the base system small and clean.
* Events & hooks subsystem - a flexible way of reacting to certain
system events.
* Customizable look-and-feel - the look of every widget component can be
changed via skins.
* Unicode support

XYZCommander runs on *nix platforms and requires Python >= 2.5
and the urwid library.

Change log for 0.0.2:

* [UI] New box widget: ButtonBox.
Widget shows a dialog box with custom buttons.

* [UI] New method :sys:panel:get_active().
Method returns a list of tagged VFSObject instances, or a list with the
single selected object if none are tagged.

* [UI] New method :sys:panel:get_current().
Method returns the VFSObject instance of the current directory.

* [UI] New method :sys:panel:vfs_driver().
Method returns the VFS driver used by the current object.

* [UI] New method :sys:cmd:put().
Method allows putting an arbitrary string onto the command line.

* [UI] New method :sys:panel:search_cycle().
Method allows cyclically searching objects in the current
working directory, downward from the selected file.
Set as the default binding for META-S.

* [PLUGIN] New method :vfs:vfsutils:remove()
Method shows a dialog for [recursively] deleting VFSObjects.
Bound to F8.

* [PLUGIN] New method :vfs:vfsutils:copy()
Method shows a dialog for [recursively] copying VFSObjects.
Bound to F5.

* [PLUGIN] New method :vfs:vfsutils:move()
Method shows a dialog for [recursively] moving VFSObjects.
Bound to F6.

* [PLUGIN] New plugin :misc:where.
Plugin provides the ability to load/restore path locations on both panels.

* [PLUGIN] New plugin :fsrules:magic.
Plugin adds the ability to match files based on the magic database.

* [VFS] Implemented Tar VFS module. Default actions are set to use it
for files named *.tar, *.tar.gz and *.tar.bz2.

* [CORE] New events and hooks mechanism.
It is now possible to set your own hooks on all events.

* [SKINS] New box rule attribute - input.
It is used for all text input widgets.

-- 
~syhpoon
-- 
http://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations/


Announcing IronPython 2.0.3

2009-10-23 Thread David DiCato
Hello Python Community,

I am delighted to announce the release of IronPython 2.0.3. This release is a 
minor update to IronPython 2.0.2 and the latest in a series of CPython 
2.5-compatible releases running on the .NET platform. Again, our priority was 
to make IronPython 2.0.3 a bugfix release that remains backwards-compatible 
with IronPython 2.0.2. In particular, we focused on issues the IronPython 
community brought to our attention through http://www.codeplex.com/. As such, 
there have been important improvements on the compatibility and stability of 
IronPython as summarized below.

You can download IronPython 2.0.3 at: 
http://ironpython.codeplex.com/Release/ProjectReleases.aspx?ReleaseId=30416

Silverlight users: As of IronPython 2.0.2, a new version of Silverlight, namely 
Silverlight 3, is required to build the Silverlight Release and Silverlight 
Debug configurations of IronPython.sln. Please update Silverlight accordingly 
if you intend to do so.

The following issues were fixed:

* 24224 - UTF-8 encoding sometimes broken!

* 19510 - Need to recognize DefaultMemberAttribute for 
__getitem__/__setitem__

* 24129 - 2.0.3: not object-with-__len__-returning-nonzero should not 
be 1

* 21976 - 2.0.3: Executables created by Pyc.py broken without access to 
original Python sources

* 24452 - 2.0: Fix FxCop warnings

* 24453 - 2.0: Cannot build FxCop build configuration of 
IronPython.Modules.csproj

* 24571 - 2.0.3: help(Array[Int32]) causes a traceback

* 24373 - empty sys.argv in compiled scripts for 2.0

* 24475 - Creating a low-permission version of PythonEngine fails post 
2.0.0

* An issue where sys.argv lacks its first argument (the executable 
name) in compiled scripts

* A failure in partial trust on Windows 7 due to a SecurityException.

Special thanks go out to kanryu, fwereade, kuno, kylehr, and Vassi for 
bringing these issues to our attention. Thanks for spending the time and effort 
that allows us to continue improving IronPython!

- The IronPython Team



[ANN] Advanced Scientific Programming in Python Winter School in Warsaw, Poland

2009-10-23 Thread Tiziano Zito

Advanced Scientific Programming in Python

a Winter School by the G-Node and University of Warsaw

Scientists spend more and more time writing, maintaining, and
debugging software. While techniques for doing this efficiently have
evolved, only a few scientists actually use them. As a result, instead
of doing their research, they spend far too much time writing
deficient code and reinventing the wheel. In this course we will
present a selection of advanced programming techniques with
theoretical lectures and practical exercises tailored to the needs
of a programming scientist. New skills will be tested in a real
programming project: we will team up to develop an entertaining
scientific computer game.

We'll use the Python programming language for the entire course.
Python works as a simple programming language for beginners, but
more importantly, it also works great in scientific simulations and
data analysis. Clean language design and easy extensibility are
driving Python to become a standard tool for scientific computing.
Some of the most useful open source libraries for scientific
computing and visualization will be presented.

This winter school is targeted at Post-docs and PhD students from
all areas. Substantial proficiency in Python or in another language
(e.g. Java, C/C++, MATLAB, Mathematica) is absolutely required. An
optional, one-day introduction to Python is offered to participants
without prior experience with the language.

Date and Location: February 8th — 12th, 2010. Warsaw, Poland.

Preliminary Program:
- Day 0 (Mon Feb 8) — [Optional] Dive into Python
- Day 1 (Tue Feb 9) — Software Carpentry
   • Documenting code and using version control
   • Test-driven development and unit testing
   • Debugging, profiling and benchmarking techniques
   • Object-oriented programming, design patterns, and agile
 programming
- Day 2 (Wed Feb 10) — Scientific Tools for Python
   • NumPy, SciPy, Matplotlib
   • Data serialization: from pickle to databases
   • Programming project in the afternoon
- Day 3 (Thu Feb 11) — The Quest for Speed
   • Writing parallel applications in Python
   • When parallelization does not help: the starving CPUs 
 problem
   • Programming project in the afternoon
- Day 4 (Fri Feb 12) — Practical Software Development
   • Software design
   • Efficient programming in teams
   • Quality Assurance
   • Programming project final

Applications:
Applications should be sent before December 6th, 2009 to: 
python-wintersch...@g-node.org
No fee is charged but participants should take care of travel,
living, and accommodation expenses. Applications should include full
contact information (name, affiliation, email & phone), a *short* CV
and a *short* statement addressing the following questions:

  • What is your educational background?
  • What experience do you have in programming?
  • Why do you think “Advanced Scientific Programming in Python” is
an appropriate course for your skill profile?

Candidates will be selected on the basis of their profile. Places
are limited: early application is recommended.  Notifications of
acceptance will be sent by December 14th, 2009.

Faculty
• Francesc Alted, author of PyTables, 
  Castelló de la Plana, Spain [Day 3]
• Pietro Berkes, Volen Center for Complex Systems, 
  Brandeis University, USA [Day 1]
• Zbigniew Jędrzejewski-Szmek, Institute of Experimental Physics,
  University of Warsaw, Poland [Day 0]
• Eilif Muller, Laboratory of Computational Neuroscience,
  Ecole Polytechnique Fédérale de Lausanne, Switzerland [Day 3]
• Bartosz Teleńczuk, Institute for Theoretical Biology,
  Humboldt-Universität zu Berlin, Germany [Day 2]
• Niko Wilbert, Institute for Theoretical Biology,
  Humboldt-Universität zu Berlin, Germany [Day 1]
• Tiziano Zito, Bernstein Center for Computational Neuroscience,
  Berlin, Germany [Day 4]

Organized by Piotr Durka, Joanna and Zbigniew Jędrzejewscy-Szmek 
(Institute of Experimental Physics, University of Warsaw), and
Tiziano Zito (German Neuroinformatics Node of the INCF).

Website: http://www.g-node.org/python-winterschool
Contact: python-wintersch...@g-node.org


ANN: XYZCommander-0.0.2 (added missing homepage URL)

2009-10-23 Thread Max E. Kuznecov
I'm pleased to announce XYZCommander version 0.0.2!

XYZCommander is a pure console visual file manager.
Main features:

   * Tight integration with the Python run-time system — most of the
settings can be changed on the fly via the management console.
   * Powerful configuration system - define your own actions, aliases,
internal commands, and key bindings.
   * Extensible plug-in system - even core functionality is implemented
mainly using plug-ins, keeping the base system small and clean.
   * Events & hooks subsystem - a flexible way of reacting to certain
system events.
   * Customizable look-and-feel - the look of every widget component can be
changed via skins.
   * Unicode support

XYZCommander runs on *nix platforms and requires Python >= 2.5
and the urwid library.

Change log for 0.0.2:

* [UI] New box widget: ButtonBox.
Widget shows a dialog box with custom buttons.

* [UI] New method :sys:panel:get_active().
Method returns a list of tagged VFSObject instances, or a list with the
single selected object if none are tagged.

* [UI] New method :sys:panel:get_current().
Method returns the VFSObject instance of the current directory.

* [UI] New method :sys:panel:vfs_driver().
Method returns the VFS driver used by the current object.

* [UI] New method :sys:cmd:put().
Method allows putting an arbitrary string onto the command line.

* [UI] New method :sys:panel:search_cycle().
Method allows cyclically searching objects in the current
working directory, downward from the selected file.
Set as the default binding for META-S.

* [PLUGIN] New method :vfs:vfsutils:remove()
Method shows a dialog for [recursively] deleting VFSObjects.
Bound to F8.

* [PLUGIN] New method :vfs:vfsutils:copy()
Method shows a dialog for [recursively] copying VFSObjects.
Bound to F5.

* [PLUGIN] New method :vfs:vfsutils:move()
Method shows a dialog for [recursively] moving VFSObjects.
Bound to F6.

* [PLUGIN] New plugin :misc:where.
Plugin provides the ability to load/restore path locations on both panels.

* [PLUGIN] New plugin :fsrules:magic.
Plugin adds the ability to match files based on the magic database.

* [VFS] Implemented Tar VFS module. Default actions are set to use it
for files named *.tar, *.tar.gz and *.tar.bz2.

* [CORE] New events and hooks mechanism.
It is now possible to set your own hooks on all events.

* [SKINS] New box rule attribute - input.
It is used for all text input widgets.

Homepage: http://xyzcmd.syhpoon.name/

-- 
~syhpoon


PAMIE and beautifulsoup problem

2009-10-23 Thread elca

Hello,
I'm currently writing a web-scraping script, and I chose PAMIE for it.
I'm new to Python and to programming in general, so I'm hoping that
PAMIE will make it easier to write a script that works with
win32-python.
While writing the script I ran into two problems.
First, I want my script to use BeautifulSoup together with PAMIE.
I googled around and found only one hint; the script below is what I
found, but it does not work for me.
I'm using the PAMIE 3 version, and even after switching to PAMIE 2b I
couldn't make it work.

from BeautifulSoup import BeautifulSoup
import cPAMIE
url = 'http://www.cnn.com'
ie = cPAMIE.PAMIE(url)
bs = BeautifulSoup(ie.pageText())

And the following is my script. How can I make it work?
from BeautifulSoup import BeautifulSoup
from PAM30 import PAMIE

url = 'http://www.cnn.com'
ie = PAMIE(url)
bs = BeautifulSoup(ie.pageText())

My second problem: while writing the script, I sometimes need a normal
IE interface.
Is it possible to switch PAMIE's IE interface to a plain IE
interface (InternetExplorer.Application)?
I don't want to open a new IE window to work with the normal IE
interface; I want to keep working with the current PAMIE IE window.
Sorry for my bad English.
Paul
-- 
View this message in context: 
http://www.nabble.com/PAMIE-and-beautifulsoup-problem-tp26021305p26021305.html
Sent from the Python - python-list mailing list archive at Nabble.com.

-- 
http://mail.python.org/mailman/listinfo/python-list


AttributeError: 'SSLSocket' object has no attribute 'producer_fifo'

2009-10-23 Thread VYAS ASHISH M-NTB837
I am getting the following error when I try to run my program, which
posts and receives XML to and from an HTTPS server.
 
 
Traceback (most recent call last):
  File "C:\Python31\lib\threading.py", line 509, in _bootstrap_inner
    self.run()
  File "C:\Python31\lib\threading.py", line 462, in run
    self._target(*self._args, **self._kwargs)
  File "D:\XPress_v1.3\XPress\Model.py", line 3328, in run
    asyncore.loop()
  File "C:\Python31\lib\asyncore.py", line 206, in loop
    poll_fun(timeout, map)
  File "C:\Python31\lib\asyncore.py", line 124, in poll
    is_w = obj.writable()
  File "C:\Python31\lib\asynchat.py", line 222, in writable
    return self.producer_fifo or (not self.connected)
  File "C:\Python31\lib\asyncore.py", line 398, in __getattr__
    return getattr(self.socket, attr)
AttributeError: 'SSLSocket' object has no attribute 'producer_fifo'

Does anyone know what is wrong here?
 
Regards,
Ashish 


Re: pyodbc - problem passing None as parameter

2009-10-23 Thread Frank Millman
Tim Golden wrote:
 Frank Millman wrote:

 cur.execute('select * from ctrl.dirusers where todate is ?', None)
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
 pyodbc.ProgrammingError: ('42000', "[42000] [Microsoft][ODBC SQL Server
 Driver][SQL Server]Incorrect syntax near '@P1'. (102) (SQLExecDirectW);
 [42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Statement(s)
 could not be prepared. (8180)")


 I would estimate that it's because you're using
 "where todate is ?" in your WHERE clause, which
 can only possibly be followed by a NULL -- thus making
 it a not-meaningfully-parameterisable query.


Thanks for the response, Tim.

Why do you say that this is not-meaningfully parameterisable?

I want the final WHERE clause to show 'WHERE todate IS NULL'.

As I showed in my first example, pyodbc has no problem converting ('select 
?', None) into 'select NULL'. I don't see why this should be any different.

For the record, psycopg2 on Postgresql has no problem with this.

As a workaround, I suppose I could scan the argument list and, if I find a 
None, substitute the corresponding ? with NULL in the SQL statement itself.
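That scan-and-substitute workaround could look something like this (a hedged sketch with an invented helper name; it is naive about ? characters that appear inside string literals, which a production version would need to skip):

```python
def inline_nulls(sql, params):
    """Replace each ? placeholder whose bound value is None with a
    literal NULL, returning the new SQL and the remaining parameters.

    Hypothetical helper illustrating the workaround described above.
    """
    pieces = sql.split("?")
    if len(pieces) - 1 != len(params):
        raise ValueError("placeholder/parameter count mismatch")
    out, remaining = [pieces[0]], []
    for value, piece in zip(params, pieces[1:]):
        if value is None:
            out.append("NULL")       # inline the NULL literal
        else:
            out.append("?")          # keep the placeholder
            remaining.append(value)
        out.append(piece)
    return "".join(out), remaining

sql, args = inline_nulls(
    "select * from ctrl.dirusers where todate is ? and userid = ?",
    [None, 42])
# sql  -> "select * from ctrl.dirusers where todate is NULL and userid = ?"
# args -> [42]
```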

It would be interesting to view the SQL statement that pyodbc passes to SQL 
Server for execution. Does anyone know if it is possible to set a parameter 
anywhere to enable this?

Thanks

Frank





Re: Help with code = Extract numerical value to variable

2009-10-23 Thread Steve
Sorry, I'm not being clear.

Input:
sold: 16
sold: 20
sold: 2
sold: 0
sold: storefront
7
0
storefront
sold
null

Output:
16
20
2
0
0
7
0
0
0
0
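Assuming the rule is "take the first run of digits on the line, else 0", one way to map that input to that output (a sketch; the poster's exact requirements may differ):

```python
import re

def extract_sold(line):
    """Return the first integer found on the line, else 0."""
    m = re.search(r'\d+', line)
    return int(m.group()) if m else 0

data = ["sold: 16", "sold: 20", "sold: 2", "sold: 0", "sold: storefront",
        "7", "0", "storefront", "sold", "null"]
print([extract_sold(line) for line in data])
# [16, 20, 2, 0, 0, 7, 0, 0, 0, 0]
```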





RE: AttributeError: 'SSLSocket' object has no attribute 'producer_fifo'

2009-10-23 Thread VYAS ASHISH M-NTB837
Tried using asyncore.dispatcher_with_send in place of
asynchat.async_chat, and after a few request-responses I get this:
 
Exception in thread Thread-2:
Traceback (most recent call last):
  File "C:\Python31\lib\threading.py", line 509, in _bootstrap_inner
    self.run()
  File "C:\Python31\lib\threading.py", line 462, in run
    self._target(*self._args, **self._kwargs)
  File "D:\XPress_v1.3\XPress\Model.py", line 3328, in run
    asyncore.loop()
  File "C:\Python31\lib\asyncore.py", line 206, in loop
    poll_fun(timeout, map)
  File "C:\Python31\lib\asyncore.py", line 124, in poll
    is_w = obj.writable()
  File "C:\Python31\lib\asyncore.py", line 516, in writable
    return (not self.connected) or len(self.out_buffer)
  File "C:\Python31\lib\asyncore.py", line 399, in __getattr__
    return getattr(self.socket, attr)
AttributeError: 'SSLSocket' object has no attribute 'out_buffer'
 
Can someone please shed some light on this?
 
Ashish



From: VYAS ASHISH M-NTB837 
Sent: Friday, October 23, 2009 11:35 AM
To: python-list@python.org
Subject: AttributeError: 'SSLSocket' object has no attribute
'producer_fifo'


I am getting the following error when I try to run my program to post
and receive xmls to an https server.
 
 
Traceback (most recent call last):
  File "C:\Python31\lib\threading.py", line 509, in _bootstrap_inner
    self.run()
  File "C:\Python31\lib\threading.py", line 462, in run
    self._target(*self._args, **self._kwargs)
  File "D:\XPress_v1.3\XPress\Model.py", line 3328, in run
    asyncore.loop()
  File "C:\Python31\lib\asyncore.py", line 206, in loop
    poll_fun(timeout, map)
  File "C:\Python31\lib\asyncore.py", line 124, in poll
    is_w = obj.writable()
  File "C:\Python31\lib\asynchat.py", line 222, in writable
    return self.producer_fifo or (not self.connected)
  File "C:\Python31\lib\asyncore.py", line 398, in __getattr__
    return getattr(self.socket, attr)
AttributeError: 'SSLSocket' object has no attribute 'producer_fifo'

Does anyone know what is wrong here?
 
Regards,
Ashish 


Re: Python 2.6 Deprecation Warnings with __new__ — Can someone explain why?

2009-10-23 Thread Terry Reedy

Consider this:

def blackhole(*args, **kwds): pass

The fact that it accepts args that it ignores could be considered 
misleading or even a bug.  Now modify it to do something useful, like 
returning a new, naked, immutable object that is the same for every call 
except for identity, and which still totally ignores the args as 
irrelevant. Call it object.__new__. It is just as misleading, if not 
more so.


In 3.x, the mistake has been fixed.
>>> object(1)
Traceback (most recent call last):
  File "<pyshell#9>", line 1, in <module>
    object(1)
TypeError: object.__new__() takes no parameters
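For a class that does take constructor arguments, the usual pattern is to absorb them in __new__ so that object.__new__ is called bare (a minimal sketch; Point and its fields are invented for illustration):

```python
class Point(object):
    def __new__(cls, *args, **kwds):
        # Swallow the extra arguments here; object.__new__ ignores them
        # anyway (2.6 warns about passing them, 3.x raises TypeError).
        return super(Point, cls).__new__(cls)

    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)
print(p.x, p.y)  # 1 2
```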

Terry Jan Reedy



Re: Cpython optimization

2009-10-23 Thread Terry Reedy

Qrees wrote:

Hello,

For my Master's dissertation I chose CPython optimization. That's why
I'd like to ask for your suggestions about what can be optimized. Well,
I know there is quite a lot. I've downloaded the source code (I plan to
work on CPython 2.6 and have downloaded the 2.6.3 release). Looking at
the code I've found comments like "this can be optimized by..." etc.,
but maybe you can guide me on what I should concentrate on in my work?


2.6.3 is about to be replaced with 2.6.4, but even that will not have 
improvements that have already been added to 3.x or 2.7. I strongly 
suggest that you work with the 3.2 code, since there is then a maximal 
chance that your improvements will actually be adopted.

 tjr



PyGUI menubar

2009-10-23 Thread dr k
I want to completely eliminate the menu bar from my PyGUI 2.0.5 application.
The obvious thing,

app.menus = []

doesn't work. I want not only the menus but the menu bar itself to disappear. Help?
[ A quick look at the code makes me suspect that it cannot be done
presently, but maybe there is a sneaky way. ]


Re: Cpython optimization

2009-10-23 Thread MRAB

Olof Bjarnason wrote:
[snip]
A short question after having read through most of this thread, on the 
same subject (time-optimizing CPython):


http://mail.python.org/pipermail/python-list/2007-September/098964.html

We are experiencing multi-core processor kernels more and more these 
days. But they are all still connected to the main memory, right?


To me that means, even though some algorithm can be split up into 
several threads that run on different cores of the processor, that any 
algorithm will be memory-speed limited. And memory access is a quite 
common operation for most algorithms.


Then one could ask oneself: what is the point of multiple cores, if 
memory bandwidth is the bottleneck? Specifically, what makes one expect 
any speed gain from parallelizing a sequential algorithm into four 
threads, say, when the memory shuffling is the same speed in both 
scenarios? (Assuming memory access is much slower than ADDs, JMPs and 
such instructions - a quite safe assumption I presume)


[ If every core had its own primary memory, the situation would be 
different. It would be more like the situation in a distributed/internet 
based system, spread over several computers. One could view each core as 
a separate computer actually ]



Don't forget about the on-chip cache! :-)


Re: a splitting headache

2009-10-23 Thread Paul Rudin
Mensanator mensana...@aol.com writes:

 No one ever considers making life easy for the user.

That's a bizarre assertion.



Re: a splitting headache

2009-10-23 Thread Mensanator
On Oct 22, 1:22 pm, Paul Rudin paul.nos...@rudin.co.uk wrote:
 Mensanator mensana...@aol.com writes:
  No one ever considers making life easy for the user.

 That's a bizarre assertion.

I have a bad habit of doing that.


Re: a splitting headache

2009-10-23 Thread Mensanator
On Oct 22, 2:23 pm, Falcolas garri...@gmail.com wrote:
 On Oct 22, 11:56 am, Mensanator mensana...@aol.com wrote:
 [massive snip]

  Yes, AFTER you read the docs.

 Not to feed the troll,

I prefer the term gadfly.

 but obligatory reference to XKCD:

 http://xkcd.com/293/



Re: Cpython optimization

2009-10-23 Thread Olof Bjarnason
2009/10/22 MRAB pyt...@mrabarnett.plus.com

 Olof Bjarnason wrote:
 [snip]

  A short question after having read through most of this thread, on the
 same subject (time-optimizing CPython):

 http://mail.python.org/pipermail/python-list/2007-September/098964.html

 We are experiencing multi-core processor kernels more and more these days.
 But they are all still connected to the main memory, right?

 To me that means, even though some algorithm can be split up into several
 threads that run on different cores of the processor, that any algorithm
 will be memory-speed limited. And memory access is a quite common operation
 for most algorithms.

 Then one could ask oneself: what is the point of multiple cores, if memory
 bandwidth is the bottleneck? Specifically, what makes one expect any speed
 gain from parallelizing a sequential algorithm into four threads, say, when
 the memory shuffling is the same speed in both scenarios? (Assuming memory
 access is much slower than ADDs, JMPs and such instructions - a quite safe
 assumption I presume)

 [ If every core had its own primary memory, the situation would be
 different. It would be more like the situation in a distributed/internet
 based system, spread over several computers. One could view each core as a
 separate computer actually ]

  Don't forget about the on-chip cache! :-)


Sorry for continuing slightly OT:

Yes, that makes matters even more interesting.

Caches for single-CPU boards speed up memory access quite dramatically. Are
caches on multi-core boards shared among the cores, or does each core have a
separate cache? I can only imagine how complicated the read/write logic of
these tiny electronic devices must be, in any case.

Of course caches make memory-access operations much faster, but I'm
guessing register instructions are still orders of magnitude faster than
(cached) memory access. (or else registers would not really be needed - you
could just view the whole primary memory as an array of registers!)

So I think my first question is still interesting: What is the point of
multiple cores, if memory is the bottleneck?
(it helps to think of algorithms such as line-drawing or ray-tracing, which
are easy to parallelize, yet I believe are still faster using a single core
instead of multiple because of the read/write-to-memory bottleneck. It does
help to bring more workers to the mine if only one is allowed access at a
time, or more likely, several are allowed yet it gets so crowded that
queues/waiting is inevitable)





-- 
twitter.com/olofb
olofb.wordpress.com
olofb.wordpress.com/tag/english


Re: Cpython optimization

2009-10-23 Thread Olof Bjarnason
2009/10/23 Olof Bjarnason olof.bjarna...@gmail.com



 2009/10/22 MRAB pyt...@mrabarnett.plus.com

 Olof Bjarnason wrote:
 [snip]

  A short question after having read through most of this thread, on the
 same subject (time-optimizing CPython):

 http://mail.python.org/pipermail/python-list/2007-September/098964.html

 We are experiencing multi-core processor kernels more and more these
 days. But they are all still connected to the main memory, right?

 To me that means, even though some algorithm can be split up into several
 threads that run on different cores of the processor, that any algorithm
 will be memory-speed limited. And memory access is a quite common operation
 for most algorithms.

 Then one could ask oneself: what is the point of multiple cores, if
 memory bandwidth is the bottleneck? Specifically, what makes one expect any
 speed gain from parallelizing a sequential algorithm into four threads, say,
 when the memory shuffling is the same speed in both scenarios? (Assuming
 memory access is much slower than ADDs, JMPs and such instructions - a quite
 safe assumption I presume)

 [ If every core had its own primary memory, the situation would be
 different. It would be more like the situation in a distributed/internet
 based system, spread over several computers. One could view each core as a
 separate computer actually ]

  Don't forget about the on-chip cache! :-)


 Sorry for continuing slightly OT:

 Yes, that makes matters even more interesting.

 Caches for single-cpu-boards speed up memory access quite dramatically. Are
 caches for multi-core boards shared among the cores? Or do each core have a
 separate cache? I can only imagine how complicated the read/write logic must
 be of these tiny electronic devices, in any case.

 Of course caches makes the memory access-operations must faster, but I'm
 guessing register instructions are still orders of magnitude faster than
 (cached) memory access. (or else registers would not really be needed - you
 could just view the whole primary memory as an array of registers!)

 So I think my first question is still interesting: What is the point of
 multiple cores, if memory is the bottleneck?
 (it helps to think of algorithms such as line-drawing or ray-tracing, which
 is easy to parallellize, yet I believe are still faster using a single core
 instead of multiple because of the read/write-to-memory-bottleneck. It does
 help to bring more workers to the mine if


Um, typo: "It does NOT help to ..."

only one is allowed access at a time, or more likely, several are allowed
 yet it gets so crowded that queues/waiting is inevitable)





 --
 twitter.com/olofb
 olofb.wordpress.com
 olofb.wordpress.com/tag/english




-- 
twitter.com/olofb
olofb.wordpress.com
olofb.wordpress.com/tag/english


Re: Help with my program

2009-10-23 Thread Lie Ryan

tanner barnes wrote:
Ok, so I'm in need of some help! I have a program with 2 classes, and in 
one, 4 variables are created (their name, height, weight, and grade). 
What I'm trying to make happen is to get the variables from the first 
class and use them in the second class.







There are many ways to do that, which one is appropriate depends on what 
you're trying to do.


1. Inheritance.

class A(object):
    def __init__(self):
        self.inst_var = 'instance variable'

class B(A):
    pass

a = A()
b = B()

2. Property.

class A(object):
    def __init__(self):
        self.inst_var = 'instance variable'

class B(object):
    @property
    def inst_var(self):
        return a.inst_var
    @inst_var.setter
    def inst_var(self, val):
        a.inst_var = val

a = A()
b = B()

3. Simple copying.

class A(object):
    def __init__(self):
        self.inst_var = 'instance variable'

class B(object):
    def __init__(self):
        self.inst_var = a.inst_var

a = A()
b = B()



Re: Help with my program

2009-10-23 Thread Alan Gauld


tanner barnes tanner...@hotmail.com wrote

I have a program with 2 classes and in one, 4 variables 
are created (their name, height, weight, and grade). 
What I'm trying to make happen is to get the variables 
from the first class and use them in the second class.


In general that's not a good idea. Each class should 
manage its own data, and you should pass objects 
between objects. So first check whether whatever your 
second class wants to do with the data from your first 
class can't be done by the first class itself via a method.


If you really do need to access the data there are 
several approaches:


1) Direct access via the object:

class C:
    def __init__(s):
        s.x = 42

class D:
    def f(s, c):
        s.y = c.x  # direct access to a C instance

2) Access via a getter method returning each variable:

class C:
    def __init__(s):
        s.x = 42
    def getX(s):
        return s.x

class D:
    def f(s, c):
        s.y = c.getX()  # access via a getter method

3) Access via a logical operation that returns all public data:

class C:
    def __init__(s):
        s.x = 42
        s.y = 66
    def getData(s):
        return (s.x, s.y)

class D:
    def f(s, c):
        s.x, s.y = c.getData()  # access via a logical access method

My personal preference in Python is (1) for occasional single-value 
access (akin to a friend function in C++) or (3) for more 
regular access requests.


But most of all I try to avoid cross-class data access wherever possible. 
As Peter Coad put it, "Objects should do it to themselves".

Otherwise known as the Law of Demeter.

HTH,

--
Alan Gauld
Author of the Learn to Program web site
http://www.alan-g.me.uk/



Re: problem with pythonw.exe

2009-10-23 Thread Christian Heimes
Martin Shaw wrote:
 I have a tkinter application running on my windows xp work machine and I am
 attempting to stop the console from appearing when the application runs.
 I've researched around and the way to do this appears to be to use
 pythonw.exe instead of python.exe. However when I try to run pythonw.exe
 from the command prompt it simply does nothing. I can't find anything like
 this where I've searched. I've tried reinstalling Python. Pythonw.exe
 appears to work when I run it through Cygwin; however, I don't really want to
 use Cygwin for this application. Any idea as to what might be the problem?

Windows GUI programs don't have any standard streams. stdin, stdout and
stderr aren't attached so any print statement or traceback isn't shown.
Could this explain the behavior?
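One way to check this explanation (my own sketch, not from Christian's post) is to redirect the standard streams to a log file at the top of the script, so that output and tracebacks from a pythonw.exe-launched program are captured instead of silently discarded:

```python
import sys

def log_streams(path):
    """Redirect stdout/stderr to a file so output and tracebacks
    from a console-less (pythonw.exe) process are not lost."""
    log = open(path, 'a')
    sys.stdout = log
    sys.stderr = log
    return log
```

Calling log_streams('app.log') before entering the Tkinter mainloop would make any traceback show up in app.log rather than vanishing.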

Christian
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pyodbc - problem passing None as parameter

2009-10-23 Thread Frank Millman
Tim Golden wrote:
 Frank Millman wrote:

 I want the final WHERE clause to show 'WHERE todate IS NULL'.

 Of course, I understand that. What I mean is that if a piece
 of SQL says:

  WHERE table.column IS ?

 then the only possible (meaningful) value ? can have is
 NULL (or None, in python-speak). In fact, the IS NULL is
 really a sort of unary operator-expression, not an operator
 with a value.



 As a workaround, I suppose I could scan the argument list, and if I find 
 a None, substitute the ? with NULL in the SQL statement itself.


 Well, the code I posted previously, although tedious taken to
 extremes, will do that. (I have seen and used code like that in a number 
 of production systems).


Thanks, Tim, for the detailed explanation. I appreciate your taking the 
time.

It was difficult for me to use the code that you posted, because under my 
present setup I define my SQL statements externally, and the WHERE clause 
has to conform to one or more rows of six columns -
Test (WHERE or AND or OR)
Left bracket (either present or blank)
Column name
Operator
Expression
Right bracket (either present or blank)

I therefore used the workaround that I mentioned above, and it seems to work 
fine, for both pyodbc and psycopg2.
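The workaround could be sketched like this (a hypothetical helper, names assumed; not Frank's actual code): scan the parameters, and wherever one is None and its placeholder follows an '=', rewrite '= ?' as 'IS NULL' and drop that parameter.

```python
def rewrite_null_params(sql, params):
    """Sketch: turn '= ?' into 'IS NULL' for each None parameter,
    returning the rewritten SQL and the remaining parameter list."""
    parts = sql.split('?')
    new_sql = parts[0]
    kept = []
    for rest, param in zip(parts[1:], params):
        if param is None and new_sql.rstrip().endswith('='):
            # strip the trailing '=' and substitute the unary IS NULL test
            new_sql = new_sql.rstrip()[:-1].rstrip() + ' IS NULL' + rest
        else:
            kept.append(param)
            new_sql += '?' + rest
    return new_sql, kept
```

This keeps the external six-column WHERE definition untouched and only rewrites the generated statement just before execution.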

Once again, many thanks.

Frank



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Cpython optimization

2009-10-23 Thread Stefan Behnel
 Olof Bjarnason wrote:
 [snip]
 A short question after having read through most of this thread, on the
 same subject (time-optimizing CPython):

 http://mail.python.org/pipermail/python-list/2007-September/098964.html

 We are experiencing multi-core processor kernels more and more these
 days. But they are all still connected to the main memory, right?

 To me that means, even though some algorithm can be split up into
 several threads that run on different cores of the processor, that any
 algorithm will be memory-speed limited. And memory access is a quite
 common operation for most algorithms.

 Then one could ask oneself: what is the point of multiple cores, if
 memory bandwidth is the bottleneck? Specifically, what makes one
 expect any speed gain from parallelizing a sequential algorithm into
 four threads, say, when the memory shuffling is the same speed in both
 scenarios? (Assuming memory access is much slower than ADDs, JMPs and
 such instructions - a quite safe assumption I presume)

Modern (multi-core) processors have several levels of caches that interact
with the other cores in different ways.

You should read up on NUMA.

http://en.wikipedia.org/wiki/Non-Uniform_Memory_Access

Stefan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pyodbc - problem passing None as parameter

2009-10-23 Thread Tim Golden

Frank Millman wrote:
Thanks, Tim, for the detailed explanation. I appreciate your taking the 
time.


It was difficult for me to use the code that you posted, because under my 
present setup I define my SQL statements externally, and the WHERE clause 
has to conform to one or more rows of six columns -

Test (WHERE or AND or OR)
Left bracket (either present or blank)
Column name
Operator
Expression
Right bracket (either present or blank)

I therefore used the workaround that I mentioned above, and it seems to work 
fine, for both pyodbc and psycopg2.


I'm glad you managed to get something working. :)
I do similar things when needs be, being
more of a pragmatist than a purist most of the time.

(The should-NULL-be-special debate is occasionally
as fiery in the SQL world as the should-self-be-automatic
or tabs-or-spaces questions are in the Python world).

TJG
--
http://mail.python.org/mailman/listinfo/python-list


Re: Please help with regular expression finding multiple floats

2009-10-23 Thread Edward Dolan
On Oct 22, 3:26 pm, Jeremy jlcon...@gmail.com wrote:
 My question is, how can I use regular expressions to find two OR three
 or even an arbitrary number of floats without repeating %s?  Is this
 possible?

 Thanks,
 Jeremy

Any time you have tabular data such as your example, split() is
generally the first choice. But since you asked, and I like fscking
with regular expressions...

import re

# I modified your data set just a bit to show that it will
# match zero or more space separated real numbers.

data = """
1.E-08

1.E-08 1.58024E-06 0.0048 1.E-08 1.58024E-06
0.0048
1.E-07 2.98403E-05
0.0018
foo bar
baaz
1.E-06 8.85470E-06
0.0026
1.E-05 6.08120E-06
0.0032
1.E-03 1.61817E-05
0.0022
1.E+00 8.34460E-05
0.0014
2.E+00 2.31616E-05
0.0017
5.E+00 2.42717E-05
0.0017
total 1.93417E-04
0.0012
"""

ntuple = re.compile(r"""
    # match beginning of line (re.M in the docs)
    ^
    # chew up anything before the first real (non-greedy - ?)
    .*?
    # named match (turn the match into a named atom while allowing
    # irrelevant (groups))
    (?P<ntuple>
      # match one real
      [-+]?(\d*\.\d+|\d+\.\d*)([eE][-+]?\d+)?
      # followed by zero or more space separated reals
      ([ \t]+[-+]?(\d*\.\d+|\d+\.\d*)([eE][-+]?\d+)?)*)
    # match end of line (re.M in the docs)
    $
    """, re.X | re.M)  # re.X to allow comments and arbitrary whitespace

print [tuple(mo.group('ntuple').split())
       for mo in re.finditer(ntuple, data)]

Now compare the previous post using split with this one. Even with the
comments in the re, it's still a bit difficult to read. Regular
expressions are brittle. My code works fine for the data above but if
you change the structure the re will probably fail. At that point, you
have to fiddle with the re to get it back on course.

Don't get me wrong, regular expressions are hella fun to play with.
You have to ask yourself, "Do I really _need_ to use a regular
expression here?"
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Please help with regular expression finding multiple floats

2009-10-23 Thread Edward Dolan
I can see why this line could wrap
 1.E-08 1.58024E-06 0.0048 1.E-08 1.58024E-06
 0.0048
But this one?
 1.E-07 2.98403E-05
 0.0018

anyway, here is the code - http://codepad.org/Z7eWBusl
-- 
http://mail.python.org/mailman/listinfo/python-list


Validating positional arguments in optparse

2009-10-23 Thread Filip Gruszczyński
The optparse module is quite smart when it comes to validating options,
like ensuring that a certain option must be an integer. However, I
can't find any information about validating that positional arguments
were provided, and I can't find methods that would allow defining a
positional argument in OptionParser. Is it possible to do this, or was
it intentionally left this way?

-- 
Filip Gruszczyński
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Validating positional arguments in optparse

2009-10-23 Thread Jean-Michel Pichavant

Filip Gruszczyński wrote:

optparse module is quite smart, when it comes to validating options,
like assuring, that certain option must be an integer. However, I
can't find any information about validating, that positional arguments
were provided and I can't find methods, that would allow defining
positional argument in OptionParser. Is it possible to do this, or was
it intentionaly left this way?

  

You can't, unlike argparse.



   * The argparse module can handle positional and optional arguments,
 while optparse can handle only optional arguments. (See
 add_argument()
 
http://argparse.googlecode.com/svn/tags/r101/doc/add_argument.html#add_argument.)
   * The argparse module isn’t dogmatic about what your command line
 interface should look like - options like -file or /file are
 supported, as are required options. Optparse refuses to support
 these features, preferring purity over practicality.
   * The argparse module produces more informative usage messages,
 including command-line usage determined from your arguments, and
 help messages for both positional and optional arguments. The
 optparse module requires you to write your own usage string, and
 has no way to display help for positional arguments.
   * The argparse module supports actions that consume a variable number
 of command-line args, while optparse requires that the exact
 number of arguments (e.g. 1, 2, or 3) be known in advance. (See
 add_argument()
 
http://argparse.googlecode.com/svn/tags/r101/doc/add_argument.html#add_argument.)
   * The argparse module supports parsers that dispatch to
 sub-commands, while optparse requires setting
 allow_interspersed_args and doing the parser dispatch manually.
 (See add_subparsers()
 
http://argparse.googlecode.com/svn/tags/r101/doc/other-methods.html#add_subparsers.)
   * The argparse module allows the type and action parameters to
 add_argument()
 
http://argparse.googlecode.com/svn/tags/r101/doc/add_argument.html#add_argument
 to be specified with simple callables, while optparse requires
 hacking class attributes like STORE_ACTIONS or CHECK_METHODS to
 get proper argument checking. (See add_argument()
 
http://argparse.googlecode.com/svn/tags/r101/doc/add_argument.html#add_argument).

That being said, I still stick with optparse. I prefer the dogmatic 
interface that makes all my exe use the exact same (POSIX) convention. I 
really don't care about writing /file instead of --file


JM
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Validating positional arguments in optparse

2009-10-23 Thread Filip Gruszczyński
 That being said, I still stick with optparse. I prefer the dogmatic
 interface that makes all my exe use the exact same (POSIX) convention. I
 really don't care about writing /file instead of --file

I would like to keep the POSIX convention too, but I just wanted
OptionParser to do the dirty work of checking that the arguments are
all right for me, and I wanted to know the reason it doesn't.

I guess I should write a subclass, which would do just that.
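That subclass could look something like this (a sketch under assumed names, not a tested recipe): wrap parse_args and complain when the count of leftover positional arguments is out of range.

```python
from optparse import OptionParser

class PositionalParser(OptionParser):
    """Sketch: an OptionParser that also checks how many
    positional arguments were supplied (names are assumptions)."""
    def __init__(self, min_args=0, max_args=None, **kwds):
        OptionParser.__init__(self, **kwds)
        self.min_args = min_args
        self.max_args = max_args

    def parse_args(self, args=None, values=None):
        options, positional = OptionParser.parse_args(self, args, values)
        too_few = len(positional) < self.min_args
        too_many = (self.max_args is not None
                    and len(positional) > self.max_args)
        if too_few or too_many:
            # error() prints the usage string and exits
            self.error("expected between %s and %s positional arguments"
                       % (self.min_args, self.max_args))
        return options, positional
```

Type-checking individual positional arguments (int, file, ...) would still be manual, which is exactly the part argparse automates.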


-- 
Filip Gruszczyński
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with code = Extract numerical value to variable

2009-10-23 Thread Sion Arrowsmith
Steve  zerocostprod...@gmail.com wrote:
If there is a number in the line I want the number otherwise I want a
0
I don't think I can use strip because the lines have no standards

What do you think strip() does? Read
http://docs.python.org/library/stdtypes.html#str.lstrip
*carefully* (help(''.lstrip) is slightly ambiguous).
set(string.printable).difference(string.digits) may help.
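To make that hint concrete (my own sketch, not Sion's code): join the character set into a string and hand it to str.strip(), which removes any of those characters from both ends of the line.

```python
import string

# every printable character that is not a digit
nondigits = ''.join(set(string.printable) - set(string.digits))

def extract_number(line):
    """Strip non-digit characters from both ends; default to '0'."""
    num = line.strip(nondigits)
    return num if num else '0'
```

Note that strip() only removes characters from the ends, so this handles lines like 'sold: 16' but would not remove junk sandwiched between digits.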

-- 
\S

   under construction

-- 
http://mail.python.org/mailman/listinfo/python-list


Help with code = Extract numerical value to variable

2009-10-23 Thread Dave Angel

Steve wrote:

Sorry I'm not being clear

Input**
sold: 16
sold: 20
sold: 2
sold: 0
sold: storefront
7
0
storefront
sold
null

Output
16
20
2
0
0
7
0
0
0
0


  
Since you're looking for only digits, simply make a string containing 
all characters that aren't digits.


Now, loop through the file and use str.translate() to delete all the 
non-digits from each line, using the above table.


Check if the result is "", and if so, substitute "0".  Done.

DaveA

--
http://mail.python.org/mailman/listinfo/python-list


Bug? concatenate a number to a backreference: re.sub(r'(zzz:)xxx', r'\1'+str(4444), somevar)

2009-10-23 Thread abdulet
Well, is this normal? I want to concatenate a number to a
backreference in a regular expression. I'm working in a multiprocess
script, so at first I thought there was an error in the multiprocess
logic, but what a surprise! When I arrived at this conclusion after
some time debugging, I saw that:

import re
aa = "zzz:xxx"
re.sub(r'(zzz:).*', r'\1' + str(3333), aa)
'[33'

¿?¿?¿? Well, let's put a ':' after the backreference:

aa = "zzz:xxx"
re.sub(r'(zzz).*', r'\1:' + str(3333), aa)
'zzz:3333'

Now that is the expected result. So, should I expect Python to
concatenate the string to the backreference before substituting the
backreference? Or is this a bug?

tested on:
Python 2.6.2 (r262:71605, Apr 14 2009, 22:40:02) [MSC v.1500 32 bit
(Intel)] on win32
Python 2.5.2 (r252:60911, Jan  4 2009, 17:40:26) [GCC 4.3.2] on linux2

with the same result

Cheers!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: problem with pythonw.exe

2009-10-23 Thread Dave Angel

Martin Shaw wrote:

Hi,

I have a tkinter application running on my windows xp work machine and I am
attempting to stop the console from appearing when the application runs.
I've researched around and the way to do this appears to be to use
pythonw.exe instead of python.exe. However when I try to run pythonw.exe
from the command prompt it simply does nothing. I can't find anything like
this where I've searched. I've tried reinstalling python. Pythonw.exe
appears to work when i run it through cygwin however I don't really want to
use cygwin for this application. Any idea as to what might be the problem?

Thanks in advance,

Martin

  






Exactly what program is pythonw executing?  You need to search your PATH 
to see what's on it.  For example, if your PATH has a c:\bat directory 
on it, and in that directory you have a one-line batch file:


   @c:\python26\pythonw.exe

then you're seeing the expected behavior.  You'd need to add a parameter 
to the batch file, probably %*


Or you could be pointing at some other executable.

The other possibility for you is to use the .pyw file association that 
your install probably set up for you.  Rename your main script to have a 
.pyw extension, and then just type it as your command.  To check file 
associations, use assoc and ftype utilities, both standard on Windows XP 
and Vista.


DaveA

--
http://mail.python.org/mailman/listinfo/python-list


Re: Cpython optimization

2009-10-23 Thread Olof Bjarnason
2009/10/23 Stefan Behnel stefan...@behnel.de

  Olof Bjarnason wrote:
  [snip]
  A short question after having read through most of this thread, on the
  same subject (time-optimizing CPython):
 
  http://mail.python.org/pipermail/python-list/2007-September/098964.html
 
  We are experiencing multi-core processor kernels more and more these
  days. But they are all still connected to the main memory, right?
 
  To me that means, even though some algorithm can be split up into
  several threads that run on different cores of the processor, that any
  algorithm will be memory-speed limited. And memory access is a quite
  common operation for most algorithms.
 
  Then one could ask oneself: what is the point of multiple cores, if
  memory bandwidth is the bottleneck? Specifically, what makes one
  expect any speed gain from parallelizing a sequential algorithm into
  four threads, say, when the memory shuffling is the same speed in both
  scenarios? (Assuming memory access is much slower than ADDs, JMPs and
  such instructions - a quite safe assumption I presume)

 Modern (multi-core) processors have several levels of caches that interact
 with the other cores in different ways.

 You should read up on NUMA.

 http://en.wikipedia.org/wiki/Non-Uniform_Memory_Access


Thanks that was a good read. Basically NUMA addresses the problems I mention
by creating several primary memories (called banks) - making the motherboard
contain several computers (=cpu+primary memory) instead of one primary
memory and several cores, if I read it correctly.


 Stefan
 --
 http://mail.python.org/mailman/listinfo/python-list




-- 
twitter.com/olofb
olofb.wordpress.com
olofb.wordpress.com/tag/english
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Bug? concatenate a number to a backreference: re.sub(r'(zzz:)xxx', r'\1'+str(4444), somevar)

2009-10-23 Thread Peter Otten
abdulet wrote:

 Well its this normal? i want to concatenate a number to a
 backreference in a regular expression. Im working in a multprocess
 script so the first what i think is in an error in the multiprocess
 logic but what a sorprise!!! when arrived to this conclussion after
 some time debugging i see that:
 
 import re
 aa = "zzz:xxx"
 re.sub(r'(zzz:).*', r'\1' + str(3333), aa)
 '[33'

If you perform the addition you get r"\13333". How should the regular 
expression engine interpret that? As the backreference to group 1, 13, 
133, ... or 13333? It picks something completely different, "[33", 
because "\133" is the octal escape sequence for "[":

 chr(0133)
'['

You can avoid the ambiguity with

extra = str(number)
extra = re.escape(extra) 
re.sub(expr, r"\g<1>" + extra, text)

The re.escape() step is not necessary here, but a good idea in the general 
case when extra is an arbitrary string.

Peter

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Bug? concatenate a number to a backreference: re.sub(r'(zzz:)xxx', r'\1'+str(4444), somevar)

2009-10-23 Thread abdulet
On 23 oct, 13:54, Peter Otten __pete...@web.de wrote:
 abdulet wrote:
  Well its this normal? i want to concatenate a number to a
  backreference in a regular expression. Im working in a multprocess
  script so the first what i think is in an error in the multiprocess
  logic but what a sorprise!!! when arrived to this conclussion after
  some time debugging i see that:

  import re
  aa = "zzz:xxx"
  re.sub(r'(zzz:).*', r'\1' + str(3333), aa)
  '[33'

 If you perform the addition you get r"\13333". How should the regular
 expression engine interpret that? As the backreference to group 1, 13,
 133, ... or 13333? It picks something completely different, "[33",
 because "\133" is the octal escape sequence for "[":

  chr(0133)

 '['

 You can avoid the ambiguity with

 extra = str(number)
 extra = re.escape(extra)
 re.sub(expr, r"\g<1>" + extra, text)

 The re.escape() step is not necessary here, but a good idea in the general
 case when extra is an arbitrary string.

 Peter
Aha, nice, thanks! I didn't see that part of the re module
documentation, and it was right in front of my eyes :(( As always,
it's something silly. So thanks again, and yes, escaping the variable
is a nice idea ;)

cheers
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Cpython optimization

2009-10-23 Thread Antoine Pitrou
Le Fri, 23 Oct 2009 09:45:06 +0200, Olof Bjarnason a écrit :
 
 So I think my first question is still interesting: What is the point of
 multiple cores, if memory is the bottleneck?

Why do you think it is, actually? Some workloads are CPU-bound, some 
others are memory- or I/O-bound.

You will find plenty of benchmarks on the Web showing that some workloads 
scale almost linearly to the number of CPU cores, while some don't.

-- 
http://mail.python.org/mailman/listinfo/python-list


search python wifi module

2009-10-23 Thread Clint Mourlevat

hello,

I'm looking for a Python Wi-Fi module on Windows; I already have Scapy!

thanks
@+ 
--

http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing deadlock

2009-10-23 Thread paulC
On Oct 23, 3:18 am, Brian Quinlan br...@sweetapp.com wrote:
 My test reduction:

 import multiprocessing
 import queue

 def _process_worker(q):
      while True:
          try:
              something = q.get(block=True, timeout=0.1)
          except queue.Empty:
              return
          else:
              print('Grabbed item from queue:', something)

 def _make_some_processes(q):
      processes = []
      for _ in range(10):
          p = multiprocessing.Process(target=_process_worker, args=(q,))
          p.start()
          processes.append(p)
      return processes

 def _do(i):
      print('Run:', i)
      q = multiprocessing.Queue()
      for j in range(30):
          q.put(i*30+j)
      processes = _make_some_processes(q)

      while not q.empty():
          pass

 #    The deadlock only occurs on Mac OS X and only when these lines
 #    are commented out:
 #    for p in processes:
 #        p.join()

 for i in range(100):
      _do(i)

 --

 Output (on Mac OS X using the svn version of py3k):
 % ~/bin/python3.2 moprocessmoproblems.py
 Run: 0
 Grabbed item from queue: 0
 Grabbed item from queue: 1
 Grabbed item from queue: 2
 ...
 Grabbed item from queue: 29
 Run: 1

 At this point the script produces no additional output. If I uncomment
 the lines above then the script produces the expected output. I don't
 see any docs that would explain this problem and I don't know what the
 rule would be e.g. you just join every process that uses a queue before
   the queue is garbage collected.

 Any ideas why this is happening?

 Cheers,
 Brian

I can't promise a definitive answer but looking at the doc.s:-

isAlive()
Return whether the thread is alive.

Roughly, a thread is alive from the moment the start() method
returns until its run() method terminates. The module function
enumerate() returns a list of all alive threads.

I guess that the word 'roughly' indicates that returning from the
start() call does not mean that all the threads have actually started,
and so calling join is illegal. Try calling isAlive on all the threads
before returning from _make_some_processes.

Regards, Paul C.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to schedule system calls with Python

2009-10-23 Thread Jorgen Grahn
On Thu, 2009-10-22, Al Fansome wrote:


 Jorgen Grahn wrote:
 On Fri, 2009-10-16, Jeremy wrote:
 On Oct 15, 6:32 pm, MRAB pyt...@mrabarnett.plus.com wrote:
 TerryP wrote:
 On Oct 15, 7:42 pm, Jeremy jlcon...@gmail.com wrote:
 I need to write a Python script that will call some command line
 programs (using os.system).  I will have many such calls, but I want
 to control when the calls are made.  I won't know in advance how long
 each program will run and I don't want to have 10 programs running
 when I only have one or two processors.  I want to run one at a time
 (or two if I have two processors), wait until it's finished, and then
 call the next one.
 ...
 You could use multithreading: put the commands into a queue; start the
 same number of worker threads as there are processors; each worker
 thread repeatedly gets a command from the queue and then runs it using
 os.system(); if a worker thread finds that the queue is empty when it
 tries to get a command, then it terminates.
 Yes, this is it.  If I have a list of strings which are system
 commands, this seems like a more intelligent way of approaching it.
 My previous response will work, but won't take advantage of multiple
 cpus/cores in a machine without some manual manipulation.  I like this
 idea.
 
 Note that you do not need *need* multithreading for this. To me it
 seems silly to have N threads sitting just waiting for one process
 each to die -- those threads contribute nothing to the multiprocessing
 you want.
...

 Another way to approach this, if you do want to use threads, is to use a 
 counting semaphore. Set it to the maximum number of threads you want to 
 run at any one time. Then loop starting up worker threads in the main 
 thread. acquire() the semaphore before starting the next worker thread; 
 when the semaphore reaches 0, your main thread will block. Each worker 
 thread should then release() the semaphore when it  exits; this will 
 allow the main thread to move on to creating the next worker thread.

 This doesn't address the assignment of threads to CPU cores, but I have 
 used this technique many times, and it is simple and fairly easy to 
 implement. [---]

But do you *want* to handle the CPUs manually, anyway? Your program
typically has no idea what other processes are running on the machine
or how important they are.

(Of course, in this case the threads do next to nothing, so controlling
them on that detailed level neither helps nor hurts performance.)

/Jorgen

-- 
  // Jorgen Grahn grahn@  Oo  o.   .  .
\X/ snipabacken.se   O  o   .
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing deadlock

2009-10-23 Thread Brian Quinlan

On 24 Oct 2009, at 00:02, paulC wrote:


On Oct 23, 3:18 am, Brian Quinlan br...@sweetapp.com wrote:

My test reduction:

import multiprocessing
import queue

def _process_worker(q):
    while True:
        try:
            something = q.get(block=True, timeout=0.1)
        except queue.Empty:
            return
        else:
            print('Grabbed item from queue:', something)

def _make_some_processes(q):
    processes = []
    for _ in range(10):
        p = multiprocessing.Process(target=_process_worker, args=(q,))
        p.start()
        processes.append(p)
    return processes

def _do(i):
    print('Run:', i)
    q = multiprocessing.Queue()
    for j in range(30):
        q.put(i*30+j)
    processes = _make_some_processes(q)

    while not q.empty():
        pass

#    The deadlock only occurs on Mac OS X and only when these lines
#    are commented out:
#    for p in processes:
#        p.join()

for i in range(100):
    _do(i)

--

Output (on Mac OS X using the svn version of py3k):
% ~/bin/python3.2 moprocessmoproblems.py
Run: 0
Grabbed item from queue: 0
Grabbed item from queue: 1
Grabbed item from queue: 2
...
Grabbed item from queue: 29
Run: 1

At this point the script produces no additional output. If I uncomment
the lines above then the script produces the expected output. I don't
see any docs that would explain this problem and I don't know what the
rule would be e.g. you just join every process that uses a queue before
the queue is garbage collected.

Any ideas why this is happening?

Cheers,
Brian


I can't promise a definitive answer but looking at the doc.s:-

isAlive()
   Return whether the thread is alive.

   Roughly, a thread is alive from the moment the start() method
returns until its run() method terminates. The module function
enumerate() returns a list of all alive threads.

I guess that the word 'roughly' indicates that returning from the
start() call does not mean that all the threads have actually started,
and so calling join is illegal. Try calling isAlive on all the threads
before returning from _make_some_processes.

Regards, Paul C.
--
http://mail.python.org/mailman/listinfo/python-list


Hey Paul,

I guess I was unclear in my explanation - the deadlock only happens  
when I *don't* call join.


Cheers,
Brian


--
http://mail.python.org/mailman/listinfo/python-list


Re: Cpython optimization

2009-10-23 Thread Olof Bjarnason
2009/10/23 Antoine Pitrou solip...@pitrou.net

 Le Fri, 23 Oct 2009 09:45:06 +0200, Olof Bjarnason a écrit :
 
  So I think my first question is still interesting: What is the point of
  multiple cores, if memory is the bottleneck?

 Why do you think it is, actually? Some workloads are CPU-bound, some
 others are memory- or I/O-bound.


Big I/O operations have even less chance of gaining anything from being
parallellized (quite the opposite I would guess due to synchronization
overhead).

Operations that uses very little memory, say some mathematical computations
that can fit in 16 registers, have a lot to gain in being parallellized, of
course. Maybe computing small-formulae fractals is a good example of this?

My question was regarding the most hard-to-tell and, sadly also the most
common, trying to speed up algorithms that work over big data structures in
primary memory. Things like drawing lines, doing image processing,
multiplication of large matrices, ray tracing Basically anything with
large amounts of input information/buffers/structures, stored in primary
memory.

This would be way to speed up things in an image processing algorithm:
1. divide the image into four subimages
2. let each core process each part independently
3. fix/merge (along split lines, for example) into a resulting, complete
image

Notice that no gain would be achieved if the memory was shared among the
cores, at least not if the operation-per-pixel is faster than the read/write
into memory.
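Steps 1-3 might be sketched like this (a toy illustration on a list-of-rows "image"; a thread pool stands in for real per-core processes, and the brighten operation is an assumption):

```python
from multiprocessing.dummy import Pool  # thread pool; multiprocessing.Pool for real cores

def brighten(pixel):
    return min(pixel + 10, 255)

def split4(img):
    """Step 1: split a list-of-rows image into four quadrants."""
    h, w = len(img), len(img[0])
    return [[row[:w//2] for row in img[:h//2]],
            [row[w//2:] for row in img[:h//2]],
            [row[:w//2] for row in img[h//2:]],
            [row[w//2:] for row in img[h//2:]]]

def process(quad):
    """Step 2: each worker processes its quadrant independently."""
    return [[brighten(p) for p in row] for row in quad]

def merge4(q):
    """Step 3: merge the quadrants back along the split lines."""
    top = [l + r for l, r in zip(q[0], q[1])]
    bottom = [l + r for l, r in zip(q[2], q[3])]
    return top + bottom

def parallel_brighten(img):
    pool = Pool(4)
    try:
        return merge4(pool.map(process, split4(img)))
    finally:
        pool.close()
```

Whether this beats the sequential version depends, as argued above, on whether the per-pixel work outweighs the memory traffic.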


 You will find plenty of benchmarks on the Web showing that some workloads
 scale almost linearly to the number of CPU cores, while some don't.

 --
 http://mail.python.org/mailman/listinfo/python-list




-- 
twitter.com/olofb
olofb.wordpress.com
olofb.wordpress.com/tag/english
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Please help with regular expression finding multiple floats

2009-10-23 Thread Jeremy
On Oct 23, 3:48 am, Edward Dolan byteco...@gmail.com wrote:
 On Oct 22, 3:26 pm, Jeremy jlcon...@gmail.com wrote:

  My question is, how can I use regular expressions to find two OR three
  or even an arbitrary number of floats without repeating %s?  Is this
  possible?

  Thanks,
  Jeremy

 Any time you have tabular data such as your example, split() is
 generally the first choice. But since you asked, and I like fscking
 with regular expressions...

 import re

 # I modified your data set just a bit to show that it will
 # match zero or more space separated real numbers.

 data = """
 1.E-08

 1.E-08 1.58024E-06 0.0048 1.E-08 1.58024E-06
 0.0048
 1.E-07 2.98403E-05
 0.0018
 foo bar
 baaz
 1.E-06 8.85470E-06
 0.0026
 1.E-05 6.08120E-06
 0.0032
 1.E-03 1.61817E-05
 0.0022
 1.E+00 8.34460E-05
 0.0014
 2.E+00 2.31616E-05
 0.0017
 5.E+00 2.42717E-05
 0.0017
 total 1.93417E-04
 0.0012
 """

 ntuple = re.compile(r"""
     # match beginning of line (re.M in the docs)
     ^
     # chew up anything before the first real (non-greedy - ?)
     .*?
     # named match (turn the match into a named atom while allowing
     # irrelevant (groups))
     (?P<ntuple>
       # match one real
       [-+]?(\d*\.\d+|\d+\.\d*)([eE][-+]?\d+)?
       # followed by zero or more space separated reals
       ([ \t]+[-+]?(\d*\.\d+|\d+\.\d*)([eE][-+]?\d+)?)*)
     # match end of line (re.M in the docs)
     $
     """, re.X | re.M)  # re.X to allow comments and arbitrary whitespace

 print [tuple(mo.group('ntuple').split())
        for mo in re.finditer(ntuple, data)]

 Now compare the previous post using split with this one. Even with the
 comments in the re, it's still a bit difficult to read. Regular
 expressions are brittle. My code works fine for the data above but if
 you change the structure the re will probably fail. At that point, you
 have to fiddle with the re to get it back on course.

 Don't get me wrong, regular expressions are hella fun to play with.
 You have to ask yourself, "Do I really _need_ to use a regular
 expression here?"

In this simplified example I don't really need regular expressions.
However I will need regular expressions for more complex problems and
I'm trying to become more proficient at using regular expressions.  I
tried to simplify this so as not to bother the mailing list too much.

Thanks for the great suggestion.  It looks like it will work fine, but
I can't get it to work.  I downloaded the simple script you put on
http://codepad.org/Z7eWBusl  but it only prints an empty list.  Am I
missing something?

Thanks,
Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: search python wifi module

2009-10-23 Thread Jorgen Grahn
On Fri, 2009-10-23, Clint Mourlevat wrote:
 hello,

 i search a wifi module python on windows, i have already scapy !

What is a wifi module?  Your OS is supposed to hide networking
implementation details (Ethernet, PPP, Wi-fi, 3G ...) and provide
specific management interfaces when needed. What are you trying to do?

/Jorgen

-- 
  // Jorgen Grahn grahn@  Oo  o.   .  .
\X/ snipabacken.se   O  o   .
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Validating positional arguments in optparse

2009-10-23 Thread Robert Kern

On 2009-10-23 05:54 AM, Filip Gruszczyński wrote:

That being said, I still stick with optparse. I prefer the dogmatic
interface that makes all my exe use the exact same (POSIX) convention. I
really don't care about writing /file instead of --file


I would like to keep POSIX convention too, but just wanted
OptionParser to do the dirty work of checking that arguments are all
right for me and wanted to know the reason, it doesn't.

I guess I should write a subclass, which would do just that.


If you don't want to allow /file, just don't use it. argparse's default is to 
only allow --options and not /options. You can configure it to do otherwise, but 
you are in complete control. Your programs will all follow the same convention.


I highly, highly recommend argparse over optparse.

--
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth.
  -- Umberto Eco

--
http://mail.python.org/mailman/listinfo/python-list


Re: Help with code = Extract numerical value to variable

2009-10-23 Thread rurpy
On 10/23/2009 05:16 AM, Dave Angel wrote:
 Steve wrote:
 Sorry I'm not being clear

 Input**
 sold: 16
 sold: 20
 sold: 2
 sold: 0
 sold: storefront
 7
 0
 storefront
 sold
 null

 Output
 16
 20
 2
 0
 0
 7
 0
 0
 0
 0

 Since you're looking for only digits, simply make a string containing
 all characters that aren't digits.

 Now, loop through the file and use str.translate() to delete all the
 non-digits from each line, using the above table.

 Check if the result is '', and if so, substitute '0'.  Done.

delchars = ''.join([chr(i) for i in range(256) if chr(i) < '0' or chr(i) > '9'])
f = open('your_input_filename.txt')
for line in f:
    num = line.translate(None, delchars)
    if num == '': num = '0'
    print num

IMO, Python's translate function is unpleasant enough
to use that I might use a regex here, although the
code above is likely faster if you are running this on
large input files.

import re
f = open('your_input_filename.txt')
for line in f:
    mo = re.search(r'\d+', line)
    if mo: print mo.group(0)
    else: print '0'
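In Python 3 (where str.translate changed and print is a function), the regex
version of the same idea might look like this; the sample data is taken from
the thread, and digits_or_zero is my own name:

```python
import re

def digits_or_zero(line):
    # Return the first run of digits in the line, or '0' when there is none.
    mo = re.search(r'\d+', line)
    return mo.group(0) if mo else '0'

sample = ["sold: 16", "sold: 20", "sold: storefront", "7", "null"]
print([digits_or_zero(s) for s in sample])  # ['16', '20', '0', '7', '0']
```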
-- 
http://mail.python.org/mailman/listinfo/python-list


breaking out to the debugger (other than x=1/0 !)

2009-10-23 Thread bdb112
After a while programming in python, I still don't know how to break
out to the debugger other than inserting an instruction to cause an
exception.
x=1/0

In IDL one would write

stop,'reason for stopping...'
at which point you can inspect locals (as in pdb) and continue (but
you can't with pdb if python stopped because of an exception)

I am using ipython -pylab -pdb (python 2.5,2.6)
Yes, I realise that I could start with the debugger, and set break
points, but that can be slower and sometimes cause problems, and I
like ipython's magic features.

Also, I don't know how to stop cleanly, handing control back to ipython
inside a program - e.g. after printing help text.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: breaking out to the debugger (other than x=1/0 !)

2009-10-23 Thread Diez B. Roggisch
bdb112 wrote:

 After a while programming in python, I still don't know how to break
 out to the debugger other than inserting an instruction to cause an
 exception.
 x=1/0
 
 In IDL one would write
 
 stop,'reason for stopping...'
 at which point you can inspect locals (as in pdb) and continue (but
 you can't with pdb if python stopped because of an exception)
 
 I am using ipython -pylab -pdb (python 2.5,2.6)
 Yes, I realise that I could start with the debugger, and set break
 points, but that can be slower and sometimes cause problems, and I
 like ipython's magic features.
 
 Also, I don't know how to stop cleanly handing control back to ipython
 inside a program - e.g. after printing help text.

I use 

 import pdb; pdb.set_trace()

Of course that can't be deleted as breakpoint - but it suits me well.

Diez
-- 
http://mail.python.org/mailman/listinfo/python-list


Read any function in runtime

2009-10-23 Thread joao abrantes
Hey. I want to make a program like this:

print "Complete the function f(x)="

then the user would enter x+2 or 1/x or any other function that only uses
the variable x. Then my python program would calculate f(x) at some points,
for example f(2), f(4), etc. How can I do this?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Read any function in runtime

2009-10-23 Thread Matt McCredie
joao abrantes senhor.abrantes at gmail.com writes:

 
 Hey. I want to make a program like this: print "Complete the function
f(x)=" then the user would enter x+2 or 1/x or any other function that only uses
the variable x. Then my python program would calculate f(x) at some points, for
example f(2), f(4), etc. How can I do this?
 

check out 'eval' or 'exec'. 

statement = raw_input("Complete the function f(x)=")
print eval(statement, {'x':2})
print eval(statement, {'x':4})
print eval(statement, {'x':6})


or with 'exec':

statement = raw_input("Complete the function f(x)=")
ns = {}
exec "f = lambda x: " + statement in ns
f = ns['f']

print f(2)
print f(4)
print f(6)
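A Python 3 sketch of the same idea (note that eval/exec run arbitrary code,
so this is only safe with trusted input; make_f is my name, not part of the
thread):

```python
def make_f(expr):
    # Compile once, evaluate per call.  Stripping __builtins__ is a
    # mild precaution, not real sandboxing.
    code = compile(expr, '<user>', 'eval')
    return lambda x: eval(code, {'__builtins__': {}}, {'x': x})

f = make_f('x + 2')
print(f(2), f(4), f(6))  # 4 6 8
```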


Matt McCredie


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: breaking out to the debugger (other than x=1/0 !)

2009-10-23 Thread bdb112
That's perfect - and removing the breakpoint is not an issue for me
as it is normally conditional on a debug level, which I can change
from pydb

if debuglvl > 3:
    import pydb
    pydb.set_trace()
    'in XXX:  c to continue'

The text line is a useful prompt

(The example here is for pydb, which works just as well and is more like
gdb.)
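A minimal, runnable sketch of this pattern with the stdlib pdb (the debug
knob name and value are my own):

```python
import pdb

DEBUG_LEVEL = 0  # raise above 3 to drop into the debugger

def process(x):
    if DEBUG_LEVEL > 3:
        pdb.set_trace()  # 'in process: c to continue'
    return x * 2

print(process(21))  # 42
```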



On Oct 23, 12:07 pm, Diez B. Roggisch de...@nospam.web.de wrote:
 bdb112 wrote:
  After a while programming in python, I still don't know how to break
  out to the debugger other than inserting an instruction to cause an
  exception.
  x=1/0

  In IDL one would write

  stop,'reason for stopping...'
  at which point you can inspect locals (as in pdb) and continue (but
  you can't with pdb if python stopped because of an exception)

  I am using ipython -pylab -pdb (python 2.5,2.6)
  Yes, I realise that I could start with the debugger, and set break
  points, but that can be slower and sometimes cause problems, and I
  like ipython's magic features.

  Also, I don't know how to stop cleanly handing control back to ipython
  inside a program - e.g. after printing help text.

 I use

  import pdb; pdb.set_trace()

 Of course that can't be deleted as breakpoint - but it suits me well.

 Diez

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unicode and dbf files

2009-10-23 Thread Ethan Furman

John Machin wrote:

On Oct 23, 3:03 pm, Ethan Furman et...@stoneleaf.us wrote:


John Machin wrote:


On Oct 23, 7:28 am, Ethan Furman et...@stoneleaf.us wrote:



Greetings, all!



I would like to add unicode support to my dbf project.  The dbf header
has a one-byte field to hold the encoding of the file.  For example,
\x03 is code-page 437 MS-DOS.



My google-fu is apparently not up to the task of locating a complete
resource that has a list of the 256 possible values and their
corresponding code pages.



What makes you imagine that all 256 possible values are mapped to code
pages?


I'm just wanting to make sure I have whatever is available, and
preferably standard.  :D



So far I have found this, plus variations:http://support.microsoft.com/kb/129631



Does anyone know of anything more complete?



That is for VFP3. Try the VFP9 equivalent.



dBase 5,5,6,7 use others which are not defined in publicly available
dBase docs AFAICT. Look for language driver ID and LDID. Secondary
source: ESRI support site.


Well, a couple of hours later and still no more than I started with.
Thanks for trying, though!



Huh? You got tips to (1) the VFP9 docs (2) the ESRI site (3) search
keywords and you couldn't come up with anything??


Perhaps nothing new would have been a better description.  I'd already 
seen the clicketyclick site (good info there), and all I found at ESRI 
were folks trying to figure it out, plus one link to a list that was no 
different from the vfp3 list (or was it that the list did not give the 
hex values?  Either way, of no use to me.)


I looked at dbase.com, but came up empty-handed there (not surprising, 
since they are a commercial company).


I searched some more on Microsoft's site in the VFP9 section, and was 
able to find the code page section this time.  Sadly, it only added 
about seven codes.


At any rate, here is what I have come up with so far.  Any corrections 
and/or additions greatly appreciated.


code_pages = {
'\x01' : ('ascii', 'U.S. MS-DOS'),
'\x02' : ('cp850', 'International MS-DOS'),
'\x03' : ('cp1252', 'Windows ANSI'),
'\x04' : ('mac_roman', 'Standard Macintosh'),
'\x64' : ('cp852', 'Eastern European MS-DOS'),
'\x65' : ('cp866', 'Russian MS-DOS'),
'\x66' : ('cp865', 'Nordic MS-DOS'),
'\x67' : ('cp861', 'Icelandic MS-DOS'),
'\x68' : ('cp895', 'Kamenicky (Czech) MS-DOS'), # iffy
'\x69' : ('cp852', 'Mazovia (Polish) MS-DOS'),  # iffy
'\x6a' : ('cp737', 'Greek MS-DOS (437G)'),
'\x6b' : ('cp857', 'Turkish MS-DOS'),

'\x78' : ('big5', 'Traditional Chinese (Hong Kong SAR, Taiwan)\
   Windows'),   # wag
'\x79' : ('iso2022_kr', 'Korean Windows'),  # wag
'\x7a' : ('iso2022_jp_2', 'Chinese Simplified (PRC, Singapore)\
   Windows'),   # wag
'\x7b' : ('iso2022_jp', 'Japanese Windows'),# wag
'\x7c' : ('cp874', 'Thai Windows'), # wag
'\x7d' : ('cp1255', 'Hebrew Windows'),
'\x7e' : ('cp1256', 'Arabic Windows'),
'\xc8' : ('cp1250', 'Eastern European Windows'),
'\xc9' : ('cp1251', 'Russian Windows'),
'\xca' : ('cp1254', 'Turkish Windows'),
'\xcb' : ('cp1253', 'Greek Windows'),
'\x96' : ('mac_cyrillic', 'Russian Macintosh'),
'\x97' : ('mac_latin2', 'Macintosh EE'),
'\x98' : ('mac_greek', 'Greek Macintosh') }
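A table like that can then be used to pick a codec when decoding field data;
a small sketch (the integer keys and the sample bytes below are my own
illustration, not from the thread):

```python
# Hypothetical slice of the table above, keyed by the header byte.
code_pages = {
    0x01: ('ascii',  'U.S. MS-DOS'),
    0x03: ('cp1252', 'Windows ANSI'),
    0xc9: ('cp1251', 'Russian Windows'),
}

def codec_for(ldid, default='ascii'):
    # Fall back to a default codec for unknown language driver IDs.
    return code_pages.get(ldid, (default, 'unknown'))[0]

raw = b'\xcf\xee\xeb\xe5'            # sample cp1251 field data
print(raw.decode(codec_for(0xc9)))   # Поле
```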

~Ethan~
--
http://mail.python.org/mailman/listinfo/python-list


Re: A new way to configure Python logging

2009-10-23 Thread Jean-Michel Pichavant

Vinay Sajip wrote:

If you use the logging package but don't like using the ConfigParser-based
configuration files which it currently supports, keep reading. I'm proposing to
provide a new way to configure logging, using a Python dictionary to hold
configuration information. It means that you can convert a text file such as

# logconf.yml: example logging configuration
formatters:
  brief:
format: '%(levelname)-8s: %(name)-15s: %(message)s'
  precise:
format: '%(asctime)s %(name)-15s %(levelname)-8s %(message)s'
handlers:
  console:
class : logging.StreamHandler
formatter : brief
level : INFO
stream : ext://sys.stdout
  file:
class : logging.handlers.RotatingFileHandler
formatter : precise
filename : logconfig.log
maxBytes : 100
backupCount : 3
  email:
class: logging.handlers.SMTPHandler
mailhost: localhost
fromaddr: my_...@domain.tld
toaddrs:
  - support_t...@domain.tld
  - dev_t...@domain.tld
subject: Houston, we have a problem.
loggers:
  foo:
level : ERROR
handlers: [email]
  bar.baz:
level: WARNING
root:
  level : DEBUG
  handlers  : [console, file]
# -- EOF --

into a working configuration for logging. The above text is in YAML format, and
can easily be read into a Python dict using PyYAML and the code

import yaml; config = yaml.load(open('logconf.yml', 'r'))

but if you're not using YAML, don't worry. You can use JSON, Python source code
or any other method to construct a Python dict with the configuration
information, then call the proposed new configuration API using code like

import logging.config

logging.config.dictConfig(config)

to put the configuration into effect.
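For example, a minimal dict covering part of the YAML above (hedged: this
uses dictConfig as it eventually shipped in Python 2.7/3.2, which requires a
'version' key):

```python
import logging
import logging.config

config = {
    'version': 1,
    'formatters': {
        'brief': {'format': '%(levelname)-8s: %(name)-15s: %(message)s'},
    },
    'handlers': {
        'console': {'class': 'logging.StreamHandler',
                    'formatter': 'brief',
                    'level': 'INFO'},
    },
    'root': {'level': 'DEBUG', 'handlers': ['console']},
}

logging.config.dictConfig(config)
logging.getLogger('demo').info('configured from a dict')
```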

For full details of the proposed change to logging, see PEP 391 at

http://www.python.org/dev/peps/pep-0391/

I need your feedback to make this feature as useful and as easy to use as
possible. I'm particularly interested in your comments about the dictionary
layout and how incremental logging configuration should work, but all feedback
will be gratefully received. Once implemented, the configuration format will
become subject to backward compatibility constraints and therefore hard to
change, so get your comments and ideas in now!

Thanks in advance,


Vinay Sajip


  
For my part, I'm configuring the loggers in the application entry point 
file, in python code. I'm not sure I am that concerned. However, being a 
great fan of this module, I gladly support any improvements you 
may add to it and appreciate all the work you've already done 
so far.


Cheers,

Jean-Michel

--
http://mail.python.org/mailman/listinfo/python-list


Re: Cpython optimization

2009-10-23 Thread Antoine Pitrou
On Fri, 23 Oct 2009 15:53:02 +0200, Olof Bjarnason wrote:
 
 This would be a way to speed things up in an image processing algorithm:
 1. divide the image into four subimages 2. let each core process each
 part independently 3. fix/merge (along split lines for example) into a
 resulting, complete image

Well, don't assume you're the first to think about that.
I'm sure that performance-conscious image processing software already has 
this kind of tile-based optimizations.
(actually, if you look at benchmarks of 3D rendering which are regularly 
done by enthusiast websites, it shows exactly that)

Antoine.
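A toy sketch of the split/process/merge scheme with the stdlib (the per-tile
'brighten' operation and the 1-D "image" are stand-ins, not real image code):

```python
from multiprocessing import Pool

def brighten(tile):
    # Stand-in per-tile operation: clamp-add 10 to each "pixel".
    return [min(255, px + 10) for px in tile]

if __name__ == '__main__':
    image = list(range(64))                               # fake 1-D "image"
    tiles = [image[i:i + 16] for i in range(0, 64, 16)]   # four subimages
    with Pool(4) as pool:
        result = sum(pool.map(brighten, tiles), [])       # merge along split lines
    assert result == brighten(image)                      # matches sequential run
```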

-- 
http://mail.python.org/mailman/listinfo/python-list


Copying a ZipExtFile

2009-10-23 Thread Moore, Mathew L
Hello all,

A newbie here.  I was wondering why the following fails on Python 2.6.2 
(r262:71605) on win32.  Am I doing something inappropriate?

Interestingly, it works in 3.1, but I would like to also get it working in 2.6.

Thanks in advance,
--Matt


import io
import shutil
import tempfile
import zipfile

with tempfile.TemporaryFile() as f:
    # (Real code retrieves archive via urllib2.urlopen().)
    zip = zipfile.ZipFile(f, mode='w')
    zip.writestr('unknowndir/src.txt', 'Hello, world!')
    zip.close()

    # (Pretend we just downloaded the zip file.)
    f.seek(0)

    # Result of urlopen() is not seekable, but ZipFile requires a
    # seekable file.  Work around this by copying the file into a
    # memory stream.
    with io.BytesIO() as memio:
        shutil.copyfileobj(f, memio)
        zip = zipfile.ZipFile(file=memio)
        # Can't use zip.extract(), because I want to ignore paths
        # within archive.
        src = zip.open('unknowndir/src.txt')
        with open('dst.txt', mode='wb') as dst:
            shutil.copyfileobj(src, dst)


The last line throws an Error:


Traceback (most recent call last):
  File "test.py", line 25, in <module>
    shutil.copyfileobj(src, dst)
  File "C:\Python26\lib\shutil.py", line 27, in copyfileobj
    buf = fsrc.read(length)
  File "C:\Python26\lib\zipfile.py", line 594, in read
    bytes = self.fileobj.read(bytesToRead)
TypeError: integer argument expected, got 'long'
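One workaround on 2.6 is to skip the streaming copy and read the member in a
single call with ZipFile.read(); a Python 3 sketch of the same round-trip:

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, mode='w') as zf:
    zf.writestr('unknowndir/src.txt', 'Hello, world!')
buf.seek(0)

with zipfile.ZipFile(buf) as zf:
    # Whole member at once; no copyfileobj, so no chunked read() calls.
    data = zf.read('unknowndir/src.txt')
print(data)  # b'Hello, world!'
```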


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: A new way to configure Python logging

2009-10-23 Thread Vinay Sajip
 For my part, I'm configuring the loggers in the application entry point 

 file, in python code. I'm not sure I am that concerned. However being a 
 great fan of this module, I kindly support you for any improvements you 
 may add to this module and appreciate all the work you've already done 
 so far.

Thanks, I also appreciate your comments on python-list to help out users new to 
logging or having trouble with it.

If you're happy configuring in code, that's fine. The new functionality is for 
users who want to do declarative configuration using YAML, JSON or Python 
source (Django is possibly going to use a dict declared in Python source in the 
Django settings module to configure logging for Django sites).

Best regards,

Vinay Sajip


  
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Cpython optimization

2009-10-23 Thread Olof Bjarnason

 
  This would be a way to speed things up in an image processing algorithm:
  1. divide the image into four subimages 2. let each core process each
  part independently 3. fix/merge (along split lines for example) into a
  resulting, complete image

 Well, don't assume you're the first to think about that.
 I'm sure that performance-conscious image processing software already has
 this kind of tile-based optimizations.
 (actually, if you look at benchmarks of 3D rendering which are regularly
 done by enthusiast websites, it shows exactly that)

 No, I didn't assume I was the first to think about that - I wanted to learn
more about how optimization is possible/viable at all with multi-core
motherboards, when memory speed is the bottleneck anyway, regardless of
smart caching technologies.

I still have not received a convincing answer :)


--

  http://mail.python.org/mailman/listinfo/python-list




-- 
twitter.com/olofb
olofb.wordpress.com
olofb.wordpress.com/tag/english
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Cpython optimization

2009-10-23 Thread Jason Sewall
On Fri, Oct 23, 2009 at 2:31 PM, Olof Bjarnason
olof.bjarna...@gmail.com wrote:
 
  This would be a way to speed things up in an image processing algorithm:
  1. divide the image into four subimages 2. let each core process each
  part independently 3. fix/merge (along split lines for example) into a
  resulting, complete image

 Well, don't assume you're the first to think about that.
 I'm sure that performance-conscious image processing software already has
 this kind of tile-based optimizations.
 (actually, if you look at benchmarks of 3D rendering which are regularly
 done by enthusiast websites, it shows exactly that)

This is indeed a tried-and-true method for parallelizing certain image
and other grid-based algorithms, but it is in fact not appropriate for
a wide variety of techniques. Things like median filters, where f(A|B)
!= f(A)|f(B) (with | as some sort of concatenation), will not be able
to generate correct results given the scheme you outlined.

 No I didn't assume I was the first to think about that - I wanted to learn
 more about how optimization at all is possible/viable with multi-core
 motherboards, when the memory speed is the bottleneck anyway, regardless of
 smart caching technologies.

 I still have not received a convincing answer :)

Give Ulrich Drepper's "What Every Programmer Should Know About Memory"
a read (http://people.redhat.com/drepper/cpumemory.pdf) and you'll
hear all you want to know (and more) about how the memory hierarchy
plays with multi-core.

I don't contribute to CPython, but I am virtually certain that they
are not interested in having the compiler/interpreter try to apply
some generic threading to arbitrary code. The vast majority of Python
code wouldn't benefit from it even if it worked well, and I'm _very_
skeptical that there is any silver bullet for parallelizing general
code. If you think of one, tell Intel, they'll hire you.

_Perhaps_ the numpy or scipy people (I am not associated with either
of them) would be interested in some threading for certain array
operations. Maybe you could write something on top of what they have
to speed up array ops.

Jason
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing deadlock

2009-10-23 Thread paulC

 Hey Paul,

 I guess I was unclear in my explanation - the deadlock only happens  
 when I *don't* call join.

 Cheers,
 Brian

Whoops, my bad.

Have you tried replacing the prints with writes to another output Queue?
I'm wondering if sys.stdout has a problem.

Regards, Paul C.
-- 
http://mail.python.org/mailman/listinfo/python-list


How to write a daemon program to monitor symbolic links?

2009-10-23 Thread Peng Yu
As far as I know, linux doesn't support a system-level way to figure
out all the symbolic links pointing to a given file. I could do a
system-wide search to look for any symbolic link that points to the file
I am interested in, but this will be too slow when there are many
files on the system.

I'm thinking of writing a daemon program which will build a database
of all the symbolic links that point to any file. Later on, whenever
I change or remove any file or symbolic link, I'll notify the
daemon process of the changes. By keeping this daemon process running, I
can quickly figure out what symbolic links are pointing to any given
file.

But I have never made a daemon program like this in python. Could
somebody point me to what packages I need in order to make a daemon
process like this? Thank you!
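Before reaching for a daemon, the one-shot scan that would seed such a
database is itself short; a sketch (symlink_map is my own name):

```python
import os

def symlink_map(root):
    """Map each symlink target to the list of links pointing at it."""
    targets = {}
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path):
                # Resolve to the canonical target path.
                target = os.path.realpath(path)
                targets.setdefault(target, []).append(path)
    return targets
```

A daemon would then only need to watch for filesystem changes and update
this mapping incrementally.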
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to write a daemon program to monitor symbolic links?

2009-10-23 Thread Falcolas
On Oct 23, 1:25 pm, Peng Yu pengyu...@gmail.com wrote:
 As far as I know, linux doesn't support a system level way to figure
 out all the symbolic links point to a give file. I could do a system
 wide search to look for any symbolic link that point to the file that
 I am interested in. But this will be too slow when there are many
 files in the systems.

 I'm thinking of writing a daemon program which will build a database
 on all the symbolic links that point to any files. Later on, whenever
 I change or remove any file or symbolic link, I'll will notify the
 daemon process the changes. By keeping this daemon process running, I
 can quickly figure out what symbolic links are pointing to any give
 file.

 But I have never make a daemon program like this in python. Could
 somebody point me what packages I need in order to make a daemon
 process like this? Thank you!

I would recommend looking into some articles on creating well-behaved
daemons and reviewing Python recipes for creating daemonic processes.
From there, it's mostly a matter of writing code which is fairly
self-reliant. The ability to write to the system logs (Python module
syslog) helps quite a bit.

http://www.google.com/search?q=writing+daemons
http://code.activestate.com/recipes/278731/

I typically write a program which will run from the command line well,
then add a switch to make it a daemon. That way, you have direct
control over it while writing the daemon, but can then daemonize it
(using the activestate recipe) without making changes to the code.

Garrick
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to write a daemon program to monitor symbolic links?

2009-10-23 Thread Grant Edwards
On 2009-10-23, Peng Yu pengyu...@gmail.com wrote:

 [...]

 I'm thinking of writing a daemon program [...]

 But I have never made a daemon program like this in python. Could
 somebody point me what packages I need in order to make a daemon
 process like this?

http://www.google.com/search?q=python+daemon

-- 
Grant Edwards   grante Yow! Did an Italian CRANE
  at   OPERATOR just experience
   visi.comuninhibited sensations in
   a MALIBU HOT TUB?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to write a daemon program to monitor symbolic links?

2009-10-23 Thread Falcolas
On Oct 23, 1:38 pm, Falcolas garri...@gmail.com wrote:
 On Oct 23, 1:25 pm, Peng Yu pengyu...@gmail.com wrote:



  As far as I know, linux doesn't support a system level way to figure
  out all the symbolic links point to a give file. I could do a system
  wide search to look for any symbolic link that point to the file that
  I am interested in. But this will be too slow when there are many
  files in the systems.

  I'm thinking of writing a daemon program which will build a database
  on all the symbolic links that point to any files. Later on, whenever
  I change or remove any file or symbolic link, I'll will notify the
  daemon process the changes. By keeping this daemon process running, I
  can quickly figure out what symbolic links are pointing to any give
  file.

  But I have never make a daemon program like this in python. Could
  somebody point me what packages I need in order to make a daemon
  process like this? Thank you!

 I would recommend looking into some articles on creating well behaved
 daemons and review python recipes for creating daemonic processes.
 From there, it's mostly a matter of writing code which is fairly self
 reliant. The ability to write to the system logs (Python module
 syslog) helps quite a bit.

 http://www.google.com/search?q=writing+daemons
 http://code.activestate.com/recipes/278731/

 I typically write a program which will run from the command line well,
 then add a switch to make it a daemon. That way, you have direct
 control over it while writing the daemon, but can then daemonize it
 (using the activestate recipe) without making changes to the code.

 Garrick

One other note - sorry for the double post - if you look at other
programs which maintain a DB of files, such as Unix's slocate program,
they update the DB as a daily cron job. You may want to consider this
route as well.

Garrick
-- 
http://mail.python.org/mailman/listinfo/python-list


Dll in python

2009-10-23 Thread snonca
hello

I would like to know how to create a DLL in Python for use in a .NET
project.

Is there a tutorial?

Thanks
Luis
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Dll in python

2009-10-23 Thread Diez B. Roggisch

snonca schrieb:

hello

I would like to know how to create a DLL in Python for use in a .NET
project.

Is there a tutorial?



Take a look at IronPython.

Diez
--
http://mail.python.org/mailman/listinfo/python-list


Re: Dll in python

2009-10-23 Thread Martien Verbruggen
On Fri, 23 Oct 2009 13:09:10 -0700 (PDT),
snonca luis.bi...@gmail.com wrote:
 hello

 I would like to know how to create a DLL in Python for use in a .NET
 project.

Are you maybe looking for this:

http://pythonnet.sourceforge.net/

Martien
-- 
 | 
Martien Verbruggen   | If it isn't broken, it doesn't have
first.l...@heliotrope.com.au | enough features yet.
 | 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Dll in python

2009-10-23 Thread Aahz
In article f57b5b5b-7a51-4611-9611-49e4068ac...@d4g2000vbm.googlegroups.com,
snonca  luis.bi...@gmail.com wrote:

 [...]

Was I the only person who read the Subject: line and thought, "How do you
roll a D11, anyway?"
-- 
Aahz (a...@pythoncraft.com)   * http://www.pythoncraft.com/

In the end, outside of spy agencies, people are far too trusting and
willing to help.  --Ira Winkler
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: A new way to configure Python logging

2009-10-23 Thread Wolodja Wentland
On Thu, Oct 22, 2009 at 09:25 +, Vinay Sajip wrote:

 I need your feedback to make this feature as useful and as easy to use as
 possible. I'm particularly interested in your comments about the dictionary
 layout and how incremental logging configuration should work, but all feedback
 will be gratefully received. Once implemented, the configuration format will
 become subject to backward compatibility constraints and therefore hard to
 change, so get your comments and ideas in now!

First and foremost: A big *THANK YOU* for creating and maintaining the
logging module. I use it in every single piece of software I create and
am very pleased with it.

You asked for feedback on incremental logging and I will just describe
how I use the logging module in an application.

Almost all applications I write consist of a/many script(s) (foo-bar,
foo-baz, ...) and a associated package (foo).

Logger/Level Hierarchy
--

I usually register a logger 'foo' within the application and one logger
for each module in the package, so the resulting logger hierarchy will
look like this:

foo
 |__bar
 |__baz
 |__newt
|___witch

I set every loggers log level to DEBUG and use the respective logger in
each module to log messages with different log levels. A look at the
code reveals, that I use log levels in the following way:

* DEBUG - Low level chatter:
* Called foo.bar.Shrubbery.find()
* Set foo.newt.Witch.age to 23
* ...

* INFO - Messages of interest to the user:
* Read configuration from ~/.foorc
* Processing Swallow: Unladen

* WARNING - yeah, just that (rarely used)
* Use of deprecated...

* ERROR:
* No such file or directory: ...
* Bravery fail: Sir Robin

Among other levels specific to the application, like PERFORMANCE for
performance related unit tests, ...
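A custom level like PERFORMANCE can be registered with the stdlib directly;
a sketch (the numeric value 25, between INFO and WARNING, is my assumption):

```python
import logging

PERFORMANCE = 25  # between INFO (20) and WARNING (30)
logging.addLevelName(PERFORMANCE, 'PERFORMANCE')

log = logging.getLogger('foo.perf')
log.log(PERFORMANCE, 'render took %.1f ms', 3.2)
```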

And *yes*: I use the logging module to output messages that I think the
user *might* be interested in seeing or saving.

Application User Interface
--

I like to give my users great freedom in configuring the application and
its output behaviour. I therefore usually have the following command
line options:

-q, --quiet No output at all
-v, --verbose   More output (Messages with Level = INFO)
--debug All messages

And I like the idea to enable the user to configure logging to the
console and to a log file independently, so I also provide;

   --log-file=FILE  Path of a file logged messages will get saved to
   --log-file-level=LEVEL   Messages with level = LEVEL will be saved

Sometimes I need special LogRecord handling, for example, if I want to
enable the user to save logs to a html file, for which I write a HTML
Handler and expose the templates (mako, jinja, you-name-it) used for
generating the HTML to the user.

The last facet of the logging system I expose to the user is the format
of the log messages. I usually do this within the applications
configuration file (~/.foorc) in a special section [Logging].

Implementation
--

You have rightfully noted in the PEP, that the ConfigParser method
is not really suitable for incremental configuration and I therefore
configure the logging system programmatically.

I create all loggers except the root (foo) with:

LOG = logging.getLogger(__name__)
LOG.setLevel(logging.DEBUG)

within each module and then register suitable handlers *with the root
logger* to process incoming LogRecords. That means that I usually have a
StreamHandler, a FileHandler among other more specific ones.

The Handlers I register have suitable Filters associated with them,
so that it is easy to just add multiple handlers for various levels to
the root handler without causing LogRecords to get logged multiple
times.
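That root-logger arrangement might look like this in code (a sketch; Python
3.2+ accepts a plain callable as a handler filter):

```python
import logging
import sys

root = logging.getLogger()
root.setLevel(logging.DEBUG)

# User-facing messages below ERROR go to stdout only.
console = logging.StreamHandler(sys.stdout)
console.setLevel(logging.INFO)
console.addFilter(lambda rec: rec.levelno < logging.ERROR)
root.addHandler(console)

# Errors go to stderr, so nothing is logged twice.
errors = logging.StreamHandler(sys.stderr)
errors.setLevel(logging.ERROR)
root.addHandler(errors)

logging.getLogger('foo').info('to stdout only')
logging.getLogger('foo').error('to stderr only')
```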

I have *never* had to register any handlers with loggers further down in
the hierarchy. I much rather like to combine suitable Filters and
Handlers at the root logger. But that might just be me and due to my
very restricted needs. What is a use case for that?

The unsuitabililty of the ConfigParser method however is *not* due to the
*format* of the textual logging configuration (ie. ini vs dict) but
rather due to the fact that the logging library does not expose all
aspects of the configuration to the programmer *after* it was configured
with .fileConfig().

Please contradict me if I am wrong here, but there seems to be *no* method
to change/delete handlers/loggers once they are configured. Surely I
could tamper with logging's internals, but I don't want to do that.

PEP 391
---

I like PEP 391 a lot. Really! Thanks for it. The configuration format is
very concise and easily readable. I like the idea of decoupling the
object ultimately used for configuring (the dict) from the storage of
that object (pickled, YAML, JSON, ...).

What I dislike is the fact that I will still not be able to use it with
all its potential. If PEP 391 would have already been implemented right
now I 

Re: Dll in python

2009-10-23 Thread Rami Chowdhury

On Fri, 23 Oct 2009 14:08:58 -0700, Aahz a...@pythoncraft.com wrote:

In article  
f57b5b5b-7a51-4611-9611-49e4068ac...@d4g2000vbm.googlegroups.com,

snonca  luis.bi...@gmail.com wrote:


[...]


Was I the only person who read the Subject: line and thought, How do you
roll D11, anyway?


Surely it's just like a slightly unbalanced D12?

--
Rami Chowdhury
Never attribute to malice that which can be attributed to stupidity --  
Hanlon's Razor

408-597-7068 (US) / 07875-841-046 (UK) / 0189-245544 (BD)
--
http://mail.python.org/mailman/listinfo/python-list


Python help: Sending a play command to quicktime, or playing a movie in python

2009-10-23 Thread Varnon Varnon
I'm sure this is a simple problem, or at least I hope it is, but I'm
not an experienced programmer and the solution eludes me.

My realm of study is the behavioral sciences. I want to write a
program to help me record data from movie files.
Currently I have a program that can record the time of a keystroke so
that I can use that to obtain frequency, duration and other temporal
characteristics of the behaviors in my movies.

What I really want, is a way to start playing the movie. Right now I
have to play the movie, then switch to my program. I would love it if
it were possible for me to have my program send a message to quicktime
that says play. Or any other work around really. If python could
play the movie, that would work just as well.

I'm using a mac btw.

Any suggestions?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python help: Sending a play command to quicktime, or playing a movie in python

2009-10-23 Thread Chris Rebert
On Fri, Oct 23, 2009 at 3:07 PM, Varnon Varnon varnonz...@gmail.com wrote:
 I'm sure this is a simple problem, or at least I hope it is, but I'm
 not an experienced programmer and the solution eludes me.

 My realm of study is the behavioral sciences. I want to write a
 program to help me record data from movie files.
 Currently I have a program that can record the time of a keystroke so
 that I can use that to obtain frequency, duration and other temporal
 characteristics of the behaviors in my movies.

 What I really want, is a way to start playing the movie. Right now I
 have to play the movie, then switch to my program. I would love it if
 it were possible for me to have my program send a message to quicktime
 that says play. Or any other work around really. If python could
 play the movie, that would work just as well.

 I'm using a mac btw.

 Any suggestions?

import subprocess
subprocess.Popen(["open", "path/to/the/movie.file"])

Docs for the subprocess module: http://docs.python.org/library/subprocess.html
For information on the Mac OS X open command, `man open` from Terminal.

Cheers,
Chris
--
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing deadlock

2009-10-23 Thread Brian Quinlan


On 24 Oct 2009, at 06:01, paulC wrote:



Hey Paul,

I guess I was unclear in my explanation - the deadlock only happens
when I *don't* call join.

Cheers,
Brian


Whoops, my bad.

Have you tried replacing the prints with writes to another output Queue?
I'm wondering if sys.stdout has a problem.


Removing the print from the subprocess doesn't prevent the deadlock.

Cheers,
Brian
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python help: Sending a play command to quicktime, or playing a movie in python

2009-10-23 Thread Terry Reedy

Chris Rebert wrote:

On Fri, Oct 23, 2009 at 3:07 PM, Varnon Varnon varnonz...@gmail.com wrote:

I'm sure this is a simple problem, or at least I hope it is, but I'm
not an experienced programmer and the solution eludes me.

My realm of study is the behavioral sciences. I want to write a
program to help me record data from movie files.
Currently I have a program that can record the time of a keystroke so
that I can use that to obtain frequency, duration and other temporal
characteristics of the behaviors in my movies.

What I really want, is a way to start playing the movie. Right now I
have to play the movie, then switch to my program. I would love it if
it were possible for me to have my program send a message to quicktime
that says play. Or any other work around really. If python could
play the movie, that would work just as well.

I'm using a mac btw.

Any suggestions?


import subprocess
subprocess.Popen(["open", "path/to/the/movie.file"])

Docs for the subprocess module: http://docs.python.org/library/subprocess.html
For information on the Mac OS X open command, `man open` from Terminal.


Or, for more control, look at pygame or other Python game frameworks.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Python help: Sending a play command to quicktime, or playing a movie in python

2009-10-23 Thread Chris Varnon
Thanks, that works wonderfully. Once I set QuickTime's preferences to
play on open, it opens and plays the movie exactly like I want.
But now I need a line of code to bring Python to the front again so it
can read my input. Any more suggestions?

On Fri, Oct 23, 2009 at 5:17 PM, Chris Rebert c...@rebertia.com wrote:
 On Fri, Oct 23, 2009 at 3:07 PM, Varnon Varnon varnonz...@gmail.com wrote:
 I'm sure this is a simple problem, or at least I hope it is, but I'm
 not an experienced programmer and the solution eludes me.

 My realm of study is the behavioral sciences. I want to write a
 program to help me record data from movie files.
 Currently I have a program that can record the time of a keystroke so
 that I can use that to obtain frequency, duration and other temporal
 characteristics of the behaviors in my movies.

 What I really want, is a way to start playing the movie. Right now I
 have to play the movie, then switch to my program. I would love it if
 it were possible for me to have my program send a message to quicktime
 that says play. Or any other work around really. If python could
 play the movie, that would work just as well.

 I'm using a mac btw.

 Any suggestions?

 import subprocess
 subprocess.Popen(["open", "path/to/the/movie.file"])

 Docs for the subprocess module: http://docs.python.org/library/subprocess.html
 For information on the Mac OS X open command, `man open` from Terminal.

 Cheers,
 Chris
 --
 http://blog.rebertia.com

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python help: Sending a play command to quicktime, or playing a movie in python

2009-10-23 Thread Chris Rebert
 On Fri, Oct 23, 2009 at 5:17 PM, Chris Rebert c...@rebertia.com wrote:
 On Fri, Oct 23, 2009 at 3:07 PM, Varnon Varnon varnonz...@gmail.com wrote:
 I'm sure this is a simple problem, or at least I hope it is, but I'm
 not an experienced programmer and the solution eludes me.

 My realm of study is the behavioral sciences. I want to write a
 program to help me record data from movie files.
 Currently I have a program that can record the time of a keystroke so
 that I can use that to obtain frequency, duration and other temporal
 characteristics of the behaviors in my movies.

 What I really want, is a way to start playing the movie. Right now I
 have to play the movie, then switch to my program. I would love it if
 it were possible for me to have my program send a message to quicktime
 that says play. Or any other work around really. If python could
 play the movie, that would work just as well.

 I'm using a mac btw.

 Any suggestions?

 import subprocess
 subprocess.Popen(["open", "path/to/the/movie.file"])

 Docs for the subprocess module: 
 http://docs.python.org/library/subprocess.html
 For information on the Mac OS X open command, `man open` from Terminal.

On Fri, Oct 23, 2009 at 4:05 PM, Chris Varnon varnonz...@gmail.com wrote:
 Thanks, That works wonderfuly. Once I set quicktimes preferences to
 play on open it opens and plays the movie exactly like I want.
 But now I need a line of code to bring python to the front again so it
 can read my input. Any more suggestions?

Add the -g option so focus isn't given to Quicktime (this is covered
in the manpage I pointed you to):

subprocess.Popen(["open", "-g", "path/to/the/movie.file"])

Also, in the future, try to avoid top-posting (see
http://en.wikipedia.org/wiki/Posting_style).

Cheers,
Chris
--
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list
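The pattern from this exchange can be wrapped in a small helper. The sketch below only builds the argument list, so the list can be inspected before being handed to `subprocess`; the `build_open_command` helper and its parameters are hypothetical, not part of any library:

```python
import subprocess

def build_open_command(path, background=True):
    """Build an argument list for the macOS `open` command.

    With background=True the -g flag is included, so the opened
    application does not steal focus from the caller.
    """
    cmd = ["open"]
    if background:
        cmd.append("-g")
    cmd.append(path)
    return cmd

# On a Mac this would launch the default movie player without
# taking focus away from the running Python program:
# subprocess.Popen(build_open_command("path/to/the/movie.file"))
```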


Re: Python help: Sending a play command to quicktime, or playing a movie in python

2009-10-23 Thread Chris Varnon
 On Fri, Oct 23, 2009 at 5:17 PM, Chris Rebert c...@rebertia.com wrote:
 On Fri, Oct 23, 2009 at 3:07 PM, Varnon Varnon varnonz...@gmail.com wrote:
 I'm sure this is a simple problem, or at least I hope it is, but I'm
 not an experienced programmer and the solution eludes me.

 My realm of study is the behavioral sciences. I want to write a
 program to help me record data from movie files.
 Currently I have a program that can record the time of a keystroke so
 that I can use that to obtain frequency, duration and other temporal
 characteristics of the behaviors in my movies.

 What I really want, is a way to start playing the movie. Right now I
 have to play the movie, then switch to my program. I would love it if
 it were possible for me to have my program send a message to quicktime
 that says play. Or any other work around really. If python could
 play the movie, that would work just as well.

 I'm using a mac btw.

 Any suggestions?

 import subprocess
 subprocess.Popen(["open", "path/to/the/movie.file"])

 Docs for the subprocess module: 
 http://docs.python.org/library/subprocess.html
 For information on the Mac OS X open command, `man open` from Terminal.

 On Fri, Oct 23, 2009 at 4:05 PM, Chris Varnon varnonz...@gmail.com wrote:
 Thanks, That works wonderfuly. Once I set quicktimes preferences to
 play on open it opens and plays the movie exactly like I want.
 But now I need a line of code to bring python to the front again so it
 can read my input. Any more suggestions?

 Add the -g option so focus isn't given to Quicktime (this is covered
 in the manpage I pointed you to):

 subprocess.Popen(["open", "-g", "path/to/the/movie.file"])

 Also, in the future, try to avoid top-posting (see
 http://en.wikipedia.org/wiki/Posting_style).

 Cheers,
 Chris
 --
 http://blog.rebertia.com


Wonderful. I totally missed the -g option.
I use pygame for the input handling currently. Maybe it's not the most
elegant solution, but it's what I knew how to do. I just wasn't sure
how to do that one last bit.
Thanks a bunch!

Also, I typically don't post over email lists, so I didn't think about
the top-posting. This is the preferred method, right?
Thanks again.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PySerial

2009-10-23 Thread Ronn Ross
I'm using pySerial to connect to a serial port (RS-232) on a Windows XP
machine. I'm using the Python interactive interpreter to interact with the
device. I type the following:
import serial
ser = serial.Serial(2)
ser.write("command")

But this does nothing to the control. I have been able to connect via puTTY
to verify that the command and the device are working. Next I tried to open
the port before writing. It looks like this:
import serial
ser = serial.Serial(2)
ser.open()

This raises an error stating that I do not have permission. I don't
know how to resolve either issue. Any help would be greatly appreciated.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PySerial

2009-10-23 Thread Chris Rebert
On Thu, Oct 22, 2009 at 4:43 PM, Ronn Ross ronn.r...@gmail.com wrote:
 I'm using pySerial to connect to a serial port (rs232) on a windows xp
 machine. I'm using python interactive interpretor to interact with the
 device. I type the following:
 import serial
 ser = serial.Serial(2)
 ser.write("command")

 But this does nothing to the control. I have been able to connect via puTTY
 to verify that the command and the device are working. Next I tried to open
 the port before
 writing. It looks like this:
 import serial
 ser = serial.Serial(2)
 ser.open()

 It returns that an error. It states that I do not have permissions? I don't
 know how to resolve either issue. Any help would be greatly appreciated.

Have you tried setting the baud rate? (the `baudrate` param to
Serial's constructor)
Why are you using port #2 and not #0?

Cheers,
Chris
--
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PySerial

2009-10-23 Thread Ronn Ross
I have tried setting the baud rate with no success. Also, I'm using port #2
because I'm using a USB-to-serial cable.

On Fri, Oct 23, 2009 at 7:51 PM, Chris Rebert c...@rebertia.com wrote:

 On Thu, Oct 22, 2009 at 4:43 PM, Ronn Ross ronn.r...@gmail.com wrote:
  I'm using pySerial to connect to a serial port (rs232) on a windows xp
  machine. I'm using python interactive interpretor to interact with the
  device. I type the following:
  import serial
  ser = serial.Serial(2)
  ser.write("command")
 
  But this does nothing to the control. I have been able to connect via
 puTTY
  to verify that the command and the device are working. Next I tried to
 open
  the port before
  writing. It looks like this:
  import serial
  ser = serial.Serial(2)
  ser.open()
 
  It returns that an error. It states that I do not have permissions? I
 don't
  know how to resolve either issue. Any help would be greatly appreciated.

 Have you tried setting the baud rate? (the `baudrate` param to
 Serial's constructor)
 Why are you using port #2 and not #0?

 Cheers,
 Chris
 --
 http://blog.rebertia.com

-- 
http://mail.python.org/mailman/listinfo/python-list


How to modify local variables from internal functions?

2009-10-23 Thread kj



I like Python a lot, and in fact I'm doing most of my scripting in
Python these days, but one thing that I absolutely *DETEST*
about Python is that it does not allow an internal function to modify
variables in the enclosing local scope.  This willful hobbling of
internal functions seems to me so perverse and unnecessary that it
delayed my adoption of Python by about a decade.  Just thinking
about it brings me to the brink of blowing a gasket...  I must go
for a walk...



OK, I'm better now.

Anyway, I recently wanted to write an internal helper function that
updates an internal list and returns True if, after this update,
the list is empty, and once more I bumped against this hated
feature.  What I wanted to write, if Python did what I wanted it
to, was this:

def spam():
    jobs = None
    def check_finished():
        jobs = look_for_more_jobs()
        return not jobs

    if check_finished():
        return

    process1(jobs)

    if check_finished():
        return

    process2(jobs)

    if check_finished():
        return

    process3(jobs)

In the application in question, the availability of jobs can change
significantly over the course of the function's execution (jobs
can expire before they are fully processed, and new ones can arise),
hence the update-and-check prior to the calls to process1, process2,
process3.

But, of course, the above does not work in Python, because the jobs
variable local to spam does not get updated by check_finished.
Grrr!

I ended up implementing check_finished as this stupid-looking
monstrosity:

def spam():
jobs = []  
def check_finished(jobs):
while jobs:
jobs.pop()
jobs.extend(look_for_more_jobs())
return not jobs

if check_finished(jobs):
return

# etc.

Is there some other trick to modify local variables from within
internal functions?

TIA!

kynn
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to modify local variables from internal functions?

2009-10-23 Thread Chris Rebert
On Fri, Oct 23, 2009 at 5:19 PM, kj no.em...@please.post wrote:
 I like Python a lot, and in fact I'm doing most of my scripting in
 Python these days, but one thing that I absolutely *DETEST*
 about Python is that it does not allow an internal function to modify
 variables in the enclosing local scope.  This willful hobbling of
 internal functions seems to me so perverse and unnecessary that it
 delayed my adoption of Python by about a decade.  Just thinking
 about it brings me to the brink of blowing a gasket...  I must go
 for a walk...
snip
 Anyway, I recently wanted to write a internal helper function that
 updates an internal list and returns True if, after this update,
 the list is empty, and once more I bumped against this hated
 feature.  What I wanted to write, if Python did what I wanted it
 to, was this:

 def spam():
    jobs = None
    def check_finished():
       jobs = look_for_more_jobs()
       return not jobs

    if check_finished():
        return

    process1(jobs)

    if check_finished():
        return

    process2(jobs)

    if check_finished():
        return

    process3(jobs)
snip
 Is there some other trick to modify local variables from within
 internal functions?

The `nonlocal` statement?:
http://www.python.org/dev/peps/pep-3104/

Cheers,
Chris
--
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to modify local variables from internal functions?

2009-10-23 Thread Stephen Hansen
On Fri, Oct 23, 2009 at 5:19 PM, kj no.em...@please.post wrote:

 I like Python a lot, and in fact I'm doing most of my scripting in
 Python these days, but one thing that I absolutely *DETEST*
 about Python is that it does not allow an internal function to modify
 variables in the enclosing local scope.  This willful hobbling of
 internal functions seems to me so perverse and unnecessary that it
 delayed my adoption of Python by about a decade.  Just thinking
 about it brings me to the brink of blowing a gasket...  I must go
 for a walk...
 [...]

Is there some other trick to modify local variables from within
 internal functions?


In Python 3, there's the nonlocal keyword to let you do just this.
Otherwise, no. The only way to do what you want is to use some mutable
object to 'hold' your data in the enclosing namespace, and modify that
object. You can use a list like you did, or some simple class namespace:
pass with ns = namespace() and ns.jobs = 1 or whatever you prefer.

But otherwise, you just can't do that. Gotta deal.
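The two options mentioned above can be shown side by side. A minimal sketch (the `[1, 2]` stand-in for `look_for_more_jobs()` is illustrative, not from the thread):

```python
def spam_with_nonlocal():
    # Python 3: `nonlocal` lets the inner function rebind the
    # enclosing function's variable directly.
    jobs = None
    def check_finished():
        nonlocal jobs
        jobs = [1, 2]        # stand-in for look_for_more_jobs()
        return not jobs
    check_finished()
    return jobs

def spam_with_container():
    # Python 2 workaround: mutate a container in the enclosing
    # scope instead of rebinding a name.
    state = {"jobs": None}
    def check_finished():
        state["jobs"] = [1, 2]
        return not state["jobs"]
    check_finished()
    return state["jobs"]
```

Both variants leave the enclosing scope seeing the updated job list after `check_finished()` runs.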

--S
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to write a daemon program to monitor symbolic links?

2009-10-23 Thread Aahz
In article mailman.1942.1256325954.2807.python-l...@python.org,
Peng Yu  pengyu...@gmail.com wrote:

I'm thinking of writing a daemon program which will build a database
of all the symbolic links that point to any file. Later on, whenever
I change or remove any file or symbolic link, I'll notify the
daemon process of the changes. By keeping this daemon process running, I
can quickly figure out what symbolic links are pointing to any given
file.

But I have never made a daemon program like this in Python. Could
somebody point me to what packages I need in order to make a daemon
process like this? Thank you!

inotify
-- 
Aahz (a...@pythoncraft.com)   * http://www.pythoncraft.com/

In the end, outside of spy agencies, people are far too trusting and
willing to help.  --Ira Winkler
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to write a daemon program to monitor symbolic links?

2009-10-23 Thread Lawrence D'Oliveiro
In message mailman.1942.1256325954.2807.python-l...@python.org, Peng Yu 
wrote:

 As far as I know, Linux doesn't support a system-level way to figure
 out all the symbolic links that point to a given file.

Do you know of a system that does?

 I'm thinking of writing a daemon program which will build a database
 of all the symbolic links that point to any file. Later on, whenever
 I change or remove any file or symbolic link, I'll notify the
 daemon process of the changes.

What if the change is made to a removable/hot-pluggable volume while it's 
mounted on another system?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to modify local variables from internal functions?

2009-10-23 Thread MRAB

kj wrote:



I like Python a lot, and in fact I'm doing most of my scripting in
Python these days, but one thing that I absolutely *DETEST*
about Python is that it does not allow an internal function to modify
variables in the enclosing local scope.  This willful hobbling of
internal functions seems to me so perverse and unnecessary that it
delayed my adoption of Python by about a decade.  Just thinking
about it brings me to the brink of blowing a gasket...  I must go
for a walk...



OK, I'm better now.

Anyway, I recently wanted to write a internal helper function that
updates an internal list and returns True if, after this update,
the list is empty, and once more I bumped against this hated
feature.  What I wanted to write, if Python did what I wanted it
to, was this:

def spam():
jobs = None 
def check_finished(): 
   jobs = look_for_more_jobs()

   return not jobs

if check_finished():
return 


process1(jobs)

if check_finished():
return

process2(jobs)

if check_finished():
return

process3(jobs)

In application in question, the availability of jobs can change
significantly over the course of the function's execution (jobs
can expire before they are fully processed, and new ones can arise),
hence the update-and-check prior to the calls to process1, process2,
process3.

But, of course, the above does not work in Python, because the jobs
variable local to spam does not get updated by check_finished.
G!


So you didn't try this?

def spam():
jobs = []

def check_finished():
jobs[:] = look_for_more_jobs()
return not jobs

if check_finished():
return

process1(jobs)

if check_finished():
return

process2(jobs)

if check_finished():
return

process3(jobs)


Actually, I'd refactor and reduce it to this:

def spam():
for proc in [process1, process2, process3]:
jobs = look_for_more_jobs()
if not jobs:
return
proc(jobs)

--
http://mail.python.org/mailman/listinfo/python-list


Re: python pyodbc - connect error

2009-10-23 Thread Gabriel Genellina
En Thu, 22 Oct 2009 22:33:52 -0300, Threader Slash  
threadersl...@gmail.com escribió:


Hi again.. I have done the same test using pyodbc, but with the MySQL ODBC
driver. It works fine for MySQL. The problem still remains with Lotus Notes.
Any other hints please?


http://www.connectionstrings.com

--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Is there a command that returns the number of substrings in a string?

2009-10-23 Thread Peng Yu
For example, the long string is 'abcabc' and the given string is
'abc', then 'abc' appears 2 times in 'abcabc'. Currently, I am calling
'find()' multiple times to figure out how many times a given string
appears in a long string. I'm wondering if there is a function in
Python which can directly return this information.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing deadlock

2009-10-23 Thread Gabriel Genellina
En Thu, 22 Oct 2009 23:18:32 -0300, Brian Quinlan br...@sweetapp.com  
escribió:


I don't like a few things in the code:


def _do(i):
 print('Run:', i)
 q = multiprocessing.Queue()
 for j in range(30):
 q.put(i*30+j)
 processes = _make_some_processes(q)

 while not q.empty():
 pass


I'd use time.sleep(0.1) or something instead of this busy wait, but see  
below.



#The deadlock only occurs on Mac OS X and only when these lines
#are commented out:
#for p in processes:
#p.join()


I don't know how multiprocessing deals with it, but if you don't join() a  
process it may become a zombie, so it's probably better to always join  
them. In that case I'd just remove the wait for q.empty() completely.



for i in range(100):
 _do(i)


Those lines should be guarded with: if __name__ == '__main__':

I don't know if fixing those things will fix your problem, but at least  
the code will look neater...


--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: Is there a command that returns the number of substrings in a string?

2009-10-23 Thread Stephen Hansen
On Fri, Oct 23, 2009 at 7:31 PM, Peng Yu pengyu...@gmail.com wrote:

 For example, the long string is 'abcabc' and the given string is
 'abc', then 'abc' appears 2 times in 'abcabc'. Currently, I am calling
 'find()' multiple times to figure out how many times a given string
 appears in a long string. I'm wondering if there is a function in
 python which can directly return this information.


>>> 'abcabc'.count('abc')
2
>>> print ''.count.__doc__
S.count(sub[, start[, end]]) -> int

Return the number of non-overlapping occurrences of substring sub in
string S[start:end].  Optional arguments start and end are interpreted
as in slice notation.

HTH,

--S
-- 
http://mail.python.org/mailman/listinfo/python-list
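Note that `str.count` counts non-overlapping occurrences, as its docstring says; if overlapping matches matter, a manual `find` loop is still the way to go. A small sketch (the `count_overlapping` helper is illustrative, not a stdlib function):

```python
def count_overlapping(haystack, needle):
    """Count occurrences of needle in haystack, allowing overlaps."""
    count = start = 0
    while True:
        start = haystack.find(needle, start)
        if start == -1:
            return count
        count += 1
        start += 1  # advance one char so overlapping hits are seen

print('abcabc'.count('abc'))            # non-overlapping: 2
print(count_overlapping('aaaa', 'aa'))  # overlapping: 3 ('aa'.count gives 2)
```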


Re: Is there a command that returns the number of substrings in a string?

2009-10-23 Thread Erik Max Francis

Peng Yu wrote:

For example, the long string is 'abcabc' and the given string is
'abc', then 'abc' appears 2 times in 'abcabc'. Currently, I am calling
'find()' multiple times to figure out how many times a given string
appears in a long string. I'm wondering if there is a function in
python which can directly return this information.


The .count string method.

--
Erik Max Francis  m...@alcyone.com  http://www.alcyone.com/max/
 San Jose, CA, USA  37 18 N 121 57 W  AIM/Y!M/Skype erikmaxfrancis
  Diplomacy and defense are not substitutes for one another. Either
   alone would fail. -- John F. Kennedy, 1917-1963
--
http://mail.python.org/mailman/listinfo/python-list


Re: PySerial

2009-10-23 Thread Gabriel Genellina
En Fri, 23 Oct 2009 20:56:21 -0300, Ronn Ross ronn.r...@gmail.com  
escribió:


I have tried setting the baud rate with no success. Also, I'm using port #2
because I'm using a USB-to-serial cable.


Note that Serial(2) is known as "COM3" in Windows -- is that ok?

--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: PAMIE and beautifulsoup problem

2009-10-23 Thread Gabriel Genellina

En Fri, 23 Oct 2009 03:03:56 -0300, elca high...@gmail.com escribió:


The following script is one I found via Google, but it does not work for me.
I'm using the PAMIE 3 version; even when I changed to the PAMIE 2b version,
I couldn't make it work.


You'll have to provide more details. *What* happened? You got an  
exception? Please post the complete exception traceback.



from BeautifulSoup import BeautifulSoup
Import cPAMIE
url = 'http://www.cnn.com'
ie = cPAMIE.PAMIE(url)
bs = BeautifulSoup(ie.pageText())


Also, don't re-type the code. Copy and paste it, directly from the program  
that failed.


--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: AttributeError: 'SSLSocket' object has no attribute 'producer_fifo'

2009-10-23 Thread Gabriel Genellina
En Fri, 23 Oct 2009 03:27:40 -0300, VYAS ASHISH M-NTB837  
ashish.v...@motorola.com escribió:



Tried using asyncore.dispatcher_with_send in place of
asynchat.async_chat and after a few request-responses, I get this:
Exception in thread Thread-2:
Traceback (most recent call last):
  File "C:\Python31\lib\threading.py", line 509, in _bootstrap_inner
    self.run()
  File "C:\Python31\lib\threading.py", line 462, in run
    self._target(*self._args, **self._kwargs)
  File "D:\XPress_v1.3\XPress\Model.py", line 3328, in run
    asyncore.loop()
  File "C:\Python31\lib\asyncore.py", line 206, in loop
    poll_fun(timeout, map)
  File "C:\Python31\lib\asyncore.py", line 124, in poll
    is_w = obj.writable()
  File "C:\Python31\lib\asyncore.py", line 516, in writable
    return (not self.connected) or len(self.out_buffer)
  File "C:\Python31\lib\asyncore.py", line 399, in __getattr__
    return getattr(self.socket, attr)
AttributeError: 'SSLSocket' object has no attribute 'out_buffer'
Someone please throw some light on this!


How do you manage asyncore's map? It should contain dispatcher objects --  
looks like you've got objects of another type there.


--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: Copying a ZipExtFile

2009-10-23 Thread Gabriel Genellina
En Fri, 23 Oct 2009 14:15:33 -0300, Moore, Mathew L moor...@battelle.org  
escribió:



with io.BytesIO() as memio:
shutil.copyfileobj(f, memio)
zip = zipfile.ZipFile(file=memio)
# Can't use zip.extract(), because I want to ignore paths
# within archive.
src = zip.open('unknowndir/src.txt')
with open('dst.txt', mode='wb') as dst:
shutil.copyfileobj(src, dst)


The last line throws an error:


Traceback (most recent call last):
  File "test.py", line 25, in <module>
    shutil.copyfileobj(src, dst)
  File "C:\Python26\lib\shutil.py", line 27, in copyfileobj
    buf = fsrc.read(length)
  File "C:\Python26\lib\zipfile.py", line 594, in read
    bytes = self.fileobj.read(bytesToRead)
TypeError: integer argument expected, got 'long'


Try adding a length parameter to the copyfileobj call, so the copy is done  
in small enough chunks.


--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing deadlock

2009-10-23 Thread Brian Quinlan


On 24 Oct 2009, at 14:10, Gabriel Genellina wrote:

En Thu, 22 Oct 2009 23:18:32 -0300, Brian Quinlan  
br...@sweetapp.com escribió:


I don't like a few things in the code:


def _do(i):
print('Run:', i)
q = multiprocessing.Queue()
for j in range(30):
q.put(i*30+j)
processes = _make_some_processes(q)

while not q.empty():
pass


I'd use time.sleep(0.1) or something instead of this busy wait, but  
see below.


This isn't my actual code, it is a simplification of my code designed  
to minimally demonstrate a possible bug in multiprocessing.





#The deadlock only occurs on Mac OS X and only when these lines
#are commented out:
#for p in processes:
#p.join()


I don't know how multiprocessing deals with it, but if you don't  
join() a process it may become a zombie, so it's probably better to  
always join them. In that case I'd just remove the wait for  
q.empty() completely.


I'm actually not looking for workarounds. I want to know if this is a  
multiprocessing bug or if I am misunderstanding the multiprocessing  
docs somehow and my demonstrated usage pattern is somehow incorrect.


Cheers,
Brian




for i in range(100):
_do(i)


Those lines should be guarded with: if __name__ == '__main__':

I don't know if fixing those things will fix your problem, but at  
least the code will look neater...


--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


--
http://mail.python.org/mailman/listinfo/python-list


[issue7188] optionxform documentation confusing

2009-10-23 Thread Martin v . Löwis

New submission from Martin v. Löwis mar...@v.loewis.de:

In
http://stackoverflow.com/questions/1611799/preserve-case-in-configparser,
somebody is confused about adjusting ConfigParser.optionxform. The
documentation is indeed confusing, in particular by claiming that you
should set it to "str()". Even if you get what is meant by "setting"
(i.e. not calling), it's still nonsensical to suggest that it should be
set to an empty string.

--
assignee: georg.brandl
components: Documentation
messages: 94375
nosy: georg.brandl, loewis
severity: normal
status: open
title: optionxform documentation confusing

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7188
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7006] The replacement suggested for callable(x) in py3k is not equivalent

2009-10-23 Thread Georg Brandl

Georg Brandl ge...@python.org added the comment:

Not really, that was the last thing to get this issue closed.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7006
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7188] optionxform documentation confusing

2009-10-23 Thread Georg Brandl

Georg Brandl ge...@python.org added the comment:

Thanks, fixed in r75623.

--
resolution:  -> fixed
status: open -> closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7188
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6670] Printing the 'The Python Tutorial'

2009-10-23 Thread brimac

brimac bri...@bcs.org added the comment:

Georg, Ezio
Many Thanks
brimac

2009/10/22 Georg Brandl rep...@bugs.python.org


 Georg Brandl ge...@python.org added the comment:

 OK, fixed in Sphinx, and in the Python stylesheet in r75617.

 --
 resolution:  -> fixed
 status: open -> closed

 ___
 Python tracker rep...@bugs.python.org
 http://bugs.python.org/issue6670
 ___


--
Added file: http://bugs.python.org/file15186/unnamed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6670
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7189] struct.calcsize returns strange size

2009-10-23 Thread Igor Mikushkin

New submission from Igor Mikushkin igor.mikush...@gmail.com:

I think in the second case the struct size should be 53.

In [31]: struct.calcsize('ihhi35scc')
Out[31]: 49

In [32]: struct.calcsize('ihhi35scci')
Out[32]: 56
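The extra bytes come from C struct alignment padding, not a bug: in native mode (the default) the trailing `i` must start on a 4-byte boundary, so 3 padding bytes are inserted after the 49 data bytes before the final 4-byte int. With the `=` prefix (standard size, no alignment) the result is the expected 53. A quick check:

```python
import struct

# Native mode (the default '@') inserts alignment padding, so the
# result is platform-dependent (typically 56 on x86):
print(struct.calcsize('ihhi35scci'))
# Standard mode ('=' prefix) packs with no padding at all:
print(struct.calcsize('=ihhi35scci'))  # 4+2+2+4+35+1+1+4 = 53
```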

--
components: Library (Lib)
messages: 94379
nosy: igor.mikushkin
severity: normal
status: open
title: struct.calcsize returns strange size
type: behavior
versions: Python 2.6

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7189
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com


