python-graph-1.6.0 released

2009-06-07 Thread Pedro Matiello
python-graph 
release 1.6.0
http://code.google.com/p/python-graph/ 
 

python-graph is a library for working with graphs in Python. 

This software provides a suitable data structure for representing 
graphs and a whole set of important algorithms. 

The code is appropriately documented and the API reference is generated 
automatically by epydoc. 

Provided features and algorithms: 

 * Support for directed, undirected, weighted and non-weighted graphs
 * Support for hypergraphs
 * Canonical operations
 * XML import and export
 * DOT-Language output (for usage with Graphviz)
 * Random graph generation

 * Accessibility (transitive closure)
 * Breadth-first search
 * Critical path algorithm 
 * Cut-vertex and cut-edge identification 
 * Depth-first search
 * Heuristic search (A* algorithm)
 * Identification of connected components
 * Minimum spanning tree (Prim's algorithm)
 * Mutual-accessibility (strongly connected components)
 * Shortest path search (Dijkstra's algorithm)
 * Topological sorting
 * Transitive edge identification 

The 1.6.x series is our refactoring series. Over the next releases,
we'll change the API to better prepare the codebase for new
features. If you prefer a gradual, guided transition, upgrade your code
at every release in the 1.6.x series. On the other hand, if you'd
rather fix everything at once, you can wait for 1.7.0.

Download: http://code.google.com/p/python-graph/downloads/list
(tar.bz2, zip and egg packages are available.)

Installing:
If you have easy_install on your system, you can simply run: 
# easy_install python-graph


-- 
http://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations.html


ANN: Webware for Python 1.0.2 released

2009-06-07 Thread Christoph Zwerschke

Webware for Python 1.0.2 has been released.

This is the second bugfix release for Webware for Python release 1.0,
mainly fixing some problems and shortcomings of the PSP plug-in.
See the WebKit and PSP release notes for details.

Webware for Python is a suite of Python packages and tools for
developing object-oriented, web-based applications. The suite uses well
known design patterns and includes a fast Application Server, Servlets,
Python Server Pages (PSP), Object-Relational Mapping, Task Scheduling,
Session Management, and many other features. Webware is very modular and
easily extended.

Webware for Python is well proven and platform-independent. It is
compatible with multiple web servers, database servers and operating
systems.

Check out the Webware for Python home page at http://www.w4py.org

--
http://mail.python.org/mailman/listinfo/python-announce-list

   Support the Python Software Foundation:
   http://www.python.org/psf/donations.html


progress.py 1.0.0 - Track and display progress, providing estimated completion time.

2009-06-07 Thread Tim Newsome
I couldn't find any module that would just let me add progress to all the
random little scripts I write. So I wrote one. You can download and read
about it at http://www.casualhacker.net/blog/progress_py/
Below is the module's description. I welcome any comments and suggestions.

Tim

DESCRIPTION
This module provides two classes that make it simple to add a progress
display to any application. Programs with more complex GUIs might still
want to use it for the estimates of time remaining it provides.

Use the ProgressDisplay class if your work is done in a simple loop:

from progress import *
for i in ProgressDisplay(range(500)):
    do_work()

If do_work() doesn't send any output to stdout you can use the following,
which will cause a single status line to be printed and updated:

from progress import *
for i in ProgressDisplay(range(500), display=SINGLE_LINE):
    do_work()

For more complex applications, you will probably need to manage a
Progress object yourself:

from progress import *
progress = Progress(task_size)
for part in task:
    do_work()
    progress.increment()
    progress.print_status_line()

If you have a more sophisticated GUI going on, you can still use
Progress objects to give you a good estimate of time remaining:

from progress import *
progress = Progress(task_size)
for part in task:
    do_work()
    progress.increment()
    update_gui(progress.percentage(), progress.time_remaining())
-- 
Tim Newsome nuisance.at.casualhacker.net http://www.casualhacker.net/
-- 
http://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations.html


Re: #! to two different pythons?

2009-06-07 Thread Duncan Booth
m...@pixar.com wrote:

> Benjamin Peterson benja...@python.org wrote:
>> #!/usr/bin/env python
>
> But how can I handle this with two differently named pythons?
>
> #!/usr/anim/menv/bin/pypix
> #!/Users/mh/py/bin/python
>
> Thanks!
> Mark

If you install with a setup.py that uses distutils, then one of the
features is that any script whose first line begins with '#!' and contains
'python' has that first line replaced with the path of the current
interpreter.

So in this example if you install your code with:

   /usr/anim/menv/bin/pypix setup.py install

#!/usr/bin/env python would be replaced by your desired 
#!/usr/anim/menv/bin/pypix.
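
For example, a minimal sketch of such a setup.py (the 'mytool' name and
bin/mytool path are invented for illustration):

from distutils.core import setup

setup(
    name='mytool',
    version='1.0',
    # each listed script whose first line is '#!...python...' gets that
    # line rewritten to point at the interpreter running this setup.py
    scripts=['bin/mytool'],
)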

Of course for your current purposes using setup.py might be overkill.
-- 
http://mail.python.org/mailman/listinfo/python-list


Properties for several keywords

2009-06-07 Thread Kless
I have to write properties for several keywords with the same code; only
the name of each property changes:

-
@property
def foo(self):
    return self._foo

@foo.setter
def foo(self, txt):
    self._foo = self._any_function(txt)

# 

@property
def bar(self):
    return self._bar

@bar.setter
def bar(self, txt):
    self._bar = self._any_function(txt)
-


Is it possible to simplify this?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Properties for several keywords

2009-06-07 Thread Kless
On 7 Jun, 11:45, Kless jonas@googlemail.com wrote:
> I have to write properties for several keywords with the same code; only
> the name of each property changes:
>
> -
> @property
> def foo(self):
>     return self._foo
>
> @foo.setter
> def foo(self, txt):
>     self._foo = self._any_function(txt)
>
> # 
>
> @property
> def bar(self):
>     return self._bar
>
> @bar.setter
> def bar(self, txt):
>     self._bar = self._any_function(txt)
> -
>
> Is it possible to simplify this?

Sorry for the indentation of the decorators in my original post.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Making the case for repeat

2009-06-07 Thread bearophileHUGS
pataphor:
> The problem is posting *this*
> function would kill my earlier repeat for sure. And it already had a
> problem with parameters < 0 (Hint: that last bug has now become a
> feature in the unpostable repeat implementation)

Be bold, kill your previous ideas, and post the Unpostable :-)
Despite people here complaining a bit, it can't hurt to post some
lines of safe code here :-)
But I agree with Steven D'Aprano, adding some explaining words to your
proposals helps.

Bye,
bearophile
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is reduce() foldl() or foldr()?

2009-06-07 Thread Tim Northover
Steven D'Aprano st...@remove-this-cybersource.com.au writes:

> Calling all functional programming fans... is Python's built-in reduce()
> a left-fold or a right-fold?

I get:

>>> reduce(lambda a, b: a/b, [1.0, 2.0, 3.0])
0.16666666666666666

which looks like a left fold to me.

Tim.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is reduce() foldl() or foldr()?

2009-06-07 Thread Peter Otten
Steven D'Aprano wrote:

> Calling all functional programming fans... is Python's built-in reduce()
> a left-fold or a right-fold?
>
> Wikipedia says it's a left-fold:
>
> http://en.wikipedia.org/wiki/Fold_(higher-order_function)

Wikipedia is correct:

>>> from __future__ import division
>>> (1/(2/(3/(4/5)))) # right
1.875
>>> (((1/2)/3)/4)/5 # left
0.0083333333333333332
>>> reduce(lambda x, y: x/y, range(1, 6))
0.0083333333333333332

> but other people say it's a right-fold, e.g.:
>
> ... there is a `foldr` in Haskell that just works like `reduce()`
> http://mail.python.org/pipermail/python-list/2007-November/638647.html
>
> and
>
> Note that Python already has a variation of foldr, called reduce.
> http://blog.sigfpe.com/2008/02/purely-functional-recursive-types-in.html

The explicit foldr() function given in this blog agrees with Wikipedia's 
definition:

>>> def foldr(a, b, l):
...     if l == []:
...         return b
...     else:
...         return a(l[0], foldr(a, b, l[1:]))
...
>>> foldr(lambda x, y: x/y, 5, range(1, 5))
1.875
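
For comparison, a minimal iterative foldl in the same style; this is
essentially the loop that reduce() performs internally:

>>> def foldl(a, b, l):
...     for x in l:
...         b = a(b, x)
...     return b
...
>>> foldl(lambda x, y: x/y, 1.0, range(1, 5))
0.041666666666666664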
 
Peter

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is reduce() foldl() or foldr()?

2009-06-07 Thread Scott David Daniels

Steven D'Aprano wrote:
> Calling all functional programming fans... is Python's built-in reduce()
> a left-fold or a right-fold? ...
> So which is correct? Or is it that different people have different
> definitions of foldl() and foldr()?


Just test.  Floating point addition is not associative, so:

>>> a = reduce(float.__add__, (4095 * 2. ** n for n in range(100)))
>>> b = reduce(float.__add__, (4095 * 2. ** n for n in reversed(range(100))))
>>> a - b
5.7646075230342349e+17

So, since a > b, it must be foldl (the first addition happens to the
first of the list).

Foldl is the eager-beaver's dream; foldr is the procrastinator's dream.

--Scott David Daniels
scott.dani...@acm.org
--
http://mail.python.org/mailman/listinfo/python-list


Keeping console window open

2009-06-07 Thread Fencer
Hello, I need to write a simple utility program that will be used under 
Windows. I want to write the utility in python and it will be run by 
double-clicking the .py-file.


I put a raw_input('Press enter to exit') at the end so the console window 
wouldn't just disappear when the program is finished.


Anyway, I wrote a few lines of code and when I first tried to run it by 
double-clicking the .py-file the console window still disappeared right 
away. So, in order to see what was happening, I ran it from a shell and 
it turned out to be a missing import. My question is how can I trap 
errors encountered by the interpreter (if that is the right way to put 
it) in order to keep the console window open so one has a chance to see 
the error message?


- Fencer
--
http://mail.python.org/mailman/listinfo/python-list


how to transfer my utf8 code saved in a file to gbk code

2009-06-07 Thread higer
My file contains such strings :
\xe6\x97\xa5\xe6\x9c\x9f\xef\xbc\x9a

I want to read the content of this file and transfer it to the
corresponding gbk code,a kind of Chinese character encode style.
Every time I try to transfer it, it outputs the same thing no
matter which method was used.
It seems that when Python reads it, Python takes '\' as a
common char and this string at last will be represented as
\\xe6\\x97\\xa5\\xe6\\x9c\\x9f\\xef\\xbc\\x9a , then the \ can be
'correctly' output, but that's not what I want to get.

Anyone can help me?


Thanks in advance.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Keeping console window open

2009-06-07 Thread Tomasz Zieliński
On 7 Cze, 14:49, Fencer no.i.d...@want.mail.from.spammers.com wrote:
> My question is how can I trap
> errors encountered by the interpreter (if that is the right way to put
> it) in order to keep the console window open so one has a chance to see
> the error message?


Interpreter errors are the same beasts as exceptions,
so you can try:...except: them.

--
Tomasz Zieliński
http://pyconsultant.eu
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Keeping console window open

2009-06-07 Thread Can Xue
2009/6/7 Fencer no.i.d...@want.mail.from.spammers.com


> Anyway, I wrote a few lines of code and when I first tried to run it by
> double-clicking the .py-file the console window still disappeared right
> away. So, in order to see what was happening, I ran it from a shell and it
> turned out to be a missing import. My question is how can I trap errors
> encountered by the interpreter (if that is the right way to put it) in order
> to keep the console window open so one has a chance to see the error
> message?


I don't think this (forcing a console window to stay open only for
debugging purposes) is a good idea. If you just write some lines to learn
something, you may run your script in a console window or in an IDE. If you
are writing a big project, you'd better catch those exceptions and store
the logs in files.

However, if you still need to force a console window to stay open after the
script ends, you may use the atexit module.
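
A minimal sketch of that idea, assuming Python 2.x (raw_input):

import atexit

# Registered handlers run when the interpreter exits, so the window
# stays open until the user presses Enter -- even on sys.exit().
atexit.register(lambda: raw_input('Press Enter to close...'))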

-- 
XUE Can
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pylint naming conventions?

2009-06-07 Thread Esmail

Ben Finney wrote:

Esmail ebo...@hotmail.com writes:


I am confused by pylint's naming conventions, I don't think they are in
tune with Python's style recommendations (PEP 8?)

Anyone else think this?


It's hard to know, without examples. Can you give some output of pylint
that you think doesn't agree with PEP 8?


Sure, I will next time I have a nice self-contained example. Perhaps not that
many people are using pylint? I was expecting a bunch of messages either
contradicting my observation or agreeing with it :-) .. but perhaps this
indicates that there's no issue.

I'll try to come up with a nice short code example in the next few days
to demonstrate what I think the problem is and post it, thanks for the
suggestion.

Esmail


--
http://mail.python.org/mailman/listinfo/python-list


Re: Keeping console window open

2009-06-07 Thread Scott David Daniels

Tomasz Zieliński wrote:

On 7 Cze, 14:49, Fencer no.i.d...@want.mail.from.spammers.com wrote:

My question is how can I trap
errors encountered by the interpreter (if that is the right way to put
it) in order to keep the console window open so one has a chance to see
the error message?


Interpreter errors are same beasts as exceptions,
so you can try:...except: them.


To be a trifle more explicit, turn:

  ...
  if __name__ == '__main__':
      main()

into:

  ...
  if __name__ == '__main__':
      try:
          main()
      except Exception, why:
          print 'Failed:', why
          import sys, traceback
          traceback.print_tb(sys.exc_info()[2])
          raw_input('Leaving: ')

Note that building your script like this also allows you to
open the interpreter, and type:

    import mymodule
    mymodule.main()

in order to examine how it runs.

--Scott David Daniels
scott.dani...@acm.org
--
http://mail.python.org/mailman/listinfo/python-list


The pysync library - Looking for the code, but all download links are broken

2009-06-07 Thread jpersson
Hi,

From the description the library pysync seems to be a perfect match
for some of the things I would like to accomplish in my own project.
The problem is that all the download links seems to be broken, so I
cannot download the code.

http://freshmeat.net/projects/pysync/

I've also tried to mail the author, but the mail bounces. Is there
anyone out there who happens to have a tarball of this library and who
could send me a copy I would be very grateful.

Cheers
//Jan Persson
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Keeping console window open

2009-06-07 Thread Fencer

Scott David Daniels wrote:

To be a trifle more explicit, turn:

  ...
  if __name__ == '__main__':
      main()

into:

  ...
  if __name__ == '__main__':
      try:
          main()
      except Exception, why:
          print 'Failed:', why
          import sys, traceback
          traceback.print_tb(sys.exc_info()[2])
          raw_input('Leaving: ')

Note that building your script like this also allows you to
open the interpreter, and type:

    import mymodule
    mymodule.main()

in order to examine how it runs.


Thanks alot, this was exactly what I was looking for!



--Scott David Daniels
scott.dani...@acm.org

--
http://mail.python.org/mailman/listinfo/python-list


how to transfer my utf8 code saved in a file to gbk code

2009-06-07 Thread R. David Murray
higer higerinbeij...@gmail.com wrote:
> My file contains such strings :
> \xe6\x97\xa5\xe6\x9c\x9f\xef\xbc\x9a

If those bytes are what is in the file (and it sounds like they are),
then the data in your file is not in UTF8 encoding, it is in ASCII
encoded as hexadecimal escape codes.

> I want to read the content of this file and transfer it to the
> corresponding gbk code, a kind of Chinese character encode style.

You'll have to convert it from hex-escape into UTF8 first, then.

Perhaps better would be to write the original input files in UTF8,
since it sounds like that is what you were intending to do.
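
A minimal Python 2.x sketch of that two-step conversion (the filename
is hypothetical):

raw = open('your_file.txt', 'rb').read()
utf8_bytes = raw.decode('string-escape')  # undo the literal \xNN escapes
text = utf8_bytes.decode('utf8')          # decode the real UTF-8 bytes
gbk_data = text.encode('gbk')             # re-encode as GBK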

--
R. David Murray http://www.bitdance.com
IT Consulting / System Administration / Python Programming

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Jeff M.
On Jun 7, 1:56 am, Paul Rubin http://phr...@nospam.invalid wrote:
> Jeff M. mass...@gmail.com writes:
>>> Even the lightest weight
>>> user space (green) threads need a few hundred instructions, minimum,
>>> to amortize the cost of context switching
>> There's always a context switch. It's just whether or not you are
>> switching in/out a virtual stack and registers for the context or the
>> hardware stack/registers.
>
> I don't see the hundreds of instructions in that case.
>
> http://shootout.alioth.debian.org/u32q/benchmark.php?test=threadring;...
>
> shows GHC doing 50 million lightweight thread switches in 8.47
> seconds, passing a token around a thread ring.  Almost all of that is
> probably spent acquiring and releasing the token's lock as the token
> is passed from one thread to another.  That simply doesn't leave time
> for hundreds of instructions per switch.

Who said there has to be? Sample code below (just to get the point
across):

struct context {
    vir_reg pc, sp, bp, ... ;   // virtual registers for this green thread
    object* stack;

    // ...

    context* next;
};

struct vm {
    context* active_context;
};

void switch_context(vm* v)
{
    // maybe GC v->active_context before switching

    v->active_context = v->active_context->next;
}

Also, there aren't hundreds of instructions with multiplexing,
either. It's all done in hardware. Take a look at the disassembly of
any application that uses native threads on a platform that
supports preemption. You won't see any instructions anywhere in the
program that perform a context switch. If you did, that would be
absolutely horrible. Imagine if the compiler did something like this:

while(1)
{
  // whatever
}

do_context_switch_here();

That would suck. ;-)

That's not to imply that there isn't a cost; there's always a cost.
The example above just goes to show that for green threads, the cost
[of the switch] can be reduced down to a single pointer assignment.

Jeff M.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is reduce() foldl() or foldr()?

2009-06-07 Thread Piet van Oostrum
 Tim Northover t.p.northo...@sms.ed.ac.uk (TN) wrote:

TN Steven D'Aprano st...@remove-this-cybersource.com.au writes:
 Calling all functional programming fans... is Python's built-in reduce() 
 a left-fold or a right-fold?

TN I get:

 reduce(lambda a, b: a/b, [1.0, 2.0, 3.0])
TN 0.16666666666666666

TN which looks like a left fold to me.

Yes, see the Haskell result:

Prelude> foldl (/) 1.0 [1.0, 2.0, 3.0]
0.16666666666666666
Prelude> foldr (/) 1.0 [1.0, 2.0, 3.0]
1.5

-- 
Piet van Oostrum p...@cs.uu.nl
URL: http://pietvanoostrum.com [PGP 8DAE142BE17999C4]
Private email: p...@vanoostrum.org
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Jon Harrop
Roedy Green wrote:
> On Fri, 5 Jun 2009 18:15:00 +0000 (UTC), Kaz Kylheku
> kkylh...@gmail.com wrote, quoted or indirectly quoted someone who
> said :
>> Even for problems where it appears trivial, there can be hidden
>> issues, like false cache coherency communication where no actual
>> sharing is taking place. Or locks that appear to have low contention and
>> negligible performance impact on ``only'' 8 processors suddenly turn into
>> bottlenecks. Then there is NUMA. A given address in memory may be
>> RAM attached to the processor accessing it, or to another processor,
>> with very different access costs.
>
> Could what you are saying be summed up by saying, "The more threads
> you have the more important it is to keep your threads independent,
> sharing as little data as possible."

I see no problem with mutable shared state.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Jon Harrop
George Neuner wrote:
> On Fri, 05 Jun 2009 16:26:37 -0700, Roedy Green
> see_webs...@mindprod.com.invalid wrote:
>> On Fri, 5 Jun 2009 18:15:00 +0000 (UTC), Kaz Kylheku
>> kkylh...@gmail.com wrote, quoted or indirectly quoted someone who
>> said :
>>> Even for problems where it appears trivial, there can be hidden
>>> issues, like false cache coherency communication where no actual
>>> sharing is taking place. Or locks that appear to have low contention and
>>> negligible performance impact on ``only'' 8 processors suddenly turn into
>>> bottlenecks. Then there is NUMA. A given address in memory may be
>>> RAM attached to the processor accessing it, or to another processor,
>>> with very different access costs.
>>
>> Could what you are saying be summed up by saying, "The more threads
>> you have the more important it is to keep your threads independent,
>> sharing as little data as possible."
 
 And therein lies the problem of leveraging many cores.  There is a lot
 of potential parallelism in programs (even in Java :) that is lost
 because it is too fine a grain for threads.

That will always be true so it conveys no useful information to the
practitioner.

 Even the lightest weight 
 user space (green) threads need a few hundred instructions, minimum,
 to amortize the cost of context switching.

Work items in Cilk are much faster than that.

 Add to that the fact that programmers have shown themselves, on
 average, to be remarkably bad at figuring out what _should_ be done in
 parallel - as opposed to what _can_ be done - and you've got a clear
 indicator that threads, as we know them, are not scalable except under
 a limited set of conditions.

Parallelism is inherently not scalable. I see no merit in speculating about
the ramifications of average programmers alleged inabilities.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Lew

Scott David Daniels wrote:

> the nub of the problem is not on the benchmarks.  There is something
> to be said for the good old days when you looked up the instruction
> timings that you used in a little document for your machine, and could
> know the cost of any loop.  We are faster now, but part of the cost of
> that speed is that timing is a black art.


Those good old days never existed.  Those manuals never accounted for things 
that affected timing even then, like memory latency or refresh time.  SRAM 
cache made things worse, since the published timings never mentioned 
cache-miss delays.  Though memory cache might seem a recent innovation, it's 
been around a while.  It would be challenging to find any published timing 
since the commercialization of computers that would actually tell the cost of 
any loop.


Things got worse when chips like the '86 family acquired multiple instructions 
for doing loops, still worse when pre-fetch pipelines became deeper and wider, 
absolutely Dark Art due to multi-level memory caches becoming universal, and 
throw-your-hands-up-and-leave-for-the-corner-bar with multiprocessor NUMA 
systems.  OSes and high-level languages complicate the matter - you never know 
how much time slice you'll get or how your source got compiled or optimized by 
run-time.


So the good old days are a matter of degree and self-deception - it was easier 
to fool ourselves then that we could at least guess timings proportionately if 
not absolutely, but things definitely get more unpredictable over evolution.


--
Lew
--
http://mail.python.org/mailman/listinfo/python-list


Re: how to transfer my utf8 code saved in a file to gbk code

2009-06-07 Thread John Machin
On Jun 7, 10:55 pm, higer higerinbeij...@gmail.com wrote:
> My file contains such strings :
> \xe6\x97\xa5\xe6\x9c\x9f\xef\xbc\x9a

Are you sure? Does that occupy 9 bytes in your file or 36 bytes?


> I want to read the content of this file and transfer it to the
> corresponding gbk code, a kind of Chinese character encode style.
> Every time I try to transfer it, it outputs the same thing no
> matter which method was used.
> It seems that when Python reads it, Python takes '\' as a
> common char and this string at last will be represented as
> \\xe6\\x97\\xa5\\xe6\\x9c\\x9f\\xef\\xbc\\x9a , then the \ can be
> 'correctly' output, but that's not what I want to get.
>
> Anyone can help me?


try this:

utf8_data = your_data.decode('string-escape')
unicode_data = utf8_data.decode('utf8')
# unicode derived from your sample looks like this: 日期:
# is that what you expected?
gbk_data = unicode_data.encode('gbk')

If that doesn't work, do three things:
(1) give us some unambiguous hard evidence about the contents of your
data:
e.g. # assuming Python 2.x
your_data = open('your_file.txt', 'rb').read(36)
print repr(your_data)
print len(your_data)
print your_data.count('\\')
print your_data.count('x')

(2) show us the source of the script that you used
(3) Tell us what doesn't work means in this case

Cheers,
John


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Arved Sandstrom

Jon Harrop wrote:

> Roedy Green wrote:
>> On Fri, 5 Jun 2009 18:15:00 +0000 (UTC), Kaz Kylheku
>> kkylh...@gmail.com wrote, quoted or indirectly quoted someone who
>> said :
>>> Even for problems where it appears trivial, there can be hidden
>>> issues, like false cache coherency communication where no actual
>>> sharing is taking place. Or locks that appear to have low contention and
>>> negligible performance impact on ``only'' 8 processors suddenly turn into
>>> bottlenecks. Then there is NUMA. A given address in memory may be
>>> RAM attached to the processor accessing it, or to another processor,
>>> with very different access costs.
>>
>> Could what you are saying be summed up by saying, "The more threads
>> you have the more important it is to keep your threads independent,
>> sharing as little data as possible."
>
> I see no problem with mutable shared state.


In which case, Jon, you're in a small minority.

AHS
--
http://mail.python.org/mailman/listinfo/python-list


Re: unladen swallow: python and llvm

2009-06-07 Thread Neuruss
On 5 jun, 06:29, Nick Craig-Wood n...@craig-wood.com wrote:
 Luis M  González luis...@gmail.com wrote:

   I am very excited by this project (as well as by pypy) and I read all
   their plan, which looks quite practical and impressive.
   But I must confess that I can't understand why LLVM is so great for
   python and why it will make a difference.

 CPython uses a C compiler to compile the python code (written in C)
 into native machine code.

 unladen-swallow uses an llvm-specific C compiler to compile the CPython
 code (written in C) into LLVM opcodes.

 The LLVM virtual machine executes those LLVM opcodes.  The LLVM
 virtual machine also has a JIT (just in time compiler) which converts
 the LLVM op-codes into native machine code.

 So both CPython and unladen-swallow compile C code into native machine
 code in different ways.

 So why use LLVM?  This enables unladen swallow to modify the python
 virtual machine to target LLVM instead of the python vm opcodes.
 These can then be run using the LLVM JIT as native machine code and
 hence run all python code much faster.

 The unladen swallow team have a lot more ideas for optimisations, but
 this seems to be the main one.

 It is an interesting idea for a number of reasons, the main one as far
 as I'm concerned is that it is more of a port of CPython to a new
 architecture than a complete re-invention of python (like PyPy /
 IronPython / jython) so stands a chance of being merged back into
 CPython.

 --
 Nick Craig-Wood n...@craig-wood.com --http://www.craig-wood.com/nick

Thanks Nick,
ok, let me see if I got it:
The Python vm is written in C, and generates its own bytecodes which
in turn get translated to machine code (one at a time).
Unladen Swallow aims to replace this vm with one compiled with the llvm
compiler, which I guess will generate different bytecodes and, in
addition, supplies a jit for free. Is that correct?

It's confusing to think about a compiler which is also a virtual
machine, which also has a jit...
Another thing that I don't understand is about the upfront
compilation.
Actually, the project plan doesn't mention it, but I read a comment on
pypy's blog about a pycon presentation, where they said it would be
upfront compilation (?). What does it mean?

I guess it has nothing to do with the v8 strategy, because unladen
swallow will be a virtual machine, while v8 compiles everything to
machine code on the first run. But I still wonder what this mean and
how this is different.

By the way, I already posted a couple of questions on unladen's site.
But now I see the discussion is way too low-level for me, and I
wouldn't want to interrupt with my silly basic questions...

Luis
-- 
http://mail.python.org/mailman/listinfo/python-list


pyc-files contains absolute paths, is this a bug ?

2009-06-07 Thread Stef Mientki

hello,

AFAIK I read that pyc files can be transferred to other systems.
I finally got a windows executable working through py2exe,
but still have some troubles, moving the directory around.

I use Python 2.5.2.
I use py2exe to make a distro
I can unpack the distro, on a clean computer, anywhere where I like, and 
it runs fine.


Now when I've run it once,
I move the subdirectory to another location,
and it doesn't run.

Looking with a hex editor into some pyc-files,
I see absolute paths to the old directory.

Is this normal,
or am I doing something completely wrong ?

thanks,
Stef Mientki



--
http://mail.python.org/mailman/listinfo/python-list


Python preprocessor

2009-06-07 Thread Tuomas Vesterinen
I am developing a Python application as a Python2.x and Python3.0 
version. A common code base would make the work easier. So I thought to 
try a preprocessor. GNU cpp handles this kind of code correctly:


test_cpp.py
#ifdef python2
print u'foo', u'bar'
#endif
#ifdef python3
print('foo', 'bar')
#endif
end code

results:
$ cpp -E -Dpython2 test_cpp.py
...
print u'foo', u'bar'

Any other suggestions?
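
For this particular print example there is also a preprocessor-free
option on Python 2.6+, sketched here, using the __future__ machinery:

from __future__ import print_function

print('foo', 'bar')   # works unchanged on 2.6+ and 3.x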

Tuomas Vesterinen

--
http://mail.python.org/mailman/listinfo/python-list


Re: Iterating Over Dictionary From Arbitrary Location

2009-06-07 Thread Aahz
In article mailman.1241.1244301490.8015.python-l...@python.org,
akindo  akind...@hotmail.com wrote:

So, it seems I want the best of both worlds: specific indexing using  
my own IDs/keys (not just by list element location), sorting and the  
ability to start iterating from a specific location. I am trying to  
prevent having to scan through a list from the beginning to find the  
desired ID, and then return all elements after that. Is there another  
data structure or design pattern which will do what I want? Thanks a  
lot for any help! :)

One option is to maintain both a dict and a list; it doesn't cost much
memory because the individual elements don't get copied.  See various
ordered dictionary implementations for ideas on coding this.
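
A minimal sketch of that dict-plus-list idea (class and method names
invented for illustration):

class IndexedList(object):
    """Keep insertion order in a list and key -> position in a dict."""
    def __init__(self):
        self._items = []   # (key, value) pairs in insertion order
        self._index = {}   # key -> position in self._items

    def add(self, key, value):
        self._index[key] = len(self._items)
        self._items.append((key, value))

    def iter_from(self, key):
        # start at `key` in O(1) instead of scanning from the beginning
        for pair in self._items[self._index[key]:]:
            yield pair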
-- 
Aahz (a...@pythoncraft.com)   * http://www.pythoncraft.com/

If you don't know what your program is supposed to do, you'd better not
start writing it.  --Dijkstra
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python preprocessor

2009-06-07 Thread Peter Otten
Tuomas Vesterinen wrote:

 I am developing a Python application as a Python2.x and Python3.0
 version. A common code base would make the work easier. So I thought to
 try a preprocessor. GNU cpp handles this kind of code correctly:

 Any other suggestions?

http://docs.python.org/dev/3.1/library/2to3.html


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pyc-files contains absolute paths, is this a bug ?

2009-06-07 Thread Steven D'Aprano
On Sun, 07 Jun 2009 18:16:26 +0200, Stef Mientki wrote:

 hello,
 
 AFAIK I read that pyc files can be transferred to other systems. I
 finally got a windows executable working through py2exe, but still have
 some troubles, moving the directory around.

Sounds like a py2exe problem, not a Python problem. Perhaps you should 
ask them?

https://lists.sourceforge.net/lists/listinfo/py2exe-users


 I use Python 2.5.2.
 I use py2exe to make a distro
 I can unpack the distro, on a clean computer, anywhere where I like, and
 it runs fine.
 
 Now when I've run it once,
 I move the subdirectory to another location, and it doesn't run.

Define "doesn't run".

You mean the exe file doesn't launch at all? Does Windows display an 
error message? 

Or perhaps it launches, then immediately exits? Launches, then crashes? 
Does it show up in the process list at all? Or something else?



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is reduce() foldl() or foldr()?

2009-06-07 Thread Paul Rubin
Steven D'Aprano st...@remove-this-cybersource.com.au writes:
 Calling all functional programming fans... is Python's built-in reduce() 
 a left-fold or a right-fold?

It's a left fold. 

 but other people say it's a right-fold, e.g.:
 ... there is a `foldr` in Haskell that just works like `reduce()`

That is correct in the sense that a coding situation where you'd use
reduce in Python would often lead you to use foldr (with its different
semantics) in Haskell.  This is because of Haskell's lazy evaluation.
Example: suppose you have a list of lists, like xss =
[[1,2],[3,4,5],[6,7]] and you want to concatenate them all.  (++) is
Haskell's list concatenation function, like Python uses + for list
concatenation.  So you could say

   ys = foldl (++) [] xss 

but if I have it right, that would have to traverse the entire input
list before it gives you any of the answer, which can be expensive for
a long list, or fail totally for an infinite list.  foldr on the other
hand can generate the result lazily, in sync with the way the caller
consumes the elements, like writing a generator in Haskell.  The
tutorial

   http://learnyouahaskell.com/higher-order-functions#folds

explains this a bit more.  You might also like the Real World Haskell
book:

   http://book.realworldhaskell.org
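
A rough Python 2.x analogue of the difference, as a sketch (the helper
name is invented):

xss = [[1, 2], [3, 4, 5], [6, 7]]

# Eager: reduce builds the entire concatenation before returning,
# like the foldl version above.
eager = reduce(lambda acc, xs: acc + xs, xss, [])

# Lazy: a generator yields elements on demand, in the spirit of
# Haskell's foldr (++) [] xss, and works even for very long inputs.
def lazy_concat(seqs):
    for seq in seqs:
        for x in seq:
            yield x

for x in lazy_concat(xss):
    print x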
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unladen swallow: python and llvm

2009-06-07 Thread MRAB

Neuruss wrote:

On 5 jun, 06:29, Nick Craig-Wood n...@craig-wood.com wrote:

Luis M  González luis...@gmail.com wrote:


 I am very excited by this project (as well as by pypy) and I read all
 their plan, which looks quite practical and impressive.
 But I must confess that I can't understand why LLVM is so great for
 python and why it will make a difference.

CPython uses a C compiler to compile the python code (written in C)
into native machine code.

unladen-swallow uses an llvm-specific C compiler to compile the CPython
code (written in C) into LLVM opcodes.

The LLVM virtual machine executes those LLVM opcodes.  The LLVM
virtual machine also has a JIT (just in time compiler) which converts
the LLVM op-codes into native machine code.

So both CPython and unladen-swallow compile C code into native machine
code in different ways.

So why use LLVM?  This enables unladen swallow to modify the python
virtual machine to target LLVM instead of the python vm opcodes.
These can then be run using the LLVM JIT as native machine code and
hence run all python code much faster.

The unladen swallow team have a lot more ideas for optimisations, but
this seems to be the main one.

It is an interesting idea for a number of reasons, the main one as far
as I'm concerned is that it is more of a port of CPython to a new
architecture than a complete re-invention of python (like PyPy /
IronPython / jython) so stands a chance of being merged back into
CPython.

--
Nick Craig-Wood n...@craig-wood.com --http://www.craig-wood.com/nick


Thanks Nick,
ok, let me see if I got it:
The Python vm is written in C, and generates its own bytecodes which
in turn get translated to machine code (one at a time).
Unladen Swallow aims to replace this vm by one compiled with the llvm
compiler, which I guess will generate different bytecodes, and in
addition, supplies a jit for free. Is that correct?


[snip]
No. CPython is written in C (hence the name). It compiles Python source
code to bytecodes. The bytecodes are instructions for a VM which is
written in C, and they are interpreted one by one. There's no
compilation to machine code.
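
You can see those bytecodes with the dis module; this is from a CPython
2.x session (exact offsets may vary by version):

>>> import dis
>>> def add(a, b):
...     return a + b
...
>>> dis.dis(add)
  2           0 LOAD_FAST                0 (a)
              3 LOAD_FAST                1 (b)
              6 BINARY_ADD
              7 RETURN_VALUE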
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python preprocessor

2009-06-07 Thread Tuomas Vesterinen

Peter Otten wrote:

Tuomas Vesterinen wrote:


I am developing a Python application as a Python2.x and Python3.0
version. A common code base would make the work easier. So I thought to
try a preprocessor. GNU cpp handles this kind of code correctly:



Any other suggestions?


http://docs.python.org/dev/3.1/library/2to3.html




I am intensively using 2to3.py. So I have two codebases: one in py2 and 
the other in py3. When changing the code I have to touch two separate 
codebases in the repository. There is no patch2to3 or patch3to2, so I 
thought that, to ensure the common functionality of both versions, it 
would be nice to bring them into a common codebase. Then I can 
update only one code base and automate builds and tests for both versions.


Tuomas Vesterinen
--
http://mail.python.org/mailman/listinfo/python-list


Re: unladen swallow: python and llvm

2009-06-07 Thread Nick Craig-Wood
Neuruss luis...@gmail.com wrote:
  ok, let me see if I got it:
  The Python vm is written in C, and generates its own bytecodes which
  in turn get translated to machine code (one at a time).
  Unladen Swallow aims to replace this vm by one compiled with the llvm
  compiler, which I guess will generate different bytecodes, and in
  addition, supplies a jit for free. Is that correct?

Pretty good I think!

  It's confussing to think about a compiler which is also a virtual
  machine, which also has a jit...

Well, the compiler is actually gcc with an LLVM opcode backend.

  Another thing that I don't understand is about the upfront
  compilation.
  Actually, the project plan doesn't mention it, but I read a comment on
  pypy's blog about a pycon presentation, where they said it would be
  upfront compilation (?). What does it mean?

I don't know, I'm afraid.

  I guess it has nothing to do with the v8 strategy, because unladen
  swallow will be a virtual machine, while v8 compiles everything to
  machine code on the first run. But I still wonder what this mean and
  how this is different.

-- 
Nick Craig-Wood n...@craig-wood.com -- http://www.craig-wood.com/nick
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Jon Harrop
Arved Sandstrom wrote:
 Jon Harrop wrote:
 I see no problem with mutable shared state.

 In which case, Jon, you're in a small minority.

No. Most programmers still care about performance and performance means
mutable state.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Joshua Cranmer

Jon Harrop wrote:

No. Most programmers still care about performance and performance means
mutable state.


[ Citation needed ].

Most programmers I've met could care less about performance.

--
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth

--
http://mail.python.org/mailman/listinfo/python-list


Re: unladen swallow: python and llvm

2009-06-07 Thread bearophileHUGS
Luis M. González:
 it seems they intend to do upfront
 compilation. How?

Unladen swallow developers want to try everything (but black magic and
necromancy) to increase the speed of Cpython. So they will try to
compile up-front if/where they can (for example most regular
expressions are known at compile time, so there's no need to compile
them at run time. I don't know if Cpython compiles them before running
time).
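
At the Python level the same idea is just hoisting re.compile out of the
hot path; a small sketch (the pattern is invented):

import re

# compiled once at import time rather than on every call
DATE_RE = re.compile(r'\d{4}-\d{2}-\d{2}')

def find_dates(text):
    return DATE_RE.findall(text)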

What I like about Unladen Swallow is that it's a very practical approach,
very different in style from ShedSkin and PyPy (and it's more
ambitious than Psyco). I also like Unladen Swallow because they are
among the few people that have the boldness to do something to increase
the performance of Python for real.
They have a set of reference benchmarks that are very real, not
synthetic at all, so for example they refuse to use Pystones and the
like.

What I don't like about Unladen Swallow is that integrating LLVM with the
CPython codebase looks unlikely to be accepted. Another thing I
don't like is the name of the project: it's not easy to pronounce for
non-English speakers.

Bye,
bearophile
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Get the class name

2009-06-07 Thread Tuomas Vesterinen

Kless wrote:

Is there any way to get the class name, to avoid having to write it
out explicitly?

---
class Foo:
    super(Foo, self)
---


* Using Py 2.6.2


>>> class Foo(object):
...     def cls(self):
...         return self.__class__
...
>>> Foo().cls()
<class '__main__.Foo'>
--
http://mail.python.org/mailman/listinfo/python-list


pypi compatible software

2009-06-07 Thread Aljosa Mohorovic
i've been searching for a recommended way to setup private pypi
repository and i've found several options:
- PloneSoftwareCenter
- http://code.google.com/p/pypione/
- http://pypi.python.org/pypi/haufe.eggserver
- http://www.chrisarndt.de/projects/eggbasket/
- http://pypi.python.org/pypi/ClueReleaseManager

so now i have a few questions:
- is the code behind pypi.python.org available, and why can't i find an
up-to-date official document or howto for a private pypi repository
(maybe it exists but i just can't find it)?
- since python is used in commercial environments i guess they
actually have private pypi repositories, where can i find docs that
define what pypi is and how to implement it?
- can you recommend one option (software providing a pypi-like service)
that can be easily installed and configured as a service running on
apache2/mod_wsgi, has an active community and docs on how to set it up?
- i did find http://www.python.org/dev/peps/pep-0381/ - is this the
spec that the above mentioned software is based upon?

any additional info or comments appreciated.

Aljosa Mohorovic
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Patricia Shanahan

Jon Harrop wrote:

Arved Sandstrom wrote:

Jon Harrop wrote:

I see no problem with mutable shared state.

In which case, Jon, you're in a small minority.


No. Most programmers still care about performance and performance means
mutable state.



I don't see why that would affect whether one thinks there are problems.

In my opinion, shared mutable state has a lot of problems. It is also
sometimes the best design for performance reasons.

Patricia
--
http://mail.python.org/mailman/listinfo/python-list


Re: The pysync library - Looking for the code, but all download links are broken

2009-06-07 Thread Søren - Peng - Pedersen
I think what you are looking for can be found at:

http://www.google.com/codesearch/p?hl=en#RncWxgazS6A/pysync-2.24/test/testdata.pyq=pysync%20package:%22http://minkirri.apana.org.au/~abo/projects/pysync/arc/pysync-2.24.tar.bz2%22%20lang:python
or http://shortlink.dk/58664133

I am not affiliated with the project in any way, and I can't find a
way to download the entire thing in one go. So if you do salvage a
working copy please let me (and the rest of the community) know.

//Søren - Peng - Pedersen
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Jon Harrop
Arved Sandstrom wrote:
 Jon Harrop wrote:
 Arved Sandstrom wrote:
 Jon Harrop wrote:
 I see no problem with mutable shared state.

 In which case, Jon, you're in a small minority.
 
 No. Most programmers still care about performance and performance means
 mutable state.
 
 Quite apart from performance and mutable state, I believe we were
 talking about mutable _shared_ state. And this is something that gets a
 _lot_ of people into trouble.

Nonsense. Scientists have been writing parallel programs for decades using
shared state extensively without whining about it. Databases are mutable
shared state but millions of database programmers solve real problems every
day without whining about it.

Use your common sense and you can write efficient parallel programs today
with little difficulty. I do.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Jon Harrop
Joshua Cranmer wrote:
 Jon Harrop wrote:
 No. Most programmers still care about performance and performance means
 mutable state.
 
 [ Citation needed ].
 
 Most programmers I've met could care less about performance.

Then they have no need for parallelism in the first place.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Properties for several keywords

2009-06-07 Thread Kless
On 7 Jun, 11:45, Kless jonas@googlemail.com wrote:
> I have to write properties for several keywords with the same code; only
> the name of each property changes:
>
> -
> @property
> def foo(self):
>     return self._foo
>
> @foo.setter
> def foo(self, txt):
>     self._foo = self._any_function(txt)
>
> # 
>
> @property
> def bar(self):
>     return self._bar
>
> @bar.setter
> def bar(self, txt):
>     self._bar = self._any_function(txt)
> -
>
> Is it possible to simplify this?

Please, is there any solution for this problem?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to get local copy of docs?

2009-06-07 Thread kj
In eff6307d-e3d5-4b26-ac7c-a658f1b96...@z7g2000vbh.googlegroups.com TonyM 
foss...@gmail.com writes:

http://docs.python.org/download.html

Perfect.  Thanks!

kynn
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unladen swallow: python and llvm

2009-06-07 Thread Paul Rubin
bearophileh...@lycos.com writes:
 What I like of Unladen swallow is that it's a very practical approach,
 very different in style from ShedSkin and PyPy (and it's more
 ambitious than Psyco). I also like Unladen swallow because they are
 the few people that have the boldness to do something to increase the
 performance of Python for real.

IMHO the main problem with the Unladen Swallow approach is that it
would surprise me if CPython really spends that much of its time
interpreting byte code.  Is there some profiling output around?  My
guess is that CPython spends an awful lot of time in dictionary
lookups for method calls, plus incrementing and decrementing ref
counts and stuff like that.  Plus, the absence of a relocating garbage
collector may mess up cache hit ratios pretty badly.  Shed Skin as I
understand it departs in some ways from Python semantics in order to
get better compiler output, at the expense of breaking some Python
programs.  I think that is the right approach, as long as it's not
done too often.  That's the main reason why I think it's unfortunate
that Python 3.0 broke backwards compatibility at the particular time
that it did.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Jeff M.
On Jun 7, 3:19 pm, Arved Sandstrom dces...@hotmail.com wrote:
 Jon Harrop wrote:
  Arved Sandstrom wrote:
  Jon Harrop wrote:
  I see no problem with mutable shared state.
  In which case, Jon, you're in a small minority.

  No. Most programmers still care about performance and performance means
  mutable state.

 Quite apart from performance and mutable state, I believe we were
 talking about mutable _shared_ state. And this is something that gets a
 _lot_ of people into trouble.


Mutable shared state gets _bad_ (err.. perhaps inexperienced would
be a better adjective) programmers - who don't know what they are
doing - in trouble. There are many problem domains that either benefit
greatly from mutable shared state or can't [easily] be done without
it. Unified memory management being an obvious example... there are
many more. Unshared state has its place. Immutable state has its
place. Shared immutable state has its place. Shared mutable state has
its place.

Jeff M.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Lew

Jon Harrop wrote:

I see no problem with mutable shared state.

In which case, Jon, you're in a small minority.


Patricia Shanahan wrote:

In my opinion, shared mutable state has a lot of problems. It is also
sometimes the best design for performance reasons.


As Dr. Jon pointed out upthread, one can write decent code with mutable shared 
state.  It is also true that mutable state presents a lot of problems - 
potential problems, ones that can be solved, but not ones that can be solved 
thoughtlessly.  On the flip side, one can write a tremendous amount of 
effective multi-threaded code involving shared mutable state with attention to 
a few rules of thumb, like always synchronize access and don't use different 
monitors to do so.


Unlike some environments (e.g., database management systems), Java's tools to 
manage concurrency are explicit and low level.  The programmer's job is to 
make sure those tools are used correctly to avoid problems.  As long as they 
do that, then there is no special problem with shared mutable state.


There is, however, a cost.  Certain things must happen slower when you share 
mutable state, than when you share immutable state or don't share state. 
Certain things must happen when you share mutable state, regardless of speed, 
because without them your code doesn't work.  For some reason, concurrent 
programming is an area often not well understood by a significant percentage 
of workaday programmers.  When problems do arise, they tend to be 
probabilistic in nature and vary widely with system characteristics like 
attempted load.


So the meeting ground is, yes, concurrent mutable state can present problems 
if not properly managed.  Properly managing such is not necessarily a huge 
burden, but it must be borne.  When done properly, shared mutable state will 
not present problems in production.


--
Lew
--
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Jon Harrop
Jeff M. wrote:
 On Jun 7, 3:19 pm, Arved Sandstrom dces...@hotmail.com wrote:
 Jon Harrop wrote:
  Arved Sandstrom wrote:
  Jon Harrop wrote:
  I see no problem with mutable shared state.
  In which case, Jon, you're in a small minority.

  No. Most programmers still care about performance and performance means
  mutable state.

 Quite apart from performance and mutable state, I believe we were
 talking about mutable _shared_ state. And this is something that gets a
 _lot_ of people into trouble.
 
 Mutable shared state gets _bad_ (err.. perhaps inexperienced would
 be a better adjective) programmers - who don't know what they are
 doing - in trouble. There are many problem domains that either benefit
 greatly from mutable shared state or can't [easily] be done without
 it. Unified memory management being an obvious example... there are
 many more. Unshared state has its place. Immutable state has its
 place. Shared immutable state has its place. Shared mutable state has
 its place.

Exactly. I don't believe that shared mutable state is any harder to do
correctly than the next solution and it is almost always the most efficient
solution and the sole purpose of writing parallel programs to leverage
multicores is performance.

A bad developer can screw up anything. I see no reason to think that shared
mutable state is any more fragile than the next thing a bad developer can
screw up.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pypi compatible software

2009-06-07 Thread Martin v. Löwis
 - is code behind pypi.python.org available and why i can't find some
 up-to-date official document or howto for private pypi repository
 (maybe it exists but i just can't find it)?

The code is at

https://svn.python.org/packages/

Instructions for installing it are at

http://wiki.python.org/moin/CheeseShopDev

I don't know why you can't find it, perhaps you didn't search long
enough.

 - since python is used in commercial environments i guess they
 actually have private pypi repositories, where can i find docs that
 defines what pypi is and how to implement it?

IMO, there can't possibly be a private pypi repository. By (my)
definition, PyPI is *the* Python Package Index - anything else holding
packages can't be *the* index. So: pypi is a short name for the machine
pypi.python.org, and it is not possible to implement it elsewhere,
unless you have write access to the nameserver of python.org.

Maybe you are asking for a specification of how setuptools and
easy_install access the index, to build package repositories other than
PyPI. This is specified at

http://peak.telecommunity.com/DevCenter/EasyInstall#package-index-api

If you are asking for something else, please be more explicit what
it is that you ask for.

 - can you recommend 1 option (software providing pypi like service)
 that can be easily installed and configured as a service running on
 apache2/mod_wsgi, has an active community and docs howto setup?

If you want a plain package repository, I recommend to use Apache
without any additional software. It's possible to provide a package
repository completely out of static files - no dynamic code is
necessary.
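
As an illustration of the static approach (all names hypothetical), a
layout compatible with the package index API linked above could be:

simple/
    mypkg/
        index.html   (contains: <a href="../../dist/mypkg-1.0.tar.gz">mypkg-1.0.tar.gz</a>)
dist/
    mypkg-1.0.tar.gz

Clients would then point at it with: easy_install -i http://yourhost/simple mypkg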

 - i did found http://www.python.org/dev/peps/pep-0381/ - is this the
 spec that above mentioned software is based upon?

No, that's what the title of the document says: a Mirroring
infrastructure for PyPI.

Regards,
Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Jon Harrop
Lew wrote:
 As Dr. Jon pointed out upthread, one can write decent code with mutable
 shared
 state.  It is also true that mutable state presents a lot of problems -
 potential problems, ones that can be solved, but not ones that can be
 solved
 thoughtlessly.  On the flip side, one can write a tremendous amount of
 effective multi-threaded code involving shared mutable state with
 attention to a few rules of thumb, like always synchronize access and
 don't use different monitors to do so.
 
 Unlike some environments (e.g., database management systems), Java's tools
 to
 manage concurrency are explicit and low level.  The programmer's job is to
 make sure those tools are used correctly to avoid problems.  As long as
 they do that, then there is no special problem with shared mutable state.
 
 There is, however, a cost.  Certain things must happen slower when you
 share mutable state, than when you share immutable state or don't share
 state. Certain things must happen when you share mutable state, regardless
 of speed,
 because without them your code doesn't work.  For some reason, concurrent
 programming is an area often not well understood by a significant
 percentage
 of workaday programmers.  When problems do arise, they tend to be
 probabilistic in nature and vary widely with system characteristics like
 attempted load.
 
 So the meeting ground is, yes, concurrent mutable state can present
 problems
 if not properly managed.  Properly managing such is not necessarily a huge
 burden, but it must be borne.  When done properly, shared mutable state
 will not present problems in production.

I agree entirely but my statements were about parallelism and not
concurrency. Parallel and concurrent programming have wildly different
characteristics and solutions. I don't believe shared mutable state is
overly problematic in the context of parallelism. Indeed, I think it is
usually the best solution in that context.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Lew

Jon Harrop wrote:

I agree entirely but my statements were about parallelism and not
concurrency. Parallel and concurrent programming have wildly different
characteristics and solutions. I don't believe shared mutable state is
overly problematic in the context of parallelism. Indeed, I think it is
usually the best solution in that context.


Interesting distinction.  Would it be fair to compare concurrent programming 
to the bricks used to build the parallel program's edifice?


--
Lew
--
http://mail.python.org/mailman/listinfo/python-list


Re: Properties for several keywords

2009-06-07 Thread Scott David Daniels

Kless wrote:

On 7 jun, 11:45, Kless jonas@googlemail.com wrote:

I have to write properties for several keywords with the same code;
only the name of each property changes:

...

Is it possible to simplify it?

Please, is there any solution for this problem?


Read up on property.  It is the core of your answer.  That being said,
from your recent messages, it looks a lot like you are fighting the
language, rather than using it.  Pythonic code is clear about what
it is doing; avoid tricky automatic stuff.


def mangled(name, doc_text=None):
    funny_name = '_' + name
    def getter(self):
        return getattr(self, funny_name)
    def setter(self, text):
        setattr(self, funny_name, self._mangle(text))
    return property(getter, setter, doc=doc_text)


class MangledParts(object):
    blue = mangled('blue', "our side")
    red = mangled('red', "their side")
    white = mangled('white', "the poor civilians")

    def _mangle(self, text):
        print text
        return text.join('')
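
(A quick interactive check of how the factory behaves - hypothetical
values, just to show that attribute access goes through the property's
getter and setter:)

    mp = MangledParts()
    mp.blue = 'azure'                  # routed through the setter -> _mangle()
    print repr(mp.blue)                # whatever _mangle() stored
    print MangledParts.blue.__doc__    # 'our side'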


--Scott David Daniels
scott.dani...@acm.org

--
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Ken T.
On Sun, 07 Jun 2009 11:16:46 -0400, Lew wrote:

 So the good old days are a matter of degree and self-deception - it was
 easier to fool ourselves then that we could at least guess timings
 proportionately if not absolutely, but things definitely get more
 unpredictable over evolution.

As I recall I could get exact timings on my 6502 based Commodore 64.  The 
issues you speak of simply weren't issues. 

-- 
Ken T.
http://www.electricsenator.net

  Never underestimate the power of stupid people in large groups.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unladen swallow: python and llvm

2009-06-07 Thread bearophileHUGS
Paul Rubin:

IMHO the main problem with the Unladen Swallow approach is that it would 
surprise me if CPython really spends that much of its time interpreting byte 
code.

Note that Py3 already has a way to speed up byte code interpretation
when compiled by GCC or the Intel compiler (it's a very old strategy
used by Forth compilers, which in Py3 is used only partially. The
strongest optimizations used many years ago in Forth aren't used yet in
Py3, probably to keep the Py3 virtual machine simpler, or maybe because
there are not enough Forth experts among the developers).

Unladen swallow developers are smart and they attack the problem from
every side they can think of, plus some sides they can't think of. Be
ready for some surprises :-)


Is there some profiling output around?

I am sure they swim every day in profiler outputs :-) But you have to
ask them.


Plus, the absence of a relocating garbage collector may mess up cache hit 
ratios pretty badly.

I guess they will try to add a relocating GC too, of course. Plus some
other strategy. And more. And then some cherries on top, with whipped
cream just to be sure.


Shed Skin as I understand it departs in some ways from Python semantics in 
order to get better compiler output, at the expense of breaking some Python 
programs.  I think that is the right approach, as long as it's not done too 
often.

ShedSkin (SS) is a beast almost totally different from CPython: SS
compiles an implicitly static subset of Python to C++. So it breaks
most real Python programs, it doesn't use the Python std lib (it
rebuilds one in C++ or compiled Python), and so on.
SS may be useful for people who don't want to mess with the
intricacies of Cython (ex-Pyrex) and its tricky reference counting to
create compiled Python extensions. But so far I think almost no one is
using SS for that purpose, so it may be a failed experiment (SS is
also slow at compiling; it's hard to make it compile more than
1000-5000 lines of code).

Bye,
bearophile
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unladen swallow: python and llvm

2009-06-07 Thread Brian
On Fri, Jun 5, 2009 at 3:29 AM, Nick Craig-Wood n...@craig-wood.com wrote:



 It is an interesting idea for a number of reasons, the main one as far
 as I'm concerned is that it is more of a port of CPython to a new
 architecture than a complete re-invention of python (like PyPy /
 IronPython / jython) so stands a chance of being merged back into
 CPython.


Blatant fanboyism. PyPy also has a chance of being merged back into Python
trunk.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: can it be shorter?

2009-06-07 Thread Aaron Brady
On Jun 6, 8:07 am, tsangpo tsangpo.newsgr...@gmail.com wrote:
 I want to ensure that the url ends with a '/'; now I have to do this
 like below.
 url = url + '' if url[-1] == '/' else '/'

 Is there a better way?

url+= { '/': '' }.get( url[ -1 ], '/' )

Shorter is always better.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: can it be shorter?

2009-06-07 Thread Brian
Since extra slashes at the end of a URL are ignored, that means I win!

url+='/'

On Sun, Jun 7, 2009 at 4:45 PM, Aaron Brady castiro...@gmail.com wrote:

 On Jun 6, 8:07 am, tsangpo tsangpo.newsgr...@gmail.com wrote:
  I want to ensure that the url ends with a '/'; now I have to do this
  like below.
  url = url + '' if url[-1] == '/' else '/'
 
  Is there a better way?

 url+= { '/': '' }.get( url[ -1 ], '/' )

 Shorter is always better.
 --
 http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Arved Sandstrom

Lew wrote:

Jon Harrop wrote:

I agree entirely but my statements were about parallelism and not
concurrency. Parallel and concurrent programming have wildly different
characteristics and solutions. I don't believe shared mutable state is
overly problematic in the context of parallelism. Indeed, I think it is
usually the best solution in that context.


Interesting distinction.  Would it be fair to compare concurrent 
programming to the bricks used to build the parallel program's edifice?


Way too much of a fine distinction. While they are in fact different, 
the point of concurrent programming is to structure programs as a group 
of computations, which can be executed in parallel (however that might 
actually be done depending on how many processors there are). Parallel 
computing means to carry out many computations simultaneously. These are 
interleaved definitions. And they are *not* wildly different.


If you talk about shared mutable state, it is not as easy to use as Dr 
Harrop seems to think it is. Maybe in his experience it has been, but in 
general it's no trivial thing to manage. Lew, you probably summarized it 
best a few posts upstream.


AHS
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python preprosessor

2009-06-07 Thread Roger Binns

Tuomas Vesterinen wrote:
 I am intensively using 2to3.py. So I have 2 codebases: one in py2 and
 the other in py3.

The expectation would be that you only maintain the py2 code and
automatically generate the py3 code on demand using 2to3.

It is possible to have the same code run under both versions, but not
recommended.  As an example I do this for the APSW test suite, mostly
because the test code deliberately explores various corner cases that
2to3 could not convert (nor should it try!).  Here is how I do the whole
Unicode thing:

if py3: # defined earlier
  UPREFIX=""
else:
  UPREFIX="u"

def u(x): # use with raw strings
  return eval(UPREFIX+'"""'+x+'"""')

# Example of use
u(r"\N{BLACK STAR}\u234")

You can pull similar stunts for bytes (I use buffers in py2), long ints
(needing L suffix in some py2 versions), the next() builtin from py3,
exec syntax differences etc.  You can see more details in the first 120
lines of http://code.google.com/p/apsw/source/browse/apsw/trunk/tests.py

Roger

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Making the case for repeat

2009-06-07 Thread Raymond Hettinger
[pataphor]
 So here is my proposed suggestion for a once and for all reconciliation
 of various functions in itertools that can not stand on their own and
 keep a straight face.

Interesting phraseology ;-)  Enticing and yet fallacious in its
presumption of known and accepted usability problems.  FWIW, when I
designed the module, I started by researching constructs that had
proven success in functional languages and then adapted them to the
needs of Python applications.  That being said, I'm always open to
hearing new ideas.

After reading this thread a couple times, I have a few thoughts
to offer.

1. The Pythonic Way(tm) is to avoid combining too much functionality
in a single function, preferring to split when possible.  That is why
ifilter() and ifilterfalse() are separate functions.

(FWIW, the principle is considered pythonic because it was articulated
by Guido and has been widely applied throughout the language.)

There is a natural inclination to do the opposite.  We factor code
to eliminate redundancy, but that is not always a good idea with
an API.  The goal for code factoring is to minimize redundancy.
The goal for API design is having simple parts that are easily
learned and can be readily combined (i.e. the notion of an
iterator algebra).

It is not progress to mush the parts together in a single function
requiring multiple parameters.

2. I question the utility of combining repeat() and cycle()
because I've not previously seen the two used together.

OTOH, there may be some utility to producing a fixed number of cycles
(see the ncycles() recipe in the docs).  Though, if I thought this need
arose very often (it has never been requested), the straight-forward
solution would be to add a times argument to cycle(), patterned
after repeat()'s use of a times argument.

3. Looking at the sample code provided in your post, I would suggest
rewriting it as a factory function using the existing tools as
components.  That way, the result of the function will still run
at C speed and not be slowed by corner cases or unused parameters.
(see the ncycles recipe for an example of how to do this).

4. The suggested combined function seems to emphasize truncated
streams (i.e. a fixed number of repetitions or cycles).  This is
at odds with the notion of a toolset designed to allow lazy
infinite iterators to be fed to consumer functions that truncate
on the shortest iterable.  For example, the toolset supports:

   izip(mydata, count(), repeat(datetime.now()))

in preference to:

   izip(mydata, islice(count(), len(mydata)),
        repeat(datetime.now(), times=len(mydata)))

To gain a better appreciation for this style (and for the current
design of itertools), look at the classic Hughes paper "Why Functional
Programming Matters".

http://www.math.chalmers.se/~rjmh/Papers/whyfp.pdf
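
(For convenience, the ncycles() recipe referred to in points 2 and 3
above, as it appears in the itertools documentation:)

    from itertools import chain, repeat

    def ncycles(iterable, n):
        "Returns the sequence elements n times"
        return chain.from_iterable(repeat(tuple(iterable), n))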


Raymond

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to transfer my utf8 code saved in a file to gbk code

2009-06-07 Thread John Machin
On Jun 8, 12:13 am, R. David Murray rdmur...@bitdance.com wrote:
 higer higerinbeij...@gmail.com wrote:
  My file contains such strings :
  \xe6\x97\xa5\xe6\x9c\x9f\xef\xbc\x9a

 If those bytes are what is in the file (and it sounds like they are),
 then the data in your file is not in UTF8 encoding, it is in ASCII
 encoded as hexadecimal escape codes.

OK, I'll bite: what *ASCII* character is encoded as either "\xe6" or
r"\xe6" by what mechanism in which parallel universe?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Jon Harrop
Arved Sandstrom wrote:
 Lew wrote:
 Interesting distinction.  Would it be fair to compare concurrent
 programming to the bricks used to build the parallel program's edifice?
 
 Way too much of a fine distinction. While they are in fact different,
 the point of concurrent programming is to structure programs as a group
 of computations, which can be executed in parallel (however that might
 actually be done depending on how many processors there are).

No. Concurrent programming is about interleaving computations in order to
reduce latency. Nothing to do with parallelism.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Jon Harrop
Lew wrote:
 Jon Harrop wrote:
 I agree entirely but my statements were about parallelism and not
 concurrency. Parallel and concurrent programming have wildly different
 characteristics and solutions. I don't believe shared mutable state is
 overly problematic in the context of parallelism. Indeed, I think it is
 usually the best solution in that context.
 
 Interesting distinction.  Would it be fair to compare concurrent
 programming to the bricks used to build the parallel program's edifice?

Concurrent programming certainly underpins the foundations of almost all
parallel programs. Not least at the level of the OS scheduling the threads
than run the parallel programs. However, that knowledge is probably more
confusing than helpful here.

In essence, concurrent programming is concerned with reducing latency (e.g.
evading blocking) by interleaving computations whereas parallel programming
is concerned with maximizing throughput by performing computations at the
same time.

Historically, concurrency has been of general interest on single core
machines in the context of operating systems and IO and has become more
important recently due to the ubiquity of web programming. Parallelism was
once only important to computational scientists programming shared-memory
supercomputers and enterprise developers programming distributed-memory
clusters but the advent of multicore machines on the desktop and in the
games console has pushed parallelism into the limelight for ordinary
developers when performance is important.

Solutions for concurrent and parallel programming are also wildly different.
Concurrent programming typically schedules work items that are expected to
block on a shared queue for a pool of dozens or even hundreds of threads.
Parallel programming typically schedules work items that are expected to
not block on wait-free work-stealing queues for a pool of n threads
where n is the number of cores. Solutions for concurrent programming
(such as the .NET thread pool and asynchronous workflows in F#) can be used
as a poor man's solution for parallel programming but the overheads are
huge because they were not designed for this purpose so performance is much
worse than necessary. Solutions for parallel programming (e.g. Cilk, the
Task Parallel Library) are virtually useless for concurrent programming
because you quickly end up with all n threads blocked and the whole
program stalls.
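
(To put that in Python terms - a rough sketch with hypothetical
workloads, using threading for the concurrent case and the 2.6 stdlib
multiprocessing module for the parallel case:)

    import threading
    import urllib2
    from multiprocessing import Pool

    def fetch(url):                  # I/O-bound: concurrency hides latency
        return urllib2.urlopen(url).read()

    def crunch(n):                   # CPU-bound: parallelism adds throughput
        return sum(i * i for i in xrange(n))

    if __name__ == '__main__':
        # Concurrency: far more threads than cores is fine; they mostly block.
        threads = [threading.Thread(target=fetch, args=(u,))
                   for u in ['http://example.com/'] * 10]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        # Parallelism: one worker per core; work items must not block.
        pool = Pool()                # defaults to cpu_count() workers
        print pool.map(crunch, [10 ** 6] * 8)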

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to transfer my utf8 code saved in a file to gbk code

2009-06-07 Thread MRAB

John Machin wrote:

On Jun 8, 12:13 am, R. David Murray rdmur...@bitdance.com wrote:

higer higerinbeij...@gmail.com wrote:

My file contains such strings :
\xe6\x97\xa5\xe6\x9c\x9f\xef\xbc\x9a

If those bytes are what is in the file (and it sounds like they are),
then the data in your file is not in UTF8 encoding, it is in ASCII
encoded as hexadecimal escape codes.


OK, I'll bite: what *ASCII* character is encoded as either "\xe6" or
r"\xe6" by what mechanism in which parallel universe?


Maybe he means that the file itself is in ASCII.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Keeping console window open

2009-06-07 Thread Dave Angel

Fencer wrote:
Scott David Daniels wrote:

To be a trifle more explicit, turn:

  ...
  if __name__ == '__main__':
      main()

into:
  ...
  if __name__ == '__main__':
      try:
          main()
      except Exception, why:
          print 'Failed:', why
          import sys, traceback
          traceback.print_tb(sys.exc_info()[2])
      raw_input('Leaving: ')

Note that building your script like this also allows you to
open the interpreter and type:
    import mymodule
    mymodule.main()
in order to examine how it runs.


Thanks alot, this was exactly what I was looking for!



--Scott David Daniels
scott.dani...@acm.org



Notice also that you'd then move all the imports inside main(), rather
than putting them at outer scope.  Now some imports, such as sys and os,
may want to be at outer scope, but if they don't work, your system is
seriously busted.
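
(A sketch of the layout described above, with hypothetical module names:)

    import sys, os        # cheap, essential imports stay at outer scope

    def main():
        # Heavyweight or failure-prone imports go here, so an ImportError
        # is caught by the same try/except that wraps main().
        import xml.etree.ElementTree as ET
        pass  # real work goes here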



--
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Patricia Shanahan

Jon Harrop wrote:
...

Historically, concurrency has been of general interest on single core
machines in the context of operating systems and IO and has become more
important recently due to the ubiquity of web programming. Parallelism was
once only important to computational scientists programming shared-memory
supercomputers and enterprise developers programming distributed-memory
clusters but the advent of multicore machines on the desktop and in the
games console has pushed parallelism into the limelight for ordinary
developers when performance is important.

...

Parallelism has also been important, for a long time, to multiprocessor
operating system developers. I got my first exposure to parallel
programming, in the 1980's, working on NCR's VRX operating system.

Patricia
--
http://mail.python.org/mailman/listinfo/python-list


Re: how to transfer my utf8 code saved in a file to gbk code

2009-06-07 Thread John Machin
On Jun 8, 10:20 am, MRAB pyt...@mrabarnett.plus.com wrote:
 John Machin wrote:
  On Jun 8, 12:13 am, R. David Murray rdmur...@bitdance.com wrote:
  higer higerinbeij...@gmail.com wrote:
  My file contains such strings :
  \xe6\x97\xa5\xe6\x9c\x9f\xef\xbc\x9a
  If those bytes are what is in the file (and it sounds like they are),
  then the data in your file is not in UTF8 encoding, it is in ASCII
   encoded as hexadecimal escape codes.

   OK, I'll bite: what *ASCII* character is encoded as either "\xe6" or
   r"\xe6" by what mechanism in which parallel universe?

  Maybe he means that the file itself is in ASCII.

Maybe indeed, but only so because hex escape codes are by design in
ASCII. "in ASCII" is redundant ... I can't imagine how the OP parsed
"ASCII ... encoded" (with "because it is" omitted) given that his native
tongue's grammar varies from that of English in several interesting
ways :-)
-- 
http://mail.python.org/mailman/listinfo/python-list


pythoncom and writing file summary info on Windows

2009-06-07 Thread mattleftbody
Hello,
I was trying to put together a script that would write things like the
Author and Title metadata fields of a file under Windows.  I got the
win32 extensions installed and found a few things that look like they
should work, though I'm not getting the result I would expect.
Hopefully someone has worked through this area and can point me in the
right direction.

so after various imports I try the following:

file = pythoncom.StgOpenStorageEx(fname, m, storagecon.STGFMT_FILE, 0,
                                  pythoncom.IID_IPropertySetStorage)

summ = file.Create(pythoncom.FMTID_SummaryInformation,
                   pythoncom.IID_IPropertySetStorage,
                   storagecon.PROPSETFLAG_DEFAULT,
                   storagecon.STGM_READWRITE | storagecon.STGM_CREATE |
                   storagecon.STGM_SHARE_EXCLUSIVE)

everything seems fine so far, until I try to write, which doesn't seem
to do anything.  I tried the following three ways:

summ.WriteMultiple((storagecon.PIDSI_AUTHOR, storagecon.PIDSI_TITLE),
('Author', 'Title'))

summ.WriteMultiple((4,2 ), ('Author', 'Title'))

summ.WritePropertyNames((4,2 ), ('Author', 'Title'))


Any advice would be most appreciated!

Matt
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Get the class name

2009-06-07 Thread Robert Kern

On 2009-06-07 15:41, Tuomas Vesterinen wrote:

Kless wrote:

Is there any way to get the class name, to avoid having to write it?

---
class Foo:
    super(Foo, self)
---


* Using Py 2.6.2


>>> class Foo(object):
...     def cls(self):
...         return self.__class__
...
>>> Foo().cls()
<class '__main__.Foo'>


You definitely don't want to use that for super(). If you actually have an 
instance of a subclass of Foo, you will be giving the wrong information. 
Basically, there is no (sane) way to do what the OP wants. If there were, that 
information would not be necessary to give to super().
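
(A minimal sketch of the failure mode, with hypothetical classes A and
B: if a base-class method computes its own class at runtime, a subclass
instance makes the super() lookup start in the wrong place and recurse
forever.)

    class A(object):
        def who(self):
            # Wrong: self.__class__ is B for a B instance, so this
            # finds A.who again and never terminates.
            return super(self.__class__, self).who()

    class B(A):
        pass

    B().who()   # RuntimeError: maximum recursion depth exceeded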


--
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth.
  -- Umberto Eco

--
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Dave Angel

Lew wrote:
Scott David Daniels wrote:

the nub of the problem is not on the benchmarks.  There is something
to be said for the good old daays when you looked up the instruction
timings that you used in a little document for your machine, and could
know the cost of any loop.  We are faster now, but part of the cost of
that speed is that timing is a black art.  


Those good old days never existed.  Those manuals never accounted for 
things that affected timing even then, like memory latency or refresh 
time.  SRAM cache made things worse, since the published timings never 
mentioned cache-miss delays.  Though memory cache might seem a recent 
innovation, it's been around a while.  It would be challenging to find 
any published timing since the commercialization of computers that 
would actually tell the cost of any loop.


Things got worse when chips like the '86 family acquired multiple 
instructions for doing loops, still worse when pre-fetch pipelines 
became deeper and wider, absolutely Dark Art due to multi-level memory 
caches becoming universal, and 
throw-your-hands-up-and-leave-for-the-corner-bar with multiprocessor 
NUMA systems.  OSes and high-level languages complicate the matter - 
you never know how much time slice you'll get or how your source got 
compiled or optimized by run-time.


So the good old days are a matter of degree and self-deception - it 
was easier to fool ourselves then that we could at least guess timings 
proportionately if not absolutely, but things definitely get more 
unpredictable over evolution.


Nonsense.  The 6502 with static memory was precisely predictable, and 
many programmers (working in machine language, naturally) counted on 
it.  Similarly the Novix 4000, when programmed in its native Forth.


And previous to that, I worked on several machines (in fact, I wrote the 
assembler and debugger for two of them) where the only variable was the 
delay every two milliseconds for dynamic memory refresh.  Separate 
control memory and data memory, and every instruction precisely 
clocked.  No instruction prefetch, no cache memory.  What you see is 
what you get.


Would I want to go back there?  No.  Sub-megaherz clocks with much less 
happening on each clock means we were operating at way under .01% of 
present day.


--
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Jon Harrop
Ken T. wrote:
 On Sun, 07 Jun 2009 11:16:46 -0400, Lew wrote:
 So the good old days are a matter of degree and self-deception - it was
 easier to fool ourselves then that we could at least guess timings
 proportionately if not absolutely, but things definitely get more
 unpredictable over evolution.
 
 As I recall I could get exact timings on my 6502 based Commodore 64.  The
 issues you speak of simply weren't issues.

Let's not forget Elite for the 6502 exploiting predictable performance in
order to switch graphics modes partway down the vsync!

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Arved Sandstrom

Jon Harrop wrote:

Arved Sandstrom wrote:

Lew wrote:

Interesting distinction.  Would it be fair to compare concurrent
programming to the bricks used to build the parallel program's edifice?

Way too much of a fine distinction. While they are in fact different,
the point of concurrent programming is to structure programs as a group
of computations, which can be executed in parallel (however that might
actually be done depending on how many processors there are).


No. Concurrent programming is about interleaving computations in order to
reduce latency. Nothing to do with parallelism.


Jon, I do concurrent programming all the time, as do most of my peers. 
Way down on the list of why we do it is the reduction of latency.


AHS
--
http://mail.python.org/mailman/listinfo/python-list


Re: Get the class name

2009-06-07 Thread Carl Banks
On Jun 7, 1:14 pm, Kless jonas@googlemail.com wrote:
 Is there any way to get the class name, to avoid having to write it?

 ---
 class Foo:
    super(Foo, self)
 ---

 * Using Py 2.6.2

If you are using emacs you can put the following elisp code in
your .emacs file; it defines a command to automatically insert a
super call with the class and method name filled in:

http://code.activestate.com/recipes/522990/


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to transfer my utf8 code saved in a file to gbk code

2009-06-07 Thread higer
On Jun 7, 11:25 pm, John Machin sjmac...@lexicon.net wrote:
 On Jun 7, 10:55 pm, higer higerinbeij...@gmail.com wrote:

  My file contains such strings :
  \xe6\x97\xa5\xe6\x9c\x9f\xef\xbc\x9a



 Are you sure? Does that occupy 9 bytes in your file or 36 bytes?


It was saved in a file, so it occupies 36 bytes. If I just use a
variable to contain this string, it can certainly work out the correct
result, but how do I get the right answer when reading from a file?



  I want to read the content of this file and transfer it to the
  corresponding gbk code, a kind of Chinese character encoding style.
  Every time I try to transfer, it will output the same thing no
  matter which method was used.
   It seems like that when Python reads it, Python will take '\' as a
  common char and this string at last will be represented as
  \\xe6\\x97\\xa5\\xe6\\x9c\\x9f\\xef\\xbc\\x9a , then the \ can be
  'correctly' output, but that's not what I want to get.

  Anyone can help me?

 try this:

 utf8_data = your_data.decode('string-escape')
 unicode_data = utf8_data.decode('utf8')
 # unicode derived from your sample looks like this 日期: is that what
 you expected?

You are right, the result is 日期, which is just what I expect. If you
save the string in a variable, you surely can get the correct result.
But it is just a sample, so I gave a short string; what about the many
characters in a file?

 gbk_data = unicode_data.encode('gbk')


I have tried this method which you just told me, but unfortunately it
does not work (mess code).


 If that doesn't work, do three things:
 (1) give us some unambiguous hard evidence about the contents of your
 data:
 e.g. # assuming Python 2.x

My Python versoin is 2.5.2

 your_data = open('your_file.txt', 'rb').read(36)
 print repr(your_data)
 print len(your_data)
 print your_data.count('\\')
 print your_data.count('x')


The result is:

'\\xe6\\x97\\xa5\\xe6\\x9c\\x9f\\xef\\xbc\\x9a'
36
9
9

 (2) show us the source of the script that you used

def UTF8ToChnWords():
    f = open("123.txt", "rb")
    content = f.read()
    print repr(content)
    print len(content)
    print content.count("\\")
    print content.count("x")

if __name__ == '__main__':
    UTF8ToChnWords()

 (3) Tell us what "doesn't work" means in this case

It doesn't work because no matter how we deal with it, we always get a
36-byte string, not 9 bytes. Thus, we cannot get the correct answer.


 Cheers,
 John

Thank you very much,
higer
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to transfer my utf8 code saved in a file to gbk code

2009-06-07 Thread higer
On Jun 8, 8:20 am, MRAB pyt...@mrabarnett.plus.com wrote:
 John Machin wrote:
  On Jun 8, 12:13 am, R. David Murray rdmur...@bitdance.com wrote:
  higer higerinbeij...@gmail.com wrote:
  My file contains such strings :
  \xe6\x97\xa5\xe6\x9c\x9f\xef\xbc\x9a
  If those bytes are what is in the file (and it sounds like they are),
  then the data in your file is not in UTF8 encoding, it is in ASCII
  encoded as hexidecimal escape codes.

  OK, I'll bite: what *ASCII* character is encoded as either \xe6 or
  r\xe6 by what mechanism in which parallel universe?

 Maybe he means that the file itself is in ASCII.

Yes, my file itself is in ASCII.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Jon Harrop
Arved Sandstrom wrote:
 Jon Harrop wrote:
 Arved Sandstrom wrote:
 Lew wrote:
 Interesting distinction.  Would it be fair to compare concurrent
 programming to the bricks used to build the parallel program's edifice?
 Way too much of a fine distinction. While they are in fact different,
 the point of concurrent programming is to structure programs as a group
 of computations, which can be executed in parallel (however that might
 actually be done depending on how many processors there are).
 
 No. Concurrent programming is about interleaving computations in order to
 reduce latency. Nothing to do with parallelism.
 
 Jon, I do concurrent programming all the time, as do most of my peers.
 Way down on the list of why we do it is the reduction of latency.

What is higher on the list?

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Ken T.
On Mon, 08 Jun 2009 02:39:40 +0100, Jon Harrop wrote:

 Ken T. wrote:
 On Sun, 07 Jun 2009 11:16:46 -0400, Lew wrote:
 So the good old days are a matter of degree and self-deception - it
 was easier to fool ourselves then that we could at least guess timings
 proportionately if not absolutely, but things definitely get more
 unpredictable over evolution.
 
 As I recall I could get exact timings on my 6502 based Commodore 64. 
 The issues you speak of simply weren't issues.
 
 Let's not forget Elite for the 6502 exploiting predictable performance
 in order to switch graphics modes partway down the vsync!

That actually didn't require predictable timing.  You could tell the 
video chip to send you an interrupt when it got to a given scan line.  I 
used this myself.  

Elite was cool though.  I wasted many hours playing that game. 

-- 
Ken T.
http://www.electricsenator.net

  Duct tape is like the force.  It has a light side, and a dark side,
  and it holds the universe together ...
-- Carl Zwanzig
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pylint naming conventions?

2009-06-07 Thread David Stanek
On Sun, Jun 7, 2009 at 9:23 AM, Esmailebo...@hotmail.com wrote:
 Ben Finney wrote:

 Esmail ebo...@hotmail.com writes:

  I am confused by pylint's naming conventions; I don't think they are
  in tune with Python's style recommendations (PEP 8?)

 Anyone else think this?

 It's hard to know, without examples. Can you give some output of pylint
 that you think doesn't agree with PEP 8?

 Sure, I will next time I have a nice self-contained example. Perhaps
 not that many people are using pylint? I was expecting a bunch of
 messages either contradicting my observation or agreeing with it :-)
 .. but perhaps this indicates that there's no issue.

It is my understanding that it does check for PEP 8 names. Even if it
doesn't, it is really easy to change. If you run 'pylint
--generate-rcfile' (I think) it will output the configuration that it
is using. You can then save this off and customize it.


 I'll try to come up with a nice short code example in the next few days
 to demonstrate what I think the problem is and post it, thanks for the
 suggestion.

If you didn't have an example handy what prompted you to start this thread?

-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Paul Rubin
George Neuner gneun...@comcast.net writes:
 Even the lightest weight
 user space (green) threads need a few hundred instructions, minimum,
 to amortize the cost of context switching.

I thought the definition of green threads was that multiplexing them
doesn't require context switches.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Paul Rubin
Jeff M. mass...@gmail.com writes:
   Even the lightest weight
   user space (green) threads need a few hundred instructions, minimum,
   to amortize the cost of context switching
 There's always a context switch. It's just whether or not you are
 switching in/out a virtual stack and registers for the context or the
 hardware stack/registers.

I don't see the hundreds of instructions in that case.  

http://shootout.alioth.debian.org/u32q/benchmark.php?test=threadring&lang=ghc&id=3

shows GHC doing 50 million lightweight thread switches in 8.47
seconds, passing a token around a thread ring.  Almost all of that is
probably spent acquiring and releasing the token's lock as the token
is passed from one thread to another.  That simply doesn't leave time
for hundreds of instructions per switch.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: can it be shorter?

2009-06-07 Thread Paul Rubin
Aaron Brady castiro...@gmail.com writes:
 url+= { '/': '' }.get( url[ -1 ], '/' )
 
 Shorter is always better.

url = url.rstrip('/') + '/'
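
(Worth noting - unlike the url[-1] versions above, this also copes with
an empty string and collapses any run of trailing slashes to exactly
one:)

    for url in ('http://x/a', 'http://x/a/', 'http://x/a//', ''):
        print repr(url.rstrip('/') + '/')
    # 'http://x/a/', 'http://x/a/', 'http://x/a/', '/'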
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Jon Harrop
Paul Rubin wrote:
 Jeff M. mass...@gmail.com writes:
   Even the lightest weight
   user space (green) threads need a few hundred instructions,
   minimum, to amortize the cost of context switching
 There's always a context switch. It's just whether or not you are
 switching in/out a virtual stack and registers for the context or the
 hardware stack/registers.
 
 I don't see the hundreds of instructions in that case.
 

http://shootout.alioth.debian.org/u32q/benchmark.php?test=threadring&lang=ghc&id=3
 
 shows GHC doing 50 million lightweight thread switches in 8.47
 seconds, passing a token around a thread ring.  Almost all of that is
 probably spent acquiring and releasing the token's lock as the token
 is passed from one thread to another.  That simply doesn't leave time
 for hundreds of instructions per switch.

And Haskell is not exactly fast...

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Jon Harrop
Scott David Daniels wrote:
 Lew wrote:
 Scott David Daniels wrote:
 the nub of the problem is not on the benchmarks.  There is something
  to be said for the good old days when you looked up the instruction
 timings that you used in a little document for your machine, and could
 know the cost of any loop.  We are faster now, but part of the cost of
 that speed is that timing is a black art.
 
 Those good old days never existed.  Those manuals never accounted for
 things that affected timing even then, like memory latency or refresh
 time.
 
 Well, as Gilbert and Sullivan wrote:
   - What, never?
   - No, never!
   - What, Never?
   - Well, hardly ever.
 Look up the LGP-30.  It was quite predictable.  It has been a while.

Same for early ARMs.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: spammers on pypi

2009-06-07 Thread Scott David Daniels

Lawrence D'Oliveiro wrote:
In message 
dd8295d3-61ab-4cc9-86b8-1e04f3edd...@f16g2000vbf.googlegroups.com, joep 
wrote:



Is there a way to ban spammers from pypi?


Yes, but it doesn't work.


And if you ever do discover something that _does_ work:
 (1) You'll have discovered perpetual motion.
 (2) You'll probably get terribly rich from selling it.
 (3) You'll probably advertise your new solution via bulk
 e-mail to all e-mail addresses you know of that might
 be interested in learning of your great discovery.

:-)

--Scott David Daniels
scott.dani...@acm.org
--
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Scott David Daniels

Lew wrote:

Scott David Daniels wrote:

the nub of the problem is not on the benchmarks.  There is something
to be said for the good old days when you looked up the instruction
timings that you used in a little document for your machine, and could
know the cost of any loop.  We are faster now, but part of the cost of
that speed is that timing is a black art.  


Those good old days never existed.  Those manuals never accounted for 
things that affected timing even then, like memory latency or refresh 
time.


Well, as Gilbert and Sullivan wrote:
 - What, never?
 - No, never!
 - What, Never?
 - Well, hardly ever.
Look up the LGP-30.  It was quite predictable.  It has been a while.

--Scott David Daniels
scott.dani...@acm.org
--
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Roedy Green
On Fri, 5 Jun 2009 18:15:00 + (UTC), Kaz Kylheku
kkylh...@gmail.com wrote, quoted or indirectly quoted someone who
said :

Even for problems where it appears trivial, there can be hidden
issues, like false cache coherency communication where no actual
sharing is taking place. Or locks that appear to have low contention and
negligible performance impact on ``only'' 8 processors suddenly turn into
bottlenecks. Then there is NUMA. A given address in memory may be
RAM attached to the processor accessing it, or to another processor,
with very different access costs.

Could what you are saying be summed up by saying, "The more threads
you have, the more important it is to keep your threads independent,
sharing as little data as possible"?
-- 
Roedy Green Canadian Mind Products
http://mindprod.com

Never discourage anyone... who continually makes progress, no matter how slow.
~ Plato 428 BC died: 348 BC at age: 80
-- 
http://mail.python.org/mailman/listinfo/python-list


openhook

2009-06-07 Thread Gaudha
Can anybody tell me what is meant by 'openhook'?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The Complexity And Tedium of Software Engineering

2009-06-07 Thread verec

On 2009-06-05 21:03:33 +0100, Kenneth Tilton kentil...@gmail.com said:


When progress stops we will have time to polish our systems, not before.


Is that an endorsement of mediocrity?
--
JFB

--
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-07 Thread Raymond Wiker
Roedy Green see_webs...@mindprod.com.invalid writes:

 On Fri, 5 Jun 2009 18:15:00 + (UTC), Kaz Kylheku
 kkylh...@gmail.com wrote, quoted or indirectly quoted someone who
 said :

Even for problems where it appears trivial, there can be hidden
issues, like false cache coherency communication where no actual
sharing is taking place. Or locks that appear to have low contention and
negligible performance impact on ``only'' 8 processors suddenly turn into
bottlenecks. Then there is NUMA. A given address in memory may be
RAM attached to the processor accessing it, or to another processor,
with very different access costs.

  Could what you are saying be summed up by saying, "The more threads
  you have, the more important it is to keep your threads independent,
  sharing as little data as possible"?

Absolutely... not a new observation, either, as it follows
directly from Amdahl's law. 
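
(For reference, Amdahl's law - with parallelizable fraction p of the
work and n cores, the best possible speedup is 1 / ((1 - p) + p / n).
A two-line sketch:)

    def amdahl_speedup(p, n):
        # p: parallelizable fraction of the work, n: number of cores
        return 1.0 / ((1.0 - p) + p / n)

    print amdahl_speedup(0.95, 8)   # ~5.9x on 8 cores, not 8x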
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The Complexity And Tedium of Software Engineering

2009-06-07 Thread Kenneth Tilton

verec wrote:

On 2009-06-05 21:03:33 +0100, Kenneth Tilton kentil...@gmail.com said:


When progress stops we will have time to polish our systems, not before.


Is that an endorsement of mediocrity?


No, of General Patton.

hth, kt
--
http://mail.python.org/mailman/listinfo/python-list


can it be shorter?

2009-06-07 Thread tsangpo
I want to ensure that the url ends with a '/'; now I have to do this
like below.

url = url + '' if url[-1] == '/' else '/'

Is there a better way? 


--
http://mail.python.org/mailman/listinfo/python-list


Get the class name

2009-06-07 Thread Kless
Is there any way to get the class name, to avoid having to write it?

---
class Foo:
   super(Foo, self)
---


* Using Py 2.6.2
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pylint naming conventions?

2009-06-07 Thread Ben Finney
David Stanek dsta...@dstanek.com writes:

 On Sun, Jun 7, 2009 at 9:23 AM, Esmailebo...@hotmail.com wrote:
  I'll try to come up with a nice short code example in the next few
  days to demonstrate what I think the problem is and post it, thanks
  for the suggestion.
 
 If you didn't have an example handy what prompted you to start this
 thread?

My understanding of Esmail's original message was that, like many of us
on first running ‘pylint’ against an existing code base, the output is
astonishingly verbose and tedious to read. By the above I presume he's
being a good forum member and trying to find a minimal example that
shows the problem clearly :-)

-- 
 \“Kill myself? Killing myself is the last thing I'd ever do.” |
  `\—Homer, _The Simpsons_ |
_o__)  |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to transfer my utf8 code saved in a file to gbk code

2009-06-07 Thread Mark Tolonen


higer higerinbeij...@gmail.com wrote in message 
news:0c786326-1651-42c8-ba39-4679f3558...@r13g2000vbr.googlegroups.com...

On Jun 7, 11:25 pm, John Machin sjmac...@lexicon.net wrote:

On Jun 7, 10:55 pm, higer higerinbeij...@gmail.com wrote:

 My file contains such strings :
 \xe6\x97\xa5\xe6\x9c\x9f\xef\xbc\x9a





Are you sure? Does that occupy 9 bytes in your file or 36 bytes?



It was saved in a file, so it occupies 36 bytes. If I just use a
variable to contain this string, it can certainly work out the correct
result, but how do I get the right answer when reading from a file?


Did you create this file?  If it is 36 characters, it contains literal 
backslash characters, not the 9 bytes that would correctly encode as UTF-8. 
If you created the file yourself, show us the code.







 I want to read the content of this file and transfer it to the
 corresponding gbk code, a kind of Chinese character encoding style.
 Every time I try to transfer, it will output the same thing no
 matter which method was used.
  It seems like that when Python reads it, Python will take '\' as a
 common char and this string at last will be represented as
 \\xe6\\x97\\xa5\\xe6\\x9c\\x9f\\xef\\xbc\\x9a , then the \ can be
 'correctly' output, but that's not what I want to get.

 Anyone can help me?

try this:

utf8_data = your_data.decode('string-escape')
unicode_data = utf8_data.decode('utf8')
# unicode derived from your sample looks like this 日期: is that what
you expected?


You are right, the result is 日期, which is just what I expect. If you
save the string in a variable, you surely can get the correct result.
But it is just a sample, so I gave a short string; what about the many
characters in a file?


gbk_data = unicode_data.encode('gbk')



I have tried this method which you just told me, but unfortunately it
does not work (mess code).


How are you determining this is 'mess code'?  How are you viewing the
result?  You'll need to use a viewer that understands GBK encoding, such
as Notepad on Chinese Windows.






If that doesn't work, do three things:
(1) give us some unambiguous hard evidence about the contents of your
data:
e.g. # assuming Python 2.x


My Python versoin is 2.5.2


your_data = open('your_file.txt', 'rb').read(36)
print repr(your_data)
print len(your_data)
print your_data.count('\\')
print your_data.count('x')



The result is:

'\\xe6\\x97\\xa5\\xe6\\x9c\\x9f\\xef\\xbc\\x9a'
36
9
9


(2) show us the source of the script that you used


def UTF8ToChnWords():
    f = open("123.txt", "rb")
    content = f.read()
    print repr(content)
    print len(content)
    print content.count("\\")
    print content.count("x")


Try:

utf8data = content.decode('string-escape')
unicodedata = utf8data.decode('utf8')
gbkdata = unicodedata.encode('gbk')
print len(gbkdata), repr(gbkdata)
open("456.txt", "wb").write(gbkdata)

The print should give:

6 '\xc8\xd5\xc6\xda\xa3\xba'

This is correct for GBK encoding.  456.txt should contain the 6 bytes of GBK 
data.  View the file with a program that understand GBK encoding.


-Mark


--
http://mail.python.org/mailman/listinfo/python-list


[issue4749] Issue with RotatingFileHandler logging handler on Windows

2009-06-07 Thread Frans

Frans frans.van.nieuwenho...@gmail.com added the comment:

I ran into the same problem with RotatingFileHandler from a
multithreaded daemon under Ubuntu. I Googled around and found the
ConcurrentLogHandler on pypi
(http://pypi.python.org/pypi/ConcurrentLogHandler). It solved the problem.
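
For anyone else landing here, usage is essentially a drop-in
replacement (a sketch from memory of the package's documented
cloghandler module; check its README for the exact signature):

    import logging
    from cloghandler import ConcurrentRotatingFileHandler

    log = logging.getLogger()
    handler = ConcurrentRotatingFileHandler('app.log', 'a',
                                            512 * 1024, 5)
    log.addHandler(handler)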

--
nosy: +Frans

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue4749
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6228] round() error

2009-06-07 Thread steve21

New submission from steve21 steve872929...@yahoo.com.au:

I wish to round the float 697.04157958254996 to 10 decimal digits after
the decimal point.

$ python3.0
Python 3.0.1 (r301:69556, Jun  7 2009, 14:51:41)
[GCC 4.3.2 20081105 (Red Hat 4.3.2-7)] on linux2
Type "help", "copyright", "credits" or "license" for more information.

>>> 697.04157958254996
697.04157958254996  # python float can represent this number exactly

>>> 697.0415795825  # this is the expected result
697.04157958250005  # this is the closest python float representation

>>> round(697.04157958254996, 10)
697.04157958259998  # error

round() gives a result that is closer to
697.0415795826
than the expected result of
697.0415795825
- it has not rounded to the closest 10th decimal digit after the decimal
point.

(python 2.6.2 has the same problem)
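
(For context - a quick way to see the exact binary doubles involved; a
sketch that works on both 2.x and 3.x:)

    for x in (697.04157958254996, 697.0415795825,
              round(697.04157958254996, 10)):
        print('%.20f' % x)

Neither the input nor the expected result is exactly representable as a
binary double, so round() must pick a nearby representable value; the
report is about which one it picks.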

--
messages: 89029
nosy: steve21
severity: normal
status: open
title: round() error
type: behavior
versions: Python 2.6, Python 3.0

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6228
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6229] Installation python on mac

2009-06-07 Thread eric

New submission from eric mul...@gmx.ch:

Hello,
I want to install Python 3.0 on my Mac. I downloaded the Python image
(.dmg file) from this site and ran the installation. Afterwards, the
framework was not installed in the folder
/System/Library/Frameworks/Python.framework.
How can I change the installation folder?

thx kostonstyle

--
messages: 89030
nosy: kostonstyle
severity: normal
status: open
title: Installation python on mac
versions: Python 3.0

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6229
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com


