multiple assignments (was: My first Python program)

2010-10-14 Thread Ethan Furman

Ian Kelly wrote:

 here is an example
where the order of assignment actually matters:

 >>> d['a'] = d = {}
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'd' is not defined
 >>> d = d['a'] = {}
 >>> d
{'a': {...}}

As you can see, they're assigned left-to-right.




Ah!  I was thinking the assignments went in a filter fashion, but now 
what I think is happening is that the first item is bound to the last, 
then the next item is bound to the last, etc, etc.


Is this correct?

~Ethan~


--
http://mail.python.org/mailman/listinfo/python-list


Re: Whining about "struct"

2010-10-14 Thread Lawrence D'Oliveiro
In message , Tim Roberts wrote:

> I have a bad memory.  I admit it.  Because of that, the Python "help"
> system is invaluable to me.

I’ve tried using that occasionally, but found it’s easier by far to have a 
Web page open with full documentation that I can read and flip through while 
simultaneously trying things in a terminal window.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: help!!!

2010-10-14 Thread Lawrence D'Oliveiro
In message , Jorge 
Biquez wrote:

> I was a teacher of Computer Sciences for some years; in my case, women
> were better at programming than men. But sure, in the IT industry the
> percentage of men is a lot higher than that of women. Why?

Did you follow up your graduates to see what kind of jobs they ended up 
doing?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: "Strong typing vs. strong testing" [OT]

2010-10-14 Thread Antoon Pardon
On Wed, Oct 13, 2010 at 07:31:59PM +, Steven D'Aprano wrote:
> On Wed, 13 Oct 2010 16:17:19 +0200, Antoon Pardon wrote:
> 
> > On Wed, Oct 13, 2010 at 01:20:30PM +, Steven D'Aprano wrote:
> >> On Tue, 12 Oct 2010 22:13:26 -0700, RG wrote:
> >> 
> >> >> The formula: circumference = 2 x pi x radius is taught in primary
> >> >> schools, yet it's actually a very difficult formula to prove!
> >> > 
> >> > What's to prove?  That's the definition of pi.
> >> 
> >> Incorrect -- it's not necessarily so that the ratio of the
> >> circumference to the radius of a circle is always the same number. It
> >> could have turned out that different circles had different ratios.
> > 
> > If that is your concern, you should have reacted to the previous poster
> > since in that case his equation couldn't be proven either.
> 
> "Very difficult to prove" != "cannot be proven".

You're missing the point. You started talking about non-Euclidean geometries
as an argument against the notion that pi was defined as the ratio of
the circumference to the diameter. But in non-Euclidean geometries
the equation doesn't hold. So either you think non-Euclidean geometries
matter, in which case you should have questioned the equation, or
you accept that the context was Euclidean geometry, in which case
non-Euclidean considerations don't matter.

> > Since by not reacting to the previous poster, you implicitely accepted
> > the equation and thus the context in which it is true: euclidean
> > geometry. So I don't think that concerns that fall outside this context
> > have any relevance.
> 
> You've missed the point that, 4000 years later it is easy to take pi for 
> granted, but how did anyone know that it was special? After all, there is 
> a very similar number 3.1516... but we haven't got a name for it and 
> there's no formulae using it. Nor do we have a name for the ratio of the 
> radius of a circle to the proportion of the plane that is uncovered when 
> you tile it with circles of that radius, because that ratio isn't (as far 
> as I know) constant.

You're confusing the concept with its specific numerical value. It's not
uncommon in mathematics to give a name to a number that is defined in
a specific way, without knowing its numerical value.

> Perhaps this will help illustrate what I'm talking about... the 
> mathematician Mitchell Feigenbaum discovered in 1975 that, for a large 
> class of chaotic systems, the ratio of each bifurcation interval to the 
> next approached a constant:
> 
> δ = 4.66920160910299067185320382...
> 
> Every chaotic system (of a certain kind) will bifurcate at the same rate. 
> This constant has been described as being as fundamental to mathematics 
> as pi or e. Feigenbaum didn't just *define* this constant, he discovered 
> it by *proving* that the ratio of bifurcation intervals was constant. 
> Nobody had any idea that this was the case until he did so.

So? That the ratio of the circumference to the diameter of a circle is
constant was proven long before people had the tools to calculate
that ratio to very high precision. They did that by noting that the
ratios of the circumference of a regular polygon to the diameters of its
inscribed and circumscribed circles were constants and converged to each
other as the number of sides increased.

So there is no problem defining pi as the ratio between the circumference
and the diameter of a circle even if one has only very crude approximations
to the numerical value of that ratio.

-- 
Antoon Pardon
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help needed - To get path of a directory

2010-10-14 Thread Tim Golden

On 13/10/10 15:26, Bishwarup Banerjee wrote:

I want to get the absolute path of the Directory I pass explicitly. Like
functionName("\abcd").


On 13/10/2010 05:44, Kingsley Turner wrote:

One way to achieve this is to fetch a recursive directory list for all
drives, and then search for your directory_name.endswith("abcd") in each
list. But what would you do with multiple occurrences ?


[... snip useful code ...]


I don't know how to enumerate all your windows device letters.
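
For that part, a simple pure-stdlib sketch (it assumes only the letters
A-Z need checking):

    import os, string

    # a drive letter counts as present if its root directory exists
    drives = [d + ':\\' for d in string.ascii_uppercase
              if os.path.exists(d + ':\\')]
    print drives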


http://timgolden.me.uk/python/win32_how_do_i/find-drive-types.html

TJG
--
http://mail.python.org/mailman/listinfo/python-list


Re: Whining about "struct"

2010-10-14 Thread Tim Golden

On 14/10/2010 05:30, Tim Roberts wrote:

I have a bad memory.  I admit it.  Because of that, the Python "help"
system is invaluable to me.  Up through Python 2.5, I could get a quick
reference to the format specifiers for the struct module via
   import struct; help(struct)

I used that a LOT.

But in Python 2.6, the struct module moved from Python code to C code, and
that helpful help string was removed.

Is that still gone in Python 3.1?  What are the chances of reinstating that
helpful chart?


It's back again in 2.7 & 3.1. (I haven't bothered to track the code through
subversion; I just tried all the versions I have :) )

TJG
--
http://mail.python.org/mailman/listinfo/python-list


Re: Using csv.DictReader with \r\n in the middle of fields

2010-10-14 Thread pstatham
On Oct 13, 4:01 pm, Neil Cerutti  wrote:
> On 2010-10-13, pstatham  wrote:
>
> > Hopefully this will interest some, I have a csv file (can be
> > downloaded fromhttp://www.paulstathamphotography.co.uk/45.txt) which
> > has five fields separated by ~ delimiters. To read this I've been
> > using a csv.DictReader which works in 99% of the cases. Occasionally
> > however the description field has errant \r\n characters in the middle
> > of the record. This causes the reader to assume it's a new record and
> > try to read it.
>
> Here's an alternative idea. Working with csv module for this job
> is too difficult for me. ;)
>
> import re
>
> record_re = 
> "(?P.*?)~(?P.*?)~(?P.*?)~(?P.*?)~(?P.*?)\n(.*)"
>
> def parse_file(fname):
>     with open(fname) as f:
>         data = f.read()
>         m = re.match(record_re, data, flags=re.M | re.S)
>         while m:
>             yield m.groupdict()
>             m = re.match(record_re, m.group(6), flags=re.M | re.S)
>
> for record in parse_file('45.txt'):
>     print(record)
>
> --
> Neil Cerutti

Thanks guys, I can't alter the source data.

I wouldn't have considered regex, but it's a good idea as I can then
define my own record structure instead of the reader dictating to me what
a record is.
-- 
http://mail.python.org/mailman/listinfo/python-list


Get alternative char name with unicodedata.name() if no formal one defined

2010-10-14 Thread Dirk Wallenstein
Hi,
I'd like to get control char names for the first 32 codepoints, but they
apparently only have an alias and no official name. Is there a way to
get the alternative character name (alias) in Python?

-- 
Greetings,
Dirk
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multiple assignments

2010-10-14 Thread Ben Finney
Ethan Furman  writes:

> Ah!  I was thinking the assignments went in a filter fashion, but now
> what I think is happening is that the first item is bound to the last,
> then the next item is bound to the last, etc, etc.
>
> Is this correct?

Assignment always works in the same direction: the rightmost expression
supplies the object, and every reference on the left of an assignment
operator (the ‘=’ operator) gets bound to that same object.
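
For instance (a minimal interactive illustration, using nothing beyond a
stock interpreter):

    >>> a = b = c = []          # one list object, three names bound to it
    >>> a is b is c
    True
    >>> b.append(1)
    >>> a, c
    ([1], [1])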

-- 
 \   “There's no excuse to be bored. Sad, yes. Angry, yes. |
  `\Depressed, yes. Crazy, yes. But there's no excuse for boredom, |
_o__)  ever.” —Viggo Mortensen |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: "Strong typing vs. strong testing" [OT]

2010-10-14 Thread Gregory Ewing

Steven D'Aprano wrote:
under Euclidean 
geometry, there was a time when people didn't know whether or not the 
ratio of circumference to radius was or wasn't a constant, and proving 
that it is a constant is non-trivial.


I'm not sure that the construction you mentioned proves that
either, because it relies on the same assumptions about scaling
of polygons that one makes about circles in Euclidean geometry.

Seems to me the significance of it is not that it proves
anything about the constness of pi, but that it provides a way
of *calculating* pi to any desired accuracy. Before that,
people had to rely on measurements of physical circles to
come up with estimates for the value of pi.

--
Greg
--
http://mail.python.org/mailman/listinfo/python-list


processing input from multiple files

2010-10-14 Thread Christopher Steele
Hi

I've been trying to decode a series of observations from multiple files
(each file is a different time) and put each type of observation into their
own separate file. The script runs successfully for one file but whenever I
try it for more they just overwrite each other. I'm new to python and I'm
not sure how to go about efficiently running through the process once and
then appending to the output file for all other input files. Has anyone done
something similar to this before?



If it helps, I'll also attach a sample of one of the input files


#!/usr/bin/python

import sys
import os
import re
import fileinput

#load in file list
#obs = os.system('ls s[i,m,n]uk[0,2,4][1,2,3]d_??00P.DATA')
obs = ['siuk21d_0300P.DATA', 'siuk21d_0900P.DATA']
print obs
#code for file type "datalist"
#fname = "datalist_201081813.txt"


#output files
foutname1 = 'prestest.txt'
foutname2 = 'temptest.txt'
foutname3 = 'tempdtest.txt'
foutname4 = 'wspeedtest.txt'
foutname5 = 'winddtest.txt'


#prepare times
time=[]
year="2009"
month="09"
day="18"
hour=[]

#outputs
pres_out = ''
temp_out = ''
dtemp_out = ''
dir_out = ''
speed_out = ''
x =''


#load in station file with lat/lons
file2 = open("uk_stations.txt","r")
stations = file2.readlines()
ids=[]
names=[]
lats=[]
lons=[]
for item in stations:
item_list = item.strip().split(',')
ids.append(item_list[0])
names.append(item_list[1])
lats.append(item_list[2])
lons.append(item_list[3])

#create loop over file list
time= [item.split('_')[1].split('.')[0] for item in obs]
print time
for x in time:
hour= x[:2]
print hour
newtime = year+month+day+'_'+hour+'00'
print newtime
for file  in fileinput.input(obs):
data=file[:file.find(' 333 ')]
#data=st[split:]
print data
elements=data.split(' ')
print elements
station_id = elements[0]
try:
index = ids.index(station_id)
lat = lats[index]
lon = lons[index]
message_type = 'ADPSFC'
except:
print 'Station ID',station_id,'not in list!'
lat = lon = 'NaN'
message_type = 'Bad_station_id'
try:
temp = [item for item in elements if item.startswith('1')][0]
temperature = float(temp[2:])/10
sign = temp[1]
if sign == 1:
   temperature=-temperature
except:
temperature='NaN'

try:
dtemp = [item for item in elements if item.startswith('2')][0]
dtemperature = float(dtemp[2:])/10
sign = dtemp[1]
if sign == 1:
dtemperature=-dtemperature
except:
detemperature='NaN'
try:
press = [item for item in elements[2:] if item.startswith('4')][0]
if press[1]=='9':
pressure = float(press[1:])/10
else:
pressure = float(press[1:])/10+1000
except:
pressure = 'NaN'

try:
wind = elements[elements.index(temp)-1]
direction = float(wind[1:3])*10
speed = float(wind[3:])*0.51444
except:
direction=speed='NaN'



newline = message_type+c+str(station_id)+c+newtime+c+lat+c+lon+c+c+"-"+c+ "002" +c+"-"+c+"-"+c+str(pressure)+c
pres_out+=newline+'\n'


newline2 = message_type+c+str(station_id)+c+newtime+c+lat+c+lon+c+c+"-"+c+ "011" +c+"-"+c+"-"+c+str(temperature)+c
print newline2
temp_out+=newline2+'\n'
fout = open(foutname2,'w')
fout.writelines(temp_out)
fout.close()




newline3 = message_type+c+str(station_id)+c+newtime+c+lat+c+lon+c+c+"-"+c+ "017" +c+"-"+c+"-"+c+str(dtemperature)+c
print newline3
dtemp_out+=newline3+'\n'
fout = open(foutname3,'w')
fout.writelines(dtemp_out)
fout.close()


newline4 = message_type+c+str(station_id)+c+newtime+c+lat+c+lon+c+c+"-"+c+ "031" +c+"-"+c+"-"+c+str(direction)+c
print newline4
dir_out+=newline4+'\n'
fout = open(foutname4,'w')
fout.writelines(dir_out)
fout.close()


newline5 = message_type+c+str(station_id)+c+newtime+c+lat+c+lon+c+c+"-"+c+ "032"+c+"-"+c+"-"+c+str(speed)+c
print newline5
speed_out+=newline5+'\n'


fout = open(foutname1,'w')
fout.writelines(pres_out)
fout.close()
fout = open(foutname2,'w')
fout.writelines(temp_out)
fout.close()
fout = open(foutname3,'w')
fout.writelines(dtemp_out)
fout.close()
fout = open(foutname4,'w')
fout.writelines(dir_out)
fout.close()
fout = open(foutname5,'w')
fout.writelines(speed_out)
fout.close()










cheers

Chris


siuk21d_0300P.DATA
Description: Binary data
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Whining about "struct"

2010-10-14 Thread Brendan Simon (eTRIX)
 On 14/10/10 5:17 PM, python-list-requ...@python.org wrote:
> Subject:
> Whining about "struct"
> From:
> Tim Roberts 
> Date:
> Wed, 13 Oct 2010 21:30:38 -0700
>
> To:
> python-list@python.org
>
>
> I have a bad memory.  I admit it.  Because of that, the Python "help"
> system is invaluable to me.  Up through Python 2.5, I could get a quick
> reference to the format specifiers for the struct module via
>   import struct; help(struct)
>
> I used that a LOT.
>
> But in Python 2.6, the struct module moved from Python code to C code, and
> that helpful help string was removed.
>
> Is that still gone in Python 3.1?  What are the chances of reinstating that
> helpful chart?

It works ok for me on Mac OS X for both Python 2.6.5 (python.org) and
Python 2.6.1 (Apple) installations.

-- 
http://mail.python.org/mailman/listinfo/python-list


ANN: automated daily snapshot builds for PyQt and friend on openSUSE build service

2010-10-14 Thread Hans-Peter Jansen
[Sorry for cross posting]

Hi PyQtnistas,

I proudly announce the availability of automated builds of the most 
current PyQt and related packages including snapshots on openSUSEs 
build service for openSUSE 11.1, 11.2 and 11.3, here:

https://build.opensuse.org/project/monitor?project=home%3Afrispete%3APyQt
https://build.opensuse.org/project/monitor?project=home%3Afrispete%3APyQt-next

New sip4, PyQt3 and PyQt4 snapshots and releases get built against a 
range of gcc and Qt versions automatically, i.e. without human 
intervention (if all goes well, famous last words..). dip and 
PyQtMobility will probably follow soon.

If you add both

home:frispete:PyQt
home:frispete:PyQt-next 

to your list of repos, then you get the current snapshot builds of 
qscintilla, sip4, PyQt3 and PyQt4, with dependent packages like 
PyQwt5, PyKDE3 and PyKDE4. By omitting or deactivating the latter, you can 
switch back to the current released versions with:

zypper dup -r home_frispete_PyQt

BTW, home:frispete:PyQt contains the builds of the current versions of a 
lot of our favorite stuff: e.g. eric4, PyQwt5. eric is lacking the 
newest release, but I didn't manage to automate the sourceforge 
download process, yet.

How to choose your target?

Depending on which other repos you're using, choose your target 
accordingly, e.g. if you have the KDE:Distro:Stable (KDE 4.4) repo 
included, use KDE_Distro_Stable_openSUSE_11.x, 
KDE_Distro_Factory_openSUSE_11.x for KDE:Distro:Factory (KDE 4.5), or 
none of them, then use plain openSUSE_11.x. Note that you implicitly 
choose your system's Qt4 version with this decision. I hope this 
fulfills the most common needs. 

PyKDE4 is only provided for KDE_Distro_Stable_x ATM, since I didn't get 
around to splitting this package into a 4.4 and 4.5 version.

All in all, these repos provide the cheapest way of keeping current 
with the PyQt project that I know of. 

Comments welcome.

Enjoy,
Pete
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 249 (database api) -- executemany() with iterable?

2010-10-14 Thread M.-A. Lemburg
Terry Reedy wrote:
> On 10/12/2010 11:10 AM, Roy Smith wrote:
>> PEP 249 says about executemany():
>>
>>  Prepare a database operation (query or command) and then
>>  execute it against all parameter sequences or mappings
>>  found in the sequence seq_of_parameters.
>>
>> are there any plans to update the api to allow an iterable instead of
>> a sequence?
> 
> That question would best be addressed to the pep author
> Marc-André Lemburg 

Questions about the DB-API should be discussed on the Python DB-SIG
list (put on CC):

http://mail.python.org/mailman/listinfo/db-sig

Regarding your question:

At the time the PEP was written, Python did not have iterables.

However, even with iterables, please keep in mind that pushing
the data row-per-row over a network does not result in good
performance, so using an iterable will make your update slower.

cursor.executemany() is meant to allow the database module
to optimize sending bulk data to the database and ideally,
it will send the whole sequence to the database in one go.

If you want to efficiently run an update with millions of
entries based on an iterable, it is better to use an intermediate
loop which builds sequences of say 1000 rows and then processes
those with a cursor.executemany() call.

You will likely also do this in multiple transactions to
prevent the database from creating a multi-GB transaction log
for the upload.
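
A rough sketch of that batching pattern (the table, column and variable
names here are made up, and the paramstyle depends on the module used):

    from itertools import islice

    def batches(iterable, size=1000):
        # yield lists of up to `size` parameter rows from any iterable
        it = iter(iterable)
        while True:
            chunk = list(islice(it, size))
            if not chunk:
                break
            yield chunk

    for chunk in batches(rows, 1000):      # `rows` is any iterable of tuples
        cursor.executemany("INSERT INTO t (a, b) VALUES (?, ?)", chunk)
        connection.commit()                # keep each transaction small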

Another aspect to keep in mind is error reporting. When sending
bulk data to a database, some databases only report "error"
for the whole data block, so finding the problem can be
troublesome. For that reason, using smaller blocks is better
even when having the data available as real sequence.

Hope that helps,
-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Oct 14 2010)
>>> Python/Zope Consulting and Support ...http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...http://python.egenix.com/


::: Try our new mxODBC.Connect Python Database Interface for free ! 


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
   Registered at Amtsgericht Duesseldorf: HRB 46611
   http://www.egenix.com/company/contact/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: "Strong typing vs. strong testing" [OT]

2010-10-14 Thread Arnaud Delobelle
Steven D'Aprano  writes:

> On Wed, 13 Oct 2010 21:52:54 +0100, Arnaud Delobelle wrote:
>> 
>> Given two circles with radii r1 and r2, circumferences C1 and C2, one is
>> obviously the scaled-up version of the other, therefore the ratio of
>> their circumferences is equal to the ratio of their radii:
>
> That's exactly the sort of thing Peter Nilsson was talking about when he 
> said "Most attempts by students collapse because they assume the formula 
> in advance". It might be "obvious" to you that the two circles are merely 
> scaled up versions of each other, but that is equivalent to assuming that 
> the ratio of the circumference to radius is a constant. Well, yes, it is 
> (at least under Euclidean geometry), but assuming it is a constant 
> doesn't allow you to prove it is a constant -- that's circular reasoning, 
> if you excuse the pun.

There is no circular reasoning.  Read on to find out why.

A circle is, by definition, the locus of points equidistant from a given
point (called its centre), and this constant distance is what we call
its radius.

Let's have two circles with the same centre and radii r1 and r2.  Let's
scale up (from the centre) the first one by a factor r2/r1.  Because all
the points on the first circle are r1 units of length away from the centre,
all the points on the scaled up version are r1*r2/r1 = r2 units of
length from the centre.  So the scaled up version of the first circle
*is* the second circle.

I'll let you solve the case when the centres are distinct.

-- 
Arnaud

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: send command to parent shell

2010-10-14 Thread Diez B. Roggisch
Martin Landa  writes:

> Hi,
>
> is there a way how to send command from python script to the shell
> (known id) from which the python script has been called? More
> precisely, the goal is to exit running bash (on Linux) or cmd (on
> Windows) directly from wxPython application, currently user needs to
> quit wxPython application and then underlaying command prompt by
> 'exit' command.

Why is it started from the shell then in the first place? And if I did
it, I beg you to *not* close it... if I wanted that, I would have used 

 exec program

Diez
-- 
http://mail.python.org/mailman/listinfo/python-list


Boolean value of generators

2010-10-14 Thread Tony
I have been using generators for the first time and wanted to check for
an empty result.  Naively I assumed that generators would give
appropriate boolean values.  For example

def xx():
  l = []
  for x in l:
yield x

y = xx()
bool(y)


I expected the last line to return False but it actually returns True.
Is there any way I can enhance my generator or iterator to have the
desired effect?

Regards

Tony Middleton.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: socket problem and html problem

2010-10-14 Thread Stefan Behnel

bussiere bussiere, 11.10.2010 08:30:

here is my code and two questions :
why it says to me that i can't bind the socket ?
normally it had closed it and kill it :/
and why it returns me plain text and not html ?


I think the reason why no-one answered yet is that it's not immediately 
clear what you are talking about. You posted a bunch of source code with 
basically no introduction to it, and the tiny bit of textual context that 
you provide is not easy to understand either. Try to make it easier for 
others to help you.


Give this a read:

http://www.catb.org/esr/faqs/smart-questions.html

Stefan

--
http://mail.python.org/mailman/listinfo/python-list


Re: Performance evaluation of HTTPS library

2010-10-14 Thread Ashish
On Oct 13, 6:12 pm, Antoine Pitrou  wrote:
> On Wed, 13 Oct 2010 05:27:29 -0700 (PDT)Ashish  wrote:
>
> > Well, CBSocket is socket implementation that calls my callback on
> > data.
> > Both my classes AsyncHTTPSConnection and AsyncHTTPConnection use it
> > and use it the same way ( self.sock = CBSocket(sock2) ).
> > The implemetation of AsyncHTTPConnection differs from
> > AsyncHTTPSConnection only in connect method: sock2 =
> > ssl.wrap_socket(sock, self.key_file, self.cert_file)
>
> > class CBSocket(asynchat.async_chat):
>
> [...]
>
> Ok, this won't work as expected. The first issue is that
> ssl.wrap_socket() is a blocking operation, where your client will send
> data and wait for the server reply (it's the SSL's handshake),
> *before* the socket has been set in non-blocking mode by asyncore. It
> means that your client will remain idle a lot of time, and explains
> that neither the client nor the server reach 100% CPU utilization.
>
> The second issue is that combining SSL and asyncore is more complicated
> than that; there are various situations to consider which your code
> doesn't address. The stdlib right now doesn't provide SSL support for
> asyncore (seehttp://bugs.python.org/issue10084), so you would have to
> do it yourself. I don't think it's worth the trouble, and would
> recommend switching your client to a simple thread-based approach,
> where you handle each HTTP(S) connection in a separate thread and stick
> to blocking I/O.
>
> Regards
>
> Antoine.

I am impressed by the knowledge and also thankful to you for helping
me out.

I thought threads would be costly to use, and if I go for, say, 200
parallel connections with 200 total threads (+ a few more I have in my
tool), it may not be efficient either. Let me try to change the
implementation to use threads + blocking I/O and get back with
results.
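
Roughly the shape I have in mind -- a minimal sketch, with a made-up
host, path and thread count:

    import httplib, threading

    def fetch(host, path):
        # plain blocking I/O; each connection lives in its own thread
        conn = httplib.HTTPSConnection(host)
        conn.request("GET", path)
        conn.getresponse().read()
        conn.close()

    threads = [threading.Thread(target=fetch, args=("example.com", "/"))
               for _ in range(200)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()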

One more question: If I run the tool from multicore machine, will
python3.1 or 3.2 be able to actually use multicore? or it will be
running only on one core?

Thanks
Ashish.
-- 
http://mail.python.org/mailman/listinfo/python-list


ANN: A new version (0.2.5) of the Python module which wraps GnuPG has been released.

2010-10-14 Thread Vinay Sajip
A new version of the Python module which wraps GnuPG has been
released.

What Changed?
=
This is a minor enhancement and bug-fix release. See the project
website ( http://code.google.com/p/python-gnupg/ ) for more
information. Summary:

Detached signatures can now be created and verified.
There's slightly better support for RSA and IDEA.
Some bugs which surfaced when encrypting non-ASCII data have been
fixed.

The current version passes all tests on Windows (Python 2.4, 2.5, 2.6,
3.1, Jython 2.5.1) and Ubuntu (Python 2.4, 2.5, 2.6, 2.7, 3.0, 3.1,
3.2, Jython 2.5.1).

What Does It Do?

The gnupg module allows Python programs to make use of the
functionality provided by the Gnu Privacy Guard (abbreviated GPG or
GnuPG). Using this module, Python programs can encrypt and decrypt
data, digitally sign documents and verify digital signatures, manage
(generate, list and delete) encryption keys, using proven Public Key
Infrastructure (PKI) encryption technology based on OpenPGP.

This module is expected to be used with Python versions >= 2.4, as it
makes use of the subprocess module which appeared in that version of
Python. This module is a newer version derived from earlier work by
Andrew Kuchling, Richard Jones and Steve Traugott.

A test suite using unittest is included with the source distribution.

Simple usage:

>>> import gnupg
>>> gpg = gnupg.GPG(gnupghome='/path/to/keyring/directory')
>>> gpg.list_keys()
[{
  ...
  'fingerprint': 'F819EE7705497D73E3CCEE65197D5DAC68F1AAB2',
  'keyid': '197D5DAC68F1AAB2',
  'length': '1024',
  'type': 'pub',
  'uids': ['', 'Gary Gross (A test user) ']},
 {
  ...
  'fingerprint': '37F24DD4B918CC264D4F31D60C5FEFA7A921FC4A',
  'keyid': '0C5FEFA7A921FC4A',
  'length': '1024',
  ...
  'uids': ['', 'Danny Davis (A test user) ']}]
>>> encrypted = gpg.encrypt("Hello, world!", ['0C5FEFA7A921FC4A'])
>>> str(encrypted)
'-----BEGIN PGP MESSAGE-----\nVersion: GnuPG v1.4.9 (GNU/Linux)\n
\nhQIOA/6NHMDTXUwcEAf
...
-----END PGP MESSAGE-----\n'
>>> decrypted = gpg.decrypt(str(encrypted), passphrase='secret')
>>> str(decrypted)
'Hello, world!'
>>> signed = gpg.sign("Goodbye, world!", passphrase='secret')
>>> verified = gpg.verify(str(signed))
>>> print "Verified" if verified else "Not verified"
'Verified'

For more information, visit http://code.google.com/p/python-gnupg/ -
as always, your feedback is most welcome (especially bug reports,
patches and suggestions for improvement). Enjoy!

Cheers

Vinay Sajip
Red Dove Consultants Ltd.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 249 (database api) -- executemany() with iterable?

2010-10-14 Thread Martin Gregorie
On Thu, 14 Oct 2010 16:36:34 +1300, Lawrence D'Oliveiro wrote:

> In message <4cb5e659$0$1650$742ec...@news.sonic.net>, John Nagle wrote:
> 
>>  Also note that there are some issues with doing a huge volume of
>> updates in one MySQL InnoDB transaction.  The system has to keep the
>> data needed to undo the updates, and there's a limit on the amount of
>> pending transaction history that can be stored.
> 
> How does “load data” avoid this? Is that not a transaction too?
>
Not usually. It's faster because there's no journalling overhead. The 
loader takes out an exclusive table lock, dumps the data into the table, 
rebuilds indexes and releases the lock. I can't comment about MySQL 
(don't use it) but this has been the case on the RDBMS databases I have 
used.
 
> Seems to me this isn’t going to help, since both old and new tables are
> on the same disk, after all. And it’s the disk access that’s the
> bottleneck.
>
There's a lot of overhead in journalling - much more than in applying 
changes to a table. The before and after images *must* be flushed to disk 
on commit. In UNIX terms fsync() must be called on the journal file(s) 
and this is an expensive operation on all OSes because committing a 
series of small transactions can cause the same disk block to be written 
several times. However, the table pages can safely be left in the DBMS 
cache and flushed as part of normal cache operation since, after a crash, 
the table changes can always be recovered from a journal roll-forward. A 
good DBMS will do that automatically when its restarted.


-- 
martin@   | Martin Gregorie
gregorie. | Essex, UK
org   |
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Performance evaluation of HTTPS library

2010-10-14 Thread Antoine Pitrou
On Thu, 14 Oct 2010 05:06:30 -0700 (PDT)
Ashish  wrote:
> 
> One more question: If I run the tool from multicore machine, will
> python3.1 or 3.2 be able to actually use multicore? or it will be
> running only on one core?

Only partly. Pure Python code is serialized (by the Global Interpreter
Lock), but some internal C code, such as SSL and socket routines, can
run in parallel with other code.

Regards

Antoine.


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Scheme as a virtual machine?

2010-10-14 Thread Pascal J. Bourguignon
namekuseijin  writes:

> On 13 out, 19:41, p...@informatimago.com (Pascal J. Bourguignon)
> wrote:
>> namekuseijin  writes:
>> > On 11 out, 08:49, Oleg  Parashchenko  wrote:
>> >> Hello,
>>
>> >> I'd like to try the idea that Scheme can be considered as a new
>> >> portable assembler. We could code something in Scheme and then compile
>> >> it to PHP or Python or Java or whatever.
>>
>> >> Any suggestions and pointers to existing and related work are welcome.
>> >> Thanks!
>>
>> >> My current approach is to take an existing Scheme implementation and
>> >> hijack into its backend. At this moment Scheme code is converted to
>> >> some representation with a minimal set of bytecodes, and it should be
>> >> quite easy to compile this representation to a target language. After
>> >> some research, the main candidates are Gambit, Chicken and CPSCM:
>>
>> >>http://uucode.com/blog/2010/09/28/r5rs-scheme-as-a-virtual-machine-i/...
>>
>> >> If there is an interest in this work, I could publish progress
>> >> reports.
>>
>> >> --
>> >> Oleg Parashchenko  o...@http://uucode.com/http://uucode.com/blog/ XML, 
>> >> TeX, Python, Mac, Chess
>>
>> > it may be assembler, too bad scheme libs are scattered around written
>> > in far too many different flavors of assembler...
>>
>> > It warms my heart though to realize that Scheme's usual small size and
>> > footprint has allowed for many quality implementations targetting many
>> > different backends, be it x86 assembly, C, javascript or .NET.  Take
>> > python and you have a slow c bytecode interpreter and a slow
>> > bytecode .NET compiler.  Take haskell and its so friggin' huge and
>> > complex that its got its very own scary monolithic gcc.  When you
>> > think of it, Scheme is the one true high-level language with many
>> > quality perfomant backends -- CL has a few scary compilers for native
>> > code, but not one to java,
>>
>> Yep, it only has two for java.
>
> I hope those are not Clojure and Qi... :p

No, they're CLforJava and ABCL.

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Boolean value of generators

2010-10-14 Thread Cameron Simpson
On 14Oct2010 10:16, Tony  wrote:
| I have been using generators for the first time and wanted to check for
| an empty result.  Naively I assumed that generators would give
| appopriate boolean values.  For example
| 
| def xx():
|   l = []
|   for x in l:
| yield x
| 
| y = xx()
| bool(y)
| 
| 
| I expected the last line to return False but it actually returns True.
| Is there anyway I can enhance my generator or iterator to have the
| desired effect?

The generator is not the same as the values it yields.
What you're doing is like this:

  >>> def f():
  ...   return False
  ... 
  >>> bool(f)
  True
  >>> bool(f())
  False

In your code, xx() returns a generator object. It is not None, nor any
kind of "false"-ish value. So bool() returns True.

The generator hasn't even _run_ at that point, so nobody has any idea if
iterating over it will return an empty sequence.

What you want is something like this:

  values = list(xx())
  bool(values)

or more clearly:

  gen = xx()
  values = list(gen)
  bool(values)

You can see here that you actually have to iterate over the generator
before you know if it is (will be) empty.

Try this program:

  def lines():
print "opening foo"
for line in open("foo"):
  yield line
print "closing foo"

  print "get generator"
  L = lines()
  print "iterate"
  text = list(L)
  print "done"

For me it does this:

  get generator
  iterate
  opening foo
  closing foo
  done

You can see there that the generator _body_ doesn't even run until you
start the iteration.

Does this clarify things for you?

Cheers,
-- 
Cameron Simpson  DoD#743
http://www.cskk.ezoshosting.com/cs/

Winter is gods' way of telling us to polish.
- Peter Harper  
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: My first Python program

2010-10-14 Thread Hallvard B Furuseth
Seebs writes:

>> You can't really rely on the destructor __del__ being called.
>
> Interesting.  Do I just rely on files getting closed?

Sometimes, but that's not it.  Think Lisp, not C++.  __del__ is not that
useful. Python is garbage-collected and variables have dynamic lifetime,
so the class cannot expect __del__ to be called in a timely manner.
Destructors have several issues, see __del__ in the Python reference.

A class which holds an OS resource like a file, should provide a context
manager and/or a release function, the latter usually called in a
'finally:' block.  When the caller doesn't bother with either, the class
often might as well depend on the destructor in 'file'.
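
A minimal sketch of that pattern (the Resource class and the file name
are made up):

    class Resource(object):
        def __init__(self, path):
            self.f = open(path)
        def close(self):
            self.f.close()
        # context-manager protocol: "with Resource(path) as r: ..."
        def __enter__(self):
            return self
        def __exit__(self, exc_type, exc_val, exc_tb):
            self.close()

    r = Resource('somefile')
    try:
        data = r.f.read()
    finally:
        r.close()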

Still, open().read() is common.  open().write() is not.  The C
implementation of Python is reference-counted on top of GC, so the file
is closed immediately.  But this way, exceptions from close() are lost.
Python cannot propagate them up the possibly-unrelated call chain.


Some other points:

For long strings, another option is triple-quoting as you've seen in doc
strings: print """foo
bar""".


class SourceFile(object):
    def emit(self, template, func = None):
        # hey, at least it's not a global variable, amirite?
        self.file.write(SourceFile.copyright)

def main():
    SourceFile.copyright = copyright_file.read()

emit() can use self.copyright instead of SourceFile.copyright.

I've written such code, but I suppose the proper way is to use a
classmethod to set it, so you can see in the class how the copyright
gets there.  SourceFile.<classmethod>() and self.<classmethod>() both
get called with the class as 1st argument.

class SourceFile(object):
    def setup_copyright(cls, fname):
        cls.copyright = open(fname).read()
    setup_copyright = classmethod(setup_copyright)
    # In python >= 2.4 you can instead say @classmethod above the def.

def main():
    SourceFile.setup_copyright('guts/COPYRIGHT')


SourceFile.__repr__() looks like it should be a __str__().  I haven't
looked at how you use it though.  But __repr__ is supposed to
look like a Python expression to create the instance: repr([2]) = '[2]',
or a generic '<...>' form: repr(id) = '<built-in function id>'.
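
For instance, with a made-up class:

    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = x, y
        def __repr__(self):
            # reads like the expression that would recreate the instance
            return 'Point(%r, %r)' % (self.x, self.y)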


"How new are list comprehensions?"

Python 2.0, found as follows:
- Google python list comprehensions.
- Check the PEP (Python Enhancement Proposal) which shows up.  PEPs
  are the formal documents for info to the community, for the Python
  development process, etc.  PEP 202 says:
Title:  List Comprehensions
Status: Final
Type:   Standards Track
Python-Version: 2.0


-- 
Hallvard
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Boolean value of generators

2010-10-14 Thread Peter Otten
Tony wrote:

> I have been using generators for the first time and wanted to check for
> an empty result.  Naively I assumed that generators would give
> appopriate boolean values.  For example
> 
> def xx():
>   l = []
>   for x in l:
> yield x
> 
> y = xx()
> bool(y)
> 
> 
> I expected the last line to return False but it actually returns True.
> Is there anyway I can enhance my generator or iterator to have the
> desired effect?

* What would you expect 

def f():
if random.randrange(2):
yield 42

print bool(f())

to print? Schrödinger's Cat?

* You can wrap your generator into an object that reads one item in advance. 
A slightly overengineered example:

http://code.activestate.com/recipes/577361-peek-ahead-an-iterator/

* I would recommend that you avoid the above approach. Pythonic solutions 
favour EAFP (http://docs.python.org/glossary.html#term-eafp) over look-
before-you-leap:

try:
value = next(y)
except StopIteration:
print "ran out of values"
else:
do_something_with(value)

or

value = next(y, default)

Peter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: send command to parent shell

2010-10-14 Thread Giampaolo Rodolà
On Wed, 13 Oct 2010 06:30:15 -0700, Martin Landa wrote:
> is there a way how to send command from python script to the shell
> (known id) from which the python script has been called?

By using psutil (http://code.google.com/p/psutil/):

giampa...@ubuntu:~$ python
Python 2.6.6 (r266:84292, Sep 15 2010, 16:22:56)
[GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import psutil, os
>>> me = psutil.Process(os.getpid())
>>> me.name
'python'
>>> parent = me.parent
>>> parent.name
'bash'
>>> parent.kill()


Regards,

--- Giampaolo
http://code.google.com/p/pyftpdlib/
http://code.google.com/p/psutil/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: send command to parent shell

2010-10-14 Thread Giampaolo Rodolà
Sorry I realize now that you wrote "how to send a command to the
shell" and not "how to kill the shell".
In this case I don't know exactly what you mean.



Regards,

--- Giampaolo
http://code.google.com/p/pyftpdlib/
http://code.google.com/p/psutil/

2010/10/14 Giampaolo Rodolà :
> On Wed, 13 Oct 2010 06:30:15 -0700, Martin Landa wrote:
>> is there a way how to send command from python script to the shell
>> (known id) from which the python script has been called?
>
> By using psutil (http://code.google.com/p/psutil/):
>
> giampa...@ubuntu:~$ python
> Python 2.6.6 (r266:84292, Sep 15 2010, 16:22:56)
> [GCC 4.4.5] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
 import psutil, os
 me = psutil.Process(os.getpid())
 me.name
> 'python'
 parent = me.parent
 parent.name
> 'bash'
 parent.kill()
>
>
> Regards,
>
> --- Giampaolo
> http://code.google.com/p/pyftpdlib/
> http://code.google.com/p/psutil/
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: processing input from multiple files

2010-10-14 Thread Christopher Steele
The issue is that I need to be able to both split the names of the files, so
that I can extract the relevant times, and open each individual file and
process each line individually. Once I have achieved this I need to append
the sorted files onto one another in one long file so that I can pass them
into a verification package. I've tried changing the name to textline and I
get the same result - the sorted files overwrite one another.
The data are actually meteorological observations and I need to manipulate
them in order to test the performance of a model. The 333 denotes that cloud
observations are going to follow - something that is not always reported at
stations.

I hope this has helped

Chris


On Thu, Oct 14, 2010 at 3:16 PM, John Posner  wrote:

> On 10/14/2010 6:08 AM, Christopher Steele wrote:
>
>> Hi
>>
>> I've been trying to decode a series of observations from multiple files
>> (each file is a different time) and put each type of observation into
>> their own separate file. The script runs successfully for one file but
>> whenever I try it for more they just overwrite each other.
>>
>
> fileinput.input() iterates over *lines* not entire *files*. So take a look
> at this location in the code:
>
>
>  for file  in fileinput.input(obs):
>  data=file[:file.find(' 333 ')]
>
> Did you mean your iteration variable to be "file", implying that it will
> hold an entire file of input data?
>
> If you meant the iteration variable to be named "textline" instead of
> "file", is it guaranteed that string '  333  ' will occur in every such text
> line?
>
>
> -John
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: processing input from multiple files

2010-10-14 Thread John Posner

On 10/14/2010 6:08 AM, Christopher Steele wrote:

Hi

I've been trying to decode a series of observations from multiple files
(each file is a different time) and put each type of observation into
their own separate file. The script runs successfully for one file but
whenever I try it for more they just overwrite each other.


fileinput.input() iterates over *lines* not entire *files*. So take a 
look at this location in the code:


  for file  in fileinput.input(obs):
  data=file[:file.find(' 333 ')]

Did you mean your iteration variable to be "file", implying that it will 
hold an entire file of input data?


If you meant the iteration variable to be named "textline" instead of 
"file", is it guaranteed that string '  333  ' will occur in every such 
text line?
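
For what it's worth, fileinput can still tell you which file each line
came from -- a small sketch, reusing your `obs` list:

   import fileinput

   for line in fileinput.input(obs):
       if fileinput.isfirstline():
           print 'now reading', fileinput.filename()
       # each iteration delivers one line of text, never a whole file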



-John
--
http://mail.python.org/mailman/listinfo/python-list


Re: Question regarding python2.5 migration from windows xp to windows 7

2010-10-14 Thread nn
On Oct 14, 2:37 am, python_tsp  wrote:
> Hi,
>
> We have a Python based test framework which is being used in various
> projects.
>
> Our current environment is
> Python (ver 2.5.1)
> wxPython (wxPython2.8-win32-ansi-2.8.6.0-py25)
> pywin32-210.win32-py2.5
> vcredist_x86.exe
> pyserial-2.2
>
> Our Framework is being currently used in Windows XP.
>
> The issue is:
> Soon our environment will be migrated from Windows XP to Windows 7.
> In this regard, I would be in need of suggestions/ideas/information
> regarding migration of our existing framework into windows 7
> environment.
>
> Do i need to migrate our framework from Python 2.5 to either Python
> 2.6 or directly to Python 3.0 ? What happens to all supporting
> packages..etc
>
> Which is the best way ?
>
> We tried the option of using our framework under the virtual XP
> context of Windows 7. Though it works for the time being, I am not
> interested in keeping that as the way of working in the future.
>
> Please help
>
> Many thanks in advance
>
> - Pramod

I don't have Windows 7, but can't you just install Python 2.5 on
Windows 7? As far as I know Windows 7 is mostly backwards compatible.
-- 
http://mail.python.org/mailman/listinfo/python-list


Excellent website for IT professionals

2010-10-14 Thread It_wise
Excellent website for IT professionals.

http://digg.com/news/technology/what_is_CISCO_CCNA_Boot_Camp

Good luck.


-- 
http://mail.python.org/mailman/listinfo/python-list


imaplib AND date format

2010-10-14 Thread harryos
In imaplib.IMAP4.search() the search string SENTON can be used
'(SENTON 22-Jun-2010)'  .
But the RFC 2060 defines search key as
SENTON   Messages whose [RFC-822] Date: header is within the
 specified date.
and in RFC822 it is given as,
date        =  1*2DIGIT month 2DIGIT        ; day month year
                                            ;  e.g. 20 Jun 82

The format 22 Jun 10 will cause a SEARCH command error:
 BAD ['Could not parse command']

Is this deliberate, or is it an anomaly?
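
As an aside, the form the server does accept can be built from a date
object -- a minimal sketch, with a made-up host and credentials (%b
assumes an English locale):

    import imaplib, datetime

    M = imaplib.IMAP4_SSL('imap.example.com')
    M.login('user', 'secret')
    M.select()
    senton = datetime.date(2010, 6, 22).strftime('%d-%b-%Y')  # '22-Jun-2010'
    typ, data = M.search(None, '(SENTON %s)' % senton)
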
regards,
harry
-- 
http://mail.python.org/mailman/listinfo/python-list


Does everyone keep getting recruiting emails from google?

2010-10-14 Thread Daniel Fetchinson
I keep getting recruiting emails from charlesngu...@google.com about
working for google as an engineer. The messages are pretty much the
same and go like this:


I am part of the Google Staffing team and was wondering if you would
be open to exploring engineering opportunities with Google. I am
impressed with your background and thought your skills could be a fit
for our team.

I am currently looking for Engineers with hybrid Unix/Linux Systems
Administrators who possess experience in coding in C/C++ or Java
and/or scripting skills (Perl, Python, or Shell).

The Google.com Engineering Team is one of the most visible and
respected teams within Google, and the most mission critical. The team
is responsible for keeping the Google site and infrastructure up and
running 24/7, 365 days/year. They are dedicated to the scalability and
availability for the performance of Google applications. In short,
they maintain, monitor, and improve all Google services. Locations
primarily concentrated in Mt. View, Dublin, Zurich, with distributed
teams in San Francisco, Santa Monica, Boston, Kirkland, Seattle, New
York, London, and Sydney.

If you are interested, please email me an updated resume. If the
timing isn’t right for you to make a move, I would love to connect on
LinkedIn, and hopefully, we can keep in touch. Any referrals would be
appreciated!
-

I'm guessing I'm not the only one on this list to get these emails and
suspect that pretty much everyone gets them. Is that the case? If yes,
what's the point of spamming a more-or-less random set of people who
although are probably interested in IT-related stuff but who can
otherwise also be a set of dogs. Aren't enough people applying without
this?

Just wondering,
Daniel


-- 
Psss, psss, put it down! - http://www.cafepress.com/putitdown
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: what happens to Popen()'s parent-side file descriptors?

2010-10-14 Thread Roger Davis
Many thanks to all who responded to my question! It's nice to know, as
someone new to Python, that there are lots of well-informed people out
there willing to help with such issues.

Thanks, Mike, for your pipes suggestion, I will keep that in mind for
future projects.

Seebs, you are of course correct that the example I quoted (`cat |
grep | whatever`) is best done internally with the re module and built-
in language features, and in fact that has already been done wherever
possible. I should have picked a better example, there are numerous
cases where I am calling external programs whose functionality is not
duplicated by Python features.

'Nobody' (clearly a misnomer!) and Chris, thanks for your excellent
explanations about garbage collection. (Chris, I believe you must have
spent more time looking at the subprocess source and writing your
response than I have spent writing my code.) GC is clearly at the
heart of my lack of understanding on this point. It sounds like, from
what Chris said, that *any* file descriptor
would be closed when GC occurs if it is no longer referenced,
subprocess-related or not. BTW, and this comment is not at all
intended for any of you who have already very generously and patiently
explained this stuff to me, it does seem like it might be a good idea
to provide documentation on some of these more important GC details
for pretty much any class, especially ones which have lots of murky OS
interaction. I have to admit that in this case it makes perfect sense
to close parent pipe descriptors there as I can't think of any reason
why you might want to keep one open after your object is no longer
referenced or your child exits.

It sounds to me that, although my code might be safe now as is, I
probably need to do an explicit p.stdXXX.close() myself for any pipes
which I open via Popen() as soon as I am done with them. Documentation
on python.org states that GC can be postponed or omitted altogether, a
possibility that Chris mentions in his comments. Other documentation
states that there is no harm in doing multiple close()es on the same
file, so I assume that neither my code nor the subprocess GC code will
break if the other does the deed first. If anybody thinks this is a
bad idea, please comment.

On a related point here, I have one case where I need to replace the
shell construct

   externalprog < somefile > otherfile

I suppose I could just use os.system() here but I'd rather keep the
Unix shell completely out of the picture (which is why I am moving
things to Python to begin with!), so I'm just doing a simple open() on
somefile and otherfile and then passing those file handles into
Popen() for stdin and stdout. I am already closing those open()ed file
handles after the child completes, but I suppose that I probably
should also explicitly close Popen's p.stdin and p.stdout, too. (I'm
guessing they might be dup()ed from the original file handles?)
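
Roughly like this -- a minimal sketch, using the same placeholder names:

   import subprocess

   fin = open('somefile', 'rb')
   fout = open('otherfile', 'wb')
   p = subprocess.Popen(['externalprog'], stdin=fin, stdout=fout)
   p.wait()
   fin.close()
   fout.close()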


Thanks again to all!

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Does everyone keep getting recruiting emails from google?

2010-10-14 Thread Grant Edwards
On 2010-10-14, Daniel Fetchinson  wrote:

> I keep getting recruiting emails from charlesngu...@google.com about
> working for google as an engineer. The messages are pretty much the
> same and go like this:

I got one a year or two back (from somebody else at google).  I
replied saying that I wasn't interested, and that was the end of it.

-- 
Grant Edwards   grant.b.edwardsYow! Oh my GOD -- the
  at   SUN just fell into YANKEE
  gmail.comSTADIUM!!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multiple assignments (was: My first Python program)

2010-10-14 Thread Ian Kelly
On Wed, Oct 13, 2010 at 3:53 PM, Ethan Furman  wrote:

> Ian Kelly wrote:
>
>>  here is an example
>> where the order of assignment actually matters:
>>
>>  >>> d['a'] = d = {}
>> Traceback (most recent call last):
>>  File "<stdin>", line 1, in <module>
>> NameError: name 'd' is not defined
>>  >>> d = d['a'] = {}
>>  >>> d
>> {'a': {...}}
>>
>> As you can see, they're assigned left-to-right.
>>
>
> 
>
> Ah!  I was thinking the assignments went in a filter fashion, but now what
> I think is happening is that the first item is bound to the last, then the
> next item is bound to the last, etc, etc.
>
> Is this correct?
>

Exactly.

Cheers,
Ian
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Eric] ANN: automated daily snapshot builds for PyQt and friend on openSUSE build service

2010-10-14 Thread Detlev Offenbach
On Donnerstag, 14. Oktober 2010, Hans-Peter Jansen wrote:
> [Sorry for cross posting]
> 
> Hi PyQtnistas,
> 
> I proudly announce the availability of automated builds of the most
> current PyQt and related packages including snapshots on openSUSEs
> build service for openSUSE 11.1, 11.2 and 11.3, here:
> 
> https://build.opensuse.org/project/monitor?project=home%3Afrispete%3APyQt
> https://build.opensuse.org/project/monitor?project=home%3Afrispete%3APyQt-n
> ext
> 
> New sip4, PyQt3 and PyQt4 snapshots and release get build against a
> range of gcc and Qt versions automatically, e.g. without human
> intervention (if all goes well, famous last words..). dip and
> PyQtMobility will probably follow soon.
> 
> If you add both
> 
>   home:frispete:PyQt
>   home:frispete:PyQt-next
> 
> to your list of repos, than you get the current snapshot builds of
> qscintilla, sip4, PyQt3 and PyQt4, with dependent packages, like
> PyQwt5, PyKDE3 and PyKDE4. Omitting or deactivating the latter, you can
> switch back to the current released versions with:
> 
>   zypper dup -r home_frispete_PyQt
> 
> BTW, home:frispete:PyQt contains the builds of the current versions of a
> lot of our favorite stuff: e.g. eric4, PyQwt5. eric is lacking the
> newest release, but I didn't manage to automate the sourceforge
> download process, yet.

Why don't you get the software via the eric repositories? See the eric web 
page for details.

Do you provide packages to be used with Python3 as well?

> 
> How to choose your target?
> 
> Depending on which other repos you're using, choose your target
> accordingly, e.g. if you have the KDE:Distro:Stable (KDE 4.4) repo
> included, use KDE_Distro_Stable_openSUSE_11.x,
> KDE_Distro_Factory_openSUSE_11.x for KDE:Distro:Factory (KDE 4.5), or
> none of them, then use plain openSUSE_11.x. Note, that you implicitely
> choose your systems Qt4 version with this decision. I hope, this
> fullfills the most common needs.
> 
> PyKDE4 is only provided for KDE_Distro_Stable_x ATM, since I didn't got
> around splitting this package into a 4.4 and 4.5 version.
> 
> All in all, these repos provide the the cheapest way of keeping current
> with the PyQt project, that I know of.
> 
> Comments welcome.
> 
> Enjoy,
> Pete
> ___
> Eric mailing list
> e...@riverbankcomputing.com
> http://www.riverbankcomputing.com/mailman/listinfo/eric


-- 
Detlev Offenbach
det...@die-offenbachs.de
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Scheme as a virtual machine?

2010-10-14 Thread namekuseijin
On 14 out, 00:26, Ertugrul Söylemez  wrote:
> BTW, you mentioned symbols ('$', '.' and '>>='), which are not syntactic
> sugar at all.  They are just normal functions, for which it makes sense
> to be infix.  The fact that you sold them as syntactic sugar or
> "perlisms" proves that you have no idea about the language, so stop
> crying.  Also Python-style significant whitespace is strictly optional.
> It's nice though.  After all most Haskell programmers prefer it.

it still makes haskell code scattered with perlisms, be it syntax or
function name... in practice, Haskell code is ridden with such
perlisms and significant whitespace, and infix function application
and more special cases.  All of these contribute to a harder-to-parse
language and to fewer compilers for it.

> > And one as complex and scary beast as gcc... that's the cost of a very
> > irregular syntax...
>
> What also proves that you have no idea is the fact that there is no
> Haskell compiler called 'gcc'.  That's the GNU C compiler.

ORLY?

do you understand what a comparison is?

> Glasgow Haskell Compiler, GHC, and it's by far not the only one.  It's
> just the one most people use, and there is such a compiler for all
> languages.

yeah, there's also some Yale Haskell compiler in some graveyard, last
time I heard...
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: what happens to Popen()'s parent-side file descriptors?

2010-10-14 Thread Chris Torek
In article <8bec27dd-b1da-4aa3-81e8-9665db040...@n40g2000vbb.googlegroups.com>
>'Nobody' (clearly a misnomer!) and Chris, thanks for your excellent
>explanations about garbage collection. (Chris, I believe you must have
>spent more time looking at the subprocess source and writing your
>response than I have spent writing my code.)

Well, I just spent a lot of time looking at the code earlier
this week as I was thinking about using it in a program that is
required to be "highly reliable" (i.e., to never lose data, even
if Things Go Wrong, like disks get full and sub-commands fail).

(Depending on shell version, "set -o pipefail" can allow
"cheating" here, i.e., with subprocess, using shell=True and
commands that have the form "a | b":

$ (exit 0) | (exit 2) | (exit 0)
$ echo $?
0
$ set -o pipefail
$ (exit 0) | (exit 2) | (exit 0)
$ echo $?
2

but -o pipefail is not POSIX and I am not sure I can count on
it.)

>GC is clearly at the heart of my lack of understanding on this
>point. It sounds like, from what Chris said, that *any* file
>descriptor would be closed when GC occurs if it is no longer
>referenced, subprocess-related or not.

Yes -- but, as noted elsethread, "delayed" failures from events
like "disk is full, can't write last bits of data" become problematic.

>It sounds to me that, although my code might be safe now as is, I
>probably need to do an explicit p.stdXXX.close() myself for any pipes
>which I open via Popen() as soon as I am done with them.

Or, use the p.communicate() function, which contains the explicit
close.  Note that if you are using a unidirectional pipe and do
your own I/O -- as in your example -- calling p.communicate()
will just do the one attempt to read from the pipe and then close
it, so you can ignore the result:

import subprocess
p = subprocess.Popen(["cat", "/etc/motd"], stdout=subprocess.PIPE)
for line in p.stdout:
    print line.rstrip()
p.communicate()

The last call returns ('', None) (note: not ('', '') as I suggested
earlier, I actually typed this one in on the command line).  Run
python with strace and you can observe the close call happen --
this is the [edited to fit] output after entering the p.communicate()
line:

read(0, "\r", 1)= 1
write(1, "\n", 1
)   = 1
rt_sigprocmask(SIG_BLOCK, [INT], [], 8) = 0
ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost ...}) = 0
ioctl(0, SNDCTL_TMR_STOP or TCSETSW, {B38400 opost ...}) = 0
ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost ...}) = 0

[I push "enter", readline echos a newline and does tty ioctl()s]

rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
rt_sigaction(SIGWINCH, {SIG_DFL}, {0xb759ed10, [], SA_RESTART}, 8) = 0
time(NULL)  = 1287075471

[no idea what these are really for, but the signal manipulation
appears to be readline()]

fstat64(3, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
_llseek(3, 0, 0xbf80d490, SEEK_CUR) = -1 ESPIPE (Illegal seek)
read(3, "", 8192)   = 0
close(3)= 0

[fd 3 is the pipe reading from "cat /etc/motd" -- no idea what the
fstat64() and _llseek() are for here, but the read() and close() are
from the communicate() function]

waitpid(13775, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0) = 13775

[this is from p.wait()]

write(1, "(\'\', None)\n", 11('', None)
)  = 11

[this is the result being printed, and the rest is presumably
readline() again]

ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost ...}) = 0
ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost ...}) = 0
rt_sigprocmask(SIG_BLOCK, [INT], [], 8) = 0
ioctl(0, TIOCGWINSZ, {ws_row=44, ws_col=80, ...}) = 0
ioctl(0, TIOCSWINSZ, {ws_row=44, ws_col=80, ...}) = 0
ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost ...}) = 0
ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost ...}) = 0
ioctl(0, SNDCTL_TMR_STOP or TCSETSW, {B38400 opost ...}) = 0
ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost ...}) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
rt_sigaction(SIGWINCH, {0xb759ed10, [], SA_RESTART}, {SIG_DFL}, 8) = 0
write(1, ">>> ", 4>>> ) = 4
select(1, [0], NULL, NULL, NULL
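
If you would rather not call p.communicate() just for its closing side
effect, the explicit-close version of the same example is short (a
sketch):

import subprocess

p = subprocess.Popen(["cat", "/etc/motd"], stdout=subprocess.PIPE)
for line in p.stdout:
    print line.rstrip()
p.stdout.close()   # explicitly release the parent-side descriptor
p.wait()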

>On a related point here, I have one case where I need to replace the
>shell construct
>
>   externalprog <somefile >otherfile
>
>I suppose I could just use os.system() here but I'd rather keep the
>Unix shell completely out of the picture (which is why I am moving
>things to Python to begin with!), so I'm just doing a simple open() on
>somefile and otherfile and then passing those file handles into
>Popen() for stdin and stdout. I am already closing those open()ed file
>handles after the child completes, but I suppose that I probably
>should also explicitly close Popen's p.stdin and p.stdout, too. (I'm
>guessing they might be dup()ed from the

Re: Boolean value of generators

2010-10-14 Thread Carl Banks
On Oct 14, 2:16 am, Tony  wrote:
> I have been using generators for the first time and wanted to check for
> an empty result.  Naively I assumed that generators would give
> appopriate boolean values.  For example
>
> def xx():
>   l = []
>   for x in l:
>     yield x
>
> y = xx()
> bool(y)
>
> I expected the last line to return False but it actually returns True.
> Is there anyway I can enhance my generator or iterator to have the
> desired effect?

In general, the only way to test if a generator is empty is to try to
consume an item.  (It's possible to write an iterator that consumes an
item and caches it to be returned on the next next(), and whose
boolean status indicates if there's an item left.  I would guess the
recipe Peter Otten pointed you to does that.)
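
A bare-bones sketch of such a wrapper (Python 2 spelling, using
__nonzero__; the class name is made up) could look like this:

class PeekIter(object):
    """Wrap an iterator; the truth value says whether an item remains."""
    _EMPTY = object()

    def __init__(self, iterable):
        self._it = iter(iterable)
        self._head = self._EMPTY

    def __iter__(self):
        return self

    def __nonzero__(self):
        if self._head is self._EMPTY:
            try:
                self._head = self._it.next()   # consume one item...
            except StopIteration:
                return False
        return True                            # ...and keep it for next()

    def next(self):
        if self._head is not self._EMPTY:
            item, self._head = self._head, self._EMPTY
            return item
        return self._it.next()

With that, bool(PeekIter(xx())) would be False for the empty generator
above, at the cost of eagerly pulling one item when you test it.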

The unfortunate thing about this is that functions written to iterate
over sequences that test if the sequence is empty with a boolean test
cannot be used with generators, and will fail silently.  This hurts
duck typing.

This became an issue some releases ago (2.4, I think) when someone
decided duck typing was a good thing and so it would be a good idea if
iterators that did know if they were empty had a boolean status
indicating as such.  GvR angrily told them to change it back next
release.  I have to agree with GvR here: at least this way there is a
simple rule whether boolean test works.  (Sequences return boolean
status indicating if they're empty; other iterators return True.)  The
better thing would be if boolean wasn't used to test for emptiness at
all; the whole concept of booleans in Python is overloaded and that
hurts duck typing.



Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Boolean value of generators

2010-10-14 Thread Carl Banks
On Oct 14, 6:36 am, Peter Otten <__pete...@web.de> wrote:
> * I would recommend that you avoid the above approach. Pythonic solutions
> favour EAFP (http://docs.python.org/glossary.html#term-eafp) over look-
> before-you-leap:
>
> try:
>     value = next(y)
> except StopIteration:
>     print "ran out of values"
> else:
>     do_something_with(value)
>
> or
>
> value = next(y, default)


Good idea but not always convenient.  Sometimes you have to perform
some setup ahead of time if there are any items, and must not perform
that setup if there are no items.  It's a PITA to use EAFP for
that, which is why an iterator that consumes and caches can be a
useful thing.
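
For what it's worth, one way to do that look-before-setup dance without
a caching wrapper is to pull the first item EAFP-style and chain it back
on; a sketch, where do_setup() just stands in for whatever one-time work
is needed:

import itertools

try:
    first = next(y)
except StopIteration:
    pass                                 # no items at all: skip the setup
else:
    do_setup()                           # one-time work, only when there is data
    for value in itertools.chain([first], y):
        do_something_with(value)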


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Does everyone keep getting recruiting emails from google?

2010-10-14 Thread Matteo Landi
I got one a couple of months ago. I answered back I was interested and
then we scheduled a phone conversation.

Good luck,
Matteo

On Thu, Oct 14, 2010 at 5:58 PM, Grant Edwards  wrote:
> On 2010-10-14, Daniel Fetchinson  wrote:
>
>> I keep getting recruiting emails from charlesngu...@google.com about
>> working for google as an engineer. The messages are pretty much the
>> same and go like this:
>
> I got one a year or two back (from somebody else at google).  I
> replied saying that I wasn't interested, and that was the end of it.
>
> --
> Grant Edwards               grant.b.edwards        Yow! Oh my GOD -- the
>                                  at               SUN just fell into YANKEE
>                              gmail.com            STADIUM!!
> --
> http://mail.python.org/mailman/listinfo/python-list
>



-- 
Matteo Landi
http://www.matteolandi.net/
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Hyperlink to a file using python

2010-10-14 Thread Pratik Khemka


 I think I did not frame the question in a proper manner..
 
I want to open pratik.html which is there in the same folder as the python 
program. I do not want to specify the path like you can see below in the code 
(blue) c:\Documents and Settings\My Documents..The reason for this is that I 
want the code to be portable, i.e. others should also be able to run the program 
on their computers in whichever folder they want to. In this situation the code 
won't work on other computers due to the path name specified.
 
Currently I am using this code below :
sheet.write(4,3,"file:///c:\Documents and Settings\My 
Documents\pratik.html",hyperlink_style)
 
What I want to know is if there is a way to remove the blue part (path to 
file)..I think it should be possible because the file is present in the same 
folder as the python program..
Currently the hyperlink only works if the blue part is also there..I am sorry 
if this question probably does not belong to this group and maybe belongs more 
to the excel group.

Thanks a lot for all the help..I really appreciate it..
Pratik
 
 
> Date: Wed, 13 Oct 2010 15:19:54 -0700
> Subject: Re: Hyperlink to a file using python
> From: c...@rebertia.com
> To: pratikkhe...@hotmail.com
> CC: python-list@python.org
> 
> >> To: python-list@python.org
> >> From: em...@fenx.com
> >> Subject: Re: Hyperlink to a file using python
> >> Date: Wed, 13 Oct 2010 14:19:36 -0700
> >>
> >> On 10/13/2010 1:57 PM Pratik Khemka said...
> >> >
> >> > I want to create a hyperlink in my excel sheet using python such that
> >> > when you click on that link (which is a file name (html file)), the file
> >> > automatically opens. This file is present in the same folder in which the
> >> > python code file is present.
> >> >
> >> > I am using xlwt module
> >> >
> >> > link= 'abcd.html'
> >> > sheet.write(x, y, link, format_style)
> >>
> >> Hmmm... my excel does that automagically when I
> >> type "http://xx.yy.zz/word.html into a cell.
> >>
> >> What happens when you use "http://a.b.c/abcd.html"?
> 
> On Wed, Oct 13, 2010 at 3:13 PM, Pratik Khemka  
> wrote:
> > This file is present on my hardrive..This file is present in the same folder
> > in which the python code file is present...so http: wont work..
> 
> Have you tried a file:/// URI?
> http://en.wikipedia.org/wiki/File_URI_scheme
> 
> Cheers,
> Chris
  -- 
http://mail.python.org/mailman/listinfo/python-list


Re: Boolean value of generators

2010-10-14 Thread Paul Rubin
Carl Banks  writes:
> In general, the only way to test if a generator is empty is to try to
> consume an item.  (It's possible to write an iterator that consumes an
> item and caches it to be returned on the next next(), and whose
> boolean status indicates if there's an item left. ...)

I remember thinking that Python would be better off if all generators
automatically cached an item, so you could test for emptiness, look
ahead at the next item without consuming it, etc.  This might have been
a good change to make in Python 3.0 (it would have broken compatibility
with 2.x) but it's too late now.
-- 
http://mail.python.org/mailman/listinfo/python-list


python/c api

2010-10-14 Thread Tony
hi,

is the python/c api extensively used? and what world-famous software
use it? thanks!

tony
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Re: Re: UTF-8 problem encoding and decoding in Python3

2010-10-14 Thread hidura
Finally did it, thank you all for your help. I will upload the code because it  
can be used with Python 3 to handle the wsgi issue with the bytes!

Almar, sorry for the mails, gmail sometimes sucks!!

On Oct 14, 2010 1:00pm, hid...@gmail.com wrote:
Finally did it, thank you all for your help. I will upload the code because it  
can be used with Python 3 to handle the wsgi issue with the bytes!



On Oct 12, 2010 5:28pm, Almar Klein almar.kl...@gmail.com> wrote:
> So if you can, you could make sure to send the file as just bytes,
> or if it must be a string, base64 encoded. If this is not possible
> you can try the code below to obtain the bytes, not a very fast
> solution, but it should work (Python 3):
>
> MAP = {}
> for i in range(256):
>     MAP[tmp] = eval("'\\u%04i'" % i)
>
> > # Let's say 'a' is your string
> > b''.join([MAP[c] for c in a])
>
> I don't know what you're trying to do here.
>
> 1. 'tmp' is the same for every iteration of the 'for' loop.
>
> 2. A Unicode escape sequence expects 4 hexadecimal digits; the 'i'
> format gives a decimal number.
>
> 3. Using 'eval' to make a string this way is the long (and wrong) way
> to do it; chr(i) would have the same effect.
>
> 4. The result of the eval is a string, but you're performing a join
> with a bytestring, hence the exception.
>
> Mmm, you're right. I didn't look at this carefully enough, and then
> made an error in copying the source code. Sorry for that ...
>
> Here's a solution that should work (if I understand your problem correctly):
>
> your_bytes = bytes([ord(c) for c in your_string])
>
> Almar
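
(Side note: since the string there can only hold code points 0-255, the
same conversion can also be spelled as an encode, assuming Python 3 as
in the thread:

your_bytes = your_string.encode("latin-1")

which does the byte-for-code-point mapping directly.)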
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help needed - To get path of a directory

2010-10-14 Thread Emmanuel Surleau
> Dear Emmanuel,
> 
> Thank you for your reply.
> Actually what I want to do is, at the run time I want to know the location
> of a specific directory.
> Then I will add some file name to the path and load the file.
> The directory can reside in any drive, depending on the user.

Well... If you don't even know the path of the directory relative to the root 
of the drive... I hope your user is not in any hurry. Scanning a single drive 
is potentially very time-consuming, scanning them all will be awful 
(especially if your user has mounted, say, a huge network drive...).

Don't you have any other, smarter way of finding out the directory? Like 
reading a configuration file or an entry in the registry?

Cheers,

Emm
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Hyperlink to a file using python

2010-10-14 Thread MRAB

On 14/10/2010 18:54, Pratik Khemka wrote:


I think I did not frame the question in a proper manner..

I want to open pratik.html which is there in the same folder as the
python program. I do not want to specify the path like you can see below
in the code (blue) c:\Documents and Settings\My Documents..The
reason for this is that I want the code to be portable , ie others
should also be able to run the program on their computers in whichever
folder they want to. In this situation the code wont work on other
computers due to the path name specified.

Currently I am using this code below :
sheet.write(4,3,"file:///c:\Documents and Settings\My
Documents\pratik.html",hyperlink_style)
What I want to know is that if there is a way to remove the blue part
(path to file)..I think it should be possible because the /file is
present in the same folder as the python program/..
Currently the hyperlink only works if the blue part is also there..I am
sorry if this question probably does not belong to this group and maybe
belongs more to the excel group.

Thanks a lot for all the help..I really aprreciate it..


[snip]
Try a relative hyperlink like "file:pratik.html".
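
In the terms of the original call that would be roughly:

sheet.write(4, 3, "file:pratik.html", hyperlink_style)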
--
http://mail.python.org/mailman/listinfo/python-list


Re: Boolean value of generators

2010-10-14 Thread Albert Hopkins
On Thu, 2010-10-14 at 10:16 +0100, Tony wrote:
> I have been using generators for the first time and wanted to check for
> an empty result.  Naively I assumed that generators would give
> appopriate boolean values.  For example
> 
> def xx():
>   l = []
>   for x in l:
> yield x
> 
> y = xx()
> bool(y)
> 

As people have already mentioned, generators are objects and objects
(usually) evaluate to True.

There may be times, however, that a generator may "know" that it
doesn't/isn't/won't generate any values, and so you may be able to
override boolean evaluation.  Consider this example:

import datetime

class DaysSince(object):
    def __init__(self, day):
        self.day = day
        self.today = datetime.date.today()

    def __nonzero__(self):
        if self.day > self.today:
            return False
        return True

    def __iter__(self):
        one_day = datetime.timedelta(1)
        new_day = self.day
        while True:
            new_day = new_day + one_day
            if new_day <= self.today:
                yield new_day
            else:
                break

g1 = DaysSince(datetime.date(2010, 10, 10))
print bool(g1)
for day in g1:
    print day

g2 = DaysSince(datetime.date(2011, 10, 10))
print bool(g2)
for day in g2:
    print day


> True
> 2010-10-11
> 2010-10-12
> 2010-10-13
> 2010-10-14
> False



-- 
http://mail.python.org/mailman/listinfo/python-list


GCC process not working as expected when called in Python (3.1.2) subprocess-shell, but OK otherwise

2010-10-14 Thread Kingsley Turner

 Hi,

I'm using GCC as a pre-processor for a C-like language (EDDL) to handle all 
the includes, macros, etc. producing a single source file for another 
compiler.  My python code massages the inputs (which arrive in a .zip file), 
then calls GCC.


I have a problem where if I call GCC from my python script some of the 
#defines are not processed in the output.  However if I paste the exact same 
GCC command-line into a shell, I get a correct output.


I'm calling GCC in this manner:

### Execute GCC, keep stdout & stderr
err_out = open(error_filename,"wb")
process = subprocess.Popen(gcc_command, stderr=err_out, bufsize=81920, 
cwd=global_options['tmp'])

gcc_exit_code = process.wait()
log("GCC Exit Code %u" % (gcc_exit_code))
err_out.close()

where gcc_command is:

"/usr/bin/gcc -I /tmp/dd-compile_1286930109.99 -I 
/home/foo/eddl-includes -D__TOKVER__=600 -ansi -nostdinc -v -x c -E -o 
/tmp/dd-compile_1286930109.99/11130201.ddl.OUT 
/tmp/dd-compile_1286930109.99/11130201.ddl"


So when this code spawns GCC, the compiler does not really work 100%, but if 
I paste this exact command line, the output is perfect.


I'm not really sure how to debug this.  I already checked the ulimits, and 
permissions shouldn't be a problem since it's all run by the same user, I 
also checked the environment - these were copied into the subshell.


GCC produces no warnings, or errors.  The output is mostly OK, some other 
macros have been processed.
If I diff the working output with the non-working one, the differences are 
only a bunch of skipped #defines.


gcc version 4.4.5 (Ubuntu/Linaro 4.4.4-14ubuntu5)
Python 3.1.2 (release31-maint, Sep 17 2010, 20:27:33)
Linux 2.6.35-22-generic #34-Ubuntu SMP Sun Oct 10 09:26:05 UTC 2010 x86_64 
GNU/Linux


Any suggestions for helping me debug this would be much appreciated.

thanks,
-kt

PS> this query has also been posted to the GCC-help list



--
http://mail.python.org/mailman/listinfo/python-list


Re: Boolean value of generators

2010-10-14 Thread Tim Chase

On 10/14/10 12:53, Paul Rubin wrote:

Carl Banks  writes:

In general, the only way to test if a generator is empty is to try to
consume an item.  (It's possible to write an iterator that consumes an
item and caches it to be returned on the next next(), and whose
boolean status indicates if there's an item left. ...)


I remember thinking that Python would be better off if all generators
automatically cached an item, so you could test for emptiness, look
ahead at the next item without consuming it, etc.  This might have been
a good change to make in Python 3.0 (it would have broken compatibility
with 2.x) but it's too late now.


Generators can do dangerous things...I'm not sure I'd *want* to 
have Python implicitly cache generators without an explicit 
wrapper to request it:


import os
from fnmatch import fnmatch

def delete_info(root, pattern):
    for path, dirs, files in os.walk(root):
        for fname in files:
            if fnmatch(fname, pattern):
                full_path = os.path.join(path, fname)
                info = gather_info(full_path)
                os.unlink(full_path)
                yield full_path, info

location = '/'
user_globspec = '*.*'
deleter = delete_info(location, user_globspec)
if some_user_condition_determined_after_generator_creation:
    for path, info in deleter:
        report(path, info)

-tkc
--
http://mail.python.org/mailman/listinfo/python-list


VipIMAGE 2011 – ECCOMAS Thematic Conference - FIRS T ANNOUNCE

2010-10-14 Thread tava...@fe.up.pt

---

International ECCOMAS Thematic Conference VipIMAGE 2011 - III ECCOMAS
THEMATIC
CONFERENCE ON COMPUTATIONAL VISION AND MEDICAL IMAGE PROCESSING
12-14th October 2011, Olhão, Algarve, Portugal
www.fe.up.pt/~vipimage

FIRST ANNOUNCE

(We would appreciate if you could distribute this information by your
colleagues
and co-workers.)

---

Dear Colleague,

We are glad to announce the International Conference VipIMAGE 2011 -
III ECCOMAS THEMATIC CONFERENCE ON COMPUTATIONAL VISION AND MEDICAL
IMAGE PROCESSING that will be held in Real Marina Hotel & Spa, Olhão,
Algarve, Portugal, on October 12-14, 2011.


Possible Topics (not limited to)

- Signal and Image Processing
- Computational Vision
- Medical Imaging
- Physics of Medical Imaging
- Tracking and Analysis of Movement
- Simulation and Modeling
- Image Acquisition
- Shape Reconstruction
- Objects Segmentation, Matching, Simulation
- Data Interpolation, Registration, Acquisition and Compression
- 3D Vision
- Virtual Reality
- Software Development for Image Processing and Analysis
- Computer Aided Diagnosis, Surgery, Therapy, and Treatment
- Computational Bioimaging and Visualization
- Telemedicine Systems and their Applications


Invited Lecturers

- Armando J. Pinho - University of Aveiro, Portugal
- Irene M. Gamba - The University of Texas at Austin, USA
- Marc Pollefeys - ETH Zurich, Switzerland
- Marc Thiriet - Universite Pierre et Marie Curie (Paris VI), France
- Xavier Roca Marvà - Autonomous University of Barcelona, Spain
- Stan Sclaroff - Boston University, USA


Thematic Sessions

Proposals to organize Thematic Session within VipIMAGE 2011 are mostly
welcome.
The organizers of the selected thematic sessions will be included in
the conference scientific committee and will have a reduced
registration fee. They will be responsible for the dissemination of
their thematic session, may invite expertise researchers to have
invited keynotes during their session and will participate in the
review process of the submitted contributions.
Proposals for Thematic Sessions should be submitted by email to the
conference co-chairs (tava...@fe.up.pt, rna...@fe.up.pt)


Publications

The proceedings book will be published by the Taylor & Francis Group.
The organizers will encourage the submission of extended versions of
the accepted papers to related International Journals; in particular,
for special issues dedicated to the conference.
As what happened with the first two editions of VipIMAGE, a book with
20 invited works from the most important ones presented in
VipIMAGE2011 will be published by Springer.


Important dates

- Deadline for Thematic Sessions proposals: 15th January 2011
- Thematic Sessions notification: 31th January 2011
- Deadline for Extended Abstracts: 15th March 2011
- Authors Notification: 15th April 2011
- Deadline for Lectures and Papers: 15th June 2011


We are looking forward to see you in Algarve next year.

Kind regards,

João Manuel R. S. Tavares
Renato Natal Jorge
(conference co-chairs)

PS. For further details please see the conference website at:
www.fe.up.pt/~vipimage
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: processing input from multiple files

2010-10-14 Thread John Posner

On 10/14/2010 10:44 AM, Christopher Steele wrote:
The issue is that I need to be able to both, split the names of the 
files so that I can extract the relevant times, and open each 
individual file and process each line individually. Once I have 
achieved this I need to append the sorted files onto one another in 
one long file so that I can pass them into a verification package. 
I've tried changing the name to textline and I get the same result


I'm very happy to hear that changing the name of a variable did not 
affect the way the program works! Anything else would be worrisome.




- the sorted files overwrite one another.


Variable *time* names a list, with one member for each input file. But 
variable *newtime* names a scalar value, not a list. That looks like a 
problem to me. Either of the following changes might help:


Original:

  for x in time:
      hour = x[:2]
      print hour
      newtime = year+month+day+'_'+hour+'00'

Alternative #1:

  newtime = []
  for x in time:
      hour = x[:2]
      print hour
      newtime.append(year+month+day+'_'+hour+'00')

Alternative #2:
  newtime = [year + month + day + '_' + x[:2] + '00' for x in time]


HTH,
John

--
http://mail.python.org/mailman/listinfo/python-list


Re: Does everyone keep getting recruiting emails from google?

2010-10-14 Thread Seebs
On 2010-10-14, Daniel Fetchinson  wrote:
> I keep getting recruiting emails from charlesngu...@google.com about
> working for google as an engineer.

I've gotten one of those, ever, and it named a specific person who had
referred me.

(It turns out to be a moot point, $DAYJOB has telecommuting, and telecommuting
beats pretty much every other possible consideration in a job for me.)

-s
-- 
Copyright 2010, all wrongs reversed.  Peter Seebach / usenet-nos...@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
I am not speaking for my employer, although they do rent some of my opinions.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python/c api

2010-10-14 Thread Diez B. Roggisch
Tony  writes:

> hi,
>
> is the python/c api extensively used? and what world-famous software
> use it? thanks!

It is, for a lot of extensions for Python, and for a lot of embedding of
Python into software. For example Ableton Live, an audio sequencer, uses
it; ArcGIS has it, and so does Eve Online. Many more do, I guess.

Diez
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: My first Python program

2010-10-14 Thread Seebs
On 2010-10-14, Hallvard B Furuseth  wrote:
> A class which holds an OS resource like a file, should provide a context
> manager and/or a release function, the latter usually called in a
> 'finally:' block.  When the caller doesn't bother with either, the class
> often might as well depend on the destructor in 'file'.

That makes sense.

In this case, I'm pretty sure context managers aren't the right tool (even
apart from version questions), because they appear to be syntax-level
tools -- but it's a runtime decision how many files I have to open and
close.

> For long strings, another option is triple-quoting as you've seen in doc
> strings: print """foo
> bar""".

I assume that this inserts a newline, though, and in this case I don't
want that.

> I've written such code, but I suppose the proper way is to use a
> classmethod to set it, so you can see in the class how the copyright
> gets there.  SourceFile.() and self.() both
> get called with the class as 1st argument.

Oh, that makes more sense.

> def setup_copyright(cls, fname):
>     cls.copyright = open(fname).read()
> setup_copyright = classmethod(setup_copyright)
> # In python >= 2.4 you can instead say @classmethod above the def.

I *think* I get to assume 2.4.

> SourceFile.__repr__() looks like it should be a __str__().  I haven't
> looked at how you use it though.  But __repr__ is supposed to
> look like a Python expression to create the instance: repr([2]) = '[2]',
>> or a generic '<...>': repr(id) = '<built-in function id>'.

Ahh!  I didn't realize that.  I was using repr as the "expand on it enough
that you can see what it is" form -- more for debugging than for
something parsable.

> Python 2.0, found as follows:
> - Google python list comprehensions.
> - Check the PEP (Python Enhancement Proposal) which shows up.  PEPs
>   are the formal documents for info to the community, for the Python
>   development process, etc.  :
> Title:List Comprehensions
> Status:   Final
> Type: Standards Track
> Python-Version: 2.0

Ah-hah!  That's helpful.  Thanks -- now I know how to find out next time
I'm curious.

-s
-- 
Copyright 2010, all wrongs reversed.  Peter Seebach / usenet-nos...@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
I am not speaking for my employer, although they do rent some of my opinions.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: GCC process not working as expected when called in Python (3.1.2) subprocess-shell, but OK otherwise

2010-10-14 Thread Diez B. Roggisch
Kingsley Turner  writes:

>  Hi,
>
> I'm using GCC as a pre-processor for a C-like language (EDDL) to
> handle all the includes, macros, etc. producing a single source file
> for another compiler.  My python code massages the inputs (which
> arrive in a .zip file), then calls GCC.
>
> I have a problem where if I call GCC from my python script some of the
> #defines are not processed in the output.  However if I paste the
> exact same GCC command-line into a shell, I get a correct output.
>
> I'm calling GCC in this manner:
>
> ### Execute GCC, keep stdout & stderr
> err_out = open(error_filename,"wb")
> process = subprocess.Popen(gcc_command, stderr=err_out,
> bufsize=81920, cwd=global_options['tmp'])
> gcc_exit_code = process.wait()
> log("GCC Exit Code %u" % (gcc_exit_code))
> err_out.close()
>
> where gcc_command is:
>
> "/usr/bin/gcc -I /tmp/dd-compile_1286930109.99 -I 
> /home/foo/eddl-includes -D__TOKVER__=600 -ansi -nostdinc -v -x c -E -o 
> /tmp/dd-compile_1286930109.99/11130201.ddl.OUT
> /tmp/dd-compile_1286930109.99/11130201.ddl"
>
> So when this code spawns GCC, the compiler does not really work 100%,
> but if I paste this exact command line, the output is perfect.
>
> I'm not really sure how to debug this.  I already checked the ulimits,
> and permissions shouldn't be a problem since it's all run by the same
> user, I also checked the environment - these were copied into the
> subshell.
>
> GCC produces no warnings, or errors.  The output is mostly OK, some
> other macros have been processed.
> If I diff the working output with the non-working one, the differences
> are only a bunch of skipped #defines.
>
> gcc version 4.4.5 (Ubuntu/Linaro 4.4.4-14ubuntu5)
> Python 3.1.2 (release31-maint, Sep 17 2010, 20:27:33)
> Linux 2.6.35-22-generic #34-Ubuntu SMP Sun Oct 10 09:26:05 UTC 2010
> x86_64 GNU/Linux
>
> Any suggestions for helping me debug this would be much appreciated.

sounds nasty. Only thing I can imagine is that GCC wants specific
environment variables to exist. Maybe using "shell=True" helps? Or
capturing and passing an explicit environment?

Diez
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: what happens to Popen()'s parent-side file descriptors?

2010-10-14 Thread Seebs
On 2010-10-14, Roger Davis  wrote:
> Seebs, you are of course correct that the example I quoted (`cat |
> grep | whatever`) is best done internally with the re module and built-
> in language features, and in fact that has already been done wherever
> possible. I should have picked a better example, there are numerous
> cases where I am calling external programs whose functionality is not
> duplicated by Python features.

Fair enough.  It's just a common pitfall when moving from shell to basically
any other language.  My first attempts to code in C involved a lot of
building up of system() calls.  :P

-s
-- 
Copyright 2010, all wrongs reversed.  Peter Seebach / usenet-nos...@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
I am not speaking for my employer, although they do rent some of my opinions.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: RE: Hyperlink to a file using python

2010-10-14 Thread Dave Angel



On 2:59 PM, Pratik Khemka wrote:


  I think I did not frame the question in a proper manner..

I want to open pratik.html which is there in the same folder as the python 
program. I do not want to specify the path like you can see below in the code 
(blue) c:\Documents and Settings\My Documents..The reason for this is that I 
want the code to be portable , ie others should also be able to run the program 
on their computers in whichever folder they want to. In this situation the code 
wont work on other computers due to the path name specified.

Currently I am using this code below :
sheet.write(4,3,"file:///c:\Documents and Settings\My 
Documents\pratik.html",hyperlink_style)

What I want to know is that if there is a way to remove the blue part (path to 
file)..I think it should be possible because the file is present in the same 
folder as the python program..
Currently the hyperlink only works if the blue part is also there..I am sorry 
if this question probably does not belong to this group and maybe belongs more 
to the excel group.

Thanks a lot for all the help..I really aprreciate it..
Pratik

(You top-posted, so I had to remove all the earlier stuff which is now 
out of order.)


I'm assuming you're generating this spreadsheet from python.  So if you 
know what the format of the string is, generate it accordingly.  You can 
use __file__ to get full the path of the python script, use 
os.path.dirname() on it, and use os.path.join() to combine it with the 
"pratik.html". You might also need abspath().


Once the spreadsheet has been written, it's not portable to other 
machines.  But fixing that would be an Excel problem, and you're asking 
in python forum.


DaveA

--
http://mail.python.org/mailman/listinfo/python-list


Re: GCC process not working as expected when called in Python (3.1.2) subprocess-shell, but OK otherwise

2010-10-14 Thread Chris Rebert
On Wed, Oct 13, 2010 at 7:06 PM, Kingsley Turner
 wrote:
>  Hi,
>
> I'm using GCC as a pre-processor for a C-like language (EDDL) to handle all
> the includes, macros, etc. producing a single source file for another
> compiler.  My python code massages the inputs (which arrive in a .zip file),
> then calls GCC.
>
> I have a problem where if I call GCC from my python script some of the
> #defines are not processed in the output.  However if I paste the exact same
> GCC command-line into a shell, I get a correct output.
>
> I'm calling GCC in this manner:
>
>    ### Execute GCC, keep stdout & stderr
>    err_out = open(error_filename,"wb")
>    process = subprocess.Popen(gcc_command, stderr=err_out, bufsize=81920,
> cwd=global_options['tmp'])
>    gcc_exit_code = process.wait()
>    log("GCC Exit Code %u" % (gcc_exit_code))
>    err_out.close()
>
> where gcc_command is:
>
>    "/usr/bin/gcc -I /tmp/dd-compile_1286930109.99 -I /home/foo/eddl-includes
> -D__TOKVER__=600 -ansi -nostdinc -v -x c -E -o
> /tmp/dd-compile_1286930109.99/11130201.ddl.OUT
> /tmp/dd-compile_1286930109.99/11130201.ddl"

Quoting http://docs.python.org/library/subprocess.html#subprocess.Popen
, emphasis mine, and keeping in mind that you're passing gcc_command
as the "args" argument to Popen's constructor:

"On Unix, with shell=False (default): [...] args should normally be
**a sequence**. If a string is specified for args, it will be used as
the name or path of the program to execute; ***this will only work if
the program is being given no arguments***."

Clearly you are trying to run GCC with arguments, hence your problem.
Either pass shell=True to Popen(), or, preferably, change gcc_command
to a properly tokenized list of strings; see the docs section I linked
to, it gives an example of how to do so.
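
For instance, a sketch of the tokenized form, reusing the poster's
err_out and global_options and letting shlex do the splitting:

import shlex
import subprocess

args = shlex.split(gcc_command)   # ['/usr/bin/gcc', '-I', '/tmp/...', ...]
process = subprocess.Popen(args, stderr=err_out, bufsize=81920,
                           cwd=global_options['tmp'])
gcc_exit_code = process.wait()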

Cheers,
Chris
--
Lesson: Read the `subprocess` docs. They've gotten better.
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Does everyone keep getting recruiting emails from google?

2010-10-14 Thread John Nagle

On 10/14/2010 8:49 AM, Daniel Fetchinson wrote:

I keep getting recruiting emails from charlesngu...@google.com about
working for google as an engineer. The messages are pretty much the
same and go like this:


I am part of the Google Staffing team and was wondering if you would
be open to exploring engineering opportunities with Google.


   Me too.  I figured it was some headhunter agency pretending to
represent Google.

   I got a real query once from Google because of something I'd
posted on comp.lang.c++, but they sounded like they had a clue.

John Nagle
--
http://mail.python.org/mailman/listinfo/python-list


Re: My first Python program

2010-10-14 Thread Hallvard B Furuseth
Seebs writes:

>> For long strings, another option is triple-quoting as you've seen in doc
>> strings: print """foo
>> bar""".
>
> I assume that this inserts a newline, though, and in this case I don't
> want that.

True.
$ python
>>> """foo  
... bar"""
'foo\nbar'
>>> """foo\
... bar"""
'foobar'
>>> "foo\ 
... bar"
'foobar'
>>> "foo"  "bar"
'foobar'

>> SourceFile.__repr__() looks like it should be a __str__().  I haven't
>> looked at how you use it though.  But __repr__ is supposed to
>> look like a Python expression to create the instance: repr([2]) = '[2]',
>> or a generic '<...>': repr(id) = '<built-in function id>'.
>
> Ahh!  I didn't realize that.  I was using repr as the "expand on it enough
> that you can see what it is" form -- more for debugging than for
> something parsable.

No big deal, then, except in the "idiomatic Python" sense.
__str__ for the informal(?) string representation of the object,
  def __repr__(self):
  return "<%s object with %s>" % (self.__class__.__name__, )
and you have a generic 2nd case, but looks like it'll be unusually long
in this case, or just define some ordinary member name like info() or
debug().

-- 
Hallvard
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Boolean value of generators

2010-10-14 Thread Cameron Simpson
On 14Oct2010 14:13, Tim Chase  wrote:
| On 10/14/10 12:53, Paul Rubin wrote:
| >Carl Banks  writes:
| >>In general, the only way to test if a generator is empty is to try to
| >>consume an item.  (It's possible to write an iterator that consumes an
| >>item and caches it to be returned on the next next(), and whose
| >>boolean status indicates if there's an item left. ...)
| >
| >I remember thinking that Python would be better off if all generators
| >automatically cached an item, so you could test for emptiness, look
| >ahead at the next item without consuming it, etc.  This might have been
| >a good change to make in Python 3.0 (it would have broken compatibility
| >with 2.x) but it's too late now.
| 
| Generators can do dangerous things...I'm not sure I'd *want* to have
| Python implicitly cache generators without an explicit wrapper to
| request it: [... damaging counter example ...]

+1 to this. Speaking for myself, I would _not_ want a generator to
commence execution unless I overtly iterate over it.

I suppose we can cue the "hasattr() runs getattr(), ouch!" discussion
here:-)

Cheers,
-- 
Cameron Simpson  DoD#743
http://www.cskk.ezoshosting.com/cs/

I had no problem avoiding London before it was built.
- ir_jo...@csd.brispoly.ac.uk (Ian Johnson)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Does everyone keep getting recruiting emails from google?

2010-10-14 Thread James Harris
On 14 Oct, 16:49, Daniel Fetchinson  wrote:
> I keep getting recruiting emails from charlesngu...@google.com about
> working for google as an engineer. The messages are pretty much the
> same and go like this:

...

> I'm guessing I'm not the only one on this list to get these emails and
> suspect that pretty much everyone gets them. Is that the case? If yes,
> what's the point of spamming a more-or-less random set of people who
> although are probably interested in IT-related stuff but who can
> otherwise also be a set of dogs. Aren't enough people applying without
> this?

I had one - or, rather, two - because I'd posted to a Unix admin
newsgroup. Could have been legit but wasn't of interest so I didn't
follow it up so can't add any more.

James
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Question regarding python2.5 migration from windows xp to windows 7

2010-10-14 Thread CM
On Oct 14, 2:37 am, python_tsp  wrote:
> Hi,
>
> We have a Python based test framework which is being used in various
> projects.
>
> Our current environment is
> Python (ver 2.5.1)
> wxPython (wxPython2.8-win32-ansi-2.8.6.0-py25)
> pywin32-210.win32-py2.5
> vcredist_x86.exe
> pyserial-2.2
>
> Our Framework is being currently used in Windows XP.
>
> The issue is:
> Soon our environment will be migrated from Windows XP to Windows 7.
> In this regard, I would be in need of suggestions/ideas/information
> regarding migration of our existing framework into windows 7
> environment.
>
> Do i need to migrate our framework from Python 2.5 to either Python
> 2.6 or directly to Python 3.0 ? What happens to all supporting
> packages..etc

I can't address all the stuff you're using, but I tried an app
that I am developing on Windows XP on Windows 7 and was glad to
see it worked fine.  I was using Python 2.5, wxPython 2.8.10.
The best thing would be to just test it yourself on Win7 and
if it works, there's nothing further to do.

Che

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Question regarding python2.5 migration from windows xp to windows 7

2010-10-14 Thread Brian Curtin
On Thu, Oct 14, 2010 at 01:37, python_tsp  wrote:

> Hi,
>
> We have a Python based test framework which is being used in various
> projects.
>
> Our current environment is
> Python (ver 2.5.1)
> wxPython (wxPython2.8-win32-ansi-2.8.6.0-py25)
> pywin32-210.win32-py2.5
> vcredist_x86.exe
> pyserial-2.2
>
> Our Framework is being currently used in Windows XP.
>
> The issue is:
> Soon our environment will be migrated from Windows XP to Windows 7.
> In this regard, I would be in need of suggestions/ideas/information
> regarding migration of our existing framework into windows 7
> environment.
>
> Do i need to migrate our framework from Python 2.5 to either Python
> 2.6 or directly to Python 3.0 ? What happens to all supporting
> packages..etc
>
> Which is the best way ?
>
> We tried the option of running our framework in the Virtual XP mode
> of Windows 7. Though it works for the time being, I am not interested
> in keeping that as the way of working for the future.
>
> Please help
>
> Many thanks in advance
>
> - Pramod


You will be able to just install your current code on Windows 7 and it will
work fine. If it doesn't for any reason, it's probably a bug in Python,
which I think is highly unlikely.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PyCharm

2010-10-14 Thread Jeffrey Gaynor
Yip. I'm using it and for the most part like it. But...

I used their Java IDE for years (it totally rocks, highly recommended), so it 
is very comfortable to use PyCharm. 

One thing that bugs me in refactoring though is that renaming a method or 
variable does not necessarily work. It's supposed to track down all references 
and correctly change them, but it tends to be hit or miss. No problem though, 
since I just do a search of the files in question and do it manually. Still, 
the Java refactoring engine works very well indeed and id one of their major 
selling points. Code completion works, you can specify different Python 
versions (helpful) and there is Django support.

The debugger, though I have only had limited use for it, does seem to work well 
too.

Certainly give it a shot. The only other IDE I found that was remotely close to 
it was Komodo which costs a lot more (Jetbrains is offering a 50% off coupon as 
a promotional offer for a while.) 

Hope this helps...



- Original Message -
From: "Robert H" 
To: python-list@python.org
Sent: Wednesday, October 13, 2010 4:36:31 PM
Subject: PyCharm

Since the new IDE from Jetbrains is out I was wondering if "you" are
using it and what "you" think about it.

I have to start learning Python for a project at work and I am looking
around for options.

Bob
-- 
http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Exceptions are not just for errors

2010-10-14 Thread Gregory Ewing

Ben Finney wrote:


Another way of thinking about it is that there's no sensible sequence of
bytes to return at EOF, so the Pythonic thing to do is to raise an
exception for this exceptional circumstance.


But this is *not* what Python does, so it's obviously
not Pythonic. :-)

If f.read(n) is to mean "read n bytes, or however many are
left", and there are no bytes left, then the consistent
thing to do is to return a zero-length sequence of bytes.
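
A quick interactive check of that behaviour (Python 2; the file name is
just a throwaway for illustration):

>>> open("empty.dat", "wb").close()   # create an empty file
>>> f = open("empty.dat", "rb")
>>> f.read()
''
>>> f.read(10)
''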

I came across a situation recently where Microsoft got this
badly wrong. I was using a language that didn't have very
good file-access capabilities, and I wanted to compare the
contents of two files. So I used Scripting.FileSystemObject
via COM, and wrote something like the equivalent of

    f = open(...)
    g = open(...)
    if f.read() == g.read():
        ...

This worked fine... except when one of the files was empty,
in which case the read() call *raised an exception*!

I am very glad that Python didn't go down that route.

--
Greg
--
http://mail.python.org/mailman/listinfo/python-list


Re: Boolean value of generators

2010-10-14 Thread Carl Banks
On Oct 14, 10:53 am, Paul Rubin  wrote:
> Carl Banks  writes:
> > In general, the only way to test if a generator is empty is to try to
> > consume an item.  (It's possible to write an iterator that consumes an
> > item and caches it to be returned on the next next(), and whose
> > boolean status indicates if there's an item left. ...)
>
> I remember thinking that Python would be better off if all generators
> automatically cached an item, so you could test for emptiness, look
> ahead at the next item without consuming it, etc.  This might have been
> a good change to make in Python 3.0 (it would have broken compatibility
> with 2.x) but it's too late now.

Since the generator's behavior can depend on when it was invoked, no I
don't think this is a good idea.

Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Compiling as 32bit on MacOSX

2010-10-14 Thread Gregory Ewing

Ned Deily wrote:
Perhaps you're 
calling ld(1) directly?  To link multiple-arch executables (etc), the 
Apple gcc driver does the dirty work of calling ld multiple times and 
lipo-ing the results.


Is this something that only works at link time, then? The
gcc man page says:

  "Multiple options work, and
  direct the compiler to produce "universal" binaries including
  object code for each architecture specified with -arch."

From this I was hoping to be able to do

   gcc -arch i386 -arch x86_64 -c foo.c

and get dual-architecture .o files that could then be linked
into dual-architecture libraries. But if I do the above, I
just get an x86_64 .o file.

Are you saying that I need to compile separate sets of .o
files and then combine them at link time? That sounds like
an awkward thing to retrofit onto a library's existing
build system.

--
Greg
--
http://mail.python.org/mailman/listinfo/python-list


Re: Help with sets

2010-10-14 Thread Gregory Ewing

Steve Howell wrote:


That was the original context of my comment.  The term "symmetry" gets
used a couple times in that PEP, and I think we're in violent
agreement that the concept of "symmetry" is wishy-washy at best.

Here is just one example from the PEP:

  The symmetry between "if x in y" and "for x in y"
  suggests that it should iterate over keys.


Maybe "analogy" or "similarity" would be a better word here.

--
Greg
--
http://mail.python.org/mailman/listinfo/python-list


ENVIRONMENT Variable expansion in ConfigParser

2010-10-14 Thread pikespeak
Hi,
I am using ConfigParser module and would like to know if it has the
feature to autoexpand environment variables.
For example currently, I have the below section in config where
hostname is hardcoded.
I would like it to be replaced with the values from the env variable
os.envion['HOSTNAME'] so that I can remove hardcoding and my config
will be host independent.

[section]
host.name=devserver1.company.com

to be replaced with something like this

[section]
host.name=os.environ['HOSTNAME']

I know of the interpolation feature of ConfigParser, %(foo)s, where the
value of foo will be expanded..but not sure if this can be tweaked to
my needs.

Any ideas please.
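
The closest thing I can think of so far, untested, is to seed the
parser's defaults from os.environ so that the %(...)s interpolation
picks the value up (the .ini name below is made up):

import os
import ConfigParser          # 'configparser' in Python 3

# myconfig.ini would contain:
# [section]
# host.name = %(HOSTNAME)s
parser = ConfigParser.SafeConfigParser(defaults=os.environ)
parser.read("myconfig.ini")
print parser.get("section", "host.name")

The downside is that every environment variable then shows up as a
default option in every section.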

srini

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Does everyone keep getting recruiting emails from google?

2010-10-14 Thread Philip Semanchuk

On Oct 14, 2010, at 11:49 AM, Daniel Fetchinson wrote:

> I keep getting recruiting emails from charlesngu...@google.com about
> working for google as an engineer.


I know what you mean. Apparently Charles Nguyen doesn't realize that I already 
get no end of emails and phone calls from Sergei and Larry begging me to come 
work with them. They won't take a flat "no" over the phone but I can't stand 
another trip to California on the private jet (the Pouilly-Fuissé isn't 
properly chilled but it's better than the red which isn't fit for vinegar). The 
yacht trips are getting old too, stuck on the boat with Eric yammering on about 
stock options and tacking like a nervous maniac so I nearly get killed by the 
boom every five minutes. I'd rather be knocked unconscious into the Pacific 
than hear that "unique opportunity" speech again.

FWIW, I got one email from Charles Nguyen and answered  with a "thanks but no 
thanks". I have not heard from him again. He's perhaps casting too broad a net 
but the email I got looked legitimately from Google, judging by the headers.


Cheers
Philip
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PyCharm

2010-10-14 Thread alex23
Jeffrey Gaynor  wrote:
> Certainly give it a shot. The only other IDE I found that was
> remotely close to it was Komodo which costs a lot more
> (Jetbrains is offering a 50% off coupon as a promotional offer
> for a while.)

I recently tried out PyCharm in anger after something (I forget what)
in Komodo was bothering me. In Komodo's defence, it supports Perl,
PHP, Python & Ruby, two of which I use daily, so replacing it would
require my buying two IDEs: PyCharm & PHPStorm.

It would just be a damn sight easier if I didn't have to suffer under
PHP :(
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Get alternative char name with unicodedata.name() if no formal one defined

2010-10-14 Thread John Machin
On Oct 14, 7:25 pm, Dirk Wallenstein  wrote:
> Hi,
> I'd like to get control char names for the first 32 codepoints, but they
> apparently only have an alias and no official name. Is there a way to
> get the alternative character name (alias) in Python?
>

AFAIK there is no programatically-available list of those names. Try
something like:

name = unicodedata.name(x, some_default) if x > u"\x1f" else ("NULL",
etc etc, "UNIT SEPARATOR")[ord(x)]

or similarly with a prepared dict:

C0_CONTROL_NAMES = {
u"\x00": "NULL",
# etc
u"\x1f": "UNIT SEPARATOR",
}

name = unicodedata.name(x, some_default) if x > u"\x1f" else
C0_CONTROL_NAMES[x]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python/c api

2010-10-14 Thread alex23
On Oct 15, 5:53 am, de...@web.de (Diez B. Roggisch) wrote:
> For example Ableton Live, an audio sequencer.

I _have_ Live and I didn't realise this :O Thanks!

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with sets

2010-10-14 Thread Steve Howell
On Oct 14, 4:09 pm, Gregory Ewing  wrote:
> Steve Howell wrote:
> > That was the original context of my comment.  The term "symmetry" gets
> > used a couple times in that PEP, and I think we're in violent
> > agreement that the concept of "symmetry" is wishy-washy at best.
>
> > Here is just one example from the PEP:
>
> >       The symmetry between "if x in y" and "for x in y"
> >       suggests that it should iterate over keys.
>
> Maybe "analogy" or "similarity" would be a better word here.
>

Agreed.  "Analogy" seems particularly appropriate.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 249 (database api) -- executemany() with iterable?

2010-10-14 Thread Steve Howell
On Oct 13, 8:32 pm, Lawrence D'Oliveiro  wrote:
> In message
> , Steve
>
> Howell wrote:
> > Bulk-load strategies usually solve one or more of these problems:
>
> >  network latency
>
> That’s not an issue. This is a bulk operation, after all.
>
> >  index maintenance during the upload
>
> There are usually options to temporarily turn this off.
>
> >  parsing of the SQL
>
> Not usually a problem, as far as I can tell.
>
> >  reallocating storage
>
> You mean for the SQL? Again, not usually a problem, as far as I can tell.

If you are avoiding network latency and turning off indexes, then you
are using some kind of a bulk-load strategy.

If you are not concerned about parsing costs or storage churn, then
you are simply evaluating the costs of a non-bulk-oriented strategy
and determining that they are minimal for your needs, which is fine.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Boolean value of generators

2010-10-14 Thread Steven D'Aprano
On Thu, 14 Oct 2010 14:13:30 -0500, Tim Chase wrote:

>> I remember thinking that Python would be better off if all generators
>> automatically cached an item, so you could test for emptiness, look
>> ahead at the next item without consuming it, etc.  This might have been
>> a good change to make in Python 3.0 (it would have broken compatibility
>> with 2.x) but it's too late now.
> 
> Generators can do dangerous things...I'm not sure I'd *want* to have
> Python implicitly cache generators without an explicit wrapper to
> request it:

I'm sure that I DON'T want it. It would be a terrible change.

(1) Generators are lightweight. Adding overhead to cache the next value 
adds value only for a small number of uses, but adds weight to all 
generators.

(2) Generators are simple. There is a clear and obvious distinction 
between "create the generator object by calling the generator function" 
and "call the generated values by iterating over the generator object". 
Admittedly the language is a bit clumsy, but the concept is simple -- you 
have a generator function that you call, and it returns an iterable 
object that yields values. Simple and straightforward. Caching blurs this 
distinction -- calling the function also produces the first object, 
caching it and hiding any StopIteration.

(3) Generators with side-effects. I know, I know, if you write functions 
with side-effects, you're in a state of sin already, but there's no need 
for Python to make it worse.

(4) Expensive generators. The beauty of generators is that they produce 
values on demand. Making all generators cache their first value means 
that you pay that cost even if you end up never needing the first value.

(5) Time dependent output of generators. The values yielded can depend on 
the time at which you invoke the generator. Caching plays havoc with that.


None of this is meant to say "Never cache generator output", that would 
be a silly thing to say. If you need an iterator with look-ahead, that 
knows whether it is empty or not, go right ahead and use one. But don't 
try to force it on everyone.



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Compiling as 32bit on MacOSX

2010-10-14 Thread Ned Deily
In article <8hpgn7fho...@mid.individual.net>,
 Gregory Ewing  wrote:

> Ned Deily wrote:
> > Perhaps you're 
> > calling ld(1) directly?  To link multiple-arch executables (etc), the 
> > Apple gcc driver does the dirty work of calling ld multiple times and 
> > lipo-ing the results.
> 
> Is this something that only works at link time, then? The
> gcc man page says:
> 
>"Multiple options work, and
>direct the compiler to produce "universal" binaries including
>object code for each architecture specified with -arch."
> 
>  From this I was hoping to be able to do
> 
> gcc -arch i386 -arch x86_64 -c foo.c
> 
> and get dual-architecture .o files that could then be linked
> into dual-architecture libraries. But if I do the above, I
> just get an x86_64 .o file.
> 
> Are you saying that I need to compile separate sets of .o
> files and then combine them at link time? That sounds like
> an awkward thing to retrofit onto a library's existing
> build system.

No, it's supported both at compile and link time.  Here's a compile on 
10.6:

$ /usr/bin/gcc --version
i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664)
Copyright (C) 2007 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is 
NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR 
PURPOSE.

$ /usr/bin/gcc -arch i386 -arch x86_64 -c foo.c
$ file foo.o
foo.o: Mach-O universal binary with 2 architectures
foo.o (for architecture i386):   Mach-O object i386
foo.o (for architecture x86_64): Mach-O 64-bit object x86_64

BTW, if, for some reason, you *do* need to extract a particular 
architecture from a multi-arch file, you can use lipo to do it:

$ lipo foo.o -output foo_32.o -extract i386
$ lipo foo.o -output foo_64.o -extract x86_64
$ file foo_32.o
foo_32.o: Mach-O universal binary with 1 architecture
foo_32.o (for architecture i386):   Mach-O object i386
$ file foo_64.o
foo_64.o: Mach-O universal binary with 1 architecture
foo_64.o (for architecture x86_64): Mach-O 64-bit object x86_64

-- 
 Ned Deily,
 n...@acm.org

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PyCharm

2010-10-14 Thread Brian Jones
I've been using PyCharm since the very first EAP releases, and downloaded
1.0 yesterday. I've tested out so many IDEs for use with Python, but PyCharm
is the only one that gives me everything I want with just about zero work.
Here's what won me over:

1. I can set up nose and coverage as a Run configuration, so I can run tests
the way I want to with the click of the 'Run' button.

2. Best vim emulation of any IDE ever, and I'll note here that I used Komodo
for some time.

3. The code inspections actually have saved me a good bit of work. In
addition, they've helped me keep my code cleaner: it identifies unused
variables, methods, and imports very well. It also has an autoimport
feature, so if you reference a library you don't have imported yet, it'll
suggest a lib to import, which you can accept with a keystroke. If more than
one lib is a possibility, the UI for choosing which one to import is nicely
done.

4. If I just want to create a file, I can. If I want to open a directory I
can. It's not shoving its worldview down my throat by making me start
whatever its notion of a "project" is. Yes, it puts a '.idea' directory in
directories it opens, but I haven't seen that become an issue.

5. Git integration: the git integration piece might be the one piece that
they got right early on: I never had any problems with it.

6. You can see a diff against local history, the current branch version,
etc., pretty much no matter where you are in the interface. If you decide to
push changes, and when the commit message dialog comes up you find yourself
forgetting what you did, you can get a diff launched right from that window,
and it's a decent diff interface.

There are a few things I *don't* like about it, but they're pretty minor:

1. Only one default theme choice. It'd be nice to supply multiple themes and
let me edit one instead of creating one from scratch.

2. docstrings: I find their docstring handling to be a little clunky. For
one example, if you do this before declaring any classes in your module:
"""
this is a docstring
"""
It'll highlight that and tell you "this code appears to do nothing".

3. The Python interpreter is a little awkward. It's pretty obvious that
there are two separate windows for input and output, and things are just
being piped back and forth. There's a noticeable lag, and it's kind of
annoying for someone who types fast and is used to the cli interpreter. As
it stands, the cli interpreter is about the only thing I actually leave
PyCharm for.

4. If you have a popup dialog open, the entire rest of the application is
dead, so you can't scroll or switch files in your code pane when a dialog
comes up. So when you want to know why that extra file is in your commit,
you'll have to cancel out or run the diff tool.

Overall, though, this is the best IDE for Python I've seen, and I'm sure
it'll get even better with time.

hth.
brian


On Thu, Oct 14, 2010 at 8:49 PM, alex23  wrote:

> Jeffrey Gaynor  wrote:
> > Certainly give it a shot. The only other IDE I found that was
> > remotely close to it was Komodo which costs a lot more
> > (Jetbrains is offering a 50% off coupon as a promotional offer
> > for a while.)
>
> I recently tried out PyCharm in anger after something (I forget what)
> in Komodo was bothering me. In Komodo's defence, it supports Perl,
> PHP, Python & Ruby, two of which I use daily, so replacing it would
> require my buying two IDEs: PyCharm & PHPStorm.
>
> It would just be a damn sight easier if I didn't have to suffer under
> PHP :(
> --
> http://mail.python.org/mailman/listinfo/python-list
>



-- 
Brian K. Jones
My Blog  http://www.protocolostomy.com
Follow me  http://twitter.com/bkjones
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Boolean value of generators

2010-10-14 Thread Paul Rubin
Steven D'Aprano  writes:
> (4) Expensive generators. The beauty of generators is that they produce 
> values on demand. Making all generators cache their first value means 
> that you pay that cost even if you end up never needing the first value.

You wouldn't generate the cached value ahead of time.  You'd just
remember the last generated value so that you could use it again.
Sort of like getc/ungetc.

An intermediate measure might be to have a stdlib wrapper that added
caching like this to an arbitrary generator.  I've written such things a
few times in various clumsy ways.  Having the caching available in the C
code would eliminate a bunch of indirection.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Boolean value of generators

2010-10-14 Thread Steven D'Aprano
On Thu, 14 Oct 2010 14:43:29 -0400, Albert Hopkins wrote:


> There may be times, however, that a generator may "know" that it
> doesn't/isn't/won't generate any values, and so you may be able to
> override boolean evaluation.  Consider this example:
[snip example]


This is a good example, but it's not a generator, it's an iterator :)

The two are similar in that they both produce values lazily, as required, 
but generators are a special case of iterators: generators are a special 
form of the function syntax which returns a lightweight and simple 
iterator. Iterators are more general. They're an interface rather than a 
type, so any class you build which matches the iterator protocol is an 
iterator, but only a function with a yield is a generator. 

Other than this nit-pick, your idea of making a custom iterator with a 
__nonzero__ method is an excellent one.
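
For the record, a bare-bones sketch of what such an iterator might look 
like (the class name is invented; Python 2 spelling, where __nonzero__ 
plays the role Python 3 gives to __bool__):

class CountDown(object):
    # A hand-rolled iterator (not a generator) that knows up front
    # whether it still has anything left to yield.
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        return self
    def next(self):            # __next__ in Python 3
        if self.n <= 0:
            raise StopIteration
        self.n -= 1
        return self.n + 1
    def __nonzero__(self):     # __bool__ in Python 3
        return self.n > 0

With that, bool(CountDown(0)) is False and bool(CountDown(3)) is True, 
without consuming a single value.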


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Boolean value of generators

2010-10-14 Thread Tim Chase

On 10/14/10 20:48, Steven D'Aprano wrote:

(3) Generators with side-effects. I know, I know, if you write functions
with side-effects, you're in a state of sin already, but there's no need
for Python to make it worse.

(4) Expensive generators. The beauty of generators is that they produce
values on demand. Making all generators cache their first value means
that you pay that cost even if you end up never needing the first value.


I'd consider "expensive generators" a subset of (or at least intersecting 
with) "generators with side-effects"... that side effect being the time 
consumed.  Either way, I'm pretty firmly with you in the "don't do it by 
default; let me explicitly wrap it if I want it" camp.


-tkc



--
http://mail.python.org/mailman/listinfo/python-list


Re: Boolean value of generators

2010-10-14 Thread Steve Howell
On Oct 14, 7:08 pm, Paul Rubin  wrote:
> Steven D'Aprano  writes:
> > (4) Expensive generators. The beauty of generators is that they produce
> > values on demand. Making all generators cache their first value means
> > that you pay that cost even if you end up never needing the first value.
>
> You wouldn't generate the cached value ahead of time.  You'd just
> remember the last generated value so that you could use it again.
> Sort of like getc/ungetc.
>
> An intermediate measure might be to have a stdlib wrapper that added
> caching like this to an arbitrary generator.  I've written such things a
> few times in various clumsy ways.  Having the caching available in the C
> code would eliminate a bunch of indirection.

Is there an idiomatic way in Python to wrap a generator with a getc/
ungetc mechanism?

I know Paul is not alone in having written such things in various clumsy
ways.

This is my own clumsy attempt, but it seems like there should be a
simpler way to achieve what I'm doing.

def abc():
    yield 'a'
    yield 'b'
    yield 'c'

for letter in abc():
    print letter

class Wrap:
    def __init__(self, g):
        self.g = g
        self.use_cached = False

    def get(self):
        if self.use_cached:
            self.use_cached = False
            return self.value
        self.value = self.g.next()
        return self.value

    def unget(self):
        if self.use_cached:
            raise Exception('only one unget allowed')
        self.use_cached = True


w = Wrap(abc())
print w.get()
w.unget()
print w.get()
print w.get()
for letter in w.g:
    print letter
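
(A possibly less clumsy sketch, assuming a single level of pushback is all
that's ever needed, would be to splice the consumed value back on with
itertools.chain:

import itertools

def unget(value, iterator):
    # Rebuild an iterator with `value` back at the front.
    return itertools.chain([value], iterator)

g = iter('abc')
first = g.next()
g = unget(first, g)
print list(g)    # ['a', 'b', 'c'] again

but I'd still be curious whether there's a more standard idiom.)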

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ENVIRONMENT Variable expansion in ConfigParser

2010-10-14 Thread Rodrick Brown
How about doing something like 

host.name=%HOSTNAME%

Then when you read the value %HOSTNAME% back through your ConfigParser 
module, you do a pattern substitution of %HOSTNAME% with 
os.environ['HOSTNAME'].
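
Something like this rough sketch (the helper name expand_env is made up,
and config stands for your ConfigParser instance):

import os
import re

def expand_env(value):
    # Swap %NAME% tokens for the matching environment variable,
    # leaving the token alone if the variable isn't set.
    return re.sub(r'%(\w+)%',
                  lambda m: os.environ.get(m.group(1), m.group(0)),
                  value)

# host = expand_env(config.get('section', 'host.name'))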

Sent from my iPhone 4.

On Oct 14, 2010, at 7:57 PM, pikespeak  wrote:

> Hi,
> I am using ConfigParser module and would like to know if it has the
> feature to autoexpand environment variables.
> For example currently, I have the below section in config where
> hostname is hardcoded.
> I would like it to be replaced with the values from the env variable
> os.environ['HOSTNAME'] so that I can remove hardcoding and my config
> will be host independent.
> 
> [section]
> host.name=devserver1.company.com
> 
> to be replaced with something like this
> 
> [section]
> host.name=os.environ['HOSTNAME']
> 
> I know of the interpolation feature of ConfigParser, %(foo)s, where the
> value of foo will be expanded... but not sure if this can be tweaked to
> my needs.
> 
> Any ideas please.
> 
> srini
> 
> -- 
> http://mail.python.org/mailman/listinfo/python-list
-- 
http://mail.python.org/mailman/listinfo/python-list


extract method with generators

2010-10-14 Thread Steve Howell
Is there a way to extract code out of a generator function f() into
g() and be able to have f() yield g()'s result without this idiom?:

  for g_result in g():
      yield g_result

It feels like a clumsy hindrance to refactoring, to have to introduce
a local variable and a loop.

Here is a program that illustrates what I'm trying to achieve--
basically, I want some kind of general mechanism to exhaust one
generator from another.

This was tested on python2.6.  The failed attempts all simply produce
the number 50 and stop.

def unfactored():
    # toy example, obviously
    # pretend this is a longer method in need of
    # extract-method
    yield 50
    for num in [100, 200]:
        yield num+1
        yield num+2
        yield num+3

print 'Works fine:'
for x in unfactored():
    print x
print

def extracted_submethod(num):
    yield num+1
    yield num+2
    yield num+3

print 'quick test'
for x in extracted_submethod(100):
    print x
print

def refactored_original_method():
    yield 50
    for num in [100, 200]:
        # naively delegate
        extracted_submethod(num)

# the next does not do what you expect
print 'try naive'
for x in refactored_original_method():
    print x
print 'DOH! that is all?'
print

# this feels clumsy
def clumsy_refactored_original_method():
    yield 50
    for num in [100, 200]:
        for x in extracted_submethod(num):
            yield x

print 'Works fine:'
for x in clumsy_refactored_original_method():
    print x
print

# try to generalize and fail again
def exhaust_subgenerator(g):
    for x in g:
        yield x

def f():
    yield 50
    for num in [100, 200]:
        exhaust_subgenerator(extracted_submethod(num))

print 'Try again'
for x in f():
    print x
print 'DOH! that is all?'
print
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: toy list processing problem: collect similar terms

2010-10-14 Thread Xah Lee

On Sep 25, 9:05 pm, Xah Lee  wrote:
> here's a interesting toy list processing problem.
>
> I have a list of lists, where each sublist is labelled by
> a number. I need to collect together the contents of all sublists
> sharing
> the same label. So if I have the list
>
> ((0 a b) (1 c d) (2 e f) (3 g h) (1 i j) (2 k l) (4 m n) (2 o p) (4 q
> r) (5 s t))
>
> where the first element of each sublist is the label, I need to
> produce:
>
> output:
> ((a b) (c d i j) (e f k l o p) (g h) (m n q r) (s t))
> ...

thanks all for the many interesting solutions. I've been so busy in the
past month on other computing issues and writing, and never got around to
looking at this thread. I think eventually i will, but for now i've just
made a link on my page to point to here.

now we have solutions in perl, python, ruby, common lisp, scheme lisp,
mathematica. I myself would also be interested in javascript; perhaps
i'll write one soon. If someone would go thru all these solutions and
make a good summary with consistent format/names for each solution...
that'd be very useful i think. (and will learn a lot, which is how i
find this interesting)

PS here's a good site that does very useful comparisons for those
learning multiple langs.

* 〈Lisp: Common Lisp, Scheme, Clojure, Emacs Lisp〉
http://hyperpolyglot.wikidot.com/lisp
* 〈Scripting Languages: PHP, Perl, Python, Ruby, Smalltalk〉
http://hyperpolyglot.wikidot.com/scripting
* 〈Scripting Languages: Bash, Tcl, Lua, JavaScript, Io〉
http://hyperpolyglot.wikidot.com/small
* 〈Platform Languages: C, C++, Objective C, Java, C#〉
http://hyperpolyglot.wikidot.com/c
* 〈ML: Standard ML, OCaml, F#, Scala, Haskell〉 
http://hyperpolyglot.wikidot.com/ml

 Xah ∑ http://xahlee.org/ ☄
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: extract method with generators

2010-10-14 Thread Cameron Simpson
On 14Oct2010 20:11, Steve Howell  wrote:
| Is there a way to extract code out of a generator function f() into
| g() and be able to have f() yield g()'s result without this idiom?:
| 
|   for g_result in g():
| yield g_result
| 
| It feels like a clumsy hindrance to refactoring, to have to introduce
| a local variable and a loop.

This sounds like the "yield from" proposal that had discussion some
months ago. Your above idiom would become:

  yield from g()

See PEP 380:
  http://www.python.org/dev/peps/pep-0380/
Short answer, not available yet.

A Google search on:

  python pep "yield from"

found some implementations at activestate, such as this:

  
http://code.activestate.com/recipes/577153-yet-another-python-implementation-of-pep-380-yield/

which lets you decorate an existing generator so that you can write:

  yield _from(gen())

where gen() is the decorated generator.

Cheers,
-- 
Cameron Simpson  DoD#743
http://www.cskk.ezoshosting.com/cs/

What I want is Facts. Teach these boys and girls nothing but Facts.  Facts
alone are wanted in life. Plant nothing else, and root out everything else.
- Charles John Huffam Dickens, 1812-1870, Hard Times [1854]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 249 (database api) -- executemany() with iterable?

2010-10-14 Thread Lawrence D'Oliveiro
In message , M.-A. 
Lemburg wrote:

> However, even with iterables, please keep in mind that pushing
> the data row-per-row over a network does not result in good
> performance, so using an iterable will make you update slower.
> 
> cursor.executemany() is meant to allow the database module
> to optimize sending bulk data to the database and ideally,
> it will send the whole sequence to the database in one go.

You seem to be assuming that using an iterator precludes buffering.

What’s wrong with evaluating the iterator to produce as many records as the 
API implementation finds convenient to send at once?
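
For instance (just a sketch, with a made-up batch size and a hypothetical
_send_bulk helper standing in for whatever the driver does internally):

from itertools import islice

def batches(iterable, size=1000):
    # Pull up to `size` rows at a time from any iterable,
    # without ever materializing the whole thing in memory.
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            break
        yield chunk

# for chunk in batches(rows):
#     _send_bulk(cursor, statement, chunk)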
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with sets

2010-10-14 Thread Lawrence D'Oliveiro
In message 
<12fcd67a-774d-42f0-851a-9c3497df9...@s24g2000pri.googlegroups.com>, Steve 
Howell wrote:

> On Oct 14, 4:09 pm, Gregory Ewing  wrote:
>> Steve Howell wrote:
>>
>> Maybe "analogy" or "similarity" would be a better word here.
> 
> Agreed.  "Analogy" seems particularly appropriate.

Except that, while analogies can be handy for illustrating things, they are 
useless for actually supporting arguments.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python/c api

2010-10-14 Thread Lawrence D'Oliveiro
In message , Diez B. Roggisch wrote:

> ... and a lot of embedding python into a software.

Let me mention some notable Free Software that does this: Blender, GIMP and 
Scribus, among ones I’ve messed about with recently. Makes an amazing amount 
of power available.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Class-level variables - a scoping issue

2010-10-14 Thread Lawrence D'Oliveiro
In message <8hl2jvfb1...@mid.individual.net>, Gregory Ewing wrote:

> Lawrence D'Oliveiro wrote:
> 
>> If you can’t do it statically, do it dynamically.
> 
> But how can that be done without seeing into the future?

“Dynamically” is when that “future” becomes the present, so you can see it 
right in front of you, here, now.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: EOF while scanning triple-quoted string literal

2010-10-14 Thread Lawrence D'Oliveiro
In message , Rhodri James wrote:

> ... frankly putting arbitrary binary into a literal string is rather
> asking for something like this to come and bite you.

It normally works fine on sensible OSes.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with sets

2010-10-14 Thread Steve Howell
On Oct 14, 9:22 pm, Lawrence D'Oliveiro  wrote:
> In message
> <12fcd67a-774d-42f0-851a-9c3497df9...@s24g2000pri.googlegroups.com>, Steve
>
> Howell wrote:
> > On Oct 14, 4:09 pm, Gregory Ewing  wrote:
> >> Steve Howell wrote:
>
> >> Maybe "analogy" or "similarity" would be a better word here.
>
> > Agreed.  "Analogy" seems particularly appropriate.
>
> Except that, while analogies can be handy for illustrating things, they are
> useless for actually supporting arguments.

And only a fool would try to support his argument by actually
illustrating a point, right? ;)



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: extract method with generators

2010-10-14 Thread Steve Howell
On Oct 14, 8:45 pm, Cameron Simpson  wrote:
> On 14Oct2010 20:11, Steve Howell  wrote:
> | Is there a way to extract code out of a generator function f() into
> | g() and be able to have f() yield g()'s result without this idiom?:
> |
> |   for g_result in g():
> |     yield g_result
> |
> | It feels like a clumsy hindrance to refactoring, to have to introduce
> | a local variable and a loop.
>
> This sounds like the "yield from" proposal that had discussion some
> months ago. Your above idiom would become:
>
>   yield from g()
>
> See PEP 380:
>  http://www.python.org/dev/peps/pep-0380/
> Short answer, not available yet.
>

Very interesting, thanks.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PyCharm

2010-10-14 Thread Vladimir Ignatov
Hello,

> One thing that bugs me in refactoring though is that renaming a method or 
> variable does not necessarily work. It's supposed to track down all 
> references and correctly change them, but it tends to be hit or miss.

Since we've got a truly dynamic language here (Python), I don't see how a
"perfect" refactoring could be done even in theory. PyDev tends to do the
very same things. Okay, at least that saves me $199. Not bad.

Vladimir Ignatov
-- 
http://mail.python.org/mailman/listinfo/python-list


Weird try-except vs if behavior

2010-10-14 Thread James Matthews
Hi,

I have this code http://gist.github.com/627687 (I don't like pasting code
into the mailing list). I am wondering why the try/except version is taking
longer. Since the if statement has to check its condition on every iteration
of the loop (1000 times), shouldn't it be the slower one?


James

-- 
http://www.goldwatches.com

--
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Boolean value of generators

2010-10-14 Thread Arnaud Delobelle
Paul Rubin  writes:

> Steven D'Aprano  writes:
>> (4) Expensive generators. The beauty of generators is that they produce 
>> values on demand. Making all generators cache their first value means 
>> that you pay that cost even if you end up never needing the first value.
>
> You wouldn't generate the cached value ahead of time.  You'd just
> remember the last generated value so that you could use it again.
> Sort of like getc/ungetc.
>
> An intermediate measure might be to have a stdlib wrapper that added
> caching like this to an arbitrary generator.  I've written such things a
> few times in various clumsy ways.  Having the caching available in the C
> code would eliminate a bunch of indirection.

I've done such a thing myself a few times.  I remember posting on
python-ideas a while ago (no time to find the thread ATM).  My
suggestion was to add a function peekable(it) that returns an iterator
with a peek() method, whose behaviour is exactly the one that you
describe (i.e. similar to getc/ungetc).  I also suggested that iterators
could optionally implement a peek() method themselves, in which case
peekable(it) would return the iterator without modification.  For example,
list_iterators, str_iterators and other iterators over sequences could
implement peek() without any cost.  I don't recall that this proposal
gained much traction!
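
For what it's worth, a bare-bones sketch of such a wrapper (not the exact
code from python-ideas; Python 2 spelling):

class peekable(object):
    _missing = object()

    def __init__(self, it):
        self._it = iter(it)
        self._head = self._missing

    def __iter__(self):
        return self

    def peek(self):
        # Look at the next value without consuming it.
        if self._head is self._missing:
            self._head = self._it.next()
        return self._head

    def next(self):            # __next__ in Python 3
        if self._head is not self._missing:
            value, self._head = self._head, self._missing
            return value
        return self._it.next()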

-- 
Arnaud
-- 
http://mail.python.org/mailman/listinfo/python-list


  1   2   >