Re: Introduction

2014-06-03 Thread Chris
Hi,

On 06/03/2014 12:01 AM, Hisham Mughal wrote:
 please tell me about books for Python,
 I am a beginner of this language.

the most important commands are in A Byte of Python [1]. This eBook
isn't sufficient for programming, but it's a nice introduction.

I bought Learning Python by Mark Lutz. It's not bad, but I think it's
a bit too narrative.


[1] http://www.swaroopch.com/notes/python/

-- 
Christian
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: ImportError: No module named _gdb

2014-06-03 Thread dieter
Marcelo Sardelich msardel...@gmail.com writes:

 Dieter, thanks for your prompt reply.
 I installed a pre-built version of Python.
 As you said, probably something is missing.

 I tried to google packages related to gdb, but had no luck.

The missing part is related to the gdb-Python integration.
Look around for information about this integration - e.g.
installation instructions.

What is missing is apparently a C extension for Python. If you have
the source of this package, it may contain a setup.py.
In that case, running [sudo] python setup.py install may do everything
necessary to get a working C extension.


 Do you have any idea if it is a compiler directive?

Not for this problem.

However, using gdb usually benefits considerably from
having debugging symbols, and a system-installed Python usually
lacks those symbols. Therefore, after you have solved
the current problem, it may be worthwhile to compile your
own Python from source.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 3.2 has some deadly infection

2014-06-03 Thread Terry Reedy

On 6/3/2014 1:16 AM, Gregory Ewing wrote:

Terry Reedy wrote:

The issue Armin ran into is this. He wrote a library module that makes
sure the streams are binary.


Seems to me he made a mistake right there. A library should
*not* be making global changes like that. It can obtain
binary streams from stdin and stdout for its own use, but
it shouldn't stuff them back into sys.stdin and sys.stdout.

If he had trouble because another library did that, then
that library is broken, not Python.


I agree. The example in Armin's blog rant was an application, an empty 
unix filter (i.e. a simplified cat clone). For that example, the complex 
code he posted to show how awful Python 3 is is unneeded. When I asked 
why he did not directly use the fix in the docs, without the 
scaffolding, he switched to the 'library' module explanation.


The problem is that casual readers like Robin sometimes jump from 'In 
Python 3, it can be hard to do something one really ought not to do' to 
'Binary I/O is hard in Python 3' -- which it is not.


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: can someone explain the concept of strings (or whatever) being immutable

2014-06-03 Thread Rustom Mody
On Tuesday, June 3, 2014 10:36:37 AM UTC+5:30, Deb Wyatt wrote:
 That was just the first question.  What does immutable really mean
 if you can add items to a list? and concatenate strings?  I don't
 understand enough to even ask a comprehensible question, I guess.

It is with some pleasure that I see this question: Most people who are
clueless have no clue that they are clueless -- also called the
Dunning-Kruger effect.

Be assured that this question is much harder and more problematic than people
believe.

There are earlier discussions of this on this list, e.g.

https://groups.google.com/forum/#!topic/comp.lang.python/023NLi4XXR4

[Sorry the archive thread is too broken to quote meaningfully]

Here's a short(!) summary:
Programmers live in 2 worlds.
1. A timeless mathematical world. Philosophers call this the platonic world
after Plato's allegory of the cave:
http://en.wikipedia.org/wiki/Allegory_of_the_Cave

2. An in-time world that is called Empirical in philosophy

You cannot reject 2 because your programs run in time and produce
effects (hopefully!) in the empirical world.

You cannot reject 1 because the time at which you - the programmer -
function is in another space-time from the run-time of your program.
The very fact that you write a program means you have (been able to)
algorithmize out a complex *process* into a simpler recipe - a *program*.

Once you see the need for both worldviews 1 and 2, you will see why even the most
deft-footed trip up doing this dance. E.g. when someone says
'3 is immutable but [1,2,3] is mutable', this is not a necessary fact but
an incidental choice of Python's semantics.
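
For concreteness, here is how that incidental choice shows up at a CPython
prompt (the exact traceback wording may vary between versions):

    >>> x = 3
    >>> x = x + 1        # rebinds x to a different int object; 3 itself is untouched
    >>> lst = [1, 2, 3]
    >>> lst += [4]       # mutates the very same list object in place
    >>> lst
    [1, 2, 3, 4]
    >>> (1, 2, 3)[0] = 9 # tuples, by Python's choice, refuse in-place change
    Traceback (most recent call last):
      ...
    TypeError: 'tuple' object does not support item assignment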

Functional languages make everything immutable.

Assembly language makes everything mutable -- you can self-modify the
code containing 3 as an immediate operand in an instruction into one containing
something else.¹

However the basic (necessary not incidental) fact remains - you need to dance 
between
the two worldviews:
- platonic and empiric (in traditional philosophy lingo)
- declarative and imperative (in computer theory lingo)
- FP and OO styles (the two major fashions in programming languages)


Choose only the first, absolutely, and your program can have no effect
whatever, including writing a result to the screen.

Choose only the second, absolutely, and you can have no comprehension of your
program's semantics.

You can find this further elaborated on my blog, whose title is a summarization
of what I've written above: http://blog.languager.org/search/label/FP

--
¹Heck! Steven showed some trick to make it happen in Python also,
but I've not fathomed the black magic!
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 3 is killing Python

2014-06-03 Thread Rustom Mody
On Tuesday, June 3, 2014 11:42:30 AM UTC+5:30, jmf wrote:

 after thinking no

Yes [Also called Oui]
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Benefits of asyncio

2014-06-03 Thread Marko Rauhamaa
Paul Rubin no.email@nospam.invalid:

 Marko Rauhamaa ma...@pacujo.net writes:
 - Thread programming assumes each thread is waiting for precisely
   one external stimulus in any given state -- in practice, each
   state must be prepared to handle quite a few possible stimuli.

 Eh?  Threads typically have their own event loop dispatching various
 kinds of stimuli.

I have yet to see that in practice. The typical thread works as
follows:

while True:
while request.incomplete():
request.read() # block
sql_stmt = request.process()
db.act(sql_stmt)   # block
db.commit()# block
response = request.ok_response()
while response.incomplete():
response.write()   # block

The places marked with the block comment are states with only one
valid input stimulus.

 Have threads communicate by message passing with immutable data in the
 messages, and things tend to work pretty straightforwardly.

Again, I have yet to see that in practice. It is more common, and
naturally enforced, with multiprocessing.

 Having dealt with some node.js programs and the nest of callbacks they
 morph into as the application gets more complicated, threads have
 their advantages.

If threads simplify an asynchronous application, that is generally done
by oversimplifying and reducing functionality.

Yes, a nest of callbacks can get messy very quickly. That is why you
need to be very explicit with your states. Your class needs to have a
state field named state with clearly named state values.
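
For illustration, a minimal sketch of such a class (names invented for the
example; no particular framework implied):

    class Session:
        # Explicit, named states keep the callback nest legible.
        READING, QUERYING, WRITING, DONE = range(4)

        def __init__(self):
            self.state = Session.READING
            self.buffer = b""

        def on_data(self, data):             # stimulus: bytes arrived on the socket
            if self.state != Session.READING:
                return                       # wrong state for this stimulus; ignore or log
            self.buffer += data
            if self.buffer.endswith(b"\n"):  # stand-in for "request is complete"
                self.state = Session.QUERYING
                self.start_query()

        def start_query(self):               # would issue a non-blocking DB call
            pass

        def on_query_done(self, result):     # stimulus: the DB answered
            self.state = Session.WRITING
            # queue the response for non-blocking writing here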


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Lock Windows Screen GUI using python

2014-06-03 Thread Jaydeep Patil
On Tuesday, 3 June 2014 10:39:31 UTC+5:30, Ian wrote:
 On Mon, Jun 2, 2014 at 10:28 PM, Jaydeep Patil patil.jay2...@gmail.com wrote:

  Dear all,

  Can we lock the Windows screen GUI while the program runs, and unlock the
  screen GUI when the program finishes?

 If you mean can you programmatically bring up the Windows lock screen,
 then you can do this:

 import ctypes
 ctypes.windll.user32.LockWorkStation()

 The only way to unlock it is for the user to log in.

 If you mean something else, you'll have to be more specific.

Hi Ian,

Currently I am doing some automation of Excel in Python. It reads the data and
plots a number of graphs, and takes more than 20 minutes. While my Python
program is running, if the user clicks on Excel, an error occurs.

So I just want to lock the GUI, not the workstation.

I hope you understand this.

Let me know if you have any idea about it.


Regards
Jaydeep Patil
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Lock Windows Screen GUI using python

2014-06-03 Thread Ian Kelly
On Tue, Jun 3, 2014 at 12:40 AM, Jaydeep Patil patil.jay2...@gmail.com wrote:
 Hi Ian,

 Currently I am doing some automation of Excel in Python. It reads the data
 and plots a number of graphs, and takes more than 20 minutes. While my
 Python program is running, if the user clicks on Excel, an error occurs.

 So I just want to lock the GUI, not the workstation.

 I hope you understand this.

 Let me know if you have any idea about it.

You can set the Application.Interactive property on Excel to block user input:
http://msdn.microsoft.com/en-us/library/office/ff841248(v=office.15).aspx

Example:

import win32com.client

excel_app = win32com.client.Dispatch("Excel.Application")
excel_app.Interactive = False
try:
    pass  # Do some automation...
finally:
    excel_app.Interactive = True
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: can someone explain the concept of strings (or whatever) being immutable

2014-06-03 Thread Steven D'Aprano
On Mon, 02 Jun 2014 21:06:37 -0800, Deb Wyatt wrote:

 a_string = "This is a string"
 a_string is pointing to the above string
 
 now I change the value of a_string

This is where English can lead us astray. "Change the value of a_string" 
can mean two different things. An analogy may help make it clear.

Some people give their car a name. Suppose I call my car Vera. I might 
then change the value of Vera by replacing the engine with a more 
powerful one, installing superchargers, giving the car a new paint job, 
and replacing the tape deck with an MP3 player. The name Vera still 
refers to the same physical car, but now it is quite different from how 
it was before. This is equivalent to modifying a mutable value (like a 
list) in-place.

On the other hand, I might instead trade in my car for a newer model, and 
transfer the name with it. (I'm very unimaginative when it comes to names 
-- if I had children, they would all be called Chris, and all my pets are 
called Fluffy -- even the goldfish.) Whereas before Vera referred to a 
blue Toyota, now it refers to a red Ford. This is equivalent to replacing 
the string with a new string.

In Python terms, we call that re-binding. Binding is another term for 
assignment to a name. All of these things are binding operations:

import math
from random import random
x = 23
my_string = my_string.upper()


while these are mutation operations which change the value in-place:

my_list[0] = 42
my_list.append(None)
my_list.sort()
some_dict['key'] = 'hello world'
some_dict.clear()

Only mutable objects can be changed in place: e.g. lists, dicts, sets. 
Immutable objects are fixed at creation, and cannot be changed: strings, 
ints, floats, etc.

Some objects fall into a kind of grey area. Tuples are immutable, but 
tuples can include mutable objects inside them, and they remain mutable 
even inside the tuple.

t = (1, 2, [])  # tuple is fixed
t[2] = [1]  # fails, because you're trying to mutate the tuple
t[2].append(1)  # succeeds, because you're mutating the list inside t


 a_string = "This string is different"
 I understand that now a_string is pointing to a different string than it
 was before, in a different location.
 
 my question is what happens to the original string??  Is it still in
 memory somewhere, nameless? 
 That was just the first question.  What does immutable really mean if
 you can add items to a list? and concatenate strings?  I don't
 understand enough to even ask a comprehensible question, I guess.

No, it's an excellent question!

When an object is *unbound*, does it float free in memory? In principle, 
it could, at least for a little while. In practice, Python will recognise 
that the string is not being used for anything, and reclaim the memory 
for it. That's called garbage collection. There are no promises made 
about when that happens though. It could be instantly, or it could be in 
an hour. It depends on the specific version and implementation of Python.

(CPython, the standard version, will almost always garbage collect 
objects nearly instantly. Jython, which is Python on the Java Virtual 
Machine, only garbage collects objects every few seconds. So it does 
vary.)


In the case of lists, we've seen that a list is mutable, so you can 
append items to lists. In the case of string concatenation, strings are 
immutable, so code like this:

s = "Hello"
s += " World!"

does not append to the existing string, but creates a new string, "Hello
World!", and binds it to the name s. Then the two older strings, "Hello"
and " World!", are free to be garbage collected.
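
You can watch the rebinding happen at the interactive prompt by keeping a
second name bound to the original object:

>>> s = "Hello"
>>> t = s            # a second reference to the same string object
>>> s += " World!"   # builds a new string and rebinds the name s
>>> s
'Hello World!'
>>> t                # the original object is unchanged
'Hello'
>>> s is t
False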



-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Lock Windows Screen GUI using python

2014-06-03 Thread Jaydeep Patil
On Tuesday, 3 June 2014 12:39:38 UTC+5:30, Ian wrote:
 On Tue, Jun 3, 2014 at 12:40 AM, Jaydeep Patil patil.jay2...@gmail.com wrote:

  Hi Ian,

  Currently I am doing some automation of Excel in Python. It reads the data
  and plots a number of graphs, and takes more than 20 minutes. While my
  Python program is running, if the user clicks on Excel, an error occurs.

  So I just want to lock the GUI, not the workstation.

  I hope you understand this.

  Let me know if you have any idea about it.

 You can set the Application.Interactive property on Excel to block user input:
 http://msdn.microsoft.com/en-us/library/office/ff841248(v=office.15).aspx

 Example:

 import win32com.client

 excel_app = win32com.client.Dispatch("Excel.Application")
 excel_app.Interactive = False
 try:
     pass  # Do some automation...
 finally:
     excel_app.Interactive = True


Hi Ian,

Thanks for sharing the valuable info.

I have another query.

We can now block user input. But in my automation there is copy & paste work
going on continuously in Excel before plotting the graphs.

During the copy & paste of Excel data, if the user by mistake does some copy &
paste operation outside Excel (e.g. copying and pasting in Outlook mails, the
Firefox browser, etc.), it may cause another error.

How can I control this?


Thanks & Regards
Jaydeep Patil
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: can someone explain the concept of strings (or whatever) being immutable

2014-06-03 Thread Mark Lawrence

On 03/06/2014 07:28, Rustom Mody wrote:

On Tuesday, June 3, 2014 10:36:37 AM UTC+5:30, Deb Wyatt wrote:

That was just the first question.  What does immutable really mean
if you can add items to a list? and concatenate strings?  I don't
understand enough to even ask a comprehensible question, I guess.


It is with some pleasure that I see this question: Most people who are
clueless have no clue that they are clueless -- also called the
Dunning-Kruger effect.

Be assured that this question is much harder and more problematic than people
believe.

There are earlier discussions of this on this list, e.g.

https://groups.google.com/forum/#!topic/comp.lang.python/023NLi4XXR4

[Sorry the archive thread is too broken to quote meaningfully]



This list is available in maybe six different places, so I had to 
chuckle that you picked just about the worst possible one to reference :)


--
My fellow Pythonistas, ask not what our language can do for you, ask 
what you can do for our language.


Mark Lawrence



--
https://mail.python.org/mailman/listinfo/python-list


Re: Strange Behavior

2014-06-03 Thread Peter Otten
Steven D'Aprano wrote:

 On Mon, 02 Jun 2014 20:05:29 +0200, robertw89 wrote:
 
 I invoked the wrong bug.py :/ -- it works fine now (this happens to me when
 I'm a bit tired sometimes...).
 
 Clarity in naming is an excellent thing. If you have two files called
 bug.py, that's two too many.

In the case of the OP the code is likely to be thrown away once the bug is 
found. Putting all experiments into a single folder even with the overly 
generic name bug would have been good enough to avoid the problem.
 
 Imagine having fifty files called program.py. Which one is which? How
 do you know? Programs should be named by what they do (think of Word,
 which does word processing, or Photoshop, which does photo editing), or
 when that isn't practical, at least give them a unique and memorable name
 (Outlook, Excel). The same applies to files demonstrating bugs.
 
Outlook and Excel are only good names because these are popular 
applications. If I were to name some private scripts in that style and not 
use them for a few months -- I don't think I'd have a clue what excel.py is 
meant to do. 

I have a few find_dupes, dedupe_xxx and compare_xxx scripts lying around and no 
idea which is which. So a reasonably clear name is not sufficient if there 
are other scripts that perform similar tasks.

One approach that seems to be working so far is to combine several scripts 
into one using argparse subparsers. This results in more frequent usage 
which means I can get away with short meaningless names, and infrequent 
actions are just one

$ xx -h

away.
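
For anyone who wants to copy the trick, the pattern is roughly this (a minimal
sketch; the subcommand names are invented):

import argparse

parser = argparse.ArgumentParser(prog="xx")
sub = parser.add_subparsers(dest="command")

dupes = sub.add_parser("find-dupes", help="report duplicate files")
dupes.add_argument("folder")

compare = sub.add_parser("compare", help="compare two folders")
compare.add_argument("left")
compare.add_argument("right")

args = parser.parse_args()
# dispatch on args.command here; "xx -h" then lists every subcommand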

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 3 is killing Python

2014-06-03 Thread Mark Lawrence

On 03/06/2014 07:30, Rustom Mody wrote:

On Tuesday, June 3, 2014 11:42:30 AM UTC+5:30, jmf wrote:


after thinking no


Yes [Also called Oui]



I'm very puzzled over 'thinking' -- what context was this in, as I've 
kill-filed our most illustrious resident unicode expert?


--
My fellow Pythonistas, ask not what our language can do for you, ask 
what you can do for our language.


Mark Lawrence



--
https://mail.python.org/mailman/listinfo/python-list


Re: Lock Windows Screen GUI using python

2014-06-03 Thread Mark Lawrence

On 03/06/2014 08:53, Jaydeep Patil wrote:

Would you please use the mailing list 
https://mail.python.org/mailman/listinfo/python-list or read and action 
this https://wiki.python.org/moin/GoogleGroupsPython to prevent us 
seeing double line spacing and single line paragraphs, thanks.


--
My fellow Pythonistas, ask not what our language can do for you, ask 
what you can do for our language.


Mark Lawrence



--
https://mail.python.org/mailman/listinfo/python-list


Re: Benefits of asyncio

2014-06-03 Thread Paul Sokolovsky
Hello,

On Mon, 02 Jun 2014 21:51:35 -0400
Terry Reedy tjre...@udel.edu wrote:

 To all the great responders. If anyone thinks the async intro is 
 inadequate and has a paragraph to contribute, open a tracker issue.

Not sure about the intro (where's that?), but the docs
(https://docs.python.org/3/library/asyncio.html) are pretty confusing,
and bugs have been reported with no response:
http://bugs.python.org/issue21365

 
 -- 
 Terry Jan Reedy

-- 
Best regards,
 Paul  mailto:pmis...@gmail.com
-- 
https://mail.python.org/mailman/listinfo/python-list


Automating windows media player on win7

2014-06-03 Thread Deogratius Musiige
Hi guys,

I have been fighting with automating wmplayer but with no success.
It looks to me that using the .OCX would be the best option. I found the code 
below on the net but I cannot get it to work.
I can see from device manager that a driver is started but I get no audio out.
What am I doing wrong guys?


# this program will play MP3, WMA, MID, WAV files via the WindowsMediaPlayer

from win32com.client import Dispatch

mp = Dispatch("WMPlayer.OCX")
tune = mp.newMedia("./plays.wav")
mp.currentPlaylist.appendItem(tune)
mp.controls.play()

raw_input("Press Enter to stop playing")
mp.controls.stop()

Br
Deo
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Benefits of asyncio

2014-06-03 Thread Chris Angelico
On Tue, Jun 3, 2014 at 4:36 PM, Marko Rauhamaa ma...@pacujo.net wrote:
 I have yet to see that in practice. The typical thread works as
 follows:

 while True:
 while request.incomplete():
 request.read() # block
 sql_stmt = request.process()
 db.act(sql_stmt)   # block
 db.commit()# block
 response = request.ok_response()
 while response.incomplete():
 response.write()   # block

 The places marked with the block comment are states with only one
 valid input stimulus.
 ...
 Yes, a nest of callbacks can get messy very quickly. That is why you
 need to be very explicit with your states. Your class needs to have a
 state field named state with clearly named state values.

Simple/naive way to translate this into a callback system is like this:

def request_read_callback(request, data):
request.read(data) # however that part works
if not request.incomplete():
request.process()

def write(request, data):
request.write_buffer += data
request.attempt_write() # sets up callbacks for async writing

def request.process(self): # I know this isn't valid syntax
db.act(whatever) # may block but shouldn't for long
db.commit() # ditto
write(self, response) # won't block


This works as long as your database is reasonably fast and close
(common case for a lot of web servers: DB runs on same computer as web
and application and etc servers). It's nice and simple, lets you use a
single database connection (although you should probably wrap it in a
try/finally to ensure that you roll back on any exception), and won't
materially damage throughput as long as you don't run into problems.
For a database driven web site, most of the I/O time will be waiting
for clients, not waiting for your database.
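
The try/finally mentioned above is just the standard pattern (a sketch only;
db stands for any DB-API connection, and request.process() is borrowed from
the pseudocode earlier in the thread):

def handle(request, db):
    cur = db.cursor()
    try:
        cur.execute(request.process())  # the request's SQL statement
        db.commit()
    except Exception:
        db.rollback()                   # don't leave half a transaction behind
        raise
    finally:
        cur.close()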

Getting rid of those blocking database calls means having multiple
concurrent transactions on the database. Whether you go async or
threaded, this is going to happen. Unless your database lets you run
multiple simultaneous transactions on a single connection (I don't
think the Python DB API allows that, and I can't think of any DB
backends that support it, off hand), that means that every single
concurrency point needs its own database connection. With threads, you
could have a pool of (say) a dozen or so, one per thread, with each
one working synchronously; with asyncio, you'd have to have one for
every single incoming client request, or else faff around with
semaphores and resource pools and such manually. The throughput you
gain by making those asynchronous with callbacks is quite probably
destroyed by the throughput you lose in having too many simultaneous
connections to the database. I can't prove that, obviously, but I do
know that PostgreSQL requires up-front RAM allocation based on the
max_connections setting, and trying to support 5000 connections
started to get kinda stupid.

So how do you deal with the possibility that the database will block?
Pure threading (one thread listens for clients, spin off a thread
for each client, end the thread when the client disconnects) copes
poorly; async I/O copes poorly. The thread pool copes well (you know
exactly how many connections you'll need - one per thread in the
pool), but doesn't necessarily solve the problem (you can get all
threads waiting on the database and none handling other requests).
Frankly, I think the only solution is to beef up the database so it
won't block for too long (and, duh, to solve any stupid locking
problems, because they WILL kill you :) ).

 If threads simplify an asynchronous application, that is generally done
 by oversimplifying and reducing functionality.

Which means that I disagree with this statement. In my opinion, both
simple models (pure threading and asyncio) can express the same
functionality; the hybrid thread-pool model may simplify things a bit
in the interests of resource usage; but threading does let you think
about code the same way for one client as for fifty, without any
change of functionality. Compare:

# Console I/O:
def print_menu():
    print("1: Spam")
    print("2: Ham")
    print("3: Quit")

def spam():
    print("Spam, spam, spam, spam,")
    while input("Continue? ") != "NO!":
        print("spam, spam, spam...")

def mainloop():
    print("Welcome!")
    while True:
        print_menu()
        x = int(input("What would you like? "))
        if x == 1: spam()
        elif x == 2: ham()
        elif x == 3: break
        else: print("I don't know numbers like %d." % x)
    print("Goodbye!")


I could translate this into a pure-threading system very easily:

# Socket I/O:
import threading
import consoleio
class TerminateRequest(Exception): pass
tls = threading.local()
def print(s):
    tls.socket.write(s + "\r\n") # Don't forget, most of the internet uses \r\n!

def input(prompt):
    tls.socket.write(prompt)
    while '\n' not in tls.readbuffer:
        tls.readbuffer += tls.socket.read()
    if 

Re: Automating windows media player on win7

2014-06-03 Thread Chris Angelico
On Tue, Jun 3, 2014 at 6:10 PM, Deogratius Musiige
dmusi...@sennheisercommunications.com wrote:
 Hi guys,

 I have been fighting with automating wmplayer but with no success.
 It looks to me that using the .OCX would be the best option. I found the
 code below on the net but I cannot get it to work.
 I can see from device manager that a driver is started but I get no audio
 out.
 What am I doing wrong guys?

 # this program will play MP3, WMA, MID, WAV files via the WindowsMediaPlayer
 from win32com.client import Dispatch
 mp = Dispatch("WMPlayer.OCX")
 tune = mp.newMedia("./plays.wav")
 mp.currentPlaylist.appendItem(tune)
 mp.controls.play()
 raw_input("Press Enter to stop playing")
 mp.controls.stop()

 Br
 Deo

First suggestion: post plain text to this list, not HTML. You don't
need it to look like the above. :)

Secondly: Is there a particular reason that you need to be automating
Windows Media Player specifically? I have a similar project which
works by sending keystrokes, which means it works with anything that
reacts to keys; mainly, I use it with VLC. It can invoke a movie or
audio file, can terminate the process, and can send a variety of
commands via keys. It's designed to be used on a (trusted) LAN.

Code is here:
https://github.com/Rosuav/Yosemite

Once something's invoked by the Yosemite project, it simply runs as
normal inside VLC. Easy to debug audio problems, because they're
managed the exact same way. Granted, this does assume that it's given
full control of the screen (it's designed to manage full-screen video
playback; in fact, my siblings are right now watching Toy Story 3 in
the other room, using an old laptop driving a TV via S-Video, all
managed via the above project), so it may not be ideal for background
music on a computer you use for other things; but feel free to borrow
ideas and/or code from there. (And for what it's worth, I use this as
one of my sources of BGM when I'm coding - just let it invoke the
file, then manually flip focus back to what I'm doing.)

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Lock Windows Screen GUI using python

2014-06-03 Thread Chris Angelico
On Tue, Jun 3, 2014 at 5:53 PM, Jaydeep Patil patil.jay2...@gmail.com wrote:

 During the copy & paste of Excel data, if the user by mistake does some copy &
 paste operation outside Excel (e.g. copying and pasting in Outlook mails, the
 Firefox browser, etc.), it may cause another error.

 How can I control this?

Suggestion: Don't. If you really need this level of control of the
workstation, you are going about things wrongly. This is a recipe for
fragility and horrific usability problems. There are two simple
solutions:

1) Make it really obvious that you're controlling the computer, by
putting something big across the screen - a splash screen, of sorts -
which is set to Always On Top and gives a message (see the sketch after
these suggestions). And then just trust that the user won't do anything,
because you've done everything reasonable. No further control needed.

2) Automate your code by moving stuff around on the disk, *not* by
actually working through Excel. Twenty minutes of Excel automation
should probably become a proper application that reads in some data
and generates some graphs. And it'd probably be faster, too (even if
Excel's performance is stellar, which I very much doubt, it's always
slower to work through a GUI than to do the work directly). Figure out
what you're really trying to do, and do that directly.
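
For suggestion 1, a minimal sketch of such an always-on-top splash with
tkinter (assuming a standard tkinter install; adjust the message to taste):

import tkinter as tk   # "import Tkinter as tk" on Python 2

splash = tk.Tk()
splash.attributes("-topmost", True)      # keep it above Excel
splash.attributes("-fullscreen", True)   # cover the whole screen
tk.Label(splash, text="Automation running - please don't touch anything",
         font=("Arial", 32)).pack(expand=True)
splash.update()        # draw it without entering mainloop()

# ... run the Excel automation here ...

splash.destroy()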

Also, please follow Mark's advice.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Benefits of asyncio

2014-06-03 Thread Marko Rauhamaa
Chris Angelico ros...@gmail.com:

 def request.process(self): # I know this isn't valid syntax
 db.act(whatever) # may block but shouldn't for long
 db.commit() # ditto
 write(self, response) # won't block

 This works as long as your database is reasonably fast and close

I find that assumption unacceptable.

The DB APIs desperately need asynchronous variants. As it stands, you
are forced to delegate your DB access to threads/processes.

 So how do you deal with the possibility that the database will block?

You separate the request and response parts of the DB methods. That's
how it is implemented internally anyway.

Say no to blocking APIs.

 but otherwise, you would need to completely rewrite the main code.

That's a good reason to avoid threads. Once you realize you would have
been better off with an async approach, you'll have to start over. You
can easily turn a nonblocking solution into a blocking one but not the
other way around.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Benefits of asyncio

2014-06-03 Thread Chris Angelico
On Tue, Jun 3, 2014 at 7:10 PM, Marko Rauhamaa ma...@pacujo.net wrote:
 Chris Angelico ros...@gmail.com:

 def request.process(self): # I know this isn't valid syntax
 db.act(whatever) # may block but shouldn't for long
 db.commit() # ditto
 write(self, response) # won't block

 This works as long as your database is reasonably fast and close

 I find that assumption unacceptable.

It is a dangerous assumption.

 The DB APIs desperately need asynchronous variants. As it stands, you
 are forced to delegate your DB access to threads/processes.

 So how do you deal with the possibility that the database will block?

 You separate the request and response parts of the DB methods. That's
 how it is implemented internally anyway.

 Say no to blocking APIs.

Okay, but how do you handle two simultaneous requests going through
the processing that you see above? You *MUST* separate them onto two
transactions, otherwise one will commit half of the other's work. (Or
are you forgetting Databasing 101 - a transaction should be a logical
unit of work?) And since you can't, with most databases, have two
transactions on one connection, that means you need a separate
connection for each request. Given that the advantages of asyncio
include the ability to scale to arbitrary numbers of connections, it's
not really a good idea to then say "oh, but you need that many
concurrent database connections". Most systems can probably handle a
few thousand threads without a problem, but a few million is going to
cause major issues; but most databases start getting inefficient at a
few thousand concurrent sessions.

 but otherwise, you would need to completely rewrite the main code.

 That's a good reason to avoid threads. Once you realize you would have
 been better off with an async approach, you'll have to start over. You
 can easily turn a nonblocking solution into a blocking one but not the
 other way around.

Alright. I'm throwing down the gauntlet. Write me a purely nonblocking
web site concept that can handle a million concurrent connections,
where each one requires one query against the database, and one in a
hundred of them require five queries which happen atomically. I can do
it with a thread pool and blocking database queries, and by matching
the thread pool size and the database concurrent connection limit, I
can manage memory usage fairly easily; how do you do it efficiently
with pure async I/O?
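
For reference, the thread-pool side of that is only a few lines (a sketch;
connect() and handle_request() are hypothetical stand-ins):

import queue
from concurrent.futures import ThreadPoolExecutor

POOL_SIZE = 50                          # match this to the DB's connection limit

connections = queue.Queue()
for _ in range(POOL_SIZE):
    connections.put(connect())          # hypothetical blocking DB connect()

def worker(client):
    db = connections.get()              # borrow one connection per request
    try:
        handle_request(client, db)      # plain blocking DB-API calls inside
    finally:
        connections.put(db)

pool = ThreadPoolExecutor(max_workers=POOL_SIZE)
# for every accepted client: pool.submit(worker, client)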

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Automating windows media player on win7

2014-06-03 Thread Chris Angelico
On Tue, Jun 3, 2014 at 7:42 PM, Deogratius Musiige
dmusi...@sennheisercommunications.com wrote:
 Hi Chris,

 I want to have wmplayer as part of my automated test for a headset via the
 USB HID.

 I want to be able to execute some of the following operations in my python
 script:

 1.   Play

 2.   Get playing track

 3.   Next

 4.   Get active device

 5.   …

 I am not sure if you are able to do this with your project

Play, definitely. Next, not specifically, but by sending the letter
'n' you can achieve that. Active device? Not sure what you mean there.

The one part that doesn't exist is "Get playing track". But you could
manage this the other way around, by not invoking a playlist at all.
If you run "vlc --play-and-exit some_file.wav", then when that process
terminates, the track has finished. Kill the process or send Ctrl-Q to
skip to the next track. Keep track (pun intended) of what file you've
most recently invoked.

I'm not sure how this ties in with your headset testing, though.

By the look of things, the Yosemite project isn't a "here it is, just
deploy it" solution, but you may find that there's some useful code
you can borrow.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Benefits of asyncio

2014-06-03 Thread Marko Rauhamaa
Chris Angelico ros...@gmail.com:

 Okay, but how do you handle two simultaneous requests going through
 the processing that you see above? You *MUST* separate them onto two
 transactions, otherwise one will commit half of the other's work. (Or
 are you forgetting Databasing 101 - a transaction should be a logical
 unit of work?) And since you can't, with most databases, have two
 transactions on one connection, that means you need a separate
 connection for each request. Given that the advantages of asyncio
 include the ability to scale to arbitrary numbers of connections, it's
 not really a good idea to then say oh but you need that many
 concurrent database connections. Most systems can probably handle a
 few thousand threads without a problem, but a few million is going to
 cause major issues; but most databases start getting inefficient at a
 few thousand concurrent sessions.

I will do whatever I have to. Pooling transaction contexts
(connections) is probably necessary. Point is, no task should ever
block.

I deal with analogous situations all the time, in fact, I'm dealing with
one as we speak.

 Alright. I'm throwing down the gauntlet. Write me a purely nonblocking
 web site concept that can handle a million concurrent connections,
 where each one requires one query against the database, and one in a
 hundred of them require five queries which happen atomically. I can do
 it with a thread pool and blocking database queries, and by matching
 the thread pool size and the database concurrent connection limit, I
 can manage memory usage fairly easily; how do you do it efficiently
 with pure async I/O?

Sorry, I'm going to pass. That doesn't look like a 5-liner.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Benefits of asyncio

2014-06-03 Thread Chris Angelico
On Tue, Jun 3, 2014 at 8:08 PM, Marko Rauhamaa ma...@pacujo.net wrote:
 Chris Angelico ros...@gmail.com:

 Okay, but how do you handle two simultaneous requests going through
 the processing that you see above? You *MUST* separate them onto two
 transactions, otherwise one will commit half of the other's work.

 I will do whatever I have to. Pooling transaction contexts
 (connections) is probably necessary. Point is, no task should ever
 block.

 I deal with analogous situations all the time, in fact, I'm dealing with
 one as we speak.

Rule 1: No task should ever block.
Rule 2: Every task will require the database at least once.
Rule 3: No task's actions on the database should damage another task's
state. (Separate transactions.)
Rule 4: Maximum of N concurrent database connections, for any given value of N.

The only solution I can think of is to have a task wait (without
blocking) for a database connection to be available. That's a lot of
complexity, and you know what? It's going to come to exactly the same
thing as blocking database queries will - your throughput is defined
by your database.

It's the same with all sorts of other resources. What happens if your
error logging blocks? Do you code everything, *absolutely everything*,
around callbacks? Because ultimately, it adds piles and piles of
complexity and inefficiency, and it still comes back to the same
thing: stuff can make other stuff wait.

That's where threads are simpler. You do blocking I/O everywhere, and
the system deals with the rest. Has its limitations, but sure is
simpler.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


RE: Automating windows media player on win7

2014-06-03 Thread Deogratius Musiige
Thanks for the good info Chris.
I'll look into the project. However, I hope that I can find a solution using 
OCX dispatch.
The dispatch provides all the functionality I need.

Best regards / Med venlig hilsen
Deo

-Original Message-
From: Python-list [mailto:python-list-bounces+demu=senncom@python.org] On 
Behalf Of Chris Angelico
Sent: 3. juni 2014 11:46
Cc: python-list@python.org
Subject: Re: Automating windows media player on win7

On Tue, Jun 3, 2014 at 7:42 PM, Deogratius Musiige 
dmusi...@sennheisercommunications.com wrote:
 Hi Chris,

 I want to have wmplayer as part of my automated test for a headset 
 via the USB HID.

 I want to be able to execute some of the following operations in my 
 python
 script:

 1.   Play

 2.   Get playing track

 3.   Next

 4.   Get active device

 5.   …

 I am not sure if you are able to do this with your project

Play, definitely. Next, not specifically, but by sending the letter 'n' you can 
achieve that. Active device? Not sure what you mean there.

The one part that doesn't exist is Get playing track. But you could manage 
this the other way around, by not invoking a playlist at all.
If you run vlc --play-and-exit some_file.wav, then when that process 
terminates, the track has finished. Kill the process or send Ctrl-Q to skip to 
the next track. Keep track (pun intended) of what file you've most recently 
invoked.

I'm not sure how this ties in with your headset testing, though.

By the look of things, the Yosemite project isn't a here it is, just deploy 
it solution, but you may find that there's some useful code you can borrow.

ChrisA
--
https://mail.python.org/mailman/listinfo/python-list
-- 
https://mail.python.org/mailman/listinfo/python-list


RE: Automating windows media player on win7

2014-06-03 Thread Deogratius Musiige
Hi Chris,

I want to have wmplayer as part of my automated test for a headset via the
USB HID.

I want to be able to execute some of the following operations in my python
script:

1.   Play
2.   Get playing track
3.   Next
4.   Get active device
5.   ...

I am not sure if you are able to do this with your project


Best regards / Med venlig hilsen

Deo


-Original Message-
From: Python-list [mailto:python-list-bounces+demu=senncom@python.org] On 
Behalf Of Chris Angelico
Sent: 3. juni 2014 10:58
Cc: python-list@python.org
Subject: Re: Automating windows media player on win7



-- 
https://mail.python.org/mailman/listinfo/python-list


Re: hashing strings to integers

2014-06-03 Thread Adam Funk
On 2014-05-28, Dan Sommers wrote:

 On Tue, 27 May 2014 17:02:50 +, Steven D'Aprano wrote:

 - rather than zillions of them, there are few enough of them that
  the chances of an MD5 collision is insignificant;

   (Any MD5 collision is going to play havoc with your strategy of
   using hashes as a proxy for the real string.)

 - and you can arrange matters so that you never need to MD5 hash a
   string twice.

 Hmmm...  I'll use the MD5 hashes of the strings as a key, and the
 strings as the value (to detect MD5 collisions) ...

Hey, I'm not *that* stupid.


-- 
In the 1970s, people began receiving utility bills for
-£999,999,996.32 and it became harder to sustain the 
myth of the infallible electronic brain. (Verity Stob)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: hashing strings to integers

2014-06-03 Thread Adam Funk
On 2014-05-27, Steven D'Aprano wrote:

 On Tue, 27 May 2014 16:13:46 +0100, Adam Funk wrote:

 Well, here's the way it works in my mind:
 
I can store a set of a zillion strings (or a dict with a zillion
string keys), but every time I test if new_string in seen_strings,
the computer hashes the new_string using some kind of short hash,
checks the set for matching buckets (I'm assuming this is how python
tests set membership --- is that right?), 

 So far so good. That applies to all objects, not just strings.


then checks any
hash-matches for string equality.  Testing string equality is slower
than integer equality, and strings (unless they are really short)
take up a lot more memory than long integers.

 But presumably you have to keep the string around anyway. It's going to 
 be somewhere, you can't just throw the string away and garbage collect 
 it. The dict doesn't store a copy of the string, it stores a reference to 
 it, and extra references don't cost much.

In the case where I did something like that, I wasn't keeping copies
of the strings in memory after hashing (and otherwise processing them).
I know that putting the strings' pointers in the set is a light memory
load.



[snipping the rest because...]

You've convinced me.  Thanks.



-- 
I heard that Hans Christian Andersen lifted the title for The Little
Mermaid off a Red Lobster Menu. [Bucky Katt]
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Benefits of asyncio

2014-06-03 Thread Burak Arslan

On 06/03/14 12:30, Chris Angelico wrote:
 Write me a purely nonblocking
 web site concept that can handle a million concurrent connections,
 where each one requires one query against the database, and one in a
 hundred of them require five queries which happen atomically.


I don't see why that can't be done. Twisted has everything I can think of
except the database bits (adb runs on threads), and I've got txpostgres[1]
running in production; it seems quite robust so far. What else are we
missing?

[1]: https://pypi.python.org/pypi/txpostgres

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Benefits of asyncio

2014-06-03 Thread Frank Millman

Chris Angelico ros...@gmail.com wrote in message 
news:captjjmqwkestvrsrg30qjo+4ttlqfk9q4gabygovew8nsdx...@mail.gmail.com...

 This works as long as your database is reasonably fast and close
 (common case for a lot of web servers: DB runs on same computer as web
 and application and etc servers). It's nice and simple, lets you use a
 single database connection (although you should probably wrap it in a
 try/finally to ensure that you roll back on any exception), and won't
 materially damage throughput as long as you don't run into problems.
 For a database driven web site, most of the I/O time will be waiting
 for clients, not waiting for your database.

 Getting rid of those blocking database calls means having multiple
 concurrent transactions on the database. Whether you go async or
 threaded, this is going to happen. Unless your database lets you run
 multiple simultaneous transactions on a single connection (I don't
 think the Python DB API allows that, and I can't think of any DB
 backends that support it, off hand), that means that every single
 concurrency point needs its own database connection. With threads, you
 could have a pool of (say) a dozen or so, one per thread, with each
 one working synchronously; with asyncio, you'd have to have one for
 every single incoming client request, or else faff around with
 semaphores and resource pools and such manually. The throughput you
 gain by making those asynchronous with callbacks is quite probably
 destroyed by the throughput you lose in having too many simultaneous
 connections to the database. I can't prove that, obviously, but I do
 know that PostgreSQL requires up-front RAM allocation based on the
 max_connections setting, and trying to support 5000 connections
 started to get kinda stupid.


I am following this with interest.  I still struggle to get my head around 
the concepts, but it is slowly becoming clearer.

Focusing on PostgreSQL, couldn't you do the following?

PostgreSQL runs client/server (they call it front-end/back-end) over TCP/IP.

psycopg2 appears to have some support for async communication with the 
back-end. I only skimmed the docs, and it looks a bit complicated, but it is 
there.

So why not keep a 'connection pool', and for every potentially blocking 
request, grab a connection, set up a callback or a 'yield from' to wait for 
the response, and unblock.

Provided the requests return quickly, I would have thought a hundred 
database connections could support thousands of users.
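
A rough sketch of that idea on top of asyncio (the connection objects and
their execute() coroutine are hypothetical; psycopg2's real async mode has
more moving parts than shown here):

import asyncio

class ConnectionPool:
    def __init__(self, connections):
        self._free = asyncio.Queue()
        for conn in connections:        # e.g. a hundred pre-opened connections
            self._free.put_nowait(conn)

    @asyncio.coroutine
    def run(self, sql):
        conn = yield from self._free.get()         # waits without blocking the loop
        try:
            result = yield from conn.execute(sql)  # hypothetical async driver call
            return result
        finally:
            self._free.put_nowait(conn)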

Frank Millman



-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Benefits of asyncio

2014-06-03 Thread Marko Rauhamaa
Chris Angelico ros...@gmail.com:

 your throughput is defined by your database.

Asyncio is not (primarily) a throughput-optimization method. Sometimes
it is a resource consumption optimization method as the context objects
are lighter-weight than full-blown threads.

Mostly asyncio is a way to deal with anything you throw at it. What do
you do if you need to exit the application immediately and your threads
are stuck in a 2-minute timeout? With asyncio, you have full control of
the situation.

 It's the same with all sorts of other resources. What happens if your
 error logging blocks? Do you code everything, *absolutely everything*,
 around callbacks? Because ultimately, it adds piles and piles of
 complexity and inefficiency, and it still comes back to the same
 thing: stuff can make other stuff wait.

It would be interesting to have an OS or a programming language where no
function returns a value. Linux, in particular, suffers from the
deeply-ingrained system assumption that all file access is synchronous.

However, your protestations seem like a straw man to me. I have really
been practicing event-driven programming for decades. It is fraught with
frustrating complications but they feel like fresh air compared with the
what-now moments I've had to deal with doing multithreaded programming.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 3 is killing Python

2014-06-03 Thread Ned Batchelder

On 6/3/14 4:03 AM, Mark Lawrence wrote:

On 03/06/2014 07:30, Rustom Mody wrote:

On Tuesday, June 3, 2014 11:42:30 AM UTC+5:30, jmf wrote:


after thinking no


Yes [Also called Oui]



I'm very puzzled over 'thinking' -- what context was this in, as I've
kill-filed our most illustrious resident unicode expert?



Let's please not have a recounting of other peoples' posts, especially 
if they are posts we try to minimize.


--
Ned Batchelder, http://nedbatchelder.com

--
https://mail.python.org/mailman/listinfo/python-list


Re: Strange Behavior

2014-06-03 Thread Steven D'Aprano
On Tue, 03 Jun 2014 10:01:26 +0200, Peter Otten wrote:

 Steven D'Aprano wrote:
 
 On Mon, 02 Jun 2014 20:05:29 +0200, robertw89 wrote:
 
 I invoked the wrong bug.py :/ , works fine now (this happens to me
  when I'm a bit tired sometimes...).
 
 Clarity in naming is an excellent thing. If you have two files called
 bug.py, that's two too many.
 
 In the case of the OP the code is likely to be thrown away once the bug
 is found. Putting all experiments into a single folder even with the
 overly generic name bug would have been good enough to avoid the
 problem.

Depends on how many bugs the OP thinks he has found. (Hint: check on 
the python bug tracker.) And of course you can't have multiple files in 
the same directory unless they have different names, so a good naming 
system is still needed.

But as you point out later:

 Imagine having fifty files called program.py. Which one is which? How
 do you know? Programs should be named by what they do (think of Word,
 which does word processing, or Photoshop, which does photo editing), or
 when that isn't practical, at least give them a unique and memorable
 name (Outlook, Excel). The same applies to files demonstrating bugs.
  
 Outlook and Excel are only good names because these are popular
 applications. If I were to name some private scripts in that style and
 not use them for a few months -- I don't think I'd have a clue what
 excel.py is meant to do.

... a good naming scheme has to take into account how often you use it. 
Scripts that you *literally* throw away after use don't need to be named 
with a lot of care, just enough to keep the different versions distinct 
while you use them. More generic scripts that you keep around need a bit 
more care -- I must admit I have far too many scripts called make_avi 
for slightly different video-to-AVI conversion scripts. But at least 
they're not all called script.py.

Outlook and Excel are memorable names, but if you don't use them 
frequently, you may not associate the name with the application. In the 
Python world, we have memorable names like Psyco and Pyrex, but it took 
me *ages* to remember which one was which, because I didn't use them 
often enough to remember.

(Psyco is a JIT specialising compiler, now unmaintained and obsoleted by 
PyPy; Pyrex enables you to write C extensions using Python, also 
unmaintained, and obsoleted by Cython.)

[...]
 One approach that seems to be working so far is to combine several
 scripts into one using argparse subparsers. This results in more
 frequent usage which means I can get away with short meaningless names,
[...]


Very true, but the cost is added complexity in the script.


-- 
Steven D'Aprano
http://import-that.dreamwidth.org/
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Benefits of asyncio

2014-06-03 Thread Chris Angelico
On Tue, Jun 3, 2014 at 9:05 PM, Burak Arslan burak.ars...@arskom.com.tr wrote:
 On 06/03/14 12:30, Chris Angelico wrote:
 Write me a purely nonblocking
 web site concept that can handle a million concurrent connections,
 where each one requires one query against the database, and one in a
 hundred of them require five queries which happen atomically.


 I don't see why that can't be done. Twisted has everyting I can think of
 except database bits (adb runs on threads), and I got txpostgres[1]
 running in production, it seems quite robust so far. what else are we
 missing?

 [1]: https://pypi.python.org/pypi/txpostgres

I never said it can't be done. My objection was to Marko's reiterated
statement that asynchronous coding is somehow massively cleaner than
threading; my argument is that threading is often significantly
cleaner than async, and that at worst, they're about the same (because
they're dealing with exactly the same problems).

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Benefits of asyncio

2014-06-03 Thread Chris Angelico
On Tue, Jun 3, 2014 at 9:09 PM, Frank Millman fr...@chagford.com wrote:
 So why not keep a 'connection pool', and for every potentially blocking
 request, grab a connection, set up a callback or a 'yield from' to wait for
 the response, and unblock.

Compare against a thread pool, where each thread simply does blocking
requests. With threads, you use blocking database, blocking logging,
blocking I/O, etc, and everything *just happens*; with a connection
pool, like this, you need to do every single one of them separately.
(How many of you have ever written non-blocking error logging? Or have
you written a non-blocking system with blocking calls to write to your
error log? The latter is far FAR more common, but all files, even
stdout/stderr, can block.) I don't see how event-driven asynchronous
programming is, as Marko asserts, a breath of fresh air compared with
multithreading. The only way multithreading can possibly
be more complicated is that preemption can occur anywhere - and that's
exactly one of the big flaws in async work, if you don't do your job
properly.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Micro Python -- a lean and efficient implementation of Python 3

2014-06-03 Thread Damien George
Hi,

We would like to announce Micro Python, an implementation of Python 3
optimised to have a low memory footprint.

While Python has many attractive features, current implementations
(read CPython) are not suited for embedded devices, such as
microcontrollers and small systems-on-a-chip.  This is because CPython
uses an awful lot of RAM -- both stack and heap -- even for simple
things such as integer addition.

Micro Python is a new implementation of the Python 3 language, which
aims to be properly compatible with CPython, while sporting a very
minimal RAM footprint, a compact compiler, and a fast and efficient
runtime.  These goals have been met by employing many tricks with
pointers and bit stuffing, and placing as much as possible in
read-only memory.

Micro Python has the following features:

- Supports almost full Python 3 syntax, including yield (compiles
99.99% of the Python 3 standard library).
- Most scripts use significantly less RAM in Micro Python, and various
benchmark programs run faster, compared with CPython.
- A minimal ARM build fits in 80k of program space, and with all
features enabled it fits in around 200k on Linux.
- Micro Python needs only 2k RAM for a basic REPL.
- It has 2 modes of AOT (ahead of time) compilation to native machine
code, doubling execution speed.
- There is an inline assembler for use in time-critical
microcontroller applications.
- It is written in C99 ANSI C and compiles cleanly under Unix (POSIX),
Mac OS X, Windows and certain ARM based microcontrollers.
- It supports a growing subset of Python 3 types and operations.
- Part of the Python 3 standard library has already been ported to
Micro Python, and work is ongoing to port as much as feasible.

More info at:

http://micropython.org/

You can follow the progress and contribute at github:

www.github.com/micropython/micropython
www.github.com/micropython/micropython-lib

--
Damien / Micro Python team.
-- 
https://mail.python.org/mailman/listinfo/python-list


Would a Python 2.8 help you port to Python 3?

2014-06-03 Thread Mark Lawrence
An interesting article from Lennart Regebro:
http://regebro.wordpress.com/2014/06/03/would-a-python-2-8-help-you-port-to-python-3/ 
although I'm inclined to ignore it as it appears to be factual. We 
can't have that getting in the way of plain, good, old-fashioned FUD now, 
can we?


--
My fellow Pythonistas, ask not what our language can do for you, ask 
what you can do for our language.


Mark Lawrence



--
https://mail.python.org/mailman/listinfo/python-list


Re: Benefits of asyncio

2014-06-03 Thread Marko Rauhamaa
Chris Angelico ros...@gmail.com:

 I don't see how Marko's assertion that event-driven asynchronous
 programming is a breath of fresh air compared with multithreading. The
 only way multithreading can possibly be more complicated is that
 preemption can occur anywhere - and that's exactly one of the big
 flaws in async work, if you don't do your job properly.

Say you have a thread blocking on socket.accept(). Another thread
receives the management command to shut the server down. How do you tell
the socket.accept() thread to abort and exit?

The classic hack is to close the socket, which causes the blocking thread
to raise an exception.

The blocking thread might be also stuck in socket.recv(). Closing the
socket from the outside is dangerous now because of race conditions. So
you will have to carefully add locking to block an unwanted closing
of the connection.

But what do you do if the blocking thread is stuck in the middle of a
black box API that doesn't expose a file you could close?

So you hope all blocking APIs have a timeout parameter. You then replace
all blocking calls with polling loops. You make the timeout value long
enough not to burden the CPU too much and short enough not to annoy the
human operator too much.

Well, ok,

   os.kill(os.getpid(), signal.SIGKILL)

is always an option.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Micro Python -- a lean and efficient implementation of Python 3

2014-06-03 Thread Chris Angelico
On Tue, Jun 3, 2014 at 10:27 PM, Damien George
damien.p.geo...@gmail.com wrote:
 - Supports almost full Python 3 syntax, including yield (compiles
 99.99% of the Python 3 standard library).
 - It supports a growing subset of Python 3 types and operations.
 - Part of the Python 3 standard library has already been ported to
 Micro Python, and work is ongoing to port as much as feasible.

I don't have an actual use-case for this, as I don't target
microcontrollers, but I'm curious: What parts of Py3 syntax aren't
supported? And since you say port as much as feasible, presumably
there'll be parts that are never supported. Are there some syntactic
elements that just take up way too much memory?

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Benefits of asyncio

2014-06-03 Thread Chris Angelico
On Tue, Jun 3, 2014 at 11:05 PM, Marko Rauhamaa ma...@pacujo.net wrote:
 Chris Angelico ros...@gmail.com:

 I don't see how Marko's assertion that event-driven asynchronous
 programming is a breath of fresh air compared with multithreading. The
 only way multithreading can possibly be more complicated is that
 preemption can occur anywhere - and that's exactly one of the big
 flaws in async work, if you don't do your job properly.

 Say you have a thread blocking on socket.accept(). Another thread
 receives the management command to shut the server down. How do you tell
 the socket.accept() thread to abort and exit?

 The classic hack is close the socket, which causes the blocking thread
 to raise an exception.

How's that a hack? If you're shutting the server down, you need to
close the listening socket anyway, because otherwise clients will
think they can get in. Yes, I would close the socket. Or just send the
process a signal like SIGINT, which will break the accept() call. (I
don't know about Python specifically here; the underlying Linux API
works this way, returning EINTR, as does OS/2 which is where I
learned. Generally I'd have the accept() loop as the process's main
loop, and spin off threads for clients.) In fact, the most likely case
I'd have would be that the receipt of that signal *is* the management
command to shut the server down; it might be SIGINT or SIGQUIT or
SIGTERM, or maybe some other signal, but one of the easiest ways to
notify a Unix process to shut down is to send it a signal. Coping with
broken proprietary platforms is an exercise for the reader, but I know
it's possible to terminate a console-based socket accept loop in
Windows with Ctrl-C, so there ought to be an equivalent API method.
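
A bare-bones sketch of that shape (Unix-only, Python 3; names and the
port number are illustrative):

import signal, socket, threading

def handle(conn):
    with conn:
        conn.sendall(b"hello\n")

def shutdown(signum, frame):
    raise SystemExit(0)        # unwinds out of the blocking accept()

signal.signal(signal.SIGTERM, shutdown)

server = socket.socket()
server.bind(("", 8000))
server.listen(5)
try:
    while True:
        conn, addr = server.accept()    # main loop blocks here
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
finally:
    server.close()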

 The blocking thread might be also stuck in socket.recv(). Closing the
 socket from the outside is dangerous now because of race conditions. So
 you will have to carefully use add locking to block an unwanted closing
 of the connection.

Maybe. More likely, the same situation applies - you're shutting down,
so you need to close the socket anyway. I've generally found -
although this may not work on all platforms - that it's perfectly safe
for one thread to be blocked in recv() while another thread calls
send() on the same socket, and then closes that socket.  On the other
hand, if your notion of shutting down does NOT include closing the
socket, then you have to deal with things some other way - maybe
handing the connection on to some other process, or something - so a
generic approach isn't appropriate here.

 But what do you do if the blocking thread is stuck in the middle of a
 black box API that doesn't expose a file you could close?

 So you hope all blocking APIs have a timeout parameter.

No! I never put timeouts on blocking calls to solve shutdown problems.
That is a hack, and a bad one. Timeouts should be used only when the
timeout is itself significant (eg if you decide that your socket
connections should time out if there's no activity in X minutes, so
you put a timeout on socket reads of X*6 and close the connection
cleanly if it times out).
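
For instance (a sketch, with a made-up idle limit):

import socket

IDLE_LIMIT = 10 * 60        # seconds; here the timeout itself is the policy

def serve(conn):
    conn.settimeout(IDLE_LIMIT)
    try:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            conn.sendall(data)
    except socket.timeout:
        pass                 # idle too long: fall through and hang up cleanly
    finally:
        conn.close()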

 Well, ok,

os.kill(os.getpid(), signal.SIGKILL)

 is always an option.

Yeah, that's one way. More likely, you'll find that a lesser signal
also aborts the blocking API call. And even if you have to hope for an
alternate API to solve this problem, how is that different from hoping
that all blocking APIs have corresponding non-blocking APIs? I
reiterate the example I've used a few times already:

https://docs.python.org/3.4/library/logging.html#logging.Logger.debug

What happens if that blocks? How can you make sure it won't?

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Benefits of asyncio

2014-06-03 Thread Marko Rauhamaa
Chris Angelico ros...@gmail.com:

 https://docs.python.org/3.4/library/logging.html#logging.Logger.debug

 What happens if that blocks? How can you make sure it won't?

I haven't used that class. Generally, Python standard libraries are not
readily usable for nonblocking I/O.

For myself, I have solved that particular problem my own way.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Having trouble in expressing constraints in Python

2014-06-03 Thread varun7rs
I have a problem writing a constraint in Python. I first wrote the code 
in AMPL, where it was working, and I'm now using Python because it is more 
suitable for handling large data. I managed to write the code quite fine except 
for one constraint(Link Mapping Constraint). I've attached pieces of code from 
both AMPL and Python. First part of it is the Link Capacity Constraint in AMPl 
followed by Python. Second part of the code is the Link Mapping Constraint and 
I wish to write it in a similar fashion. But, I'm not able to proceed with it. 
I really appreciate your help.

subject to Link_Capacity_Constraints {(ns1, ns2) in PHY_LINKS}:
    sum {dns in DEMAND} ((f_eN_SGW[dns, ns1, ns2] * (MME_bdw[dns] +
    IMS_bdw[dns] + PoP_bdw[dns])) + (f_SGW_PGW[dns, ns1, ns2] * PoP_bdw[dns]) +
    (f_SGW_IMS[dns, ns1, ns2] * IMS_bdw[dns]) + (f_SGW_MME[dns, ns1, ns2] *
    MME_bdw[dns]) + (f_PGW_PoP[dns, ns1, ns2] * PoP_bdw[dns])) <= capacity_bdw[ns1, ns2];

for edge in phy_network.edges:
    varNames = []
    varCoeffs = []
    for demand in demands:
        varNames.append("f_eN_SGW_{}_{}_{}".format(demand.demandID,
            edge.SourceID, edge.DestinationID))
        varCoeffs.append(demand.MME_bdw + demand.IMS_bdw + demand.PoP_bdw)
        varNames.append("f_SGW_PGW_{}_{}_{}".format(demand.demandID,
            edge.SourceID, edge.DestinationID))
        varCoeffs.append(demand.PoP_bdw)
        varNames.append("f_SGW_IMS_{}_{}_{}".format(demand.demandID,
            edge.SourceID, edge.DestinationID))
        varCoeffs.append(demand.IMS_bdw)
        varNames.append("f_SGW_MME_{}_{}_{}".format(demand.demandID,
            edge.SourceID, edge.DestinationID))
        varCoeffs.append(demand.MME_bdw)
        varNames.append("f_PGW_PoP_{}_{}_{}".format(demand.demandID,
            edge.SourceID, edge.DestinationID))
        varCoeffs.append(demand.PoP_bdw)
    solver.add_constraint(varNames, varCoeffs, "L", edge.capacity_bdw,
        "Link_Capacity_Constraints{}_{}_{}".format(edge.SourceID,
            edge.DestinationID, demand.demandID))

#Link Mapping Constraint
subject to Link_Mapping_Constraints_1 {dns in DEMAND, ns1 in PHY_NODES}:
    sum {(ns1,w) in PHY_LINKS} (f_eN_SGW[dns, w, ns1] - f_eN_SGW[dns, ns1, w]) =
    x_eN[dns, ns1] - x_SGW[dns, ns1];






-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Would a Python 2.8 help you port to Python 3?

2014-06-03 Thread Chris Angelico
On Tue, Jun 3, 2014 at 10:40 PM, Mark Lawrence breamore...@yahoo.co.uk wrote:
 An interesting article from Lennart Regebro
 http://regebro.wordpress.com/2014/06/03/would-a-python-2-8-help-you-port-to-python-3/
 although I'm inclined to ignore it as it appears to be factual.  We can't
 have that getting in the way of plain, good, old fashioned FUD now can we?

One point I'd add to that blog post.

Without help:

try:
import configparser
except ImportError:
import ConfigParser as configparser

With six:

from six.moves import configparser

The theoretical Python 2.8 version isn't shown, but presumably it would be:

import configparser

If you want this sort of thing, there's nothing stopping you from
creating a configparser.py that just says from ConfigParser import *
and writing Python 3 code. If it really is that simple, and in a
number of cases it is, the hassle isn't great. (I've written
straddling code without six's help, just by looking at the
before-and-after of 2to3 - on the file I'm looking at now, that's two
imports that got renamed, plus FileNotFoundError=OSError for Python
2. Sure, I could cut that down a bit in length using six, but for just
three cases, it's not a big deal.)
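
If anyone's wondering, the shim really is just this one line (plus the
caveat that you only want Python 2 to pick it up, otherwise it shadows
the real py3 configparser):

# configparser.py -- Python 2 compatibility shim, as described above
from ConfigParser import *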

Really, if the Py2 vs Py3 complaints are about module renamings, there
are so many easy solutions that it's almost laughable. There are other
changes that are slightly less small (the blog mentions metaclasses,
for instance), but all can be solved. The real problem - the reason
that big codebases can't be migrated with a few simple changes at the
tops of files or simple direct translations - is str-bytes/unicode,
because that one forces the programmers to actually think about what
they're doing. And there is nothing, absolutely nothing, that Python
2.8 can ever do to help with that.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Benefits of asyncio

2014-06-03 Thread Chris Angelico
On Tue, Jun 3, 2014 at 11:42 PM, Marko Rauhamaa ma...@pacujo.net wrote:
 Chris Angelico ros...@gmail.com:

 https://docs.python.org/3.4/library/logging.html#logging.Logger.debug

 What happens if that blocks? How can you make sure it won't?

 I haven't used that class. Generally, Python standard libraries are not
 readily usable for nonblocking I/O.

 For myself, I have solved that particular problem my own way.

Okay. How do you do basic logging? (Also - rolling your own logging
facilities, instead of using what Python provides, is the simpler
solution? This does not aid your case.)

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Micro Python -- a lean and efficient implementation of Python 3

2014-06-03 Thread Steven D'Aprano
On Tue, 03 Jun 2014 13:27:11 +0100, Damien George wrote:

 Hi,
 
 We would like to announce Micro Python, an implementation of Python 3
 optimised to have a low memory footprint.

Fantastic!




-- 
Steven D'Aprano
http://import-that.dreamwidth.org/
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 3.2 has some deadly infection

2014-06-03 Thread Robin Becker



The problem is that causal readers like Robin sometimes jump from 'In Python 3,
it can be hard to do something one really ought not to do' to 'Binary I/O is
hard in Python 3' -- which is is not.

I'm fairly casual, and I did understand that the rant was a bit over the top. For 
fairly practical reasons I have always regarded the std streams as allowing 
binary data, and I have always objected to having to open files in python with a 't' or 
'b' mode to cope with line ending issues.


Isn't it a bit old fashioned to think everything is connected to a console?

I think the idea that we only give meaning to binary data using encodings is a 
bit limiting. A zip or gif file has structure, but I don't think it's reasonable 
to regard such a file as having an encoding in the python unicode sense.

--
Robin Becker

--
https://mail.python.org/mailman/listinfo/python-list


Re: can someone explain the concept of strings (or whatever) being immutable

2014-06-03 Thread Cameron Simpson

On 02Jun2014 21:35, Deb Wyatt codemon...@inbox.com wrote:

Please adjust your mailer to send plain text only. It is all you need
anyway,
and renders more reliably for other people.


I am so sorry, I did not realize it was a problem.  Hopefully it will behave 
now.


Looks just great now. Many thanks.

Cheers,
Cameron Simpson c...@zip.com.au
--
https://mail.python.org/mailman/listinfo/python-list


Re: Python 3.2 has some deadly infection

2014-06-03 Thread Chris Angelico
On Wed, Jun 4, 2014 at 12:18 AM, Robin Becker ro...@reportlab.com wrote:
 I think the idea that we only give meaning to binary data using encodings is
 a bit limiting. A zip or gif file has structure, but I don't think it's
 reasonable to regard such a file as having an encoding in the python unicode
 sense.

Of course it doesn't. Those are binary files. Ultimately, every file
is binary; but since the vast majority of them actually contain text,
in one of a handful of common encodings, it's nice to have an easy way
to open a text file. You could argue that rb should be the default,
rather than rt, but that's a relatively minor point.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Having trouble in expressing constraints in Python

2014-06-03 Thread Mark Lawrence

On 03/06/2014 14:44, varun...@gmail.com wrote:

I have a problem in writing a constraint in Python. Firstly, I wrote the code 
in AMPL and it was working and I'm using Python for the reason that it is more 
suitable to handle large data. I managed to write the code quite fine except 
for one constraint(Link Mapping Constraint). I've attached pieces of code from 
both AMPL and Python. First part of it is the Link Capacity Constraint in AMPl 
followed by Python. Second part of the code is the Link Mapping Constraint and 
I wish to write it in a similar fashion. But, I'm not able to proceed with it. 
I really appreciate your help.

subject to Link_Capacity_Constraints { (ns1, ns2) in PHY_LINKS}:
sum {dns in DEMAND} ((f_eN_SGW[dns, ns1, ns2] * (MME_bdw[dns] + 
IMS_bdw[dns] + PoP_bdw[dns])) + (f_SGW_PGW[dns, ns1, ns2] * PoP_bdw[dns]) + 
(f_SGW_IMS[dns, ns1, ns2] * IMS_bdw[dns]) + (f_SGW_MME[dns, ns1, ns2] * 
MME_bdw[dns]) + (f_PGW_PoP[dns, ns1, ns2] * PoP_bdw[dns])) = capacity_bdw[ns1, 
ns2];

 for edge in phy_network.edges:
 varNames = []
 varCoeffs = []
 for demand in demands:
 varNames.append(f_eN_SGW_{}_{}_{}.format(demand.demandID, 
edge.SourceID, edge.DestinationID))
 varCoeffs.append(demand.MME_bdw + demand.IMS_bdw + 
demand.PoP_bdw )
 varNames.append(f_SGW_PGW_{}_{}_{}.format(demand.demandID, 
edge.SourceID, edge.DestinationID))
 varCoeffs.append(demand.PoP_bdw)
 varNames.append(f_SGW_IMS_{}_{}_{}.format(demand.demandID, 
edge.SourceID, edge.DestinationID))
 varCoeffs.append(demand.IMS_bdw)
 varNames.append(f_SGW_MME_{}_{}_{}.format(demand.demandID, 
edge.SourceID, edge.DestinationID))
 varCoeffs.append(demand.MME_bdw)
 varNames.append(f_PGW_PoP_{}_{}_{}.format(demand.demandID, 
edge.SourceID, edge.DestinationID))
 varCoeffs.append(demand.PoP_bdw)
 solver.add_constraint(varNames, varCoeffs, L, edge.capacity_bdw, 
Link_Capacity_Constraints{}_{}_{}.format(edge.SourceID, edge.DestinationID, 
demand.demandID))

#Link Mapping Constraint
subject to Link_Mapping_Constraints_1{dns in DEMAND, ns1 in PHY_NODES}: sum 
{(ns1,w) in PHY_LINKS} (f_eN_SGW[dns, w, ns1] - f_eN_SGW[dns, ns1, w]) = 
x_eN[dns, ns1] - x_SGW[dns, ns1];



Are you trying to implement your own code rather than use an existing 
library from pypi?


I also observe the gmail address which I'm assuming means google groups. 
 If that is the case, would you please use the mailing list 
https://mail.python.org/mailman/listinfo/python-list or read and action 
this https://wiki.python.org/moin/GoogleGroupsPython to prevent us 
seeing double line spacing and single line paragraphs, thanks.  If not 
please ignore this paragraph :)


--
My fellow Pythonistas, ask not what our language can do for you, ask 
what you can do for our language.


Mark Lawrence



--
https://mail.python.org/mailman/listinfo/python-list


Re: Having trouble in expressing constraints in Python

2014-06-03 Thread Chris Angelico
On Wed, Jun 4, 2014 at 1:15 AM, Mark Lawrence breamore...@yahoo.co.uk wrote:
 I also observe the gmail address which I'm assuming means google groups.

No need to assume - the OP's headers show Google Groups injection info.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Lock Windows Screen GUI using python

2014-06-03 Thread Ian Kelly
On Jun 3, 2014 1:56 AM, Jaydeep Patil patil.jay2...@gmail.com wrote:

 I have another query.

 We can now block user inputs. But in my automation three is copy  paste
work going on continuously in Excel before plotting the graphs.

 During copy paste of excel data, if user by mistake doing some copy 
paste operation outside excel(for e.g. doing copy paste in outlook mails,
firefox browser etc), it may be cause for the another error.

 How i can control this?

Do you really need to use the system clipboard for this automation? Why not
just store the data in script variables and write them back directly from the script?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Benefits of asyncio

2014-06-03 Thread Roy Smith
In article 87ha42uos2@elektro.pacujo.net,
 Marko Rauhamaa ma...@pacujo.net wrote:

 Chris Angelico ros...@gmail.com:
 
  I don't see how Marko's assertion that event-driven asynchronous
  programming is a breath of fresh air compared with multithreading. The
  only way multithreading can possibly be more complicated is that
  preemption can occur anywhere - and that's exactly one of the big
  flaws in async work, if you don't do your job properly.
 
 Say you have a thread blocking on socket.accept(). Another thread
 receives the management command to shut the server down. How do you tell
 the socket.accept() thread to abort and exit?

You do the accept() in a daemon thread?
-- 
https://mail.python.org/mailman/listinfo/python-list


multiprocess (and paramiko)

2014-06-03 Thread mennis
I was able to work around this by using a completely different design but I 
still don't understand why this doesn't work.  It appears that the process 
that launches the process doesn't get access to updated object attributes.  
When I set and check them in the object itself it behaves as expected.  When I 
check them from outside the object instance I get the initial values only.  
Could someone explain what I'm missing?

Here I have a simple multiprocessing class that, when initialized, takes a 
connected SSHClient instance and a command to run on the associated host in a 
new channel.

import multiprocessing

from time import time
from Crypto import Random
import paramiko


class Nonblock(multiprocessing.Process):

    def __init__(self, connection, cmd):
        Random.atfork()
        multiprocessing.Process.__init__(self)

        self.transport = connection.get_transport()
        if self.transport is None:
            raise ConnectionError("connection.get_transport() returned None")
        self.channel = self.transport.open_session()

        self.command = cmd
        self.done = False
        self.stdin = None
        self.stdout = None
        self.stderr = None
        self.status = None
        self.message = str()
        self.time = float()

    def _read(self, channelobj):
        """read until EOF"""
        buf = channelobj.readline()
        output = str(buf)
        while buf:
            buf = channelobj.readline()
            output += buf
        return output

    def run(self):
        start = time()
        stdin, stdout, stderr = self.channel.exec_command(command=self.command)

        self.stderr = self._read(stderr)
        self.status = stdout.channel.recv_exit_status()

        if self.status != 0:
            self.status = False
            self.message = self.stderr
        else:
            self.status = True
            self.message = self._read(stdout)

        self.time = time() - start
        stdin.close()

        self.done = True


I expect to use it in the following manner:

from simplelib import Nonblock
from time import sleep
from paramiko import SSHClient, AutoAddPolicy


if __name__ == "__main__":
    connection = SSHClient()
    connection.set_missing_host_key_policy(AutoAddPolicy())

    username = "uname"
    hostname = "hostname"
    password = "password"

    connection.connect(hostname, 22, username, password)
    print connection.exec_command("sleep 1; echo test 0")[1].read()

    n = Nonblock(connection, "sleep 20; echo test 2")
    n.start()

    print connection.exec_command("sleep 1; echo test 1")[1].read()
    while not n.done:
        sleep(1)
    print n.message
    print "done"
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: multiprocess (and paramiko)

2014-06-03 Thread Chris Angelico
On Wed, Jun 4, 2014 at 1:43 AM, mennis michaelian.en...@gmail.com wrote:
 I was able to work around this by using a completely different design but I 
 still don''t understand why this doesn't work.  It appears that the process 
 that launches the process doesn't get access to updated object attributes.  
 When I set and check them in the object itself it behaves as expected.  When 
 I check them from outside the object instance I get the initial values only.  
 Could someone explain what I'm missing?


When you fork into two processes, the child gets a copy of the
parent's state, but after that, changes happen completely
independently. You need to use actual multiprocessing features like
Queue and such to pass information from one process to another.
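
A minimal sketch of the usual pattern (illustrative only, not tied to
your paramiko code; Python 2 print statements to match your example):
the child puts its result on a queue, rather than setting attributes on
itself and hoping the parent sees them.

import multiprocessing

class Worker(multiprocessing.Process):
    def __init__(self, cmd, results):
        multiprocessing.Process.__init__(self)
        self.cmd = cmd
        self.results = results    # a multiprocessing.Queue shared with the parent

    def run(self):
        # ... do the real (blocking) work here ...
        self.results.put({"cmd": self.cmd, "status": True, "message": "done"})

if __name__ == "__main__":
    results = multiprocessing.Queue()
    w = Worker("sleep 1; echo test", results)
    w.start()
    print results.get()           # blocks until the child reports back
    w.join()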

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Benefits of asyncio

2014-06-03 Thread Marko Rauhamaa
Chris Angelico ros...@gmail.com:

 Okay. How do you do basic logging? (Also - rolling your own logging
 facilities, instead of using what Python provides, is the simpler
 solution? This does not aid your case.)

Asyncio is fresh out of the oven. It's going to take years before the
standard libraries catch up with it.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Would a Python 2.8 help you port to Python 3?

2014-06-03 Thread Steven D'Aprano
On Tue, 03 Jun 2014 13:40:43 +0100, Mark Lawrence wrote:

 An interesting article from Lennart Regebro
 http://regebro.wordpress.com/2014/06/03/would-a-python-2-8-help-you-
port-to-python-3/
 although I'm inclined to ignore it as it appears to be factual.  We
 can't have that getting in the way of plain, good, old fashioned FUD now
 can we?


Thanks for that link.

People forget, or don't realise, that Python 2.7 *is* the 2.8 they are 
looking for. Python 3 came out with 2.5. There have already been two 
transitional releases, 2.6 and 2.7, which have partially introduced 3 
features such as from __future__ import division etc.

Python 3.0 final was released on December 3rd, 2008, just two months 
(almost to the day) after 2.6.

https://www.python.org/download/releases/2.6
https://www.python.org/download/releases/3.0



-- 
Steven D'Aprano
http://import-that.dreamwidth.org/
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 3.2 has some deadly infection

2014-06-03 Thread Steven D'Aprano
On Mon, 02 Jun 2014 12:10:48 +0100, Robin Becker wrote:

 there seems to be an implicit assumption in python land that encoded
 strings are the norm. On virtually every computer I encounter that
 assumption is wrong. The vast majority of bytes in most computers is not
 something that can be easily printed out for humans to read. I suppose
 some clever pythonista can figure out an encoding to read my .o / .so
 etc  files, but they are practically meaningless to a unicode program
 today. Same goes for most image formats and media files. Browsers
 routinely encounter mis/un-encoded pages.

If you include image, video and sound files, you are probably correct 
that most content of files is binary.

Outside of those three kinds of files, I would expect that *by far* the 
single largest kind of file is text. Some text is wrapped in a binary 
layer, e.g. .doc, .odt, etc. but an awful lot of it is good old human 
readable text, including web pages (html) and XML.

Every programming language I know of defaults to opening files in text 
mode rather than binary mode. There may be exceptions, but reading and 
writing text is ubiquitous while writing .o and .so files is not.


 In python I would have preferred for bytes to remain the default io
 mechanism, at least that would allow me to decide if I need any
 decoding.

That implies that you're opening files in binary mode by default. It also 
implies that even something as trivial as writing the string "Hello 
World" to a file (stdout is a file) is impossible until you've learned 
about encodings and know which encoding you need. I really don't think 
that's a good plan, for any language, but especially a language like 
Python which is intended for beginners as well as experts.

The Python 2 approach, where stdout is binary but tries really hard to 
pretend to be a superset of ASCII, is simply broken. It works well for 
trivial examples, while breaking in surprising and hard-to-diagnose ways 
in others. It violates the Zen, errors should not be ignored unless 
explicitly silenced, instead silently failing and giving moji-bake:

[steve@ando ~]$ python2.7 -c "import sys; sys.stdout.write(u'ñβж\n')"
ñβж

Changing to print doesn't help:

[steve@ando ~]$ python2.7 -c "print u'ñβж'"
ñβж


Python 3 works correctly, whether you use print or sys.stdout:

[steve@ando ~]$ python3.3 -c "import sys; sys.stdout.write(u'ñβж\n')"
ñβж

(although I haven't tested it on Windows).





-- 
Steven D'Aprano
http://import-that.dreamwidth.org/
-- 
https://mail.python.org/mailman/listinfo/python-list


Shutdown (was Re: Python-list Digest, Vol 129, Issue 4)

2014-06-03 Thread Terry Reedy

On 6/3/2014 6:07 AM, Ramas Sami wrote:

My Python 3.3  is shutting down soon I open the new file or existing
Python file


Ramas, DO NOT reply to the digest with 100s of lines of other messages. 
Start a new thread.


DO include enough information with your question so it can possibly be 
answered. What OS/system? What exact version? 3.3.?. What exactly did 
you do to get the problem?


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: Python 3.2 has some deadly infection

2014-06-03 Thread Chris Angelico
On Wed, Jun 4, 2014 at 2:34 AM, Steven D'Aprano
steve+comp.lang.pyt...@pearwood.info wrote:
 Outside of those three kinds of files, I would expect that *by far* the
 single largest kind of file is text. Some text is wrapped in a binary
 layer, e.g. .doc, .odt, etc. but an awful lot of it is good old human
 readable text, including web pages (html) and XML.

In terms of file I/O in Python, text wrapped in a binary layer has to
be treated as binary, not text. There's no difference between a JPEG
file that has some textual EXIF information and an ODT file that's a
whole lot of zipped up text; both of them have to be read as binary,
then unpacked according to the container's specs, and then the text
portion decoded according to an encoding like UTF-8.
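
For example, pulling the text out of an .odt goes through exactly those
steps (a quick sketch; the filename is made up, and ODF content is
normally UTF-8 XML):

import zipfile

with zipfile.ZipFile("document.odt") as z:   # binary container
    raw = z.read("content.xml")              # still bytes
text = raw.decode("utf-8")                   # only now is it text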

But you're quite right that a large proportion of files out there
really are text.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: multiprocess (and paramiko)

2014-06-03 Thread Roy Smith
In article 3c0be3a7-9d2d-4530-958b-13be97db3...@googlegroups.com,
 mennis michaelian.en...@gmail.com wrote:

 Here I have a simple multiprocessing class that when initializes takes a 
 connected SSHClient instance and a command to run on the associated host in a 
 new channel.

ChrisA has already answered your question, but to answer the question 
you didn't ask, you probably want to ditch working directly with 
paramiko and take a look at fabric (http://www.fabfile.org/).  It layers 
a really nice interface on top of paramiko.  Instead of gobs of 
low-level paramiko code, you just do something like:

from fabric.api import env, run
env.host_string = my-remote-hostname.com
output = run(my command)

and you're done.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Micro Python -- a lean and efficient implementation of Python 3

2014-06-03 Thread Paul Sokolovsky
Hello,

On Tue, 3 Jun 2014 23:11:46 +1000
Chris Angelico ros...@gmail.com wrote:

 On Tue, Jun 3, 2014 at 10:27 PM, Damien George
 damien.p.geo...@gmail.com wrote:
  - Supports almost full Python 3 syntax, including yield (compiles
  99.99% of the Python 3 standard library).
  - It supports a growing subset of Python 3 types and operations.
  - Part of the Python 3 standard library has already been ported to
  Micro Python, and work is ongoing to port as much as feasible.
 
 I don't have an actual use-case for this, as I don't target
 microcontrollers, 

Please let me chime in, as one of MicroPython contributors. I also
don't have immediate usecase for a Python microcontroller (but seeing
how fast industry moves, I won't be surprised if in half-year it will
seem just right). Instead, I treat MicroPython as a Python
implementation which scales *down* very well. With current situation in
the industry, people mostly care about scaling up - consume more
gigabytes and gigahertz, catch more clouds and include heavier and
heavier batteries.

MicroPython goes another direction. You don't have to use it on a
microcontroller. It's just if you want/need it, you'll be able - while
still staying with your favorite language.

I'm personally interested in using MicroPython on small embedded
Linux systems, like home routers, Internet-of-Things devices, etc. Such
devices usually have just a few hundred megahertz of CPU power, and
2-4MB of flash. And to cut cost, the lower bound decreases all the
time.

 but I'm curious: What parts of Py3 syntax aren't
 supported? And since you say port as much as feasible, presumably
 there'll be parts that are never supported. Are there some syntactic
 elements that just take up way too much memory?

Syntax-wise, all Python 3.3 syntax is supported. This includes things
like yield from, annotations, etc. For example:

$ micropython
Micro Python v1.0.1-139-g411732e on 2014-06-03; UNIX version
>>> def foo(a:int) -> float:
...     return float(a)
...
>>> foo(4)
4.0


The 99.99% statement is due to the fact that there were some problems parsing a
couple of files in the CPython 3.3/3.4 stdlib.

Note that above talks about syntax, not semantics. Though core
language semantics is actually now implemented pretty well. For
example, yield from works pretty well, so asyncio could work ;-).
(Except my analysis showed that CPython's implementation is a bit
bloated for MicroPython requirements, so I started to write a
simplified implementation from scratch).


As can be seen from the dump above, MicroPython perfectly works on a
Linux system, so we encourage any pythonista to touch a little bit of
Python magic and give it a try! ;-) And we of course interested to get
feedback how portable it is, etc.

(As a side note, it's of course possible to compile and run MicroPython
on Windows too, it's a bit more complicated than just make.)

 
 ChrisA
 -- 
 https://mail.python.org/mailman/listinfo/python-list



-- 
Best regards,
 Paul  mailto:pmis...@gmail.com
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Micro Python -- a lean and efficient implementation of Python 3

2014-06-03 Thread Chris Angelico
On Wed, Jun 4, 2014 at 2:49 AM, Paul Sokolovsky pmis...@gmail.com wrote:
 As can be seen from the dump above, MicroPython perfectly works on a
 Linux system, so we encourage any pythonista to touch a little bit of
 Python magic and give it a try! ;-) And we of course interested to get
 feedback how portable it is, etc.


With that encouragement, I just cloned your repo and built it on amd64
Debian Wheezy. Works just fine! Except... I've just found one fairly
major problem with your support of Python 3.x syntax. Your str type is
documented as not supporting Unicode. Is that a current flaw that
you're planning to remove, or a design limitation? Either way, I'm a
bit dubious about a purported version 1 that doesn't do one of the
things that Py3 is especially good at - matched by very few languages
in its encouragement of best practice with Unicode support.

What is your str type actually able to support? It seems to store
non-ASCII bytes in it, which I presume are supposed to represent the
rest of Latin-1, but I wasn't able to print them out:

Micro Python v1.0.1-144-gb294a7e on 2014-06-04; UNIX version
>>> print("asdf\xfdqwer")

Python 3.5.0a0 (default:6a0def54c63d, Mar 26 2014, 01:11:09)
[GCC 4.7.2] on linux
>>> print("asdf\xfdqwer")
asdfýqwer

In fact, printing seems to work with bytes:

>>> print("asdf\xc3\xbdqwer")
asdfýqwer

(my terminal uses UTF-8, this is the UTF-8 encoding of the above string)

I would strongly recommend either implementing all of PEP 393, or at
least making it very clear that this pretends everything is bytes -
and possibly disallowing any codepoint 127 in any string, which will
at least mean you're safe on all ASCII-compatible encodings.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


OT: This Swift thing

2014-06-03 Thread Sturla Molden
Dear Apple,

Why should I be excited about an illegitimate child of Python, Go and
JavaScript?

Because it has curly brackets, no sane exception handling, and sucks less
than Objective-C? 

Because init is spelled without double underscores?

Because it's faster than Python? Computers and smart phones are slow these
days. And I guess Swift makes my 3g connection faster.

It's ok to use in iOS apps. That would be it, I guess.


Sturla

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 3.2 has some deadly infection

2014-06-03 Thread Terry Reedy

On 6/3/2014 10:18 AM, Robin Becker wrote:


I think the idea that we only give meaning to binary data using
encodings is a bit limiting.


On the contrary, it is liberating. The fact that bits have no meaning 
other than 'a choice between two alternatives' means
1. any binary choice - 0/1, -/+, false/true, no/yes, closed/open, 
male/female, sad/happy, evil/good, low/high, and so on ad infinitum, can 
be encoded into a bit. Since any such pair could have been reversed, the 
mapping between bit states and the pair is arbitrary, and constitutes an 
encoding.
2. any discrete or digitized information that constitutes a choice 
between multiple alternatives can be encoded into a sequence of bits.


This crucial discovery is the basis of Shannon's 1948 paper and of the 
information age that started about then.



A zip or gif file has structure, but I don't think it's reasonable to
regard such a file as having an encoding in the python unicode sense.

I am not quite sure what you are denying. Color encodings are encodings 
as much as character encodings, even if they encode different 
information. Both encode sensory experience and conceptual correlates 
into a sequence of bits, usually organized for convenience into a 
sequence of bytes or other chunks.


There is another similarity. Text files often have at least two levels 
of encoding. First is the character encoding; that is all unicode 
handles. Then there is the text structure encoding, which is sometimes 
called the 'file format'. Most text files are at least structured into 
'lines'. For this, they use encoded line endings, and there have been 
multiple choices for this and at least 2 still in common use (which is a 
nuisance).


Similarly, a pixel (bitmap!) image file must encode the color of each 
pixel and a higher-level structuring of pixels into a 2D array of rows 
of lines. Just as with text, there have been and still are multiple 
encoding at both levels. Also, similarly, the receiver of an image must 
know what encoding the sender used.


Vector graphics is a different way of encoding certain types of images, 
and again there are multiple ways to encode the information into bits. 
The encoding hassle here is similar to that for text. One of the 
frustrations of tk is that it natively uses just one old dialect of 
postscript (.ps) to output screen images. One has to find and install an 
extension to a modern Scalable Vector Graphics (.svg) encoding.


Because Python is programed with lines of text, it must come with 
minimal text decoding. If Python were programmed with drawings, it would 
come with one or more drawing decoders and a drawing equivalent of a 
lexer. It might even have a special 'rd' (read drawing) mode for open.


--
Terry Jan Reedy


--
https://mail.python.org/mailman/listinfo/python-list


Re: OT: This Swift thing

2014-06-03 Thread Skip Montanaro
From Apple's perspective, there's always platform lock-in. That's good
for them, so it must be good for you, right? :-)

Skip
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: multiprocess (and paramiko)

2014-06-03 Thread mennis
I'm familiar with and have learned much from fabric.  Its execution model doesn't 
work for this specific interface I'm working on.  I use fabric for other things 
though and it's great.

Ian
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: OT: This Swift thing

2014-06-03 Thread Nicholas Cole
Swift may yet be good for PyObjC (the python bridge to the various
Apple libraries); it is possible that there is some kind of
translation table that PyObjC can make use of to make its own method
names less ugly.

Of course, I wish they had picked Python rather than inventing their
own language.  But Apple put a huge stock in the ability of their
libraries to make full use of multiple cores.  The GIL is surely the
sticking point here. It is also clear (reading the Swift
documentation) that they wanted a script-like language but with strict
typing.

It looks to me like there are a lot of strange design choices, the
logic of which I do not fully see.  I suspect that in a few years they
will have to go through their own Python 3 moment to make things a
little more logical.

N
-- 
https://mail.python.org/mailman/listinfo/python-list


immutable vs mutable

2014-06-03 Thread Deb Wyatt
Thanks everyone for your help.  I also found this article while I was waiting 
for answers from this list, in case anybody else is interested in this topic:

http://www.spontaneoussymmetry.com/blog/archives/438 

Deb in WA, USA


FREE ONLINE PHOTOSHARING - Share your photos online with your friends and 
family!
Visit http://www.inbox.com/photosharing to find out more!


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: immutable vs mutable

2014-06-03 Thread Mark H Harris

On 6/3/14 12:29 PM, Deb Wyatt wrote:



http://www.spontaneoussymmetry.com/blog/archives/438

Deb in WA, USA


The article is bogged down in unnecessary complications with regard to 
mutability (or not) and pass-by reference|value stuff. The author risks 
confusing her audience (those who are perhaps already confused about the 
nature of variables in Python).


The examples deal mostly with names and scope. The article, in my opinion, 
confuses a Python concept which is otherwise very straight-forward and which 
has been beaten to death on this forum.


marcus

--
https://mail.python.org/mailman/listinfo/python-list


Re: OT: This Swift thing

2014-06-03 Thread Sturla Molden
Nicholas Cole nicholas.c...@gmail.com wrote:

 Of course, I wish they had picked Python rather than inventing their
 own language.  But Apple put a huge stock in the ability of their
 libraries to make full use of multiple cores. 

The GIL is not relevant if they stick to the Objective-C runtime and LLVM. 

 The GIL is surely the
 sticking point here. It is also clear (reading the Swift
 documentation) that they wanted a script-like language but with strict
 typing.

A Python with static typing would have been far better, IMHO. It seems they
have created a Python-JavaScript bastard with a random mix of features.
Unfortunately they retained the curly brackets from JS...

Are Python apps still banned from AppStore, even if we bundle an
interpreter? If not, I see no reason to use Swift instead of Python and
PyObjC – perhaps with some Cython if there is need for speed.

Sturla

-- 
https://mail.python.org/mailman/listinfo/python-list


Please turn off “digest mode” to participate (was: Python-list Digest, Vol 129, Issue 4)

2014-06-03 Thread Ben Finney
Ramas Sami ra...@live.co.uk writes:

 My Python 3.3  is shutting down soon I open the new file or existing
 Python file

Please start a new thread to start a new discussion.

Also, *before* you want to participate, don't reply to a digest message.
Instead, first disable “digest mode” in your subscription settings
URL:https://mail.python.org/mailman/listinfo/python-list, so that you
get individual messages and can participate properly in the discussion.

-- 
 \“The industrial system is profoundly dependent on commercial |
  `\   television and could not exist in its present form without it.” |
_o__)—John Kenneth Galbraith, _The New Industrial State_, 1967 |
Ben Finney

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: OT: This Swift thing

2014-06-03 Thread Chris Angelico
On Wed, Jun 4, 2014 at 6:43 AM, Sturla Molden sturla.mol...@gmail.com wrote:
 A Python with static typing would have been far better, IMHO. It seems they
 have created a Python-JavaScript bastard with random mix of features.
 Unfortunately they retained the curly brackets from JS...

More important than the syntax is the semantics. Have they kept the
embarrassment of UTF-16 strings? I skimmed the docs, and I *think*
they've made it support Unicode. No idea how performance and memory
usage are, but once you have the semantics right, you can worry about
performance later.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: OT: This Swift thing

2014-06-03 Thread Mark H Harris

On 6/3/14 1:26 PM, Skip Montanaro wrote:

From Apple's perspective, there's always platform lock-in. That's good

for them, so it must be good for you, right? :-)



http://www.theregister.co.uk/2014/06/02/apple_aims_to_speed_up_secure_coding_with_swift_programming_language/

The key to this Swift thing is the same for the Julia thing... LLVM.

Swift is getting a huge performance boost from LLVM, not to mention that 
it's not bloated, nor is it designed by committee. ehem.


This has less to do with lock-in per se, and more to do with quality 
control and consistency. OTOH, it has A LOT to do with reinventing the 
wheel. I love it. Every time a product comes out like Julia, or Swift, 
the committee needs to take notice, and perhaps adapt.


marcus

--
https://mail.python.org/mailman/listinfo/python-list


Re: Loading modules from files through C++

2014-06-03 Thread Roland Plüss
I came now a bit further with Python 3 but I'm hitting a total
road-block right now with the importer in C++ which worked in Py2 but is
now totally broken in Py3. In general I've got a C++ class based module
which has two methods:

{ "find_module", ( PyCFunction )spModuleModuleLoader::cfFindModule,
  METH_VARARGS, "Retrieve finder for a path." },
{ "load_module", ( PyCFunction )spModuleModuleLoader::cfLoadModule,
  METH_VARARGS, "Load module for a path." },

An instance of this object is added to sys.meta_path

This is the same as with Py2. But in Py3 I get now this strange error
and everything breaks:

Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 1565, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1523, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 1477, in _find_module
SystemError: Bad call flags in PyCFunction_Call. METH_OLDARGS is no
longer supported!

This happens whenever I try to import something. I never used
METH_OLDARGS anywhere so I assume something is broken inside python.
Maybe wrong error code for not finding some method or wrong arguments? I
can't find any useful documentation on what could cause this problem.

-- 
Yours sincerely
Plüss Roland

Leader and Head Programmer
- Game: Epsylon ( http://www.indiedb.com/games/epsylon )
- Game Engine: Drag[en]gine ( http://www.indiedb.com/engines/dragengine
, http://dragengine.rptd.ch/wiki )
- Normal Map Generator: DENormGen ( http://epsylon.rptd.ch/denormgen.php )
- As well as various Blender export scripts und game tools



signature.asc
Description: OpenPGP digital signature
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: OT: This Swift thing

2014-06-03 Thread Marko Rauhamaa
Sturla Molden sturla.mol...@gmail.com:

 A Python with static typing would have been far better, IMHO.

I don't think static typing and Python should be mentioned in the same
sentence.

 It seems they have created a Python-JavaScript bastard with random mix
 of features. Unfortunately they retained the curly brackets from JS...

It seems Swift is more of an imitation of Go than Python.

Swift leaves me cold: I actually *like* semicolons.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Having trouble in expressing constraints in Python

2014-06-03 Thread varun7rs

 
 Are you trying to implement your own code rather than use an existing 
 library from pypi?

I borrowed the idea from a previous file which I was working on. I input 
variables and coefficients as lists and then, in turn, as matrices to CPLEX. 
So, I have a problem with expressing the constraint in Python.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Micro Python -- a lean and efficient implementation of Python 3

2014-06-03 Thread Paul Sokolovsky
Hello,

On Wed, 4 Jun 2014 03:08:57 +1000
Chris Angelico ros...@gmail.com wrote:

[]

 With that encouragement, I just cloned your repo and built it on amd64
 Debian Wheezy. Works just fine! Except... I've just found one fairly
 major problem with your support of Python 3.x syntax. Your str type is
 documented as not supporting Unicode. Is that a current flaw that
 you're planning to remove, or a design limitation? Either way, I'm a
 bit dubious about a purported version 1 that doesn't do one of the
 things that Py3 is especially good at - matched by very few languages
 in its encouragement of best practice with Unicode support.

I should start by saying that it's MicroPython that made me look at
Python 3. So for me, it has already done a lot of good by getting me out from
under the rock: now, instead of "at my job, we use python 2.x" I may report
"at my job, we don't wait until our distro kicks us in the ass, and
add 'from __future__ import print_function' whenever we touch some
code."

With that in mind, I, like many others, think that forcing Unicode bloat
upon people by default is the most controversial feature of Python3.
The reason is that you go very long way dealing with languages of the
people of the world by just treating strings as consisting of 8-bit
data. I'd say, that's enough for 90% of applications. Unicode is needed
only if one needs to deal with multiple languages *at the same time*,
which is fairly rare (remaining 10% of apps).

And please keep in mind that MicroPython was originally intended (and
should be remain scalable down to) an MCU. Unicode needed there is even
less, and even less resources to support Unicode just because.

 
 What is your str type actually able to support? It seems to store
 non-ASCII bytes in it, which I presume are supposed to represent the
 rest of Latin-1, but I wasn't able to print them out:

There's a work-in-progress on documenting differences between CPython
and MicroPython at
https://github.com/micropython/micropython/wiki/Differences, it gives
following account on this:

No unicode support is actually implemented. Python3 calls for strict
difference between str and bytes data types (unlike Python2, which has
neutral unified data type for strings and binary data, and separates
out unicode data type). MicroPython faithfully implements str/bytes
separation, but currently, underlying str implementation is the same as
bytes. This means strings in MicroPython are not unicode, but 8-bit
characters (fully binary-clean).

 
 Micro Python v1.0.1-144-gb294a7e on 2014-06-04; UNIX version
  print(asdf\xfdqwer)
 
 Python 3.5.0a0 (default:6a0def54c63d, Mar 26 2014, 01:11:09)
 [GCC 4.7.2] on linux
  print(asdf\xfdqwer)
 asdfýqwer
 
 In fact, printing seems to work with bytes:
 
  print(asdf\xc3\xbdqwer)
 asdfýqwer
 
 (my terminal uses UTF-8, this is the UTF-8 encoding of the above
 string)
 
 I would strongly recommend either implementing all of PEP 393, or at
 least making it very clear that this pretends everything is bytes -
 and possibly disallowing any codepoint 127 in any string, which will
 at least mean you're safe on all ASCII-compatible encodings.

MicroPython is not the first tiny Python implementation. What sets
MicroPython apart is that it is neither its aim nor its motto to be a subset of
the language. And yet, it's not a CPython rewrite either. So, while Unicode
support is surely possible, it's unlikely to be done as all of
PEPxxx. If you ask me, I'd personally envision it to be implemented as
UTF-8 (in this regard I agree with (or take an influence from) 
http://lucumr.pocoo.org/2014/1/9/ucs-vs-utf8/). But I don't have plans
to work on Unicode any time soon - applications I envision for
MicroPython so far fit in those 90% that live happily without Unicode.

But generally, there's no strict roadmap for MicroPython features.
While core of the language (parser, compiler, VM) is developed by
Damien, many other features were already contributed by the community
(project went open-source at the beginning of the year). So, if someone
will want to see Unicode support up to the level of providing patches,
it gladly will be accepted. The only thing we established is that we
want to be able to scale down, and thus almost all features should be
configurable.


 
 ChrisA
 -- 
 https://mail.python.org/mailman/listinfo/python-list



-- 
Best regards,
 Paul  mailto:pmis...@gmail.com
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: OT: This Swift thing

2014-06-03 Thread Mark H Harris

On 6/3/14 3:43 PM, Sturla Molden wrote:

Nicholas Cole nicholas.c...@gmail.com wrote:

 {snip}

Unfortunately they retained the curly brackets from JS...



The curly braces come from C, and before that B and A/.

(I think others used them too before that, but it escapes me now and I'm 
too lazy to google it)


... but the point is that curly braces don't come from JS !

I have been engaged in a minor flame debate (locally) over block 
delimiters (or lack thereof) which I'm losing. Locally, people hate 
python's indentation block delimiting, and wish python would adopt curly 
braces. I do not agree, of course; however, I am noticing that when new 
languages come out they either use END (as in Julia) or they propagate 
the curly braces paradigm as in C.   The issue locally is that trying to pass 
code snippets around the net informally is a problem with indentation. 
My reply is, well, don't do that. For what I see as a freedom issue, 
folks want to format their white space (style) their way and don't want 
to be forced into an indentation paradigm that is rigid (or not so much!).


We even have a couple of clucks on our side of the world that refuse to 
even get their feet wet in python because they hate the indentation 
paradigm.



marcus

--
https://mail.python.org/mailman/listinfo/python-list


Re: Micro Python -- a lean and efficient implementation of Python 3

2014-06-03 Thread Chris Angelico
On Wed, Jun 4, 2014 at 7:41 AM, Paul Sokolovsky pmis...@gmail.com wrote:
 Hello,

 On Wed, 4 Jun 2014 03:08:57 +1000
 Chris Angelico ros...@gmail.com wrote:

 []

 With that encouragement, I just cloned your repo and built it on amd64
 Debian Wheezy. Works just fine! Except... I've just found one fairly
 major problem with your support of Python 3.x syntax. Your str type is
 documented as not supporting Unicode. Is that a current flaw that
 you're planning to remove, or a design limitation? Either way, I'm a
 bit dubious about a purported version 1 that doesn't do one of the
 things that Py3 is especially good at - matched by very few languages
 in its encouragement of best practice with Unicode support.

 I should start with saying that it's MicroPython what made me look at
 Python3. So for me, it already did lot of boon by getting me from under
 the rock, so now instead of at my job, we use python 2.x I may report
 at my job, we don't wait when our distro will kick us in the ass, and
 add 'from __future__ import print_function' whenever we touch some
 code.

And that's a good thing :) Using Python 2.7 and starting to put in the
future directives breaks nothing, and will save you time later.

 With that in mind, I, as many others, think that forcing Unicode bloat
 upon people by default is the most controversial feature of Python3.
 The reason is that you go very long way dealing with languages of the
 people of the world by just treating strings as consisting of 8-bit
 data. I'd say, that's enough for 90% of applications. Unicode is needed
 only if one needs to deal with multiple languages *at the same time*,
 which is fairly rare (remaining 10% of apps).

Absolutely not. This is the mentality that results in web applications
that break on funny characters, which is completely the wrong way to
look at it. The truth is, there are not many funny characters in
Unicode at all; I found these, but that's about it:

http://www.fileformat.info/info/unicode/char/1F601/index.htm
http://www.fileformat.info/info/unicode/char/1F638/index.htm

Your code should accept any valid character with equal correctness.
(Note to jmf: Correctness does not necessarily imply exact nanosecond
performance, just that the right result is reached.) These days,
Unicode *is* needed everywhere. You might think you can get away with
8-bit data, but is that 8-bit data actually encoded Latin-1 or
UTF-8? There's a vast difference between them, and you'll hit it in
any English text with U+00A9 ©, or U+201C U+201D quotes, or any of a
large number of other common non-ASCII characters. Oh, and the three I
just mentioned happen to be in CP-1252, another common 8-bit encoding,
and a lot of people and programs don't know how to tell CP-1252 from
Latin-1 and label one as the other.
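
A two-line illustration of that difference (Python 3; the copyright line
is made up):

data = "© 2014 Example Pty Ltd".encode("utf-8")
print(data.decode("utf-8"))     # © 2014 Example Pty Ltd
print(data.decode("latin-1"))   # Â© 2014 Example Pty Ltd -- silently wrong, no error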

Unicode is needed on anything that touches the internet, which is a
*lot* more than 10% of applications. Unicode is also needed on
anything that shares files with anyone who speaks more than one
language, or uses any symbol that isn't in ASCII, or pretty much
anything beyond plain English with a restricted set of punctuation.
And even if you can guarantee that you're working only with English
and only with ASCII, you still need to be aware that ASCII text is
different stuff from a JPEG file, although it's possible to bury
your head in the sand over that one.

 But generally, there's no strict roadmap for MicroPython features.
 While core of the language (parser, compiler, VM) is developed by
 Damien, many other features were already contributed by the community
 (project went open-source at the beginning of the year). So, if someone
 will want to see Unicode support up to the level of providing patches,
 it gladly will be accepted. The only thing we established is that we
 want to be able to scale down, and thus almost all features should be
 configurable.

And that's exactly what's happening right now.

https://github.com/micropython/micropython/issues/657
https://github.com/Rosuav/micropython

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: OT: This Swift thing

2014-06-03 Thread Chris Angelico
On Wed, Jun 4, 2014 at 7:49 AM, Mark H Harris harrismh...@gmail.com wrote:
 On 6/3/14 3:43 PM, Sturla Molden wrote:

 Nicholas Cole nicholas.c...@gmail.com wrote:

 {snip}

 Unfortunately they retained the curly brackets from JS...


 The curly braces come from C, and before that B and A/.

 (I think others used them too before that, but it escapes me now and I'm too
 lazy to google it)

 ... but the point is that curly braces don't come from JS !

If a merger between JS and Python adopts braces, the braces came from
JS. You look at a baby and say he has his father's nose
(http://tinyurl.com/kqltth4 perhaps?), not that he has his
great-grandmother's nose, even if it's the same nose.

 I have been engaged in a minor flame debate (locally) over block delimiters
 (or lack thereof) which I'm losing. Locally, people hate python's
 indentation block delimiting, and wish python would adopt curly braces. I do
 not agree, of course; however, I am noticing that when new languages come out
 they either use END (as in Julia) or they propagate the curly braces
 paradigm as in C.   The issue locally is that passing code snippets around
 the net informally is a problem with indentation. My reply is, well, don't
 do that. For what I see as a freedom issue, folks want to format their white
 space (style) their way and don't want to be forced into an indentation
 paradigm that is rigid (or not so much!).

I quite like braces, myself, but I'm happy with either model. But I
don't like massively verbose END blocks, nor the syntactic salt of
bash's case/esac, if/fi, etc (match the end marker to the beginning).
Keep it simple and keep it unobtrusive.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: OT: This Swift thing

2014-06-03 Thread Eric S. Johansson


On 6/3/2014 5:49 PM, Mark H Harris wrote:


I have been engaged in a minor flame debate (locally) over block 
delimiters (or lack thereof) which I'm losing. Locally, people hate 
python's indentation block delimiting, and wish python would adopt 
curly braces. I do not agree, of course; however, I am noticing that when 
new languages come out they either use END (as in Julia) or they 
propagate the curly braces paradigm as in C.   The issue locally is 
that passing code snippets around the net informally is a problem 
with indentation. My reply is, well, don't do that. For what I see as 
a freedom issue, folks want to format their white space (style) their 
way and don't want to be forced into an indentation paradigm that is 
rigid (or not so much!).


The only problem I have with indentation defining blocks is that it's 
hard to cut and paste code and make it fit the right block level 
automatically. Too many times I've made the mistake of having one or two 
lines off by one or two levels of indentation, and somehow the code doesn't 
work as I thought it would :-) It also makes it difficult to generate 
code automatically.


On the other hand, curly braces are royal pain to dictate or navigate 
around when programming with speech recognition.


--- eric

--
https://mail.python.org/mailman/listinfo/python-list


Re: OT: This Swift thing

2014-06-03 Thread Chris Angelico
On Wed, Jun 4, 2014 at 9:22 AM, Eric S. Johansson e...@harvee.org wrote:
 On the other hand, curly braces are royal pain to dictate or navigate around
 when programming with speech recognition.

I've never done that, in any language, but if I had to guess, I'd say
that both braces and indentation are harder to work with than a REXX
style where *everything* is words. :)

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: OT: This Swift thing

2014-06-03 Thread Eric S. Johansson


On 6/3/2014 7:29 PM, Chris Angelico wrote:

On Wed, Jun 4, 2014 at 9:22 AM, Eric S. Johansson e...@harvee.org wrote:

On the other hand, curly braces are royal pain to dictate or navigate around
when programming with speech recognition.

I've never done that, in any language, but if I had to guess, I'd say
that both braces and indentation are harder to work with than a REXX
style where *everything* is words. :)


The model I am working with now (it requires a very smart editor) is 
the jump down / jump up approach, or the combination of mouse and voice 
where you move the cursor to where you want to go and say go here. 
Yes, sometimes it's easier to move a mouse than it is to click it, and both 
are easier than dragging.

--
https://mail.python.org/mailman/listinfo/python-list


Re: OT: This Swift thing

2014-06-03 Thread Kevin Walzer

On 6/3/14, 4:43 PM, Sturla Molden wrote:

Are Python apps still banned from AppStore, even if we bundle an
interpreter?


Python apps are not banned from the App Store. See 
https://itunes.apple.com/us/app/quickwho/id419483981?mt=12.


--
Kevin Walzer
Code by Kevin/Mobile Code by Kevin
http://www.codebykevin.com
http://www.wtmobilesoftware.com
--
https://mail.python.org/mailman/listinfo/python-list


Re: OT: This Swift thing

2014-06-03 Thread Sturla Molden

On 04/06/14 01:39, Kevin Walzer wrote:

On 6/3/14, 4:43 PM, Sturla Molden wrote:

Are Python apps still banned from AppStore, even if we bundle an
interpreter?


Python apps are not banned from the App Store. See
https://itunes.apple.com/us/app/quickwho/id419483981?mt=12.



Mac AppStore yes, iOS AppStore as well?

There used to be a ban on interpreted languages to keep out Java and 
Flash, but it also hurt Python. On Mac there has never been a policy 
against Java or Flash.


Sturla





--
https://mail.python.org/mailman/listinfo/python-list


Re: IDE for python

2014-06-03 Thread Joseph Martinot-Lagarde

On 28/05/2014 13:31, Sameer Rathoud wrote:

 I was searching for spyder, but didn't find any helpful installer.


What problem did you encounter while trying to install Spyder?

Spyder is oriented towards scientific applications, but can be used as a 
general python IDE. I use it for GUI development too.



--
https://mail.python.org/mailman/listinfo/python-list


Upgrading from Python verison 2.7 to 3.4.1

2014-06-03 Thread Skafec, Allison
Hello All-

Please forgive me, as I am new to installing and configuring Python. I am a 
server administrator trying to install a new version of Python on a server. We 
currently have Python version 2.7 installed (located at C:/Python27), along 
with Python (x,y) and using Spyder2 to view. I have installed Python version 
3.4.1 (located at C:/Python34) and also have Spyder installed within this 
folder. I open Spyder through the Python34 folder, but it still opens Python 
version 2.7. I cannot seem to get it to open version 3.4.1. We do not want to 
uninstall Python 2.7 until we have Python 3.4 ready, as we do not want users to 
lose the use of the tool. Any guidance is greatly appreciated.

Thank you!
Allison

-- 
https://mail.python.org/mailman/listinfo/python-list


Unicode and Python - how often do you index strings?

2014-06-03 Thread Chris Angelico
A current discussion regarding Python's Unicode support centres (or
centers, depending on how close you are to the cent[er]{2} of the
universe) around one critical question: Is string indexing common?

Python strings can be indexed with integers to produce characters
(strings of length 1). They can also be iterated over from beginning
to end. Lots of operations can be built on either one of those two
primitives; the question is, how much can NOT be implemented
efficiently over iteration, and MUST use indexing? Theories are great,
but solid use-cases are better - ideally, examples from actual
production code (actual code optional).
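
For concreteness, a trivial illustrative sketch of the distinction:

    s = "pythön"
    # Easily built on iteration alone:
    upper = "".join(ch.upper() for ch in s)
    # Hard to express without indexing:
    middle = s[len(s) // 2]      # character at an arbitrary position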

I know the collective experience of python-list can't fail to bring up
a few solid examples here :)

Thanks in advance, all!!

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Upgrading from Python verison 2.7 to 3.4.1

2014-06-03 Thread Chris Angelico
On Wed, Jun 4, 2014 at 5:43 AM, Skafec, Allison
allison.ska...@alliancedata.com wrote:
 Please forgive me, as I am new to installing and configuring Python. I am a
 server administrator trying to install a new version of Python on a server.
 We currently have Python version 2.7 installed (located at C:/Python27),
 along with Python (x,y) and using Spyder2 to view. I have installed Python
 version 3.4.1 (located at C:/Python34) and also have Spyder installed within
 this folder. I open Spyder through the Python34 folder, but it still opens
 Python version 2.7. I cannot seem to get it to open version 3.4.1. We do not
 want to uninstall Python 2.7 until we have Python 3.4 ready, as we do not
 want users to lose the use of the tool. Any guidance is greatly appreciated.

When you say open Spyder through the Python34 folder, do you mean
you double-click on a file and use its Windows associations? If so,
the solution may be (depending on your setup) very easy - just make
sure Spyder has a shebang at the top, and it'll be invoked with the
right version of Python. Alternatively, simply create a shortcut to
C:/Python34/python.exe (or pythonw.exe, to suppress the console - this
is what a .pyw file will be associated with), passing it the name of
the Spyder file as an argument. That's probably the easiest solution -
it doesn't interfere with anything else on the system, doesn't change
associations, just gives you a way to run *this* Python and *that*
script.
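
As a quick sanity check (assuming you can get to Spyder's interactive
console), you can also ask the running interpreter which binary it is:

    import sys
    print(sys.executable)   # e.g. C:\Python27\pythonw.exe vs C:\Python34\pythonw.exe
    print(sys.version)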

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unicode and Python - how often do you index strings?

2014-06-03 Thread Tim Chase
On 2014-06-04 10:39, Chris Angelico wrote:
 A current discussion regarding Python's Unicode support centres (or
 centers, depending on how close you are to the cent[er]{2} of the
 universe) around one critical question: Is string indexing common?
 
 Python strings can be indexed with integers to produce characters
 (strings of length 1). They can also be iterated over from beginning
 to end. Lots of operations can be built on either one of those two
 primitives; the question is, how much can NOT be implemented
 efficiently over iteration, and MUST use indexing? Theories are
 great, but solid use-cases are better - ideally, examples from
 actual production code (actual code optional).

Many of my string-indexing uses revolve around a sliding window which
can be done with itertools[1], though I often just roll it as
something like

  n = 3
  for i in range(1 + len(s) - n):
do_something(s[i:i+n])

So that could be supplanted by the SO iterator linked below.
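
For reference, the windowing generator I mean is roughly this shape (a
from-memory sketch, not copied from the link):

  from itertools import islice

  def window(seq, n=3):
      # Yield successive overlapping tuples of length n from seq.
      it = iter(seq)
      result = tuple(islice(it, n))
      if len(result) == n:
          yield result
      for elem in it:
          result = result[1:] + (elem,)
          yield result

  for chunk in window(s, n):          # same s and n as above
      do_something("".join(chunk))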

The other big use case I have from production code involves a
column-offset delimited file where the headers have a row of
underscores under them delimiting the field widths, so it looks
something like

  EmpID   Name                 Cost Center
  ------  -------------------  -----------
  314159  Longstocking, Pippi  RJ45
  265358  Davis, Miles         JA22
  979328  Bell, Alexander      RJ15

I then take row 2 and use it to make a mapping of header-name to a
slice-object for slicing the subsequent strings:

  import re
  r = re.compile('-+')  # a sequence of 1+ dashes
  f = file("data.txt")
  headers = next(f)
  lines = next(f)
  header_map = dict(
      (headers[i.start():i.end()].strip().upper(),
       slice(i.start(), i.end()))
      for i in r.finditer(lines)
      )
  for row in f:
      print("EmpID = %s" % row[header_map["EMPID"]].strip())
      print("Name = %s" % row[header_map["NAME"]].strip())
      # ...

which I presume uses string indexing under the hood.

Perhaps there's a better way of doing that, but it's what I currently
use to process these large-ish files (largest max out at 10-20MB each)

There might be other use-cases I've done, but those two leap to mind.

-tkc


[1]
http://stackoverflow.com/questions/6822725/rolling-or-sliding-window-iterator-in-python




-- 
https://mail.python.org/mailman/listinfo/python-list



Re: immutable vs mutable

2014-06-03 Thread Deb Wyatt

 
 The examples deal mostly with names and scope. The article in my opinion
 confuses a Python concept which is otherwise very straightforward and which
 has been beaten to death on this forum.
 
 marcus

Well, I'm glad you find this concept straight-forward.  I guess I'm not as 
smart as you.  I won't beat it anymore.

Deb in WA, USA




-- 
https://mail.python.org/mailman/listinfo/python-list



Re: Unicode and Python - how often do you index strings?

2014-06-03 Thread Roy Smith
In article mailman.10656.1401842403.18130.python-l...@python.org,
 Chris Angelico ros...@gmail.com wrote:

 A current discussion regarding Python's Unicode support centres (or
 centers, depending on how close you are to the cent[er]{2} of the
 universe)

<sarcasm style="regex-pedant">Um, you mean cent(er|re), don't you?  The 
pattern you wrote also matches "centee" and "centrr".</sarcasm>

 around one critical question: Is string indexing common?

Not in our code.  I've got 80008 non-blank lines of Python (2.7) source 
handy.  I tried a few heuristics to find patterns which might be string 
indexing.

$ find . -name '*.py' | xargs egrep '\[[^]][0-9]+\]'

and then looked them over manually.  I see this pattern a bunch of times 
(in a single-use script):

data['shard_key'] = hashlib.md5(str(id)).hexdigest()[:4]  

We do this once:

if tz_offset[0] == '-':

We do this somewhere in some command-line parsing:

process_match = args.process[:15]

There's this little gem:

return [dedup(x[1:-1].lower()) for x in 
re.findall('(\[[^\]\[]+\]|\([^\)\(]+\))',title)]

It appears I wrote this one, but I don't remember exactly what I had in 
mind at the time...

withhyphen = number if '-' in number else (number[:-2] + '-' + 
number[-2:]) # big assumption here

Anyway, there's a bunch more, but the bottom line is that in our code, 
indexing into a string (at least explicitly in application source code) 
is a pretty rare thing.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: immutable vs mutable

2014-06-03 Thread Ethan Furman

On 06/03/2014 06:14 PM, Deb Wyatt wrote:

Mark Harris wrote:


The examples deal mostly with names and scope. The article in my opinion
confuses a Python concept which is otherwise very straightforward and which
has been beaten to death on this forum.


Well, I'm glad you find this concept straight-forward.  I guess I'm not
 as smart as you.  I won't beat it anymore.


Deb, do yourself a favor and just trash-can anything from Mark Harris.

And keep asking questions.

--
~Ethan~

--
https://mail.python.org/mailman/listinfo/python-list


Re: Unicode and Python - how often do you index strings?

2014-06-03 Thread Ethan Furman

On 06/03/2014 05:39 PM, Chris Angelico wrote:


A current discussion regarding Python's Unicode support centres (or
centers, depending on how close you are to the cent[er]{2} of the
universe) around one critical question: Is string indexing common?


I use it quite a bit, but the strings are usually quite small (well under 100 characters) so an implementation that 
wasn't O(1) would not hurt me much.


--
~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list


Re: OT: This Swift thing

2014-06-03 Thread Michael Torrie
On 06/03/2014 03:01 PM, Chris Angelico wrote:
 On Wed, Jun 4, 2014 at 6:43 AM, Sturla Molden sturla.mol...@gmail.com wrote:
 A Python with static typing would have been far better, IMHO. It seems they
 have created a Python-JavaScript bastard with random mix of features.
 Unfortunately they retained the curly brackets from JS...
 
 More important than the syntax is the semantics. Have they kept the
 embarrassment of UTF-16 strings? I skimmed the docs, and I *think*
 they've made it support Unicode. No idea how performance and memory
 usage are, but once you have the semantics right, you can worry about
 performance later.

A Swift string is simply a one-to-one mapping of the NSString class.
Apple claims it is unicode compliant, whatever that means.

https://developer.apple.com/library/prerelease/ios/documentation/Swift/Conceptual/Swift_Programming_Language/StringsAndCharacters.html

In some ways Swift reminds me of Vala.  IE it's syntactic sugar around
existing class libraries that expose them as basic types.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unicode and Python - how often do you index strings?

2014-06-03 Thread Chris Angelico
On Wed, Jun 4, 2014 at 11:18 AM, Roy Smith r...@panix.com wrote:
 In article mailman.10656.1401842403.18130.python-l...@python.org,
  Chris Angelico ros...@gmail.com wrote:

 A current discussion regarding Python's Unicode support centres (or
 centers, depending on how close you are to the cent[er]{2} of the
 universe)

 <sarcasm style="regex-pedant">Um, you mean cent(er|re), don't you?  The
 pattern you wrote also matches "centee" and "centrr".</sarcasm>

Maybe there's someone who spells it that way! Let's not be excluding
people. That'd be rude.

 around one critical question: Is string indexing common?

 Not in our code.  I've got 80008 non-blank lines of Python (2.7) source
 handy.  I tried a few heuristics to find patterns which might be string
 indexing.

 $ find . -name '*.py' | xargs egrep '\[[^]][0-9]+\]'

 and then looked them over manually.  I see this pattern a bunch of times
 (in a single-use script):

 data['shard_key'] = hashlib.md5(str(id)).hexdigest()[:4]

Slicing is a form of indexing too, although in this case (slicing from
the front) it could be implemented on top of UTF-8 without much
problem.

 withhyphen = number if '-' in number else (number[:-2] + '-' +
 number[-2:]) # big assumption here

This *definitely* counts; if strings were represented internally in
UTF-8, this would involve two scans (although a smart implementation
could probably count backward rather than forward). By the way, any
time you slice up to the third from the end, you win two extra awesome
points, just for putting [:-3] into your code and having it mean
something. But I digress.

 Anyway, there's a bunch more, but the bottom line is that in our code,
 indexing into a string (at least explicitly in application source code)
 is a pretty rare thing.

Thanks. Of course, the pattern you searched for is looking only for
literals; it's a bit harder to find cases where the index (or slice
position) comes from a variable or expression, and those situations
are also rather harder to optimize (the MD5 prefix is clearly better
scanned from the front, the number tail is clearly better scanned from
the back - but with a variable?).

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unicode and Python - how often do you index strings?

2014-06-03 Thread Chris Angelico
On Wed, Jun 4, 2014 at 11:11 AM, Tim Chase
python.l...@tim.thechases.com wrote:
 I then take row 2 and use it to make a mapping of header-name to a
 slice-object for slicing the subsequent strings:

   slice(i.start(), i.end())

 print("EmpID = %s" % row[header_map["EMPID"]].strip())
 print("Name = %s" % row[header_map["NAME"]].strip())

 which I presume uses string indexing under the hood.

Yes, it's definitely going to be indexing. If strings were represented
internally in UTF-8, each of those calls would need to scan from the
beginning of the string, counting and discarding characters until it
finds the place to start, then counting and retaining characters until
it finds the place to stop. Definite example of what I'm looking for,
thanks!
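
To spell out the cost, here's a rough illustrative sketch (Python 3) of
what a character-indexed slice has to do when the storage is UTF-8 bytes:

    def char_slice(utf8_bytes, start, stop):
        # Walk the bytes from the front, counting characters; continuation
        # bytes (0b10xxxxxx) never start a character, so skip them.
        count = 0
        begin = end = len(utf8_bytes)
        for pos, byte in enumerate(utf8_bytes):
            if byte & 0xC0 == 0x80:
                continue
            if count == start:
                begin = pos
            if count == stop:
                end = pos
                break
            count += 1
        return utf8_bytes[begin:end].decode("utf-8")

    # O(stop) work on every call, versus O(1) subscripting on a
    # fixed-width representation.
    print(char_slice("héllo wörld".encode("utf-8"), 2, 7))   # 'llo w'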

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: OT: This Swift thing

2014-06-03 Thread Chris Angelico
On Wed, Jun 4, 2014 at 11:47 AM, Michael Torrie torr...@gmail.com wrote:
 A Swift string is simply a one-to-one mapping of the NSString class.
 Apple claims it is unicode compliant, whatever that means.

 https://developer.apple.com/library/prerelease/ios/documentation/Swift/Conceptual/Swift_Programming_Language/StringsAndCharacters.html

Yeah, I was looking at the same page. Note how, further down, a syntax
is given for non-BMP character entities (the same as Python's), and
then a bit more down the page, iteration over a string is defined,
with a non-BMP character in the example string. That's a good start.
However, keep going down... and you find that the length of a string
is calculated by iteration, which is a bad sign. I don't see anything
about indexing, which is the most important part.
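
For comparison, in Python terms that's roughly the difference between

    sum(1 for ch in s)   # length by iteration, O(n)

and

    len(s)               # stored length, O(1)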

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Corrupted stacktrace?

2014-06-03 Thread Nikolaus Rath
Hello,

I'm trying to debug a problem. As far as I can tell, one of my methods
is called at a point where it really should not be called. When setting
a breakpoint in the function, I'm getting this:

> /home/nikratio/in-progress/s3ql/src/s3ql/backends/s3c.py(693)close()
-> if not self.md5_checked:
(Pdb) bt
  /usr/lib/python3.3/threading.py(878)_bootstrap()
-> self._bootstrap_inner()
  /usr/lib/python3.3/threading.py(901)_bootstrap_inner()
-> self.run()
  /usr/lib/python3.3/threading.py(858)run()
-> self._target(*self._args, **self._kwargs)
  /usr/lib/python3.3/socketserver.py(610)process_request_thread()
-> self.finish_request(request, client_address)
  /usr/lib/python3.3/socketserver.py(345)finish_request()
-> self.RequestHandlerClass(request, client_address, self)
  /usr/lib/python3.3/socketserver.py(666)__init__()
-> self.handle()
  /home/nikratio/in-progress/s3ql/tests/mock_server.py(77)handle()
-> return super().handle()
  /usr/lib/python3.3/http/server.py(402)handle()
-> self.handle_one_request()
  /usr/lib/python3.3/http/server.py(388)handle_one_request()
-> method()
  /home/nikratio/in-progress/s3ql/tests/mock_server.py(169)do_GET()
-> q = parse_url(self.path)
  /home/nikratio/in-progress/s3ql/tests/mock_server.py(52)parse_url()
-> p.params = urllib.parse.parse_qs(q.query)
  /usr/lib/python3.3/urllib/parse.py(553)parse_qs()
-> encoding=encoding, errors=errors)
  /usr/lib/python3.3/urllib/parse.py(585)parse_qsl()
-> pairs = [s2 for s1 in qs.split('&') for s2 in s1.split(';')]
  /usr/lib/python3.3/urllib/parse.py(585)<listcomp>()
-> pairs = [s2 for s1 in qs.split('&') for s2 in s1.split(';')]
  /home/nikratio/in-progress/s3ql/src/s3ql/backends/common.py(853)close()
-> self.fh.close()
> /home/nikratio/in-progress/s3ql/src/s3ql/backends/s3c.py(693)close()
-> if not self.md5_checked:


To me this does not make any sense.

Firstly, the thread that is (apparently) calling close should never ever
reach code in common.py. This thread is executing a socketserver handler
that is entirely contained in mock_server.py and only communicates with
the rest of the program via tcp.

Secondly, the backtrace does not make sense. How can evaluation of 

  pairs = [s2 for s1 in qs.split('&') for s2 in s1.split(';')]

in urllib/parse.py() result in a method call in backends/common.py?
There is no trickery going on, qs is a regular string:

(Pdb) up
(Pdb) up
(Pdb) up
(Pdb) l
580 into Unicode characters, as accepted by the bytes.decode() 
method.
581 
582 Returns a list, as G-d intended.
583 
584 qs, _coerce_result = _coerce_args(qs)
585  -> pairs = [s2 for s1 in qs.split('&') for s2 in s1.split(';')]
586 r = []
587 for name_value in pairs:
588 if not name_value and not strict_parsing:
589 continue
590 nv = name_value.split('=', 1)
(Pdb) whatis qs
<class 'str'>
(Pdb) p qs
''
(Pdb)

I have also tried to get a backtrace with the faulthandler module, but
it gives the same result:

Thread 0x7f7dafdb4700:
  File "/usr/lib/python3.3/cmd.py", line 126 in cmdloop
  File "/usr/lib/python3.3/pdb.py", line 318 in _cmdloop
  File "/usr/lib/python3.3/pdb.py", line 345 in interaction
  File "/usr/lib/python3.3/pdb.py", line 266 in user_line
  File "/usr/lib/python3.3/bdb.py", line 65 in dispatch_line
  File "/usr/lib/python3.3/bdb.py", line 47 in trace_dispatch
  File "/home/nikratio/in-progress/s3ql/src/s3ql/backends/s3c.py", line 693 in close
  File "/home/nikratio/in-progress/s3ql/src/s3ql/backends/common.py", line 853 in close
  File "/usr/lib/python3.3/urllib/parse.py", line 585 in <listcomp>
  File "/usr/lib/python3.3/urllib/parse.py", line 585 in parse_qsl
  File "/usr/lib/python3.3/urllib/parse.py", line 553 in parse_qs
  File "/home/nikratio/in-progress/s3ql/tests/mock_server.py", line 52 in parse_url
  File "/home/nikratio/in-progress/s3ql/tests/mock_server.py", line 169 in do_GET
  File "/usr/lib/python3.3/http/server.py", line 388 in handle_one_request
  File "/usr/lib/python3.3/http/server.py", line 402 in handle
  File "/home/nikratio/in-progress/s3ql/tests/mock_server.py", line 77 in handle
  File "/usr/lib/python3.3/socketserver.py", line 666 in __init__
  File "/usr/lib/python3.3/socketserver.py", line 345 in finish_request
  File "/usr/lib/python3.3/socketserver.py", line 610 in process_request_thread
  File "/usr/lib/python3.3/threading.py", line 858 in run
  File "/usr/lib/python3.3/threading.py", line 901 in _bootstrap_inner
  File "/usr/lib/python3.3/threading.py", line 878 in _bootstrap


Is it possible that the stack got somehow corrupted?

Does anyone have a suggestion how I could go about debugging this?

I am using Python 3.3.

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«
-- 
https://mail.python.org/mailman/listinfo/python-list

