Re: [Twisted-web] Twisted 15.1 Release Announcement

2015-04-13 Thread Glyph

> On Apr 13, 2015, at 04:17, HawkOwl  wrote:
> 
> On behalf of Twisted Matrix Laboratories, I am honoured to announce the 
> release of Twisted 15.1.0 -- just in time for the PyCon sprints!
> 
> This is not a big release, but does have some nice-to-haves:
> 
- You can now install Twisted's optional dependencies more easily -- for example, 
> `pip install twisted[tls]` installs Twisted with TLS support.
> - twisted.web.static.File allows defining a custom resource for rendering 
> forbidden pages.
> - Twisted's MSN support is now deprecated.
> - More documentation has been added on how Trial finds tests.
> - ...and 26 other closed tickets containing bug fixes, feature enhancements, 
> and documentation.
> 
> For more information, check the NEWS file (link provided below).
> 
> You can find the downloads at  (or 
> alternatively ) . The NEWS file 
> is also available at
> .
> 
> Many thanks to everyone who had a part in this release - the supporters of 
> the Twisted Software Foundation, the developers who contributed code as well 
> as documentation, and all the people building great things with Twisted!
> 
> Twisted Regards,
> Hawkie Owl

This is very exciting.  I am SUPER PUMPED to start telling people to `pip 
install twisted[tls]` instead of the whole mess it used to be.  Thanks in 
particular to you, Hawkie, and to Chris Wolfe who landed that patch (and 
brought twisted stickers to PyCon!)
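
(For the curious: an extra like [tls] is declared through setuptools'
extras_require -- the snippet below is a generic sketch, not Twisted's
actual setup.py:)

from setuptools import setup

setup(
    name='example',                  # hypothetical project
    version='1.0',
    extras_require={
        # enables: pip install example[tls]
        'tls': ['pyopenssl', 'service_identity'],
    },
)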

-glyph





Re: Pickle based workflow - looking for advice

2015-04-13 Thread Chris Angelico
On Tue, Apr 14, 2015 at 3:35 AM, Fabien  wrote:
> With multiprocessing, do I have to care about processes writing
> simultaneously in *different* files? I guess the OS takes good care of this
> stuff but I'm not an expert.

Not sure what you mean, here. Any given file will be written by
exactly one process? No possible problem. Multiprocessing within one
application doesn't change that.
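
A minimal sketch of that situation (file names hypothetical) -- each
worker owns its output file, so no locking or coordination is needed:

import multiprocessing as mp

def work(n):
    # each task writes to a file that no other task ever opens
    with open('out_%04d.txt' % n, 'w') as f:
        f.write('result for task %d\n' % n)

if __name__ == '__main__':
    pool = mp.Pool()
    pool.map(work, range(8), chunksize=1)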

ChrisA


Re: find all multiplicands and multipliers for a number

2015-04-13 Thread Paul Rubin
Chris Angelico  writes:
> Small point: Calling dunder methods is usually a bad idea, so I'd
> change this to "p = next(ps)" instead.

Oh yes, I forgot about that.  I'm used to ps.next() and was irritated to
find that it doesn't work in Python 3, so I did the ugly thing that was
closest.  Thanks.


Re: find all multiplicands and multipliers for a number

2015-04-13 Thread Chris Angelico
On Tue, Apr 14, 2015 at 12:42 PM, Paul Rubin  wrote:
> Just for laughs, this prints the first 20 primes using Python 3's
> "yield from":
>
> import itertools
>
> def sieve(ps):
>     p = ps.__next__()
>     yield p
>     yield from sieve(a for a in ps if a % p != 0)
>
> primes = sieve(itertools.count(2))
> print(list(itertools.islice(primes,20)))

Small point: Calling dunder methods is usually a bad idea, so I'd
change this to "p = next(ps)" instead. But yep, that works...
inefficiently, but it works.
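
That is, something like this (same behaviour, just the builtin next()
in place of the dunder call; `yield from` still needs Python 3.3+):

import itertools

def sieve(ps):
    p = next(ps)   # builtin next() instead of ps.__next__()
    yield p
    yield from sieve(a for a in ps if a % p != 0)

primes = sieve(itertools.count(2))
print(list(itertools.islice(primes, 20)))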

ChrisA


Re: find all multiplicands and multipliers for a number

2015-04-13 Thread Paul Rubin
Steven D'Aprano  writes:
> http://code.activestate.com/recipes/577821-integer-square-root-function/

The methods there are more "mathematical" but probably slower than what
I posted.

Just for laughs, this prints the first 20 primes using Python 3's 
"yield from":

import itertools

def sieve(ps):
    p = ps.__next__()
    yield p
    yield from sieve(a for a in ps if a % p != 0)

primes = sieve(itertools.count(2))
print(list(itertools.islice(primes,20)))

It's not that practical above a few hundred primes, probably.


showing a graph

2015-04-13 Thread Pippo
Hi, 

I want to show the graph for this code:

import re
from Tkinter import *
import tkFileDialog
import testpattern
import networkx as nx
import matplotlib.pyplot as plt

patternresult = []
nodes = {}
edges = {}

pattern = ['(#P\[\w*\])', '(#C\[\w*[\s\(\w\,\s\|\)]+\])',
           '(#ST\[\w*[\s\(\w\,\s\|\)]+\])', '(#A\[\w*[\s\(\w\,\s\|\)]+\])']

patternresult = testpattern.patternfinder()

G = nx.Graph()


for patres in patternresult:
    G.add_node(patres)
    if patres[1] == "P":
        first_node = patres


for patres in patternresult:
    if patres[1:3] == "ST":
        G.add_edge(first_node, patres, label="Sub-type")
    elif patres[1:2] == "C":
        G.add_edge(first_node, patres, label="Constraint_by")
    elif patres[1:3] == "A":
        G.add_edge(first_node, patres, label="Created_by")

#graph = G.edge()
pos = nx.shell_layout(G)
#nx.draw(G, pos)


nx.draw_networkx_nodes(G, pos, nodelist=None,
                       node_size=300, node_color='r',
                       node_shape='o', alpha=1.0,
                       cmap=None, vmin=None, vmax=None,
                       ax=None, linewidths=None)
nx.draw_networkx_edges(G, pos, edgelist=None, width=1.0, edge_color='k',
                       style='solid', alpha=None, edge_cmap=None,
                       edge_vmin=None, edge_vmax=None, ax=None, arrows=True)
nx.draw_networkx_labels(G, pos, font_size=10, font_family='sans-serif')

plt.show()

#plt.savefig("path.png")


But when I run the code, only a bar is shown on my desktop without showing the 
proper graph. Note that I checked the number of edges and the number of nodes 
and they are correct, and this output also has been shown:
#C[Health]
#P[Information]
#ST[genetic information]
#C[oral | (recorded in (any form | medium))]
#C[Is created or received by]
#A[health care provider | health plan | public health authority | employer | 
life insurer | school | university | or health care clearinghouse]
#C[Relates to]
#C[the past, present, or future physical | mental health | condition of an 
individual]
#C[the provision of health care to an individual]
#C[the past, present, or future payment for the provision of health care to an 
individual]

I don't get any error. Just that I don't see any graph!


Re: installing matplotlib

2015-04-13 Thread Pippo
On Monday, 13 April 2015 15:58:58 UTC-4, Ned Deily  wrote:
> In article <176d49d3-6ff8-4d35-b8ec-647f13250...@googlegroups.com>,
>  Pippo  wrote:
> > I am trying to install matplotlib and I keep getting error:
> [...]
> >   File "/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/distutils/version.py", line 343, in _cmp
> >     if self.version < other.version:
> > TypeError: unorderable types: str() < int()
> > 
> > any idea?
> 
> If you are running on OS X 10.10 (Yosemite), try upgrading to a current 
> Python 3.4.x.  3.3.x is no longer supported and was released long before 
> Yosemite was.
> 
> http://bugs.python.org/issue21811
> 
> -- 
>  Ned Deily,
>  n...@acm.org

Thanks


Re: installing matplotlib

2015-04-13 Thread Ned Deily
In article <176d49d3-6ff8-4d35-b8ec-647f13250...@googlegroups.com>,
 Pippo  wrote:
> I am trying to install matplotlib and I keep getting error:
[...]
>   File "/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/distutils/version.py", line 343, in _cmp
>     if self.version < other.version:
> TypeError: unorderable types: str() < int()
> 
> any idea?

If you are running on OS X 10.10 (Yosemite), try upgrading to a current 
Python 3.4.x.  3.3.x is no longer supported and was released long before 
Yosemite was.

http://bugs.python.org/issue21811
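
For anyone curious where the mixed-type comparison comes from: distutils' 
LooseVersion splits a version string into ints and strings, and on Python 3 
comparing a numeric component against a letter component is an error. A 
minimal illustration (version strings made up):

from distutils.version import LooseVersion

a = LooseVersion("2.3.1")   # parses to [2, 3, 1]
b = LooseVersion("2.3b")    # parses to [2, 3, 'b']
a < b   # Python 3: TypeError: unorderable types: int() < str()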

-- 
 Ned Deily,
 n...@acm.org



installing matplotlib

2015-04-13 Thread Pippo
Hi guys,

I am trying to install matplotlib and I keep getting error:

Traceback (most recent call last):
  File "setup.py", line 155, in <module>
    result = package.check()
  File "/Users/sepidehghanavati/Desktop/Programs/python/matplotlib-1.4.3/setupext.py", line 961, in check
    min_version='2.3', version=version)
  File "/Users/sepidehghanavati/Desktop/Programs/python/matplotlib-1.4.3/setupext.py", line 445, in _check_for_pkg_config
    if (not is_min_version(version, min_version)):
  File "/Users/sepidehghanavati/Desktop/Programs/python/matplotlib-1.4.3/setupext.py", line 173, in is_min_version
    return found_version >= expected_version
  File "/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/distutils/version.py", line 76, in __ge__
    c = self._cmp(other)
  File "/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/distutils/version.py", line 343, in _cmp
    if self.version < other.version:
TypeError: unorderable types: str() < int()

any idea?


Re: Pickle based workflow - looking for advice

2015-04-13 Thread Fabien

On 13.04.2015 19:08, Peter Otten wrote:
> How about a file-based workflow?
>
> Write distinct scripts, e. g.
>
> a2b.py that reads from *.a and writes to *.b
>
> and so on. Then use a plain old makefile to define the dependencies.
> Whether .a uses pickle, .b uses json, and .z uses csv is but an
> implementation detail that only its producers and consumers need to know.
> Testing an arbitrary step is as easy as invoking the respective script with
> some prefabricated input and checking the resulting output file(s).


I think I like the idea because it is more durable. The data I 
manipulate comes in specific formats which are very efficient. With 
the pickles I was kind of "lazy" and, well, saved myself a couple of 
read/write routines.


Still, your idea is probably more elegant.

With multiprocessing, do I have to care about processes writing 
simultaneously in *different* files? I guess the OS takes good care of 
this stuff but I'm not an expert.


Thanks,

Fabien



Re: Pickle based workflow - looking for advice

2015-04-13 Thread Fabien

On 13.04.2015 17:45, Devin Jeanpierre wrote:
> On Mon, Apr 13, 2015 at 10:58 AM, Fabien  wrote:
>> Now, to my questions:
>> 1. Does that seem reasonable?
>
> A big issue is the use of pickle, which is:
>
> * Often suboptimal performance wise (e.g. you can't load only subsets
> of the data)
> * Makes forwards/backwards compatibility very difficult
> * Can make python 2/3 migrations harder
> * Creates data files which are difficult to analyze/fix by hand if
> they get broken
> * Is schemaless, and can accidentally include irrelevant data you
> didn't mean to store, making all of the above worse.
> * Means you have to be very careful who wrote the pickles, or you open
> a remote code execution vulnerability. It's common for people to
> forget that code is unsafe, and get themselves pwned. Security is
> always better if you don't do anything bad in the first place, than if
> you do something bad but try to manage the context in which the bad
> thing is done.
>
> Cap'n Proto might be a decent alternative that gives you good
> performance, by letting you process only the bits of the file you want
> to. It is also not a walking security nightmare.

Thanks for your thoughts. All these concerns are rather secondary for 
the kind of tool I am working on, with the exception of speed. I will 
have a look at Cap'n Proto.






>> 2. Should Watershed be an object or should it be a simple dictionary? I
>> thought that an object could be nice, because it could take care of some
>> operations such as plotting and logging. Currently I defined a class
>> Watershed, but its attributes are defined and filled by A, B and C (this
>> seems a bit wrong to me).
>
> It is usually very confusing for attributes to be defined anywhere
> other than __init__. It's really confusing for them to be defined
> by some random other function living somewhere else.

Yes, OK. I will stop that.


>> I could give more responsibilities to this class
>> but it might become way too big: since the whole purpose of the tool is to
>> work on watersheds, making a Watershed class actually sounds like a code
>> smell (http://en.wikipedia.org/wiki/God_object)
>
> Whether they are methods or not doesn't make this any more or less of
> a god object -- if it stores all this data used by all these different
> things, it is already a bit off.

Yes, but I see no other way. The "god" container will probably be the 
watershed's directory with the data in it. The rest will specialize.




>> 3. The operation A opens an external file, reads data out of it and writes
>> it in Watershed object. Is it a bad idea to multiprocess this? (I guess it
>> is, since the file might be read twice at the same time)
>
> That does sound like a bad idea, for the reason you gave. It might be
> possible to read it once, and share it among many processes.

Yes. Thanks!



Re: Pickle based workflow - looking for advice

2015-04-13 Thread Fabien

On 13.04.2015 18:25, Dave Angel wrote:
> On 04/13/2015 10:58 AM, Fabien wrote:
>> Folks,
>
> A comment.  Pickle is a method of creating persistent data, most
> commonly used to preserve data between runs.  A database is another
> method.  Although either one can also be used with multiprocessing, you
> seem to be worrying more about the mechanism, and not enough about the
> problem.
>
>> I am writing a quite extensive piece of scientific software. Its
>> workflow is quite easy to explain. The tool realizes series of
>> operations on watersheds (such as mapping data on it, geostatistics and
>> more). There are thousands of independent watersheds of different size,
>> and the size determines the computing time spent on each of them.
>
> First question:  what is the name or "identity" of a watershed?
> Apparently it's named by a directory.  But you mention ID as well.  You
> write a function A() that takes only a directory name. Is that the name
> of the watershed?  One per directory?  And you can derive the ID from
> the directory name?
>
> Second question, is there any communication between watersheds, or are
> they totally independent?
>
> Third:  this "external data", is it dynamic, do you have to fetch it in
> a particular order, is it separated by watershed id, or what?
>
> Fourth:  when the program starts, are the directories all empty, so the
> presence of a pickle file tells you that A() has run?  Or is there some
> other meaning for those files?
>
>> Say I have the operations A, B, C and D. B and C are completely
>> independent but they need A to be run first, D needs B and C, and so
>> forth. Eventually the whole operations A, B, C and D will run once for
>> all,
>
> For all what?
>
>> but of course the whole development is an iterative process and I
>> rerun all operations many times.
>
> Based on what?  Is the external data changing, and you have to rerun
> functions to update what you've already stored about them?  Or do you
> just mean you call the A() function on every possible watershed?
>
> (I suddenly have to go out, so I can't comment on the rest, except that
> choosing to pickle, or to marshall, or to database, or to
> custom-serialize seems a bit premature.  You may have it all clear in
> your head, but I can't see what the interplay between all these calls to
> one-letter-named functions is intended to be.)



Thanks Dave for your interest. I'll make an example:

external files:
- watershed outlines (single file)
- global topography (single file)
- climate data (single file)

Each watershed has an ID. Each watershed is completely independant.

So the function A for example will take one ID as argument, open the 
watershed file and extract its outlines, make a local map, open the 
topography file, extract a part of it, make a watershed object and store 
the watershed's local data in it.


Function B will open the watershed pickle, take the local information it 
needs (like local topography, already cropped to the region of interest) 
and map climate data on it.


And so forth, so that each function A, B, C, ... builds upon the 
information of the others and adds its own "service" in terms of data.


Currently, all data (numpy arrays and vector objects mostly) are stored 
as object attributes, which is I guess bad practice. It's kind of a 
"database for dummies": reading the topography of watershed ID 0128 will be:

- open watershed.p in the '0128' directory
- read the watershed.topography attribute

I think that I like Peter's idea to follow a file based workflow 
instead, and forget about my watershed object for now.


But I'd still be interested in your comments if you find time for it.

Fabien



Re: Pickle based workflow - looking for advice

2015-04-13 Thread Peter Otten
Fabien wrote:

> I am writing a quite extensive piece of scientific software. Its
> workflow is quite easy to explain. The tool realizes series of
> operations on watersheds (such as mapping data on it, geostatistics and
> more). There are thousands of independent watersheds of different size,
> and the size determines the computing time spent on each of them.
> 
> Say I have the operations A, B, C and D. B and C are completely
> independent but they need A to be run first, D needs B and C, and so
> forth. Eventually the whole operations A, B, C and D will run once for
> all, but of course the whole development is an iterative process and I
> rerun all operations many times.

> 4. Other comments you might have?

How about a file-based workflow?

Write distinct scripts, e. g.

a2b.py that reads from *.a and writes to *.b

and so on. Then use a plain old makefile to define the dependencies.
Whether .a uses pickle, .b uses json, and .z uses csv is but an 
implementation detail that only its producers and consumers need to know. 
Testing an arbitrary step is as easy as invoking the respective script with 
some prefabricated input and checking the resulting output file(s).
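
A sketch of what one such step could look like (names and formats are 
placeholders, not a prescription):

#!/usr/bin/env python
# a2b.py -- read a watershed's .a file, write its .b file
import pickle
import sys

def main(src, dst):
    with open(src, 'rb') as f:
        watershed = pickle.load(f)      # step A's output
    # ... do step B's computation on `watershed` here ...
    with open(dst, 'wb') as f:
        pickle.dump(watershed, f)       # becomes the input of later steps

if __name__ == '__main__':
    main(sys.argv[1], sys.argv[2])

A pattern rule along the lines of `%.b: %.a` in the makefile then captures 
the dependency, and make only reruns the steps whose inputs changed.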




Re: Pickle based workflow - looking for advice

2015-04-13 Thread Dave Angel

On 04/13/2015 10:58 AM, Fabien wrote:

> Folks,



A comment.  Pickle is a method of creating persistent data, most 
commonly used to preserve data between runs.  A database is another 
method.  Although either one can also be used with multiprocessing, you 
seem to be worrying more about the mechanism, and not enough about the 
problem.



> I am writing a quite extensive piece of scientific software. Its
> workflow is quite easy to explain. The tool realizes series of
> operations on watersheds (such as mapping data on it, geostatistics and
> more). There are thousands of independent watersheds of different size,
> and the size determines the computing time spent on each of them.


First question:  what is the name or "identity" of a watershed? 
Apparently it's named by a directory.  But you mention ID as well.  You 
write a function A() that takes only a directory name. Is that the name 
of the watershed?  One per directory?  And you can derive the ID from 
the directory name?


Second question, is there any communication between watersheds, or are 
they totally independent?


Third:  this "external data", is it dynamic, do you have to fetch it in 
a particular order, is it separated by watershed id, or what?


Fourth:  when the program starts, are the directories all empty, so the 
presence of a pickle file tells you that A() has run?  Or is there some 
other meaning for those files?




> Say I have the operations A, B, C and D. B and C are completely
> independent but they need A to be run first, D needs B and C, and so
> forth. Eventually the whole operations A, B, C and D will run once for
> all,


For all what?


> but of course the whole development is an iterative process and I
> rerun all operations many times.


Based on what?  Is the external data changing, and you have to rerun 
functions to update what you've already stored about them?  Or do you 
just mean you call the A() function on every possible watershed?




(I suddenly have to go out, so I can't comment on the rest, except that 
choosing to pickle, or to marshall, or to database, or to 
custom-serialize seems a bit premature.  You may have it all clear in 
your head, but I can't see what the interplay between all these calls to 
one-letter-named functions is intended to be.)



--
DaveA


Re: Pickle based workflow - looking for advice

2015-04-13 Thread Robin Becker
For what it's worth, I believe that marshal is a faster method for storing simple 
python objects. So if your information can be stored using simple python things, 
e.g. strings, floats, integers, lists and dicts, then storage using marshal is 
faster than pickle/cPickle. If you want to persist the objects for years then 
the pickle protocol is probably better, as it is not python version dependent.
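
A quick (and unscientific) way to check that on your own data, assuming it 
really is plain containers:

import marshal, pickle, timeit

data = {'id': 128, 'topo': list(range(100000))}   # stand-in payload

print(timeit.timeit(lambda: marshal.dumps(data), number=100))
print(timeit.timeit(lambda: pickle.dumps(data, pickle.HIGHEST_PROTOCOL),
                    number=100))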

--
Robin Becker


Re: Pickle based workflow - looking for advice

2015-04-13 Thread Devin Jeanpierre
On Mon, Apr 13, 2015 at 10:58 AM, Fabien  wrote:
> Now, to my questions:
> 1. Does that seem reasonable?

A big issue is the use of pickle, which is:

* Often suboptimal performance wise (e.g. you can't load only subsets
of the data)
* Makes forwards/backwards compatibility very difficult
* Can make python 2/3 migrations harder
* Creates data files which are difficult to analyze/fix by hand if
they get broken
* Is schemaless, and can accidentally include irrelevant data you
didn't mean to store, making all of the above worse.
* Means you have to be very careful who wrote the pickles, or you open
a remote code execution vulnerability. It's common for people to
forget that code is unsafe, and get themselves pwned. Security is
always better if you don't do anything bad in the first place, than if
you do something bad but try to manage the context in which the bad
thing is done.

Cap'n Proto might be a decent alternative that gives you good
performance, by letting you process only the bits of the file you want
to. It is also not a walking security nightmare.

> 2. Should Watershed be an object or should it be a simple dictionary? I
> thought that an object could be nice, because it could take care of some
> operations such as plotting and logging. Currently I defined a class
> Watershed, but its attributes are defined and filled by A, B and C (this
> seems a bit wrong to me).

It is usually very confusing for attributes to be defined anywhere
other than __init__. It's really confusing for them to be defined
by some random other function living somewhere else.

> I could give more responsibilities to this class
> but it might become way too big: since the whole purpose of the tool is to
> work on watersheds, making a Watershed class actually sounds like a code
> smell (http://en.wikipedia.org/wiki/God_object)

Whether they are methods or not doesn't make this any more or less of
a god object -- if it stores all this data used by all these different
things, it is already a bit off.

> 3. The operation A opens an external file, reads data out of it and writes
> it in Watershed object. Is it a bad idea to multiprocess this? (I guess it
> is, since the file might be read twice at the same time)

That does sound like a bad idea, for the reason you gave. It might be
possible to read it once, and share it among many processes.
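
A sketch of the read-once approach, using a pool initializer (file name 
hypothetical; assumes the external data fits in memory):

import multiprocessing as mp

_data = None

def init(data):
    # runs once per worker process; stashes the shared bytes in module state
    global _data
    _data = data

def work(watershed_id):
    # uses _data instead of re-reading the external file
    return watershed_id, len(_data)

if __name__ == '__main__':
    with open('external_data.bin', 'rb') as f:   # hypothetical input file
        data = f.read()                          # read exactly once
    pool = mp.Pool(initializer=init, initargs=(data,))
    print(pool.map(work, range(4)))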

-- Devin


Pickle based workflow - looking for advice

2015-04-13 Thread Fabien

Folks,

I am writing a quite extensive piece of scientific software. Its 
workflow is quite easy to explain. The tool realizes series of 
operations on watersheds (such as mapping data on it, geostatistics and 
more). There are thousands of independent watersheds of different size, 
and the size determines the computing time spent on each of them.


Say I have the operations A, B, C and D. B and C are completely 
independent but they need A to be run first, D needs B and C, and so 
forth. Eventually the whole operations A, B, C and D will run once for 
all, but of course the whole development is an iterative process and I 
rerun all operations many times.


Currently my workflow is defined as follows:

Define a unique ID and file directory for each watershed, and define A 
and B:


def A(watershed_dir):
    # read some external data
    # do stuff
    # store the stuff in a Watershed object
    # save it
    f_pickle = os.path.join(watershed_dir, 'watershed.p')
    with open(f_pickle, 'wb') as f:
        pickle.dump(watershed, f)

def B(watershed_dir):
    f_pickle = os.path.join(watershed_dir, 'watershed.p')
    with open(f_pickle, 'rb') as f:
        watershed = pickle.load(f)
    # do new stuff
    # store it in watershed and save
    with open(f_pickle, 'wb') as f:
        pickle.dump(watershed, f)

So the watershed object is a data container which grows in content. The 
pickle that stores the info can reach a few Mb of size. I chose this 
strategy because A, B, C and D are independent, but they can share their 
results through the pickle. The functions have a single argument (the 
path to the working directory), which means that when I run the 
thousands of catchments I can use the multiprocessing pool:


import multiprocessing as mp
poolargs = [list of directories]
pool = mp.Pool()
poolout = pool.map(A, poolargs, chunksize=1)
poolout = pool.map(B, poolargs, chunksize=1)
etc.

I can easily choose to rerun just B without rerunning A. Reading and 
writing the pickles is real slow in comparison to the other stuff to 
do (running B or C on a single catchment can take seconds, for example).


Now, to my questions:
1. Does that seem reasonable?
2. Should Watershed be an object or should it be a simple dictionary? I 
thought that an object could be nice, because it could take care of some 
operations such as plotting and logging. Currently I defined a class 
Watershed, but its attributes are defined and filled by A, B and C (this 
seems a bit wrong to me). I could give more responsibilities to this 
class but it might become way too big: since the whole purpose of the 
tool is to work on watersheds, making a Watershed class actually sounds 
like a code smell (http://en.wikipedia.org/wiki/God_object)
3. The operation A opens an external file, reads data out of it and 
writes it in Watershed object. Is it a bad idea to multiprocess this? (I 
guess it is, since the file might be read twice at the same time)

4. Other comments you might have?

Sorry for the lengthy mail but thanks for any tip.

Fabien







Re: Anyone used snap.py for drawing a graph?

2015-04-13 Thread Jerry Hill
On Sun, Apr 12, 2015 at 10:29 PM, Pippo  wrote:
> Any guide on this?
>
> http://snap.stanford.edu/snappy/#download

Sure.  http://snap.stanford.edu/snappy/#docs


bottle.py app doesn't timeout properly ?

2015-04-13 Thread Yassine Chaouche
Hello,

I have written a script using bottle.py. The app works fine most of the time. 
Sometimes though, the server takes time to respond and the web browser 
eventually drops the connection to the server after a certain time (timeout), 
showing this page:

"""
Connection reset

The connection to the server was reset while the page was loading.

 This site may be temporarily unavailable or too busy. Try again in a few 
moments.
 If you can not load any pages, check your computer's network connection.
 If your computer or network is protected by a firewall or proxy, make sure 
that Firefox is permitted to access the web.

"""

In the app, I have set up faulthandler 
(https://pypi.python.org/pypi/faulthandler/2.4) to respond to 
SIGUSR1. When I trigger the signal (while the server is hung), here's the 
traceback I get. It shows precisely where my application is hanging:


Current thread 0x7fb1a0bfe700 (most recent call first):
  File "/usr/lib/python2.7/socket.py", line 447 in readline
  File "/usr/lib/python2.7/wsgiref/simple_server.py", line 116 in handle
  File "/usr/lib/python2.7/SocketServer.py", line 649 in __init__
  File "/usr/lib/python2.7/SocketServer.py", line 334 in finish_request
  File "/usr/lib/python2.7/SocketServer.py", line 321 in process_request
  File "/usr/lib/python2.7/SocketServer.py", line 295 in _handle_request_noblock
  File "/usr/lib/python2.7/SocketServer.py", line 238 in serve_forever
  File "/usr/local/lib/python2.7/dist-packages/infomaniak/bottle.py", line 2680 in run
  File "/usr/local/lib/python2.7/dist-packages/infomaniak/bottle.py", line 3048 in run
  File "/usr/local/lib/python2.7/dist-packages/infomaniak/server.py", line 69 in <module>
  File "/usr/bin/infomaniak", line 2 in <module>


Instead of the readline call timing out after a certain amount of time, it 
seems to get stuck there forever. When I strace the app, I find it stuck at 
the recvfrom system call:


root@audio-mon[10.10.10.82] ~ # service infomaniak status
infomaniak is running with pid 2149
root@audio-mon[10.10.10.82] ~ # strace -fp 2149
Process 2149 attached - interrupt to quit
recvfrom(7, ^C 
Process 2149 detached
root@audio-mon[10.10.10.82] ~ # 

It is stuck there for hours (7 hours now). 

How can I fix this? This behaviour is also hard to reproduce; I don't know 
what triggers it. The app was working fine for almost a month, no problem at 
all.
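
One thing that might be worth trying (an untested suggestion, not a confirmed 
fix): give sockets a global timeout before bottle.run(), so that a client 
that goes silent raises socket.timeout instead of parking the single-threaded 
wsgiref server in readline forever:

import socket

# any socket operation blocking for more than 30s (arbitrary value)
# now raises socket.timeout instead of hanging indefinitely
socket.setdefaulttimeout(30)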

Here's the code:

#!/usr/bin/env python
#-*- encoding=utf-8 -*-

# infomaniak.getFlux() and infomaniak.login()
import infomaniak
# to locate the HTML files
import pkg_resources
import logging
import ConfigParser
# to catch the requests.exceptions.ConnectionError exception
import requests
# to make the whole thing run
import bottle
# service infomaniak reload
import signal
# service infomaniak trace
import faulthandler

cp = ConfigParser.ConfigParser()
cp.read("/etc/infomaniak.conf")
PORT  = cp.getint("SERVER","PORT")
USER  = cp.get("LOGIN","USER")
PASS  = cp.get("LOGIN","PASS")
LOG_LEVEL = cp.get("LOG","LOG_LEVEL")

logging.basicConfig(filename='/var/log/infomaniak/main.log',
                    level=getattr(logging,LOG_LEVEL),
                    format='%(asctime)s [%(levelname)s] %(message)s')
logging.info("= START ==")
logging.info("Reading config file Ok.. Serving on port %s" % (PORT))

def reload_conf(signal,frame):
    logging.info("= RELOAD ==")
    logging.info("Reading config file Ok.. Serving on port %s" % (PORT))
    cp = ConfigParser.ConfigParser()
    cp.read("/etc/infomaniak.conf")
    LOG_LEVEL = cp.get("LOG","LOG_LEVEL")
    logging.getLogger().setLevel(getattr(logging,LOG_LEVEL))
    logging.info("LOG LEVEL %s" % (LOG_LEVEL))

signal.signal(signal.SIGUSR2,reload_conf)
faulthandler.register(signal.SIGUSR1,file=open("/var/log/infomaniak/main.log","a"))


@bottle.route("/<url:path>")
def get(url):
    # print "thing",url
    logging.info("--")
    # logging.info("[%s:%s] get %s" %
    #              (self.client_address[0],self.client_address[1],self.path))
    # MUST BE OF THE FORM :
    # https://statslive.infomaniak.com/radio/config/formatsflux.php/g3377s3i4402
    return infomaniak.getFlux(url.lstrip("/"),USER,PASS)

logged_in = False
attempts  = 1

while (logged_in == False):
    logging.info("login... (attempt %d)" % attempts)
    try:
        infomaniak.login(USER,PASS)
        logged_in = True
    except requests.exceptions.ConnectionError,e:
        logging.critical("Unable to connect to the infomaniak server")
        logging.critical(e)
        attempts += 1

logging.info("login OK (attempts: %d)" % attempts)
bottle.run(host='',port=PORT)
logging.info("= STOP ==")


Any ideas on how to debug this or try to reproduce it? Has anyone encountered 
similar behaviour?

Re: Excluding a few pawns from the game

2015-04-13 Thread Dave Angel

On 04/13/2015 07:30 AM, userque...@gmail.com wrote:

> I am writing a function in python, where the function excludes a list of pawns
> from the game. The condition for excluding the pawns is whether the pawn is
> listed in the database DBPawnBoardChart. Here is my code:
>
>    def _bring_bigchart_pawns(self, removed_list=set(), playing_amount=0):
>  chart_pawns = DBPawnBoardChart.query().fetch()
>  chart_pawns_filtered = []
>  for bigchart_pawn in chart_pawns:
>  pawn_number = bigchart_pawn.key.number()
>  db = DBPawn.bring_by_number(pawn_number)
>  if db is None:
>  chart_pawn.key.delete()
>  logging.error('DBPawnBoardChart entry is none for chart_pawn = %s' % pawn_number)
>  if pawn_number in chart_pawns:
>  chart_pawn.add(pawn_number)
>  else:
>  exclude_pawn_numbers = ['1,2,3']
>  chart_pawns_filtered.append(chart_pawn)
>  pawn_numbers = [x.key.number() for x in chart_pawns_filtered]
>  return pawn_numbers, chart_pawns, exclude_pawn_numbers
>
> If the pawn is listed in DBPawnBoardChart it should be added to the game or
> else it should be excluded. I am unable to exclude the pawns, the
> DBPawnBoardChart contains Boardnumber and Pawnnumber. What necessary changes
> should I make?



Looks to me like an indentation error.  About 9 of those lines probably 
need to be included in the for loop, so they have to be indented further in.
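
For illustration, roughly what the function looks like with the loop body 
actually inside the for loop (this sketch also unifies the 
chart_pawn/bigchart_pawn naming and pre-sets exclude_pawn_numbers so the 
final return can't hit an unbound name; whether ['1,2,3'] should be three 
separate numbers, and whether `pawn_number in chart_pawns` should test a 
list of numbers instead, is left to the OP):

def _bring_bigchart_pawns(self, removed_list=set(), playing_amount=0):
    chart_pawns = DBPawnBoardChart.query().fetch()
    chart_pawns_filtered = []
    exclude_pawn_numbers = []
    for bigchart_pawn in chart_pawns:
        pawn_number = bigchart_pawn.key.number()
        db = DBPawn.bring_by_number(pawn_number)
        if db is None:
            bigchart_pawn.key.delete()
            logging.error('DBPawnBoardChart entry is none for '
                          'chart_pawn = %s' % pawn_number)
        if pawn_number in chart_pawns:
            bigchart_pawn.add(pawn_number)
        else:
            exclude_pawn_numbers = ['1,2,3']
            chart_pawns_filtered.append(bigchart_pawn)
    pawn_numbers = [x.key.number() for x in chart_pawns_filtered]
    return pawn_numbers, chart_pawns, exclude_pawn_numbers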


--
DaveA


Excluding a few pawns from the game

2015-04-13 Thread userquery5
I am writing a function in python, where the function excludes a list of pawns 
from the game. The condition for excluding the pawns is whether the pawn is 
listed in the database DBPawnBoardChart. Here is my code:

  def _bring_bigchart_pawns(self, removed_list=set(), playing_amount=0):
chart_pawns = DBPawnBoardChart.query().fetch()
chart_pawns_filtered = []
for bigchart_pawn in chart_pawns:
pawn_number = bigchart_pawn.key.number()
db = DBPawn.bring_by_number(pawn_number)
if db is None:
chart_pawn.key.delete() 
logging.error('DBPawnBoardChart entry is none for chart_pawn = %s' % pawn_number)
if pawn_number in chart_pawns:
chart_pawn.add(pawn_number)
else:
exclude_pawn_numbers = ['1,2,3']
chart_pawns_filtered.append(chart_pawn)
pawn_numbers = [x.key.number() for x in chart_pawns_filtered]
return pawn_numbers, chart_pawns, exclude_pawn_numbers  

If the pawn is listed in DBPawnBoardChart it should be added to the game or 
else it should be excluded. I am unable to exclude the pawns, the 
DBPawnBoardChart contains Boardnumber and Pawnnumber. What necessary changes 
should I make?


U.K. Royal Mail MailMark web service from python?

2015-04-13 Thread loial
Anyone out there got any examples of calling the UK Royal Mail Mailmark web 
service from python?




Twisted 15.1 Release Announcement

2015-04-13 Thread HawkOwl
On behalf of Twisted Matrix Laboratories, I am honoured to announce the release 
of Twisted 15.1.0 -- just in time for the PyCon sprints!

This is not a big release, but does have some nice-to-haves:

- You can now install Twisted's optional dependencies more easily -- for example, 
`pip install twisted[tls]` installs Twisted with TLS support.
- twisted.web.static.File allows defining a custom resource for rendering 
forbidden pages.
- Twisted's MSN support is now deprecated.
- More documentation has been added on how Trial finds tests.
- ...and 26 other closed tickets containing bug fixes, feature enhancements, 
and documentation.

For more information, check the NEWS file (link provided below).

You can find the downloads at  (or 
alternatively ) . The NEWS file 
is also available at
.

Many thanks to everyone who had a part in this release - the supporters of the 
Twisted Software Foundation, the developers who contributed code as well as 
documentation, and all the people building great things with Twisted!

Twisted Regards,
Hawkie Owl




Re: installing error in python

2015-04-13 Thread Steven D'Aprano
On Monday 13 April 2015 15:38, Mahima Goyal wrote:

> error of corrupted file or directory is coming if i am installing
> python for 64 bit.

My sympathies. Did you have a question, or are you just sharing the bad 
news?



-- 
Steve



Re: find all multiplicands and multipliers for a number

2015-04-13 Thread Steven D'Aprano
On Monday 13 April 2015 15:25, Paul Rubin wrote:

> Dave Angel  writes:
>> But doesn't math.pow return a float?...
>> Or were you saying bignums bigger than a float can represent at all? 
>> Like:
> x = 2**1 -1  ...
> math.log2(x)
>> 1.0
> 
> Yes, exactly that.  Thus (not completely tested):
> 
> def isqrt(x):
>     def log2(x): return math.log(x,2)  # python 2 compatibility
>     if x < 1e9:
>         return int(math.ceil(math.sqrt(x)))
>     a,b = divmod(log2(x), 1.0)
>     c = int(a/2) - 10
>     d = (b/2 + a/2 - c + 0.001)
>     # now c+d = log2(x)+0.001, c is an integer, and
>     # d is a float between 10 and 11
>     s = 2**c * int(math.ceil(2**d))
>     return s
> 
> should return slightly above the integer square root of x.  This is just
> off the top of my head and maybe it can be tweaked a bit.  Or maybe it's
> stupid and there's an obvious better way to do it that I'm missing.

Check the archives: I started a thread last November titled "Challenge: 
optimizing isqrt" which is relevant. Also:

http://code.activestate.com/recipes/577821-integer-square-root-function/
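
For reference, the classic exact approach (integer-only Newton/Heron 
iteration, in the same spirit as that recipe; exact for arbitrarily large 
ints, no floats involved):

def isqrt(n):
    # largest r such that r*r <= n
    if n < 0:
        raise ValueError("isqrt of a negative number")
    x = n
    y = (x + 1) // 2
    while y < x:
        x = y
        y = (x + n // x) // 2
    return x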



-- 
Steve
