[issue46936] Fix grammar_grapher with the new forced directive

2022-03-07 Thread Luca


Change by Luca :


--
pull_requests: +29842
pull_request: https://github.com/python/cpython/pull/31719

___
Python tracker 
<https://bugs.python.org/issue46936>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46936] Fix grammar_grapher with the new forced directive

2022-03-06 Thread Luca


Change by Luca :


--
keywords: +patch
pull_requests: +29823
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/31704

___
Python tracker 
<https://bugs.python.org/issue46936>
___



[issue46936] Fix grammar_grapher with the new forced directive

2022-03-06 Thread Luca


New submission from Luca :

The grammar_grapher.py utility has not been updated since the introduction of 
the new "forced" directive ('&&') in the grammar (see 
https://github.com/python/cpython/pull/24292), and it fails to visualize the 
current Python grammar.
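For context, the forced directive '&&' requires the token that follows to match, 
hard-failing with a SyntaxError otherwise; it appears in rules along these lines 
(an illustrative fragment in the style of python.gram, quoted from memory, not 
copied from the actual grammar file):

```
else_block: 'else' &&':' b=block { b }
```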

--
components: Parser
messages: 414608
nosy: lucach, lys.nikolaou, pablogsal
priority: normal
severity: normal
status: open
title: Fix grammar_grapher with the new forced directive

___
Python tracker 
<https://bugs.python.org/issue46936>
___



[issue16535] json encoder unable to handle decimal

2022-03-03 Thread Luca Lesinigo


Change by Luca Lesinigo :


--
nosy: +luca.lesinigo

___
Python tracker 
<https://bugs.python.org/issue16535>
___



Re: Selection sort

2021-12-24 Thread Luca Anzilli
Hello

try this code

def selectionsort(arr):
    # le = len(arr)
    for b in range(0, len(arr) - 1):
        # pos = b
        for a in range(b + 1, len(arr)):
            if arr[b] > arr[a]:
                arr[b], arr[a] = arr[a], arr[b]
    return arr

arr=[3,5,9,8,2,6]
print(selectionsort(arr))
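For reference, here is a textbook selection sort that actually uses the `pos` 
placeholder the original code initialized: each pass finds the index of the 
smallest remaining element, then performs a single swap (a sketch, not the 
poster's code):

```python
def selection_sort(arr):
    # In each pass, locate the index of the smallest remaining element,
    # then swap it into place exactly once.
    for b in range(len(arr) - 1):
        pos = b
        for a in range(b + 1, len(arr)):
            if arr[a] < arr[pos]:
                pos = a
        arr[b], arr[pos] = arr[pos], arr[b]
    return arr

print(selection_sort([3, 5, 9, 8, 2, 6]))  # [2, 3, 5, 6, 8, 9]
```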

Il giorno ven 24 dic 2021 alle ore 15:50 Mats Wichmann 
ha scritto:

> On 12/24/21 07:22, vani arul wrote:
> > Hello,
> > I am trying write a code.Can some help me find the error in my code.
> > Thanks!
> >
> >
> > def selectionsort(arr):
> ># le=len(arr)
> > for b in range(0,len(arr)-1):
> > pos=b
> > for a in range(b+1,len(arr)-1):
> > if arr[b]>arr[a+1]:
> > arr[b],arr[a+1]=arr[a+1],arr[b]
> > return arr
> >
> > arr=[3,5,9,8,2,6]
> > print(selectionsort(arr))
> >
>
> Hint: what are you using 'pos' for?  A placeholder for (something) has
> an actual purpose in the typical selection-sort, but you only initialize
> it and never use or update it.
>
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue45969] When uninstalling Python under Windows the "Scripts" folders should be removed from the PATH environment variable

2021-12-02 Thread Luca


New submission from Luca :

This issue is related to:
  https://bugs.python.org/issue3561
and:
  https://bugs.python.org/issue45968

When installing Python under Windows, if the "Add Python 3.x to PATH" option is 
flagged, the following two folders are added to the PATH environment variable:
  %USERPROFILE%\AppData\Local\Programs\Python\Python3x\
  %USERPROFILE%\AppData\Local\Programs\Python\Python3x\Scripts\

However, when uninstalling Python, only the former folder is removed from the 
PATH environment variable, not the latter.

Moreover, if the following folder is also added to the PATH environment 
variable in the future (see issue #45968):
  %USERPROFILE%\AppData\Roaming\Python\Python3x\Scripts\
then the same would apply to it.

--
components: Installation, Windows
messages: 407551
nosy: lucatrv, paul.moore, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: When uninstalling Python under Windows the "Scripts" folders should be 
removed from the PATH environment variable
type: enhancement
versions: Python 3.10

___
Python tracker 
<https://bugs.python.org/issue45969>
___



[issue45968] Windows installer should add also "%USERPROFILE%\AppData\Roaming\Python\Python3x\Scripts" folder to the PATH environment variable

2021-12-02 Thread Luca


New submission from Luca :

This issue is related to https://bugs.python.org/issue3561

When installing Python under Windows, if the "Add Python 3.x to PATH" option is 
flagged, the following two folders are added to the PATH environment variable:
%USERPROFILE%\AppData\Local\Programs\Python\Python3x\
%USERPROFILE%\AppData\Local\Programs\Python\Python3x\Scripts\

However, the following folder should also be added; it is where scripts reside 
when packages are installed with "pip install --user":
%USERPROFILE%\AppData\Roaming\Python\Python3x\Scripts\

--
components: Installation, Windows
messages: 407550
nosy: lucatrv, paul.moore, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: Windows installer should add also 
"%USERPROFILE%\AppData\Roaming\Python\Python3x\Scripts" folder to the PATH 
environment variable
type: enhancement
versions: Python 3.10

___
Python tracker 
<https://bugs.python.org/issue45968>
___



[issue45463] Documentation inconsistency on the number of identifiers allowed in global stmt

2021-10-13 Thread Luca


New submission from Luca :

The global statement allows specifying a list of identifiers, as correctly 
reported in the "Simple statements" chapter: 
https://docs.python.org/3/reference/simple_stmts.html#the-global-statement.

Inconsistently, however, the "Execution model" chapter describes the global 
statement as if it only allowed one single name.

This can be addressed by pluralizing the word "name" in the appropriate places 
in the "Execution model" chapter.
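For reference, the construct both chapters describe accepts a comma-separated 
list of identifiers, as the "Simple statements" chapter documents:

```python
x = 0
y = 0

def update():
    # A single global statement may name several identifiers at once.
    global x, y
    x, y = 1, 2

update()
print(x, y)  # 1 2
```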

--
assignee: docs@python
components: Documentation
messages: 403864
nosy: docs@python, lucach
priority: normal
pull_requests: 27227
severity: normal
status: open
title: Documentation inconsistency on the number of identifiers allowed in 
global stmt

___
Python tracker 
<https://bugs.python.org/issue45463>
___



[issue44292] contextmanager + ExitStack.pop_all()

2021-06-02 Thread Luca Mattiello

New submission from Luca Mattiello :

Reading the contextlib documentation, one might assume the following to be 
functionally equivalent, when used in a with statement:

@contextlib.contextmanager
def managed_resource():
    resource = acquire()
    try:
        yield resource
    finally:
        resource.release()

class ManagedResource:
    def __init__(self):
        self.resource = acquire()
    def __enter__(self):
        return self.resource
    def __exit__(self, *args):
        self.resource.release()

However, the first version has a seemingly unexpected behavior when used in 
conjunction with an ExitStack, and pop_all().

with contextlib.ExitStack() as es:
  r = es.enter_context(managed_resource())
  es.pop_all()
  # Uh-oh, r gets released anyway

with contextlib.ExitStack() as es:
  r = es.enter_context(ManagedResource())
  es.pop_all()
  # Works as expected

I think the reason is 
https://docs.python.org/3/reference/expressions.html#yield-expressions, in 
particular

> Yield expressions are allowed anywhere in a try construct.
> If the generator is not resumed before it is finalized (by
> reaching a zero reference count or by being garbage collected),
> the generator-iterator’s close() method will be called,
> allowing any pending finally clauses to execute.

I guess this is working according to the specs, but I found it very 
counter-intuitive. Could we improve the documentation to point out this subtle 
difference?

Full repro:

import contextlib

@contextlib.contextmanager
def cm():
    print("acquire cm")
    try:
        yield 1
    finally:
        print("release cm")

class CM:
    def __init__(self):
        print("acquire CM")
    def __enter__(self):
        return 1
    def __exit__(self, *args):
        print("release CM")

def f1():
    with contextlib.ExitStack() as es:
        es.enter_context(cm())
        es.pop_all()

def f2():
    with contextlib.ExitStack() as es:
        es.enter_context(CM())
        es.pop_all()

f1()
f2()

Output:

acquire cm
release cm
acquire CM

--
assignee: docs@python
components: Documentation, Library (Lib)
messages: 394948
nosy: docs@python, lucae.mattiello
priority: normal
severity: normal
status: open
title: contextmanager + ExitStack.pop_all()
versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue44292>
___



[issue40633] json.dumps() should encode float number NaN to null

2020-12-29 Thread Luca Barba


Luca Barba  added the comment:

I agree with arjanstaring.

This implementation is not standard compliant and breaks interoperability with 
every ECMA-compliant JavaScript deserializer.

Technically it is awful, of course, but IMHO interoperability and 
standardization come before technical cleanliness.

Regarding standardization:

If you consider https://tools.ietf.org/html/rfc7159

there is no way to represent the literal "nan" with the grammar supplied in 
section 6; hence the Infinity and NaN values, as well as "nan", are forbidden.

Regarding interoperability:

If you consider 
http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-262.pdf

It is clearly stated in section 24.5.2 Note 4 that JSON.stringify produces null 
for Infinity and NaN

"Finite numbers are stringified as if by calling ToString(number). NaN and 
Infinity regardless of sign are represented as the String null"

It is clearly stated in section 24.5.1 that JSON.parse uses eval-like parsing 
as a reference for decoding. nan is not an allowed keyword at all. For 
interoperability, NaN could be accepted, but that is outside the JSON standard.

So what happens is that this will break all the ECMA-compliant parsers (i.e. 
browsers) in the world, which is what is happening to my project, by the way.

Pandas' serialization method (to_json) already works around this issue, but I 
really think the standard library should too.
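The behavior under discussion is easy to reproduce: json.dumps emits the 
non-standard literal NaN by default, and the existing allow_nan=False option 
only rejects such values rather than mapping them to null:

```python
import json

# Default: emits a literal that the RFC 7159 grammar cannot represent.
print(json.dumps(float("nan")))  # NaN

# Strict mode: refuses NaN/Infinity outright instead of emitting null.
try:
    json.dumps(float("nan"), allow_nan=False)
except ValueError as exc:
    print("rejected:", exc)

# Note: neither mode produces null, so matching JSON.stringify's
# behavior currently requires pre-processing the data.
```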

--
nosy: +alucab

___
Python tracker 
<https://bugs.python.org/issue40633>
___



Re: print small DataFrame to STDOUT and read it back into dataframe

2020-04-07 Thread Luca

On 4/6/2020 11:05 PM, Luca wrote:

On 4/6/2020 8:51 PM, Reto wrote:

out = df.to_csv(None)
new = pd.read_csv(io.StringIO(out), index_col=0)


Thank you, brother. It works



BTW, a little gotcha (I write this in case someone gets here in the 
future through Google or something)


"""
import pandas as pd
import numpy as np
import io
df = pd.DataFrame(10*np.random.randn(3,4))
df = df.astype(int)
out = df.to_csv(None)

# out == ',0,1,2,3\n0,9,4,-5,-2\n1,16,12,-1,-5\n2,-2,8,0,6\n'

new = pd.read_csv(io.StringIO(out), index_col=0)

#gotcha
type(df.iloc[1,1]) # numpy.int32
type(new.iloc[1,1]) # numpy.int64
"""

new == out will return a dataframe of False

0   1   2   3
0   False   False   False   False
1   False   False   False   False
2   False   False   False   False

Thanks again


Re: print small DataFrame to STDOUT and read it back into dataframe

2020-04-06 Thread Luca

On 4/6/2020 8:51 PM, Reto wrote:

out = df.to_csv(None)
new = pd.read_csv(io.StringIO(out), index_col=0)


Thank you, brother. It works



Re: print small DataFrame to STDOUT and read it back into dataframe

2020-04-06 Thread Luca

On 4/6/2020 3:03 PM, Christian Gollwitzer wrote:





CSV is the most sensible option here. It is widely supported by 
spreadsheets etc. and easily copy/pasteable.


Thank you Christian.

so, given a dataframe, how do I make it print itself out as CSV?

And given CSV data in my clipboard, how do I paste it into a Jupyter 
cell (possibly along with a line or two of code) that will create a 
dataframe out of it?





Re: print small DataFrame to STDOUT and read it back into dataframe

2020-04-06 Thread Luca

On 4/6/2020 4:08 AM, Reto wrote:

Does this help?


Thank you, but not really. What I am trying to achieve is a way 
to copy and paste small yet complete dataframes (which may be the result 
of previous calculations) between a document (TXT, Word, Google Doc) and 
Jupyter/IPython.


Did I make sense?

Thanks


print small DataFrame to STDOUT and read it back into dataframe

2020-04-04 Thread Luca



possibly a stupid question. Let's say I have a (small) dataframe:

import pandas as pd
dframe = pd.DataFrame({'A': ['a0','a1','a2','a3'],
                       'B': ['b0','b1','b2','b3'],
                       'C': ['c0','c1','c2','c3'],
                       'D': ['d0','d1','d2','d3']}, index=[0,1,2,3])

Is there a way that I can ask this dataframe to "print itself" in a way 
that I can copy that output and easily rebuild the original dataframe 
with index, columns and all?


dframe.to_string

gives:



Can I evaluate this string to obtain a new dataframe like the one that 
generated it?


Thanks


confused by matplotlib and subplots

2020-04-01 Thread Luca



Hello Covid fighters and dodgers,

I'm sort of confused by what I am seeing in a Pandas book.

This works:

fig = plt.figure()
ax1 = fig.add_subplot(2,2,1)
ax2 = fig.add_subplot(2,2,2)
ax3 = fig.add_subplot(2,2,3)
ax3.plot(np.random.randn(50).cumsum(), 'k--');

but also this works!

fig = plt.figure()
ax1 = fig.add_subplot(2,2,1)
ax2 = fig.add_subplot(2,2,2)
ax3 = fig.add_subplot(2,2,3)
plt.plot(np.random.randn(50).cumsum(), 'k--');

(the second one is actually the example in the book).

Why does it work? Isn't axX referring to one of the subplots and plt to 
the plot as a whole?


Thanks



[issue39809] argparse: add max_text_width parameter to ArgumentParser

2020-03-06 Thread Luca


Luca  added the comment:

The issue has been fixed in `typeshed`, so the following is now allowed:

width = min(80, shutil.get_terminal_size().columns - 2)
formatter_class = lambda prog: argparse.RawDescriptionHelpFormatter(prog, width=width)

https://github.com/python/typeshed/issues/3806#event-3104796040

--

___
Python tracker 
<https://bugs.python.org/issue39809>
___



[issue39809] argparse: add max_text_width parameter to ArgumentParser

2020-03-04 Thread Luca


Luca  added the comment:

I opened two issues regarding the mypy error; I understand it is going to be 
fixed in typeshed:

https://github.com/python/mypy/issues/8487

https://github.com/python/typeshed/issues/3806

--

___
Python tracker 
<https://bugs.python.org/issue39809>
___



[issue39809] argparse: add max_text_width parameter to ArgumentParser

2020-03-03 Thread Luca


Luca  added the comment:

I still think there should be an easier way to do this, though.

--

___
Python tracker 
<https://bugs.python.org/issue39809>
___



[issue39809] argparse: add max_text_width parameter to ArgumentParser

2020-03-02 Thread Luca


Luca  added the comment:

For the benefit of other developers wanting to control text width with 
`argparse`: using the lambda function makes `mypy` fail with the following error:

541: error: Argument "formatter_class" to "ArgumentParser" has incompatible 
type "Callable[[Any], RawDescriptionHelpFormatter]"; expected 
"Type[HelpFormatter]"

So I am reverting back to the following custom formatting class:

class RawDescriptionHelpFormatterMaxTextWidth80(argparse.RawDescriptionHelpFormatter):
    """Set maximum text width = 80."""

    def __init__(self, prog):
        width = min(80, shutil.get_terminal_size().columns - 2)
        argparse.RawDescriptionHelpFormatter.__init__(self, prog, width=width)
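For completeness, a custom formatter class like the one above is then passed as 
formatter_class; a minimal, self-contained usage sketch (the program name and 
the --verbose option are made up for illustration):

```python
import argparse
import shutil

class RawDescriptionHelpFormatterMaxTextWidth80(argparse.RawDescriptionHelpFormatter):
    """Cap the help text width at 80 columns (or narrower terminals)."""

    def __init__(self, prog):
        # Use at most 80 columns, shrinking further on narrow terminals.
        width = min(80, shutil.get_terminal_size().columns - 2)
        super().__init__(prog, width=width)

parser = argparse.ArgumentParser(
    prog="example",
    description="Demo of a width-capped help formatter.",
    formatter_class=RawDescriptionHelpFormatterMaxTextWidth80,
)
parser.add_argument("--verbose", action="store_true", help="enable verbose output")
print(parser.format_help())
```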

--

___
Python tracker 
<https://bugs.python.org/issue39809>
___



[issue39809] argparse: add max_text_width parameter to ArgumentParser

2020-03-02 Thread Luca


Luca  added the comment:

OK, I will do this for my package, but I do not believe many others will do the 
same without a `max_text_width` parameter (too much care is needed for this to 
work correctly), and as a result most packages using `argparse` will continue 
not to handle text width properly.

--

___
Python tracker 
<https://bugs.python.org/issue39809>
___



[issue39809] argparse: add max_text_width parameter to ArgumentParser

2020-03-02 Thread Luca


Luca  added the comment:

That lambda function would not give the same result, as it would constrain the 
text width to a fixed value, while my proposal would just set a maximum limit 
(if the terminal is narrower, the actual text width would be reduced).

With the current implementation, all possible solutions are in my opinion 
overkill for such a basic task. Maybe it is not a direct consequence of 
this, but as a matter of fact most Python scripts using `argparse` that I know 
of (including `pip`) do not control the text width, which IMHO is not good 
practice.

--

___
Python tracker 
<https://bugs.python.org/issue39809>
___



[issue39809] argparse: add max_text_width parameter to ArgumentParser

2020-03-01 Thread Luca


New submission from Luca :

It is often desirable to limit the help text width, for instance to 78 or 88 
columns, regardless of the actual size of the terminal window.

Currently you can achieve this in rather cumbersome ways, for instance by 
setting "os.environ['COLUMNS'] = '80'" (but this requires the "os" module, 
which may not be needed otherwise by your module, and may lead to other 
undesired effects), or by writing a custom formatting class. IMHO there should 
be a simpler option for such a basic task.

I propose to add a max_text_width parameter to ArgumentParser. This would 
require only minor code changes to argparse (see attached patch). Should I open 
a pull request on GitHub?

--
components: Library (Lib)
files: argparse_max_text_width.patch
keywords: patch
messages: 363053
nosy: lucatrv
priority: normal
severity: normal
status: open
title: argparse: add max_text_width parameter to ArgumentParser
type: enhancement
versions: Python 3.9
Added file: https://bugs.python.org/file48939/argparse_max_text_width.patch

___
Python tracker 
<https://bugs.python.org/issue39809>
___



What's the best forum to get help with Pandas?

2020-02-20 Thread Luca



subject has it all. Thanks


[issue39658] Include user scripts folder to PATH on Windows

2020-02-16 Thread Luca

New submission from Luca :

When installing Python on Windows, and selecting the option “Add Python to 
PATH”, the following folders are added to the "PATH" environment variable:
- C:\Users\[username]\AppData\Local\Programs\Python\Python38\Scripts\
- C:\Users\[username]\AppData\Local\Programs\Python\Python38\
However, the following folder should also be added, _before_ the other two:
- C:\Users\[username]\AppData\Roaming\Python\Python38\Scripts\
This is needed to correctly expose scripts of packages installed with `pip 
install --user` (`pip` emits a warning when installing a script with the 
`--user` flag if that folder is not in "PATH").

--
components: Installation
messages: 362108
nosy: lucatrv
priority: normal
severity: normal
status: open
title: Include user scripts folder to PATH on Windows

___
Python tracker 
<https://bugs.python.org/issue39658>
___



[issue39140] shutil.move does not work properly with pathlib.Path objects

2019-12-26 Thread Luca Paganin


New submission from Luca Paganin :

Suppose you have two pathlib objects representing source and destination of a 
move:

src=pathlib.Path("foo/bar/barbar/myfile.txt")
dst=pathlib.Path("foodst/bardst/")

If you try to do the following

shutil.move(src, dst)

Then an AttributeError will be raised, saying that PosixPath objects do not 
have an rstrip attribute. The error is the following:

Traceback (most recent call last):
  File "mover.py", line 10, in <module>
    shutil.move(src, dst)
  File "/Users/lucapaganin/opt/anaconda3/lib/python3.7/shutil.py", line 562, in move
    real_dst = os.path.join(dst, _basename(src))
  File "/Users/lucapaganin/opt/anaconda3/lib/python3.7/shutil.py", line 526, in _basename
    return os.path.basename(path.rstrip(sep))
AttributeError: 'PosixPath' object has no attribute 'rstrip'

Looking into the shutil code at line 526, I see that the problem happens when 
it tries to strip the trailing slash using rstrip, which is a method of strings 
that PosixPath objects do not have. Moreover, pathlib.Path objects already 
handle trailing slashes, correctly yielding basenames even when one is present.
The following two workarounds work:

1) Explicitly cast both src and dst to strings:

shutil.move(str(src), str(dst))

This works for both cases, whether or not dst contains the destination 
filename.

2) Add the filename to the end of the PosixPath dst object:

dst=pathlib.Path("foodst/bardst/myfile.txt")

Then do 

shutil.move(src, dst)

Surely one could use the pathlib.Path.replace method of PosixPath objects, 
which does the job without problems, although it requires dst to contain the 
destination filename and lacks generality, since it fails when one tries to 
move files between different filesystems.
I think shutil.move should be able to handle pathlib.Path objects even when the 
destination filename is not provided, since the source of the bug is a safety 
measure (stripping of the trailing slash) that is not necessary for 
pathlib.Path objects.
Do you think that is possible? Thank you in advance.

Luca Paganin

P.S.: I attach a tarball with the dirtree I used for the demonstration.

--
files: mover.tgz
messages: 358891
nosy: Luca Paganin
priority: normal
severity: normal
status: open
title: shutil.move does not work properly with pathlib.Path objects
type: behavior
versions: Python 3.7
Added file: https://bugs.python.org/file48803/mover.tgz

___
Python tracker 
<https://bugs.python.org/issue39140>
___



Re: variable exchange

2018-10-09 Thread Luca Bertolotti
Thanks Cameron
but my problem is that I can't access a line edit of the class Form from the 
class Form_cli; I get this error:
'Form_cli' object has no attribute 'lineEdit_29'


variable exchange

2018-10-09 Thread Luca Bertolotti
Hello, I'm using Python with PyQt but I have a problem with a variable.
I have two classes:

from PyQt5.QtCore import pyqtSlot
from PyQt5.QtWidgets import QWidget
from PyQt5.QtSql import QSqlDatabase, QSqlTableModel
from PyQt5 import QtWidgets
from PyQt5.QtCore import Qt

from Ui_form import Ui_Form
from form_cli import Form_cli


class Form(QWidget, Ui_Form):
    """
    Class documentation goes here.
    """
    def __init__(self, parent=None):
        """
        Constructor

        @param parent reference to the parent widget
        @type QWidget
        """
        super(Form, self).__init__(parent)
        self.setupUi(self)
        self.ftc = Form_cli()  # etc.


from PyQt5.QtCore import pyqtSlot, QModelIndex
from PyQt5.QtWidgets import QWidget, QTableWidgetItem, QLineEdit
from PyQt5.QtSql import QSqlTableModel
from PyQt5.QtSql import QSqlDatabase
from PyQt5.QtCore import Qt




from Ui_form_cli import Ui_Form



class Form_cli(QWidget, Ui_Form):
    """
    Class documentation goes here.
    """
    def __init__(self, parent=None):
        """
        Constructor

        @param parent reference to the parent widget
        @type QWidget
        """
        super(Form_cli, self).__init__(parent)
        self.setupUi(self)
        self.db = QSqlDatabase()
        self.tableWidget.setRowCount(1)
        self.tableWidget.setColumnCount(10)


From the class Form_cli, how can I modify a textedit that is in the class Form?

Thanks
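A common pattern (sketched here with plain stand-in classes rather than actual 
PyQt widgets; `owner`, `update_parent`, and `line_edit_text` are made-up names 
for illustration) is to hand the child form a reference to its parent, so the 
child can reach the parent's widgets:

```python
class FormCli:
    """Stand-in for the PyQt Form_cli widget."""

    def __init__(self, owner):
        # Keep a reference to the parent form instead of creating
        # an unrelated, disconnected instance.
        self.owner = owner

    def update_parent(self, text):
        # With real PyQt this would be:
        #     self.owner.lineEdit_29.setText(text)
        self.owner.line_edit_text = text

class Form:
    """Stand-in for the PyQt Form widget that owns the line edit."""

    def __init__(self):
        self.line_edit_text = ""        # stands in for a QLineEdit
        self.ftc = FormCli(owner=self)  # pass self down to the child form

form = Form()
form.ftc.update_parent("hello")
print(form.line_edit_text)  # hello
```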


[issue27334] pysqlite3 context manager not performing rollback when a database is locked elsewhere for non-DML statements

2016-06-16 Thread Luca Citi

New submission from Luca Citi:

I have reported this bug to the pysqlite module for python2 ( 
https://github.com/ghaering/pysqlite/issues/103 ) but I also report it here 
because it applies to python3 too.

The pysqlite3 context manager does not perform a rollback when a transaction 
fails because the database is locked by some other process performing non-DML 
statements (e.g. during the sqlite3 command line .dump method).

To reproduce the problem, open a terminal and run the following:

```bash
sqlite3 /tmp/test.db 'drop table person; create table person (id integer primary key, firstname varchar)'
echo -e 'begin transaction;\nselect * from person;\n.system sleep 1000\nrollback;' | sqlite3 /tmp/test.db
```

Leave this shell running and run the python3 interpreter from a different 
shell, then type:

```python
import sqlite3
con = sqlite3.connect('/tmp/test.db')
with con:
    con.execute("insert into person(firstname) values (?)", ("Jan",))
    pass
```

You should receive the following:

```
  1 with con:
  2     con.execute("insert into person(firstname) values (?)", ("Jan",))
> 3     pass
  4

OperationalError: database is locked
```

Without exiting python, switch back to the first shell and kill the `'echo ... 
| sqlite3'` process. Then run:

```bash
sqlite3 /tmp/test.db .dump
```

you should get:

```
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
/* ERROR: (5) database is locked */
ROLLBACK; -- due to errors
```

This means that the python process never executed a `rollback` and is still 
holding the lock. To release the lock one can exit python (clearly, this is not 
the intended behaviour of the context manager).

I believe the reason for this problem is that the exception happened in the 
implicit `commit` that is run on exiting the context manager, rather than 
inside it. In fact, the exception is on the `pass` line rather than the 
`execute` line. This exception did not trigger a `rollback` because it 
happened after `pysqlite_connection_exit` checks for exceptions.

The expected behaviour (pysqlite3 rolling back and releasing the lock) is 
recovered if the initial blocking process is a Data Modification Language (DML) 
statement, e.g.:

```bash
echo -e 'begin transaction; insert into person(firstname) values ("James");\n.system sleep 1000\nrollback;' | sqlite3 /tmp/test.db
```

because this raises an exception at `execute` time rather than at `commit` 
time.

To fix this problem, I think the `pysqlite_connection_exit` function in 
src/connection.c should handle the case when the commit itself raises an 
exception, and invoke a rollback. Please see patch attached.

--
components: Extension Modules
files: fix_pysqlite_connection_exit.patch
keywords: patch
messages: 268678
nosy: lciti
priority: normal
severity: normal
status: open
title: pysqlite3 context manager not performing rollback when a database is 
locked elsewhere for non-DML statements
versions: Python 3.6
Added file: http://bugs.python.org/file43420/fix_pysqlite_connection_exit.patch

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue27334>
___



Re: variable scope of class objects

2015-10-22 Thread Luca Menegotto

Il 20/10/2015 23:33, JonRob ha scritto:



Hello Luca,

I very much appreciated your comments.  And I understand the
importance of "doing something right"  (i.e. convention).

This leads me to another question.

Because I am interfacing with an I2C sensor, I have many register
definitions to include (30 register addresses, and 26 variables to be
read from some of those registers).
In your comment you mentioned that the convention is to declare variables
(and constants?) in the constructor (__init__).
I am concerned that the sheer number of variables/constants would
make it difficult to read.

In your opinion, what would be the best method to structure such code?

Regards
JonRob


Let's start with constants. Constants, in Python, simply don't exist 
(and IMHO this is one of the few shortcomings of Python). All you can do is 
declare a variable and treat it as a constant: you never change it!


It doesn't make sense to put a constant declaration at instance level by 
declaring it in the __init__ of a class. After all, a constant is 
information you want to share. The choice is up to you as the project 
manager: if you think your constant is deeply related to the class 
you're designing, declare it as a class variable; otherwise, declare it 
at global level (in this case, I often use a separate file dedicated to 
constant declarations).


--
Ciao!
Luca




Re: variable scope of class objects

2015-10-22 Thread Luca Menegotto

Maybe I've been too cryptic. I apologize.

Il 22/10/2015 01:35, JonRob ha scritto:

@Dennis,


Thanks for your example.  My structure is very similar.


And that's ok. But you can also 'attach' the constants to a class, if it 
makes sense. For example, the same code of Dennis can be written as:


class SensorA():
    GYROXREG = 0x0010
    GYROYREG = 0x0011
    GYROZREG = 0x0001
    _registers = [GYROXREG, GYROYREG, GYROZREG]

And then you can invoke those constants as:

SensorA.GYROXREG

to emphasize that they are significant to this class, and to this class 
only.



 Luca wrote...

Please, note that declaring a variable in the constructor is only a
convention: in Python you can add a variable to an object of a class
wherever you want in your code (even if it is very dangerous and
discouraged).


This is the cryptic part.
I mean: you can do this, and it's perfectly legal:

class A():
    def __init__(self):
        self.a = 10

if __name__ == '__main__':
    o = A()
    print(o.a)
    # this is a new member, added on the fly
    o.b = 20
    print(o.b)

but, for God's sake, use it only if you have a gun at your head!

--
Ciao!
Luca



Re: variable scope of class objects

2015-10-20 Thread Luca Menegotto

Il 19/10/2015 20:39, JonRob ha scritto:


I (think) I understand that in the below case, the word self could be
replaced with "BME280" to explicitly call out a variable.

But even still I don't know how explicit call out effects the scope of
a variable.


These two statements make me think you come from C++ or something similar.

In Python you can declare variables at class level, but this declaration 
must NOT be interpreted in the same way as a similar declaration in 
C++: they remain at the abstract level of the class, and they have nothing 
to do with an instance of the class (in fact, to be correctly invoked, 
they must be preceded by the class name).


'self' (or a similar representation, you could use 'this' without 
problem) gives you access to the instance of the class, even in the 
constructor; it is important, because the constructor is the place where 
instance variables should be defined. Something like this:


class foo:
# invoke with foo._imAtClassLevel
_imAtClassLevel = 10

def __init__(self):
#  need to say how this must be invoked?
self._imAtInstanceLevel = 0

no confusion is possible, because:

class foo2:
_variable = 1000

def __init__(self):
# let's initialize an instance variable with
# a class variable
self._variable = foo2._variable

Please, note that declaring a variable in the constructor is only a 
convention: in Python you can add a variable to an object of a class 
wherever you want in your code (even if it is very dangerous and 
discouraged).


--
Ciao!
Luca

--
https://mail.python.org/mailman/listinfo/python-list


Re: variable scope of class objects

2015-10-20 Thread Luca Menegotto

Il 20/10/2015 08:38, Nagy László Zsolt ha scritto:


When you say "they have nothing to do", it is almost true but not 100%.


I know it, but when it comes to eradicate an idea that comes directly 
from C++-like languages, you must be drastic.

Nuances come after...

--
Ciao!
Luca
--
https://mail.python.org/mailman/listinfo/python-list


Re: Check if a given value is out of certain range

2015-09-30 Thread Luca Menegotto

Il 29/09/2015 23:04, Random832 ha scritto:


How about x not in range(11)?



Remember: simpler is better.
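
For completeness, a small sketch contrasting the range() membership test with a chained comparison (the function names are made up); note that range(11) contains only integers, so the two differ on floats:

```python
def out_of_range(x):
    # the idiom above: integer membership test
    return x not in range(11)

def out_of_range_chained(x):
    # equivalent chained comparison; also correct for floats
    return not (0 <= x <= 10)

print(out_of_range(15), out_of_range(5))             # True False
print(out_of_range(5.5), out_of_range_chained(5.5))  # True False
```

5.5 lies inside [0, 10] but is not in range(11), so the membership test wrongly flags it as out of range.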

--
Ciao!
Luca

--
https://mail.python.org/mailman/listinfo/python-list


Re: Porting Python Application to a new linux machine

2015-09-03 Thread Luca Menegotto

Il 03/09/2015 16:32, Heli Nix ha scritto:


How can I do this in Linux ?


As far as I know, in general a Linux distro comes with Python already 
installed.

All you have to do is check if the installed version matches your needs.
Typically, you'll find Python 2.7; however, I know there are distros 
with Python 3.x as default (Fedora?)


--
Ciao!
Luca


--
https://mail.python.org/mailman/listinfo/python-list


Re: Porting Python Application to a new linux machine

2015-09-03 Thread Luca Menegotto

Il 03/09/2015 18:49, Chris Angelico ha scritto:


If you mean that typing "python" runs 2.7, then that's PEP 394 at
work. For compatibility reasons, 'python' doesn't ever run Python 3.


Please forgive me, I'll make it clearer.
I'm pretty sure that Ubuntu 15.04 comes with Python 2.7.
I don't remember if Python 3 was preinstalled or if I had to install it 
manually.


--
Ciao!
Luca


--
https://mail.python.org/mailman/listinfo/python-list


Re: continue vs. pass in this IO reading and writing

2015-09-03 Thread Luca Menegotto

Il 03/09/2015 17:05, kbtyo ha scritto:


 I am experimenting with many exception handling and utilizing
 continue vs pass. After pouring over a lot of material on SO
 and other forums I am still unclear as to the difference when
 setting variables and applying functions within multiple "for"
 loops.

'pass' and 'continue' have two very different meanings.

'pass' means 'don't do anything'; it's useful when you _have_ to put a 
statement but you _don't_need_ to put a statement.
You can use it everywhere you want, with no other damage than adding a 
little weight to your code.


A stupid example:

if i == 0:
   pass
else:
   do_something()


'continue', to be used in a loop (for or while), means 'ignore the rest 
of the code and go immediately to the next iteration'. The statement 
refers to the nearest loop; so, if you have two nested loops, it refers 
to the inner one; another stupid example:


for i in range(10):
for j in range(10):
if j < 5: continue
do_something(i, j) # called only if j >= 5
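
A runnable sketch of the same two behaviours, collecting values so the difference is visible:

```python
results = []
for j in range(6):
    if j < 3:
        continue      # skip the rest of this iteration
    results.append(j)
print(results)        # [3, 4, 5]

touched = []
for j in range(6):
    if j < 3:
        pass          # does nothing; execution falls through
    touched.append(j)
print(touched)        # [0, 1, 2, 3, 4, 5]
```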

--
Ciao!
Luca

--
https://mail.python.org/mailman/listinfo/python-list


Re: Porting Python Application to a new linux machine

2015-09-03 Thread Luca Menegotto

Il 03/09/2015 17:53, Nick Sarbicki ha scritto:

Is 3.x the default on ubuntu now? My 14.10 is still 2.7. Although it
does have python3 installed.


I've checked my Ubuntu 15.04, and the default is 2.7.9.
There is also Python3 (3.4.3), but sorry, I can't remember if I've 
manually installed it or not.


--
Ciao!
Luca

--
https://mail.python.org/mailman/listinfo/python-list


Re: Why Python is not both an interpreter and a compiler?

2015-08-31 Thread Luca Menegotto

Il 31/08/2015 19:48, Mahan Marwat ha scritto:


 If it hasn't been considered all that useful, then why
 are tools like cx_freeze and py2exe trying so hard?

Well, I consider those tools not useful at all!
I appreciate Python because, taking one or two precautions, I can easily 
port my code from one OS to another with no pain.

So, why should I lose this wonderful freedom?

--
Ciao!
Luca

--
https://mail.python.org/mailman/listinfo/python-list


Re: Flag definition collision

2015-08-11 Thread Luca Menegotto

Il 11/08/2015 08:28, smahab...@google.com ha scritto:

 I am importing two modules, each of which is defining flags
 (command line arguments) with the same name. This makes
 it impossible to import both the modules at once, because of flag
 name definition conflict.


If you use 'import', and not 'from xyz import', you avoid any conflict.

Ah: 'from ... import' has caused me a lot of terrible headaches. I don't 
use this statement if not strictly necessary.


--
Bye.
Luca

--
https://mail.python.org/mailman/listinfo/python-list


Re: Is it a newsgroup or a list?

2015-06-07 Thread Luca Menegotto

Il 07/06/2015 13:45, Steven D'Aprano ha scritto:


As far as I know, python-list a.k.a. comp.lang.python is the only one of the
Python mailing lists with an official newsgroup mirror.


OK. So let me rephrase: Thank God the list is mirrored to a newsgroup...

--
Ciao!
Luca


--
https://mail.python.org/mailman/listinfo/python-list


Re: Function to show time to execute another function

2015-06-07 Thread Luca Menegotto

Il 07/06/2015 10:22, Cecil Westerhof ha scritto:
 That only times the function. I explicitly mentioned I want both the
 needed time AND the output.

 Sadly the quality of the answers on this list is going down

First of all, thank God it's a newsgroup, not a list.

Second, often the quality of an answer is deeply connected to the 
quality of the question. I've read your question, and it is still not 
clear to me what you want.


If you want a function that returns a result and a time elapsed, why 
don't you pack the time and the result in a tuple, a list or a 
dictionary? it takes 20 seconds, if you are slow.


Else if you want a function that prints time and result, why don't you 
print result more or less in the same manner you do with the time 
elapsed? it takes 20 seconds, if you are slow.
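
A minimal sketch of the tuple-packing idea (the helper name `timed` is made up):

```python
import time

def timed(func, *args, **kwargs):
    # pack the function's result and the elapsed time in a tuple
    start = time.perf_counter()
    result = func(*args, **kwargs)
    return result, time.perf_counter() - start

value, elapsed = timed(sum, range(1000))
print(value)           # 499500
print(elapsed >= 0.0)  # True
```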


Else, can you explain exactly what do you want?


--
Ciao!
Luca

--
https://mail.python.org/mailman/listinfo/python-list


Re: Function to show time to execute another function

2015-06-07 Thread Luca Menegotto

Il 07/06/2015 11:28, Steven D'Aprano ha scritto:



But if your function takes less than, say, 1 millisecond, then your timing
results are probably just meaningless random numbers, affected more by the
other ten thousand processes running on your computer than by the Python
code itself.


That's a good point. IMHO testing the execution time of a function makes 
sense only if all these assertions are true:


- the function takes a long time to be executed;
- the execution is iterated a certain (large) number of times.

--
Ciao!
Luca

--
https://mail.python.org/mailman/listinfo/python-list


python paho mqtt thread

2015-05-21 Thread Luca Sanna
Hi,
I am trying to run this MQTT code but I cannot read the messages in the 
topic; can you help me?
Thanks

import paho.mqtt.client as mqtt  # imports needed by this snippet
from threading import Thread

class MyMQTTClass(Thread):
    #def __init__(self, clientid=None):
    clientid = None
    _mqttc = mqtt.Client(clientid)
    #_mqttc.on_message = mqtt_on_message
    #_mqttc.on_connect = mqtt_on_connect
    #_mqttc.on_publish = mqtt_on_publish
    #_mqttc.on_subscribe = mqtt_on_subscribe
    db = Mydata()

    def mqtt_on_connect(self, mqttc, obj, flags, rc):
        #print("rc: " + str(rc))
        pass

    def mqtt_on_message(self, mqttc, obj, msg):
        #print(msg.topic + " " + str(msg.qos) + " " + str(msg.payload))
        if msg.topic == "home/soggiorno/luce":
            if msg.payload == "ON":
                self.db.update_configure('power', 1, 1)
            else:
                self.db.update_configure('power', 0, 1)

    def mqtt_on_publish(self, mqttc, obj, mid):
        #print("mid: " + str(mid))
        pass

    def mqtt_on_subscribe(self, mqttc, obj, mid, granted_qos):
        #print("Subscribed: " + str(mid) + " " + str(granted_qos))
        pass

    def mqtt_on_log(self, mqttc, obj, level, string):
        #print(string)
        pass

    def run(self):
        logCritical("run")
        self._mqttc.on_message = self.mqtt_on_message
        rc = 0
        while rc == 0:
            self._mqttc.connect("192.168.1.60", 1883, 60)
            self._mqttc.subscribe("home/soggiorno/temperatura")
            self._mqttc.subscribe("home/soggiorno/umidita")
            self._mqttc.subscribe("home/soggiorno/luce")
            self.mqtt_on_message
            rc = self._mqttc.loop()
-- 
https://mail.python.org/mailman/listinfo/python-list


Let me introduce myself.

2015-04-30 Thread Luca Menegotto

Hello everybody.

One of the common rules I like most is: when you enter a community, 
introduce yourself!


So here I am! Luca, old developer (50 and still running!), Python (and 
not only) developer, from Marostica, a lovely small town in the 
north-eastern part of Italy.


It's a pleasure for me to notice that an NNTP newsgroup regarding Python 
is alive and kicking!


--
Ciao!
Luca Menegotto.
--
https://mail.python.org/mailman/listinfo/python-list


[issue22361] Ability to join() threads in concurrent.futures.ThreadPoolExecutor

2014-09-09 Thread Luca Falavigna

Luca Falavigna added the comment:

There is indeed little benefit in freeing up resources left open by an unused 
thread, but it could be worth closing it for specific needs (e.g. the thread 
processes sensitive information) or on embedded systems with very low resources.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22361
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue22361] Ability to join() threads in concurrent.futures.ThreadPoolExecutor

2014-09-08 Thread Luca Falavigna

New submission from Luca Falavigna:

I have a program which waits for external events (mostly pyinotify events), and 
when events occur a new worker is created using 
concurrent.futures.ThreadPoolExecutor. The following snippet represents shortly 
what my program does:

from time import sleep
from concurrent.futures import ThreadPoolExecutor

def func():
    print("start")
    sleep(10)
    print("stop")

ex = ThreadPoolExecutor(1)

# New workers will be scheduled when an event
# is triggered (i.e. pyinotify events)
ex.submit(func)

# Dummy sleep
sleep(60)

When func() is complete, I'd like the underlying thread to be terminated. I 
realize I could call ex.shutdown() to achieve this, but this would prevent me 
from adding new workers in case new events occur. Not calling ex.shutdown() 
leads to unfinished threads which pile up considerably:

(gdb) run test.py
Starting program: /usr/bin/python3.4-dbg test.py
[Thread debugging using libthread_db enabled]
[New Thread 0x7688e700 (LWP 17502)]
start
stop
^C
Program received signal SIGINT, Interrupt.
0x76e41963 in select () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) info threads
  Id   Target Id Frame
  2Thread 0x7688e700 (LWP 17502) python3.4-dbg 0x77bce420 in 
sem_wait () from /lib/x86_64-linux-gnu/libpthread.so.0
* 1Thread 0x77ff1700 (LWP 17501) python3.4-dbg 0x76e41963 in 
select () from /lib/x86_64-linux-gnu/libc.so.6
(gdb)

Would it be possible to add a new method (or a ThreadPoolExecutor option) which 
allows to join the underlying thread when the worker function returns?

--
components: Library (Lib)
messages: 226569
nosy: dktrkranz
priority: normal
severity: normal
status: open
title: Ability to join() threads in concurrent.futures.ThreadPoolExecutor
type: enhancement
versions: Python 3.4

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22361
___



[issue19023] ctypes docs: Unimplemented and undocumented features

2014-01-03 Thread Luca Faustin

Changes by Luca Faustin luca.faus...@ibttn.it:


--
nosy: +faustinl

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19023
___



Using sh library with grep command

2013-11-23 Thread Luca
I'm trying to use sh (https://pypi.python.org/pypi/sh) for calling the
system grep command but it's not working as expected.

An example:

import os
import sh
sh.grep('abc', os.getcwd(), '-r')

But I get an ErrorReturnCode_1 exception, which I learned corresponds to
the normal exit code for the grep command when it finds no match.

The error instance object reported that the command run is:

*** ErrorReturnCode_1:
  RAN: '/usr/bin/grep abc /Users/keul/test_sh'

Obviously manually running the command I get some output and exit code 0.

Where am I wrong?

-- 
-- luca

twitter: http://twitter.com/keul
linkedin: http://linkedin.com/in/lucafbb
blog: http://blog.keul.it/
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Using sh library with grep command

2013-11-23 Thread Luca Fabbri
On Sat, Nov 23, 2013 at 5:29 PM, Peter Otten __pete...@web.de wrote:
 Luca wrote:

 I'm trying to use sh (https://pypi.python.org/pypi/sh) for calling
 system grep command but it's now working as expected.

 An example:

 import sh
 sh.grep('abc', os.getcwd(), '-r')

 But I get the ErrorReturnCode_1: exception, that I learned is the
 normal exit code for grep command when it not found any match.

 The error instance object reported that the command run is:

 *** ErrorReturnCode_1:
   RAN: '/usr/bin/grep abc /Users/keul/test_sh'

 Obviously manually running the command I get some output and exit code 0.

 Where I'm wrong?

 Did you run grep with or without the -r option?

 The code sample and the error message don't match. Maybe you accidentally
 left out the -r in your actual code.


Sorry all, it was a stupid error and I provided a bad example.

I was running...
   sh.grep('"abc"', os.getcwd(), '-r')

...and the output of the command inside the exception was exactly...
RAN: '/usr/bin/grep "abc" /Users/keul/test_sh -r'

So, at first glance it was ok (copying/pasting it into the terminal
returned exactly what I expected).

The error? The double-quote inclusion.

Using this...
   sh.grep('abc', os.getcwd(), '-r')

...I get this output...
RAN: '/usr/bin/grep abc /Users/keul/test_sh -r'

But this time I get the expected result (both in the terminal and in the
Python environment). So it seems that when quoting, the RAN string looks
like a correct command when pasted into a shell, but the actual
execution is wrong (the quotes become part of the pattern).
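
The same effect can be reproduced without sh at all: any list-argv call (subprocess here) bypasses the shell just like sh does, so embedded double quotes become part of the pattern (this assumes a Unix grep on PATH):

```python
import os
import subprocess
import tempfile

# a throwaway file containing the text we grep for
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
    f.write('abc\n')
    path = f.name

# no shell involved: each argv item reaches grep verbatim
plain = subprocess.run(['grep', 'abc', path],
                       capture_output=True).returncode   # pattern abc
quoted = subprocess.run(['grep', '"abc"', path],
                        capture_output=True).returncode  # pattern "abc"
print(plain, quoted)  # 0 1 -- the quoted pattern matches nothing
os.unlink(path)
```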


-- 
-- luca

twitter: http://twitter.com/keul
linkedin: http://linkedin.com/in/lucafbb
blog: http://blog.keul.it/
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: What's the best way to extract 2 values from a CSV file from each row systematically?

2013-09-28 Thread Luca Cerone
 I'd really appreciate any suggestions or help, thanks in advance!

Hi Alex, if you know that you want only columns 3 and 5, you could also use a 
list comprehension to fetch the values:

import csv

with open('yourfile.csv', 'rU') as fo:
    # the 'rU' means read using Universal newlines
    cr = csv.reader(fo)
    # a list of tuples containing the values you need
    values_list = [(r[2], r[4]) for r in cr]
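
A self-contained variant of the same comprehension, using an in-memory file so it can be run as-is (the data is made up):

```python
import csv
import io

data = "a,b,c,d,e\n1,2,3,4,5\n6,7,8,9,10\n"
with io.StringIO(data) as fo:
    cr = csv.reader(fo)
    # grab columns 3 and 5 of every row, exactly as above
    values_list = [(r[2], r[4]) for r in cr]
print(values_list)  # [('c', 'e'), ('3', '5'), ('8', '10')]
```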

Cheers,
Luca
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Sphinx Doctest: test the code without comparing the output.

2013-09-23 Thread Luca Cerone
 It won't be very good documentation any more but nothing stops you
 from examining the result in the next doctest and making yourself
 happy about it.

 >>> x = input("indeterminate:")
 >>> result = "'{}'".format(x)
 >>> result.startswith("'") and result.endswith("'")
 True
 

Hi Neil, thanks for the hint, but this won't work.

The problem is that the function displays some output informing you of what 
steps are being performed (some of which are displayed by a 3rd party function 
that I don't control).

This output interferes with the output that should be checked by doctest.

For example, you can check that the following doctest would fail:

.. doctest:: example_fake

   >>> def myfun(x):
   ...     print "random output"
   ...     return x
   >>> myfun(10)
   10

When you run "make doctest" the test fails with this message:

File "tutorial.rst", line 11, in example_fake
Failed example:
    myfun(10)
Expected:
    10
Got:
    random output
    10

In this case (imagine that "random output" is really random) I cannot 
easily filter it, except by ignoring several lines. This would be quite 
easy if ellipsis and line continuation didn't have the same sequence of 
characters, but unfortunately this is not the case.

The method you proposed still is not applicable, because I have no way to use 
startswith() and endswith()...

The following code could do what I want if I could ignore the output...

   >>> def myfun(x):
   ...     print "random output"
   ...     return x
   >>> result = myfun(10)  # should ignore the output here!
   >>> print result
   10

fails with this message:

File "tutorial.rst", line 11, in example_fake
Failed example:
    result = myfun(10)
Expected nothing
Got:
    random output

(line 11 contains:  result = myfun(10))

A SKIP directive is not feasible either:

.. doctest:: example_fake

   >>> def myfun(x):
   ...     print "random output"
   ...     return x
   >>> result = myfun(10)  # doctest: +SKIP
   >>> result
   10

fails with this error message:
File "tutorial.rst", line 12, in example_fake
Failed example:
    result
Exception raised:
    Traceback (most recent call last):
      File "/usr/lib/python2.7/doctest.py", line 1289, in __run
        compileflags, 1) in test.globs
      File "<doctest example_fake[2]>", line 1, in <module>
        result
    NameError: name 'result' is not defined

As you can see, it is not that I want something too weird; it is just 
that sometimes you can't control what the function displays, and 
ignoring the output is a reasonable way to implement a doctest.

Hope these examples helped to understand better what my problem is.

Thanks all of you guys for the hints, suggestions and best practices :)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Sphinx Doctest: test the code without comparing the output.

2013-09-23 Thread Luca Cerone
I don't know why, but it seems that Google Groups stripped the indentation from 
the code. I just wanted to assure you that in the examples that I have run, 
the definition of myfun contained correctly indented code!

On Monday, 23 September 2013 15:45:43 UTC+1, Luca Cerone  wrote:
 [...]

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Sphinx Doctest: test the code without comparing the output.

2013-09-23 Thread Luca Cerone
 
 The docstring for doctest.DocTestRunner contains the example code
 I was looking for.

Thanks, I will give it a try!

 --
 Neil Cerutti
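
For the record, a sketch of what that docstring suggests: a custom OutputChecker wired into DocTestRunner. The IGNORE_OUTPUT directive name is made up (doctest does not ship it); the example passes as long as the statement raises no exception:

```python
import doctest

# hypothetical directive: registered here, not built into doctest
IGNORE_OUTPUT = doctest.register_optionflag('IGNORE_OUTPUT')

class IgnoreOutputChecker(doctest.OutputChecker):
    def check_output(self, want, got, optionflags):
        if optionflags & IGNORE_OUTPUT:
            return True  # any output is accepted
        return doctest.OutputChecker.check_output(self, want, got, optionflags)

src = '>>> print("random output")  # doctest: +IGNORE_OUTPUT\n'
test = doctest.DocTestParser().get_doctest(src, {}, 'ignore_demo', None, 0)
results = doctest.DocTestRunner(checker=IgnoreOutputChecker()).run(test)
print(results.failed)  # 0
```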

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Sphinx Doctest: test the code without comparing the output.

2013-09-22 Thread Luca Cerone
 This makes no sense. If you ignore the output, the code could do ANYTHING
 and the test would still pass. Raise an exception? Pass. SyntaxError?
 Pass. Print 99 bottles of beer? Pass.


If you try the commands, you can see that the tests fail. For example:

.. doctest::

   >>> raise Exception("test")

will fail with this message: 

File "utils.rst", line 5, in default
Failed example:
    raise Exception("test")
Exception raised:
    Traceback (most recent call last):
      File "/usr/lib/python2.7/doctest.py", line 1289, in __run
        compileflags, 1) in test.globs
      File "<doctest default[0]>", line 1, in <module>
        raise Exception("test")
    Exception: test

So to me this seems OK.. Print will fail as well...

 
 
 I have sometimes written unit tests that just check whether a function
 actually is callable:

 ignore = function(a, b, c)

 but I've come to the conclusion that is just a waste of time, since there
 are dozens of other tests that will fail if function isn't callable. But
 if you insist, you could always use that technique in your doctests:

 >>> ignore = function(a, b, c)

 If the function call raises, your doctest will fail, but if it returns
 something, anything, it will pass.
 
 

I understand your point, but now I am not writing unit tests to check the 
correctness of the code. I am only writing a tutorial and assuming that 
the code is correct. What I have to be sure of is that the code in the 
tutorial can be executed correctly, and some commands print verbose 
output which can change.

Writing ignore = function(a, b, c) is not enough either, because the 
function still prints messages on screen and this causes the test to 
fail...

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Sphinx Doctest: test the code without comparing the output.

2013-09-22 Thread Luca Cerone
On Sunday, 22 September 2013 14:39:07 UTC+1, Ned Batchelder  wrote:
 On 9/22/13 12:09 AM, Luca Cerone wrote:
  Hi Chris,
  actually my priority is to check that the code is correct. I changed
  the syntax during the development, and I want to be sure that my
  tutorial is up to date.

 If you do manage to ignore the output, how will you know that the syntax
 is correct?  The output for an incorrect syntax line will be an
 exception, which you'll ignore.

If the function raises an exception, the test fails regardless of the output.

 Maybe I don't know enough about the
 details of doctest.  It's always seemed incredibly limited to me.

I agree that it has some limitations.

 Essentially, it's as if you used unittest but the only assertion you're
 allowed to make is self.assertEqual(str(X), )

I don't know unittest; is it possible to use it within Sphinx?

 --Ned.

-- 
https://mail.python.org/mailman/listinfo/python-list


Sphinx Doctest: test the code without comparing the output.

2013-09-21 Thread Luca Cerone
Dear all,
I am writing the documentation for a Python package using Sphinx.

I have a problem when using doctest blocks in the documentation:
I couldn't manage to get doctest to run a command while completely
ignoring its output.

For example, how can I get a doctest like the following to run correctly?

.. doctest:: example_1

   >>> import random
   >>> x = random.uniform(0, 100)
   >>> print str(x)
   # some directive here to completely ignore the output

Since I don't know the value of `x`, ideally in this doctest I only want
to test that the various commands are correct, regardless of
the output produced.

I have tried using the ELLIPSIS directive, but the problem is that the `...` 
are interpreted as line continuation rather than `any text`:

.. doctest:: example_2

   >>> import random
   >>> x = random.uniform(0, 100)
   >>> print str(x)  # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
   ...

I don't know if there is a way to make Sphinx understand that I want to ignore 
the whole output. I think the easiest way to solve this would be 
differentiating between the ellipsis sequence and the line continuation 
sequence, but I don't know how to do that.

I know that I could skip the execution of print(str(x)), but this is not what I 
want; I really would like the command to be executed and the output ignored.
Can you point me to any solution for this issue?
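
One workaround for the continuation-line clash, sketched here with doctest's own API (Python 3 spelling): give the expected output a prefix, so the `...` is no longer the first thing on the line and ELLIPSIS can swallow the random part:

```python
import doctest

src = """
>>> import random
>>> x = random.uniform(0, 100)
>>> print('x =', x)  # doctest: +ELLIPSIS
x = ...
"""
test = doctest.DocTestParser().get_doctest(src, {}, 'ellipsis_demo', None, 0)
results = doctest.DocTestRunner().run(test)
print(results.failed)  # 0 -- "x = ..." is expected output, not a continuation
```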

Thanks a lot in advance for your help,
Cheers,
Luca
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Sphinx Doctest: test the code without comparing the output.

2013-09-21 Thread Luca Cerone
Dear Steven,
thanks for the help.

I am aware that I might have used the SKIP directive (as I hinted in my mail).
Even though the fine manual suggests doing so, I don't agree with it.
The reason is simple: SKIP, as the name suggests, causes the code not to be run 
at all; it doesn't ignore the output. If you use a SKIP directive on code that 
contains a typo, or maybe you changed the name of a keyword to make it more 
meaningful and forgot to update your docstring, then the error won't be caught.

For example:

.. doctest:: example

   >>> printt "Hello, World!"  # doctest: +SKIP
   Hello, World!

would pass the test. Since I am writing a tutorial for people that have even 
less experience than me with Python, I want to be sure that the code in my 
examples runs just fine.

 
 (There's no need to convert things to str before printing them.)


You are right, I modified an example that uses x in one of my functions that 
requires a string in input, and didn't change that.

Thanks again for the help anyway,

Cheers,
Luca
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Sphinx Doctest: test the code without comparing the output.

2013-09-21 Thread Luca Cerone
 And if you ignore the output, the error won't be caught either. What's
 the difference?

 >>> 1 + 1  # doctest:+IGNORE_OUTPUT  (not a real directive)
 1000


The difference is that in that case you want to check whether the result is 
correct or not, because you expect a certain result.

In my case, I don't know what the output is, nor do I care, for the purpose of 
the tutorial. What I care about is being sure that the command in the tutorial 
is correct and up to date with the code.

If you try the following, the test will fail (because there is a typo in the 
code):

.. doctest:: example

   >>> printt "hello, world"

and not because the output doesn't match what you expected.

Even if the command is correct:

.. doctest:: example_2

   >>> print "hello, world"

this test will fail because doctest expects an output. I want to be able to 
test that the syntax is correct and the command can be run, and ignore whatever 
the output is.

Don't think about the specific print example, I use this just to show what the 
problem is, which is not what I am writing a tutorial about!

 
 
 So you simply can't do what you want. You can't both ignore the output of
 a doctest and have doctest report if the test fails.


OK, maybe it is not possible using doctest. Is there any other way to do what I 
want? For example, using another Sphinx extension?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Sphinx Doctest: test the code without comparing the output.

2013-09-21 Thread Luca Cerone
 but if you're using this for a
 tutorial, you risk creating a breed of novice programmers who believe
 their first priority is to stop the program crashing. Smoke testing is

Hi Chris,
actually my priority is to check that the code is correct. I changed the syntax 
during the development, and I want to be sure that my tutorial is up to date.

The user will only see the examples that, after testing with doctest, will
run. They won't know that I used doctests for the documentation..

How can I do what you call smoke tests in my Sphinx documentation?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Sphinx Doctest: test the code without comparing the output.

2013-09-21 Thread Luca Cerone
 
 That is not how doctest works. That test fails because its output is:

OK... is there a tool by which I can test if my code runs, regardless of the output?

 
 The only wild-card output that doctest recognises is ellipsis, and like
 all wild-cards, can match too much if you aren't careful. If ellipsis is

Actually, I want to match the whole output... and you can't, because 
ellipsis is the same as line continuation...
 
 
 
 will work. But a better solution, I think, would be to pick a 

I think you are sticking too much to the examples I posted, where I used 
functions that are part of Python, so that everybody could run the code and 
test the issues.

I don't use random numbers, so I can't apply what you said.
Really, I am looking for a way to test the code while ignoring the output.

I don't know if it is usually a bad choice, but in my case it is what I want/need.

Thanks for the help,
Luca
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How to keep cookies when making http requests (Python 2.7)

2013-08-30 Thread Luca Cerone
Thanks Dieter,
 
 With respect to cookie handling, you do everything right.

 There may be other problems with the (wider) process.
 Analysing the responses of your requests (reading the status codes,
 the response headers and the response bodies) may provide hints
 towards the problem.


I will try to do that and try to see if I can figure out why.

 
 
  Do I misunderstand something in the process?

 Not with respect to cookie handling.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to keep cookies when making http requests (Python 2.7)

2013-08-27 Thread Luca Cerone
Dear all, 
first of all thanks for the help.
As for your remark, you are right: I usually tend to post questions in a 
way that is detached from the particular problem I have to solve.
In this case, since I only have a limited knowledge of the cookies mechanism 
(in general, not only in Python), I preferred to ask about the specific case.
I am sorry if I gave you the impression I didn't appreciate your answer; 
it was absolutely not my intention.

Cheers,
Luca
 Let me make an additional remark however: you should
 not expect to get complete details in a list like this - but only
 hints towards a solution for your problem (i.e.
 there remains some work for you).
 Thus, I expect you to read the cookielib/cookiejar documentation
 (part of Python's standard documentation) in order to understand
 my example code - before I would be ready to provide further details.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to keep cookies when making http requests (Python 2.7)

2013-08-27 Thread Luca Cerone
 
  Let me make an additional remark however: you should
  not expect to get complete details in a list like this - but only
  hints towards a solution for your problem (i.e. 
  there remains some work for you).
  Thus, I expect you to read the cookielib/cookiejar documentation
  (part of Python's standard documentation) in order to understand
  my example code - before I would be ready to provide further details.

Ok so after reading the documentation for urllib2 and cookielib I came up with 
the following code:

#START
from urllib2 import urlopen , Request
from cookielib import CookieJar
import re
regex = re.compile(r'span class=\'x\'\{(.*)\}span class=\'x\'')

base_url = "http://quiz.gambitresearch.com"
job_url  = base_url + "/job/"

cookies = CookieJar()
r = Request(base_url) #prepare the request object
cookies.add_cookie_header(r) #allow to have cookies
R = urlopen(r) #read the url
cookies.extract_cookies(R,r) #take the cookies from the response R and adds 
#them to the request object 

#build the new url
t = R.read()
v = str(eval(regex.findall(t)[0]))
job_url = job_url + v


# Here I create a new request to the url containing the email address
r2 = Request(job_url)
cookies.add_cookie_header(r2) #I prepare the request for cookies adding the 
#cookies that I extracted before.

#perform the request and print the page
R2 = urlopen(r2)
t2 = R2.read()
print job_url
print t2
#END

This still doesn't work, but I really can't understand why.
As far as I have understood first I have to instantiate a Request object
and allow it to receive and set cookies (I do this with r = Request() and 
cookies.add_cookie_header(r))
Next I perform the request (urlopen),  save the cookies in the CookieJar 
(cookies.extract_cookies(R,r)).

I evaluate the new address and I create a new Request for it (r2 = Request)
I add the cookies stored in the cookiejar in my new request 
(cookies.add_cookie_header(r2))
Then I perform the request (R2 = urlopen(r2)) and read the page (t2 = R2.read())
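For comparison, the usual way to get automatic cookie handling with urllib2 is to build an opener around HTTPCookieProcessor, rather than calling add_cookie_header/extract_cookies by hand. A minimal sketch (written with the Python 3 module names so it runs as-is; on 2.7 the modules are cookielib and urllib2, and the quiz URLs are shown only as commented-out placeholders):

```python
import http.cookiejar   # 'cookielib' on Python 2.7
import urllib.request   # 'urllib2' on Python 2.7

# One jar, one opener: every request made through this opener stores
# and resends cookies automatically.
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(jar))

# Hypothetical usage (network calls left commented out):
# first = opener.open('http://quiz.gambitresearch.com/')
# follow_up = opener.open('http://quiz.gambitresearch.com/job/...')
# The cookie set by the first response is resent with the second request.
print(len(jar))  # no requests made yet, so the jar is empty: 0
```

With this layout there is no manual bookkeeping step to get wrong: the opener runs the extract/add cycle on every request.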

What am I doing wrong? Do I misunderstand something in the process?

Thanks again in advance for the help,
Cheers,
Luca
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to keep cookies when making http requests (Python 2.7)

2013-08-21 Thread Luca Cerone
 
 I have used cookielib externally to urllib2. It looks
 
 like this:
 
 from urllib2 import urlopen, Request
 
 from cookielib import CookieJar
 cookies = CookieJar()
 
 
 
 r = Request(...)
 
 cookies.add_cookie_header(r) # set the cookies
 
 R = urlopen(r, ...) # make the request
 
 cookies.extract_cookies(R, r) # remember the new cookies

Hi Dieter,
thanks a lot for the help.
I am sorry but your code is not very clear to me.
It seems that you are setting some cookies,
but I can't understand how you use the ones that the site
sends to you when you perform the initial request.

Have you tried this code to check if this work?
If it works as intended can you explain a bit better
what it does exactly?

Thanks again!
Luca
-- 
http://mail.python.org/mailman/listinfo/python-list


How to keep cookies when making http requests (Python 2.7)

2013-08-20 Thread Luca Cerone
Hi everybody,
I am trying to write a simple Python script to solve the riddle at:
http://quiz.gambitresearch.com/

The quiz is quite easy to solve, one needs to evaluate the expression between 
the curly brackets (say that the expression has value val)
and go to the web page:

http://quiz.gambitresearch.com/job/val

You have to be fast enough, because the page has an associated cookie that 
expires 1 second after the first request, therefore you need to be quick to 
access the /job/val page.

[I know that this is the correct solution because with a friend we wrote a 
small script in JavaScript and could access the page with the email address]

As an exercise I have decided to try doing the same with Python.

First I have tried with the following code:

#START SCRIPT

import re
import urllib2

regex = re.compile(r'span class=\'x\'\{(.*)\}span class=\'x\'')
base_address = "http://quiz.gambitresearch.com/"
base_h = urllib2.urlopen(base_address)
base_page = base_h.read()

val = str(eval(regex.findall(base_page)[0]))

job_address = base_address + "job/" + val
job_h = urllib2.urlopen(job_address)
job_page = job_h.read()

print job_page
#END SCRIPT

job_page has the following content now: "WRONG! (Have you enabled cookies?)"

Trying to solve the issues with the cookies I found the requests module that 
in theory should work.
I therefore rewrote the above script to use request:

#START SCRIPT:
import re
import requests

regex = re.compile(r'span class=\'x\'\{(.*)\}span class=\'x\'')

base_address = "http://quiz.gambitresearch.com/"

s = requests.Session()
 
base_h = s.get('http://quiz.gambitresearch.com/')
base_page = base_h.text

val = eval( regex.findall( base_page )[0] )

job_address = base_address + "job/" + str(val)
job_h = s.get( job_address )
job_page = job_h.text

print job_page
#END SCRIPT
# print job_page produces Wrong!.

According to the manual, using Session() the cookies should be enabled and 
persist for the whole session. In fact the cookies in base_h.cookies and in 
job_h.cookies seem to be the same:

base_h.cookies == job_h.cookies
#returns True

So, why does this script fail to access the job page?
How can I change it so that it works as intended and job_page prints
the content of the page that displays the email address to use for the job 
applications?

Thanks a lot in advance for the help!

Best Wishes,
Luca
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Nested virtual environments

2013-08-16 Thread Luca Cerone
Thanks Marcel,
I will give it a try during the weekend and let you know if it worked for me :)
 
 If you have a recent version of pip, you can use wheels [1] to save built 
 packages locally. First create a new virtualenv and install the common 
 packages. Then put these packages in a wheel directory. Then, for any other 
 virtualenv that need the common packages, you can easily install them from 
 the wheel directory (this is fast even for numpy & friends, because nothing 
 will be compiled again) [2].
 
 
 
 # Create a new virtualenv
 
 virtualenv myenv
 
 source myenv/bin/activate
 # Install the wheel package
 pip install wheel
 # Install your common packages
 
 pip install numpy scipy matplotlib
 # Create a requirements file
 pip freeze > /local/requirements.txt
 # Create wheel for the common packages
 pip wheel --wheel-dir=/local/wheels -r /local/requirements.txt
 
 
 Now you have all the built packages saved to /local/wheels, ready to install 
 on any other environment. You can safely delete myenv. Test it with the 
 following:
 
 # Create a virtualenv for a new project
 
 virtualenv myproj
 source myproj/bin/activate
 # Install common packages from wheel
 pip install --use-wheel --no-index --find-links=/local/wheels -r 
 /local/requirements.txt
 
 
 
 
 
 
 
 
 
 
 
 
 
 [1] https://wheel.readthedocs.org
 [2] 
 http://www.pip-installer.org/en/latest/cookbook.html#building-and-installing-wheels
 
 
 
 
 
 
 2013/8/9 Luca Cerone luca@gmail.com
 
 Dear all, is there a way to nest virtual environments?
 
 
 
 I work on several different projects that involve Python programming.
 
 
 
 For a lot of this projects I have to use the same packages (e.g. numpy, 
 scipy, matplotlib and so on), while having to install packages that are 
 specific
 
 for each project.
 
 
 
 For each of this project I created a virtual environment (using virtualenv 
 --no-site-packages) and I had to reinstall the shared packages in each of 
 them.
 
 
 
 I was wondering if there is a way to nest a virtual environment into another,
 
 so that I can create a common virtual environment  that contains all the
 
 shared packages and then specialize the virtual environments installing the 
 packages specific for each project.
 
 
 
 In a way this is not conceptually different to using virtualenv 
 --system-site-packages, just instead of getting access to the system packages 
 a virtual environment should be able to access the packages of an other one.
 
 
 
 
 Thanks a lot in advance for the help,
 
 Luca
 
 --
 
 http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list


Nested virtual environments

2013-08-09 Thread Luca Cerone
Dear all, is there a way to nest virtual environments?

I work on several different projects that involve Python programming.

For a lot of these projects I have to use the same packages (e.g. numpy, scipy, 
matplotlib and so on), while having to install packages that are specific
for each project.

For each of these projects I created a virtual environment (using virtualenv 
--no-site-packages) and I had to reinstall the shared packages in each of them.

I was wondering if there is a way to nest a virtual environment into another,
so that I can create a common virtual environment  that contains all the
shared packages and then specialize the virtual environments installing the 
packages specific for each project.

In a way this is not conceptually different to using virtualenv 
--system-site-packages, just instead of getting access to the system packages a 
virtual environment should be able to access the packages of an other one.

Thanks a lot in advance for the help,
Luca
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Using Pool map with a method of a class and a list

2013-08-07 Thread Luca Cerone
Hi Joshua thanks!

 I think you might not understand what Chris said.
 Currently this does *not* work with Python 2.7 as you suggested it would.
  op = map(A.fun,l)

Yeah, actually that wouldn't work even in Python 3, since the value attribute 
used by fun has not been set.
It was my mistake in the example, but it is not the source of the problem..

 This, however, does:
  op = map(A(3).fun,l)
 
  op
 
 [1, 3, 9, 27, 81, 243, 729, 2187, 6561, 19683]
 
 

This works fine (and I knew that).. but it is not what I want...

You are using the map() function that comes with Python. I want
to use the map() method of the Pool class (available in the multiprocessing 
module).

And there are differences between map() and Pool.map() apparently, so that if 
something works fine with map() it may not work with Pool.map() (as in my case).

To correct my example:

from multiprocessing import Pool

class A(object):
    def __init__(self,x):
        self.value = x
    def fun(self,x):
        return self.value**x

l = range(100)
p = Pool(4)
op = p.map(A(3).fun, l)

This works in neither Python 2.7 nor 3.2 (by the way, I can't use Python 3 
for my application).

 You will find that 
 http://stackoverflow.com/questions/1816958/cant-pickle-type-instancemethod- 
  when-using-pythons-multiprocessing-pool-ma 
 explains the problem in more detail than I understand. I suggest 
 reading it and relaying further questions back to us. Or use Python 3 

:) Thanks, but of course I googled and found this link before posting. I don't 
understand much of the details as well, that's why I posted here.

Anyway, thanks for the attempt :)

Luca
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Using Pool map with a method of a class and a list

2013-08-07 Thread Luca Cerone
  doesn't work neither in Python 2.7, nor 3.2 (by the way I can't use Python 
  3 for my application).
  
 Are you using Windows? Over here on 3.3 on Linux it does. Not on 2.7 though.

No I am using Ubuntu (12.04, 64 bit).. maybe things changed from 3.2 to 3.3?
 
 from multiprocessing import Pool
 
 from functools import partial
 
 
 
 class A(object):
 
     def __init__(self,x):
 
         self.value = x
 
     def fun(self,x):
 
         return self.value**x
 
 
 
 def _getattr_proxy_partialable(instance, name, arg):
 
     return getattr(instance, name)(arg)
 
 
 
 def getattr_proxy(instance, name):
 
     """
 
     A version of getattr that returns a proxy function that can
 
     be pickled. Only function calls will work on the proxy.
 
     """
 
     return partial(_getattr_proxy_partialable, instance, name)
 
 
 
 l = range(100)
 
 p = Pool(4)
 
 op = p.map(getattr_proxy(A(3), "fun"), l)
 
 print(op)

I can't try it now, I'll let you know later if it works!
(Though just by reading I can't really understand what the code does).

Thanks for the help,
Luca
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Using Pool map with a method of a class and a list

2013-08-07 Thread Luca Cerone
Thanks for the post.
I actually don't know exactly what can and can't be pickled,
nor what partialing a function means.
Maybe you can link me to some resources?
 
I still can't understand all the details in your code :)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Using Pool map with a method of a class and a list

2013-08-07 Thread Luca Cerone
Thanks for the help Peter!

 
 
 
  def make_instancemethod(inst, methodname):
 
  return getattr(inst, methodname)
 
  
 
  This is just getattr -- you can replace the two uses of
 
  make_instancemethod with getattr and delete this ;).
 
 
 
 D'oh ;)

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Simulate `bash` behaviour using Python and named pipes.

2013-08-06 Thread Luca Cerone
 my_thread.join()

Thanks! I managed to make it work using the threading library :)

-- 
http://mail.python.org/mailman/listinfo/python-list


Using Pool map with a method of a class and a list

2013-08-06 Thread Luca Cerone
Hi guys,
I would like to apply the Pool.map method to a member of a class.

Here is a small example that shows what I would like to do:

from multiprocessing import Pool

class A(object):
    def __init__(self,x):
        self.value = x
    def fun(self,x):
        return self.value**x


l = range(10)

p = Pool(4)

op = p.map(A.fun,l)

#using this with the normal map doesn't cause any problem

This fails because it says that the methods can't be pickled.
(I assume it has something to do with the note in the documentation: 
"functionality within this package requires that the __main__ module be 
importable by the children", which is obscure to me).

I would like to understand two things: why my code fails and when I can expect 
it to fail? what is a possible workaround?

Thanks a lot in advance to everybody for the help!

Cheers,
Luca
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Using Pool map with a method of a class and a list

2013-08-06 Thread Luca Cerone
On Tuesday, 6 August 2013 18:12:26 UTC+1, Luca Cerone  wrote:
 Hi guys,
 
 I would like to apply the Pool.map method to a member of a class.
 
 
 
 Here is a small example that shows what I would like to do:
 
 
 
 from multiprocessing import Pool
 
 
 
 class A(object):
 
def __init__(self,x):
 
self.value = x
 
def fun(self,x):
 
return self.value**x
 
 
 
 
 
 l = range(10)
 
 
 
 p = Pool(4)
 
 
 
 op = p.map(A.fun,l)
 
 
 
 #using this with the normal map doesn't cause any problem
 
 
 
 This fails because it says that the methods can't be pickled.
 
 (I assume it has something to do with the note in the documentation: 
 functionality within this package requires that the __main__ module be 
 importable by the children., which is obscure to me).
 
 
 
 I would like to understand two things: why my code fails and when I can 
 expect it to fail? what is a possible workaround?
 
 
 
 Thanks a lot in advance to everybody for the help!
 
 
 
 Cheers,
 
 Luca



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Using Pool map with a method of a class and a list

2013-08-06 Thread Luca Cerone
Hi Chris, thanks

 Do you ever instantiate any A() objects? You're attempting to call an
 
 unbound method without passing it a 'self'.

I have tried a lot of variations, instantiating the object, creating lambda 
functions that use the unbound version of fun (A.fun.__func__) etc etc..
I have played around with it quite a bit before posting.

As far as I have understood, the problem is due to the fact that Pool pickles 
the function and copies it to the worker processes.. 
But since the methods cannot be pickled, this fails..

The same example I posted won't run in Python 3.2 either (I am mostly 
interested in a solution for Python 2.7, sorry I forgot to mention that).

Thanks in any case for the help, hopefully there will be some other advice in 
the ML :)

Cheers,
Luca
-- 
http://mail.python.org/mailman/listinfo/python-list


Simulate `bash` behaviour using Python and named pipes.

2013-08-05 Thread Luca Cerone
Hi everybody,
I am trying to understand how to use named pipes in python to launch external 
processes (in a Linux environment).

As an example I am trying to imitate the behaviour of the following sets of 
commands is bash:

 mkfifo named_pipe
 ls -lah > named_pipe &
 cat < named_pipe

In Python I have tried the following commands:

import os
import subprocess as sp

os.mkfifo("named_pipe",0777) #equivalent to mkfifo in bash..
fw = open("named_pipe",'w')
#at this point the system hangs...

My idea was to use subprocess.Popen and redirect stdout to fw...
next open named_pipe for reading and giving it as input to cat (still using 
Popen).

I know it is a simple (and rather stupid) example, but I can't manage to make 
it work..


How would you implement such simple scenario?

Thanks a lot in advance for the help!!!

Luca
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Simulate `bash` behaviour using Python and named pipes.

2013-08-05 Thread Luca Cerone
Hi Paul, first of all thanks for the help.

I am aware of the first solutions, just now I would like to experiment a bit 
with  using named pipes (I also know that the example is trivial, but it just 
to grasp the main concepts)

 
 You can also pass a file object to p1's stdout and p2's stdin if you want to 
 pipe via a file.
 
 
 with open(named_pipe, "rw") as named_pipe:
     p1 = subprocess.Popen(["ls", "-lah"], stdout=named_pipe)
     p2 = subprocess.Popen(["cat"], stdin=named_pipe)
 
     p1.wait()
     p2.wait()
  

Your second example doesn't work for me.. if named_file is not a file in the 
folder I get an error saying that there is not such a file.

If I create named_pipe as a named pipe using os.mkfifo(named_file,0777) than 
the code hangs.. I think it is because there is no process that reads the
content of the pipe, so the system waits for the pipe to be emptied.

Thanks a lot in advance for the help in any case.
Luca
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Simulate `bash` behaviour using Python and named pipes.

2013-08-05 Thread Luca Cerone
Hi MRAB, thanks for the reply!
 
 Opening the pipe for reading will block until it's also opened for
 
 writing, and vice versa.
 

OK.

 
 
 In your bash code, 'ls' blocked until you ran 'cat', but because you
 
 ran 'ls' in the background you didn't notice it!
 
 
Right.
 
 In your Python code, the Python thread blocked on opening the pipe for
 
 writing. It was waiting for another thread or process to open the pipe
 
 for reading.

OK. What you have written makes sense to me. So how do I overcome the block?
As you said, in bash I ran the ls process in the background. How do I do that 
in Python?

Thanks again for the help,
Luca
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Simulate `bash` behaviour using Python and named pipes.

2013-08-05 Thread Luca Cerone
Hi Alister,
 Are you sure you are using the correct tool for the task?

Yes. For two reasons: 1. I want to learn how to do this in Python :) 2. for an 
application I have in mind I will need to run external tools (not developed by 
me) and process the output using some tools that I have written in Python.

For technical reasons I can't use the subprocess.communicate() method (the 
output to process is very large), and due to a bug in the interactive shell I am 
using (https://github.com/ipython/ipython/issues/3884) I cannot pipe processes 
just using the standard subprocess.Popen() approach.

 
 I tend to find that in most cases if you are trying to execute bash 
 
 commands from Python you are doing it wrong.

As I said, the example in my question is just for learning purposes, I don't 
want to reproduce ls and cat in Python...

I just would like to learn how to handle named pipes in Python, which I find it 
easier to do by using a simple example that I am comfortable to use :)

Thanks in any case for your answer,
Luca
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Simulate `bash` behaviour using Python and named pipes.

2013-08-05 Thread Luca Cerone
Thanks MRAB,
 
 You need to ensure that the pipe is already open at the other end.

So I need to open the process that reads the pipe before writing in it?

 
 
 
 Why are you using a named pipe anyway?

For some bug in ipython (see my previous email) I can't use subprocess.Popen 
and pipe in the standard way.
One of Ipython developers has suggested me to use named pipes as a temporary 
workaround. So I am taking the occasion to learn :)

 
 
 If you're talking to another program, then that needs to be running
 
 already, waiting for the connection, at the point that you open the
 
 named pipe from this end.

I am not entirely sure I got this: ideally I would like to have a function that 
runs an external tool (the equivalent of ls in my example) redirecting its 
output in a named pipe.

A second function (the cat command in my example) would read the named_pipe, 
parse it and extract some features from the output.

I also would like that the named_pipe is deleted when the whole communication 
is ended.


 
 If you're using a pipe _within_ a program (a queue would be better),
 
 then you should opening for writing in one thread and for reading in
 
 another.

Let's stick with the pipe :) I will ask about the queue when I manage to use 
pipes ;)

I should have explained better that I have no idea how to run threads in Python 
:): how do I open a thread that executes "ls -lah" in the background and writes 
into a named pipe? And how do I open a thread that reads from the named pipe?

Can you please post a small example, so that I have something to work on?

Thanks a lot in advance for your help!

Luca
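A minimal sketch of the thread-based layout being asked for, assuming a Linux box: the writer thread plays the role of `ls -lah > named_pipe &`, the main thread plays `cat`, and the pipe is removed at the end. The helper name run_through_pipe is made up for the example.

```python
import os
import subprocess
import threading

def run_through_pipe(cmd, pipe_path):
    """Run cmd with stdout going into a named pipe, and read it back."""
    os.mkfifo(pipe_path, 0o777)
    try:
        def writer():
            # open() for writing blocks until a reader opens the other
            # end, which is why the writer needs its own thread.
            with open(pipe_path, 'w') as fw:
                subprocess.call(cmd, stdout=fw)

        t = threading.Thread(target=writer)
        t.start()
        # This open() blocks until the writer thread opens the pipe.
        with open(pipe_path) as fr:
            lines = [line.rstrip('\n') for line in fr]
        t.join()
        return lines
    finally:
        os.remove(pipe_path)

for line in run_through_pipe(['ls', '-lah'], '/tmp/example_pipe'):
    print(line)
```

The two open() calls rendezvous with each other, so neither side deadlocks as long as both ends are eventually opened.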
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Piping processes works with 'shell = True' but not otherwise.

2013-08-05 Thread Luca Cerone
Thanks, and what about Python 2.7?
 
 
 In Python 3.3 and above:
 
 
 
 p = subprocess.Popen(..., stderr=subprocess.DEVNULL)

P.s. sorry for the late reply, I discovered I don't receive notifications from 
google groups..
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Simulate `bash` behaviour using Python and named pipes.

2013-08-05 Thread Luca Cerone
Thanks, this works (if you add shell=True in Popen).
If I don't want to use shell = True, how can I redirect the stdout to 
named_pipe? Popen accepts an open file handle for stdout, which I can't open 
for writing because that blocks the process...

 
 
 os.mkfifo(named_pipe, 0777)
 
 ls_process = subprocess.Popen("ls -lah > named_pipe")
 
 pipe = open(named_pipe, "r")
 
 # Read the output of the subprocess from the pipe.
 
 
 
 When the subprocess terminates (look at the docs for Popen objects),
 
 close and delete the fifo.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Simulate `bash` behaviour using Python and named pipes.

2013-08-05 Thread Luca Cerone
 You're back to using separate threads for the reader and the writer.

And how do I create separate threads in Python? I was trying to use the 
threading library without too much success..
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Create a file in /etc/ as a non-root user

2013-05-31 Thread Luca Cerone
 fd = open('/etc/file','w')
 
 fd.write('jpdas')
 
 fd.close()
 
 
Hi Bibhu, that is not a Python problem, but a permission one.
You should configure the permissions so that you have write access to the 
folder.
However, unless you know what you are doing, it is discouraged to save your
file in the /etc/ folder.

I don't know if on Mac the commands are the same, but in Unix systems (which I 
guess Mac is) you can manage permissions with chmod.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Piping processes works with 'shell = True' but not otherwise.

2013-05-31 Thread Luca Cerone
 
 That's because stdin/stdout/stderr take file descriptors or file
 
 objects, not path strings.
 

Thanks Chris, how do I set the file descriptor to /dev/null then?
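Python 2.7 has no subprocess.DEVNULL, so the usual approach is to open os.devnull yourself and hand Popen the resulting file object. A hedged sketch (the helper name run_quiet is made up; this runs on both 2.7 and 3):

```python
import os
import subprocess

def run_quiet(cmd):
    # os.devnull is '/dev/null' on Linux; opening it gives Popen the
    # file object it expects for stderr.
    with open(os.devnull, 'wb') as devnull:
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=devnull)
        out, _ = p.communicate()
    return out

# stderr is discarded, stdout comes back as usual:
print(run_quiet(['sh', '-c', 'echo visible; echo hidden >&2']))
```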
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Piping processes works with 'shell = True' but not otherwise.

2013-05-27 Thread Luca Cerone
 
 
 Will it violate privacy / NDA to post the command line? Even if we
 
 can't actually replicate your system, we may be able to see something
 
 from the commands given.
 
 

Unfortunately yes..
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Piping processes works with 'shell = True' but not otherwise.

2013-05-26 Thread Luca Cerone
 
 Can you please help me understanding what's the difference between the two 
 cases? 
 

Hi guys has some of you ideas on what is causing my issue?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Piping processes works with 'shell = True' but not otherwise.

2013-05-26 Thread Luca Cerone
 Could you provide the *actual* commands you're using, rather than the generic 
 program1 and program2 placeholders? It's *very* common for people to get 
 the tokenization of a command line wrong (see the Note box in 
 http://docs.python.org/2/library/subprocess.html#subprocess.Popen for some 
 relevant advice).
 
Hi Chris, first of all thanks for the help. Unfortunately I can't provide the 
actual commands because are tools that are not publicly available.
I think I get the tokenization right, though.. the problem is not that the 
programs don't run.. it is just that sometimes I get that error..

Just to be clear I run the process like:

p = subprocess.Popen(['program1','--opt1','val1',...'--optn','valn'], ... the 
rest)

which I think is the right way to pass arguments (it works fine for other 
commands)..

 
 Could you provide the full  complete error message and exception traceback?
 
yes, as soon as I get to my work laptop..

 
 One obvious difference between the 2 approaches is that the shell doesn't 
 redirect the stderr streams of the programs, whereas you /are/ redirecting 
 the stderrs to stdout in the non-shell version of your code. But this is 
 unlikely to be causing the error you're currently seeing.
 
 
 You may also want to provide /dev/null as p1's stdin, out of an abundance of 
 caution.


I tried to redirect the input from /dev/null using the Popen argument
'stdin = os.path.devnull' (having imported os of course)..
But this seemed to cause even more troubles...
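Worth noting: os.path.devnull is just the path string '/dev/null', while Popen's stdin wants a file object or descriptor, which may be why that attempt misbehaved. A sketch of wiring stdin to the null device (cat is a stand-in command, purely illustrative):

```python
import os
import subprocess

# os.devnull / os.path.devnull are only path strings; Popen needs an
# actual open file for stdin, so open it first.
with open(os.devnull, 'rb') as devnull:
    p1 = subprocess.Popen(['cat'], stdin=devnull,
                          stdout=subprocess.PIPE,
                          stderr=subprocess.STDOUT)
    out, _ = p1.communicate()

print(repr(out))  # cat of /dev/null produces no output: b''
```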
 
 Lastly, you may want to consider using a wrapper library such as 
 http://plumbum.readthedocs.org/en/latest/ , which makes it easier to do 
 pipelining and other such fancy things with subprocesses, while still 
 avoiding the many perils of the shell.
 
 
Thanks, I didn't know this library, I'll give it a try.
Though I forgot to mention that I was using the subprocess module because I 
want the code to be portable (even though for now it is OK if it only works 
on Unix platforms).

Thanks a lot for your help,
Cheers,
Luca
-- 
http://mail.python.org/mailman/listinfo/python-list


Piping processes works with 'shell = True' but not otherwise.

2013-05-24 Thread Luca Cerone
Hi everybody, 
I am new to the group (and relatively new to Python)
so I am sorry if this issues has been discussed (although searching for topics 
in the group I couldn't find a solution to my problem).

I am using Python 2.7.3 to analyse the output of two 3rd parties programs that 
can be launched in a linux shell as:

 program1 | program2

To do this I have written a function that pipes program1 and program2 (using 
subprocess.Popen) and the stdout of the subprocess, and a function that parses 
the output:

A basic example:

from subprocess import Popen, STDOUT, PIPE
def run():
  p1 = Popen(['program1'], stdout = PIPE, stderr = STDOUT)
  p2 = Popen(['program2'], stdin = p1.stdout, stdout = PIPE, stderr = STDOUT)
  p1.stdout.close()
  return p2.stdout


def parse(out):
  parsed_output = []
  for row in out:
    print row
    #do something else with each line
    parsed_output.append(row)
  out.close()
  return parsed_output


# main block here

pout = run()

parsed = parse(pout)

#--- END OF PROGRAM #

I want to parse the output of 'program1 | program2' line by line because the 
output is very large.

When running the code above, occasionally some error occurs (IOError: [Errno 
0]). However this error doesn't occur if I code the run() function as:

def run():
  p = Popen('program1 | program2', shell = True, stderr = STDOUT, stdout = PIPE)
  return p.stdout

I really can't understand why the first version causes errors, while the second 
one doesn't.

Can you please help me understanding what's the difference between the two 
cases? 

Thanks a lot in advance for the help,
Cheers, Luca
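As an aside, the no-shell pattern from the first run() can be exercised with stand-in commands (printf and sort here, purely illustrative) to see the p1.stdout.close() handoff in isolation:

```python
from subprocess import Popen, PIPE

def run_pipeline(cmd1, cmd2):
    # Equivalent of "cmd1 | cmd2" without shell=True.
    p1 = Popen(cmd1, stdout=PIPE)
    p2 = Popen(cmd2, stdin=p1.stdout, stdout=PIPE)
    # Close our copy of p1's stdout so p1 gets SIGPIPE if p2 exits early.
    p1.stdout.close()
    return p2.stdout

out = run_pipeline(['printf', 'b\\na\\n'], ['sort'])
for row in out:
    print(row.decode().rstrip())  # prints "a" then "b"
out.close()
```

The close() on the parent's copy of p1.stdout matters: without it, cmd1 never sees a broken pipe when cmd2 finishes first, and lingering descriptors can produce odd I/O errors.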
-- 
http://mail.python.org/mailman/listinfo/python-list


[issue17527] PATCH as valid request method in wsgiref.validator

2013-03-23 Thread Luca Sbardella

New submission from Luca Sbardella:

http://tools.ietf.org/html/rfc5789

--
components: Library (Lib)
files: validate.patch
keywords: patch
messages: 185031
nosy: lsbardel
priority: normal
severity: normal
status: open
title: PATCH as valid request method in wsgiref.validator
type: behavior
versions: Python 3.4
Added file: http://bugs.python.org/file29552/validate.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue17527
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Transparent Proxy and Redirecting Sockets

2013-02-21 Thread Luca Bongiorni
Hi all,
Around I have found plenty useful sources about TCP transparent proxies. 
However I am still missing how to make socket redirection.

What I would like to do is:

host_A -- PROXY -- host_B
  ^
  |
host_C --

At the beginning the proxy is simply forwarding the data between A and B.
Subsequently, when a parser catches the right pattern, the proxy quits the 
communication between A and B and redirects all the traffic to host_C.

I would be pleased if someone would suggest me some resources or hints.

Thank you :)
Cheers,
Luca

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Transparent Proxy and Redirecting Sockets

2013-02-21 Thread Luca Bongiorni
2013/2/21 Rodrick Brown rodrick.br...@gmail.com

 On Thu, Feb 21, 2013 at 10:24 AM, Luca Bongiorni bongi...@gmail.comwrote:

 Hi all,
 Around I have found plenty useful sources about TCP transparent proxies.
 However I am still missing how to make socket redirection.

 What I would like to do is:

 host_A -- PROXY -- host_B
   ^
   |
 host_C --

 At the beginning the proxy is simply forwarding the data between A and B.
 Subsequently, when a parser catches the right pattern, the proxy quit the
 communication between A and B and redirect all the traffic to the host_C.

 I would be pleased if someone would suggest me some resources or hints.


 Are you looking for a Python way of doing this? I would highly recommend
 taking a look at HAProxy, as it's very robust, simple, and fast. If you're
 looking to implement this in Python code, you may want to use a framework
 like Twisted - http://twistedmatrix.com/trac/wiki/TwistedProject

 Twisted provides much functionality that you can leverage to accomplish
 this task.


Thank you for the hint. I will start to delve on it right now.
Cheers,
Luca




 Thank you :)
 Cheers,
 Luca






ping in bluetooth

2012-11-02 Thread Luca Sanna
hi,
How do I send a ping over Bluetooth?
Android phones are not always visible, and
I cannot find a ping command.
thanks 
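One option, assuming a Linux host with BlueZ installed: shell out to its l2ping tool, which pings at the L2CAP level and usually reaches phones even when they are not discoverable (it typically needs root). The device address below is a placeholder.

```python
import subprocess

def l2ping_cmd(addr, count=3, timeout=5):
    """Build the BlueZ l2ping command line: -c = ping count, -t = timeout."""
    return ["l2ping", "-c", str(count), "-t", str(timeout), addr]

def bt_ping(addr):
    """Return True if the device answered, False if it did not answer
    or if l2ping is not installed."""
    try:
        return subprocess.call(l2ping_cmd(addr)) == 0
    except OSError:
        return False

# bt_ping("00:11:22:33:44:55")  # placeholder address
```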


error bluetooth

2012-10-05 Thread Luca Sanna
The following code produces an error on Ubuntu:

from bluetooth import *

target_name = "My Phone"
target_address = None

nearby_devices = discover_devices()

for address in nearby_devices:
    if target_name == lookup_name(address):
        target_address = address
        break

if target_address is not None:
    print "found target bluetooth device with address", target_address
else:
    print "could not find target bluetooth device nearby"

The error:

luca@luca-XPS-M1330:~/py-temperature/py-temperature$ python bluetooth.py
Traceback (most recent call last):
  File bluetooth.py, line 14, in module
from bluetooth import *
  File /home/luca/py-temperature/py-temperature/bluetooth.py, line 19, in 
module
nearby_devices = discover_devices()
NameError: name 'discover_devices' is not defined
luca@luca-XPS-M1330:~/py-temperature/py-temperature$ 

Is this a bug in the module? Thanks


Re: error bluetooth

2012-10-05 Thread Luca Sanna
On Friday, 5 October 2012 at 13:33:14 UTC+2, Hans Mulder wrote:
 On 5/10/12 10:51:42, Luca Sanna wrote:

  from bluetooth import *

 [..]

  luca@luca-XPS-M1330:~/py-temperature/py-temperature$ python bluetooth.py

 When you say "from bluetooth import *", Python will find a file
 named bluetooth.py and import stuff from that file.  Since your
 script happens to be named bluetooth.py, Python will import
 your script, thinking it is a module.

  it's a bug of the module?

 You've chosen the wrong file name.  Rename your script.

 Hope this helps,

 -- HansM


I'm sorry, it works now that I have renamed the file to bt.py.

How do I send a ping over Bluetooth?
Android phones are not always visible, and
I cannot find a ping command.
thanks
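The diagnosis above (a script shadowing the library it imports) is common enough that a quick check helps. Below is a small sketch for modern Python 3; on the Python 2 shown in the traceback, the same idea works via imp.find_module. The function name is made up for illustration.

```python
import importlib.util
import os

def shadowed_by_cwd(module_name):
    """Return True if `module_name` would be imported from the current
    working directory, the classic symptom of a script shadowing a library."""
    spec = importlib.util.find_spec(module_name)
    if spec is None or not spec.origin or not os.path.isabs(spec.origin):
        return False  # missing, built-in, or frozen module
    return os.path.dirname(os.path.abspath(spec.origin)) == os.getcwd()

# A stdlib module should never be shadowed by your working directory:
print(shadowed_by_cwd("json"))
```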


calendar from python to html

2012-10-05 Thread Luca Sanna
hi,

I want to embed a calendar in an HTML page.
In each calendar day I enter a time that is used by the program to perform 
actions with Python.

What can I use to do this?

thanks
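The stdlib already covers the rendering half of this: calendar.HTMLCalendar emits a month as an HTML table that you can embed in a page and decorate (for example, by inserting the per-day times into the cells yourself). A minimal sketch:

```python
import calendar

def month_table(year, month):
    """Render one month as an HTML <table> (weeks start on Monday)."""
    return calendar.HTMLCalendar(calendar.MONDAY).formatmonth(year, month)

html = month_table(2012, 10)
print(html[:80])  # beginning of the generated <table> markup
```

The per-day scheduling logic would still have to be wired up separately (e.g. by post-processing the table cells or subclassing HTMLCalendar.formatday).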


[issue15372] Python is missing alternative for common quoting character

2012-07-16 Thread Luca Fabbri

New submission from Luca Fabbri luca...@gmail.com:

Using the unicodedata.decomposition function on characters like \u201c and 
\u201d, I didn't get back the classic quote character (").

This is a very common error when text is taken from Microsoft Word (where, in 
the Italian language, a pair of quoting characters in a sentence like "foo" is 
automatically changed to “foo”).

--
components: Unicode
messages: 165630
nosy: ezio.melotti, keul
priority: normal
severity: normal
status: open
title: Python is missing alternative for common quoting character
type: behavior
versions: Python 2.7

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue15372
___
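What the report observes is by design: U+201C and U+201D carry no decomposition mapping in the Unicode character database, so decomposition() returns an empty string rather than a straight quote. The usual workaround is a small explicit mapping; the table below is a minimal, non-exhaustive sketch.

```python
import unicodedata

# These characters have no canonical or compatibility decomposition:
assert unicodedata.decomposition('\u201c') == ''
assert unicodedata.decomposition('\u201d') == ''

# Hand-rolled mapping for the common typographic quotes (extend as needed):
QUOTES = {'\u201c': '"', '\u201d': '"', '\u2018': "'", '\u2019': "'"}

def straighten(text):
    """Replace typographic quotes with their ASCII equivalents."""
    return ''.join(QUOTES.get(ch, ch) for ch in text)

print(straighten('\u201cfoo\u201d'))  # "foo"
```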



[issue12840] maintainer value clear the author value when register

2011-08-25 Thread Luca Fabbri

New submission from Luca Fabbri luca...@gmail.com:

I reported this problem in the PyPI site issue tracker (issue 3396924):
https://sourceforge.net/tracker/?func=detailatid=513503aid=3396924group_id=66150

However, it seems that it is a Python bug.

If in one package's setup.py I provide maintainer (with email) and author (with 
email), after python setup.py register ... upload I get a new package 
where I see the maintainer as the creator.

If I manually fix it through the PyPI user interface everything works, so it 
seems that this is only a bug in the register procedure.

--
assignee: tarek
components: Distutils
messages: 142959
nosy: eric.araujo, keul, tarek
priority: normal
severity: normal
status: open
title: maintainer value clear the author value when register
type: behavior
versions: Python 2.6

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue12840
___
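For reference, a minimal setup.py-style sketch of the metadata in the report (all names and emails are made up). A plausible cause, to be confirmed against the distutils source, is that the register command collapses author and maintainer into a single contact field, preferring the maintainer.

```python
# Hypothetical setup.py metadata reproducing the report's scenario
# (on Python 2.6 this would be passed to distutils.core.setup(**metadata))

metadata = dict(
    name="example-pkg",
    version="0.1",
    author="Alice Author",
    author_email="alice@example.com",
    maintainer="Bob Maintainer",         # after `register`, PyPI showed the
    maintainer_email="bob@example.com",  # maintainer as the package creator
)
```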



[issue12840] maintainer value clear the author value when register

2011-08-25 Thread Luca Fabbri

Luca Fabbri luca...@gmail.com added the comment:

I'm quite sure that after getting ownership of an already existing package 
(with a different author inside) and adding the maintainer (with my name), I 
got the author overridden. But maybe I don't remember exactly...

Isn't it simpler (to understand) to always keep both?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue12840
___


