Re: Odd version scheme

2015-02-12 Thread Ian Kelly
On Thu, Feb 12, 2015 at 11:58 AM, MRAB pyt...@mrabarnett.plus.com wrote:
 On 2015-02-12 17:35, Ian Kelly wrote:

 On Thu, Feb 12, 2015 at 10:19 AM, Skip Montanaro
 skip.montan...@gmail.com wrote:

 I believe this sort of lexicographical comparison wart is one of the
 reasons
 the Python-dev gang decided that there would be no micro versions > 9.
 There
 are too many similar assumptions about version numbers out in the real
 world.


 It still becomes an issue when we get to Python 10.

 Just call it Python X! :-)

Things break down again when we get to Python XIX.

>>> 'XVIII' < 'XIX'
False
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python discussed in Nature

2015-02-12 Thread Chris Angelico
On Fri, Feb 13, 2015 at 5:29 AM, John Ladasky
john_lada...@sbcglobal.net wrote:
 It works fine, at least on my Ubuntu Linux system (and what scientist doesn't 
 use Linux?).  I also have special mathematical symbols, superscripted 
 numbers, etc. in my program comments.  It's easier to read 2x³ + 3x² than 
 2*x**3 + 3*x**2.

 I am teaching someone Python who is having a few problems with Unicode on his 
 Windows 7 machine.  It would appear that Windows shipped with a 
 less-than-complete Unicode font for its command shell.  But that's not 
 Python's fault.


Yes, Windows's default terminal/console does have issues. If all your
text is staying within the BMP, you may be able to run it within IDLE
to get somewhat better results; or PowerShell may help. But as you
say, that's not Python's fault.

Fortunately, it's not difficult to write a GUI program that
manipulates Unicode text, or something that works entirely with files
and leaves the display up to something else (maybe a good text editor,
or a web browser). All your internals are working perfectly, it's just
the human interface that's a bit harder. And only on flawed/broken
platforms.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Async/Concurrent HTTP Requests

2015-02-12 Thread Paul Rubin
Marko Rauhamaa ma...@pacujo.net writes:
 I have successfully done event-driven I/O using select.epoll() and
 socket.socket().

Sure, but then you end up writing a lot of low-level machinery that
packages like twisted take care of for you.
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue22524] PEP 471 implementation: os.scandir() directory scanning function

2015-02-12 Thread STINNER Victor

STINNER Victor added the comment:

bench_scandir.py: dummy benchmark to compare listdir+stat vs scandir+is_dir.

os.scandir() is always slower than os.listdir() on tmpfs and ext4 partitions of 
a local hard drive.

I will try with NFS.
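
(For context, a minimal sketch of the two code paths being compared; this is not
the attached bench_scandir.py, and it assumes the PEP 471 API as proposed for
Python 3.5, i.e. os.scandir() returning DirEntry objects.)

    # listdir()+stat() pays one extra syscall per entry; scandir()+is_dir() can
    # often answer from the d_type returned by readdir().
    import os
    import stat
    import time

    def listdir_isdir(path):
        return sum(
            stat.S_ISDIR(os.lstat(os.path.join(path, name)).st_mode)
            for name in os.listdir(path))

    def scandir_isdir(path):
        return sum(
            entry.is_dir(follow_symlinks=False)
            for entry in os.scandir(path))

    if __name__ == '__main__':
        for func in (listdir_isdir, scandir_isdir):
            start = time.perf_counter()
            func('.')
            print('%s: %.1f ms' % (func.__name__,
                                   (time.perf_counter() - start) * 1e3))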


Results with scandir-5.patch on Fedora 21 (Linux).

--- Using /home/haypo (ext4, hard drive) ---

Test listdir+stat vs scandir+is_dir
Temporary directory: /home/haypo/tmpji8uviyl
Create 10 files+symlinks...
Create 1 directories...
# entries: 21
Benchmark...
listdir: 2187.3 ms
scandir: 1047.2 ms
listdir: 494.4 ms
scandir: 1048.1 ms
listdir: 493.0 ms
scandir: 1042.6 ms

Result:
listdir: min=493.0 ms (2.3 us per file), max=2187.3 ms (10.4 us per file)
scandir: min=1042.6 ms (5.0 us per file), max=1048.1 ms (5.0 us per file)
scandir is between 0.5x and 2.1x faster


--- Using /tmp (tmpfs, full in memory) ---

Test listdir+stat vs scandir+is_dir
Temporary directory: /tmp/tmp6_zk3mqo
Create 10 files+symlinks...
Create 1 directories...
# entries: 21
Benchmark...
listdir: 405.4 ms
scandir: 1001.3 ms
listdir: 403.3 ms
scandir: 1024.2 ms
listdir: 408.1 ms
scandir: 1013.5 ms

Remove the temporary directory...

Result:
listdir: min=403.3 ms (1.9 us per file), max=408.1 ms (1.9 us per file)
scandir: min=1001.3 ms (4.8 us per file), max=1024.2 ms (4.9 us per file)
scandir is between 0.4x and 0.4x faster

--
Added file: http://bugs.python.org/file38114/bench_scandir.py

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22524
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Python discussed in Nature

2015-02-12 Thread John Ladasky
On Thursday, February 12, 2015 at 3:08:10 AM UTC-8, Fabien wrote:

 ... what a coincidence then that a huge majority of scientists 
 (including me) don't care AT ALL about unicode. But since scientists are 
 not paid to rewrite old code, the scientific world is still stuck to 
 python 2.

I'm a scientist.  I'm a happy Python 3 user who migrated from Python 2 about 
two years ago.

And I use Unicode in my Python.  In implementing some mathematical models which 
have variables like delta, gamma, and theta, I decided that I didn't like the 
line lengths I was getting with such variable names.  I'm using δ, γ, and θ 
instead.  It works fine, at least on my Ubuntu Linux system (and what scientist 
doesn't use Linux?).  I also have special mathematical symbols, superscripted 
numbers, etc. in my program comments.  It's easier to read 2x³ + 3x² than 
2*x**3 + 3*x**2.

I am teaching someone Python who is having a few problems with Unicode on his 
Windows 7 machine.  It would appear that Windows shipped with a 
less-than-complete Unicode font for its command shell.  But that's not Python's 
fault.
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue23146] Inconsistency in pathlib between / and \

2015-02-12 Thread Antoine Pitrou

Antoine Pitrou added the comment:

Yes, this is a bug indeed. A patch would be welcome ;-)

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23146
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue20503] super behaviour and abstract base classes (either implementation or documentation/error message is wrong)

2015-02-12 Thread eryksun

eryksun added the comment:

Given super(cls, obj), cls needs to be somewhere in type(obj).__mro__. Thus the 
implementation checks PyType_IsSubtype instead of the more generic 
PyObject_IsSubclass. 

In this case int's MRO is unrelated to numbers.Number:

>>> print(*int.__mro__, sep='\n')
<class 'int'>
<class 'object'>

It gets registered as a subclass via numbers.Integral.register(int).

>>> print(*numbers.Integral._abc_registry)
<class 'int'>

issubclass calls PyObject_IsSubclass, which uses the __subclasscheck__ API. In 
this case ABCMeta.__subclasscheck__ recursively checks the registry and caches 
the result to speed up future checks.

>>> numbers.Number.__subclasscheck__(int)
True
>>> print(*numbers.Number._abc_cache)
<class 'int'>
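
As a short illustration of the distinction (a sketch, not taken from the issue):
issubclass() goes through ABCMeta.__subclasscheck__ and accepts the registered
virtual subclass, while super() insists on the real MRO:

>>> import numbers
>>> issubclass(int, numbers.Number)   # virtual subclass via register()
True
>>> numbers.Number in int.__mro__     # but not in the real MRO
False
>>> super(numbers.Number, 1)
Traceback (most recent call last):
  ...
TypeError: super(type, obj): obj must be an instance or subtype of type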

--
nosy: +eryksun

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue20503
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Async/Concurrent HTTP Requests

2015-02-12 Thread Paul Rubin
Ari King ari.brandeis.k...@gmail.com writes:
 I'd like to query two (or more) RESTful APIs concurrently. What is the
 pythonic way of doing so? Is it better to use built-in functions or
 third-party packages? Thanks.

The two basic approaches are event-based asynchronous i/o (there are
various packages for that) and threads.  There are holy wars over which
is better.  Event-driven i/o in Python 2.x was generally done with
callback-based packages like Twisted Matrix (www.twistedmatrix.com).  In
Python 3 there are some nicer mechanisms (coroutines) so the new asyncio
package may be easier to use than Twisted.  I haven't tried it yet.
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue9698] When reusing a handler, urllib(2)'s digest authentication fails after multiple negative replies

2015-02-12 Thread Demian Brecht

Changes by Demian Brecht demianbre...@gmail.com:


--
nosy: +demian.brecht

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9698
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23455] file iterator deemed broken; can resume after StopIteration

2015-02-12 Thread Andrew Dalke

New submission from Andrew Dalke:

The file iterator is deemed broken. As I don't think it should be made 
non-broken, I suggest the documentation should be changed to point out when 
file iteration is broken. I also think the term 'broken' is a label with 
needlessly harsh connotations and should be softened.

The iterator documentation uses the term 'broken' like this (quoting here from 
https://docs.python.org/3.4/library/stdtypes.html):

  Once an iterator’s __next__() method raises StopIteration,
  it must continue to do so on subsequent calls. Implementations
  that do not obey this property are deemed broken.

(Older versions comment "This constraint was added in Python 2.3; in Python 
2.2, various iterators are broken according to this rule.")

An IOBase is supposed to support the iterator protocol (says 
https://docs.python.org/3.4/library/io.html#io.IOBase ). However, it does not 
always obey that constraint, nor does the documentation say that it's broken in 
the face of a changing file (eg, when another process appends to a log file).

  % ./python.exe 
  Python 3.5.0a1+ (default:4883f9046b10, Feb 11 2015, 04:30:46) 
  [GCC 4.8.4] on darwin
  Type "help", "copyright", "credits" or "license" for more information.
  >>> f = open("empty")
  >>> next(f)
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  StopIteration
  
  >>> ^Z
  Suspended
  % echo Hello! >> empty
  % fg
  ./python.exe

  >>> next(f)
  'Hello!\n'

This is apparently well-known behavior, as I've come across several references 
to it on various Python-related lists, including this one from Miles in 2008: 
https://mail.python.org/pipermail/python-list/2008-September/491920.html .

  Strictly speaking, file objects are broken iterators:

Fredrik Lundh in the same thread ( 
https://mail.python.org/pipermail/python-list/2008-September/521090.html ) says:

  it's a design guideline, not an absolute rule

The 7+ years of 'broken' behavior in Python suggests that /F is correct. But 
while 'broken' could be considered a meaningless label, it carries with it some 
rather negative connotations. It sounds like developers are supposed to make 
every effort to avoid broken code, when that's not something Python itself 
does. It also means that my code can be called broken solely because it 
assumed Python file iterators are non-broken. I am not happy when people say my 
code is broken.

It is entirely reasonable that a seek(0) would reset the state and cause 
next(it) to not continue to raise a StopIteration exception. However, errors 
can arise when using Python file objects, as an iterator, to parse a log file 
or any other files which are appended to by another process.

Here's an example of code that can break. It extracts the first and last 
elements of an iterator; more specifically, the first and last lines of a file. 
If there are no lines it returns None for both values; and if there's only one 
line then it returns the same line as both values.

  def get_first_and_last_elements(it):
      first = last = next(it, None)
      for last in it:
          pass
      return first, last

This code expects a non-broken iterator. If passed a file, and the file were 1) 
initially empty when the next() was called, and 2) appended to by the time 
Python reaches the for loop, then it's possible for the first value to be None 
while the last is a string.

This is unexpected, undocumented, and may lead to subtle errors.

There are work-arounds, like ensuring that the StopIteration only occurs once:

  def get_first_and_last_elements(it):
      first = last = next(it, None)
      if last is not None:
          for last in it:
              pass
      return first, last

but much existing code expects non-broken iterators, such as the Python example 
implementation at 
https://docs.python.org/2/library/itertools.html#itertools.dropwhile . (I have 
a reproducible failure using it, a fork(), and a file iterator with a sleep() 
if that would prove useful.)

Another option is to have a wrapper around file object iterators to keep 
raising StopIteration, like:

   def safe_iter(it):
       yield from it

   # -or-  (line for line in file_iter)

but people need to know to do this with file iterators or other potentially 
broken iterators. The current documentation does not say when file iterators 
are broken, and I don't know which other iterators are also broken.

I realize this is a tricky issue.

I don't think it's possible now to change the file's StopIteration behavior. I 
expect that there is code which depends on the current brokenness, the ability 
to seek() and re-iterate is useful, and the idea that next() returns text if 
and only if readline() is not empty is useful and well-entrenched. Pypy has the 
same behavior as CPython so any change will take some time to propagate to the 
other implementations.

Instead, I'm fine with a documentation change in io.html . It currently says:

  IOBase (and its subclasses) support the iterator protocol,
  meaning that an IOBase object can be iterated over yielding
 

Re: Odd version scheme

2015-02-12 Thread Mark Lawrence

On 12/02/2015 19:16, Ian Kelly wrote:

On Thu, Feb 12, 2015 at 11:58 AM, MRAB pyt...@mrabarnett.plus.com wrote:

On 2015-02-12 17:35, Ian Kelly wrote:


On Thu, Feb 12, 2015 at 10:19 AM, Skip Montanaro
skip.montan...@gmail.com wrote:


I believe this sort of lexicographical comparison wart is one of the
reasons
the Python-dev gang decided that there would be no micro versions > 9.
There
are too many similar assumptions about version numbers out in the real
world.



It still becomes an issue when we get to Python 10.


Just call it Python X! :-)


Things break down again when we get to Python XIX.


>>> 'XVIII' < 'XIX'

False



I believe that this could be solved by borrowing from Mark Pilgrim's 
excellent Dive Into Python which uses (or used?) these hex (?) numbers 
as the basis for a look at unit testing.


--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence

--
https://mail.python.org/mailman/listinfo/python-list


Re: Python discussed in Nature

2015-02-12 Thread Ethan Furman
On 02/12/2015 12:46 AM, Steven D'Aprano wrote:

 Nature, one of the world's premier science journals, has published an 
 excellent article about programming in Python:
 
 http://www.nature.com/news/programming-pick-up-python-1.16833

That is a very nice article, thanks for sharing!

--
~Ethan~



-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Async/Concurrent HTTP Requests

2015-02-12 Thread Marko Rauhamaa
Paul Rubin no.email@nospam.invalid:

 Marko Rauhamaa ma...@pacujo.net writes:
 I have successfully done event-driven I/O using select.epoll() and
 socket.socket().

 Sure, but then you end up writing a lot of low-level machinery that
 packages like twisted take care of for you.

Certainly. It would be nice if the stdlib protocol facilities were
event-driven and divorced from the low-level I/O.

Asyncio does that, of course, but the programming model feels a bit
weird.

Twisted documentation seems a bit vague on details. For example, what
should one make of this:

   def write(data):
       Write some data to the physical connection, in sequence, in a
       non-blocking fashion.

       If possible, make sure that it is all written. No data will ever
       be lost, although (obviously) the connection may be closed before
       it all gets through.

   URL: https://twistedmatrix.com/documents/15.0.0/api/twisted.internet.interfaces.ITransport.html#write

So I'm left wondering if the call will block and if not, how is flow
control and buffering managed. The API documentation leads me to a maze
of twisted passages, all alike. From what I could gather, the write()
method is blocking and hence not suitable for serious work.

By contrast, the semantics of Python's socket.send() is crisply defined
and a pleasure to work with.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


fuzzysearch: find not exactly what you're looking for!

2015-02-12 Thread Tal Einat
Hi everyone,

I'd like to introduce a Python library I've been working on for a
while: fuzzysearch. I would love to get as much feedback as possible:
comments, suggestions, bugs and more are all very welcome!

fuzzysearch is useful for searching when you'd like to find
nearly-exact matches. What should be considered a nearly matching
sub-string is defined by a maximum allowed Levenshtein distance[1].
This can be further refined by indicating the maximum allowed number
of substitutions, insertions and/or deletions, each separately.

Here is a basic example:

>>> from fuzzysearch import find_near_matches
>>> find_near_matches('PATTERN', 'aaaPATERNaaa', max_l_dist=1)
[Match(start=3, end=9, dist=1)]
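
(A hedged follow-up sketch of the per-edit-type limits mentioned above; the
keyword names max_substitutions / max_insertions / max_deletions are my
assumption based on that description, so check the README for the exact
signature.)

    from fuzzysearch import find_near_matches

    # Allow one deletion but no substitutions or insertions; this should still
    # find the same near-match as the max_l_dist=1 example above.
    print(find_near_matches('PATTERN', 'aaaPATERNaaa',
                            max_deletions=1, max_substitutions=0,
                            max_insertions=0))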

The library supports Python 2.6+ and 3.2+ with a single code base. It
is extensively tested with 97% code coverage. There are many
optimizations under the hood, including custom algorithms and extension 
modules implemented in C and Cython.

Install as usual:
$ pip install fuzzysearch

The repo is on github:
https://github.com/taleinat/fuzzysearch

Let me know what you think!

- Tal Einat

.. [1]: http://en.wikipedia.org/wiki/Levenshtein_distance
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Async/Concurrent HTTP Requests

2015-02-12 Thread Zachary Ware
On Thu, Feb 12, 2015 at 10:37 AM, Ari King ari.brandeis.k...@gmail.com wrote:
 Hi,

 I'd like to query two (or more) RESTful APIs concurrently. What is the 
 pythonic way of doing so? Is it better to use built-in functions or 
 third-party packages? Thanks.

Have a look at asyncio (new in Python 3.4, available for 3.3 as the
'tulip' project) and possibly the aiohttp project, available on PyPI.
I'm using both for a current project, and they work very well.
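
(For the archives, a hedged stdlib-only sketch of the kind of thing being
discussed, in Python 3.4's generator-based coroutine style; the URLs are
placeholders. aiohttp would let the fetches be native coroutines instead of
thread-pool calls.)

    import asyncio
    import urllib.request

    URLS = ['https://api.example.com/a',   # hypothetical endpoints
            'https://api.example.com/b']

    def fetch(url):
        # Blocking call; run in the loop's default thread pool below.
        with urllib.request.urlopen(url) as resp:
            return resp.read()

    @asyncio.coroutine
    def main(loop):
        futures = [loop.run_in_executor(None, fetch, url) for url in URLS]
        return (yield from asyncio.gather(*futures))

    loop = asyncio.get_event_loop()
    bodies = loop.run_until_complete(main(loop))
    print([len(body) for body in bodies])
    loop.close()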

-- 
Zach
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Odd version scheme

2015-02-12 Thread MRAB

On 2015-02-12 17:35, Ian Kelly wrote:

On Thu, Feb 12, 2015 at 10:19 AM, Skip Montanaro
skip.montan...@gmail.com wrote:

I believe this sort of lexicographical comparison wart is one of the reasons
the Python-dev gang decided that there would be no micro versions > 9. There
are too many similar assumptions about version numbers out in the real
world.


It still becomes an issue when we get to Python 10.


Just call it Python X! :-)

--
https://mail.python.org/mailman/listinfo/python-list



[issue23455] file iterator deemed broken; can resume after StopIteration

2015-02-12 Thread Ethan Furman

Changes by Ethan Furman et...@stoneleaf.us:


--
nosy: +ethan.furman

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23455
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Async/Concurrent HTTP Requests

2015-02-12 Thread Marko Rauhamaa
Paul Rubin no.email@nospam.invalid:

 Event-driven i/o in Python 2.x was generally done with callback-based
 packages like Twisted Matrix (www.twistedmatrix.com). In Python 3
 there are some nicer mechanisms (coroutines) so the new asyncio
 package may be easier to use than Twisted. I haven't tried it yet.

I have successfully done event-driven I/O using select.epoll() and
socket.socket().
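
(For readers who haven't seen it, a minimal sketch of that approach; this is not
Marko's code, it is Linux-only, and it glosses over exactly the flow-control and
partial-write machinery mentioned elsewhere in the thread.)

    import select
    import socket

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(('127.0.0.1', 8888))          # arbitrary example port
    server.listen(5)
    server.setblocking(False)

    epoll = select.epoll()
    epoll.register(server.fileno(), select.EPOLLIN)
    connections = {}

    try:
        while True:
            for fd, events in epoll.poll(1):
                if fd == server.fileno():                 # new client
                    conn, _ = server.accept()
                    conn.setblocking(False)
                    epoll.register(conn.fileno(), select.EPOLLIN)
                    connections[conn.fileno()] = conn
                elif events & select.EPOLLIN:             # data or EOF
                    conn = connections[fd]
                    data = conn.recv(4096)
                    if data:
                        conn.send(data)                   # echo back; may short-write
                    else:
                        epoll.unregister(fd)
                        connections.pop(fd).close()
    finally:
        epoll.close()
        server.close()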


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Odd version scheme

2015-02-12 Thread Tim Chase
On 2015-02-12 12:16, Ian Kelly wrote:
  It still becomes an issue when we get to Python 10.
 
  Just call it Python X! :-)
 
 Things break down again when we get to Python XIX.
 
  >>> 'XVIII' < 'XIX'
 False

You know what this sub-thread gives me? The icks.

https://www.youtube.com/watch?v=6DzfPcSysAg

-tkc



-- 
https://mail.python.org/mailman/listinfo/python-list


[issue21717] Exclusive mode for ZipFile and TarFile

2015-02-12 Thread Berker Peksag

Changes by Berker Peksag berker.pek...@gmail.com:


Added file: http://bugs.python.org/file38115/issue21717_tarfile_v5.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue21717
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Python discussed in Nature

2015-02-12 Thread Fabien

On 12.02.2015 10:31, wxjmfa...@gmail.com wrote:
[some OT stuff about unicode]

... what a coincidence then that a huge majority of scientists 
(including me) don't care AT ALL about unicode. But since scientists are 
not paid to rewrite old code, the scientific world is still stuck to 
python 2. It's a pity, given how easy it is to write py2/py3 compatible 
scientific tools.


Thanks for the link to the article, Steven!

Fabien

(sorry for the OT & sorry for feeding the troll)
--
https://mail.python.org/mailman/listinfo/python-list


[issue23450] Possible loss of data warnings building 3.5 Visual Studio Windows 8.1 64 bit

2015-02-12 Thread STINNER Victor

STINNER Victor added the comment:

The 64-bit support on Windows is still incomplete :-/ We tried to fix most of 
the issues, but some remain.

The main issue is #9566. I opened for example the issue #18295: Possible 
integer overflow in PyCode_New().

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23450
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue20521] [PATCH] Cleanup for dis module documentation

2015-02-12 Thread Mark Lawrence

Mark Lawrence added the comment:

Could someone review the patch please? It doesn't appear to contain anything 
contentious.

--
nosy: +BreamoreBoy
versions: +Python 3.5 -Python 3.3

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue20521
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue20947] -Wstrict-overflow findings

2015-02-12 Thread Mark Lawrence

Mark Lawrence added the comment:

@Serhiy/Victor I believe that you're both interested in this type of problem.

--
nosy: +BreamoreBoy, haypo, serhiy.storchaka

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue20947
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Python discussed in Nature

2015-02-12 Thread Steven D'Aprano
Nature, one of the world's premier science journals, has published an 
excellent article about programming in Python:

http://www.nature.com/news/programming-pick-up-python-1.16833


-- 
Steve

-- 
https://mail.python.org/mailman/listinfo/python-list


CrocToy inquiry

2015-02-12 Thread Andrew Kostadis
Hello,
I am working on the CrocToy project, which is a big robotic toy with artificial
intelligence, a mix of an animal and a 3-4 year old kid. After researching the
possible main functionality, I decided to use Python as the programming
platform. I am looking for help in developing the entire architecture, or at
least the main functionality modules:
– text-to-speech synthesizer
– voice recognition
– use xbox to provide “vision”, with simple object distancing and recognition
functions
– use Python to move (program) servo motors
Having limited programming skills, I am looking for a lot of help, or perhaps
someone who is interested in taking over the programming part of the CrocToy
project.
My question is: how can I let as many programmers as possible know about my
project?
If it helps, I can prepare a short overview of my project.
Thank you, and sorry for my imperfect English,
Andrew
PS
I may have trouble following answers to my post, so please use my e-mail:
croc...@gmail.com
Thank you,
Andrew
-- 
https://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations/


Re: Python discussed in Nature

2015-02-12 Thread Marko Rauhamaa
Fabien fabien.mauss...@gmail.com:

 ... what a coincidence then that a huge majority of scientists
 (including me) don't care AT ALL about unicode.

You shouldn't, any more than you care about ASCII or 2's-complement
encoding. Things should just work.

 But since scientists are not paid to rewrite old code, the scientific
 world is still stuck to python 2. It's a pity, given how easy it is
 to write py2/py3 compatible scientific tools.

What's a pity is that Python3 chose to ignore the seamless transition
path. It would have been nice, for example, to have all Python 3 code
explicitly mark its dialect (a .py3 extension, a magic import or
something) and then allow legacy Py2 code and Py3 code coexist the same
way C and C++ can coexist.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python discussed in Nature

2015-02-12 Thread Fabien

On 12.02.2015 12:25, Marko Rauhamaa wrote:

Fabienfabien.mauss...@gmail.com:


... what a coincidence then that a huge majority of scientists
(including me) don't care AT ALL about unicode.

You shouldn't, any more than you care about ASCII or 2's-complement
encoding. Things should just work.


And they do! In almost a year of writing scientific python code, 
not a single problem. I wouldn't even know about these issues if I didn't read 
some posts here.



But since scientists are not paid to rewrite old code, the scientific
world is still stuck to python 2. It's a pity, given how easy it is
to write py2/py3 compatible scientific tools.

What's a pity is that Python3 chose to ignore the seamless transition
path. It would have been nice, for example, to have all Python 3 code
explicitly mark its dialect (a .py3 extension, a magic import or
something) and then allow legacy Py2 code and Py3 code coexist the same
way C and C++ can coexist.


But this was exactly my point! Today in 2015 it's incredibly easy for a 
scientist to write py2/py3 code. The whole SciPy stack has done the 
transition. Not an issue anymore either, for me at least (python 
youngster ;-))


Fabien
--
https://mail.python.org/mailman/listinfo/python-list


[issue23314] Disabling CRT asserts in debug build

2015-02-12 Thread STINNER Victor

STINNER Victor added the comment:

When we completely switch Windows builds over to VC14, we're going to 
encounter some new assert dialogs from the CRT. (...) A number of tests attempt 
operations on bad file descriptors, which will assert and terminate in MSVCRT 
(I have a fix for the termination coming, but the assertions will still 
appear).

Can you give some examples of tests which fail with an assertion error?

_PyVerify_fd() doesn't protect use against the assertion error?

I would prefer to keep CRT error checks, to protect us against bugs.

--

Instead of patching faulthandler, you should patch 
test.support.SuppressCrashReport.__enter__. This function already calls 
SetErrorMode(). Instead of ctypes, the function may be exposed in the _winapi 
module for example. I'm talking about SetErrorMode() *and* _CrtSetReportMode().

+#if defined MS_WINDOWS && defined _DEBUG
+    if ((p = Py_GETENV("PYTHONNOCRTASSERT")) && *p != '\0') {
+        _CrtSetReportMode(_CRT_ASSERT, 0);
+    }
+#endif

The function is not available if _DEBUG is not defined? Why not call the 
function if _DEBUG is not defined?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23314
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23453] Opening a stream with tarfile.open() triggers a TypeError: can't concat bytes to str error

2015-02-12 Thread Carl Chenet

New submission from Carl Chenet:

I'm trying to use a tar stream with a Python tarfile object but each time I 
get a "TypeError: can't concat bytes to str" error.

Here is my test:
----8<----
#!/usr/bin/python3.4

import tarfile
import sys

tarobj = tarfile.open(mode='r|', fileobj=sys.stdin)
print(tarobj)
tarobj.close()
----8<----


$ tar cvf test.tar.gz tests/
tests/
tests/foo1
tests/foo/
tests/foo/bar
$ tar -O -xvf test.tar | ./tarstream.py
tests/
tests/foo1
tests/foo/
tests/foo/bar
Traceback (most recent call last):
  File "./tarstream.py", line 6, in <module>
    tarobj = tarfile.open(mode='r|', fileobj=sys.stdin)
  File "/usr/lib/python3.4/tarfile.py", line 1578, in open
    t = cls(name, filemode, stream, **kwargs)
  File "/usr/lib/python3.4/tarfile.py", line 1470, in __init__
    self.firstmember = self.next()
  File "/usr/lib/python3.4/tarfile.py", line 2249, in next
    tarinfo = self.tarinfo.fromtarfile(self)
  File "/usr/lib/python3.4/tarfile.py", line 1082, in fromtarfile
    buf = tarfile.fileobj.read(BLOCKSIZE)
  File "/usr/lib/python3.4/tarfile.py", line 535, in read
    buf = self._read(size)
  File "/usr/lib/python3.4/tarfile.py", line 543, in _read
    return self.__read(size)
  File "/usr/lib/python3.4/tarfile.py", line 569, in __read
    self.buf += buf
TypeError: can't concat bytes to str

Regards,
Carl Chenet

--
components: Library (Lib)
messages: 235808
nosy: chaica_
priority: normal
severity: normal
status: open
title: Opening a stream with tarfile.open() triggers a TypeError: can't concat 
bytes to str error
type: crash
versions: Python 3.4

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23453
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23446] Use PyMem_New instead of PyMem_Malloc

2015-02-12 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

 In _testbuffer.c:  ndim <= 64, so the changes aren't really necessary.

Indeed, I'll remove these changes.

 The reason is of course that even an array with only 2 elements per
dimension gets quite large with ndim=64. :)

But an array can have just 1 element per dimension. In any case it is good that 
there is a strict limit on ndim values.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23446
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue20486] msilib: can't close opened database

2015-02-12 Thread Mark Lawrence

Mark Lawrence added the comment:

Sorry folks I can't try this myself as I'm not running 2.7 and I don't know how 
to create the test.msi file.

--
nosy: +BreamoreBoy, steve.dower, tim.golden, zach.ware

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue20486
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23453] Opening a stream with tarfile.open() triggers a TypeError: can't concat bytes to str error

2015-02-12 Thread Martin Panter

Martin Panter added the comment:

Using fileobj=sys.stdin.buffer instead should do the trick. The “tarfile” 
module expects a binary stream, not a text stream.

Given the documentation currently says, “Use this variant in combination with 
e.g. sys.stdin, . . .”, I presume that is why you were using plain stdin. The 
documentation should be clarified.
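
In other words, a minimal corrected version of the reporter's script would be 
(same code, only the fileobj argument changes):

    #!/usr/bin/python3.4

    import sys
    import tarfile

    # tarfile expects bytes; sys.stdin is a text stream, sys.stdin.buffer is
    # the underlying binary stream.
    tarobj = tarfile.open(mode='r|', fileobj=sys.stdin.buffer)
    print(tarobj)
    tarobj.close()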

--
assignee:  - docs@python
components: +Documentation
nosy: +docs@python, vadmium

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23453
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23450] Possible loss of data warnings building 3.5 Visual Studio Windows 8.1 64 bit

2015-02-12 Thread STINNER Victor

STINNER Victor added the comment:

signal_cast_socket_t.patch: Fix warning in signal.set_wakeup_fd(). I recently 
introduced the warning when I added support for sockets in this function on 
Windows.

--
Added file: http://bugs.python.org/file38109/signal_cast_socket_t.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23450
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue20699] Behavior of ZipFile with file-like object and BufferedWriter.

2015-02-12 Thread Martin Panter

Martin Panter added the comment:

Posting patch v2:

* Changed readinto() argument descriptions to “a pre-allocated, writable 
bytes-like buffer”, for both RawIOBase and BufferedIOBase
* Integrated the single-use test_memoryio.BytesIOMixin test class, which 
tricked me when I did the first patch
* Added tests for BufferedRWPair, BytesIO.readinto() etc methods with 
non-bytearray() buffers
* Fix _pyio.BufferedReader.readinto()/readinto1() for non-bytearray buffers
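
(As an aside, a tiny sketch of what “a pre-allocated, writable bytes-like 
buffer” covers beyond bytearray; my own example, not from the patch.)

    import array
    import io

    buf = array.array('b', bytes(5))          # writable, non-bytearray buffer
    n = io.BytesIO(b'hello world').readinto(buf)
    print(n, buf.tobytes())                   # 5 b'hello'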

--
Added file: http://bugs.python.org/file38107/bytes-like-param.v2.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue20699
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue20391] windows python launcher should support explicit 64-bit version

2015-02-12 Thread Mark Lawrence

Mark Lawrence added the comment:

Having read 
https://docs.python.org/3/using/windows.html#customizing-default-python-versions
 I'm not convinced that this is needed, as the first sentence of the fifth 
paragraph states "On 64-bit Windows with both 32-bit and 64-bit implementations 
of the same (major.minor) Python version installed, the 64-bit version will 
always be preferred."

--
components: +Windows
nosy: +BreamoreBoy, steve.dower, tim.golden, zach.ware
versions: +Python 3.5

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue20391
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue22524] PEP 471 implementation: os.scandir() directory scanning function

2015-02-12 Thread STINNER Victor

STINNER Victor added the comment:

scandir-3.patch: New implementation based on scandir-2.patch on Ben's github 
repository.

Main changes with scandir-2.patch:

* new DirEntry.inode() method

* os.scandir() doesn't support bytes on Windows anymore: it's deprecated since 
python 3.3 and using bytes gives unexpected results on Windows


As discussed with Ben Hoyt, I added a inode() method to solve the use case 
described here:
https://www.reddit.com/r/Python/comments/2synry/so_8_peps_are_currently_being_proposed_for_python/cnvnz1w

I will update the PEP 471 to add the inode() method.


Notes:

* os.scandir() doesn't accept a file descriptor, as decided in the PEP 471.

* It may be nice to modify Scandir.close() to close the handle/directory; 
Scandir is the iterator returned by os._scandir()

* My patch doesn't modify os.walk(): I prefer to do that in a new issue

* DirEntry methods have no docstring


Changes with scandir-2.patch:

* C code is added to posixmodule.c, not into a new _scandir.c file, to avoid 
code duplication (all Windows code to handle attributes)
* C code is restricted to the minimum: it's now only a wrapper to 
opendir+readdir and FindFirstFileW/FindNextFileW
* os._scandir() directly calls opendir(); it's no longer delayed to the first 
call to next(), so errors are reported earlier. In practice, I don't think that 
anyone will notice :-p
* don't allocate a buffer to store a HANDLE: use directly a HANDLE
* C code: use #ifdef inside functions, not outside
* On Windows, os._scandir() appends *.* instead of * to the path, to mimic 
os.listdir()
* put information about cache and syscall directly in the doc of DirEntry 
methods
* remove promise of performances from scandir doc: be more factual, explain 
when syscalls are required or not
* expose DT_UNKNOWN, DT_DIR, DT_REG, DT_LNK constants in the posix module; but I 
prefer to not document them: use directly scandir!
* rewrite completly unit test:

  - reuse test.support
  - compare DirEntry attributes with the result of functions (ex: os.stat() or 
os.path.islink())

* add tests on removed directory, removed file and broken symbolic link
* remove ":" from the repr(DirEntry) result, it's now <DirEntry 'xxx'>; drop 
__str__ method (by default, __str__ calls __repr__)
* use new OSError subclasses (FileNotFoundError)
* DirEntry methods: use stat.S_ISxxx() methods instead of st.st_mode & 
0o170000 == S_IFxxx

--
Added file: http://bugs.python.org/file38108/scandir-3.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22524
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue20523] global .pdbrc on windows 7 not reachable out of the box

2015-02-12 Thread Mark Lawrence

Mark Lawrence added the comment:

We have a patch to review, or we need a doc patch, unless someone has a 
different idea from the approaches suggested by the originator.  I prefer the 
idea of changing the code; manually changing environment variables just seems 
wrong to me, but I won't lose any sleep over it.

--
components: +Library (Lib), Windows
nosy: +BreamoreBoy, steve.dower, tim.golden, zach.ware
versions: +Python 3.5

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue20523
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23442] http.client.REQUEST_HEADER_FIELDS_TOO_LARGE renamed in 3.5

2015-02-12 Thread Berker Peksag

Berker Peksag added the comment:

I found another regression: In Python 3.4, 416 is 
REQUESTED_RANGE_NOT_SATISFIABLE, but REQUEST_RANGE_NOT_SATISFIABLE in 3.5.

--
nosy: +berker.peksag
stage:  - patch review

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23442
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Floating point g format not stripping trailing zeros

2015-02-12 Thread Hrvoje Nikšić
  >>> from decimal import Decimal as D
  >>> x = D(1)/D(999)
  >>> '{:.15g}'.format(x)
  '0.00100100100100100'
[...]
  I'd say it's a bug.  P is 15, you've got 17 digits after the decimal place
  and two of those are insignificant trailing zeros.

 Actually it's the float version that doesn't match the documentation.
 In the decimal version, sure there are 17 digits after the decimal
 place there, but the first two -- which are leading zeroes -- would
 not normally be considered significant.

{:.15g} is supposed to give 15 digits of precision, but with trailing
zeros removed.  For example, '{:.15g}'.format(Decimal('0.5')) should
yield '0.5', not '0.500' -- and, it indeed does.  It is
only for some numbers that trailing zeros are not removed, which looks
like a bug.  The behavior of floats matches both the documentation and
other languages using the 'g' decimal format, such as C.

 The float version OTOH is only giving you 13 significant digits when
 15 were requested.

It is giving 15 significant digits if you count the trailing zeros
that have been removed.  If those two digits had not been zeros, they
would have been included.  This is again analogous to
'{:.15g}'.format(0.5) returning '0.5'.
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue23450] Possible loss of data warnings building 3.5 Visual Studio Windows 8.1 64 bit

2015-02-12 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

Please ignore changes to Objects/codeobject.c, Objects/funcobject.c and 
Python/ceval.c. The patch in issue18295 is more advanced.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23450
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23146] Inconsistency in pathlib between / and \

2015-02-12 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

Here is a patch. It also fixes tests which didn't test altsep.

--
keywords: +patch
stage: needs patch - patch review
Added file: http://bugs.python.org/file38117/pathlib_parse_parts_altsep.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23146
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23442] http.client.REQUEST_HEADER_FIELDS_TOO_LARGE renamed in 3.5

2015-02-12 Thread Martin Panter

Martin Panter added the comment:

Thanks. Confirming the patch fixes the problem for me, so it should be 
committed. I wonder if a test case would be good too, though.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23442
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue22524] PEP 471 implementation: os.scandir() directory scanning function

2015-02-12 Thread STINNER Victor

STINNER Victor added the comment:

Similar benchmark result on my laptop which has a SSD (ext4 filesystem tool, 
but I guess that the directory is small and fits into the memory).

Note: I'm not sure that the "between ...x and ...x faster" figures are relevant; 
I'm not sure that my computation is correct.

Test listdir+stat vs scandir+is_dir
Temporary directory: /home/haypo/prog/python/default/tmpbrn4r2tv
Create 10 files+symlinks...
Create 1 directories...
# entries: 21
Benchmark...
listdir: 1730.7 ms
scandir: 1029.4 ms
listdir: 476.8 ms
scandir: 1058.3 ms
listdir: 485.4 ms
scandir: 1041.1 ms

Remove the temporary directory...

Result:
listdir: min=476.8 ms (2.3 us per file), max=1730.7 ms (8.2 us per file)
scandir: min=1029.4 ms (4.9 us per file), max=1058.3 ms (5.0 us per file)
scandir is between 0.5x and 1.7x faster

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22524
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23442] http.client.REQUEST_HEADER_FIELDS_TOO_LARGE renamed in 3.5

2015-02-12 Thread Demian Brecht

Demian Brecht added the comment:

Thanks for the test Berker, I'll put a patch together with the changes
later this afternoon.

On 2015-02-12 2:27 PM, Berker Peksag wrote:
 
 Berker Peksag added the comment:
 
 Here is a test case.
 
 ==
 FAIL: test_client_constants (test.test_httplib.OfflineTest) 
 (constant='REQUESTED_RANGE_NOT_SATISFIABLE')
 --
 Traceback (most recent call last):
   File /home/berker/projects/cpython/default/Lib/test/test_httplib.py, line 
 985, in test_client_constants
 self.assertTrue(hasattr(client, const))
 AssertionError: False is not true
 
 ==
 FAIL: test_client_constants (test.test_httplib.OfflineTest) 
 (constant='REQUEST_HEADER_FIELDS_TOO_LARGE')
 --
 Traceback (most recent call last):
   File /home/berker/projects/cpython/default/Lib/test/test_httplib.py, line 
 985, in test_client_constants
 self.assertTrue(hasattr(client, const))
 AssertionError: False is not true
 
 --
 Added file: http://bugs.python.org/file38119/httpstatus_tests.diff
 
 ___
 Python tracker rep...@bugs.python.org
 http://bugs.python.org/issue23442
 ___


--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23442
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: fuzzysearch: find not exactly what you're looking for!

2015-02-12 Thread Emile van Sebille

On 2/12/2015 11:51 AM, Tal Einat wrote:

Hi everyone,

I'd like to introduce a Python library I've been working on for a
while: fuzzysearch. I would love to get as much feedback as possible:
comments, suggestions, bugs and more are all very welcome!


I currently adapt difflib's SequenceMatcher for my fuzzy search needs -- can 
you provide some perspective on how fuzzysearch compares?


Thanks,

Emile


--
https://mail.python.org/mailman/listinfo/python-list


[issue23442] http.client.REQUEST_HEADER_FIELDS_TOO_LARGE renamed in 3.5

2015-02-12 Thread Berker Peksag

Berker Peksag added the comment:

Here is a test case.

==
FAIL: test_client_constants (test.test_httplib.OfflineTest) 
(constant='REQUESTED_RANGE_NOT_SATISFIABLE')
--
Traceback (most recent call last):
  File /home/berker/projects/cpython/default/Lib/test/test_httplib.py, line 
985, in test_client_constants
self.assertTrue(hasattr(client, const))
AssertionError: False is not true

==
FAIL: test_client_constants (test.test_httplib.OfflineTest) 
(constant='REQUEST_HEADER_FIELDS_TOO_LARGE')
--
Traceback (most recent call last):
  File /home/berker/projects/cpython/default/Lib/test/test_httplib.py, line 
985, in test_client_constants
self.assertTrue(hasattr(client, const))
AssertionError: False is not true

--
Added file: http://bugs.python.org/file38119/httpstatus_tests.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23442
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23439] Fixed http.client.__all__ and added a test

2015-02-12 Thread Martin Panter

Martin Panter added the comment:

I don’t have a strong opinion about changing __all__ in these cases. I only 
noticed the potential problem when I went to add a new class to the module, and 
thought this was common practice. If we leave it as it is, it would be good to 
add comment in the source code explaining this decision. Also the test case 
could still be useful to catch future bugs.

The current situation means the status constants are missing from pydoc 
help output, and are not available when you do “from http.client import *” in 
the interactive interpreter.

HTTPMessage is a strange class indeed; see Issue 5053. But it is referenced a 
few times by the documentation, so I originally assumed it should be in __all__.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23439
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue22524] PEP 471 implementation: os.scandir() directory scanning function

2015-02-12 Thread STINNER Victor

STINNER Victor added the comment:

Benchmark on NFS. Client: my laptop, connected to the LAN by wifi. Server: 
desktop, connected to the LAN by PLC. For an unknown reason, the creation of 
files, symlinks and directories is very slow (more than 30 seconds, even 
though I reduced the number of files & directories).

Test listdir+stat vs scandir+is_dir
Temporary directory: /home/haypo/mnt/tmp5aee0eic
Create 1000 files+symlinks...
Create 1000 directories...
# entries: 3000
Benchmark...
listdir: 14478.0 ms
scandir: 732.1 ms
listdir: 9.9 ms
scandir: 14.9 ms
listdir: 7.5 ms
scandir: 12.9 ms

Remove the temporary directory...

Result:
listdir: min=7.5 ms (2.5 us per file), max=14478.0 ms (4826.0 us per file)
scandir: min=12.9 ms (4.3 us per file), max=732.1 ms (244.0 us per file)
scandir is between 0.0x and 1119.6x faster

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22524
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue18295] Possible integer overflow in PyCode_New()

2015-02-12 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

Many of these overflows can be provoked by a specially constructed function, code 
object or bytecode.

Also I think the following examples crash or return a wrong result on a 64-bit 
platform:

def f(*args, **kwargs): return len(args), len(kwargs)

f(*([0]*(2**32+1)))
f(**dict.fromkeys(map(hex, range(2**31+1))))

Here is an updated patch which handles overflows in a non-debug build. It prevents 
creating a Python function with more than 255 default values (in any case the 
compiler and interpreter don't support more than 255 arguments) and raises an 
exception when a function is called with too many arguments or with too large 
*args or **kwargs.

--
stage:  - patch review
type:  - crash
Added file: http://bugs.python.org/file38116/code_ssize_t_2.patch.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18295
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Odd version scheme

2015-02-12 Thread Emile van Sebille

On 2/12/2015 11:16 AM, Ian Kelly wrote:

Things break down again when we get to Python XIX.



>>> 'XVIII' < 'XIX'

False


Looks to me like you better check if your PEP 313 patch is installed 
properly.  :)


Emile



--
https://mail.python.org/mailman/listinfo/python-list


[issue23456] asyncio: add missing @coroutine decorators

2015-02-12 Thread STINNER Victor

New submission from STINNER Victor:

coroutine_decorator.patch adds missing @coroutine decorator to coroutine 
functions and methods in the asyncio module.

I'm not sure that it's ok to add @coroutine to __iter__() methods. At least, 
test_asyncio passes.
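
For context, a minimal illustration (not from the patch) of the decorator in 
question, in asyncio's generator-based coroutine style:

    import asyncio

    @asyncio.coroutine
    def double_later(x):
        yield from asyncio.sleep(0)   # any suspension point
        return x * 2

    loop = asyncio.get_event_loop()
    print(loop.run_until_complete(double_later(21)))   # 42
    loop.close()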

--
components: asyncio
files: coroutine_decorator.patch
keywords: patch
messages: 235857
nosy: gvanrossum, haypo, yselivanov
priority: normal
severity: normal
status: open
title: asyncio: add missing @coroutine decorators
versions: Python 3.4, Python 3.5
Added file: http://bugs.python.org/file38118/coroutine_decorator.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23456
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Floating point g format not stripping trailing zeros

2015-02-12 Thread Ian Kelly
On Thu, Feb 12, 2015 at 1:23 PM, Hrvoje Nikšić hnik...@gmail.com wrote:
  >>> from decimal import Decimal as D
  >>> x = D(1)/D(999)
  >>> '{:.15g}'.format(x)
  '0.00100100100100100'
 [...]
  I'd say it's a bug.  P is 15, you've got 17 digits after the decimal place
  and two of those are insignificant trailing zeros.

 Actually it's the float version that doesn't match the documentation.
 In the decimal version, sure there are 17 digits after the decimal
 place there, but the first two -- which are leading zeroes -- would
 not normally be considered significant.

 {:.15g} is supposed to give 15 digits of precision, but with trailing
 zeros removed.

The doc says "with insignificant trailing zeros removed", not all
trailing zeros.

 For example, '{:.15g}'.format(Decimal('0.5')) should
 yield '0.5', not '0.500' -- and, it indeed does.  It is
 only for some numbers that trailing zeros are not removed, which looks
 like a bug.  The behavior of floats matches both the documentation and
 other languages using the 'g' decimal format, such as C.

Ah, I see now what's going on here. With floats, there is really no
notion of significant digits. The values 0.5 and 0.50000 are
completely equivalent. With decimals, that's not exactly true; if you
give the decimal a trailing zero then you are telling it that the zero
is significant.

>>> Decimal('0.5')
Decimal('0.5')
>>> Decimal('0.50000')
Decimal('0.50000')
>>> Decimal('0.5').as_tuple()
DecimalTuple(sign=0, digits=(5,), exponent=-1)
>>> Decimal('0.50000').as_tuple()
DecimalTuple(sign=0, digits=(5, 0, 0, 0, 0), exponent=-5)

These are distinct; the decimal knows how many significant digits you
passed it. As a result, these are also distinct:

>>> '{:.4g}'.format(Decimal('0.5'))
'0.5'
>>> '{:.4g}'.format(Decimal('0.50000'))
'0.5000'

Now what happens in your original example of 1/999? The default
decimal context uses 28 digits of precision, so the result of that
calculation will have 28 significant digits in it.

>>> decimal.getcontext().prec
28
>>> Decimal(1) / Decimal(999)
Decimal('0.001001001001001001001001001001')

When you specify a precision of 15 in your format string, you're
telling it to take the first 15 of those. It doesn't care that the
last couple of those are zeros, because as far as it's concerned,
those digits are significant.
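
A short sketch of the contrast (my example, not from the thread): the 28-digit
Decimal keeps its two "significant" trailing zeros under .15g, while the
equivalent float drops them:

    from decimal import Decimal, getcontext

    getcontext().prec = 28                  # the default context precision
    x = Decimal(1) / Decimal(999)

    print('{:.15g}'.format(x))              # '0.00100100100100100' (zeros kept)
    print('{:.15g}'.format(float(x)))       # trailing zeros stripped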
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue22147] PosixPath() constructor should not accept strings with embedded NUL bytes

2015-02-12 Thread Demian Brecht

Changes by Demian Brecht demianbre...@gmail.com:


--
nosy:  -demian.brecht

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22147
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue12455] urllib2 forces title() on header names, breaking some requests

2015-02-12 Thread Demian Brecht

Changes by Demian Brecht demianbre...@gmail.com:


--
nosy:  -demian.brecht

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue12455
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue17908] Unittest runner needs an option to call gc.collect() after each test

2015-02-12 Thread Demian Brecht

Changes by Demian Brecht demianbre...@gmail.com:


--
nosy:  -demian.brecht

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue17908
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue19433] Define PY_UINT64_T on Windows 32bit

2015-02-12 Thread Brian Curtin

Changes by Brian Curtin br...@python.org:


--
nosy:  -brian.curtin

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19433
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue22524] PEP 471 implementation: os.scandir() directory scanning function

2015-02-12 Thread STINNER Victor

STINNER Victor added the comment:

I enhanced bench_scandir2.py to have one command to create the test directory 
and a different command to run the benchmark.

All commands:
- create: create the directory for tests (you don't need this command, you can 
also use an existing directory)
- bench: compare scandir+is_dir to listdir+stat, cached
- bench_nocache: compare scandir+is_dir to listdir+stat, flush disk caches
- bench_nostat: compare scandir to listdir, cached
- bench_nostat_nocache: compare scandir to listdir, flush disk caches

--

New patch version 6 written for performances, changes:

- On POSIX, decode the filename in C
- _scandir() iterator now yields list of items, instead of an single item

With my benchmarks, I see that yielding 10 items at a time reduces the overhead of 
scandir on Linux (creating DirEntry objects). On Windows, the number of items 
has no effect; I prefer to fetch entries 10 at a time there as well, to mimic POSIX. 
Later, on POSIX, we may use getdents() directly and yield the full getdents() result 
at once. According to strace, it's currently around 800 entries per getdents() 
syscall.


Results of bench_scandir2.py on my laptop using SSD and ext4 filesystem:

- 110,100 entries (100,000 files, 100 symlinks, 10,000 directories)
- bench: 1.3x faster (scandir: 164.9 ms, listdir: 216.3 ms)
- bench_nostat: 0.4x faster (scandir: 104.0 ms, listdir: 38.5 ms)
- bench_nocache: 2.1x faster (scandir: 460.2 ms, listdir: 983.2 ms)
- bench_nostat_nocache: 2.2x faster (scandir: 480.4 ms, listdir: 1055.6 ms)

Results of bench_scandir2.py on my laptop using NFS share (server: ext4 
filesystem) and slow wifi:

- 11,100 entries (10,000 files, 100 symlinks, 1,000 directories)
- bench: 1.3x faster (scandir: 22.5 ms, listdir: 28.9 ms)
- bench_nostat: 0.2x faster (scandir: 14.3 ms, listdir: 3.2 ms)

*** Timings with NFS are not reliable. Sometimes, a directory listing takes 
more than 30 seconds, but then it takes less than 100 ms. ***

Results of bench_scandir2.py on a Windows 7 VM using NTFS:

- 11,100 entries (10,000 files, 1,000 directories, 100 symlinks)
- bench: 9.9x faster (scandir: 58.3 ms, listdir: 578.5 ms)
- bench_nostat: 0.3x faster (scandir: 28.5 ms, listdir: 7.6 ms)

Results of bench_scandir2.py on my desktop PC using tmpfs (/tmp):

- 110,100 entries (100,000 files, 100 symlinks, 10,000 directories)
- bench: 1.3x faster (scandir: 149.2 ms, listdir: 189.2 ms)
- bench_nostat: 0.3x faster (scandir: 91.9 ms, listdir: 27.1 ms)

Results of bench_scandir2.py on my desktop PC using HDD and ext4:

- 110,100 entries (100,000 files, 100 symlinks, 10,000 directories)
- bench: 1.4x faster (scandir: 168.5 ms, listdir: 238.9 ms)
- bench_nostat: 0.4x faster (scandir: 107.5 ms, listdir: 41.9 ms)

--
Added file: http://bugs.python.org/file38121/scandir-6.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22524
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23439] Fixed http.client.__all__ and added a test

2015-02-12 Thread Demian Brecht

Demian Brecht added the comment:

 If we leave it as it is, it would be good to add a comment in the source code 
 explaining this decision.
I think that __all__ should be left as-is for the time being. Adding
some comments around that decision makes sense to me to avoid any future
confusion around that.

 Also the test case could still be useful to catch future bugs.
Agreed. I've added a couple minor comments to the review.

Thanks for the work on this!

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23439
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue22197] Allow better verbosity / output control in test cases

2015-02-12 Thread Demian Brecht

Changes by Demian Brecht demianbre...@gmail.com:


--
nosy:  -demian.brecht

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22197
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue14044] IncompleteRead error with urllib2 or urllib.request -- fine with urllib, wget, or curl

2015-02-12 Thread Demian Brecht

Changes by Demian Brecht demianbre...@gmail.com:


--
nosy:  -demian.brecht

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue14044
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue14414] xmlrpclib leaves connection in broken state if server returns error without content-length

2015-02-12 Thread Demian Brecht

Changes by Demian Brecht demianbre...@gmail.com:


--
nosy:  -demian.brecht

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue14414
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23043] doctest ignores from __future__ import print_function

2015-02-12 Thread Demian Brecht

Changes by Demian Brecht demianbre...@gmail.com:


--
nosy:  -demian.brecht

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23043
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23004] mock_open() should allow reading binary data

2015-02-12 Thread Demian Brecht

Changes by Demian Brecht demianbre...@gmail.com:


--
nosy:  -demian.brecht

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23004
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue17986] Alternative async subprocesses (pep 3145)

2015-02-12 Thread STINNER Victor

STINNER Victor added the comment:

As with Tulip and subprocdev, starting such a project outside the Python stdlib may 
help to get feedback and to find and fix bugs faster. What do you think?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue17986
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23448] urllib2 needs to remove scope from IPv6 address when creating Host header

2015-02-12 Thread Demian Brecht

Changes by Demian Brecht demianbre...@gmail.com:


--
nosy:  -demian.brecht

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23448
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23166] urllib2 ignores opener configuration under certain circumstances

2015-02-12 Thread Demian Brecht

Changes by Demian Brecht demianbre...@gmail.com:


--
nosy:  -demian.brecht

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23166
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue22524] PEP 471 implementation: os.scandir() directory scanning function

2015-02-12 Thread STINNER Victor

Changes by STINNER Victor victor.stin...@gmail.com:


Added file: http://bugs.python.org/file38120/bench_scandir2.py

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22524
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23442] http.client.REQUEST_HEADER_FIELDS_TOO_LARGE renamed in 3.5

2015-02-12 Thread Demian Brecht

Demian Brecht added the comment:

I've attached a patch with fixes for both cases and the tests added by
Berker. Thanks guys.

--
Added file: http://bugs.python.org/file38122/issue23442_1.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23442
___diff -r e548ab4ce71d Lib/http/__init__.py
--- a/Lib/http/__init__.py  Mon Feb 09 19:49:00 2015 +
+++ b/Lib/http/__init__.py  Thu Feb 12 16:48:03 2015 -0800
@@ -93,7 +93,7 @@
 'URI is too long')
 UNSUPPORTED_MEDIA_TYPE = (415, 'Unsupported Media Type',
 'Entity body in unsupported format')
-REQUEST_RANGE_NOT_SATISFIABLE = (416,
+REQUESTED_RANGE_NOT_SATISFIABLE = (416,
 'Request Range Not Satisfiable',
 'Cannot satisfy request range')
 EXPECTATION_FAILED = (417, 'Expectation Failed',
@@ -107,7 +107,7 @@
 TOO_MANY_REQUESTS = (429, 'Too Many Requests',
 'The user has sent too many requests in '
 'a given amount of time (rate limiting)')
-REQUEST_HEADER_FIELD_TOO_LARGE = (431,
+REQUEST_HEADER_FIELDS_TOO_LARGE = (431,
 'Request Header Field Too Large',
 'The server is unwilling to process the request because its header '
 'fields are too large')
diff -r e548ab4ce71d Lib/test/test_httplib.py
--- a/Lib/test/test_httplib.py  Mon Feb 09 19:49:00 2015 +
+++ b/Lib/test/test_httplib.py  Thu Feb 12 16:48:03 2015 -0800
@@ -924,6 +924,66 @@
 def test_responses(self):
  self.assertEqual(client.responses[client.NOT_FOUND], "Not Found")
 
+def test_client_constants(self):
+expected = [
+'CONTINUE',
+'SWITCHING_PROTOCOLS',
+'PROCESSING',
+'OK',
+'CREATED',
+'ACCEPTED',
+'NON_AUTHORITATIVE_INFORMATION',
+'NO_CONTENT',
+'RESET_CONTENT',
+'PARTIAL_CONTENT',
+'MULTI_STATUS',
+'IM_USED',
+'MULTIPLE_CHOICES',
+'MOVED_PERMANENTLY',
+'FOUND',
+'SEE_OTHER',
+'NOT_MODIFIED',
+'USE_PROXY',
+'TEMPORARY_REDIRECT',
+'BAD_REQUEST',
+'UNAUTHORIZED',
+'PAYMENT_REQUIRED',
+'FORBIDDEN',
+'NOT_FOUND',
+'METHOD_NOT_ALLOWED',
+'NOT_ACCEPTABLE',
+'PROXY_AUTHENTICATION_REQUIRED',
+'REQUEST_TIMEOUT',
+'CONFLICT',
+'GONE',
+'LENGTH_REQUIRED',
+'PRECONDITION_FAILED',
+'REQUEST_ENTITY_TOO_LARGE',
+'REQUEST_URI_TOO_LONG',
+'UNSUPPORTED_MEDIA_TYPE',
+'REQUESTED_RANGE_NOT_SATISFIABLE',
+'EXPECTATION_FAILED',
+'UNPROCESSABLE_ENTITY',
+'LOCKED',
+'FAILED_DEPENDENCY',
+'UPGRADE_REQUIRED',
+'PRECONDITION_REQUIRED',
+'TOO_MANY_REQUESTS',
+'REQUEST_HEADER_FIELDS_TOO_LARGE',
+'INTERNAL_SERVER_ERROR',
+'NOT_IMPLEMENTED',
+'BAD_GATEWAY',
+'SERVICE_UNAVAILABLE',
+'GATEWAY_TIMEOUT',
+'HTTP_VERSION_NOT_SUPPORTED',
+'INSUFFICIENT_STORAGE',
+'NOT_EXTENDED',
+'NETWORK_AUTHENTICATION_REQUIRED',
+]
+for const in expected:
+with self.subTest(constant=const):
+self.assertTrue(hasattr(client, const))
+
 
 class SourceAddressTest(TestCase):
 def setUp(self):
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue22946] urllib gives incorrect url after open when using HTTPS

2015-02-12 Thread Demian Brecht

Changes by Demian Brecht demianbre...@gmail.com:


--
nosy:  -demian.brecht

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22946
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue21557] os.popen os.system lack shell-related security warnings

2015-02-12 Thread Demian Brecht

Changes by Demian Brecht demianbre...@gmail.com:


--
nosy:  -demian.brecht

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue21557
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue8843] urllib2 Digest Authorization uri must match request URI

2015-02-12 Thread Demian Brecht

Changes by Demian Brecht demianbre...@gmail.com:


--
nosy:  -demian.brecht

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8843
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue14301] xmlrpc client transport and threading problem

2015-02-12 Thread Demian Brecht

Changes by Demian Brecht demianbre...@gmail.com:


--
nosy:  -demian.brecht

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue14301
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Parsing and comparing version strings (was: Odd version scheme)

2015-02-12 Thread Tim Chase
On 2015-02-13 12:20, Ben Finney wrote:
  Not sure why this is ridiculous.
 
 Right, versions are effectively a special type [0], specifically
 *because* they intentionally don't compare as scalar numbers or
 strings. It's not “ridiculous” to need custom comparisons when
 that's the case.
 
 Python even comes with support for version parsing and comparison
 in the standard library [1]. So if anything's “ridiculous”, it's
 the act of re-implementing that and getting it wrong.
 
 (Or maybe that such an important part of the standard library is
 largely undocumented.)

I was surprised (pleasantly) to learn of that corner of distutils,
but can never remember where it is in the stdlib or what it's
called.  So it's pretty standard for my process to be something like

 - Hmm, I need to compare version-strings
 - search the web for python compare version number
 - note the top-ranked Stack Overflow answer
 - spot that it uses distutils.version.{LooseVersion,StrictVersion}
 - use that in my code, optionally searching to get full docs with 
   site:python.org LooseVersion StrictVersion, only to be
   surprised when something like
   https://docs.python.org/2/library/{module}
   isn't anywhere in the top umpteen hits.

"largely undocumented" is an understatement :-)
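
(For the record, the usage that the top-ranked answer points at is just:)

    >>> from distutils.version import LooseVersion, StrictVersion
    >>> LooseVersion('1.10') > LooseVersion('1.9')
    True
    >>> StrictVersion('1.10') > StrictVersion('1.9')
    True
    >>> '1.10' > '1.9'    # plain string comparison gets this wrong
    False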

-tkc


-- 
https://mail.python.org/mailman/listinfo/python-list


Download multiple xls files using Python

2015-02-12 Thread polleysarah7
I was wondering if somebody here could help me out creating a script? I have 
never done something like this before, so I have no idea what I'm doing. But I 
have been reading about it for a couple of days now and I'm still not 
understanding it, so I appreciate all the help I can get. I'm even willing to 
pay for your service!

Here is an example of my problem. For the moment I have a CSV file named 
"Stars" saved on my Windows desktop containing around 50,000 different links 
that each directly start downloading an xls file when clicked. Each row contains 
one of these links. With your help, I would like to create some kind of script 
that loops through each row and visits these different links so it can download 
these 50,000 different files.

Thank you all for taking time to read this

/ Sarah
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Download multiple xls files using Python

2015-02-12 Thread Chris Angelico
On Fri, Feb 13, 2015 at 11:07 AM,  polleysar...@gmail.com wrote:
 Here is an example of my problem. I have for the moment a CSV file named 
 Stars saved on my windows desktop containing around 50.000 different links 
 that directly starts downloading a xls file when pressed. Each row contains 
 one of these links. I would want with your help create some kind of script 
 for this that will make some kind of loop thru each row and visit this 
 different links so it can download these 50.000 different files.


Let's break that down into pieces.

1) You have a CSV file, from which you'd like to extract specific data (URLs).
2) You have a series of URLs, which you would like to download.

Python can do both of these jobs! The first is best handled by the csv
module, and the second by the requests module (which you'd have to get
off PyPI).

https://docs.python.org/3/library/csv.html
http://docs.python-requests.org/en/latest/

In short: Loop over the lines in the file, as returned by the CSV
reader, and for each line, fire off a request to fetch its XLS file,
and save it to disk. Given that you're working with 50K distinct URLs,
you'll want to have some kind of check to see if you already have the
file, so you can restart the downloader script.
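
A minimal sketch of that loop (the file name and the column holding the URL are 
assumptions about your data):

    import csv
    import os
    import requests

    with open('Stars.csv', newline='') as f:     # hypothetical file name
        for row in csv.reader(f):
            url = row[0]                         # assumes the URL is in the first column
            filename = url.rsplit('/', 1)[-1] or 'unnamed.xls'
            if os.path.exists(filename):         # already downloaded: skip it,
                continue                         # so the script can be restarted
            response = requests.get(url)
            response.raise_for_status()
            with open(filename, 'wb') as out:
                out.write(response.content)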

Have fun!

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue17986] Alternative async subprocesses (pep 3145)

2015-02-12 Thread Martin Panter

Changes by Martin Panter vadmium...@gmail.com:


--
nosy: +vadmium

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue17986
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Parsing and comparing version strings (was: Odd version scheme)

2015-02-12 Thread Ben Finney
Gisle Vanem gva...@yahoo.no writes:

 That's exactly what they do now in IPython/utils/version.py with
 the comment:
   Utilities for version comparison
   It is a bit ridiculous that we need these.

 Not sure why this is ridiculous.

Right, versions are effectively a special type [0], specifically
*because* they intentionally don't compare as scalar numbers or strings.
It's not “ridiculous” to need custom comparisons when that's the case.

Python even comes with support for version parsing and comparison in the
standard library [1]. So if anything's “ridiculous”, it's the act of
re-implementing that and getting it wrong.

(Or maybe that such an important part of the standard library is
largely undocumented.)


[0] URL:https://wiki.python.org/moin/Distutils/VersionComparison
[1] Unfortunately undocumented, like much of the Distutils API. Try

URL:http://epydoc.sourceforge.net/stdlib/distutils.version.StrictVersion-class.html.

-- 
 \  “Programs must be written for people to read, and only |
  `\incidentally for machines to execute.” —Abelson  Sussman, |
_o__)  _Structure and Interpretation of Computer Programs_ |
Ben Finney

-- 
https://mail.python.org/mailman/listinfo/python-list


[issue23447] Relative imports with __all__ attribute

2015-02-12 Thread Steven Barker

Steven Barker added the comment:

This issue is a special case of the problem discussed in issue 992389, that 
modules within packages are not added to the package dictionary until they are 
fully loaded, which breaks circular imports in the form from package import 
module.

The consensus on that issue is that it doesn't need to be fixed completely, 
given the partial fix from issue 17636. I think the current issue is a corner 
case that was not covered by the fix, but which probably should be fixed as 
well, for consistency. The fix so far has made imports that name the module 
work, even though the module objects are still not placed into the package's 
namespace yet (this is why Antonio's last example works in the newly released 
3.5a1, though not in previous versions). Wildcard imports however still fail.

Can the fix be expanded to cover wildcard imports as well? I know, we're 
heaping up two kinds of usually-bad code (wildcard imports and circular 
imports) on top of one another, but still, the failure is very unexpected to 
anyone who's not read the bug reports.

I don't know my way around the import system at all yet, so I'm not going to be 
capable of writing up a patch immediately, but if there's any interest at all, 
and nobody more knowledgeable gets to it first I might see what I can learn and 
try to put together something.

Here's a more concise example of the issue:


package/__init__.py:

__all__ = ["module"]


package/module.py:

from . import module  # this works in 3.5a1, thanks to the fix from issue 17636
from . import *       # this still fails

--
nosy: +Steven.Barker

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23447
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Download multiple xls files using Python

2015-02-12 Thread Tim Chase
On 2015-02-13 11:19, Chris Angelico wrote:
 On Fri, Feb 13, 2015 at 11:07 AM,  polleysar...@gmail.com wrote:
  Here is an example of my problem. I have for the moment a CSV
  file named Stars saved on my windows desktop containing around
  50.000 different links that directly starts downloading a xls
  file when pressed. Each row contains one of these links. I would
  want with your help create some kind of script for this that will
  make some kind of loop thru each row and visit this different
  links so it can download these 50.000 different files.
 
 In short: Loop over the lines in the file, as returned by the CSV
 reader, and for each line, fire off a request to fetch its XLS file,

Alternatively, unless you're wed to Python, downloading tools like
wget are your friend and make quick work of this.  Just extract the
column of URLs into a file (which can be done in Excel or with any
number of utilities such as cut) and then use wget to pull them
all down:

  cut -d, -f3 < data.csv > urls.txt
  wget -i urls.txt

(the above assumes that column #3 contains the URLs)

-tkc


-- 
https://mail.python.org/mailman/listinfo/python-list


[issue5053] http.client.HTTPMessage.getallmatchingheaders() always returns []

2015-02-12 Thread Berker Peksag

Changes by Berker Peksag berker.pek...@gmail.com:


--
nosy: +berker.peksag

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5053
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue12849] Cannot override 'connection: close' in urllib2 headers

2015-02-12 Thread Demian Brecht

Changes by Demian Brecht demianbre...@gmail.com:


--
nosy:  -demian.brecht

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue12849
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue19433] Define PY_UINT64_T on Windows 32bit

2015-02-12 Thread Mark Lawrence

Mark Lawrence added the comment:

Is there any more work needed on this or can it be closed?  Please note the 
reference to #17884 in msg201654.

--
nosy: +BreamoreBoy

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19433
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue9698] When reusing an handler, urllib(2)'s digest authentication fails after multiple regative replies

2015-02-12 Thread Demian Brecht

Changes by Demian Brecht demianbre...@gmail.com:


--
nosy:  -demian.brecht

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9698
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue17986] Alternative async subprocesses (pep 3145)

2015-02-12 Thread STINNER Victor

STINNER Victor added the comment:

Subprocess support of asyncio has nice features and is efficient:

- async read from stdout and stderr
- async write into stdin
- async wait for the process exit
- async communicate()
- timeout on any async operation
- support running multiple child processes in parallel
- epoll, kqueue, devpoll or IOCP selector
- On POSIX, fast child watcher waiting for SIGCHLD signal

If possible, I would prefer to not write a second async subprocess.

It's possible to write a blocking API on top of asyncio using 
loop.run_until_complete().
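
A rough sketch of that blocking-wrapper idea (the helper name is mine; error 
handling and Windows event loop selection are left out):

    import asyncio

    def communicate_blocking(args, timeout=None):
        # Blocking wrapper over asyncio's subprocess support
        # (this is not the attached sync_proc_asyncio.py).
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        try:
            proc = loop.run_until_complete(asyncio.create_subprocess_exec(
                *args,
                stdout=asyncio.subprocess.PIPE,
                stderr=asyncio.subprocess.PIPE))
            stdout, stderr = loop.run_until_complete(
                asyncio.wait_for(proc.communicate(), timeout))
            return proc.returncode, stdout, stderr
        finally:
            loop.close()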

sync_proc_asyncio.py: incomplete proof-of-concept (no working select() method).

--
Added file: http://bugs.python.org/file38123/sync_proc_asyncio.py

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue17986
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: building c extensions with setuptools that are not tied to python installed on build machine

2015-02-12 Thread Matthew Taylor
Ned, thank you for your insight on this problem. I will take your
advice and do some more digging. You've been very helpful.

Regards,

-
Matt Taylor
OS Community Flag-Bearer
Numenta


On Wed, Feb 11, 2015 at 4:23 PM, Ned Deily n...@acm.org wrote:
 In article
 cajv6ndphginodqq1fkh1-ubcyzq2chag7qjxsbq0_pht5z8...@mail.gmail.com,
  Matthew Taylor m...@numenta.org wrote:
 Does this make sense to anyone? I'm still a little new to Python in
 general (especially binary packaging), and it seems like this would be
 a common problem for any projects with C extensions that need broad
 binary distribution. Does anyone have any suggestions? Please take a
 look at our setup.py file [4] and tell me if we're doing something
 egregiously wrong.

 Just taking a quick look at your setup.py there shows a quite
 complicated build, including SWIG.  One suggestion: keep in mind that
 it's normal on OS X for the absolute path of shared libraries and
 frameworks to be embedded in the linked binaries, including the C (or
 C++) extension module bundles (.so files) built for Python packages.  If
 any of the .so files or any other binary artifacts (executables, shared
 libraries) created by your package are linked to the Python
 interpreter's shared library, that may be why you are getting a mixture
 of Python instances.  One way to check for this is to use:

 otool -L *.so *.dylib

 on all of the directories containing linked binaries in your project and
 check for paths like:
 /System/Library/Frameworks/Python.framework

 That would be a link to the Apple-supplied system Python.

 A link to /Library/Frameworks/Python.framework or some other path would
 be to a third-party Python like from python.org or Homebrew.

 Due to differences in how the various Pythons are built and depending on
 what the C or C++ code is doing, it may not be possible to have one
 binary wheel that works with different Python instances of the same
 version.  For many simple projects, it does work.

 You *could* also ask on the PythonMac SIG list.

 https://mail.python.org/mailman/listinfo/pythonmac-sig

 --
  Ned Deily,
  n...@acm.org

 --
 https://mail.python.org/mailman/listinfo/python-list
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue22559] [backport] ssl.MemoryBIO

2015-02-12 Thread STINNER Victor

STINNER Victor added the comment:

 Since this is such a new feature (not even released in 3.x), I don't think we 
 should put it in 2.7.9.

While ssl.MemoryBIO would be very useful on Windows for Trollius (to support 
SSL with the IOCP event loop), I also consider it a new feature. It's a 
little bit borderline between a feature and a security fix.

Maybe it should be discussed on python-dev?

Note: MemoryBIO was added by issue #21965 (first commit: a79003f25a41).
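
For context, the API decouples TLS processing from the socket, which is exactly 
what an IOCP-based loop needs; a minimal sketch of its shape in 3.5 (example.com 
is a placeholder):

    import ssl

    ctx = ssl.create_default_context()
    incoming = ssl.MemoryBIO()   # ciphertext received from the transport goes in here
    outgoing = ssl.MemoryBIO()   # ciphertext to send on the transport comes out here
    sslobj = ctx.wrap_bio(incoming, outgoing, server_hostname='example.com')

    # The event loop feeds incoming.write(data) with whatever it read, calls
    # sslobj.do_handshake()/read()/write() and retries on ssl.SSLWantReadError,
    # and ships outgoing.read() back out on the wire.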

--
nosy: +haypo

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22559
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23450] Possible loss of data warnings building 3.5 Visual Studio Windows 8.1 64 bit

2015-02-12 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

Here is a patch which fixes many warnings reported by MS compiler on 64-bit 
platform [1]. Some of these warnings indicated real bugs.

[1] 
http://buildbot.python.org/all/builders/AMD64%20Windows8%203.x/builds/411/steps/compile/logs/warnings%20(396)

--
components: +Extension Modules, Interpreter Core -Build, Windows
keywords: +patch
stage:  - patch review
Added file: http://bugs.python.org/file38106/int_overflows.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23450
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Python discussed in Nature

2015-02-12 Thread Mark Lawrence

On 12/02/2015 08:46, Steven D'Aprano wrote:

Nature, one of the world's premier science journals, has published an
excellent article about programming in Python:

http://www.nature.com/news/programming-pick-up-python-1.16833



Interesting.  I'll leave someone more diplomatic than myself to reply to 
the comment at the end of the article.


--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence

--
https://mail.python.org/mailman/listinfo/python-list


Re: Python discussed in Nature

2015-02-12 Thread Steven D'Aprano
John Ladasky wrote:

 And I use Unicode in my Python.  In implementing some mathematical models
 which have variables like delta, gamma, and theta, I decided that I didn't
 like the line lengths I was getting with such variable names.  I'm using
 δ, γ, and θ instead.  It works fine, at least on my Ubuntu Linux system
 (and what scientist doesn't use Linux?).  I also have special mathematical
 symbols, superscripted numbers, etc. in my program comments.  It's easier
 to read 2x³ + 3x² than 2*x**3 + 3*x**2.


Oooh! What is your font of choice for this?



-- 
Steven

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Odd version scheme

2015-02-12 Thread Steven D'Aprano
Skip Montanaro wrote:

 I believe this sort of lexicographical comparison wart is one of the
 reasons the Python-dev gang decided that there would be no micro versions
  9. There are too many similar assumptions about version numbers out in
 the real world.

Which is why there will be no Windows 9. Too many bad programmers that test
for Windows 95 or 98 by checking the version string and doing a prefix test
for Windows 9.



-- 
Steven

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python discussed in Nature

2015-02-12 Thread Rustom Mody
On Thursday, February 12, 2015 at 11:59:55 PM UTC+5:30, John Ladasky wrote:
 On Thursday, February 12, 2015 at 3:08:10 AM UTC-8, Fabien wrote:
 
  ... what a coincidence then that a huge majority of scientists 
  (including me) dont care AT ALL about unicode. But since scientists are 
  not paid to rewrite old code, the scientific world is still stuck to 
  python 2.
 
 I'm a scientist.  I'm a happy Python 3 user who migrated from Python 2 about 
 two years ago.
 
 And I use Unicode in my Python.  In implementing some mathematical models 
 which have variables like delta, gamma, and theta, I decided that I didn't 
 like the line lengths I was getting with such variable names.  I'm using δ, 
 γ, and θ instead.  It works fine, at least on my Ubuntu Linux system (and 
 what scientist doesn't use Linux?).  I also have special mathematical 
 symbols, superscripted numbers, etc. in my program comments.  It's easier to 
 read 2x³ + 3x² than 2*x**3 + 3*x**2.
 
 I am teaching someone Python who is having a few problems with Unicode on his 
 Windows 7 machine.  It would appear that Windows shipped with a 
 less-than-complete Unicode font for its command shell.  But that's not 
 Python's fault.

Haskell is a bit ahead of python in this respect:

Prelude> let (x₁ , x₂) = (1,2)
Prelude> (x₁ , x₂)
(1,2)

>>> (x₁ , x₂) = (1,2)
  File "<stdin>", line 1
    (x₁ , x₂) = (1,2)
      ^
SyntaxError: invalid character in identifier

But python is ahead in another (arguably more) important aspect:
Haskell gets confused by ligatures in identifiers; python gets them right

>>> ﬂag = 1
>>> flag
1

Prelude> let ﬂag = 1
Prelude> flag

<interactive>:4:1: Not in scope: `flag'

Hopefully python will widen its identifier-chars also



-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python discussed in Nature

2015-02-12 Thread Steven D'Aprano
Marko Rauhamaa wrote:

 Chris Angelico ros...@gmail.com:
 
 On Fri, Feb 13, 2015 at 1:39 AM, Marko Rauhamaa ma...@pacujo.net wrote:
 I write both Py2 and Py3 code, but I keep the two worlds hermetically
 separated from each other.

 [...]

 You don't need to be afraid of the gap.
 
 No problem. When I write Py3, I write Py3. When I write Py2, I write
 Py2. When I write bash, I write bash. When I write C, I write C.

Do you get confused by the difference between talking to Americans and
talking to Britons? The differences between American and British English are
a better analogy for the differences between Python 2 and 3 than the
differences between C and bash.

Especially if you target 2.7 and 3.3+, it is almost trivially easy to write
multi-dialect Python 2 and 3 code in the same code base. The trickiest part
is not a language change at all, but remembering that the names of some
standard library modules have changed.
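
For example, the renamed-module issue is usually handled with a guarded import 
like this (one common pattern, not a complete recipe):

    try:
        from urllib.parse import urlparse      # Python 3
    except ImportError:
        from urlparse import urlparse          # Python 2

    try:
        import configparser                    # Python 3
    except ImportError:
        import ConfigParser as configparser    # Python 2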

Nobody here is suggesting that there are no differences between Python 2 and
3, but suggesting that those differences are of the same order of magnitude
as those between bash and C is ridiculous. The common subset of the Python
language is far greater than the differences between the versions.



-- 
Steven

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Bad text appearance in IDLE

2015-02-12 Thread Frank Millman

ast nom...@invalid.com wrote in message 
news:54dc9bee$0$3046$426a3...@news.free.fr...
 Hello

 Here is how text appears in IDLE window
 http://www.cjoint.com/data/0BmnEIcxVAx.htm

 Yesterday evening I had not this trouble. It appears
 this morning. I restarted my computer with no effect.

 A windows Vista update has been done this morning, with about 10 fixes. I 
 suspect something gone wrong with this update

 Has somebody an explanation about that ?

 thx


I use Windows Server 2003. It also ran an automatic update yesterday. 
Something seems to have gone wrong with the system font. I don't use IDLE, 
but I use OutlookExpress and Textpad, and they both show a similar effect.

I can live with it for now. I am hoping (with fingers crossed) that enough 
people will complain to Microsoft that they will issue a fix shortly.

Frank Millman



-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python discussed in Nature

2015-02-12 Thread Terry Reedy

On 2/12/2015 11:07 PM, Rustom Mody wrote:

On Thursday, February 12, 2015 at 11:59:55 PM UTC+5:30, John Ladasky wrote:

On Thursday, February 12, 2015 at 3:08:10 AM UTC-8, Fabien wrote:


... what a coincidence then that a huge majority of scientists
(including me) dont care AT ALL about unicode. But since scientists are
not paid to rewrite old code, the scientific world is still stuck to
python 2.


I'm a scientist.  I'm a happy Python 3 user who migrated from Python 2 about 
two years ago.

And I use Unicode in my Python.  In implementing some mathematical models which 
have variables like delta, gamma, and theta, I decided that I didn't like the 
line lengths I was getting with such variable names.  I'm using δ, γ, and θ 
instead.  It works fine, at least on my Ubuntu Linux system (and what scientist 
doesn't use Linux?).  I also have special mathematical symbols, superscripted 
numbers, etc. in my program comments.  It's easier to read 2x³ + 3x² than 
2*x**3 + 3*x**2.

I am teaching someone Python who is having a few problems with Unicode on his 
Windows 7 machine.  It would appear that Windows shipped with a 
less-than-complete Unicode font for its command shell.  But that's not Python's 
fault.


Haskell is a bit ahead of python in this respect:

Prelude> let (x₁ , x₂) = (1,2)
Prelude> (x₁ , x₂)
(1,2)

>>> (x₁ , x₂) = (1,2)
  File "<stdin>", line 1
    (x₁ , x₂) = (1,2)
      ^
SyntaxError: invalid character in identifier

But python is ahead in another (arguably more) important aspect:
Haskell gets confused by ligatures in identifiers; python gets them right


>>> ﬂag = 1
>>> flag
1

Prelude> let ﬂag = 1
Prelude> flag

<interactive>:4:1: Not in scope: `flag'

Hopefully python will widen its identifier-chars also


Python (supposedly) follows the Unicode definition based on character 
classes, as documented.  If the Unicode definition in fact allows 
subscripts, then Python should also.  If you want Python to broaden its 
definition beyond unicode, you will have to advocate and persuade.  It 
will not 'just happen'.
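
For anyone who wants to check those rules interactively (see PEP 3131 for the 
definition Python uses):

    >>> import unicodedata
    >>> unicodedata.category('₁')    # subscripts are category 'No', not XID_Continue
    'No'
    >>> unicodedata.normalize('NFKC', 'ﬂag')    # identifiers are NFKC-normalized
    'flag'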


--
Terry Jan Reedy


--
https://mail.python.org/mailman/listinfo/python-list


[issue23457] make test failures

2015-02-12 Thread Dwight

New submission from Dwight:

Hi,
  Looking for assistance in figuring out what caused the following
test failures and how to fix the problems.
  Built and run on an IBM pSeries system running AIX 7.1.
  Appreciate any help I can get.
  I am not a software developer.
  I am compiling this because I need it to build
things I need to build Firefox.
  Would really appreciate some help!
  (Mozilla.org and IBM are not interested in providing
   a working browser for my system.)
Bye,
Dwight

*** Failed tests that were run in verbose mode
test_locale failed
test_io failed
test_ssl failed
test_faulthandler
test_httpservers
test_socket failed
test_fileio failed
test_distutils failed
test_asyncio failed
test_mmap failed
test_resource failed
test_posix failed

--
components: Tests
files: PYTHONFailedTest.log
messages: 235871
nosy: dcrs6...@gmail.com
priority: normal
severity: normal
status: open
title: make test failures
versions: Python 3.4
Added file: http://bugs.python.org/file38124/PYTHONFailedTest.log

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23457
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue17911] traceback: add a new thin class storing a traceback without storing local variables

2015-02-12 Thread Robert Collins

Robert Collins added the comment:

@Mahmoud thanks! I had a quick look and the structural approach we've taken is 
a bit different. What do you think of the current patch here 
(issue17911-4.patch) ?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue17911
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue22524] PEP 471 implementation: os.scandir() directory scanning function

2015-02-12 Thread Ben Hoyt

Ben Hoyt added the comment:

Hi Victor, I thank you for your efforts here, especially your addition of 
DirEntry.inode() and your work on the tests.

However, I'm a bit frustrated that you just re-implemented the whole thing 
without discussion: I've been behind scandir and written the first couple of 
implementations, and I asked if you could review my code, but instead of 
reviewing it or interacting with my fairly clear desire for the all-C version, 
you simply turn up and say "I've rewritten it, can you please review?"

You did express concern off list about an all-C vs part-Python implementation, 
but you said "it's still unclear to me which implementation should be added to 
Python and I don't feel able today to take the right decision. We may use 
python-dev to discuss that." But I didn't see that discussion at all, and I had 
expressed fairly clearly (both here and off list), with benchmarks, why I 
thought it should be the all-C version.

So it was a bit of a let down for a first-time Python contributor. Even a 
simple question would have been helpful here: "Ben, I really think this much C 
code for a first version is bug-prone; what do you think about me 
re-implementing it and comparing?"

TLDR: I'd like to see os.scandir() in Python even if it means the slower 
implementation, but some discussion would have been nice. :-)

So if it's not too late, I'll continue that discussion in my next message.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22524
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue22524] PEP 471 implementation: os.scandir() directory scanning function

2015-02-12 Thread Ben Hoyt

Ben Hoyt added the comment:

To continue the actual which implementation discussion: as I mentioned last 
week in http://bugs.python.org/msg235458, I think the benchmarks above show 
pretty clearly we should use the all-C version.

For background: PEP 471 doesn't add any new functionality, and especially with 
the new pathlib module, it doesn't make directory iteration syntax nicer 
either: os.scandir() is all about letting the OS give you whatever info it can 
*for performance*. Most of the Rationale for adding scandir given in PEP 471 is 
because it can be so so much faster than listdir + stat.

My original all-C implementation is definitely more code to review (roughly 800 
lines of C vs scandir-6.patch's 400), but it's also more than twice as fast. On 
my Windows 7 SSD just now, running benchmark.py:

Original scandir-2.patch version:
os.walk took 0.509s, scandir.walk took 0.020s -- 25.4x as fast

New scandir-6.patch version:
os.walk took 0.455s, scandir.walk took 0.046s -- 10.0x as fast

So the all-C implementation is literally 2.5x as fast on Windows. (After both 
tests, just for a sanity check, I ran the ctypes version as well, and it said 
about 8x as fast for both runs.)
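
(For anyone wanting to reproduce the comparison, it has roughly this shape; a 
simplified sketch, not the actual benchmark.py:)

    import os, time

    try:
        import scandir                  # the standalone module from PyPI
        walks = [('os.walk', os.walk), ('scandir.walk', scandir.walk)]
    except ImportError:
        walks = [('os.walk', os.walk)]

    for name, walk in walks:
        start = time.time()
        total = sum(len(files) for root, dirs, files in walk('.'))
        print('%-12s %6d files in %.3fs' % (name, total, time.time() - start))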

Then on Linux, not a perfect comparison (different benchmarks) but shows the 
same kind of trend:

Original scandir-2.patch benchmark (http://bugs.python.org/msg228857):
os.walk took 0.860s, scandir.walk took 0.268s -- 3.2x as fast

New scandir-6.patch benchmark (http://bugs.python.org/msg235865) -- note 
that "1.3x faster" should actually read "1.3x as fast" here:
bench: 1.3x faster (scandir: 164.9 ms, listdir: 216.3 ms)

So again, the all-C implementation is 2.5x as fast on Linux too.

And on Linux, the incremental improvement provided by scandir-6 over listdir is 
hardly worth it -- I'd use a new directory listing API for 3.2x as fast, but 
not for 1.3x as fast.

Admittedly a 10x speed gain (!) on Windows is still very much worth going for, 
so I'm positive about scandir even with a half-Python implementation, but 
hopefully the above shows fairly clearly why the all-C implementation is 
important, especially on Linux.

Also, if the consensus is in favour of the slower version with less C code, I think there are 
further tweaks we can make to the Python part of the code to improve things a 
bit more.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22524
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com


