[issue33961] Inconsistency in exceptions for dataclasses.dataclass documentation

2019-02-21 Thread Inada Naoki


Change by Inada Naoki :


--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed




[issue36073] sqlite crashes with converters mutating cursor

2019-02-21 Thread SilentGhost


Change by SilentGhost :


--
nosy: +ghaering
versions: +Python 3.7, Python 3.8




[issue32657] Mutable Objects in SMTP send_message Signature

2019-02-21 Thread Inada Naoki


Change by Inada Naoki :


--
versions: +Python 3.8 -Python 3.6, Python 3.7




[issue32657] Mutable Objects in SMTP send_message Signature

2019-02-21 Thread Inada Naoki


Change by Inada Naoki :


--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed




[issue36073] sqlite crashes with converters mutating cursor

2019-02-21 Thread Sergey Fedoseev


Change by Sergey Fedoseev :


--
keywords: +patch
pull_requests: +12008
stage:  -> patch review




[issue36073] sqlite crashes with converters mutating cursor

2019-02-21 Thread Sergey Fedoseev


New submission from Sergey Fedoseev :

It's somewhat similar to bpo-10811, but for converter functions:

In [197]: import sqlite3 as sqlite
 ...: con = sqlite.connect(':memory:', detect_types=sqlite.PARSE_COLNAMES)
 ...: cur = con.cursor()
 ...: sqlite.converters['CURSOR_INIT'] = lambda x: cur.__init__(con)
 ...: 
 ...: cur.execute('create table test(x foo)')
 ...: cur.execute('insert into test(x) values (?)', ('foo',))
 ...: cur.execute('select x as "x [CURSOR_INIT]", x from test')
 ...: 
[1]25718 segmentation fault  python manage.py shell

Similar to bpo-10811, proposed patch raises ProgrammingError instead of 
crashing.
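
For contrast, a minimal sketch of the intended use of the converter hook, where
the converter only maps the column bytes to a value and never touches the
cursor or connection (type and column names here are illustrative):

import sqlite3

sqlite3.register_converter('UPPER', lambda b: b.decode('utf-8').upper())

con = sqlite3.connect(':memory:', detect_types=sqlite3.PARSE_COLNAMES)
cur = con.cursor()
cur.execute('create table test(x foo)')
cur.execute('insert into test(x) values (?)', ('foo',))
cur.execute('select x as "x [UPPER]" from test')
print(cur.fetchone())  # ('FOO',)
con.close()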

--
components: Extension Modules
messages: 336283
nosy: sir-sigurd
priority: normal
severity: normal
status: open
title: sqlite crashes with converters mutating cursor
type: crash




[issue20582] socket.getnameinfo() does not document flags

2019-02-21 Thread Serhiy Storchaka


Serhiy Storchaka  added the comment:

You can use the special role :manpage: for referring man pages.

--
nosy: +serhiy.storchaka




[issue36069] asyncio: create_connection cannot handle IPv6 link-local addresses anymore (linux)

2019-02-21 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

Duplicate of issue35545, I believe.

--
nosy: +twisteroid ambassador




[issue36072] str.translate() behaves differently for ASCII-only and other strings

2019-02-21 Thread Serhiy Storchaka


Serhiy Storchaka  added the comment:

You are using a mapping that returns different values for the same key. You 
should not expect a stable result for it.

I do not think this needs a special mentioning in the documentation. Garbage in 
-- garbage out.
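
For what it's worth, a deterministic mapping gives a stable result on both code
paths; a minimal illustration:

table = str.maketrans({'a': '1', 'ы': '2'})
print('aa'.translate(table))  # '11'
print('ыы'.translate(table))  # '22'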

--
nosy: +serhiy.storchaka




[issue36072] str.translate() behaves differently for ASCII-only and other strings

2019-02-21 Thread Sergey Fedoseev


Change by Sergey Fedoseev :


--
title: str.translate() behave differently for ASCII-only and other strings -> 
str.translate() behaves differently for ASCII-only and other strings




[issue36072] str.translate() behave differently for ASCII-only and other strings

2019-02-21 Thread Sergey Fedoseev

New submission from Sergey Fedoseev :

In [186]: from itertools import cycle

In [187]: class ContainerLike:
 ...: def __init__(self):
 ...: self.chars = cycle('12')
 ...: def __getitem__(self, key):
 ...: return next(self.chars)
 ...: 

In [188]: 'aa'.translate(ContainerLike())
Out[188]: '11'

In [189]: 'ыы'.translate(ContainerLike())
Out[189]: '121212'

It seems that behavior was changed in 
https://github.com/python/cpython/commit/89a76abf20889551ec1ed64dee1a4161a435db5b.
 At least it should be documented.

--
messages: 336279
nosy: sir-sigurd
priority: normal
severity: normal
status: open
title: str.translate() behave differently for ASCII-only and other strings




[issue36070] Enclosing scope not visible from within list comprehension

2019-02-21 Thread Serhiy Storchaka


Serhiy Storchaka  added the comment:

This is a duplicate of issue3692.

--
nosy: +serhiy.storchaka
resolution:  -> duplicate
stage:  -> resolved
status: open -> closed
superseder:  -> improper scope in list comprehension, when used in class 
declaration




[issue29871] Enable optimized locks on Windows

2019-02-21 Thread Paulie Pena


Change by Paulie Pena :


--
nosy: +paulie4




Re: sys.modules

2019-02-21 Thread Terry Reedy

On 2/21/2019 11:40 AM, ast wrote:

Hello

Is it normal to have 151 entries in dictionary sys.modules
just after starting IDLE or something goes wrong ?


That is the right number.  When Python starts, it imports around 50 
modules.  When it runs IDLE, most of idlelib modules are imported, plus 
about 50 stdlib modules, including 4 or 5 tkinter modules.  IDLE used to 
import many more modules, but I managed to delay some until needed, and 
reduced the startup time by about 25%.



 >>> import sys
 >>> len(sys.modules)
151


Since this is your code, the report is for the user-code execution 
process, as opposed to the IDLE GUI process.  Both processes are running 
when you get the prompt.


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


[issue36071] Add support for Windows ARM32 in ctypes/libffi

2019-02-21 Thread Paul Monson


Change by Paul Monson :


--
components: Windows, ctypes
nosy: Paul Monson, paul.moore, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: Add support for Windows ARM32 in ctypes/libffi
versions: Python 3.8




[issue36070] Enclosing scope not visible from within list comprehension

2019-02-21 Thread Eric V. Smith


Eric V. Smith  added the comment:

I suspect Nathan is seeing this problem at class scope. This is a well known 
issue:

>>> class C:
...   from random import random
...   out = [random() for ind in range(3)]
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in C
  File "<stdin>", line 3, in <listcomp>
NameError: name 'random' is not defined
>>>

It is not related to the list comprehension, but to the class scope. See the 
last paragraph of 
https://docs.python.org/3/reference/executionmodel.html#resolution-of-names

But I agree with Zach about needing an example that fails.
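
For reference, a minimal sketch of the usual workaround, assuming the goal is
just to populate a class attribute: build the list outside the class body (or
in a helper function), where normal scoping rules apply.

from random import random

def _make_out(n=3):
    return [random() for _ in range(n)]

class C:
    out = _make_out()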

--
nosy: +eric.smith




[issue35662] Windows #define _PY_EMULATED_WIN_CV 0 bug

2019-02-21 Thread Steve Dower


Steve Dower  added the comment:

The regular test suite ought to be enough - see devguide.python.org for the 
info. It was definitely failing in multiprocessing last time I tried this.

You could also just push changes and start a PR, as that will run the tests 
automatically.

--




[issue28235] In xml.etree.ElementTree docs there is no parser argument in fromstring()

2019-02-21 Thread Cheryl Sabella


Cheryl Sabella  added the comment:

Thank you @py.user for reporting this issue and for the original patch and 
thank you @Manjusaka for the PR!

This was my first merge! Woot! :-)

--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed




[issue28235] In xml.etree.ElementTree docs there is no parser argument in fromstring()

2019-02-21 Thread miss-islington


miss-islington  added the comment:


New changeset b046f1badaaffb7e526b937fa2192c449b9076ed by Miss Islington (bot) 
in branch '3.7':
bpo-28235: Fix xml.etree.ElementTree.fromstring docs (GH-11903)
https://github.com/python/cpython/commit/b046f1badaaffb7e526b937fa2192c449b9076ed


--
nosy: +miss-islington




[issue28235] In xml.etree.ElementTree docs there is no parser argument in fromstring()

2019-02-21 Thread miss-islington


Change by miss-islington :


--
pull_requests: +12007




[issue28235] In xml.etree.ElementTree docs there is no parser argument in fromstring()

2019-02-21 Thread Cheryl Sabella


Cheryl Sabella  added the comment:


New changeset e5458bdb6af81f9b98acecd8819c60016d3f1441 by Cheryl Sabella 
(Manjusaka) in branch 'master':
bpo-28235: Fix xml.etree.ElementTree.fromstring docs (GH-11903)
https://github.com/python/cpython/commit/e5458bdb6af81f9b98acecd8819c60016d3f1441


--




[issue36067] subprocess terminate() "invalid handle" error when process is gone

2019-02-21 Thread Giampaolo Rodola'


Giampaolo Rodola'  added the comment:

Interesting. Because both errors/conditions are mapped to ERROR_INVALID_HANDLE 
we need the creation time. I can work on a patch for that. Potentially I can 
also include OSX, Linux and BSD* implementations for methods involving 
os.kill(pid). That would be a broader task, though. That also raises the 
question of whether there are methods other than kill()/terminate()/send_signal() 
that we want to make "safe" from the reused PID scenario. 
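
A rough sketch of that uniqueness check as psutil does it (third-party, shown
only to illustrate the idea; a stdlib patch would use the platform APIs
directly):

import psutil

def is_same_process(pid, create_time):
    try:
        return psutil.Process(pid).create_time() == create_time
    except psutil.NoSuchProcess:
        return False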

> Also, unrelated but something I noticed. Using _active list in Windows 
> shouldn't be necessary. Unlike Unix, a process in Windows doesn't have to be 
> waited on by its parent to avoid a zombie. Keeping the handle open will 
> actually create a zombie until the next _cleanup() call, which may be never 
> if Popen() isn't called again.

Good catch. Looks like it deserves a ticket.

--




[issue36046] support dropping privileges when running subprocesses

2019-02-21 Thread Gregory P. Smith


Change by Gregory P. Smith :


--
assignee:  -> gregory.p.smith
nosy: +gregory.p.smith




[issue36070] Enclosing scope not visible from within list comprehension

2019-02-21 Thread Zachary Ware


Zachary Ware  added the comment:

Can you attach a test file that shows the failure?

--
nosy: +zach.ware




[issue35651] PEP 257 (active) references PEP 258 (rejected) as if it were active

2019-02-21 Thread Mihai Capotă

Change by Mihai Capotă :


--
nosy: +mihaic




[issue36070] Enclosing scope not visible from within list comprehension

2019-02-21 Thread Nathan Woods


New submission from Nathan Woods :

The following code works in an interactive shell or in a batch file, but not 
when executed as part of a unittest suite or pdb:

from random import random
out = [random() for ind in range(3)]

It can be made to work using pdb interact, but this doesn't help with unittest.

Tested in Python 3.7.2

--
messages: 336270
nosy: woodscn
priority: normal
severity: normal
status: open
title: Enclosing scope not visible from within list comprehension
type: behavior
versions: Python 3.7




[issue35904] Add statistics.fmean(seq)

2019-02-21 Thread Raymond Hettinger


Change by Raymond Hettinger :


--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed




[issue35904] Add statistics.fmean(seq)

2019-02-21 Thread Raymond Hettinger


Raymond Hettinger  added the comment:


New changeset 47d9987247bcc45983a6d51fd1ae46d5d356d0f8 by Raymond Hettinger in 
branch 'master':
bpo-35904: Add statistics.fmean() (GH-11892)
https://github.com/python/cpython/commit/47d9987247bcc45983a6d51fd1ae46d5d356d0f8


--




[issue22865] Document how to make pty.spawn not copy data

2019-02-21 Thread Martin Panter


Martin Panter  added the comment:

I'm not sure it is wise for the Python documentation to suggest inserting null 
bytes in general. This seems more like an application-specific hack. There is 
nothing in Python that handles these null bytes specially, and I expect they 
will be seen if the child reads the terminal in raw mode, or if the parent's 
output is redirected to a file, sent over the network or a real serial link.

--




[issue22239] asyncio: nested event loop

2019-02-21 Thread Jesús Cea Avión

Change by Jesús Cea Avión :


--
nosy: +jcea




[issue34785] pty.spawn -- auto-termination after child process is dead (a zombie)

2019-02-21 Thread Martin Panter


Martin Panter  added the comment:

Suggest closing this assuming it is a duplicate, unless Jarry can give more 
information.

--
resolution:  -> duplicate
status: open -> pending
superseder:  -> pty.spawn hangs on FreeBSD 9.3, 10.x




[issue35662] Windows #define _PY_EMULATED_WIN_CV 0 bug

2019-02-21 Thread Jeff Robbins


Jeff Robbins  added the comment:

Steve, sorry to be dense, but I'm unfortunately ignorant as to what tests I 
ought to be running.  The only test I have right now is much too complicated, 
and I'd rather be running some official regression test that reveals the 
problem without my app code, if possible.

--




[issue33944] Deprecate and remove pth files

2019-02-21 Thread Steve Dower


Change by Steve Dower :


--
nosy: +steve.dower




[issue35867] NameError is not caught at Task execution

2019-02-21 Thread Pablo Galindo Salgado


Pablo Galindo Salgado  added the comment:

I do not think this is a bug. Any exception that is raised inside a task will 
be in the .exception() method when the task is finished. Here you are running 
the task without waiting for finalization. For example, if you change:

async def cofunc1(self):
    await self.cofunc2()
    await self.task  # <-
    print("\nwaitin' : where-t-f is the NameError hiding!?")
    await asyncio.sleep(6)
    print("Wait is over, let's exit\n")

you will find the NameError immediately. If you do not want to await the task 
you can wait until self.task.done() is True and then check 
self.task.exception() for retrieving the exception (if any).

What happens with BaseException is that is a very low-level exception that is 
handled differently compared with regular exceptions that derive from 
Exception. The reason is that control flow exceptions and things like 
KeyboardInterrupt need to be handled differently. This happens explicitly here:

https://github.com/python/cpython/blob/master/Modules/_asynciomodule.c#L2675
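
A minimal sketch of the non-awaiting pattern described above (coroutine names
are illustrative):

import asyncio

async def broken():
    raise NameError('where is it?')

async def main():
    task = asyncio.ensure_future(broken())
    while not task.done():
        await asyncio.sleep(0.1)
    print(repr(task.exception()))  # NameError('where is it?')

asyncio.run(main())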

--
nosy: +pablogsal
resolution:  -> not a bug
stage:  -> resolved
status: open -> closed




[issue29871] Enable optimized locks on Windows

2019-02-21 Thread Steve Dower


Steve Dower  added the comment:

> I assume you meant #35662

Yes indeed. I am apparently massively dyslexic when it comes to copying issue 
numbers into the bpo comment field :)

Meanwhile, over on #35662, Jeff has a fix for at least one of the regressions.

--
versions: +Python 3.8 -Python 3.7




[issue35662] Windows #define _PY_EMULATED_WIN_CV 0 bug

2019-02-21 Thread Steve Dower


Steve Dower  added the comment:

> reveals an expectation that Py_END_ALLOW_THREADS won't change the results of 
> GetLastError()

Fantastic work, Jeff! That's almost certainly the major problem there - 
Py_END_ALLOW_THREADS can totally change the error code, and we haven't ever 
done a full sweep to check it.

Feel free to send a PR against issue29871 with your changes. If the tests are 
happy, then I am too.

--




[issue36067] subprocess terminate() "invalid handle" error when process is gone

2019-02-21 Thread Eryk Sun


Eryk Sun  added the comment:

The subprocess Handle object is never finalized explicitly, so the Process 
handle should always be valid as long as we have a reference to the Popen 
instance. We can call TerminateProcess as many times as we want, and the handle 
will still be valid. If it's already terminated, NtTerminateProcess fails with 
STATUS_PROCESS_IS_TERMINATING, which maps to Windows ERROR_ACCESS_DENIED. 

If some other code mistakenly called CloseHandle on our handle, that is a 
serious bug that should never be silenced; it should always raise an exception 
if we can detect it. 

If the handle value hasn't been reused, NtTerminateProcess fails with 
STATUS_INVALID_HANDLE. If it now references a non-Process object, it fails with 
STATUS_OBJECT_TYPE_MISMATCH. Both of these map to Windows ERROR_INVALID_HANDLE. 
If the handle value was reused by a Process object (either via 
CreateProcess[AsUser] or OpenProcess) that lacks the PROCESS_TERMINATE (1) 
right (cannot be our original handle, since ours had all access), then it fails 
with STATUS_ACCESS_DENIED, which maps to Windows ERROR_ACCESS_DENIED. Otherwise 
if it has the PROCESS_TERMINATE right, then currently we'll end up terminating 
an unrelated process. As mentioned by Giampaolo, we could improve our chances 
of catching this bug by first verifying the PID via GetProcessId and the 
creation time from GetProcessTimes. We'd also have to store the creation time 
in _execute_child. Both functions would have to be added to _winapi.
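
A rough ctypes sketch of the PID re-check (Windows only, purely illustrative;
the real fix would expose the call through _winapi as noted above):

import ctypes

kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)

def handle_still_refers_to(handle, expected_pid):
    # GetProcessId returns 0 on failure (e.g. an invalid handle)
    return kernel32.GetProcessId(ctypes.c_void_p(handle)) == expected_pid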

> The solution I propose just ignores ERROR_INVALID_HANDLE and 
> makes it an alias for "process is already gone". 

If we get ERROR_INVALID_HANDLE, we should not try to call GetExitCodeProcess. 
All we know is that it either wasn't a valid handle or was reused to reference 
a non-Process object. Maybe by the time we call GetExitCodeProcess it has since 
been reused again to reference a Process. That would silence the error and 
propagate a bug by setting an unrelated exit status. Otherwise, 
GetExitCodeProcess will just fail again with ERROR_INVALID_HANDLE. There's no 
point to this, and it's potentially making the problem worse. 

---

Also, unrelated but something I noticed. Using _active list in Windows 
shouldn't be necessary. Unlike Unix, a process in Windows doesn't have to be 
waited on by its parent to avoid a zombie. Keeping the handle open will 
actually create a zombie until the next _cleanup() call, which may be never if 
Popen() isn't called again.

--
nosy: +eryksun




[issue35899] '_is_sunder' function in 'enum' module fails on empty string

2019-02-21 Thread Ethan Furman


Ethan Furman  added the comment:

The changes to `_is_sunder` and `_is_dunder` look good, but there is a problem 
with the underlying assumptions of what Enum should be doing:

- nameless members are not to be allowed
- non-alphanumeric characters are not supported

In other words, while `_is_sunder` should not fail, neither should an empty 
string be allowed as a member name.  This can be checked at line 154 (just add 
'' to the set) -- then double check that the error raised is a ValueError and 
not an IndexError.
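
A minimal sketch of a length-guarded check, just to illustrate the shape of the
fix (not the actual patch):

def _is_sunder(name):
    return (len(name) > 2 and
            name[0] == name[-1] == '_' and
            name[1] != '_' and
            name[-2] != '_')

assert not _is_sunder('')          # no IndexError for the empty string
assert _is_sunder('_order_')
assert not _is_sunder('__dunder__')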

For the strange character portion, use some non-latin numbers and letters to 
make sure they work, but don't check for symbols such as exclamation points -- 
while they might work, we are not supporting such things, and having a test 
that checks to make sure they work suggests that we do support it.

--




[issue36068] Make _tuplegetter objects serializable

2019-02-21 Thread Joe Jevnik


Joe Jevnik  added the comment:

Thank you for reviewing this so quickly!

--




[issue36068] Make _tuplegetter objects serializable

2019-02-21 Thread Raymond Hettinger


Change by Raymond Hettinger :


--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed




[issue36068] Make _tuplegetter objects serializable

2019-02-21 Thread Raymond Hettinger


Raymond Hettinger  added the comment:

Thanks for noticing this.  We really should have stuck with the original plan 
of subclassing property().

--
assignee:  -> rhettinger




[issue36068] Make _tuplegetter objects serializable

2019-02-21 Thread Raymond Hettinger


Raymond Hettinger  added the comment:


New changeset f36f89257b30e0bf88e8aaff6da14a9a96f57b9e by Raymond Hettinger 
(Joe Jevnik) in branch 'master':
bpo-36068: Make _tuplegetter objects serializable (GH-11981)
https://github.com/python/cpython/commit/f36f89257b30e0bf88e8aaff6da14a9a96f57b9e


--




[issue22865] Document how to make pty.spawn not copy data

2019-02-21 Thread Cheryl Sabella


Cheryl Sabella  added the comment:

Thanks @RadicalZephyr!  

@martin.panter, please review the PR when you get a chance.  Thank you!

--




RE: python3.7.2 won't compile with SSL support (solved)

2019-02-21 Thread Felix Lazaro Carbonell


Incredibly:

./configure --with-ssl=/usr/include/openssl/

That did the trick!!
Although --with-ssl is not documented in ./configure --help.

Cheers,
Felix.

-- 
https://mail.python.org/mailman/listinfo/python-list


[issue35662] Windows #define _PY_EMULATED_WIN_CV 0 bug

2019-02-21 Thread Jeff Robbins


Jeff Robbins  added the comment:

Steve, I did some more digging into why the native condition variable approach 
might be causing problems on Windows.  Testing my fix revealed that there is at 
least one place in Modules\overlapped.c that either 

a) waits too long to call GetLastError(), or 

b) reveals an expectation that Py_END_ALLOW_THREADS won't change the results of 
GetLastError().



Py_BEGIN_ALLOW_THREADS
ret = GetQueuedCompletionStatus(CompletionPort, &NumberOfBytes,
                                &CompletionKey, &Overlapped, Milliseconds);
save_err = GetLastError();
Py_END_ALLOW_THREADS

err = ret ? ERROR_SUCCESS : GetLastError();





The problem in this code is that it allows *other* Windows API calls between 
the original Windows API call (in this case GetQueuedCompletionStatus()) and 
the call to GetLastError().  If those other Windows API calls change the 
thread-specific GetLastError state,
the info we need is lost.


To test for this possibility, I added a diagnostic test right after the code 
above



if (!ret && (err != save_err)) {
    printf("GetQueuedCompletionStatus returned 0 but we lost the error=%d "
           "lost=%d Overlapped=%d\n", save_err, err, (long)Overlapped);
}




 and ran a test that eventually produced this on the console:



GetQueuedCompletionStatus returned 0 but we lost the error=258 lost=0 
Overlapped=0




error 258 is WAIT_TIMEOUT.  The next lines of code are looking for that "error" 
in order to decide if GetQueuedCompletionStatus failed, or merely timed out.



if (Overlapped == NULL) {
    if (err == WAIT_TIMEOUT)
        Py_RETURN_NONE;
    else
        return SetFromWindowsErr(err);
}



So the impact of this problem is severe.   Instead of returning None to the 
caller (in this case _poll in asyncio\windows_events.py), it will raise an 
error!



while True:
    status = _overlapped.GetQueuedCompletionStatus(self._iocp, ms)
    if status is None:
        break




And, to make things extra confusing, the error raised via 
SetFromWindowsErr(err) (where err == 0) ends up looking like this:



OSError: [WinError 0] The operation completed successfully




A valid WAIT_TIMEOUT thus gets converted to a Python error, but also loses the 
original Windows Error Code of 258, so you are left scratching your head about 
how a WinError 0 (ERROR_SUCCESS) could have crashed your call to, say, 
asyncio.run()? (See traceback below.)


So either we need to make sure that all calls to GetLastError() are made 
immediately after the relevant Windows API call, without any intervening other 
Windows API calls, and thereby prevent case a) above, or as in case b), the GIL 
code (using either emulated or native condition variables from condvar.h) needs 
to preserve the Error state. 

Some code in Python\thread_nt.h in fact does this already, e.g.



void *
PyThread_get_key_value(int key)
{
    /* because TLS is used in the Py_END_ALLOW_THREAD macro,
     * it is necessary to preserve the windows error state, because
     * it is assumed to be preserved across the call to the macro.
     * Ideally, the macro should be fixed, but it is simpler to
     * do it here.
     */
    DWORD error = GetLastError();
    void *result = TlsGetValue(key);
    SetLastError(error);
    return result;
}



Of course there might be *other* problems associated with using native 
condition variables on Windows, but this is the only one 
I've experienced after some fairly heavy use of Python 3.7.2 asyncio on Windows.


traceback:

asyncio.run(self.main())
  File "C:\Users\jeffr\Documents\projects\Python-3.7.2\lib\asyncio\runners.py", 
line 43, in run
return loop.run_until_complete(main)
  File 
"C:\Users\jeffr\Documents\projects\Python-3.7.2\lib\asyncio\base_events.py", 
line 571, in run_until_complete
self.run_forever()
  File 
"C:\Users\jeffr\Documents\projects\Python-3.7.2\lib\asyncio\base_events.py", 
line 539, in run_forever
self._run_once()
  File 
"C:\Users\jeffr\Documents\projects\Python-3.7.2\lib\asyncio\base_events.py", 
line 1739, in _run_once
event_list = self._selector.select(timeout)
  File 
"C:\Users\jeffr\Documents\projects\Python-3.7.2\lib\asyncio\windows_events.py", 
line 405, in select
self._poll(timeout)
  File 
"C:\Users\jeffr\Documents\projects\Python-3.7.2\lib\asyncio\windows_events.py", 
line 703, in _poll
status = _overlapped.GetQueuedCompletionStatus(self._iocp, ms)
OSError: [WinError 0] The operation completed successfully

--




Re: sys.modules

2019-02-21 Thread DL Neil

Hello,


On 22/02/19 5:40 AM, ast wrote:

Is it normal to have 151 entries in dictionary sys.modules
just after starting IDLE or something goes wrong ?
 >>> import sys
 >>> len(sys.modules)
151


I don't use Idle. Written in python, doesn't it require various packages 
to run before it even talks to you, eg tkinter? Thus am not sure if they 
are also being counted. However:-


After firing-up Python 3.7 from the cmdLN, my system reported only 60.

Remember also, that the sys Run-time service is described as 
"System-specific parameters and functions", which will account for 
(some) differences by itself.




Most of common modules seems to be already there,
os, itertools, random 
I thought that sys.modules was containing loaded modules
with import command only.


Not quite true! Whereas the manual says
<<<
sys.modules This is a dictionary that maps module names to modules which 
have already been loaded.

>>>

also remember that import is not the only way modules are "loaded"! 
Built-in types (etc) are/must be loaded as part of python, otherwise 
nothing would work, eg float, int, list, contextlib, collections, 
functools, ... This is the modular/boot-strap method that is a feature 
of python.


Web-refs:
https://docs.python.org/3/tutorial/modules.html#standard-modules
https://stackoverflow.com/questions/7643809/what-are-default-modules-in-python-which-are-imported-when-we-run-python-as-for

--
Regards =dn
--
https://mail.python.org/mailman/listinfo/python-list


[issue36069] asyncio: create_connection cannot handle IPv6 link-local addresses anymore (linux)

2019-02-21 Thread Leonardo Mörlein

Leonardo Mörlein  added the comment:

It seems to be a regression, as my python 3.6 version is not affected:

lemoer@orange ~> python3.6 --version
Python 3.6.8

My python 3.7 version is affected:

lemoer@orange ~> python3.7 --version
Python 3.7.2

--




[issue36069] asyncio: create_connection cannot handle IPv6 link-local addresses anymore (linux)

2019-02-21 Thread Leonardo Mörlein

Leonardo Mörlein  added the comment:

The generated error is:

OSError: [Errno 22] Invalid argument

--




[issue36069] asyncio: create_connection cannot handle IPv6 link-local addresses anymore (linux)

2019-02-21 Thread Leonardo Mörlein

New submission from Leonardo Mörlein :

The tuple (host, port) is ("fe80::5054:01ff:fe04:3402%node4_client", 22) in 
https://github.com/python/cpython/blob/master/Lib/asyncio/base_events.py#L918. 
The substring "node4_client" identifies the interface, which is needed for link 
local connections.

The function self._ensure_resolved() is called and resolves to
infos[0][4] = ("fe80::5054:01ff:fe04:3402", 22, something, 93), where 93 is the 
resolved scope id (see sin6_scope_id from struct sockaddr_in6 from man ipv6).

Afterwards the self.sock_connect() is called with address = infos[0][4].   In 
self.sock_connect() the function self._ensure_resolved() is called again. In 
https://github.com/python/cpython/blob/master/Lib/asyncio/base_events.py#L1282 
the scope id is stripped from the tuple. The tuple (host, port) is now only 
("fe80::5054:01ff:fe04:3402", 22) and therefore the scope id is lost.

I wrote this quick fix, which is not really suitable as a real solution for the 
problem:

lemoer@orange ~> diff /usr/lib/python3.7/asyncio/base_events.py{.bak,}
--- /usr/lib/python3.7/asyncio/base_events.py.bak   2019-02-21 
18:42:17.060122277 +0100
+++ /usr/lib/python3.7/asyncio/base_events.py   2019-02-21 18:49:36.886866750 
+0100
@@ -942,8 +942,8 @@
 sock = None
 continue
 if self._debug:
-logger.debug("connect %r to %r", sock, address)
-await self.sock_connect(sock, address)
+logger.debug("connect %r to %r", sock, (host, port))
+await self.sock_connect(sock, (host, port))
 except OSError as exc:
 if sock is not None:
 sock.close()
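
For illustration, the scope id is exactly what getaddrinfo() produces for the
%interface suffix, and it has to survive into the 4-tuple handed to connect().
A minimal sketch (the address and interface name come from the report above,
so the interface must exist locally for the zone to resolve):

import socket

infos = socket.getaddrinfo('fe80::5054:01ff:fe04:3402%node4_client', 22,
                           socket.AF_INET6, socket.SOCK_STREAM)
host, port, flowinfo, scope_id = infos[0][4]
print(scope_id)  # non-zero interface index; dropping it makes connect() fail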

--
components: asyncio
messages: 336253
nosy: Leonardo Mörlein, asvetlov, yselivanov
priority: normal
severity: normal
status: open
title: asyncio: create_connection cannot handle IPv6 link-local addresses 
anymore (linux)
versions: Python 3.7




Re: sys.modules

2019-02-21 Thread Chris Angelico
On Fri, Feb 22, 2019 at 6:03 AM Chris Warrick  wrote:
>
> On Thu, 21 Feb 2019 at 18:57, ast  wrote:
> >
> > Hello
> >
> > Is it normal to have 151 entries in dictionary sys.modules
> > just after starting IDLE or something goes wrong ?
> >
> >  >>> import sys
> >  >>> len(sys.modules)
> > 151
> >
> > Most of common modules seems to be already there,
> > os, itertools, random 
> >
> > I thought that sys.modules was containing loaded modules
> > with import command only.
>
> sys.modules contains all modules that have been imported in the
> current session. Some of those imports happen in the background,
> without your knowledge — for example, because these modules are
> required by the interpreter itself, or are part of IDLE. The number
> you see depends on the environment (I got 530 in ipython3, 34 in
> python3, 45 in python2) and is not in any way important.
>

The OP is technically correct in that they have been loaded with the
"import" statement (or equivalent). What may not be obvious is that
many modules will import other modules, which also get cached.
Generally this is pretty insignificant, unless you're trying to
benchmark startup performance or something; the module cache just
magically speeds everything up for you.
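
A quick way to watch the cascade in a fresh interpreter (the exact module set
varies by Python version):

import sys
before = set(sys.modules)
import random                      # one explicit import...
print(sorted(set(sys.modules) - before))
# ...typically pulls in several more, e.g. bisect, hashlib, itertools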

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python3.7.2 won't compile with SSL support

2019-02-21 Thread Kushal Kumaran
"Felix Lazaro Carbonell"  writes:

> Hello:
>
>  
>
> I'm trying to install python3.7.2 from source in debian9.8  but it doesn't
> compile with SSL.
>
>  
>
> I already installed openssl
>
>  
>
> And ./configure -with-openssl=/usr/include/openssl/ yields:
>
>  
>
> checking for openssl/ssl.h in /usr/include/openssl/... no
>
>  
>
> and ssl.h is certainly in /usr/include/openssl/
>

Looks like it is appending openssl/ssl.h to /usr/include/openssl/ and
failing to find the header.  It's been a long time since I attempted to
build python by hand; perhaps you need to not specify the --with-openssl
argument at all, if the headers are installed in the expected places.

Have you seen https://github.com/pyenv/pyenv?

-- 
regards,
kushal
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Feature suggestion: "Using declarations" i.e. context managers ("with" blocks) tied to scope/lifetime of the variable rather than to nesting

2019-02-21 Thread Thomas Jollans
On 21/02/2019 19:35, mnl.p...@gmail.com wrote:
> (I sent this a few days ago but got bounced without a reason—don’t see it
> posted, so I’m trying one more time.)

No, it got through. And it's in the archive:

https://mail.python.org/pipermail/python-list/2019-February/739548.html

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: sys.modules

2019-02-21 Thread Chris Warrick
On Thu, 21 Feb 2019 at 18:57, ast  wrote:
>
> Hello
>
> Is it normal to have 151 entries in dictionary sys.modules
> just after starting IDLE or something goes wrong ?
>
>  >>> import sys
>  >>> len(sys.modules)
> 151
>
> Most of common modules seems to be already there,
> os, itertools, random 
>
> I thought that sys.modules was containing loaded modules
> with import command only.

sys.modules contains all modules that have been imported in the
current session. Some of those imports happen in the background,
without your knowledge — for example, because these modules are
required by the interpreter itself, or are part of IDLE. The number
you see depends on the environment (I got 530 in ipython3, 34 in
python3, 45 in python2) and is not in any way important.

-- 
Chris Warrick 
PGP: 5EAAEA16
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Feature suggestion: "Using declarations" i.e. context managers ("with" blocks) tied to scope/lifetime of the variable rather than to nesting

2019-02-21 Thread Rhodri James

On 21/02/2019 18:35, mnl.p...@gmail.com wrote:

(I sent this a few days ago but got bounced without a reason—don’t see it
posted, so I’m trying one more time.)


It was posted, and commented on.  You can see the thread in the mailing 
list archives, if you don't believe me: 
https://mail.python.org/pipermail/python-list/2019-February/739548.html



I thought this new C# feature would be a good thing to add to Python:
https://vcsjones.com/2019/01/30/csharp-8-using-declarations/

The nesting required by context managers can be at odds with a program’s
real structure.


Really?  I thought in your example, particularly as revised here, the 
context managers showed up the real structure of the code nicely.


--
Rhodri James *-* Kynesim Ltd
--
https://mail.python.org/mailman/listinfo/python-list


[issue36068] Make _tuplegetter objects serializable

2019-02-21 Thread Karthikeyan Singaravelan


Change by Karthikeyan Singaravelan :


--
nosy: +rhettinger




[issue35840] Control flow inconsistency on closed asyncio stream

2019-02-21 Thread Marc Schlaich


Marc Schlaich  added the comment:

No, I'm seeing the same issue on MacOS. Attached modified example.

--
Added file: https://bugs.python.org/file48160/tcp_test.py




[issue36068] Make _tuplegetter objects serializable

2019-02-21 Thread Joe Jevnik


Change by Joe Jevnik :


--
keywords: +patch
pull_requests: +12006
stage:  -> patch review




[issue35810] Object Initialization does not incref Heap-allocated Types

2019-02-21 Thread Neil Schemenauer


Neil Schemenauer  added the comment:

Sorry, morning coffee didn't kick in yet I guess. ;-)  My actual wish is to 
make all types heap allocated and eliminate the statically allocated ones.  So, 
Py_TPFLAGS_HEAPTYPE would be set on all types in that world.  That is a 
gigantic task, affecting near every Python extension type.  Too huge for even a 
nutty person like me to imagine doing in the near term.  So, sorry for 
potentially derailing discussion here.

I agree with comments made by Stefan Behnel and Petr Viktorin.  There is a 
small risk to cause problems (i.e. serious memory leaks in a previously working 
program).  However, as Petr says, the extension in that case is broken and it 
is not hard to fix.  Eddie has provided examples for what changes are needed.

I think if we properly communicate the change then it is okay to merge the PR.

--




[issue36068] Make _tuplegetter objects serializable

2019-02-21 Thread Joe Jevnik


New submission from Joe Jevnik :

The new _tuplegetter objects for accessing fields of a namedtuple are no longer 
serializable with pickle. Cloudpickle, a library which provides extensions to 
pickle to facilitate distributed computing in Python, depended on being able to 
pickle the members of namedtuple classes. While property isn't serializable, 
cloudpickle has support for properties allowing us to serialize the old 
property(itemgetter) members.

The attached PR adds a __reduce__ method to _tuplegetter objects which will 
allow serialization without special support. Another option would be to expose 
`index` as a read-only attribute, allowing cloudpickle or other libraries to 
provide the pickle implementation as a third-party library.
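
A rough pure-Python stand-in for the idea (not the C patch itself): a
descriptor that knows its index can reduce to its type plus constructor
arguments.

import pickle

class TupleGetter:
    def __init__(self, index, doc=None):
        self.index = index
        self.__doc__ = doc
    def __get__(self, obj, objtype=None):
        return self if obj is None else obj[self.index]
    def __reduce__(self):
        return (type(self), (self.index, self.__doc__))

tg = TupleGetter(1, 'Alias for field number 1')
assert pickle.loads(pickle.dumps(tg)).index == 1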

--
components: Library (Lib)
messages: 336251
nosy: ll
priority: normal
severity: normal
status: open
title: Make _tuplegetter objects serializable
type: enhancement
versions: Python 3.8




Re: Multiprocessing performance question

2019-02-21 Thread DL Neil

George: apologies for mis-identifying yourself as OP.

Israel:

On 22/02/19 6:04 AM, Israel Brewster wrote:
Actually not a ’toy example’ at all. It is simply the first step in 
gridding some data I am working with - a problem that is solved by tools 
like SatPy, but unfortunately I can’t use SatPy because it doesn’t 
recognize my file format, and you can’t load data directly. Writing a 
custom file importer for SatPy is probably my next step.


Not to focus on the word "toy", the governing issue is of setup cost cf 
the acceleration afforded by the parallel processing. In this case, the 
former is/was more-or-less as high as the latter, and your efforts were 
insufficiently rewarded.


That said, if the computer was concurrently performing this task and a 
number of others, the number of cores available to you would decrease. 
At which point, speeds start heading backwards!


This is largely speculation because only you know the task, objectives, 
and circumstances - however, for those 'playing along at home' and 
learning from your experiment...



That said, the entire process took around 60 seconds to run. As this 
step was taking 10, I figured it would be low-hanging fruit for speeding 
up the process. Obviously I was wrong. For what it’s worth, I did manage 
to re-factor the code, so instead of generating the entire grid 
up-front, I generate the boxes as needed to calculate the overlap with 
the data grid. This brought the processing time down to around 40 
seconds, so a definite improvement there.
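
A rough sketch of that on-demand generation (shapely assumed installed; grid
sizes are the ones from the earlier example):

from itertools import product
from shapely import geometry

def boxes(nx=1000, ny=800):
    for x, y in product(range(1, nx + 1), range(1, ny + 1)):
        yield geometry.box(x - 1, y, x, y - 1)

# each box can now be intersected with the data grid as it is produced,
# instead of materialising the whole 800x1000 list up front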


Doing it on-demand. Now you're talking! Plus, if you're able to 'fit' 
the data into each box as it is created, that will help justify the 
setup/tear-down overhead cost for each async process.


Well done!




---
Israel Brewster
Software Engineer
Alaska Volcano Observatory
Geophysical Institute - UAF
2156 Koyukuk Drive
Fairbanks AK 99775-7320
Work: 907-474-5172
cell:  907-328-9145

On Feb 20, 2019, at 4:30 PM, DL Neil > wrote:


George

On 21/02/19 1:15 PM, george trojan wrote:

def create_box(x_y):
    return geometry.box(x_y[0] - 1, x_y[1], x_y[0], x_y[1] - 1)

x_range = range(1, 1001)
y_range = range(1, 801)
x_y_range = list(itertools.product(x_range, y_range))
grid = list(map(create_box, x_y_range))
Which creates and populates an 800x1000 “grid” (represented as a flat list
at this point) of “boxes”, where a box is a shapely.geometry.box(). This
takes about 10 seconds to run.
Looking at this, I am thinking it would lend itself well to
parallelization. Since the box at each “coordinate" is independent of all
others, it seems I should be able to simply split the list up into chunks
and process each chunk in parallel on a separate core. To that end, I
created a multiprocessing pool:



I recall a similar discussion when folk were being encouraged to move 
away from monolithic and straight-line processing to modular functions 
- it is more (CPU-time) efficient to run in a straight line; than it 
is to repeatedly call, set-up, execute, and return-from a function or 
sub-routine! ie there is an over-head to many/all constructs!


Isn't the 'problem' that it is a 'toy example'? That the amount of 
computing within each parallel process is small in relation to the 
inherent 'overhead'.


Thus, if the code performed a reasonable analytical task within each 
box after it had been defined (increased CPU load), would you then 
notice the expected difference between the single- and multi-process 
implementations?




From AKL to AK
--
Regards =dn
--
https://mail.python.org/mailman/listinfo/python-list




--
Regards =dn
--
https://mail.python.org/mailman/listinfo/python-list


[issue36019] test_urllib fail in s390x buildbots: http://www.example.com/

2019-02-21 Thread Pablo Galindo Salgado


Pablo Galindo Salgado  added the comment:

Related failure:

https://buildbot.python.org/all/#/builders/141/builds/1233


--
Ran 56 tests in 25.105s
OK (skipped=1)
Re-running test 'test_normalization' in verbose mode
test_bug_834676 (test.test_normalization.NormalizationTest) ... ok
test test_normalization failed
test_main (test.test_normalization.NormalizationTest) ...   fetching 
http://www.pythontest.net/unicode/11.0.0/NormalizationTest.txt ...
FAIL
==
FAIL: test_main (test.test_normalization.NormalizationTest)
--
Traceback (most recent call last):
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/urllib/request.py", 
line 1316, in do_open
h.request(req.get_method(), req.selector, req.data, headers,
socket.gaierror: [Errno -3] Temporary failure in name resolution
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File 
"/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/test/test_normalization.py",
 line 41, in test_main
testdata = open_urlresource(TESTDATAURL, encoding="utf-8",
urllib.error.URLError: 
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File 
"/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/test/test_normalization.py",
 line 47, in test_main
self.fail(f"Could not retrieve {TESTDATAURL}")
AssertionError: Could not retrieve 
http://www.pythontest.net/unicode/11.0.0/NormalizationTest.txt
--
Ran 2 tests in 20.044s
FAILED (failures=1)
Re-running test 'test_urllib2net' in verbose mode
test_close (test.test_urllib2net.CloseSocketTest) ... skipped "Resource 
'http://www.example.com/' is not available"
test_custom_headers (test.test_urllib2net.OtherNetworkTests) ... skipped 
"Resource 'http://www.example.com' is not available"
test_file (test.test_urllib2net.OtherNetworkTests) ... ok
test_ftp (test.test_urllib2net.OtherNetworkTests) ... ok
test_redirect_url_withfrag (test.test_urllib2net.OtherNetworkTests) ... skipped 
"Resource 'http://www.pythontest.net/redir/with_frag/' is not available"
test_sites_no_connection_close (test.test_urllib2net.OtherNetworkTests) ... 
skipped 'XXX: http://www.imdb.com is gone'
test_urlwithfrag (test.test_urllib2net.OtherNetworkTests) ... skipped "Resource 
'http://www.pythontest.net/index.html#frag' is not available"
test_ftp_basic (test.test_urllib2net.TimeoutTest) ... ok
test_ftp_default_timeout (test.test_urllib2net.TimeoutTest) ... ok
test_ftp_no_timeout (test.test_urllib2net.TimeoutTest) ... ok
test_ftp_timeout (test.test_urllib2net.TimeoutTest) ... ok
test_http_basic (test.test_urllib2net.TimeoutTest) ... ok
test_http_default_timeout (test.test_urllib2net.TimeoutTest) ... ok
test_http_no_timeout (test.test_urllib2net.TimeoutTest) ... ok
/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/test/support/__init__.py:1608:
 ResourceWarning: unclosed 
  gc.collect()
ResourceWarning: Enable tracemalloc to get the object allocation traceback
/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/test/support/__init__.py:1608:
 ResourceWarning: unclosed 
  gc.collect()
ResourceWarning: Enable tracemalloc to get the object allocation traceback
test_http_timeout (test.test_urllib2net.TimeoutTest) ... ok
--
Ran 15 tests in 686.335s
OK (skipped=5)
1 test failed again:
test_normalization

Also, it seems that there are some socket leaks.

--




Feature suggestion: "Using declarations" i.e. context managers ("with" blocks) tied to scope/lifetime of the variable rather than to nesting

2019-02-21 Thread mnl.p...@gmail.com
(I sent this a few days ago but got bounced without a reason—don’t see it
posted, so I’m trying one more time.)


I thought this new C# feature would be a good thing to add to Python:
https://vcsjones.com/2019/01/30/csharp-8-using-declarations/

The nesting required by context managers can be at odds with a program’s
real structure.  The code indentation in python should reflect the program
logic and flow as much as possible, but for one thing, the context manager
generally doesn't apply to most of the code it forces to be indented, but
only to a few lines using the resource; for another, even for those
lines, the context manager is usually not reflective of the flow
anyway (unlike the classic control statements, such as for/next, if/else,
and while).

for example:
with xx.open() as logfile:
    do this
    do that
    logfile.write()
    with  as dbs_conn:
        do this
        do that
        logfile.write()
Things like dbs_conn are functioning as just a variable (albeit one wants
assurance its object will be deterministically destroyed)—the indentation
reflects the span of lines the variable is being used (held open), but
shouldn’t change the structure for that reason.

(This is uglier as additional withs get nested.)
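
For what it's worth, contextlib.ExitStack already gives a flat spelling of this
today; a rough sketch (dbs.connect() is a made-up stand-in for the elided call
above):

from contextlib import ExitStack

def process(xx, dbs):
    with ExitStack() as stack:
        logfile = stack.enter_context(xx.open())
        logfile.write('start\n')
        dbs_conn = stack.enter_context(dbs.connect())
        logfile.write('connected\n')
        # both resources are released here, in reverse order, at block end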
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue36060] Document how collections.ChainMap() determines iteration order

2019-02-21 Thread Raymond Hettinger


Raymond Hettinger  added the comment:


New changeset 86f093f71a594dcaf21b67ba13dda72863e9bde9 by Raymond Hettinger in 
branch 'master':
bpo-36060: Document how collections.ChainMap() determines iteration order 
(GH-11969)
https://github.com/python/cpython/commit/86f093f71a594dcaf21b67ba13dda72863e9bde9


--




[issue36060] Document how collections.ChainMap() determines iteration order

2019-02-21 Thread miss-islington


Change by miss-islington :


--
pull_requests: +12002




[issue35812] Don't log an exception from the main coroutine in asyncio.run()

2019-02-21 Thread Andrew Svetlov


Andrew Svetlov  added the comment:

Nevermind.

Actually, I used a backport of `asyncio.run()` to Python 3.6.
I saw the problem because of the difference between `asyncio.all_tasks()` and 
`asyncio.Task.all_tasks()`.

The former returns only active tasks but the latter returns done tasks also.

--
resolution:  -> not a bug




[issue35812] Don't log an exception from the main coroutine in asyncio.run()

2019-02-21 Thread Andrew Svetlov


Change by Andrew Svetlov :


--
stage:  -> resolved
status: open -> closed




[issue35810] Object Initialization does not incref Heap-allocated Types

2019-02-21 Thread Neil Schemenauer


Neil Schemenauer  added the comment:

Hello Eddie,
Thank you for putting what looks to be significant effort into this PR.  It 
would be great if we can get this fixed.  There is a real issue about breaking 
3rd party extensions.  So, we want to proceed with care.

I wonder, if we are going to break extensions already, could we just remove the 
whole concept of heap allocated types?  If you look through the CPython source 
code, I think you will find a lot of tricky code that deals with the 
Py_TPFLAGS_HEAPTYPE case.  If we could remove heap types, we could remove all 
those cases.  That would give some performance improvement but more importantly 
would simplify the implementation.

If PyType_FromSpec() is now working correctly, could we just move everything 
that is currently a heap type to use that?  Obviously we have to give 3rd 
party extensions a lot of time to get themselves updated.  Maybe give a 
deprecation warning if Py_TPFLAGS_HEAPTYPE is used.  You could have a 
configuration option for Python that enables or disables the 
Py_TPFLAGS_HEAPTYPE support.  Once we think extensions have been given enough 
time to update themselves, we can remove Py_TPFLAGS_HEAPTYPE.

Some other possible advantages of getting rid of heap types:

- GC objects will always have the GC header allocated (because CPython controls 
the allocation of the chunk of memory for the type)

- might be possible to eliminate GC headers and use bitmaps.  I have been 
experimenting with the idea but it seems to require that we don't use heap 
types.  Initially I was interested in the bitmap idea because of memory 
savings.  After more tinkering, I think the big win will be in eliminating 
linked-list traversals.  On modern CPUs, that's slow and iterating over a 
bitmap should be much faster.

- I suspect heap types are difficult to support for PyPy.  I haven't looked 
into that but it seems tricky when you have non-refcounting GC

- type_is_gc() is ugly and would go away.  Based on my profiling, 
PyObject_IS_GC() is pretty expensive.  A lot of types have the tp_is_gc slot 
set (more than you would expect).

- In the very long term, using PyType_FromSpec() could give us the freedom to 
change the structure layout of types.  I don't have any specific ideas about 
that but it seems like a better design.

--
nosy: +nascheme




sys.modules

2019-02-21 Thread ast

Hello

Is it normal to have 151 entries in dictionary sys.modules
just after starting IDLE or something goes wrong ?

>>> import sys
>>> len(sys.modules)
151

Most of the common modules seem to be there already:
os, itertools, random ...

I thought that sys.modules contained only the modules
loaded explicitly with an import statement.
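
For what it's worth, here is a quick sketch to inspect what is already loaded
(the exact set and count depend on the Python version and on IDLE itself):

import sys

preloaded = sorted(sys.modules)   # names of everything imported so far
print(len(preloaded))
print(preloaded[:10])             # a small sample of the entries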

--
https://mail.python.org/mailman/listinfo/python-list


[issue22865] Document how to make pty.spawn not copy data

2019-02-21 Thread Geoff Shannon


Geoff Shannon  added the comment:

It is submitted @cheryl.sabella. Thanks for reviving this, I had totally lost 
track of it.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue22865] Document how to make pty.spawn not copy data

2019-02-21 Thread Roundup Robot


Change by Roundup Robot :


--
pull_requests: +12004

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36060] Document how collections.ChainMap() determines iteration order

2019-02-21 Thread Raymond Hettinger


Raymond Hettinger  added the comment:


New changeset 7121a6eeb7941f36fb9e7eae28ec24ecfa533e81 by Raymond Hettinger 
(Miss Islington (bot)) in branch '3.7':
bpo-36060: Document how collections.ChainMap() determines iteration order 
(GH-11969) (GH-11978)
https://github.com/python/cpython/commit/7121a6eeb7941f36fb9e7eae28ec24ecfa533e81
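
A quick illustration of the behavior the updated docs describe (my own example,
not from the patch): iteration scans the maps last to first, while lookups scan
first to last.

from collections import ChainMap

cm = ChainMap({'a': 1, 'b': 2}, {'b': 3, 'c': 4})

print(list(cm))   # ['b', 'c', 'a'] -- keys gathered by scanning maps last to first
print(cm['b'])    # 2 -- lookups scan first to last, so the first map wins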


--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36067] subprocess terminate() "invalid handle" error when process is gone

2019-02-21 Thread Giampaolo Rodola'


Giampaolo Rodola'  added the comment:

On POSIX there is that risk, yes. As for Windows, the termination is based on 
the process handle instead of the PID, but I am not sure if that makes a 
difference. The risk of reusing the PID/handle is not related to this issue 
though. The solution I propose just ignores ERROR_INVALID_HANDLE and makes it 
an alias for "process is already gone". It does not involve preventing the 
termination of other process handles/PIDs (FWIW in order to do that in psutil I 
use PID + process' creation time to identify a process uniquely).
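
For illustration, a rough sketch of that identification trick using psutil's
public API (the helper name is made up):

import psutil

def terminate_exact(pid, expected_create_time):
    # (PID, creation time) identifies a process uniquely, so this will not
    # kill an unrelated process that merely recycled the PID.
    try:
        proc = psutil.Process(pid)
        if proc.create_time() == expected_create_time:
            proc.terminate()
    except psutil.NoSuchProcess:
        pass  # already gone, nothing to do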

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue35995] logging.handlers.SMTPHandler

2019-02-21 Thread Vinay Sajip


Vinay Sajip  added the comment:

The existing implementation supports doing an SSL handshake using STARTTLS, 
which provides encryption for the actual email traffic. You are asking, it 
seems, to support a server that only listens on an already encrypted 
connection, and doesn't use STARTTLS. That would, in my book, be an 
*enhancement request* and not a bug. Your PR has removed the STARTTLS support - 
what is supposed to happen when connecting to a server that listens unencrypted 
and expects to use STARTTLS to initiate encrypted traffic?
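
For reference, the existing STARTTLS path looks roughly like this (host,
addresses and credentials are placeholders):

import logging.handlers

handler = logging.handlers.SMTPHandler(
    mailhost=('smtp.example.com', 587),
    fromaddr='app@example.com',
    toaddrs=['ops@example.com'],
    subject='Application error',
    credentials=('user', 'password'),
    secure=(),  # an empty tuple makes the handler call starttls()
)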

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36057] Add docs and tests for ordering in Counter. [no behavior change]

2019-02-21 Thread Raymond Hettinger


Change by Raymond Hettinger :


--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36067] subprocess terminate() "invalid handle" error when process is gone

2019-02-21 Thread STINNER Victor


STINNER Victor  added the comment:

> _winapi.TerminateProcess(self._handle, 1)
> OSError: [WinError 6] The handle is invalid

Silently ignoring that would be dangerous.

There is a risk that the handle is reused by another process, and so you would 
terminate an unrelated process, no?

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36057] Add docs and tests for ordering in Counter. [no behavior change]

2019-02-21 Thread Raymond Hettinger


Raymond Hettinger  added the comment:


New changeset 407c7343266eb3e5a2f5c1f4913082c84f8dd8a0 by Raymond Hettinger in 
branch 'master':
bpo-36057 Update docs and tests for ordering in collections.Counter [no 
behavior change] (#11962)
https://github.com/python/cpython/commit/407c7343266eb3e5a2f5c1f4913082c84f8dd8a0
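
As a quick illustration of what the updated docs describe (my own example, not
from the patch): counts are kept in first-encounter order, and most_common()
breaks ties in that same order.

from collections import Counter

c = Counter('abracadabra')

print(list(c))           # ['a', 'b', 'r', 'c', 'd'] -- first-encounter order
print(c.most_common(3))  # [('a', 5), ('b', 2), ('r', 2)] -- ties keep that order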


--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36057] Add docs and tests for ordering in Counter. [no behavior change]

2019-02-21 Thread miss-islington


Change by miss-islington :


--
pull_requests: +12003

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36060] Document how collections.ChainMap() determines iteration order

2019-02-21 Thread Raymond Hettinger


Change by Raymond Hettinger :


--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36043] FileCookieJar constructor don't accept PathLike

2019-02-21 Thread Alexander Kapshuna


Alexander Kapshuna  added the comment:

Oh sorry, I just thought that everybody had forgotten about this part of the 
library. Never mind my patch then; your work is certainly better, matrixise.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Multiprocessing performance question

2019-02-21 Thread Israel Brewster
Actually not a ’toy example’ at all. It is simply the first step in gridding 
some data I am working with - a problem that is solved by tools like SatPy, but 
unfortunately I can’t use SatPy because it doesn’t recognize my file format, 
and you can’t load data directly. Writing a custom file importer for SatPy is 
probably my next step.

That said, the entire process took around 60 seconds to run. As this step was 
taking 10 seconds, I figured it would be low-hanging fruit for speeding up the process. 
Obviously I was wrong. For what it’s worth, I did manage to re-factor the code, 
so instead of generating the entire grid up-front, I generate the boxes as 
needed to calculate the overlap with the data grid. This brought the processing 
time down to around 40 seconds, so a definite improvement there.
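
For anyone following along, a rough sketch of that refactor (names are
illustrative, not the actual code):

import itertools
from shapely import geometry

def boxes(x_range, y_range):
    # Build each grid cell lazily, only when the overlap step asks for it,
    # instead of materializing the full 800x1000 list up front.
    for x, y in itertools.product(x_range, y_range):
        yield geometry.box(x - 1, y, x, y - 1)

# for box in boxes(range(1, 1001), range(1, 801)):
#     ...compute overlap with the data grid...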
---
Israel Brewster
Software Engineer
Alaska Volcano Observatory 
Geophysical Institute - UAF 
2156 Koyukuk Drive 
Fairbanks AK 99775-7320
Work: 907-474-5172
cell:  907-328-9145

> On Feb 20, 2019, at 4:30 PM, DL Neil  wrote:
> 
> George
> 
> On 21/02/19 1:15 PM, george trojan wrote:
>> def create_box(x_y):
>> return geometry.box(x_y[0] - 1, x_y[1],  x_y[0], x_y[1] - 1)
>> x_range = range(1, 1001)
>> y_range = range(1, 801)
>> x_y_range = list(itertools.product(x_range, y_range))
>> grid = list(map(create_box, x_y_range))
>> Which creates and populates an 800x1000 “grid” (represented as a flat list
>> at this point) of “boxes”, where a box is a shapely.geometry.box(). This
>> takes about 10 seconds to run.
>> Looking at this, I am thinking it would lend itself well to
>> parallelization. Since the box at each “coordinate" is independent of all
>> others, it seems I should be able to simply split the list up into chunks
>> and process each chunk in parallel on a separate core. To that end, I
>> created a multiprocessing pool:
> 
> 
> I recall a similar discussion when folk were being encouraged to move away 
> from monolithic and straight-line processing to modular functions - it is 
> more (CPU-time) efficient to run in a straight line; than it is to repeatedly 
> call, set-up, execute, and return-from a function or sub-routine! ie there is 
> an over-head to many/all constructs!
> 
> Isn't the 'problem' that it is a 'toy example'? That the amount of computing 
> within each parallel process is small in relation to the inherent 'overhead'.
> 
> Thus, if the code performed a reasonable analytical task within each box 
> after it had been defined (increased CPU load), would you then notice the 
> expected difference between the single- and multi-process implementations?
> 
> 
> 
> From AKL to AK
> -- 
> Regards =dn
> -- 
> https://mail.python.org/mailman/listinfo/python-list

-- 
https://mail.python.org/mailman/listinfo/python-list


[issue36067] subprocess terminate() "invalid handle" error when process is gone

2019-02-21 Thread STINNER Victor


STINNER Victor  added the comment:

I'm not sure of the purpose of this issue.


It's expected to get an error if you try to send a signal to a process which has 
already terminated.

vstinner@apu$ python3
Python 3.7.2 (default, Jan 16 2019, 19:49:22) 
>>> import subprocess
>>> proc=subprocess.Popen("/bin/true")
>>> import os
>>> os.waitpid(proc.pid, 0)
(8171, 0)
>>> proc.kill()
ProcessLookupError: [Errno 3] No such process
>>> proc.terminate()
ProcessLookupError: [Errno 3] No such process

Ignoring these errors would be very risky: if another process gets the same 
pid, you would send a signal to the wrong process. Ooops.


If you only use the subprocess API, you don't have this issue:

vstinner@apu$ python3
Python 3.7.2 (default, Jan 16 2019, 19:49:22) 
>>> import subprocess
>>> proc=subprocess.Popen("/bin/true")
>>> proc.wait()
0
>>> proc.kill() # do nothing
>>> proc.terminate() # do nothing

--
nosy: +vstinner

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33039] int() and math.trunc don't accept objects that only define __index__

2019-02-21 Thread Rémi Lapeyre

Rémi Lapeyre  added the comment:

Yes it is. Thanks for finding that @Serhiy.

Since nobody objected to the change on the mailing list and people seem to 
agree in issue 20092:

[R. David Murray]
To summarize for anyone like me who didn't follow that issue: __index__ 
means the object can be losslessly converted to an int (is a true int), while 
__int__ may be an approximate conversion.  Thus it makes sense for an object to 
have an __int__ but not __index__, but vice-versa does not make sense.


I will post my patch tonight.
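
To make that summary concrete, a small sketch (the comments describe current 3.7
behavior; with the proposed change the failing calls would go through __index__):

import math

class Lossy:
    def __int__(self):      # float-like: conversion may lose information
        return 3

class Exact:
    def __index__(self):    # integer-like: lossless conversion
        return 2

int(Lossy())         # 3 -- int() already accepts __int__
'abc'[Exact()]       # 'c' -- indexing already uses __index__
int(Exact())         # TypeError on 3.7; would return 2 with the change
math.trunc(Exact())  # TypeError on 3.7 as well (no __trunc__ defined)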

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue30782] Allow limiting the number of concurrent tasks in asyncio.as_completed

2019-02-21 Thread twisteroid ambassador


twisteroid ambassador  added the comment:

I feel like once you lay out all the requirements: taking futures from an 
(async) generator, limiting the number of concurrent tasks, getting completed 
tasks to one consumer "as completed", and an implicit requirement that back 
pressure from the consumer should be handled (i.e. if whoever's iterating 
through "async for fut in as_completed(...)" is too slow, then the tasks should 
pause until it catches up), there are too many moving parts, and this should 
really be implemented using several tasks.

So a straightforward implementation may look like this:

import asyncio

async def better_as_completed(futs, limit):
    MAX_DONE_FUTS_HELD = 10  # or some small number

    sem = asyncio.Semaphore(limit)
    done_q = asyncio.Queue(MAX_DONE_FUTS_HELD)

    async def run_futs():
        async for fut in futs:
            await sem.acquire()
            asyncio.create_task(run_one_fut(fut))

        # Re-acquire every permit so the None sentinel cannot overtake
        # results from tasks that are still in flight.
        for _ in range(limit):
            await sem.acquire()
        await done_q.put(None)

    async def run_one_fut(fut):
        try:
            fut = asyncio.ensure_future(fut)
            await asyncio.wait((fut,))
            await done_q.put(fut)
        finally:
            sem.release()

    asyncio.create_task(run_futs())

    while True:
        next_fut = await done_q.get()
        if next_fut is None:
            return
        yield next_fut


Add proper handling for cancellation and exceptions and whatnot, and it may 
become a usable implementation.

And no, I do not feel like this should be added to asyncio.as_completed.
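
(For completeness, a hypothetical way to drive it, where urls and fetch() are
placeholders for an iterable of inputs and a coroutine factory:)

async def main():
    async def make_futs():
        for url in urls:
            yield fetch(url)

    async for fut in better_as_completed(make_futs(), limit=10):
        print(fut.result())   # fut is already done here

asyncio.run(main())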

--
nosy: +twisteroid ambassador

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36067] subprocess terminate() "invalid handle" error when process is gone

2019-02-21 Thread Giampaolo Rodola'


Giampaolo Rodola'  added the comment:

I think this is somewhat similar to issue14252. The problem I see is that we 
should either raise ProcessLookupError or ignore the error (better). This 
concept of suppressing errors when the process is already gone is currently 
established in 2 places:
https://github.com/python/cpython/blob/bafa8487f77fa076de3a06755399daf81cb75598/Lib/subprocess.py#L1389
Basically what I propose is to extend the existing logic to also cover 
ERROR_INVALID_HANDLE in addition to ERROR_ACCESS_DENIED. Untested sketch:

def terminate(self):
    """Terminates the process."""
    # Don't terminate a process that we know has already died.
    if self.returncode is not None:
        return
    try:
        _winapi.TerminateProcess(self._handle, 1)
    except OSError as err:
        # Windows error codes live in err.winerror (ERROR_ACCESS_DENIED == 5,
        # ERROR_INVALID_HANDLE == 6; both assumed defined at module level).
        if err.winerror in (ERROR_ACCESS_DENIED, ERROR_INVALID_HANDLE):
            rc = _winapi.GetExitCodeProcess(self._handle)
            if rc == _winapi.STILL_ACTIVE:
                raise
            self.returncode = rc
        else:
            raise

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36065] Add unified C API for accessing bytes and bytearray

2019-02-21 Thread Serhiy Storchaka


Serhiy Storchaka  added the comment:

If you need to support only bytes and bytearray, but not other bytes-like 
objects, this is too special a case. It is easy to write your own macros or 
functions that wrap the existing C API. Another option is to duplicate the code 
and replace PyBytes_ with PyByteArray_. In all cases be aware of the differences 
between bytes and bytearray: a bytearray can change its content and size, so saved 
values of PyByteArray_AS_STRING(ob) and PyByteArray_GET_SIZE(self) cannot be used 
after executing arbitrary code in destructors or after releasing the GIL.

--
resolution:  -> rejected
stage:  -> resolved
status: open -> closed

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36065] Add unified C API for accessing bytes and bytearray

2019-02-21 Thread Ori Avtalion


Ori Avtalion  added the comment:

My use-case is modifying existing code that supports bytes to also support 
bytearray.

https://github.com/mongodb/mongo-python-driver/blob/9902d239b4e557c2a657e8c8110f7751864cec95/bson/_cbsonmodule.c#L1112

The buffer protocol, which I didn't know of, feels slightly complicated for my 
use case. For now I opted to add these macros myself, changing only 3 lines.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36067] subprocess terminate() "invalid handle" error when process is gone

2019-02-21 Thread Giampaolo Rodola'


New submission from Giampaolo Rodola' :

Happened in psutil:
https://ci.appveyor.com/project/giampaolo/psutil/builds/22546914/job/rlp112gffyf2o30i

==
ERROR: psutil.tests.test_process.TestProcess.test_halfway_terminated_process
--
Traceback (most recent call last):
  File "c:\projects\psutil\psutil\tests\test_process.py", line 85, in tearDown
reap_children()
  File "c:\projects\psutil\psutil\tests\__init__.py", line 493, in reap_children
subp.terminate()
  File "C:\Python35-x64\lib\subprocess.py", line 1092, in terminate
_winapi.TerminateProcess(self._handle, 1)
OSError: [WinError 6] The handle is invalid

During the test case, the process was already gone (no PID).

--
components: Library (Lib)
messages: 336231
nosy: giampaolo.rodola
priority: normal
severity: normal
stage: needs patch
status: open
title: subprocess terminate() "invalid handle" error when process is gone
type: behavior
versions: Python 2.7, Python 3.6, Python 3.7, Python 3.8

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36030] add internal API function to create tuple without items array initialization

2019-02-21 Thread STINNER Victor


STINNER Victor  added the comment:

I reviewed PR 11954: I asked to rework the PR to only add _PyTuple_FromArray() 
and the "unrelated" micro-optimization. That way it would be easier to see the code 
simplification and the micro-optimization.

If the micro-optimization doesn't make the code more complex and doesn't 
introduce a subtle issue with the GC, I'm fine with taking a 10 ns optimization ;-)

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36030] add internal API function to create tuple without items array initialization

2019-02-21 Thread STINNER Victor


STINNER Victor  added the comment:

> Once I wrote a similar patch that adds PyTuple_FromArray, but never 
> published it because I did not find enough use cases for this function.  
> I considered using it only for removing some code duplication, but 
> Sergey has shown that it can be used for a small performance boost in some special 
> cases. I am still not sure, but this argument makes this change a tiny bit 
> more attractive. I leave this to Raymond.

The micro-benchmark results are not really impressive. I still like PR 11954 
because it removes code (simple loops). _PyTuple_FromArray() has a well-defined 
API and is safe (I'm saying that because PR 11927 adds an "unsafe" function). 
As long as it's private, I'm fine with it.

I'm more attracted by the code simplification than performance boost here.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36030] add internal API function to create tuple without items array initialization

2019-02-21 Thread Serhiy Storchaka


Serhiy Storchaka  added the comment:

Once I wrote a similar patch that adds PyTuple_FromArray, but never 
published it because I did not find enough use cases for this function. 
I considered using it only for removing some code duplication, but 
Sergey has shown that it can be used for a small performance boost in some special 
cases. I am still not sure, but this argument makes this change a tiny bit more 
attractive. I leave this to Raymond.

--
nosy: +serhiy.storchaka

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Help, Can't find the default proxy in requests by config

2019-02-21 Thread Evi1 T1me
On Thursday, February 21, 2019 at 7:12:40 AM UTC-5, Evi1 T1me wrote:
> ```bash
> ~ python3
> Python 3.7.0 (default, Oct 22 2018, 14:54:27)
> [Clang 10.0.0 (clang-1000.11.45.2)] on darwin
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import requests
> >>> r = requests.get('https://www.baidu.com')
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 
> 159, in _new_conn
> (self._dns_host, self.port), self.timeout, **extra_kw)
>   File "/usr/local/lib/python3.7/site-packages/urllib3/util/connection.py", 
> line 80, in create_connection
> raise err
>   File "/usr/local/lib/python3.7/site-packages/urllib3/util/connection.py", 
> line 70, in create_connection
> sock.connect(sa)
> ConnectionRefusedError: [Errno 61] Connection refused
> 
> During handling of the above exception, another exception occurred:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", 
> line 594, in urlopen
> self._prepare_proxy(conn)
>   File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", 
> line 805, in _prepare_proxy
> conn.connect()
>   File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 
> 301, in connect
> conn = self._new_conn()
>   File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 
> 168, in _new_conn
> self, "Failed to establish a new connection: %s" % e)
> urllib3.exceptions.NewConnectionError: 
> : Failed to 
> establish a new connection: [Errno 61] Connection refused
> 
> During handling of the above exception, another exception occurred:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.7/site-packages/requests/adapters.py", line 
> 449, in send
> timeout=timeout
>   File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", 
> line 638, in urlopen
> _stacktrace=sys.exc_info()[2])
>   File "/usr/local/lib/python3.7/site-packages/urllib3/util/retry.py", line 
> 398, in increment
> raise MaxRetryError(_pool, url, error or ResponseError(cause))
> urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='www.baidu.com', 
> port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot 
> connect to proxy.', 
> NewConnectionError(' 0x10e3ce550>: Failed to establish a new connection: [Errno 61] Connection 
> refused')))
> 
> During handling of the above exception, another exception occurred:
> 
> Traceback (most recent call last):
>   File "", line 1, in 
>   File "/usr/local/lib/python3.7/site-packages/requests/api.py", line 75, in 
> get
> return request('get', url, params=params, **kwargs)
>   File "/usr/local/lib/python3.7/site-packages/requests/api.py", line 60, in 
> request
> return session.request(method=method, url=url, **kwargs)
>   File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 
> 533, in request
> resp = self.send(prep, **send_kwargs)
>   File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 
> 646, in send
> r = adapter.send(request, **kwargs)
>   File "/usr/local/lib/python3.7/site-packages/requests/adapters.py", line 
> 510, in send
> raise ProxyError(e, request=request)
> requests.exceptions.ProxyError: HTTPSConnectionPool(host='www.baidu.com', 
> port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot 
> connect to proxy.', 
> NewConnectionError(' 0x10e3ce550>: Failed to establish a new connection: [Errno 61] Connection 
> refused')))
> ```
> 
> Check the proxy 
> 
> ```bash
> >>> print(requests.utils.get_environ_proxies('https://www.baidu.com'))
> {'http': 'http://127.0.0.1:', 'https': 'http://127.0.0.1:'}
> ```
> 
> Check bash environment
> 
> ```bash
> ~ set | grep proxy
> ```
> Nothing output.
> 
> ```bash
> ➜  ~ netstat -ant | grep 
> tcp4   5  0  127.0.0.1.54437127.0.0.1. CLOSE_WAIT
> tcp4 653  0  127.0.0.1.54436127.0.0.1. CLOSE_WAIT
> tcp4   5  0  127.0.0.1.54434127.0.0.1. CLOSE_WAIT
> ```
> 
> ```bash
> ➜  ~ lsof -i:
> COMMAND PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
> JavaAppli 77714 zerop   54u  IPv6 0x975257a323b5690f  0t0  TCP 
> localhost:54434->localhost:ddi-tcp-1 (CLOSE_WAIT)
> JavaAppli 77714 zerop   55u  IPv6 0x975257a33daa290f  0t0  TCP 
> localhost:54436->localhost:ddi-tcp-1 (CLOSE_WAIT)
> JavaAppli 77714 zerop   56u  IPv6 0x975257a3366b600f  0t0  TCP 
> localhost:54437->localhost:ddi-tcp-1 (CLOSE_WAIT)
> ```
> 
> ```bash
> ➜  ~ ps -ef | grep 77714
>   501 77714 1   0 11:17AM ?? 3:33.55 /Applications/Burp Suite 
> Community Edition.app/Contents/MacOS/JavaApplicationStub
>   501 84408 82855   0  5:54AM ttys0020:00.00 grep --color=auto 
> --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg 
> --exclude-dir=.svn 77714
> ```
> 
> Restart 

[issue36066] Add `empty` block to `for` and `while` loops.

2019-02-21 Thread Karthikeyan Singaravelan


Karthikeyan Singaravelan  added the comment:

Slightly similar proposal in the past : 
https://mail.python.org/pipermail/python-ideas/2016-March/038897.html

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36065] Add unified C API for accessing bytes and bytearray

2019-02-21 Thread Serhiy Storchaka


Serhiy Storchaka  added the comment:

The unified C API already exists. It is called the buffer protocol.

https://docs.python.org/3/c-api/buffer.html#buffer-related-functions
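
At the Python level the same unification is visible through memoryview; a sketch
of code that treats bytes and bytearray (and any other byte-oriented buffer
exporter) uniformly, with PyObject_GetBuffer being the C-level counterpart:

def checksum(data):
    # Accepts bytes, bytearray, memoryview, mmap, and similar byte buffers.
    with memoryview(data) as view:
        return sum(view) & 0xFF

assert checksum(b'abc') == checksum(bytearray(b'abc'))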

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36066] Add `empty` block to `for` and `while` loops.

2019-02-21 Thread Stéphane Wirtel

Stéphane Wirtel  added the comment:

This issue should be discussed on python-ideas

https://mail.python.org/mailman/listinfo/python-ideas

--
nosy: +matrixise

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: using zip for transpose

2019-02-21 Thread Robin Becker

On 21/02/2019 13:49, Peter Otten wrote:

Robin Becker wrote:

...




Isn't df.values a numpy array? Then try the more direct and likely more
efficient

df.values.tolist()

or, if you ever want to transpose

df.values.T.tolist()

The first seems to achieve what your sample code does. (In addition it also
converts the numpy type to the corresponding python builtin, i. e.
numpy.float64 becomes float etc.)


Thanks for the pointer.

In fact we were working through all the wrong methods, e.g. iterrows (slow) or 
iterating over columns, which created the need for a transpose.

However, df.values.tolist() creates a list of row lists, which is what is 
actually needed, and it is the fastest.
So to convert df into something for a reportlab table, this seems most efficient:

rlab_table_data=[['Mean','Max','Min','TestA','TestB']]+df.values.tolist()
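
For anyone comparing the approaches later, a tiny sketch on a made-up frame
(column names are placeholders):

import pandas as pd

df = pd.DataFrame({'Mean': [1.0, 2.0], 'Max': [3.5, 4.5]})

rows = df.values.tolist()                     # list of row lists, plain floats
cols = df.values.T.tolist()                   # list of column lists (transposed)
cols_via_zip = [list(t) for t in zip(*rows)]  # same transpose using zip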

thanks again
--
Robin Becker

--
https://mail.python.org/mailman/listinfo/python-list


[issue36065] Add unified C API for accessing bytes and bytearray

2019-02-21 Thread Ronald Oussoren


Ronald Oussoren  added the comment:

What is your use case for this? Is that something that can use the buffer API 
instead of these low-level APIs?

--
nosy: +ronaldoussoren

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36066] Add `empty` block to `for` and `while` loops.

2019-02-21 Thread Stéphane Wirtel

Stéphane Wirtel  added the comment:

Yep, and there was no follow-up. Maybe we could close this issue.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36066] Add `empty` block to `for` and `while` loops.

2019-02-21 Thread Karthikeyan Singaravelan


Karthikeyan Singaravelan  added the comment:

I would recommend posting this on python-ideas to get some feedback. This 
introduces new control flow and breaks some old assumptions: in the third case 
the empty block is executed, and there might be code that depends upon the 
current for-else behavior where else should be executed. Also, reading the 
examples, this seems to add a little cognitive overhead, since there are now 
empty and else blocks whose execution depends on whether the iterable was empty, 
the loop was broken out of, or the loop ended naturally, which might make this a 
little hard to teach too.

--
nosy: +xtreak

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36066] Add `empty` block to `for` and `while` loops.

2019-02-21 Thread WloHu


New submission from WloHu :

###
Description

Adding an `empty` block to loops will extend them to form for-empty-else and 
while-empty-else. The idea is that the `empty` block will execute when no loop 
iteration was performed because the iterated object was empty. The idea is 
taken from the Django framework's `{% empty %}` block 
(https://docs.djangoproject.com/en/2.1/ref/templates/builtins/#for-empty).

###
Details

There are several combinations of how this loop should work together with the 
`else` block (a `for` loop is taken as the example):
1. for-empty - `empty` block runs when iteration wasn't performed, i.e. ended 
naturally because of empty iterator;
2. for-else - `else` block runs when iteration ended naturally either because 
iterator was empty or exhausted, behavior the same as currently implemented;
3. for-empty-else - in this form there is a split depending on the way in which 
the loop ended naturally:
-- empty iterator - only `empty` block is executed,
-- non-empty iterator - only `else` block is executed.

In the 3rd case the `else` block is not executed together with the `empty` block, 
because that can already be achieved with the for-else form. The only reason to 
make this case work differently is code duplication: when, regardless of the way 
in which the loop ended naturally, there is common code we want to execute. E.g.:
```
for:
    ...
empty:
    statement1
    statement2
else:
    statement1
    statement3
```

However, implementing the "common-avoid-duplication" case would be inconsistent 
with `try-except`, which executes only the first matching `except` block.

###
Current alternative solutions

In the case when the iterable object works well with an "empty test" (e.g. `list`, 
`set`), the simplest solution is:
```
if not iterable:
    print("Empty")
else:
    for item in iterable:
        print(item)
    else:
        print("Ended naturally - non-empty.")
```

This looks good and is simple enough to avoid extending the language. However, 
in general it would fail if the `iterable` object is a generator, which is always 
truthy and fails the expectations of the "empty test".
In that case special handling is needed to make it work in general. So far 
I see 3 options:
- use a helper variable `x = list(iterable)` and do the "empty test" as shown above - 
this isn't an option for an unbounded `iterable` like a stream or an asynchronous 
message queue;
- test the generator for emptiness, a.k.a. peek at the next element:
```
try:
    first = next(iterable)
except StopIteration:
    print("Empty")
else:
    for item in itertools.chain([first], iterable):
        print(item)
    else:
        print("Ended naturally - non-empty.")
```

- add `empty` flag inside loop:
```
empty = True
for item in iterable:
    empty = False  # Sadly executed for each `item`.
    print(item)
else:
    if empty:
        print("Empty")
    else:
        print("Ended naturally - non-empty.")
```

The latter two options aren't really idiomatic compared to the proposed form:
```
for item in iterable:
    print(item)
empty:
    print("Empty")
else:
    print("Ended naturally - non-empty.")
```

###
Enhancement pros and cons
Pros:
- more idiomatic solution to handle natural loop exhaustion for empty iterator,
- shorter horizontal indentation compared to current alternatives,
- quite consistent flow control splitting compared to `try-except`,
- not so exotic as it's already implemented in Django (`{% empty %}`) and 
Jinja2 (`{% else %}`).

Cons:
- new keyword/token,
- applies to an even smaller number of use cases than for-else, which is itself 
still considered exotic.

###
Actual (my) use case (shortened):
```
empty = True
for message in messages:
    empty = False
    try:
        decoded = message.decode()
    except ...:
        ...
        ... # Handle different exception types.
    else:
        log.info("Success")
        break
else:
    if empty:
        error_message = "No messages."
    else:
        error_message = "Failed to decode available messages."
    log.error(error_message)
```

###
One more thing to convince readers

Considering that Python "went exotic" with for-else and while-else to solve the `if 
not_found: print('Not found.')` case, adding `empty` seems like the next inductive 
step in controlling the flow of loops.

###
Alternative solution

Enhance generators to work with the "empty test" by peeking at the next element 
behind the scenes. This would additionally solve the annoying issue of testing 
for empty generators, which currently must be handled as a special case of 
iterable object. Moreover, this solution doesn't require new language keywords.
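
(For comparison, the peeking can already be packaged into a small reusable helper 
today - a sketch, not part of the proposal:)
```
import itertools

_MISSING = object()

def split_empty(iterable):
    """Return (is_empty, iterator) without losing the peeked element."""
    it = iter(iterable)
    first = next(it, _MISSING)
    if first is _MISSING:
        return True, iter(())
    return False, itertools.chain([first], it)
```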

--
components: Interpreter Core
messages: 336221
nosy: wlohu
priority: normal
severity: normal
status: open
title: Add `empty` block to `for` and `while` loops.
type: enhancement
versions: Python 3.7

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 

[issue36065] Add unified C API for accessing bytes and bytearray

2019-02-21 Thread STINNER Victor


Change by STINNER Victor :


--
nosy:  -vstinner

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com


