[issue33330] Better error handling in PyImport_Cleanup()

2018-05-22 Thread Serhiy Storchaka

Change by Serhiy Storchaka :


--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: "Data blocks" syntax specification draft

2018-05-22 Thread Christian Gollwitzer

On 23.05.18 at 07:22, Chris Angelico wrote:

On Wed, May 23, 2018 at 9:51 AM, bartc  wrote:

Sorry, but I don't think you're right at all. unless the official references
for the language specifically say that commas are primarily for constructing
tuples, and all other uses are exceptions to that rule.


"A tuple consists of a number of values separated by commas"
https://docs.python.org/3/tutorial/datastructures.html#tuples-and-sequences

"Separating items with commas"
https://docs.python.org/3/library/stdtypes.html#tuple

"Note that tuples are not formed by the parentheses, but rather by use
of the comma operator."
https://docs.python.org/3/reference/expressions.html#parenthesized-forms

Enough examples? Commas make tuples, unless context specifies otherwise.


I'd think that the definitive answer is in the grammar, because that is 
what is used to build the Python parser:


https://docs.python.org/3/reference/grammar.html

Actually, I'm a bit surprised that tuple, list etc. do not appear 
there as non-terminals. It is a bit hard to find, and it seems that 
"atom:" is the starting point for parsing tuples, lists etc.


Christian
--
https://mail.python.org/mailman/listinfo/python-list


Re: how to handle captcha through mechanize module or any module

2018-05-22 Thread SACHIN CHAVAN
On Wednesday, December 18, 2013 at 6:26:17 PM UTC+5:30, Jai wrote:
> please do reply how to handle captcha through mechanize module

I have the same issue and haven't found a solution yet!
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue31493] IDLE cond context: fix code update and font update timers

2018-05-22 Thread Terry J. Reedy

Terry J. Reedy  added the comment:

The patch fixed immediate problem #2 above.  #1 is a separate enhancement and I 
listed it as #4 on the new master issue #33610.

--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: "Data blocks" syntax specification draft

2018-05-22 Thread Chris Angelico
On Wed, May 23, 2018 at 9:51 AM, bartc  wrote:
> On 22/05/2018 16:57, Chris Angelico wrote:
>>
>> On Wed, May 23, 2018 at 1:43 AM, Ian Kelly  wrote:
>
>
>>> In other words, the rule is not really as simple as "commas make
>>> tuples". I stand by what I wrote.
>>
>>
>> Neither of us is wrong here.
>
>
> Sorry, but I don't think you're right at all. unless the official references
> for the language specifically say that commas are primarily for constructing
> tuples, and all other uses are exceptions to that rule.

"A tuple consists of a number of values separated by commas"
https://docs.python.org/3/tutorial/datastructures.html#tuples-and-sequences

"Separating items with commas"
https://docs.python.org/3/library/stdtypes.html#tuple

"Note that tuples are not formed by the parentheses, but rather by use
of the comma operator."
https://docs.python.org/3/reference/expressions.html#parenthesized-forms

Enough examples? Commas make tuples, unless context specifies otherwise.
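A quick illustration of that rule (plain standard Python, nothing specific to
the proposal being discussed):

t = 1, 2, 3        # the commas build the tuple; parentheses are optional
single = (1)       # no comma, so this is just the int 1
one_tuple = (1,)   # the trailing comma makes a one-element tuple
empty = ()         # the empty tuple is the one case written without a comma
print(type(t), type(single), type(one_tuple), type(empty))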

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue32831] IDLE: Add docstrings and tests for codecontext

2018-05-22 Thread Terry J. Reedy

Terry J. Reedy  added the comment:

If you are following up with a new patch that includes code changes, make a new 
issue.  In the opening message, please briefly list the mini-issues fixed so I 
can match changes to their purposes.  Perhaps something like

* getspacesfirstword - function and param1 names
...

List the new issue as a dependency of the new 'improve code context' issue 33610 
that addresses point 2.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33441] Expose the sigset_t converter via private API

2018-05-22 Thread Serhiy Storchaka

Serhiy Storchaka  added the comment:

Since posix_spawn() has been removed in 3.7, there is no need to backport this 
feature.

--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33610] IDLE: Make multiple improvements to CodeContext

2018-05-22 Thread Terry J. Reedy

Change by Terry J. Reedy :


--
dependencies: +IDLE cond context: fix code update and font update timers, IDLE: 
Add docstrings and tests for codecontext, Idle Code Context: separate changing 
current and future editors
stage: test needed -> 
title: IDLE: Improve CodeContext (various issues) -> IDLE: Make multiple 
improvements to CodeContext
versions: +Python 3.6, Python 3.7, Python 3.8

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33537] Help on importlib.resources outputs the builtin open description

2018-05-22 Thread Serhiy Storchaka

Change by Serhiy Storchaka :


--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33475] Fix converting AST expression to string and optimize parenthesis

2018-05-22 Thread Serhiy Storchaka

Change by Serhiy Storchaka :


--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33610] IDLE: Improve CodeContext (various issues)

2018-05-22 Thread Terry J. Reedy

New submission from Terry J. Reedy :

There are a number of possible improvements to CodeContext.  They will be separate 
issues (and PRs) that are dependencies of this master index issue.  Some should 
be fairly easy now that we have the new tests.

1. #32831 added docstrings and tests.  Review has notes.
2. Follow-up may revise and do some user-invisible code cleanups.
3. #31493 cancelled the event loops when instances are deleted.
4. Spinoff from above: 1 or 2 events loops for class, not each instance.
5. #22703: separate initial context state of new window from toggling the state 
of current windows.  Current behavior is buggy.
6. Gray-out Options => Code Context on non-editor windows. (This would have to 
be revised again if windows became panes in master window.)
7. Change fixed # of lines to variable # of lines as needed, up to limit.  
About 15 is limit for 4-space indents in 80 char lines. 
8. Click on context line jumps to line. Show it at top of window.
9. Reenable user config of context colors?
10. Somehow mark blocks in editor.  Subtle background color change?
Or tag just the indents, or, if add line numbers, the line numbers?

--
assignee: terry.reedy
components: IDLE
messages: 317357
nosy: cheryl.sabella, terry.reedy
priority: normal
severity: normal
stage: test needed
status: open
title: IDLE: Improve CodeContext (various issues)
type: enhancement

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33454] Mismatched C function signature in _xxsubinterpreters.channel_close()

2018-05-22 Thread Serhiy Storchaka

Change by Serhiy Storchaka :


--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed
versions:  -Python 3.7

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue27300] tempfile.TemporaryFile(): missing errors=... argument

2018-05-22 Thread Serhiy Storchaka

Serhiy Storchaka  added the comment:

Thank you Stephan for your contribution!

--
resolution:  -> fixed
stage:  -> resolved
status: open -> closed
versions: +Python 3.8 -Python 3.6

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue27300] tempfile.TemporaryFile(): missing errors=... argument

2018-05-22 Thread Serhiy Storchaka

New submission from Serhiy Storchaka :


New changeset 825aab95fde959541859383f8ea7e7854ebfd49f by Serhiy Storchaka 
(sth) in branch 'master':
bpo-27300: Add the errors parameter to tempfile classes. (GH-6696)
https://github.com/python/cpython/commit/825aab95fde959541859383f8ea7e7854ebfd49f


--
nosy: +serhiy.storchaka

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue27485] urllib.splitport -- is it official or not?

2018-05-22 Thread Serhiy Storchaka

Serhiy Storchaka  added the comment:

This change makes test_urlparse fail when run with -We, or just produce a lot 
of warnings in the default mode.

==
ERROR: test_splitattr (test.test_urlparse.Utility_Tests)
--
Traceback (most recent call last):
  File "/home/serhiy/py/cpython-gc/Lib/test/test_urlparse.py", line 1113, in 
test_splitattr
self.assertEqual(splitattr('/path;attr1=value1;attr2=value2'),
  File "/home/serhiy/py/cpython-gc/Lib/urllib/parse.py", line 1103, in splitattr
DeprecationWarning, stacklevel=2)
DeprecationWarning: urllib.parse.splitattr() is deprecated as of 3.8, use 
urllib.parse.urlparse() instead

==
ERROR: test_splithost (test.test_urlparse.Utility_Tests)
--
Traceback (most recent call last):
  File "/home/serhiy/py/cpython-gc/Lib/test/test_urlparse.py", line 1006, in 
test_splithost
self.assertEqual(splithost('//www.example.org:80/foo/bar/baz.html'),
  File "/home/serhiy/py/cpython-gc/Lib/urllib/parse.py", line 977, in splithost
DeprecationWarning, stacklevel=2)
DeprecationWarning: urllib.parse.splithost() is deprecated as of 3.8, use 
urllib.parse.urlparse() instead

==
ERROR: test_splitnport (test.test_urlparse.Utility_Tests)
--
Traceback (most recent call last):
  File "/home/serhiy/py/cpython-gc/Lib/test/test_urlparse.py", line 1077, in 
test_splitnport
self.assertEqual(splitnport('parrot:88'), ('parrot', 88))
  File "/home/serhiy/py/cpython-gc/Lib/urllib/parse.py", line 1049, in 
splitnport
DeprecationWarning, stacklevel=2)
DeprecationWarning: urllib.parse.splitnport() is deprecated as of 3.8, use 
urllib.parse.urlparse() instead

==
ERROR: test_splitpasswd (test.test_urlparse.Utility_Tests)
--
Traceback (most recent call last):
  File "/home/serhiy/py/cpython-gc/Lib/test/test_urlparse.py", line 1050, in 
test_splitpasswd
self.assertEqual(splitpasswd('user:ab'), ('user', 'ab'))
  File "/home/serhiy/py/cpython-gc/Lib/urllib/parse.py", line 1013, in 
splitpasswd
DeprecationWarning, stacklevel=2)
DeprecationWarning: urllib.parse.splitpasswd() is deprecated as of 3.8, use 
urllib.parse.urlparse() instead

==
ERROR: test_splitport (test.test_urlparse.Utility_Tests)
--
Traceback (most recent call last):
  File "/home/serhiy/py/cpython-gc/Lib/test/test_urlparse.py", line 1066, in 
test_splitport
self.assertEqual(splitport('parrot:88'), ('parrot', '88'))
  File "/home/serhiy/py/cpython-gc/Lib/urllib/parse.py", line 1026, in splitport
DeprecationWarning, stacklevel=2)
DeprecationWarning: urllib.parse.splitport() is deprecated as of 3.8, use 
urllib.parse.urlparse() instead

==
ERROR: test_splitquery (test.test_urlparse.Utility_Tests)
--
Traceback (most recent call last):
  File "/home/serhiy/py/cpython-gc/Lib/test/test_urlparse.py", line 1091, in 
test_splitquery
self.assertEqual(splitquery('http://python.org/fake?foo=bar'),
  File "/home/serhiy/py/cpython-gc/Lib/urllib/parse.py", line 1073, in 
splitquery
DeprecationWarning, stacklevel=2)
DeprecationWarning: urllib.parse.splitquery() is deprecated as of 3.8, use 
urllib.parse.urlparse() instead

==
ERROR: test_splittag (test.test_urlparse.Utility_Tests)
--
Traceback (most recent call last):
  File "/home/serhiy/py/cpython-gc/Lib/test/test_urlparse.py", line 1101, in 
test_splittag
self.assertEqual(splittag('http://example.com?foo=bar#baz'),
  File "/home/serhiy/py/cpython-gc/Lib/urllib/parse.py", line 1088, in splittag
DeprecationWarning, stacklevel=2)
DeprecationWarning: urllib.parse.splittag() is deprecated as of 3.8, use 
urllib.parse.urlparse() instead

==
ERROR: test_splittype (test.test_urlparse.Utility_Tests)
--
Traceback (most recent call last):
  File "/home/serhiy/py/cpython-gc/Lib/test/test_urlparse.py", line 998, in 
test_splittype
self.assertEqual(splittype('type:opaquestring'), ('type', 'opaquestring'))
  File "/home/serhiy/py/cpython-gc/Lib/urllib/parse.py", 

[issue13886] readline-related test_builtin failure

2018-05-22 Thread Serhiy Storchaka

Serhiy Storchaka  added the comment:

This old issue is still not fixed. Martin, Xavier, could you open a PR please?

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue31529] IDLE: Add docstrings and tests for editor.py reload functions

2018-05-22 Thread Terry J. Reedy

Terry J. Reedy  added the comment:

There is a reference to this issue in #31494, msg302628 (CodeContext loops).

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: "Data blocks" syntax specification draft

2018-05-22 Thread Mikhail V
On Wed, May 23, 2018 at 2:25 AM, Dan Strohl  wrote:
>

>>
>> Explanation:
>> [here i'll use same symbol /// for the data entry point, but of course it 
>> can be
>> changed if a better idea comes later. Also for now, just for simplicity - 
>> the rule
>> is that the contents of a block starts always on the new line.
>>
>> So, e.g. this:
>>
>> data = /// s4
>> first line
>> last line
>> the rest python code
>>
>> - will parse the block and knock out leading 4 spaces.
>> i.e. if the first line has 5 leading spaces then 1 space will be left in the 
>> string.
>> Block parsing terminates when the next line does not satisfy the indent
>> sequence (4 spaces in this case).
>
>
> Personally though, I would not hard code it to knock out 4 leading spaces.
> I would have it handle spaces the same was that the existing parser does,

If I understand you correctly, then I think I have this option already
described,
i.e. these:

>> data = /// ts
>>
>> - "any whitespace" (mimic current Python behaviour)
>>
>> data = /// s# or
>> data = /// t
>>
>> - simply count amount of spaces (tabs) from first
>>   line and proceed, otherwise terminate.


Though hard-coded knock-out is also very useful, e.g. for this:

data = /// s4
First line indented more (8 spaces)
second - less (4 spaces)
rest code

So this will preserve formatting.


>> data = /// "???"
>> ??? abc foo bar
>> ???
>>
>> - defines indent character by string: crazy idea but why not.
>>
>
> Nope, don't like this one... It's far enough from Python normal that it seems
> unlikely to not get through, and (personally at least), I struggle to see the 
> benefit.

Heh, that was merely a joke - but OTOH one could use it for hard-coded
indent sequences:

data = /// ""
First line indented more (8 spaces)
second - less (4 spaces)
rest code

It looks a bit sloppy, but it generalizes some uses. But granted - I don't
see many real applications besides space- and tab-indented blocks
anyway - so it's questionable.


>> Language  parameter, e.g.:
>> data = /// t1."yaml"
>>
>> -this can be reserved for future usage by code analysis tools or dynamic
>> syntax highlighting.
>>
>
> I can see where this might be interesting, but again, I just don't see the 
> need,

I think you're right - if a directive is needed for some analysis tool, then one
could for example consider preceding the whole statement
with a directive, say in a comment:

# lang "yaml"
data = /// t
first line
last line
rest




Also I am thinking about this - there might be one useful 'hack'.
One could even allow single-line usage
(with a colon), e.g.:

data = /// s2:  first line

- so this would start parsing just after the colon,
'pretending' it is a block.
This may be not as fat-finger-proof and a bit 'inconsistent',
but at the end of the day it might actually be a win.



M
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: "Data blocks" syntax specification draft

2018-05-22 Thread Mikhail V
On Tue, May 22, 2018 at 1:25 PM, bartc  wrote:
> On 22/05/2018 03:49, Mikhail V wrote:
>>
>> On Mon, May 21, 2018 at 3:48 PM, bartc  wrote:
>>
>> # t
>> # t
>>11  22  33
>>
>
> Is this example complete? Presumably it means ((11,22,33),).

Yep.

>
>> You get the point?
>> So basically all nice chars are already occupied.
>
> You mean for introducing tuple, list and dict literals?

No, I meant the node (or whole data block) entry point chars ///.

So in such a language full of various operators, it is
just hard to find anything that does not clash with some
current meaning yet looks adequate and is ASCII.

A quick note: thanks for your comments. I'll just note that I've
moved to a more promising related proposal (see the last posts in this thread),
so the original one will be set aside for later consideration
and the nuances will be reconsidered anyway.


> Python already uses
> (, [ and { for those, with the advantage of having a closing ), ] and } to
> make it easier to see where each ends.

Ehm, for inline usage - i.e. anywhere on a line - closing tags are a necessity.
My whole idea was about indentation-based data - so there they are redundant noise.
You may disagree as well, since I've noticed in some previous thread
that you prefer Fortran-like termination tags. So that's quite personal, I think.

As for more objective points: currently many brackets are 'overloaded'
and moreover, they are all almost "homoglyphs" - very similar characters.

For an inline solution I'd prefer it e.g.
for a tuple:

{t  11  22  33}

nested:
{t  11  {t  11  22}  22}

Yes, it IS worse than:
{11  {11  22}  22}

But one might still need _various types_, so brackets alone have
limited potential in this sense.


>>> The ///d dictionary example is ambiguous: can you have more than one
>>> key:value per line or not? If so, it would look like this:
>>>
>>>///d "a" "b" "c" "d" "e" "f"
>>
>>
>> ///d   "a" "b""c" "d""e" "f"
>>
>> Now better? :-)
>
>
> Not really.
> Suppose one got accidentally missed out, and there was some
> spurious name at the end,

Not sure about your case, but it is up to you to format it to make it
more readable - whitespace / newline separation gives enough possibilities to
do so. And for me the lack of commas is of great benefit.
 _Some_ punctuation can be added in some cases (see e.g. "node
stacking section" in original document - dicts may adopt this as well
but I see no necessity at least for this case)


> It's not clear whether:
>
>   ///d a b
> c e
> f x
>
> is allowed ?

Allowed, but I'd say merely discouraged for industrial usage; better to start
a new line at least for multiline contents.

> I think this is an interesting first draft of an idea, but it doesn't seem
> rigorous. And people don't like that triple stroke prefix, or those single
> letter codes (why not just use 'tuple', 'list', 'dict')?
>
> For example, here is a proposal I've just made up for a similar idea, but to
> make such constructors obey similar rules to Python blocks:
>
>  tuple:
>  10
>  20
>  30
>
>  list:
>  list:
>  10
>  tuple: 5,6,7
>  30
>  "forty"
>  "fifty"
>

Cool, I actually commented on a similar option in some post before -
it might be ok
with type codes grayed out and good coloring, but in black and white it is a bit
of a "wall of text" - not much emphasis on _structure_. But it's ok - a good option.
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue33565] strange tracemalloc results

2018-05-22 Thread INADA Naoki

INADA Naoki  added the comment:

I can't reproduce it:

  File "test1.py", line 19, in do_get_obj
response = s3_client.get_object(Bucket='archpi.dabase.com', Key='style.css')
  File 
"/Users/inada-n/tmp/boto-leak/venv/lib/python3.6/site-packages/botocore/client.py",
 line 314, in _api_call
return self._make_api_call(operation_name, kwargs)
  File 
"/Users/inada-n/tmp/boto-leak/venv/lib/python3.6/site-packages/botocore/client.py",
 line 599, in _make_api_call
operation_model, request_dict)
  File 
"/Users/inada-n/tmp/boto-leak/venv/lib/python3.6/site-packages/botocore/endpoint.py",
 line 148, in make_request
return self._send_request(request_dict, operation_model)
  File 
"/Users/inada-n/tmp/boto-leak/venv/lib/python3.6/site-packages/botocore/endpoint.py",
 line 173, in _send_request
request = self.create_request(request_dict, operation_model)
  File 
"/Users/inada-n/tmp/boto-leak/venv/lib/python3.6/site-packages/botocore/endpoint.py",
 line 157, in create_request
operation_name=operation_model.name)
  File 
"/Users/inada-n/tmp/boto-leak/venv/lib/python3.6/site-packages/botocore/hooks.py",
 line 227, in emit
return self._emit(event_name, kwargs)
  File 
"/Users/inada-n/tmp/boto-leak/venv/lib/python3.6/site-packages/botocore/hooks.py",
 line 210, in _emit
response = handler(**kwargs)
  File 
"/Users/inada-n/tmp/boto-leak/venv/lib/python3.6/site-packages/botocore/signers.py",
 line 90, in handler
return self.sign(operation_name, request)
  File 
"/Users/inada-n/tmp/boto-leak/venv/lib/python3.6/site-packages/botocore/signers.py",
 line 156, in sign
auth.add_auth(request)
  File 
"/Users/inada-n/tmp/boto-leak/venv/lib/python3.6/site-packages/botocore/auth.py",
 line 420, in add_auth
super(S3SigV4Auth, self).add_auth(request)
  File 
"/Users/inada-n/tmp/boto-leak/venv/lib/python3.6/site-packages/botocore/auth.py",
 line 352, in add_auth
raise NoCredentialsError
botocore.exceptions.NoCredentialsError: Unable to locate credentials

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: "Data blocks" syntax specification draft

2018-05-22 Thread bartc

On 22/05/2018 16:57, Chris Angelico wrote:

On Wed, May 23, 2018 at 1:43 AM, Ian Kelly  wrote:



In other words, the rule is not really as simple as "commas make
tuples". I stand by what I wrote.


Neither of us is wrong here.


Sorry, but I don't think you're right at all, unless the official 
references for the language specifically say that commas are primarily 
for constructing tuples, and all other uses are exceptions to that rule.


AFAICS, commas are used just like commas everywhere - used as 
separators. The context tells Python what the resulting sequence is.


 "Commas make tuples" is a useful

oversimplification in the same way that "asterisk means
multiplication" is. The asterisk has other meanings in specific
contexts (eg unpacking), but outside of those contexts, it means
multiplication.


I don't think that's quite right either. Asterisk is just an overloaded 
token, but you will know what it's for as soon as it's encountered.


Comma seems to be used only as a separator.


--
bartc
--
https://mail.python.org/mailman/listinfo/python-list


[issue33604] HMAC default to MD5 marked as to be removed in 3.6

2018-05-22 Thread miss-islington

miss-islington  added the comment:


New changeset 2751dccca4098b799ea575bb35cec9959c74684a by Miss Islington (bot) 
in branch '3.7':
bpo-33604: Remove Pending from hmac Deprecation warning. (GH-7062)
https://github.com/python/cpython/commit/2751dccca4098b799ea575bb35cec9959c74684a


--
nosy: +miss-islington

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



RE: "Data blocks" syntax specification draft

2018-05-22 Thread Dan Strohl via Python-list

> -Original Message-
> 
> I think it would be appropriate to propose an alternative to TQS for this
> specific purposes. Namely for making it easier to implement parsers and
> embedded syntaxes.
> 
> So what do I have now with triple quoted strings - a simple example:
> 
> if 1:
> s = """\
> print ("\n") \\
> foo = 5
> """
> 
> So there is a _possibility_ in the sense it is possible to do, so let's say I 
> have a
> lib with a parser, etc. Though now a developer and a user will face quite real
> issues:
> 
> - TQS itself has its specific purpose already in many contents,
>   which may mean for example hard-coded syntax highlighting
> - there a lot of things happening here: e.g. in the above example
>   I use "\n" which I assume a part of string, or \\ - but it is interpreted.
>   Maybe some other things regarding escaping. This particular
>   issue maybe a blocker for making use of TQS in some data cases,
>   Say if the target source text need these very characters.
> 

Yup, I can see this, I do use """ in a number of ways, often to comment out 
large chunks of code. (OK, I probably should not, but I do).

> - indentation is the part of TQS. That is of couse by design
>   so and it's quite logical, though it is hard-coded behaviour and thus
>   does not make the presentation a natural part of blocks containing
>   this string.
> - appearance: imagine you have some small chunks of embedded
>   code parts and you will still have the closing """ everywhere -
>   that would be really hairy.
> 
> 

And yup, that does cause some challenges sometimes.

> 
> Explanation:
> [here i'll use same symbol /// for the data entry point, but of course it can 
> be
> changed if a better idea comes later. Also for now, just for simplicity - the 
> rule
> is that the contents of a block starts always on the new line.
> 
> So, e.g. this:
> 
> data = /// s4
> first line
> last line
> the rest python code
> 
> - will parse the block and knock out leading 4 spaces.
> i.e. if the first line has 5 leading spaces then 1 space will be left in the 
> string.
> Block parsing terminates when the next line does not satisfy the indent
> sequence (4 spaces in this case).
> Another obvious type: tabs:

OK, I CAN see this as a potentially useful suggestion.  There are a number of 
times where I would like to define a large chunk of text, but using tqs and 
having it suddenly move to the left is painful visually.  Right now, I tend to 
either a) do it anyway, b) do it in a separate module and import the variables, 
or c) do it and parse the string to remove the extra spaces.
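For what it's worth, option (c) is already covered by the standard library; a
small sketch using textwrap.dedent (existing Python, not part of the proposal
being discussed):

import textwrap

def block():
    # The backslash kills the first newline; dedent() then strips the
    # indentation that is common to all remaining lines.
    data = textwrap.dedent("""\
        first line
            second line keeps its extra indent
        last line
        """)
    return data

print(block())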

Personally though, I would not hard code it to knock out 4 leading spaces.  I 
would have it handle spaces the same way that the existing parser does: if 
there are 4 spaces indenting the next line, then it removes 4 spaces, if there 
are 6 spaces, it removes 6 spaces, etc., ignoring additional spaces within the 
data-string object.  Once it hits a line that has the same number of indenting 
spaces as the initial token, the data-string object is finished.

> 
> data = /// t1
> first line
> last line
> the rest python code
> 
> Will do the same but with one tabstop character.
> 

Tabs / spaces should be handled as normal (up to where the data-string object starts, 
after which it pulls off the first x tabs or spaces, and leaves anything else).

> Actually that's it!
> Some further ideas:
> 
> data = /// ts
> - "any whitespace" (mimic current Python behaviour)
> 
> data = /// s# or
> data = /// t
> - simply count amount of spaces (tabs) from first
>   line and proceed, otherwise terminate.
> 
> data = /// "???"
> ??? abc foo bar
> ???
> 
> - defines indent character by string: crazy idea but why not.
> 

Nope, don't like this one... It's far enough from normal Python that it seems 
unlikely to get through, and (personally at least) I struggle to see the 
benefit.

> Language  parameter, e.g.:
> data = /// t1."yaml"
> 
> -this can be reserved for future usage by code analysis tools or dynamic
> syntax highlighting.
> 

I can see where this might be interesting, but again, I just don't see the 
need: if the spec returns a string, you can use that string in any parser you 
want. If you want to customize how it's handled, then you can always create a 
custom object for it.

> That's just a rough specification.
> 
> What should it give as result:
> 

To me, this seems like simply an additional specification for a TQS, with the 
only enhancement being that it's basically an indented TQS, so the return is a 
string.

> 1. No clash with current TQS rules - less worries
>   about reserved characters.
> 
> 2. Built-in indentation parsing parameter makes it more or
>   less natural continuation of Python blocks and is char-precise,
>   which is very important here.
> 
> 3. Independent of the indent of containing block!
> 
> 4. Parameter descriptor can be developed in such manner
>that it allows more customisation and 

[issue33355] Windows 10 buildbot: 15 min timeout on test_mmap.test_large_filesize()

2018-05-22 Thread David Bolen

David Bolen  added the comment:

I have migrated the win8 and win10 builders to a different machine type on 
Azure, which seems to have restored more reasonable performance for the tests, 
at least in the first set of builds.  Assuming that continues to hold true, it 
should resolve this issue.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Getting Unicode decode error using lxml.iterparse

2018-05-22 Thread digitig
I'm trying to read my iTunes library in Python using iterparse. My current stub 
is:

 Snip 

import sys
import datetime
import base64  # needed by the "data" unmarshaller below
import xml.etree.ElementTree as ET
import argparse
import re

class Library:

unmarshallers = {
# collections
"array": lambda x: [v.text for v in x],
"dict": lambda x:
dict((x[i].text, x[i+1].text) for i in range(0, len(x), 2)),
"key": lambda x: x.text or "",

# simple types
"string": lambda x: x.text or "",
"data": lambda x: base64.decodestring(x.text or ""),
"date": lambda x: datetime.datetime(*map(int, re.findall("\d+", 
x.text))),
"true": lambda x: True,
"false": lambda x: False,
"real": lambda x: float(x.text),
"integer": lambda x: int(x.text)
}

def load(self, file):
print('Starting...')
parser = ET.iterparse(file)
for action, elem in parser:
unmarshal = self.unmarshallers.get(elem.tag)
if unmarshal:
data = unmarshal(elem)
elem.clear()
elem.text = data
print(elem.text)
elif elem.tag != "plist":
raise IOError("unknown plist type: %r" % elem.tag)
return parser.root[0].text

def __init__(self, infile):
self.root = self.load(infile)

if __name__ == "__main__":
parser = argparse.ArgumentParser(description = "Parse an iTunes library 
file to a set of CSV files suitable for import to a database.")
parser.add_argument('infile', nargs='?', type=argparse.FileType('r'), 
default=sys.stdin)
args=parser.parse_args()
print('Infile = ', args.infile)
library = Library(args.infile)


My input file (reduced to home in on the error) is:


 snip -

<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
	<key>15078</key>
	<dict>
		<key>Name</key><string>Part 2. The Death Of Enkidu. Skon Přitele Mého Mne Zdeptal Težče</string>
	</dict>
</dict>
</plist>

 snip 

I'm getting an error on one part of the XML:


 File "C:\Users\digit\Anaconda3\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]

UnicodeDecodeError: 'charmap' codec can't decode byte 0x8d in position 202: 
character maps to <undefined>


I suspect the issue is that it's using cp1252.py, which I don't think is UTF-8 
as specified in the XML prolog. Is this an iterparse problem, or am I using it 
wrongly?
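One thing to try (a hedged guess based on the traceback, not a confirmed fix):
argparse.FileType('r') opens the file with the locale encoding (cp1252 here),
so the parser never sees the UTF-8 declared in the prolog. Opening the file in
binary mode, or passing the filename and letting ElementTree open it, sidesteps
that:

import sys
import argparse
import xml.etree.ElementTree as ET

parser = argparse.ArgumentParser()
# Binary mode: iterparse/XMLParser will honor the encoding in the XML prolog.
parser.add_argument('infile', nargs='?', type=argparse.FileType('rb'),
                    default=sys.stdin.buffer)
args = parser.parse_args()

for action, elem in ET.iterparse(args.infile):
    pass  # unmarshalling as in the original script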


Thanks.

-- 
https://mail.python.org/mailman/listinfo/python-list


[issue33604] HMAC default to MD5 marked as to be removed in 3.6

2018-05-22 Thread miss-islington

Change by miss-islington :


--
pull_requests: +6695

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33604] HMAC default to MD5 marked as to be removed in 3.6

2018-05-22 Thread Gregory P. Smith

Gregory P. Smith  added the comment:


New changeset 8bb0b5b03cffa2a2e74f248ef479a9e7fbe95cf4 by Gregory P. Smith 
(Matthias Bussonnier) in branch 'master':
bpo-33604: Remove Pending from hmac Deprecation warning. (GH-7062)
https://github.com/python/cpython/commit/8bb0b5b03cffa2a2e74f248ef479a9e7fbe95cf4


--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: "Data blocks" syntax specification draft

2018-05-22 Thread Mikhail V
On Tue, May 22, 2018 at 9:01 AM, Christian Gollwitzer  wrote:
> Am 22.05.18 um 04:17 schrieb Mikhail V:
>>> YAML comes to mind
>>
>>
>> Actually plugging a data syntax in existing language is not a new idea.
>> Though I don't know real success stories.
>>
>
> Thing is, you can do it already now in the script, without modifying the
> Python interpreter, by parsing a triple-quoted string. See the examples
> right here: http://pyyaml.org/wiki/PyYAMLDocumentation
>


Yes. That is exactly what I wanted to discuss actually.
So the feature which makes it possible in this case is the
triple-quoted string (TQS).

I think it would be appropriate to propose an alternative
to TQS for this specific purpose, namely making it
easier to implement parsers and embedded syntaxes.

So what do I have now with triple quoted strings -
a simple example:

if 1:
s = """\
print ("\n") \\
foo = 5
"""

So there is a _possibility_ in the sense that it is possible to do, so
let's say I have a lib with a parser, etc. Though now a developer
and a user will face quite real issues:

- TQS itself has its specific purpose already in many contexts,
  which may mean for example hard-coded syntax highlighting
- there are a lot of things happening here: e.g. in the above example
  I use "\n" which I assume is part of the string, or \\ - but it is interpreted.
  Maybe some other things regarding escaping. This particular
  issue may be a blocker for making use of TQS in some data cases,
  say if the target source text needs these very characters
  (see the short example after this list).

- indentation is part of the TQS. That is of course by design
  and it's quite logical, though it is hard-coded behaviour and thus
  does not make the presentation a natural part of the blocks containing
  this string.
- appearance: imagine you have some small chunks of embedded
  code parts and you will still have the closing """ everywhere -
  that would be really hairy.
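A short illustration of the escaping point (plain CPython behaviour; a raw
triple-quoted string avoids the escape interpretation, though it does not
address the indentation or appearance points):

s = """print ("\n")"""     # \n is interpreted: the string contains a real newline
r = r"""print ("\n")"""    # raw form: the string contains a backslash and an 'n'
print(len(s), len(r))      # 11 and 12: the raw form keeps the backslash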


The alternative proposal therefore comes down to a "data block" syntax,
without much assumption about the contents of the block.

This should be simpler to implement, because it should not need a lot
of parsing rules - only some basic options. At the same time it enables
the 'embedding' of user-defined blocks/syntax in a way that looks
more natural than TQS.

My thoughts on possible solution.
-

Problem one: make it look natural inside Python source.
Current Python behaviour: very simply speaking, the leading whitespace
of a line is taken and compared with that of the next line.
That's okay for statements, but not okay for raw text data -
because I probably want custom leading whitespace:

string =
 abc
   abc

(So the TQS takes it simple - grabs it from the line beginning)

So the idea:
- add such a block  to syntax
- *force an explicit parameter for the indent characters.*

Explanation:
[here i'll use same symbol /// for the data entry point, but of course it can
be changed if a better idea comes later. Also for now, just for simplicity -
the rule is that the contents of a block starts always on the new line.

So, e.g. this:

data = /// s4
first line
last line
the rest python code

- will parse the block and knock out leading 4 spaces.
i.e. if the first line has 5 leading spaces then 1 space will be left
in the string. Block parsing terminates when the next line does not
satisfy the indent sequence (4 spaces in this case).
Another obvious type: tabs:

data = /// t1
first line
last line
the rest python code

Will do the same but with one tabstop character.

Actually that's it!
Some further ideas:

data = /// ts
- "any whitespace" (mimic current Python behaviour)

data = /// s# or
data = /// t
- simply count amount of spaces (tabs) from first
  line and proceed, otherwise terminate.

data = /// "???"
??? abc foo bar
???

- defines indent character by string: crazy idea but why not.

Language  parameter, e.g.:
data = /// t1."yaml"

-this can be reserved for future usage by code analysis tools
or dynamic syntax highlighting.

That's just a rough specification.

What should it give as result:

1. No clash with current TQS rules - less worries
  about reserved characters.

2. Built-in indentation parsing parameter makes it more or
  less natural continuation of Python blocks and is char-precise,
  which is very important here.

3. Independent of the indent of containing block!

4. Parameter descriptor can be developed in such manner
   that it allows more customisation and additions in the future.


This seems to be more generalized problem-solving.

One problem, as usual - tabs may be implicitly converted
to spaces by some software. That obviously could break
something, but that is true of any tabs, and it's not a
Python-specific problem.


Is there something I miss here?
What caveats can be with such approach?



M
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 10442: character maps to <undefined>

2018-05-22 Thread Chris Angelico
On Wed, May 23, 2018 at 8:31 AM, Peter J. Holzer  wrote:
> On 2018-05-23 07:38:27 +1000, Chris Angelico wrote:
>> On Wed, May 23, 2018 at 7:23 AM, Peter J. Holzer  wrote:
>> >> The best you can do is to go ask the canonical source of the
>> >> file what encoding the file is _supposed_ to be in.
>> >
>> > I disagree on both counts.
>> >
>> > 1) For any given file it is almost always possible to find the correct
>> >encoding (or *a* correct encoding, as there may be more than one).
>>
>> You can find an encoding which is capable of decoding a file. That's
>> not the same thing.
>
> If the result is correct, it is the same thing.
>
> If I have an input file
>
> 4c 69 65 62 65 20 47 72 fc df 65 0a
>
> and I decode it correctly to
>
> Liebe Grüße
>
> it doesn't matter whether I used ISO-8859-1 or ISO-8859-2. The mapping
> for all bytes in the input file is the same in both encodings.

Sure, but if you try it as ISO-8859-5 or  -7, you won't get an error,
but you also won't get that string. So it DOES matter.
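The point is easy to reproduce with the bytes quoted above (a small demo, not
from the original exchange):

data = bytes.fromhex("4c69656265204772fcdf650a")
print(data.decode("iso-8859-1"))   # Liebe Grüße
print(data.decode("iso-8859-2"))   # identical: these bytes map the same way
print(data.decode("iso-8859-5"))   # no error, but the umlaut/ß bytes come out
                                   # as Cyrillic letters -- wrong text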

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 10442: character maps to <undefined>

2018-05-22 Thread Peter J. Holzer
On 2018-05-23 07:38:27 +1000, Chris Angelico wrote:
> On Wed, May 23, 2018 at 7:23 AM, Peter J. Holzer  wrote:
> >> The best you can do is to go ask the canonical source of the
> >> file what encoding the file is _supposed_ to be in.
> >
> > I disagree on both counts.
> >
> > 1) For any given file it is almost always possible to find the correct
> >encoding (or *a* correct encoding, as there may be more than one).
> 
> You can find an encoding which is capable of decoding a file. That's
> not the same thing.

If the result is correct, it is the same thing.

If I have an input file 

4c 69 65 62 65 20 47 72 fc df 65 0a

and I decode it correctly to

Liebe Grüße

it doesn't matter whether I used ISO-8859-1 or ISO-8859-2. The mapping
for all bytes in the input file is the same in both encodings.


> >This may require domain-specific knowledge (e.g. it may be necessary
> >to recognize the human language and know at least some distinctive
> >words, or to know some special symbols likely to be used in a data
> >file), and it almost always takes a bit of detective work and trial
> >and error. But I don't think I ever encountered a file where I
> >couldn't figure out the encoding.
> 
> Look up the old classic "bush hid the facts" hack with Windows
> Notepad. A pure ASCII file that got misdetected based on the byte
> patterns in it.

And would you have made the same mistake as notepad? Nope, I'm quite
sure that you are able to recognize an ASCII file with an English
sentence as ASCII. You wouldn't even consider that it could be UTF-16LE.


> If you restrict yourself to ASCII-compatible eight-bit encodings, you
> MAY be able to figure out what something is.
[...]
> But there are a number of annoyingly similar encodings around, where a
> large number of the mappings are the same, but you're left with just a
> few ambiguous bytes.

They are rarely ambiguous if you speak the language.

> And if you're considering non-ASCII-compatible encodings, things get a
> lot harder. UTF-16 can represent large slabs of Chinese text using the
> same bytes that would represent alphanumeric characters; so how can
> you distinguish it from base-64?

I'll ask my Chinese colleague to read it. If he can read it, it's almost
certainly Chinese and not base-64.

As I said, domain knowledge may be necessary. If you are decoding a file
which may contain a Chinese text, you may have to know Chinese to check
whether the decoded text makes sense.

If your job is to figure out the encoding of files which you don't
understand (and hence can't check whether your results are correct) I
will concede that this is impossible.

> I have encountered MANY files where I couldn't figure out the
> encoding. Some of them were quite possibly in ancient encodings (some
> had CR line endings), some were ambiguous, and on multiple occasions,
> I've had to deal with files that had more than one encoding in the
> same block of content.

Well, files with multiple encodings break the assumption that there is
*one* correct encoding. While I have encountered such files, too (as
well as multi-encodings and random errors), I don't think we were
talking about that.

hp

-- 
   _  | Peter J. Holzer| we build much bigger, better disasters now
|_|_) || because we have much more sophisticated
| |   | h...@hjp.at | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson 


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Spam levels.

2018-05-22 Thread Grant Edwards
On 2018-05-22, Peter J. Holzer  wrote:

> I didn't read on Gmane. I read on my usenet server. But the broken
> messages were all coming from Gmane. It is possible that the breakage
> only occurs when Gmane passes the message to other Usenet servers,
> although I have no idea how that could happen (frankly, I have no idea
> why Gmane should replace message-ids at all - it just doesn't make
> sense).

I never figured out exactly what the broken scenarios were nor did I
try to figure out which gateway was causing them.

Ignoring Google Groups, there are 9 possible combinations:

   Usenet <---[gateway]---> M-List <---[gateway]---> Gmane

 1. Usenet followup to M-List posting
 2. Usenet followup to Gmane posting
 3. Usenet followup to Usenet posting
 4. M-List followup to Usenet posting
 5. M-List followup to Gmane posting
 6. M-List followup to M-List posting
 7. Gmane  followup to Usenet posting
 8. Gmane  followup to M-List posting
 9. Gmane  followup to Gmane posting

Most of the combinations seem to work most of the time.  It looked
like there was at least 1 broken scenario when subscribed either via
Gmane or via "real" Usenet, but it's pretty difficult to glean the
signal from the noise created by people with broken MUAs and/or NNTP
clients.

It's actually pretty impressive it all works as well as it does...

In any case, ignoring all postings from Google Groups is recommended.

-- 
Grant Edwards   grant.b.edwardsYow! Today, THREE WINOS
  at   from DETROIT sold me a
  gmail.comframed photo of TAB HUNTER
   before his MAKEOVER!

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Spam levels.

2018-05-22 Thread Peter J. Holzer
On 2018-05-22 20:42:43 +, Grant Edwards wrote:
> On 2018-05-22, Peter J. Holzer  wrote:
> > On 2018-05-21 15:42:28 +, Grant Edwards wrote:
> >> I switched from Usenet to Gmane mainly because references headers are
> >> bit more consistent on Gmane, so threading works somewhat better.
> >
> > This is interesting, because Gmane was the reason I switched from
> > reading on usenet to reading the mailinglist: Every article coming
> > through the Gmane gateway had broken headers, which completely messed up
> > threading (It looked like Gmane was replacing Message-Ids with their
> > own). I haven't checked recently whether that is still the case.
> >
> > On the mailing-list threading seems to work.
> 
> I've never tried reading the mailing list directly (I'm not willing to
> give up slrn), but the last time I ran NNTP threading tests (I refuse
> to admin how much time I spent writing a Python app to do that), My
> Usenet feed was noticably worse than Gmane.  Gmane had a fair amount
> of breakage as well, but was better than Usenet.

I didn't read on Gmane. I read on my usenet server. But the broken
messages were all coming from Gmane. It is possible that the breakage
only occurs when Gmane passes the message to other Usenet servers,
although I have no idea how that could happen (frankly, I have no idea
why Gmane should replace message-ids at all - it just doesn't make
sense).

hp

-- 
   _  | Peter J. Holzer| we build much bigger, better disasters now
|_|_) || because we have much more sophisticated
| |   | h...@hjp.at | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson 


-- 
https://mail.python.org/mailman/listinfo/python-list


Tkinter and root vs. Wayland

2018-05-22 Thread Grant Edwards
For a couple decades now, I've been distributing a couple smallish
Tkinter applications that need to run as root for a variety of reasons
(raw Ethernet access, starting/stopping daemons, loading and unloading
kernel modules, reading and writing config files that are owned by
root).

As part of RedHat's switch to Wayland, they've decided that GUI X11
apps running as root will no longer be allowed to connect to the
Wayland desktop server/compositor/whatever-it's-called.  When it was
pointed out to RedHat that this will break lots of applications, the
official word from on high is that all GUI apps requiring root
privileges need to be redesigned so that their GUI is running as a
normal user.

How does one do that in a Tkinter app?  Do I need to start as root and
fork a process that drops privileges and starts Tkinter and then the
two processes communicate via sockets or Posix queues or whatnot?

Can Python multiprocessing be used in this way?
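One possible shape for this, sketched with multiprocessing. Very much a sketch:
the uid/gid values and do_privileged_work() are placeholders, it has to be
started as root, and real code would need to authenticate requests and handle
errors.

import multiprocessing as mp
import os

def gui_process(conn, uid=1000, gid=1000):
    # Drop privileges before any Tk/X11/Wayland connection is made.
    os.setgid(gid)
    os.setuid(uid)
    import tkinter as tk
    root = tk.Tk()
    tk.Button(root, text="Load module",
              command=lambda: conn.send("load_module")).pack()
    root.mainloop()
    conn.send("quit")

def main():
    parent_conn, child_conn = mp.Pipe()
    gui = mp.Process(target=gui_process, args=(child_conn,))
    gui.start()                      # parent stays root and never touches the GUI
    while True:
        request = parent_conn.recv() # privileged side services GUI requests
        if request == "quit":
            break
        # do_privileged_work(request)  # placeholder: raw sockets, modprobe, ...
    gui.join()

if __name__ == "__main__":
    main()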

-- 
Grant Edwards   grant.b.edwardsYow! If our behavior is
  at   strict, we do not need fun!
  gmail.com

-- 
https://mail.python.org/mailman/listinfo/python-list


[issue33570] OpenSSL 1.1.1 / TLS 1.3 cipher suite changes

2018-05-22 Thread miss-islington

miss-islington  added the comment:


New changeset cd57b48ef9a70b7ef693ba52aaf38d7c945ab5d3 by Miss Islington (bot) 
in branch '3.7':
bpo-33570: TLS 1.3 ciphers for OpenSSL 1.1.1 (GH-6976)
https://github.com/python/cpython/commit/cd57b48ef9a70b7ef693ba52aaf38d7c945ab5d3


--
nosy: +miss-islington

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 10442: character maps to <undefined>

2018-05-22 Thread Chris Angelico
On Wed, May 23, 2018 at 7:23 AM, Peter J. Holzer  wrote:
>> The best you can do is to go ask the canonical source of the
>> file what encoding the file is _supposed_ to be in.
>
> I disagree on both counts.
>
> 1) For any given file it is almost always possible to find the correct
>encoding (or *a* correct encoding, as there may be more than one).

You can find an encoding which is capable of decoding a file. That's
not the same thing.

>This may require domain-specific knowledge (e.g. it may be necessary
>to recognize the human language and know at least some distinctive
>words, or to know some special symbols likely to be used in a data
>file), and it almost always takes a bit of detective work and trial
>and error. But I don't think I ever encountered a file where I
>couldn't figure out the encoding.

Look up the old classic "bush hid the facts" hack with Windows
Notepad. A pure ASCII file that got misdetected based on the byte
patterns in it.

If you restrict yourself to ASCII-compatible eight-bit encodings, you
MAY be able to figure out what something is. (I have that exact
situation when parsing subtitles files.) Bizarre constructs like
"Tuuleen jδiseen mδ nostan pδδn" are a strong indication that the
encoding is wrong - if most of a word is ASCII, it's likely that the
non-ASCII bytes represent accented characters, not characters from a
completely different alphabet. But there are a number of annoyingly
similar encodings around, where a large number of the mappings are the
same, but you're left with just a few ambiguous bytes.

And if you're considering non-ASCII-compatible encodings, things get a
lot harder. UTF-16 can represent large slabs of Chinese text using the
same bytes that would represent alphanumeric characters; so how can
you distinguish it from base-64?

I have encountered MANY files where I couldn't figure out the
encoding. Some of them were quite possibly in ancient encodings (some
had CR line endings), some were ambiguous, and on multiple occasions,
I've had to deal with files that had more than one encoding in the
same block of content. (Or more frequently, not files but socket
connections. Same difference.) So no, you cannot always figure out a
file's encoding from its contents. Because that will, on some
occasions, violate the laws of physics - granted, that's merely a
misdemeanour in some states.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Target WSGI script cannot be loaded as Python module.

2018-05-22 Thread Νίκος
On Tuesday, 22 May 2018 at 10:55:54 PM UTC+3, Alexandre Brault wrote:
> > Any ideas as to why I am getting the above error although I have python36 
> > installed along with all modules? Why can't it find it?
> How did you install geoip2? Was it by any chance in a virtual
> environment? If it was, you need to tell mod_wsgi to use this virtual
> environment; otherwise, it'll use the global environment that probably
> doesn't have geoip2 installed

I have both Pythons installed in parallel:
python2.7 and python3.6.

I have installed the modules with

pip3.6 install bottle bottle-pymysql geopip2

and they were installed successfully.

I don't know why the error log is complaining that it cannot see the modules.
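One way to narrow it down (a hedged debugging sketch, not a fix): drop
something like this at the top of the WSGI script and check Apache's error log
to see which interpreter and sys.path mod_wsgi is actually using.

import sys

print("executable:", sys.executable, file=sys.stderr)
print("version   :", sys.version, file=sys.stderr)
print("sys.path  :", sys.path, file=sys.stderr)
try:
    import geoip2
    print("geoip2 found at", geoip2.__file__, file=sys.stderr)
except ImportError as exc:
    print("geoip2 NOT importable:", exc, file=sys.stderr)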

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 10442: character maps to <undefined>

2018-05-22 Thread Peter J. Holzer
On 2018-05-20 15:43:54 +0200, Karsten Hilbert wrote:
> On Sun, May 20, 2018 at 04:59:12AM -0700, bellcanada...@gmail.com wrote:
> 
> > On Saturday, 19 May 2018 19:48:20 UTC-4, Skip Montanaro  wrote:
> > > As Chris indicated, you'll have to figure out the correct encoding. You
> > > might want to check out the chardet module (available on PyPI, I believe)
> > > and see if it can come up with a better guess. I imagine there are other
> > > encoding guessers out there. That's just one I'm familiar with.
> > 
> > thank you for the reply, but how exactly am i supposed to find oout what is 
> > the correct encodeing??
> 
> One CAN NOT.
> 
> The best you can do is to go ask the canonical source of the
> file what encoding the file is _supposed_ to be in.

I disagree on both counts.

1) For any given file it is almost always possible to find the correct
   encoding (or *a* correct encoding, as there may be more than one).

   This may require domain-specific knowledge (e.g. it may be necessary
   to recognize the human language and know at least some distinctive
   words, or to know some special symbols likely to be used in a data
   file), and it almost always takes a bit of detective work and trial
   and error. But I don't think I ever encountered a file where I
   couldn't figure out the encoding.

   (If you have several files in the same encoding, it may not be
   possible to figure out the encoding from a subset of them. For
   example, the files may all be in ISO-8859-2, but the subset you have
   contains only characters <= 0x7F. But if you have several files, they
   may not all be the same encoding, either).

2) The canonical source of the file may not know. This is quite frequent
   when the source is some non-technical person. Then you get answers
   like "it's ASCII" (although the file contains umlauts, which aren't
   in ASCII) or "it's ANSI" (which isn't an encoding, although Windows
   pretends it is). Or they may not be aware that the file is converted
   somewhere in the pipeline, so that the file they generated isn't
   actually the file you received. So ask (or check the docs), but
   verify!
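For the mechanical part of that detective work, the chardet module mentioned
earlier in the thread gives a starting guess (a sketch; the file name is a
placeholder and the result still has to be verified against domain knowledge):

import chardet  # third-party: pip install chardet

with open("mystery.txt", "rb") as f:      # placeholder file name
    raw = f.read()

guess = chardet.detect(raw)               # e.g. {'encoding': 'ISO-8859-2', 'confidence': 0.72, ...}
print(guess)
text = raw.decode(guess["encoding"] or "latin-1", errors="replace")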

hp

-- 
   _  | Peter J. Holzer| we build much bigger, better disasters now
|_|_) || because we have much more sophisticated
| |   | h...@hjp.at | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson 


-- 
https://mail.python.org/mailman/listinfo/python-list


[issue33609] Document that dicts preserve insertion order

2018-05-22 Thread Yury Selivanov

New submission from Yury Selivanov :

I don't see it documented that dicts preserve insertion order. 3.7 what's new 
points to [1], but that section doesn't have a "version changed" tag.

IMO, [1] should have two version changed tags: one for 3.6, and one for 3.7.

Also, it would be great if we could document how the order is preserved when a 
key is deleted from the dict etc.

[1] https://docs.python.org/3.7/library/stdtypes.html#mapping-types-dict
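A short illustration of the behaviour to be documented (insertion order is an
implementation detail of CPython 3.6 and a language guarantee from 3.7):

d = {"a": 1, "b": 2, "c": 3}
del d["b"]        # deleting keeps the relative order of the remaining keys
d["b"] = 20       # re-inserting a key appends it at the end
print(list(d))    # ['a', 'c', 'b']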

--
assignee: docs@python
components: Documentation
messages: 317346
nosy: docs@python, inada.naoki, ned.deily, vstinner, yselivanov
priority: normal
severity: normal
status: open
title: Document that dicts preserve insertion order
type: enhancement
versions: Python 3.7, Python 3.8

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33608] [subinterpreters] Add a cross-interpreter-safe mechanism to indicate that an object may be destroyed.

2018-05-22 Thread STINNER Victor

Change by STINNER Victor :


--
title: Add a cross-interpreter-safe mechanism to indicate that an object may be 
destroyed. -> [subinterpreters] Add a cross-interpreter-safe mechanism to 
indicate that an object may be destroyed.

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33607] [subinterpreters] Explicitly track object ownership (and allocator).

2018-05-22 Thread STINNER Victor

Change by STINNER Victor :


--
title: Explicitly track object ownership (and allocator). -> [subinterpreters] 
Explicitly track object ownership (and allocator).

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: what does := means simply?

2018-05-22 Thread Peter J. Holzer
On 2018-05-20 16:36:12 -0400, Richard Damon wrote:
> 2) Try to maximize portability by not only looking at the specs, but
> also common implementations, and choosing the options that maximize the
> acceptability of your output to tools that don't fully meet the specs.
> Also, if a common implementation generates something not quite to the
> standard, try to make it so you can accept that output too.

This is the well-known "be conservative in what you send and liberal in
what you accept" principle. 

It has fallen into disfavour over the last decade or so. There are
several reasons:

* Being liberal in what you accept is problematic because you are
  accepting input which has no specified meaning and interpret it as you
  see fit - but there is no guarantee that this is the interpretation
  that the sender intended. This may result in silent data loss. In many
  cases an error message is better.
* Accepting non-standard input is also problematic because such input is
  probably not well-tested. The code is much more likely to contain
  bugs, maybe even security-critical bugs.
* If some features of a spec are rarely used, programmers may not
  implement them. When they are needed, they won't work. A recent
  example is that the TLS working group found that they can't use the
  version number field to signal the version number because too many
  implementations got it wrong (I have no idea how that happened. We are
  already on the 6th version and all previous upgrades used the version
  field).

Of course, if a popular implementation has known bugs you may have no
choice but make concessions.

hp


-- 
   _  | Peter J. Holzer| we build much bigger, better disasters now
|_|_) || because we have much more sophisticated
| |   | h...@hjp.at | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson 


-- 
https://mail.python.org/mailman/listinfo/python-list


[issue33570] OpenSSL 1.1.1 / TLS 1.3 cipher suite changes

2018-05-22 Thread miss-islington

Change by miss-islington :


--
pull_requests: +6694

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33570] OpenSSL 1.1.1 / TLS 1.3 cipher suite changes

2018-05-22 Thread Christian Heimes

Christian Heimes  added the comment:


New changeset e8eb6cb7920ded66abc5d284319a8539bdc2bae3 by Christian Heimes in 
branch 'master':
bpo-33570: TLS 1.3 ciphers for OpenSSL 1.1.1 (GH-6976)
https://github.com/python/cpython/commit/e8eb6cb7920ded66abc5d284319a8539bdc2bae3


--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Spam levels.

2018-05-22 Thread Grant Edwards
On 2018-05-22, Peter J. Holzer  wrote:
> On 2018-05-21 15:42:28 +, Grant Edwards wrote:
>> I switched from Usenet to Gmane mainly because references headers are
>> bit more consistent on Gmane, so threading works somewhat better.
>
> This is interesting, because Gmane was the reason I switched from
> reading on usenet to reading the mailinglist: Every article coming
> through the Gmane gateway had broken headers, which completely messed up
> threading (It looked like Gmane was replacing Message-Ids with their
> own). I haven't checked recently whether that is still the case.
>
> On the mailing-list threading seems to work.

I've never tried reading the mailing list directly (I'm not willing to
give up slrn), but the last time I ran NNTP threading tests (I refuse
to admit how much time I spent writing a Python app to do that), my
Usenet feed was noticeably worse than Gmane.  Gmane had a fair amount
of breakage as well, but was better than Usenet.

-- 
Grant Edwards   grant.b.edwardsYow! Did YOU find a
  at   DIGITAL WATCH in YOUR box
  gmail.comof VELVEETA?

-- 
https://mail.python.org/mailman/listinfo/python-list


[issue33608] Add a cross-interpreter-safe mechanism to indicate that an object may be destroyed.

2018-05-22 Thread pmpp

Change by pmpp :


--
nosy: +pmpp

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33608] Add a cross-interpreter-safe mechanism to indicate that an object may be destroyed.

2018-05-22 Thread STINNER Victor

STINNER Victor  added the comment:

"That is relatively straight-forward except in one case: indicating that the 
other interpreter doesn't need the object to exist any more (similar to 
PyBuffer_Release() but more general)"

Why would an interpreter access an object from a different interpreter? Each 
interpreter should have its own object space, no?

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33604] HMAC default to MD5 marked as to be removed in 3.6

2018-05-22 Thread Serhiy Storchaka

Change by Serhiy Storchaka :


--
nosy: +christian.heimes, gregory.p.smith

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33569] dataclasses InitVar does not maintain any type info

2018-05-22 Thread Eric V. Smith

Eric V. Smith  added the comment:

This seems like a reasonable request.

--
nosy: +eric.smith

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33562] Check that the global settings for asyncio are not changed by tests

2018-05-22 Thread STINNER Victor

Change by STINNER Victor :


--
nosy: +vstinner

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33604] HMAC default to MD5 marked as to be removed in 3.6

2018-05-22 Thread Matthias Bussonnier

Matthias Bussonnier  added the comment:

I've sent two PRs: one that updates the deprecation from 
PendingDeprecationWarning to DeprecationWarning, which likely should get applied 
to 3.6 and 3.7?

And a PR to actually remove the default in 3.8.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue32400] inspect.isdatadescriptor false negative

2018-05-22 Thread Aaron Hall

Change by Aaron Hall :


--
nosy: +Aaron Hall

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33604] HMAC default to MD5 marked as to be removed in 3.6

2018-05-22 Thread Matthias Bussonnier

Change by Matthias Bussonnier :


--
pull_requests: +6693

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33565] strange tracemalloc results

2018-05-22 Thread Alexander Mohr

Alexander Mohr  added the comment:

yes, memory does go up.  If you click the botocore bug link you'll see a graph 
of memory usage over time.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33605] Detect accessing event loop from a different thread outside of _debug

2018-05-22 Thread Yury Selivanov

Yury Selivanov  added the comment:

> I suggest that asyncio should be stricter about this error and that methods 
> and functions that operate on the event loop, such as call_soon, call_later, 
> create_task, ensure_future, and close, should all call _check_thread() even 
> when not in debug mode. _check_thread() warns that it "should only be called 
> when self._debug == True", hinting at "performance reasons", but that doesn't 
> seem justified. threading.get_ident() is efficiently implemented in C, and 
> comparing that integer to another cached integer is about as efficient an 
> operation as it gets.

I'd be OK with this if the performance penalty is within 0.5% in 
microbenchmarks for asyncio & uvloop.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33607] Explicitly track object ownership (and allocator).

2018-05-22 Thread STINNER Victor

STINNER Victor  added the comment:

"Either we'd add 2 pointer-size fields to PyObject or we would keep a separate 
hash table (or two) pointing from each object to the info (...)"

I expect a huge impact on the memory footprint. I dislike the idea.

Currently, the smallest Python object is:

>>> sys.getsizeof(object())
16

It's just two pointers. Adding two additional pointers would simply double the 
size of the object.

"Now the C-API offers a way to switch the allocator, so there's no guarantee
that the right allocator is used in PyMem_Free()."

I would expect that either all interpreters use the same memory allocator, or 
that each interpreter uses its own allocator. If you use one allocator per 
interpreter, calling PyMem_Free() from the wrong interpreter would just crash, 
just as you get a crash when you call free() on memory allocated by PyMem_Malloc(). 
You can extend PYTHONMALLOC=debug to detect bugs. This built-in debugger is 
already able to catch misuses of allocators. Adding extra pointers to this 
debugger is acceptable since it doesn't modify the footprint of the default 
mode.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: what does := means simply?

2018-05-22 Thread Peter J. Holzer
On 2018-05-20 11:37:14 -0400, Dennis Lee Bieber wrote:
> On Sun, 20 May 2018 12:38:59 +0100, bartc  declaimed the
> following:
> >Then the /same software/ probably wouldn't work anywhere else. I mean 
> >taking source which doesn't know or care about what system its on, and 
> >that operates on a ppm file downloaded from the internet.
> 
>   And software that handles binary PPM on a big-endian system probably
> won't run on a little-endian system, if said PPM is using a maxval >255
> (meaning each R,G,B takes up two bytes, and said bytes would be seen in
> reverse order when moving from one system to the other).

The byte order in PPM files is well-defined.

It is trivial to write a portable C program which reads raw PPM files.
And by "portable" I mean that the same program works on big- or little
endian systems, on ASCII or EBCDIC systems, on systems where 1 byte is
more than 8 bits (as long as the file stores still one octet per byte),
etc.

hp


-- 
   _  | Peter J. Holzer| we build much bigger, better disasters now
|_|_) || because we have much more sophisticated
| |   | h...@hjp.at | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson 


-- 
https://mail.python.org/mailman/listinfo/python-list


[issue33565] strange tracemalloc results

2018-05-22 Thread STINNER Victor

STINNER Victor  added the comment:

> this means 21 blocks were not released, and in this case leaked because 
> nothing should be held onto after the first iteration (creating the initial 
> connector in the connection pool).  In the head object case that's going to be 
> a new connector per iteration; however, the old one should go away.

I'm not sure that I understand properly. If you call the function many times, 
does the memory usage increase?

I'm not sure that this issue is a bug in Python.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Spam levels.

2018-05-22 Thread Peter J. Holzer
On 2018-05-21 15:42:28 +, Grant Edwards wrote:
> I switched from Usenet to Gmane mainly because references headers are
> bit more consistent on Gmane, so threading works somewhat better.

This is interesting, because Gmane was the reason I switched from
reading on usenet to reading the mailinglist: Every article coming
through the Gmane gateway had broken headers, which completely messed up
threading (It looked like Gmane was replacing Message-Ids with their
own). I haven't checked recently whether that is still the case.

On the mailing-list threading seems to work.

hp


-- 
   _  | Peter J. Holzer| we build much bigger, better disasters now
|_|_) || because we have much more sophisticated
| |   | h...@hjp.at | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson 


-- 
https://mail.python.org/mailman/listinfo/python-list


[issue33516] unittest.mock: Add __round__ to supported magicmock methods

2018-05-22 Thread STINNER Victor

STINNER Victor  added the comment:

Thank you Martijn Pieters for the feature request/bug report, and thanks John 
Reese for the implementation!

--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33516] unittest.mock: Add __round__ to supported magicmock methods

2018-05-22 Thread STINNER Victor

STINNER Victor  added the comment:


New changeset 6c4fab0f4b95410a1a964a75dcdd953697eff089 by Victor Stinner (John 
Reese) in branch 'master':
bpo-33516: Add support for __round__ in MagicMock (GH-6880)
https://github.com/python/cpython/commit/6c4fab0f4b95410a1a964a75dcdd953697eff089


--
nosy: +vstinner

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Target WSGI script cannot be loaded as Python module.

2018-05-22 Thread Alexandre Brault
On 2018-05-22 02:29 PM, Νίκος wrote:
> Hello all,
>
> I am trying to run a Bottle script I wrote as a WSGI app and I am getting the 
> following error.
>
> ===
> [Tue May 22 06:49:45.763808 2018] [:error] [pid 24298] [client 
> 46.103.59.37:14500] mod_wsgi (pid=24298): Target WSGI script 
> '/home/nikos/public_html/app.py' cannot be loaded as Python module.
> [Tue May 22 06:49:45.763842 2018] [:error] [pid 24298] [client 
> 46.103.59.37:14500] mod_wsgi (pid=24298): Exception occurred processing WSGI 
> script '/home/nikos/public_html/app.py'.
> [Tue May 22 06:49:45.763872 2018] [:error] [pid 24298] [client 
> 46.103.59.37:14500] Traceback (most recent call last):
> [Tue May 22 06:49:45.763911 2018] [:error] [pid 24298] [client 
> 46.103.59.37:14500]   File "/home/nikos/public_html/app.py", line 4, in 
> 
> [Tue May 22 06:49:45.763951 2018] [:error] [pid 24298] [client 
> 46.103.59.37:14500] import re, os, sys, socket, time, datetime, locale, 
> codecs, random, smtplib, subprocess, geoip2.database, bottle_pymysql
> [Tue May 22 06:49:45.763976 2018] [:error] [pid 24298] [client 
> 46.103.59.37:14500] ImportError: No module named geoip2.database
> ===
>
> He is the relative httpd-vhosts.conf
>
> 
> ServerName superhost.gr
>
> WSGIDaemonProcess public_html user=nikos group=nikos processes=1 threads=5
> WSGIScriptAlias / /home/nikos/public_html/app.py
>
> ProxyPass / http://superhost.gr:5000/
> ProxyPassReverse / http://superhost:5000/
>
>
> 
> WSGIProcessGroup public_html
> WSGIApplicationGroup %{GLOBAL}
> WSGIScriptReloading On
>
> Options -Indexes +IncludesNOEXEC +SymLinksIfOwnerMatch +ExecCGI
>
> AddHandler cgi-script .cgi .py
> AddHandler wsgi-script .wsgi .py
>
> AllowOverride None
> Require all granted
> 
> 
>
>
> Any ideas as to why I am getting the above error although I have python36 
> installed along with all modules? Why can't it find it?
How did you install geoip2? Was it by any chance in a virtual
environment? If it was, you need to tell mod_wsgi to use this virtual
environment; otherwise, it'll use the global environment that probably
doesn't have geoip2 installed
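
For example (only a sketch -- the venv path below is an assumption, not taken
from your setup), WSGIDaemonProcess accepts a python-home option pointing at
the virtual environment:

WSGIDaemonProcess public_html python-home=/home/nikos/venv user=nikos group=nikos processes=1 threads=5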

Alex
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue23188] Provide a C helper function to chain raised (but not yet caught) exceptions

2018-05-22 Thread Eric Snow

Eric Snow  added the comment:

good point :)

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33608] Add a cross-interpreter-safe mechanism to indicate that an object may be destroyed.

2018-05-22 Thread Eric Snow

Eric Snow  added the comment:

As a lesser (IMHO) alternative, we could also modify Py_DECREF
to respect a new "shared" marker of some sort (perhaps relative
to #33607), but that would probably still require one of the
refcount-based solutions (and add a branch to Py_DECREF).

--
versions: +Python 3.8 -Python 3.4

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Target WSGI script cannot be loaded as Python module.

2018-05-22 Thread Νίκος
Hello all,

I am trying to run a Bottle script I wrote as a WSGI app and I am getting the 
following error.

===
[Tue May 22 06:49:45.763808 2018] [:error] [pid 24298] [client 
46.103.59.37:14500] mod_wsgi (pid=24298): Target WSGI script 
'/home/nikos/public_html/app.py' cannot be loaded as Python module.
[Tue May 22 06:49:45.763842 2018] [:error] [pid 24298] [client 
46.103.59.37:14500] mod_wsgi (pid=24298): Exception occurred processing WSGI 
script '/home/nikos/public_html/app.py'.
[Tue May 22 06:49:45.763872 2018] [:error] [pid 24298] [client 
46.103.59.37:14500] Traceback (most recent call last):
[Tue May 22 06:49:45.763911 2018] [:error] [pid 24298] [client 
46.103.59.37:14500]   File "/home/nikos/public_html/app.py", line 4, in 
[Tue May 22 06:49:45.763951 2018] [:error] [pid 24298] [client 
46.103.59.37:14500] import re, os, sys, socket, time, datetime, locale, 
codecs, random, smtplib, subprocess, geoip2.database, bottle_pymysql
[Tue May 22 06:49:45.763976 2018] [:error] [pid 24298] [client 
46.103.59.37:14500] ImportError: No module named geoip2.database
===

He is the relative httpd-vhosts.conf


ServerName superhost.gr

WSGIDaemonProcess public_html user=nikos group=nikos processes=1 threads=5
WSGIScriptAlias / /home/nikos/public_html/app.py

ProxyPass / http://superhost.gr:5000/
ProxyPassReverse / http://superhost:5000/



WSGIProcessGroup public_html
WSGIApplicationGroup %{GLOBAL}
WSGIScriptReloading On

Options -Indexes +IncludesNOEXEC +SymLinksIfOwnerMatch +ExecCGI

AddHandler cgi-script .cgi .py
AddHandler wsgi-script .wsgi .py

AllowOverride None
Require all granted




Any ideas as to why I am getting the above error although I have python36 
installed along with all modules? Why can't it find it?
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue33608] Add a cross-interpreter-safe mechanism to indicate that an object may be destroyed.

2018-05-22 Thread Eric Snow

New submission from Eric Snow :

In order to keep subinterpreters properly isolated, objects
from one interpreter should not be used in C-API calls in
another interpreter.  That is relatively straight-forward
except in one case: indicating that the other interpreter
doesn't need the object to exist any more (similar to
PyBuffer_Release() but more general).  I consider the
following solutions to be the most viable.  Both make use
of refcounts to protect cross-interpreter usage (with incref
before sharing).

1. manually switch interpreters (new private function)
  a. acquire the GIL
  b. if refcount > 1 then decref and release the GIL
  c. switch
  d. new thread (or re-use dedicated one)
  e. decref
  f. kill thread
  g. switch back
  h. release the GIL
2. change pending call mechanism (see Py_AddPendingCall) to
   per-interpreter instead of global (add "interp" arg to
   signature of new private C-API function)
  a. queue a function that decrefs the object
3. new cross-interpreter-specific private C-API function
  a. queue the object for decref (a la Py_AddPendingCall)
 in owning interpreter

I favor #2, since it is more generally applicable.  #3 would
probably be built on #2 anyway.  #1 is relatively inefficient.
With #2, Py_AddPendingCall() would become a simple wrapper
around the new private function.
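
For reference, a rough C sketch of the idea built on today's *global*
pending-call API (the per-interpreter variant proposed in #2 does not exist
yet, so the names below only approximate the intended shape):

static int
decref_pending(void *arg)
{
    /* Runs later under the GIL (with the proposed change, it would run
       in the interpreter that owns the object). */
    Py_DECREF((PyObject *)arg);
    return 0;
}

static void
release_shared_object(PyObject *obj)
{
    /* Instead of calling Py_DECREF directly from the borrowing
       interpreter, queue the decref so it happens on the owning side. */
    Py_AddPendingCall(decref_pending, obj);
}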

--
messages: 317333
nosy: eric.snow, ncoghlan, serhiy.storchaka, vstinner, yselivanov
priority: normal
severity: normal
stage: needs patch
status: open
title: Add a cross-interpreter-safe mechanism to indicate that an object may be 
destroyed.
versions: Python 3.4

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33109] argparse: make new 'required' argument to add_subparsers default to False instead of True

2018-05-22 Thread Serhiy Storchaka

Serhiy Storchaka  added the comment:

Please ignore the last paragraph. It was my mistake, all add_subparsers() 
parameters are keyword-only, and _SubParsersAction is a private class.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33109] argparse: make new 'required' argument to add_subparsers default to False instead of True

2018-05-22 Thread Anthony Sottile

Anthony Sottile  added the comment:

The bug is orthogonal, you can trigger it without the `required=` keyword 
argument via the (currently suggested) monkeypatch workaround which restores 
the pre-3.3 behaviour:

import argparse

parser = argparse.ArgumentParser()
subp = parser.add_subparsers()
subp.add_parser('test')
subp.required = True
parser.parse_args()


$ python3 test.py
Traceback (most recent call last):
  File "test.py", line 7, in <module>
parser.parse_args()
  File 
"/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/argparse.py",
 line 1730, in parse_args
args, argv = self.parse_known_args(args, namespace)
  File 
"/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/argparse.py",
 line 1762, in parse_known_args
namespace, args = self._parse_known_args(args, namespace)
  File 
"/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/argparse.py",
 line 1997, in _parse_known_args
', '.join(required_actions))
TypeError: sequence item 0: expected str instance, NoneType found


Also note that when `dest` is specified it works fine:


import argparse

parser = argparse.ArgumentParser()
subp = parser.add_subparsers(dest='cmd')
subp.add_parser('test')
subp.required = True
parser.parse_args()

$ python3 test.py
usage: test.py [-h] {test} ...
test.py: error: the following arguments are required: cmd

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33607] Explicitly track object ownership (and allocator).

2018-05-22 Thread Eric Snow

New submission from Eric Snow :

When an object is created it happens relative to the current
thread (ergo interpreter) and the current allocator (part of
global state).  We do not track either of these details for
the object.  It may make sense to start doing so (reasons next).

Regarding tracking the interpreter, that originating interpreter
can be thought of as the owner.  Any lifecycle operations should
happen relative to that interpreter.  Furthermore, the object
should be used in C-API calls only in that interpreter (i.e.
when the current thread's Py_ThreadState belongs to that
interpreter).  This hasn't been an issue since currently all
interpreters in the process share the GIL, as well as the fact
that subinterpreters haven't been heavily used historically.
However, the possibility of no longer sharing the GIL suggests
that tracking the owning interpreter (and perhaps even other
"sharing" interpreters) would be important.  Furthermore,
in the last few years subinterpreters have seen increasing usage
(see Openstack Ceph), and knowing the originating interpreter
for an object can be useful there.  Regardless, even in the
single interpreter case knowing the owning interpreter is
important during runtime finalization (which is currently
slightly broken), which impacts CPython embedders.

Regarding the allocator, there used to be just a single global
one that the runtime used from start to finish.  Now the C-API
offers a way to switch the allocator, so there's no guarantee
that the right allocator is used in PyMem_Free().  This has
already had a negative impact on efforts to clean up CPython's
runtime initialization.  It also results in problems during
finalization.  Additionally, we are looking into moving the
allocator from the global runtime state to the per-interpreter
(or even per-thread or per-context) state value.  In that world
it would be essential to know which allocator was used when
creating the object.  There are other possible applications
based on knowing an object's allocator, but I'll stop there.

To sort all this out we would need to track per-object:

* originating allocator (pointer or id)
* owning interpreter (pointer or id)
* (possibly) "sharing" interpreters (linked list?)

Either we'd add 2 pointer-size fields to PyObject or we would
keep a separate hash table (or two) pointing from each object
to the info (similar to how we've considered doing for
refcounts).  To alleviate impact on the common case (not
embedded, single interpreter, same allocator), we could default
to not tracking interpreter/allocator and take a lookup failure
to mean "main interpreter, default allocator".
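
As a purely hypothetical sketch of the per-object information described
above (the names are invented for illustration and are not an existing
CPython structure):

typedef struct {
    PyInterpreterState *owner;      /* interpreter that created the object */
    PyMemAllocatorEx allocator;     /* allocator in use at creation time */
    /* possibly a linked list of "sharing" interpreters */
} _PyObject_OriginInfo;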

--
messages: 317330
nosy: eric.snow, ncoghlan, vstinner
priority: normal
severity: normal
status: open
title: Explicitly track object ownership (and allocator).
versions: Python 3.8

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33109] argparse: make new 'required' argument to add_subparsers default to False instead of True

2018-05-22 Thread Serhiy Storchaka

Serhiy Storchaka  added the comment:

Wouldn't it be better to first fix this bug, and only after that add the 
'required' parameter?

Adding it introduced yet another bug: when passing arguments as positional, the 'help' 
argument will be swallowed. You can add new parameters only after existing 
positional parameters.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33606] Improve logging performance when logger disabled

2018-05-22 Thread Vinay Sajip

New submission from Vinay Sajip :

If a logger is disabled (by setting its disabled attribute to True), the check 
for this is done late in the dispatch of the logging event - during the 
handle() call - rather than in isEnabledFor(), which would short-circuit some 
processing. So the check for logger.disabled should be moved to isEnabledFor().
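
A minimal sketch of the intended behaviour (shown as a Logger subclass for
illustration; the real change would go into logging.Logger.isEnabledFor itself):

import logging

class EarlyDisableLogger(logging.Logger):
    def isEnabledFor(self, level):
        if self.disabled:        # short-circuit before manager/level lookups
            return False
        return super().isEnabledFor(level)

logging.setLoggerClass(EarlyDisableLogger)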

Credit to Abhijit Gadgil for raising this:

https://stackoverflow.com/questions/50453121/logger-disabled-check-much-later-in-python-logging-module-whats-the-rationale

--
assignee: vinay.sajip
components: Library (Lib)
messages: 317328
nosy: vinay.sajip
priority: normal
severity: normal
stage: needs patch
status: open
title: Improve logging performance when logger disabled
type: enhancement
versions: Python 3.8

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33603] Subprocess Thread handles grow with each call and aren't released [Windows]

2018-05-22 Thread Gregory P. Smith

Change by Gregory P. Smith :


--
title: Subprocess Thread handles  grow with each call and aren't released until 
script ends -> Subprocess Thread handles grow with each call and aren't 
released [Windows]

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33593] Support heapq on typed arrays?

2018-05-22 Thread Raymond Hettinger

Raymond Hettinger  added the comment:

As noted by Serhiy, the interaction with the Array type would incur significant 
overhead.  Your fastest approach will be to follow his suggest to first convert 
to a list and then perform heap manipulations.

Marking this as closed.  Thank you for the suggestion.

--
resolution:  -> rejected
stage:  -> resolved
status: open -> closed

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33109] argparse: make new 'required' argument to add_subparsers default to False instead of True

2018-05-22 Thread Anthony Sottile

Anthony Sottile  added the comment:

That's a separate issue (also a bug introduced by the bad 3.3 patch): 
https://bugs.python.org/issue29298

I have an open PR to fix it as well but it has not seen review action: 
https://github.com/python/cpython/pull/3680

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33109] argparse: make new 'required' argument to add_subparsers default to False instead of True

2018-05-22 Thread Serhiy Storchaka

Serhiy Storchaka  added the comment:

I tried to use add_subparsers() with required=True and have found it not usable.

import argparse
parser = argparse.ArgumentParser(prog='PROG')
subparsers = parser.add_subparsers(required=True)
parser_a = subparsers.add_parser('a')
parser_b = subparsers.add_parser('b')
parser.parse_args([])

The result:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/serhiy/py/cpython/Lib/argparse.py", line 1745, in parse_args
args, argv = self.parse_known_args(args, namespace)
  File "/home/serhiy/py/cpython/Lib/argparse.py", line 1777, in parse_known_args
namespace, args = self._parse_known_args(args, namespace)
  File "/home/serhiy/py/cpython/Lib/argparse.py", line 2012, in 
_parse_known_args
', '.join(required_actions))
TypeError: sequence item 0: expected str instance, NoneType found

Seems that not only the default value should be changed, but the whole PR 3027 
should be reverted.

--
nosy: +serhiy.storchaka

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33605] Detect accessing event loop from a different thread outside of _debug

2018-05-22 Thread Hrvoje Nikšić

New submission from Hrvoje Nikšić :

Looking at StackOverflow's python-asyncio tag[1], it appears that it's a very 
common mistake for users to invoke asyncio functions or methods from a thread 
other than the event loop thread. In some cases this happens because the user 
is careless with threads and hasn't read the documentation. But in many cases 
what happens is that a third-party library invoked a callback in a different 
thread without making it transparent that that's what it's doing.

The trouble is that in many cases doing these things, e.g. calling 
loop.call_soon() or loop.create_task() from the wrong thread, will *appear to 
work*. The typical event loop is busy servicing different coroutines, so a 
function or task enqueued without a proper wakeup gets picked up soon enough. 
This is, of course, a disaster waiting to happen because it could easily lead 
to corruption of event loop's data structures. But users don't know that, and 
many of them become aware of the problem only after wondering "why does my code 
start working when I add a coroutine that does nothing but asyncio.sleep(0.1) 
in an infinite loop?" Some may never even fix their code, just assuming that 
asyncio takes a long time to process a new task or something like that.

I suggest that asyncio should be stricter about this error and that methods and 
functions that operate on the event loop, such as call_soon, call_later, 
create_task, ensure_future, and close, should all call _check_thread() even 
when not in debug mode. _check_thread() warns that it "should only be called 
when self._debug == True", hinting at "performance reasons", but that doesn't 
seem justified. threading.get_ident() is efficiently implemented in C, and 
comparing that integer to another cached integer is about as efficient an 
operation as it gets.

The added benefit would be a vast improvement of robustness of asyncio-based 
programs, saving many hours of debugging.
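
For context, a minimal illustration (not taken from any of the linked reports)
of the pattern in question and its thread-safe spelling:

import asyncio
import threading

async def main():
    loop = asyncio.get_event_loop()
    done = asyncio.Event()

    def worker():
        # loop.call_soon(done.set) here may *appear* to work, but it touches
        # the loop from the wrong thread and is only flagged in debug mode.
        loop.call_soon_threadsafe(done.set)   # the correct, thread-safe call

    threading.Thread(target=worker).start()
    await done.wait()

asyncio.get_event_loop().run_until_complete(main())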


[1]
Here is an incomplete list of questions where the users stumbled on this 
problem, and that's only from the last three months or so:

https://stackoverflow.com/questions/49906034/python-asyncio-run-forever-and-tasks
https://stackoverflow.com/questions/49851514/python-websockets-and-gtk-confused-about-asyncio-queue
https://stackoverflow.com/questions/49533612/using-asyncio-loop-reference-in-another-file
https://stackoverflow.com/questions/49093623/strange-behaviour-when-task-added-to-empty-loop-in-different-thread
https://stackoverflow.com/questions/48836285/python-asyncio-event-wait-not-responding-to-event-set
https://stackoverflow.com/questions/48833644/how-to-wait-for-asynchronous-callback-in-the-background-i-e-not-invoked-by-us
https://stackoverflow.com/questions/48695670/running-asynchronous-code-synchronously-in-separate-thread

--
components: asyncio
messages: 317324
nosy: asvetlov, hniksic, yselivanov
priority: normal
severity: normal
status: open
title: Detect accessing event loop from a different thread outside of _debug
type: enhancement
versions: Python 3.7, Python 3.8

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33603] Subprocess Thread handles grow with each call and aren't released until script ends

2018-05-22 Thread Eryk Sun

Eryk Sun  added the comment:

The 2nd example with subprocess.run() creates two threads in the Python 
process, since you're redirecting both stdout and stderr to pipes and run() 
calls communicate(). The first example with subprocess.Popen() shouldn't create 
any threads. In either case, nothing in subprocess should be opening a handle 
for a thread.

Please attach a minimal script that reproduces the problem, preferably running 
a command everyone can test such as "python.exe -V" and preferably with 
shell=False if the problem can be reproduced without the shell. Also, describe 
your Python setup, i.e. the installed distribution and packages. Something 
could be monkey patching the subprocess module.
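
Something along these lines would do (only a sketch of the kind of minimal
reproducer meant here, using shell=False and a harmless command):

import subprocess
import sys

for i in range(100):
    proc = subprocess.run([sys.executable, "-V"],
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    # watch the Python process's handle count in Process Explorer meanwhile
    print(i, proc.stdout or proc.stderr)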

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33604] HMAC default to MD5 marked as to be removed in 3.6

2018-05-22 Thread Matthias Bussonnier

Change by Matthias Bussonnier :


--
keywords: +patch
pull_requests: +6692
stage:  -> patch review

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33604] HMAC default to MD5 marked as to be removed in 3.6

2018-05-22 Thread Matthias Bussonnier

New submission from Matthias Bussonnier :

The HMAC docs say the digestmod=md5 default will be removed in 3.6, but it was not. 

We should likely bump that to 3.8 in the documentation, and actually remove it 
in 3.8.
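
Passing digestmod explicitly is already the portable spelling, so removal would
only affect code relying on the implicit MD5 default. For example:

import hashlib
import hmac

mac = hmac.new(b"secret-key", b"message", digestmod=hashlib.sha256)
print(mac.hexdigest())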

--
messages: 317322
nosy: mbussonn
priority: normal
severity: normal
status: open
title: HMAC default to MD5 marked as to be removed in 3.6
versions: Python 3.6, Python 3.7, Python 3.8

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33109] argparse: make new 'required' argument to add_subparsers default to False instead of True

2018-05-22 Thread Anthony Sottile

Anthony Sottile  added the comment:

Is there then no pathway for actually fixing the bug?  aka how can I get 
`required=True` to be the default.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33531] test_asyncio: test_subprocess test_stdin_broken_pipe() failure on Travis CI

2018-05-22 Thread Terry J. Reedy

Terry J. Reedy  added the comment:

The patch seems to have worked.  The last AppVeyor failure was a day ago, when 
testing the 3.7 backport of the fix.
https://ci.appveyor.com/project/python/cpython/history

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33109] argparse: make new 'required' argument to add_subparsers default to False instead of True

2018-05-22 Thread Ned Deily

Ned Deily  added the comment:

> Considering the huge popularity of these SO questions, I don't think this 
> should be reverted [...]

As I understand it (and, again, I make no claim to be an argparse expert), 
there does not seem to be one absolutely correct answer here; there has to be a 
tradeoff.  If we revert the change in default as in PR 6919, users porting code 
from 2.7 will continue to run into the unfortunate change in behavior 
introduced in 3.3.  But, with the reversion, those users are no worse off than 
they were before: the existing workarounds, like those in the cited SO answers, 
still apply.  And it's a one-time annoyance for them, along with all the other 
changes they may need to make to port to a current Python 3.x.  Whereas, if the 
change is not reverted, then we introduce a new incompatibility to a new class 
of users, that is, those upgrading from Python 3.3 through 3.6 to 3.7, 
generating a new set of SO questions, etc.  That seems to be making a 
less-than-ideal situation worse.  So, as release manager, I continue to think 
that the reversion (PR 6919) should go in to 3.7.0.  (For 3.8 and beyond, it 
 would be great to have at least one core developer take responsibility for 
argparse enhancements.)

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue30940] Documentation for round() is incorrect.

2018-05-22 Thread Serhiy Storchaka

Change by Serhiy Storchaka :


--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33603] Subprocess Thread handles grow with each call and aren't released until script ends

2018-05-22 Thread GranPrego

GranPrego  added the comment:

Process Explorer is showing the handles as belonging to the python executable. 
I can see the cmd process start, then the executable, which terminates cleanly.  
I can see thread handles appearing under the python process, with some 
terminating, but more green than red, hence the increase.  I can post a 
screenshot tomorrow. Thanks

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33593] Support heapq on typed arrays?

2018-05-22 Thread Serhiy Storchaka

Serhiy Storchaka  added the comment:

Workaround:

alist = list(a)
heapq.heapify(alist)
a[:] = alist

And it should be not much slower than using heapq.heapify() directly if it 
could support general sequences. Using it with array.array would add 
significant overhead due to boxing.
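
A small self-contained example of that workaround with array.array (note that
assigning back into the array slice needs an array of the same typecode):

import heapq
from array import array

a = array('i', [5, 1, 4, 2, 3])

alist = list(a)              # copy out to a list, which heapq supports
heapq.heapify(alist)
a[:] = array('i', alist)     # copy the heap layout back into the typed array

print(a)                     # now a valid min-heap: array('i', [1, 2, 4, 5, 3])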

--
nosy: +serhiy.storchaka

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: "Data blocks" syntax specification draft

2018-05-22 Thread Chris Angelico
On Wed, May 23, 2018 at 3:51 AM, bartc  wrote:
> On 22/05/2018 15:25, Chris Angelico wrote:
>>
>> On Tue, May 22, 2018 at 8:25 PM, bartc  wrote:
>>>
>>> Note that Python tuples don't always need a start symbol:
>>>
>>> a = 10,20,30
>>>
>>> assigns a tuple to a.
>>
>>
>> The tuple has nothing to do with the parentheses, except for the
>> special case of the empty tuple. It's the comma.
>
>
> No? Take these:
>
>  a = (10,20,30)
>  a = [10,20,30]
>  a = {10,20,30}
>
> If you print type(a) after each, only one of them is a tuple - the one with
> the round brackets.

And this isn't a tuple either:

import os, sys, math

If you've actually read the other emails in this thread, you'll see
that this has already been said.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue33603] Subprocess Thread handles grow with each call and aren't released until script ends

2018-05-22 Thread Antoine Pitrou

Change by Antoine Pitrou :


--
nosy: +giampaolo.rodola, gregory.p.smith

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: "Data blocks" syntax specification draft

2018-05-22 Thread bartc

On 22/05/2018 15:25, Chris Angelico wrote:

On Tue, May 22, 2018 at 8:25 PM, bartc  wrote:

Note that Python tuples don't always need a start symbol:

a = 10,20,30

assigns a tuple to a.


The tuple has nothing to do with the parentheses, except for the
special case of the empty tuple. It's the comma.


No? Take these:

 a = (10,20,30)
 a = [10,20,30]
 a = {10,20,30}

If you print type(a) after each, only one of them is a tuple - the one 
with the round brackets.


The 10,20,30 in those other contexts doesn't create a tuple, nor does it 
here:


  f(10,20,30)

Or here:

  def g(a,b,c):

Or here in Python 2:

  print 10,20,30

and no doubt in a few other cases. It's just that special case I 
highlighted where an unbracketed sequence of expressions yields a tuple.


The comma is just generally used to separate expressions, it's not 
specific to tuples.


--
bart
--
https://mail.python.org/mailman/listinfo/python-list


[issue33592] Document contextvars C API

2018-05-22 Thread miss-islington

miss-islington  added the comment:


New changeset afec2d583a06849c5080c6cd40266743c8e04b3e by Miss Islington (bot) 
in branch '3.7':
bpo-33592: Document the C API in PEP 567 (contextvars) (GH-7033)
https://github.com/python/cpython/commit/afec2d583a06849c5080c6cd40266743c8e04b3e


--
nosy: +miss-islington

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33593] Support heapq on typed arrays?

2018-05-22 Thread Diego Argueta

Diego Argueta  added the comment:

However I do see your point about the speed.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23188] Provide a C helper function to chain raised (but not yet caught) exceptions

2018-05-22 Thread Serhiy Storchaka

Serhiy Storchaka  added the comment:

There is usually more complex code between PyErr_Fetch() and 
_PyErr_ChainExceptions():

PyObject *exc, *val, *tb, *close_result;
PyErr_Fetch(&exc, &val, &tb);
close_result = _PyObject_CallMethodId(result, &PyId_close, NULL);
_PyErr_ChainExceptions(exc, val, tb);
Py_XDECREF(close_result);

Many exceptions can be raised and silenced or overridden between, but we are 
interesting in chaining the only latest exception (or restoring the original 
exception if all exceptions between were silenced).

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33597] Compact PyGC_Head

2018-05-22 Thread Yury Selivanov

Yury Selivanov  added the comment:

This is such a great idea. +1 from me.

--
nosy: +yselivanov

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33603] Subprocess Thread handles grow with each call and aren't released until script ends

2018-05-22 Thread Eryk Sun

Eryk Sun  added the comment:

The thread handle that CreateProcess returns gets immediately closed in 
Popen._execute_child, so I can't see how this is due to subprocess. Please 
check to make sure these thread handles aren't for threads in your own process. 
Set the lower pane of Process Explorer to show the handle list. For a thread 
handle, the name field shows the executable name and PID of the process to 
which the thread is attached.

--
nosy: +eryksun

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33593] Support heapq on typed arrays?

2018-05-22 Thread Diego Argueta

Diego Argueta  added the comment:

I was referring to the C arrays in the Python standard library: 
https://docs.python.org/3/library/array.html

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33592] Document contextvars C API

2018-05-22 Thread miss-islington

Change by miss-islington :


--
pull_requests: +6691

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33592] Document contextvars C API

2018-05-22 Thread Yury Selivanov

Yury Selivanov  added the comment:


New changeset b2f5f59ae15564b991f3ca4850e6ad28d9faacbc by Yury Selivanov (Elvis 
Pranskevichus) in branch 'master':
bpo-33592: Document the C API in PEP 567 (contextvars) (GH-7033)
https://github.com/python/cpython/commit/b2f5f59ae15564b991f3ca4850e6ad28d9faacbc


--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33565] strange tracemalloc results

2018-05-22 Thread Alexander Mohr

Alexander Mohr  added the comment:

that's not going to affect 
http://pytracemalloc.readthedocs.io/api.html#get_traced_memory.  There is no 
filter for that :)

As to your sum, that's exactly what my original callstack lists:
21 memory blocks: 4.7 KiB

this means 21 blocks were not released, and in this case leaked because nothing 
should be held onto after the first iteration (creating the initial connector 
in the connection pool).  In the head object case that's going to be a new 
connector per iteration; however, the old one should go away.
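
For what it's worth, a minimal sketch of how get_traced_memory() can be sampled
per iteration (do_request below is just a stand-in, not the aiohttp code from
the report):

import tracemalloc

def do_request():
    return bytearray(1024)   # stand-in for the real per-iteration work

tracemalloc.start()
for i in range(5):
    data = do_request()
    current, peak = tracemalloc.get_traced_memory()
    print(f"iteration {i}: current={current} bytes, peak={peak} bytes")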

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33565] strange tracemalloc results

2018-05-22 Thread STINNER Victor

STINNER Victor  added the comment:

Oh. Usually, I strip traces allocated by tracemalloc using filters:

http://pytracemalloc.readthedocs.io/examples.html#pretty-top

snapshot = snapshot.filter_traces((
    tracemalloc.Filter(False, "<frozen importlib._bootstrap>"),
    tracemalloc.Filter(False, "<unknown>"),
))

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23188] Provide a C helper function to chain raised (but not yet caught) exceptions

2018-05-22 Thread Eric Snow

Eric Snow  added the comment:

FTR, see PEP 490 ("Chain exceptions at C level") which proposed implicitly 
chaining exceptions in the PyErr_* API.

While that PEP was rejected (not all exceptions should be chained), it does 
make a good point about the clunkiness of using _PyErr_ChainExceptions():

PyObject *exc, *val, *tb;
PyErr_Fetch(&exc, &val, &tb);
PyErr_Format(ZipImportError, "can't open Zip file: %R", archive);
_PyErr_ChainExceptions(exc, val, tb);

So if we are going to add a public helper function, let's consider adding one 
that simplifies usage.  For instance, how about one that indicates the next 
exception set should chain:

PyErr_ChainNext();
PyErr_Format(ZipImportError, "can't open Zip file: %R", archive);

Or perhaps we should revive PEP 490 with an opt out mechanism for the cases 
where we don't want chaining:

PyErr_NoChainNext();
PyErr_Format(PyExc_RuntimeError, "uh-oh");

--
nosy: +eric.snow, vstinner
versions: +Python 3.8 -Python 3.7

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33597] Compact PyGC_Head

2018-05-22 Thread INADA Naoki

INADA Naoki  added the comment:

In Doc folder:

  make clean
  make PYTHON=../python venv
  /usr/bin/time make html

master:

  113.15user 0.41system 1:55.46elapsed 98%CPU (0avgtext+0avgdata 
205472maxresident)k
  18800inputs+223544outputs (1major+66066minor)pagefaults 0swaps

  111.07user 0.44system 1:51.72elapsed 99%CPU (0avgtext+0avgdata 
205052maxresident)k
  0inputs+223376outputs (0major+65855minor)pagefaults 0swaps

patched:

  109.92user 0.44system 1:50.43elapsed 99%CPU (0avgtext+0avgdata 
195832maxresident)k
  0inputs+223376outputs (0major+63206minor)pagefaults 0swaps

  110.70user 0.40system 1:51.50elapsed 99%CPU (0avgtext+0avgdata 
195516maxresident)k
  0inputs+223376outputs (0major+62723minor)pagefaults 0swaps

It seems reduced 5% memory footprint, and performance difference is very small.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33565] strange tracemalloc results

2018-05-22 Thread Alexander Mohr

Alexander Mohr  added the comment:

I believe your method is flawed: when enabling tracemalloc, it's going to be 
using memory as well to store the traces.  I still believe you need to use the 
method I mentioned; and further, even if we don't take the total memory into 
account, the traces I mentioned need to be explained.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



  1   2   >