[issue39442] from __future__ import annotations makes dataclasses.Field.type a string, not type

2022-04-08 Thread Marco Barisione


Marco Barisione  added the comment:

Actually, sorry, I realise I can pass `include_extras` to `get_type_hints`.
Still, it would be nicer not to have to do that.
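For reference, a minimal sketch of that approach (Python 3.9+), using the dataclass from the snippet elsewhere in this thread:

```python
from __future__ import annotations

import dataclasses
from typing import Annotated, get_type_hints


@dataclasses.dataclass
class C:
    v: Annotated[int, "foo"]


# include_extras=True preserves the Annotated wrapper instead of
# stripping it down to the underlying type.
hints = get_type_hints(C, include_extras=True)
print(hints["v"])               # typing.Annotated[int, 'foo']
print(hints["v"].__metadata__)  # ('foo',)
```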

--

___
Python tracker 
<https://bugs.python.org/issue39442>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue39442] from __future__ import annotations makes dataclasses.Field.type a string, not type

2022-04-08 Thread Marco Barisione


Marco Barisione  added the comment:

This is particularly annoying if you are using `Annotated` with a dataclass.

For instance:
```
from __future__ import annotations

import dataclasses
from typing import Annotated, get_type_hints


@dataclasses.dataclass
class C:
    v: Annotated[int, "foo"]


v_type = dataclasses.fields(C)[0].type
print(repr(v_type))  # "Annotated[int, 'foo']"
print(repr(get_type_hints(C)["v"]))  # <class 'int'>
print(repr(eval(v_type)))  # typing.Annotated[int, 'foo']
```

In the code above, it looks like the only way to get the `Annotated` object, so 
you can get its args, is to use `eval`. The problem is that, in non-trivial 
examples, `eval` is not simple to use, as you need to consider globals and 
locals; see https://peps.python.org/pep-0563/#resolving-type-hints-at-runtime.
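A sketch of what the manual route involves; `resolve_field_type` is a hypothetical helper, not a stdlib API, and it follows the PEP 563 guidance of evaluating the string against the defining module's globals:

```python
from __future__ import annotations

import dataclasses
import sys
from typing import Annotated


@dataclasses.dataclass
class C:
    v: Annotated[int, "foo"]


def resolve_field_type(cls, field):
    # Hypothetical helper: evaluate the stringified annotation against the
    # globals of the module that defined the class (falling back to the
    # current globals), plus the class namespace as locals.
    module_ns = getattr(sys.modules.get(cls.__module__), "__dict__", {})
    globalns = {**globals(), **module_ns}
    return eval(field.type, globalns, dict(vars(cls)))


t = resolve_field_type(C, dataclasses.fields(C)[0])
print(t)  # typing.Annotated[int, 'foo']
```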

--
nosy: +barisione

___
Python tracker 
<https://bugs.python.org/issue39442>
___



[issue45390] asyncio.Task doesn't propagate CancelledError() exception correctly.

2022-02-17 Thread Marco Pagliaricci


Marco Pagliaricci  added the comment:

Andrew,
many thanks for your time solving this issue.
I think your solution is the best to fix this little problem and I agree
with you on backporting.

My Best Regards,
and thanks again.

Marco

On Thu, Feb 17, 2022 at 10:29 AM Andrew Svetlov 
wrote:

>
> Andrew Svetlov  added the comment:
>
> I have a pull request for the issue.
> It doesn't use `Future.set_exception()` but creates a new CancelledError()
> with propagated message.
> The result is the same, except raised exceptions are not comparable by
> `is` check.
> As a benefit, `_cancelled_exc` works with the patch, exc.__context__ is
> correctly set.
>
> The patch is not backported because it changes existing behavior a little.
> I'd like to avoid a situation when third-party code works with Python
> 3.11+, 3.10.3+, and 3.9.11+ only.
>
> --
>
> ___
> Python tracker 
> <https://bugs.python.org/issue45390>
> ___
>

--

___
Python tracker 
<https://bugs.python.org/issue45390>
___



[issue37295] Possible optimizations for math.comb()

2021-10-17 Thread Marco Cognetta


Change by Marco Cognetta :


--
keywords: +patch
nosy: +mcognetta
nosy_count: 6.0 -> 7.0
pull_requests: +27293
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/29020

___
Python tracker 
<https://bugs.python.org/issue37295>
___



[issue45390] asyncio.Task doesn't propagate CancelledError() exception correctly.

2021-10-09 Thread Marco Pagliaricci

Marco Pagliaricci  added the comment:

Chris,
ok, I have modified the snippet of code to better show what I mean.
Still, here the message of the CancelledError exception is lost; but if I
comment line 10 and uncomment line 11, so that I throw a ValueError("TEST"),
the "TEST" string is printed, so the message is not lost.
Again, I find this behavior very counter-intuitive, and it should be VERY
WELL documented in the docs.

Thanks,
M.

On Sat, Oct 9, 2021 at 3:06 PM Chris Jerdonek 
wrote:

>
> Chris Jerdonek  added the comment:
>
> I still don't see you calling asyncio.Task.exception() in your new
> attachment...
>
> --
>
> ___
> Python tracker 
> <https://bugs.python.org/issue45390>
> ___
>

--
Added file: https://bugs.python.org/file50334/task_bug.py

___
Python tracker 
<https://bugs.python.org/issue45390>
___

import asyncio


async def job():
    print("job(): START...")
    try:
        await asyncio.sleep(5.0)
    except asyncio.CancelledError as e:
        print("job(): CANCELLED!", e)
        raise asyncio.CancelledError("TEST")
        #raise ValueError("TEST")
    print("job(): DONE.")


async def cancel_task_after(task, time):
    try:
        await asyncio.sleep(time)
    except asyncio.CancelledError:
        print("cancel_task_after(): CANCELLED!")
    except Exception as e:
        print("cancel_task_after(): Exception!", e.__class__.__name__, e)
    task.cancel("Hello!")


async def main():
    task = asyncio.create_task(job())
    # RUN/CANCEL task.
    try:
        await asyncio.gather(task, cancel_task_after(task, 1.0))
    except asyncio.CancelledError as e:
        try:
            task_exc = task.exception()
        except BaseException as be:
            task_exc = be
        print("In running task, we encountered a cancellation! Exception message is: ", e)
        print("   ^ Task exc is:", task_exc.__class__.__name__, task_exc)
    except Exception as e:
        print("In running task, we got a generic Exception:", e.__class__.__name__, e)
    # GET result.
    try:
        result = task.result()
    except asyncio.CancelledError as e:
        print("Task has been cancelled, exception message is: ", e)
    except Exception as e:
        try:
            task_exc = task.exception()
        except BaseException as be:
            task_exc = be
        print("Task raised generic exception", e.__class__.__name__, e)
        print("  ^ Task exc is:", task_exc.__class__.__name__, task_exc)
    else:
        print("Task result is: ", result)


if __name__ == "__main__":
    asyncio.run(main())




[issue45390] asyncio.Task doesn't propagate CancelledError() exception correctly.

2021-10-09 Thread Marco Pagliaricci

Marco Pagliaricci  added the comment:

Chris,
I'm attaching to this e-mail the code I'm referring to.
As you can see, in line 10, I re-raise the asyncio.CancelledError exception
with a message "TEST".
That message is lost, due to the reasons we've talked about.

My point is that if we comment line 10 and uncomment line 11, so that we raise
a ValueError("TEST") exception instead, the message "TEST" is NOT LOST.
I just find this counter-intuitive and error-prone.

AT LEAST it should be very well specified in the docs.

Regards,
M.

On Sat, Oct 9, 2021 at 2:51 PM Chris Jerdonek 
wrote:

>
> Chris Jerdonek  added the comment:
>
> > 2) Now: if I re-raise the asyncio.CancelledError as-is, I lose the
> message,
> if I call the `asyncio.Task.exception()` function.
>
> Re-raise asyncio.CancelledError where? (And what do you mean by
> "re-raise"?) Call asyncio.Task.exception() where? This isn't part of your
> example, so it's not clear what you mean exactly.
>
> --
>
> ___
> Python tracker 
> <https://bugs.python.org/issue45390>
> ___
>

--
Added file: https://bugs.python.org/file50333/task_bug.py

___
Python tracker 
<https://bugs.python.org/issue45390>
___

import asyncio


async def job():
    print("job(): START...")
    try:
        await asyncio.sleep(5.0)
    except asyncio.CancelledError as e:
        print("job(): CANCELLED!", e)
        #raise asyncio.CancelledError("TEST")
        raise ValueError("TEST")
    print("job(): DONE.")


async def cancel_task_after(task, time):
    try:
        await asyncio.sleep(time)
    except asyncio.CancelledError:
        print("cancel_task_after(): CANCELLED!")
    except Exception as e:
        print("cancel_task_after(): Exception!", e.__class__.__name__, e)
    task.cancel("Hello!")


async def main():
    task = asyncio.create_task(job())
    # RUN/CANCEL task.
    try:
        await asyncio.gather(task, cancel_task_after(task, 1.0))
    except asyncio.CancelledError as e:
        print("In running task, we encountered a cancellation! Exception message is: ", e)
    except Exception as e:
        print("In running task, we got a generic Exception:", e.__class__.__name__, e)
    # GET result.
    try:
        result = task.result()
    except asyncio.CancelledError as e:
        print("Task has been cancelled, exception message is: ", e)
    except Exception as e:
        print("Task raised generic exception", e.__class__.__name__, e)
    else:
        print("Task result is: ", result)


if __name__ == "__main__":
    asyncio.run(main())




[issue45390] asyncio.Task doesn't propagate CancelledError() exception correctly.

2021-10-09 Thread Marco Pagliaricci


Marco Pagliaricci  added the comment:

OK, I see your point.
But I still can't understand one thing and I think it's very confusing:

1) if you see my example, inside the job() coroutine, I get correctly
cancelled with an `asyncio.CancelledError` exception containing my message.
2) Now: if I re-raise the asyncio.CancelledError as-is, I lose the message,
if I call the `asyncio.Task.exception()` function.
3) If I raise a *new* asyncio.CancelledError with a new message, inside the
job() coroutine's `except asyncio.CancelledError:` block, I still lose the
message if I call `asyncio.Task.exception()`.
4) But if I raise another exception, say `raise ValueError("TEST")`, always
from the `except asyncio.CancelledError:` block of the job() coroutine, I
*get* the message!
I get `ValueError("TEST")` by calling `asyncio.Task.exception()`, while I
don't with the `asyncio.CancelledError()` one.

Is this really intended? Sorry, but I still find this very confusing.
Wouldn't it be better for `asyncio.Task.exception()` to return the old
exception (containing the message)?
Or, otherwise, to create a new instance of the exception for *ALL*
exception classes?

Thank you for your time,
My Best Regards,

M.

On Thu, Oct 7, 2021 at 10:25 AM Thomas Grainger 
wrote:

>
> Thomas Grainger  added the comment:
>
> afaik this is intentional https://bugs.python.org/issue31033
>
> --
> nosy: +graingert
>
> ___
> Python tracker 
> <https://bugs.python.org/issue45390>
> ___
>

--

___
Python tracker 
<https://bugs.python.org/issue45390>
___



[issue45390] asyncio.Task doesn't propagate CancelledError() exception correctly.

2021-10-06 Thread Marco Pagliaricci


New submission from Marco Pagliaricci :

I've spotted a little bug in how asyncio.CancelledError() exception is 
propagated inside an asyncio.Task.

Since python 3.9 the asyncio.Task.cancel() method has a new 'msg' parameter, 
that will create an asyncio.CancelledError(msg) exception incorporating that 
message.

The exception is successfully propagated to the coroutine the asyncio.Task is 
running, so the coroutine successfully gets raised an 
asyncio.CancelledError(msg) with the specified message in 
asyncio.Task.cancel(msg) method.

But, once the asyncio.Task is cancelled, it is impossible to retrieve that 
original asyncio.CancelledError(msg) exception with the message, because it 
seems that *a new* asyncio.CancelledError() [without the message] is raised 
when the asyncio.Task.result() or asyncio.Task.exception() methods are called.

I have the feeling that this is just wrong, and that the original message 
specified in asyncio.Task.cancel(msg) should be propagated also when 
asyncio.Task.result() is called.

I'm including a little snippet of code that clearly shows this bug.

I'm using python 3.9.6, in particular:
Python 3.9.6 (default, Aug 21 2021, 09:02:49) 
[GCC 10.2.1 20210110] on linux
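A minimal sketch (Python 3.9+) of the delivery side of this: the msg passed to Task.cancel() does reach the coroutine; the issue is about recovering it afterwards through Task.result()/Task.exception(), which varies and is what the attached script exercises.

```python
import asyncio

seen = []


async def job():
    try:
        await asyncio.sleep(10)
    except asyncio.CancelledError as e:
        # On Python 3.9+, the msg given to Task.cancel() arrives here
        # as the args of the CancelledError raised into the coroutine.
        seen.append(e.args)
        raise


async def main():
    task = asyncio.create_task(job())
    await asyncio.sleep(0)  # let job() start and suspend on its sleep
    task.cancel("Hello!")
    try:
        await task
    except asyncio.CancelledError:
        pass


asyncio.run(main())
print(seen)  # [('Hello!',)]
```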

--
components: asyncio
files: task_bug.py
messages: 403294
nosy: asvetlov, pagliaricci.m, yselivanov
priority: normal
severity: normal
status: open
title: asyncio.Task doesn't propagate CancelledError() exception correctly.
type: behavior
versions: Python 3.9
Added file: https://bugs.python.org/file50328/task_bug.py

___
Python tracker 
<https://bugs.python.org/issue45390>
___



[issue44921] dict subclassing is slow

2021-08-19 Thread Marco Sulla


Marco Sulla  added the comment:

Since Monica is probably on holiday, I'll try to decipher her answer.

Probably, the most problematic function spotted by Monica is update_one_slot. I 
re-quote her explanation:

update_one_slot looks for the parent implementation by trying to find the 
generated wrapper methods through an MRO search.

dict doesn't have generated wrappers for sq_contains and mp_subscript, because 
it provides explicit __contains__ and __getitem__ implementations.

Instead of inheriting sq_contains and mp_subscript, update_one_slot ends up 
giving the subclass sq_contains and mp_subscript implementations that perform 
an MRO search for __contains__ and __getitem__ and call those. This is much 
less efficient than inheriting the C slots directly.

The solution for Monica is to change the behaviour of update_one_slot for these 
cases (no wrappers, C slots directly).

I don't know the implications of this change...

--

___
Python tracker 
<https://bugs.python.org/issue44921>
___



[issue44921] dict subclassing is slow

2021-08-17 Thread Marco Sulla


Marco Sulla  added the comment:

I didn't finish my sentence. I'm sure that if there's a way to turn lemons 
into lemonade, she is **MUCH** more skilled than me at finding one.

--

___
Python tracker 
<https://bugs.python.org/issue44921>
___



[issue44921] dict subclassing is slow

2021-08-17 Thread Marco Sulla


Marco Sulla  added the comment:

Since my knowledge of this is very poor, I informed Monica about the issue. I'm 
quite sure that if there's a way to turn lemons into lemonade :)

--

___
Python tracker 
<https://bugs.python.org/issue44921>
___



[issue44921] dict subclassing is slow

2021-08-15 Thread Marco Sulla


New submission from Marco Sulla :

I asked on SO why subclassing dict makes the subclass much slower in some 
operations. This is the answer by Monica 
(https://stackoverflow.com/a/59914459/1763602):

Indexing and in are slower in dict subclasses because of a bad interaction 
between a dict optimization and the logic subclasses use to inherit C slots. 
This should be fixable, though not from your end.

The CPython implementation has two sets of hooks for operator overloads. There 
are Python-level methods like __contains__ and __getitem__, but there's also a 
separate set of slots for C function pointers in the memory layout of a type 
object. Usually, either the Python method will be a wrapper around the C 
implementation, or the C slot will contain a function that searches for and 
calls the Python method. It's more efficient for the C slot to implement the 
operation directly, as the C slot is what Python actually accesses.

Mappings written in C implement the C slots sq_contains and mp_subscript to 
provide in and indexing. Ordinarily, the Python-level __contains__ and 
__getitem__ methods would be automatically generated as wrappers around the C 
functions, but the dict class has explicit implementations of __contains__ and 
__getitem__, because the explicit implementations 
(https://github.com/python/cpython/blob/v3.8.1/Objects/dictobject.c) are a bit 
faster than the generated wrappers:

static PyMethodDef mapp_methods[] = {
    DICT___CONTAINS___METHODDEF
    {"__getitem__", (PyCFunction)(void(*)(void))dict_subscript,
     METH_O | METH_COEXIST, getitem__doc__},
    ...

(Actually, the explicit __getitem__ implementation is the same function as the 
mp_subscript implementation, just with a different kind of wrapper.)

Ordinarily, a subclass would inherit its parent's implementations of C-level 
hooks like sq_contains and mp_subscript, and the subclass would be just as fast 
as the superclass. However, the logic in update_one_slot 
(https://github.com/python/cpython/blob/v3.8.1/Objects/typeobject.c#L7202) 
looks for the parent implementation by trying to find the generated wrapper 
methods through an MRO search.

dict doesn't have generated wrappers for sq_contains and mp_subscript, because 
it provides explicit __contains__ and __getitem__ implementations.

Instead of inheriting sq_contains and mp_subscript, update_one_slot ends up 
giving the subclass sq_contains and mp_subscript implementations that perform 
an MRO search for __contains__ and __getitem__ and call those. This is much 
less efficient than inheriting the C slots directly.

Fixing this will require changes to the update_one_slot implementation.

Aside from what I described above, dict_subscript also looks up __missing__ for 
dict subclasses, so fixing the slot inheritance issue won't make subclasses 
completely on par with dict itself for lookup speed, but it should get them a 
lot closer.
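The lookup penalty described above is easy to reproduce; a rough sketch (timings vary by machine, and `A` here is just an otherwise-empty dict subclass like the one in the question):

```python
import timeit


class A(dict):
    # Empty subclass: on affected CPython versions, `in` and indexing go
    # through an MRO search instead of the inherited C slots.
    pass


d = dict.fromkeys(range(1000))
a = A(d)

t_dict = timeit.timeit("500 in d", globals={"d": d}, number=200_000)
t_sub = timeit.timeit("500 in a", globals={"a": a}, number=200_000)
print(f"dict: {t_dict:.4f}s  subclass: {t_sub:.4f}s")
```

On affected builds the subclass column is typically noticeably larger for the same workload.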

As for pickling, on the dumps side, the pickle implementation has a dedicated 
fast path 
(https://github.com/python/cpython/blob/v3.8.1/Modules/_pickle.c#L4291) for 
dicts, while the dict subclass takes a more roundabout path through 
object.__reduce_ex__ and save_reduce.

On the loads side, the time difference is mostly just from the extra opcodes 
and lookups to retrieve and instantiate the __main__.A class, while dicts have 
a dedicated pickle opcode for making a new dict. If we compare the disassembly 
for the pickles:

In [26]: pickletools.dis(pickle.dumps({0: 0, 1: 1, 2: 2, 3: 3, 4: 4}))
    0: \x80 PROTO      4
    2: \x95 FRAME      25
   11: }    EMPTY_DICT
   12: \x94 MEMOIZE    (as 0)
   13: (    MARK
   14: K        BININT1    0
   16: K        BININT1    0
   18: K        BININT1    1
   20: K        BININT1    1
   22: K        BININT1    2
   24: K        BININT1    2
   26: K        BININT1    3
   28: K        BININT1    3
   30: K        BININT1    4
   32: K        BININT1    4
   34: u        SETITEMS   (MARK at 13)
   35: .    STOP
highest protocol among opcodes = 4

In [27]: pickletools.dis(pickle.dumps(A({0: 0, 1: 1, 2: 2, 3: 3, 4: 4})))
    0: \x80 PROTO      4
    2: \x95 FRAME      43
   11: \x8c SHORT_BINUNICODE '__main__'
   21: \x94 MEMOIZE    (as 0)
   22: \x8c SHORT_BINUNICODE 'A'
   25: \x94 MEMOIZE    (as 1)
   26: \x93 STACK_GLOBAL
   27: \x94 MEMOIZE    (as 2)
   28: )    EMPTY_TUPLE
   29: \x81 NEWOBJ
   30: \x94 MEMOIZE    (as 3)
   31: (    MARK
   32: K        BININT1    0
   34: K        BININT1    0
   36: K        BININT1    1
   38: K        BININT1    1
   40: K        BININT1    2
   42: K        BININT1    2
   44: K

[issue39940] Micro-optimizations to PySequence_Tuple()

2021-08-05 Thread Marco Sulla


Marco Sulla  added the comment:

Close it, I have no time now :-(

--
resolution:  -> later
stage:  -> resolved
status: pending -> closed

___
Python tracker 
<https://bugs.python.org/issue39940>
___



[issue44585] csv library does not correctly interpret some files

2021-07-08 Thread Marco E.


New submission from Marco E. :

The CSV library does not correctly interpret files in the following format 
(test.csv):

"A" ,"B"  ,"C"
"aa","bbb",""
"a" ,"bb" ,"ccc"
""  ,"b"  ,"cc"


This program:

import csv
from pathlib import Path


def main():
    with Path('test.csv').open('rt') as csv_file:
        csv.register_dialect('my_dialect', quotechar='"', delimiter=',',
                             quoting=csv.QUOTE_ALL, skipinitialspace=True)
        reader = csv.DictReader(csv_file, dialect='my_dialect')
        for row in reader:
            print(row)


if __name__ == '__main__':
    main()


produces the following output:

{'A ': 'aa', 'B  ': 'bbb', 'C': ''}
{'A ': 'a ', 'B  ': 'bb ', 'C': 'ccc'}
{'A ': '  ', 'B  ': 'b  ', 'C': 'cc'}


this instead is the expected result:

{'A': 'aa', 'B': 'bbb', 'C': ''}
{'A': 'a', 'B': 'bb', 'C': 'ccc'}
{'A': '', 'B': 'b', 'C': 'cc'}


why?

Thank you,
Marco
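For what it's worth, the behaviour can be reproduced with `csv.reader` directly: `skipinitialspace` only skips whitespace *after* a delimiter, while spaces between a closing quote and the next delimiter are appended to the field, which is why the keys and values keep their trailing spaces. A minimal sketch:

```python
import csv
import io

# Same shape as the first two lines of test.csv
data = '"A" ,"B"  ,"C"\n"aa","bbb",""\n'

rows = list(csv.reader(io.StringIO(data), skipinitialspace=True))
print(rows[0])  # ['A ', 'B  ', 'C'] -- spaces before the delimiter are kept
print(rows[1])  # ['aa', 'bbb', '']
```

A workaround (my suggestion, not part of the csv API) is to strip the keys and values after reading.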

--
components: Library (Lib)
messages: 397139
nosy: voidloop
priority: normal
severity: normal
status: open
title: csv library does not correctly interpret some files
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue44585>
___



[issue42828] Python readline module

2021-01-05 Thread Marco Franzo


Marco Franzo  added the comment:

So, I use Ubuntu 20.10 and the terminal
is the one distributed with the system.

I think this problem originates in my code here:

def generate_input():
    while True:
        str = input().strip()
        yield helloworld_pb2.Operazione(operazione=str)


I think the line

os.system('stty sane')

can be very useful for those whose shell is left unusable at the end of the 
program.

If I remove `import readline`, I no longer have any problems, but I need the 
features of readline.
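A sketch of the suggested workaround; the atexit hook is an assumption on my part, not something the readline docs prescribe, and 'stty sane' only makes sense on a POSIX terminal:

```python
import atexit
import os


def restore_terminal():
    # Reset the terminal modes that readline may have left behind.
    # Harmless when not attached to a tty (stty just fails quietly).
    os.system("stty sane")


# Run the reset on any normal interpreter exit.
atexit.register(restore_terminal)
print("terminal-restore hook registered")
```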

--
Added file: https://bugs.python.org/file49722/io_console.py

___
Python tracker 
<https://bugs.python.org/issue42828>
___



[issue42828] Python readline module

2021-01-05 Thread Marco Franzo


New submission from Marco Franzo :

It would be better to write this at the end of the program:

os.system('stty sane')

because when you import readline, the console remains unusable at the end of 
the program.

--
assignee: docs@python
components: Documentation
messages: 384379
nosy: docs@python, frenzisys
priority: normal
severity: normal
status: open
title: Python readline module
type: enhancement
versions: Python 3.8

___
Python tracker 
<https://bugs.python.org/issue42828>
___



[issue41374] socket.TCP_* no longer available with cygwin 3.1.6+

2021-01-02 Thread Marco Atzeri


Marco Atzeri  added the comment:

The analysis is correct.
Removing the test for CYGWIN and always including the 

solved the problem building all Python (3.6, 3.7, 3.8) packages.

https://sourceware.org/pipermail/cygwin-apps/2020-December/040845.html
https://sourceware.org/pipermail/cygwin-announce/2020-December/009853.html

The attached patch was used for the build.
A similar one was applied to the rebuild of 2.7.18.

--
nosy: +matzeri
versions: +Python 3.6, Python 3.7
Added file: https://bugs.python.org/file49717/3.6.12-socketmodule.patch

___
Python tracker 
<https://bugs.python.org/issue41374>
___



[issue36964] `python3 -m venv NAME`: virtualenv is not portable

2020-12-02 Thread Marco Sulla


Marco Sulla  added the comment:

The PR will probably be rejected... you can do something like this:

1. in the venv on your machine, run `pip freeze`. This gives you the whole list 
of installed dependencies
2. download all the packages using `pip download`
3. copy all the packages to the cloud machine, create the venv, and install 
them using `pip install $PATH_TO_PACKAGE`

--

___
Python tracker 
<https://bugs.python.org/issue36964>
___



[issue41835] Speed up dict vectorcall creation using keywords

2020-11-01 Thread Marco Sulla


Marco Sulla  added the comment:

I did PGO+LTO... --enable-optimizations --with-lto

--

___
Python tracker 
<https://bugs.python.org/issue41835>
___



[issue41835] Speed up dict vectorcall creation using keywords

2020-10-31 Thread Marco Sulla


Marco Sulla  added the comment:

Well, actually Serhiy is right: the macro benchmarks did not show anything 
significant. Maybe the code can be used in other parts of CPython, for example 
in _pickle, where dicts are loaded. But that would also require exposing, 
maybe internally only, dictresize() and DICT_NEXT_VERSION(). Not sure that's 
desirable.

There's something that I do not understand: the speedup to unpack_sequence. I 
checked the pyperformance code, and it's a microbench for:

a, b = some_sequence

It should *not* be affected by the change. Anyway, I ran the bench another 10 
times, and the lowest value with the CPython code without the PR is never lower 
than 67.7 ns. With the PR, it reaches 53.5 ns. I do not understand why. 
Maybe it affects the creation of the dicts with the local and global vars?

--

___
Python tracker 
<https://bugs.python.org/issue41835>
___



[issue41835] Speed up dict vectorcall creation using keywords

2020-10-30 Thread Marco Sulla


Marco Sulla  added the comment:

Well, following your example, since split dicts seem to no longer be supported, 
I decided to be more drastic. If you look at the last push in PR 22346, I no 
longer check but always resize, so the dict is always combined. This seems to 
be especially good for the "unpack_sequence" bench, even if I do not know what 
it measures:

| chaos                   | 132 ms  | 136 ms  | 1.03x slower | Significant (t=-18.09) |
| crypto_pyaes            | 136 ms  | 141 ms  | 1.03x slower | Significant (t=-11.60) |
| float                   | 133 ms  | 137 ms  | 1.03x slower | Significant (t=-16.94) |
| go                      | 276 ms  | 282 ms  | 1.02x slower | Significant (t=-11.58) |
| logging_format          | 12.3 us | 12.6 us | 1.02x slower | Significant (t=-9.75)  |
| logging_silent          | 194 ns  | 203 ns  | 1.05x slower | Significant (t=-9.00)  |
| logging_simple          | 11.3 us | 11.6 us | 1.02x slower | Significant (t=-12.56) |
| mako                    | 16.5 ms | 17.4 ms | 1.05x slower | Significant (t=-17.34) |
| meteor_contest          | 116 ms  | 120 ms  | 1.04x slower | Significant (t=-25.59) |
| nbody                   | 158 ms  | 166 ms  | 1.05x slower | Significant (t=-12.73) |
| nqueens                 | 107 ms  | 111 ms  | 1.03x slower | Significant (t=-11.39) |
| pickle_pure_python      | 631 us  | 619 us  | 1.02x faster | Significant (t=6.28)   |
| regex_compile           | 206 ms  | 214 ms  | 1.04x slower | Significant (t=-24.24) |
| regex_v8                | 28.4 ms | 26.7 ms | 1.06x faster | Significant (t=10.92)  |
| richards                | 87.8 ms | 90.3 ms | 1.03x slower | Significant (t=-10.91) |
| scimark_lu              | 165 ms  | 162 ms  | 1.02x faster | Significant (t=4.55)   |
| scimark_sor             | 210 ms  | 215 ms  | 1.02x slower | Significant (t=-10.14) |
| scimark_sparse_mat_mult | 6.45 ms | 6.64 ms | 1.03x slower | Significant (t=-6.66)  |
| spectral_norm           | 158 ms  | 171 ms  | 1.08x slower | Significant (t=-29.11) |
| sympy_expand            | 599 ms  | 619 ms  | 1.03x slower | Significant (t=-21.93) |
| sympy_str               | 376 ms  | 389 ms  | 1.04x slower | Significant (t=-23.80) |
| sympy_sum               | 233 ms  | 239 ms  | 1.02x slower | Significant (t=-14.70) |
| telco                   | 7.40 ms | 7.61 ms | 1.03x slower | Significant (t=-10.08) |
| unpack_sequence         | 70.0 ns | 56.1 ns | 1.25x faster | Significant (t=10.62)  |
| xml_etree_generate      | 108 ms  | 106 ms  | 1.02x faster | Significant (t=5.52)   |
| xml_etree_iterparse     | 133 ms  | 130 ms  | 1.02x faster | Significant (t=11.33)  |
| xml_etree_parse         | 208 ms  | 204 ms  | 1.02x faster | Significant (t=9.19)   |

--

___
Python tracker 
<https://bugs.python.org/issue41835>
___



[issue34204] Bump the default pickle protocol in shelve

2020-10-27 Thread Marco Castelluccio


Marco Castelluccio  added the comment:

I've opened https://github.com/python/cpython/pull/22751 to fix this, I know 
there was already a PR, but it seems to have been abandoned.

--

___
Python tracker 
<https://bugs.python.org/issue34204>
___



[issue34204] Bump the default pickle protocol in shelve

2020-10-27 Thread Marco Castelluccio


Change by Marco Castelluccio :


--
nosy: +marco-c
nosy_count: 6.0 -> 7.0
pull_requests: +21928
pull_request: https://github.com/python/cpython/pull/22751

___
Python tracker 
<https://bugs.python.org/issue34204>
___



[issue42141] Speedup various dict inits

2020-10-25 Thread Marco Sulla


Marco Sulla  added the comment:

Well, on second thought I think you're right: there's no significant advantage 
and too much duplicated code.

--
stage:  -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue42141>
___



[issue42141] Speedup various dict inits

2020-10-25 Thread Marco Sulla


Marco Sulla  added the comment:

The fact is that, IMHO, PGO will "falsify" the results, since it's quite 
improbable that the test battery includes a test of creating a dict from 
another dict with a hole. It seems to me that the comparison between the 
normal builds is more significant.

--

___
Python tracker 
<https://bugs.python.org/issue42141>
___



[issue42141] Speedup various dict inits

2020-10-25 Thread Marco Sulla


Marco Sulla  added the comment:

Note that this time I see no slowdown in the macro benchmarks, since I used 
normal builds, not optimized ones. I suppose an optimized build would show a 
slowdown, because the new functions are not in the test battery.

--

___
Python tracker 
<https://bugs.python.org/issue42141>
___



[issue42141] Speedup various dict inits

2020-10-25 Thread Marco Sulla


Marco Sulla  added the comment:

I'm quite sure I didn't reinvent the wheel :), but I think it's a good improvement:

| pathlib              | 35.8 ms | 35.1 ms | 1.02x faster | Significant (t=13.21) |
| scimark_monte_carlo  | 176 ms  | 172 ms  | 1.02x faster | Significant (t=9.48)  |
| scimark_sor          | 332 ms  | 325 ms  | 1.02x faster | Significant (t=11.96) |
| telco                | 11.0 ms | 10.8 ms | 1.03x faster | Significant (t=8.52)  |
| unpickle_pure_python | 525 us  | 514 us  | 1.02x faster | Significant (t=19.97) |
| xml_etree_process    | 132 ms  | 129 ms  | 1.02x faster | Significant (t=17.59) |

--
components: +Interpreter Core
type:  -> performance
versions: +Python 3.10

___
Python tracker 
<https://bugs.python.org/issue42141>
___



[issue42141] Speedup various dict inits

2020-10-24 Thread Marco Sulla


New submission from Marco Sulla :

The PR #22948 is an augmented version of #22346. It also speeds up the creation 
of:

1. dicts from other dicts that are not "perfect" (combined and without holes)
2. fromkeys
3. copies of dicts with many holes
4. dict from keywords, as in #22346

A sample bench:

python -m pyperf timeit --rigorous "dict(o)" -s """
from uuid import uuid4

def getUuid():
    return str(uuid4())

o = {getUuid():getUuid() for i in range(1000)}
delkey = getUuid()
o[delkey] = getUuid()
del o[delkey]
"""

Before #22948:
Mean +- std dev: 35.9 us +- 0.6 us

After:
Mean +- std dev: 26.4 us +- 0.4 us

--
messages: 379540
nosy: Marco Sulla
priority: normal
pull_requests: 21865
severity: normal
status: open
title: Speedup various dict inits

___
Python tracker 
<https://bugs.python.org/issue42141>



[issue41835] Speed up dict vectorcall creation using keywords

2020-10-24 Thread Marco Sulla


Marco Sulla  added the comment:

I commented out sqlalchemy in the requirements.txt of the pyperformance source code, and it worked. I also had to skip tornado:

pyperformance run -r -b,-sqlalchemy_declarative,-sqlalchemy_imperative,-tornado_http -o ../perf_master.json

This is my result:

pyperformance compare perf_master.json perf_dict_init.json -O table | grep Significant
| 2to3            | 356 ms  | 348 ms  | 1.02x faster | Significant (t=7.28)   |
| fannkuch        | 485 ms  | 468 ms  | 1.04x faster | Significant (t=9.68)   |
| pathlib         | 22.5 ms | 22.1 ms | 1.02x faster | Significant (t=13.02)  |
| pickle_dict     | 29.0 us | 30.3 us | 1.05x slower | Significant (t=-92.36) |
| pickle_list     | 4.55 us | 4.64 us | 1.02x slower | Significant (t=-10.87) |
| pyflate         | 735 ms  | 702 ms  | 1.05x faster | Significant (t=6.67)   |
| regex_compile   | 197 ms  | 193 ms  | 1.02x faster | Significant (t=2.81)   |
| regex_v8        | 24.5 ms | 23.9 ms | 1.02x faster | Significant (t=17.63)  |
| scimark_fft     | 376 ms  | 386 ms  | 1.03x slower | Significant (t=-15.07) |
| scimark_lu      | 154 ms  | 158 ms  | 1.03x slower | Significant (t=-12.94) |
| sqlite_synth    | 3.35 us | 3.21 us | 1.04x faster | Significant (t=17.65)  |
| telco           | 6.54 ms | 7.14 ms | 1.09x slower | Significant (t=-8.51)  |
| unpack_sequence | 58.8 ns | 61.5 ns | 1.04x slower | Significant (t=-19.66) |

It's strange that some benchmarks are slower, since the patch only does two additional checks in dict_vectorcall. Maybe they use many little dicts?

@methane:
> Would you implement some more optimization based on your PR to demonstrate 
> your idea?

I already done them, I'll do a PR.

--

___
Python tracker 
<https://bugs.python.org/issue41835>



[issue41835] Speed up dict vectorcall creation using keywords

2020-10-23 Thread Marco Sulla


Marco Sulla  added the comment:

@Mark.Shannon I tried to run pyperformance, but wheel does not work for Python 
3.10. I get the error:

AssertionError: would build wheel with unsupported tag ('cp310', 'cp310', 
'linux_x86_64')

--

___
Python tracker 
<https://bugs.python.org/issue41835>



[issue41835] Speed up dict vectorcall creation using keywords

2020-10-23 Thread Marco Sulla


Marco Sulla  added the comment:

@methane: well, to be honest, I don't see much difference between the two pulls. The major difference is that you merged insertdict_init into dict_merge_init.

But I kept insertdict_init separate on purpose, because this function can be used in other future dedicated functions for creation time only. Furthermore, it's simpler to maintain, since it's quite identical to insertdict.

--

___
Python tracker 
<https://bugs.python.org/issue41835>



[issue41835] Speed up dict vectorcall creation using keywords

2020-10-22 Thread Marco Sulla


Marco Sulla  added the comment:

Another bench:

python -m pyperf timeit --rigorous "dict(ihinvdono='doononon', 
gowwondwon='nwog', bdjbodbob='nidnnpn', nwonwno='vndononon', 
dooodbob='iohiwipwgpw', doidonooq='ndwnnpnpnp', fndionqinqn='ndjboqoqjb', 
nonoeoqgoqb='bdboboqbgoqeb', jdnvonvoddo='nvdjnvndvonoq', 
njnvodnoo='hiehgieba', nvdnvwnnp='wghgihpa', nvfnwnnq='nvdknnnqkm', 
ndonvnipnq='fndjnaobobvob', fjafosboab='ndjnodvobvojb', 
nownwnojwjw='nvknnndnow', niownviwnwnwi='nownvwinvwnwnwj')"

Result without pull:
Mean +- std dev: 486 ns +- 8 ns

Result with pull:
Mean +- std dev: 328 ns +- 4 ns

I compiled both with optimizations and lto.

Some arch info:

python -VV
Python 3.10.0a1+ (heads/master-dirty:dde91b1953, Oct 22 2020, 14:00:51) 
[GCC 10.1.1 20200718]

uname -a
Linux buzz 4.15.0-118-generic #119-Ubuntu SMP Tue Sep 8 12:30:01 UTC 2020 
x86_64 x86_64 x86_64 GNU/Linux

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:Ubuntu 18.04.5 LTS

--

___
Python tracker 
<https://bugs.python.org/issue41835>



[issue42071] Shelve should default to the default Pickle protocol instead of hardcoding version 3

2020-10-18 Thread Marco Castelluccio


Change by Marco Castelluccio :


--
keywords: +patch
pull_requests: +21713
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/22751

___
Python tracker 
<https://bugs.python.org/issue42071>



[issue42071] Shelve should default to the default Pickle protocol instead of hardcoding version 3

2020-10-18 Thread Marco Castelluccio


New submission from Marco Castelluccio :

Shelve currently defaults to Pickle protocol 3, instead of using Pickle's default protocol for the Python version in use.

This way, Shelve's users don't benefit from improvements introduced in newer 
Pickle protocols, unless they notice it and manually pass a newer protocol 
version to shelve.open or the Shelf constructor.
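
Until the default changes, the protocol can be passed explicitly; a minimal sketch (the shelf path below is a hypothetical temp file, not anything from the report):

```python
import os
import pickle
import shelve
import tempfile

# Create the shelf with the interpreter's default pickle protocol instead of
# relying on Shelf's hardcoded fallback to protocol 3.
path = os.path.join(tempfile.mkdtemp(), "shelf")
with shelve.open(path, protocol=pickle.DEFAULT_PROTOCOL) as db:
    db["key"] = [1, 2, 3]

# Reading back needs no protocol argument: pickle auto-detects it.
with shelve.open(path) as db:
    print(db["key"])  # [1, 2, 3]
```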

--
components: Library (Lib)
messages: 378885
nosy: marco-c
priority: normal
severity: normal
status: open
title: Shelve should default to the default Pickle protocol instead of 
hardcoding version 3
type: enhancement
versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue42071>



[issue41901] Added some explaining to pickle errors.

2020-10-02 Thread Marco Sulla


Marco Sulla  added the comment:

I closed it for this reason:

https://github.com/python/cpython/pull/22438#issuecomment-702794261

--
stage:  -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue41901>



[issue41901] Added some explaining to pickle errors.

2020-10-01 Thread Marco Sulla


Marco Sulla  added the comment:

I do not remember the problem I had, but when I experimented with frozendict I got one of these errors. I failed to understand the problem, so I added the additional info.

Maybe adding an assert in debug mode? It will be visible only to devs.

--

___
Python tracker 
<https://bugs.python.org/issue41901>



[issue41901] Added some explaining to pickle errors.

2020-10-01 Thread Marco Sulla


New submission from Marco Sulla :

All pickle error messages in typeobject.c were a generic "cannot pickle 'type' object". Added an explanation for each individual error.

--
components: Interpreter Core
messages: 377747
nosy: Marco Sulla
priority: normal
pull_requests: 21494
severity: normal
status: open
title: Added some explaining to pickle errors.
type: enhancement
versions: Python 3.10

___
Python tracker 
<https://bugs.python.org/issue41901>



[issue41835] Speed up dict vectorcall creation using keywords

2020-09-23 Thread Marco Sulla


Marco Sulla  added the comment:

> `dict(**o)` is not common use case. Could you provide some other benchmarks?

You can do

python -m timeit -n 200 "dict(key1=1, key2=2, key3=3, key4=4, key5=5, 
key6=6, key7=7, key8=8, key9=9, key10=10)"

or with pyperf. In this case, since the dict is small, I observed a speedup of 25%.

--

___
Python tracker 
<https://bugs.python.org/issue41835>



[issue41835] Speed up dict vectorcall creation using keywords

2020-09-22 Thread Marco Sulla


New submission from Marco Sulla :

I've done a PR that speeds up the vectorcall creation of a dict using keyword arguments. In practice, the PR creates insertdict_init(), a specialized version of insertdict. I quote the comment to the function:

Same as insertdict, but specialized for inserting without resizing and for dicts that are populated in a loop and were empty before (see the empty arg). Note that resizing must be done before calling this function. If that is not possible, use insertdict(). Furthermore, ma_version_tag is left unchanged; you have to change it after calling this function (probably at the end of a loop).

This change speeds up the code by up to 30%. Tested with:

python -m timeit -n 2000 --setup "from uuid import uuid4 ; o = {str(uuid4()).replace('-', '') : str(uuid4()).replace('-', '') for i in range(1)}" "dict(**o)"

------
components: Interpreter Core
messages: 377318
nosy: Marco Sulla, inada.naoki
priority: normal
pull_requests: 21398
severity: normal
status: open
title: Speed up dict vectorcall creation using keywords
versions: Python 3.10

___
Python tracker 
<https://bugs.python.org/issue41835>



[issue41740] Improve error message for string concatenation via `sum`

2020-09-07 Thread Marco Paolini


Marco Paolini  added the comment:

I was thinking of just clarifying a bit the error message that results from PyNumber_Add. This won't make the "hot" path slower.

doing something like (not compile tested, sorry)

--- a/Python/bltinmodule.c
+++ b/Python/bltinmodule.c
@@ -2451,8 +2451,13 @@ builtin_sum_impl(PyObject *module, PyObject *iterable, PyObject *start)
         Py_DECREF(result);
         Py_DECREF(item);
         result = temp;
-        if (result == NULL)
+        if (result == NULL) {
+            if (PyUnicode_Check(item) || PyBytes_Check(item) || PyByteArray_Check(item))
+                PyErr_SetString(PyExc_TypeError,
+                    "sum() can't sum bytes, strings or byte-arrays [use .join(seq) instead]");
             break;
+        }
     }
     Py_DECREF(iter);
     return result;

--

___
Python tracker 
<https://bugs.python.org/issue41740>



[issue41740] string concatenation via `sum`

2020-09-07 Thread Marco Paolini


Marco Paolini  added the comment:

also worth noting: the start argument is type-checked instead. Maybe we could apply the same check to the items of the iterable?

python3 -c "print(sum(('a', 'b', 'c'), start='d'))"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
TypeError: sum() can't sum strings [use ''.join(seq) instead]


see 
https://github.com/python/cpython/blob/c96d00e88ead8f99bb6aa1357928ac4545d9287c/Python/bltinmodule.c#L2310

--

___
Python tracker 
<https://bugs.python.org/issue41740>



[issue41740] string concatenation via `sum`

2020-09-07 Thread Marco Paolini


Marco Paolini  added the comment:

This happens because the default value for the start argument is zero, hence the first operation is `0 + 'a'`.
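
A quick demonstration of both failure modes (plain CPython behavior):

```python
# With the default start=0, the failure comes from the first addition 0 + 'a',
# so the user gets a generic operand-type error instead of the ''.join() hint:
try:
    sum(('a', 'b', 'c'))
except TypeError as e:
    print(e)  # unsupported operand type(s) for +: 'int' and 'str'

# Only when start itself is a string does sum() emit its dedicated message:
try:
    sum(('a', 'b', 'c'), 'd')
except TypeError as e:
    print(e)  # sum() can't sum strings [use ''.join(seq) instead]
```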

--
nosy: +mpaolini

___
Python tracker 
<https://bugs.python.org/issue41740>



[issue41472] webbrowser uses deprecated env variables to detect desktop type

2020-08-04 Thread Marco Trevisan


Change by Marco Trevisan :


--
keywords: +patch
pull_requests: +20875
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/21731

___
Python tracker 
<https://bugs.python.org/issue41472>



[issue41472] webbrowser uses deprecated env variables to detect desktop type

2020-08-04 Thread Marco Trevisan


New submission from Marco Trevisan :

Webbrowser uses env variables such as GNOME_DESKTOP_SESSION_ID that have been 
dropped by GNOME in recent releases
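
For reference, the modern replacement is `XDG_CURRENT_DESKTOP`, a colon-separated list of desktop names; a detection sketch (not the actual webbrowser code, and `desktop_is` is a hypothetical helper name):

```python
import os

def desktop_is(name: str) -> bool:
    """True if name appears in the XDG_CURRENT_DESKTOP list (case-insensitive)."""
    current = os.environ.get("XDG_CURRENT_DESKTOP", "")
    return name.lower() in (part.lower() for part in current.split(":") if part)

# e.g. under Ubuntu GNOME, XDG_CURRENT_DESKTOP is "ubuntu:GNOME",
# so both desktop_is("ubuntu") and desktop_is("gnome") hold.
```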

--
components: Library (Lib)
messages: 374806
nosy: Trevinho
priority: normal
severity: normal
status: open
title: webbrowser uses deprecated env variables to detect desktop type
type: enhancement
versions: Python 3.10

___
Python tracker 
<https://bugs.python.org/issue41472>



[issue34624] -W option and PYTHONWARNINGS env variable does not accept module regexes

2020-07-08 Thread Marco Paolini


Marco Paolini  added the comment:

hello Thomas,

do you need any help fixing the conflicts in your PR?


even if Lib/warnings.py changed a little in the last 2 years, your PR is still 
good!

--
nosy: +mpaolini

___
Python tracker 
<https://bugs.python.org/issue34624>



[issue41185] lib2to3 generation of pickle files is racy

2020-07-01 Thread Marco Barisione


New submission from Marco Barisione :

The generation of pickle files in load_grammar in lib2to3/pgen2/driver.py is 
racy as other processes may end up reading a half-written pickle file.

This is reproducible with the command line tool, but it's easier to reproduce 
by importing lib2to3. You just need different processes importing lib2to3 at 
the same time to make this happen, see the attached reproducer.

I tried with Python 3.9 for completeness and, while it happens there as well, it seems to be less frequent on my computer than when using Python 3.6 (2% failure rate instead of 50% failure rate).
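
The usual fix for this class of race (a sketch of the pattern, not the actual lib2to3 code) is to write the pickle to a temporary file in the target directory and atomically rename it into place, so concurrent readers only ever see either no file or a complete one:

```python
import os
import pickle
import tempfile

def atomic_pickle_dump(obj, path):
    """Pickle obj to path without ever exposing a half-written file."""
    # The temp file must be in the same directory (same filesystem) as path
    # so that os.replace() is an atomic rename rather than a copy.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
    try:
        with os.fdopen(fd, "wb") as f:
            pickle.dump(obj, f)
        os.replace(tmp, path)  # atomic on both POSIX and Windows
    except BaseException:
        os.unlink(tmp)
        raise
```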

--
components: 2to3 (2.x to 3.x conversion tool)
files: pool.py
messages: 372760
nosy: barisione
priority: normal
severity: normal
status: open
title: lib2to3 generation of pickle files is racy
versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9
Added file: https://bugs.python.org/file49284/pool.py

___
Python tracker 
<https://bugs.python.org/issue41185>



[issue39940] Micro-optimizations to PySequence_Tuple()

2020-03-11 Thread Marco Sulla


New submission from Marco Sulla :

This is a little PR with some micro-optimizations to the PySequence_Tuple() function. Mainly, it adds a support variable new_n_tmp_1 instead of reassigning newn multiple times.

--
components: Interpreter Core
messages: 363974
nosy: Marco Sulla
priority: normal
pull_requests: 18296
severity: normal
status: open
title: Micro-optimizations to PySequence_Tuple()
type: performance
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue39940>



[issue39842] partial_format()

2020-03-04 Thread Marco Sulla


Marco Sulla  added the comment:

@Eric V. Smith: thank you for your effort, but I'll never use an API that is marked as private and is, furthermore, undocumented.

--

___
Python tracker 
<https://bugs.python.org/issue39842>



[issue39842] partial_format()

2020-03-04 Thread Marco Sulla


Marco Sulla  added the comment:

> What would "{} {}".partial_format({}) return?
`str.partial_format()` was proposed exactly to avoid such tricks.

> It is not possible to implement a "safe" variant of str.format(),
> because in difference to Template it can call arbitrary code

If you read the documentation of `Template.safe_substitute()`, you can see that that function is not safe at all either.

But Python, for example, does not implement private class attributes, because Python is for adult and conscientious people, no?

--

___
Python tracker 
<https://bugs.python.org/issue39842>



[issue39848] Warning: 'classifiers' should be a list, got type 'tuple'

2020-03-04 Thread Marco Sulla


Change by Marco Sulla :


--
resolution:  -> duplicate
stage:  -> resolved
status: open -> closed
type:  -> behavior

___
Python tracker 
<https://bugs.python.org/issue39848>



[issue19610] Give clear error messages for invalid types used for setup.py params (e.g. tuple for classifiers)

2020-03-04 Thread Marco Sulla


Marco Sulla  added the comment:

This is IMHO broken.

1. _ensure_list() allows strings because, as the documentation says, they are split in finalize_options(). But finalize_options() only splits keywords and platforms. It does _not_ split classifiers.

2. there's no need for keywords, platforms and classifiers to be lists. keywords and platforms can be any iterable, and classifiers can be any non-text-like iterable.

Indeed, keywords are written to file using ','.join(), and platforms and classifiers are written using DistributionMetadata._write_list(). Both accept any iterable, so I do not understand why such a strict requirement.

------
nosy: +Marco Sulla

___
Python tracker 
<https://bugs.python.org/issue19610>



[issue39842] partial_format()

2020-03-04 Thread Marco Sulla


Marco Sulla  added the comment:

> Do you have some concrete use case for this?

Yes, for EWA:
https://marco-sulla.github.io/ewa/

Since it's a code generator, it uses templates a lot, and many times I have felt the need for a partial substitution. In the end I solved it with some ugly tricks.

Furthermore, since the method exists in the stdlib for `string.Template`, I suppose it was created because it was of some utility.

--

___
Python tracker 
<https://bugs.python.org/issue39842>



[issue39848] Warning: 'classifiers' should be a list, got type 'tuple'

2020-03-04 Thread Marco Sulla


New submission from Marco Sulla :

I got this warning. I suppose that `distutils` can use any iterable.

--
components: Distutils
messages: 363354
nosy: Marco Sulla, dstufft, eric.araujo
priority: normal
severity: normal
status: open
title: Warning: 'classifiers' should be a list, got type 'tuple'
versions: Python 3.8

___
Python tracker 
<https://bugs.python.org/issue39848>



[issue39820] Bracketed paste mode for REPL

2020-03-04 Thread Marco Sulla


Marco Sulla  added the comment:

IMHO such a feature is useful for sysops who do not have a graphical interface, as on a Debian system without X. That's why vi is (unluckily) still very popular in 2020. IDLE can't be used in these cases.

Windows users can't log in remotely without a GUI, so the feature need not be implemented for such platforms, since there are builtin solutions (IDLE)

--

___
Python tracker 
<https://bugs.python.org/issue39820>



[issue39842] partial_format()

2020-03-03 Thread Marco Sulla


New submission from Marco Sulla :

In the `string` module there's a very little known class, `Template`. It implements a very simple template, but it has an interesting method: `safe_substitute()`.

`safe_substitute()` permits you to not fill the entire Template at one time. On the contrary, it substitutes the placeholders that are passed and leaves the others untouched.

I think it could be useful to have a similar method for the format minilanguage. I propose a partial_format() method.

=== WHY I think this is useful? ===

This way, you can create subtemplates from a main template. You may want to use the template for creating a bunch of strings, all of them with the same value for some placeholders but different values for other ones. This way you do not have to reuse the same main template and substitute the unchanging placeholders every time.

`partial_format()` should act like `safe_substitute()`: if some placeholder is missing a value, no error is raised. On the contrary, the placeholder is left untouched.

Some example:

>>> "{} {}".partial_format(1)
'1 {}'


>>> "{x} {a}".partial_format(a="elephants")
'{x} elephants'

>>> "{:-f} and {:-f} nights".partial_format(1000)
'1000 and {:-f} nights'
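
For named fields only, something close to this can already be emulated with `str.format_map()` and a dict whose `__missing__` re-emits the placeholder (a sketch; it does not handle positional fields or format specs like `{:-f}` in the examples above):

```python
class Partial(dict):
    def __missing__(self, key):
        # Put the placeholder back so a later format pass can fill it.
        return "{" + key + "}"

template = "{x} {a}"
step1 = template.format_map(Partial(a="elephants"))
print(step1)                           # {x} elephants
print(step1.format_map(Partial(x=3)))  # 3 elephants
```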

--
components: Interpreter Core
messages: 363317
nosy: Marco Sulla
priority: normal
severity: normal
status: open
title: partial_format()
type: enhancement
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue39842>



[issue39820] Bracketed paste mode for REPL

2020-03-03 Thread Marco Sulla


Marco Sulla  added the comment:

Excuse me, but my original "holistic" proposal was rejected and it was suggested that I propose only relevant changes, one per issue. Now you say exactly the contrary. I feel a bit confused.

PS: yes, I can, and I do, use IPython. But IMHO IPython does too many things and its design is not very pythonic. Bracketed paste mode is a good feature, and I think the REPL will be much more useful if it implements it.

On the contrary, if you don't think IPython is good, pythonic and essential, I suppose there's no problem in substituting the REPL with IPython in CPython core itself.

--
nosy: +eryksun, steven.daprano, terry.reedy

___
Python tracker 
<https://bugs.python.org/issue39820>



[issue39697] Failed to build with --with-cxx-main=g++-9.2.0

2020-03-03 Thread Marco Sulla


Marco Sulla  added the comment:

I agree with Pablo Galindo Salgado: https://bugs.python.org/issue35912#msg334942

The "quick and dirty" solution is to change MAINCC to CC, for _testembed.c AND 
python.c (g++ fails with both).

After that, _testembed.c and python.c should be changed so they can be compiled 
with a c++ compiler, and a system test should be added.

Anyway, I found the original patch:
https://bugs.python.org/file6816/cxx-main.patch

In the original patch, the README contained detailed information. I think these 
informations could be restored, maybe in ./configure --help

Anyway, I have a question. In README, it's stated:

There are platforms that do not require you to build Python
with a C++ compiler in order to use C++ extension modules.
E.g., x86 Linux with ELF shared binaries and GCC 3.x, 4.x is such
a platform.

All x86 platforms? Also x86-64? And what does it means "Linux with ELF"? It 
means that Linux has shared libraries or that Python is compiled with 
--enable-shared? And what it means gcc 3 and 4? It means gcc 3+?

--

___
Python tracker 
<https://bugs.python.org/issue39697>



[issue39820] Bracketed paste mode for REPL

2020-03-03 Thread Marco Sulla


Marco Sulla  added the comment:

Please read the message of Terry J. Reed: 
https://bugs.python.org/issue38747#msg356345 

I quote the relevant part below

> Skipping the rest of your post, I will just restate why I closed this
> issue.
> 
> 1. It introduces too many features not directly related.  The existing
> unix-only completions uses two modules.  I suspect some of the other
> features would also need new modules.  (But Marco, please don't rush 
> to immediately open 8 new issues.)

--

___
Python tracker 
<https://bugs.python.org/issue39820>



[issue39820] Bracketed paste mode for REPL

2020-03-02 Thread Marco Sulla


Marco Sulla  added the comment:

> Is this even possible in a plain text console?

Yes. See Jupyter Console (aka IPython).

--

___
Python tracker 
<https://bugs.python.org/issue39820>



[issue39820] Bracketed paste mode for REPL

2020-03-01 Thread Marco Sulla


New submission from Marco Sulla :

I suggest to add an implementation of bracketed paste mode in the REPL.

Currently if you, for example, copy & paste a piece of Python code to see if it 
works, if the code have a blank line without indentation and the previous and 
next line are indented, REPL raises an error.
If you create a .py, paste the same code and run it with the python 
interpreter, no error is raised, since the syntax is legit.

Bracketed paste mode is implemented in many text editors, as vi.
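
For context, the mechanism being proposed is a terminal (xterm) extension, not something CPython implements today: the application prints `ESC [ ? 2004 h` to enable the mode, and the terminal then wraps every paste in `ESC [ 200 ~` / `ESC [ 201 ~` markers, so the reader can treat the whole paste as one unit. A minimal sketch of unwrapping such input (`unwrap_paste` is a hypothetical helper):

```python
PASTE_BEGIN = "\x1b[200~"
PASTE_END = "\x1b[201~"

def unwrap_paste(raw: str):
    """Return (is_paste, text), stripping bracketed-paste markers if present."""
    if raw.startswith(PASTE_BEGIN) and raw.endswith(PASTE_END):
        return True, raw[len(PASTE_BEGIN):-len(PASTE_END)]
    return False, raw

# A pasted snippet containing a blank unindented line stays one unit,
# which is exactly the case that trips up the line-by-line REPL:
snippet = "if x:\n    y = 1\n\nprint(y)"
ok, text = unwrap_paste(PASTE_BEGIN + snippet + PASTE_END)
print(ok, text == snippet)  # True True
```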

--
components: Interpreter Core
messages: 363109
nosy: Marco Sulla
priority: normal
severity: normal
status: open
title: Bracketed paste mode for REPL
type: enhancement
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue39820>



[issue39697] Failed to build with --with-cxx-main=g++-9.2.0

2020-03-01 Thread Marco Sulla


Marco Sulla  added the comment:

Furthermore, there is one thing I have not understood: if I understood correctly, --with-cxx-main is used on _some_ platforms that have problems with C++ extensions. What platforms? Is there somewhere a unit test that checks whether Python compiled on one of these platforms with --with-cxx-main= works, and whether a C++ extension works with this build?

--

___
Python tracker 
<https://bugs.python.org/issue39697>



[issue39697] Failed to build with --with-cxx-main=g++-9.2.0

2020-03-01 Thread Marco Sulla


Marco Sulla  added the comment:

Okay... if I have understood well, the problem is with C++ extensions.

Some questions:

1. does this problem still exist?
2. if yes, maybe Python has to wrap python.c and _testembed.c so they can also be compiled with a C++ compiler?
3. isn't --with-cxx-main somewhat misleading? There's no documentation, and I interpreted it as "the _main_ compiler for C++", while it means "the compiler for main()". Should I suggest (maybe in another issue) deprecating it in favour of --with-mainfun-compiler?

--

___
Python tracker 
<https://bugs.python.org/issue39697>



[issue39813] test_ioctl skipped -- Unable to open /dev/tty

2020-03-01 Thread Marco Sulla


Marco Sulla  added the comment:

OS: Lubuntu 18.04.4

Steps to reproduce:

sudo apt-get install git libbz2-dev liblzma-dev uuid-dev libffi-dev 
libsqlite3-dev libreadline-dev libssl-dev libgdbm-dev libgdbm-compat-dev tk-dev 
libncurses5-dev
git clone https://github.com/python/cpython.git
cd cpython
CC=gcc-9 CXX=g++-9 ./configure --enable-optimizations --with-lto
make -j 4
make test

marco@buzz:~/sources/cpython_test$ python3.9
Python 3.9.0a0 (heads/master-dirty:d8ca2354ed, Oct 30 2019, 20:25:01) 
[GCC 9.2.1 20190909] on linux

--

___
Python tracker 
<https://bugs.python.org/issue39813>



[issue39697] Failed to build with --with-cxx-main=g++-9.2.0

2020-03-01 Thread Marco Sulla


Marco Sulla  added the comment:

Mmmm... wait a moment. It seems the behavior is intended:

https://bugs.python.org/issue1324762

I quote:


The patch contains the following changes:
[...]
2) The compiler used to translate python's main() function is 
stored in the configure / Makefile variable MAINCC. By 
default, MAINCC=$(CC). [...] If 
--with-cxx-main= is on the configure command 
line, then MAINCC is set to that compiler.


Honestly I have _no idea_ why this change was made. Unfortunately, the link to the discussion is broken.

--
nosy: +cludwig

___
Python tracker 
<https://bugs.python.org/issue39697>



[issue39697] Failed to build with --with-cxx-main=g++-9.2.0

2020-03-01 Thread Marco Sulla


Marco Sulla  added the comment:

https://github.com/python/cpython/pull/18721

--

___
Python tracker 
<https://bugs.python.org/issue39697>



[issue39697] Failed to build with --with-cxx-main=g++-9.2.0

2020-03-01 Thread Marco Sulla


Change by Marco Sulla :


--
keywords: +patch
pull_requests: +18079
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/18721

___
Python tracker 
<https://bugs.python.org/issue39697>



[issue39697] Failed to build with --with-cxx-main=g++-9.2.0

2020-03-01 Thread Marco Sulla


Marco Sulla  added the comment:

The problem is here:

Programs/_testembed.o: $(srcdir)/Programs/_testembed.c
	$(MAINCC) -c $(PY_CORE_CFLAGS) -o $@ $(srcdir)/Programs/_testembed.c

`MAINCC` in my Makefile is `g++-9`. Probably, MAINCC is set to the value of `--with-cxx-main`, if specified.

I replaced `MAINCC` with `CC` on this line, and it works.

--

___
Python tracker 
<https://bugs.python.org/issue39697>



[issue39813] test_ioctl skipped -- Unable to open /dev/tty

2020-03-01 Thread Marco Sulla


New submission from Marco Sulla :

During `make test`, I get the error in the title.

(venv_3_9) marco@buzz:~/sources/cpython_test$ ll /dev/tty
crw-rw-rw- 1 root tty 5, 0 Mar  1 15:24 /dev/tty

--
components: Tests
messages: 363063
nosy: Marco Sulla
priority: normal
severity: normal
status: open
title: test_ioctl skipped -- Unable to open /dev/tty
type: compile error
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue39813>



[issue39788] Exponential notation should return an int if it can

2020-02-29 Thread Marco Sulla


Marco Sulla  added the comment:

> >>> int(1e100)
> 10000000000000000159028911097599180468360808563945281389781327557747838772170381060813469985856815104

Oh my God... I'm just more convinced than before :-D

> Ya, this change will never be made - give up gracefully :-)

Well, even if it's Tim Peters himself who asks it of me :-) I can't.

--

___
Python tracker 
<https://bugs.python.org/issue39788>
___



[issue39788] Exponential notation should return an int if it can

2020-02-29 Thread Marco Sulla


Marco Sulla  added the comment:

All the examples you mentioned seem to me to fix code, instead of breaking it.

About 1e300**1, it's not a bug at all. No one can stop you from filling your RAM 
in many other ways :-D

About conventions, it does not seem to me that Python cares very much about other 
languages when the result that is more natural for ordinary people differs from 
the one consolidated among devs. See `1 / 2 == 0.5`, for example.

> But by your own feature request, this would return an int and your 
"feature" would bite you

You're citing the *optional* extra to the original idea. We can agree it is not 
a good addition at all.

I continue to think that nE+m, where n and m are integers, should return an 
integer. If this can break old code, I'm the first to think it should not be 
implemented, but I don't see any problem (yet).

--

___
Python tracker 
<https://bugs.python.org/issue39788>
___



[issue39788] Exponential notation should return an int if it can

2020-02-29 Thread Marco Sulla


Marco Sulla  added the comment:

Sorry, but I can't figure out what code this change could break. Integers are 
implicitly converted to floats in operations with floats. How can this change 
break old code?

> if you are worried about the performance

No, I'm worried about the expectations of coders.
Personally, I expected that 1E2 returned an integer. And this is not true.
If I wanted a float, I'd have written 1.0E2. The fact that the exponential 
notation always returns a float is really misleading, IMHO.

--

___
Python tracker 
<https://bugs.python.org/issue39788>
___



[issue39788] Exponential notation should return an int if it can

2020-02-28 Thread Marco Sulla


New submission from Marco Sulla :

(venv_3_9) marco@buzz:~/sources/python-frozendict$ python
Python 3.9.0a0 (heads/master-dirty:d8ca2354ed, Oct 30 2019, 20:25:01) 
[GCC 9.2.1 20190909] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 1E9
>>> type(a)


IMHO if the exponent is positive, and the  "base number" (1 in the example) is 
an integer, the result should be an integer.

Optionally, also if the "base number" has a number of decimal places <= the 
exponent, the result should be an integer. Example:

1.25E2 == 125

If the user wants a float, it can write

1.2500E2 == 125.0

--
components: Interpreter Core
messages: 362918
nosy: Marco Sulla
priority: normal
severity: normal
status: open
title: Exponential notation should return an int if it can
type: enhancement
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue39788>
___



[issue39784] Tuple comprehension

2020-02-28 Thread Marco Sulla


New submission from Marco Sulla :

I think a tuple comprehension could be very useful.

Currently, the only way to efficiently create a tuple from a comprehension is 
to create a list comprehension (generator comprehensions are slower) and 
convert it with `tuple()`.

A tuple comprehension will do exactly the same thing, but without the creation 
of the intermediate list.

IMHO a tuple comprehension can be very useful, because:

1. there are many cases in which you create a list with a comprehension, but 
you'll never change it later. You could simply convert it with `tuple()`, but 
it will require more time
2. tuples use less memory than lists
3. tuples can be interned

As syntax, I propose 

(* expr for x in iterable *)

with absolutely no blank character between the character ( and the *, and the 
same for ).

Well, I know, it's a bit strange syntax... but () are already taken by 
generator comprehensions. Furthermore, the * resembles a snowflake, and tuples 
are a sort of "frozenlists".
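For reference, the two existing routes described above can be compared directly; a rough sketch using `timeit` (the numbers are illustrative only and vary by machine and build):

```python
import timeit

# Route 1: intermediate list comprehension, then tuple()
t_list = timeit.timeit("tuple([x * 2 for x in range(1000)])", number=5_000)

# Route 2: generator expression fed to tuple() -- no intermediate list,
# but typically slower because of per-item generator overhead
t_gen = timeit.timeit("tuple(x * 2 for x in range(1000))", number=5_000)

print(f"via list comp: {t_list:.4f}s  via genexp: {t_gen:.4f}s")
```

Both routes produce identical tuples; the proposal is about removing the intermediate list while keeping the list-comprehension speed.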

--
components: Interpreter Core
messages: 362888
nosy: Marco Sulla
priority: normal
severity: normal
status: open
title: Tuple comprehension
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue39784>
___



[issue39698] asyncio.sleep() does not adhere to time.sleep() behavior for negative numbers

2020-02-26 Thread Marco Sulla


Marco Sulla  added the comment:

> I also distinctly remember seeing code (and writing such code myself) that 
> performs computation on timeouts and does not care if the end value goes 
> below 0.

That is not a good statistic. Frankly, we can't measure the impact of the 
change from these considerations. And furthermore, is `asyncio.sleep()` really 
used that often, testing and mocking apart? I doubt it.

> we always try to have a very good explanation "why" we want to bother 
> ourselves and users to break backwards compat.

Coherence and unhiding mistakes are *very* strong points.

That said, I'm not so much interested in practice. Do as you wish. The problem 
is I always considered Python a very elegant programming language, and this 
behavior is not elegant at all. But, hey, amen.

--

___
Python tracker 
<https://bugs.python.org/issue39698>
___



[issue39698] asyncio.sleep() does not adhere to time.sleep() behavior for negative numbers

2020-02-26 Thread Marco Sulla


Change by Marco Sulla :


--
resolution: not a bug -> rejected

___
Python tracker 
<https://bugs.python.org/issue39698>
___



[issue34396] Certain methods that heap allocated subtypes inherit suffer a 50-80% performance penalty

2020-02-26 Thread Marco Sulla


Marco Sulla  added the comment:

I asked why on StackOverflow, and an user seemed to find the reason. The 
problem for him/her is in `update_one_slot()`. 

`dict` implements directly `__contains__()` and `__getitem__()`. Usually, 
`sq_contains` and `mp_subscript` are wrapped to implement `__contains__()` and 
`__getitem__()`, but this way `dict` is a little faster.

The problem is that `update_one_slot()` searches for the wrappers. If it does 
not find them, it does not inherit the `__contains__()` and `__getitem__()` of 
the class, but creates `__contains__()` and `__getitem__()` functions that do 
an MRO search and call the superclass method. This is why `__contains__()` and 
`__getitem__()` of `dict` subclasses are slower.


Is it possible to modify `update_one_slot()` so that, if no wrapper is found, 
the explicit implementation is inherited?

SO answer: https://stackoverflow.com/a/59914459/1763602
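The slowdown described above is easy to observe from Python; a minimal sketch (the class name is mine, and absolute timings vary by machine and build):

```python
import timeit

class SubDict(dict):
    """A subclass that adds nothing: lookups still take the slower slot path."""

d = dict.fromkeys(range(100))
sd = SubDict.fromkeys(range(100))

# Same key, same data -- only the type differs.
t_dict = timeit.timeit("50 in d", globals={"d": d}, number=200_000)
t_sub = timeit.timeit("50 in sd", globals={"sd": sd}, number=200_000)
print(f"dict: {t_dict:.4f}s  subclass: {t_sub:.4f}s")
```

On the builds discussed in this issue, the subclass lookup shows the overhead of the MRO-searching wrapper even though `SubDict` overrides nothing.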

--
components: +C API -Interpreter Core
nosy: +Marco Sulla
versions: +Python 3.9 -Python 3.8

___
Python tracker 
<https://bugs.python.org/issue34396>
___



[issue39754] update_one_slot() does not inherit sq_contains and mp_subscript if they are explictly declared

2020-02-26 Thread Marco Sulla


Change by Marco Sulla :


--
resolution:  -> duplicate
stage:  -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue39754>
___



[issue39754] update_one_slot() does not inherit sq_contains and mp_subscript if they are explictly declared

2020-02-25 Thread Marco Sulla


New submission from Marco Sulla :

I noticed that `__contains__()` and `__getitem__()` of subclasses of `dict` are 
much slower. I asked why on StackOverflow, and an user seemed to find the 
reason.

The problem for him/her is that `dict` implements directly `__contains__()` and 
`__getitem__()`. Usually, `sq_contains` and `mp_subscript` are wrapped to 
implement `__contains__()` and `__getitem__()`, but this way `dict` is a little 
faster, I suppose.

The problem is that `update_one_slot()` searches for the wrappers. If it does 
not find them, it does not inherit the `__contains__()` and `__getitem__()` of 
the class, but creates `__contains__()` and `__getitem__()` functions that do 
an MRO search and call the superclass method. This is why `__contains__()` and 
`__getitem__()` of `dict` subclasses are slower.


Is it possible to modify `update_one_slot()` so that, if no wrapper is found, 
the explicit implementation is inherited?

SO answer: https://stackoverflow.com/a/59914459/1763602

--
components: C API
messages: 362662
nosy: Marco Sulla
priority: normal
severity: normal
status: open
title: update_one_slot() does not inherit sq_contains and mp_subscript if they 
are explictly declared
type: performance
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue39754>
___



[issue39697] Failed to build with --with-cxx-main=g++-9.2.0

2020-02-23 Thread Marco Sulla

Marco Sulla  added the comment:

I think in this case the error is more trivial: simply `Programs/_testembed.c` 
is compiled with g++ but it should be compiled with gcc.

Indeed, there are many gcc-only options in the compilation of 
`Programs/_testembed.c`, and g++ complains about them:

> cc1plus: warning: ‘-Werror=’ argument ‘-Werror=implicit-function-declaration’ 
> is not valid for C++
> cc1plus: warning: command line option ‘-std=c99’ is valid for C/ObjC but not 
> for C++

--

___
Python tracker 
<https://bugs.python.org/issue39697>
___



[issue39698] asyncio.sleep() does not adhere to time.sleep() behavior for negative numbers

2020-02-23 Thread Marco Sulla


Marco Sulla  added the comment:

I see that many breaking changes were made in recent releases. I list only the 
ones for `asyncio` in Python 3.8:

https://bugs.python.org/issue36921
https://bugs.python.org/issue36373
https://bugs.python.org/issue34790
https://bugs.python.org/issue32528
https://bugs.python.org/issue34687
https://bugs.python.org/issue32314

So I suppose the ship hasn't sailed yet.


Passing a negative number to a function that should sleep the task for x 
seconds is a mistake. And mistakes should never pass silently.

Furthermore, coherence matters. It's really confusing that two functions in two 
builtin modules that are quite identical have different behavior.

IMHO, deprecating and then removing support for a negative argument in 
`asyncio.sleep()` is far less breaking compared to issues #36921 and 
#36373 .

--
type:  -> behavior

___
Python tracker 
<https://bugs.python.org/issue39698>
___



[issue39628] msg.walk memory leak?

2020-02-21 Thread Marco


Marco  added the comment:

Uhm, no.
I can no longer reproduce this. I was wrong. Sorry for the noise.

--
resolution:  -> not a bug
stage:  -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue39628>
___



[issue39695] Failed to build _uuid module, but libraries was installed

2020-02-20 Thread Marco Sulla


Marco Sulla  added the comment:

Ah, well, this is not possible. I was banned from the mailing list. I wrote my 
"defense" to conduct...@python.org on 2019-12-29, and I'm still waiting 
for a response...

--

___
Python tracker 
<https://bugs.python.org/issue39695>
___



[issue39695] Failed to build _uuid module, but libraries was installed

2020-02-20 Thread Marco Sulla


Marco Sulla  added the comment:

Well, the fact is, basically, for the other libraries you don't have to re-run 
`configure`. You only have to install the missing C libraries and redo `make`. 
This works, for example, for zlib, lzma, ctypes, sqlite3, readline, bzip2.

Furthermore, it happened to me that I re-ran `configure` without a `make clean` 
before: the `make` process stopped because the configuration had changed, and I 
had to do a `make clean`. A big waste of time.

--

___
Python tracker 
<https://bugs.python.org/issue39695>
___



[issue39698] asyncio.sleep() does not adhere to time.sleep() behavior for negative numbers

2020-02-20 Thread Marco Sulla


Marco Sulla  added the comment:

> I recall very many cases in third-party libraries and commercial applications

Source?

--

___
Python tracker 
<https://bugs.python.org/issue39698>
___



[issue39698] asyncio.sleep() does not adhere to time.sleep() behavior for negative numbers

2020-02-20 Thread Marco Sulla


New submission from Marco Sulla :

Python 3.9.0a3+ (heads/master-dirty:f2ee21d858, Feb 19 2020, 23:19:22) 
[GCC 9.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import time
>>> time.sleep(-1)
Traceback (most recent call last):
  File "", line 1, in 
ValueError: sleep length must be non-negative
>>> import asyncio
>>> async def f():
... await asyncio.sleep(-1)
... print("no exception")
... 
>>> asyncio.run(f())
no exception

I think that also `asyncio.sleep()` should raise `ValueError` if the argument 
is less than zero.
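Until/unless that changes, a thin wrapper can enforce the same contract at the call site; a sketch (the name `checked_sleep` is mine, not part of asyncio):

```python
import asyncio

async def checked_sleep(delay, result=None):
    # Mirror time.sleep()'s behavior: reject negative delays up front
    # instead of silently returning immediately.
    if delay < 0:
        raise ValueError("sleep length must be non-negative")
    return await asyncio.sleep(delay, result)

async def demo():
    try:
        await checked_sleep(-1)
    except ValueError as exc:
        print("raised:", exc)

asyncio.run(demo())
```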

--
components: asyncio
messages: 362314
nosy: Marco Sulla, asvetlov, yselivanov
priority: normal
severity: normal
status: open
title: asyncio.sleep() does not adhere to time.sleep() behavior for negative 
numbers
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue39698>
___



[issue39697] Failed to build with --with-cxx-main=g++-9.2.0

2020-02-20 Thread Marco Sulla

New submission from Marco Sulla :

I tried to compile Python 3.9 with:


CC=gcc-9.2.0  ./configure --enable-optimizations --with-lto 
--with-cxx-main=g++-9.2.0
make -j 2

I got this error:

g++-9.2.0 -c -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall
-flto -fuse-linker-plugin -ffat-lto-objects -flto-partition=none -g -std=c99 
-Wextra -Wno-unused-result -Wno-unused-parameter 
-Wno-missing-field-initializers -Werror=implicit-function-declaration 
-fvisibility=hidden -fprofile-generate -I./Include/internal  -I. -I./Include
-DPy_BUILD_CORE -o Programs/_testembed.o ./Programs/_testembed.c
cc1plus: warning: ‘-Werror=’ argument ‘-Werror=implicit-function-declaration’ 
is not valid for C++
cc1plus: warning: command line option ‘-std=c99’ is valid for C/ObjC but not 
for C++
sed -e "s,@EXENAME@,/usr/local/bin/python3.9," < ./Misc/python-config.in 
>python-config.py
LC_ALL=C sed -e 's,\$(\([A-Za-z0-9_]*\)),\$\{\1\},g' < Misc/python-config.sh 
>python-config
gcc-9.2.0 -pthread -c -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 
-Wall -flto -fuse-linker-plugin -ffat-lto-objects -flto-partition=none -g 
-std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter 
-Wno-missing-field-initializers -Werror=implicit-function-declaration 
-fvisibility=hidden -fprofile-generate -I./Include/internal  -I. -I./Include
-DPy_BUILD_CORE \
  -DGITVERSION="\"`LC_ALL=C git --git-dir ./.git rev-parse --short HEAD`\"" 
\
  -DGITTAG="\"`LC_ALL=C git --git-dir ./.git describe --all --always 
--dirty`\"" \
  -DGITBRANCH="\"`LC_ALL=C git --git-dir ./.git name-rev --name-only 
HEAD`\"" \
  -o Modules/getbuildinfo.o ./Modules/getbuildinfo.c
In file included from ./Include/internal/pycore_atomic.h:15,
 from ./Include/internal/pycore_gil.h:11,
 from ./Include/internal/pycore_pystate.h:11,
 from ./Programs/_testembed.c:10:
/usr/local/lib/gcc/x86_64-pc-linux-gnu/9.2.0/include/stdatomic.h:40:9: error: 
‘_Atomic’ does not name a type

I suppose `Programs/_testembed.c` is simply a C source file and must not be 
compiled with g++

PS: as a workaround, `--with-cxx-main=gcc-9.2.0` works, but probably it's not 
optimal.

--
components: Build
messages: 362313
nosy: Marco Sulla
priority: normal
severity: normal
status: open
title: Failed to build with --with-cxx-main=g++-9.2.0
type: enhancement
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue39697>
___



[issue39696] Failed to build _ssl module, but libraries was installed

2020-02-20 Thread Marco Sulla


New submission from Marco Sulla :

Similarly to enhancement request #39695, I forgot to install the Debian package 
with the include files for SSL before compiling Python 3.9.

After installing it, `make` still did not find the libraries and skipped the 
creation of the _ssl module.

Searching on internet, I found that doing:

make clean
./configure etc
make

works.

Maybe the SSL library check is done only at the configure phase?

--
components: Build
messages: 362311
nosy: Marco Sulla
priority: normal
severity: normal
status: open
title: Failed to build _ssl module, but libraries was installed
type: enhancement
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue39696>
___



[issue39695] Failed to build _uuid module, but libraries was installed

2020-02-20 Thread Marco Sulla


New submission from Marco Sulla :

When I first ran `make` to compile Python 3.9, I had not installed some Debian 
development packages, like `uuid-dev`. So the `_uuid` module was not built.

After installing the Debian package I re-ran `make`, but it failed to build 
the `_uuid` module. I had to manually edit `Modules/_uuidmodule.c`, remove all 
the `#ifdef` directives and leave only `#include `

Maybe `HAVE_UUID_UUID_H` and `HAVE_UUID_H` are defined only at the `configure` 
phase?

--
components: Build
messages: 362309
nosy: Marco Sulla
priority: normal
severity: normal
status: open
title: Failed to build _uuid module, but libraries was installed
type: enhancement
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue39695>
___



[issue39628] msg.walk memory leak?

2020-02-13 Thread Marco


New submission from Marco :

Hello,

 if I write

```
msg = email.message_from_bytes(...)
for part in msg.walk():
    content_type = part.get_content_type()
    if not part.get_content_maintype() == 'multipart':
        filename = part.get_filename(None)
        attachment = part.get_payload(decode=True)
```

if the mime parts are more than one, then the memory increases at each 
iteration and will never be released.

--
components: email
messages: 361959
nosy: barry, falon, r.david.murray
priority: normal
severity: normal
status: open
title: msg.walk memory leak?
type: resource usage
versions: Python 3.6

___
Python tracker 
<https://bugs.python.org/issue39628>
___



[issue39516] ++ does not throw a SyntaxError

2020-02-04 Thread Marco Sulla


Marco Sulla  added the comment:

> this is the sort of thing that is usually best suited to be reported by 
> linters, not the Python runtime.

TL;DR: if you write something like `a -- b`, it's quite extraordinary that you 
really wanted to write this. You probably wanted to write `a - -b` or, more 
probably, `a -= b`. So the parser is **masking a potential subtle bug**, that 
can cause hours of stressing debugging, because probably the program will run 
without problem but gives you a wrong result, or will throw an exception but in 
a completely different point.

Long version:

Normally I agree, but not in this case.

PEP 8 defines guidelines for writing more readable code. They are not 
mandatory because:

1. there are cases in which the code is more readable if you do not follow PEP 8 
(for example, using `\` with long `with` statements)
2. there are cases in which the rule is not followed for convenience in 
quick tests (as with `from module import *`)
3. sometimes it is simply not possible to follow PEP 8 (for example, many classes 
can easily implement __eq__, but implementing all the other comparison 
operators is often simply not possible)
4. sometimes the recommendation can't be followed, because it's not what you 
want to achieve (for example, sometimes you need to check the exact class of an 
object and use `type(a) == SomeClass` instead of `isinstance(a, SomeClass)`)
5. there are cases in which PEP 8 does not work. For example, bool(numpy.ndarray) 
does not work, you must use len(numpy.ndarray)
6. sometimes it's simply a matter of style. One person prefers one style, 
another prefers another style

That said, none of these valid border cases can be applied to this case:

1. `a+-b` can be NEVER more readable than `a + -b`
2. `a++b` is clearly faster because you have to write... 2 spaces less. Well, I 
think that you'll never write a ton of binary operators followed by a unary 
one, so I suppose two little blank spaces does not slow down you too much :-D
3. it's always possible to separate `a * -b`, for example
4. if you write something like `a -- b`, it's quite extraordinary that you 
really wanted to write this. You probably wanted to write `a - -b` or, more 
probably, `a -= b`. So the parser is **masking a potential subtle bug**, that 
can cause hours of stressing debugging, because probably the program will run 
without problem but gives you a wrong result, or will throw an exception but in 
a completely different point.
5. See 3
6. this is IMHO not a matter of style. Writing `a ++ b` is simply ugly, 
**much** unreadable and prone to errors.

--

___
Python tracker 
<https://bugs.python.org/issue39516>
___



[issue39516] ++ does not throw a SyntaxError

2020-02-01 Thread Marco Sulla


Marco Sulla  added the comment:

> `++` isn't special

Indeed the problem is that no error or warning is raised if two operators are 
consecutive, without a space between. All the cases you listed are terribly 
unreadable and hardly intelligible. 

Anyway I do not agree `++` is not special:

> you should know that this example is a syntax error because you are missing 
> the right hand operand, not because `++` has no meaning

But you should know that in a *lot* of other popular languages, `++` and `--` 
are unary operators, so it's particularly surprising to see that they *seem* 
to work in Python, even if they *seem* to be a binary operator.

This is completely confusing and messy. Frankly, I'm not a PEP 8 orthodox at 
all. I think that you can write `a+b`. It's not elegant, it's a bit less 
readable than `a + b`, but it's not the end of the world. 

But you should *not* be allowed to write `a+-b` without at least a warning, 
because `+-` looks like a binary operator. And you should not be able to write 
`a+ -b` either, with the interpreter acting like Pontius Pilate, because what's 
this? Is it a unary `+` or a unary `-`? 
We know the unary one is the `-`, since `a+` makes no sense, but for someone who 
does not know Python, __it's not readable__. So, IMHO, the interpreter should at 
least raise a warning if the syntax is not: 
`a + -b`
for any combination of binary and unary operators.

--

___
Python tracker 
<https://bugs.python.org/issue39516>
___



[issue39516] ++ does not throw a SyntaxError

2020-02-01 Thread Marco Sulla


Marco Sulla  added the comment:

> This is not a bug
No one said it's a bug. It's a defect.

> This has been part of Python since version 1
There are many things that were part of Python 1 that have been removed.

> `++` should never be an operator in the future, precisely because it already 
> has a meaning today

This is not a "meaning". `++` means nothing. Indeed

>>> 1++
  File "", line 1
1++
  ^
SyntaxError: invalid syntax

> The first expression is not "unreadable". The fact that you were able to read 
> it and diagnose it [...]

The fact that I understood it is because I'm a programmer with more than 10 years 
of experience, mainly in Python. And I discovered this defect by accident, 
because I wanted to write `a += b` and instead I wrote `a ++ b`. And when it 
happened, I didn't realize why it didn't raise a SyntaxError or, at least, a 
SyntaxWarning. It took me some minutes to realize the problem. 

So, in my "humble" opinion, it's *highly* unreadable and surprising.

--

___
Python tracker 
<https://bugs.python.org/issue39516>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue39516] ++ does not throw a SyntaxError

2020-02-01 Thread Marco Sulla


New submission from Marco Sulla :

Python 3.9.0a0 (heads/master-dirty:d8ca2354ed, Oct 30 2019, 20:25:01) 
[GCC 9.2.1 20190909] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 1 ++ 2
3

This is probably because the interpreter reads:

1 + +2

1. ++ could be an operator in the future. Probably not. Probably never. But you 
never know.
2. A space between a unary operator and its operand should not be allowed
3. the first expression is clearly unreadable and hard to understand, so 
completely unpythonic
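The parse described above can be confirmed with the ast module; `1 ++ 2` is an addition whose right operand is a unary plus, i.e. `1 + (+2)`:

```python
import ast

tree = ast.parse("1 ++ 2", mode="eval")
expr = tree.body
# A BinOp(Add) node with a UnaryOp(UAdd) on the right.
print(type(expr).__name__, type(expr.op).__name__, type(expr.right).__name__)
```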

--
components: Interpreter Core
messages: 361159
nosy: Marco Sulla
priority: normal
severity: normal
status: open
title: ++ does not throw a SyntaxError
type: behavior
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue39516>
___



[issue11986] Min/max not symmetric in presence of NaN

2019-12-23 Thread Marco Sulla


Marco Sulla  added the comment:

marco@buzz:~$ python3.9
Python 3.9.0a0 (heads/master-dirty:d8ca2354ed, Oct 30 2019, 20:25:01) 
[GCC 9.2.1 20190909] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from decimal import Decimal as Dec, BasicContext as Bctx
>>> a = Dec("1981", Bctx)
>>> b = Dec("nan", Bctx)
>>> a.max(b)
Decimal('1981')
>>> b.max(a)
Decimal('1981')
>>> Bctx.max(a, b)
Decimal('1981')
>>> Bctx.max(b, a)
Decimal('1981')


`Decimal` completely adheres to IEEE 754 standard.
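By contrast, the asymmetry of the builtin max() is easy to show: it keeps its current candidate whenever `item > candidate` is False, and every comparison against NaN is False, so the result depends on argument order:

```python
import math

nan = float("nan")

# max(3, nan): candidate is 3, nan > 3 is False -> 3 is kept.
# max(nan, 3): candidate is nan, 3 > nan is False -> nan is kept.
print(max(3, nan))
print(max(nan, 3))
```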

There's a very, very simple and generic solution for builtin min and max:



_sentinel = object()

def max(*args, key=None, default=_sentinel):
    args_len = len(args)

    if args_len == 0:
        if default is _sentinel:
            fname = max.__name__
            raise ValueError(f"{fname}() expected 1 argument, got 0")

        return default
    elif args_len == 1:
        seq = args[0]
    else:
        seq = args

    it = iter(seq)

    vmax = next(it, _sentinel)

    if vmax is _sentinel:
        if default is _sentinel:
            fname = max.__name__
            raise ValueError(f"{fname}() arg is an empty sequence")

        return default

    first_comparable = False

    if key is None:
        for val in it:
            if vmax < val:
                vmax = val
                first_comparable = True
            elif not first_comparable and not val < vmax:
                # equal, or not comparable object, like NaN
                vmax = val
    else:
        fmax = key(vmax)

        for val in it:
            fval = key(val)

            if fmax < fval:
                fmax = fval
                vmax = val
                first_comparable = True
            elif not first_comparable and not fval < fmax:
                fmax = fval
                vmax = val

    return vmax


This function continues to give undefined behavior with sets... but who 
calculates the "maximum" or "minimum" of sets?

--
nosy: +Marco Sulla

___
Python tracker 
<https://bugs.python.org/issue11986>
___



[issue36095] Better NaN sorting.

2019-12-23 Thread Marco Sulla


Marco Sulla  added the comment:

Excuse me, ignore my previous post.

--

___
Python tracker 
<https://bugs.python.org/issue36095>
___



[issue36095] Better NaN sorting.

2019-12-23 Thread Marco Sulla


Marco Sulla  added the comment:

marco@buzz:~$ python3.9
Python 3.9.0a0 (heads/master-dirty:d8ca2354ed, Oct 30 2019, 20:25:01) 
[GCC 9.2.1 20190909] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from decimal import Decimal as Dec, BasicContext as Bctx
>>> a = Dec("1981", Bctx)
>>> b = Dec("nan", Bctx)
>>> a.max(b)
Decimal('1981')
>>> b.max(a)
Decimal('1981')
>>> Bctx.max(a, b)
Decimal('1981')
>>> Bctx.max(b, a)
Decimal('1981')

--

___
Python tracker 
<https://bugs.python.org/issue36095>
___



[issue36095] Better NaN sorting.

2019-12-15 Thread Marco Sulla


Marco Sulla  added the comment:

Excuse me, I had an epiphany.

NaN returns False for every comparison.

So in theory any element of the iterable should turn out to be less than NaN.

So NaN should be treated as the highest element, and should be at the end of the 
sorted result!

Indeed this is the behavior in Java. NaNs are at the end of the sorted iterator.

On the contrary, Python sorting does not move the NaN from its position.

Why?

--

___
Python tracker 
<https://bugs.python.org/issue36095>
___



[issue36095] Better NaN sorting.

2019-12-15 Thread Marco Sulla


Marco Sulla  added the comment:

> No idea what "are minor that another object" could possibly mean.

Oh my god... a < b?

> I don't know what purpose would be served by checking ">=" too

Well, it's very simple. Since the sorting algorithm checks if a < b, if this 
check fails, I propose to also check a >= b. If this is false too, the iterable 
contains an unorderable object. From this point on, the check will never be done 
again; one unorderable object is sufficient to raise the warning.

The check a >= b is *not* for ordering the iterable, it's only for checking 
whether the elements are orderable or not, and raising the warning.

Furthermore, I suppose that if someone is sure that their iterable is 
unorderable-free and wants a fine-grained speed boost, a flag can be added. If 
true, sorting will not use the algorithm with the check, but the old algorithm.

> You haven't addressed any of the points he (Dickinson) raised

Dickinson said you have to check for total preorder. If you have understood my 
idea, this is not needed at all.
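The proposed check can be sketched outside the interpreter as a wrapper around sorted() (a toy illustration of the idea, mine, not CPython's actual sort internals):

```python
import warnings

def sorted_with_check(iterable, key=None):
    """sorted(), plus a one-time warning when an unorderable element is found.

    An element is flagged when, for some pair (a, b), neither a < b nor
    a >= b holds -- e.g. float('nan').
    """
    seq = list(iterable)
    keys = seq if key is None else [key(v) for v in seq]
    for a, b in zip(keys, keys[1:]):
        if not a < b and not a >= b:
            warnings.warn("iterable contains unorderable elements (e.g. NaN)")
            break  # one unorderable pair is enough; never check again
    return sorted(seq, key=key)
```

For example, `sorted_with_check([3.0, float('nan'), 1.0])` emits the warning once and then sorts as usual, while fully orderable input pays only the extra `>=` on pairs whose `<` was False.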

--

___
Python tracker 
<https://bugs.python.org/issue36095>
___



[issue36095] Better NaN sorting.

2019-12-15 Thread Marco Sulla


Marco Sulla  added the comment:

Anyway, Java by default puts NaNs at the end of the iterable:

https://onlinegdb.com/SJjuiXE0S

--

___
Python tracker 
<https://bugs.python.org/issue36095>
___



[issue36095] Better NaN sorting.

2019-12-15 Thread Marco Sulla


Marco Sulla  added the comment:

Excuse me, but have you, Dickinson and Peters, read how I propose to check if 
the object is orderable or not? 

I explained it in a very detailed way, and this does not change the float 
comparison. And it does not need to first check whether the iterable is totally 
preordered.

Can you please read my post?

--

___
Python tracker 
<https://bugs.python.org/issue36095>
___


