On 2 March 2017 at 07:00, Victor Stinner wrote:
> Hi,
>
> Your document doesn't explain how you configured the host to run
> benchmarks. Maybe you didn't tune Linux or anything else? Be careful
> with modern hardware, which can spring funny (or not so funny) surprises.
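For what it's worth, Victor's own perf module (since renamed pyperf) automates much of this tuning. A hedged sketch of the commands, assuming a Linux host with the module installed:

```
# Tune the system for stable benchmarks: sets the CPU governor to
# "performance", disables Turbo Boost, etc. (needs root).
python3 -m perf system tune

# Inspect the current state, and undo the tuning when done.
python3 -m perf system show
python3 -m perf system reset
```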
Victor, do you know if you or anyone
In multiprocessing/forking.py#129, `os._exit` causes the child process to exit without closing (or flushing) open files. For example:
```
from multiprocessing import Process

def f():
    global log  # keep a global reference so the gc doesn't close the file
    log = open("info.log", "w")
    log.write("***hello world***\n")

if __name__ == "__main__":
    p = Process(target=f)
    p.start()
    p.join()
```
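Because the write above is still sitting in the file's buffer when the multiprocessing machinery calls `os._exit()` in the child, info.log can end up empty. A minimal sketch of the obvious workaround, closing the file explicitly in the child:

```
from multiprocessing import Process

def f():
    # Closing the file flushes its buffer before os._exit() is reached,
    # so the data survives even though interpreter shutdown is skipped.
    with open("info.log", "w") as log:
        log.write("***hello world***\n")

if __name__ == "__main__":
    p = Process(target=f)
    p.start()
    p.join()
```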
> I strongly prefer numeric order for signals.
>
> --Guido (mobile)
+1
Numerical values of UNIX signals are often more widely known than their names.
For example, every UNIX user knows what signal 9 does.
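As a quick illustration (a sketch assuming a Linux host, where SIGKILL is signal 9):

```
import signal

# "kill -9" is folklore precisely because the number is better known
# than the name: signal 9 is SIGKILL, which cannot be caught or ignored.
assert signal.Signals(9) is signal.SIGKILL
print(signal.Signals(9).name)  # SIGKILL
```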
There are a few modules that have had their constants redefined as Enums, such
as signal, which has revealed a minor nit:
>>> pp(list(signal.Signals))
[... <Signals.NAME: value> member list; the individual reprs were stripped in archiving ...]
The resulting enumeration is neither in alphabetical nor numerical order.
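Since Signals is an IntEnum, numeric order is easy to recover; a minimal sketch:

```
import signal
from pprint import pprint as pp

# IntEnum members compare as their integer values, so sorted() yields
# the numeric order Guido asked for.
pp(sorted(signal.Signals))
```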
We updated profile-opt to use the test-suite subset based on what distros
had already been using for their training runs. As for the comment about
the test suite not being good for training: that's mostly a myth. The test
suite exercises the ceval loop well, as well as things like re and json,
sufficiently.
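For reference, a hedged sketch of how that training run is driven in CPython's build (target and variable names from the 3.6-era Makefile; the exact test list shown is an assumption, not the default set):

```
# "make profile-opt" builds an instrumented interpreter, runs it over
# $(PROFILE_TASK) to collect profiles, then rebuilds using them.
./configure --enable-optimizations
make profile-opt

# The training workload can be overridden, e.g. to a hand-picked subset:
make profile-opt PROFILE_TASK='-m test.regrtest --pgo test_re test_json'
```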
On Thu, Mar 2, 2017 at 4:07 AM, Antoine Pitrou wrote:
> On Wed, 1 Mar 2017 19:58:14 +0100
> Matthias Klose wrote:
>> On 01.03.2017 18:51, Antoine Pitrou wrote:
>> > As for the high level: what if the training set used for PGO in Xenial
>> > has become skewed or inadequate?
>>
>> running the tests