[issue32823] Regression in test -j behavior and time in 3.7.0b1

2019-03-23 Thread Terry J. Reedy


Terry J. Reedy  added the comment:

Thanks for checking on this.  The installed 64-bit 3.7.2 and 3.8.0a2, as well as 
the repository debug 32-bit 3.8.0a2+, are back to normal, with the quickest tests 
reporting within a few seconds.  I will assume it is fixed for Zach also, hence 
closing.

--
resolution:  -> out of date
stage:  -> resolved
status: open -> closed

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue32823] Regression in test -j behavior and time in 3.7.0b1

2019-03-23 Thread Cheryl Sabella


Cheryl Sabella  added the comment:

Terry,

Do you still see this happening?  When I run the tests on Windows 10 with 12 
CPUs (using -j0), the tests run quickly with just the last one taking more time.

0:01:42 [418/420] test_socket passed (40 sec 239 ms) -- running: 
test_multiprocessing_spawn (58 sec 500 ms)
0:01:53 [419/420] test_venv passed (32 sec 267 ms) -- running: 
test_multiprocessing_spawn (1 min 9 sec)
running: test_multiprocessing_spawn (1 min 39 sec)
0:02:28 [420/420] test_multiprocessing_spawn passed (1 min 44 sec)

--
nosy: +cheryl.sabella




[issue32823] Regression in test -j behavior and time in 3.7.0b1

2018-04-09 Thread Zachary Ware

Zachary Ware  added the comment:

Ok, interesting.  The above results are from running on my Windows 8.1 
buildbot, running on Azure.  Running on a local fully-updated (I think) Windows 
10 VM, I get results similar to what Terry reports: results come in batches of 
X for -jX.  Does anyone know of any changes we've made that would cause Windows 
10 to behave differently here, or has something changed in Windows itself to 
cause the issue?

Steve or Eryk, do you get the same?

--
nosy: +eryksun




[issue32823] Regression in test -j behavior and time in 3.7.0b1

2018-04-05 Thread Zachary Ware

Zachary Ware  added the comment:

I can't reproduce this.  Even running `python -m test -j14` on a machine with 2 
vCPUs, I get my first results at 2 seconds, and even test___all__ passes in only 
6 seconds (though on my first run, with -j0 (== -j4), test___all__ took 41 seconds 
but other results came in first).  Same results with Debug and Release Win32 builds.

--




[issue32823] Regression in test -j behavior and time in 3.7.0b1

2018-03-31 Thread Terry J. Reedy

Terry J. Reedy  added the comment:

python -m test -j0

--




[issue32823] Regression in test -j behavior and time in 3.7.0b1

2018-03-30 Thread Zachary Ware

Zachary Ware  added the comment:

Terry, could you give me the full command you use to get that result?

--




[issue32823] Regression in test -j behavior and time in 3.7.0b1

2018-03-29 Thread Terry J. Reedy

Terry J. Reedy  added the comment:

Yes. For 3.7 and 3.8, nothing appears until 41 seconds because test___all__ 
takes 40 seconds.  For 3.6, the batch result appears at 37 seconds because 
test___all__ runs in 36 seconds.

--




[issue32823] Regression in test -j behavior and time in 3.7.0b1

2018-03-29 Thread Ned Deily

Ned Deily  added the comment:

Terry, is this still an issue?  Are other Windows developers seeing this?

--
nosy: +ned.deily, paul.moore, steve.dower, tim.golden, zach.ware




[issue32823] Regression in test -j behavior and time in 3.7.0b1

2018-02-11 Thread Terry J. Reedy

Terry J. Reedy  added the comment:

For me, on Windows, the tests for time, datetime, and _strptime have failures 
on the master branch, so I don't want to submit a PR just now.

--




[issue32823] Regression in test -j behavior and time in 3.7.0b1

2018-02-11 Thread Terry J. Reedy

Change by Terry J. Reedy :


--
Removed message: https://bugs.python.org/msg312021




[issue32823] Regression in test -j behavior and time in 3.7.0b1

2018-02-11 Thread Terry J. Reedy

New submission from Terry J. Reedy :

For me, on Windows, recent changes since 3.7.0a4 have greatly reduced the 
benefit of -j0 (= -j14 on my system).  For 3.6.4 (installed), the serial and 
parallel times are around 13:30 and 2:20.  The times for 3.7.0a4 (installed) are 
about the same for parallel, and would have been for serial if not for a crash.  
For 3.7.0b1 and master, compiled with debug on, the times are around 24:00 and 
11:10.  As shown by the increase for serial from 13:30 to 24:00, a debug build 
by itself nearly doubles the test time.

For parallel, the slowdown from 2:20 to 11:10 is an extra factor of more than 2.  
The reason is that parallel tests are now run in batches of n (at least on 
Windows).  Before, a new test started in one of the 14 processes whenever the 
test in a process finished, so all 12 CPUs were kept busy until fewer than 12 
tests remained.  Now, 14 new tests are started only when all of the previous 14 
finish, so CPUs sit idle while the slowest test in a batch of 14 finishes.
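The difference between the two scheduling policies can be sketched with a small 
simulation (this is an illustration, not regrtest's actual code; the test 
durations and worker count are made up, chosen to mirror the logs below):

```python
import heapq

def streaming_makespan(durations, workers):
    """Old behavior: start the next test as soon as any worker is free."""
    free = [0.0] * workers          # time at which each worker becomes free
    heapq.heapify(free)
    for d in durations:
        t = heapq.heappop(free)     # earliest-free worker picks up the test
        heapq.heappush(free, t + d)
    return max(free)

def batched_makespan(durations, workers):
    """New behavior: launch `workers` tests, then wait for ALL to finish."""
    total = 0.0
    for i in range(0, len(durations), workers):
        total += max(durations[i:i + workers])  # batch ends with its slowest test
    return total

# Hypothetical durations: three 30-second tests among many 1-second tests,
# run with 14 worker processes.
durations = ([30] + [1] * 13) * 3
print(streaming_makespan(durations, 14))  # 32.0 -- fast tests drain around the slow ones
print(batched_makespan(durations, 14))    # 90.0 -- every batch waits for its slowest test
```

With completion-driven scheduling the slow tests overlap with many fast ones, 
while with batch scheduling each batch's wall time is set by its slowest member.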

Beginning of old log:
Run tests in parallel using 14 child processes
0:00:00 [  1/413] test__opcode passed
0:00:00 [  2/413] test__locale passed
0:00:00 [  3/413] test__osx_support passed
0:00:00 [  4/413] test_abc passed
0:00:01 [  5/413] test_abstract_numbers passed
0:00:01 [  6/413] test_aifc passed
0:00:02 [  7/413] test_array passed
0:00:02 [  8/413] test_asdl_parser skipped
test_asdl_parser skipped -- test irrelevant for an installed Python
0:00:03 [  9/413] test_argparse passed
0:00:04 [ 10/413] test_ast passed
0:00:04 [ 11/413] test_asyncgen passed
0:00:05 [ 12/413] test_unittest passed
0:00:06 [ 13/413] test_asynchat passed
0:00:06 [ 14/413] test_atexit passed
0:00:06 [ 15/413] test_audioop passed
0:00:06 [ 16/413] test_augassign passed
0:00:07 [ 17/413] test_asyncore passed
0:00:07 [ 18/413] test_baseexception passed
0:00:07 [ 19/413] test_base64 passed
...
0:00:29 [107/413] test_enum passed
0:00:30 [108/413] test_enumerate passed
0:00:30 [109/413] test_eof passed
0:00:30 [110/413] test_epoll skipped
test_epoll skipped -- test works only on Linux 2.6
0:00:30 [111/413] test_errno passed


Beginning of new log:
Run tests in parallel using 14 child processes
running: test_grammar (30 sec), test_opcodes (30 sec), test_dict (30 sec), 
test_builtin (30 sec), test_exceptions (30 sec), test_types (30 sec), 
test_unittest (30 sec), test_doctest (30 sec), test_doctest2 (30 sec), 
test_support (30 sec), test___all__ (30 sec), test___future__ (30 sec), 
test__locale (30 sec), test__opcode (30 sec)
0:00:41 [  1/414] test_support passed -- running: test_grammar (41 sec), 
test_opcodes (41 sec), test_dict (41 sec), test_builtin (41 sec), 
test_exceptions (41 sec), test_types (41 sec), test_doctest (41 sec), 
test___all__ (41 sec), test___future__ (41 sec), test__locale (41 sec), 
test__opcode (41 sec)
0:00:41 [  2/414] test_doctest2 passed -- running: test_grammar (41 sec), 
test___all__ (41 sec), test__locale (41 sec)
0:00:41 [  3/414] test_unittest passed -- running: test___all__ (41 sec), 
test__locale (41 sec)
0:00:41 [  4/414] test__opcode passed
0:00:41 [  5/414] test_dict passed
0:00:41 [  6/414] test_types passed
0:00:41 [  7/414] test___future__ passed
0:00:41 [  8/414] test_builtin passed
0:00:41 [  9/414] test_doctest passed
0:00:41 [ 10/414] test_opcodes passed
0:00:41 [ 11/414] test_exceptions passed
0:00:41 [ 12/414] test_grammar passed
0:00:41 [ 13/414] test___all__ passed (40 sec)
0:00:41 [ 14/414] test__locale passed

The slowest test took 40 sec, and the rest of the CPUs waited.

[snip list of running tests]
0:01:25 [ 15/414] test_audioop passed
0:01:25 [ 17/414] test_abstract_numbers passed
0:01:25 [ 18/414] test_abc passed
0:01:25 [ 19/414] test_aifc passed
0:01:25 [ 20/414] test_asdl_parser passed
0:01:25 [ 21/414] test_asyncgen passed
0:01:25 [ 22/414] test_atexit passed
0:01:25 [ 23/414] test_asyncio passed (42 sec)
[snip output]
0:01:25 [ 25/414] test_ast passed
0:01:25 [ 26/414] test_asynchat passed
0:01:25 [ 27/414] test_array passed
0:01:25 [ 28/414] test__osx_support passed

28 tests done in 85 seconds versus 111 in 30 seconds.
I think whatever caused this should be reversed.

--
components: Tests
messages: 312020
nosy: ncoghlan, terry.reedy, vstinner
priority: normal
severity: normal
status: open
title: Regression in test -j behavior and time in 3.7.0b1
type: performance
versions: Python 3.7, Python 3.8
