I haven’t used nose’s parallel execution plugin, but I have used pytest
with xdist successfully. If your tests are designed to run in any order and
are properly sandboxed to prevent crosstalk between concurrent runs, which
they *should* be, then in my experience it works very well.
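
For anyone curious, here's roughly what the xdist invocation looks like.
This is just a minimal sketch, assuming pytest and pytest-xdist are
installed; the "tests/" path below is a placeholder, not anything
Beam-specific:

    # minimal sketch: run a test suite across all CPU cores with pytest-xdist
    # assumes `pip install pytest pytest-xdist`; "tests/" is a placeholder path
    import pytest

    # "-n auto" is the xdist option that starts one worker process per core;
    # tests must be order-independent and must not share temp files, ports, etc.
    raise SystemExit(pytest.main(["-n", "auto", "tests/"]))

The same thing from the shell is just `pytest -n auto tests/`.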


On Fri, Sep 27, 2019 at 6:51 PM Kenneth Knowles <[email protected]> wrote:

> Do things go wrong when nose is configured to use parallel execution?
>
> On Fri, Sep 27, 2019 at 5:09 PM Chad Dombrova <[email protected]> wrote:
>
>> By the way, the outcome on this was that splitting the python precommit
>> job into one job per python version resulted in increasing the total test
>> completion time by 66%, which is obviously not good.  This is because we
>> are using Gradle to run the python test tasks in parallel (the jenkins VMs
>> have 16 cores each, utilized across 2 slots, IIRC), but after the split
>> there were only 1-2 gradle tasks per test job.  Since the python test runner,
>> nose, is currently not using parallel execution, there were not enough
>> concurrent tasks to make proper use of the VM's CPUs.
>>
>> tl;dr  I'm going to create a followup PR to split out just the Lint job
>> (same as we have Spotless for Java).   This is our best ROI for now.
>>
>> -chad
>>
>>
>> On Fri, Sep 27, 2019 at 3:27 PM Kyle Weaver <[email protected]> wrote:
>>
>>> > Do we have good pypi caching?
>>>
>>> Building Python SDK harness containers takes 2 mins each (times 4, the
>>> number of versions) on my machine, even if nothing has changed. But we're
>>> already paying that cost, so I don't think splitting the jobs should make
>>> it any worse. (https://issues.apache.org/jira/browse/BEAM-8277 if
>>> anyone has any ideas)
>>>
>>> Kyle Weaver | Software Engineer | github.com/ibzib | [email protected]
>>>
>>>
>>> On Wed, Sep 25, 2019 at 11:21 AM Pablo Estrada <[email protected]>
>>> wrote:
>>>
>>>> Thanks Chad, and thank you for notifying on the dev list.
>>>>
>>>> On Wed, Sep 25, 2019 at 10:59 AM Kenneth Knowles <[email protected]>
>>>> wrote:
>>>>
>>>>> Nice.
>>>>>
>>>>> Do we have good pypi caching? If not, this could add a lot of overhead
>>>>> to our already-backed-up CI queue. (btw I still think your change is good,
>>>>> and just makes proper caching more important)
>>>>>
>>>>> Kenn
>>>>>
>>>>> On Tue, Sep 24, 2019 at 9:55 PM Chad Dombrova <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> Hi all,
>>>>>> I'm working to make the CI experience with python a bit better, and
>>>>>> my current initiative is splitting up the giant Python PreCommit job into 5
>>>>>> separate jobs for Lint, Py2, Py3.5, Py3.6, and Py3.7.
>>>>>>
>>>>>> Around 11am Pacific time tomorrow I'm going to initiate the seed
>>>>>> jobs, at which point all PRs will start to run the new precommit jobs.
>>>>>> It's a bit of a chicken-and-egg scenario with testing this, so there
>>>>>> could be issues that pop up after the seed jobs are created, but I'll
>>>>>> be working to resolve those issues as quickly as possible.
>>>>>>
>>>>>> If you run into problems because of this change, please let me know
>>>>>> on the github PR.
>>>>>>
>>>>>> Here's the PR: https://github.com/apache/beam/pull/9642
>>>>>> Here's the Jira: https://issues.apache.org/jira/browse/BEAM-8213
>>>>>>
>>>>>> The upshot is that after this is done you'll get better feedback on
>>>>>> python test failures!
>>>>>>
>>>>>> Let me know if you have any concerns.
>>>>>>
>>>>>> thanks,
>>>>>> chad
>>>>>>
>>>>>>
