Yes, this is largely due to the addition of Python 3 test suites.

Running tests in parallel is actively being investigated by +Mark Liu
<mark...@google.com> in this Jira ticket [1] and this PR [2]. Until then, we
will only add additional Python 3.6 and 3.7 test suites to postcommit.

[1] https://issues.apache.org/jira/browse/BEAM-6527
[2] https://github.com/apache/beam/pull/7675

Kind regards,
Robbe


Robbe Sneyders

ML6 Gent

M: +32 474 71 31 08


On Sat, 9 Mar 2019 at 20:22, Robert Bradshaw <rober...@google.com> wrote:

> Perhaps this is due to the duplication of all (or at least most) of the
> previously existing tests to run them under Python 3. I agree that this is
> excessive; we should probably split out Py2, Py3, and the linters into
> separate targets.
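>
> Something along these lines, for example (the env and command names below
> are purely illustrative, not our actual tox.ini):
>
>   [tox]
>   # Separate environments so Py2, Py3, and lint can be selected
>   # (and eventually run) independently.
>   envlist = py27,py35,py36,py37,lint
>
>   [testenv]
>   commands = python setup.py nosetests
>
>   [testenv:lint]
>   commands = pylint apache_beam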
>
> We could look into using detox or retox to parallelize the testing as
> well. (The issue last time was suppression of output on timeout, but that
> can be worked around by adding timeouts to the individual tox targets.)
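>
> As a rough sketch (the exact detox/retox invocation and the timeout below
> are assumptions I haven't verified against our setup):
>
>   pip install detox             # or: pip install retox
>   detox -e py27,py35,lint       # runs the selected tox envs concurrently
>
>   # with a per-target timeout inside each testenv, e.g.:
>   #   whitelist_externals = timeout
>   #   commands = timeout 1800 python setup.py nosetests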
>
> On Fri, Mar 8, 2019 at 11:26 PM Mikhail Gryzykhin <mig...@google.com>
> wrote:
>
>> Hi everyone,
>>
>> It seems that the duration of our Python pre-commits is growing really
>> fast
>> <http://104.154.241.245/d/_TNndF2iz/pre-commit-test-latency?orgId=1&from=now-6M&to=now>.
>>
>> Has anyone been following this trend, or does anyone know what the
>> biggest recent changes on the Python side have been?
>>
>> I don't see a single jump, but the duration of the pre-commits has almost
>> doubled since the new year.
>>
>> [image: chart of pre-commit test latency over the past months]
>>
>> Regards,
>> --Mikhail
>>
>> Have feedback <http://go/migryz-feedback>?
>>
>
