There's a lot of value to switching to pytest even without xdist. Could we
prune back the goals of this first PR to just achieving feature parity with
nose, and make a followup PR for xdist?
-chad
On Mon, Oct 7, 2019 at 12:04 PM Udi Meiri wrote:
On Fri, Oct 4, 2019 at 10:35 AM Chad Dombrova wrote:
>> I have a WiP PR to convert Beam to use pytest, but it's been stalled.
>
> What would it take to get it back on track?

Besides needing to convert ITs (removing save_main_session), which can be
split out to a later PR, there's
> Another nice thing about pytest is that you'll be able to tell which suite
> a test belongs to.
>
pytest has a lot of quality of life improvements over nose. The biggest and
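On the "which suite a test belongs to" point above: pytest lets you tag
tests with markers and select them on the command line with `-m`. A minimal
sketch (the marker names `precommit` and `postcommit` are illustrative, not
Beam's actual suite configuration):

```python
import pytest

@pytest.mark.precommit
def test_fast_path():
    # Selected by `pytest -m precommit`.
    assert 1 + 1 == 2

@pytest.mark.postcommit
def test_slow_integration():
    # Selected by `pytest -m postcommit`; skipped by `-m precommit`.
    assert sum(range(4)) == 6
```

Registering the markers in setup.cfg or pytest.ini silences the
unknown-marker warning and documents the available suites in one place.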
Hi all,
I've posted a new PR that just splits out the python lint job here:
https://github.com/apache/beam/pull/9706
I'll be running the seed job shortly unless anyone objects.
-chad
On Tue, Oct 1, 2019 at 9:04 PM Chad Dombrova wrote:
I haven’t used nose’s parallel execution plugin, but I have used pytest
with xdist with success. If your tests are designed to run in any order and
are properly sandboxed to prevent crosstalk between concurrent runs, which
they *should* be, then in my experience it works very well.
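The sandboxing mentioned above usually comes down to never touching shared
mutable state: with pytest-xdist (run e.g. as `pytest -n auto`), each worker
executes tests concurrently, so each test should write only to its own
scratch area. A minimal sketch of the pattern; `write_output` is a
hypothetical unit under test, not anything from Beam:

```python
import tempfile
from pathlib import Path

def write_output(workdir: Path, data: str) -> Path:
    # Hypothetical unit under test: writes a result file into workdir.
    out = workdir / "result.txt"
    out.write_text(data)
    return out

def test_write_output_is_sandboxed():
    # A fresh private directory per test means concurrent xdist workers
    # (and tests run in any order) cannot clobber each other's files.
    workdir = Path(tempfile.mkdtemp())
    out = write_output(workdir, "hello")
    assert out.read_text() == "hello"
```

In real pytest code the built-in `tmp_path` fixture gives you the same
per-test isolation without managing the directory yourself.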
Do things go wrong when nose is configured to use parallel execution?
On Fri, Sep 27, 2019 at 5:09 PM Chad Dombrova wrote:
By the way, the outcome on this was that splitting the python precommit job
into one job per python version resulted in increasing the total test
completion time by 66%, which is obviously not good. This is because we
are using Gradle to run the python test tasks in parallel (the jenkins VMs
> Do we have good pypi caching?
Building Python SDK harness containers takes 2 mins each (times 4, the
number of versions) on my machine, even if nothing has changed. But we're
already paying that cost, so I don't think splitting the jobs should make
it any worse.
Thanks Chad, and thank you for notifying on the dev list.
On Wed, Sep 25, 2019 at 10:59 AM Kenneth Knowles wrote:
Nice.
Do we have good pypi caching? If not this could add a lot of overhead to
our already-backed-up CI queue. (btw I still think your change is good, and
just makes proper caching more important)
Kenn
On Tue, Sep 24, 2019 at 9:55 PM Chad Dombrova wrote:
Hi all,
I'm working to make the CI experience with python a bit better, and my
current initiative is splitting up the giant Python PreCommit job into 5
separate jobs: Lint, Py2, Py3.5, Py3.6, and Py3.7.
Around 11am Pacific time tomorrow I'm going to initiate the seed jobs,