On Thu, Nov 14, 2019 at 2:55 PM Mikhail Gryzykhin wrote:
Hi Everyone,
The Python precommit phase times out for (roughly) 80% of the jobs at 2 hours.
This also blocks release branch validation. I suggest bumping the timeout
to 3 hours while we work on a proper solution. This way many people
can get unblocked.
I believe the change can be rather
I'm removing the additional interactive test env + suite and adding the
[interactive] dependencies as extra dependencies in tests_require:
https://github.com/apache/beam/pull/10068
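A rough sketch of what that change looks like in a setup.py, for readers following along. The package names and version pins below are illustrative, not Beam's actual ones, and the call to setup() is left commented out:

```python
# Hypothetical sketch of the change in PR 10068: drop the dedicated
# "interactive" test env and fold the [interactive] extra's dependencies
# into tests_require, so the regular unit-test run exercises them.
# Package names and pins below are illustrative, not Beam's actual ones.
from setuptools import find_packages

INTERACTIVE_REQUIREMENTS = [
    'ipython>=5.8.0,<6',   # assumed example pin
    'timeloop>=1.0.2,<2',  # assumed example pin
]

SETUP_ARGS = dict(
    name='example-sdk',
    version='0.0.1',
    packages=find_packages(),
    # Still installable as an extra: pip install example-sdk[interactive]
    extras_require={'interactive': INTERACTIVE_REQUIREMENTS},
    # ...but also pulled in by the main test suite, so no separate
    # interactive suite is needed.
    tests_require=['pytest'] + INTERACTIVE_REQUIREMENTS,
)

# A real setup.py would end with:
# from setuptools import setup
# setup(**SETUP_ARGS)
```

The point of the change is that the same dependency list feeds both the user-facing extra and the test requirements, so one unit-test run covers the interactive codepaths.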
On Fri, Nov 8, 2019 at 5:45 PM Ahmet Altay wrote:
I looked at the logs but I could not figure out what is causing the timeout
because the Gradle scan links are missing. I sampled a few of the
successful jobs; it seems like Python 3.7 and Python 2 are running 3 tests
in serial: {interactive, py37cython, py37gcp} and {docs, py27cython,
py27gcp}
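The three suites per version above run back to back; a minimal sketch of dispatching them concurrently instead (the tox env names and the run_suites helper are assumptions, not existing Beam tooling):

```python
# Sketch: run independent test suites concurrently instead of serially.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_suites(commands):
    """Run each command in its own worker thread; return the exit codes."""
    with ThreadPoolExecutor(max_workers=len(commands)) as pool:
        return list(pool.map(
            lambda cmd: subprocess.run(cmd).returncode, commands))

suites = [
    ['tox', '-e', 'py37-interactive'],  # hypothetical env names
    ['tox', '-e', 'py37-cython'],
    ['tox', '-e', 'py37-gcp'],
]
# exit_codes = run_suites(suites)  # not run here
```

With independent suites the wall-clock time becomes roughly the slowest suite rather than the sum of all three.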
Just saw another 2-hour timeout:
https://builds.apache.org/job/beam_PreCommit_Python_Commit/9440/ , so
perhaps we're not out of the woods yet (though in general things have
been a lot better).
On Tue, Nov 5, 2019 at 10:52 AM Ahmet Altay wrote:
>
> GCP tests are already on separate locations. IO
+1 to moving the GCP tests outside of core. If there are issues that only
show up on GCP tests but not in core, it might be an indication that there
needs to be another test in core covering that, but I think that should be
pretty rare.
On Mon, Nov 4, 2019 at 8:33 PM Kenneth Knowles wrote:
> +1
+1 to moving forward with this
Could we move GCP tests outside the core? Then only code changes
touching/affecting GCP would cause them to run in precommit. Could still run
them in postcommit in their own suite. If the core has reasonably stable
abstractions that the connectors are built on, this
PR for the proposed change: https://github.com/apache/beam/pull/9985
On Mon, Nov 4, 2019 at 1:35 PM Udi Meiri wrote:
> +1
>
> On Mon, Nov 4, 2019 at 12:09 PM Robert Bradshaw wrote:
>
>> +1, this seems like a good step with a clear win.
>>
>> On Mon, Nov 4, 2019 at 12:06 PM Ahmet Altay
Python precommits are still timing out on #9925. I am guessing that means
this change would not be enough.
I am proposing cutting down the number of test variants we run in
precommits. Currently, for each version, we run the following variants
serially:
- base: Runs all unit tests with tox
-
https://github.com/apache/beam/pull/9925
On Tue, Oct 29, 2019 at 10:24 AM Udi Meiri wrote:
I don't have the bandwidth right now to tackle this. Feel free to take it.
On Tue, Oct 29, 2019 at 10:16 AM Robert Bradshaw wrote:
> The Python SDK does as well. These calls are coming from
> to_runner_api, is_stateful_dofn, and validate_stateful_dofn, which are
> invoked once per pipeline or
Noting for the benefit of the thread archive in case someone goes digging
and wonders if this affects other SDKs: the Java SDK
memoizes DoFnSignatures and generated DoFnInvoker classes.
Kenn
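For illustration, the memoization pattern Kenn describes for the Java SDK, transplanted to Python. The get_signature function and the introspection it performs are hypothetical stand-ins for the work done in to_runner_api / validate_stateful_dofn, not Beam's actual code:

```python
# Sketch of memoizing per-class signature computation so the expensive
# introspection runs once per DoFn class rather than once per call.
import functools

class DoFn:
    def process(self, element):
        yield element

@functools.lru_cache(maxsize=None)
def get_signature(dofn_class):
    # Hypothetical stand-in for expensive reflection/validation work;
    # with lru_cache it executes only on the first lookup per class.
    return tuple(m for m in dir(dofn_class) if not m.startswith('_'))

# Repeated lookups hit the cache instead of re-inspecting the class.
sig1 = get_signature(DoFn)
sig2 = get_signature(DoFn)
```

Because classes are hashable, the class object itself serves as the cache key, mirroring how the Java SDK keys its memoized DoFnSignatures.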
On Mon, Oct 28, 2019 at 6:59 PM Udi Meiri wrote:
> Re: #9283 slowing down tests, ideas for slowness:
>
*not deciles, but 9-percentiles : )
On Mon, Oct 28, 2019 at 5:31 PM Pablo Estrada wrote:
I ran the tests in Python 2 (without Cython) and used a utility to
track the runtime of each test method. I found the following:
- Total test methods run: 2665
- Total test runtime: 990 seconds
- Deciles of time spent:
- 1949 tests run in the first 9% of time
- 173 in the 9-18%
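The bucketing above can be sketched as follows: sort tests fastest-first and count how many land in each 9% slice of cumulative runtime. This helper and its default of eleven slices are my reconstruction of the analysis, not Pablo's actual utility:

```python
# Sketch: count how many tests fall into each slice of total runtime,
# with tests ordered fastest-first.
def time_buckets(durations, slices=11):
    """Return per-slice test counts; slice i covers cumulative runtime
    in [i/slices, (i+1)/slices) of the total."""
    ordered = sorted(durations)
    total = sum(ordered)
    counts = [0] * slices
    elapsed = 0.0
    for d in ordered:
        elapsed += d
        # Clamp so the final test (elapsed == total) lands in the last slice.
        counts[min(int(elapsed / total * slices), slices - 1)] += 1
    return counts
```

A heavily skewed distribution like the one reported (1949 of 2665 tests inside the first 9% of runtime) means a small set of slow tests dominates the suite.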
I have written https://github.com/apache/beam/pull/9910 to reduce
FnApiRunnerTest variations.
I'm not in a rush to merge, but rather happy to start a discussion.
I'll also try to figure out if there are other tests slowing down the suite
significantly.
Best
-P.
On Fri, Oct 25, 2019 at 7:41 PM
Thanks, Brian.
+Udi Meiri
As a next step, it would be good to know whether the slowdown is caused by
tests in this PR or by its effect on other tests, and to confirm that only
Python 2 codepaths were affected.
On Fri, Oct 25, 2019 at 6:35 PM Brian Hulette wrote:
I did a bisect based on the runtime of `./gradlew
:sdks:python:test-suites:tox:py2:testPy2Gcp` around the commits between 9/1
and 9/15 to see if I could find the source of the spike that happened
around 9/6. It looks like it was due to PR#9283 [1]. I thought maybe this
search would reveal some
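A bisect like this can be automated with `git bisect run`, where a script's exit code marks each commit good (0) or bad (1). The helper below, including its 30-minute threshold, is an assumed sketch, not what Brian actually used:

```python
# Sketch: classify one timed run of the suite for `git bisect run`.
import subprocess
import time

THRESHOLD_S = 30 * 60  # assumed cutoff separating "fast" from "slow" runs

def classify(duration_s, threshold_s=THRESHOLD_S):
    """Map a measured runtime to a `git bisect run` exit code."""
    return 0 if duration_s < threshold_s else 1  # 0 = good, 1 = bad

def timed_suite_run():
    """Time one run of the suite and classify it (125 = skip this commit)."""
    start = time.time()
    try:
        subprocess.run(
            ['./gradlew', ':sdks:python:test-suites:tox:py2:testPy2Gcp'],
            check=False)
    except FileNotFoundError:
        return 125  # tells git bisect to skip commits that cannot run
    return classify(time.time() - start)

# Invoked once per candidate commit via:
#   git bisect start <bad-commit> <good-commit>
#   git bisect run python bisect_runtime.py
# where bisect_runtime.py would end with: raise SystemExit(timed_suite_run())
```

Exit code 125 is git bisect's convention for "cannot test this commit, skip it", which keeps broken intermediate commits from poisoning the search.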
I think it makes sense to remove some of the extra FnApiRunner
configurations. Perhaps some of the multiworkers and some of the grpc
versions?
Best
-P.
On Fri, Oct 25, 2019 at 12:27 PM Robert Bradshaw wrote:
It looks like fn_api_runner_test.py is quite expensive, taking 10-15+
minutes on each version of Python. This test consists of a base class
that is basically a validates runner suite, and is then run in several
configurations, many more of which (including some expensive ones)
have been added
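A hypothetical miniature of the structure described: a base TestCase acting as a validates-runner suite, re-run by subclasses under different configurations. The class and attribute names below are made up for illustration, not the actual fn_api_runner_test.py ones:

```python
# Sketch: one base suite, multiplied across configurations by subclassing.
import unittest

class FnApiRunnerTestBase(unittest.TestCase):
    # Configuration knobs the subclasses override.
    use_grpc = False
    num_workers = 1

    def test_simple_pipeline(self):
        # Stand-in for a real validates-runner pipeline test; every
        # subclass re-runs it under its own configuration.
        self.assertEqual(sum(range(4)), 6)

class FnApiRunnerTestGrpc(FnApiRunnerTestBase):
    use_grpc = True

class FnApiRunnerTestMultiWorker(FnApiRunnerTestBase):
    num_workers = 2
```

Each added subclass multiplies total runtime by another full pass over the base suite, which is why trimming configurations (as proposed above) directly shortens the precommit.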
I took another look at this and precommit ITs are already running in
parallel, albeit in the same suite. However, it appears Python precommits
became slower, especially Python 2 precommits [35 min per suite x 3
suites]; see [1]. Not sure yet what caused the increase, but precommits
used to be
Ack. Separating precommit ITs into a different suite sounds good. Is anyone
interested in doing that?
On Thu, Oct 24, 2019 at 2:41 PM Valentyn Tymofieiev wrote:
This should not increase the queue time substantially, since precommit ITs
are running *sequentially* with precommit tests, unlike multiple precommit
tests which run in parallel to each other.
The precommit ITs we run are batch and streaming wordcount tests on Py2 and
one Py3 version, so it's not
+1 to separating ITs from precommit. The downside: when Chad tried to do
something similar [1], it was noted that the total time to run all
precommit tests would increase, and the queue time could also increase.
Another alternative: we could run a smaller set of IT tests in precommits
One improvement could be to move the precommit IT tests into a separate
suite from the precommit tests, and run it in parallel.
On Thu, Oct 24, 2019 at 11:41 AM Brian Hulette wrote:
> Python Precommits are taking quite a while now [1]. Just visually it looks
> like the average length is 1.5h or so, but