Phew - it seems the slowdown was related to the surefire plugin. I'd
updated it on master to a newer version, and it required some additional
configuration options to run our stuff properly. Once I reverted to the
same version that 3.4-dev uses, the build speeds made more sense. I guess
I'm going to keep it there for the release. I imagine we'll revisit the
upgrade at some point in the future.
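
For reference, the revert just amounts to pinning the plugin back in the
pom.xml. A minimal sketch (the version number below is a placeholder, not
the actual one involved):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <!-- placeholder: pin back to whatever version 3.4-dev uses -->
      <version>2.22.2</version>
    </plugin>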

On Tue, Apr 27, 2021 at 3:41 PM Stephen Mallette <[email protected]>
wrote:

> I've been trying to sort out TINKERPOP-2550 with Kerberos, and recently I
> haven't been hassled by those sorts of errors in Travis, where they had
> been occurring with some regularity. So that's good. I've also been
> testing and re-testing around the deadlock issues I'd alluded to prior to
> code freeze. Travis jobs seem much more stable on 3.4-dev, but since I've
> only just started to see master shape up the same way, I'd say I still
> need to keep testing that branch.
>
> I'm also noticing that the build on master takes almost twice as long as
> the one on 3.4-dev. I don't know why, and I can't say with certainty when
> that started to happen. My benchmarking effort didn't show a slowdown in
> driver/server operations, so that finding is somewhat at odds with the
> build itself. There are more tests on master, but I didn't think there
> were enough additional ones to account for this difference. I'll be
> looking into this issue further as well.
>
>
> On Sat, Apr 24, 2021 at 5:47 AM Stephen Mallette <[email protected]>
> wrote:
>
>> Another code freeze week, and this time a big one for 3.5.0. All PRs of a
>> blocking nature were merged, so that's good. I'm still finding Travis a
>> bit finicky with Kerberos even after a fair bit of effort on
>>
>> https://issues.apache.org/jira/browse/TINKERPOP-2504
>>
>> but that's always been an issue, so I'm not considering it a blocker. I
>> expect to close that issue and open a new one specific to Kerberos.
>> I'm still examining some issues with:
>>
>> https://issues.apache.org/jira/browse/TINKERPOP-2550
>>
>> despite it being closed. I found another interesting deadlock that can
>> occur on close, where a synchronized method was calling itself in
>> low-resource environments like Travis. I'm going to watch Travis for more
>> hangs and run more tests this week. If you'd like to help test this, you
>> can run the docker build with --cpus=0.9 (smart HadoopMarc idea there) or
>> go to the TestClientFactory, set the workerPoolSize on the Cluster in the
>> build() method to 1, and then run the build. Alternatively, just write
>> your own little tests with a Cluster you construct with that setting. I'd
>> actually be happy to hear whether anyone managed to do the latter,
>> whether they ran into success or failure, and what kind of test they
>> wrote in the process. I think the problems at this point tend to
>> originate from sessionless requests and open/close situations, but any
>> sort of testing is helpful.
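>>
>> If you want a starting point for the latter, something like this untested
>> sketch should exercise that path (assuming the gremlin-driver
>> Cluster/Client API; the host, port and query are placeholders for
>> whatever your test server uses):
>>
>>     import org.apache.tinkerpop.gremlin.driver.Client;
>>     import org.apache.tinkerpop.gremlin.driver.Cluster;
>>
>>     public class WorkerPoolCloseTest {
>>         public static void main(String[] args) throws Exception {
>>             for (int i = 0; i < 100; i++) {
>>                 // a single worker thread makes the close-time deadlock
>>                 // more likely, similar to a low-resource Travis box
>>                 final Cluster cluster = Cluster.build("localhost")
>>                         .port(8182)
>>                         .workerPoolSize(1)
>>                         .create();
>>                 final Client client = cluster.connect(); // sessionless
>>                 client.submit("g.V().count()").all().get();
>>                 client.close();
>>                 cluster.close(); // watch for a hang here
>>             }
>>         }
>>     }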
>>
>> Other than that, let's focus on testing and documentation review this
>> week and then head on to release. Thanks!
>>
>
