I realized we might be pushing the Nexus instance too hard with our weekly
snapshots mechanism, resulting in a significant increase in storage demands
for KIE. I am discussing with INFRA in
https://issues.apache.org/jira/browse/INFRA-25812 whether we can adjust the
number of weekly snapshots retained.

Regards
Jan

On Wed, 22 May 2024 at 09:29, Jan Šťastný <jstastn...@apache.org> wrote:

> Hello, after a short period during which SNAPSHOTs worked fine following
> the recent refactoring, problems have emerged once again.
>
> A few days ago, SNAPSHOT deployments started failing with HTTP 503
> errors, reported as https://issues.apache.org/jira/browse/INFRA-25807
>
> Regards
> Jan
>
> On Thu, 16 May 2024 at 08:38, Jan Šťastný <jstastn...@apache.org> wrote:
>
>> Hello,
>>
>> Once again, our SNAPSHOT-deploying CI pipelines have not been successful
>> over the past few days.
>>
>> This time we are hitting timeouts too often during artifact uploads.
>> Incidentally, we have been working on unifying the deploy procedures (also
>> for SNAPSHOTs) across the KIE podling pipelines (
>> https://github.com/apache/incubator-kie-issues/issues/1123), which should
>> bring a failover mechanism for such timeouts; see the sketch below.
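>>
>> For illustration, a minimal sketch of what such a failover could look like
>> in a Jenkinsfile, assuming a plain retry of the upload step (the retry
>> count, Maven options and overall structure are my own example, not the
>> actual Jenkinsfile.buildchain change):
>>
>>     node {
>>         // If the SNAPSHOT upload times out, mvn exits non-zero and the
>>         // whole deploy step is retried instead of failing the nightly run.
>>         retry(3) {
>>             sh 'mvn -B deploy -DskipTests'
>>         }
>>     }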
>>
>> I went ahead and merged the change to the Jenkinsfile.buildchain file,
>> which is the one actually used in the nightlies for the drools and
>> kogito-* projects. I am currently running a skipTests run to validate the
>> change; if it succeeds, I'll chase reviewers for the rest of my relevant
>> PRs so that this is fixed for tonight.
>>
>> Regards
>> Jan
>>
>> On Tue, 7 May 2024 at 15:00, Jan Šťastný <jstastn...@apache.org> wrote:
>>
>>> Jobs have been successfully generated now.
>>>
>>> Nightlies including snapshot deployment have passed (after some reruns).
>>>
>>> Generally speaking, I don't think the Jenkins instance is healthy, so
>>> please report problems on the mailing list if they persist. Alternatively,
>>> you can rerun jobs yourself if you know where things reside; every
>>> committer has job execution permissions.
>>>
>>> Snapshot uploads often fail due to timeouts; we already have one tweak
>>> for this on the way.
>>>
>>> Regards
>>> Jan
>>>
>>> On Mon, 6 May 2024 at 19:46, Jan Šťastný <jstastn...@apache.org> wrote:
>>>
>>>> I tried selectively removing the old jobs for the drools and kogito
>>>> pipelines, but the seed runs were still not going through. So I dropped
>>>> all existing jobs, and now the seeds are running correctly, already
>>>> generating jobs for the branches (nightly, PR, ...).
>>>> The downsides are the loss of execution history and the retriggering of
>>>> PR checks for all open PRs (again, a reminder to do regular cleanup of
>>>> stale PRs).
>>>> The upside is that the nightly build should trigger overnight.
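>>>>
>>>> For anyone unfamiliar with the seed mechanism, here is a rough Job DSL
>>>> sketch of the kind of job a seed run (re)generates per branch. The job
>>>> name, branch list, repository URL and script path are made up for
>>>> illustration and are not the actual seed code:
>>>>
>>>>     ['main', '10.0.x'].each { br ->
>>>>         pipelineJob("drools-nightly.${br}") {
>>>>             definition {
>>>>                 cpsScm {
>>>>                     scm {
>>>>                         git {
>>>>                             remote { url('https://github.com/apache/incubator-kie-drools.git') }
>>>>                             branch(br)
>>>>                         }
>>>>                     }
>>>>                     // hypothetical path to the nightly pipeline definition
>>>>                     scriptPath('.ci/jenkins/Jenkinsfile.nightly')
>>>>                 }
>>>>             }
>>>>         }
>>>>     }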
>>>>
>>>> Regards
>>>> Jan
>>>>
On Mon, 6 May 2024 at 16:05, Alex Porcelli <a...@porcelli.me> wrote:
>>>>
>>>>> Thank you, Jan - really appreciate your proactiveness!
>>>>>
>>>>> Please keep us posted!
>>>>>
>>>>> On Mon, May 6, 2024 at 9:58 AM Jan Šťastný <jstastn...@apache.org>
>>>>> wrote:
>>>>> >
>>>>> > The workaround resolves the issue.
>>>>> >
>>>>> > But to apply the workaround, it turns out the easiest way is to drop
>>>>> > the existing jobs. It's not clear to me why, all of a sudden, existing
>>>>> > jobs are not replaced during the DSL generation, and there is no error
>>>>> > message signalling the possible reason. I only realized that this is
>>>>> > the differentiator between my working tests and the failing seed
>>>>> > execution after the merge.
>>>>> >
>>>>> > I took the OptaPlanner pipelines as a guinea pig to test this
>>>>> > assumption, and after removing the existing jobs, the DSL code
>>>>> > generation worked correctly. I triggered a nightly build after the DSL
>>>>> > generation and the problem is gone.
>>>>> >
>>>>> > As a result, though, the job execution history is lost, including the
>>>>> > "age" of a possible test failure. For the sake of unblocking the
>>>>> > nightly builds in a timely manner, I am going to replicate this
>>>>> > approach for the kogito and drools pipelines too.
>>>>> >
>>>>> > Regards
>>>>> > Jan
>>>>> >
>>>>> >
>>>>> >
>>>>> > On Mon, 6 May 2024 at 09:25, Jan Šťastný <jstastn...@apache.org> wrote:
>>>>> >
>>>>> > > Hello,
>>>>> > > it seems that since May 1st the nightly builds have failed to trigger
>>>>> > > due to a git clone error.
>>>>> > >
>>>>> > > The root cause of this failure is not clear at the moment, but there
>>>>> > > is supposed to be a workaround available. I need to figure out how to
>>>>> > > apply it across all of our ASF Jenkins CI jobs.
>>>>> > >
>>>>> > > Regards
>>>>> > > Jan
>>>>> > >
>>>>>
>>>>> ---------------------------------------------------------------------
>>>>> To unsubscribe, e-mail: dev-unsubscr...@kie.apache.org
>>>>> For additional commands, e-mail: dev-h...@kie.apache.org
>>>>>
>>>>>
