ll-dep' failed
> > 16:24:28 make: *** [install-dep] Error 100
> > 16:24:28 rm -f /w/workspace/vpp-csit-verify-virl-master/build-root/scripts/.version
> > /w/workspace/vpp-csit-verify-virl-master/build-root/rpm/*.tar.gz
> 16:24:28 E: Package 'pkg-config' has no installation candidate
> 16:24:28 E: Unable to locate package chrpath
> 16:24:28 E: Unable to locate package default-jdk-headless
>
> Anything I need to do on my side?
>
> --a
>
>
The following changes have resolved this issue:
https://gerrit.fd.io/r/#/c/4990/
https://gerrit.fd.io/r/#/c/4993/
Builds are passing
vpp-verify-master-ubuntu1604
vpp-csit-verify-virl-master
Thank you,
Vanessa
On Wed Feb 01 11:28:33 2017, valderrv wrote:
> We have opened a ticket with our
https://gerrit.fd.io/r/#/c/4754/1 unable to resolve jenkins.fd.io and
nexus.fd.io in one build. This is not an ongoing issue.
Logging https://gerrit.fd.io/r/#/c/4758/ in another RT since these are not
related.
Thank you,
Vanessa
On Wed Jan 18 11:40:17 2017, alaga...@gmail.com wrote:
>
Peter,
Please let us know if you are still having these issues.
Thank you,
Vanessa
On Mon Oct 10 10:44:46 2016, valderrv wrote:
> Peter,
>
> This issue is resolved. This was a result of a vendor issue and the
> new version of Gerrit actively rejects the status messages that
> Jenkins is
Dave,
We're seeing intermittent errors related to plugins. The JClouds plugin is
still installed even though we are using OpenStack. It isn't installed on the
sandbox, so removing it in production should be a low-risk change. After I
remove the plugin
This change is complete. Jenkins builds have started. Please report any
issues to valderrv via fdio-infra on IRC.
Thank you,
We had intermittent builds failing that required Jenkins maintenance.
Notifications were sent to t...@lists.fd.io, disc...@lists.fd.io, and
//nexus.fd.io/content/repositories/fd.io.master.ubuntu.xenial.main/io/fd/vpp/vpp/
> >
> > Ub14 and centos7 are ok. Any idea why is that happening?
> >
> > This is a bigger problem for our integration jobs (we are discussing
> > workaround now).
> >
> >
All FD.io certs have been updated except for Nexus which will be updated next
week.
On Fri Mar 10 11:51:51 2017, emsearcy wrote:
> Update: https://fd.io, https://wiki.fd.io, and https://lists.fd.io
> have had their certs updated. Other sites including Gerrit are still
> in progress; will send
I spoke with the vendor. They are implementing a fix at this time. They are
also adding monitoring so they can proactively alert and resolve this issue in
the future.
On Wed Aug 16 11:29:10 2017, dwallacelf wrote:
> Resend without graphics... See link for details.
>
> On 8/16/17 11:27 AM, Dave
I spoke with the vendor. They are implementing a fix at this time. They are
also adding monitoring so they can proactively alert and resolve this issue in
the future.
On Wed Aug 16 11:03:24 2017, ksek...@cisco.com wrote:
> Hi helpd...@fd.io,
>
> I just noticed a 360 minute build timeout in
Root cause:
A misconfigured flavor type did not have proper CPU shares/limits, so it was
unfairly consuming significantly more CPU power than it should. When VMs with
that flavor landed on a hypervisor, CPU on that host was severely limited
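The usual fix for this kind of unfair contention is to give the flavor explicit CPU shares and limits via extra specs. A configuration sketch, assuming the Nova libvirt driver; the flavor name and numbers below are hypothetical examples, not the actual FD.io settings:

```
# Cap the flavor so its VMs compete fairly for host CPU.
# "builder-4c8g" and the values are illustrative placeholders.
openstack flavor set builder-4c8g \
  --property quota:cpu_shares=2048 \
  --property quota:cpu_period=100000 \
  --property quota:cpu_quota=400000
```

With shares/limits set, the scheduler's weighting and libvirt's cgroup caps prevent one flavor from starving co-located VMs.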
On Wed Aug 16 17:50:49 2017, valderrv wrote:
> I spoke with
Still looking into this issue.
On Wed May 31 11:37:54 2017, hagbard wrote:
> Anton,
> We are getting patches not triggering again:
>
> https://gerrit.fd.io/r/#/c/6959/
>
> Ed
>
> On Wed, May 31, 2017 at 7:49 AM, Anton Baranov via RT <
> fdio-helpd...@rt.linuxfoundation.org> wrote:
>
>
Restarting the plugin appears to have resolved this issue. I've run several
rechecks that all triggered properly. Since this issue appeared to be resolved
earlier we'll leave this ticket open for now. Please let us know if this
happens again.
Thank you,
Vanessa
On Wed May 31 13:15:14 2017,
This appears to be an isolated incident caused by an intermittent
infrastructure issue. All jobs since 5805 have passed. I've added this issue
to the Jenkins failure cause management.
Thank you,
Vanessa
On Wed Jun 07 19:07:18 2017, dwallacelf wrote:
> Dear held...@fd.io
>
> Please
Jan,
We are looking into this issue.
Thank you,
Vanessa
On Fri Jun 16 09:12:55 2017, jgel...@cisco.com wrote:
> Hello Anton,
>
> Unfortunately we are still having issues with ssh connection timeouts
> during tests on virl. Could you, please, have a look on it?
>
> Thank you very much.
>
Florin,
It doesn't appear the failures are related to the new instances. We
will be switching to a new performance (4c/8GB) instance when it's
available. I've submitted a request to the vendor and will follow up
tomorrow morning. Adding the additional CPU resources may
improve/resolve the
This issue appears to be resolved with the switch to the new instances.
On Wed Sep 06 15:48:23 2017, valderrv wrote:
> We are in the process of switching to dedicated instances that should
> resolve this issue. We hope to have this complete tomorrow around
> 9:00am PDT
>
>
> On 09/06/2017
This issue appears to be resolved with the switch to the new instances.
On Fri Aug 25 12:16:52 2017, valderrv wrote:
> It appears build times have recovered. I opened a ticket with the
> vendor to determine the root cause of the timeouts and will update the
> ticket when I receive a response.
>
We are in the process of switching to dedicated instances that should
resolve this issue. We hope to have this complete tomorrow around
9:00am PDT
On 09/06/2017 02:40 PM, Florin Coras wrote:
> Hi,
>
> Any news regarding this? We are 1 week away from API freeze and the
> infra makes it almost
Switching to the new flavors appears to have resolved this issue.
Thank you,
Vanessa
On Wed Sep 13 15:09:49 2017, valderrv wrote:
> We do see intermittent slow response times from Nexus. We are
> investigating the cause.
>
> On Wed Sep 13 09:56:20 2017, dbarach wrote:
> > See gerrit
Florin,
I'm looking into the issue now.
Thank you,
Vanessa
On 10/02/2017 11:49 AM, Florin Coras wrote:
> Hi Vanessa,
>
> It would seem we’re running out of executors and the build queue keeps on
> growing. Could you take a look at it?
>
> Thanks,
> Florin
On Mon Oct 02 16:02:06 2017, fcoras.li...@gmail.com wrote:
> Queue is back down to 1.
>
> Thanks a lot Vanessa!
>
> Florin
>
> > On Oct 2, 2017, at 11:59 AM, Vanessa Valderrama
> > wrote:
> >
> > Florin,
> >
> > I'm looking into the issue now.
> >
> > Thank
It appears build times have recovered. I opened a ticket with the vendor to
determine the root cause of the timeouts and will update the ticket when I
receive a response.
Thank you,
Vanessa
On Thu Aug 24 13:57:31 2017, dwallacelf wrote:
> Dear helpdesk,
>
> Please investigate this build
$ ssh -6 -v -p29418 gerrit.fd.io
OpenSSH_7.2p2 Ubuntu-4ubuntu2.2, OpenSSL 1.0.2g 1 Mar 2016
debug1: Reading configuration data /users/dbarach/.ssh/config
debug1: /users/dbarach/.ssh/config line 5: Applying options for gerrit.fd.io
debug1: Reading configuration data /etc/ssh/ssh_config
debug1:
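The debug output above shows a per-host stanza being applied from line 5 of ~/.ssh/config. A typical Gerrit stanza looks like the sketch below; the username and key path are placeholders, not values from this ticket:

```
Host gerrit.fd.io
    Port 29418
    User <your-gerrit-username>
    IdentityFile ~/.ssh/id_rsa
```

With such a stanza in place, `ssh -6 -v gerrit.fd.io` needs no explicit -p29418.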
Can you point me to specific changes that aren't in sync? The branches appear
to be in sync.
On Mon Nov 20 21:45:46 2017, fcoras.li...@gmail.com wrote:
> Hi,
>
> It seems that git master head points at a gerrit patch merged 10h
> ago[1]. There have been several recent merges and none show up
This error typically occurs when trying to deploy an artifact that already
exists. Would you like to remove the artifacts that were deployed on 10/10? I
believe that will resolve this error.
vpp-dpdk-dev-17.08-vpp2_amd64-deb.deb
Thank you,
Vanessa
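A redeploy failure like the one described above can be avoided by checking Nexus for the artifact before deploying. A minimal sketch, assuming the decision is made from the HTTP status of a HEAD request against the artifact URL (the helper function and statuses are illustrative, not part of the actual job):

```shell
# Decide whether to deploy based on the HTTP status Nexus returns for a
# HEAD request on the artifact URL (obtained elsewhere, e.g. with
# curl -s -o /dev/null -w '%{http_code}' "$url").
# Hypothetical helper: release repos reject redeploys of existing artifacts.
should_deploy() {
  status="$1"
  if [ "$status" = "404" ]; then
    echo "deploy"   # artifact not present yet
  else
    echo "skip"     # already deployed; redeploying would fail
  fi
}

should_deploy 404   # deploy
should_deploy 200   # skip
```

Skipping the deploy step when the artifact already exists keeps reruns of the job idempotent.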
On Wed Nov 08 08:49:28 2017, hagbard
Ed
>
> On Wed, Nov 8, 2017 at 7:52 AM Vanessa Valderrama via RT <
> fdio-helpd...@rt.linuxfoundation.org> wrote:
>
> > This error typically occurs when an artifact that already exists is
> > trying
> > to be deployed. Would you like to remove the artifacts
all given that the version in nexus matches the version in
> the
> repo which means we should simply be installing it from the repo
> rather
> than building it...
>
> Ed
>
> On Wed, Nov 8, 2017 at 7:52 AM Vanessa Valderrama via RT <
> fdio-helpd...@rt.linuxfoundation.org>
Anton and I worked with Dave to troubleshoot this issue. It appears to be
isolated. He's going to contact Cisco IT and has approved closing this ticket.
On Thu Oct 26 13:45:06 2017, valderrv wrote:
> When Dave returns from his conference we'll schedule time to
> troubleshoot this issue with
Anton actually resolved this issue for us.
On Tue May 22 14:56:02 2018, fcoras.li...@gmail.com wrote:
> It is! Thank you, Vanessa!
>
> Florin
>
> > On May 22, 2018, at 11:41 AM, Vanessa Valderrama via RT <helpd...@rt.linuxfoundation.org> wrote:
> >
>
This issue should be resolved.
Thank you,
Vanessa
On Tue May 22 02:59:02 2018, mvarl...@suse.de wrote:
> Roughly a week ago, I noticed there was a DNS/IP change when cloning a
> new VPP
> repo... I wonder if what I saw is somehow connected to this issue.
> On Mon, 2018-05-21 at 16:34 -0700,
Peter,
The fd.io.master.centos7 repo had to be cleaned up significantly to eliminate
Jenkins build timeout errors. This was discussed in the TSC. Going forward
we'll only be keeping an average of 10 of the current release candidate
artifacts in the repository. Please let me know if this
Marco,
The error is caused by trying to install a package older than the one already
installed. This issue appears to be resolved, as the centos7 builds are no
longer failing because of it.
Thank you,
Vanessa
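The failure mode above comes down to a version comparison: a package manager treats an install request for an older version as a downgrade and refuses it. A sketch of that check using `sort -V`; the version strings are made up for illustration:

```shell
# If the already-installed version sorts higher than the candidate,
# a plain "install" of the candidate is a downgrade and will fail.
installed="18.01-release"
candidate="18.01-rc2"   # older than what is installed

newest=$(printf '%s\n%s\n' "$installed" "$candidate" | sort -V | tail -n1)
if [ "$newest" = "$installed" ] && [ "$installed" != "$candidate" ]; then
  echo "downgrade detected: install would fail"
fi
```

The fix is either to remove the newer package first or, as here, to stop publishing candidates older than what the repo already serves.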
On Tue Feb 20 07:09:27 2018, mvarl...@suse.de wrote:
> Hi,
>
> This
> Engineer – Software
> Cisco Systems Limited
>
>
> -----Original Message-----
> From: Vanessa Valderrama via RT [mailto:fdio-
> helpd...@rt.linuxfoundation.org]
> Sent: Monday, June 04, 2018 9:47 PM
> To: Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at Cisco)
>
> Cc: csit-.
us.
> Right now Nexus is not an option for us anymore. This also means that
> Nexus artifacts will not be tested by CSIT.
>
> Peter Mikus
> Engineer – Software
> Cisco Systems Limited
>
> -----Original Message-----
> From: Vanessa Valderrama via RT [mailto:fdio-
> helpd...
tifacts posted on
> Nexus) then means that a safe value is around 100-120 artifacts.
> This applies to the master branch of VPP. For stable branches we
> should be ok with just 10-15 artifacts.
>
> Peter Mikus
> Engineer – Software
> Cisco Systems Limited
>
> -Original Me
Matus,
I believe the vpp-dpdk-devel package error can be resolved by doing a rebase.
Thank you,
Vanessa
On Tue Mar 13 11:13:13 2018, matfa...@cisco.com wrote:
> Hi,
>
> I see issues with opensuse and centos verify jobs for stable/1801
> branch.
> The error is same for both jobs:
> 11:39:26
The issue has been resolved. The root cause was a change to the
postbuildscript in the global-macros.yaml file. It doesn't work on the VPP
containers because it uses the lf-infra-ship-logs builder, which can't run on
those containers at this time.
We didn't anticipate this being an issue prior
The patch was tested on the sandbox and merged but because ci-management
verify/merge jobs weren't running successfully the patch didn't update the jobs.
A patch was merged to resolve the issue with the ci-management verify/merge
jobs.
I've run recheck on all VPP jobs that were unstable are
Maciek,
We're looking into this now.
On Fri Nov 16 09:02:19 2018, mackonstan wrote:
> Nexus(?) backend to nginx is reporting down.
> Resolution time?
>
> Cheers,
> -Maciek
>
> [cid:876751CF-C733-46B9-9A07-8298C0A0A0C8@cisco.com]
Apparently, we have an issue with an upstream network provider which is
impacting some of our CI services.
We're working on getting more information including an ETA if possible. I'll
send a notification to the community as well.
Thank you,
Vanessa
On Fri Nov 16 10:18:59 2018, valderrv
The hardware has been replaced and services have been restored. Thank you for
your patience. Please open an RT ticket if you are experiencing any issues.
helpd...@fd.io
Thank you,
Vanessa
On Fri Nov 16 10:26:44 2018, valderrv wrote:
> Apparently, we have an issue with an upstream network
We are aware of users experiencing authentication issues when trying to
log into gerrit.fd.io. We have identified the cause and are working to
resolve it as quickly as possible. I have requested an ETA but have not
been provided one at this time.
Thank you,
Vanessa
On Tue Feb 19 09:57:19 2019,
The issue should be resolved now.
On Tue Feb 19 10:11:16 2019, valderrv wrote:
> We are aware of users experiencing authentication issues when trying to
> log into gerrit.fd.io. We have identified the cause and are working to
> resolve it as quickly as possible. I have requested an ETA but have
This issue is resolved. We will send an email to the community detailing the
cause for the issues this morning.
Thank you,
Vanessa
On Tue May 14 10:48:27 2019, dbarach wrote:
> "executor starvation"... Something is seriously wrong...
>
> Thanks... Dave
>