On Tuesday, 07 June 2016 at 09:54 +0200, Michael Scherer wrote:
> On Monday, 06 June 2016 at 21:18 +0200, Niels de Vos wrote:
> > On Mon, Jun 06, 2016 at 09:59:02PM +0530, Nigel Babu wrote:
> > > On Mon, Jun 6, 2016 at 12:56 PM, Poornima Gurusiddaiah 
> > > <[email protected]>
> > > wrote:
> > > 
> > > > Hi,
> > > >
> > > > There are multiple issues that we saw with regressions lately:
> > > >
> > > > 1. On certain slaves the regression fails during the build; I see
> > > > this on slave26.cloud.gluster.org, slave25.cloud.gluster.org, and
> > > > maybe others too.
> > > >     E.g.:
> > > > https://build.gluster.org/job/rackspace-regression-2GB-triggered/21422/console
> > > >
> > > 
> > > Are you sure this isn't a code breakage?
> > 
> > No, it really does not look like that.
> > 
> > Here is another one; it seems the testcase got killed for some reason:
> > 
> >   
> > https://build.gluster.org/job/rackspace-regression-2GB-triggered/21459/console
> > 
> > It was running on slave25.cloud.gluster.org too... Is it possible that
> > there is some watchdog or other configuration checking for resources and
> > killing testcases on occasion? The number of slaves where this happens
> > seems limited; were these more recently installed/configured?
> 
> So dmesg shows yum trapping on an invalid opcode:
> 
> yum[2711] trap invalid opcode ip:7f2efac38d60 sp:7ffd77322658 error:0 in
> libfreeblpriv3.so[7f2efabe6000+72000]
> 
> and
> https://access.redhat.com/solutions/2313911
> 
> That's exactly the problem.
> [root@slave25 ~]# /usr/bin/curl https://google.com
> Illegal instruction
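(As a side note, this crash mode is easy to recognize from a script: a process killed by a signal exits with status 128+signal, and SIGILL is signal 4, so the crash above shows up as exit status 132. A minimal sketch, with a helper function name of my own invention:)

```shell
# Classify a command's exit status. By POSIX shell convention,
# 128+N means the process was killed by signal N; SIGILL is 4,
# so 132 means it died on an illegal instruction, as curl does here.
classify_exit() {
    case "$1" in
        0)   echo "ok" ;;
        132) echo "affected: killed by SIGILL (illegal instruction)" ;;
        *)   echo "other failure (exit $1)" ;;
    esac
}

# On a slave one could run, e.g.:
#   curl -s -o /dev/null https://google.com; classify_exit $?
```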
> 
> I propose to remove the builder from rotation while we investigate.

Or we can:

export NSS_DISABLE_HW_AES=1

to work around it, cf. the bug listed in the article.

Not sure of the best way to deploy that.
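(One option, sketched below and not tested on the builders, would be a drop-in under /etc/profile.d; the file name is mine, and this assumes the Jenkins slave processes go through a login shell that sources /etc/profile.d/*.sh:)

```shell
# Hypothetical deployment of the workaround from the Red Hat article:
# a profile.d drop-in so login shells export the variable. Note this
# only covers processes started through a login shell; services started
# directly by init would need the variable set in their own environment.
cat > nss-hw-aes-workaround.sh <<'EOF'
# Disable NSS's hardware AES path to avoid the libfreeblpriv3.so
# illegal-instruction crash (https://access.redhat.com/solutions/2313911).
export NSS_DISABLE_HW_AES=1
EOF

# On a slave this would then be installed as root, e.g.:
#   install -m 0644 nss-hw-aes-workaround.sh /etc/profile.d/
```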
-- 
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS



_______________________________________________
Gluster-infra mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-infra
