On Thu, Jun 18, 2015 at 10:19:27AM +0200, Niels de Vos wrote:
Good to know, but it would be much more helpful if someone could install
VMs there and add them to the Jenkins instance... Who can do that, or
who can guide someone else to get it done?
How will that help, since we are having
On Thursday, June 18, 2015 at 17:57 +0200, Emmanuel Dreyfus wrote:
Niels de Vos nde...@redhat.com wrote:
I'm not sure what limitation you mean. Did we reach the limit of slaves
that Jenkins can reasonably address?
No I mean its inability to catch a new DNS record.
It might be a glibc
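[A likely explanation for Jenkins not catching a new DNS record, offered here as an assumption rather than a confirmed diagnosis: Jenkins runs on the JVM, and the JVM caches successful name lookups itself, independently of glibc; with a security manager installed it caches them forever, which matches the symptom that only a Jenkins restart picks up new records. The cache TTL can be lowered in the JRE's java.security file; the path below is the conventional location and may differ per install:]

```properties
# $JAVA_HOME/jre/lib/security/java.security
# Cache successful DNS lookups for 60 seconds instead of indefinitely
networkaddress.cache.ttl=60
# Re-try failed lookups after 10 seconds
networkaddress.cache.negative.ttl=10
```

[The same effect can be had at startup with -Dsun.net.inetaddr.ttl=60, without editing the JRE files.]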
Niels de Vos nde...@redhat.com wrote:
I'm not sure what limitation you mean. Did we reach the limit of slaves
that Jenkins can reasonably address?
No I mean its inability to catch a new DNS record.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
Vijay Bellur vbel...@redhat.com wrote:
I did dare just now and have rebooted Jenkins :). Let us see how this
iteration works out.
Excellent! That fixed the Jenkins resolution problem, and we now have 10
NetBSD slave VMs online.
So we have two problems and their fixes available, for adding new
On Thursday 18 June 2015 02:52 PM, Emmanuel Dreyfus wrote:
Justin Clift jus...@gluster.org wrote:
If the DNS problem does turn out to be the dodgy iWeb hardware firewall,
then this fixes the DNS issue. (if not... well damn!)
The DNS problem was worked around by installing an /etc/hosts file, but
Justin Clift jus...@gluster.org wrote:
If the DNS problem does turn out to be the dodgy iWeb hardware firewall,
then this fixes the DNS issue. (if not... well damn!)
The DNS problem was worked around by installing an /etc/hosts file, but
Jenkins does not realize it is there. It should probably be
On 18 Jun 2015, at 16:57, Emmanuel Dreyfus m...@netbsd.org wrote:
Niels de Vos nde...@redhat.com wrote:
I'm not sure what limitation you mean. Did we reach the limit of slaves
that Jenkins can reasonably address?
No I mean its inability to catch a new DNS record.
Priority wise, my
On Wed, Jun 17, 2015 at 12:13:46PM +, Emmanuel Dreyfus wrote:
On Wed, Jun 17, 2015 at 07:44:14AM -0400, Vijay Bellur wrote:
Do we still have the NFS crash that was causing tests to hang?
Do we still have it on rebased patchsets?
Yes, the fixes depend on the refcounting change which does
On Wed, Jun 17, 2015 at 07:44:14AM -0400, Vijay Bellur wrote:
Do we still have the NFS crash that was causing tests to hang?
Do we still have it on rebased patchsets?
--
Emmanuel Dreyfus
m...@netbsd.org
___
Gluster-devel mailing list
On Wednesday 17 June 2015 05:20 AM, Emmanuel Dreyfus wrote:
On Wed, Jun 17, 2015 at 11:05:38AM +0200, Niels de Vos wrote:
I've already scripted the reboot-vm job to use Rackspace API, the DNS
requesting and formatting the results into some file can't be that
difficult. Let me know if a
On Wednesday 17 June 2015 08:13 AM, Emmanuel Dreyfus wrote:
On Wed, Jun 17, 2015 at 07:44:14AM -0400, Vijay Bellur wrote:
Do we still have the NFS crash that was causing tests to hang?
Do we still have it on rebased patchsets?
I am not certain. I am still trying to come to terms with my
cloud.gluster.org is served by Rackspace Cloud DNS. AFAICT, there is
no readily available option to do zone transfers from it. We might
have to contact the Rackspace support to find out if they can do it as
a special request.
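[Whether Rackspace's servers permit a transfer can be probed directly with dig before contacting support. A small hypothetical helper for reading the result; the name server below is assumed to be the Rackspace Cloud DNS one and the exact dig output strings vary slightly between versions:]

```python
# Hypothetical helper: decide from `dig ... AXFR` output whether the
# name server allowed a zone transfer. A successful transfer prints the
# zone's records followed by an ";; XFR size:" summary line; a refusal
# prints "; Transfer failed." instead.
def axfr_allowed(dig_output: str) -> bool:
    if "Transfer failed" in dig_output:
        return False
    return "XFR size:" in dig_output

# Output to feed it could come from something like (assumed server name):
#   dig @dns1.stabletransit.com cloud.gluster.org AXFR
```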
On Wed, Jun 17, 2015 at 11:50 AM, Emmanuel Dreyfus m...@netbsd.org wrote:
Subject: [Gluster-devel] [Gluster-infra] NetBSD regressions not being triggered for patches
- Original Message -
From: Kaushal M kshlms...@gmail.com
To: Emmanuel Dreyfus m...@netbsd.org
Cc: Gluster Devel gluster-devel@gluster.org, gluster-infra
gluster-in...@gluster.org
Sent: Wednesday, 17 June, 2015 11:59:22 AM
Subject: Re: [Gluster-devel] [Gluster-infra] NetBSD
On Wed, Jun 17, 2015 at 11:59:22AM +0530, Kaushal M wrote:
cloud.gluster.org is served by Rackspace Cloud DNS
Perhaps we can change that and setup a DNS for the zone?
--
Emmanuel Dreyfus
m...@netbsd.org
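[Setting up our own DNS for the zone would mean generating and serving a zone file ourselves. A minimal sketch of the rendering half; the name server, SOA values, and helper are illustrative, not an actual gluster.org configuration:]

```python
# Render a minimal BIND-style zone file from a map of hostnames to
# addresses. Serial would typically be bumped on every regeneration.
def render_zone(origin, ns, a_records, serial, ttl=300):
    lines = [
        f"$ORIGIN {origin}.",
        f"$TTL {ttl}",
        f"@ IN SOA {ns}. hostmaster.{origin}. ({serial} 3600 900 604800 {ttl})",
        f"@ IN NS {ns}.",
    ]
    # Sort for stable diffs between regenerated zone files
    for host, ip in sorted(a_records.items()):
        lines.append(f"{host} IN A {ip}")
    return "\n".join(lines) + "\n"
```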
On Wednesday, June 17, 2015 at 11:48 +0200, Michael Scherer wrote:
On Wednesday, June 17, 2015 at 08:20 +0200, Emmanuel Dreyfus wrote:
Venky Shankar yknev.shan...@gmail.com wrote:
If that's the case, then I'll vote for this even if it takes some time
to get things in workable state.
On Wed, Jun 17, 2015 at 11:59:22AM +0530, Kaushal M wrote:
cloud.gluster.org is served by Rackspace Cloud DNS. AFAICT, there is
no readily available option to do zone transfers from it. We might
have to contact the Rackspace support to find out if they can do it as
a special request.
Not sure
On Wed, Jun 17, 2015 at 11:05:38AM +0200, Niels de Vos wrote:
I've already scripted the reboot-vm job to use Rackspace API, the DNS
requesting and formatting the results into some file can't be that
difficult. Let me know if a /etc/hosts format would do, or if you expect
something else.
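[The "/etc/hosts format" on offer here is simple enough to sketch: take the name-to-address mapping the reboot-vm script can pull from the Rackspace API and emit one line per host. The slave names and addresses below are made up for illustration:]

```python
# Turn {hostname: ip} into /etc/hosts-style lines, with the address
# column padded so the hostnames line up.
def to_hosts_format(addresses):
    width = max(len(ip) for ip in addresses.values())
    return "\n".join(
        f"{ip:<{width}}  {host}" for host, ip in sorted(addresses.items())
    ) + "\n"

print(to_hosts_format({
    "nbslave70.cloud.gluster.org": "104.130.1.70",
    "nbslave71.cloud.gluster.org": "104.130.1.71",
}))
```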
Just moving Gerrit and Jenkins out of iWeb should help a lot.
The problem was nbslave71. It used to be picked first for all changes
and would fail instantly. I've disabled it now. The other slaves are
working correctly.
~kaushal
On Thu, Jun 11, 2015 at 12:56 PM, Emmanuel Dreyfus m...@netbsd.org wrote:
On Thu, Jun 11, 2015 at 12:39:43PM +0530, Atin
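[Taking a misbehaving slave like nbslave71 offline can also be done over Jenkins' HTTP API instead of the UI. The /computer/NAME/toggleOffline endpoint exists in stock Jenkins; the base URL and credentials below are placeholders, and this is a sketch, not how it was actually disabled:]

```python
# Build the URL for Jenkins' toggle-offline endpoint for a given node.
def toggle_offline_url(base, node):
    return f"{base.rstrip('/')}/computer/{node}/toggleOffline"

# Usage (requires the `requests` package and a valid API token):
#   import requests
#   requests.post(
#       toggle_offline_url("https://build.gluster.org", "nbslave71"),
#       auth=("admin", "API_TOKEN"),
#       params={"offlineMessage": "fails every job instantly"},
#   )
```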
On Thu, Jun 11, 2015 at 12:57:58PM +0530, Kaushal M wrote:
The problem was nbslave71. It used to be picked first for all changes
and would fail instantly. I've disabled it now. The other slaves are
working correctly.
Sadly the Jenkins upgrade did not help here. Last time I investigated
the