Re: [Gluster-infra] Move of the ci.gluster.org server from one location to another location

2016-08-23 Thread Michael Scherer
Le mercredi 17 août 2016 à 18:46 +0200, Michael Scherer a écrit :
> Le lundi 08 août 2016 à 15:11 +0200, Michael Scherer a écrit :
> > Le mercredi 03 août 2016 à 14:43 +0200, Michael Scherer a écrit :
> > > Le jeudi 28 juillet 2016 à 10:21 -0700, Amye Scavarda a écrit :
> > > > On Thu, Jul 28, 2016 at 4:05 AM, Niels de Vos  wrote:
> > > > 
> > > > > On Thu, Jul 28, 2016 at 06:21:33AM -0400, Kaleb KEITHLEY wrote:
> > > > > > On 07/27/2016 05:18 PM, Amye Scavarda wrote:
> > > > > > >
> > > > > > >
> > > > > > > From the latest updates on the cage list, it looks like we're 
> > > > > > > tracking
> > > > > > > for these moves/downtime for August 8-9? That'll be 
> > > > > > > Monday-Tuesday.
> > > > > > > Any complaints about that schedule?
> > > > > >
> > > > > > 3.8.x releases are scheduled for the 10th of each month. In 
> > > > > > yesterday's
> > > > > > Community meeting Niels said that 3.8.2 is planned to be released on
> > > > > > schedule.
> > > > > >
> > > > > > If this goes as planned then it shouldn't be a problem, right?
> > > > >
> > > > > We just need to make sure that all patches for 3.8.2 have been tested
> > > > > in the CI before the move. It shortens the development cycle by a few
> > > > > days, and it would have been nice to know a little more in advance and
> > > > > to have it mentioned in the community meeting.
> > > > >
> > > > > I do not think there are any critical patches for the release, so I do
> > > > > not expect any problems with the outage either. (Although it really
> > > > > isn't nice to treat 3.8 releases as a guinea pig for infrastructure
> > > > > changes.)
> > > > >
> > > > > Niels
> > > > >
> > > > I brought this up to the community cage group this morning, and we'll
> > > > look to move the VMs after the 3.8.2 release.
> > > > I'll let Michael put in more details around exact timing.
> > > 
> > > So we still do not have the exact timing; that's waiting on IT to
> > > configure the network port (and then I have to copy the data,
> > > configure the server for the new network and admin cards, and do
> > > various other things).
> > 
> > So after coming back from the weekend and dealing with my backlog of
> > mail and expenses, I just received a notification that IT moved the
> > server (on Friday evening), and it even answers pings on the admin
> > interface. So I will configure it for internet access later today or
> > tomorrow (depending on my capacity to read all the mail and attend
> > meetings), and will then plan the test move once I am confident the
> > server is OK.
> > 
> 
> So, news about the server.
> 
> I did see there was some weird LVM corruption (that I didn't
> investigate), but couldn't find the exact fix at first. Turns out it was
> just a matter of removing an extraneous PV from the VG, and that's it.
> However, since we are speaking of moving production workloads onto it, I
> will need to reformat it to use hardware RAID, so I am doing that tonight.
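For the record, cleaning a stray PV out of a VG usually comes down to a couple of LVM commands. A rough sketch, with hypothetical device and VG names (the actual names on the server weren't recorded here):

```shell
# Show which PVs currently belong to which VG (names below are hypothetical).
pvs

# Drop the extraneous PV from the volume group...
vgreduce vg_data /dev/sdb1

# ...and wipe the LVM label so the device is no longer reported as a PV.
pvremove /dev/sdb1
```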

Ok, so I did reformat and rename the server (with some pain, since the
iDRAC interface is a bit annoying, and I hit a few roadblocks with Java
on Linux, with the CentOS ISO, and with Anaconda partitioning choices).
It now has 4 disks in hardware RAID 5, and the name is
myrmicinae.rht.gluster.org (a subfamily of ants).

I will finish the ansiblization later.

> Then I will start to test copying the VMs once that part is done.

On that front, it seems doing any kind of disk snapshot requires some
downtime, because RHEL does remov^W differentiate features between qemu
for RHEL and qemu for RHEV, so CentOS inherits the limitation.

So I guess we might need a few reboots, which will also let us apply
kernel upgrades, etc. I have no idea how much downtime is needed, but
since it's just a reboot, I hope I can do it outside office hours.
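Since live snapshots are out, the copy will presumably be the offline variant; a minimal sketch, with a hypothetical VM name and paths:

```shell
# Stop the guest so the disk image is quiescent (VM name is hypothetical).
virsh shutdown builder01

# Copy the disk image to the new host; qemu-img can also convert and
# compact it on the way (paths are hypothetical).
qemu-img convert -O qcow2 /var/lib/libvirt/images/builder01.qcow2 \
    /mnt/newhost/builder01.qcow2

# Boot it again once the copy is done.
virsh start builder01
```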


-- 
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS




___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] Can I update /opt/qa for NetBSD?

2016-08-23 Thread Emmanuel Dreyfus
Nigel Babu  wrote:

> Practically, I only need someone to look at one line:

Where is this fs used? 

For instance, mount(8) knows about ffs, not UFS:
$ mount
/dev/raid0a on / type ffs (log, local)
/dev/raid0e on /mail type ffs (log, nodev, nosuid, local, with quotas)
/dev/raid1a on /ssd type ffs (log, nodev, nosuid, local)
kernfs on /kern type kernfs (local)


-- 
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org


Re: [Gluster-infra] Can I update /opt/qa for NetBSD?

2016-08-23 Thread Nigel Babu
Hello folks,

I've been thinking about a different approach here. I'd like to fork the
build.sh/regression.sh/smoke.sh scripts for NetBSD until we unify them. This
seems to be the approach with the most gains at this point.

This way, I can make smaller, bite-sized changes that can be tested in
production (as a separate test job). This also means that what's in the
repo is what's currently running.

So, I'd have netbsd-smoke.sh, netbsd-regression.sh, and netbsd-build.sh for a
while. I will work on merging the changes into one script. Watch out for me
nagging you for reviews :)
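None of this is decided yet, but the job side of the fork could stay trivial; here's a sketch of a dispatcher (the script names are the ones proposed above, the helper function itself is hypothetical):

```shell
#!/bin/sh
# Map a shared script name to its per-OS fork (hypothetical helper):
# on NetBSD we run the netbsd-* fork, everywhere else the shared script.
script_for_os() {
    os="$1"
    script="$2"
    case "$os" in
        NetBSD) echo "netbsd-$script" ;;
        *)      echo "$script" ;;
    esac
}

# e.g. a Jenkins job would run: ./"$(script_for_os "$(uname -s)" regression.sh)"
script_for_os "$(uname -s)" regression.sh
```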

Speaking of harassing for reviews: 
https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/50

Practically, I only need someone to look at one line: 
https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/50/files#diff-db1659608ae45b2644416de2b9a76d3cR8

--
nigelb


[Gluster-infra] [Bug 1368441] NetBSD machines hang on umount

2016-08-23 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1368441

Kaleb KEITHLEY  changed:

   What|Removed |Added

   Keywords||Triaged
   Assignee|b...@gluster.org|nig...@redhat.com



-- 
You are receiving this mail because:
You are on the CC list for the bug.
Unsubscribe from this bug 
https://bugzilla.redhat.com/token.cgi?t=9R1O3gBEXo&a=cc_unsubscribe


[Gluster-infra] [Bug 1365430] Issues with slave21.cloud.gluster.org

2016-08-23 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1365430

Nigel Babu  changed:

   What|Removed |Added

 Status|ASSIGNED|CLOSED
 Resolution|--- |CURRENTRELEASE
Last Closed||2016-08-23 07:29:35



--- Comment #1 from Nigel Babu  ---
This is now fixed.



[Gluster-infra] [Bug 1359879] Cleanup hanging on NetBSD machines

2016-08-23 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1359879

Nigel Babu  changed:

   What|Removed |Added

 Status|NEW |CLOSED
 Resolution|--- |DUPLICATE
Last Closed||2016-08-23 07:31:21



--- Comment #3 from Nigel Babu  ---
This is a dupe of bug 1368441, which is now fixed. The issue that's causing
umount to hang is being tracked in bug 1369401.

*** This bug has been marked as a duplicate of bug 1368441 ***



[Gluster-infra] [Bug 1368441] NetBSD machines hang on umount

2016-08-23 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1368441



--- Comment #6 from Nigel Babu  ---
*** Bug 1359879 has been marked as a duplicate of this bug. ***



[Gluster-infra] [Bug 1367999] Build logs not accessible in centos setup

2016-08-23 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1367999

Nigel Babu  changed:

   What|Removed |Added

 Status|NEW |CLOSED
 Resolution|--- |CURRENTRELEASE
Last Closed||2016-08-23 07:28:56



--- Comment #5 from Nigel Babu  ---
This should now be fixed.



[Gluster-infra] [Bug 1368441] NetBSD machines hang on umount

2016-08-23 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1368441

Nigel Babu  changed:

   What|Removed |Added

 Status|NEW |CLOSED
 Resolution|--- |CURRENTRELEASE
Last Closed||2016-08-23 07:28:15



--- Comment #5 from Nigel Babu  ---
The issue of new tests failing has now been fixed with the addition of
`pkill gluster`. I've filed bug 1369401 to track what's causing umount to
hang in the first place.
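In other words, the cleanup step now makes sure nothing is still holding the mount before unmounting; roughly (the mount point path is hypothetical):

```shell
#!/bin/sh
# Kill any leftover gluster processes so they cannot keep the mount busy;
# pkill returns non-zero when nothing matched, which is fine here.
pkill gluster || true

# Give the processes a moment to exit, then unmount the test mount point
# (path is hypothetical).
sleep 2
umount /mnt/glusterfs
```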



Re: [Gluster-infra] Please welcome Worker Ant

2016-08-23 Thread Niels de Vos
On Mon, Aug 22, 2016 at 05:17:49PM -0700, Amye Scavarda wrote:
> On Mon, Aug 22, 2016 at 2:47 AM, Niels de Vos  wrote:
> 
> > On Mon, Aug 22, 2016 at 11:16:15AM +0200, Michael Scherer wrote:
> > > Le lundi 22 août 2016 à 12:41 +0530, Nigel Babu a écrit :
> > > > Hello,
> > > >
> > > > I've just switched the bugzilla authentication on Gerrit today. From
> > today
> > > > onwards, Gerrit will comment on Bugzilla using a new account:
> > > >
> > > > bugzilla-...@gluster.org
> > > >
> > > > If you notice any issues, please file a bug against
> > project-infrastructure.
> > > > This bot will only comment on public bugs, so if you open a review
> > request when
> > > > the bug is private, it will fail. This is intended behavior.
> > >
> > > You got me curious, in which case are private bugs required ?
> > >
> > > Do we need to make sure that the review is private also or something ?
> >
> > I do not expect any private bugs. On occasion it happens that a bug gets
> > cloned from Red Hat Gluster Storage and it can contain customer details.
> > Sometimes those bugs are not cleaned during the cloning (BAD!) and keep
> > the references to details from customers.
> >
> > All bugs that we have in the GlusterFS product in bugzilla must be
> > public, and must have a public description of the reported problem.
> >
> > Niels
> >
> >
> 
> Is it worth denying all posts from RHGS and causing people to manually
> submit bugs?
> I realize that's using a hammer on an edge-case, but if someone is moving a
> bug to the gluster.org project, I feel like it's on them to scrub it.

I do not think it is possible to configure the 'clone' functionality in
Bugzilla to reject clones from one product (RHGS) to another
(GlusterFS). We could ask the Bugzilla team, I guess. It would be a
clean way of doing things, and should get us better quality bugs in the
Gluster community.

It definitely is the task of the person cloning the bug to

1. clean it from customer references and private data
2. make sure the correct component/sub-component is set
3. have an understandable problem description and other supportive data
4. ... all other standard Bug Triage points

Maybe it would be good to repeat this to RHGS Engineering + Management
again, if this keeps happening regularly.

Niels



[Gluster-infra] [Bug 1368339] Access to slave{24, 27 and one other centos-slave}.cloud.gluster.org

2016-08-23 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1368339

Nigel Babu  changed:

   What|Removed |Added

 Status|NEW |CLOSED
 Resolution|--- |CURRENTRELEASE
Last Closed||2016-08-23 03:48:23



--- Comment #5 from Nigel Babu  ---
Excellent, I've put the machines back into the pool.
