Re: [Gluster-infra] [Gluster-devel] Gluster and GCC 5.1

2015-07-06 Thread Kaleb S. KEITHLEY
On 07/05/2015 06:54 PM, Michael Scherer wrote: On Monday, June 29, 2015 at 18:48 -0400, Kaleb Keithley wrote: From: Niels de Vos nde...@redhat.com On Mon, Jun 29, 2015 at 02:45:50PM -0400, Kaleb S. KEITHLEY wrote: On 06/29/2015 02:40 PM

Re: [Gluster-infra] [Gluster-devel] Gluster and GCC 5.1

2015-06-29 Thread Kaleb S. KEITHLEY
On 06/29/2015 02:45 PM, Kaleb S. KEITHLEY wrote: E.g. by building on Fedora Rawhide last year, reporting the bugs, and getting them fixed before Rawhide turned into Fedora 22. Immediately after the Fedora 22 release, a similar issue was reported by a user with GCC 5, and it was fixed

[Gluster-infra] nfs-ganesha-de...@gluster.org?

2015-07-20 Thread Kaleb S. KEITHLEY
The NFS-ganesha project is looking for somewhere to host their mailing lists after SourceForge's most recent meltdown. Could we — would we be willing to — host their lists? (All two of them I think. Probably with a few dozen subscribers at most I'm guessing.) -- Kaleb

Re: [Gluster-infra] broken link

2015-09-09 Thread Kaleb S. KEITHLEY
On 09/09/2015 01:27 PM, Joe Julian wrote: > http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo > is broken. Should CentOS be a symlink to EPEL.repo? > > Michael, is that handled in salt or is it a manual thing? It's manual. Whatever stuckee^h^h^h^h^h^h^hvolunteer
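The fix Joe suggests above can be sketched as a one-liner. This is a hypothetical illustration: the directory, file name, and contents below are stand-ins, not the real tree under pub/gluster/glusterfs/LATEST on download.gluster.org.

```shell
# Hypothetical sketch: make CentOS a symlink pointing at EPEL.repo.
demo=$(mktemp -d)                                # scratch dir standing in for the repo tree
printf '[glusterfs-epel]\n' > "$demo/EPEL.repo"  # stand-in for the real repo file
ln -sfn EPEL.repo "$demo/CentOS"                 # -f replaces a stale link, -n avoids descending into an existing link
readlink "$demo/CentOS"                          # prints: EPEL.repo
```

With `-sfn`, re-running the command is harmless, which suits a manually maintained (non-salt) layout.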

Re: [Gluster-infra] Some gerrit email is getting "high" spam scores

2016-02-17 Thread Kaleb S. KEITHLEY
On 02/16/2016 04:47 PM, Michael Scherer wrote: > On Monday, February 15, 2016 at 17:11 +0100, Michael Scherer wrote: >> On Monday, February 15, 2016 at 17:07 +0100, Michael Scherer wrote: >>> On Monday, February 15, 2016 at 08:03 -0500, Kaleb Keithley wrote: Hi, e.g. ...

Re: [Gluster-infra] All rpm jobs are now in jenkins job builder

2016-06-29 Thread Kaleb S. KEITHLEY
On 06/28/2016 08:07 PM, Sankarshan Mukhopadhyay wrote: > > I have a follow-up question on the production of these artifacts - > when do we check whether the RPMs or, the images produced are sane? > For example, that the RPMs are packaged well and as per specifications > ... > For

Re: [Gluster-infra] Coverity report on another server than download

2017-02-21 Thread Kaleb S. KEITHLEY
On 02/21/2017 10:40 AM, Michael Scherer wrote: Hi, I am pondering putting the results of the Coverity scan on a separate website from download.gluster.org, or if possible, on a separate vhost. Today I deployed a server that verifies the integrity of download.gluster.org, and I realized that since

Re: [Gluster-infra] [Gluster-devel] Dropping nightly build from download.gluster.org ?

2017-03-24 Thread Kaleb S. KEITHLEY
On 03/24/2017 09:39 AM, Niels de Vos wrote: > On Thu, Mar 23, 2017 at 05:29:05PM -0400, Michael Scherer wrote: >> Another example: >> pub/gluster/glusterfs has various directories for versions of glusterfs, >> but it also has libvirt, vagrant and nfs-ganesha, which are not versions, >> and might be

Re: [Gluster-infra] [Gluster-devel] Cleaning up Jenkins

2017-04-20 Thread Kaleb S. KEITHLEY
On 04/20/2017 08:17 AM, Shyam wrote: On 04/20/2017 01:27 AM, Nigel Babu wrote: Hello folks, As I was testing the Jenkins upgrade, I realized we store quite a lot of old builds on Jenkins that don't seem to be useful. I'm going to start cleaning them slowly in anticipation of moving Jenkins

Re: [Gluster-infra] [gluster-packaging] [Gluster-Maintainers] glusterfs-3.13.1 released

2017-12-21 Thread Kaleb S. KEITHLEY
https://build.gluster.org/job/release-new/lastSuccessfulBuild/artifact/glusterfs-3.13.1.tar.gz Check with NigelB about the copy to bits.gluster.org. On 12/21/2017 05:07 AM, Niels de Vos wrote: On Thu, Dec 21, 2017 at 02:07:50AM +, jenk...@build.gluster.org wrote: SRC:

[Gluster-infra] download.gluster.org: /dev/mapper/vg_download-lv_www_pub I/O errors

2018-02-12 Thread Kaleb S. KEITHLEY
FYI: At around 13:00 EST, Reiner030 reported in #gluster that download.gluster.org had been rejecting HTTP requests for at least 30 minutes. I signed on and saw that $subject was getting I/O errors. I unmounted it, remounted to replay the journal, and unmounted again. I ran xfs_repair, then
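The recovery sequence above can be sketched as follows. The device path comes from the subject line; the /pub mountpoint is an assumption, and every command is echoed through a dry-run wrapper so the sketch is safe to execute as-is. Drop the echo to perform the real repair.

```shell
# Dry-run sketch of the XFS recovery steps described above (mountpoint is assumed).
DEV=/dev/mapper/vg_download-lv_www_pub
run() { echo "+ $*"; }     # dry-run wrapper: prints each command instead of executing it
run umount /pub            # unmount the filesystem throwing I/O errors
run mount "$DEV" /pub      # remount so XFS replays its journal
run umount /pub            # unmount again; xfs_repair requires an unmounted filesystem
run xfs_repair "$DEV"      # repair the filesystem
```

Replaying the journal via a mount/umount cycle first matters: xfs_repair refuses to touch a filesystem with a dirty log unless forced, and forcing it can lose recent metadata updates.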