Re: [Gluster-devel] GlusterD2 - 4.0.0rc1 (warning: we have a blocker for GD2)

2018-02-28 Thread Kaushal M
On Wed, Feb 28, 2018 at 9:50 PM, Kaleb S. KEITHLEY  wrote:
> On 02/28/2018 10:49 AM, Kaushal M wrote:
>> We have a GlusterD2-4.0.0rc1 release.
>>
>> Aravinda, Prashanth and the rest of the GD2 developers have been
>> working hard on getting more stuff merged into GD2 before the 4.0
>> release.
>>
>> At the same time I have been working on getting GD2 packaged for Fedora.
>> I've been able to get all the required dependencies updated and have
>> submitted to the package maintainer for merging.
>> I'm now waiting on the maintainer to accept those updates. Once the
>> updates have been accepted, the GD2 spec can get accepted [2].
>> I expect this to take at least another week on the whole.
>>
>> In the meantime, I've been building all the updated dependencies and
>> glusterd2-v4.0.0rc1, on the GD2 copr [3].
>>
>> I tried to test out the GD2 package with the GlusterFS v4.0.0rc1
>> release from [4]. And this is where I hit the blocker.
>>
>> GD2 does not start with the packaged glusterfs-v4.0.0rc1 bits. I've
>> opened an issue on the GD2 issue tracker for it [5].
>> In short, GD2 fails to read options from xlators, as dlopen fails with
>> a missing symbol error.
>>
>> ```
>> FATA[2018-02-28 15:02:53.345686] Failed to load xlator options
>> 
>> error="dlopen(/usr/lib64/glusterfs/4.0.0rc1/xlator/protocol/server.so)
>> failed; dlerror =
>> /usr/lib64/glusterfs/4.0.0rc1/xlator/protocol/server.so: undefined
>> symbol: glusterfs_mgmt_pmap_signout" source="[main.go:79:main.main]"
>> ```
>
>
> see https://review.gluster.org/#/c/19225/
>
>
> glusterfs_mgmt_pmap_signout() is in glusterfsd. When glusterfsd dlopens
> server.so the run-time linker can resolve the symbol — for now.
>
> Tighter run-time linker semantics coming in, e.g., Fedora 28, mean this
> will stop working in the near future even when RTLD_LAZY is passed as a
> flag. (As I understand the proposed changes.)
>
> It should still work, e.g., on Fedora 27 and el7 though.
>
> glusterfs_mgmt_pmap_signout() (and glusterfs_autoscale_threads()) really
> need to be moved to libglusterfs. ASAP. Doing that will resolve this issue.

Thanks for the pointer Kaleb!

But I'm testing on Fedora 27, where this theoretically shouldn't happen.
So why am I hitting it? Is it something to do with the way the packages
are built, or is there some runtime linker configuration that has been
set up?

In any case, we should push to get the offending functions moved into
libglusterfs. That should solve the problem for us.

>
> --
>
> Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] two potential memory leak place found on glusterfs 3.12.3

2018-02-28 Thread Storage, Dev (Nokia - Global)
Hi glusterfs experts,
   Good day!
   During our recent tests we found that after executing some glusterfs
commands there is an obvious memory leak in the glusterd process, visible
when we compare statedumps of glusterd taken before and after the command
executions.


1>   Each run of the "gluster volume list" command loses some memory
from the [mgmt/glusterd.management - usage-type gf_common_mt_char
memusage] section of the glusterd statedump. After investigation, we found
that in __glusterd_handle_cli_list_volume, rsp.dict.dict_val should be
freed after glusterd_submit_reply.

2>   Each run of the "gluster volume status " command loses some memory
from the [mgmt/glusterd.management - usage-type gf_common_mt_strdup
memusage] section of the glusterd statedump. After investigation, we found
that in glusterd_mgmt_v3_unlock, the "data" pointer in
mgmt_lock_timer->timer should be freed before gf_timer_call_cancel.

Could you help to comment on the above two findings? Thanks!

Re: [Gluster-devel] two potential memory leak place found on glusterfs 3.12.3

2018-02-28 Thread Storage, Dev (Nokia - Global)
Hi,

I think I have found another suspicious memory leak.
The memory returned by vasprintf is not freed with GF_FREE; instead it is
freed with FREE or free. This makes the memory accounting inaccurate, and
may also lose some memory.
There are many such log interfaces in logging.c.
Does anyone know why this memory is not freed with GF_FREE?
I see the following explanation in _gf_log_eh:
/* Use FREE instead of GF_FREE since str2 was allocated by vasprintf */
but why?

From: Atin Mukherjee [mailto:amukh...@redhat.com]
Sent: Tuesday, February 27, 2018 10:37 AM
To: Raghavendra Gowdappa ; Gaurav Yadav 
Cc: Storage, Dev (Nokia - Global) ; Csaba Henk 
; gluster-devel@gluster.org; Madappa, Kaushal 

Subject: Re: two potential memory leak place found on glusterfs 3.12.3

+Gaurav

On Mon, Feb 26, 2018 at 2:02 PM, Raghavendra Gowdappa 
> wrote:
+glusterd devs

On Mon, Feb 26, 2018 at 1:41 PM, Storage, Dev (Nokia - Global) 
> wrote:
Hi glusterfs experts,
   Good day!
   During our recent tests we found that after executing some glusterfs
commands there is an obvious memory leak in the glusterd process, visible
when we compare statedumps of glusterd taken before and after the command
executions.


1>   Each run of the “gluster volume list” command loses some memory
from the [mgmt/glusterd.management - usage-type gf_common_mt_char
memusage] section of the glusterd statedump. After investigation, we found
that in __glusterd_handle_cli_list_volume, rsp.dict.dict_val should be
freed after glusterd_submit_reply.
 This needs to be looked at.

2>   Each run of the “gluster volume status ” command loses some memory
from the [mgmt/glusterd.management - usage-type gf_common_mt_strdup
memusage] section of the glusterd statedump. After investigation, we found
that in glusterd_mgmt_v3_unlock, the “data” pointer in
mgmt_lock_timer->timer should be freed before gf_timer_call_cancel.

Gaurav is working on a patch where this has been already identified. So the 
analysis on point 2 seems to be correct.


Could you help to comment on the above two findings? Thanks!



Re: [Gluster-devel] GlusterD2 - 4.0.0rc1 (warning: we have a blocker for GD2)

2018-02-28 Thread Kaleb S. KEITHLEY
On 02/28/2018 10:49 AM, Kaushal M wrote:
> We have a GlusterD2-4.0.0rc1 release.
> 
> Aravinda, Prashanth and the rest of the GD2 developers have been
> working hard on getting more stuff merged into GD2 before the 4.0
> release.
> 
> At the same time I have been working on getting GD2 packaged for Fedora.
> I've been able to get all the required dependencies updated and have
> submitted to the package maintainer for merging.
> I'm now waiting on the maintainer to accept those updates. Once the
> updates have been accepted, the GD2 spec can get accepted [2].
> I expect this to take at least another week on the whole.
> 
> In the meantime, I've been building all the updated dependencies and
> glusterd2-v4.0.0rc1, on the GD2 copr [3].
> 
> I tried to test out the GD2 package with the GlusterFS v4.0.0rc1
> release from [4]. And this is where I hit the blocker.
> 
> GD2 does not start with the packaged glusterfs-v4.0.0rc1 bits. I've
> opened an issue on the GD2 issue tracker for it [5].
> In short, GD2 fails to read options from xlators, as dlopen fails with
> a missing symbol error.
> 
> ```
> FATA[2018-02-28 15:02:53.345686] Failed to load xlator options
> error="dlopen(/usr/lib64/glusterfs/4.0.0rc1/xlator/protocol/server.so)
> failed; dlerror =
> /usr/lib64/glusterfs/4.0.0rc1/xlator/protocol/server.so: undefined
> symbol: glusterfs_mgmt_pmap_signout" source="[main.go:79:main.main]"
> ```


see https://review.gluster.org/#/c/19225/


glusterfs_mgmt_pmap_signout() is in glusterfsd. When glusterfsd dlopens
server.so the run-time linker can resolve the symbol — for now.

Tighter run-time linker semantics coming in, e.g., Fedora 28, mean this
will stop working in the near future even when RTLD_LAZY is passed as a
flag. (As I understand the proposed changes.)

It should still work, e.g., on Fedora 27 and el7 though.

glusterfs_mgmt_pmap_signout() (and glusterfs_autoscale_threads()) really
need to be moved to libglusterfs. ASAP. Doing that will resolve this issue.

-- 

Kaleb

[Gluster-devel] GlusterD2 - 4.0.0rc1 (warning: we have a blocker for GD2)

2018-02-28 Thread Kaushal M
We have a GlusterD2-4.0.0rc1 release.

Aravinda, Prashanth and the rest of the GD2 developers have been
working hard on getting more stuff merged into GD2 before the 4.0
release.

At the same time I have been working on getting GD2 packaged for Fedora.
I've been able to get all the required dependencies updated and have
submitted to the package maintainer for merging.
I'm now waiting on the maintainer to accept those updates. Once the
updates have been accepted, the GD2 spec can get accepted [2].
I expect this to take at least another week on the whole.

In the meantime, I've been building all the updated dependencies and
glusterd2-v4.0.0rc1, on the GD2 copr [3].

I tried to test out the GD2 package with the GlusterFS v4.0.0rc1
release from [4]. And this is where I hit the blocker.

GD2 does not start with the packaged glusterfs-v4.0.0rc1 bits. I've
opened an issue on the GD2 issue tracker for it [5].
In short, GD2 fails to read options from xlators, as dlopen fails with
a missing symbol error.

```
FATA[2018-02-28 15:02:53.345686] Failed to load xlator options
error="dlopen(/usr/lib64/glusterfs/4.0.0rc1/xlator/protocol/server.so)
failed; dlerror =
/usr/lib64/glusterfs/4.0.0rc1/xlator/protocol/server.so: undefined
symbol: glusterfs_mgmt_pmap_signout" source="[main.go:79:main.main]"

```

My preliminary investigation points to the problem being in the
glusterfs packages.
An externally built GD2 binary that works against a source install of
glusterfs-v4.0.0rc1, fails with the same error with the packaged
install.
I'll continue investigating and try to find the cause. Any help with
this is appreciated.

~kaushal

[1]: https://github.com/gluster/glusterd2/releases/tag/v4.0.0rc1
[2]: https://bugzilla.redhat.com/1540553
[3]: https://copr.fedorainfracloud.org/coprs/kshlm/glusterd2/
[4]: 
https://download.gluster.org/pub/gluster/glusterfs/qa-releases/4.0rc1/Fedora/fedora-27/x86_64/


[Gluster-devel] Continuous tests failure on Fedora RPM builds

2018-02-28 Thread Amar Tumballi
Looks like the tests here are continuously failing:
https://build.gluster.org/job/devrpm-fedora/

It would be great if someone takes a look at it.

-- 
Amar Tumballi (amarts)

[Gluster-devel] Announcing Softserve- serve yourself a VM

2018-02-28 Thread Deepshikha Khandelwal
Hi,

We have launched the alpha version of SOFTSERVE[1], which allows Gluster
Github organization members to provision virtual machines for a specified
duration of time. These machines are deleted automatically afterwards.

Now you don’t need to file a bug to get a VM. It’s just a form away, with a
dashboard to monitor the machines.

Once a machine is up, you can access it via SSH and run your debugging
(test regression runs).

We’ve enabled certain limits for this application:

   1. A maximum of 5 VMs at a time across all users; once 5 machines are
      allocated, users have to wait until a slot frees up.
   2. A requested machine is available for a maximum of 4 hours.
   3. Access is restricted to Gluster organization members.

These limits may be relaxed in the near future. This service is ready to
use, and if you find any problems, feel free to file an issue on the github
repository[2].

[1]https://softserve.gluster.org

[2]https://github.com/gluster/softserve


Thanks,

Deepshikha

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.0: RC1 tagged

2018-02-28 Thread Pranith Kumar Karampuri
I found the following memory leak present in 3.13, 4.0 and master:
https://bugzilla.redhat.com/show_bug.cgi?id=1550078

I will clone/port to 4.0 as soon as the patch is merged.

On Wed, Feb 28, 2018 at 5:55 PM, Javier Romero  wrote:

> Hi all,
>
> Have tested on CentOS Linux release 7.4.1708 (Core) with Kernel
> 3.10.0-693.17.1.el7.x86_64
>
> This package works ok
> http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-
> gluster40-0.9-1.el7.centos.x86_64.rpm
>
> # yum install http://cbs.centos.org/kojifiles/work/tasks/1548/
> 311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm
> # yum install glusterfs-server
> # systemctl start glusterd
> # systemctl status glusterd
> ● glusterd.service - GlusterFS, a clustered file-system server
>Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled;
> vendor preset: disabled)
>Active: active (running) since Wed 2018-02-28 09:18:46 -03; 53s ago
>  Main PID: 2070 (glusterd)
>CGroup: /system.slice/glusterd.service
>└─2070 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level
> INFO
>
> Feb 28 09:18:46 centos-7 systemd[1]: Starting GlusterFS, a clustered
> file-system server...
> Feb 28 09:18:46 centos-7 systemd[1]: Started GlusterFS, a clustered
> file-system server.
>
>
>
>
> This one fails http://cbs.centos.org/kojifiles/work/tasks/1548/
> 311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm
>
> # yum install -y
> https://buildlogs.centos.org/centos/7/storage/x86_64/
> gluster-4.0/glusterfs-4.0.0-0.1.rc1.el7.x86_64.rpm
> Loaded plugins: fastestmirror, langpacks
> glusterfs-4.0.0-0.1.rc1.el7.x86_64.rpm
>   | 571 kB  00:00:00
> Examining /var/tmp/yum-root-BIfa9_/glusterfs-4.0.0-0.1.rc1.el7.x86_64.rpm:
> glusterfs-4.0.0-0.1.rc1.el7.x86_64
> Marking /var/tmp/yum-root-BIfa9_/glusterfs-4.0.0-0.1.rc1.el7.x86_64.rpm
> as an update to glusterfs-3.12.6-1.el7.x86_64
> Resolving Dependencies
> --> Running transaction check
> ---> Package glusterfs.x86_64 0:3.12.6-1.el7 will be updated
> --> Processing Dependency: glusterfs = 3.12.6-1.el7 for package:
> glusterfs-server-3.12.6-1.el7.x86_64
> base
>   | 3.6 kB  00:00:00
> centos-gluster312
>   | 2.9 kB  00:00:00
> extras
>   | 3.4 kB  00:00:00
> purpleidea-vagrant-libvirt
>   | 3.0 kB  00:00:00
> updates
>   | 3.4 kB  00:00:00
> centos-gluster312/7/x86_64/primary_db
>   |  87 kB  00:00:00
> Loading mirror speeds from cached hostfile
>  * base: centos.xfree.com.ar
>  * extras: centos.xfree.com.ar
>  * updates: centos.xfree.com.ar
> --> Processing Dependency: glusterfs = 3.12.6-1.el7 for package:
> glusterfs-api-3.12.6-1.el7.x86_64
> --> Processing Dependency: glusterfs = 3.12.6-1.el7 for package:
> glusterfs-fuse-3.12.6-1.el7.x86_64
> ---> Package glusterfs.x86_64 0:4.0.0-0.1.rc1.el7 will be an update
> --> Processing Dependency: glusterfs-libs = 4.0.0-0.1.rc1.el7 for
> package: glusterfs-4.0.0-0.1.rc1.el7.x86_64
> --> Finished Dependency Resolution
> Error: Package: glusterfs-server-3.12.6-1.el7.x86_64
> (@centos-gluster312-test)
>Requires: glusterfs = 3.12.6-1.el7
>Removing: glusterfs-3.12.6-1.el7.x86_64
> (@centos-gluster312-test)
>glusterfs = 3.12.6-1.el7
>Updated By: glusterfs-4.0.0-0.1.rc1.el7.x86_64
> (/glusterfs-4.0.0-0.1.rc1.el7.x86_64)
>glusterfs = 4.0.0-0.1.rc1.el7
>Available: glusterfs-3.8.4-18.4.el7.centos.x86_64 (base)
>glusterfs = 3.8.4-18.4.el7.centos
>Available: glusterfs-3.12.0-1.el7.x86_64 (centos-gluster312)
>glusterfs = 3.12.0-1.el7
>Available: glusterfs-3.12.1-1.el7.x86_64 (centos-gluster312)
>glusterfs = 3.12.1-1.el7
>Available: glusterfs-3.12.1-2.el7.x86_64 (centos-gluster312)
>glusterfs = 3.12.1-2.el7
>Available: glusterfs-3.12.3-1.el7.x86_64 (centos-gluster312)
>glusterfs = 3.12.3-1.el7
>Available: glusterfs-3.12.4-1.el7.x86_64 (centos-gluster312)
>glusterfs = 3.12.4-1.el7
>Available: glusterfs-3.12.5-2.el7.x86_64 (centos-gluster312)
>glusterfs = 3.12.5-2.el7
> Error: Package: glusterfs-api-3.12.6-1.el7.x86_64
> (@centos-gluster312-test)
>Requires: glusterfs = 3.12.6-1.el7
>Removing: glusterfs-3.12.6-1.el7.x86_64
> (@centos-gluster312-test)
>glusterfs = 3.12.6-1.el7
>Updated By: glusterfs-4.0.0-0.1.rc1.el7.x86_64
> (/glusterfs-4.0.0-0.1.rc1.el7.x86_64)
>glusterfs = 4.0.0-0.1.rc1.el7
>Available: glusterfs-3.8.4-18.4.el7.centos.x86_64 (base)
>glusterfs = 3.8.4-18.4.el7.centos
>

[Gluster-devel] Coverity covscan for 2018-02-28-e2766c32 (master branch)

2018-02-28 Thread staticanalysis
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-02-28-e2766c32


Re: [Gluster-devel] [Gluster-Maintainers] Release 4.0: RC1 tagged

2018-02-28 Thread Javier Romero
Hi all,

Have tested on CentOS Linux release 7.4.1708 (Core) with Kernel
3.10.0-693.17.1.el7.x86_64

This package works ok
http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm

# yum install 
http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm
# yum install glusterfs-server
# systemctl start glusterd
# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled;
vendor preset: disabled)
   Active: active (running) since Wed 2018-02-28 09:18:46 -03; 53s ago
 Main PID: 2070 (glusterd)
   CGroup: /system.slice/glusterd.service
   └─2070 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Feb 28 09:18:46 centos-7 systemd[1]: Starting GlusterFS, a clustered
file-system server...
Feb 28 09:18:46 centos-7 systemd[1]: Started GlusterFS, a clustered
file-system server.




This one fails 
http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm

# yum install -y
https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.0/glusterfs-4.0.0-0.1.rc1.el7.x86_64.rpm
Loaded plugins: fastestmirror, langpacks
glusterfs-4.0.0-0.1.rc1.el7.x86_64.rpm
  | 571 kB  00:00:00
Examining /var/tmp/yum-root-BIfa9_/glusterfs-4.0.0-0.1.rc1.el7.x86_64.rpm:
glusterfs-4.0.0-0.1.rc1.el7.x86_64
Marking /var/tmp/yum-root-BIfa9_/glusterfs-4.0.0-0.1.rc1.el7.x86_64.rpm
as an update to glusterfs-3.12.6-1.el7.x86_64
Resolving Dependencies
--> Running transaction check
---> Package glusterfs.x86_64 0:3.12.6-1.el7 will be updated
--> Processing Dependency: glusterfs = 3.12.6-1.el7 for package:
glusterfs-server-3.12.6-1.el7.x86_64
base
  | 3.6 kB  00:00:00
centos-gluster312
  | 2.9 kB  00:00:00
extras
  | 3.4 kB  00:00:00
purpleidea-vagrant-libvirt
  | 3.0 kB  00:00:00
updates
  | 3.4 kB  00:00:00
centos-gluster312/7/x86_64/primary_db
  |  87 kB  00:00:00
Loading mirror speeds from cached hostfile
 * base: centos.xfree.com.ar
 * extras: centos.xfree.com.ar
 * updates: centos.xfree.com.ar
--> Processing Dependency: glusterfs = 3.12.6-1.el7 for package:
glusterfs-api-3.12.6-1.el7.x86_64
--> Processing Dependency: glusterfs = 3.12.6-1.el7 for package:
glusterfs-fuse-3.12.6-1.el7.x86_64
---> Package glusterfs.x86_64 0:4.0.0-0.1.rc1.el7 will be an update
--> Processing Dependency: glusterfs-libs = 4.0.0-0.1.rc1.el7 for
package: glusterfs-4.0.0-0.1.rc1.el7.x86_64
--> Finished Dependency Resolution
Error: Package: glusterfs-server-3.12.6-1.el7.x86_64 (@centos-gluster312-test)
   Requires: glusterfs = 3.12.6-1.el7
   Removing: glusterfs-3.12.6-1.el7.x86_64 (@centos-gluster312-test)
   glusterfs = 3.12.6-1.el7
   Updated By: glusterfs-4.0.0-0.1.rc1.el7.x86_64
(/glusterfs-4.0.0-0.1.rc1.el7.x86_64)
   glusterfs = 4.0.0-0.1.rc1.el7
   Available: glusterfs-3.8.4-18.4.el7.centos.x86_64 (base)
   glusterfs = 3.8.4-18.4.el7.centos
   Available: glusterfs-3.12.0-1.el7.x86_64 (centos-gluster312)
   glusterfs = 3.12.0-1.el7
   Available: glusterfs-3.12.1-1.el7.x86_64 (centos-gluster312)
   glusterfs = 3.12.1-1.el7
   Available: glusterfs-3.12.1-2.el7.x86_64 (centos-gluster312)
   glusterfs = 3.12.1-2.el7
   Available: glusterfs-3.12.3-1.el7.x86_64 (centos-gluster312)
   glusterfs = 3.12.3-1.el7
   Available: glusterfs-3.12.4-1.el7.x86_64 (centos-gluster312)
   glusterfs = 3.12.4-1.el7
   Available: glusterfs-3.12.5-2.el7.x86_64 (centos-gluster312)
   glusterfs = 3.12.5-2.el7
Error: Package: glusterfs-api-3.12.6-1.el7.x86_64 (@centos-gluster312-test)
   Requires: glusterfs = 3.12.6-1.el7
   Removing: glusterfs-3.12.6-1.el7.x86_64 (@centos-gluster312-test)
   glusterfs = 3.12.6-1.el7
   Updated By: glusterfs-4.0.0-0.1.rc1.el7.x86_64
(/glusterfs-4.0.0-0.1.rc1.el7.x86_64)
   glusterfs = 4.0.0-0.1.rc1.el7
   Available: glusterfs-3.8.4-18.4.el7.centos.x86_64 (base)
   glusterfs = 3.8.4-18.4.el7.centos
   Available: glusterfs-3.12.0-1.el7.x86_64 (centos-gluster312)
   glusterfs = 3.12.0-1.el7
   Available: glusterfs-3.12.1-1.el7.x86_64 (centos-gluster312)
   glusterfs = 3.12.1-1.el7
   Available: glusterfs-3.12.1-2.el7.x86_64 (centos-gluster312)
   glusterfs = 3.12.1-2.el7
   Available: glusterfs-3.12.3-1.el7.x86_64 (centos-gluster312)
   glusterfs = 3.12.3-1.el7
   Available: