Re: [Gluster-users] (PLEASE UNDERSTAND our concern as TOP PRIORITY) : Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy

2019-02-13 Thread Shaik Salam
Hi John/Michael

Could you please at least provide a workaround for this issue?
We have been stuck for the last two months and are unable to use our setup.
We have tried the following:

1. Restarting the heketi pod.
2. Exporting the pending operations from the heketi DB, clearing them, and importing
the DB back, but the same issue remains when I try to create even a single volume
at a time (a sketch of the procedure is below).
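
For reference, a rough sketch of the export/clear/import procedure from step 2
(this assumes heketi's offline `heketi db export` / `heketi db import` subcommands
are available in your heketi version; the DB path and the deployment name
"dc/heketi-storage" are placeholders that may differ in your setup):

    # Scale heketi down so nothing writes to the DB while it is edited
    oc scale dc/heketi-storage --replicas=0

    # Export the BoltDB database to JSON, manually remove the stale
    # pending-operation entries, then import it back
    # (run these wherever heketi.db is accessible, e.g. on the volume backing the heketi pod)
    heketi db export --dbfile /var/lib/heketi/heketi.db --jsonfile /tmp/heketi-db.json
    # (edit /tmp/heketi-db.json and clear the pending-operation entries by hand)
    heketi db import --jsonfile /tmp/heketi-db.json --dbfile /var/lib/heketi/heketi.db

    # Bring heketi back up
    oc scale dc/heketi-storage --replicas=1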


Please understand our concern and provide a workaround.

BR
Salam



From:   Shaik Salam/HYD/TCS
To: "John Mulligan" 
Cc: "gluster-users@gluster.org List" , 
"Michael Adam" , Madhu Rajanna 
Date:   01/25/2019 04:03 PM
Subject:Re: [Gluster-users] Failed to provision volume with 
StorageClass "glusterfs-storage": glusterfs: server busy


Hi John,

Could you please have a look at my issue if you have time (or at least provide
a workaround)?
Thanks in advance.

BR
Salam




From:   "Shaik Salam" 
To: 
Cc: "gluster-users@gluster.org List" , 
"Michael Adam" 
Date:   01/25/2019 02:55 PM
Subject:Re: [Gluster-users] Failed to provision volume with 
StorageClass "glusterfs-storage": glusterfs: server busy
Sent by:gluster-users-boun...@gluster.org



"External email. Open with Caution"
Hi John, 

Please find the DB dump and heketi log attached. The kernel version is below. Please let me 
know if you need more information. 

[root@app2 ~]# uname -a 
Linux app2.matrix.nokia.com 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 
16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux 

Hardware: HP GEN8 

OS: NAME="CentOS Linux" 
VERSION="7 (Core)" 
ID="centos" 
ID_LIKE="rhel fedora" 
VERSION_ID="7" 
PRETTY_NAME="CentOS Linux 7 (Core)" 
ANSI_COLOR="0;31" 
CPE_NAME="cpe:/o:centos:centos:7" 
HOME_URL="https://www.centos.org/"; 
BUG_REPORT_URL="https://bugs.centos.org/"; 

CENTOS_MANTISBT_PROJECT="CentOS-7" 
CENTOS_MANTISBT_PROJECT_VERSION="7" 
REDHAT_SUPPORT_PRODUCT="centos" 
REDHAT_SUPPORT_PRODUCT_VERSION="7" 

  




From:"Madhu Rajanna"  
To:"Shaik Salam" , "John Mulligan" 
 
Cc:"gluster-users@gluster.org List" , 
"Michael Adam"  
Date:01/24/2019 10:52 PM 
Subject:Re: Failed to provision volume with StorageClass 
"glusterfs-storage": glusterfs: server busy 



"External email. Open with Caution" 
Adding John, who has a better idea of how to debug this one. 

@Shaik Salam, can you share some more info on the hardware on which you are 
running heketi (kernel details)? 

On Thu, Jan 24, 2019 at 7:42 PM Shaik Salam  wrote: 
Hi Madhu, 

Sorry to disturb you, but could you please at least provide a workaround (to clear 
the requests which are stuck) so we can move forward? 
We are also not able to find the root cause from the glusterd logs. Please find the 
attachment. 

BR 
Salam 

  



From:Shaik Salam/HYD/TCS 
To:"Madhu Rajanna"  
Cc:"gluster-users@gluster.org List" , 
"Michael Adam"  
Date:01/24/2019 04:12 PM 
Subject:Re: Failed to provision volume with StorageClass 
"glusterfs-storage": glusterfs: server busy 


Hi Madhu, 

Please let me know if any other information is required. 

BR 
Salam 




From:Shaik Salam/HYD/TCS 
To:"Madhu Rajanna"  
Cc:"gluster-users@gluster.org List" , 
"Michael Adam"  
Date:01/24/2019 03:23 PM 
Subject:Re: Failed to provision volume with StorageClass 
"glusterfs-storage": glusterfs: server busy 


Hi Madhu, 

This is the complete heketi pod log and process listing, taken after restarting the heketi pod. 

BR 
Salam 

[attachment "heketi-pod-complete.log" deleted by Shaik Salam/HYD/TCS] 
[attachment "ps-aux.txt" deleted by Shaik Salam/HYD/TCS] 




From:"Madhu Rajanna"  
To:"Shaik Salam"  
Cc:"gluster-users@gluster.org List" , 
"Michael Adam"  
Date:01/24/2019 01:55 PM 
Subject:Re: Failed to provision volume with StorageClass 
"glusterfs-storage": glusterfs: server busy 



"External email. Open with Caution" 
The logs you provided are not complete, so I am not able to figure out which 
command is stuck. Can you reattach the complete output of `ps aux` and 
also attach the complete heketi logs? 

On Thu, Jan 24, 2019 at 1:41 PM Shaik Salam  wrote: 
Hi Madhu, 

Please find requested info. 

BR 
Salam 

  



From:Madhu Rajanna  
To:Shaik Salam  
Cc:"gluster-users@gluster.org List" , 
Michael Adam  
Date:01/24/2019 01:33 PM 
Subject:Re: Failed to provision volume with StorageClass 
"glusterfs-storage": glusterfs: server busy 



"External email. Open with Caution" 
The heketi logs you have attached are not complete, I believe; can you 
provide the complete heketi logs? 
Also, can we get the output of "ps aux" from the gluster pods? I want 
to see if any lvm or gluster commands are "stuck". 
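
A quick way to gather this (a sketch; the pod names are placeholders, and it
assumes an OpenShift environment with `oc`; use `kubectl exec` otherwise):

    # Capture the full process list from each gluster pod
    oc exec <gluster-pod> -- ps aux > ps-aux-<gluster-pod>.txt

    # Look for lvm or gluster commands that have been running for a long time
    grep -E 'lvcreate|lvremove|pvs|vgs|gluster' ps-aux-<gluster-pod>.txt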


On Thu, Jan 24, 2019 at 1:16 PM Shaik Salam  wrote: 
Hi Madhu. 

I have restarted the heketi pod many times, but the issue is not resolved. 

sh-4.4# heketi-cli server operations info 
Operation Counts: 
  Total: 0 
  In-Flight: 0 
  New: 0 
  Stale: 0 

As you can see, all operation counts are now zero. When I then try to create a single 
volume, the in-flight count slowly climbs to 8, as shown below. 

sh-4.4# heketi-cli server operati
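
A rough way to reproduce and watch this behaviour (a sketch; the volume size is
arbitrary, and if `watch` is not available in the container a simple shell loop
with `sleep` works just as well):

    # In one shell inside the heketi pod: poll the operation counters
    watch -n 5 heketi-cli server operations info

    # In another shell: request a single small volume and note how In-Flight changes
    heketi-cli volume create --size=1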

[Gluster-users] Gluster Container Storage: Release Update

2019-02-13 Thread Amar Tumballi Suryanarayan
Hello everyone,

We are announcing the v1.0RC release of GCS (Gluster Container Storage) this week!**

Version 1.0 is due along with *glusterfs-6.0* next month. Below are the
goals for v1.0:

   - RWX PVs - Scale and Performance
   - RWO PVs - Simple, leaner stack with Gluster’s Virtual Block.
   - Thin Arbiter (2 DataCenter Replicate) Support for RWX volume.
  - Using the Thin Arbiter volume type for the RWO hosting volume will still
  be in Alpha.
   - Integrated monitoring.
   - Simple Install / Overall user-experience.

Along with the above, support for GCS on the ARM architecture is in Alpha.
We are also trying to get the website done for GCS at
https://gluster.github.io/gcs

We are looking for some validation of the GCS containers, and the overall
gluster stack, in your k8s setup.

While we are focusing on stability and a better user experience, we are
also trying to ship a few tech-preview items for early preview. The main
item here is loopback-based bricks (
https://github.com/gluster/glusterd2/pull/1473), which allow us to bring
more data services on top of Gluster, with more options in the container
world, especially for backup and recovery.

The above feature also enables a better snapshot/clone story for Gluster in
containers, using reflink support on XFS. *(NOTE: this will be a future
improvement.)*
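
To illustrate the general idea, here is a sketch of loopback-backed storage with
XFS reflinks using plain tooling (paths and sizes are placeholders; this is not
the glusterd2 implementation itself):

    # Back a brick with a sparse file attached as a loop device
    truncate -s 100G /srv/gluster/brick1.img
    LOOPDEV=$(losetup --find --show /srv/gluster/brick1.img)

    # Format with reflink support so snapshots/clones can share blocks (copy-on-write)
    mkfs.xfs -m reflink=1 "$LOOPDEV"
    mkdir -p /bricks/brick1
    mount "$LOOPDEV" /bricks/brick1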

This email is a request for help with regard to testing and feedback on
this new stack, in its alpha release tag. Do let us know if there are any
concerns. We are ready to take anything from ‘This is BS!!’ to ‘Wow! this
looks really simple, works without hassle’

[image: :smile:]

Btw, if you are interested in trying it out or helping, a few things to note:

   - GCS uses CSI spec v1.0, which is only available from k8s 1.13+
   - We do have weekly meetings on GCS as announced in
   https://lists.gluster.org/pipermail/gluster-devel/2019-January/055774.html -
   Feel free to jump in if interested.
  - i.e., every Thursday, 15:00 UTC.
   - GCS doesn’t have any operator support yet, but for simplicity, you can
   also try using https://github.com/aravindavk/kubectl-gluster
  - Planned to be integrated in later versions.
   - We are not great at creating cool websites; help in making the GCS
   homepage would be great too :-)

Interested? Feel free to jump into the architecture call today.

Regards,
Gluster Container Storage Team

PS: The meeting minutes where the release pointers were discussed are at
https://hackmd.io/sj9ik9SCTYm81YcQDOOrtw?both

** - subject to resolving some blockers @
https://waffle.io/gluster/gcs?label=GCS%2F1.0

-- 
Amar Tumballi (amarts)
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Files on Brick not showing up in ls command

2019-02-13 Thread Nithya Balachandran
Let me know if you still see problems.

Thanks,
Nithya

On Thu, 14 Feb 2019 at 09:05, Patrick Nixon  wrote:

> Thanks for the follow-up. After reviewing the logs Vijay mentioned,
> nothing useful was found.
>
> I removed and wiped the brick tonight. I'm in the process of
> rebalancing the new brick and will resync the files onto the full gluster
> volume when that completes.
>
> On Wed, Feb 13, 2019, 10:28 PM Nithya Balachandran 
> wrote:
>
>>
>>
>> On Tue, 12 Feb 2019 at 08:30, Patrick Nixon  wrote:
>>
>>> The files are being written via the glusterfs mount (and read on the
>>> same client and a different client). I try not to do anything on the nodes
>>> directly because I understand that can cause weirdness. As far as I can
>>> tell, there haven't been any network disconnections, but I'll review the
>>> client log to see if there is any indication. I don't recall any issues the
>>> last time I was in there.
>>>
>>>
>> If I understand correctly, the files are written to the volume from the
>> client, but when the same client tries to list them again, those entries
>> are not listed. Is that right?
>>
>> Do the files exist on the bricks?
>> Would you be willing to provide a tcpdump of the client when doing this?
>> If yes, please do the following:
>>
>> On the client system:
>>
>>- tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22
>>- Copy the files to the volume using the client
>>- List the contents of the directory in which the files should exist
>>- Stop the tcpdump capture and send it to us.
>>
>>
>> Also provide the name of the directory and the missing files.
>>
>> Regards,
>> NIthya
>>
>>
>>
>>
>>
>>> Thanks for the response!
>>>
>>> On Mon, Feb 11, 2019 at 7:35 PM Vijay Bellur  wrote:
>>>


 On Sun, Feb 10, 2019 at 5:20 PM Patrick Nixon  wrote:

> Hello!
>
> I have an 8-node distribute volume setup. I have one node that
> accepts files and stores them on disk, but when doing an ls, none of the
> files on that specific node are being returned.
>
>  Can someone give some guidance on what should be the best place to
> start troubleshooting this?
>


 Are the files being written from a glusterfs mount? If so, it might be
 worth checking if the network connectivity is fine between the client (that
 does ls) and the server/brick that contains these files. You could look up
 the client log file to check if there are any messages related to
 rpc disconnections.

 Regards,
 Vijay


> # gluster volume info
>
> Volume Name: gfs
> Type: Distribute
> Volume ID: 44c8c4f1-2dfb-4c03-9bca-d1ae4f314a78
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 8
> Transport-type: tcp
> Bricks:
> Brick1: gfs01:/data/brick1/gv0
> Brick2: gfs02:/data/brick1/gv0
> Brick3: gfs03:/data/brick1/gv0
> Brick4: gfs05:/data/brick1/gv0
> Brick5: gfs06:/data/brick1/gv0
> Brick6: gfs07:/data/brick1/gv0
> Brick7: gfs08:/data/brick1/gv0
> Brick8: gfs04:/data/brick1/gv0
> Options Reconfigured:
> cluster.min-free-disk: 10%
> nfs.disable: on
> performance.readdir-ahead: on
>
> # gluster peer status
> Number of Peers: 7
> Hostname: gfs03
> Uuid: 4a2d4deb-f8dd-49fc-a2ab-74e39dc25e20
> State: Peer in Cluster (Connected)
> Hostname: gfs08
> Uuid: 17705b3a-ed6f-4123-8e2e-4dc5ab6d807d
> State: Peer in Cluster (Connected)
> Hostname: gfs07
> Uuid: dd699f55-1a27-4e51-b864-b4600d630732
> State: Peer in Cluster (Connected)
> Hostname: gfs06
> Uuid: 8eb2a965-2c1e-4a64-b5b5-b7b7136ddede
> State: Peer in Cluster (Connected)
> Hostname: gfs04
> Uuid: cd866191-f767-40d0-bf7b-81ca0bc032b7
> State: Peer in Cluster (Connected)
> Hostname: gfs02
> Uuid: 6864c6ac-6ff4-423a-ae3c-f5fd25621851
> State: Peer in Cluster (Connected)
> Hostname: gfs05
> Uuid: dcecb55a-87b8-4441-ab09-b52e485e5f62
> State: Peer in Cluster (Connected)
>
> All gluster nodes are running glusterfs 4.0.2
> The clients accessing the files are also running glusterfs 4.0.2
> Both are Ubuntu
>
> Thanks!
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users

 ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Files on Brick not showing up in ls command

2019-02-13 Thread Patrick Nixon
Thanks for the follow-up. After reviewing the logs Vijay mentioned,
nothing useful was found.

I removed and wiped the brick tonight. I'm in the process of
rebalancing the new brick and will resync the files onto the full gluster
volume when that completes.

On Wed, Feb 13, 2019, 10:28 PM Nithya Balachandran 
wrote:

>
>
> On Tue, 12 Feb 2019 at 08:30, Patrick Nixon  wrote:
>
>> The files are being written via the glusterfs mount (and read on the
>> same client and a different client). I try not to do anything on the nodes
>> directly because I understand that can cause weirdness. As far as I can
>> tell, there haven't been any network disconnections, but I'll review the
>> client log to see if there is any indication. I don't recall any issues the
>> last time I was in there.
>>
>>
> If I understand correctly, the files are written to the volume from the
> client, but when the same client tries to list them again, those entries
> are not listed. Is that right?
>
> Do the files exist on the bricks?
> Would you be willing to provide a tcpdump of the client when doing this?
> If yes, please do the following:
>
> On the client system:
>
>- tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22
>- Copy the files to the volume using the client
>- List the contents of the directory in which the files should exist
>- Stop the tcpdump capture and send it to us.
>
>
> Also provide the name of the directory and the missing files.
>
> Regards,
> NIthya
>
>
>
>
>
>> Thanks for the response!
>>
>> On Mon, Feb 11, 2019 at 7:35 PM Vijay Bellur  wrote:
>>
>>>
>>>
>>> On Sun, Feb 10, 2019 at 5:20 PM Patrick Nixon  wrote:
>>>
 Hello!

 I have an 8-node distribute volume setup. I have one node that accepts
 files and stores them on disk, but when doing an ls, none of the files on
 that specific node are being returned.

  Can someone give some guidance on what should be the best place to
 start troubleshooting this?

>>>
>>>
>>> Are the files being written from a glusterfs mount? If so, it might be
>>> worth checking if the network connectivity is fine between the client (that
>>> does ls) and the server/brick that contains these files. You could look up
>>> the client log file to check if there are any messages related to
>>> rpc disconnections.
>>>
>>> Regards,
>>> Vijay
>>>
>>>
 # gluster volume info

 Volume Name: gfs
 Type: Distribute
 Volume ID: 44c8c4f1-2dfb-4c03-9bca-d1ae4f314a78
 Status: Started
 Snapshot Count: 0
 Number of Bricks: 8
 Transport-type: tcp
 Bricks:
 Brick1: gfs01:/data/brick1/gv0
 Brick2: gfs02:/data/brick1/gv0
 Brick3: gfs03:/data/brick1/gv0
 Brick4: gfs05:/data/brick1/gv0
 Brick5: gfs06:/data/brick1/gv0
 Brick6: gfs07:/data/brick1/gv0
 Brick7: gfs08:/data/brick1/gv0
 Brick8: gfs04:/data/brick1/gv0
 Options Reconfigured:
 cluster.min-free-disk: 10%
 nfs.disable: on
 performance.readdir-ahead: on

 # gluster peer status
 Number of Peers: 7
 Hostname: gfs03
 Uuid: 4a2d4deb-f8dd-49fc-a2ab-74e39dc25e20
 State: Peer in Cluster (Connected)
 Hostname: gfs08
 Uuid: 17705b3a-ed6f-4123-8e2e-4dc5ab6d807d
 State: Peer in Cluster (Connected)
 Hostname: gfs07
 Uuid: dd699f55-1a27-4e51-b864-b4600d630732
 State: Peer in Cluster (Connected)
 Hostname: gfs06
 Uuid: 8eb2a965-2c1e-4a64-b5b5-b7b7136ddede
 State: Peer in Cluster (Connected)
 Hostname: gfs04
 Uuid: cd866191-f767-40d0-bf7b-81ca0bc032b7
 State: Peer in Cluster (Connected)
 Hostname: gfs02
 Uuid: 6864c6ac-6ff4-423a-ae3c-f5fd25621851
 State: Peer in Cluster (Connected)
 Hostname: gfs05
 Uuid: dcecb55a-87b8-4441-ab09-b52e485e5f62
 State: Peer in Cluster (Connected)

 All gluster nodes are running glusterfs 4.0.2
 The clients accessing the files are also running glusterfs 4.0.2
 Both are Ubuntu

 Thanks!
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 https://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Files on Brick not showing up in ls command

2019-02-13 Thread Nithya Balachandran
On Tue, 12 Feb 2019 at 08:30, Patrick Nixon  wrote:

> The files are being written via the glusterfs mount (and read on the
> same client and a different client). I try not to do anything on the nodes
> directly because I understand that can cause weirdness. As far as I can
> tell, there haven't been any network disconnections, but I'll review the
> client log to see if there is any indication. I don't recall any issues the
> last time I was in there.
>
>
If I understand correctly, the files are written to the volume from the
client, but when the same client tries to list them again, those entries
are not listed. Is that right?

Do the files exist on the bricks?
Would you be willing to provide a tcpdump of the client when doing this? If
yes, please do the following:

On the client system:

   - tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22
   - Copy the files to the volume using the client
   - List the contents of the directory in which the files should exist
   - Stop the tcpdump capture and send it to us.


Also provide the name of the directory and the missing files.
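
As a concrete sketch of the steps above (the mount point, directory, and file
names are placeholders for your environment):

    # 1. Start the capture on the client, skipping ssh traffic on port 22
    tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22 &

    # 2. Copy the files via the glusterfs mount and list the target directory
    cp /tmp/testfile-* /mnt/gfs/testdir/
    ls -l /mnt/gfs/testdir/

    # 3. Stop the capture and send us /var/tmp/dirls.pcap
    kill %1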

Regards,
NIthya





> Thanks for the response!
>
> On Mon, Feb 11, 2019 at 7:35 PM Vijay Bellur  wrote:
>
>>
>>
>> On Sun, Feb 10, 2019 at 5:20 PM Patrick Nixon  wrote:
>>
>>> Hello!
>>>
>>> I have an 8-node distribute volume setup. I have one node that accepts
>>> files and stores them on disk, but when doing an ls, none of the files on
>>> that specific node are being returned.
>>>
>>>  Can someone give some guidance on what should be the best place to
>>> start troubleshooting this?
>>>
>>
>>
>> Are the files being written from a glusterfs mount? If so, it might be
>> worth checking if the network connectivity is fine between the client (that
>> does ls) and the server/brick that contains these files. You could look up
>> the client log file to check if there are any messages related to
>> rpc disconnections.
>>
>> Regards,
>> Vijay
>>
>>
>>> # gluster volume info
>>>
>>> Volume Name: gfs
>>> Type: Distribute
>>> Volume ID: 44c8c4f1-2dfb-4c03-9bca-d1ae4f314a78
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 8
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: gfs01:/data/brick1/gv0
>>> Brick2: gfs02:/data/brick1/gv0
>>> Brick3: gfs03:/data/brick1/gv0
>>> Brick4: gfs05:/data/brick1/gv0
>>> Brick5: gfs06:/data/brick1/gv0
>>> Brick6: gfs07:/data/brick1/gv0
>>> Brick7: gfs08:/data/brick1/gv0
>>> Brick8: gfs04:/data/brick1/gv0
>>> Options Reconfigured:
>>> cluster.min-free-disk: 10%
>>> nfs.disable: on
>>> performance.readdir-ahead: on
>>>
>>> # gluster peer status
>>> Number of Peers: 7
>>> Hostname: gfs03
>>> Uuid: 4a2d4deb-f8dd-49fc-a2ab-74e39dc25e20
>>> State: Peer in Cluster (Connected)
>>> Hostname: gfs08
>>> Uuid: 17705b3a-ed6f-4123-8e2e-4dc5ab6d807d
>>> State: Peer in Cluster (Connected)
>>> Hostname: gfs07
>>> Uuid: dd699f55-1a27-4e51-b864-b4600d630732
>>> State: Peer in Cluster (Connected)
>>> Hostname: gfs06
>>> Uuid: 8eb2a965-2c1e-4a64-b5b5-b7b7136ddede
>>> State: Peer in Cluster (Connected)
>>> Hostname: gfs04
>>> Uuid: cd866191-f767-40d0-bf7b-81ca0bc032b7
>>> State: Peer in Cluster (Connected)
>>> Hostname: gfs02
>>> Uuid: 6864c6ac-6ff4-423a-ae3c-f5fd25621851
>>> State: Peer in Cluster (Connected)
>>> Hostname: gfs05
>>> Uuid: dcecb55a-87b8-4441-ab09-b52e485e5f62
>>> State: Peer in Cluster (Connected)
>>>
>>> All gluster nodes are running glusterfs 4.0.2
>>> The clients accessing the files are also running glusterfs 4.0.2
>>> Both are Ubuntu
>>>
>>> Thanks!
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-13 Thread Darrell Budic
Ah, ok, that’s what I thought. Then I have no complaints about improved 
defaults for the fuse case as long as the use case groups retain appropriately 
optimized settings. Thanks!

> On Feb 12, 2019, at 11:14 PM, Raghavendra Gowdappa  
> wrote:
> 
> 
> 
> On Tue, Feb 12, 2019 at 11:09 PM Darrell Budic  > wrote:
> Is there an example of a custom profile you can share for my ovirt use case 
> (with gfapi enabled)?
> 
> I was speaking about a group setting like "group metadata-cache". It's just 
> the set of custom options one would turn on for a class of applications or problems.
> 
> Or are you just talking about the standard group settings for virt as a 
> custom profile?
> 
>> On Feb 12, 2019, at 7:22 AM, Raghavendra Gowdappa > > wrote:
>> 
>> https://review.gluster.org/22203 
>> 
>> On Tue, Feb 12, 2019 at 5:38 PM Raghavendra Gowdappa > > wrote:
>> All,
>> 
>> We've found that the perf xlators io-cache and read-ahead do not add any performance 
>> improvement. At best, read-ahead is redundant due to kernel read-ahead, and at 
>> worst, io-cache degrades performance for workloads that don't 
>> involve re-reads. Given that the VFS already has both of these functionalities, I 
>> am proposing to have these two translators turned off by default for native 
>> fuse mounts.
>> 
>> For non-fuse access such as gfapi (NFS-Ganesha/Samba), we can keep these 
>> xlators on by having custom profiles. Comments?
>> 
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1665029 
>> 
>> 
>> regards,
>> Raghavendra
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org 
>> https://lists.gluster.org/mailman/listinfo/gluster-users 
>> 
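
For anyone who wants to try the proposed behaviour ahead of a release, both
translators can already be disabled per volume, and group profiles can be applied
the same way (a sketch; "myvol" is a placeholder volume name):

    # Disable client-side read-ahead and io-cache for one volume
    gluster volume set myvol performance.read-ahead off
    gluster volume set myvol performance.io-cache off

    # Verify the effective values
    gluster volume get myvol performance.read-ahead
    gluster volume get myvol performance.io-cache

    # Applying a predefined group profile (as with "group metadata-cache")
    gluster volume set myvol group metadata-cache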

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users