Re: [Gluster-users] Release 3.12.5: Scheduled for the 12th of January

2018-01-10 Thread Hans Henrik Happe
Hi,

I wonder how this procedure works. I could add a bug that I think is a
*blocker*, but there might not be consensus.

Cheers,
Hans Henrik

On 11-01-2018 07:02, Jiffin Tony Thottan wrote:
> Hi,
> 
> It's time to prepare the 3.12.5 release, which normally falls on the 10th of
> each month; this time around it is scheduled for 12-01-2018.
> 
> This mail is to call out the following,
> 
> 1) Are there any pending *blocker* bugs that need to be tracked for
> 3.12.5? If so, mark them against the provided tracker [1] as blockers
> for the release, or at the very least post them as a response to this
> mail.
> 
> 2) Pending reviews in the 3.12 dashboard will be part of the release,
> *iff* they pass regressions and have the review votes, so use the
> dashboard [2] to check on the status of your patches to 3.12 and get
> these going.
> 
> 3) I have checked what went into 3.10 after the 3.12 release and whether
> those fixes are already included in the 3.12 branch. The status on this is
> *green*, as all fixes ported to 3.10 have been ported to 3.12 as well.
> 
> Thanks,
> Jiffin
> 
> [1] Release bug tracker:
> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.5
> 
> [2] 3.12 review dashboard:
> https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Release 3.12.5: Scheduled for the 12th of January

2018-01-10 Thread Jiffin Tony Thottan

Hi,

It's time to prepare the 3.12.5 release, which normally falls on the 10th of
each month; this time around it is scheduled for 12-01-2018.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.5? If so, mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail.

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going.

3) I have checked what went into 3.10 after the 3.12 release and whether
those fixes are already included in the 3.12 branch. The status on this is
*green*, as all fixes ported to 3.10 have been ported to 3.12 as well.

Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.5

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Exact purpose of network.ping-timeout

2018-01-10 Thread Raghavendra Gowdappa
+gluster-devel

- Original Message -
> From: "Raghavendra Gowdappa" 
> To: "Omar Kohl" 
> Cc: gluster-users@gluster.org
> Sent: Wednesday, January 10, 2018 11:47:31 AM
> Subject: Re: [Gluster-users] Exact purpose of network.ping-timeout
> 
> 
> 
> - Original Message -
> > From: "Raghavendra Gowdappa" 
> > To: "Omar Kohl" 
> > Cc: gluster-users@gluster.org
> > Sent: Wednesday, January 10, 2018 10:56:21 AM
> > Subject: Re: [Gluster-users] Exact purpose of network.ping-timeout
> > 
> > Sorry about the delayed response. Had to dig into the history to answer
> > various "why"s.
> > 
> > - Original Message -
> > > From: "Omar Kohl" 
> > > To: gluster-users@gluster.org
> > > Sent: Tuesday, December 26, 2017 6:41:48 PM
> > > Subject: [Gluster-users] Exact purpose of network.ping-timeout
> > > 
> > > Hi,
> > > 
> > > I have a question regarding the "ping-timeout" option. I have been
> > > researching its purpose for a few days and it is not completely clear to
> > > me - especially since the Gluster community apparently strongly encourages
> > > not changing, or at least not decreasing, this value!
> > > 
> > > Assuming that I set ping-timeout to 10 seconds (instead of the default 42),
> > > this would mean that if I have a network outage of 11 seconds then Gluster
> > > internally would have to re-allocate some resources that it freed after the
> > > 10 seconds, correct? But apart from that there are no negative implications,
> > > are there? For instance, if I'm copying files during the network outage then
> > > those files will continue copying after those 11 seconds.
> > > 
> > > This means that the only purpose of ping-timeout is to save those extra
> > > resources that are used by "short" network outages. Is that correct?
> > 
> > The basic purpose of the ping-timer/heartbeat is to identify an unresponsive
> > brick. Unresponsiveness can be caused by various things, such as:
> > * A deadlocked server. We no longer see too many instances of deadlocked
> >   bricks/servers.
> > * Slow execution of fops in the brick stack, e.g.,
> >   - due to lock contention. There have been some efforts to fix the lock
> >     contention in the brick stack.
> >   - a bad backend OS/filesystem. The posix health checker was an effort to
> >     fix this.
> >   - not enough threads for execution, etc.
> >   Note that ideally it's not the job of the ping framework to identify this
> >   scenario, and following the same thought process we've shielded the
> >   processing of ping requests on bricks from the cost of executing
> >   requests to the Glusterfs Program.
> > 
> > * Ungraceful shutdown of network connections, e.g.,
> >   - hard shutdown of the machine/container/VM running the brick
> >   - physically pulling out the network cable
> >   Basically, all those scenarios where TCP/IP doesn't get a chance
> >   to inform the other end that it is going down. Note that some of the
> >   scenarios of ungraceful network shutdown can be identified using
> >   TCP_KEEPALIVE and TCP_USER_TIMEOUT [1]. However, at the time the heartbeat
> >   mechanism was introduced in Glusterfs, TCP_KEEPALIVE couldn't identify all
> >   the ungraceful network shutdown scenarios and TCP_USER_TIMEOUT was yet to
> >   be implemented in the Linux kernel. One scenario which TCP_KEEPALIVE
> >   couldn't identify was the exact scenario TCP_USER_TIMEOUT aims to solve:
> >   identifying a hard network shutdown while data is in transit. However,
> >   there might be other limitations in TCP_KEEPALIVE which we need to test
> >   out before retiring the heartbeat mechanism in favor of TCP_KEEPALIVE
> >   and TCP_USER_TIMEOUT.
> > 
> > The next interesting question is why we need to identify an unresponsive
> > brick at all. The main reasons are:
> > * To replace/fix any problems the brick might have.
> > * Almost all of the cluster translators - DHT, AFR, EC - wait for a response
> >   from all of their children - either success or failure - before sending
> >   the response back to the application. This means one or more slow or
> >   unresponsive bricks can increase the latencies of fops/syscalls even though
> >   the other bricks are responsive and healthy. However, there are ongoing
> >   efforts to minimize the effect of a few slow/unresponsive bricks [2]. I
> >   think the principles of [2] can be applied to DHT and AFR too.
> > 
> > Some recent discussions on the necessity of the ping framework in Glusterfs
> > can be found at [3].
> > 
> > Having given all the above reasons for the existence of the ping framework,
> > it's also important that the ping framework shouldn't bring down an otherwise
> > healthy connection (false positives). Reasons are:
> > * As pointed out by Joe Julian in another mail on this thread, each
> > connection carries some state on bricks like locks/open-fds 
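
For reference, the option Omar asks about above is an ordinary volume option,
so it can be inspected and changed per volume. A minimal sketch (the volume
name "myvol" is only a placeholder; lowering the value carries the
false-positive risk described above):

    # show the current value (the default is 42 seconds)
    gluster volume get myvol network.ping-timeout

    # set a shorter timeout, e.g. 10 seconds, on a single volume
    gluster volume set myvol network.ping-timeout 10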

Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks each.

2018-01-10 Thread Nithya Balachandran
Hi Jose,

Gluster is working as expected. The Distributed-Replicate type just means
that there are now 2 replica sets and files will be distributed across
them.

A volume of type Replicate (1 x n, where n is the number of bricks in the
replica set) indicates there is no distribution (all files on the
volume will be present on all the bricks in the volume).


A volume of type Distributed-Replicate indicates the volume is both
distributed (files will only be created on one of the replica
sets) and replicated. So in the above example, a file will exist on either
Brick1 and Brick2, or Brick3 and Brick4.


After the add brick, the volume will have a total capacity of 28TB and
store 2 copies of every file. Let me know if that is not what you are
looking for.
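
To make the difference concrete, a rough sketch of the create commands for
the two layouts (host and brick names below are only placeholders, not taken
from your setup):

    # 1 x 4 pure replica: every file on all four bricks, usable size = one brick
    gluster volume create demo replica 4 n1:/bricks/a n2:/bricks/a n1:/bricks/b n2:/bricks/b

    # 2 x 2 distributed-replicate: each file on exactly one replica pair,
    # usable size = sum of the pairs, still 2 copies of every file
    gluster volume create demo replica 2 n1:/bricks/a n2:/bricks/a n1:/bricks/b n2:/bricks/b

The second form is what your volume has become, and it is what gives 28TB of
usable space with 2 copies of each file.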


Regards,
Nithya


On 10 January 2018 at 20:40, Jose Sanchez  wrote:

>
>
> Hi Nithya
>
> This is what I have so far: I have peered both cluster nodes together as a
> replica, from node 1A and 1B. Now when I try to add the second brick set, I
> get the error that it is already part of a volume, and when I run gluster
> volume info, I see that it has switched to Distributed-Replicate.
>
> Thanks
>
> Jose
>
>
>
>
>
> [root@gluster01 ~]# gluster volume status
> Status of volume: scratch
> Gluster process                            TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick gluster01ib:/gdata/brick1/scratch    49152     49153      Y       3140
> Brick gluster02ib:/gdata/brick1/scratch    49153     49154      Y       2634
> Self-heal Daemon on localhost              N/A       N/A        Y       3132
> Self-heal Daemon on gluster02ib            N/A       N/A        Y       2626
>
> Task Status of Volume scratch
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
>
> [root@gluster01 ~]#
>
> [root@gluster01 ~]# gluster volume info
>
>
> Volume Name: scratch
> Type: *Replicate*
> Volume ID: a6e20f7d-13ed-4293-ab8b-d783d1748246
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp,rdma
> Bricks:
> Brick1: gluster01ib:/gdata/brick1/scratch
> Brick2: gluster02ib:/gdata/brick1/scratch
> Options Reconfigured:
> performance.readdir-ahead: on
> nfs.disable: on
> [root@gluster01 ~]#
>
>
> -
>
> [root@gluster01 ~]# gluster volume add-brick scratch replica 2
> gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
> volume add-brick: failed: /gdata/brick2/scratch is already part of a volume
>
>
> [root@gluster01 ~]# gluster volume status
> Status of volume: scratch
> Gluster process                            TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick gluster01ib:/gdata/brick1/scratch    49152     49153      Y       3140
> Brick gluster02ib:/gdata/brick1/scratch    49153     49154      Y       2634
> Self-heal Daemon on gluster02ib            N/A       N/A        Y       2626
> Self-heal Daemon on localhost              N/A       N/A        Y       3132
>
> Task Status of Volume scratch
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
>
> [root@gluster01 ~]# gluster volume info
>
>
> Volume Name: scratch
> Type: *Distributed-Replicate*
> Volume ID: a6e20f7d-13ed-4293-ab8b-d783d1748246
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp,rdma
> Bricks:
> Brick1: gluster01ib:/gdata/brick1/scratch
> Brick2: gluster02ib:/gdata/brick1/scratch
> Brick3: gluster01ib:/gdata/brick2/scratch
> Brick4: gluster02ib:/gdata/brick2/scratch
> Options Reconfigured:
> performance.readdir-ahead: on
> nfs.disable: on
> [root@gluster01 ~]#
>
>
>
> 
> Jose Sanchez
> Center of Advanced Research Computing
> Albuquerque, NM 87131-0001
> carc.unm.edu
>
>
> On Jan 9, 2018, at 9:04 PM, Nithya Balachandran 
> wrote:
>
> Hi,
>
> Please let us know what commands you ran so far and the output of the *gluster
> volume info* command.
>
> Thanks,
> Nithya
>
> On 9 January 2018 at 23:06, Jose Sanchez  wrote:
>
>> Hello
>>
>> We are trying to set up Gluster for our project/scratch storage HPC
>> machine using a replicated mode with 2 nodes, 2 bricks each (14 TB each).
>>
>> Our goal is to have a replicated system between nodes 1 and 2
>> (A bricks) and add an additional 2 bricks (B bricks) from the 2 nodes, so
>> we can have a total of 28 TB in replicated mode.
>>
>> Node 1 [ (Brick A) (Brick B) ]
>> Node 2 [ (Brick A) (Brick B) ]
>> 
>> 14Tb + 14Tb = 28Tb
>>
>> At this point I was able to create the replica between nodes 1 and 2
>> (brick A), but I've not been able to add the second pair to the replica;
>> Gluster switches to distributed replica when I add it with only 14 TB.
>>
>> Any 

Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks each.

2018-01-10 Thread Jose Sanchez


Hi Nithya

This is what I have so far: I have peered both cluster nodes together as a
replica, from node 1A and 1B. Now when I try to add the second brick set, I
get the error that it is already part of a volume, and when I run gluster
volume info, I see that it has switched to Distributed-Replicate.

Thanks

Jose





[root@gluster01 ~]# gluster volume status
Status of volume: scratch
Gluster process                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01ib:/gdata/brick1/scratch    49152     49153      Y       3140
Brick gluster02ib:/gdata/brick1/scratch    49153     49154      Y       2634
Self-heal Daemon on localhost              N/A       N/A        Y       3132
Self-heal Daemon on gluster02ib            N/A       N/A        Y       2626

Task Status of Volume scratch
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@gluster01 ~]#

[root@gluster01 ~]# gluster volume info
 
Volume Name: scratch
Type: Replicate
Volume ID: a6e20f7d-13ed-4293-ab8b-d783d1748246
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp,rdma
Bricks:
Brick1: gluster01ib:/gdata/brick1/scratch
Brick2: gluster02ib:/gdata/brick1/scratch
Options Reconfigured:
performance.readdir-ahead: on
nfs.disable: on
[root@gluster01 ~]#


-

[root@gluster01 ~]# gluster volume add-brick scratch replica 2 
gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
volume add-brick: failed: /gdata/brick2/scratch is already part of a volume
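
(For reference, this error means the directory already carries gluster volume
metadata - either because it was used in an earlier attempt, or because the
brick has in fact already been added, as the volume info below suggests. If a
brick directory ever needs to be reused from scratch, a commonly used cleanup
sketch is the following - destructive, and it assumes the brick holds no data
that is still needed:

    setfattr -x trusted.glusterfs.volume-id /gdata/brick2/scratch
    setfattr -x trusted.gfid /gdata/brick2/scratch
    rm -rf /gdata/brick2/scratch/.glusterfs

Alternatively, add-brick accepts a trailing "force" to override the check.)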


[root@gluster01 ~]# gluster volume status
Status of volume: scratch
Gluster process                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01ib:/gdata/brick1/scratch    49152     49153      Y       3140
Brick gluster02ib:/gdata/brick1/scratch    49153     49154      Y       2634
Self-heal Daemon on gluster02ib            N/A       N/A        Y       2626
Self-heal Daemon on localhost              N/A       N/A        Y       3132

Task Status of Volume scratch
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@gluster01 ~]# gluster volume info
 
Volume Name: scratch
Type: Distributed-Replicate
Volume ID: a6e20f7d-13ed-4293-ab8b-d783d1748246
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp,rdma
Bricks:
Brick1: gluster01ib:/gdata/brick1/scratch
Brick2: gluster02ib:/gdata/brick1/scratch
Brick3: gluster01ib:/gdata/brick2/scratch
Brick4: gluster02ib:/gdata/brick2/scratch
Options Reconfigured:
performance.readdir-ahead: on
nfs.disable: on
[root@gluster01 ~]# 




Jose Sanchez
Center of Advanced Research Computing
Albuquerque, NM 87131-0001
carc.unm.edu 


> On Jan 9, 2018, at 9:04 PM, Nithya Balachandran  wrote:
> 
> Hi,
> 
> Please let us know what commands you ran so far and the output of the gluster 
> volume info command.
> 
> Thanks,
> Nithya
> 
> On 9 January 2018 at 23:06, Jose Sanchez wrote:
> Hello
> 
> We are trying to set up Gluster for our project/scratch storage HPC machine
> using a replicated mode with 2 nodes, 2 bricks each (14 TB each).
> 
> Our goal is to have a replicated system between nodes 1 and 2 (A
> bricks) and add an additional 2 bricks (B bricks) from the 2 nodes, so we
> can have a total of 28 TB in replicated mode.
> 
> Node 1 [ (Brick A) (Brick B) ]
> Node 2 [ (Brick A) (Brick B) ]
> 
>   14Tb + 14Tb = 28Tb
> 
> At this point I was able to create the replica between nodes 1 and 2 (brick
> A), but I've not been able to add the second pair to the replica; Gluster
> switches to distributed replica when I add it with only 14 TB.
> 
> Any help will be appreciated.
> 
> Thanks
> 
> Jose
> 
> -
> Jose Sanchez
> Center of Advanced Research Computing
> Albuquerque, NM 87131
> carc.unm.edu 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org 
> http://lists.gluster.org/mailman/listinfo/gluster-users 
> 
> 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Blocking IO when hot tier promotion daemon runs

2018-01-10 Thread Tom Fite
I should add that additional testing has shown that only access to files is
held up; IO is not interrupted for existing transfers. I think this points
to the heat metadata in the sqlite DB for the tier. Is it possible that a
table is temporarily locked while the promotion daemon runs, so that the
calls to update the access count on files are blocked?

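One way to test that theory is to change how often the promotion/demotion
cycle runs and how aggressively files are considered hot; these are ordinary
volume options. A rough sketch (values are only illustrative; the volume name
gv0 is taken from the thread below):

    # how often promotion/demotion cycles run, in seconds
    gluster volume get gv0 cluster.tier-promote-frequency
    gluster volume get gv0 cluster.tier-demote-frequency
    gluster volume set gv0 cluster.tier-promote-frequency 1500

    # minimum reads/writes before a file is considered for promotion
    gluster volume set gv0 cluster.read-freq-threshold 5
    gluster volume set gv0 cluster.write-freq-threshold 5

If the stalls move with the promotion frequency, that would support the idea
that the contention is around the tier database updates.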

On Wed, Jan 10, 2018 at 10:17 AM, Tom Fite  wrote:

> The sizes of the files are extremely varied: there are millions of small
> (<1 MB) files and thousands of files larger than 1 GB.
>
> Attached are the tier logs for gluster1 and gluster2. These are full of
> "demotion failed" messages, which also show up in the status:
>
> [root@pod-sjc1-gluster1 gv0]# gluster volume tier gv0 status
> Node                 Promoted files  Demoted files  Status       run time in h:m:s
> ---------------------------------------------------------------------------------
> localhost            25940           0              in progress  112:21:49
> pod-sjc1-gluster2    0               2917154        in progress  112:21:49
>
> Is it normal for promotions and demotions to each happen on only one server,
> rather than on both?
>
> Volume info:
>
> [root@pod-sjc1-gluster1 ~]# gluster volume info
>
> Volume Name: gv0
> Type: Distributed-Replicate
> Volume ID: d490a9ec-f9c8-4f10-a7f3-e1b6d3ced196
> Status: Started
> Snapshot Count: 13
> Number of Bricks: 3 x 2 = 6
> Transport-type: tcp
> Bricks:
> Brick1: pod-sjc1-gluster1:/data/brick1/gv0
> Brick2: pod-sjc1-gluster2:/data/brick1/gv0
> Brick3: pod-sjc1-gluster1:/data/brick2/gv0
> Brick4: pod-sjc1-gluster2:/data/brick2/gv0
> Brick5: pod-sjc1-gluster1:/data/brick3/gv0
> Brick6: pod-sjc1-gluster2:/data/brick3/gv0
> Options Reconfigured:
> performance.cache-refresh-timeout: 60
> performance.stat-prefetch: on
> server.allow-insecure: on
> performance.flush-behind: on
> performance.rda-cache-limit: 32MB
> network.tcp-window-size: 1048576
> performance.nfs.io-threads: on
> performance.write-behind-window-size: 4MB
> performance.nfs.write-behind-window-size: 512MB
> performance.io-cache: on
> performance.quick-read: on
> features.cache-invalidation: on
> features.cache-invalidation-timeout: 600
> performance.cache-invalidation: on
> performance.md-cache-timeout: 600
> network.inode-lru-limit: 9
> performance.cache-size: 4GB
> server.event-threads: 16
> client.event-threads: 16
> features.barrier: disable
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: on
> cluster.lookup-optimize: on
> server.outstanding-rpc-limit: 1024
> auto-delete: enable
>
>
> # gluster volume status
> Status of volume: gv0
> Gluster process                              TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Hot Bricks:
> Brick pod-sjc1-gluster2:/data/hot_tier/gv0   49219     0          Y       26714
> Brick pod-sjc1-gluster1:/data/hot_tier/gv0   49199     0          Y       21325
> Cold Bricks:
> Brick pod-sjc1-gluster1:/data/brick1/gv0     49152     0          Y       3178
> Brick pod-sjc1-gluster2:/data/brick1/gv0     49152     0          Y       4818
> Brick pod-sjc1-gluster1:/data/brick2/gv0     49153     0          Y       3186
> Brick pod-sjc1-gluster2:/data/brick2/gv0     49153     0          Y       4829
> Brick pod-sjc1-gluster1:/data/brick3/gv0     49154     0          Y       3194
> Brick pod-sjc1-gluster2:/data/brick3/gv0     49154     0          Y       4840
> Tier Daemon on localhost                     N/A       N/A        Y       20313
> Self-heal Daemon on localhost                N/A       N/A        Y       32023
> Tier Daemon on pod-sjc1-gluster1             N/A       N/A        Y       24758
> Self-heal Daemon on pod-sjc1-gluster2        N/A       N/A        Y       12349
>
> Task Status of Volume gv0
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
>
> On Tue, Jan 9, 2018 at 10:33 PM, Hari Gowtham  wrote:
>
>> Hi,
>>
>> Can you send the volume info and volume status output, and the tier logs?
>> I also need to know the size of the files that are being stored.
>>
>> On Tue, Jan 9, 2018 at 9:51 PM, Tom Fite  wrote:
>> > I've recently enabled an SSD-backed 2 TB hot tier on my 150 TB
>> > distributed-replicated volume (2 servers, 3 bricks per server).
>> >
>> > I'm seeing IO get blocked across all client FUSE threads for 10 to 15
>> > seconds while the promotion daemon runs. I see the 'glustertierpro' thread
>> > jump to 99% CPU usage on both boxes when these delays occur, and they
>> > happen every 25 minutes (my tier-promote-frequency

Re: [Gluster-users] Announcing Glusterfs release 3.12.4 (Long Term Maintenance)

2018-01-10 Thread Niels de Vos
On Fri, Jan 05, 2018 at 06:21:26PM -0600, Darrell Budic wrote:
> Hey Niels,
> 
> Installed 3.12.4 from centos-gluster312-test on my dev oVirt hyperconverged
> cluster. Everything looks good and is working as expected for storage,
> migration, & healing. Need any specifics?

Thanks Darrell!
Sorry for the delay, I have been on holidays and returned yesterday. The
packages have now been marked for release and will hopefully land on the
CentOS mirrors later today.

Niels
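
For anyone else who wants to help test before the packages hit the mirrors, a
rough sketch of pulling from the Storage SIG test repository on CentOS (the
release package name here is an assumption based on the usual SIG naming, so
adjust as needed):

    # enable the Storage SIG repos for the 3.12 series
    yum install centos-release-gluster312
    # pull the candidate builds from the test repo mentioned above
    yum --enablerepo=centos-gluster312-test update 'glusterfs*'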


> 
>   -D
> 
> > From: Jiffin Tony Thottan 
> > Subject: [Gluster-users] Announcing Glusterfs release 3.12.4 (Long Term 
> > Maintenance)
> > Date: December 19, 2017 at 12:14:15 AM CST
> > To: gluster-users@gluster.org, gluster-de...@gluster.org, 
> > annou...@gluster.org
> > 
> > The Gluster community is pleased to announce the release of Gluster 3.12.4 
> > (packages available at [1,2,3]).
> > 
> > Release notes for the release can be found at [4].
> > 
> > We still carry the following major issue, which is reported in the release
> > notes:
> > 
> > 1.) Expanding a gluster volume that is sharded may cause file corruption
> > 
> > Sharded volumes are typically used for VM images; if such volumes are
> > expanded or possibly contracted (i.e. add/remove bricks and rebalance),
> > there are reports of VM images getting corrupted.
> > 
> > The last known cause for corruption (Bug #1465123) has a fix with this 
> > release. As further testing is still in progress, the issue is retained as 
> > a major issue.
> > 
> > Status of this bug can be tracked here, #1465123
> > 
> > Thanks,
> > Gluster community
> > 
> > 
> > [1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.4/
> > [2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
> > [3] https://build.opensuse.org/project/subprojects/home:glusterfs
> > [4] Release notes: 
> > https://gluster.readthedocs.io/en/latest/release-notes/3.12.4/
> > 
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://lists.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users