[Gluster-devel] glusterfs-3.7.3 released

2015-07-29 Thread Kaushal M
Hi All.

I'm pleased to announce the release of glusterfs-3.7.3. This release
includes a lot of bug fixes and stabilizes the 3.7 branch further. The
summary of the bugs fixed is available at the end of this mail.

The source and RPMs are available at
http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.3/ . I'll
notify the list as other packages become available.

Thanks to all who submitted fixes for this release.

Regards,
Kaushal

## Bugs fixed in this release

1212842: tar on a glusterfs mount displays "file changed as we read
it" even though the file was not changed
1214169: glusterfsd crashed while rebalance and self-heal were in progress
1217722: Tracker bug for Logging framework expansion.
1219358: Disperse volume: client crashed while running iozone
1223318: brick-op failure for glusterd command should log error
message in cmd_history.log
122: BitRot :- Handle brick re-connection sanely in bitd/scrub process
1226830: Scrubber crash upon pause
1227572: Sharding - Fix posix compliance test failures.
1227808: Issues reported by Cppcheck static analysis tool
1228535: Memory leak in marker xlator
1228640: afr: unrecognized option in re-balance volfile
1229282: Disperse volume: Huge memory leak of glusterfsd process
1229563: Disperse volume: Failed to update version and size (error 2)
seen during delete operations
1230327: context of access control translator should be updated
properly for GF_POSIX_ACL_*_KEY xattrs
1230399: [Snapshot] Scheduled job is not processed when one of the
node of shared storage volume is down
1230523: glusterd: glusterd crashes if you run "re-balance" and "vol
status" commands in parallel.
1230857: Files migrated should stay on a tier for a full cycle
1231024: scrub frequency and throttle change information need to be
present in the Scrubber log
1231608: Add regression test for cluster lock in a heterogeneous cluster
1231767: tiering:compiler warning with gcc v5.1.1
1232173: Incomplete self-heal and split-brain on directories found
when self-healing files/dirs on a replaced disk
1232185: cli correction: if tried to create multiple bricks on same
server shows replicate volume instead of disperse volume
1232199: Skip zero byte files when triggering signing
1232333: Ganesha-ha.sh cluster setup not working with RHEL7 and derivatives
1232335: nfs-ganesha: volume is not in list of exports in case of
volume stop followed by volume start
1232602: bug-857330/xml.t fails spuriously
1232612: Disperse volume: misleading unsuccessful message with heal
and heal full
1232660: Change default values of allow-insecure and bind-insecure
1232883: Snapshot daemon failed to run on newly created dist-rep
volume with uss enabled
1232885: [SNAPSHOT]: man gluster needs modification for few snapshot commands
1232886: [SNAPSHOT]: Output message when a snapshot create is issued
when multiple bricks are down needs to be improved
1232887: [SNAPSHOT] : Snapshot delete fails with error - Snap might
not be in an usable state
1232889: Snapshot: When cluster.enable-shared-storage is enabled,
shared storage should get mounted after node reboot
1233041: glusterd crashed when testing heal full on replaced disks
1233158: Null pointer dereference in dht_migrate_complete_check_task
1233518: [Backup]: Glusterfind session(s) created before starting the
volume results in 'changelog not available' error, eventually
1233555: gluster v set help needs to be updated for
cluster.enable-shared-storage option
1233559: libglusterfs: avoid crash due to ctx being NULL
1233611: Incomplete conservative merge for split-brained directories
1233632: Disperse volume: client crashed while running iozone
1233651: pthread cond and mutex variables of fs struct have to be
destroyed conditionally.
1234216: nfs-ganesha: add node fails to add a new node to the cluster
1234225: Data Tiering: add tiering set options to volume set help
(cluster.tier-demote-frequency and cluster.tier-promote-frequency)
1234297: Quota: Porting logging messages to new logging framework
1234408: STACK_RESET may crash with concurrent statedump requests to a
glusterfs process
1234584: nfs-ganesha:delete node throws error and pcs status also
notifies about failures, in fact I/O also doesn't resume post grace
period
1234679: Disperse volume : 'ls -ltrh' doesn't list correct size of the
files every time
1234695: [geo-rep]: Setting meta volume config to false when meta
volume is stopped/deleted leads geo-rep to faulty
1234843: GlusterD does not store updated peerinfo objects.
1234898: [geo-rep]: Feature fan-out fails with the use of meta volume config
1235203: tiering: tier status shows as "progressing" but there is no
rebalance daemon running
1235208: glusterd: glusterd crashes while importing a USS enabled
volume which is already started
1235242: changelog: directory renames not getting recorded
1235258: nfs-ganesha: ganesha-ha.sh --refresh-config not working
1235297: [geo-rep]: set_geo_rep_pem_keys.sh needs modification in
gluster path to support mount broker functionality

Re: [Gluster-devel] reviving spurious failures tracking

2015-07-29 Thread Vijay Bellur

On Wednesday 29 July 2015 03:40 PM, Pranith Kumar Karampuri wrote:

hi,
     I just updated
https://public.pad.fsfe.org/p/gluster-spurious-failures with the latest
spurious failures we saw in the Linux and NetBSD regressions. Could you
update the pad with any more spurious regressions you are observing
that aren't listed there yet? And could you help fix these issues
quickly? The number of failures has been increasing quite a bit lately.



I think we have been very tolerant of failing tests and it is time to
change this behavior. I propose that:


- we block commits for components that have failing tests listed in the 
tracking etherpad.



- once failing tests are addressed on a particular branch, normal patch 
merging can resume.


- If there are tests that cannot be fixed easily in the near term, we 
move such tests to a different folder or drop such test units.


Thoughts?

Regards,
Vijay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] reviving spurious failures tracking

2015-07-29 Thread Pranith Kumar Karampuri

hi,
I just updated
https://public.pad.fsfe.org/p/gluster-spurious-failures with the latest
spurious failures we saw in the Linux and NetBSD regressions. Could you
update the pad with any more spurious regressions you are observing
that aren't listed there yet? And could you help fix these issues
quickly? The number of failures has been increasing quite a bit lately.



Tests to be fixed (Linux):
tests/bugs/distribute/bug-1066798.t
(http://build.gluster.org/job/rackspace-regression-2GB-triggered/12908/console)
(http://build.gluster.org/job/rackspace-regression-2GB-triggered/12907/console)
tests/bitrot/bug-1244613.t
(http://build.gluster.org/job/rackspace-regression-2GB-triggered/12906/console)
tests/bugs/snapshot/bug-1109889.t
(http://build.gluster.org/job/rackspace-regression-2GB-triggered/12905/console)
tests/bugs/replicate/bug-1238508-self-heal.t
(http://build.gluster.org/job/rackspace-regression-2GB-triggered/12904/console)
tests/basic/nufa.t
(http://build.gluster.org/job/rackspace-regression-2GB-triggered/12902/console)

On NetBSD:
tests/basic/mount-nfs-auth.t
(http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/8796/console)
tests/basic/tier/tier-attach-many.t
(http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/8789/console)
tests/basic/afr/arbiter.t
(http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/8785/console)
tests/basic/tier/bug-1214222-directories_miising_after_attach_tier.t
(http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/8784/console)
tests/basic/quota.t
(http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/8780/console)


The first step is for the respective developers to move the tests above
into the "Tests being looked at (please put your name against the test
you are looking into)" section.


Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] v3.6.3 doesn't respect default ACLs?

2015-07-29 Thread Raghavendra Bhat

On 07/27/2015 08:30 PM, Glomski, Patrick wrote:
I built a patched version of 3.6.4 and the problem does seem to be 
fixed on a test server/client when I mounted with those flags (acl, 
resolve-gids, and gid-timeout). Seeing as it was a test system, I 
can't really provide anything meaningful as to the performance hit 
seen without the gid-timeout option. Thank you for implementing it so 
quickly, though!


Is there any chance of getting this fix incorporated in the upcoming 
3.6.5 release?


Patrick


I am planning to include this fix in 3.6.5. The fix is still under
review. Once it is accepted in master, it can be backported to the
release-3.6 branch. I will wait till then before making 3.6.5.


Regards,
Raghavendra Bhat




On Thu, Jul 23, 2015 at 6:27 PM, Niels de Vos nde...@redhat.com wrote:


On Tue, Jul 21, 2015 at 10:30:04PM +0200, Niels de Vos wrote:
 On Wed, Jul 08, 2015 at 03:20:41PM -0400, Glomski, Patrick wrote:
  Gluster devs,
 
  I'm running gluster v3.6.3 (both server and client side). Since my
  application requires more than 32 groups, I don't mount with ACLs on
  the client. If I mount with ACLs between the bricks and set a default
  ACL on the server, I think I'm right in stating that the server
  should respect that ACL whenever a new file or folder is made.

 I would expect that the ACL gets inherited on the brick. When a new
 file is created without the default ACL, things seem to be wrong. You
 mention that creating the file directly on the brick has the correct
 ACL, so there must be some Gluster component interfering.

 You reminded me on IRC about this email, and that helped a lot. It's
 very easy to get distracted when trying to investigate things from the
 mailinglists.

 I had a brief look, and I think we could reach a solution. An ugly
 patch for initial testing is ready. Well... it compiles. I'll try to
 run some basic tests tomorrow and see if it improves things and does
 not crash immediately.

 The change can be found here:
 http://review.gluster.org/11732

 It basically adds a resolve-gids mount option for the FUSE client.
 This causes the fuse daemon to call getgrouplist() and retrieve all
 the groups for the UID that accesses the mountpoint. Without this
 option, the behavior is not changed, and /proc/$PID/status is used to
 get up to 32 groups (the $PID is the process that accesses the
 mountpoint).

 You probably want to also mount with gid-timeout=N, where N is the
 number of seconds that the group cache is valid. In the current master
 branch this is set to 300 seconds (like the sssd default), but if the
 groups of a user rarely change, this value can be increased. Previous
 versions had a lower timeout, which could cause the groups to be
 resolved on almost every network packet that arrives (a HUGE
 performance impact).

 When using this option, you may also need to enable
 server.manage-gids. This option allows using more than ~93 groups on
 the bricks. The network packets can only contain ~93 groups; when
 server.manage-gids is enabled, the groups are not sent in the network
 packets but are resolved on the bricks with getgrouplist().
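
Putting those options together, a minimal sketch of the client mount
and the matching volume setting (the volume name, server name, and
mountpoint below are placeholders, not taken from this thread):

    # on the client: mount with ACL support, the new resolve-gids
    # option, and a 300-second group cache
    mount -t glusterfs -o acl,resolve-gids,gid-timeout=300 server1:/vol0 /mnt/vol0

    # on the volume: resolve groups on the bricks instead of trusting
    # the ~93-group list that fits in the RPC packets
    gluster volume set vol0 server.manage-gids on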

The patch linked above had been tested, corrected and updated. The
change works for me on a test-system.

A backport that you should be able to include in a package for 3.6 can
be found here: http://termbin.com/f3cj
Let me know if you are not familiar with rebuilding patched packages,
and I can build a test-version for you tomorrow.
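
For anyone not familiar with that process, a rough sketch of applying
the backport to an unpacked source tree (the directory name is a
placeholder; the actual rebuild step depends on your distribution's
packaging tooling):

    # fetch the backported change and apply it to the source tree
    curl -o resolve-gids.patch http://termbin.com/f3cj
    cd glusterfs-3.6.4
    patch -p1 < resolve-gids.patch
    # then rebuild the packages as usual, e.g. with rpmbuild or
    # dpkg-buildpackage, depending on the distribution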

On glusterfs-3.6, you will want to pass a gid-timeout mount option too.
The option enables caching of the resolved groups that the uid belongs
to; if caching is not enabled (or expires quickly), you will probably
notice a performance hit. Newer versions of GlusterFS set the timeout
to 300 seconds (like the default timeout sssd uses).

Please test and let me know if this fixes your use case.

Thanks,
Niels



 Cheers,
 Niels

  Maybe an example is in order:
 
  We first set up a test directory with setgid bit so that our new
  subdirectories inherit the group.
  [root@gfs01a hpc_shared]# mkdir test; cd test; chown pglomski.users .; chmod 2770 .; getfacl .
  # file: .
  # owner: pglomski
  # group: users
  # flags: -s-
  user::rwx
  group::rwx
  other::---
 
  New subdirectories share the group, but the umask leads to them being
  group read-only.
  [root@gfs01a test]# mkdir a; getfacl a
  # file: a
  # owner: root
  # group: users
  # flags: -s-
  user::rwx
  group::r-x
  other::r-x
 
  Setting default ACLs on the server allows group write to new
  directories made
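
The quoted example is cut off here, but the default ACL being described
would be set with something along these lines (a sketch, not the
poster's exact command):

    # grant the owning group rwx by default on everything created
    # under this directory
    setfacl -d -m group::rwx .
    # verify the default entries
    getfacl .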

[Gluster-devel] REMINDER: Weekly Gluster meeting today at 12UTC

2015-07-29 Thread Kaushal M
Hi all,

The weekly Gluster community meeting will be starting in
#gluster-meeting on Freenode at 12UTC (~90 minutes from now).

Agenda: https://public.pad.fsfe.org/p/gluster-community-meetings

Current Agenda topics are:
- Last week's action items
- Gluster 3.7
- Gluster 3.6
- Gluster 3.5
- Gluster 4.0
- Open Floor

Thanks,
Kaushal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] reviving spurious failures tracking

2015-07-29 Thread Mohammed Rafi K C
I have updated the tiering-related spurious failures to their proper state.

Regards
Rafi KC

On 07/29/2015 03:40 PM, Pranith Kumar Karampuri wrote:
 hi,
 I just updated
 https://public.pad.fsfe.org/p/gluster-spurious-failures with the
 latest spurious failures we saw in the Linux and NetBSD regressions.
 Could you update the pad with any more spurious regressions you are
 observing that aren't listed there yet? And could you help fix these
 issues quickly? The number of failures has been increasing quite a
 bit lately.


 Tests to be fixed (Linux):
 tests/bugs/distribute/bug-1066798.t
 (http://build.gluster.org/job/rackspace-regression-2GB-triggered/12908/console)
 (http://build.gluster.org/job/rackspace-regression-2GB-triggered/12907/console)
 tests/bitrot/bug-1244613.t
 (http://build.gluster.org/job/rackspace-regression-2GB-triggered/12906/console)
 tests/bugs/snapshot/bug-1109889.t
 (http://build.gluster.org/job/rackspace-regression-2GB-triggered/12905/console)
 tests/bugs/replicate/bug-1238508-self-heal.t
 (http://build.gluster.org/job/rackspace-regression-2GB-triggered/12904/console)
 tests/basic/nufa.t
 (http://build.gluster.org/job/rackspace-regression-2GB-triggered/12902/console)

 On NetBSD:
 tests/basic/mount-nfs-auth.t
 (http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/8796/console)
 tests/basic/tier/tier-attach-many.t
 (http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/8789/console)
 tests/basic/afr/arbiter.t
 (http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/8785/console)
 tests/basic/tier/bug-1214222-directories_miising_after_attach_tier.t
 (http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/8784/console)
 tests/basic/quota.t
 (http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/8780/console)

 The first step is for the respective developers to move the tests
 above into the "Tests being looked at (please put your name against
 the test you are looking into)" section.

 Pranith


 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] reviving spurious failures tracking

2015-07-29 Thread Pranith Kumar Karampuri



On 07/29/2015 06:10 PM, Emmanuel Dreyfus wrote:

On Wed, Jul 29, 2015 at 04:06:43PM +0530, Vijay Bellur wrote:

- If there are tests that cannot be fixed easily in the near term, we move
such tests to a different folder or drop such test units.

A tests/disabled directory seems the way to go. But before going there,
the test maintainer should be notified. Perhaps we should have a list
of contacts in a comment at the top of each test?

Jeff has already implemented the bad-tests infrastructure. Can we use the same?

Pranith




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] reviving spurious failures tracking

2015-07-29 Thread Emmanuel Dreyfus
On Wed, Jul 29, 2015 at 04:06:43PM +0530, Vijay Bellur wrote:
 - If there are tests that cannot be fixed easily in the near term, we move
 such tests to a different folder or drop such test units.

A tests/disabled directory seems the way to go. But before going there,
the test maintainer should be notified. Perhaps we should have a list
of contacts in a comment at the top of each test?
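
As a sketch of what both suggestions could look like (the directory
name and the comment format are hypothetical, nothing agreed yet):

    # park a known-bad test outside the regression run until it is fixed
    git mv tests/basic/quota.t tests/disabled/quota.t

    # and each .t file could open with a contact line, for example:
    #   #!/bin/bash
    #   # Contact: m...@netbsd.org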

-- 
Emmanuel Dreyfus
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] gfapi 3.6.3 QEMU 2.3 Ubuntu 14.04 testing

2015-07-29 Thread Josh Boon
Hey Prasanna,

Thanks for your help! One of the issues we've had is that dd doesn't
seem to reproduce it. Anything that logs and handles large volumes of
traffic, think mail and web servers, tends to segfault the most
frequently. I could write up a load test, and we could put Apache on
the machine and try that, as that's closest to what we run. Also, if
you don't object, could I get on the machine to figure out AppArmor
and do a writeup? Most folks probably won't be able to disable it
completely.

Best,
Josh

- Original Message -
From: Prasanna Kalever pkale...@redhat.com
To: Josh Boon glus...@joshboon.com
Cc: Pranith Kumar Karampuri pkara...@redhat.com, Gluster Devel 
gluster-devel@gluster.org
Sent: Wednesday, July 29, 2015 1:54:34 PM
Subject: Re: [Gluster-devel] gfapi 3.6.3 QEMU 2.3 Ubuntu 14.04 testing

Hi Josh Boon,

Below are my setup details:


# qemu-system-x86_64 --version
QEMU emulator version 2.3.0 (Debian 1:2.3+dfsg-5ubuntu4), Copyright (c) 2003-2008 Fabrice Bellard

# gluster --version
glusterfs 3.6.3 built on Jul 29 2015 16:01:10
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. http://www.gluster.com
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

# lsb_release -a
Distributor ID: Ubuntu
Description:    Ubuntu 14.04 LTS
Release:        14.04
Codename:       trusty

# gluster vol info
Volume Name: vol1
Type: Replicate
Volume ID: ad78ac6c-c55e-4f4a-8b1b-a11865f1d01e
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.70.1.156:/brick1
Brick2: 10.70.1.156:/brick2
Options Reconfigured:
server.allow-insecure: on
storage.owner-uid: 116
storage.owner-gid: 125

Re: [Gluster-devel] gfapi 3.6.3 QEMU 2.3 Ubuntu 14.04 testing

2015-07-29 Thread Prasanna Kalever
Hi Josh Boon,

Below are my setup details:


# qemu-system-x86_64 --version
QEMU emulator version 2.3.0 (Debian 1:2.3+dfsg-5ubuntu4), Copyright (c) 2003-2008 Fabrice Bellard

# gluster --version
glusterfs 3.6.3 built on Jul 29 2015 16:01:10
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. http://www.gluster.com
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

# lsb_release -a
Distributor ID: Ubuntu
Description:    Ubuntu 14.04 LTS
Release:        14.04
Codename:       trusty

# gluster vol info
Volume Name: vol1
Type: Replicate
Volume ID: ad78ac6c-c55e-4f4a-8b1b-a11865f1d01e
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.70.1.156:/brick1
Brick2: 10.70.1.156:/brick2
Options Reconfigured:
server.allow-insecure: on
storage.owner-uid: 116
storage.owner-gid: 125

# gluster vol status
Status of volume: vol1
Gluster process                             Port    Online  Pid
------------------------------------------------------------
Brick 10.70.1.156:/brick1
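
For context, a guest on a setup like this would typically attach its
disk over gfapi with a gluster:// URI, along these lines (the image
name and guest parameters are placeholders):

    # boot a guest whose disk lives on the replicated volume via libgfapi
    qemu-system-x86_64 -enable-kvm -m 2048 \
        -drive file=gluster://10.70.1.156/vol1/guest.img,format=qcow2,if=virtio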

[Gluster-devel] Minutes of the weekly Gluster community meeting 29-July-2015

2015-07-29 Thread Kaushal M
Thank you to everyone who took part in the meeting. Today's meeting
finished exactly on time and we covered a lot of topics.

I've added the links to the meeting logs and the meeting summary to
the end of this mail.

As always, the next community meeting will be on next Wednesday
1200UTC in #gluster-meeting.
I've attached a calendar invite that can be imported into your calendars.

Thanks again.

Cheers,
Kaushal

---
Minutes: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-07-29/gluster-meeting.2015-07-29-12.01.html
Minutes (text):
http://meetbot.fedoraproject.org/gluster-meeting/2015-07-29/gluster-meeting.2015-07-29-12.01.txt
Log: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-07-29/gluster-meeting.2015-07-29-12.01.log.html


Meeting summary
---
* Roll call  (kshlm, 12:01:45)

* Last week's action items  (kshlm, 12:03:35)

*   (kshlm, 12:04:20)

* tigert summarize options for integrating a calendar on the website,
  call for feedback  (kshlm, 12:04:41)
  * ACTION: tigert summarize options for integrating a calendar on the
website, call for feedback  (kshlm, 12:05:40)

* raghu to reassess timeline for next 3.6 release  (kshlm, 12:05:57)
  * AGREED: Do releases periodically every month. (excluding exceptions
for urgent fixes)  (kshlm, 12:11:56)
  * ACTION: hagarth and raghu to prepare release-schedule  (kshlm,
12:14:49)

* overclk to create a feature page about lockdep in couple of week's
  time  (kshlm, 12:15:45)
  * ACTION: overclk to create a feature page about lockdep in couple of
week's time  (kshlm, 12:16:32)

* Gluster 3.7  (kshlm, 12:16:48)
  * ACTION: hagarth to do 3.7.3 announcement on the gluster blog and
social media.  (kshlm, 12:24:15)
  * ACTION: ndevos to help kshlm close bugs  (kshlm, 12:24:38)
  * pranithk to be release-manager for 3.7.5  (kshlm, 12:24:52)
  * ACTION: pranithk to write up a post announcing EC's production
readiness.  (kshlm, 12:29:08)

* Gluster 3.6  (kshlm, 12:30:43)

* Gluster 3.5  (kshlm, 12:34:27)
  * ndevos hopes to release 3.5.6 in two weeks time.  (kshlm, 12:38:52)

* Open floor  (kshlm, 12:39:14)

* Gluster 4.0  (kshlm, 12:40:13)

* Open floor  (kshlm, 12:46:18)

* regression test failures  (kshlm, 12:48:03)
  * LINK: https://public.pad.fsfe.org/p/gluster-spurious-failures is the
pad with the failures  (ndevos, 12:53:23)
  * regression failure tracker
https://public.pad.fsfe.org/p/gluster-spurious-failures  (kshlm,
12:54:04)

* DiSTAF  (kshlm, 12:54:33)
  * ACTION: rtalur to send update mailing list with a DiSTAF how-to and
start discussion on enhancements to DiSTAF.  (kshlm, 12:56:15)
  * REMINDER to put (even minor) interesting topics on
https://public.pad.fsfe.org/p/gluster-weekly-news  (ndevos,
12:58:55)
  * ACTION: kshlm to test the new jenkins slave in ci.gluster.org
(kshlm, 12:58:57)
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//www.marudot.com//iCal Event Maker
X-WR-CALNAME:Gluster Community Meeting
CALSCALE:GREGORIAN
BEGIN:VEVENT
DTSTAMP:20150729T131350Z
UID:20150729t131350z-1363560...@marudot.com
DTSTART;TZID=Etc/UTC:20150805T12
DTEND;TZID=Etc/UTC:20150805T13
SUMMARY:Gluster Community Meeting 05-Aug-2015
URL:https%3A%2F%2Fpublic.pad.fsfe.org%2Fp%2Fgluster-community-meetings
DESCRIPTION:This is the weekly Gluster community meeting. The agenda is available at https://public.pad.fsfe.org/p/gluster-community-meetings
LOCATION:#gluster-meeting on freenode
BEGIN:VALARM
ACTION:DISPLAY
DESCRIPTION:Gluster Community Meeting 05-Aug-2015
TRIGGER:-PT30M
END:VALARM
END:VEVENT
END:VCALENDAR
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel