Re: [Gluster-devel] Proactive backports to release branches

2015-01-14 Thread Atin Mukherjee


On 01/14/2015 04:44 PM, Vijay Bellur wrote:
 On 01/14/2015 04:34 PM, Vijay Bellur wrote:
 Hi All,

 We have normally followed a reactive process for backporting patches
 to release branches. The backport guidelines page [1] describes it in
 more detail. Given the rate at which our master branch moves, I think it
 is becoming hard for users to identify which patch(es) can potentially
 fix the problems they face in their deployments. I have also heard a
 similar problem reported by release maintainers, who are not always able
 to cherry-pick patches from master to their respective releases because
 the backports can be non-trivial in nature. Overall this can lead to
 minor releases that lack the right content from a stability & usability
 point of view.

 I have been thinking about this problem and here are some solutions that
 crossed my mind:

 1. Developers become more pro-active in backporting patches to release
 branches.
IMO this is the best approach: developers need to be more proactive in
assessing the importance of their patches and in ensuring they get
backported. I think it would be really tedious and harsh for a release
maintainer, or even a component maintainer, to follow up on each and
every patch going into master and decide whether it is a candidate for
backport.

~Atin

 2. Release maintainers open a patch acceptance window from component
 maintainers for every minor release. Component owners can nominate
 patches for inclusion in a minor release and work with the respective
 developers to have those patches backported.

 3. Component maintainers notify release maintainers about important
 patches when they merge them on master. Release maintainers can then work
 with developers & component maintainers to have the backports in a minor
 release.

 4. We nominate serious bugs as release blockers during our weekly bug
 triage and ensure that these bugs get addressed for a minor release.

 We might end up needing a combination of these and other ideas to make
 the minor releases contain the right content for our users. Your
 thoughts and ideas on addressing this problem would be very welcome.

 Once there is consensus, I will be happy to document this process on
 gluster.org.

 Thanks,
 Vijay
 
 and adding the missing link:
 
 [1]
 http://www.gluster.org/community/documentation/index.php/Backport_Guidelines
 
 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] ctime weirdness

2015-01-14 Thread Emmanuel Dreyfus
Anand Avati av...@gluster.org wrote:

 I don't think the problem is with the handling of SETATTR in either NetBSD
 or Linux. I am guessing NetBSD FUSE is _using_ SETATTR to update atime upon
 open? Linux FUSE just leaves it to the backend filesystem to update atime.
 Whenever there is a SETATTR fop, ctime is _always_ bumped.

Yes, I understood that: the kernel should not send an atime update when
the file is read. I have fixed it in NetBSD FUSE.

-- 
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Quota command bug in 3.6.1?

2015-01-14 Thread Vijaikumar M

Hi Raghuram,

Thanks for reporting the problem.

We will submit the fix upstream soon.

Thanks,
Vijay


On Wednesday 14 January 2015 01:50 PM, Raghuram BK wrote:
When I issue the quota list command with the xml option, it seems to 
return non-XML data:


[root@fractalio-66f2 fractalio]# gluster --version
glusterfs 3.6.1 built on Jan 13 2015 16:46:51
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU
General Public License.


[root@primary templates]# gluster volume quota vol1 list --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volQuota/>
</cliOutput>
                  Path                   Hard-limit Soft-limit   Used  Available  Soft-limit exceeded? Hard-limit exceeded?
---------------------------------------------------------------------------------------------------------------------------
/                                          10.0GB       80%      0Bytes  10.0GB        No                   No


--

*Fractalio Data, India*

Mobile: +91 96635 92022

Email: r...@fractalio.com

Web: www.fractalio.com






___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] An interesting take on GoTo statement by Dijkstra

2015-01-14 Thread Nagaprasad Sathyanarayana
In a quest to find out why good programmers are wary of the goto 
statement, I came across this interesting article by Edsger W. Dijkstra:


http://www.u.arizona.edu/~rubinson/copyright_violations/Go_To_Considered_Harmful.html

Cheers
Naga


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] ctime weirdness

2015-01-14 Thread Anand Avati
I don't think the problem is with the handling of SETATTR in either NetBSD
or Linux. I am guessing NetBSD FUSE is _using_ SETATTR to update atime upon
open? Linux FUSE just leaves it to the backend filesystem to update atime.
Whenever there is a SETATTR fop, ctime is _always_ bumped.
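
To make the distinction concrete, here is a minimal userspace sketch (the
FUSE mount point and file path are hypothetical, not from this thread)
contrasting the two paths: an explicit atime update travels as SETATTR and
bumps ctime, while a plain read leaves atime to the backend filesystem:

/* Minimal sketch; the FUSE mount point and file are hypothetical. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

static void show_times(const char *path, const char *label)
{
    struct stat st;
    if (stat(path, &st) == 0)
        printf("%-15s atime=%ld ctime=%ld\n", label,
               (long)st.st_atime, (long)st.st_ctime);
}

int main(void)
{
    const char *path = "/mnt/fuse/a";   /* hypothetical FUSE mount */
    /* Explicit atime update: set atime to now, leave mtime untouched.
     * On FUSE this is delivered as a SETATTR fop, so ctime is bumped. */
    struct timespec ts[2] = { { 0, UTIME_NOW }, { 0, UTIME_OMIT } };
    char buf[16];
    int fd;

    show_times(path, "before:");
    utimensat(AT_FDCWD, path, ts, 0);
    show_times(path, "after SETATTR:");

    /* Plain read: the kernel should not issue SETATTR here; the backend
     * filesystem updates atime itself and ctime stays unchanged. */
    fd = open(path, O_RDONLY);
    if (fd >= 0) {
        (void)read(fd, buf, sizeof(buf));
        close(fd);
    }
    show_times(path, "after read:");
    return 0;
}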

Thanks

On Mon Jan 12 2015 at 5:03:51 PM Emmanuel Dreyfus m...@netbsd.org wrote:

 Hello

 Here is a NetBSD behavior that looks pathological (it happens on a FUSE
 mount but not on a native mount):

 # touch a
 # stat -x a
   File: "a"
   Size: 0          FileType: Regular File
   Mode: (0644/-rw-r--r--)   Uid: (0/root)   Gid: (0/wheel)
 Device: 203,7   Inode: 13726586830943880794   Links: 1
 Access: Tue Jan 13 01:57:25 2015
 Modify: Tue Jan 13 01:57:25 2015
 Change: Tue Jan 13 01:57:25 2015
 # cat a > /dev/null
 # stat -x a
   File: "a"
   Size: 0          FileType: Regular File
   Mode: (0644/-rw-r--r--)   Uid: (0/root)   Gid: (0/wheel)
 Device: 203,7   Inode: 13726586830943880794   Links: 1
 Access: Tue Jan 13 01:57:31 2015
 Modify: Tue Jan 13 01:57:25 2015
 Change: Tue Jan 13 01:57:31 2015

 The NetBSD FUSE implementation does not send ctime with SETATTR. Looking at
 the glusterfs FUSE xlator, I see the setattr code does not handle ctime
 either.

 How does that happen? What is NetBSD's SETATTR doing wrong?

 --
 Emmanuel Dreyfus
 http://hcpnet.free.fr/pubz
 m...@netbsd.org

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Quota command bug in 3.6.1?

2015-01-14 Thread Raghuram BK
When I issue the quota list command with the xml option, it seems to return
non-XML data:

[root@fractalio-66f2 fractalio]# gluster --version
glusterfs 3.6.1 built on Jan 13 2015 16:46:51
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. http://www.gluster.com
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General
Public License.

[root@primary templates]# gluster volume quota vol1 list --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volQuota/>
</cliOutput>
                  Path                   Hard-limit Soft-limit   Used  Available  Soft-limit exceeded? Hard-limit exceeded?
---------------------------------------------------------------------------------------------------------------------------
/                                          10.0GB       80%      0Bytes  10.0GB        No                   No
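
To see why this matters to consumers, here is a minimal sketch (an assumed
standalone checker, not part of the gluster CLI) that detects the stray
plain-text table after the closing </cliOutput> tag, which makes the stream
invalid for any strict XML parser:

/* A sketch of an assumed standalone checker (not part of the gluster
 * CLI): reads the command output and reports any bytes that follow the
 * closing </cliOutput> tag. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *p = popen("gluster volume quota vol1 list --xml", "r");
    if (!p) {
        perror("popen");
        return 1;
    }

    static char out[65536];
    size_t n = fread(out, 1, sizeof(out) - 1, p);
    out[n] = '\0';
    pclose(p);

    const char *end = strstr(out, "</cliOutput>");
    if (!end) {
        fprintf(stderr, "no cliOutput document found\n");
        return 1;
    }
    end += strlen("</cliOutput>");

    /* Skip trailing whitespace; anything left over is the stray
     * plain-text table that corrupts the XML stream. */
    while (*end == '\n' || *end == '\r' || *end == ' ' || *end == '\t')
        end++;
    if (*end != '\0') {
        fprintf(stderr, "trailing non-XML data after </cliOutput>:\n%s\n",
                end);
        return 1;
    }
    puts("output is clean XML");
    return 0;
}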

-- 

*Fractalio Data, India*

Mobile: +91 96635 92022

Email: r...@fractalio.com
Web: www.fractalio.com
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Proactive backports to release branches

2015-01-14 Thread Vijay Bellur

On 01/14/2015 04:34 PM, Vijay Bellur wrote:

Hi All,

We have normally followed a reactive process for backporting patches
to release branches. The backport guidelines page [1] describes it in
more detail. Given the rate at which our master branch moves, I think it
is becoming hard for users to identify which patch(es) can potentially
fix the problems they face in their deployments. I have also heard a
similar problem reported by release maintainers, who are not always able
to cherry-pick patches from master to their respective releases because
the backports can be non-trivial in nature. Overall this can lead to
minor releases that lack the right content from a stability & usability
point of view.

I have been thinking about this problem and here are some solutions that
crossed my mind:

1. Developers become more pro-active in backporting patches to release
branches.

2. Release maintainers open a patch acceptance window from component
maintainers for every minor release. Component owners can nominate
patches for inclusion in a minor release and work with the respective
developers to have those patches backported.

3. Component maintainers notify release maintainers about important
patches when they merge them on master. Release maintainers can then work
with developers & component maintainers to have the backports in a minor
release.

4. We nominate serious bugs as release blockers during our weekly bug
triage and ensure that these bugs get addressed for a minor release.

We might end up needing a combination of these and other ideas to make
the minor releases contain the right content for our users. Your
thoughts and ideas on addressing this problem would be very welcome.

Once there is consensus, I will be happy to document this process on
gluster.org.

Thanks,
Vijay


and adding the missing link:

[1] 
http://www.gluster.org/community/documentation/index.php/Backport_Guidelines


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Proactive backports to release branches

2015-01-14 Thread Vijay Bellur

Hi All,

We have normally followed a reactive process for backporting patches 
to release branches. The backport guidelines page [1] describes it in 
more detail. Given the rate at which our master branch moves, I think it 
is becoming hard for users to identify which patch(es) can potentially 
fix the problems they face in their deployments. I have also heard a 
similar problem reported by release maintainers, who are not always able 
to cherry-pick patches from master to their respective releases because 
the backports can be non-trivial in nature. Overall this can lead to 
minor releases that lack the right content from a stability & usability 
point of view.


I have been thinking about this problem and here are some solutions that 
crossed my mind:


1. Developers become more pro-active in backporting patches to release 
branches.


2. Release maintainers open a patch acceptance window from component 
maintainers for every minor release. Component owners can nominate 
patches for inclusion in a minor release and work with the respective 
developers to have those patches backported.


3. Component maintainers notify release maintainers about important 
patches when they merge them on master. Release maintainers can then work 
with developers & component maintainers to have the backports in a minor 
release.


4. We nominate serious bugs as release blockers during our weekly bug 
triage and ensure that these bugs get addressed for a minor release.


We might end up needing a combination of these and other ideas to make 
the minor releases contain the right content for our users. Your 
thoughts and ideas on addressing this problem would be very welcome.


Once there is consensus, I will be happy to document this process on 
gluster.org.


Thanks,
Vijay
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Suggestion needed to make use of iobuf_pool as rdma buffer.

2015-01-14 Thread Anand Avati
On Tue Jan 13 2015 at 11:57:53 PM Mohammed Rafi K C rkavu...@redhat.com
wrote:


 On 01/14/2015 12:11 AM, Anand Avati wrote:

 3) Why not have a separate iobuf pool for RDMA?


 Since every fop uses the default iobuf_pool, if we go with another
 iobuf_pool dedicated to rdma, we would need to copy buffers from the
 default pool to the rdma pool, unless we intelligently allocate the
 buffers based on the transport we are going to use. That is an extra
 level of copying in the I/O path.


Not sure what you mean by that. Not every fop uses the default iobuf_pool;
only readv() and writev() do. If you really want to save on memory
registration cost, your first target should be the header buffers (which are
used in every fop, and are currently valloc()'d and ibv_reg_mr()'d per call).
Making headers use an iobuf pool where every arena is registered at arena
creation and deregistered at destruction will get you the highest overhead
savings.
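
As a rough sketch of that suggestion, assuming hypothetical arena fields and
hook names (only the libibverbs calls are real), registration could be
hoisted out of the I/O path like this:

/* A hedged sketch: register an arena's whole memory block with the RDMA
 * device once, when the arena is created, and deregister it when the
 * arena is destroyed. The struct fields and hook names are assumptions
 * for illustration, not the actual glusterfs structures; only the
 * libibverbs calls are real. */
#include <stddef.h>
#include <infiniband/verbs.h>

struct arena_reg {
    void          *mem_base;   /* assumed: start of the arena's memory */
    size_t         arena_size; /* assumed: total size of the arena */
    struct ibv_mr *mr;         /* memory region handle from the device */
};

/* Called once per arena at creation time, so the per-call
 * ibv_reg_mr() cost disappears from the I/O path. */
static int arena_register(struct ibv_pd *pd, struct arena_reg *a)
{
    a->mr = ibv_reg_mr(pd, a->mem_base, a->arena_size,
                       IBV_ACCESS_LOCAL_WRITE |
                       IBV_ACCESS_REMOTE_READ |
                       IBV_ACCESS_REMOTE_WRITE);
    return a->mr ? 0 : -1;
}

/* Called once per arena at destruction time. */
static void arena_unregister(struct arena_reg *a)
{
    if (a->mr) {
        ibv_dereg_mr(a->mr);
        a->mr = NULL;
    }
}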

Coming to file data iobufs: today iobuf pools are used in a mixed way,
i.e., they hold both data being actively transferred/under I/O, and data
which is being held long term (cached by io-cache). io-cache just does an
iobuf_ref() and holds on to the data. This avoids memory copies in the
io-cache layer. However, that may be something we want to reconsider:
io-cache could use its own iobuf pool into which data is copied from the
transfer iobuf (which is pre-registered with RDMA in bulk, etc.).
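
A minimal sketch of that reconsidered behavior, with illustrative stand-ins
for the iobuf API rather than the actual glusterfs structures:

/* Illustrative stand-ins for the iobuf API; the real glusterfs
 * structures live in libglusterfs and carry pool/arena bookkeeping. */
#include <stdlib.h>
#include <string.h>

struct iobuf {
    void  *ptr;
    size_t size;
};

static struct iobuf *iobuf_get_stub(size_t size)  /* stub allocator */
{
    struct iobuf *iob = calloc(1, sizeof(*iob));
    if (iob && !(iob->ptr = malloc(size))) {
        free(iob);
        return NULL;
    }
    if (iob)
        iob->size = size;
    return iob;
}

static void iobuf_unref_stub(struct iobuf *iob)   /* stub release */
{
    if (iob) {
        free(iob->ptr);
        free(iob);
    }
}

/* io-cache copies the payload into an iobuf from its own pool and
 * releases the transfer iobuf immediately, so the RDMA-registered
 * transfer buffer returns to its pre-registered pool instead of
 * being pinned for the lifetime of the cache entry. */
static struct iobuf *
ioc_cache_data(struct iobuf *transfer, size_t size)
{
    struct iobuf *cached = iobuf_get_stub(size);  /* one extra memcpy... */
    if (!cached)
        return NULL;
    memcpy(cached->ptr, transfer->ptr, size);
    iobuf_unref_stub(transfer);       /* ...but frees the hot buffer now */
    return cached;
}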

Thanks






 On Tue Jan 13 2015 at 6:30:09 AM Mohammed Rafi K C rkavu...@redhat.com
 wrote:

 Hi All,

 When using the RDMA protocol, we need to register the buffer which is going
 to be sent through rdma with the rdma device. This is a costly
 operation, and a performance killer if it happens in the I/O path. So our
 current plan is to register the pre-allocated iobuf_arenas from iobuf_pool
 with rdma when rdma is initialized. The problem comes when all the
 iobufs are exhausted and we need to dynamically allocate new arenas
 from the libglusterfs module. Since they are created in libglusterfs, we
 can't make a call to rdma from libglusterfs, so we will be forced to
 register each of the iobufs from the newly created arenas with rdma in the
 I/O path. If io-cache is turned on in the client stack, then all the
 pre-registered arenas will be used by io-cache as cache buffers, so we have
 to do the registration in rdma for each I/O call for every iobuf;
 eventually we cannot make use of the pre-registered arenas.

 To address the issue, we have two approaches in mind,

  1) Register each dynamically created iobuf buffer by bringing the
 transport layer together with libglusterfs.

  2) Create a separate buffer for caching and offload the data from the
 read response to the cache buffer in the background.

 If we could make use of pre-registered memory for every rdma call, then we
 would see approximately a 20% improvement for writes and a 25% improvement
 for reads.

 Please share your thoughts on addressing the issue.

 Thanks & Regards
 Rafi KC



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel