Re: [Gluster-devel] relative ordering of writes to same file from two different fds

2016-09-21 Thread Ric Wheeler

On 09/21/2016 08:58 PM, Jeff Darcy wrote:

However, my understanding is that filesystems need not maintain the relative
order of writes (as received from vfs/kernel) on two different fds. Also,
if we have to maintain the order it might come with increased latency. The
increased latency can be because of having "newer" writes wait on "older"
ones. This wait can fill up the write-behind buffer and can eventually result in
a full write-behind cache, and hence an inability to "write-back" newer writes.

IEEE 1003.1, 2013 edition
http://pubs.opengroup.org/onlinepubs/9699919799/functions/write.html


After a write() to a regular file has successfully returned:

Any successful read() from each byte position in the file that was
modified by that write shall return the data specified by the write()
for that position until such byte positions are again modified.

Any subsequent successful write() to the same byte position in the
file shall overwrite that file data.

Note that the reference is to a *file*, not to a file *descriptor*.
It's an application of the general POSIX assumption that time is
simple, locking is cheap (if it's even necessary), and therefore
time-based requirements like linearizability - what this is - are
easy to satisfy.  I know that's not very realistic nowadays, but
it's pretty clear: according to the standard as it's still written,
P2's write *is* required to overwrite P1's.  Same vs. different fd
or process/thread doesn't even come into play.

Just for fun, I'll point out that the standard snippet above
doesn't say anything about *non overlapping* writes.  Does POSIX
allow the following?

write A
write B
read B, get new value
read A, get *old* value

This is a non-linearizable result, which would surely violate
some people's (notably POSIX authors') expectations, but good
luck finding anything in that standard which actually precludes
it.



I will reply to both comments here.

First, I think that all file systems will perform this way since this is really 
a function of how the page cache works and O_DIRECT.


More broadly, this is not a promise or a hard and fast thing - the traditional way 
for applications that do concurrent writes is to make sure that they use either 
whole-file or byte-range locking when one or more threads/processes are doing IO 
to the same file concurrently.


I don't understand the Jeff snippet above - if they are non-overlapping writes 
to different offsets, this would never happen.


If the writes are to the same offset and happen at different times, it would 
not happen either.


If they are to the same offset and at the same time, then you can have an undefined 
result where you might get fragments of A and fragments of B (and you might 
be able to see some odd things if the write spans pages/blocks).


This last case is where the normal best practice of using locking comes in.
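
As a minimal sketch of that best practice (purely illustrative - a hypothetical
helper using POSIX advisory byte-range locks, not code from any Gluster component):

    /* Take an exclusive advisory lock over just the byte range being
     * written, so concurrent writers to the same range are serialized.
     * Hypothetical helper for illustration only.
     */
    #include <sys/types.h>
    #include <fcntl.h>
    #include <unistd.h>

    static ssize_t write_range_locked(int fd, const void *buf, size_t len,
                                      off_t off)
    {
        struct flock fl = {
            .l_type   = F_WRLCK,     /* exclusive write lock           */
            .l_whence = SEEK_SET,
            .l_start  = off,
            .l_len    = (off_t)len,  /* lock only the range we touch   */
        };
        ssize_t ret;

        if (fcntl(fd, F_SETLKW, &fl) == -1)  /* block until we own the range */
            return -1;

        ret = pwrite(fd, buf, len, off);

        fl.l_type = F_UNLCK;                 /* release the range            */
        fcntl(fd, F_SETLK, &fl);

        return ret;
    }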

Ric


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] relative ordering of writes to same file from two different fds

2016-09-21 Thread Raghavendra Gowdappa


- Original Message -
> From: "Ric Wheeler" 
> To: "Raghavendra Gowdappa" , "Gluster Devel" 
> 
> Cc: "ryan ding" 
> Sent: Wednesday, September 21, 2016 6:32:29 PM
> Subject: Re: [Gluster-devel] relative ordering of writes to same file from 
> two different fds
> 
> On 09/21/2016 08:06 AM, Raghavendra Gowdappa wrote:
> > Hi all,
> >
> > This mail is to figure out the behavior of write to same file from two
> > different fds. As Ryan quotes in one of comments,
> >
> > 
> >
> > I think it’s not safe. in this case:
> > 1. P1 write to F1 use FD1
> > 2. after P1 write finish, P2 write to the same place use FD2
> > since they are not conflict with each other now, the order the 2 writes
> > send to underlying fs is not determined. so the final data may be P1’s or
> > P2’s.
> > this semantics is not the same with linux buffer io. linux buffer io will
> > make the second write cover the first one, this is to say the final data
> > is P2’s.
> > you can see it from linux NFS (as we are all network filesystem)
> > fs/nfs/file.c:nfs_write_begin(), nfs will flush ‘incompatible’ request
> > first before another write begin. the way 2 request is determine to be
> > ‘incompatible’ is that they are from 2 different open fds.
> > I think write-behind behaviour should keep the same with linux page cache.
> >
> > 
> 
> I think that how this actually would work is that both would be written to
> the
> same page in the page cache (if not using buffered IO), so as long as they do
> not happen at the same time, you would get the second P2 copy of data each
> time.
> 
> Same story for using O_DIRECT - that write bypasses the page cache and will
> update the data directly.
> 
> What might happen in practice though is that your applications might use
> higher
> level IO routines and they might buffer data internally. If that happens,
> there
> is no ordering that is predictable.

Thanks Ric.

1. Are filesystems required to maintain that order?
2. Even if there is no such requirement, would there be any benefit in 
filesystems enforcing that order (probably at the cost of increased latency)?

regards,
Raghavendra

> 
> Regards,
> 
> Ric
> 
> >
> > However, my understanding is that filesystems need not maintain the
> > relative order of writes (as it received from vfs/kernel) on two different
> > fds. Also, if we have to maintain the order it might come with increased
> > latency. The increased latency can be because of having "newer" writes to
> > wait on "older" ones. This wait can fill up write-behind buffer and can
> > eventually result in a full write-behind cache and hence not able to
> > "write-back" newer writes.
> >
> > * What does POSIX say about it?
> > * How do other filesystems behave in this scenario?
> >
> >
> > Also, the current write-behind implementation has the concept of
> > "generation numbers". To quote from comment:
> >
> > 
> >
> >  uint64_t gen;    /* Liability generation number. Represents
> >                      the current 'state' of liability. Every
> >                      new addition to the liability list bumps
> >                      the generation number.
> >
> >                      a newly arrived request is only required
> >                      to perform causal checks against the entries
> >                      in the liability list which were present
> >                      at the time of its addition. the generation
> >                      number at the time of its addition is stored
> >                      in the request and used during checks.
> >
> >                      the liability list can grow while the request
> >                      waits in the todo list waiting for its
> >                      dependent operations to complete. however
> >                      it is not of the request's concern to depend
> >                      itself on those new entries which arrived
> >                      after it arrived (i.e, those that have a
> >                      liability generation higher than itself)
> >                   */
> > 
> >
> > So, if a single thread is doing writes on two different fds, generation
> > numbers are 

Re: [Gluster-devel] relative ordering of writes to same file from two different fds

2016-09-21 Thread Jeff Darcy
> However, my understanding is that filesystems need not maintain the relative
> order of writes (as it received from vfs/kernel) on two different fds. Also,
> if we have to maintain the order it might come with increased latency. The
> increased latency can be because of having "newer" writes to wait on "older"
> ones. This wait can fill up write-behind buffer and can eventually result in
> a full write-behind cache and hence not able to "write-back" newer writes.

IEEE 1003.1, 2013 edition
http://pubs.opengroup.org/onlinepubs/9699919799/functions/write.html

> After a write() to a regular file has successfully returned:
> 
> Any successful read() from each byte position in the file that was
> modified by that write shall return the data specified by the write()
> for that position until such byte positions are again modified.
>
> Any subsequent successful write() to the same byte position in the
> file shall overwrite that file data.

Note that the reference is to a *file*, not to a file *descriptor*.
It's an application of the general POSIX assumption that time is
simple, locking is cheap (if it's even necessary), and therefore
time-based requirements like linearizability - what this is - are
easy to satisfy.  I know that's not very realistic nowadays, but
it's pretty clear: according to the standard as it's still written,
P2's write *is* required to overwrite P1's.  Same vs. different fd
or process/thread doesn't even come into play.

Just for fun, I'll point out that the standard snippet above
doesn't say anything about *non overlapping* writes.  Does POSIX
allow the following?

   write A
   write B
   read B, get new value
   read A, get *old* value

This is a non-linearizable result, which would surely violate
some people's (notably POSIX authors') expectations, but good
luck finding anything in that standard which actually precludes
it.
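
(As an aside for anyone who wants to poke at this: a minimal sketch of the
sequence above with two descriptors on one file - hypothetical file name and
offsets, not a test from the tree.)

    /* write A, write B, read B, read A - on a strictly linearizable file
     * you should never observe the new B together with the old A.
     * Hypothetical example for illustration only.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int  fd1 = open("testfile", O_RDWR | O_CREAT, 0644);
        int  fd2 = open("testfile", O_RDWR);
        char a[6] = "", b[6] = "";

        if (fd1 < 0 || fd2 < 0)
            return 1;

        pwrite(fd1, "A-new", 5, 0);      /* write A at offset 0    */
        pwrite(fd2, "B-new", 5, 4096);   /* write B at offset 4096 */

        pread(fd2, b, 5, 4096);          /* read B ...             */
        pread(fd1, a, 5, 0);             /* ... then read A        */

        /* Seeing "B-new" in b but stale data in a would be the
         * non-linearizable outcome described above. */
        printf("A=%.5s B=%.5s\n", a, b);

        close(fd1);
        close(fd2);
        return 0;
    }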

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] relative ordering of writes to same file from two different fds

2016-09-21 Thread Raghavendra Talur
On Wed, Sep 21, 2016 at 6:32 PM, Ric Wheeler  wrote:

> On 09/21/2016 08:06 AM, Raghavendra Gowdappa wrote:
>
>> Hi all,
>>
>> This mail is to figure out the behavior of write to same file from two
>> different fds. As Ryan quotes in one of comments,
>>
>> 
>>
>> I think it’s not safe. in this case:
>> 1. P1 write to F1 use FD1
>> 2. after P1 write finish, P2 write to the same place use FD2
>> since they are not conflict with each other now, the order the 2 writes
>> send to underlying fs is not determined. so the final data may be P1’s or
>> P2’s.
>> this semantics is not the same with linux buffer io. linux buffer io will
>> make the second write cover the first one, this is to say the final data is
>> P2’s.
>> you can see it from linux NFS (as we are all network filesystem)
>> fs/nfs/file.c:nfs_write_begin(), nfs will flush ‘incompatible’ request
>> first before another write begin. the way 2 request is determine to be
>> ‘incompatible’ is that they are from 2 different open fds.
>> I think write-behind behaviour should keep the same with linux page cache.
>>
>> 
>>
>
> I think that how this actually would work is that both would be written to
> the same page in the page cache (if not using buffered IO), so as long as
> they do not happen at the same time, you would get the second P2 copy of
> data each time.
>

I apologize if my understanding is wrong but IMO this is exactly what we do
in write-behind too. The cache is inode based and ensures that writes are
ordered irrespective of the FD used for the write.


Here is the commit message which brought the change
-
write-behind: implement causal ordering and other cleanup

Rules of causal ordering implemented:

 - If request A arrives after the acknowledgement (to the app,
   i.e, STACK_UNWIND) of another request B, then request B is
   said to have 'caused' request A.

 - (corollary) Two requests, which at any point of time, are
   unacknowledged simultaneously in the system can never 'cause'
   each other (wb_inode->gen is based on this)

 - If request A is caused by request B, AND request A's region
   has an overlap with request B's region, then the fulfillment
   of request A is guaranteed to happen after the fulfillment of B.

 - FD of origin is not considered for the determination of causal
   ordering.

 - Append operation's region is considered the whole file.

 Other cleanup:

 - wb_file_t not required any more.

 - wb_local_t not required any more.

 - O_RDONLY fd's operations now go through the queue to make sure
   writes in the requested region get fulfilled be
---
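
(Paraphrasing the rules above in code, purely as an illustration - the struct
and function names below are hypothetical, not the actual write-behind types:)

    /* Illustrative restatement of the causal-ordering rules quoted above.
     * Not the real write-behind structures; names are hypothetical.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <sys/types.h>

    struct io_req {
        uint64_t arrival_seq;   /* position in arrival order                */
        uint64_t ack_seq;       /* 0 until the reply is unwound to the app  */
        off_t    offset;
        size_t   length;
    };

    /* Rule 1: B 'caused' A only if B was already acknowledged when A arrived.
     * Corollary: two requests unacknowledged at the same time never cause
     * each other. */
    static bool caused_by(const struct io_req *a, const struct io_req *b)
    {
        return b->ack_seq != 0 && b->ack_seq <= a->arrival_seq;
    }

    /* Rule 3 only applies when the regions overlap; per rule 4, the fd of
     * origin is never consulted. */
    static bool regions_overlap(const struct io_req *a, const struct io_req *b)
    {
        return a->offset < b->offset + (off_t)b->length &&
               b->offset < a->offset + (off_t)a->length;
    }

    static bool must_order_after(const struct io_req *a, const struct io_req *b)
    {
        return caused_by(a, b) && regions_overlap(a, b);
    }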

Thanks,
Raghavendra Talur


>
> Same story for using O_DIRECT - that write bypasses the page cache and
> will update the data directly.
>
> What might happen in practice though is that your applications might use
> higher level IO routines and they might buffer data internally. If that
> happens, there is no ordering that is predictable.
>
> Regards,
>
> Ric
>
>
>
>> However, my understanding is that filesystems need not maintain the
>> relative order of writes (as it received from vfs/kernel) on two different
>> fds. Also, if we have to maintain the order it might come with increased
>> latency. The increased latency can be because of having "newer" writes to
>> wait on "older" ones. This wait can fill up write-behind buffer and can
>> eventually result in a full write-behind cache and hence not able to
>> "write-back" newer writes.
>>
>> * What does POSIX say about it?
>> * How do other filesystems behave in this scenario?
>>
>>
>> Also, the current write-behind implementation has the concept of
>> "generation numbers". To quote from comment:
>>
>> 
>>
>>  uint64_t gen;    /* Liability generation number. Represents
>>                      the current 'state' of liability. Every
>>                      new addition to the liability list bumps
>>                      the generation number.
>>
>>                      a newly arrived request is only required
>>                      to perform causal checks against the entries
>>                      in the liability list which were present
>>                      at the time of its addition. the generation
>>                      number at the time of its addition is stored
>>                      in the request and used during checks.
>>
>>                      the liability list can grow while the request
>>                      waits in the todo list waiting for its
>>                      dependent operations to complete. however
>>                      it is not of the request's concern to depend
>>                      itself on those new entries which arrived
>>                      after it arrived (i.e, those that have a
>>                      liability generation higher than itself)
>>                   */

Re: [Gluster-devel] Introducing Tendrl

2016-09-21 Thread Dan Mick
On 09/20/2016 11:38 AM, Ric Wheeler wrote:

> My assumption is that they might want to use these modules (from tendryl
> down to the ceph/gluster bits) to add support for ceph and gluster.

Is it Tendrl or Tendryl?  (or the actual word, which would be 'tendril'
and thus unambiguous and memorable)?
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-infra] centos-5 build failures on mainline

2016-09-21 Thread Kaushal M
On Wed, Sep 21, 2016 at 1:36 PM, Niels de Vos  wrote:
> On Wed, Sep 21, 2016 at 11:53:43AM +0530, Atin Mukherjee wrote:
>> 
>>
>> As of now we don't check for build sanity on RHEL5/centos-5 distros.
>> I believe Gluster still has legacy support for these distros. Here we could
>> either add a glusterfs-devrpms script for el5 for every patch submission or
>> at worst have a nightly build to check the sanity of el5 build on mainline
>> branch to ensure we don't break further?
>
> Currently there is no way to build only the Gluster client part. This is
> a limitation in the autoconf/automake scripts. The server-side requires
> fancy things that are not available (in the required versions) for
> different parts (mainly GlusterD?).
>
> Once we have a "./configure --without-server" or similar, there is no use
> in trying to build for RHEL/CentOS-5.
>
> Niels

This was discussed in today's community meeting.

Kaleb reminded everyone that the community decided to stop supporting
and building packages for EL5 from glusterfs-3.8 [1].

With this in mind, the opinion was that we will be willing to accept
patches that help build GlusterFS on EL5 (only patches like what Niels
has described above).
But we will not be building EL5 packages or be running tests on EL5.

For more information on the discussion, refer to the meeting logs. [2]

~kaushal

[1] https://www.gluster.org/pipermail/gluster-devel/2016-April/048955.html
[2] 
https://meetbot.fedoraproject.org/gluster-meeting/2016-09-21/weekly_community_meeting_21-sep-2016.2016-09-21-11.59.log.html


>
> ___
> Gluster-infra mailing list
> gluster-in...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Weekly Community Meeting - 21-Sep-2016

2016-09-21 Thread Kaushal M
This week's meeting started slow but snowballed into quite an active meeting.
Thank you to all who attended the meeting!

The meeting logs for the meeting are available at the links below, and
the minutes have been pasted at the end.

- Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-09-21/weekly_community_meeting_21-sep-2016.2016-09-21-11.59.html
- Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-09-21/weekly_community_meeting_21-sep-2016.2016-09-21-11.59.txt
- Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-09-21/weekly_community_meeting_21-sep-2016.2016-09-21-11.59.log.html

Next week's meeting will be hosted by Samikshan. See you all next week,
same place, same time.

Cheers,
Kaushal


Meeting summary
---
* Roll Call  (kshlm, 12:00:06)

* Next weeks host  (kshlm, 12:08:57)
  * samikshan is next weeks host  (kshlm, 12:10:19)

* Project Infrastructure  (kshlm, 12:10:26)

* GlusterFS-4.0  (kshlm, 12:15:50)
  * LINK:
https://www.gluster.org/pipermail/gluster-devel/2016-September/050928.html
(kshlm, 12:18:55)

* GlusterFS-3.9  (kshlm, 12:21:30)

* GlusterFS-3.8  (kshlm, 12:27:05)

* GlusterFS-3.7  (kshlm, 12:29:45)

* NFS Ganesha  (kshlm, 12:34:27)

* Samba  (kshlm, 12:37:34)

* Last weeks AIs  (kshlm, 12:39:14)

* rastar_afk/ndevos/jdarcy to improve cleanup to control the processes
  that test starts.  (kshlm, 12:39:26)
  * ACTION: rastar_afk/ndevos/jdarcy to improve cleanup to control the
processes that test starts.  (kshlm, 12:40:27)

* RC tagging to be done by this week for 3.9 by aravindavk.  (kshlm,
  12:41:47)

* RC tagging to be done by this week for 3.9 by aravindavk/pranithk
  (kshlm, 12:42:19)
  * ACTION: RC tagging to be done by this week for 3.9 by
aravindavk/pranithk  (kshlm, 12:42:34)

* jdarcy will bug amye regarding a public announcement for Gluster
  Summit talks  (kshlm, 12:42:39)
  * LINK:
https://www.gluster.org/pipermail/gluster-devel/2016-September/050888.html
(kshlm, 12:43:27)

* Open floor  (kshlm, 12:43:42)

* RHEL5 build issues  (kshlm, 12:43:58)
  * LINK:
https://www.gluster.org/pipermail/gluster-infra/2016-September/002821.html
(kshlm, 13:01:52)

* Updates on documentation  (kshlm, 13:02:06)
  * LINK: https://rajeshjoseph.gitbooks.io/test-guide/content/
(rjoseph, 13:03:51)
  * LINK: https://github.com/rajeshjoseph/doctest   (rjoseph, 13:06:23)

* Announcements  (kshlm, 13:08:02)

Meeting ended at 13:08:26 UTC.




Action Items

* rastar_afk/ndevos/jdarcy to improve cleanup to control the processes
  that test starts.
* RC tagging to be done by this week for 3.9 by aravindavk/pranithk




Action Items, by person
---
* aravindavk
  * RC tagging to be done by this week for 3.9 by aravindavk/pranithk
* ndevos
  * rastar_afk/ndevos/jdarcy to improve cleanup to control the processes
that test starts.
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kshlm (156)
* nigelb (42)
* ndevos (27)
* kkeithley (22)
* misc (20)
* rjoseph (20)
* aravindavk (11)
* samikshan (4)
* zodbot (4)
* amye (4)
* ankitraj (1)
* Klas (1)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] relative ordering of writes to same file from two different fds

2016-09-21 Thread Ric Wheeler

On 09/21/2016 08:06 AM, Raghavendra Gowdappa wrote:

Hi all,

This mail is to figure out the behavior of writes to the same file from two 
different fds. As Ryan quotes in one of the comments,



I think it’s not safe. In this case:
1. P1 writes to F1 using FD1
2. after P1’s write finishes, P2 writes to the same place using FD2
Since they do not conflict with each other now, the order in which the 2 writes are 
sent to the underlying fs is not determined, so the final data may be P1’s or P2’s.
This semantics is not the same as Linux buffered IO. Linux buffered IO will make 
the second write cover the first one, which is to say the final data is P2’s.
You can see it in Linux NFS (as we are all network filesystems): 
fs/nfs/file.c:nfs_write_begin(), NFS will flush an ‘incompatible’ request first 
before another write begins. The way 2 requests are determined to be ‘incompatible’ 
is that they are from 2 different open fds.
I think write-behind behaviour should stay the same as the Linux page cache.




I think that how this actually would work is that both would be written to the 
same page in the page cache (if using buffered IO), so as long as they do 
not happen at the same time, you would get the second P2 copy of data each time.


Same story for using O_DIRECT - that write bypasses the page cache and will 
update the data directly.


What might happen in practice though is that your applications might use higher 
level IO routines and they might buffer data internally. If that happens, there 
is no ordering that is predictable.


Regards,

Ric



However, my understanding is that filesystems need not maintain the relative order 
of writes (as received from vfs/kernel) on two different fds. Also, if we have to 
maintain the order it might come with increased latency. The increased latency can 
be because of having "newer" writes wait on "older" ones. This wait can fill up the 
write-behind buffer and can eventually result in a full write-behind cache, and 
hence an inability to "write-back" newer writes.

* What does POSIX say about it?
* How do other filesystems behave in this scenario?


Also, the current write-behind implementation has the concept of "generation 
numbers". To quote from comment:



 uint64_t gen;    /* Liability generation number. Represents
                     the current 'state' of liability. Every
                     new addition to the liability list bumps
                     the generation number.

                     a newly arrived request is only required
                     to perform causal checks against the entries
                     in the liability list which were present
                     at the time of its addition. the generation
                     number at the time of its addition is stored
                     in the request and used during checks.

                     the liability list can grow while the request
                     waits in the todo list waiting for its
                     dependent operations to complete. however
                     it is not of the request's concern to depend
                     itself on those new entries which arrived
                     after it arrived (i.e, those that have a
                     liability generation higher than itself)
                  */
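
(To restate the comment in code form - a rough sketch only; the struct and
function names below are hypothetical, not the actual write-behind
implementation:)

    /* A new request snapshots the liability generation at arrival and later
     * checks only liability entries that are not newer than that snapshot.
     * Hypothetical names, for illustration only.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <sys/types.h>

    struct liability_entry {
        uint64_t gen;               /* generation when this entry was added */
        off_t    offset;
        size_t   length;
        struct liability_entry *next;
    };

    struct queued_req {
        uint64_t gen;               /* snapshot of the inode gen at arrival */
        off_t    offset;
        size_t   length;
    };

    static bool overlaps(off_t o1, size_t l1, off_t o2, size_t l2)
    {
        return o1 < o2 + (off_t)l2 && o2 < o1 + (off_t)l1;
    }

    /* Depend only on entries that existed when the request arrived
     * (entry->gen <= req->gen); newer entries are ignored. */
    static bool has_pending_dependency(const struct queued_req *req,
                                       const struct liability_entry *head)
    {
        const struct liability_entry *e;

        for (e = head; e != NULL; e = e->next) {
            if (e->gen <= req->gen &&
                overlaps(req->offset, req->length, e->offset, e->length))
                return true;
        }
        return false;
    }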


So, if a single thread is doing writes on two different fds, generation numbers 
are sufficient to enforce the relative ordering. If writes are from two 
different threads/processes, I think write-behind is not obligated to maintain 
their order. Comments?

[1] http://review.gluster.org/#/c/15380/

regards,
Raghavendra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-infra] centos-5 build failures on mainline

2016-09-21 Thread Niels de Vos
On Wed, Sep 21, 2016 at 11:53:43AM +0530, Atin Mukherjee wrote:
> 
> 
> As of now we don't check for build sanity on RHEL5/centos-5 distros.
> I believe Gluster still has legacy support for these distros. Here we could
> either add a glusterfs-devrpms script for el5 for every patch submission or
> at worst have a nightly build to check the sanity of el5 build on mainline
> branch to ensure we don't break further?

Currently there is no way to build only the Gluster client part. This is
a limitation in the autoconf/automake scripts. The server-side requires
fancy things that are not available (in the required versions) for
different parts (mainly GlusterD?).

Once we have a "./configure --without-server" or similar, there is no use
in trying to build for RHEL/CentOS-5.

Niels


signature.asc
Description: PGP signature
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] review request - Change the way client uuid is built

2016-09-21 Thread Niels de Vos
On Wed, Sep 21, 2016 at 01:47:34AM -0400, Raghavendra Gowdappa wrote:
> Hi all,
> 
> [1] might have implications across different components in the stack. Your 
> reviews are requested.
> 
> 
> 
> rpc : Change the way client uuid is built
> 
> Problem:
> Today the main users of client uuid are protocol layers, locks, and leases.
> Protocol layers require each client uuid to be unique, even across
> connects and disconnects. Locks and leases on the server side also use
> the same client uid, which changes across graph switches and across
> file migrations. This makes graph switches and file migration
> tedious for locks and leases.
> As of today, lock migration across a graph switch is client driven,
> i.e. when a graph switches, the client reassociates all the locks (which
> were associated with the old graph's client uid) with the new graph's
> client uid. This means a flood of fops to get and set locks for each fd.
> Also, file migration across bricks becomes even more difficult as the
> client uuid for the same client is different on the other brick.
> 
> The exact same set of issues exists for leases as well.
> 
> Hence the solution:
> Make the migration of locks and leases during graph switch and migration,
> server driven instead of client driven. This can be achieved by changing
> the format of client uuid.
> 
> Client uuid currently:
> %s(ctx uuid)-%s(protocol client name)-%d(graph id)%s(setvolume 
> count/reconnect count)
> 
> Proposed Client uuid:
> "CTX_ID:%s-GRAPH_ID:%d-PID:%d-HOST:%s-PC_NAME:%s-RECON_NO:%s"
> -  CTX_ID: This will be constant per client.
> -  GRAPH_ID, PID, HOST, PC_NAME(protocol client name), RECON_NO(setvolume 
> count)
> remains the same.
> 
> With this, the first part of the client uuid, CTX_ID+GRAPH_ID remains
> constant across file migration, thus the migration is made easier.
> 
> Locks and leases store only the first part CTX_ID+GRAPH_ID as their
> client identification. This means, when the new graph connects,
> the locks and leases xlator should walk through their database
> to update the client id, to have new GRAPH_ID. Thus the graph switch
> is made server driven and saves a lot of network traffic.
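
(To make the quoted layout concrete, a minimal sketch of how such an identifier
could be assembled - a hypothetical helper, not the code in the patch under
review:)

    /* Assemble the proposed client uuid layout,
     * "CTX_ID:%s-GRAPH_ID:%d-PID:%d-HOST:%s-PC_NAME:%s-RECON_NO:%s".
     * Hypothetical helper for illustration only.
     */
    #include <stdio.h>

    static int build_client_uid(char *buf, size_t len, const char *ctx_id,
                                int graph_id, int pid, const char *host,
                                const char *pc_name, const char *recon_no)
    {
        return snprintf(buf, len,
                        "CTX_ID:%s-GRAPH_ID:%d-PID:%d-HOST:%s-PC_NAME:%s-RECON_NO:%s",
                        ctx_id, graph_id, pid, host, pc_name, recon_no);
    }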

What is the plan to have the CTX_ID+GRAPH_ID shared over multiple gfapi
applications? This would be important for NFS-Ganesha failover where one
NFS-Ganesha process is stopped, and the NFS-Clients (by virtual-ip) move
to an other NFS-Ganesha server.

Will there be a way to set CTX_ID(+GRAPH_ID?) through libgfapi? That
would allow us to add a configuration option to NFS-Ganesha and have the
whole NFS-Ganesha cluster use the same locking/leases.

Thanks,
Niels


> 
> Change-Id: Ia81d57a9693207cd325d7b26aee4593fcbd6482c
> BUG: 1369028
> Signed-off-by: Poornima G 
> Signed-off-by: Susant Palai 
> 
> 
> 
> [1] http://review.gluster.org/#/c/13901/10/
> 
> regards,
> Raghavendra
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel


signature.asc
Description: PGP signature
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-infra] centos-5 build failures on mainline

2016-09-21 Thread Nigel Babu
On Wed, Sep 21, 2016 at 11:53:43AM +0530, Atin Mukherjee wrote:
> 
>
> As of now we don't check for build sanity on RHEL5/centos-5 distros.
> I believe Gluster still has legacy support for these distros. Here we could
> either add a glusterfs-devrpms script for el5 for every patch submission or
> at worst have a nightly build to check the sanity of el5 build on mainline
> branch to ensure we don't break further?
>
> We'll be sending email(s) on gluster-devel about the patches which are not
> compatible with centos-5/RHEL5 distribution and needs to be tweaked.
>
> ~Atin

Running an extra rpm job sounds okay to me. We'll have it ready by the time the
fixes for the current failures have gone in.

--
nigelb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Removing strfmt job on Centos CI

2016-09-21 Thread Nigel Babu
Hello,

I'm going to remove the strfmt job on Centos CI. We have it working as
a pre-commit job now. Running it on Centos CI seems redundant now.

--
nigelb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] centos-5 build failures on mainline

2016-09-21 Thread Atin Mukherjee


As of now we don't check for build sanity on RHEL5/centos-5 distros.
I believe Gluster still has legacy support for these distros. Here we could
either add a glusterfs-devrpms script for el5 for every patch submission or,
at worst, have a nightly build to check the sanity of the el5 build on the
mainline branch to ensure we don't break it further.

We'll be sending email(s) on gluster-devel about the patches which are not
compatible with the centos-5/RHEL5 distribution and need to be tweaked.

~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel