Re: [Gluster-devel] Release 3.11: Pending features and reviews (2 days to branching)

2017-04-25 Thread Amar Tumballi
On Wed, Apr 26, 2017 at 5:11 AM, Shyam  wrote:

> Hi,
>
> This mail should have been out 3-4 days earlier than now as branching is 2
> days away, but hopefully it is not too late.
>
> The current release scope can be seen at [1].
>
> If you do *not* recognize your nick in the following list then you can
> help us out by looking at pending reviews [2] and moving them along and
> optionally skip the rest of the mail.
>
> If your nick is in the list below, please read along to update status on
> the action called out against you.
>
> nick list: @amarts, @csabahenk, @ndevos, @pranith, @kaushal, @jiffin,
> @rabhat, @kaleb, @samikshan, @poornimag, @kotresh, @susant
>
> If any reviews for the features listed below are still open and not
> appearing in [2] drop me a mail, and I will star it, so that it appears in
> the list as needed.
>
> Status request of features targeted for 3.11:
>
> 1) Starting with features that slipped 3.10 and were marked for 3.11
>
> 1.1) In gfapi fix memory leak during graph switch #61
> @ndevos I know a series of fixes are up for review, will this be completed
> for this release, or would it be an ongoing effort across releases? If the
> latter, we possibly continue tracking this for the next release as well.
>
> 1.2) SELinux support for Gluster Volumes #55
> Latest reviews indicate this may be ready by branching, @jiffin or @ndevos
> will this make it by branching date?
>
> 1.3) Introduce force option for Snapshot Restore #62
> There seems to be no owner for this now, @rabhat any updates or anything
> more than what we know about this at this time?
>
> 1.4) switch to storhaug for HA for ganesha and samba #59
> @kaleb, are there any open reviews for this? Is it already done?
>
> 2) New in 3.11 and tracked in the release scope [1]
>
> 2.1) get-state CLI needs to provide client and brick capacity related
> information as well #158
> Code is in. Documentation changes are pending (heads up, @samikshan). No
> updates needed at present.
>
> 2.2) Serve negative lookups from cache #82
> Code is in. Documentation changes are pending, which can come in later
> (heads up, @poornimag)
>
> 2.3) New xlator to help developers detecting resource leaks #176
> Code and developer documentation is in, issue is auto-closed post merge of
> the commit. (thanks @ndevos)
>
> 2.4) Make the feature metadata-caching/small file performance production
> ready #167
> Just a release-note update, hence issue will be updated post branching
> when the release notes are updated (heads up, @poornimag)
>
> 2.5) Make the feature "Parallel Readdir" production ready in 3.11 #166
> Just a release-note update, hence issue will be updated post branching
> when the release notes are updated (heads up, @poornimag)
>
> 2.6) bitrot: [RFE] Enable object versioning only if bitrot is enabled. #188
> Code is merged, needs release notes updates once branching is done,
> possibly no documentation changes from what I can see, hence will get
> closed once release notes are updated (heads up, @kotresh).
>
> 3) New in 3.11 and not tracked in release scope [1] as there are no
> visible mail requests to consider these for 3.11 in the gluster devel lists
>
> 3.1) Use standard refcounting functions #156
> @ndevos any updates? Should this be marked in the 3.11 scope?
>

I think it is a continued effort, but it would be good to mention this in the 3.11
release notes IMO.


>
> 3.2) Rebalance performance improvement #155
> @susant any updates? Should this be marked in the 3.11 scope?
>
> 3.3) rpc-clnt reconnect timer #152
> @amarts any updates? Should this be marked in the 3.11 scope?
>
>
I don't have any particular thoughts on this yet. Let's keep it as a
3.12 effort IMO.


> 3.4) [RFE] libfuse rebase to latest? #153
> @amarts, @csabahenk any updates? Should this be marked in the 3.11 scope?
>

It would be great to consider this in the 3.11 scope. It is not a release blocker
though. A rebase will probably be submitted next week.

-Amar


>
> 4) Pending issues still to be opened at github (and possibly making it into
> the release)
>
> 4.1) IPv6 support enhancements from FB
> Heads up, @kaushal. Mail discussions are already done; if we make it by
> the cut, a github issue would be needed.
>
> 4.2) Halo replication enhancements from FB
> Heads up, @pranith. As this may make it a week post branching and we will
> take in the backport, a github issue would be needed to track this.
>
> Thanks,
> Shyam
>
> [1] Release scope: https://github.com/gluster/glusterfs/projects/1
>
> [2] Reviews needing attention: https://review.gluster.org/#/q/status:open+starredby:srangana%2540redhat.com
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Statedumps as non-root can not be saved (under /var/run/gluster/...)

2017-04-25 Thread Satheesaran Sundaramoorthi
On Tue, Apr 25, 2017 at 11:23 PM, Niels de Vos  wrote:

> Hi,
>
> Recently a new ability to trigger statedumps through the Gluster-CLI [0]
> has been added. This makes it possible to get statedump from
> applications that use gfapi. By default, statedumps are saved under
> /var/run/gluster/... and this directory is only writable by root.
> Applications that use gfapi do not require root permissions (like QEMU),
> and therefore fail to write the statedump :-/
>
> One approach would be to create a "gluster" group and give the group
> permissions to write to /var/run/gluster/... Other 'fixes' include
> setting ACLs on the directory so that specified users can write there.
> Because many daemons have a "home directory" that does not exist, it
> probably is not a good idea to use $HOME to store statedumps.
>
> What suggestions do others have?
>
> Thanks,
> Niels
>
>
> 0. https://github.com/gluster/glusterfs/blob/master/doc/debugging/statedump.md
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>

This bug [1] has been filed for this problem.

[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1445569

Thanks,
Satheesaran S
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Master regressions broken (by me!)

2017-04-25 Thread Shyam

On 04/25/2017 09:03 PM, Shyam wrote:

I merged https://review.gluster.org/#/c/16806/ which depends on
https://review.gluster.org/#/c/16796/ and hence is failing regressions
for all other patches.

Fixing it up as I type this mail!


Merged the dependent patch [1] (it had the required review scores), and 
fired off a new regression burn-in [2] to ensure the status is healthy.


Atin is helping me monitor the same.

[1] Dependent patch: https://review.gluster.org/16796

[2] Regression burn-in: 
https://build.gluster.org/job/regression-test-burn-in/3041/

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Master regressions broken (by me!)

2017-04-25 Thread Shyam
I merged https://review.gluster.org/#/c/16806/ which depends on 
https://review.gluster.org/#/c/16796/ and hence is failing regressions 
for all other patches.


Fixing it up as I type this mail!

Shyam
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Release 3.11: Pending features and reviews (2 days to branching)

2017-04-25 Thread Shyam

Hi,

This mail should have been out 3-4 days earlier than now as branching is 
2 days away, but hopefully it is not too late.


The current release scope can be seen at [1].

If you do *not* recognize your nick in the following list then you can 
help us out by looking at pending reviews [2] and moving them along and 
optionally skip the rest of the mail.


If your nick is in the list below, please read along to update status on 
the action called out against you.


nick list: @amarts, @csabahenk, @ndevos, @pranith, @kaushal, @jiffin, 
@rabhat, @kaleb, @samikshan, @poornimag, @kotresh, @susant


If any reviews for the features listed below are still open and not 
appearing in [2] drop me a mail, and I will star it, so that it appears 
in the list as needed.


Status request of features targeted for 3.11:

1) Starting with features that slipped 3.10 and were marked for 3.11

1.1) In gfapi fix memory leak during graph switch #61
@ndevos I know a series of fixes are up for review, will this be 
completed for this release, or would it be an ongoing effort across 
releases? If the latter, we possibly continue tracking this for the next 
release as well.


1.2) SELinux support for Gluster Volumes #55
Latest reviews indicate this may be ready by branching, @jiffin or 
@ndevos will this make it by branching date?


1.3) Introduce force option for Snapshot Restore #62
There seems to be no owner for this now, @rabhat any updates or anything 
more than what we know about this at this time?


1.4) switch to storhaug for HA for ganesha and samba #59
@kaleb, are there any open reviews for this? Is it already done?

2) New in 3.11 and tracked in the release scope [1]

2.1) get-state CLI needs to provide client and brick capacity related 
information as well #158
Code is in. Documentation changes are pending (heads up, @samikshan). No 
updates needed at present.


2.2) Serve negative lookups from cache #82
Code is in. Documentation changes are pending, which can come in later 
(heads up, @poornimag)


2.3) New xlator to help developers detecting resource leaks #176
Code and developer documentation is in, issue is auto-closed post merge 
of the commit. (thanks @ndevos)


2.4) Make the feature metadata-caching/small file performance production 
ready #167
Just a release-note update, hence issue will be updated post branching 
when the release notes are updated (heads up, @poornimag)


2.5) Make the feature "Parallel Readdir" production ready in 3.11 #166
Just a release-note update, hence issue will be updated post branching 
when the release notes are updated (heads up, @poornimag)


2.6) bitrot: [RFE] Enable object versioning only if bitrot is enabled. #188
Code is merged, needs release notes updates once branching is done, 
possibly no documentation changes from what I can see, hence will get 
closed once release notes are updated (heads up, @kotresh).


3) New in 3.11 and not tracked in release scope [1] as there are no 
visible mail requests to consider these for 3.11 in the gluster devel lists


3.1) Use standard refcounting functions #156
@ndevos any updates? Should this be marked in the 3.11 scope?

3.2) Rebalance performance improvement #155
@susant any updates? Should this be marked in the 3.11 scope?

3.3) rpc-clnt reconnect timer #152
@amarts any updates? Should this be marked in the 3.11 scope?

3.4) [RFE] libfuse rebase to latest? #153
@amarts, @csabahenk any updates? Should this be marked in the 3.11 scope?

4) Pending issues still to be opened at github (and possibly making it into 
the release)


4.1) IPv6 support enhancements from FB
Heads up, @kaushal. Mail discussions are already done; if we make it by 
the cut, a github issue would be needed.


4.2) Halo replication enhancements from FB
Heads up, @pranith. As this may make it a week post branching and we 
will take in the backport, a github issue would be needed to track this.


Thanks,
Shyam

[1] Release scope: https://github.com/gluster/glusterfs/projects/1

[2] Reviews needing attention: 
https://review.gluster.org/#/q/status:open+starredby:srangana%2540redhat.com

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Announcing release 3.11 : Scope, schedule and feature tracking

2017-04-25 Thread Shyam



On 04/25/2017 10:08 AM, Shyam wrote:

On 04/25/2017 07:40 AM, Pranith Kumar Karampuri wrote:



On Thu, Apr 13, 2017 at 8:17 PM, Shyam wrote:

On 02/28/2017 10:17 AM, Shyam wrote:
1) Halo - Initial Cut (@pranith)


Sorry for the delay in response. Due to some other work engagements, I
couldn't spend time on this. I think I can get this done if there is a
one-week grace period, by 5th May. Or I can get this done for 3.12.0. Do let
me know what you think.


Let us backport this to 3.11 post branching; that way the schedule is
kept as is. It would help to stick to the schedule if this gets
backported by May 5th.

Considering this request, any other features that need a few more days
to be completed can target this date, by when (post branching) we need
the backport of the feature to the 3.11 branch.


Forgot to mention, please add a github issue to track this feature 
against the release scope (3.11 or otherwise).




Thanks,
Shyam
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Announcing release 3.11 : Scope, schedule and feature tracking

2017-04-25 Thread Serkan Çoban
How does this affect CPU usage? Does it read the whole file and calculate a
hash after it is written?
Will this patch land in 3.10.x?

On Tue, Apr 25, 2017 at 10:32 AM, Kotresh Hiremath Ravishankar
 wrote:
> Hi
>
> https://github.com/gluster/glusterfs/issues/188 is merged in master
> and needs to go in 3.11
>
> Thanks and Regards,
> Kotresh H R
>
> - Original Message -
>> From: "Kaushal M" 
>> To: "Shyam" 
>> Cc: gluster-us...@gluster.org, "Gluster Devel" 
>> Sent: Thursday, April 20, 2017 12:16:39 PM
>> Subject: Re: [Gluster-devel] Announcing release 3.11 : Scope, schedule and 
>> feature tracking
>>
>> On Thu, Apr 13, 2017 at 8:17 PM, Shyam  wrote:
>> > On 02/28/2017 10:17 AM, Shyam wrote:
>> >>
>> >> Hi,
>> >>
>> >> With release 3.10 shipped [1], it is time to set the dates for release
>> >> 3.11 (and subsequently 4.0).
>> >>
>> >> This mail has the following sections, so please read or revisit as needed,
>> >>   - Release 3.11 dates (the schedule)
>> >>   - 3.11 focus areas
>> >
>> >
>> > Pinging the list on the above 2 items.
>> >
>> >> *Release 3.11 dates:*
>> >> Based on our release schedule [2], 3.11 would be 3 months from the 3.10
>> >> release and would be a Short Term Maintenance (STM) release.
>> >>
>> >> This puts 3.11 schedule as (working from the release date backwards):
>> >> - Release: May 30th, 2017
>> >> - Branching: April 27th, 2017
>> >
>> >
>> > Branching is about 2 weeks away, other than the initial set of overflow
>> > features from 3.10 nothing else has been raised on the lists and in github
>> > as requests for 3.11.
>> >
>> > So, a reminder to folks who are working on features, to raise the relevant
>> > github issue for the same, and post it to devel list for consideration in
>> > 3.11 (also this helps tracking and ensuring we are waiting for the right
>> > things at the time of branching).
>> >
>> >>
>> >> *3.11 focus areas:*
>> >> As maintainers of gluster, we want to harden testing around the various
>> >> gluster features in this release. Towards this the focus area for this
>> >> release are,
>> >>
>> >> 1) Testing improvements in Gluster
>> >>   - Primary focus would be to get automated test cases to determine
>> >> release health, rather than repeating a manual exercise every 3 months
>> >>   - Further, we would also attempt to focus on maturing Glusto[7] for
>> >> this, and other needs (as much as possible)
>> >>
>> >> 2) Merge all (or as much as possible) Facebook patches into master, and
>> >> hence into release 3.11
>> >>   - Facebook has (as announced earlier [3]) started posting their
>> >> patches mainline, and this needs some attention to make it into master
>> >>
>> >
>> > Further to the above, we are also considering the following features for
>> > this release, request feature owners to let us know if these are actively
>> > being worked on and if these will make the branching dates. (calling out
>> > folks that I think are the current feature owners for the same)
>> >
>> > 1) Halo - Initial Cut (@pranith)
>> > 2) IPv6 support (@kaushal)
>>
>> This is under review at https://review.gluster.org/16228 . The patch
>> mostly looks fine.
>>
>> The only issue is that it currently depends and links with an internal
>> FB fork of tirpc (mainly for some helper functions and utilities).
>> This makes it hard for the community to make actual use of, and test,
>> the IPv6 features/fixes introduced by the change.
>>
>> If the change were refactored to use publicly available versions of
>> tirpc or ntirpc, I'm OK for it to be merged. I did try it out myself.
>> While I was able to build it against available versions of tirpc, I
>> wasn't able to get it working correctly.
>>
>> > 3) Negative lookup (@poornima)
>> > 4) Parallel Readdirp - More changes to default settings. (@poornima, @du)
>> >
>> >
>> >> [1] 3.10 release announcement:
>> >> http://lists.gluster.org/pipermail/gluster-devel/2017-February/052188.html
>> >>
>> >> [2] Gluster release schedule:
>> >> https://www.gluster.org/community/release-schedule/
>> >>
>> >> [3] Mail regarding facebook patches:
>> >> http://lists.gluster.org/pipermail/gluster-devel/2016-December/051784.html
>> >>
>> >> [4] Release scope: https://github.com/gluster/glusterfs/projects/1
>> >>
>> >> [5] glusterfs github issues: https://github.com/gluster/glusterfs/issues
>> >>
>> >> [6] github issues for features and major fixes:
>> >> https://hackmd.io/s/BkgH8sdtg#
>> >>
>> >> [7] Glusto tests: https://github.com/gluster/glusto-tests
>> >> ___
>> >> Gluster-devel mailing list
>> >> Gluster-devel@gluster.org
>> >> http://lists.gluster.org/mailman/listinfo/gluster-devel
>> >
>> > ___
>> > Gluster-devel mailing list
>> > Gluster-devel@gluster.org
>> > http://lists.gluster.org/mailman/listinfo/gluster-devel
>> 

[Gluster-devel] Statedumps as non-root can not be saved (under /var/run/gluster/...)

2017-04-25 Thread Niels de Vos
Hi,

Recently a new ability to trigger statedumps through the Gluster-CLI [0]
has been added. This makes it possible to get statedump from
applications that use gfapi. By default, statedumps are saved under
/var/run/gluster/... and this directory is only writable by root.
Applications that use gfapi do not require root permissions (like QEMU),
and therefore fail to write the statedump :-/

One approach would be to create a "gluster" group and give the group
permissions to write to /var/run/gluster/... Other 'fixes' include
setting ACLs on the directory so that specified users can write there.
Because many daemons have a "home directory" that does not exist, it
probably is not a good idea to use $HOME to store statedumps.
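
For illustration, the two ideas would boil down to something like this on a
typical system (a rough sketch; the "gluster" group and the "qemu" user are
just example names, nothing is decided yet):

```shell
# Idea 1: a dedicated group that owns the statedump directory
groupadd --system gluster
usermod -a -G gluster qemu          # add the gfapi-using service account
chgrp gluster /var/run/gluster
chmod 0775 /var/run/gluster

# Idea 2: an ACL that grants a specific user write access
setfacl -m u:qemu:rwx /var/run/gluster
getfacl /var/run/gluster            # verify the new ACL entry
```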

What suggestions do others have?

Thanks,
Niels


0. https://github.com/gluster/glusterfs/blob/master/doc/debugging/statedump.md


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] drop-in management/control panel

2017-04-25 Thread Nux!
Hi,

Does anyone know of any solutions I can just drop into my current gluster setup to 
help me with administrative tasks (create, delete, quota, acl etc.) from a web 
UI?

Thanks,
Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Announcing release 3.11 : Scope, schedule and feature tracking

2017-04-25 Thread Shyam

On 04/25/2017 03:32 AM, Kotresh Hiremath Ravishankar wrote:

Hi

https://github.com/gluster/glusterfs/issues/188 is merged in master
and needs to go in 3.11


Appropriate github project board updates done, thanks.

Further, thanks for noting (in the github issue) that this commit was 
made before we started enforcing the "Fixes/Updates #n" rules; it helps in 
keeping track of changes.




Thanks and Regards,
Kotresh H R

- Original Message -

From: "Kaushal M" 
To: "Shyam" 
Cc: gluster-us...@gluster.org, "Gluster Devel" 
Sent: Thursday, April 20, 2017 12:16:39 PM
Subject: Re: [Gluster-devel] Announcing release 3.11 : Scope, schedule and 
feature tracking

On Thu, Apr 13, 2017 at 8:17 PM, Shyam  wrote:

On 02/28/2017 10:17 AM, Shyam wrote:


Hi,

With release 3.10 shipped [1], it is time to set the dates for release
3.11 (and subsequently 4.0).

This mail has the following sections, so please read or revisit as needed,
  - Release 3.11 dates (the schedule)
  - 3.11 focus areas



Pinging the list on the above 2 items.


*Release 3.11 dates:*
Based on our release schedule [2], 3.11 would be 3 months from the 3.10
release and would be a Short Term Maintenance (STM) release.

This puts 3.11 schedule as (working from the release date backwards):
- Release: May 30th, 2017
- Branching: April 27th, 2017



Branching is about 2 weeks away, other than the initial set of overflow
features from 3.10 nothing else has been raised on the lists and in github
as requests for 3.11.

So, a reminder to folks who are working on features, to raise the relevant
github issue for the same, and post it to devel list for consideration in
3.11 (also this helps tracking and ensuring we are waiting for the right
things at the time of branching).



*3.11 focus areas:*
As maintainers of gluster, we want to harden testing around the various
gluster features in this release. Towards this the focus area for this
release are,

1) Testing improvements in Gluster
  - Primary focus would be to get automated test cases to determine
release health, rather than repeating a manual exercise every 3 months
  - Further, we would also attempt to focus on maturing Glusto[7] for
this, and other needs (as much as possible)

2) Merge all (or as much as possible) Facebook patches into master, and
hence into release 3.11
  - Facebook has (as announced earlier [3]) started posting their
patches mainline, and this needs some attention to make it into master



Further to the above, we are also considering the following features for
this release, request feature owners to let us know if these are actively
being worked on and if these will make the branching dates. (calling out
folks that I think are the current feature owners for the same)

1) Halo - Initial Cut (@pranith)
2) IPv6 support (@kaushal)


This is under review at https://review.gluster.org/16228 . The patch
mostly looks fine.

The only issue is that it currently depends and links with an internal
FB fork of tirpc (mainly for some helper functions and utilities).
This makes it hard for the community to make actual use of, and test,
the IPv6 features/fixes introduced by the change.

If the change were refactored to use publicly available versions of
tirpc or ntirpc, I'm OK for it to be merged. I did try it out myself.
While I was able to build it against available versions of tirpc, I
wasn't able to get it working correctly.


3) Negative lookup (@poornima)
4) Parallel Readdirp - More changes to default settings. (@poornima, @du)



[1] 3.10 release announcement:
http://lists.gluster.org/pipermail/gluster-devel/2017-February/052188.html

[2] Gluster release schedule:
https://www.gluster.org/community/release-schedule/

[3] Mail regarding facebook patches:
http://lists.gluster.org/pipermail/gluster-devel/2016-December/051784.html

[4] Release scope: https://github.com/gluster/glusterfs/projects/1

[5] glusterfs github issues: https://github.com/gluster/glusterfs/issues

[6] github issues for features and major fixes:
https://hackmd.io/s/BkgH8sdtg#

[7] Glusto tests: https://github.com/gluster/glusto-tests
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Announcing release 3.11 : Scope, schedule and feature tracking

2017-04-25 Thread Shyam

On 04/20/2017 02:46 AM, Kaushal M wrote:

2) IPv6 support (@kaushal)

This is under review at https://review.gluster.org/16228 . The patch
mostly looks fine.

The only issue is that it currently depends and links with an internal
FB fork of tirpc (mainly for some helper functions and utilities).
This makes it hard for the community to make actual use of, and test,
the IPv6 features/fixes introduced by the change.

If the change were refactored to use publicly available versions of
tirpc or ntirpc, I'm OK for it to be merged. I did try it out myself.
While I was able to build it against available versions of tirpc, I
wasn't able to get it working correctly.



I checked the patch and here are my comments on merging this,

1) We are encouraging FB to actually not use FB-specific configure-time 
options, and instead use a site.h-like approach (wherein we can build 
with different site.h files and not proliferate options); a rough sketch of 
what I mean is included below. This discussion, I realize, is not public, 
nor is there a github issue for the same.


Considering this, we would need this patch to change appropriately.

2) I also agree on the tirpc dependency, if we could make it work with 
the publicly available tirpc, it is better as otherwise it is difficult 
to use by the community.


Considering this, I would suggest we (as in all concerned) work on these 
aspects and get it right in master before we take it in for a release.
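
To illustrate the site.h idea in (1), here is a purely hypothetical sketch
(not code from the patch under review) of how build-time defaults could live
in a swappable header instead of behind new configure flags:

```c
/* site.h -- hypothetical per-site build defaults (illustrative only).
 * A site that wants different behaviour ships its own copy of this header
 * at build time, instead of adding another ./configure option. */
#ifndef SITE_H
#define SITE_H

#include <sys/socket.h>

/* Default address family for RPC transports; an IPv6-first site could
 * define this as AF_INET6 in its own site.h. */
#define SITE_DEFAULT_ADDR_FAMILY AF_INET

#endif /* SITE_H */

/* Usage sketch in transport code:
 *     int fd = socket (SITE_DEFAULT_ADDR_FAMILY, SOCK_STREAM, 0);
 */
```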


Thanks,
Shyam
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Announcing release 3.11 : Scope, schedule and feature tracking

2017-04-25 Thread Shyam

On 04/25/2017 07:40 AM, Pranith Kumar Karampuri wrote:



On Thu, Apr 13, 2017 at 8:17 PM, Shyam wrote:

On 02/28/2017 10:17 AM, Shyam wrote:
1) Halo - Initial Cut (@pranith)


Sorry for the delay in response. Due to some other work engagements, I
couldn't spend time on this. I think I can get this done if there is a
one-week grace period, by 5th May. Or I can get this done for 3.12.0. Do let
me know what you think.


Let us backport this to 3.11 post branching; that way the schedule is 
kept as is. It would help to stick to the schedule if this gets 
backported by May 5th.


Considering this request, any other features that need a few more days 
to be completed can target this date, by when (post branching) we need 
the backport of the feature to the 3.11 branch.


Thanks,
Shyam
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Coverity covscan for 2017-04-25-d6194de9 (master branch)

2017-04-25 Thread staticanalysis
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-04-25-d6194de9
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Announcing release 3.11 : Scope, schedule and feature tracking

2017-04-25 Thread Pranith Kumar Karampuri
On Thu, Apr 13, 2017 at 8:17 PM, Shyam  wrote:

> On 02/28/2017 10:17 AM, Shyam wrote:
>
>> Hi,
>>
>> With release 3.10 shipped [1], it is time to set the dates for release
>> 3.11 (and subsequently 4.0).
>>
>> This mail has the following sections, so please read or revisit as needed,
>>   - Release 3.11 dates (the schedule)
>>   - 3.11 focus areas
>>
>
> Pinging the list on the above 2 items.
>
> *Release 3.11 dates:*
>> Based on our release schedule [2], 3.11 would be 3 months from the 3.10
>> release and would be a Short Term Maintenance (STM) release.
>>
>> This puts 3.11 schedule as (working from the release date backwards):
>> - Release: May 30th, 2017
>> - Branching: April 27th, 2017
>>
>
> Branching is about 2 weeks away, other than the initial set of overflow
> features from 3.10 nothing else has been raised on the lists and in github
> as requests for 3.11.
>
> So, a reminder to folks who are working on features, to raise the relevant
> github issue for the same, and post it to devel list for consideration in
> 3.11 (also this helps tracking and ensuring we are waiting for the right
> things at the time of branching).
>
>
>> *3.11 focus areas:*
>> As maintainers of gluster, we want to harden testing around the various
>> gluster features in this release. Towards this the focus area for this
>> release are,
>>
>> 1) Testing improvements in Gluster
>>   - Primary focus would be to get automated test cases to determine
>> release health, rather than repeating a manual exercise every 3 months
>>   - Further, we would also attempt to focus on maturing Glusto[7] for
>> this, and other needs (as much as possible)
>>
>> 2) Merge all (or as much as possible) Facebook patches into master, and
>> hence into release 3.11
>>   - Facebook has (as announced earlier [3]) started posting their
>> patches mainline, and this needs some attention to make it into master
>>
>>
> Further to the above, we are also considering the following features for
> this release, request feature owners to let us know if these are actively
> being worked on and if these will make the branching dates. (calling out
> folks that I think are the current feature owners for the same)
>
> 1) Halo - Initial Cut (@pranith)
>

Sorry for the delay in response. Due to some other work engagements, I
couldn't spend time on this. I think I can get this done if there is a
one-week grace period, by 5th May. Or I can get this done for 3.12.0. Do let me
know what you think.


> 2) IPv6 support (@kaushal)
> 3) Negative lookup (@poornima)
> 4) Parallel Readdirp - More changes to default settings. (@poornima, @du)
>
>
> [1] 3.10 release announcement:
>> http://lists.gluster.org/pipermail/gluster-devel/2017-February/052188.html
>>
>> [2] Gluster release schedule:
>> https://www.gluster.org/community/release-schedule/
>>
>> [3] Mail regarding facebook patches:
>> http://lists.gluster.org/pipermail/gluster-devel/2016-December/051784.html
>>
>> [4] Release scope: https://github.com/gluster/glusterfs/projects/1
>>
>> [5] glusterfs github issues: https://github.com/gluster/glusterfs/issues
>>
>> [6] github issues for features and major fixes:
>> https://hackmd.io/s/BkgH8sdtg#
>>
>> [7] Glusto tests: https://github.com/gluster/glusto-tests
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] fstat problems when killing with stat prefetch turned on

2017-04-25 Thread Raghavendra Gowdappa
Recently we worked on some patches to ensure that correct stats are returned.

https://review.gluster.org/15759
https://review.gluster.org/15659
https://review.gluster.org/16419

Referring to these patches and the bugs associated with them might give you some 
insight into the nature of the problem. The major culprit was the interaction 
between readdir-ahead and stat-prefetch. So, the issue you are seeing might be 
addressed by these patches.
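
In the meantime, if you need a quick check or a stop-gap, the option can be
toggled per volume (just a sketch; substitute your own volume name):

```shell
# Turn stat-prefetch off and re-run the reproducer
gluster volume set testvol performance.stat-prefetch off

# Turn it back on once done testing
gluster volume set testvol performance.stat-prefetch on
```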

- Original Message -
> From: "Miklós Fokin" 
> To: gluster-devel@gluster.org
> Sent: Tuesday, April 25, 2017 3:42:52 PM
> Subject: [Gluster-devel] fstat problems when killing with stat prefetch   
> turned on
> 
> Hello,
> 
> I tried reproducing the problem that Mateusz Slupny was experiencing
> before (stat returning bad st_size value on self-healing) on my own
> computer with only 3 bricks (one being an arbiter) on 3.10.0.
> The result with such a small setup was that the bug appeared both on
> killing and during the self-healing process, but only rarely (once in
> hundreds of tries) and only with performance.stat-prefetch turned on.
> This might be a completely different issue, as on the setup Matt was
> using he could reproduce it with the mentioned option off; it
> always happened, but only during recovery, not after killing.
> I did submit a bug report about this:
> https://bugzilla.redhat.com/show_bug.cgi?id=1444892.
> 
> The problem, as Matt wrote, is that this causes data corruption if one
> uses the returned size when writing.
> Could I get some pointers as to what parts of the gluster code I should
> be looking at to figure out what the problem might be?
> 
> Thanks in advance,
> Miklós
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] fstat problems when killing with stat prefetch turned on

2017-04-25 Thread Miklós Fokin

Hello,

I tried reproducing the problem that Mateusz Slupny was experiencing 
before (stat returning bad st_size value on self-healing) on my own 
computer with only 3 bricks (one being an arbiter) on 3.10.0.
The result with such a small setup was that the bug appeared both on 
killing and during the self-healing process, but only rarely (once in 
hundreds of tries) and only with performance.stat-prefetch turned on.
This might be a completely different issue, as on the setup Matt was 
using he could reproduce it with the mentioned option off; it 
always happened, but only during recovery, not after killing.
I did submit a bug report about this: 
https://bugzilla.redhat.com/show_bug.cgi?id=1444892.


The problem, as Matt wrote, is that this causes data corruption if one 
uses the returned size when writing.
Could I get some pointers as to what parts of the gluster code I should 
be looking at to figure out what the problem might be?


Thanks in advance,
Miklós

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] finding and fixing memory leaks in xlators

2017-04-25 Thread Niels de Vos
On Tue, Apr 25, 2017 at 02:55:30PM +0530, Amar Tumballi wrote:
> Thanks for this detailed Email with clear instructions Niels.
> 
> Can we have this as github issues as well? I don't want this information to
> be lost as part of email thread in ML archive. I would like to see if I can
> get some interns to work on these leaks per component as part of their
> college project etc.

This is already reported as a GitHub issue as
https://github.com/gluster/glusterfs/issues/176 . Fixes for the leaks in
the different xlators will need a bug report in Bugzilla. For now I have
been creating new bugs per xlator, and added BZ 1425623 in the "blocks"
field of the xlator bugs.

Many fixes will be part of glusterfs-3.11. At the moment I am focussing
mostly on the Gluster core (libglusterfs) leaks. Leaks in the xlators
tend to be a little easier to fix, and would be more suitable for junior
engineers. I'm happy to explain more details if needed :)
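
To give an idea of what such an xlator-level fix typically looks like, here is
a rough sketch (my_private_t and its members are made-up names; the real
structures differ per xlator):

```c
/* Sketch of a typical xlator-level leak fix: release the per-instance
 * private data in fini(). Assumes the usual xlator headers are included;
 * my_private_t and priv->cache are illustrative names only. */
void
fini (xlator_t *this)
{
        my_private_t *priv = this->private;

        if (!priv)
                return;

        this->private = NULL;
        GF_FREE (priv->cache);   /* free sub-allocations first */
        GF_FREE (priv);          /* then the private structure itself */
}
```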

Thanks!
Niels


> 
> -Amar
> 
> On Tue, Apr 25, 2017 at 2:29 PM, Niels de Vos  wrote:
> 
> > Hi,
> >
> > with the use of gfapi it has become clear that Gluster was never really
> > developed to be loaded in applications. There are many different memory
> > leaks that get exposed through the usage of gfapi. Before, memory leaks
> > were mostly cleaned up automatically because processes would exit. Now,
> > there are applications that initialize a GlusterFS client to access a
> > volume, and de-init that client once not needed anymore. Unfortunately
> > upon the de-init, not all allocated memory is free'd again. These are
> > often just a few bytes, but for long running processes this can become a
> > real problem.
> >
> > Finding the memory leaks in xlators has been a tricky thing. Valgrind
> > would often not know what function/source the allocation did, and fixing
> > the leak would become a real hunt. There have been some patches merged
> > that make Valgrind work more easily, and a few patches still need a
> > little more review before everything is available. The document below
> > describes how to use a new "sink" xlator to detect memory leaks. This
> > xlator can only be merged when a change for the graph initialization is
> > included too:
> >
> >  - https://review.gluster.org/16796 - glusterfs_graph_prepare
> >  - https://review.gluster.org/16806 - sink xlator and developer doc
> >
> > It would be most welcome if other developers can review the linked
> > changes, so that everyone can easily debug memory leaks from their
> > favorite xlators.
> >
> > There is a "run-xlator.sh" script in my (for now) personal
> > "gluster-debug" repository that can be used to load an arbitrary xlator
> > along with "sink". See
> > https://github.com/nixpanic/gluster-debug/tree/master/gfapi-load-volfile
> > for more details.
> >
> > Thanks,
> > Niels
> >
> >
> > From doc/developer-guide/identifying-resource-leaks.md:
> >
> > # Identifying Resource Leaks
> >
> > Like most other pieces of software, GlusterFS is not perfect in how it
> > manages
> > its resources like memory, threads and the like. Gluster developers try
> > hard to
> > prevent leaking resources by releasing and deallocating the used
> > structures.
> > Unfortunately every now and then some resource leaks are unintentionally
> > added.
> >
> > This document tries to explain a few helpful tricks to identify resource
> > leaks
> > so that they can be addressed.
> >
> >
> > ## Debug Builds
> >
> > There are certain techniques used in GlusterFS that make it difficult to
> > use
> > tools like Valgrind for memory leak detection. There are some build options
> > that make it more practical to use Valgrind and other tools. When running
> > Valgrind, it is important to have GlusterFS builds that contain the
> > debuginfo/symbols. Some distributions (try to) strip the debuginfo to get
> > smaller executables. Fedora and RHEL based distributions have sub-packages
> > called ...-debuginfo that need to be installed for symbol resolving.
> >
> >
> > ### Memory Pools
> >
> > By using memory pools, no allocation/freeing of single structures is
> > needed. This improves performance, but also makes it impossible to track
> > the
> > allocation and freeing of structures.
> >
> > It is possible to disable the use of memory pools, and use standard
> > `malloc()`
> > and `free()` functions provided by the C library. Valgrind is then able to
> > track the allocated areas and verify if they have been free'd. In order to
> > disable memory pools, the Gluster sources need to be configured with the
> > `--enable-debug` option:
> >
> > ```shell
> > ./configure --enable-debug
> > ```
> >
> > When building RPMs, the `.spec` handles the `--with=debug` option too:
> >
> > ```shell
> > make dist
> > rpmbuild -ta --with=debug glusterfs-tar.gz
> > ```
> >
> > ### Dynamically Loaded xlators
> >
> > Valgrind tracks the call chain of functions that do memory allocations. The
> > addresses of the functions are stored and before 

Re: [Gluster-devel] finding and fixing memory leaks in xlators

2017-04-25 Thread Amar Tumballi
Thanks for this detailed email with clear instructions, Niels.

Can we have this as github issues as well? I don't want this information to
be lost as part of an email thread in the ML archive. I would like to see if I can
get some interns to work on these leaks per component as part of their
college projects etc.

-Amar

On Tue, Apr 25, 2017 at 2:29 PM, Niels de Vos  wrote:

> Hi,
>
> with the use of gfapi it has become clear that Gluster was never really
> developed to be loaded in applications. There are many different memory
> leaks that get exposed through the usage of gfapi. Before, memory leaks
> were mostly cleaned up automatically because processes would exit. Now,
> there are applications that initialize a GlusterFS client to access a
> volume, and de-init that client once not needed anymore. Unfortunately
> upon the de-init, not all allocated memory is free'd again. These are
> often just a few bytes, but for long running processes this can become a
> real problem.
>
> Finding the memory leaks in xlators has been a tricky thing. Valgrind
> would often not know what function/source the allocation did, and fixing
> the leak would become a real hunt. There have been some patches merged
> that make Valgrind work more easily, and a few patches still need a
> little more review before everything is available. The document below
> describes how to use a new "sink" xlator to detect memory leaks. This
> xlator can only be merged when a change for the graph initialization is
> included too:
>
>  - https://review.gluster.org/16796 - glusterfs_graph_prepare
>  - https://review.gluster.org/16806 - sink xlator and developer doc
>
> It would be most welcome if other developers can review the linked
> changes, so that everyone can easily debug memory leaks from their
> favorite xlators.
>
> There is a "run-xlator.sh" script in my (for now) personal
> "gluster-debug" repository that can be used to load an arbitrary xlator
> along with "sink". See
> https://github.com/nixpanic/gluster-debug/tree/master/gfapi-load-volfile
> for more details.
>
> Thanks,
> Niels
>
>
> From doc/developer-guide/identifying-resource-leaks.md:
>
> # Identifying Resource Leaks
>
> Like most other pieces of software, GlusterFS is not perfect in how it
> manages
> its resources like memory, threads and the like. Gluster developers try
> hard to
> prevent leaking resources by releasing and deallocating the used
> structures.
> Unfortunately every now and then some resource leaks are unintentionally
> added.
>
> This document tries to explain a few helpful tricks to identify resource
> leaks
> so that they can be addressed.
>
>
> ## Debug Builds
>
> There are certain techniques used in GlusterFS that make it difficult to
> use
> tools like Valgrind for memory leak detection. There are some build options
> that make it more practical to use Valgrind and other tools. When running
> Valgrind, it is important to have GlusterFS builds that contain the
> debuginfo/symbols. Some distributions (try to) strip the debuginfo to get
> smaller executables. Fedora and RHEL based distributions have sub-packages
> called ...-debuginfo that need to be installed for symbol resolving.
>
>
> ### Memory Pools
>
> By using memory pools, no allocation/freeing of single structures is
> needed. This improves performance, but also makes it impossible to track
> the
> allocation and freeing of structures.
>
> It is possible to disable the use of memory pools, and use standard
> `malloc()`
> and `free()` functions provided by the C library. Valgrind is then able to
> track the allocated areas and verify if they have been free'd. In order to
> disable memory pools, the Gluster sources need to be configured with the
> `--enable-debug` option:
>
> ```shell
> ./configure --enable-debug
> ```
>
> When building RPMs, the `.spec` handles the `--with=debug` option too:
>
> ```shell
> make dist
> rpmbuild -ta --with=debug glusterfs-tar.gz
> ```
>
> ### Dynamically Loaded xlators
>
> Valgrind tracks the call chain of functions that do memory allocations. The
> addresses of the functions are stored and before Valgrind exits the
> addresses
> are resolved into human readable function names and offsets (line numbers
> in
> source files). Because Gluster loads xlators dynamically, and unloads them
> before exiting, Valgrind is not able to resolve the function addresses into
> symbols anymore. Whenever this happens, Valgrind shows `???` in the output,
> like
>
> ```
>   ==25170== 344 bytes in 1 blocks are definitely lost in loss record 233
> of 324
>   ==25170==at 0x4C29975: calloc (vg_replace_malloc.c:711)
>   ==25170==by 0x52C7C0B: __gf_calloc (mem-pool.c:117)
>   ==25170==by 0x12B0638A: ???
>   ==25170==by 0x528FCE6: __xlator_init (xlator.c:472)
>   ==25170==by 0x528FE16: xlator_init (xlator.c:498)
>   ...
> ```
>
> These `???` can be prevented by not calling `dlclose()` for unloading the
> xlator. This will cause a small leak of the 

[Gluster-devel] finding and fixing memory leaks in xlators

2017-04-25 Thread Niels de Vos
Hi,

with the use of gfapi it has become clear that Gluster was never really
developed to be loaded in applications. There are many different memory
leaks that get exposed through the usage of gfapi. Before, memory leaks
were mostly cleaned up automatically because processes would exit. Now,
there are applications that initialize a GlusterFS client to access a
volume, and de-init that client once not needed anymore. Unfortunately
upon the de-init, not all allocated memory is free'd again. These are
often just a few bytes, but for long running processes this can become a
real problem.

Finding the memory leaks in xlators has been a tricky thing. Valgrind
would often not know which function/source did the allocation, and fixing
the leak would become a real hunt. There have been some patches merged
that make Valgrind work more easily, and a few patches still need a
little more review before everything is available. The document below
describes how to use a new "sink" xlator to detect memory leaks. This
xlator can only be merged when a change for the graph initialization is
included too:

 - https://review.gluster.org/16796 - glusterfs_graph_prepare
 - https://review.gluster.org/16806 - sink xlator and developer doc

It would be most welcome if other developers can review the linked
changes, so that everyone can easily debug memory leaks from their
favorite xlators.

There is a "run-xlator.sh" script in my (for now) personal
"gluster-debug" repository that can be used to load an arbitrary xlator
along with "sink". See
https://github.com/nixpanic/gluster-debug/tree/master/gfapi-load-volfile
for more details.

Thanks,
Niels


From doc/developer-guide/identifying-resource-leaks.md:

# Identifying Resource Leaks

Like most other pieces of software, GlusterFS is not perfect in how it manages
its resources like memory, threads and the like. Gluster developers try hard to
prevent leaking resources by releasing and deallocating the used structures.
Unfortunately every now and then some resource leaks are unintentionally added.

This document tries to explain a few helpful tricks to identify resource leaks
so that they can be addressed.


## Debug Builds

There are certain techniques used in GlusterFS that make it difficult to use
tools like Valgrind for memory leak detection. There are some build options
that make it more practical to use Valgrind and other tools. When running
Valgrind, it is important to have GlusterFS builds that contain the
debuginfo/symbols. Some distributions (try to) strip the debuginfo to get
smaller executables. Fedora and RHEL based distributions have sub-packages
called ...-debuginfo that need to be installed for symbol resolving.


### Memory Pools

By using memory pools, no allocation/freeing of single structures is
needed. This improves performance, but also makes it impossible to track the
allocation and freeing of structures.

It is possible to disable the use of memory pools, and use standard `malloc()`
and `free()` functions provided by the C library. Valgrind is then able to
track the allocated areas and verify if they have been free'd. In order to
disable memory pools, the Gluster sources need to be configured with the
`--enable-debug` option:

```shell
./configure --enable-debug
```

When building RPMs, the `.spec` handles the `--with=debug` option too:

```shell
make dist
rpmbuild -ta --with=debug glusterfs-tar.gz
```

### Dynamically Loaded xlators

Valgrind tracks the call chain of functions that do memory allocations. The
addresses of the functions are stored and before Valgrind exits the addresses
are resolved into human readable function names and offsets (line numbers in
source files). Because Gluster loads xlators dynamically, and unloads them
before exiting, Valgrind is not able to resolve the function addresses into
symbols anymore. Whenever this happens, Valgrind shows `???` in the output,
like

```
  ==25170== 344 bytes in 1 blocks are definitely lost in loss record 233 of 324
  ==25170==at 0x4C29975: calloc (vg_replace_malloc.c:711)
  ==25170==by 0x52C7C0B: __gf_calloc (mem-pool.c:117)
  ==25170==by 0x12B0638A: ???
  ==25170==by 0x528FCE6: __xlator_init (xlator.c:472)
  ==25170==by 0x528FE16: xlator_init (xlator.c:498)
  ...
```

These `???` can be prevented by not calling `dlclose()` for unloading the
xlator. This will cause a small leak of the handle that was returned with
`dlopen()`, but for improved debugging this can be acceptable. For this and
other Valgrind features, a `--enable-valgrind` option is available to
`./configure`. When GlusterFS is built with this option, Valgrind will be able
to resolve the symbol names of the functions that do memory allocations inside
xlators.

```shell
./configure --enable-valgrind
```

When building RPMs, the `.spec` handles the `--with=valgrind` option too:

```shell
make dist
rpmbuild -ta --with=valgrind glusterfs-tar.gz
```
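
As an illustration of what skipping `dlclose()` amounts to (a sketch only; the
macro and variable names below are made up and not the actual GlusterFS
identifiers):

```c
/* Sketch: keep xlator symbols resolvable for Valgrind by skipping dlclose().
 * RUN_WITH_VALGRIND and dlhandle are illustrative names only. */
#include <dlfcn.h>

static void
unload_xlator (void *dlhandle)
{
#ifndef RUN_WITH_VALGRIND
        /* Normal builds release the shared object. */
        if (dlhandle)
                dlclose (dlhandle);
#else
        /* Valgrind builds intentionally leak the dlopen() handle so that
         * function addresses inside the xlator can still be resolved when
         * the leak report is printed. */
        (void) dlhandle;
#endif
}
```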

## Running Valgrind against a single xlator

Debugging a single 

Re: [Gluster-devel] [Gluster-users] Announcing release 3.11 : Scope, schedule and feature tracking

2017-04-25 Thread Kotresh Hiremath Ravishankar
Hi Serkan,

Even though bitrot is not enabled, versioning was being done.
As part of it, on every fresh lookup, getxattr calls were
made to find whether the object is bad and to get its current version
and signature. So a find on a gluster mount would sometimes cause
high CPU utilization.
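
For reference, this is roughly how you would enable bitrot and look at the
versioning xattrs on a brick (a sketch; the exact trusted.bit-rot.* xattr
names are from memory, so please verify on your build):

```shell
# Enable bitrot detection on a volume (volume name is a placeholder)
gluster volume bitrot testvol enable

# Inspect the versioning/signature xattrs directly on a brick backend file
getfattr -d -m 'trusted.bit-rot' -e hex /bricks/brick1/somefile
```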

Since this is an RFE, it would be available from 3.11 and would not
be backported to 3.10.x


Thanks and Regards,
Kotresh H R

- Original Message -
> From: "Serkan Çoban" 
> To: "Kotresh Hiremath Ravishankar" 
> Cc: "Shyam" , "Gluster Users" 
> , "Gluster Devel"
> 
> Sent: Tuesday, April 25, 2017 1:25:39 PM
> Subject: Re: [Gluster-users] [Gluster-devel] Announcing release 3.11 : Scope, 
> schedule and feature tracking
> 
> How does this affect CPU usage? Does it read the whole file and calculate a
> hash after it is written?
> Will this patch land in 3.10.x?
> 
> On Tue, Apr 25, 2017 at 10:32 AM, Kotresh Hiremath Ravishankar
>  wrote:
> > Hi
> >
> > https://github.com/gluster/glusterfs/issues/188 is merged in master
> > and needs to go in 3.11
> >
> > Thanks and Regards,
> > Kotresh H R
> >
> > - Original Message -
> >> From: "Kaushal M" 
> >> To: "Shyam" 
> >> Cc: gluster-us...@gluster.org, "Gluster Devel" 
> >> Sent: Thursday, April 20, 2017 12:16:39 PM
> >> Subject: Re: [Gluster-devel] Announcing release 3.11 : Scope, schedule and
> >> feature tracking
> >>
> >> On Thu, Apr 13, 2017 at 8:17 PM, Shyam  wrote:
> >> > On 02/28/2017 10:17 AM, Shyam wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> With release 3.10 shipped [1], it is time to set the dates for release
> >> >> 3.11 (and subsequently 4.0).
> >> >>
> >> >> This mail has the following sections, so please read or revisit as
> >> >> needed,
> >> >>   - Release 3.11 dates (the schedule)
> >> >>   - 3.11 focus areas
> >> >
> >> >
> >> > Pinging the list on the above 2 items.
> >> >
> >> >> *Release 3.11 dates:*
> >> >> Based on our release schedule [2], 3.11 would be 3 months from the 3.10
> >> >> release and would be a Short Term Maintenance (STM) release.
> >> >>
> >> >> This puts 3.11 schedule as (working from the release date backwards):
> >> >> - Release: May 30th, 2017
> >> >> - Branching: April 27th, 2017
> >> >
> >> >
> >> > Branching is about 2 weeks away, other than the initial set of overflow
> >> > features from 3.10 nothing else has been raised on the lists and in
> >> > github
> >> > as requests for 3.11.
> >> >
> >> > So, a reminder to folks who are working on features, to raise the
> >> > relevant
> >> > github issue for the same, and post it to devel list for consideration
> >> > in
> >> > 3.11 (also this helps tracking and ensuring we are waiting for the right
> >> > things at the time of branching).
> >> >
> >> >>
> >> >> *3.11 focus areas:*
> >> >> As maintainers of gluster, we want to harden testing around the various
> >> >> gluster features in this release. Towards this the focus area for this
> >> >> release are,
> >> >>
> >> >> 1) Testing improvements in Gluster
> >> >>   - Primary focus would be to get automated test cases to determine
> >> >> release health, rather than repeating a manual exercise every 3 months
> >> >>   - Further, we would also attempt to focus on maturing Glusto[7] for
> >> >> this, and other needs (as much as possible)
> >> >>
> >> >> 2) Merge all (or as much as possible) Facebook patches into master, and
> >> >> hence into release 3.11
> >> >>   - Facebook has (as announced earlier [3]) started posting their
> >> >> patches mainline, and this needs some attention to make it into master
> >> >>
> >> >
> >> > Further to the above, we are also considering the following features for
> >> > this release, request feature owners to let us know if these are
> >> > actively
> >> > being worked on and if these will make the branching dates. (calling out
> >> > folks that I think are the current feature owners for the same)
> >> >
> >> > 1) Halo - Initial Cut (@pranith)
> >> > 2) IPv6 support (@kaushal)
> >>
> >> This is under review at https://review.gluster.org/16228 . The patch
> >> mostly looks fine.
> >>
> >> The only issue is that it currently depends and links with an internal
> >> FB fork of tirpc (mainly for some helper functions and utilities).
> >> This makes it hard for the community to make actual use of, and test,
> >> the IPv6 features/fixes introduced by the change.
> >>
> >> If the change were refactored to use publicly available versions of
> >> tirpc or ntirpc, I'm OK for it to be merged. I did try it out myself.
> >> While I was able to build it against available versions of tirpc, I
> >> wasn't able to get it working correctly.
> >>
> >> > 3) Negative lookup (@poornima)
> >> > 4) Parallel Readdirp - More changes to default settings. (@poornima,
> >> > @du)
> >> >
> >> >
> >> >> 

Re: [Gluster-devel] Announcing release 3.11 : Scope, schedule and feature tracking

2017-04-25 Thread Kotresh Hiremath Ravishankar
Hi

https://github.com/gluster/glusterfs/issues/188 is merged in master
and needs to go in 3.11

Thanks and Regards,
Kotresh H R

- Original Message -
> From: "Kaushal M" 
> To: "Shyam" 
> Cc: gluster-us...@gluster.org, "Gluster Devel" 
> Sent: Thursday, April 20, 2017 12:16:39 PM
> Subject: Re: [Gluster-devel] Announcing release 3.11 : Scope, schedule and 
> feature tracking
> 
> On Thu, Apr 13, 2017 at 8:17 PM, Shyam  wrote:
> > On 02/28/2017 10:17 AM, Shyam wrote:
> >>
> >> Hi,
> >>
> >> With release 3.10 shipped [1], it is time to set the dates for release
> >> 3.11 (and subsequently 4.0).
> >>
> >> This mail has the following sections, so please read or revisit as needed,
> >>   - Release 3.11 dates (the schedule)
> >>   - 3.11 focus areas
> >
> >
> > Pinging the list on the above 2 items.
> >
> >> *Release 3.11 dates:*
> >> Based on our release schedule [2], 3.11 would be 3 months from the 3.10
> >> release and would be a Short Term Maintenance (STM) release.
> >>
> >> This puts 3.11 schedule as (working from the release date backwards):
> >> - Release: May 30th, 2017
> >> - Branching: April 27th, 2017
> >
> >
> > Branching is about 2 weeks away, other than the initial set of overflow
> > features from 3.10 nothing else has been raised on the lists and in github
> > as requests for 3.11.
> >
> > So, a reminder to folks who are working on features, to raise the relevant
> > github issue for the same, and post it to devel list for consideration in
> > 3.11 (also this helps tracking and ensuring we are waiting for the right
> > things at the time of branching).
> >
> >>
> >> *3.11 focus areas:*
> >> As maintainers of gluster, we want to harden testing around the various
> >> gluster features in this release. Towards this the focus area for this
> >> release are,
> >>
> >> 1) Testing improvements in Gluster
> >>   - Primary focus would be to get automated test cases to determine
> >> release health, rather than repeating a manual exercise every 3 months
> >>   - Further, we would also attempt to focus on maturing Glusto[7] for
> >> this, and other needs (as much as possible)
> >>
> >> 2) Merge all (or as much as possible) Facebook patches into master, and
> >> hence into release 3.11
> >>   - Facebook has (as announced earlier [3]) started posting their
> >> patches mainline, and this needs some attention to make it into master
> >>
> >
> > Further to the above, we are also considering the following features for
> > this release, request feature owners to let us know if these are actively
> > being worked on and if these will make the branching dates. (calling out
> > folks that I think are the current feature owners for the same)
> >
> > 1) Halo - Initial Cut (@pranith)
> > 2) IPv6 support (@kaushal)
> 
> This is under review at https://review.gluster.org/16228 . The patch
> mostly looks fine.
> 
> The only issue is that it currently depends on, and links with, an internal
> FB fork of tirpc (mainly for some helper functions and utilities).
> This makes it hard for the community to make actual use of, and test,
> the IPv6 features/fixes introduced by the change.
> 
> If the change were refactored to use publicly available versions of
> tirpc or ntirpc, I'd be OK with it being merged. I did try it out myself:
> while I was able to build it against available versions of tirpc, I
> wasn't able to get it working correctly.
> 
> > 3) Negative lookup (@poornima)
> > 4) Parallel Readdirp - More changes to default settings. (@poornima, @du)
> >
> >
> >> [1] 3.10 release announcement:
> >> http://lists.gluster.org/pipermail/gluster-devel/2017-February/052188.html
> >>
> >> [2] Gluster release schedule:
> >> https://www.gluster.org/community/release-schedule/
> >>
> >> [3] Mail regarding facebook patches:
> >> http://lists.gluster.org/pipermail/gluster-devel/2016-December/051784.html
> >>
> >> [4] Release scope: https://github.com/gluster/glusterfs/projects/1
> >>
> >> [5] glusterfs github issues: https://github.com/gluster/glusterfs/issues
> >>
> >> [6] github issues for features and major fixes:
> >> https://hackmd.io/s/BkgH8sdtg#
> >>
> >> [7] Glusto tests: https://github.com/gluster/glusto-tests
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] High load on glusterfsd process

2017-04-25 Thread ABHISHEK PALIWAL
Thanks Kotresh.

Let me discuss in my team and will let you know.

Regards,
Abhishek

On Tue, Apr 25, 2017 at 12:41 PM, Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:

> Hi Abhishek,
>
> As this is an enhancement, it won't be backported to 3.7/3.8/3.10;
> it will only be available from the upcoming 3.11 release.
>
> But I did try applying it to 3.7.6, and it has a lot of conflicts.
> If it's important for you, you can upgrade to the latest version
> available and backport it. If it's impossible to upgrade to the
> latest version, at least 3.7.20 would do; it has minimal
> conflicts. I can help you out with that.
>
> Thanks and Regards,
> Kotresh H R
>
> - Original Message -
> > From: "ABHISHEK PALIWAL" 
> > To: "Kotresh Hiremath Ravishankar" 
> > Cc: "Pranith Kumar Karampuri" , "Gluster Devel" <
> gluster-devel@gluster.org>, "gluster-users"
> > 
> > Sent: Tuesday, April 25, 2017 10:58:41 AM
> > Subject: Re: [Gluster-users] High load on glusterfsd process
> >
> > Hi Kotresh,
> >
> > Could you please update me on whether it is possible to get the patch or
> > backport this patch to the Gluster 3.7.6 version.
> >
> > Regards,
> > Abhishek
> >
> > On Mon, Apr 24, 2017 at 6:14 PM, ABHISHEK PALIWAL <
> abhishpali...@gmail.com>
> > wrote:
> >
> > > What is the way to take this patch on Gluster 3.7.6, or is the only way
> > > to upgrade the version?
> > >
> > > On Mon, Apr 24, 2017 at 3:22 PM, ABHISHEK PALIWAL <
> abhishpali...@gmail.com
> > > > wrote:
> > >
> > >> Hi Kotresh,
> > >>
> > >> I have seen the patch available at the link you shared. It seems we
> > >> don't have some of the files in gluster 3.7.6 which you modified in the patch.
> > >>
> > >> Is there any possibility to provide the patch for Gluster 3.7.6?
> > >>
> > >> Regards,
> > >> Abhishek
> > >>
> > >> On Mon, Apr 24, 2017 at 3:07 PM, Kotresh Hiremath Ravishankar <
> > >> khire...@redhat.com> wrote:
> > >>
> > >>> Hi Abhishek,
> > >>>
> > >>> Bitrot requires versioning of files to be done on writes.
> > >>> This was being done irrespective of whether bitrot is
> > >>> enabled or not. This takes considerable CPU. With the
> > >>> fix https://review.gluster.org/#/c/14442/, it is made
> > >>> optional and is enabled only with bitrot. If bitrot
> > >>> is not enabled, then you won't see any setxattr/getxattrs
> > >>> related to bitrot.
> > >>>
> > >>> The fix would be available in 3.11.
> > >>>
> > >>>
> > >>> Thanks and Regards,
> > >>> Kotresh H R
> > >>>
> > >>> - Original Message -
> > >>> > From: "ABHISHEK PALIWAL" 
> > >>> > To: "Pranith Kumar Karampuri" 
> > >>> > Cc: "Gluster Devel" , "gluster-users" <
> > >>> gluster-us...@gluster.org>, "Kotresh Hiremath
> > >>> > Ravishankar" 
> > >>> > Sent: Monday, April 24, 2017 11:30:57 AM
> > >>> > Subject: Re: [Gluster-users] High load on glusterfsd process
> > >>> >
> > >>> > Hi Kotresh,
> > >>> >
> > >>> > Could you please update me on this?
> > >>> >
> > >>> > Regards,
> > >>> > Abhishek
> > >>> >
> > >>> > On Sat, Apr 22, 2017 at 12:31 PM, Pranith Kumar Karampuri <
> > >>> > pkara...@redhat.com> wrote:
> > >>> >
> > >>> > > +Kotresh who seems to have worked on the bug you mentioned.
> > >>> > >
> > >>> > > On Fri, Apr 21, 2017 at 12:21 PM, ABHISHEK PALIWAL <
> > >>> > > abhishpali...@gmail.com> wrote:
> > >>> > >
> > >>> > >>
> > >>> > >> If the patch provided in that case will resolve my bug as well, then
> > >>> > >> please provide the patch so that I can backport it to 3.7.6.
> > >>> > >>
> > >>> > >> On Fri, Apr 21, 2017 at 11:30 AM, ABHISHEK PALIWAL <
> > >>> > >> abhishpali...@gmail.com> wrote:
> > >>> > >>
> > >>> > >>> Hi Team,
> > >>> > >>>
> > >>> > >>> I have noticed that there are many glusterfsd threads running in
> > >>> > >>> my system, and we observed some of those threads consuming more CPU.
> > >>> > >>> I did “strace” on two such threads (before the problem disappeared by
> > >>> > >>> itself) and found continuous activity like the below:
> > >>> > >>>
> > >>> > >>> lstat("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_20170126T113552+.log.gz", {st_mode=S_IFREG|0670, st_size=1995, ...}) = 0
> > >>> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_20170126T113552+.log.gz", "trusted.bit-rot.bad-file", 0x3fff81f58550, 255) = -1 ENODATA (No data available)
> > >>> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_20170126T113552+.log.gz", "trusted.bit-rot.signature", 0x3fff81f58550, 255) = -1

Re: [Gluster-devel] [Gluster-users] High load on glusterfsd process

2017-04-25 Thread Kotresh Hiremath Ravishankar
Hi Abhishek,

As this is an enhancement, it won't be backported to 3.7/3.8/3.10;
it will only be available from the upcoming 3.11 release.

But I did try applying it to 3.7.6, and it has a lot of conflicts.
If it's important for you, you can upgrade to the latest version
available and backport it. If it's impossible to upgrade to the
latest version, at least 3.7.20 would do; it has minimal
conflicts. I can help you out with that.

Thanks and Regards,
Kotresh H R
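
For anyone attempting such a backport, a typical Gerrit cherry-pick workflow
is sketched below. It assumes the change is the one linked earlier in the
thread (https://review.gluster.org/#/c/14442/), that the Gerrit project name
is glusterfs, that a v3.7.20 tag exists in the repository, and that <N> is
replaced with the latest patchset number shown on the review page. Conflicts
still have to be resolved by hand, and the result built and tested.

    # Illustrative backport workflow, not an official procedure.
    git clone https://github.com/gluster/glusterfs.git
    cd glusterfs
    git checkout -b backport-bitrot-versioning v3.7.20

    # Fetch the change from Gerrit (change 14442 -> refs/changes/42/14442/<N>)
    git fetch https://review.gluster.org/glusterfs refs/changes/42/14442/<N>
    git cherry-pick FETCH_HEAD    # resolve conflicts, then build and run tests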

- Original Message -
> From: "ABHISHEK PALIWAL" 
> To: "Kotresh Hiremath Ravishankar" 
> Cc: "Pranith Kumar Karampuri" , "Gluster Devel" 
> , "gluster-users"
> 
> Sent: Tuesday, April 25, 2017 10:58:41 AM
> Subject: Re: [Gluster-users] High load on glusterfsd process
> 
> Hi Kotresh,
> 
> Could you please update me on whether it is possible to get the patch or
> backport this patch to the Gluster 3.7.6 version.
> 
> Regards,
> Abhishek
> 
> On Mon, Apr 24, 2017 at 6:14 PM, ABHISHEK PALIWAL 
> wrote:
> 
> > What is the way to take this patch on Gluster 3.7.6, or is the only way
> > to upgrade the version?
> >
> > On Mon, Apr 24, 2017 at 3:22 PM, ABHISHEK PALIWAL  > > wrote:
> >
> >> Hi Kotresh,
> >>
> >> I have seen the patch available at the link you shared. It seems we
> >> don't have some of the files in gluster 3.7.6 which you modified in the patch.
> >>
> >> Is there any possibility to provide the patch for Gluster 3.7.6?
> >>
> >> Regards,
> >> Abhishek
> >>
> >> On Mon, Apr 24, 2017 at 3:07 PM, Kotresh Hiremath Ravishankar <
> >> khire...@redhat.com> wrote:
> >>
> >>> Hi Abhishek,
> >>>
> >>> Bitrot requires versioning of files to be done on writes.
> >>> This was being done irrespective of whether bitrot is
> >>> enabled or not. This takes considerable CPU. With the
> >>> fix https://review.gluster.org/#/c/14442/, it is made
> >>> optional and is enabled only with bitrot. If bitrot
> >>> is not enabled, then you won't see any setxattr/getxattrs
> >>> related to bitrot.
> >>>
> >>> The fix would be available in 3.11.
> >>>
> >>>
> >>> Thanks and Regards,
> >>> Kotresh H R
> >>>
> >>> - Original Message -
> >>> > From: "ABHISHEK PALIWAL" 
> >>> > To: "Pranith Kumar Karampuri" 
> >>> > Cc: "Gluster Devel" , "gluster-users" <
> >>> gluster-us...@gluster.org>, "Kotresh Hiremath
> >>> > Ravishankar" 
> >>> > Sent: Monday, April 24, 2017 11:30:57 AM
> >>> > Subject: Re: [Gluster-users] High load on glusterfsd process
> >>> >
> >>> > Hi Kotresh,
> >>> >
> >>> > Could you please update me on this?
> >>> >
> >>> > Regards,
> >>> > Abhishek
> >>> >
> >>> > On Sat, Apr 22, 2017 at 12:31 PM, Pranith Kumar Karampuri <
> >>> > pkara...@redhat.com> wrote:
> >>> >
> >>> > > +Kotresh who seems to have worked on the bug you mentioned.
> >>> > >
> >>> > > On Fri, Apr 21, 2017 at 12:21 PM, ABHISHEK PALIWAL <
> >>> > > abhishpali...@gmail.com> wrote:
> >>> > >
> >>> > >>
> >>> > >> If the patch provided in that case will resolve my bug as well, then
> >>> > >> please provide the patch so that I can backport it to 3.7.6.
> >>> > >>
> >>> > >> On Fri, Apr 21, 2017 at 11:30 AM, ABHISHEK PALIWAL <
> >>> > >> abhishpali...@gmail.com> wrote:
> >>> > >>
> >>> > >>> Hi Team,
> >>> > >>>
> >>> > >>> I have noticed that there are many glusterfsd threads running in
> >>> > >>> my system, and we observed some of those threads consuming more CPU.
> >>> > >>> I did “strace” on two such threads (before the problem disappeared by
> >>> > >>> itself) and found continuous activity like the below:
> >>> > >>>
> >>> > >>> lstat("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_20170126T113552+.log.gz", {st_mode=S_IFREG|0670, st_size=1995, ...}) = 0
> >>> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_20170126T113552+.log.gz", "trusted.bit-rot.bad-file", 0x3fff81f58550, 255) = -1 ENODATA (No data available)
> >>> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_20170126T113552+.log.gz", "trusted.bit-rot.signature", 0x3fff81f58550, 255) = -1 ENODATA (No data available)
> >>> > >>> lstat("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_20170123T180550+.log.gz", {st_mode=S_IFREG|0670, st_size=169, ...}) = 0
> >>> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_20170
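
As a quick check related to the strace output above, the same attributes can
be queried directly on a brick with getfattr. The file path below is a
placeholder, and trusted.* xattrs generally require root:

    # Dump all extended attributes on a brick file (example path).
    getfattr -m . -d -e hex /path/to/brick/some/file

    # Or query the specific attributes seen in the strace output:
    getfattr -n trusted.bit-rot.signature /path/to/brick/some/file
    getfattr -n trusted.bit-rot.bad-file  /path/to/brick/some/file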