Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-09 Thread Kaushal M
On Thu, Nov 10, 2016 at 1:11 PM, Atin Mukherjee  wrote:
>
>
> On Thu, Nov 10, 2016 at 1:04 PM, Pranith Kumar Karampuri
>  wrote:
>>
>> I am trying to understand the criticality of these patches. Raghavendra's
>> patch is crucial because gfapi workloads (for Samba and QEMU) are affected
>> severely. I waited for Krutika's patch because the VM use case can lead to
>> disk corruption on replace-brick. If you could let us know how critical these
>> are, and we agree that they are this severe, we can definitely take them in.
>> Otherwise the next release is better, IMO. Thoughts?
>
>
> If you are asking about how critical they are, then the first two are
> definitely not, but the third one is: if a user upgrades from 3.6 to the
> latest release with quota enabled, further peer probes get rejected, and the
> only workaround is to disable quota and re-enable it.
>

If a workaround is present, I don't consider it a blocker for the release.

> On a different note, the 3.9 head is not static; it keeps moving forward. So
> if you are really expecting only critical patches to go in, that's not
> happening. Just a word of caution!
>
>>
>> On Thu, Nov 10, 2016 at 12:56 PM, Atin Mukherjee 
>> wrote:
>>>
>>> Pranith,
>>>
>>> I'd like to see the following patches getting in:
>>>
>>> http://review.gluster.org/#/c/15722/
>>> http://review.gluster.org/#/c/15714/
>>> http://review.gluster.org/#/c/15792/
>>>
>>>
>>>
>>>
>>>
>>> On Thu, Nov 10, 2016 at 7:12 AM, Pranith Kumar Karampuri
>>>  wrote:

 hi,
   The only problem left was EC taking more time. This should affect
 small files a lot more. The best way to solve it is using compound fops, so
 for now I think going ahead with the release is best.

 We are waiting for Raghavendra Talur's
 http://review.gluster.org/#/c/15778 before going ahead with the release. If
 we missed any other crucial patch please let us know.

 Will make the release as soon as this patch is merged.

 --
 Pranith & Aravinda

 ___
 maintainers mailing list
 maintain...@gluster.org
 http://www.gluster.org/mailman/listinfo/maintainers

>>>
>>>
>>>
>>> --
>>>
>>> ~ Atin (atinm)
>>
>>
>>
>>
>> --
>> Pranith
>
>
>
>
> --
>
> ~ Atin (atinm)
>
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-09 Thread Atin Mukherjee
On Thu, Nov 10, 2016 at 1:04 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

> I am trying to understand the criticality of these patches. Raghavendra's
> patch is crucial because gfapi workloads (for Samba and QEMU) are affected
> severely. I waited for Krutika's patch because the VM use case can lead to
> disk corruption on replace-brick. If you could let us know how critical these
> are, and we agree that they are this severe, we can definitely take them in.
> Otherwise the next release is better, IMO. Thoughts?
>

If you are asking about how critical they are, then the first two are
definitely not, but the third one is: if a user upgrades from 3.6 to the
latest release with quota enabled, further peer probes get rejected, and the
only workaround is to disable quota and re-enable it.
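
As a rough sketch of that workaround (the volume name "testvol" and the peer
hostname below are placeholders; also note that disabling quota normally drops
the configured limits, so any limit-usage settings may have to be re-applied
afterwards):

    # Cycle quota off and on for the affected volume.
    gluster volume quota testvol disable
    gluster volume quota testvol enable

    # Re-apply quota limits if needed (path and size are examples only).
    gluster volume quota testvol limit-usage /projects 100GB

    # Retry the peer probe that was previously rejected.
    gluster peer probe new-node.example.com
    gluster peer status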

On a different note, the 3.9 head is not static; it keeps moving forward. So
if you are really expecting only critical patches to go in, that's not
happening. Just a word of caution!


> On Thu, Nov 10, 2016 at 12:56 PM, Atin Mukherjee 
> wrote:
>
>> Pranith,
>>
>> I'd like to see the following patches getting in:
>>
>> http://review.gluster.org/#/c/15722/
>> http://review.gluster.org/#/c/15714/
>> http://review.gluster.org/#/c/15792/
>>
>
>>
>>
>>
>> On Thu, Nov 10, 2016 at 7:12 AM, Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>> hi,
>>>   The only problem left was EC taking more time. This should affect
>>> small files a lot more. The best way to solve it is using compound fops, so
>>> for now I think going ahead with the release is best.
>>>
>>> We are waiting for Raghavendra Talur's http://review.gluster.org/#/c/
>>> 15778 before going ahead with the release. If we missed any other
>>> crucial patch please let us know.
>>>
>>> Will make the release as soon as this patch is merged.
>>>
>>> --
>>> Pranith & Aravinda
>>>
>>> ___
>>> maintainers mailing list
>>> maintain...@gluster.org
>>> http://www.gluster.org/mailman/listinfo/maintainers
>>>
>>>
>>
>>
>> --
>>
>> ~ Atin (atinm)
>>
>
>
>
> --
> Pranith
>



-- 

~ Atin (atinm)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-09 Thread Pranith Kumar Karampuri
I am trying to understand the criticality of these patches. Raghavendra's
patch is crucial because gfapi workloads (for Samba and QEMU) are affected
severely. I waited for Krutika's patch because the VM use case can lead to
disk corruption on replace-brick. If you could let us know how critical these
are, and we agree that they are this severe, we can definitely take them in.
Otherwise the next release is better, IMO. Thoughts?

On Thu, Nov 10, 2016 at 12:56 PM, Atin Mukherjee 
wrote:

> Pranith,
>
> I'd like to see the following patches getting in:
>
> http://review.gluster.org/#/c/15722/
> http://review.gluster.org/#/c/15714/
> http://review.gluster.org/#/c/15792/
>
>
>
> On Thu, Nov 10, 2016 at 7:12 AM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>> hi,
>>   The only problem left was EC taking more time. This should affect
>> small files a lot more. The best way to solve it is using compound fops, so
>> for now I think going ahead with the release is best.
>>
>> We are waiting for Raghavendra Talur's http://review.gluster.org/#/c/
>> 15778 before going ahead with the release. If we missed any other
>> crucial patch please let us know.
>>
>> Will make the release as soon as this patch is merged.
>>
>> --
>> Pranith & Aravinda
>>
>> ___
>> maintainers mailing list
>> maintain...@gluster.org
>> http://www.gluster.org/mailman/listinfo/maintainers
>>
>>
>
>
> --
>
> ~ Atin (atinm)
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Feedback on DHT option "cluster.readdir-optimize"

2016-11-09 Thread Raghavendra G
On Thu, Nov 10, 2016 at 12:58 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> On 10 Nov 2016 08:22, "Raghavendra
>  wrote:
> >
> > Kyle,
> >
> > Thanks for your response :). This really helps. From 13s to 0.23s
> seems like a huge improvement.
>
> From 13 minutes to 23 seconds, not from 13 seconds :)
>

Yeah. That was one confused reply :). Sorry about that.


>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Raghavendra G
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Feedback on DHT option "cluster.readdir-optimize"

2016-11-09 Thread Gandalf Corvotempesta
On 10 Nov 2016 08:22, "Raghavendra
 wrote:
>
> Kyle,
>
> Thanks for your response :). This really helps. From 13s to 0.23s
seems like a huge improvement.

From 13 minutes to 23 seconds, not from 13 seconds :)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-09 Thread Atin Mukherjee
Pranith,

I'd like to see the following patches getting in:

http://review.gluster.org/#/c/15722/
http://review.gluster.org/#/c/15714/
http://review.gluster.org/#/c/15792/



On Thu, Nov 10, 2016 at 7:12 AM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

> hi,
>   The only problem left was EC taking more time. This should affect
> small files a lot more. The best way to solve it is using compound fops, so
> for now I think going ahead with the release is best.
>
> We are waiting for Raghavendra Talur's http://review.gluster.org/#/c/15778
> before going ahead with the release. If we missed any other crucial patch
> please let us know.
>
> Will make the release as soon as this patch is merged.
>
> --
> Pranith & Aravinda
>
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>
>


-- 

~ Atin (atinm)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Feedback on DHT option "cluster.readdir-optimize"

2016-11-09 Thread Raghavendra G
Kyle,

Thanks for your response :). This really helps. From 13s to 0.23s
seems like a huge improvement.

regards,
Raghavendra

On Tue, Nov 8, 2016 at 8:21 PM, Kyle Johnson  wrote:

> Hey there,
>
> We have a number of processes which daily walk our entire directory tree
> and perform operations on the found files.
>
> Pre-gluster, this process was able to complete within 24 hours of
> starting.  After outgrowing that single server and moving to a gluster
> setup (two bricks, two servers, distribute, 10gig uplink), the processes
> became unusable.
>
> After turning this option on, we were back to normal run times, with the
> process completing within 24 hours.
>
> Our data is heavily nested in a large number of subfolders under /media/ftp.
>
> A subset of our data:
>
> 15T of files in 48163 directories under /media/ftp/dig_dis.
>
> Without readdir-optimize:
>
> [root@colossus dig_dis]# time ls|wc -l
> 48163
>
> real    13m1.582s
> user    0m0.294s
> sys     0m0.205s
>
>
> With readdir-optimize:
>
> [root@colossus dig_dis]# time ls | wc -l
> 48163
>
> real    0m23.785s
> user    0m0.296s
> sys     0m0.108s
>
>
> Long story short - this option is super important to me as it resolved an
> issue that would have otherwise made me move my data off of gluster.
>
>
> Thank you for all of your work,
>
> Kyle
>
>
>
>
>
> On 11/07/2016 10:07 PM, Raghavendra Gowdappa wrote:
>
>> Hi all,
>>
>> We have an option called "cluster.readdir-optimize" which alters the
>> behavior of readdirp in DHT. This value affects how storage/posix treats
>> dentries corresponding to directories (not for files).
>>
>> When this value is on,
>> * DHT asks only one subvol/brick to return dentries corresponding to
>> directories.
>> * Other subvols/bricks filter dentries corresponding to directories and
>> send only dentries corresponding to files.
>>
>> When this value is off (this is the default value),
>> * All subvols return all dentries stored on them. IOW, bricks don't
>> filter any dentries.
>> * Since a directory has one dentry representing it on each subvol, dht
>> (loaded on client) picks up dentry only from hashed subvol.
>>
>> Note that irrespective of value of this option, _all_ subvols return
>> dentries corresponding to files which are stored on them.
>>
>> This option was introduced to boost performance of readdir as (when set
>> on), filtering of dentries happens on bricks and hence there is reduced:
>> 1. network traffic (with filtering all the redundant dentry information)
>> 2. number of readdir calls between client and server for the same number
>> of dentries returned to the application (if filtering happens on the client,
>> each result contains fewer dentries and hence more readdir calls are needed;
>> IOW, the result buffer is not filled to maximum capacity).
>>
>> We want to hear from you whether you've used this option, and if yes:
>> 1. Did it really boost readdir performance?
>> 2. Do you have any performance data to find out what was the percentage of
>> improvement (or deterioration)?
>> 3. Data set you had (Number of files, directories and organisation of
>> directories).
>>
>> If we find out that this option is really helping you, we can spend our
>> energies on fixing issues that will arise when this option is set to on.
>> One common issue with turning this option on is that some directories
>> might not show up in the directory listing [1]. The
>> reason for this is that:
>> 1. If a directory can be created on its hashed subvol, mkdir (as seen by the
>> application) will be successful, irrespective of the result of mkdir on the
>> rest of the subvols.
>> 2. So, the subvol we pick to give us dentries corresponding to directories
>> need not contain all the directories, and we might miss those
>> directories in the listing.
>>
>> Your feedback is important for us and will help us to prioritize and
>> improve things.
>>
>> [1] https://www.gluster.org/pipermail/gluster-users/2016-October
>> /028703.html
>>
>> regards,
>> Raghavendra
>> ___
>> Gluster-users mailing list
>> gluster-us...@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Raghavendra G
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Please pause merging patches to 3.9 waiting for just one patch

2016-11-09 Thread Pranith Kumar Karampuri
hi,
  The only problem left was EC taking more time. This should affect
small files a lot more. The best way to solve it is using compound fops, so
for now I think going ahead with the release is best.

We are waiting for Raghavendra Talur's http://review.gluster.org/#/c/15778
before going ahead with the release. If we missed any other crucial patch
please let us know.

Will make the release as soon as this patch is merged.

-- 
Pranith & Aravinda
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Feedback on DHT option "cluster.readdir-optimize"

2016-11-09 Thread Kyle Johnson

Hey there,

We have a number of processes which daily walk our entire directory tree 
and perform operations on the found files.


Pre-gluster, this process was able to complete within 24 hours of
starting.  After outgrowing that single server and moving to a gluster 
setup (two bricks, two servers, distribute, 10gig uplink), the processes 
became unusable.


After turning this option on, we were back to normal run times, with the 
process completing within 24 hours.


Our data is heavily nested in a large number of subfolders under /media/ftp.

A subset of our data:

15T of files in 48163 directories under /media/ftp/dig_dis.

Without readdir-optimize:

[root@colossus dig_dis]# time ls|wc -l
48163

real    13m1.582s
user    0m0.294s
sys     0m0.205s


With readdir-optimize:

[root@colossus dig_dis]# time ls | wc -l
48163

real    0m23.785s
user    0m0.296s
sys     0m0.108s


Long story short - this option is super important to me as it resolved 
an issue that would have otherwise made me move my data off of gluster.



Thank you for all of your work,

Kyle




On 11/07/2016 10:07 PM, Raghavendra Gowdappa wrote:

Hi all,

We have an option called "cluster.readdir-optimize" which alters the
behavior of readdirp in DHT. This value affects how storage/posix treats dentries 
corresponding to directories (not for files).

When this value is on,
* DHT asks only one subvol/brick to return dentries corresponding to 
directories.
* Other subvols/bricks filter dentries corresponding to directories and send 
only dentries corresponding to files.

When this value is off (this is the default value),
* All subvols return all dentries stored on them. IOW, bricks don't filter any 
dentries.
* Since a directory has one dentry representing it on each subvol, dht (loaded 
on client) picks up dentry only from hashed subvol.

Note that irrespective of value of this option, _all_ subvols return dentries 
corresponding to files which are stored on them.

This option was introduced to boost performance of readdir as (when set on), 
filtering of dentries happens on bricks and hence there is reduced:
1. network traffic (with filtering all the redundant dentry information)
2. number of readdir calls between client and server for the same number of
dentries returned to the application (if filtering happens on the client, each
result contains fewer dentries and hence more readdir calls are needed; IOW,
the result buffer is not filled to maximum capacity).

We want to hear from you whether you've used this option, and if yes:
1. Did it really boost readdir performance?
2. Do you have any performance data to find out what was the percentage of
improvement (or deterioration)?
3. Data set you had (Number of files, directories and organisation of 
directories).

If we find out that this option is really helping you, we can spend our 
energies on fixing issues that will arise when this option is set to on. One 
common issue with turning this option on is that some directories might not
show up in the directory listing [1]. The reason for this is
that:
1. If a directory can be created on its hashed subvol, mkdir (as seen by the
application) will be successful, irrespective of the result of mkdir on the
rest of the subvols.
2. So, the subvol we pick to give us dentries corresponding to directories
need not contain all the directories, and we might miss those directories in
the listing.

Your feedback is important for us and will help us to prioritize and improve 
things.

[1] https://www.gluster.org/pipermail/gluster-users/2016-October/028703.html

regards,
Raghavendra
___
Gluster-users mailing list
gluster-us...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
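
For anyone who wants to reproduce Kyle's comparison, a minimal sketch of
toggling the option and re-running the listing test (the volume name "myvol"
is a placeholder, and "gluster volume get" assumes a reasonably recent
release, roughly 3.7 or later):

    # Check the current value (the default is off), then enable it.
    gluster volume get myvol cluster.readdir-optimize
    gluster volume set myvol cluster.readdir-optimize on

    # Re-run the listing benchmark from a client mount.
    cd /media/ftp/dig_dis
    time ls | wc -l

    # Turn it back off if directories start disappearing from listings
    # (the issue described in [1] above).
    gluster volume set myvol cluster.readdir-optimize off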


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Weekly Community Meeting - 2016-11-09

2016-11-09 Thread Kaushal M
On Wed, Nov 9, 2016 at 9:55 AM, Kaushal M  wrote:
> On Wed, Nov 9, 2016 at 9:54 AM, Kaushal M  wrote:
>> Hi all,
>> This is a reminder to everyone to add the updates to the meeting etherpad
>> [1] before the meeting starts at 1200UTC today.
>
> Also, add any topics you want discussed to the Open floor.
>
>>
>> Thanks.
>>
>> ~kaushal
>>
>> [1]: https://public.pad.fsfe.org/p/gluster-community-meetings

Attendance at this week's meeting was quite poor initially, but picked
up later on.
Two topics were discussed today: the impending shutdown of the
FSFE etherpad and the trial run of the new meeting format.
We'll be running the same format next week as well.

The meeting pad has been archived at [1]. The logs can be found at [2],[3],[4].

I'll be hosting next week's meeting, same time (people in NA, remember
to come an hour early), same place.
Don't forget, add your topics and updates to the agenda at [5].

Thanks.

~kaushal

[1]: https://github.com/gluster/glusterfs/wiki/Community-Meeting-2016-11-09
[2]: Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-09/community_meeting_20161109.2016-11-09-12.01.html
[3]: Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-09/community_meeting_20161109.2016-11-09-12.01.txt
[4]: Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-09/community_meeting_20161109.2016-11-09-12.01.log.html


## Topics of Discussion

### Next week's meeting host

- kkeithley volunteered last week.
- kkeithley can no longer host, thanks to DST.
- kshlm will host again

### Open floor

- [kshlm] FSFE etherpad is shutting down
- Need to find all existing Gluster etherpads on it
- archive whatever can be archived in the gh-wiki
(https://github.com/gluster/glusterfs/wiki)
- Find alternative for others
- https://github.com/ether/etherpad-lite/wiki/Sites-that-run-Etherpad-Lite
- Saravanakmr volunteered to lead the effort to find and collect
existing pads on FSFE etherpad
- https://hackmd.io suggested by post-factum
- New meeting format trial ending. Should we continue?
- The format was under trial for 3 weeks.
- +1 to continue from hchirram, post-factum, Saravanakmr, samikshan, rastar
- No changes suggested
- kshlm will ask for feedback on mailing lists.

## Updates

> NOTE : Updates will not be discussed during meetings. Any important or 
> noteworthy update will be announced at the end of the meeting

### Releases

 GlusterFS 4.0

- Tracker bug :
https://bugzilla.redhat.com/showdependencytree.cgi?id=glusterfs-4.0
- Roadmap : https://www.gluster.org/community/roadmap/4.0/
- Updates:
- _Add updates here_
- GD2
- https://www.gluster.org/pipermail/gluster-devel/2016-November/051421.html
- Brick Multiplexing
- https://www.gluster.org/pipermail/gluster-devel/2016-November/051364.html
- https://www.gluster.org/pipermail/gluster-devel/2016-November/051389.html

 GlusterFS 3.9

- Maintainers : pranithk, aravindavk, dblack
- Current release : 3.9.0rc2
- Next release : 3.9.0
  - Release date : End of Sept 2016
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.9.0
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2&id=glusterfs-3.9.0&hide_resolved=1
- Roadmap : https://www.gluster.org/community/roadmap/3.9/
- Updates:
  - _None_

 GlusterFS 3.8

- Maintainers : ndevos, jiffin
- Current release : 3.8.5
- Next release : 3.8.6
  - Release date : 10 November 2016
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.8.6
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2&id=glusterfs-3.8.6&hide_resolved=1
- Updates:
  - Release is planned for the weekend
 - https://www.gluster.org/pipermail/maintainers/2016-November/001659.html

 GlusterFS 3.7

- Maintainers : kshlm, samikshan
- Current release : 3.7.17
- Next release : 3.7.18
  - Release date : 30 November 2016
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.18
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2&id=glusterfs-3.7.18&hide_resolved=1
- Updates:
- samikshan sent out a release announcement
- https://www.gluster.org/pipermail/gluster-devel/2016-November/051414.html

### Related projects and efforts

 Community Infra

- _None_

 Samba

- Fedora updates for Samba v4.3.12, v4.4.7 and v4.5.1 were created and
pushed following the regression encountered with GlusterFS integration
tracked by https://bugzilla.samba.org/show_bug.cgi?id=12404

 Ganesha

- _None_

 Containers

- _None_

 Testing

- [loadtheacc] 
https://www.gluster.org/pipermail/gluster-devel/2016-November/051369.html

 Others

- [atinm] GlusterD-1.0 updates
https://www.gluster.org/pipermail/gluster-devel/2016-November/051432.html


### Action Items from last week

- nigelb, kshlm, will document and start the practice of recording
etherpads into Github wikis.
- Meeting etherpad has been archived
- https://github.com/gluster/glusterfs/wiki/Community-Meeting-Archive
- A t

Re: [Gluster-devel] getting "Transport endpoint is not connected" in glusterfs mount log file.

2016-11-09 Thread ABHISHEK PALIWAL
Could anyone reply on this?

On Wed, Nov 9, 2016 at 11:08 AM, ABHISHEK PALIWAL 
wrote:

> Hi,
>
> We can see that syncing the GlusterFS bricks is failing, with the error
> trace "Transport endpoint is not connected".
>
> [2016-10-31 04:06:03.627395] E [MSGID: 114031] 
> [client-rpc-fops.c:1673:client3_3_finodelk_cbk]
> 0-c_glusterfs-client-9: remote operation failed [Transport endpoint is not
> connected]
> [2016-10-31 04:06:03.628381] I [socket.c:3308:socket_submit_request]
> 0-c_glusterfs-client-9: not connected (priv->connected = 0)
> [2016-10-31 04:06:03.628432] W [rpc-clnt.c:1586:rpc_clnt_submit]
> 0-c_glusterfs-client-9: failed to submit rpc-request (XID: 0x7f5f Program:
> GlusterFS 3.3, ProgVers: 330, Proc: 30) to rpc-transport
> (c_glusterfs-client-9)
> [2016-10-31 04:06:03.628466] E [MSGID: 114031] 
> [client-rpc-fops.c:1673:client3_3_finodelk_cbk]
> 0-c_glusterfs-client-9: remote operation failed [Transport endpoint is not
> connected]
> [2016-10-31 04:06:03.628475] I [MSGID: 108019] 
> [afr-lk-common.c:1086:afr_lock_blocking]
> 0-c_glusterfs-replicate-0: unable to lock on even one child
> [2016-10-31 04:06:03.628539] I [MSGID: 108019] 
> [afr-transaction.c:1224:afr_post_blocking_inodelk_cbk]
> 0-c_glusterfs-replicate-0: Blocking inodelks failed.
> [2016-10-31 04:06:03.628630] W [fuse-bridge.c:1282:fuse_err_cbk]
> 0-glusterfs-fuse: 20790: FLUSH() ERR => -1 (Transport endpoint is not
> connected)
> [2016-10-31 04:06:03.629149] E [rpc-clnt.c:362:saved_frames_unwind] (-->
> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn-0xb5c80)[0x3fff8ab79f58]
> (--> /usr/lib64/libgfrpc.so.0(saved_frames_unwind-0x1b7a0)[0x3fff8ab1dc90]
> (--> /usr/lib64/libgfrpc.so.0(saved_frames_destroy-0x1b638)[0x3fff8ab1de10]
> (--> 
> /usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup-0x19af8)[0x3fff8ab1fb18]
> (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify-0x18e68)[0x3fff8ab20808]
> ) 0-c_glusterfs-client-9: forced unwinding frame type(GlusterFS 3.3)
> op(LOOKUP(27)) called at 2016-10-31 04:06:03.624346 (xid=0x7f5a)
> [2016-10-31 04:06:03.629183] I [rpc-clnt.c:1847:rpc_clnt_reconfig]
> 0-c_glusterfs-client-9: changing port to 49391 (from 0)
> [2016-10-31 04:06:03.629210] W [MSGID: 114031] 
> [client-rpc-fops.c:2971:client3_3_lookup_cbk]
> 0-c_glusterfs-client-9: remote operation failed. Path: /loadmodules_norepl/
> CXC1725605_P93A001/cello/emasviews (b0e5a94e-a432-4dce-b86f-a551555780a2)
> [Transport endpoint is not connected]
>
>
> Could you please tell us why we are getting these traces and how
> to resolve this?
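
A rough first-pass set of checks for this kind of error, as a sketch only (the
volume name c_glusterfs is inferred from the client translator name in the
log, and the hostname below is a placeholder):

    # Confirm all bricks and self-heal daemons are up and listening.
    gluster volume status c_glusterfs
    gluster peer status

    # From the client that logged the error, verify the brick host and the
    # advertised brick port (49391 in the log above) are reachable.
    ping -c 3 brick-host.example.com
    telnet brick-host.example.com 49391

    # Check the brick log on the server side around the same timestamp
    # (this is the default log location and may differ on your build).
    less /var/log/glusterfs/bricks/*.log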
>
> Logs are attached here; please share your analysis.
>
> Thanks in advance
>
> --
> Regards
> Abhishek Paliwal
>



-- 




Regards
Abhishek Paliwal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel